Search Results

Keyword: ‘design’

DVCon India 2014 aims to bring Indian design, verification and ESL community closer!

September 11, 2014

DVCon India 2014 has come to Bangalore, India, for the first time. It will be held at the Hotel Park Plaza in Bangalore, on Sept. 25-26. Dr. Wally Rhines, CEO of Mentor Graphics, will open the proceedings with his inaugural keynote.

Other keynotes will be from Dr. Mahesh Mehendale, MCU chief technologist, TI; Janick Bergeron, verification fellow, Synopsys; and Vishwas Vaidya, assistant GM, Electronics, Tata Motors.

Gaurav Jalan of SmartPlay, chair of the promotions committee, took time to speak about DVCon India 2014.

Focus of DVCon India 2014

First, what’s the focus of DVCon India 2014? According to Jalan, DVCon has been a premier conference in the US, contributing quality tutorials and papers, and an excellent platform for networking. DVCon India focuses on filling the void of a vendor-neutral, quality conference in the neighbourhood – one that will grow over time.

The idea is to bring together the hitherto dispersed, yet substantial, design, verification and ESL community and give it a voice. Engineers get a chance to learn solutions to verification problems, share the effectiveness of solutions they have experimented with, understand off-the-shelf solutions available in the market, and meet the vendor-agnostic user fraternity. Moving forward, the expectation is to get users involved as early adopters of upcoming standards and active contributors to them.

Trends in design

Next, what are the trends today in design? Jalan said that while designs continue to advance along the lines of Moore’s law, there is a lot happening beyond mere gate count. Defining and developing IPs with wide configuration options, serving a variety of application domains, is a challenge.

SoCs are crossing the multi-billion-gate mark (the A8 in the iPhone 6 is around 2 billion transistors), with a multi-fold increase in complexity due to multiple clock, power and voltage domains, while delivering the required performance in different application modes with a sleek footprint.

Trends in verification

Now, let’s examine the trends today in verification. When design grows linearly, verification effort jumps exponentially. While UVM has settled the dust to some extent at the IP verification level, a host of challenges still awaits. IP itself is growing in size, stretching simulator limits and encouraging users to move to emulators. While UVM settled the methodology war, the VIPs available are still not simulator agnostic, and an emulator-agnostic VIP portfolio is still a distant dream.

SoC verification is still a challenge, not just due to the sheer size but because porting an environment from block level to SoC level is difficult. Test plan definition and development at the SoC level is itself a challenge. The Portable Stimulus group at Accellera is addressing this.

Similarly, coverage collected from different tools is difficult to merge; the Unified Coverage group at Accellera is addressing this. Low power is the norm today, and verifying a power-aware design is quite challenging. UPF is an attempt to standardize this.

Porting an SoC to an emulator to enable hardware acceleration for running use cases is another trend picking up. Teams are now able to boot Android on an SoC even before silicon arrives. With growing analog content on chip, the onus is on verification engineers to ensure the digital and analog sides of the chip work in conjunction, as per specs. Formal apps have picked up to address connectivity tests, register spec testing, low-power static checks and more.

Accelerating EDA innovation

So, how will EDA innovation get accelerated? According to Jalan, the semiconductor industry has always witnessed startups and smaller companies leading innovation. Given the plethora of challenges around, there are multiple opportunities to be addressed by both the big players and the start-ups.

The evolution of standards at Accellera is definitely a great step toward bringing the focus onto real innovation in the tools, while providing a platform for the user community to come forward, share challenges and propose alternatives. With a standard baseline defined in collaboration with all partners in the ecosystem, EDA companies can focus on competing on performance, user interface, increased tool capacity and enabling faster time to market.

Forums like DVCon India help grow awareness of the standards promoted by Accellera, while encouraging participants from different organizations and geographies to join and contribute. Apart from tools, areas where EDA innovation would pick up include new IT technologies and platforms, such as the cloud and mobile devices.

Next level of verification productivity

Where is the next level of verification productivity likely to come from? To this, Jalan replied that verification productivity improves along several dimensions.

While faster tools with increased capacity come from innovation at the EDA end, standards have played an excellent role as well. UVM has helped displace vendor-specific technologies, improving interoperability, quick ramp-up for engineers, and reusability. Similarly, on the power format front, UPF has played an important role in bridging the gaps.

Unified coverage is another aspect that will help close coverage-driven verification early. The IP-XACT and SystemRDL standards further help in packaging IPs and easing hand-off to enable reuse. Similarly, other standards on ESL, AMS, etc. help close the loopholes that hinder productivity.

A new portable stimulus specification, now being developed under Accellera, will help ease test development at different levels, from IP to subsystem to SoC. For faster simulations, the increasing adoption of hardware acceleration platforms is helping verification engineers improve regression turnaround time.

Formal technologies play an important role in providing mathematical proofs for common verification challenges at an accelerated pace in comparison to simulation. Finally, events like DVCon enable users to share their experiences and knowledge, encouraging others to try out solutions instead of struggling to discover or invent one.

More Indian start-ups

Finally, do the organizers expect to see more Indian start-ups after this event? Yes, says Jalan. “We even have a special incubation booth that encourages young startups to come forth and exhibit at a reduced cost (only $300). We are creating a platform, and soon we will see new players in all areas of semiconductors.

“Also, the Indian government’s push in the semiconductor space will give new startups further incentive to mushroom. These conferences help entrepreneurs to talk to everyone in the community about problems, vet potential solutions and seek blessings from gurus.”


Plunify’s InTime helps FPGA design engineers meet timing and area goals!


Kirvy Teo

Engineers designing FPGA applications face many challenges. Using Plunify’s automation and analysis platform, engineers can run 100 times more builds, analyze a larger set of builds and quickly zoom in on better quality results. Using data analytics and the cloud, Plunify created new capabilities for FPGA design, with InTime being an example.

Kirvy Teo said: What happens when you need to close timing in an FPGA design and still can’t get it to work? Here is a new way to solve that problem – machine learning and analytics. InTime is expert software that helps FPGA design engineers meet timing and area goals by recommending “strategies” – combinations of settings found in the existing FPGA software. With more than 70 settings available in the FPGA software, no sane FPGA design engineer has the time or capacity to understand how they all affect design outcomes.

One common method now is random brute force using seeds. This is a one-way street: if you get to your desired result, great! If not, you have wasted a bunch of time running builds and are none the wiser. Another aspect of running seeds is that the variance of the results is usually not very big, meaning you cannot rely on seeds to rescue a design with bad timing scores.

However, using InTime, all builds become part of the data used to recommend strategies that can give you better results, via machine learning and predictive analytics. This means you will definitely get a better answer at the end of the day, and we have seen 40 percent performance improvements on designs!
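To make the contrast with seed sweeps concrete, here is a minimal sketch of the general approach – learn from completed builds, then rank untried setting combinations by predicted timing slack. This is a toy illustration under assumed inputs, not Plunify’s actual model; the setting names, encodings and slack numbers are hypothetical.

```python
# Sketch: rank candidate tool-setting "strategies" by predicted timing slack.
# Hypothetical encoding; real systems like InTime are far more involved.
from itertools import product
from sklearn.ensemble import RandomForestRegressor

# Each strategy = one value per tool setting (encoded numerically).
SETTINGS = {
    "effort_level": [0, 1, 2],   # e.g. low/medium/high
    "retiming": [0, 1],          # off/on
    "seed": [1, 7, 42],
}

def encode(strategy):
    return [strategy[k] for k in SETTINGS]

# Results of builds already run: (strategy, worst negative slack in ns).
history = [
    ({"effort_level": 0, "retiming": 0, "seed": 1}, -0.42),
    ({"effort_level": 2, "retiming": 1, "seed": 7}, -0.05),
    ({"effort_level": 1, "retiming": 1, "seed": 42}, -0.18),
]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit([encode(s) for s, _ in history], [slack for _, slack in history])

# Predict slack for every untried combination and pick the most promising.
tried = [tuple(encode(s)) for s, _ in history]
candidates = [dict(zip(SETTINGS, combo)) for combo in product(*SETTINGS.values())
              if combo not in tried]
best = max(candidates, key=lambda s: model.predict([encode(s)])[0])
print("Next strategy to build:", best)
```

Unlike a blind seed sweep, every completed build here feeds the model, so even “failed” runs improve the next recommendation.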

How has Plunify been doing this year so far? According to Teo, Plunify did a controlled release to selected customers in the first quarter of 2014, mainly based in China. It is easy to guess who, as we nicknamed them “BCCs” – Big Chinese Corporations.

Unsurprisingly, they have different methodologies for solving timing problems, and design guidelines, many of which exist to pre-empt timing problems at later stages of the design. InTime was a great way to help them achieve their performance targets without disrupting their tool flows.

Plunify is announcing the launch of InTime during DAC and will be looking to partner with sales organizations in the US.

What’s the future path likely to be? Teo added: “Machine learning and predictive analytics are among the hottest topics, and we have yet to see them used much in chip design. We see a lot of potential in this sector. Beyond what InTime is doing now, there are still many chip design problems that can be solved with similar techniques.

“First, there is a need to determine the type of problems that can be solved with these techniques. Second, we are re-looking at existing design problems and wondering: if I can throw 100 or 1,000 machines at this problem, can I get a better result? Third, how do we get that better result without even running it!

“As you know, we do offer an FPGA cloud platform on Amazon. One of the most surprising observations is that people do not know how to use all that cheap power in the cloud! FPGA design is still confined to a single machine for daily work, like email. Even if I give you 100 machines, you don’t know how to check your emails faster! We see the same thing here: the only method they know is to run seeds. InTime is what they need to make use of all these resources intelligently.”

Why would FPGA providers take up the solution?

InTime works as desktop software that can be installed in internal data centers or on desktops. It is no longer just a cloud play. It works with the in-house FPGA software the customer already owns. We are helping FPGA providers like Xilinx and Altera by helping their customers with their designs. The pitch to users: “Get better results without touching your RTL code!”

Agnisys makes design verification process extremely efficient!


Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.

Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.

IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.

Register verification and sequences can consume 40 percent of project time or more when errors cause re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors: IDSWord, IDSExcel, IDSOO and IDSBatch.
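As a rough illustration of that single-source idea – not Agnisys’ actual input format or generated code – consider a register map captured once as data, with multiple views emitted from it:

```python
# Sketch: one register-map specification, many generated views.
# The spec format and outputs here are hypothetical stand-ins for what a
# tool like IDesignSpec does from Word/Excel/SystemRDL inputs.

SPEC = {
    "block": "uart",
    "base": 0x4000_0000,
    "registers": [
        {"name": "CTRL",   "offset": 0x00, "access": "RW"},
        {"name": "STATUS", "offset": 0x04, "access": "RO"},
        {"name": "BAUD",   "offset": 0x08, "access": "RW"},
    ],
}

def c_header(spec):
    """Software view: address #defines for firmware."""
    lines = [f"/* Auto-generated from the {spec['block']} register spec */"]
    for reg in spec["registers"]:
        addr = spec["base"] + reg["offset"]
        lines.append(f"#define {spec['block'].upper()}_{reg['name']} 0x{addr:08X}u")
    return "\n".join(lines)

def uvm_reg_stubs(spec):
    """Verification view: skeleton UVM register classes, emitted as text."""
    out = []
    for reg in spec["registers"]:
        out.append(
            f"class {spec['block']}_{reg['name'].lower()}_reg extends uvm_reg; "
            f"// access: {reg['access']}"
        )
    return "\n".join(out)

print(c_header(SPEC))
print(uvm_reg_stubs(SPEC))
```

When the spec dictionary changes, both views regenerate in lockstep, which is the point of keeping hardware and software teams on one master specification.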

IDesignSpec more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”

The latest additions to IDesignSpec are constraints, coverage, interrupts, sequences, assertions, multiple bus domains, special registers and parameterization of outputs.

“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.

Using an add-on function for capturing sequences, the user can now capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.

Assertions can be automatically generated from the high-level constraint specification. These assertions can be created with the RTL or in external files such that they can be optionally bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep down in the simulation hierarchy.

The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also automatically generated.

Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.

How is IDesignSpec working as chip-level assertion-based verification?

Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
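A hedged sketch of that constraints-to-assertions transformation (the constraint format and SVA template below are invented for illustration; IDesignSpec’s real inputs are Word/Excel/SystemRDL specs):

```python
# Sketch: turn a declarative range constraint on a register field into a
# SystemVerilog assertion. Constraint format and SVA template are
# illustrative stand-ins, not IDesignSpec's actual input or output.

constraints = [
    # (field path, legal range) - e.g. the BAUD divider must stay in spec.
    ("uart.BAUD.divider", (1, 4096)),
    ("uart.CTRL.parity",  (0, 2)),
]

def to_sva(path, lo, hi):
    name = path.replace(".", "_")
    signal = path.split(".", 1)[1]  # drop the block prefix for the RTL signal
    return (
        f"property p_{name}_range;\n"
        f"  @(posedge clk) disable iff (!rst_n)\n"
        f"  ({signal} >= {lo}) && ({signal} <= {hi});\n"
        f"endproperty\n"
        f"assert property (p_{name}_range);"
    )

for path, (lo, hi) in constraints:
    print(to_sva(path, lo, hi))
    print()
```

The user writes only the range; the assertion text is derived, which is why, as Bakshi notes, no assertions need to be specified by hand.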

On-chip networks: Future of SoC design


Selection of the right on-chip network is critical to meeting the requirements of today’s advanced SoCs. The right network enables easy integration of IP cores from many sources with different protocols, and comes with a UVM verification environment.

John Bainbridge, staff technologist, CTO Office, Sonics Inc., said that it optimizes system performance. Virtual channels offer efficient resource usage, saving gates and wires. A non-blocking network leads to improved system performance. There are flexible topology choices, with an optimal network matched to requirements.

Power management is key, with advanced system partitioning, and an improved design flow and timing closure. Finally, the development environment allows easy design capture and provides performance analysis tools.

For the record, there are several SoC integration challenges that need to be addressed, such as IP integration, frequency, throughput, physical design, power management, security, time-to-market and development costs.

SGN exceeds requirements
SGN met the tablet performance requirement with a fabric frequency of 1066MHz, at an efficient gate count of 508K gates. Sonics features include advanced system partitioning, security and I/O coherency, along with support for system concurrency and advanced power management.

Sonics offers system IP solutions such as SGN, a router-based NoC solution with flexible partitioning and VC (virtual channel) support. Frequency is optimized with credit-based flow control.

SSX/SLX are message-based crossbar/ShareLink solutions built on interleaved multi-channel technology, with target-based QoS and three arbitration levels. SonicsExpress handles power-centric clock domain crossing, enabling sub-system re-use and decoupling. MemMax manages and optimizes DRAM efficiency while maintaining system QoS, with run-time programmability for all traffic types. SonicsConnect is a non-blocking peripheral interconnect.

Analog Devices launches portable lab for electronic circuit design


Analog Devices, as part of its University Program, has launched a personal, affordable and portable lab for electronic circuit design in India, at the 26th International Conference on VLSI Design, currently ongoing in Pune, India.

Somshubhro Pal Choudhury, MD, Analog Devices India Pvt Ltd, said that miniaturization and portability are the key trends today. Desktops have given way to laptops, and then to smartphones and tablets. The expensive vital-signs monitoring equipment in hospitals is giving way to more wearable, miniaturized, power-sipping (and not guzzling) medical gadgets. It is natural that education and training for engineering students start taking a similar route.

What is this personal lab?
It means the lab fits in the palm of your hand and enables students to learn analog and mixed-signal design anywhere and everywhere, not limited by an expensive university/college lab setup where access is fairly restricted and time is limited to a few hours every week.

Analog Devices’ portable lab.

What does it mean for students?
With the lab, students can now carry out their experiments in their hostels/dorms and in their classrooms, running experiments quickly during class to see in real time how a certain change in a circuit impacts the results.

The portable kit, connected to the student’s laptop, has all the elements of a complete and expensive lab setup. Students would not need equipment like oscilloscopes, waveform generators, logic analyzers and power supplies – expensive equipment that only top universities can afford.

Along with the portable kit, online and downloadable software and teaching materials, circuit simulation tools, online support and community, an online textbook, reference designs and lab projects are provided free of charge, to enhance learning as a supplement to the core engineering curriculum.

This launch is likely to revolutionize electronic circuit design education and learning among the engineering academic community.

Next wave of design challenges, and future growth of EDA: Dr. Wally Rhines

December 15, 2012

Dr. Wally Rhines.

Today, EDA requires specialization. Elaborating on EDA over the past decade, Dr. Walden (Wally) C. Rhines, chairman and CEO of Mentor Graphics, and vice chairman of the EDA Consortium, USA, said that PCB design has been flat despite growth in analysis, DFM and new emerging markets. Front-end design has seen growth from RF/analog design, simulation and analysis. As design methodologies mature, EDA expenditures stop growing. He was speaking at Mentor Graphics’ U2U (User2User) conference in Bangalore, India.

Most of the EDA revenue growth comes from major new design methodologies, such as ESL, DFM, analog/mixed-signal and RF. The PCB design trend continues to be flat, and includes license and maintenance revenue. The IC layout verification market points to a 2.1 percent CAGR at the end of 2011. The RTL simulation market has been growing at 1.3 percent CAGR for the last decade. The IC physical implementation market has been growing at 3.4 percent CAGR for the last decade.

Growth areas in EDA from 2000-2011 include DFM at 28 percent CAGR, formal verification at 12 percent, ESL at 11 percent, and IC/ASIC analysis at 9 percent.
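For reference, CAGR is the constant annual growth rate implied by a start value, an end value and a span of years. A quick sanity check, with illustrative revenue numbers rather than figures from the talk:

```python
# Compound annual growth rate: the constant yearly rate that turns
# `start` into `end` over `years` years.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Illustrative only: a segment growing from $100M to $1,500M over 2000-2011.
print(f"{cagr(100, 1500, 11):.1%}")  # ~27.9%, roughly DFM's 28 percent figure
```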

What will generate the next wave of electronic product design challenges, and the future growth of EDA? This will involve solving new problems that are not part of traditional EDA – and ‘doing what others don’t do!’

Methodology changes that may change EDA
There are five factors that can make this happen. These are:
* Low power design beyond RTL (and even ESL).
* Functional verification beyond simulation.
* Physical verification beyond design for manufacturability.
* Design for test beyond compression.
* System design beyond PCBs.

Low power design at higher levels
Power affects every design stage. Sometimes, designing for low power at system level is required. System level optimization has the biggest impact on power/performance. And, embedded software is a major point of leverage.

Embedded software accounts for an increasing share of the design effort. Here, Mentor’s Nucleus power management framework is key. It has a unique API for power management, enables software engineers to optimize power consumption, and reduces lines of application code. Also, power-aware design optimizes code efficiency.

Functional verification beyond RTL simulation
The verification methodology standards war is over. UVM is expected to grow by 286 percent in the next 12 months. Mentor Graphics’ Questa inFact is the industry’s most advanced testbench automation solution. It enables testbench re-use and accelerates time-to-coverage. The intelligent testbench facilitates a linear transition to multi-processing.

Questa accelerates the hardware/software verification environment. In-circuit emulation has been evolving toward virtual hardware acceleration and embedded software development. Offline debug increases development productivity: a four-hour on-emulator software debug session drops to a 30-minute batch run, and offline debug allows 150 software designers to jumpstart the debug process on source code. Virtual stimulus increases the flexibility of the emulator. As an example, Veloce is 700x more efficient than large simulation farms.

Physical verification beyond design for manufacturability
The Calibre PERC is a new approach to circuit verification. The Calibre 3DSTACK is the verification flow for 3D.

FPGA design heads to the cloud!

December 6, 2012

Harn Hua Ng

Singapore-based Plunify claims that chip design companies can design faster and better using cloud computing. Stressing the company’s go-to-market strategy, Plunify’s founder, Harn Hua Ng, said that Plunify partners with tool vendors, their distributors and complementary sales representatives.

Since pay-as-you-go business models are rare in the semiconductor industry, we went through several steps, the first of which was to better understand the market, the available tools and the stakeholders:
* How is the market reacting to cloud computing and licensing schemes?
* What are current tool capabilities with regards to multiple CPUs/servers? Which parts of the chip design workflow can best take advantage of scalable, parallel features?
* What tools are more suitable for a cloud environment?

With these in mind, the next step was to build the cloud platform and the application clients to address immediate concerns – security, accessibility and cost.

“Then, we partner with tool vendors, their distributors and sales reps to bring our solutions to end-users. Companies of different sizes view the advantages of cloud computing differently, so solutions need to be customized accordingly. Some see Plunify as solving longer-term IT problems of scaling and provisioning, while others use us as an immediate way to speed up their design workflows. We are still in the process of learning about the market.”

How can the on-demand cloud computing platform dramatically accelerate chip design workflows? According to Harn Hua Ng, one immediate benefit is an almost instantaneous fulfillment of peak-demand IT requirements – for example, an urgent request to do 100 synthesis builds to fix a problem due yesterday. Or, if the problem cannot be fixed, at least the design team will find out in a day, rather than potentially after three months’ worth of runtime without a cloud solution. The longer-term acceleration is a gradual parallelization of the design workflow.

Currently, chip designers tend to visualize the design workflow as a chain of mostly serial steps with many dependencies, simply because many steps are time-consuming (both in runtime and in the time taken to analyze intermediate results).

With an on-demand compute platform, designers can have more room to experiment and to optimize, more readily incorporating agile practices in hardware development.
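From the user’s side, the “100 builds at once” scenario can be pictured with a sketch like the following; `synth_tool` and its flags are hypothetical stand-ins for a vendor synthesis command, not Plunify’s actual interface:

```python
# Sketch: fan 100 synthesis builds out across processes at once.
# `run_build` wraps a hypothetical vendor synthesis command.
import subprocess
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_build(seed):
    """Run one synthesis build with a given seed; return (seed, return code)."""
    cmd = ["synth_tool", "--project", "top.prj", "--seed", str(seed)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return seed, result.returncode

if __name__ == "__main__":
    seeds = range(1, 101)  # 100 builds, one per seed
    with ProcessPoolExecutor(max_workers=100) as pool:
        futures = [pool.submit(run_build, s) for s in seeds]
        for fut in as_completed(futures):
            seed, rc = fut.result()
            print(f"seed {seed}: {'ok' if rc == 0 else 'failed'}")
```

On a single workstation this queue would take days; on an elastic cloud pool, all 100 builds finish in roughly the time of one.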

Netronome and Argon Design launch Blaster flow simulation solution

November 23, 2012

Blaster flow simulation solution.

Argon Design, a leading developer of high-performance software applications for many-core communications processors, launched Argon Blaster, the industry’s first flow simulation solution for generating realistic, Internet-scale traffic loads and applications to test networking and security equipment.

Blaster delivers a line-rate, timing-accurate flow simulation application on an affordable PCIe acceleration card for use in standard x86 platforms. This enables OEMs to cost-effectively distribute a high-performance simulation and traffic generation solution throughout the engineering organization. The approach significantly reduces development time and cost, while simultaneously increasing product quality.

Blaster is designed for enterprise and carrier network operators for performance testing of flow based cyber security and network analytics applications. It enables network managers to verify that these systems are designed and deployed in a manner to match expected network loads.

High performance, accuracy rule!
Elaborating on the features, Daniel Proch, director of product management, Netronome, said: “Argon Blaster is the industry’s highest-performance and most-accurate flow simulation solution, in an affordable package. Developed by Argon Design, Blaster enables a standard x86 PC with a Netronome flow processor PCIe card to generate realistic, Internet-scale traffic loads and application mixes.

“For many networking applications, the ability to classify and manage traffic flows is key to enabling the highest level of performance and scalability. Quality of Service, Load Balancing, Firewall, Intrusion Detection, Content Inspection, Data Loss Prevention and similar applications all typically require flow-aware processing capabilities and this flow-aware traffic generation solution for development and QA. Blaster is the first traffic generation tool designed specifically for flow simulation applications. With Blaster, you can emulate up to a million unique flows with accurate, consistent, per-flow rate control.”

It will be interesting to know how Blaster will help the ISVs and OEMs generate realistic, Internet-scale traffic loads and applications to test networking and security equipment.

Blaster can be installed in any modern PC running Linux. It installs as a KVM virtual machine and can be operated from within the virtual machine or externally. It replays one or multiple .pcap files and can emulate any type of traffic profile from those pcaps. The user can change the number of flows per pcap file and the addressing scheme (the number of clients and servers, based on MAC and/or IP address).

From this set of knobs, and given a set of pcaps with appropriate application traffic, any desired traffic load and application mix can be generated (a rough sketch of the flow-scaling idea follows the list below). Organizations can then perform:
* Performance benchmarking to isolate bottlenecks.
* Stress testing with real-world loads.
* Security testing with background, application and attack traffic.
* Quality assurance with a broad spectrum of applications and protocols.
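The flow-scaling sketch promised above: a conceptual illustration using the open-source scapy library, not Blaster’s implementation or interface. It multiplies the flows in one pcap by giving each simulated client its own source address:

```python
# Sketch: multiply the flows in a pcap by rewriting client IP addresses,
# a simplified version of the "number of clients" knob described above.
from scapy.all import IP, rdpcap, wrpcap

packets = rdpcap("app_traffic.pcap")  # one recorded application flow
NUM_CLIENTS = 100
synthetic = []

for client in range(NUM_CLIENTS):
    for pkt in packets:
        p = pkt.copy()
        if IP in p:
            # Give each simulated client its own source address.
            p[IP].src = f"10.0.{client // 256}.{client % 256}"
            del p[IP].chksum  # let scapy recompute the checksum on write
        synthetic.append(p)

wrpcap("scaled_traffic.pcap", synthetic)
```

A product like Blaster does this at line rate in hardware with per-flow rate control; the sketch only shows why one captured flow can stand in for many.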

Let’s find out a bit more about the role played by Netronome as well as Argon Design. Proch said: “The product is an Argon branded product that is a joint development with Argon Design. Netronome provides the accelerated flow processing hardware for the solution in the form of a standard PCIe card, and Argon designed and engineered the software. Netronome will be handling sales and marketing of the product. Software and support will be handled by Argon.”

Will there be an upgrade sometime later, next year, perhaps? “Most certainly,” he continued. “Our early access customers and internal use have already produced a robust roadmap, and we anticipate these features and others being rolled out over several subsequent software releases. We also expect to have a new hardware version based on our recently announced NFP-6xxx family of flow processors when available.”

Xilinx intros Vivado Design Suite


Xilinx's Vivado Design Suite.

Xilinx Inc. has announced the Vivado Design Suite. It enables an IP- and system-centric, next-generation design environment. Especially meant for the next decade of ‘All-Programmable’ devices, it also accelerates integration and implementation by up to 4X. And, why now? Because all-programmable devices enable programmable systems integration.

There are system integration bottlenecks, such as design and IP re-use, integrating algorithmic and RTL-level IP, mixing DSP, embedded, connectivity and logic, and verification of blocks and “systems”.

There are implementation bottlenecks as well, such as hierarchical chip planning, multi-domain and multi-die physical optimization, predictable ‘design’ vs. ‘timing’ closure, and late ECOs with their rippling effects.

Vivado accelerates productivity by up to 4X. The design suite includes an integrated design environment with a shared, scalable data model that scales to 100 million gates, plus debug and analysis. It shares design information between implementation steps, which ensures fast convergence and timing closure, and enables highly efficient memory utilization. It is also scalable to future families of more than 10 million logic cells (100 million gates), and enables cross-probing across the entire design.

Vivado also enables packaging designs into system-level IP for re-use. You can share IP within your team, project or company. Any third-party IP is delivered with a common look and feel. You can re-use IP at any point in the implementation process; the IP can be source, placed, or placed and routed.

Designing systems to thrive in disruptive trends!


Srini Rajam, CEO, Ittiam Systems, presented the guest keynote at CDNLive! 2011 in Bangalore, India, titled ‘Designing Systems to Thrive in Disruptive Trends’. According to him, key factors for design project success include scope definition, realistic targets, good estimation and the right resources. Today, smart system design means being a step ahead in the world of disruptive system demands.

The convergence decade saw an affordable convergence of media and functions. The world also moved from the PC in 2000 to the smartphone in 2010. There has been a convergence of audio, video and communications. SoC and system design require performance, quality and price to work in tandem.

In the imagination decade, we have come to expect electronics to do whatever we fancy. In the smart system design era, we have come to anticipate a future system that will also work perfectly today.

Today, we are in the world of IP video communication. First, everything is evolving. There have been advances in video technology, SoC and infrastructure. Technologies designed elsewhere are being brought in. There is a virtually infinite range in quality and price levels. The video communication system holds the key dynamics. The SoC, software and system have entered into a synergistic relationship.

For smart system design, there is a need to look at the big picture. Scaling down is easier than scaling up; a smart system is built to achieve efficiency in scale-down. A reference platform is needed for the development roadmap.

When designing, remember that the system may function as a module in another system. Critical components of the system may evolve outside it, and parts of the system may get replaced by the ecosystem. As for the SoC, there must be a roadmap enabling application software portability. There should be modular scaling with plug-and-play of IPs/components. Tools for hardware-software co-development must be available from the early stages.

All of this enables you to be a step ahead in the world of disruptive system demands.
