San Jose, USA-based Atrenta’s SpyGlass Predictive Analyzer gives engineers a powerful guidance dashboard that enables efficient verification and optimization of SoC designs early, before expensive and time-consuming traditional EDA tools are deployed. I recently met up with Dr. Ajoy Bose, chairman, president and CEO, Atrenta, to find out more.
I started by asking how Atrenta provides early design analysis for logic designers. He said: “The key ingredient is something we call predictive analysis. That is, we need to analyze a design at a high level of abstraction and predict what will happen when it undergoes detailed implementation. We have a rich library of algorithms that provide highly accurate ‘predictions’, without the time and cost required to actually send a design through detailed implementation.”
There’s a saying: electronic system level (ESL) is where the future of EDA lies. Why? It’s because the lower levels of abstraction (detailed implementation) in the EDA market are undergoing commoditization and consolidation. There are fewer solutions, and less differentiation between them. At the upper levels of abstraction (ESL), this is not the case. There still exists ample opportunity to provide new and innovative solutions.
Now, how will this help EDA move into the embedded software space? According to Dr. Bose, the ability to do true hardware/software co-design is still not a solved problem. Once viable solutions are developed, EDA will be able to sell to the embedded software engineer. This will be a new market, and new revenue, for EDA.
How are the SpyGlass and GenSys platforms helping the industry? What problems are they solving? Dr. Ajoy Bose said: “SpyGlass is Atrenta’s platform for RTL signoff. It is used by virtually all SoC design teams to ensure the power, performance and cost of their SoC is as good as it can be prior to handoff to detailed implementation. SpyGlass is also used to select and qualify semiconductor IP – a major challenge for all SoC design teams.
“GenSys provides a way to easily assemble and modify designs at the RTL level of abstraction. As a lot of each SoC is re-used design data, the need to modify this data to fit the new design is very prevalent. GenSys provides an easy, correct-by-construction way to get this job done.”
How does SpyGlass solve RTL design issues, ensuring high-quality RTL with fewer design bugs? He added that it’s the predictive analysis technology. SpyGlass provides accurate and relevant information about what will happen when a design is implemented and tested. By fixing these problems early, at RTL, a much higher quality design is handed off to detailed implementation, with fewer bugs and associated schedule challenges.
On another note, I asked him why Apple’s choice of chips is a factor in influencing the global chip industry. The primary reason is their volume and buying power. Apple is something of a “King Maker” when it comes to who manufactures their chips. Apple is also a thought leader and trend setter, so their decisions affect the decisions of others.
Finally, the global semiconductor industry! How is the global semiconductor industry doing in H1-2013? As per Dr. Bose: “We see strong growth. Our customers are undertaking many new designs at advanced process technology nodes. We think that this speaks well for future growth of the industry. At a macro level, the consumer sector will drive a lot of the growth ahead. For EDA, the higher levels of abstraction are where the growth will be.”
It’s a pleasure to talk to Dr. Walden (Wally) C. Rhines, chairman and CEO, Mentor Graphics Corp. On his way to DAC 2013, where he will be giving a ten-minute “Visionary Talk”, he found time to speak with me. First, I asked him: given that the global semiconductor industry is entering the sub-20nm era, will it continue to be ‘business as usual’, or is it going to be different this time?
Dr. Rhines said: “Every generation has some differences, even though it usually seems like we’ve seen all this before. The primary change that comes with “sub-20nm” is the change in transistor structure to FinFET. This will give designers a boost toward achieving lower power. However, compared to 28nm, there will be a wafer cost penalty to pay for the additional process complexity that also includes two additional levels of resolution enhancement.”
Impact of new transistor structures
How will the new transistor structures impact design and manufacturing?
According to him, the relatively easy impact on design is related to the simulation of a new device structure; models have already been developed and characterized but will be continuously updated until the processes are stable. More complex are the requirements for place and route and verification; support for “fin grids” and new routing and placement rules has already been implemented by the leading place and route suppliers.
He added: “Most complex is test; FinFET will require transistor-level (or “cell-aware”) design for test to detect failures, rather than just the traditional gate-level stuck-at fault models. Initial results suggest that failure to move to cell-aware ATPG will result in 500 to 1000 DPM parts being shipped to customers.
“Fortunately, “cell-aware” ATPG design tools have been available for about a year and are easily implemented with no additional EDA cost. Finally, there will be manufacturing challenges but, like all manufacturing challenges, they will be attacked, analyzed and resolved as we ramp up more volume.”
Introducing 450mm wafer handling and new lithography
Is it possible to introduce 450mm wafer handling and new lithography successfully at this point in time?
“Yes, of course,” Dr. Rhines said. “However, there are a limited number of companies that have the volume of demand to justify the investment. The wafer diameter transition decision is always a difficult one for the semiconductor manufacturing equipment companies because it is so costly and it requires a minimum volume of machines for a payback. In this case, it will happen. The base of semiconductor manufacturing equipment companies is becoming very concentrated and most of the large ones need the 450mm capability.”
What will be the impact of transistor variability and other physics issues?
As per Dr. Rhines, the impact should be significant. FinFET, for example, requires controlling the physical characteristics of multiple fins within a narrow range of variability. As geometries shrink, small variations become big percentages. New design challenges are always interesting for engineers, but the problems will be overcome relatively quickly.
Some time ago, Cadence Design Systems Inc. announced the EDA360 vision. As per Jaswinder Ahuja, corporate VP and MD of Cadence Design Systems India, the Cadence vision of EDA360 is alive and well. The organization has been aligned around the EDA360 vision.
EDA360 is a five-year vision that defines the trends in the EDA industry, based on what Cadence is observing in the industry and the direction in which it feels the industry will go.
At Cadence, the Silicon Realization Group is headed by Dr. Chi-ping Hsu. The SoC Realization Group is headed by Martin Lund, and Nimish Modi is looking after the System Realization Group. Cadence’s focus has been on in-house development and innovation. Tempus has been a major announcement from the Silicon Realization Group.
What’s going on with EDA360?
There has been a renewed thrust in the SoC Realization Group at Cadence. Already, there have been three acquisitions this year — Cosmic Circuits, Tensilica and Evatronix. Cadence is buying the IP part of the business from Evatronix. This acquisition is ongoing and will be announced in June 2013.
On the relationship between the electronics and the EDA industries, Ahuja said the electronics industry is going through a transition, and that the EDA industry needs to change. The importance of system-level design has increased. Companies are currently focusing on optimizing the end user experience.
Cadence Design Systems Inc. has announced the Tempus timing signoff solution. It facilitates ground-breaking signoff timing analysis and closure. The new technology accelerates timing analysis and closure by weeks. It is said to be up to 10X faster than competing solutions. Tempus has also been endorsed by Texas Instruments (TI).
Complexity is growing exponentially, and signoff is the bottleneck. Design complexity keeps increasing. Low power is important across markets, from smartphones to datacenters. Time to market remains critical as well, and feature-rich devices are growing the design size.
Timing closure schedule and complexity have been increasing. In fact, up until now, timing closure solutions are said not to have kept pace with design complexity. The number of timing views increases with each new process node. The increased margins make timing closure very difficult. Exponential growth in design size and complexity is stretching analysis capacity. Time spent in signoff closure has grown to as much as 40 percent of the design flow at 20nm.
The Tempus timing signoff solution is big on performance, accuracy and closure. For performance, it facilitates massively parallelized computation, scales to hundreds of CPUs, and uses optimized data structures. For accuracy, it allows up to 10X faster path-based analysis (PBA) and advanced process modeling. Finally, for closure, it provides up to 10X reduction in closure time, is placement and routing aware, and offers unlimited MMMC capacity.
Tempus offers unprecedented performance and handles hundreds of millions of cells flat. It has innovative hierarchical/incremental analysis. For design closure, the multi-mode, multi-corner (MMMC) analysis is distributed or concurrent. There is physically aware optimization, whether graph- or path-based. PBA provides a detailed view of timing based on slew propagation.
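To make the distributed MMMC idea concrete, here is a minimal sketch, in Python, of analyzing independent timing views concurrently across CPU cores. This is not Cadence's implementation; the modes, corners, paths and delay numbers are invented purely for illustration of why the views parallelize so naturally.

```python
# Minimal sketch of distributing multi-mode, multi-corner (MMMC) timing
# analysis across CPU cores. The views, paths and delay numbers below are
# invented for illustration; a real signoff tool works on full netlists.
from multiprocessing import Pool
from itertools import product

MODES = ["functional", "test"]
CORNERS = {"ss_0p9v_125c": 1.15, "ff_1p1v_m40c": 0.85}   # delay scale factors

# Toy path library: (path name, nominal delay in ns, required time in ns)
PATHS = [("cpu_to_cache", 1.80, 2.00), ("ddr_ctrl", 2.10, 2.00)]

def analyze_view(view):
    """Compute worst slack for one (mode, corner) timing view."""
    mode, corner = view
    scale = CORNERS[corner]
    worst = min(required - delay * scale for _, delay, required in PATHS)
    return mode, corner, worst

if __name__ == "__main__":
    views = list(product(MODES, CORNERS))        # each view is independent
    with Pool() as pool:                          # one worker per CPU core
        for mode, corner, slack in pool.map(analyze_view, views):
            status = "MET" if slack >= 0 else "VIOLATED"
            print(f"{mode:10s} {corner:15s} slack={slack:+.3f} ns  {status}")
```

Because each (mode, corner) view is independent, adding corners or modes simply adds more work items, which is what lets signoff tools scale out across hundreds of CPUs.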
With Tempus, Cadence is solving the design complexity challenge by eliminating the signoff bottleneck and enabling customers to meet power, performance and time-to-market goals.
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
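As a rough illustration of the "capture the specification once, generate every view from it" idea, the sketch below reads one register map and emits two simplified views: a C header for software and a plain-text address map. This is not Agnisys code; the spec format and registers are invented, and real outputs such as UVM, SystemRDL or IP-XACT are far richer.

```python
# Illustrative sketch of generating two "views" from one register map
# specification. The format and registers are invented; tools such as
# IDesignSpec emit UVM, SystemRDL, IP-XACT and more from richer specs.
REG_MAP = {
    "block": "uart",
    "registers": [
        {"name": "CTRL",   "offset": 0x00, "fields": [("ENABLE", 0, 1), ("PARITY", 1, 2)]},
        {"name": "STATUS", "offset": 0x04, "fields": [("BUSY", 0, 1), ("ERROR", 1, 1)]},
    ],
}

def c_header(spec):
    """Software view: C #defines for register offsets and field positions."""
    prefix = spec["block"].upper()
    lines = [f"/* Auto-generated from the {spec['block']} register spec */"]
    for reg in spec["registers"]:
        lines.append(f"#define {prefix}_{reg['name']}_OFFSET 0x{reg['offset']:02X}")
        for fname, lsb, width in reg["fields"]:
            lines.append(f"#define {prefix}_{reg['name']}_{fname}_LSB   {lsb}")
            lines.append(f"#define {prefix}_{reg['name']}_{fname}_WIDTH {width}")
    return "\n".join(lines)

def address_map(spec):
    """Documentation view: a plain-text address map table."""
    rows = [f"{reg['offset']:#06x}  {reg['name']}" for reg in spec["registers"]]
    return "\n".join(["Offset  Register"] + rows)

if __name__ == "__main__":
    print(c_header(REG_MAP))
    print()
    print(address_map(REG_MAP))
```

The point is that when the specification changes, every generated view changes with it, which is what keeps hardware, verification and software teams synchronized.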
Register verification and sequences can consume 40 percent of project time or more, and errors in them lead to re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors, such as IDSWord, IDSExcel, IDSOO and IDSBatch.
IDesignSpec: more than a tool for creating register models
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions to IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing Sequences, the user is now able to capture various programming sequences in the spec, which are translated into both C++ and UVM sequences. Further, the interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high-level constraint specification. These assertions can be created with the RTL or in external files such that they can be optionally bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep down in the simulation hierarchy.
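The following sketch shows, in a highly simplified form, what turning user constraints into assertions can look like: each legal-range constraint on a register field is expanded into a SystemVerilog assertion string. The constraint format and generated code are invented for illustration and are not IDesignSpec's actual syntax.

```python
# Simplified sketch of turning user constraints on register fields into
# SystemVerilog assertions. The constraint format and output style are
# invented for illustration only.
CONSTRAINTS = [
    # (register.field, signal name, legal minimum, legal maximum)
    ("CTRL.BAUD_SEL", "uart_ctrl_baud_sel", 0, 7),
    ("CTRL.PARITY",   "uart_ctrl_parity",   0, 2),
]

def to_assertion(field, signal, lo, hi):
    """Emit one SystemVerilog concurrent assertion checking a legal range."""
    return (
        f"// Constraint on {field}: value must stay within [{lo}:{hi}]\n"
        f"assert property (@(posedge clk) disable iff (!rst_n)\n"
        f"    ({signal} >= {lo}) && ({signal} <= {hi}))\n"
        f"  else $error(\"{field} out of range\");"
    )

if __name__ == "__main__":
    for field, signal, lo, hi in CONSTRAINTS:
        print(to_assertion(field, signal, lo, hi))
        print()
```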
The user can now identify one or more bus domains associated with Registers and Blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also generated automatically.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How is IDesignSpec working as chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
Last week (March 11, 2013), Cadence Design Systems Inc. entered into a definitive agreement to acquire Tensilica Inc., a leader in dataplane processing IP, for approximately $380 million in cash.
With this acquisition, Tensilica dataplane processing units (DPUs) combined with Cadence design IP will deliver more optimized IP solutions for mobile wireless, network infrastructure, auto infotainment and home applications.
The Tensilica IP also complements industry-standard processor architectures, providing application-optimized subsystems to increase differentiation and get to market faster. Finally, over 200 licensees, including system OEMs and seven of the top 10 semiconductor companies, have shipped over 2 billion Tensilica IP cores.
Talking about the rationale behind Cadence acquiring Tensilica, Pankaj Mayor, VP and head of Marketing, Cadence, said: “Tensilica fits and furthers our IP strategy – the combination of Tensilica’s DPU and Cadence IP portfolio will broaden our IP portfolio. Tensilica also brings significant engineering and management talent. The combination will allow us to deliver to our customers configurable, differentiated, and application-optimized subsystems that improve time to market.”
The acquisition is also expected to see Tensilica’s dataplane IP complement the Cadence and Cosmic Circuits IP. Cadence had acquired Cosmic Circuits in February 2013.
What are the possible advantages of DPUs over DSPs? Does it mean a possible end of the road for DSPs?
As per Mayor, DSPs are special-purpose processors targeted at digital signal processing. Tensilica’s DPUs are programmable and customizable for a specific function, providing optimal data throughput and processing speed; in other words, the DPUs from Tensilica provide a unique combination of customized processing plus DSP. Tensilica’s DPUs can outperform traditional DSPs in power and performance.
So, what will happen to the MegaChips design center agreement with Tensilica? Does it still carry on? According to Mayor, right now, Cadence and Tensilica are operating as two independent companies and, therefore, Cadence cannot comment until the closing of the acquisition, which is expected in 30-60 days.
Thanks to Sheryl Gulizia, senior manager, Worldwide Public Relations, Synopsys Inc., I was able to connect with John Chilton, senior VP of Marketing and Strategic Development, Synopsys. We discussed the global (and Indian) outlook for the semiconductor industry in detail. Dr. Aart De Geus was apparently away on a business meet.
According to Chilton, the semiconductor industry has repeatedly stared down the daunting technical challenges caused by the necessity of Moore’s Law and the inevitability of the laws of physics. Every time, the industry has risen to the challenge and delivered silicon that is smaller, faster and cheaper, and the design and systems companies that were quickest to exploit the new technologies reaped the greatest benefit.
Power dissipation challenging
One trend that has proven to be especially challenging is power dissipation. Although transistors get smaller, faster and cheaper, chip power keeps increasing. Increasing power and decreasing size could have caused device-melting energy densities, but the industry rose to the challenge with more innovative physics along with smarter design methods and tools.
This time around, the challenge seems more fundamental, with the new nodes offering either better performance or lower power, but not both at the same time, and maybe not at a lower cost. The fundamental driving factor behind innovation has been smaller, faster and cheaper transistors, with the cheaper part making the migration a no-brainer. Unfortunately, this time the new node is not expected to be cheaper.
App processors to drive move to 20nm
Application processors for mobile and cloud-based services will drive the move to 20nm. These applications have the volume and power/performance needs to justify the expected investment required to embrace the 20nm node. Recent product announcements at CES underscore the emergence of the ‘cloud to mobile client’ trend in consumer electronics.
Dell and Wyse unveiled Project Ophelia. Ophelia is a USB memory stick-sized thin client that will plug into any compatible TV or Dell monitor. The device will boot into an Android OS and turn any TV into a portal to access a computer somewhere else. Ophelia works by taking advantage of the MHL protocol and works with any MHL-enabled display. Over 100 million MHL-compliant chipsets have already been shipped, so the opportunities for this type of interaction are growing.
MHL, along with established standards such as USB and HDMI or even future short-range wireless standards, will enable consumers to plug their cell phone into any monitor or TV and consume content via their phone on a larger, more satisfying display.
Coincidentally, on the same day, Samsung announced consumer displays that utilize voice and gesture recognition. These emerging technologies will begin to redefine the way we interact with the cloud. Instead of carrying a laptop, you may end up waving and talking to a TV. In a futuristic presentation, Lexus showed a prototype of a laser-scanning system that is small enough to be mounted on a grill and makes 3-D maps of the environment surrounding a car. This kind of embedded vision technology will make its way into more devices as processor performance increases.
Chilton said that developing such complex systems and applications requires a robust verification solution. Chip designers already use complex and exhaustive test benches to test individual blocks and subsystems. Verification engineers will need to move up to the next level and handle the full verification of the SoC within a target system.
Verification of an integrated system will require an integrated verification solution that includes not just simulation but also acceleration, emulation and formal debug. A new, integrated verification platform should combine these existing discrete technologies to offer the productivity needed to realize complex systems with predictable, manageable schedules.
Delivering the hardware simultaneously with a working OS and development kit will require virtual prototypes, which will be used by software developers prior to the release of working hardware.
It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO, of Mentor Graphics. I chatted with him, trying to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, migrating to 3D FinFET transistors, and Moore’s Law getting to be “Moore Stress”!
Gigascale, gigahertz, gigacomplex chip design
First, I asked him to elaborate on how implementation of chip design will evolve, with respect to gigascale design, gigahertz and gigacomplex geometries.
He said: “Thanks to close co-operation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, “Gigascale” power dissipation is a bigger challenge than managing the complexity but “system-level” power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit.”
Functional verification is continually challenged by complexity but there have been, and continue to be, many orders of magnitude improvement in performance just from adoption of emulation, intelligent test benches and formal methods so this will not be a major limitation.
The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.
Is verification keeping up?
How is the innovation in verification keeping up with trends?
Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.
Emulators have more than made up the difference. Emulators offer more than four orders of magnitude faster performance than simulators and emulators do so at about 0.005X the cost per cycle of simulation. The cost of power per year is more than one third the cost of hardware in a large simulation farm today, while emulation offers a 12X savings in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
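A back-of-the-envelope calculation shows why those multipliers matter. Assuming, as stated above, that verification effort grows roughly with the square of component count, and (optimistically) that the individual gains compose multiplicatively, the available headroom in design size is the square root of the combined speedup. The sketch below uses the conservative ends of the ranges quoted above; it is an illustration, not a claim about any particular tool flow.

```python
# Back-of-the-envelope headroom calculation using the figures quoted above.
# Assumes verification effort grows roughly with the square of component
# count and (optimistically) that the gains compose multiplicatively.
import math

gains = {
    "emulation": 1e4,                 # ~4 orders of magnitude
    "formal methods": 1e3,            # 3-4 orders; conservative end used
    "intelligent testbenches": 1e2,   # 2-3 orders; conservative end used
    "higher abstraction": 1e3,        # ~3 orders of magnitude
}

combined = math.prod(gains.values())
# If effort ~ N^2, a speedup of S supports designs ~ sqrt(S) times larger.
size_headroom = math.sqrt(combined)

print(f"Combined verification speedup: {combined:.0e}x")
print(f"Design-size headroom (effort ~ N^2): ~{size_headroom:.0e}x larger")
```

In practice the gains overlap rather than multiply cleanly, but even heavily discounted, they leave ample room to verify much larger chips.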
By applying multiple engines and multiple abstraction levels to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification efforts from every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are keys to driving verification reuse to match the level of design reuse.
Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.
Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?
According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
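As a toy illustration of the kinds of checks described here, the sketch below models a power-control sequence check and a simple power-state coverage metric. The states, legal transitions and trace are invented; a real flow derives this information from the UPF description and exercises it in simulation or emulation alongside the controlling software.

```python
# Toy model of two power-management checks described above: a legality check
# on power-control sequences and a simple power-state coverage metric.
# The states, legal transitions and trace are invented for illustration.
LEGAL = {
    "ON":        {"RETENTION", "OFF"},
    "RETENTION": {"ON"},
    "OFF":       {"ON"},
}

def check_sequence(trace):
    """Flag illegal power-state transitions and report state coverage."""
    covered, errors = {trace[0]}, []
    for prev, curr in zip(trace, trace[1:]):
        if curr not in LEGAL[prev]:
            errors.append(f"illegal transition {prev} -> {curr}")
        covered.add(curr)
    coverage = 100.0 * len(covered) / len(LEGAL)
    return errors, coverage

if __name__ == "__main__":
    trace = ["ON", "RETENTION", "ON", "OFF", "RETENTION"]  # last hop is illegal
    errors, coverage = check_sequence(trace)
    print(f"Power-state coverage: {coverage:.0f}%")
    for e in errors:
        print("ERROR:", e)
```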
As designers proceed to the gate and transistor level, the accuracy of power estimation improves. That is why gate-level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical layout level, designers traditionally were stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques that are built into the place and route tools.
Cadence Design Systems Inc. has released Allegro/OrCAD 16.6, which provides unmatched, scalable design solutions addressing technology challenges.
Allegro is meant for boards ranging from simple to complex, and for geographically dispersed design teams. It also caters to the personal productivity needs of start-ups and small to medium companies, and provides high-speed, power-aware SI and HDI managed design environments for medium to large enterprises.
Allegro, Sigrity and OrCAD – major themes of Allegro PCB 16.6
The major themes of Allegro PCB 16.6 include the first ECAD collaborative environment for work-in-progress (WiP) PCB design using Microsoft SharePoint, accelerated timing closure for high-speed interfaces such as DDRx, and support for advanced miniaturization techniques that embed passive and active components on the inner layers of a PCB.
Regarding the enhanced miniaturization capabilities for embedding dual-sided and vertical components, Hemant Shah, product marketing director, Allegro PCB and FPGA Products, Cadence, said: “Components with dual-sided contacts offer customers the ability to connect to the component’s pins through the layer above or through the layer below. This provides routing flexibility. It also helps customers who want redundant connections on inner layers.”
Design implementation can be accelerated through the flexible PCB team design capability. Leveraging GPUs for faster display, users can pan and zoom on dense PCBs, enabling greater design productivity.
A typical product creation environment includes a best-in-class hardware design environment for implementation, analysis and compliance sign-off. The design team collaboration capability encompasses design-chain partners. Design data can be integrated into corporate systems to manage cost and quality, and provide visibility. It allows on-time release into manufacturing to build products.
Allegro, OrCAD and Sigrity ensure product creation with a best-in-class PCB design environment: design team collaboration and productivity integrated at the PCB level, design data integrated into corporate systems to manage cost and quality, and on-time release into manufacturing to build products. Allegro, OrCAD and Sigrity are helping create market-leading products.
Users can create the first ECAD collaborative environment using Microsoft SharePoint. Design creation and collaboration trends include geographically distributed teams and an increase in outsourcing (OEMs-ODMs), while product life cycle management tools are not designed or targeted for managing work-in-progress data. For every design engineer, there are 3X-5X collaborators, reviewers, etc.
The Allegro Design Workbench integrates seamlessly with SharePoint 2010. The Team Design Authoring cockpit moves design data in and out of SharePoint, providing WiP design data management, with block-level check-in, check-out, etc. SharePoint is already deployed in many companies, is scalable from a single server to a cloud environment, and offers a rich platform for power users to configure and developers to customize.
By creating the first ECAD collaborative environment, users can reduce the product creation time by up to 40 per cent. Chances of first pass failure are also reduced from 25 per cent to 1 per cent. Users can accelerate the timing closure on high-speed, multi-Gigabit signals.
It is always a pleasure interacting with Dr. Walden (Wally) C. Rhines, the chairman and CEO, Mentor Graphics, and vice chairman of the EDA Consortium, USA. I started by enquiring about the global semiconductor industry.
Dr. Wally Rhines said: “The absolute size of the semiconductor industry (in terms or total revenue) differs depending on which analyst you ask, because of differences in methodology and the breadth of analysts’ surveys. Current 2012 forecasts include $316 billion from Gartner, $320 billion from IDC, $324.5 billion from IHS iSuppli, $327.2 billion from Semico Research and $339 billion from IC Insights.
“These numbers reflect growth rates from 4 per cent to 9.2 per cent, based on the different analyst-specific 2011 totals. Capital spending forecasts for the three largest semiconductor companies have increased by almost 50 per cent just since the beginning of this year. However, the initial spurt of demand was influenced by the replenishment of computer and disc drive inventories caused by the Thailand flooding. Now that this is largely complete, there is some uncertainty about the second half.
“So, overall it looks like the industry will pass $310 billion this year, but it may not be by very much. The strong capital spending and demand for leading edge capacity should impact the second half, but the bigger impact will probably be in 2013.”
What’s with 28/20nm?
Has 28/20nm semiconductor technology become a major ‘work horse’? What’s going on in that area? At least, this area is now of considerable interest.
Dr. Rhines said that the semiconductor industry’s transition to the 28nm family of technologies, which broadly includes 32nm and 20nm, is a much larger transition than we have experienced for many technology generations.
The world’s 28nm-capable capacity now comprises almost 20 per cent of the total silicon area in production and yet, the silicon foundries are fully loaded with more 28nm demand than they can handle. In fact, high demand for 28/20nm has created a capacity pinch that is currently spurring additional capital expenditure by foundries.
He added: “As yields and throughput mature at 28nm, the major wave of capital investment will provide plentiful foundry capacity at lower cost, stimulating a major wave of design activity. Cost-effective, high yield 28nm foundry capacity will not only drive increasing numbers of new designs but it will also force re-designs of mature products to take advantage of the cost reduction opportunity.”