We are now entering the sub-20nm era. So, will it be business as usual, or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP, Marketing and Strategic Development, Synopsys, to find out more about the impact of new transistor structures on design and manufacturing, the prospects for 450mm wafers, and the effects of transistor variability.
Impact of new transistor structures on design and manufacturing
First, what will be the impact of new transistor structures on design and manufacturing?
Chilton said: “Most of the impact is really on the manufacturing end since they are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall, thin fins, so self-aligned double patterning steps are now required.
“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.
“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”
Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?
According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors and EDA tool providers have a long history of introducing new technology successfully when the economics of deploying the technology are favorable.
The 300mm wafer deployment, for example, was quite complex, but it was completed. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.
Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?
Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.
Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.
The other key sources of variability are the random location of traps, and the etching and lithography processes, which produce slightly different dimensions in critical shapes such as fin width and gate length.
“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.
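To make the ‘lumpy charge’ picture concrete, here is a minimal Monte Carlo sketch in Python. The doping level, channel depth and nominal threshold voltage are illustrative assumptions, not foundry data; the point is only that the dopant count in a nanoscale channel is Poisson-distributed, so its relative fluctuation, and hence the threshold-voltage spread, grows as the device shrinks.

```python
import numpy as np

# Illustrative Monte Carlo sketch of random doping fluctuations (RDF).
# All device numbers here are assumptions for demonstration, not foundry data.
N_A = 1e18         # channel doping, atoms/cm^3 (assumed)
VT_NOMINAL = 0.35  # nominal threshold voltage, volts (assumed)

def vt_spread(width_nm, length_nm, depth_nm=5.0, trials=100_000):
    """Estimate the threshold-voltage spread caused by dopant-count
    fluctuations: the dopant count in a tiny channel is Poisson-distributed,
    so its relative fluctuation (~1/sqrt(N)) grows as the device shrinks."""
    volume_cm3 = width_nm * length_nm * depth_nm * 1e-21  # nm^3 -> cm^3
    mean_dopants = N_A * volume_cm3
    counts = np.random.poisson(mean_dopants, trials)
    # Crude first-order linearization: dVt/Vt ~ dN/N, enough to show the trend.
    vt_samples = VT_NOMINAL * counts / mean_dopants
    return mean_dopants, vt_samples.std()

for w, l in [(1000, 1000), (100, 100), (10, 20)]:
    n, sigma = vt_spread(w, l)
    print(f"{w} x {l} nm channel: ~{n:,.0f} dopants, sigma(Vt) ~ {sigma*1e3:.1f} mV")
```

Under these assumed numbers, a micron-scale device averages thousands of dopants and a few millivolts of spread, while a FinFET-scale channel holds only a handful of atoms and the spread balloons, which is exactly the ‘lumpy’ behavior Chilton describes.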
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
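To illustrate the single-source idea, here is a hypothetical mini register spec in Python, rendered into two of many possible views (a firmware header and a documentation table). The spec layout and names are invented for this sketch; they are not IDesignSpec’s actual input or output formats.

```python
# Hypothetical sketch of "capture the register spec once, generate many views".
# The spec layout and names are invented; not IDesignSpec's actual formats.
SPEC = {
    "name": "ctrl",
    "addr": 0x0004,
    "fields": [  # (name, msb, lsb, access)
        ("enable", 0, 0, "RW"),
        ("mode",   3, 1, "RW"),
        ("busy",   7, 7, "RO"),
    ],
}

def to_c_header(reg):
    """View 1: C #define macros for the software team."""
    out = [f"#define {reg['name'].upper()}_ADDR 0x{reg['addr']:04X}"]
    for name, msb, lsb, _ in reg["fields"]:
        mask = ((1 << (msb - lsb + 1)) - 1) << lsb
        out.append(f"#define {reg['name'].upper()}_{name.upper()}_MASK 0x{mask:02X}")
    return "\n".join(out)

def to_doc_table(reg):
    """View 2: a plain-text documentation table for the same register."""
    out = [f"register {reg['name']} @ 0x{reg['addr']:04X}",
           "field   | bits | access"]
    for name, msb, lsb, access in reg["fields"]:
        out.append(f"{name:<7} | {msb}:{lsb}  | {access}")
    return "\n".join(out)

print(to_c_header(SPEC))
print(to_doc_table(SPEC))
```

The design benefit is that both views are regenerated from the same source, so the hardware, verification and software teams can never drift out of sync.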
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
Register verification and sequences can consume 40 percent or more of project time when errors cause re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors, such as IDSWord, IDSExcel, IDSOO and IDSBatch.
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions to IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on sequence-capture function, the user can now capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level-sensitive interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high-level constraint specification. These assertions can be created with the RTL or in external files, such that they can be optionally bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep down in the simulation hierarchy.
The user can now identify one or more bus domains associated with Registers and Blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also generated automatically.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How does IDesignSpec work for chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
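As a rough sketch of that constraints-to-assertions transformation, consider the snippet below; the constraint arguments and the generated SystemVerilog are invented for illustration, not Agnisys’ actual formats.

```python
# Sketch of turning a declarative range constraint into a SystemVerilog
# assertion string. The inputs and the generated SVA are invented for
# illustration; they are not Agnisys' actual formats.
def constraint_to_sva(reg, field, low, high, clk="clk"):
    """Emit an SVA property asserting the field stays within [low, high]."""
    return (
        f"property p_{reg}_{field}_range;\n"
        f"  @(posedge {clk}) {reg}.{field} inside {{[{low}:{high}]}};\n"
        f"endproperty\n"
        f"a_{reg}_{field}_range: assert property (p_{reg}_{field}_range);"
    )

print(constraint_to_sva("ctrl", "mode", 0, 5))
```

The user states the legal range once; the assertion that polices it in simulation falls out mechanically.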
It is always a pleasure speaking with Dr. Walden (Wally) C. Rhines, chairman and CEO, Mentor Graphics Corp. I met him on the sidelines of the 13th Global Electronics Summit, held at the Chaminade Resort & Spa, Santa Cruz, USA.
Status of global EDA industry
First, I asked Dr. Rhines how the EDA industry was doing. Dr. Rhines said: “The global EDA industry has been doing pretty well. The results have been pretty good for 2012. In general, the EDA industry tends to follow the semiconductor R&D by at least 18 months.”
For the record, the electronic design automation (EDA) industry revenue increased 4.6 percent for Q4 2012 to $1,779.1 million, compared to $1,700.1 million in Q4 2011.
Every region, barring Japan, grew in 2012. The Asia Pacific rim grew the fastest – about 12.5 percent. The Americas was the second fastest region in terms of growth at 7.4 percent, and Europe grew at 6.8 percent. However, Japan decreased by 3 percent in 2012.
In 2012, the segments that grew the fastest within the EDA industry were PCB design and IP. The front-end CAE (computer-aided engineering) group grew faster than the back end. By product category, CAE grew 9.8 percent. The overall growth for license and maintenance was 7 percent. Among the CAE areas, design entry grew 36 percent and emulation 24 percent.
DFM also grew 28 percent last year. Overall, PCB grew 7.6 percent, while PCB analysis grew 25 percent. IP grew 12.6 percent, while verification IP grew 60 percent. Formal verification and power analysis each grew 16 percent. “That’s actually a little faster than how semiconductor R&D is growing,” added Dr. Rhines.
Status of global semicon industry
On the fortunes of the global semiconductor industry, Dr. Rhines said: “The global semiconductor industry grew very slowly in 2012. Year 2013 should be better. Revenue was also concentrated by a lot of consolidation in the wireless industry.”
According to him, smartphones should see further growth. “There are big investments in capacities in the 28nm segment. Folks will likely redesign their products over the next few years,” he said. “A lot of firms are waiting for FinFET to go to 20nm. People who need it for power reduction should benefit.”
“A lot of people are concerned about Japan. We believe that Japan can recover due to the Yen,” he added.
Last week (March 11, 2013), Cadence Design Systems Inc. entered into a definitive agreement to acquire Tensilica Inc., a leader in dataplane processing IP, for approximately $380 million in cash.
With this acquisition, Tensilica dataplane processing units (DPUs) combined with Cadence design IP will deliver more optimized IP solutions for mobile wireless, network infrastructure, auto infotainment and home applications.
The Tensilica IP also complements industry-standard processor architectures, providing application-optimized subsystems to increase differentiation and get to market faster. Finally, over 200 licensees, including system OEMs and seven of the top 10 semiconductor companies, have shipped over 2 billion Tensilica IP cores.
Talking about the rationale behind Cadence acquiring Tensilica, Pankaj Mayor, VP and head of Marketing, Cadence, said: “Tensilica fits and furthers our IP strategy – the combination of Tensilica’s DPU and Cadence IP portfolio will broaden our IP portfolio. Tensilica also brings significant engineering and management talent. The combination will allow us to deliver to our customers configurable, differentiated, and application-optimized subsystems that improve time to market.”
Following the acquisition, the Tensilica dataplane IP is also expected to complement Cadence’s and Cosmic Circuits’ IP. Cadence had acquired Cosmic Circuits in February 2013.
What are the possible advantages of DPUs over DSPs? Does it mean a possible end of the road for DSPs?
As per Mayor, DSPs are special-purpose processors targeted at digital signal processing. Tensilica’s DPUs are programmable and customizable for a specific function, providing optimal data throughput and processing speed; in other words, the DPUs from Tensilica provide a unique combination of customized processing plus DSP. Tensilica’s DPUs can outperform traditional DSPs in power and performance.
So, what happens to the MegaChips design center agreement with Tensilica? Does it still carry on? According to Mayor, right now, Cadence and Tensilica are operating as two independent companies and therefore, Cadence cannot comment until the closing of the acquisition, expected in 30-60 days.
Thanks to Sheryl Gulizia, senior manager, Worldwide Public Relations, Synopsys Inc., I was able to connect with John Chilton, senior VP of Marketing and Strategic Development, Synopsys. We discussed the global (and Indian) outlook for the semiconductor industry in detail. Dr. Aart De Geus was apparently away at a business meeting.
According to Chilton, the semiconductor industry has repeatedly stared down the daunting technical challenges posed by the necessity of Moore’s Law and the inevitability of the laws of physics. Every time, the industry has risen to the challenge and delivered silicon that is smaller, faster and cheaper, and the design and systems companies that were quickest to exploit the new technologies reaped the greatest benefit.
Power dissipation challenging
One trend that has proven to be especially challenging is power dissipation. Although transistors get smaller, faster and cheaper, chip power keeps increasing. Increasing power and decreasing size could have caused device-melting energy densities, but the industry rose to the challenge with more innovative physics along with smarter design methods and tools.
This time around, the challenge seems more fundamental, with the new nodes offering either better performance or lower power, but not both at the same time, and maybe not at a lower cost. The fundamental driving factor behind innovation has been smaller, faster and cheaper transistors, with the cheaper part making the migration a no-brainer. Unfortunately, this time the new node is not expected to be cheaper.
App processors to drive move to 20nm
Application processors for mobile and cloud-based services will drive the move to 20nm. These applications have the volume and power/performance needs to justify the expected investment required to embrace the 20nm node. Recent product announcements at CES underscore the emergence of the ‘cloud to mobile client’ trend in consumer electronics.
Dell and Wyse unveiled Project Ophelia, a USB memory stick-sized thin client that will plug into any compatible TV or Dell monitor. The device will boot into an Android OS and turn any TV into a portal to access a computer somewhere else. Ophelia works by taking advantage of the MHL protocol and works with any MHL-enabled display. Over 100 million MHL-compliant chipsets have already been shipped, so the opportunities for this type of interaction are growing.
MHL, along with established standards such as USB and HDMI or even future short-range wireless standards, will enable consumers to plug their cell phone into any monitor or TV and consume content via their phone on a larger, more satisfying display.
Coincidentally, on the same day, Samsung announced consumer displays that utilize voice and gesture recognition. These emerging technologies will begin to redefine the way we interact with the cloud. Instead of carrying a laptop, you may end up waving and talking to a TV. In a futuristic presentation, Lexus showed a prototype of a laser-scanning system that is small enough to be mounted on a grille and makes 3D maps of the environment surrounding a car. This kind of embedded vision technology will make its way into more devices as processor performance increases.
Chilton said that developing such complex systems and applications requires a robust verification solution. Chip designers already use complex and exhaustive test benches to test individual blocks and subsystems. Verification engineers will need to move up to the next level and handle the full verification of the SoC within a target system.
Verification of an integrated system will require an integrated verification solution that includes not just simulation but also acceleration, emulation and formal debug. A new, integrated verification platform should combine these existing discrete technologies to offer the productivity needed to realize complex systems with predictable, manageable schedules.
Delivering the hardware simultaneously with a working OS and development kit will require virtual prototypes, which will be used by software developers prior to the release of working hardware.
It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO, of Mentor Graphics. I chatted with him, trying to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, migrating to 3D FinFET transistors, and Moore’s Law getting to be “Moore Stress”!
Gigascale, gigahertz, gigacomplex chip design
First, I asked him to elaborate on how implementation of chip design will evolve, with respect to gigascale design, gigahertz and gigacomplex geometries.
He said: “Thanks to close co-operation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, ‘gigascale’ power dissipation is a bigger challenge than managing the complexity, but ‘system-level’ power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit.”
Functional verification is continually challenged by complexity, but there have been, and continue to be, many orders of magnitude of improvement in performance just from the adoption of emulation, intelligent test benches and formal methods, so this will not be a major limitation.
The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.
Is verification keeping up?
How is the innovation in verification keeping up with trends?
Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.
Emulators have more than made up the difference, offering more than four orders of magnitude faster performance than simulators at about 0.005X the cost per cycle of simulation. The cost of power per year is more than one third the cost of hardware in a large simulation farm today, while emulation offers a 12X savings in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
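A back-of-the-envelope reading of those figures, taking the low end of each quoted range, shows why the headroom is so large. Treating the gains as multiplicative is an assumption made here for illustration, not a guarantee.

```python
import math

# Back-of-the-envelope arithmetic using the figures quoted above, taking
# the low end of each range. Treating the gains as multiplicative is an
# assumption for illustration, not a guarantee.
emu_gain = 1e4          # "four orders of magnitude improvement through emulation"
formal_gain = 1e3       # low end of "three or four orders of magnitude"
itb_gain = 1e2          # low end of "two to three orders of magnitude"
abstraction_gain = 1e3  # "three orders of magnitude from higher abstraction"

total = emu_gain * formal_gain * itb_gain * abstraction_gain
print(f"combined verification throughput gain: 10^{math.log10(total):.0f}")  # 10^12

# If verification complexity grows ~ (component count)^2, a 10^12 gain in
# throughput covers roughly a sqrt(10^12) = 10^6 growth in component count.
print(f"component-count headroom: ~10^{math.log10(math.sqrt(total)):.0f}")   # 10^6
```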
By applying multiple engines and multiple abstraction levels to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification efforts from every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are keys to driving verification reuse to match the level of design reuse.
Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.
Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?
According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
As designers proceed to the gate and transistor level, the accuracy of power estimation improves. That is why gate-level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical-layout level, designers were traditionally stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques built into the place-and-route tools.
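To give a flavor of the power-control sequence checks and power-state coverage metrics mentioned above, here is a minimal sketch; the power states and the legal-transition table are invented for this example, not taken from any particular tool.

```python
# Minimal power-state transition checker, in the spirit of the automated
# power-control sequence checks described above. The states and the
# legal-transition table are invented for this sketch.
LEGAL = {
    "ON":        {"RETENTION", "OFF"},
    "RETENTION": {"ON"},   # e.g., state must be restored before powering off
    "OFF":       {"ON"},
}

def check_trace(trace):
    """Flag illegal power-state transitions and report simple state coverage."""
    visited = {trace[0]}
    for prev, curr in zip(trace, trace[1:]):
        if curr not in LEGAL[prev]:
            raise ValueError(f"illegal power transition: {prev} -> {curr}")
        visited.add(curr)
    msg = f"trace OK, state coverage {len(visited)}/{len(LEGAL)}"
    missing = set(LEGAL) - visited
    return msg + (f", unvisited: {sorted(missing)}" if missing else "")

print(check_trace(["ON", "RETENTION", "ON", "OFF", "ON"]))
# check_trace(["ON", "RETENTION", "OFF"])  # raises: RETENTION -> OFF illegal
```

In a real UPF-driven flow the same idea runs inside simulation or emulation, with the legal transitions derived from the power intent rather than hand-written.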
Happy new year to everyone! Here is an outlook for the electronics and semiconductors sectors in 2013, provided by Jaswinder Ahuja, corporate VP and MD, Cadence Design Systems (India) Pvt Ltd. (Thanks a lot, Pallavi).
First, the past year, 2012, in review.
Globally, 2012 has been a challenging year for the semiconductor industry with the economic slump in Europe and the US. However, the long term outlook remains positive, with Gartner reporting that the growth in the electronics and semiconductor industries will outpace world GDP growth till 2016.
In India, the ambiguity around the telecom market, traditionally the biggest consumer of semiconductor equipment, was the main handicap to growth. On the positive side, the passing of the National Policy on Electronics (NPE) in 2012 promises a much-needed fillip to the electronics ecosystem. In 2013 we expect to see a positive impact in terms of home-grown electronics thanks to the provisions of the Policy.
Worldwide technology trends in 2013
User experience is the driving force behind many of the semiconductor design trends that we will see in 2013 and beyond. Consumers are demanding devices on which games, music, cameras, internet, and other apps all run simultaneously and seamlessly. As a result, mobility, application-driven design, video, cloud and security, all of which enable an enhanced user experience, are the drivers of the electronics and semiconductor world today.
Mobility is the single biggest driver for the semiconductor industry. The pervasiveness of mobility affects not only the telecommunications industry, but also entertainment, home electronics, automotive and medical electronics.
For example, cutting-edge mobile solutions in the healthcare field include devices that can monitor blood pressure and blood sugar levels remotely, and then transmit the readings to the physician for diagnosis and treatment. In the automotive sector, in-vehicle infotainment is expected to be the next big thing, and end-consumers can look forward to real-time traffic reports, weather information, and entertainment options from next-generation cars.
Mobility has fundamentally altered how we produce and consume information. In the future, we can expect that devices will go one step further and actually interact intelligently with the user – we see the first steps of that with Apple’s Siri software.
Mobility has also created a completely new market for applications that enable a more interactive and satisfying user experience. It is via applications that system companies differentiate themselves and stand apart from the competition. The need to have applications on all kinds of devices is posing unique challenges to the semiconductor and EDA companies.
Whereas traditionally the hardware (silicon) was built first and then the software was added later, now developing the software and designing the hardware are becoming a parallel process. This gives rise to new EDA technologies that enable early software development using software models of system hardware long before silicon is ready. We will see this new way of designing continue to be a challenge going into 2013.
Per reports from Cisco, video will soon drive more than 90 percent of all global traffic on the Internet. As more and more entertainment and collaboration tools are launched, bandwidth-hungry video traffic will drive growth both in the end consumer market (mobile platforms) and the enterprise space (networking industry).
The cloud is closely intertwined with the growth in mobility – it is the cloud of network servers and backbone equipment that delivers the content and value to all mobile devices. For every 600 smartphones, or every 120 tablets, one dedicated server is needed. With the demand for mobiles showing accelerated growth, the need for cloud computing technologies will be another key driver for the semiconductor industry.
Security underpins our information age. The vast amount of data residing in mobile platforms and cloud architectures is extremely vulnerable. As we move into 2013, we foresee a sharper focus on securing data and critical infrastructure from theft and hacker attacks.