Are we at an inflection point in verification today? Delivering the guest keynote at the UVM 1.2 day, Vikas Gautam, senior director, Verification Group, Synopsys, said that mobile and the Internet of Things are driving growth today. Naturally, SoCs are becoming even more complex. This is also opening up new verification challenges, such as power efficiency, more software, and shrinking time-to-market. There is a need to shift left to be able to meet time-to-market goals.
The goal is to complete verification as early as possible. There have been breakthrough verification innovations; SystemVerilog brought in a single language. Every 10-15 years, there has been a need to upgrade verification.
Today, many verification technologies are needed, and there is a growing demand for smarter verification. There is a need for more upfront verification planning, automated setup and re-use with VIP, and the deployment of new technologies and different debug environments. The current flows are limiting smart verification: there are disjointed environments with many tools and vendors.
Synopsys has introduced the Verification Compiler. You get access to each required technology, as well as next-gen technology. These technologies are natively integrated. All of this enables 3X verification productivity.
Regarding next-gen static and formal platforms, there will be capacity and performance for SoCs, compatibility with implementation products and flows, and a comprehensive set of applications. NLP+X-Prop can help find tough wake-up bugs at RTL. Simulation is tuned for the VIP, with a ~50 percent runtime improvement.
SystemVerilog has brought in many new changes. Now, we have the Verification Compiler. Verdi is an open platform, offering VIA, a platform for customizing Verdi. VIA improves debug efficiency.
Here is the concluding part of my conversation with Synopsys’ Rich Goldman on the global semiconductor industry.
Global semicon in the sub-20nm era
How is the global semicon industry performing after entering the sub 20nm era? Rich Goldman, VP, corporate marketing and strategic alliances, Synopsys, said that driving the fastest pace of change in the history of mankind is not for the faint of heart. Keeping up with Moore’s Law has always required significant investment and ingenuity.
“The sub-20nm era brings additional challenges in device structures (namely FinFETs), materials and methodologies. As costs rise, a dwindling number of semiconductor companies can afford to build fabs at the leading edge. Those thriving include foundries, which spread capital expenses over the revenue from many customers, and fabless companies, which leverage foundries’ capital investment rather than risking their own. Thriving, leading-edge IDMs are now the exception.
“Semiconductor companies focused on mobile and the Internet of Things are also thriving as their market quickly expands. Semiconductor companies who dominate their space in such segments as automotive, mil/aero and medical are also doing quite well, while non-leaders find rough waters.”
Performance of FinFETs
Have FinFETs gone to below 20nm? Also, are those looking for power reduction now benefiting?
He added that 20nm was a pivotal point in advanced process development. The 20nm process node’s new set of challenges, including double patterning and very leaky transistors due to short channel effects, negated the benefits of transistor scaling.
To further complicate matters, the migration from 28nm to 20nm lacked the performance and area gains seen with prior generations, making it economically questionable. While planar FET may be nearing the end of its scalable lifespan at 20nm, FinFETs provide a viable alternative for advanced processes at emerging nodes.
The industry’s experience with 20nm paved the way for an easier FinFET transition. FinFET processes are in production today, and many IC design companies are rapidly moving to manufacture their devices on the emerging 16nm and 14nm FinFET-based process geometries due to the compelling power and performance benefits. Numerous test chips have taped out, and results are coming in.
“FinFET is delivering on its promise of power reduction. With 20nm planar FET technologies, leakage current can flow across the channel between the source and the drain, making it very difficult to completely turn the transistor off. FinFETs provide better channel control, allowing very little current to leak when the device is in the “off” state. This enables the use of lower threshold voltages, resulting in better power and performance. FinFET devices also operate at a lower nominal voltage supply, significantly improving dynamic power.”
What are the top five trends likely to rule the semicon industry in 2014 and why? Rich Goldman, VP, corporate marketing and strategic alliances, Synopsys, had this to say.
FinFETs will be a huge trend through 2014 and beyond. Semiconductor companies will certainly keep us well informed as they progress through FinFET tapeouts and ultimately deliver production FinFET processes.
They will tout the power and speed advantages that their FinFET processes deliver for their customers, and those semiconductor companies early to market with FinFETs will press their advantage by driving and announcing aggressive FinFET roadmaps.
IP and subsystems
As devices grow more complex, integrating third-party IP has become mainstream. Designers recognize as a matter of course that today’s complex designs benefit greatly from integrating third-party IP in such areas as microprocessors and specialized I/Os.
The trend for re-use is beginning to expand upwards to systems of integrated, tested IP so that designers no longer need to redesign well-understood systems, such as memory, audio and sensor systems.
Internet of Things/sensors
Everybody is talking about the Internet of Things for good reason. It is happening, and 2014 will be a year of huge growth for connected things. Sensors will emerge as a big enabler of the Internet of Things, as they connect our real world to computation.
Beyond the mobile juggernaut, new devices such as Google’s (formerly Nest’s) thermostat and smoke detector will enter the market, allowing us to observe and control our surrounding environment remotely.
The mobile phone will continue to subsume and disrupt markets, such as cameras, fitness devices, satellite navigation systems and even flashlights, enabled by sensors such as touch, capacitive pattern, gyroscopes, accelerometers, compasses, altimeters, light, CO and ionization sensors. Semiconductor companies positioned to serve the Internet of Things with sensor integration will do well.
Systems companies bringing IC design in-house
Large and successful systems companies wanting to differentiate their solutions are bringing IC specification and/or design in house. Previously, these companies were focused primarily on systems and solutions design and development.
Driven by a belief that they can design the best ICs for their specific needs, today’s large and successful companies such as Google, Microsoft and others are leading this trend, aided by IP reuse.
Advanced designs at both emerging and established process nodes
While leading-edge semiconductor companies drive forward on emerging process nodes such as 20nm, others are finding success by focusing on established nodes (28nm and above) that deliver required performance at reduced risk. Thus, challenging designs will emerge at both ends of the spectrum.
Part II of this discussion will look at FinFETs below 20nm and 3D ICs.
We are now entering the sub-20nm era. So, will it be business as usual or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP, Marketing and Strategic Development for Synopsys to find out more regarding the impact of new transistor structures on design and manufacturing, 450mm wafers and the impact of transistor variability.
Impact of new transistor structures on design and manufacturing
First, let us understand what will be the impact of new transistor structures on design and manufacturing.
Chilton said: “Most of the impact is really on the manufacturing end since they are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall and thin fins where self-aligned double patterning steps are now required.
“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.
“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”
Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?
According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors, and EDA tool providers have a long history of introducing new technology successfully when the economics of deploying the technology are favorable.
The 300mm wafer deployment was quite complex, for example, but was completed. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.
Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?
Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.
Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.
The other key sources of variability are due to the random location of traps and the etching and lithography processes which produce slightly different dimensions in critical shapes such as fin width and gate length.
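As a rough illustration (this relation is not from the interview itself), random device-to-device variation such as RDF is often characterized with the Pelgrom mismatch model, in which the standard deviation of the threshold-voltage difference between matched transistors shrinks with the square root of gate area:

```latex
\sigma(\Delta V_T) = \frac{A_{VT}}{\sqrt{W \cdot L}}
```

Here $A_{VT}$ is a process-dependent matching constant and $W \cdot L$ is the gate area. The smaller the device, the larger the mismatch, which is why these statistical effects become prominent at FinFET dimensions.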
“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
Register verification and sequences can consume 40 percent or more of project time, and register errors are a common source of SoC silicon re-spins and additional FPGA builds. The IDesignSpec family of products is available in various flavors: IDSWord, IDSExcel, IDSOO and IDSBatch.
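To make the single-source idea concrete, here is a minimal sketch (in Python, purely illustrative — the spec format, field names and generated code are assumptions, not IDesignSpec’s actual syntax) of how one register description can drive both a UVM register-model view and a software header view:

```python
# Hypothetical single-source register generation: one register description
# drives multiple output "views" (UVM model, C header). Illustrative only.

REG_SPEC = {
    "name": "ctrl",
    "width": 32,
    "fields": [
        {"name": "enable", "lsb": 0, "bits": 1, "access": "RW"},
        {"name": "mode",   "lsb": 1, "bits": 2, "access": "RW"},
        {"name": "status", "lsb": 3, "bits": 4, "access": "RO"},
    ],
}

def to_uvm(spec):
    """Emit a minimal UVM register model (SystemVerilog source as text)."""
    lines = [f"class {spec['name']}_reg extends uvm_reg;"]
    for f in spec["fields"]:
        lines.append(f"  rand uvm_reg_field {f['name']};")
    lines.append("  virtual function void build();")
    for f in spec["fields"]:
        lines.append(
            f"    {f['name']} = uvm_reg_field::type_id::create(\"{f['name']}\");"
        )
        lines.append(
            f"    {f['name']}.configure(this, {f['bits']}, {f['lsb']}, "
            f"\"{f['access']}\", 0, 0, 1, 1, 0);"
        )
    lines.append("  endfunction")
    lines.append("endclass")
    return "\n".join(lines)

def to_c_header(spec):
    """Emit field shift/mask macros for the software view of the register."""
    out = []
    for f in spec["fields"]:
        mask = ((1 << f["bits"]) - 1) << f["lsb"]
        prefix = f"{spec['name'].upper()}_{f['name'].upper()}"
        out.append(f"#define {prefix}_SHIFT {f['lsb']}")
        out.append(f"#define {prefix}_MASK 0x{mask:08X}")
    return "\n".join(out)

print(to_uvm(REG_SPEC))
print(to_c_header(REG_SPEC))
```

When the specification changes (say, a field grows by a bit), regenerating both views keeps the verification model and the firmware header consistent, which is the point the article makes about synchronized hardware and software teams.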
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions to IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing sequences, the user can now capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high-level constraint specification. These assertions can be created with the RTL or in external files so that they can be optionally bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep in the simulation hierarchy.
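As a sketch of what constraint-to-assertion generation can look like (the function, constraint format and emitted SVA style here are hypothetical, not IDesignSpec’s actual output), a legal-values constraint on a register field can be mechanically turned into a SystemVerilog assertion string:

```python
# Hypothetical illustration: turn a user constraint (legal values for a
# register field) into a SystemVerilog assertion. Names/format are assumed.

def constraint_to_sva(reg, field, legal_values):
    """Build an SVA property string checking that a register field only
    ever holds one of its legal values."""
    vals = ", ".join(str(v) for v in legal_values)
    return (
        f"assert property (@(posedge clk) "
        f"{reg}.{field} inside {{{vals}}}) "
        f"else $error(\"{reg}.{field} out of range\");"
    )

# Example: the 'mode' field of the 'ctrl' register may only be 0, 1 or 2.
sva = constraint_to_sva("ctrl", "mode", [0, 1, 2])
print(sva)
```

This mirrors the workflow Bakshi describes: the user states intent once as a constraint, and the tool derives the checker, so the assertion never has to be written (or maintained) by hand.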
The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also generated automatically.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How is IDesignSpec working as chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
Thanks to Sheryl Gulizia, senior manager, Worldwide Public Relations, Synopsys Inc., I was able to connect with John Chilton, senior VP of Marketing and Strategic Development, Synopsys. We discussed the global (and Indian) outlook for the semiconductor industry in detail. Dr. Aart de Geus was apparently away at a business meeting.
According to Chilton, the semiconductor industry has repeatedly stared down the daunting technical challenges caused by the necessity of Moore’s Law and the inevitability of the laws of physics. Every time, the industry has risen to the challenge and delivered silicon that is smaller, faster and cheaper, and the design and systems companies that were quickest to exploit the new technologies reaped the greatest benefit.
Power dissipation challenging
One trend that has proven to be especially challenging is power dissipation. Although transistors get smaller, faster and cheaper, chip power keeps increasing. Increasing power and decreasing size could have caused device-melting energy densities, but the industry rose to the challenge with more innovative physics along with smarter design methods and tools.
This time around, the challenge seems more fundamental, with the new nodes offering either better performance or lower power, but not both at the same time, and maybe not at a lower cost. The fundamental driving factor behind innovation has been smaller, faster and cheaper transistors, with the cheaper part making the migration a no-brainer. Unfortunately, this time the new node is not expected to be cheaper.
App processors to drive move to 20nm
Application processors for mobile and cloud-based services will drive the move to 20nm. These applications have the volume and power/performance needs to justify the expected investment required to embrace the 20nm node. Recent product announcements at CES underscore the emergence of the ‘cloud to mobile client’ trend in consumer electronics.
Dell and Wyse unveiled Project Ophelia, a USB memory stick-sized thin client that will plug into any compatible TV or Dell monitor. The device boots into the Android OS and turns any TV into a portal for accessing a computer somewhere else. Ophelia takes advantage of the MHL protocol and works with any MHL-enabled display. Over 100 million MHL-compliant chipsets have already been shipped, so the opportunities for this type of interaction are growing.
MHL, along with established standards such as USB and HDMI or even future short-range wireless standards, will enable consumers to plug their cell phone into any monitor or TV and consume content via their phone on a larger, more satisfying display.
Coincidentally, on the same day, Samsung announced consumer displays that utilize voice and gesture recognition. These emerging technologies will begin to redefine the way we interact with the cloud. Instead of carrying a laptop, you may end up waving and talking to a TV. In a futuristic presentation, Lexus showed a prototype of a laser-scanning system that is small enough to be mounted on a grill and makes 3-D maps of the environment surrounding a car. This kind of embedded vision technology will make its way into more devices as processor performance increases.
Chilton said that developing such complex systems and applications requires a robust verification solution. Chip designers already use complex and exhaustive test benches to test individual blocks and subsystems. Verification engineers will need to move up to the next level and handle the full verification of the SoC within a target system.
Verification of an integrated system will require an integrated verification solution that includes not just simulation but also acceleration, emulation and formal debug. A new, integrated verification platform should combine these existing discrete technologies to offer the productivity needed to realize complex systems with predictable, manageable schedules.
Delivering the hardware simultaneously with a working OS and development kit will require virtual prototypes, which will be used by software developers prior to the release of working hardware.