We are now entering the sub-20nm era. So, will it be business as usual or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP of Marketing and Strategic Development at Synopsys, to find out more about the impact of new transistor structures on design and manufacturing, 450mm wafers, and transistor variability.
Impact of new transistor structures on design and manufacturing
First, let us understand what will be the impact of new transistor structures on design and manufacturing.
Chilton said: “Most of the impact is really on the manufacturing end, since these are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall, thin fins, so self-aligned double-patterning steps are now required.
“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.
“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”
Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?
According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors and EDA tool providers have a long history of introducing new technology successfully when the economics of deploying it are favorable.
The 300mm wafer deployment was quite complex, for example, but was completed successfully. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.
Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?
Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.
Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
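A minimal Monte Carlo sketch illustrates why this becomes significant at FinFET scale; the dopant count of 50 atoms per channel is an assumed, illustrative number, not a figure from the article:

```python
import numpy as np

# Assumed, illustrative numbers: a tiny FinFET channel holding on average
# only ~50 dopant atoms, sampled across 100,000 transistors.
rng = np.random.default_rng(0)
mean_dopants = 50
counts = rng.poisson(mean_dopants, size=100_000)

# Poisson counting statistics give a relative fluctuation of ~1/sqrt(N);
# with only ~50 atoms per channel, the dopant count varies by roughly
# 14 percent from one device to the next.
relative_sigma = counts.std() / counts.mean()
print(f"relative dopant-count fluctuation: {relative_sigma:.1%}")
```

Run the same calculation with millions of dopants per channel, as in older nodes, and the fluctuation falls well below 0.1 percent, which is why the dopant charge could once be treated as a continuum.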
With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.
The other key sources of variability are due to the random location of traps and the etching and lithography processes which produce slightly different dimensions in critical shapes such as fin width and gate length.
“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
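The single-source idea behind such register automation can be sketched in a few lines. The register format and the generated C-header view below are invented for illustration; they are not IDesignSpec’s actual input syntax or output:

```python
# Hypothetical register map: one description, from which multiple "views"
# (RTL, UVM, headers, docs) could each be generated by a dedicated backend.
registers = [
    {"name": "CTRL",   "offset": 0x00, "fields": [("EN", 0, 1), ("MODE", 1, 2)]},
    {"name": "STATUS", "offset": 0x04, "fields": [("READY", 0, 1)]},
]

def to_c_header(regs, base=0x40000000):
    """Generate one view: a C address-map header with field masks."""
    lines = []
    for r in regs:
        lines.append(f"#define {r['name']}_ADDR 0x{base + r['offset']:08X}")
        for fname, lsb, width in r["fields"]:
            mask = ((1 << width) - 1) << lsb   # field mask from lsb/width
            lines.append(f"#define {r['name']}_{fname}_MASK 0x{mask:08X}")
    return "\n".join(lines)

print(to_c_header(registers))
```

The point of the pattern is that when the specification changes, every generated view is regenerated from the same source, so the hardware and software teams cannot drift apart.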
IDesignSpec automates the creation of registers and sequences, guaranteeing higher-quality, consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
Register verification and sequences can consume 40 percent or more of project time, and errors here lead to re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors: IDSWord, IDSExcel, IDSOO and IDSBatch.
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions in the IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing Sequences, the user is now able to capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, the interrupt registers can now be identified by the user and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high level constraint specification. These assertions can be created with the RTL or in the external files such that they can be optionally bound to the RTL. Unit level assertions are good for SoC level verification and debug, and help the user in identifying issues deep down in the simulation hierarchy.
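The constraints-to-assertions transformation described above can be illustrated with a toy sketch: a declarative legal-value constraint is turned into an executable check. The constraint format and names are invented for illustration and are not IDesignSpec’s syntax or output:

```python
# Hypothetical declarative constraint: which values of CTRL.MODE are legal.
constraint = {"register": "CTRL", "field": "MODE", "legal": {0, 1, 2}}

def constraint_to_assertion(c):
    """Transform a declarative constraint into a runtime assertion check."""
    def check(value):
        assert value in c["legal"], (
            f"{c['register']}.{c['field']}={value} violates the constraint")
    return check

check_mode = constraint_to_assertion(constraint)
check_mode(1)        # legal value: passes silently
try:
    check_mode(7)    # illegal value: the generated assertion fires
except AssertionError as e:
    print(e)
```

In the flow the article describes, the generated checks would be SystemVerilog assertions bound to the RTL rather than Python, but the single-source principle is the same: the user states intent once, and the assertion code is derived from it.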
The user can now identify one or more bus domains associated with Registers and Blocks, and generate appropriate code from it. Special registers, such as shadow registers and register aliases, are also automatically generated.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How does IDesignSpec work for chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
Thanks to Sheryl Gulizia, senior manager, Worldwide Public Relations, Synopsys Inc., I was able to connect with John Chilton, senior VP of Marketing and Strategic Development, Synopsys. We discussed the global (and Indian) outlook for the semiconductor industry in detail. Dr. Aart De Geus was apparently away on a business meet.
According to Chilton, the semiconductor industry has repeatedly stared down the daunting technical challenges caused by the necessity of Moore’s Law and the inevitability of the laws of physics. Every time, the industry has risen to the challenge and delivered silicon that is smaller, faster and cheaper, and the design and systems companies that were quickest to exploit the new technologies reaped the great benefit.
Power dissipation challenging
One trend that has proven to be especially challenging is power dissipation. Although transistors get smaller, faster and cheaper, chip power keeps increasing. Increasing power and decreasing size could have caused device-melting energy densities, but the industry rose to the challenge with more innovative physics along with smarter design methods and tools.
This time around, the challenge seems more fundamental, with the new nodes offering either better performance or lower power, but not both at the same time, and maybe not at a lower cost. The fundamental driving factor behind innovation has been smaller, faster and cheaper transistors, with the cheaper part making the migration a no-brainer. Unfortunately, this time the new node is not expected to be cheaper.
App processors to drive move to 20nm
Application processors for mobile and cloud-based services will drive the move to 20nm. These applications have the volume and power/performance needs to justify the expected investment required to embrace the 20nm node. Recent product announcements at CES underscore the emergence of the ‘cloud to mobile client’ trend in consumer electronics.
Dell and Wyse unveiled Project Ophelia. Ophelia is a USB memory-stick-sized thin client that will plug into any compatible TV or Dell monitor. The device will boot into an Android OS and turn any TV into a portal to access a computer somewhere else. Ophelia works by taking advantage of the MHL protocol and works with any MHL-enabled display. Over 100 million MHL-compliant chipsets have already been shipped, so the opportunities for this type of interaction are growing.
MHL, along with established standards such as USB and HDMI or even future short-range wireless standards, will enable consumers to plug their cell phone into any monitor or TV and consume content via their phone on a larger, more satisfying display.
Coincidentally, on the same day, Samsung announced consumer displays that utilize voice and gesture recognition. These emerging technologies will begin to redefine the way we interact with the cloud. Instead of carrying a laptop, you may end up waving and talking to a TV. In a futuristic presentation, Lexus showed a prototype of a laser-scanning system that is small enough to be mounted on a grill and makes 3-D maps of the environment surrounding a car. This kind of embedded vision technology will make its way into more devices as processor performance increases.
Chilton said that developing such complex systems and applications requires a robust verification solution. Chip designers already use complex and exhaustive test benches to test individual blocks and subsystems. Verification engineers will need to move up to the next level and handle the full verification of the SoC within a target system.
Verification of an integrated system will require an integrated verification solution that includes not just simulation but also acceleration, emulation and formal debug. A new, integrated verification platform should combine these existing discrete technologies to offer the productivity needed to realize complex systems with predictable, manageable schedules.
Delivering the hardware simultaneously with a working OS and development kit will require virtual prototypes, which will be used by software developers prior to the release of working hardware.
Device volume, variety and complexity are only going to increase. Transformative technologies like virtual prototypes give organizations the tools to transcend challenges. Companies like Altera are creating competitive advantage and innovation with these solutions. Virtual prototyping is now ready for the masses.
Industry trends and challenges make virtual prototyping a must-have solution. New realities make prior adoption barriers mere myths. Virtual prototyping has become a key process for early software development and supply chain enablement. Industry trends also alter design requirements. For instance, earlier, it used to be computing and single core, which has since moved on to connectivity and multi-core.
This opens up implications for SoC development, especially in terms of increased complexity and volume of software. There is a need to get the architecture right; no amount of downstream tooling will compensate for a fundamentally wrong architecture. There is also a need to start software development earlier, in parallel with hardware design. Needless to say, hardware-software integration must be accelerated, and system validation will minimize the waterfall development process.
New realities of prototyping render prior barriers mere myths. For instance, earlier, it was believed that creating a prototype is hard; in reality, IP models, TLMCentral and model-creation software have come a long way. Earlier, there was a need to wait for a complete prototype; now, software can be developed incrementally, and VDKs are jumpstarting software development. Earlier, one felt the need to change the software environment; in reality, the very same tools, debuggers and environment used for hardware can be used here.
Also, today, there are multiple use cases, verticals and customers of virtual prototyping, and there is industry support for system-level models. TLMCentral is an open, web-based portal that provides consolidated access to transaction-level models available across the industry, helping virtual prototype developers accelerate the creation and deployment of their prototypes for early software design.
Open and free, TLMCentral is the first industry-wide portal to aggregate available transaction-level models. It has over 1,000 models of most common IP blocks and interfaces for wireless, consumer and automotive applications. TLMCentral is supported by leading IP vendors, tool providers, service companies and universities. It also offers model developers, architects and software engineers an infrastructure for news, forums and blogs.
Integrated into the software development environment are popular debuggers, powerful controls and debugging information. A VDK is a great starting point and suited for ongoing use. One can install it and start using it immediately; there is no need to wait for months for a prototype. Templates, sample software and reference prototypes are available in one place. Post-silicon support and validation are provided, besides early availability for software development and testing.
Key process for earlier software development includes hardware-software integration and system validation. Semis are engaging customers earlier. The VDKs are driving tangible time-to-volume reduction. Tangible benefits of virtual prototyping include faster time to revenue, faster customer success, and faster field and ecosystem readiness.
Wow! Yesterday, Synopsys signed a definitive agreement to acquire Magma Design Automation Inc. This news is interesting, and not surprising. This acquisition seemed to be on the cards, though perhaps not so soon. Nevertheless!
So, that leaves Synopsys, Cadence and Mentor Graphics as the big three EDA vendors, now that Magma has been acquired.
Just a couple of months back, I was in discussion with Rajeev Madhavan, chairman and CEO, Magma, regarding Silicon One technology solutions on the sidelines of MUSIC India. Magma had outlined five technologies: Talus, Tekton, Titan, FineSim and Excalibur and expected to have the opportunity to be a dominant yield management company.
Where has all of this gone, one wonders! It can safely be assumed that the Silicon One series can very well go on, now under the guidance of Synopsys. However, it will only add up to boosting the revenues of Synopsys in the long run.
Some time ago, one thought of the EDA industry as having four big players. Now, there are three. In between, there was news such as Cadence trying to acquire Mentor Graphics, which did not happen. Even Magma seemed to be doing fine, at least till 2006-07.
Thereafter, it has been a slightly different story: the CEO of Magma India left, there were changes in the Indian management team, certain MUSIC India events drew lower attendance, and so on. One can accept these as part and parcel of any industry or organization.
On Magma’s website, there is a statement from Madhavan, which says: “Magma and Synopsys have always shared a common goal of enabling chip designers to improve performance, area and power while reducing turnaround time and costs on complex ICs,” said Rajeev Madhavan, CEO of Magma. “By joining forces now we can ensure that chip designers have access to the advanced technology they need for silicon success at 28, 20 nanometer and below.”
All the best to both Synopsys and Magma!
Great! That’s what was required!! As though software piracy isn’t enough, there is now an article about EDA software piracy!!!
According to the article, the anti-piracy committee of the Electronic Design Automation Consortium (EDAC) estimates that 30-40 percent of all EDA software use is via pirated licenses. That’s a huge number!
What are the chief reasons for EDA software piracy? Surely, it can't be attributed to the Far East countries alone, and definitely not only to China and Taiwan, or perhaps India, for that matter.
Everyone in the semiconductor industry knows that EDA software is required for chip design, and hefty license fees are involved that companies have to pay.
Designing a chip is a very complex activity that requires EDA software. EDA firms send their sales teams all over the country. Why, some EDA vendors are also known to form alliances with technical colleges and universities, offering their software to such institutes at a very low cost.
Back in 2006, John Tanner wrote an article in Chip Design, stating: EDA tools shouldn’t cost more than the design engineer!
However, how many of these EDA licenses are properly used? Also, have the EDA vendors who go out to the technical institutes studied any particular institute's usage of their tools?
The recently held Design Automation Conference (DAC) showered praise on itself for a double-digit rise in attendance. Was there any mention of EDA piracy in all of that? No way! If not, why not?
The reasons: the EDA industry already churns out sizeable revenue from the global usage of EDA software. EDA firms are busy trying to keep up with the latest process nodes and develop the requisite tools. New products are constantly being developed, so product R&D is a continuous effort! Of course, in all of this race, EDA firms are continuously looking to keep their revenues running high, lest there be an industry climb-down!
Where, then, is the incentive for EDA firms to even check, let alone control, piracy?
An industry friend had this to say regarding EDA software piracy: “It is the inability to use certain ‘tool modules’ only at a ‘certain time’. Like, if an IP company wants to just run PrimeTime (Synopsys) a few times to ensure an IP's timing worthiness before releasing it, and doesn't need it after that. However, it is not possible to get such a short-term license.” Cost and unethical practices by the stakeholders were some other reasons EDA users cited.
Regarding the status in India especially, the difference from, say, China isn't that much. Another user said it is not such a prevalent, ‘worrisome’ aspect yet. Yet another EDA user said that EDA piracy exists more in the sense of ‘unauthorized’ usage than ‘unpaid’ usage: not using the tool for what it is supposed to be used for, such as using academic licenses for commercial development.
That leads to the key question: can EDA software piracy be curtailed to some extent? One user feels it can; perhaps Microsoft-type ‘detection’ technologies exist. However, another said that the expense of such detection could exceed the actual losses, which is probably why the EDA companies are not quite doing it!
According to Dr. Chi-Foon Chan, president and COO, Synopsys Inc., there are five reasons for the global semiconductor industry to be optimistic. These are:
* Devices that need semiconductors are on the rise, e.g., in telecom: the Apple iPad, tablet PCs, etc. Everything requires semiconductors.
* Digital media, growing downloads.
* Data storage.
Dr. Chan was delivering the keynote at last week’s Synopsys User Group (SNUG) India conference.
Key design challenges today include developing high-performance chips with low power, managing design complexity, and coping with shrinking design cycles. Some other challenges are:
* Exploding cost of verification.
* Smart and fast verification.
* Challenges in advanced verification.
* IP and its re-use — power/performance, complexity, schedule.
* Growth in IP business leading to high-quality IP.
* Toward physics – TCAD.
* Yield loss is a digital issue and is big money.
The Design Automation Conference (DAC) 2011 kicked off today in San Diego, USA, with its usual slew of announcements. Leading the pack were Magma Design Automation and Cadence Design Systems, along with Synopsys, Mentor Graphics, and several others.
Magma Design Automation Inc. announced a partnership with Fraunhofer Institute for Integrated Circuits IIS to develop process-independent Titan FlexCell models of the Institute’s analog intellectual property (IP) cores. It also announced the availability of a netlist-to-GDSII reference flow for GLOBALFOUNDRIES’ 28nm super low-power (SLP) high-k metal-gate (HKMG) technology.
Magma announced the immediate availability of the Titan Analog Design Kit for TSMC 180nm and 65nm processes, which implements Titan's model-based design methodology with Titan FlexCells: modular, process- and specification-independent, reusable analog building blocks.
Magma Design Automation also launched Silicon One, an initiative to bring focus to making silicon profitable for customers by providing differentiated solutions and technologies that address business imperatives facing semiconductor makers today – time to market, product differentiation, cost, power and performance.
Silicon One’s initial focus is on five types of devices that are key to electronic products that are most prevalent today:
* ASIC /ASSP
* Analog/mixed-signal (AMS)
* Processing cores
Cadence Design Systems Inc. isn’t far behind either! It announced an array of new technologies incorporated into the new TSMC Reference Flow 12.0 and AMS Reference Flow v2.0 that ensure 28nm production readiness. Cadence also announced a close collaboration with TSMC that will extend its interface IP offering. With Imec, in Belgium, Cadence announced a new technology that delivers an automated test solution for design teams deploying 3D stacked ICs (3D-ICs).
Cadence also announced the immediate availability of verification IP (VIP) for ARM’s new AMBA 4 Coherency Extensions protocol (ACE), extending its popular VIP catalog and speeding the development of multiprocessor mobile devices. Cadence further outlined the technologies and steps required to move the industry to advanced node design, with a particular focus on 20nm and 28nm design.
Mentor Graphics announced that the Catapult C high-level synthesis tool now supports the synthesis of transaction level models (TLMs). It also announced a unified embedded software debugging platform, from pre-silicon to final product, based on the integration of the Mentor Embedded Sourcery CodeBench embedded software development tools with Mentor’s leading electronic system level (ESL), verification, and hardware emulation products. These include the Mentor Graphics Vista Virtual Prototyping product, Veloce hardware emulator, prototype target boards, and end products or any combination thereof.
Mentor Graphics announced support for 3D-IC in TSMC’s Reference Flow 12.0 (RF12). Solutions for both silicon interposer and through silicon via (TSV) stacked die configurations are now supported by the Calibre physical verification and extraction platform and the Tessent IC test solution.
ARM and Synopsys Inc. have signed an expanded multi-year agreement extending ARM's access to Synopsys' innovative EDA technology. ARM will also provide Synopsys with access to the ARM Cortex-A15 processor to maximize performance and energy efficiency of SoCs built by ARM's Partners using this advanced ARM processor and Synopsys tools.
I have known Dr. Pradip Dutta, corporate VP of Synopsys Inc. and MD of Synopsys (India), as well as vice chairman of the India Semiconductor Association (ISA) and now its chairman designate for 2011, for close to a decade. We recently got into an interesting discussion on the Indian semiconductor industry.
Growth of semicon and electronics in India
First, I asked what should be done about the growth of semiconductors and electronics in the Indian eco-system?
Dr. Dutta said: “My view on this subject has been the same for many years now; high-tech electronics has to be a national mission. The defense and the government labs played a major role in promoting this sector in the US, e.g., Sandia National Laboratory, Lawrence Livermore Laboratory, Jet Propulsion Laboratory, NASA, etc. DARPA, which is part of the US Department of Defense, has sponsored a phenomenal amount of research in semiconductors and electronics.
“If we now look at countries closer to our part of the world, in Asia, we will see a similar focused effort from the governments. The STARC initiative in Japan, the National SOC program in Taiwan, the 839 program in Korea, the 863 program of Ministry of Science and Technology in China, all catered to a flourishing investment in R&D and innovation in high tech. Our country is poised for it too. We need to encourage start-ups in fabless design, explore manufacturing, foster innovation, create favorable policies for the industry and most certainly develop the talent pool.”
Need for domestic manufacturing
There is a need for domestic manufacturing in high-tech electronics. Where are the Indian companies going? According to him, domestic manufacturing in high-tech electronics has been flagged as a critical area in the ESDM (Electronics System Design and Manufacturing) report that was submitted to the government by the industry in 2010. There is a need for initial funding, both in R&D and in manufacturing. Duty structures need to be rationalized between imports of CBUs, SKDs, CKDs and components.
He added: “We have seen that manufacturing prospers in a cluster environment, and hence there is a recommendation to promote manufacturing clusters for specific product categories. However, it is safe to say that we have a long way to go in this area.”
Co-operation with international trade bodies
Now, what is the required policy framework and co-operation with international trade bodies? As per Dr. Dutta, the ISA has been active in forging close working relationships with multiple trade bodies from various parts of the world. “We have signed several MoUs with entities such as HTIA (Israel), ASTSA (Japan), DSP Valley (Belgium), TSIA (Taiwan), Semi (USA), GSA (USA) and UKTI (UK).
“Of course, we need to have a focus and these relationships should be driven by strategy. We have carried several delegations to these countries and hosted bi-lateral visits as well. These visits provide an opportunity for our member companies to have direct B2B opportunities.
“We learn valuable best practices from other entities and try and implement in our environment. For example, Israel does a great job in taking innovative ideas from entrepreneurs to incubation, many times inside of universities, and then spinning them into companies which later become part of the global value chain. In the process, this small country has created at least 150 NASDAQ listed high tech firms. Innovation to incubation to wealth creation – a formula that works very well there. We could certainly learn a lot from that model.”
Future of Indian semicon industry
So, how does Dr. Pradip Dutta see the Indian semicon industry going forward? He said: “The Indian semiconductor industry is now poised at a very interesting juncture. While the MNCs are designing chips at the bleeding edge, we see a lot of high-quality work being done by the design service companies and also local start-ups. Incidentally, the start-up scenario is quite active in the system space. This ties in with the ESDM focus of our industry.”
To make the ‘machine’ called SoC work, one needs to look simultaneously at economics and technology, and hence the word, techonomic.
Commenting on the global economy, Dr. Geus said the industry had just come out of a very severe recession. Last year, he had introduced the recession compiler.
Today, there's a clear sense of a turn, and of a huge shift in the global economy during the recession. According to him, China will pass Japan to become the second-largest economy. China has continued to evolve quite a bit, and so has India. Dr. Geus also introduced the recovery compiler six months ago.
He added that people, on one side, are looking at how to minimize costs and risks. How does this impact semicon? Most semicon companies are now reporting good results. Today, there has been about 5.8-6 percent growth, indicating a steady state. Semicon is at the center, driving growth.
In the foundry world, there has been some consolidation, and you now find some really large players. So far, semicon has rebounded much faster. The memory folks are also feeling pretty good. It must be noted that during the last three years, they invested practically nothing in capex, and some players also disappeared.
If one were to look at cool killer applications today, there's certainly a theme around video, more and more on mobile apps, HD, 3D, etc. All of this is leading to the fact that bandwidth and storage will grow even more. Smart grids are also clearly becoming more important in future. The word ‘smart’ will be critical. “Everything around us will communicate in some form or another, in future,” Dr. Geus added.