We are now entering the sub-20nm era. So, will it be business as usual or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP, Marketing and Strategic Development for Synopsys to find out more regarding the impact of new transistor structures on design and manufacturing, 450mm wafers and the impact of transistor variability.
Impact of new transistor structures on design and manufacturing
First, let us understand what the impact of new transistor structures will be on design and manufacturing.
Chilton said: “Most of the impact is really on the manufacturing end since they are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall and thin fins where self-aligned double patterning steps are now required.
“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.
“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”
Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?
According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors and EDA tool providers have a long history of introducing new technology successfully when the economics of deploying it are favorable.
The 300mm wafer deployment, for example, was quite complex but was completed. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.
Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?
Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.
Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
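A back-of-the-envelope Monte Carlo makes this ‘lumpiness’ concrete (the numbers below are purely illustrative, not from the interview): if a channel holds N dopant atoms on average, the count in any one device is roughly Poisson-distributed, so its relative spread grows as 1/sqrt(N) as devices shrink.

```python
import math
import random
import statistics

# Toy Monte Carlo of random dopant fluctuations (illustrative numbers only,
# not process data): the dopant count in a small channel is roughly
# Poisson-distributed, so its relative spread grows as 1/sqrt(N).

random.seed(42)

def poisson(lam):
    """Draw one Poisson sample (Knuth's method; fine for modest lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        p *= random.random()
        k += 1
    return k - 1

def relative_sigma(mean_dopants, trials=4000):
    """Std-dev / mean of the simulated per-device dopant count."""
    counts = [poisson(mean_dopants) for _ in range(trials)]
    return statistics.stdev(counts) / statistics.mean(counts)

# An older-node device with many dopants vs. a FinFET-scale one with few:
for n in (400, 16):
    print(f"mean dopants = {n:3d}: relative sigma ~ {relative_sigma(n):.3f}"
          f" (1/sqrt(N) = {1 / math.sqrt(n):.3f})")
```

With 400 dopants the device-to-device spread is a few percent; with 16 it is a quarter of the mean, which is why the continuum-of-charge approximation breaks down.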
With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.
The other key sources of variability are the random location of traps, and the etching and lithography processes, which produce slightly different dimensions in critical shapes such as fin width and gate length.
“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.
A few weeks ago, I was fortunate enough to speak with Dr. Pradip Dutta, Corporate Vice President, Managing Director and Treasurer, regarding the state of the EDA industry globally and in India. What followed was a very interesting conversation, some of which is reproduced here!
Any sign of improvements in EDA?
To start with, the state of the global EDA industry is well known, and it has also seen revenue drops Q-on-Q in the past. Are there any signs of improvement?
According to Dr. Dutta, the last several quarters in the semiconductor industry have been extremely challenging as consumer demand for electronic products has declined with the heavy stress on the global economy.
“While we are starting to see signs of the semiconductor industry rebounding off the bottom with inventory replenishment and an uptick in end demand for key consumer items such as PCs and mobile phones, the environment is expected to remain difficult at least well into next year.
“During this time, the challenge for the semiconductor industry and its suppliers will be to find the next level of efficiency. The good news is that across a broad field of applications, semiconductors are a key enabler to future prosperity. Green solutions, low-cost netbooks, advances in connectivity and evolving products like the Kindle are just a few examples of areas that could help drive future development.
“The long-term ramifications of this scenario on the EDA industry are starting to become visible. More than ever, customers want to get their products out on time, and get it right with high quality.
“In addition to some immediate cost-cutting to respond to the crisis, most semiconductor and design businesses are re-focusing their market strategies, streamlining their operations, de-risking their supplier and partner relationships, and in some cases actively pursuing consolidation opportunities to drive economic efficiency.
“This situation presents an opportunity for EDA companies to focus on important product developments that can enable leading semiconductor design and manufacturing companies to not only create more advanced devices, but to simultaneously lower risks and cut costs. In today’s economy, companies need to find ways to manage expenses while still investing in the future so they don’t just survive the recession, they emerge from it stronger.”
State of the Indian EDA industry
Obviously, it would be interesting to see how the Indian EDA industry is holding up in these times.
Dr. Dutta said that the Indian EDA industry is a combination of catering to global semiconductor players and addressing the needs of a domestic market that is slowly developing. The global players that operate out of India are rapidly moving up the value chain in terms of owning and architecting the next generation chips. This leads to an enormous opportunity for EDA companies to get associated at the front end of tool decisions.
“As you are aware, the level of technology that is being witnessed in the chips that are getting designed here is absolutely bleeding edge. The EDA companies are therefore paying concomitant attention to robust application support and in-house R&D effort. It has to be a full package here and now to address these kinds of customer requirements.
“Beyond the global players, India is seeing a few, but committed fabless design companies coming up in recent times. In addition to that, the Indian government is showing a lot of interest in country-specific programs, primarily in defense areas that require EDA support.
“We have also recently seen media reports about an ‘India Chip’ being conceived at the central government level for domestic security applications. The ISA is working toward a blueprint for putting semiconductors on the national agenda, and hopefully many ideas for systems and corresponding chips will emanate from it to keep EDA companies interested,” he added.
Recently, Synopsys Inc. introduced IC Validator, a design rule checking/layout versus schematic (DRC/LVS) signoff tool for in-design physical verification and signoff of advanced designs at 45nm and below.
Said to provide a step up in physical designer productivity, it is architected to deliver the high accuracy necessary for leading-edge process nodes, superior scalability for efficient utilization of available hardware, and ease of use.
What does IC Validator do?
According to Sanjay Bali, Director of Marketing, Physical Verification & DFM, Synopsys, the IC Validator is a complete physical verification tool, performing increasingly complex DRC and LVS sign-off checks.
It has been specifically architected for in-design physical verification. This means: the place-and-route engineers can run DRC and practical DFM steps alongside place and route within the familiar IC Compiler physical design environment.
And why the need for such a solution? He added that three key challenges are driving the need for a new approach and hence the new tool. These are:
a) Increase in complexity and count of manufacturing rules.
b) Unabated growth in design complexity.
c) Increasing DFM challenges, which just cannot be handled in a post processing approach.
Currently, the solution is aimed at 45nm and below as these nodes largely represent the challenges listed above.
Enhancing physical designer’s productivity
Three key tenets of the IC Validator that offer improved physical designer productivity are:
a) High accuracy necessary for leading-edge process nodes.
b) Superior scalability for efficient utilization of available hardware.
c) Ease of use, with seamless integration of IC Validator and IC Compiler.
Bali said: “The IC Validator has been architected from the ground up for in-design physical verification. In-design physical verification enables place-and-route engineers to accelerate the time to tapeout by enabling sign-off quality physical verification from within implementation or physical design. Physical designers designing with IC Compiler can now benefit from the in-design physical verification approach with the push of a button, incurring minimal overhead cost to eliminate surprises late in the design.
“With the verify-as-you-go approach replacing the implement-then-verify approach, physical designers can significantly reduce iteration count, eliminate streamouts and streamins, and accelerate time to tapeout. In addition, the integration enables several productivity enhancing flows like incremental DRC verification, incremental metal fill flows and ECO flows — all leading to significant reduction in time to tapeout.”
It would be interesting to know by approximately what percentage the total physical verification time is reduced, and what it covers in the process.
Bali added that in extreme cases, finding and fixing DRC violations can easily impact the schedules by a few weeks! The key here is that physical designers typically wait until the final stages of the tapeout to run physical verification. Inevitably, the schedule at this point is squeezed and the cost of fixing the error is high.
“With a sign-off quality physical verification tool integrated into the physical design environment, place-and-route engineers can verify as they implement and eliminate late surprises while speeding up the total physical verification turnaround time. In addition, the outcome of this process is a sign-off clean design.”
The Synopsys IC Validator is also said to be ‘production ready’! What exactly does that mean?
The IC Validator has been successfully used to tape out designs at several chip manufacturers, said Bali. In addition, it is currently being used for production designs at Nvidia and Toshiba. Besides being used by other leading foundries and chip manufacturers, it is also qualified by TSMC for the 40nm and 28nm process nodes.
For those interested, Toshiba already has Synopsys as its key EDA partner, and NVIDIA adopted the IC Validator for sign-off physical verification, within days of its launch! More are bound to follow!
Saving design spins!
Will the IC Validator approach be able to save design spins? How much is the physical design cycle time reduced?
With the in-design physical verification, place-and-route engineers will be able to run sign-off quality DRC checks, timing aware and sign-off quality metal fill, all within the familiar IC Compiler environment. Linear scalability for efficient use of hardware, sign-off accuracy and integration with IC Compiler will enable productivity enhancing flows like auto detect and autofix, incremental verification flows — all can significantly reduce time to tapeout.
How can it help in avoiding the painful sign-off failure-to-physical-redesign iterations that are increasingly common below 90nm?
With the seamless integration of the IC Validator with the IC Compiler, physical designers can now verify the design as they implement for manufacturing sign-off accuracy.
Incremental DRC’s strength
How good is the incremental design-rule checker (DRC)? Is it really parallelized for the multicore servers?
According to Bali, incremental flows are one of the strongest tenets of the IC Validator. To improve physical designer productivity, rule-based-only or layer-based-only incremental verification runs can be initiated from within IC Compiler.
He said: “For ECO validation, the IC Validator supports a window- or area-based incremental verification approach to speed up surgical checks. The incremental flows are meant to be quick, but the IC Validator has multicore capability to further speed up the process.”
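The window-based idea can be sketched in a few lines (a conceptual toy, not IC Validator’s actual algorithm or API; the shapes, rule, and function names are invented for illustration): after an ECO, only shapes near the edited window are re-checked against a minimum-spacing rule, instead of re-running DRC on the whole chip.

```python
from itertools import combinations

# Hypothetical sketch of window-based incremental DRC. A layout is a list of
# axis-aligned rectangles (x1, y1, x2, y2); only shapes near the edited
# window are re-checked against a minimum-spacing rule.

def intersects(a, b):
    """True if rectangles a and b overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def spacing(a, b):
    """Edge-to-edge distance between two non-overlapping rectangles."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def incremental_spacing_check(shapes, window, min_space):
    """Return shape pairs violating min_space, checking only the window."""
    # Grow the window by min_space so neighbours just outside still count.
    x1, y1, x2, y2 = window
    halo = (x1 - min_space, y1 - min_space, x2 + min_space, y2 + min_space)
    local = [s for s in shapes if intersects(s, halo)]
    return [(a, b) for a, b in combinations(local, 2)
            if not intersects(a, b) and spacing(a, b) < min_space]

shapes = [(0, 0, 2, 2), (2.5, 0, 4, 2),   # 0.5 apart: violates min_space=1
          (10, 10, 12, 12), (20, 0, 22, 2)]
print(incremental_spacing_check(shapes, window=(0, 0, 5, 5), min_space=1.0))
# → [((0, 0, 2, 2), (2.5, 0, 4, 2))]
```

The payoff is that the pairwise check runs on the handful of shapes in the halo rather than the full design, which is exactly why ECO-scale ‘surgical’ checks finish in minutes.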
The IC Validator discovers and fixes design rule violations within the global context of the design as well. How is this made possible?
With in-design physical verification, the IC Validator can accurately and automatically identify DRC violations and provide fix guidance to IC Compiler, which fixes the violation and then re-verifies it.
Handling metal fills and design changes
Operations typically performed during physical verification, such as metal fills, may trigger additional design changes to achieve timing closure. How is this handled by the IC Validator?
Bali said that the prevailing post-processing oriented DFM flows introduce excessive and lengthy discover-fix iterations. Metal fill insertion, a mandatory DFM step at the advanced nodes, exemplifies this issue.
“Physical designers stream out the timing-closed post-fill design for signoff validation and then stream it back in to fix any signoff errors flagged during physical verification. This multi-hour discover-fix loop is typically repeated per block until the post-fill design is both signoff qualified and timing clean.
“With in-design physical verification, the IC Validator and IC Compiler address the challenges of DFM, within the place-and-route environment. The seamless integration enables a single pass metal fill flow that is timing aware and of signoff quality and is void of expensive streamouts and streamins,” he added.
* Synopsys introduced the Discovery 2009 verification platform, delivering faster, unified verification solutions.
* It unveiled the VCS multicore technology, delivering 2x verification speed-up.
* It introduced the CustomSim Unified Circuit Simulation solution, which addresses custom digital, analog and memory verification challenges.
I met up with Dr. Pradip K. Dutta, Corporate Vice President & Managing Director, Synopsys (India) Pvt Ltd, and Manoj Gandhi, Vice President and General Manager, Verification Group at Synopsys, in an attempt to understand how significant these announcements are for verification.
Verification is huge!
According to Manoj Gandhi, at the macro level, design complexities continue to grow, and as they grow, one big challenge is verification. The reason: today’s SoC designs and large IC designs are being approached like large software projects.
He said: “Verification becomes huge, like software. It is expensive in hardware design. We focus on the verification challenges. We introduced SystemVerilog about four to five years ago, and we had also acquired ArchPro. Yesterday, we announced the Discovery 2009, CustomSim and VCS2009.”
How can users make use of the new CPUs coming out? “We aim to get much higher performance using multicore architecture,” he added.
The VCS2009 is multicore enabled, runs the industry’s first low-power verification methodology, and enables the fastest mixed-signal simulation with CustomSim. Focusing on the VCS2009, Gandhi said: “In verification, there’s a design under test and verification. A lot of designs now have multicores. AMD is among the many folks using the VCS2009. Almost every CPU is designed using VCS. It plays a big role in large SoCs.”
Design companies have several activities such as test bench, debug, etc. All of these can now be parallelized. “Customer designs can be simulated on multiple threads,” Gandhi said. “Also, the applications can also be simulated on different threads, called application level parallelism. We can actually bring about 5-7X improvement in verification with the VCS2009.”
According to him, this product is already being used by some large customers. “This is our next phase of performance innovation. The processor roadmap is getting more and more multicore. We have over 200 customers,” he added.
The VCS distributes time-consuming activities across multiple cores. Gandhi added that each core handles a lot of computation. You may run many activities in parallel on a mobile phone, and all of those activities can now be simulated in parallel.
And how about the speed-up from parallel computation with the industry-leading Native Testbench (NTB)? He said: “We were one of the first to introduce all technologies as part of a single compiler. That brought the 5X speed-up. We did all of this in verification, and a test bench core was brought into verification.”
The combination of DLP and ALP optimizes VCS performance on multicore CPUs. With design-level parallelism (DLP) and application-level parallelism (ALP), work can be threaded across the different cores of the CPU.
Low-power verification methodology published
Synopsys has published a book on the industry’s first low-power verification methodology, along with ARM and Renesas. It is an attempt to bring the technology to the mainstream: how to do low-power verification. Some 30 other companies participated in this exercise.
On the CPF vs. UPF debate, he said that UPF is a standard in which Magma, Mentor, Synopsys and others have participated; Cadence has CPF. Users can apply the book’s methodology on top of both UPF and CPF.
Introducing Discovery 2009
According to Synopsys, this solution is doing very well in the market. The company has shown strong technology leadership over the last two to three years and has also made strong investments.
CustomSim is a unified circuit simulation solution. “We have a software-to-silicon verification focus. We are all the way from system-level design to RTL, to software verification, etc. Discovery has some technologies as part of that,” noted Gandhi.
What has Synopsys done right?
A most interesting point in the EDA industry, I feel, has been the performance of Synopsys, in an otherwise difficult segment over the past year. So, what are the reasons behind this success?
Gandhi added: “Our management are all strong technologists. We have invested tremendously in bringing in strong technology leaders. In India, many companies needed R&D collaborations locally. For us, it was a big win when we invested in Bangalore. We work closely with customers, delivering technologies that will address challenges two to three years from now.”
Dr. Pradip Dutta elaborated: “Synopsys is very strong in product leadership (PL). The other two key areas are customer intimacy (CI) and operational excellence (OE). You need to be highest in PL. We have been very conservative even during strong times.”
That is indeed a marvellous thought! Those who are typically strong in technology, generally go on to develop great intimacy with customers, and all of this starts reflecting on their operations, which are anyway excellent! Here’s a message for those who wish to do well in tough times — strong product leadership, coupled with customer intimacy and well, corresponding operational excellence!
Focus on verification
Now that the focus is quite clearly on verification, how do EVE and the other verification companies stand out? EVE is currently in the emulation space. Gandhi added that EVE competes more with Cadence and Mentor. “We work with EVE on many accounts. Verification is all about finding bugs. Emulation has been more cyclical.”
According to him, Synopsys is now looking at tackling the next level: how do you reduce the overall cost? “We will go beyond selling tools. We would look at how to identify issues and save verification costs.” I believe verification takes up close to 70 percent of the overall design cycle.
Commenting on the EDA industry in India, both, Dr. Dutta and Gandhi feel it is still buzzing quite well, despite what’s been happening in the global context. “We have invested quite a lot. We have a large team here. We continue to collaborate with local institutions here as well,” Dr. Dutta added.
Early December 2008, Cadence Design Systems launched the Cadence Encounter Digital Implementation System, said to be a configurable digital implementation platform that delivers incredible scalability with complete support for parallel processing across the design flow. Will it change the fortunes of the struggling EDA industry? EDA industry stats for Q3-08 are given at the end of this post!
My first thoughts immediately went to Synopsys’ Galaxy Custom Designer solution. This is the industry’s first modern-era mixed-signal implementation solution. Is the Cadence Encounter an answer to Synopsys’ Galaxy? This is worth a shot!
The Encounter Digital Implementation System is a next-generation, high-performance, high-capacity RTL-to-GDS-II design closure solution with the industry’s first end-to-end parallel processing flow that enables all steps of the design flow to be multi-CPU enabled — from floorplanning, placement, routing and extraction to timing and signal integrity sign-off. Deokar said: “At its core is a new memory management architecture and end-to-end multi-CPU backplane that provides scalability with increased performance and capacity to reduce design time and time-to-market.”
Does it intend to take on Synopsys’ Galaxy? Well, Deokar said: “Yes, it surpasses the other solutions available in the marketplace based on the following capabilities and features:
* Ultra-scalable RTL-to-GDS-II system with superior design closure and signoff analysis for low-power, mixed-signal, advanced node designs.
* End-to-end multi-core infrastructure and advanced memory architecture for unparalleled scalability of capacity, design turnaround time, and throughput.
* Robust design exploration and automated floorplan synthesis and ranking solution.
* Embedded signoff-qualified variation analysis and optimization across design flow.
* Integrated diagnostic tools for rapid global timing, clock and power analysis/debug.”
Here’s a list of benefits that it provides designers:
* Significantly reduced design time, schedule and development risk.
* Increased productivity through automation and superior quality of results.
* Configurable and extensible platform that ensures maximum utilization and ROI — upgrades proven design flow and amplifies existing expertise.
* Interoperability across package, logic, custom IC design, and manufacturability.
Harnessing power of multicore computing
According to Cadence, it provides complete support for parallel processing across the design flow. Does this mean that designers can fully harness the power of multicore computing? It would also mean that today’s EDA tools are capable of meeting the multicore challenge.
Deokar added: “Yes, the end-to-end parallel processing flow is supported across the entire design flow; consequently, designers can fully harness the power of multicore computing. Today’s designers commonly have dual-CPU or even quad-CPU machines on their desktop. The Encounter Digital Implementation System allows designers to leverage their multi-CPU hardware and gain significant TAT improvements in design cycle time and the overall development schedule.”
The Encounter end-to-end multi-CPU backplane delivers ultra-scale performance gains of up to 16X in key areas such as routing and timing closure. All steps of the design flow are multi-CPU enabled. For instance, on a production design, when Encounter is run on four CPUs, the user can get a 3.2X performance boost across the entire, end-to-end design flow.
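That 3.2X-on-four-CPUs figure is consistent with Amdahl’s law if roughly 92 percent of the flow parallelizes. A quick back-of-the-envelope check (my arithmetic, not Cadence’s data):

```python
# Amdahl's law sanity check of the "3.2X on 4 CPUs" data point.

def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n CPUs when fraction p parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: fraction p implied by a measured speedup."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

p = parallel_fraction(3.2, 4)          # the quoted end-to-end data point
print(f"implied parallel fraction: {p:.1%}")       # ~91.7%
print(f"flow-level ceiling at 16 CPUs: {amdahl_speedup(p, 16):.1f}X")
```

The same arithmetic also explains why the 16X figure applies to individual highly parallel steps like routing: with ~92 percent of the whole flow parallel, the end-to-end speedup would saturate near 7X even on 16 CPUs.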
Encounter deployed by over 15 customers?
Designers are said to be reporting dramatically improved design time, design closure, and faster time-to-market for advanced digital and mixed-signal devices. By what factors, and against which other tool(s) has Encounter been rated?
Deokar said that the Encounter Digital Implementation System has been developed in close collaboration with over 15 customer partners who have extensively used, validated and now, deployed it.
“Customers are already seeing overall design cycles significantly shortened, by 25-30 percent, which translates to multiple weeks or even months. These significant improvements are against competitive tool flows in their current methodology,” he added.
Encounter is also said to be offering new technologies for silicon virtual prototyping, die-size exploration and RTL and physical synthesis, providing improved predictability and optimization in early stages of the design flow.
Regarding this aspect, he pointed out that large-scale design complexities (increased functionality, predictability, productivity, etc.) pose some of the biggest challenges. Designs are getting huge, at 100M+ gates and 100+ macros, putting significant requirements on design tools; floorplanning these macros in particular, and the design as a whole, becomes a huge challenge.
“The new Silicon Virtual Prototyping capabilities of Automated Floorplan Synthesis and Die Size Exploration help out exactly on that front. These can quickly provide floorplanning for that large 100M+ gates, 100+ macro design.
“And not just one floorplan: designers can provide multiple criteria (say, along the lines of timing, power, area or congestion) and get multiple floorplans with their rankings, all in a matter of minutes! Essentially, you could finish your breakfast or lunch (depending upon how fast you eat!) and be back to find multiple floorplans that you can pick and choose from, and then proceed to implementation.”
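The ranking idea described above can be sketched as a simple weighted multi-criteria score (a hypothetical illustration: the candidate names, metrics, and weights are invented, and Encounter’s actual ranking engine is certainly more sophisticated):

```python
# Hypothetical sketch of multi-criteria floorplan ranking. Candidate
# floorplans and their metrics are invented for illustration.

candidates = {
    "fp_A": {"timing_slack_ns": 0.12, "power_mw": 480, "congestion": 0.72},
    "fp_B": {"timing_slack_ns": 0.05, "power_mw": 455, "congestion": 0.60},
    "fp_C": {"timing_slack_ns": 0.20, "power_mw": 510, "congestion": 0.85},
}

def rank(candidates, weights):
    """Rank floorplans by a weighted score: higher slack is better,
    lower power and congestion are better (hence the minus signs)."""
    def score(m):
        return (weights["timing"] * m["timing_slack_ns"]
                - weights["power"] * m["power_mw"] / 1000.0
                - weights["congestion"] * m["congestion"])
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)

# A timing-driven weighting vs. a power-driven one produces different picks:
print(rank(candidates, {"timing": 10, "power": 1, "congestion": 1}))
print(rank(candidates, {"timing": 1, "power": 10, "congestion": 1}))
```

The point of returning a ranked list rather than a single answer is exactly what Deokar describes: the designer states the criteria, inspects the ordering, and picks the floorplan to take into implementation.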
Addressing new problems at 45nm/40nm/32nm
Obviously, targeted at 45nm/40nm/32nm, etc., how can or how does Encounter anticipate and address the majority of the new problems associated with these geometries across the entire flow?
Deokar noted that its main customers include semiconductor companies working on 45nm and 32nm designs, with aggressive design specifications including 100 million or more instances, 1,000-plus macros, operating speeds exceeding 1GHz, ultra-low power budgets, and large amounts of mixed-signal content.
“The challenges facing these designs comprise an increasing demand for design tool performance/capacity and design features for challenging ultra-large-scale designs in the areas of low power, mixed signal, advanced nodes and signoff analysis. In addition, small market windows, short product life-cycles and cost pressures further exacerbate the situation,” he noted.
The Encounter Digital Implementation System’s core design closure capabilities, plus the new advanced-node technologies, including litho-, CMP-, thermal- and statistical-aware optimization, provide comprehensive manufacturing- and variation-aware implementation, along with an end-to-end multicore infrastructure for fast, predictable design closure even on the most challenging designs.
Reducing memory footprints
It will be interesting to learn about the kind of work that has gone into reducing the memory footprint of the most memory-retentive applications.
Deokar said that an innovative memory architecture at the core of the Encounter System enables capacity and performance gains of 30-40 percent for fully flat and hierarchical designs, even when running on a single-CPU machine.
Cadence’s R&D team has developed an advanced memory defragmentation algorithm that allows the applications to be extremely memory-frugal, and that memory efficiency enables designers to handle their biggest 100M+ instance designs.
Parallels with Synopsys’ Galaxy Custom Designer?
There seem to be parallels with Synopsys’ Galaxy Custom Designer for AMS. Also, there could be some chance of Cadence’s Virtuoso and Encounter coming together in the future.
According to Deokar, Synopsys’ Custom Designer for AMS is its entry into the full-custom/analog design marketplace, where the Cadence Virtuoso platform is a strong incumbent.
He said: “The biggest challenge for mixed-signal designers is the effort and resources involved in taking design data from the full-custom/analog tools to the digital implementation tools, and back and forth, in never-ending iterations.
“Now, with the Encounter Digital Implementation System, designers get seamless full-custom/analog and digital design implementation interoperability, with unified constraints handling, mixed-signal floorplanning and ECO. It executes off a common design database (OpenAccess), enabling edits made in one design environment (e.g., Virtuoso) to be easily seen in the other (e.g., Encounter). It also enables the design team to easily transfer design data to determine the optimal floorplan based on analog and digital constraints.”
For example, if the analog design team moves pins on the analog block, then when the design is opened in Encounter, the modified pin locations are easily seen and the digital design team can execute a pin optimization to re-align the pins at the top level.
In addition, the user can enter routing constraints in either Encounter or Virtuoso, and implement mixed-signal routing in either environment. Top-level routing constraints could be defined within Virtuoso, and the top-level routing then completed using the mixed-signal routing functionality within Encounter.
Customers are already seeing their overall design schedules significantly reduced, added Deokar.
Postscript: Well, as expected, the EDA industry has taken a hit again. As per the EDA Consortium (EDAC) Market Statistics Service (MSS), the EDA industry revenue for Q3 2008 declined 10.9 percent to $1,258.6 million compared to $1,412.1 million in Q3 2007. The four-quarter moving average declined 2.8 percent.
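As a sanity check on the EDAC figures quoted above (my arithmetic, using only the numbers in the report):

```python
# Verify the reported year-on-year decline from the EDAC MSS figures.
q3_2008, q3_2007 = 1258.6, 1412.1   # EDA industry revenue, $ millions
decline = (q3_2007 - q3_2008) / q3_2007
print(f"Q3-08 vs Q3-07 decline: {decline:.1%}")   # matches the quoted 10.9%
```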
Now, does Cadence’s Encounter have the ability to turn around the EDA industry’s fortunes? I don’t think so! Much more needs to be done by Cadence and all of the other EDA companies!
Synopsys recently introduced the Galaxy Custom Designer, which provides a unified solution for custom and digital designs, thereby enhancing designer efficiency.
Well, this solution invariably draws a comparison with Cadence’s Virtuoso platform within the EDA industry!
That prompted me to engage Sandeep Mehndiratta, Product Marketing Group Director, Cadence Design Systems, in this discussion. We discussed a range of issues, such as how the Synopsys’ Galaxy Custom Designer matches up with the Virtuoso, and whether designers can now design what they wish, including concepts and flows, as well as the relevance of open architectures.
For the record, a few years ago, Cadence introduced the next-generation Virtuoso custom design IC 6.1 platform, which had a major upgrade recently with the IC 6.1.3 release. This release has been production-proven with tapeouts from many customers. However, as I said, it is Synopsys’ Galaxy Custom Designer doing the rounds in the EDA circles as of now!
Galaxy Custom Designer vs. Virtuoso
It is well known that Cadence has been the established leader in the custom IC design space for decades, and has been constantly improving and upgrading its technology to ensure it provides a best-in-class platform for designing today’s complex custom chips.
Mehndiratta said: “A couple of years ago we introduced the next-generation Virtuoso custom design IC 6.1 platform. This release has been production-proven with tapeouts from many customers. Some of the leading customers that have adopted the Virtuoso platform include Ricoh, National Semiconductor, Cambridge Analog Technologies Inc., Matsushita, etc.
“Synopsys has recently launched Galaxy Designer and it is unproven as yet. From what we’ve read and heard from some of our mutual customers, the competitive introduction may be attempting to replicate older custom IC technology. While the jury will probably be out for some time on this unproven tool, Cadence continues to provide a complete solution for design, verification and implementation of complex analog and mixed-signal designs, differentiated by the tight integration between the underlying technologies.”
With the advent of Galaxy, is it now safe to say that designers can finally design what they wish, including concepts and flows? Well, the answer is not clear yet! However, Mehndiratta did touch upon Cadence’s solution, which is built upon decades of experience in this area and a strong ecosystem made up of partners, third-party providers and foundries. Virtuoso, he added, is the most complete ecosystem for designing ICs; not only with its inherent flow, but also because of its linkages to multiple tools inside and outside of Cadence.
“For many years, we have provided a consistent front-to-back flow, and over that time we have learned much about what customers need to do their designs efficiently. It is that knowledge base that we leveraged to accelerate productivity with the 6.1 release a couple of years back,” he added.
If that is the case, why has it taken so long for a first modern-era mixed-signal implementation solution to be in place?
He referred to Cadence’s next-generation Virtuoso 6.1, introduced in November 2006 and said to be the first modern and most complete custom design solution released natively on the OA database. The productivity benefits are significant. RFIC Solutions Inc., a third-party intellectual property and design service provider, is said to have increased productivity two-fold by adopting the Cadence Virtuoso custom design platform.
Likewise, INSIDE Contactless, a fabless company and leader in contactless technology providing high-performance chipsets for secure, fast and reliable transactions with electronic identification, saved 20 percent in development time by adopting Cadence Virtuoso UltraSim Full-Chip Simulator, a component of Virtuoso Multi-Mode Simulation with a high-performance digital-solver technology, for the verification of its current and next-generation contactless and Near Field Communication (NFC) system-on-chip (SoC) designs.
He noted: “Specifically, mixed-signal design is evolutionary, not revolutionary. The concept of mixed-signal design isn’t new. People have been designing in this manner for 15+ years. What is new is the more holistic approach being taken by designers developing mixed-signal circuits. The once clear lines between analog and digital design are blurring, and now the idea of “mixed-signal” is being architected in right from the beginning.
“That is why Cadence’s AMS Designer covers transistor to system level design with a single simulation solution for complete verification. It is why Cadence has combined the power of its leading implementation platforms (Virtuoso and Encounter) to handle the implementation of mixed-signal designs.”
Given that Synopsys’ Galaxy Custom Designer can provide a unified solution for custom and digital designs, thereby enhancing designer efficiency, how will it change/affect designing, and the EDA landscape?
Mehndiratta pointed out that Cadence had defined a unified solution long ago. “Our industry leadership in this area, and Synopsys’ mimicking of that solution, are testaments to Cadence’s vision. Competition is good for all industries, and the end-customer usually benefits. You can count on Cadence to not only remain competitive, but also retain our industry leadership in custom/mixed-signal design.”
Importance of open architecture
Let us also look at the importance of open architecture that natively supports interoperable PDKs.
Cadence also believes in open architectures. Its Design Framework II was built as an open architecture, and that is why more than 30 companies have connected to it to form a larger ecosystem. The industry-standard framework approach, on the other hand, has been tried and has failed, the company maintains.
Mehndiratta said: “The reason it was a failure is the same as for interoperable PDKs. Frameworks and PDKs that are based on a ‘lowest common denominator’ principle do not provide the most optimized design flow. Instead, you are left with systems that try to please everyone and in the end are rejected as bloated beasts retarding the progress of design.”
Finally, how does Cadence propose to address the Galaxy challenge?
As expected, Cadence hopes to continue providing customers and partners with a framework in which they can build their tools into the Virtuoso design flow in the most optimized way possible.
It also intends to do so by providing its proven, industry-standard Pcell technology, which takes advantage of the key features in Cadence’s design flow, thereby allowing for fast and productive design today and in the future.
Synopsys Inc. recently unveiled its Galaxy Custom Designer solution, the industry’s first modern-era mixed-signal implementation solution. Architected for productivity, the Galaxy Custom Designer leverages Synopsys’ Galaxy Design Platform to provide a unified solution for custom and digital designs, thereby enhancing designer efficiency.
Galaxy Custom Designer delivers a familiar user interface while integrating a common use model for simulation, analysis, parasitic extraction and physical verification. It is the first-ever implementation solution built natively on the OpenAccess database for legacy designs as well as a new componentized infrastructure offering unprecedented openness and interoperability with process design kits (PDKs) from leading foundries.
Subhash Bal, Country Director, Synopsys (India) EDA Software Pvt. Ltd, highlighted three key features: One, it is architected for productivity. Two, it is a complete custom design solution. And three, it is based on an open environment.
The key question: why the Galaxy Custom Designer, and why now? Simple! The modern AMS era is characterized by interdependent custom and digital functions; analog IP is now mainstream; and there is the phenomenon of increased embedded memory. Current solutions are said to have a limited horizon. Hence, Galaxy!
A new solution is said to be the need of the hour: one that is complete, covering verification and implementation with common models, extraction and analysis. Re-spin is not an option. Next, it must offer unified implementation, addressing both custom and cell-based needs, since custom and cell-based functions are highly interdependent. Also, close to 100 percent of designs today are AMS.
Finally, the solution has to be open and portable. This accelerates the design cycle and IP portability. Also, quicker access to process details is a must! Architected for productivity, Galaxy Custom Designer has a similar look and feel, and works using fewer clicks: three as against six!
Galaxy Custom Designer’s Schematic Editor has productivity enhancers such as real-time connectivity, on-canvas editing and smart connect. Similarly, productivity enhancers for its Layout Editor include push button DRC and Extract, standard TCL and Python PCells, and auto via and guard ring generation. The WaveView Analyzer has features such as highest capacity and performance, complex analysis toolbox, and an automated TCL verification scripting.
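The TCL and Python PCells mentioned above are parameterized layout generators: a designer supplies parameters and the cell produces the corresponding geometry. As a rough illustration only (this is a toy sketch, not Synopsys’ or any vendor’s actual PCell API), a Python PCell for a simple resistor might look like this:

```python
def resistor_pcell(length, width, sheet_rho=50.0):
    """Toy parameterized cell: returns a layout rectangle and the
    resulting resistance.

    sheet_rho is the sheet resistance in ohms/square (the default of
    50 is an illustrative value, not a real process parameter).
    """
    if length <= 0 or width <= 0:
        raise ValueError("dimensions must be positive")
    # The generated geometry: one polysilicon rectangle.
    rect = {"layer": "poly", "llx": 0.0, "lly": 0.0,
            "urx": length, "ury": width}
    # Resistance = sheet resistance x number of squares (length/width).
    resistance = sheet_rho * length / width
    return rect, resistance

rect, r = resistor_pcell(length=10.0, width=2.0)
print(r)  # 250.0 ohms: 5 squares at 50 ohms/square
```

The point of a PCell is that changing `length` or `width` regenerates both the geometry and the derived electrical value consistently, which is what makes parameterized cells a productivity feature in custom layout.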
A unified platform for cell and custom means superior ease of use, performance, capacity and data integrity. Open and portable, it facilitates plug-and-play IP as well as standards-based PDKs, which means one PDK for all tools, added Synopsys’ Bal.
The IPL (Interoperable PDK Libraries) is an industry alliance established in April 2007 to collaborate on the creation and promotion of interoperable process design kit (PDK) standards.
Wipro Technologies has been among the early users of the Galaxy Custom Designer. I also managed to speak with Anand Valavi, Group Head, Analog Mixed-Signal Group, Wipro Technologies.
Valavi said: “From an EDA tool perspective, in AMS area, the amount of productivity is a lot less than in the digital area. The per transistor productivity for a digital designer is several magnitudes higher than an analog designer.”
The methodology is definitely not as evolved as in the digital area. According to him: “This productivity will now increase for analog and AMS areas, and people can do a lot more complex designs in a shorter period of time. There has been a lot of integration.”
On the salient features and enhancements, he said there have been reasonably good improvements in several areas. One, there is an alternative now, and that brings a lot of advantages. Two, when you take it down to the next level, there are several other technical reasons.
Valavi added that an integrated environment definitely improves productivity. There are other minor things as well. When you start using it, there are features that help technical users, for example, on-canvas editing. Also, collaborative results in the iPDK libraries will improve the effectiveness of the chip designs that are churned out. “It will surely give people working in the analog design area a choice,” he noted.
This is a continuation of my recent discussion with Joseph Sawicki, vice president & GM, Design to Silicon Division, Mentor Graphics.
There have been whispers that the EDA industry is presently lagging behind semiconductors and is in catch-up mode. “That’s a matter of perspective. There are definitely unsolved challenges at 32nm and 22nm, but the reality is that we are still in the technology development stage,” he says.
For EDA tools that address implementation and manufacturing issues (i.e., Mentor design-to-silicon products), there are dependencies that cannot be fully resolved until the process technology has stabilized. Mentor Graphics is laying the groundwork for those challenges and working in concert with the process technology leaders to ensure that our products address all issues and are production-worthy before the process technology goes mainstream.
On the other hand, although Mentor’s products are fully-qualified for 45nm, there have only been a handful of tapeouts at that node, so for the majority of customers, we are ahead of the curve.
On ESL and DFM as growth drivers
ESL and DFM are said to be the new growth drivers. Sawicki adds: “As Wally Rhines has said in his public presentations, system level design and IC implementation are the stages of development where there are the most challenges, and therefore the most opportunities. To continue the traditional growth spiral that the electronics industry has enjoyed as a result of device scaling, we need more sophisticated EDA solutions to deal with both of these challenges.”
ESL is responding to the growth of design complexity and the need for earlier and more thorough design verification, including low power characteristics, and software integration.
The Design-to-Silicon division is addressing the issues of IC implementation which result not only from the increase in design complexity and device sizes, but also from the increasing sensitivity of the manufacturing process to physical design decisions, a phenomenon often referred to as “manufacturing variability.”
Although the term “Design-For-Manufacturing” reflects the need to consider manufacturability in design and to optimize for both functional and parametric yield, it is important to emphasize that DFM is not simply an additional tool or discrete step in the design process, but rather an integration of manufacturing process information throughout the IC implementation flow.
With single threading, tools can no longer handle designs of over 100 million gates within reasonable turnaround times. And at 45nm, 100-million-gate designs are a reality. Rewriting tools for parallelism is itself a slow process. It would be interesting to see how Mentor is handling this.
According to Sawicki, Mentor has incorporated sophisticated multi-threading and multi-processing technologies into all of its performance-sensitive applications, from place-and-route, through physical verification, resolution enhancement and testing.
He says: “Our tools have a track record of impressive and consistent performance and scalability improvements, which is why we continue to lead the industry in performance.”
In addition to merely adding multi-threading and support for multi-core processors, Calibre products have a robust workflow management environment that automatically distributes the processing workload in the most efficient manner across any number of available clustered computing nodes.
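The general idea behind distributing such a verification workload, splitting a chip into independent jobs and farming them out to a pool of workers, can be sketched in a few lines of Python. This is purely an illustrative toy using the standard `multiprocessing` module, not Calibre’s actual scheduling technology, and `run_drc` is a hypothetical stand-in for a real per-block check:

```python
from multiprocessing import Pool

def run_drc(block):
    """Hypothetical stand-in for a per-block design-rule check:
    count shapes narrower than the block's minimum width rule."""
    violations = sum(1 for shape in block["shapes"]
                     if shape["width"] < block["min_width"])
    return (block["name"], violations)

if __name__ == "__main__":
    # Two independent blocks; each can be checked in parallel.
    blocks = [
        {"name": "cpu",  "min_width": 3,
         "shapes": [{"width": 2}, {"width": 5}]},
        {"name": "sram", "min_width": 3,
         "shapes": [{"width": 4}, {"width": 4}]},
    ]
    # The pool distributes jobs across worker processes, analogous
    # (very loosely) to spreading work across cluster nodes.
    with Pool(processes=2) as pool:
        results = pool.map(run_drc, blocks)
    for name, count in results:
        print(name, count)
```

The real engineering challenge the article alludes to is that production checks are not this embarrassingly parallel: jobs share data, differ wildly in cost, and must be balanced across nodes, which is where a workflow management layer earns its keep.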
Mentor’s Olympus-SoC place-and-route is inherently scalable due to its advanced architecture, which includes an extremely efficient graph representation for timing information and a very concise memory footprint. In addition, all the engines within Olympus-SoC can take advantage of multi-threaded and multi-core processors for high performance. These features enable Olympus-SoC to handle 100M+ gate designs in flat mode without excessive turnaround time.
Mentor’s ATPG tools are also designed to operate in multiprocessing mode over the multiple computing platforms to reduce test pattern generation time. In addition, Mentor test pattern compression technology reduces test pattern volume and test time, making it feasible to fully test 100M gate devices and maintain product quality without an explosion in test cost.
With EDA starting to move up to the system level, will this make EDA less dependent on the semiconductor world?
Sawicki agrees that there are challenges at both the front end and back end of the electronic products design and manufacturing life cycle. Both of these opportunities are growing. In addition, developments like multi-level (3D) die packaging, through-silicon via (TSV) structures and other non-traditional techniques for device scaling are pushing system and silicon design issues closer together.
Reaching the 22nm node will require highly compute intensive EDA techniques for physical design to compensate for limitations in the manufacturing process. Beyond that, we could see a major shift to new materials and manufacturing techniques that would open new green fields for EDA in the IC implementation flow.
EDA going forward
How does Mentor see the EDA industry evolving, going forward?
Sawicki adds: “There are three key trends to watch. Firstly, for design to remain affordable at the leading edge, we need to enable radical increases in productivity. Electronic System Level (ESL) design is the key here, allowing designers to move to a new level of abstraction for both design and verification.
“Secondly, the challenges of manufacturing a well-yielding and reliable device as we move to 22nm will require a far more sophisticated physical implementation environment, one that accounts for physical effects in the design loop, and accounts for manufacturing variability in its optimization routines.
“Finally, the manufacturing challenges also open significant opportunity for EDA in the manufacturing space. A great example of this is the September 17, 2008 announcement we did with IBM on a joint development program to enable manufacturing at the 22nm node.”
Finally, given the roles already defined by Magma and Synopsys in solar, is there an opportunity for EDA in solar/PV?
According to Sawicki, as the photovoltaic devices have very simple and regular structures, most of the opportunity for EDA is not in logic design tools, but in material science, transistor-level device modeling, and manufacturing efficiencies with a focus on conversion efficiency and manufacturing cost reduction.
EDA’s role in solar will be in the newer areas related to Design-for-Manufacturing and other manufacturing optimizations, he concludes.
Our last discussion on DFM will follow in a later blog post!
There have been reports about troubles within the EDA industry in recent times, especially those related to quarterly sales. Interestingly, Synopsys has been the one sailing along fine! If that’s not enough, it has made known its intention of playing a role in the solar/PV segment, an area where a lot of investment has been happening!
Given this scenario, I was extremely fortunate to be able to get into a conversation with Dr. Chi-Foon Chan, President and Chief Operating Officer, Synopsys Inc., during his recent visit to India.
On the state of the global semiconductor industry, he said, it was somewhere now in the low 10s [well below 10 percent]. The EDA industry is currently tracking below that level. However, Synopsys has been growing at around 10 percent. He said, “The technology challenges today are very high.”
Synopsys has a substantial R&D population based in India. Giving his assessment of the Indian semiconductor industry, Dr. Chan added: “Our main interest in India is largely talent and the academia. India can very well get more into the product development side. Even the outsourcing of designs has increased. The capabilities of our Indian team have also increased.”
As with any good semiconductor ecosystem, the Indian industry also needs a proactive industry association, a role played to near perfection by the ISA (India Semiconductor Association). Acknowledging the ISA’s role, Dr. Chan said, “The ISA has also formed a very cohesive team.”
There is little doubt about India’s growing importance in technology strengths and managerial leadership. Dr. Chan added: “We are more on the high-end side and also track what others design. In India, the profiles of designs are definitely high-end in nature. This is largely due to the presence of a large number of MNCs. A very high percentage of designs are in the 45nm and 65nm process technology nodes.”
There is another significant indicator of India’s growing importance: the huge rise in attendance at SNUG. In 2000, the event attracted 180 people. In 2008, SNUG attracted over 2,000.
Moving India to next level
Given the very high level of commitment on Synopsys’ part toward India, there was a need to find out from Dr. Chan what exactly India needs to do to move to the next level in the value chain in the semiconductor ecosystem.
He advised: “India can do two to three things. One, for the system to grow, you need the government, academia and industry to grow together. India has all of the ingredients required to drive products.”
Comparing India with China, he highlighted the fact that while in China local consumption was higher than local supply, that was not the case with India!
“Therefore, looking at merely the local market is not the only thing. Products developed here can also be targeted at the Middle East and Southeast Asia.” He was quite forthright in his analysis, adding: “Industries start when you find markets. The skill sets are already present here. There can well be multiple startups.”
Dr. Chan also touched upon the fab vs. fabless issue, noting that there could well be more fabless companies in India. “Building a fab requires a lot of capital. Also, consolidation will continue to happen.”
What role does Dr. Chan see Synopsys playing in the Indian context? He said: “Synopsys will continue to be a catalyst for the industry. A healthy design industry in India continues to help us. We also work well with the Indian universities. Having more people from the universities will always help. We also invest a lot in application support. The application team also trains others. I now look forward to seeing more fabless companies here and India to become even more global.”
On low power design
India is also a centre of expertise in low power design, given that low power is hugely important in today’s electronics ecosystem. Dr. Chan commented that low power has always been the number one design issue. It cannot be taken care of at one single stage.
He added: “A slightly new concept that has emerged is low-power verification. There are so many schemes for attacking low power, such as multiple voltage islands. We (Synopsys) are spending a lot of effort in low power.
“As a designer, you require detailed analysis. Low-power verification is now coming up. Another area is testing. As an example, if so much power is required, how do you have the power cut from the tool you are using to test? From a Synopsys point of view, we are involved in several points, such as front-end synthesis, testing, sign-off, verification, etc. We are trying to put in a whole lot of methodologies.”
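One concrete flavor of the low-power verification Dr. Chan mentions is checking designs with multiple voltage islands: any signal crossing between domains at different voltages needs a level shifter, or it may be misread. A minimal sketch of such a check, assuming a toy netlist representation (real tools work from UPF/CPF power intent, not dictionaries like this), could look like:

```python
def check_level_shifters(nets, cell_domains, shifted_nets):
    """Return the nets that cross between voltage domains without a
    level shifter.

    nets: net name -> list of connected cell names
    cell_domains: cell name -> voltage domain label
    shifted_nets: set of net names that pass through a level shifter
    """
    violations = []
    for net, cells in nets.items():
        domains = {cell_domains[c] for c in cells}
        # A net touching more than one domain must be level-shifted.
        if len(domains) > 1 and net not in shifted_nets:
            violations.append(net)
    return violations

cell_domains = {"u1": "VDD_0V9", "u2": "VDD_1V2", "u3": "VDD_1V2"}
nets = {"n1": ["u1", "u2"],   # crosses 0.9V -> 1.2V
        "n2": ["u2", "u3"]}   # stays within 1.2V
print(check_level_shifters(nets, cell_domains, set()))  # ['n1']
```

Real low-power verification covers far more than this (isolation cells, retention registers, power-sequencing), but the structural idea is the same: check the netlist against the declared power intent.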
Synopsys in solar
EDA may be able to help by lowering power requirements and leakage in better products. In particular, Synopsys’ TCAD product can be used to create more efficient and effective solar cells. This is not a new development anymore: Synopsys, along with Magma, has already made known its intention of setting foot in the solar/PV space.
On TCAD, Dr. Chan said: “We have a very strong position in TCAD, commercially. It is now one of our most critical elements in high performance. Our TCAD is among the strongest in the EDA industry.
“In solar, it does not have to be a complicated place-and-route, etc. From an entire solar industry point of view, we have now used some effort from TCAD into this space. Heat transfer issues, etc., are more in the EDA space.”
I will continue my conversation with Synopsys on its solar initiative sometime later. Keep watching this space, folks!