
Posts Tagged ‘Synopsys’

Semicon in sub-20nm era: Business as usual or different?


We are now entering the sub-20nm era. So, will it be business as usual or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP of Marketing and Strategic Development at Synopsys, to find out more about the impact of new transistor structures on design and manufacturing, the move to 450mm wafers, and transistor variability.

Impact of new transistor structures on design and manufacturing
First, let us understand the impact of new transistor structures on design and manufacturing.

John Chilton.

Chilton said: “Most of the impact is really on the manufacturing end since they are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall and thin fins where self-aligned double patterning steps are now required.

“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.

“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”

Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?

According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors, and the EDA tool providers have a long history of introducing new technology successfully when the economics of deploying the technology is favorable.

The 300mm wafer deployment, for example, was quite complex, but it was completed successfully. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.

Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?

Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.

Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
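To get a rough feel for why this 'lumpiness' matters so much at these dimensions, here is a toy Monte Carlo sketch in Python (my own back-of-the-envelope illustration, not any Synopsys tool; the doping level and channel sizes are assumed values). It draws the dopant count in a shrinking channel volume from a Poisson distribution and shows how the relative spread grows as the device gets smaller.

import numpy as np

# Toy model of random dopant fluctuations: the dopant count in a small
# channel volume follows a Poisson distribution, so its relative spread
# grows roughly as 1/sqrt(N). Doping and dimensions are assumed values.
doping = 1e18              # dopants per cm^3 (assumed channel doping)
trials = 100_000

for side_nm in (90, 45, 20, 10):             # assumed cube-shaped channel region
    volume_cm3 = (side_nm * 1e-7) ** 3       # nm -> cm, then cube
    mean_count = doping * volume_cm3         # expected number of dopants
    counts = np.random.poisson(mean_count, trials)
    spread = counts.std() / counts.mean()
    print(f"{side_nm:3d} nm: ~{mean_count:7.1f} dopants on average, "
          f"relative spread ~{spread:.0%}")

At the 90nm-scale volume the spread is only a few percent and the dopants behave like a continuum of charge; at the 10-20nm scale only a handful of dopants remain and the spread approaches 100 percent, which is the transistor-to-transistor variability described here.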

With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.

The other key sources of variability are the random location of traps, and the etching and lithography processes, which produce slightly different dimensions in critical shapes such as fin width and gate length.

“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.

State of the global EDA industry: Dr. Pradip Dutta, Synopsys


A few weeks ago, I was fortunate enough to speak with Dr. Pradip Dutta, Corporate Vice President, Managing Director and Treasurer, regarding the state of the EDA industry globally and in India. What followed was a very interesting conversation, some of which is reproduced here!

Any sign of improvements in EDA?
To start with, the state of the global EDA industry is well known, and it has seen quarter-on-quarter revenue drops in the past. Are there any signs of improvement?

According to Dr. Dutta, the last several quarters in the semiconductor industry have been extremely challenging as consumer demand for electronic products has declined with the heavy stress on the global economy.

“While we are starting to see signs of the semiconductor industry rebounding off the bottom with inventory replenishment and an uptick in end demand for key consumer items such as PCs and mobile phones, the environment is expected to remain difficult at least well into next year.

“During this time, the challenge for the semiconductor industry and its suppliers will be to find the next level of efficiency. The good news is that across a broad field of applications, semiconductors are a key enabler to future prosperity. Green solutions, low-cost netbooks, advances in connectivity and evolving products like the Kindle are just a few examples of areas that could help drive future development.

“The long-term ramifications of this scenario on the EDA industry are starting to become visible. More than ever, customers want to get their products out on time, and get them right with high quality.

“In addition to some immediate cost-cutting to respond to the crisis, most semiconductor and design businesses are re-focusing their market strategies, streamlining their operations, de-risking their supplier and partner relationships, and in some cases actively pursuing consolidation opportunities to drive economic efficiency.

“This situation presents an opportunity for EDA companies to focus on important product developments that can enable leading semiconductor design and manufacturing companies to not only create more advanced devices, but to simultaneously lower risks and cut costs. In today’s economy, companies need to find ways to manage expenses while still investing in the future so they don’t just survive the recession, they emerge from it stronger.”

State of the Indian EDA industry
Obviously, it would be interesting to see how the Indian EDA industry is holding up in these times.

Dr. Dutta said that the Indian EDA industry is a combination of catering to global semiconductor players and addressing the needs of a domestic market that is slowly developing. The global players that operate out of India are rapidly moving up the value chain in terms of owning and architecting the next generation chips. This leads to an enormous opportunity for EDA companies to get associated at the front end of tool decisions.

“As you are aware, the level of technology that is being witnessed in the chips that are getting designed here is absolutely bleeding edge. The EDA companies are therefore paying concomitant attention to robust application support and in-house R&D effort. It has to be a full package here and now to address these kinds of customer requirements.

“Beyond the global players, India is seeing a few, but committed fabless design companies coming up in recent times. In addition to that, the Indian government is showing a lot of interest in country-specific programs, primarily in defense areas that require EDA support.

“We have also recently seen media reports about an “India Chip” being conceived at the central government level for domestic security applications. The ISA is working toward a blueprint for putting semiconductors on the national agenda and, hopefully, many ideas for systems and corresponding chips will emanate from it to keep EDA companies interested,” he added.


IC Validator offers step up in physical designer’s productivity


Recently, Synopsys Inc. introduced IC Validator, a design rule checking/layout versus schematic (DRC/LVS) signoff tool for in-design physical verification and signoff of advanced designs at 45nm and below.

Said to provide a step up in physical designer productivity, it is architected to deliver the high accuracy necessary for leading-edge process nodes, superior scalability for efficient utilization of available hardware, and ease-of-use.

What does IC Validator do?
According to Sanjay Bali, Director of Marketing, Physical Verification & DFM, Synopsys, the IC Validator is a complete physical verification tool, performing increasingly complex DRC and LVS sign-off checks.

It has been specifically architected for in-design physical verification. This means: the place-and-route engineers can run DRC and practical DFM steps alongside place and route within the familiar IC Compiler physical design environment.

And why the need for such a solution? He added that three key challenges are driving the need for a new approach, and hence the new tool. These are:
a) Increase in the complexity and count of manufacturing rules.
b) Unabated growth in design complexity.
c) Increasing DFM challenges, which just cannot be handled in a post-processing approach.

Currently, the solution is aimed at 45nm and below as these nodes largely represent the challenges listed above.

Enhancing physical designer’s productivity
Three key tenets of the IC Validator that offer improved physical designer productivity are:
a) High accuracy necessary for leading-edge process nodes.
b) Superior scalability for efficient utilization of available hardware.
c) Ease of use, with seamless integration of IC Validator and IC Compiler.

Bali said: “The IC Validator has been architected from the ground up for in-design physical verification. In-design physical verification enables place-and-route engineers to accelerate the time to tapeout by enabling sign-off quality physical verification from within implementation or physical design. Physical designers designing with IC Compiler can now benefit from the in-design physical verification approach with the push of a button, incurring minimal overhead cost to eliminate surprises late in the design.

“With the verify-as-you-go approach replacing the implement-then-verify approach, physical designers can significantly reduce iteration count, eliminate streamouts and streamins, and accelerate time to tapeout. In addition, the integration enables several productivity enhancing flows like incremental DRC verification, incremental metal fill flows and ECO flows — all leading to significant reduction in time to tapeout.”

It would be interesting to know by approximately what percentage the total physical verification time is reduced, and what the process covers.

Bali added that in extreme cases, finding and fixing DRC violations can easily impact the schedules by a few weeks! The key here is that physical designers typically wait until the final stages of the tapeout to run physical verification. Inevitably, the schedule at this point is squeezed and the cost of fixing the error is high.

“With a sign-off quality physical verification tool integrated into the physical design environment, place-and-route engineers can verify as they implement and eliminate late surprises while speeding up the total physical verification turnaround time. In addition, the outcome of this process is a sign-off clean design.”

Production ready!
The Synopsys IC Validator is also said to be ‘production ready’! What exactly does that mean?

The IC Validator has been successfully used to tape out designs at several chip manufacturers, said Bali. It is currently being used for production designs at Nvidia and Toshiba. Besides other leading foundries and chip manufacturers, it has also been qualified by TSMC for the 40nm and 28nm process nodes.

For those interested, Toshiba already has Synopsys as its key EDA partner, and NVIDIA adopted the IC Validator for sign-off physical verification, within days of its launch! More are bound to follow!

Saving design spins!
Will the IC Validator approach be able to save design spins? How much is the physical design cycle time reduced?

With in-design physical verification, place-and-route engineers will be able to run sign-off quality DRC checks, as well as timing-aware, sign-off quality metal fill, all within the familiar IC Compiler environment. Linear scalability for efficient use of hardware, sign-off accuracy and integration with IC Compiler enable productivity-enhancing flows like auto-detect and auto-fix, and incremental verification, all of which can significantly reduce time to tapeout.

How can it help in avoiding the painful sign-off failure-to-physical-redesign iterations that are increasingly common below 90nm?

With the seamless integration of the IC Validator with the IC Compiler, physical designers can now verify the design as they implement for manufacturing sign-off accuracy.

Incremental DRC’s strength
How good is the incremental design-rule checker (DRC)? Is it really parallelized for the multicore servers?

According to Bali, incremental flows are one of the strongest tenets of the IC Validator. To improve physical designer productivity, rule-based-only or layer-based-only incremental verification runs can be initiated from within IC Compiler.

He said: “For ECO validation, the IC Validator supports window or an area-based incremental verification approach to speed up surgical checks. The incremental flows are meant to be quick, but the IC Validator has multicore capability to further speed up the process.”
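To make the window- or area-based idea concrete, here is a toy Python sketch of the concept (my own illustration, not IC Validator's engine, rule syntax or runset format): after an ECO touches a small window, only the shapes intersecting that window are re-checked against a simple minimum-spacing rule, instead of re-running the full-chip check.

from itertools import combinations

# A shape is an axis-aligned rectangle: (xmin, ymin, xmax, ymax).
def intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def spacing(a, b):
    # Gap between two non-overlapping rectangles (0 if they touch or overlap).
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def incremental_min_spacing(shapes, window, min_space):
    """Re-check the min-spacing rule only for shapes touching the ECO window."""
    local = [s for s in shapes if intersects(s, window)]
    return [(a, b) for a, b in combinations(local, 2)
            if not intersects(a, b) and spacing(a, b) < min_space]

# Toy layout in arbitrary units; only the ECO window gets re-verified.
layout = [(0, 0, 4, 2), (5, 0, 9, 2), (5.5, 3, 9, 5), (50, 50, 60, 60)]
eco_window = (4, 0, 10, 6)
print(incremental_min_spacing(layout, eco_window, min_space=1.2))

The far-away shape never enters the check at all, which is where the turnaround-time saving comes from; a multicore version would simply split the candidate pairs across worker processes.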

The IC Validator discovers and fixes design rule violations within the global context of the design as well. How is this made possible?

With in-design physical verification, the IC Validator can accurately and automatically identify a DRC violation and provide fix guidance to IC Compiler, which fixes the violation and then re-verifies it.

Handling metal fills and design changes
Operations typically performed during physical verification, such as metal fills, may trigger additional design changes to achieve timing closure. How is this handled by the IC Validator?

Bali said that the prevailing post-processing oriented DFM flows introduce excessive and lengthy discover-fix iterations. Metal fill insertion, a mandatory DFM step at the advanced nodes, exemplifies this issue.

“Physical designers stream out the timing-closed post-fill design for signoff validation and then stream it back in to fix any signoff errors flagged during physical verification. This multi-hour discover-fix loop is typically repeated per block until the post-fill design is both signoff qualified and timing clean.

“With in-design physical verification, the IC Validator and IC Compiler address the challenges of DFM, within the place-and-route environment. The seamless integration enables a single pass metal fill flow that is timing aware and of signoff quality and is void of expensive streamouts and streamins,” he added.

Synopsys on Discovery 2009, VCS2009 and CustomSIM


If you’ve been following the EDA industry closely, you’d be well aware of three major announcements by Synopsys over the last couple of days. These are:

* Synopsys introduced the Discovery 2009 verification platform, delivering faster, unified verification solutions.
* It unveiled the VCS multicore technology, delivering 2x verification speed-up.
* It introduced the CustomSim Unified Circuit Simulation solution, which addresses custom digital, analog and memory verification challenges.

I met up with Dr. Pradip K. Dutta, Corporate Vice President & Managing Director, Synopsys (India) Pvt Ltd, and Manoj Gandhi, vice president and general manager, verification group at Synopsys, in an attempt to understand how significant these announcements are for verification.

Verification is huge!
According to Manoj Gandhi, at the macro level, design complexities continue to grow. As they grow, one big challenge is verification. The reason: today’s SoC designs and large IC designs are being approached like large software projects.

He said: “Verification becomes huge, like software. It is expensive in hardware design. We focus on the verification challenges. We introduced the System Verilog about four to five years ago, and we had also acquired ArchPro. Yesterday, we announced the Discovery 2009, CustomSim and VCS2009.”

How can users make use of the new CPUs coming out? “We aim to get much higher performance using multicore architectures,” he added.

Introducing VCS2009
The VCS2009 is multicore enabled, runs the industry’s first low-power verification methodology, and enables the fastest mixed-signal simulation with CustomSim. Focusing on the VCS2009, Gandhi said: “In verification, there’s a design under test and verification. A lot of designs now have multicores. AMD is among the many folks using the VCS2009. Almost every CPU is designed using VCS. It plays a big role in large SoCs.”

Design companies have several activities such as test bench, debug, etc. All of these can now be parallelized. “Customer designs can be simulated on multiple threads,” Gandhi said. “The applications can also be simulated on different threads, called application-level parallelism. We can actually bring about a 5-7X improvement in verification with the VCS2009.”

According to him, this product is already being used by some large customers. “This is our next phase of performance innovation. The processor roadmap is getting more and more multicore. We have over 200 customers,” he added.

The VCS distributes time-consuming activities across multiple cores. Gandhi added that each core handles a lot of computation. You may do a lot of parallel activities on a mobile phone; all those activities now run in parallel.

And how about the speed-up from parallel computation with the industry-leading Native Testbench (NTB)? He said: “We were one of the first to introduce all technologies as part of a single compiler. That brought the 5X speed-up. We did all of this in verification, and a test bench core was brought into verification.”

The combination of design-level parallelism (DLP) and application-level parallelism (ALP) optimizes VCS performance on multicore CPUs, with the different activities threaded across the available cores.
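For readers who want a feel for what this kind of parallelism means in practice, here is a generic Python sketch of the concept (not VCS internals; the block names and workloads are made up): independent pieces of the simulation workload are farmed out to separate worker processes, one per core.

from multiprocessing import Pool
import time

def simulate(task):
    """Stand-in for one independent simulation activity (a design block or testbench task)."""
    name, workload = task
    t0 = time.perf_counter()
    sum(i * i for i in range(workload))      # dummy compute load
    return name, round(time.perf_counter() - t0, 3)

if __name__ == "__main__":
    # Hypothetical split of the workload: design blocks plus testbench activities.
    tasks = [("cpu_core", 2_000_000), ("memory_ctrl", 1_500_000),
             ("testbench", 1_200_000), ("coverage", 800_000)]

    start = time.perf_counter()
    with Pool(processes=4) as pool:          # one worker per available core
        results = pool.map(simulate, tasks)
    print(results)
    print("wall time:", round(time.perf_counter() - start, 3), "s")

How close the overall speed-up gets to the figures quoted above depends on how evenly the activities can be partitioned and how much they need to synchronize.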

Low-power verification methodology published
Synopsys has published a book on the industry’s first low-power verification methodology, along with ARM and Renesas. It is an attempt to bring the technology to the mainstream and show how to do low-power verification. About 30 other companies participated in this exercise.

On the CPF vs. UPF debate, he said that UPF is a standard in which Magma, Mentor, Synopsys and others have participated, while Cadence has CPF. Users can make use of this book and apply it on top of both UPF and CPF.

Introducing Discovery 2009
According to Synopsys, this solution is doing very well in the market. The company has shown strong technology leadership over the last two to three years, and has also made strong investments.

CustomSim is a unified circuit simulation solution. “We have a software-to-silicon verification focus. We are all the way from system-level design to RTL, to software verification, etc. Discovery has some technologies as part of that,” noted Gandhi.

What has Synopsys done right?
A most interesting point in the EDA industry, I feel, has been the performance of Synopsys, in an otherwise difficult segment over the past year. So, what are the reasons behind this success?

Gandhi added: “Our management are all strong technologists. We have invested tremendously in bringing in strong technology leaders. In India, many companies needed R&D collaborations locally. For us, it was a big win when we invested in Bangalore. We work closely with customers, delivering technologies that will address challenges two to three years from now.”

Dr. Pradip Dutta elaborated: “Synopsys is very strong in product leadership (PL). The other two key areas are customer intimacy (CI) and operational excellence (OE). You need to be highest in PL. We have been very conservative even during strong times.”

That is indeed a marvellous thought! Those who are typically strong in technology, generally go on to develop great intimacy with customers, and all of this starts reflecting on their operations, which are anyway excellent! Here’s a message for those who wish to do well in tough times — strong product leadership, coupled with customer intimacy and well, corresponding operational excellence!

Focus on verification
Now that the focus is quite clearly on verification, how do EVE and the other verification companies stand out? EVE is currently in the emulation space. Gandhi added that EVE competes more with Cadence and Mentor. “We work with EVE on many accounts. Verification is all about finding bugs. Emulation has been more cyclical.”

According to him, Synopsys is now looking at tackling the next level: how do you reduce the overall cost? “We will go beyond selling tools. We would look at how to identify issues and save verification costs.” I believe verification takes up close to 70 percent of the overall design effort.

Commenting on the EDA industry in India, both, Dr. Dutta and Gandhi feel it is still buzzing quite well, despite what’s been happening in the global context. “We have invested quite a lot. We have a large team here. We continue to collaborate with local institutions here as well,” Dr. Dutta added.

Cadence’s Encounter and how it matches up to Synopsys’ Galaxy!


In early December 2008, Cadence Design Systems launched the Cadence Encounter Digital Implementation System, said to be a configurable digital implementation platform that delivers incredible scalability with complete support for parallel processing across the design flow. Will it change the fortunes of the struggling EDA industry? EDA industry stats for Q3-08 are given at the end of this post!

My first thoughts immediately went to Synopsys’ Galaxy Custom Designer solution. This is the industry’s first modern-era mixed-signal implementation solution. Is the Cadence Encounter an answer to Synopsys’ Galaxy? This is worth a shot!

Obviously, why has Cadence released Encounter now? How will the Encounter take on Synopsys’ Galaxy? I managed to engage Rahul Deokar, Product Marketing Director, Cadence, to find out more.

The Encounter Digital Implementation System is a next-generation, high-performance, high-capacity RTL-to-GDS-II design closure solution with the industry’s first end-to-end parallel processing flow, which enables all steps of the design flow to be multi-CPU enabled, from floorplanning, placement, routing and extraction to timing and signal integrity sign-off. He said: “At its core is a new memory management architecture and an end-to-end multi-CPU backplane that provides scalability with increased performance and capacity to reduce design time and time-to-market.”

Does it intend to take on Synopsys’ Galaxy? Well, Deokar said: “Yes, it surpasses the other solutions available in the marketplace based on the following capabilities and features, which are:
* Ultra-scalable RTL-to-GDS-II system with superior design closure and signoff analysis for low-power, mixed-signal, advanced node designs.
* End-to-end multi-core infrastructure and advanced memory architecture for unparalleled scalability of capacity, design turnaround time, and throughput.
* Robust design exploration and automated floorplan synthesis and ranking solution.
* Embedded signoff-qualified variation analysis and optimization across design flow.
* Integrated diagnostic tools for rapid global timing, clock and power analysis/debug.”

Here’s a list of benefits that it provides designers:
* Significantly reduces design time, schedule and development risk.
* Increased productivity through automation; superior quality of results.
* Configurable and extensible platform that ensures maximum utilization and ROI — upgrades proven design flow and amplifies existing expertise.
* Interoperability across package, logic, custom IC design, and manufacturability.

Harnessing power of multicore computing
According to Cadence, it provides complete support for parallel processing across the design flow. Does this mean that designers can fully harness the power of multicore computing? It would also mean that today’s EDA tools are capable enough to meet the multicore challenge.

Deokar added: “Yes, the end-to-end parallel processing flow is supported across the entire design flow and, consequently, designers can fully harness the power of multicore computing. Today’s designers commonly have dual-CPU or even quad-CPU machines on their desktop. The Encounter Digital Implementation System allows designers to leverage their multi-CPU hardware and gain significant TAT improvements in design cycle time and the overall development schedule.”

The Encounter end-to-end multi-CPU backplane delivers ultra-scale performance gains of up to 16X in key areas such as routing and timing closure. All steps of the design flow are multi-CPU enabled. For instance, on a production design, when Encounter is run on four CPUs, the user can get a 3.2X performance boost across the entire, end-to-end design flow.
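As a sanity check on what a 3.2X flow-level gain on four CPUs implies, Amdahl's law suggests that roughly 90 percent of the flow's runtime is being parallelized. A quick back-of-the-envelope calculation in Python (my own arithmetic, not a Cadence-published figure):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(observed_speedup, n):
    # Solve Amdahl's law for p, given an observed speedup on n CPUs.
    return (1.0 - 1.0 / observed_speedup) / (1.0 - 1.0 / n)

p = parallel_fraction(3.2, 4)                                 # ~0.92 for 3.2X on 4 CPUs
print(f"implied parallel fraction: {p:.1%}")
print(f"projected speedup on 8 CPUs: {speedup(p, 8):.2f}X")   # diminishing returns

The remaining serial fraction is why flow-level gains are always smaller than the up-to-16X gains quoted for individual steps such as routing.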

Encounter deployed by over 15 customers?
Designers are said to be reporting dramatically improved design time, design closure, and faster time-to-market for advanced digital and mixed-signal devices. By what factors, and against which other tool(s) has Encounter been rated?

Deokar said that the Encounter Digital Implementation System has been developed in close collaboration with over 15 customer partners who have extensively used, validated and now, deployed it.

“Customers are already seeing overall design cycles significantly shortened, by 25-30 percent, which translates to multiple weeks or even months. These significant improvements are against competitive tool flows in their current methodology,” he added.

Encounter is also said to be offering new technologies for silicon virtual prototyping, die-size exploration and RTL and physical synthesis, providing improved predictability and optimization in early stages of the design flow.

Regarding this aspect, he pointed out that large-scale design complexities (increased functionality, predictability, productivity, etc.) pose some of the biggest challenges. Designs are getting huge, with 100M+ gates and 100+ macros, putting significant requirements on design tools; floorplanning these macros, in particular, becomes a huge challenge.

“The new Silicon Virtual Prototyping capabilities of Automated Floorplan Synthesis and Die Size Exploration help out exactly on that front. These can quickly provide floorplanning for that large 100M+ gate, 100+ macro design.

“And not just one floorplan: designers can provide multiple criteria (say, along the lines of timing, power, area or congestion) and get multiple floorplans with their rankings, all in a matter of minutes! Essentially, you could finish your breakfast or lunch (depending upon how fast you eat!) and be back to find multiple floorplans that you can then pick and choose from, and then proceed to implementation.”
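The ranking step itself is easy to picture: each candidate floorplan gets per-criterion metrics, the designer supplies weights for what matters most, and the candidates are sorted by weighted score. The sketch below is a generic Python illustration with made-up numbers and names, not Cadence's actual scoring model:

# Hypothetical candidate floorplans with normalized metrics (lower is better).
candidates = {
    "fp_A": {"timing": 0.30, "power": 0.55, "area": 0.40, "congestion": 0.25},
    "fp_B": {"timing": 0.45, "power": 0.35, "area": 0.30, "congestion": 0.50},
    "fp_C": {"timing": 0.25, "power": 0.60, "area": 0.55, "congestion": 0.20},
}

# Designer-supplied priorities; timing and congestion weighted highest (assumed).
weights = {"timing": 0.4, "power": 0.2, "area": 0.1, "congestion": 0.3}

def score(metrics):
    return sum(weights[k] * v for k, v in metrics.items())

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]))
for rank, (name, metrics) in enumerate(ranked, 1):
    print(f"{rank}. {name}  weighted score = {score(metrics):.3f}")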

Addressing new problems at 45nm/40nm/32nm
Obviously, targeted at 45nm/40nm/32nm and beyond, how does Encounter anticipate and address the majority of the new problems associated with these geometries across the entire flow?

Deokar noted that its main customers include semiconductor companies working on 45nm and 32nm designs, with aggressive design specifications including 100 million or more instances, 1,000-plus macros, operating speeds exceeding 1GHz, ultra-low power budgets, and large amounts of mixed-signal content.

“The challenges facing these designs comprise an increasing demand for design tool performance/capacity, and design features for challenging ultra-large-scale designs in the areas of low power, mixed signal, advanced nodes and signoff analysis. In addition, small market windows, short product life-cycles and cost pressures further exacerbate the situation,” he noted.

The Encounter Digital Implementation System’s core design closure capabilities, plus the new advanced-node technologies, including litho-, CMP-, thermal- and statistical-aware optimization, provide comprehensive manufacturing- and variation-aware implementation, and an end-to-end multi-core infrastructure for fast, predictable design closure even on the most challenging designs.

Reducing memory footprints
It will be interesting to learn about the kind of work that has gone into reducing the memory footprint of the most memory-intensive applications.

Deokar said that an innovative memory architecture at the core of the Encounter System enables capacity and performance gains of 30-40 percent for full flat and hierarchical designs, even when running on a single-CPU machine.

Cadence’s R&D team has developed an advanced memory defragmentation algorithm that allows the applications to be extremely memory-frugal, and that memory efficiency enables designers to handle their biggest 100M+ instance designs.

Parallels with Synopsys’ Galaxy Custom Designer?
There seem to be parallels with Synopsys’ Galaxy Custom Designer for AMS. Also, there could be some chance of Cadence’s Virtuoso and Encounter coming together in the future.

According to Deokar, Synopsys’ Custom Designer for AMS is its entry into the full-custom/analog design marketplace, where the Cadence Virtuoso platform is a strong incumbent.

He said: “The biggest challenge for mixed-signal designers is the effort and resources involved in taking design data from the full-custom/analog tools to the digital implementation tools, and back and forth, in never-ending iterations.

“Now, with the Encounter Digital Implementation System, designers get seamless full-custom/analog and digital design implementation interoperability, with unified constraints handling, mixed-signal floorplanning and ECO. It executes off a common design database (OpenAccess), enabling edits made in one design environment (e.g., Virtuoso) to be easily seen in the other design environment (e.g., Encounter). It also enables the design team to easily transfer the design data, to determine the optimal floorplan based on analog and digital constraints.”

For example, if the analog design team moves pins on the analog block, then when the design is opened in Encounter, the modified pin locations are easily seen and the digital design team can execute a pin optimization to re-align the pins at the top level.

In addition, the user can enter routing constraints in either Encounter or Virtuoso, and implement mixed-signal routing in either environment. Top-level routing constraints could be defined within Virtuoso, and the top-level routing then completed using the mixed-signal routing functionality within Encounter.

Customers are already seeing their overall design schedules significantly reduced, added Deokar.

Postscript: Well, as expected, the EDA industry has taken a hit again. As per the EDA Consortium (EDAC) Market Statistics Service (MSS), the EDA industry revenue for Q3 2008 declined 10.9 percent to $1,258.6 million compared to $1,412.1 million in Q3 2007. The four-quarter moving average declined 2.8 percent.

Now, does Cadence’s Encounter have the ability to turn around the EDA industry’s fortunes? I don’t think so! Much more needs to be done by Cadence and all of the other EDA companies!

Top 10 captivating moments in Indian semicon during 2008


Yes, the time has come for all of us to say goodbye to this year. It has been a very captivating year for the Indian semiconductor industry. Some consider it to be a year the industry came of age, while some others would look at the year as one where fab promises failed India.

Nevertheless, as I’ve maintained, having or not having a fab won’t affect India very much as its traditional strengths have been in embedded and design services.

There have been several moments during the year that I personally savor. In fact, I have either witnessed most of those or written/blogged about them.

The top 10 captivating moments in Indian semiconductors during 2008, according to me, are:

1. S. Janakiraman, former chairman, ISA, declaring to the world, in May at Dubai during IEF 2008, India’s growing strength in global telecom.

2. Growing interest in the solar photovoltaic industry in India, and subsequent proposals made by various companies, including Reliance.

3. EDA companies, such as Magma and also Synopsys, making their entry, or at least, intentions known, in the solar/PV industry.

4. Intel’s new chip, designed largely in Bangalore, and of course, the Intel Developer Forum in Taipei, Taiwan.

5. Visit of a strong Japanese delegation to Bangalore, which showed remarkable keenness regarding possible investments in India.

6. BV Naidu quitting SemIndia, and putting in doubt India’s fab story. Well, that’s a different story, and one person’s exit would not mean much to such a large industry.

7. ISA Excite, and the minister announcing that Karnataka could have its own semiconductor policy. The policy should be out in the new year, hopefully.

8. AMD’s new chip, the Shanghai, which again, had a lot of involvement from AMD’s Bangalore team.

9. NXP India achieving RF CMOS in a single chip. The entire analog and RF work was done in Bangalore, India.

10. Go parallel or perish, said James Reinders, of Intel! Parallelism or parallel computing involves the simultaneous use of more than one computer or processor to execute a program.

I was also present during the launch of Synopsys’ Galaxy Custom Designer, which tackles the analog mixed-signal (AMS) challenges. It would occupy a joint 10th position.

There may have been some other moments as well! I would like to hear from all of you about the other great moments in the Indian semiconductor industry during 2008!

Cadence’s Virtuoso vs. Synopsys’ Galaxy Custom Designer!


Synopsys recently introduced the Galaxy Custom Designer, which provides a unified solution for custom and digital designs, thereby enhancing designer efficiency.

Well, this solution invariably draws a comparison with Cadence’s Virtuoso platform within the EDA industry!

That prompted me to engage Sandeep Mehndiratta, Product Marketing Group Director, Cadence Design Systems, in this discussion. We discussed a range of issues, such as how the Synopsys’ Galaxy Custom Designer matches up with the Virtuoso, and whether designers can now design what they wish, including concepts and flows, as well as the relevance of open architectures.

For the record, a few years ago, Cadence introduced the next-generation Virtuoso custom design IC 6.1 platform, which had a major upgrade recently with the IC 6.1.3 release. This release has been production-proven with tapeouts from many customers. However, as I said, it is Synopsys’ Galaxy Custom Designer doing the rounds in the EDA circles as of now!

Galaxy Custom Designer vs. Virtuoso
It is well known that Cadence has been the established leader in custom IC design space for decades, and has been constantly improving and upgrading technology to ensure it is providing best-in-class platform for designing today’s complex custom chips.

Mehndiratta said: “A couple of years ago we introduced the next-generation Virtuoso custom design IC 6.1 platform. This release has been production-proven with tapeouts from many customers. Some of the leading customers that have adopted the Virtuoso platform include Ricoh, National Semiconductor, Cambridge Analog Technologies Inc., Matsushita, etc.

“Synopsys has recently launched Galaxy Designer and it is unproven as yet. From what we’ve read and heard from some of our mutual customers, the competitive introduction may be attempting to replicate older custom IC technology. While the jury will probably be out for some time on this unproven tool, Cadence continues to provide a complete solution for design, verification and implementation of complex analog and mixed-signal designs, differentiated by the tight integration between the underlying technologies.”

With the advent of Galaxy, is it now safe to say that designers can finally design what they wish, including concepts and flows? Well, the answer’s not yet there! However, Mehndiratta did touch upon Cadence’s solution that is built upon decades of experience in this area and a strong eco-system made up of partners, third-party providers and foundries. Virtuoso, he added, is the most complete eco-system for designing ICs; not only with its inherent flow, but also because of its linkages to multiple tools inside and outside of Cadence.

“For many years, we have provided a consistent front-to-back flow, and over that time we have learned much about what customers need to do their designs efficiently. It is that knowledge base that we leveraged to accelerate productivity with the 6.1 release a couple of years back,” he added.

If that is the case, why has it taken so long for a first modern-era mixed-signal implementation solution to be in place?

He referred to Cadence’s next-generation Virtuoso 6.1, introduced in November 2006 and said to be the first modern and most complete custom design solution released natively on the OA database. The productivity benefits are significant. RFIC Solutions Inc., a third-party intellectual property and design service provider, is said to have increased productivity two-fold by adopting the Cadence Virtuoso custom design platform.

Likewise, INSIDE Contactless, a fabless company and leader in contactless technology providing high-performance chipsets for secure, fast and reliable transactions with electronic identification, saved 20 percent in development time by adopting Cadence Virtuoso UltraSim Full-Chip Simulator, a component of Virtuoso Multi-Mode Simulation with a high-performance digital-solver technology, for the verification of its current and next-generation contactless and Near Field Communication (NFC) system-on-chip (SoC) designs.

He noted: “Specifically, mixed-signal design is evolutionary, not revolutionary. The concept of mixed-signal design isn’t new. People have been designing in this manner for 15+ years. What is new is the more holistic approach being taken by designers developing mixed-signal circuits. The once clear lines between analog and digital design are blurring, and now the idea of “mixed-signal” is being architected in right from the beginning.

“That is why Cadence’s AMS Designer covers transistor to system level design with a single simulation solution for complete verification. It is why Cadence has combined the power of its leading implementation platforms (Virtuoso and Encounter) to handle the implementation of mixed-signal designs.”

Given that Synopsys’ Galaxy Custom Designer can provide a unified solution for custom and digital designs, thereby enhancing designer efficiency, how will it change/affect designing, and the EDA landscape?

Mehndiratta pointed out that Cadence had defined a unified solution long ago. “Our industry leadership in this area, and Synopsys’ mimicking of that solution, are testaments to Cadence’s vision. Competition is good for all industries; the end-customer usually benefits. You can count on Cadence to not only remain competitive, but also retain our industry leadership in custom/mixed-signal design.”

Importance of open architecture
Let us also look at the importance of open architecture that natively supports interoperable PDKs.

Cadence also believes in open architectures. Its Design Framework II was built as an open architecture, and that is why so many companies (30+) have connected to it to form a larger ecosystem. The Industry Standard Framework, by contrast, has been tried and has failed, the company maintains.

Mehndiratta said: “The reason it was a failure is the same as for interoperable PDKs. Building frameworks and PDKs based on a ‘lowest common denominator’ principle does not provide the most optimized design flow. Instead, you are left with systems that try to please everyone and in the end are rejected as bloated beasts retarding the progress of design.”

Finally, how does Cadence propose to address the Galaxy challenge?

As expected, Cadence hopes to continue to provide customers and partners with a framework in which they can build their tools into the Virtuoso design flow in the most optimized way possible.

It also plans to do this by providing its proven, industry-standard Pcell technology, which takes advantage of the key features in Cadence’s design flow, thereby allowing for fast and productive design today and in the future.
