Archive for the ‘design challenges’ Category

Accelerating EDA innovation through SoC design methodology convergence

September 26, 2014

According to Dr. Walden C. Rhines, chairman and CEO of Mentor Graphics Corp., verification has to improve and change every year just to keep up with rapidly changing semiconductor technology. Fortunately, the innovations are running ahead of the technology, and there is no fundamental reason why we cannot adequately verify the most complex chips and systems of the future. He was speaking at the recently held DVCON India 2014 in Bangalore.

DVCON India 2014.

A design engineer's project time spent on design shrank by 15 percent between 2007 and 2014, while the time spent on verification grew by 17 percent over the same period. At this rate, in about 40 years, all of a designer's time will be devoted to verification. At the current rate, there is almost no chance of getting a single-gate design correct on the first pass!

Looking at the crossover of verification engineers vs. design engineers, the number of design engineers is growing at a CAGR of 4.55 percent, while the number of verification engineers is growing at a CAGR of 12.62 percent.
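
To make the crossover arithmetic concrete, here is a small Python sketch that simply compounds the two CAGRs. The starting headcounts are placeholders I have assumed for illustration; they are not figures from the presentation.

```python
# Illustrative sketch of the crossover implied by the CAGRs above.
# The 2007 starting headcounts (100 designers, 70 verification engineers)
# are assumed placeholders, not figures from the presentation.

DESIGN_CAGR = 0.0455   # design engineers, 4.55 percent per year
VERIF_CAGR = 0.1262    # verification engineers, 12.62 percent per year

designers, verifiers = 100.0, 70.0   # assumed 2007 baseline
year = 2007
while verifiers < designers:
    designers *= 1 + DESIGN_CAGR
    verifiers *= 1 + VERIF_CAGR
    year += 1

print(f"With these assumptions, verifiers overtake designers around {year}.")
```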

On-time completion has remained roughly constant: looking at non-FPGA projects' schedule trends, 67 percent of projects were behind schedule in 2007, 66 percent in 2010, 67 percent in 2012, and 59 percent in 2014. Meanwhile, the average number of embedded processors per design has risen from 1.12 to 4.05.

Macro trends
Looking at the macro trends, there has been standardization of verification languages. SystemVerilog is the only verification language growing. Now, interestingly, India leads the world in SystemVerilog adoption. It is also remarkable that the industry converged on IEEE 1800. SystemVerilog is now mainstream.

There has been standardization in base class libraries as well. UVM use grew 56 percent between 2012 and 2014, and a further 13 percent growth is projected over the next year. Again, India leads the world in UVM adoption.

The second macro trend is standardization of the SoC verification flow. It is emerging from ad hoc approaches to systematic processes. The verification paradox is: a good verification process lets you get the most out of best-in-class verification tools.

The goal of unit-level checking is to verify that the functionality is correct for each IP, while achieving high coverage. Use of advanced verification techniques has also increased from 2007 to 2014.

Next, the goal of connectivity checking is to ensure that the IP blocks are connected correctly, a common goal with IP integration and data path checking.
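
As a rough illustration of what connectivity checking boils down to, here is a toy Python sketch that compares extracted top-level connections against a golden wiring spec; the block and port names are invented, and this is not any particular tool's flow.

```python
# Toy connectivity check: compare the connections extracted from the
# integrated SoC against a golden wiring spec. Block and port names
# are invented for illustration.

golden_spec = {
    ("cpu.axi_m0", "interconnect.s0"),
    ("interconnect.m0", "ddr_ctrl.axi_s"),
    ("cpu.irq_in", "intc.irq_out"),
}

extracted = {
    ("cpu.axi_m0", "interconnect.s0"),
    ("interconnect.m0", "ddr_ctrl.axi_s"),
    # cpu.irq_in left unconnected to mimic an integration bug
}

missing = golden_spec - extracted
unexpected = extracted - golden_spec

for conn in sorted(missing):
    print("MISSING   :", " <-> ".join(conn))
for conn in sorted(unexpected):
    print("UNEXPECTED:", " <-> ".join(conn))

print("Connectivity check", "PASSED" if not (missing or unexpected) else "FAILED")
```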

The goal of system-level checking is to verify performance, power and overall SoC functionality. Also, there are SoC 'features' that need to be verified.

A third macro trend is the use of coverage and power metrics across all aspects of verification. The Unified Coverage Interoperability Standard (UCIS) was announced by Accellera at DAC 2012. Standards accelerate EDA innovation!

The fourth trend is active power management. Low-power design requires multiple verification approaches. Trends in power management verification include Hypervisor/OS control of power management, application-level power management, operation in each system power state, interactions between power domains, hardware power control sequence generation, transitions between system power states, power domain state reset/restoration, and power domain power-down/power-up.
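
To make one item on that list concrete, here is a small Python sketch of checking transitions between system power states. The state names and the legal-transition table are assumptions made for the example, not material from the talk.

```python
# Sketch of one item from the list above: checking that only legal
# transitions between system power states occur. The state names and
# legal-transition table are assumptions made for this example.

LEGAL_TRANSITIONS = {
    "RUN":   {"IDLE", "SLEEP"},
    "IDLE":  {"RUN", "SLEEP"},
    "SLEEP": {"IDLE"},   # assumed: must wake through IDLE, not straight to RUN
}

def check_power_trace(trace):
    """Return the list of (from_state, to_state) violations in an observed trace."""
    violations = []
    for prev, curr in zip(trace, trace[1:]):
        if curr not in LEGAL_TRANSITIONS.get(prev, set()):
            violations.append((prev, curr))
    return violations

trace = ["RUN", "IDLE", "SLEEP", "RUN"]   # SLEEP -> RUN is illegal here
print(check_power_trace(trace))           # [('SLEEP', 'RUN')]
```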

Macro enablers in verification
Looking at the macro enablers in verification, there are the intelligent testbench, multi-engine verification platforms, and application-specific formal. Intelligent testbench technology accelerates coverage closure. It has also led to the emergence of intelligent software-driven verification.
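
The essence of intelligent testbench technology, steering stimulus toward coverage holes rather than sampling uniformly at random, can be sketched in a few lines of Python. The coverage bins below are invented purely for illustration.

```python
# Sketch of the idea behind an intelligent testbench: bias stimulus
# toward coverage bins that have not been hit yet, instead of sampling
# uniformly at random. The bins are invented purely for illustration.

import random

bins = {f"burst_len_{n}": 0 for n in (1, 2, 4, 8, 16)}

def pick_stimulus():
    # Prefer the bins with the fewest hits so far (the "intelligence").
    lowest = min(bins.values())
    candidates = [b for b, hits in bins.items() if hits == lowest]
    return random.choice(candidates)

for _ in range(50):
    bins[pick_stimulus()] += 1

coverage = sum(hits > 0 for hits in bins.values()) / len(bins)
print(f"functional coverage: {coverage:.0%}", bins)
```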

Embedded software headcount surges with every node. The slowdown in clock-speed scaling limits further gains in simulation performance. Growing at over 30 percent CAGR from 2010 to 2014, emulation is the fastest-growing segment of EDA.

As for system-level checking, as design sizes increase, the use of emulation goes up while FPGA prototyping goes down. Modern emulation performance makes virtual debug fast. Virtual stimulus turns the emulator into a server and moves it from the lab to the datacenter, thereby delivering more productivity, flexibility and reliability. Effective 100MHz embedded software debug makes a virtual prototype behave like real silicon. Integrated simulation/emulation/software verification environments have now emerged.

Lastly, for application-specific formal, larger designs use more formal verification. Application-specific formal includes checks such as clock domain crossing verification.

Ph.D. candidates in the VLSI industry! Is enough being done?


“Fine art is that in which the hand, the head, and the heart of man go together.” – John Ruskin.

“Great men’s honor ought always to be measured by the methods they made use of in attaining it.” – François Duc De La Rochefoucauld.

The 26th International Conference on VLSI Design 2013 is starting tomorrow at Hyatt Regency, Pune. Over the years, it has served as a forum for VLSI folks to discuss topics related to VLSI design, EDA, embedded systems, etc. The theme for the VLSI and embedded systems conference is green technology.

That brings me to a point raised by one reader of this blog: what is the future of Ph.D. candidates in the VLSI industry? First, do not believe it when you are told that, as a Ph.D., you can only go into academia. You can certainly switch over to R&D at the various VLSI companies! Or, you can start on your own by developing something noteworthy!

As for the current scenario, especially in India, students, and indeed Ph.D. holders, should seriously consider developing projects that are useful in India and globally. It seems all too easy for folks to join some large MNC in India or overseas and consider their job done!

For some strange reason, semiconductor/VLSI development seems to have remained on the back burner in India! I was surprised, on visiting a center in Bangalore, to find students (actually, some Ph.D. holders) working on projects that may never see the light of day! That leads to the question: are the tutors guiding them enough? Do we even have systems in place that back development?

Having spent a long time in the Far East, I have seen young Chinese and Taiwanese, Korean and Japanese men and women take to VLSI earnestly. How did they manage to do that? Mainly, by starting their own companies and developing some product!

Now, this is something not yet evident in India! Has anyone else asked this question? And, can the Indian VLSI community make this happen? It should not be very difficult, if the head, hand and heart are there in the deed!

As John Ruskin says, “Fine art is that in which the hand, the head, and the heart of man go together.”

François Duc De La Rochefoucauld says, "Great men's honor ought always to be measured by the methods they made use of in attaining it."

Hope these words make sense! Developing and designing solutions is a fine art in which the hand, the head and the heart must be in sync. And, if you have really developed a solution or a product, what were the methods you used to attain it? Answering these two questions is tough, but the answers really lie within us!

My question remains: do students (in India) really spend time developing projects, or do they simply copy or buy them?

Coming back to the VLSI conference, this year's program will also play host to the 4th IEEE International Workshop on Reliability Aware System Design and Test (RASDAT). There will be discussions around topics such as design-for-test, fault-tolerant microarchitecture, low-power test, reliability of CMOS circuits, design for reliability, dependability and verifiability, etc.

A semiconductor company will likely be introducing a portable and affordable analog design kit, so students will no longer be required to go to expensive labs to develop projects. There should be plenty of simulation tools, online course material, community support, lab material, etc. to use with the analog design kit. There should be a string of announcements too, so let's wait for the event to start!

How Intel manages IT through downturn — Server and data center optimization!

September 15, 2009

Ever wondered how Intel is managing IT through the downturn — Server and data center optimization? According to Kenny Sng, data center engineering manager, Intel Technology Asia Pte Ltd, there are three key things that Intel IT does. These are:

• Internal efficiencies are critical in freeing up resources and reducing operational costs.

• Server refresh is a key strategy to ensure IT runs efficiently.

• Intel continues to look at innovation in data center (DC) operations to reduce running costs.

Server and data center optimization? Intel's Kenny Sng, data center engineering manager, making a point!

How can IT make a difference?

* Drive employee productivity — by way of mobile client refresh

* Drive business productivity

* Continue IT efficiencies — by way of data center and server refresh

Intel data center profile

Intel has four major groups currently driving individual data center requirements, abbreviated DOME (Design, Office, Manufacturing, Enterprise).

* Design (D): Design computing supporting the chip design community; this group has most of the servers in Intel.

* Office (O): General-purpose computing supporting typical IT and customer services.

* Manufacturing (M): Manufacturing computing supporting fabrication and assembly (FAB/ATM).

* Enterprise (E): Enterprise applications supporting eBiz and ERP.

About 80 percent of Intel's servers fall in the Design (D) category; the remaining 20 percent are spread across the Office (O), Manufacturing (M) and Enterprise (E) categories.

Intel IT’s approach to data center optimization

Intel’s approach is very simple — standardize, improve and optimize.

Standardize

* Supply and demand forecasting

* Processes and design specs

* Overall data center structure

All of this enables IT consolidation, prevents unnecessary spending and ensures consistency in the overall data center structure.

Improve

* Batch processing pools via grid computing (DCV) – (D)

* Virtualization (DCU) – (O) & (E)

* Replace single-core servers with quad-core servers

* Information Lifecycle Management

* Intel “Green” data center initiatives

* Containerized Data Centers

These measures reduce server spending and storage/hardware expenses, contain costs (network, power, space), simplify the environment and improve energy efficiency by at least 6x.
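
As a back-of-the-envelope sketch of the server-refresh arithmetic behind such claims, consider the following; the consolidation ratio and power numbers are hypothetical placeholders, not Intel's figures.

```python
# Back-of-the-envelope server-refresh arithmetic. All numbers are
# hypothetical placeholders chosen for illustration, not Intel figures.

old_servers = 1000
consolidation_ratio = 8               # assume one new quad-core box replaces 8 old ones
old_power_w, new_power_w = 400, 450   # assumed average draw per server, in watts

new_servers = old_servers / consolidation_ratio
old_kw = old_servers * old_power_w / 1000
new_kw = new_servers * new_power_w / 1000

print(f"servers: {old_servers} -> {new_servers:.0f}")
print(f"power:   {old_kw:.0f} kW -> {new_kw:.0f} kW "
      f"(~{old_kw / new_kw:.1f}x better energy efficiency for the same workload)")
```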

Optimize

* Close inefficient and unnecessary data centers

* Assure batch and virtualized servers are in optimal data center locations

What do these do? One, maximize data center utilization in all locations, and two, maximize server asset utilization across the world.

Synopsys’ Galaxy Custom Designer tackles analog mixed signal (AMS) challenges

October 11, 2008

Synopsys Inc. recently unveiled its Galaxy Custom Designer solution, the industry’s first modern-era mixed-signal implementation solution. Architected for productivity, the Galaxy Custom Designer leverages Synopsys’ Galaxy Design Platform to provide a unified solution for custom and digital designs, thereby enhancing designer efficiency.

Galaxy Custom Designer delivers a familiar user interface while integrating a common use model for simulation, analysis, parasitic extraction and physical verification. It is the first-ever implementation solution built natively on the OpenAccess database for legacy designs as well as a new componentized infrastructure offering unprecedented openness and interoperability with process design kits (PDKs) from leading foundries.

Subhash Bal, Country Director, Synopsys (India) EDA Software Pvt. Ltd, highlighted three key features: One, it is architected for productivity. Two, it is a complete custom design solution. And three, it is based on an open environment.

The key question: why the Galaxy Custom Designer, and why now? Simple! The modern AMS era is characterized by interdependent custom and digital functions; analog IP is now mainstream; and there is the phenomenon of increased embedded memory. Current solutions are said to possess a limited horizon. Hence, Galaxy!

A new solution is said to be the need of the hour: one that is complete (verification and implementation, with common models, extraction and analysis), because a re-spin is not an option. Next, it must offer unified implementation that addresses both custom and cell-based needs, since custom and cell-based functions are highly interdependent. Also, close to 100 percent of designs today are AMS.

Finally, the solution has to be open and portable, which accelerates the design cycle and IP portability. Quicker access to process details is also a must! Architected for productivity, Galaxy Custom Designer has a familiar look and feel and works with fewer clicks (three as against six)!

Galaxy Custom Designer's Schematic Editor has productivity enhancers such as real-time connectivity, on-canvas editing and smart connect. Similarly, productivity enhancers in its Layout Editor include push-button DRC and extraction, standard TCL and Python PCells, and automatic via and guard-ring generation. The WaveView Analyzer offers features such as high capacity and performance, a complex analysis toolbox, and automated TCL verification scripting.
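
For readers unfamiliar with PCells, the idea behind TCL and Python PCells is a parameterized cell: a script that turns a handful of parameters into layout geometry. Below is a plain, self-contained Python sketch of the concept; it does not use the actual PCell or OpenAccess APIs.

```python
# Conceptual sketch of a parameterized cell (PCell): a script that turns
# parameters into layout geometry. Plain Python for illustration only;
# this is not the actual TCL/Python PCell or OpenAccess API.

def resistor_pcell(width_um: float, length_um: float, segments: int = 1):
    """Return rectangles (layer, x0, y0, x1, y1) for a simple multi-segment resistor."""
    rects = []
    y = 0.0
    for _ in range(segments):
        rects.append(("poly", 0.0, y, length_um, y + width_um))
        y += 2 * width_um   # leave a gap of one width between segments
    return rects

for rect in resistor_pcell(width_um=0.5, length_um=10.0, segments=3):
    print(rect)
```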

A unified platform for cell-based and custom design means superior ease of use, performance, capacity and data integrity. Open and portable, it facilitates plug-and-play IP as well as a standards-based PDK, which means one PDK for all tools, added Synopsys' Bal.

The IPL (Interoperable PDK Libraries) is an industry alliance established in April 2007 to collaborate on the creation and promotion of interoperable process design kit (PDK) standards.

Wipro Technologies has been among the early users of the Galaxy Custom Designer. I also managed to speak with Anand Valavi, Group Head, Analog Mixed-Signal Group, Wipro Technologies.

Valavi said: "From an EDA tool perspective, in the AMS area, productivity is a lot lower than in the digital area. The per-transistor productivity of a digital designer is several orders of magnitude higher than that of an analog designer."

The methodology is definitely not as evolved as in the digital area. According to him, "This productivity will now increase for the analog and AMS areas, and people can do much more complex designs in a shorter period of time. There has been a lot of integration."

On the salient features or enhancements, he said there have been reasonably good improvements in several areas. One, there is now an alternative, and that brings a lot of advantages. Two, when you take it down to the next level, there are several other technical benefits.

Valavi added that an integrated environment definitely improves productivity. There are other, smaller things too: when you start using it, there are features that help technical users, for example, on-canvas editing. Also, collaborative work on the iPDK libraries will improve the effectiveness of the chip designs that are churned out. "It will surely give people working in the analog design area a choice," he noted.

Mentor on EDA trends and solar/PV

September 24, 2008

This is a continuation of my recent discussion with Joseph Sawicki, vice president & GM, Design to Silicon Division, Mentor Graphics.

There have been whispers that the EDA industry is presently lagging behind semiconductors and is in catch-up mode. "That's a matter of perspective. There are definitely unsolved challenges at 32nm and 22nm, but the reality is that we are still in the technology development stage," he says.

For EDA tools that address implementation and manufacturing issues (i.e., Mentor's design-to-silicon products), there are dependencies that cannot be fully resolved until the process technology has stabilized. Mentor Graphics is laying the groundwork for those challenges and working in concert with the process technology leaders to ensure that its products address all issues and are production-worthy before the process technology goes mainstream.

On the other hand, although Mentor's products are fully qualified for 45nm, there have only been a handful of tapeouts at that node, so for the majority of customers, Mentor is ahead of the curve.

On ESL and DFM as growth drivers
ESL and DFM are said to be the new growth drivers. Sawicki adds: "As Wally Rhines has said in his public presentations, system-level design and IC implementation are the stages of development where there are the most challenges, and therefore the most opportunities. To continue the traditional growth spiral that the electronics industry has enjoyed as a result of device scaling, we need more sophisticated EDA solutions to deal with both of these challenges."

ESL is responding to the growth of design complexity and the need for earlier and more thorough design verification, including low power characteristics, and software integration.

The Design-to-Silicon division is addressing the issues of IC implementation, which result not only from the increase in design complexity and device sizes, but also from the increasing sensitivity of the manufacturing process to physical design decisions, a phenomenon often referred to as "manufacturing variability."

Although the term “Design-For-Manufacturing” reflects the need to consider manufacturability in design and to optimize for both functional and parametric yield, it is important to emphasize that DFM is not simply an additional tool or discrete step in the design process, but rather an integration of manufacturing process information throughout the IC implementation flow.

With single threading, tools can no longer handle designs of over 100 million gates; and of course, at 45nm, a design can reach 100 million gates. Rewriting tools to exploit parallelism is another issue that is slowing things down. It would be interesting to see how Mentor is handling this.

According to Sawicki, Mentor has incorporated sophisticated multi-threading and multi-processing technologies into all of its performance-sensitive applications, from place-and-route, through physical verification, resolution enhancement and testing.

He says, "Our tools have a track record of impressive and consistent performance and scalability improvements, which is why we continue to lead the industry in performance."

In addition to merely adding multi-threading and support for multi-core processors, Calibre products have a robust workflow management environment that automatically distributes the processing workload in the most efficient manner across any number of available clustered computing nodes.

Mentor's Olympus-SoC place-and-route is inherently scalable due to its advanced architecture, which includes an extremely efficient graph representation for timing information and a very concise memory footprint. In addition, all the engines within Olympus-SoC can take advantage of multi-threaded and multi-core processors for high performance. These features enable Olympus-SoC to handle designs of 100M+ gates in flat mode without excessive turnaround time.

Mentor's ATPG tools are also designed to operate in multiprocessing mode across multiple computing platforms to reduce test pattern generation time. In addition, Mentor's test pattern compression technology reduces test pattern volume and test time, making it feasible to fully test 100M-gate devices and maintain product quality without an explosion in test cost.
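
The common thread across these tools is distributing an embarrassingly parallel workload across cores or machines. Here is a generic Python sketch of that pattern, purely illustrative and not Mentor's implementation.

```python
# Generic sketch of the multiprocessing idea described above: farm an
# independent workload (dummy "checks" on chip partitions) out to worker
# processes. Illustrative only; this is not Mentor's implementation.

from multiprocessing import Pool

def run_check(partition_id):
    # Stand-in for a compute-heavy per-partition job (DRC, ATPG, ...).
    flagged = sum(1 for n in range(100_000) if (n * partition_id) % 9973 == 0)
    return partition_id, flagged

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        for pid, flagged in pool.map(run_check, range(16)):
            print(f"partition {pid:2d}: {flagged} flagged items")
```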

With EDA starting to move up to the system level, will this make EDA less dependent on the semiconductor world?

Sawicki agrees that there are challenges at both the front end and back end of the electronic products design and manufacturing life cycle. Both of these opportunities are growing. In addition, developments like multi-level (3D) die packaging, through-silicon via (TSV) structures and other non-traditional techniques for device scaling are pushing system and silicon design issues closer together.

Reaching the 22nm node will require highly compute intensive EDA techniques for physical design to compensate for limitations in the manufacturing process. Beyond that, we could see a major shift to new materials and manufacturing techniques that would open new green fields for EDA in the IC implementation flow.

EDA going forward
How does Mentor see the EDA industry evolving, going forward?

Sawicki adds: “There are three key trends to watch. Firstly, for design to remain affordable at the leading edge, we need to enable radical increases in productivity. Electronic System Level (ESL) design is the key here, allowing designers to move to a new level of abstraction for both design and verification.

“Secondly, the challenges of manufacturing a well-yielding and reliable device as we move to 22nm will require a far more sophisticated physical implementation environment—one that accounts for physical effects in the design loop, and accounts for manufacturing variability in its optimization routines.

“Finally, the manufacturing challenges also open significant opportunity for EDA in the manufacturing space. A great example of this is the September 17, 2008 announcement we did with IBM on a joint development program to enable manufacturing at the 22nm node.”

Finally, given the roles already defined by Magma and Synopsys in solar, is there an opportunity for EDA in solar/PV?

According to Sawicki, as photovoltaic devices have very simple and regular structures, most of the opportunity for EDA is not in logic design tools, but in materials science, transistor-level device modeling, and manufacturing efficiencies, with a focus on conversion efficiency and manufacturing cost reduction.

EDA’s role in solar will be in the newer areas related to Design-for-Manufacturing and other manufacturing optimizations, he concludes.

Our last discussion on DFM will follow in a later blog post!
