Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle goes into verification. Only one in three SoCs achieves first-silicon success.
Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors, and three out of four re-use existing IPs. Almost all embedded processor IPs have power controllability, and almost all SoCs have multiple asynchronous clock domains.
About 75 percent of designs are under 20 million gates. Formal checking is seeing a significant increase in adoption. The average number of tests performed has grown exponentially, and regression runs now span days or even weeks. Hardware emulation and FPGA prototyping are rising sharply. The number of verification engineers involved has increased significantly, and many high-level verification languages (HVLs) and methodologies are now available.
Verification challenges include unexpected conflicts in accessing shared resources, and complexities arising from interactions between standalone systems. Next, there are arbitration priority issues and access deadlocks, as well as exception-handling priority conflicts. There are issues related to hardware/software sequencing, and to long loops and unoptimized code segments. Leakage power management and thermal management also pose problems.
Performance and system power management need to be verified: multiple power regions are turned ON and OFF, and multiple clocks are gated ON and OFF. Then there are asynchronous clock domain crossings, and issues related to protocol compliance for standard interfaces, system stability and component reliability. Other challenges include voltage level translators and isolation cells.
Where are we now? Current techniques include clock gating, power gating with or without retention, multi-threshold (multi-Vt) transistors, multi-supply multi-voltage (MSMV) design, DVFS, logic optimization, thermal compensation, 2D/3D stacking, and fab-process and substrate-level bias control.
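Of these, DVFS is the easiest to quantify. Under the standard first-order CMOS dynamic power relation P ≈ αCV²f, scaling voltage and frequency together pays off quadratically in V. A minimal sketch (the operating-point numbers are illustrative assumptions, not figures from the talk):

```python
def dynamic_power(alpha, c_eff, vdd, freq):
    """First-order CMOS dynamic power: P = alpha * C * V^2 * f."""
    return alpha * c_eff * vdd ** 2 * freq

# Nominal operating point (illustrative values only).
p_nom = dynamic_power(alpha=0.1, c_eff=1e-9, vdd=1.0, freq=1e9)

# DVFS: halve the frequency and drop the supply to 0.8 V.
p_dvfs = dynamic_power(alpha=0.1, c_eff=1e-9, vdd=0.8, freq=0.5e9)

print(f"Power reduced to {p_dvfs / p_nom:.0%} of nominal")  # ~32%
```

Halving the clock alone would save 50 percent; the accompanying voltage drop takes it to roughly a third, which is why DVFS features in nearly every low power scheme above.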
So, what’s needed? Low power methods must not impact performance. Designs need careful partitioning, clock trees must be optimized, and crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization, along with duplicate registers and branch prediction optimization. Finally, there should be a big.LITTLE processor approach.
Present verification trends and methodologies cover clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and power information formats (CPF or UPF). Low-power verification addresses both power-down and power-up behavior; on power-up, the behavioral processes are re-enabled for evaluation.
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open source implementations without much documentation, and the multiple implementation options can be confusing. In some cases, no open source tools are available, and being open source, tech support is limited.
Power-aware simulation steps perform register/latch recognition from the RTL design, identify power elements and power control signals, and support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
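To illustrate what that buys over plain RTL simulation, here is a toy Python model of power-aware register behavior (entirely hypothetical, not any vendor's engine): non-retention registers corrupt on power-down and must be reset on power-up, while retention registers restore their saved state.

```python
class PowerAwareReg:
    """Toy register: state corrupts on power-down unless retained."""
    def __init__(self, retention=False):
        self.retention = retention
        self.value = 0
        self.shadow = None  # retention save area

    def power_down(self):
        if self.retention:
            self.shadow = self.value  # retention flop saves state
        self.value = 'X'              # contents corrupt while off

    def power_up(self):
        # Retained registers restore; others stay X until reset.
        self.value = self.shadow if self.retention else 'X'

plain, retained = PowerAwareReg(), PowerAwareReg(retention=True)
plain.value = retained.value = 0xAB
for reg in (plain, retained):
    reg.power_down()
    reg.power_up()
print(plain.value, hex(retained.value))  # X 0xab
```

A power-aware simulator applies this kind of corruption semantics automatically, driven by the UPF or CPF description rather than by hand-written models.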
Common pitfalls include wrapper-on-wrapper bugs (e.g., a Verilog + e wrapper + SystemVerilog stack), dependency on machine-generated functional coverage goals, and a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy because of a wrapper condition.
Tips and tricks
Some early planning helps. Certain steps need to be completed: meeting code coverage targets, meeting functional coverage targets, completing targeted checker coverage, correlating the functional coverage and checker coverage lists, and reviewing all known bugs.
Tips and tricks include bridging the gap between the design and verification languages, using minimal wrappers to avoid wrapper-level bugs, reviewing the coverage goals thoroughly, and better interaction between design and verification engineers. Running on basic EDA tool versions also lowers costs.
Flip-Chip is a chip packaging technique in which the active area of the chip is ‘flipped over’ to face downward, instead of facing up and being bonded to the package leads with wires from the outside edges of the chip.
Any surface area of the Flip-Chip can be used for interconnection, typically through metal bumps. These bumps are soldered onto the package and underfilled with epoxy. Flip-Chip allows a large number of interconnects with shorter distances than wire bonding, which greatly reduces inductance.
According to Lionel Cadix, market and technology analyst, Yole Developpement, France, metal bumps can be made of solder (tin, tin-lead or lead-free alloys), copper, gold, and copper-tin or gold-tin alloys. The package substrates are epoxy based (organic substrates), ceramic based, copper based (leadframe substrates), and silicon or glass based.
In the period 2010-2018, Flip-Chip will likely grow at a CAGR of 19 percent. In 2012, laptop and desktop PCs were the top end products using Flip-Chip, representing 50 percent of the Flip-Chip market by end product, with more than 6.2 million wafer starts. PCs are followed by smart TVs and LCD TVs (for LCD drivers), smartphones and high performance computers.
The Flip-Chip market in 2012 is around $20 billion, with approximately 20 billion units sold in 12-inch-equivalent wafers. Taiwan is so far the no. 1 producer, and at least 50 percent of Flip-Chip devices get into end products. By 2018, the Flip-Chip market should grow to $35 billion, selling 68 billion units.
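A quick sanity check on those projections: the implied compound annual growth rates fall straight out of the figures quoted above (a sketch; the 19 percent CAGR cited earlier presumably tracks wafer starts rather than revenue):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Figures quoted above: $20B / 20B units in 2012 -> $35B / 68B units in 2018.
print(f"Revenue CAGR: {cagr(20, 35, 6):.1%}")  # ~9.8%
print(f"Unit CAGR:    {cagr(20, 68, 6):.1%}")  # ~22.6%
```

Units growing far faster than revenue is consistent with the steady price erosion typical of maturing packaging technologies.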
Applications and market focus
Looking at the applications and market focus, Flip-Chip technology is already present in a wide range of applications, from high volume/consumer applications to low volume/high end applications. All these applications have their own requirements, specifications and challenges!
In low volumes, these include military and aerospace, medical devices, automobiles, HPC, servers, networks, base stations, etc. In high volumes, Flip-Chip is present in set-top boxes, game stations, smart TVs/displays, desktops/laptops and smartphones/tablets. Flip-Chip applications are also found in imaging, logic 2D SoCs, HB-LEDs, RF, power, analog and mixed-signal, stacked memories, and logic 3D-SiP/SoCs.
In computing applications, for instance, the Intel Core i5 is the first MCM combining a 77mm2 CPU with a 115mm2 GPU in a 37.5mm-side package. Solder bumps with a pitch of 185μm are used for the silicon-to-substrate (first-level) interconnect. This MCM configuration is suitable for office applications, with relatively modest processing demands. For mobile/wireless applications, there are opportunities for MEMS in smartphones/feature phones. Similarly, Flip-Chip is available for consumer applications.
For microbumping in interposers for FPGAs, the focus is on the Xilinx Virtex-7 HT. Last year, Xilinx announced a single-layer, multi-chip silicon interposer for its 28nm 7 series FPGAs. Key features include two million logic cells for a high level of computational performance and high bandwidth; four slices processed in 28nm; a 25 x 31mm, 100μm-thick silicon interposer; 45μm-pitch microbumps and 10μm TSVs; and a 35 x 35mm BGA with 180μm-pitch C4 bumps.
Even if the infrastructure had been ready for full 3D stacking, the 2.5D interposer would still have been the right choice for FPGAs, since the ’10,000 routing connections’ would have used up valuable chip area, making the chip slices larger and more costly than they are now. The Virtex-7 HT will consist of three FPGA slices and two 28Gbps SerDes chips on an interposer capable of operating at 2.8 Tb/sec.
Sensor fusion encompasses hardware and software elements. There can be many data sources, such as MEMS, non-MEMS, etc.
The obvious question: why sensor fusion? Tony Massimini, chief of technology, Semico Research Corp., USA, said that it is useful for power savings, and that the initial reason was to improve the accuracy and reliability of inertial measurement units (IMUs). Looking at the progression from sensors to sensor fusion, there were first simple interrupts such as screen orientation, tap detection, fall detection, and so on. IMUs then became available for location-based services (LBS) and navigation, and are now being combined with other data sources.
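The textbook illustration of IMU-style fusion is a complementary filter: integrate the gyroscope for a fast but drifting angle, then correct it with the accelerometer's noisy but drift-free tilt estimate. A generic sketch, not any particular vendor's algorithm (the 0.98 blend weight is a typical assumed value):

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, k=0.98):
    """Fuse gyro integration (fast, drifts) with accel tilt (noisy, stable)."""
    gyro_angle = angle + gyro_rate * dt         # integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)  # tilt from the gravity vector
    return k * gyro_angle + (1 - k) * accel_angle

# One 10 ms update: rotating at 0.5 rad/s, gravity mostly along z.
angle = complementary_filter(angle=0.0, gyro_rate=0.5,
                             accel_x=0.05, accel_z=0.99, dt=0.01)
print(f"fused angle: {angle:.4f} rad")
```

Dedicated sensor-fusion hardware exists largely so that loops like this can run continuously without waking the application processor, which is the power-saving argument Massimini makes.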
Sensor fusion enhances the user experience with portable devices. Growth is driven by smartphones, and competing devices such as tablets and notebooks (ultraportables) will add more features to keep up. Key growth markets today will provide the basis for future end use markets (see graph: systems with sensor fusion). The market will likely grow at a CAGR of 58.8 percent till 2016.
New end use markets and applications include areas such as gaming, HUD (heads-up display), sports, health and fitness, personal navigation, personal medical, context awareness, voice recognition, visual recognition, augmented reality and automation.
Sensor fusion is used to enhance the user experience. For instance, sensor data can be referenced to a 3D-axes frame of reference. Sensor fusion offers always-ON operation and low latency. You can also connect to external sensors — wearables for health and fitness. Life tagging is possible too, e.g., a photo and video library for context-aware services. There is also improved security with biometrics.
Summarizing the sensor fusion market: MEMS sensor ASPs continue to erode, the number of sensors is increasing, and MEMS sensors are improving, including hardware accelerators. There is interaction with the cloud for data, which enables application innovations. Finally, there are new end use markets.
SLIMbus is a multi-drop, time division multiplexed serial bus. It has one clock and one data line, with CMOS signalling and no analog PHY. It is targeted at low bandwidth connectivity between the AP/modem and audio/Bluetooth/haptic devices. SLIMbus was originally specified by the MIPI Alliance in 2007. Arasan’s total IP solution delivery demystifies the adoption of SLIMbus.
According to Ajay Jain, director, Mobile Connectivity Products, Arasan Chip Systems, the SLIMbus system overview includes a host component (e.g., apps processor), a device component (e.g., a broadband modem), and a SLIMbus device component (e.g., audio processor, Bluetooth modem). The logical implementation of SLIMbus system features is realized through devices within the SLIMbus IPs.
The AP/modem has the software infrastructure and an active manager device that manages the SLIMbus. Any component can have a framer device activated to drive the SLIMbus CLK. Each component can have one or more generic devices to buffer and transmit/receive audio and other data.
The physical layer enables TDM. The data line is NRZI encoded. The active framer can drive clock gears 1 to 10 for power management. Control and data are interleaved on the SLIMbus frames.
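NRZI (non-return-to-zero inverted) coding is simple to state: a 1-bit toggles the line level, a 0-bit holds it, so every run of ones keeps producing transitions. A minimal encoder/decoder pair as a generic sketch of the coding scheme (not the full SLIMbus PHY):

```python
def nrzi_encode(bits, level=1):
    """NRZI: a 1-bit toggles the line level, a 0-bit holds it."""
    levels = []
    for b in bits:
        if b:
            level ^= 1
        levels.append(level)
    return levels

def nrzi_decode(levels, initial=1):
    """Recover bits: a transition means 1, no transition means 0."""
    bits, prev = [], initial
    for lv in levels:
        bits.append(1 if lv != prev else 0)
        prev = lv
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert nrzi_decode(nrzi_encode(data)) == data
```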
As far as device evaluation and enumeration are concerned, each component initializes its devices in the correct order under the direction of the interface device. The active framer drives the SLIMbus CLK and the framing channels with default values. All components perform frame, superframe and message synchronization. All active devices report their presence and characteristics with broadcast messages. Arasan provides the software stack to perform SLIMbus enumeration.
The SLIMbus allows a finite set of channel rate multipliers (data segments per superframe). Given the SLIMbus CLK frequency, a channel rate multiplier is chosen to match the audio data rate. Other transfer protocols may be preferred in certain cases, e.g., a pushed or pulled protocol when flow control is required. All transfer protocols are programmable through the Arasan software stack.
Each port-to-port connection needs to be mapped onto a SLIMbus data channel. Here, two-channel audio sits on SLIMbus data channels 6 and 7, and a subframe length of 32 slots is assumed. The SLIMbus is amazing, yet complex, with a finite set of parameters; Arasan’s IPs have addressed the low-level complexities of implementation.
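As a rough picture of that mapping, here is a toy TDM allocator (the slot offsets, segment lengths and interval are purely illustrative assumptions, not values computed from real SLIMbus channel parameters):

```python
SUBFRAME_SLOTS = 32  # subframe length assumed above

def map_channel(subframe, name, offset, seg_len, interval):
    """Mark the slots a data channel's segments occupy in one subframe."""
    for start in range(offset, SUBFRAME_SLOTS, interval):
        for slot in range(start, min(start + seg_len, SUBFRAME_SLOTS)):
            subframe[slot] = name

subframe = ['CTRL'] * SUBFRAME_SLOTS  # control space by default
map_channel(subframe, 'CH6', offset=4, seg_len=2, interval=16)  # left audio
map_channel(subframe, 'CH7', offset=6, seg_len=2, interval=16)  # right audio
print(subframe)
```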
It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO, of Mentor Graphics. I chatted with him, trying to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, migrating to 3D FinFET transistors, and Moore’s Law getting to be “Moore Stress”!
Gigascale, gigahertz, gigacomplex chip design
First, I asked him to elaborate on how the implementation of chip design will evolve, with respect to gigascale design, gigahertz speeds and gigacomplex geometries.
He said: “Thanks to close co-operation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, ‘gigascale’ power dissipation is a bigger challenge than managing the complexity, but ‘system-level’ power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit.”
Functional verification is continually challenged by complexity, but there have been, and continue to be, many orders of magnitude of improvement in performance just from the adoption of emulation, intelligent test benches and formal methods, so this will not be a major limitation.
The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.
Is verification keeping up?
How is the innovation in verification keeping up with trends?
Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.
Emulators have more than made up the difference: they offer over four orders of magnitude faster performance than simulators, at about 0.005X the cost per cycle of simulation. The cost of power per year is more than one third the cost of hardware in a large simulation farm today, while emulation offers a 12X savings in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
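A back-of-envelope multiplication of those figures, taking the lower end of each range, shows why he is sanguine:

```python
# Lower-end cycle-throughput gains cited above, as powers of ten.
gains = {"emulation": 1e4, "formal": 1e3, "intelligent_tb": 1e2, "abstraction": 1e3}

headroom = 1.0
for g in gains.values():
    headroom *= g  # combined verification throughput headroom

# Verification complexity ~ (component count)^2, so take the square root.
growth = headroom ** 0.5
print(f"~{headroom:.0e} cycles of headroom -> ~{growth:.0e}x more components")
```

Roughly twelve orders of magnitude of combined headroom translates, under the square-law assumption, into room for about a million-fold growth in component count.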
By applying multiple engines and multiple abstraction levels to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification efforts from every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are keys to driving verification reuse to match the level of design reuse.
Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.
Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?
According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
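In spirit, those generated correctness checks amount to assertions over the power-control sequence. A hypothetical toy version in Python (not output from any actual tool): isolation must be enabled and retention state saved before the domain is powered off.

```python
def check_power_down(trace):
    """Assert iso-on and retention-save happen before every power-off."""
    iso_on = saved = False
    for event in trace:
        if event == "ISO_ON":
            iso_on = True
        elif event == "RET_SAVE":
            saved = True
        elif event == "PWR_OFF":
            assert iso_on and saved, "power cut before isolation/retention!"
        elif event == "PWR_ON":
            iso_on = saved = False  # expect a fresh sequence next cycle
    return True

print(check_power_down(["ISO_ON", "RET_SAVE", "PWR_OFF", "PWR_ON"]))  # True
```

In a real flow the equivalent checks are typically expressed as SystemVerilog assertions and driven from the UPF power states rather than a hand-built event list.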
As designers proceed to the gate and transistor level, the accuracy of power estimation improves. That is why gate-level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical layout, designers traditionally were stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques that are built into the place and route tools.
How will 2013 turn out to be for the global semiconductor industry? Will there be growth for the global EDA industry? Importantly, how will the Indian semiconductor industry perform in 2013? I asked Jaswinder Ahuja, corporate VP and MD, Cadence Design Systems India, these questions.
Outlook for global semicon industry in 2013
First, what is the outlook for the global semiconductor industry in 2013? Ahuja said: “The long term outlook for the semiconductor industry remains positive, with mobility and cloud computing being the key drivers. The global economy is forecast to grow around 4 percent annually through 2016, according to an April 2012 report from the International Monetary Fund (IMF).
“In its June 2012 report, Gartner predicted growth in electronics and semiconductor industries to outpace that of the world GDP growth, at 5½ percent annually to approach $2 trillion for electronics and 6 percent annually for semiconductors through 2016. So, the semiconductor industry outlook remains very positive overall.
“In the near term, multiple challenges will need to be weathered with respect to the global economic climate, especially in European markets. The JP Morgan/GSA Semiconductor Index of Leading Indicators points to a soft semiconductor industry in 2013. However, there are a lot of new products in the mobile and tablet space that are driving demand, such as the iPhone 5, Microsoft Surface, and Samsung Galaxy S III.
“The China semiconductor space is emerging as a key market for semiconductor company revenue, and forecasts predict that it will show a rapid annual growth rate. The consolidation and M&A activities that we are seeing in the global semiconductor industry also indicate a positive outlook for the upcoming year.
“In India as well, the semiconductor industry will continue to see growth. The injection of funds and other support outlined in the National Policy on Electronics will provide an impetus to home-grown design and manufacturing, which should start gaining traction in 2013.”
Five trends for 2013
What would be the three or five trends likely to be visible in 2013? Ahuja said Cadence sees five big trends that will drive growth in the near and long term. These are: mobility, application driven design, video, cloud and security.
Probably the most pervasive change in electronics recently has been mobility. When we talk about mobility, it’s not just about smartphones or tablets, but any kind of device that is mobile. Within the mobile space, software applications help system manufacturers and vendors differentiate themselves and stand apart from the competition. The need to have apps on all kinds of devices is driving rapid growth, as well as placing new demands on EDA companies.
The entertainment industry will be the key driver for video, and as the year progresses, we will continue to see more and more products and solutions introduced to tap into the demand. For the semiconductor industry, video will drive growth both in the end consumer market (mobile platforms) and the enterprise space (networking industry).
In many ways, the backbone to mobility is the cloud. With its network servers and infrastructure, the cloud is what delivers much of the content and value to all of those mobile devices. Statistics show that we need one server for every 600 smart phones and one for every 120 tablets. So there is a big need for data centers which can provide support for all the computing and back-end operations.
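Those ratios make the arithmetic easy to run (the installed-base numbers below are hypothetical, chosen only to show the scale):

```python
def servers_needed(smartphones, tablets):
    """One server per 600 smartphones, one per 120 tablets (ratios above)."""
    return smartphones / 600 + tablets / 120

# e.g., a hypothetical base of 600M smartphones and 120M tablets:
print(f"{servers_needed(600e6, 120e6):,.0f} servers")  # 2,000,000
```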
Security of data in mobile devices and the cloud will continue to be a challenge in the near future. There will be renewed calls to develop products that can protect critical infrastructure and sensitive information from security breaches.
Too many new entrants have come into sapphire for the LED market with unrealistic capacity plans, and most underestimated the technical challenges! Prices are likely to remain low through 2013. Many new entrants will fail in 2013-2014, with rationalization through M&A, bankruptcy and attrition. In the long term, vertical integration is desirable to avoid margin stacking, said Eric Virey, senior market and technology analyst, LED Materials and Services, Yole Developpement. He was presenting a seminar on how new sapphire applications can trigger an investment cycle.
According to him, the adoption of CFLs and LEDs stretches the replacement cycle and cannibalizes lamp volume sales. As for LED manufacturing capacity, with respect to nitride MOCVD reactors, 2009 and late 2010 saw increases in Taiwan and Korea, driven by the LCD display market. The years 2010-2012 saw a phenomenal increase in China. Government subsidies, which should total more than $1.5 billion, are likely to build up epitaxy capacity in the mainland.
Currently there are ~110 companies with epitaxy capacity; many will likely disappear! The current excess MOCVD capacity will be fully absorbed by mid-2014, and MOCVD reactor installation will resume in mid-to-late 2013. The global MOCVD utilization rate is 61 percent, with wide variability between the leaders and tier-2 players in China. Q4-2012 LED sapphire consumption was worth 3.9 million two-inch-equivalent wafers per month.
As for companies in sapphire wafers, 130+ companies are involved in sapphire substrates (established or at the development stage), but fewer than 30 currently derive meaningful revenue from LED substrates. Capacity is ~80 percent higher than demand, and it could get worse in 2013! Prices are likely to remain low. Many new entrants will disappear, others will scale back, and a few will succeed.
Conditions for survival through 2013 include having a lot of cash, being qualified in the supply chain, and achieving <$4/mm cost (2” basis); serving other markets could be a plus. As for wafer price trends, finished wafers are following similar trends. The 6” is now offered for <$200, but the price can vary significantly based on specifications. Simulated 4” core cost structures are said to exist for various manufacturers.
It always gives me great pleasure chatting with Dr. Walden (Wally) C. Rhines, chairman and CEO of Mentor Graphics, and vice chairman of the EDA Consortium, USA. 2013 is just around the corner. What lies ahead for the global semiconductor industry is a question on everyone’s lips! How will the EDA industry do next year? For that matter, what should the Indian semiconductor industry look forward to next year?
Three trends for 2013
First, I asked Dr. Wally Rhines regarding the trends in the global semiconductor industry. He cited:
* Growth in communication ICs.
* Growth in the third dimension.
* Accelerated design activity at the leading edge.
Growth in communication ICs: On the macro level, silicon area shipments continue to grow gradually, as do semiconductor unit shipments. However, there’s a major shift in application segments from computing to communications. Communications used to be only one third the size of computing in terms of semiconductor usage.
Communications are expected to surpass computing in terms of semiconductor consumption by 2014 thanks to the rapid growth of wireless applications, the incorporation of computing into communications devices like smart phones and the addition of communications to computing devices like tablet computers.
Growth in the third dimension: Shrinking feature sizes and growing wafer diameters will continue to contribute to the annual 30 percent decrease in the average cost per transistor and average 72 percent unit growth of transistors, but they will do so at a diminished rate. Fortunately, other avenues are emerging that can help sustain the semiconductor industry’s remarkable rate of growth. One largely untapped opportunity is in the third dimension, i.e. growing vertically instead of shrinking in the XY plane.
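Those two rates nearly offset each other, which is worth a one-line check (a back-of-envelope consistency test, not a figure from Dr. Rhines):

```python
# 30% annual drop in cost per transistor, 72% annual growth in transistors shipped.
revenue_factor = (1 - 0.30) * (1 + 0.72)
print(f"implied revenue growth: {revenue_factor - 1:.1%} per year")  # ~20.4%
```

As scaling slows, both percentages shrink, which is precisely why the industry is hunting for growth in the third dimension.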
DRAM stacks of eight or more die are already possible, although they are still more expensive on a cost per bit basis compared to unstacked devices. Complex packaged systems made up of multiple heterogeneous die, memory stacked on logic and interposers to connect the die are evolving rapidly. Layers in the IC manufacturing process continue to increase as well.
Accelerated design activity at the leading edge: Another interesting trend is the recent surge in capital spending among foundries to add capacity at the leading edge. This wave of spending will result in excess capacity, at least initially, which may force foundries to lower prices to boost demand. In fact, capacity utilization data in the last few months shows a dramatic decline in utilization at 28/32nm and 22nm nodes, suggesting that excess capacity is already happening to an extent.
While differences in 28 and 20nm processes—such as double patterning—create challenges, the existing capital equipment is largely compatible with both processes. Such a high volume of wafers and the large available capacity will lead to increasingly aggressive wafer pricing over time. As a result, cost-effective wafers from foundries will encourage totally new designs that would not have been possible at today’s wafer cost.
Industry outlook 2013
So, how is the outlook for 2013 going to shape up? Dr. Rhines said: “After almost no growth in 2012, most analysts are expecting improvement in semiconductor market growth in the coming year. Currently, the analyst forecasts for the semiconductor industry in 2013 range from 4.2 percent on the low side to 16.6 percent on the high side, with most firms coming in between 6 percent and 10 percent. The average of forecasts among the major semiconductor analyst firms is approximately 8.2 percent.
“However, most semiconductor companies are less optimistic in their published outlooks. This seems to be influenced by the level of uncertainty that exists because of unknown government actions and market conditions in the US, Europe and China.”
Any more consolidations?
It would be interesting to hear Dr. Rhines’ opinion on any further consolidations within the industry. He said: “It is a common misperception that the semiconductor industry is consolidating. A closer look at the data shows that the semiconductor industry has been doing the opposite. It has been DE-consolidating for more than 40 years.
“Take the #1 semiconductor supplier, Intel. Intel’s market share is the same today as it was a decade ago. And, the combined market share of the top five semiconductor suppliers has been slowly declining since the 1960s. Similar trends also apply to the top ten and top 50—both are the same as or lower than they were a decade ago, and even decades ago. In fact, the combined market share of the top 50 semiconductor companies has decreased 11 points in the last 12 years.