Last week (March 11, 2013), Cadence Design Systems Inc. entered into a definitive agreement to acquire Tensilica Inc., a leader in dataplane processing IP, for approximately $380 million in cash.
With this acquisition, Tensilica dataplane processing units (DPUs) combined with Cadence design IP will deliver more optimized IP solutions for mobile wireless, network infrastructure, auto infotainment and home applications.
The Tensilica IP also complements industry-standard processor architectures, providing application-optimized subsystems to increase differentiation and get to market faster. Finally, over 200 licensees, including system OEMs and seven of the top 10 semiconductor companies, have shipped over 2 billion Tensilica IP cores.
Talking about the rationale behind Cadence acquiring Tensilica, Pankaj Mayor, VP and head of Marketing, Cadence, said: “Tensilica fits and furthers our IP strategy – the combination of Tensilica’s DPU and Cadence IP portfolio will broaden our IP portfolio. Tensilica also brings significant engineering and management talent. The combination will allow us to deliver to our customers configurable, differentiated, and application-optimized subsystems that improve time to market.”
The acquisition is also expected to see Tensilica's dataplane IP complement the Cadence and Cosmic Circuits IP portfolios. Cadence had acquired Cosmic Circuits in February 2013.
What are the possible advantages of DPUs over DSPs? Does it mean a possible end of the road for DSPs?
According to Mayor, DSPs are special-purpose processors targeted at digital signal processing. Tensilica’s DPUs are programmable and customizable for a specific function, providing optimal data throughput and processing speed; in other words, Tensilica’s DPUs provide a unique combination of customized processing plus DSP. Tensilica’s DPUs can outperform traditional DSPs in power and performance.
So, what happens to the MegaChips design center agreement with Tensilica? Does it still carry on? According to Mayor, Cadence and Tensilica are currently operating as two independent companies, and therefore Cadence cannot comment until the closing of the acquisition, expected in 30-60 days.
Thanks to Sheryl Gulizia, senior manager, Worldwide Public Relations, Synopsys Inc., I was able to connect with John Chilton, senior VP of Marketing and Strategic Development, Synopsys. We discussed the global (and Indian) outlook for the semiconductor industry in detail. Dr. Aart de Geus was apparently away at a business meeting.
According to Chilton, the semiconductor industry has repeatedly stared down daunting technical challenges caused by the necessity of Moore’s Law and the inevitability of the laws of physics. Every time, the industry has risen to the challenge and delivered silicon that is smaller, faster and cheaper, and the design and systems companies that were quickest to exploit the new technologies reaped the greatest benefit.
Power dissipation proving challenging
One trend that has proven to be especially challenging is power dissipation. Although transistors get smaller, faster and cheaper, chip power keeps increasing. Increasing power and decreasing size could have caused device-melting energy densities, but the industry rose to the challenge with more innovative physics along with smarter design methods and tools.
This time around, the challenge seems more fundamental, with the new nodes offering either better performance or lower power, but not both at the same time, and maybe not at a lower cost. The fundamental driving factor behind innovation has been smaller, faster and cheaper transistors, with the cheaper part making the migration a no-brainer. Unfortunately, this time the new node is not expected to be cheaper.
App processors to drive move to 20nm
Application processors for mobile and cloud-based services will drive the move to 20nm. These applications have the volume and power/performance needs to justify the expected investment required to embrace the 20nm node. Recent product announcements at CES underscore the emergence of the ‘cloud to mobile client’ trend in consumer electronics.
Dell and Wyse unveiled Project Ophelia, a USB memory stick-sized thin client that will plug into any compatible TV or Dell monitor. The device will boot into an Android OS and turn any TV into a portal to access a computer somewhere else. Ophelia takes advantage of the MHL protocol and works with any MHL-enabled display. Over 100 million MHL-compliant chipsets have already been shipped, so the opportunities for this type of interaction are growing.
MHL, along with established standards such as USB and HDMI or even future short-range wireless standards, will enable consumers to plug their cell phone into any monitor or TV and consume content via their phone on a larger, more satisfying display.
Coincidentally, on the same day, Samsung announced consumer displays that utilize voice and gesture recognition. These emerging technologies will begin to redefine the way we interact with the cloud. Instead of carrying a laptop, you may end up waving and talking to a TV. In a futuristic presentation, Lexus showed a prototype of a laser-scanning system that is small enough to be mounted on a grille and makes 3-D maps of the environment surrounding a car. This kind of embedded vision technology will make its way into more devices as processor performance increases.
Chilton said that developing such complex systems and applications requires a robust verification solution. Chip designers already use complex and exhaustive test benches to test individual blocks and subsystems. Verification engineers will need to move up to the next level and handle the full verification of the SoC within a target system.
Verification of an integrated system will require an integrated verification solution that includes not just simulation but also acceleration, emulation and formal debug. A new, integrated verification platform should combine these existing discrete technologies to offer the productivity needed to realize complex systems with predictable, manageable schedules.
Delivering the hardware simultaneously with a working OS and development kit will require virtual prototypes, which will be used by software developers prior to the release of working hardware.
It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO of Mentor Graphics. I chatted with him, trying to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, migrating to 3D FinFET transistors, and Moore’s Law getting to be “Moore Stress”!
Chip design: gigascale, gigahertz, gigacomplex
First, I asked him to elaborate on how implementation of chip design will evolve, with respect to gigascale design, gigahertz and gigacomplex geometries.
He said: “Thanks to close co-operation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, “Gigascale” power dissipation is a bigger challenge than managing the complexity but “system-level” power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit.”
Functional verification is continually challenged by complexity, but there have been, and continue to be, many orders of magnitude of improvement in performance just from the adoption of emulation, intelligent test benches and formal methods, so this will not be a major limitation.
The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.
Is verification keeping up?
How is the innovation in verification keeping up with trends?
Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.
Emulators have more than made up the difference. They offer more than four orders of magnitude faster performance than simulators, and do so at about 0.005X the cost per cycle of simulation. The cost of power per year is more than one third the cost of hardware in a large simulation farm today, while emulation offers a 12X savings in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
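As a back-of-envelope illustration (my own arithmetic combining the figures Dr. Rhines cites, not a Mentor calculation), the quadratic rule can be written as:

\[
C(N) \propto N^2 \quad\Rightarrow\quad \frac{C(kN)}{C(N)} = k^2 .
\]

A design with 100 times the component count therefore needs roughly 10,000 times the verification cycles, which is about what emulation alone recovers over simulation. Stacking the further gains cited above (roughly three to four orders of magnitude from formal techniques, two to three from intelligent test benches and about three from higher abstraction) leaves headroom, under this simple quadratic model, for designs several orders of magnitude larger still.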
With multiple engines and multiple abstraction levels being applied to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification efforts from every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are keys to driving verification reuse to match the level of design reuse.
Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.
Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?
According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
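To make the transaction-level argument concrete, here is a minimal, purely illustrative C++ sketch (my own construction, not Mentor's flow or any SystemC/UPF tooling) of how rough per-transaction energy costs might be attached to a high-level memory model so that two architectural policies can be compared long before RTL exists. The MemoryModel class and all energy numbers are invented placeholders; a real flow would calibrate them from characterization data.

```cpp
// Illustrative only: a toy transaction-level energy estimator.
// The per-transaction energy figures below are invented placeholders,
// not vendor data; a real flow would calibrate them from
// characterization or measurements on prior silicon.
#include <cstdint>
#include <iostream>
#include <string>

struct EnergyBook {                 // rough per-transaction costs (picojoules)
    double dram_read_pj  = 500.0;   // assumed
    double cache_read_pj =  20.0;   // assumed
    double idle_cycle_pj =   1.0;   // assumed leakage/clock cost per cycle
};

class MemoryModel {                 // hypothetical TLM-style memory subsystem
public:
    explicit MemoryModel(bool with_cache) : with_cache_(with_cache) {}

    void read(uint64_t addr) {
        // Crude locality model: with a cache, assume 9 of 10 reads hit.
        bool hit = with_cache_ && (addr % 10 != 0);
        energy_pj_ += hit ? book_.cache_read_pj : book_.dram_read_pj;
        ++reads_;
    }

    void idle(uint64_t cycles) { energy_pj_ += cycles * book_.idle_cycle_pj; }

    void report(const std::string& name) const {
        std::cout << name << ": " << reads_ << " reads, "
                  << energy_pj_ / 1e6 << " microjoules (estimated)\n";
    }

private:
    bool with_cache_;
    EnergyBook book_;
    uint64_t reads_ = 0;
    double energy_pj_ = 0.0;
};

int main() {
    MemoryModel no_cache(false), cached(true);
    for (uint64_t a = 0; a < 100000; ++a) { no_cache.read(a); cached.read(a); }
    no_cache.idle(1000);
    cached.idle(1000);
    no_cache.report("Policy A (no cache)");
    cached.report("Policy B (cached)  ");
    return 0;
}
```

The point of such a sketch is only that architectural choices (here, whether to spend area on a cache) can be ranked for energy at the transaction level, where changes are cheap, which is the order-of-magnitude opportunity the quote refers to.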
Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
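As a sketch of what an automated power-control correctness check does (again my own illustration, not the UPF-driven checks that any particular simulator generates), the toy monitor below enforces a generic rule (a domain must be isolated before it is powered down or re-powered) and counts visits to each power state as a crude coverage metric. The state names and legality rules are assumptions for illustration, not IEEE 1801 semantics.

```cpp
// Illustrative only: a toy monitor for one power domain's control sequence,
// sketching the kind of rule a power-aware simulator checks automatically.
#include <iostream>
#include <map>
#include <string>

enum class PState { ON, ISOLATED, OFF };

class PowerDomainMonitor {
public:
    bool request(PState next) {
        bool legal =
            (state_ == PState::ON       && next == PState::ISOLATED) ||
            (state_ == PState::ISOLATED && next == PState::OFF)      ||
            (state_ == PState::ISOLATED && next == PState::ON)       ||
            (state_ == PState::OFF      && next == PState::ISOLATED);
        if (!legal) {
            std::cout << "VIOLATION: illegal transition " << name(state_)
                      << " -> " << name(next) << "\n";
            ++violations_;
            return false;
        }
        state_ = next;
        ++coverage_[next];          // simple power-state coverage metric
        return true;
    }

    void report() const {
        for (const auto& [s, n] : coverage_)
            std::cout << name(s) << " visited " << n << " time(s)\n";
        std::cout << violations_ << " violation(s)\n";
    }

private:
    static std::string name(PState s) {
        switch (s) {
            case PState::ON:       return "ON";
            case PState::ISOLATED: return "ISOLATED";
            case PState::OFF:      return "OFF";
        }
        return "?";
    }
    PState state_ = PState::ON;
    int violations_ = 0;
    std::map<PState, int> coverage_;
};

int main() {
    PowerDomainMonitor pd;
    pd.request(PState::OFF);        // illegal: must isolate before power-down
    pd.request(PState::ISOLATED);
    pd.request(PState::OFF);
    pd.request(PState::ISOLATED);
    pd.request(PState::ON);
    pd.report();
    return 0;
}
```

In practice these checks and coverage points are derived from the UPF description rather than hand-written, and the same monitors run in both simulation and emulation so that software-driven power management can be exercised, which is the co-verification point made above.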
As designers proceed to the gate and transistor level, the accuracy of power estimation improves. That is why gate-level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical layout, designers traditionally were stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques that are built into the place and route tools.
Cadence Design Systems Inc. has released Allegro/OrCAD 16.6, which provides unmatched, scalable design solutions addressing technology challenges.
Allegro is meant for simple to more complex boards and for geographically dispersed design teams. The tools also cater to the personal productivity needs of start-ups and small to medium companies, and provide high-speed, power-aware SI and HDI managed design environments meant for medium to large enterprises.
Allegro, Sigrity and OrCAD – major themes of Allegro PCB 16.6
The major themes of Allegro PCB 16.6 include the first ECAD collaborative environment for work-in-progress (WiP) PCB design using Microsoft SharePoint. The release accelerates timing closure for high-speed interfaces such as DDRx, and adds support for advanced miniaturization techniques that embed passive and active components on the inner layers of a PCB.
Regarding the enhanced miniaturization capabilities for embedding dual-sided and vertical components, Hemant Shah, product marketing director, Allegro PCB and FPGA Products, Cadence, said: “Components with dual-sided contacts offer customers the ability to connect to the component’s pins through the layer above or through the layer below. This provides routing flexibility. It also helps customers who want redundant connections on inner layers.”
Design implementation can be accelerated through a flexible PCB team design capability. By leveraging GPUs for faster display, users can pan and zoom on dense PCBs, enabling greater design productivity.
A typical product creation environment includes a best-in-class hardware design environment for implementation, analysis and compliance sign-off. The design team collaboration capability encompasses design-chain partners. Design data can be integrated into corporate systems to manage cost and quality, and provide visibility. It allows on-time release into manufacturing to build products.
Allegro, OrCAD and Sigrity together aim to ensure product creation with a best-in-class PCB design environment: design team collaboration and productivity integrated at the PCB level, design data integrated into corporate systems to manage cost and quality, and on-time release into manufacturing to build products. Allegro, OrCAD and Sigrity are helping create market-leading products.
Users can create the first ECAD collaborative environment using Microsoft SharePoint. Design creation and collaboration trends include geographically distributed teams and increased outsourcing (OEMs to ODMs), while product lifecycle management tools are not designed or targeted for managing work-in-progress data. For every design engineer, there are three to five collaborators, reviewers and other stakeholders.
The Allegro Design Workbench integrates seamlessly with SharePoint 2010. The Team Design Authoring cockpit moves design data in and out of SharePoint, providing WiP design data management with block-level check-in, check-out and so on. SharePoint is already deployed in many companies, is scalable from a single server to a cloud environment, and offers a rich platform for power users to configure and for developers to customize.
By creating the first ECAD collaborative environment, users can reduce the product creation time by up to 40 per cent. Chances of first pass failure are also reduced from 25 per cent to 1 per cent. Users can accelerate the timing closure on high-speed, multi-Gigabit signals.
It is always a pleasure interacting with Dr. Walden (Wally) C. Rhines, the chairman and CEO, Mentor Graphics, and vice chairman of the EDA Consortium, USA. I started by enquiring about the global semiconductor industry.
Dr. Wally Rhines said: “The absolute size of the semiconductor industry (in terms of total revenue) differs depending on which analyst you ask, because of differences in methodology and the breadth of analysts’ surveys. Current 2012 forecasts include $316 billion from Gartner, $320 billion from IDC, $324.5 billion from IHS iSuppli, $327.2 billion from Semico Research and $339 billion from IC Insights.
“These numbers reflect growth rates from 4 per cent to 9.2 per cent, based on the different analyst-specific 2011 totals. Capital spending forecasts for the three largest semiconductor companies have increased by almost 50 per cent just since the beginning of this year. However, the initial spurt of demand was influenced by the replenishment of computer and disc drive inventories caused by the Thailand flooding. Now that this is largely complete, there is some uncertainty about the second half.
“So, overall it looks like the industry will pass $310 billion this year, but it may not be by very much. The strong capital spending and demand for leading edge capacity should impact the second half, but the bigger impact will probably be in 2013.”
What’s with 28/20nm?
Has 28/20nm semiconductor technology become a major ‘work horse’? What’s going on in that area? At least, this area is now of considerable interest.
Dr. Rhines said that the semiconductor industry’s transition to the 28nm family of technologies, which broadly includes 32nm and 20nm, is a much larger transition than we have experienced for many technology generations.
The world’s 28nm-capable capacity now comprises almost 20 per cent of the total silicon area in production and yet, the silicon foundries are fully loaded with more 28nm demand than they can handle. In fact, high demand for 28/20nm has created a capacity pinch that is currently spurring additional capital expenditure by foundries.
He added: “As yields and throughput mature at 28nm, the major wave of capital investment will provide plentiful foundry capacity at lower cost, stimulating a major wave of design activity. Cost-effective, high yield 28nm foundry capacity will not only drive increasing numbers of new designs but it will also force re-designs of mature products to take advantage of the cost reduction opportunity.”
Cadence Design Systems recently introduced the latest release of Cadence Encounter RTL-to-GDSII flow for high-performance and giga-scale designs, including those at the latest technology node, 20 nanometers.
Rahul Deokar, product management director, said: “We are addressing designer challenges in high-performance design, giga-scale design and advanced-node design. The first challenge is PPA – power, performance and area. The next challenge is to handle giga-scale designs with efficient turnaround time. The third key challenge is fast time-to-market on advanced 28nm/20nm designs.”
With this latest release, Cadence provides, first, the industry’s best PPA. Second, it provides 1 billion gates on designer desktops. Third, it provides the fastest path to 20nm digital design. The announcement includes three new technologies – GigaOpt + CCOpt, GigaFlex, and 20nm double patterning.
Cadence has introduced “GigaOpt” – a common optimization engine that unifies physical synthesis and optimization. GigaOpt provides designers globally optimal results across the front-end and back-end of the design flow. And that results in the benefit of the industry’s best PPA for high-performance design. GigaOpt has been designed from scratch to be multi-threaded, which makes it ultra fast and ultra scalable.
Another innovation, CCOpt, is the first and only technology in EDA to unify clock tree synthesis and physical optimization. CCOpt is now an integral part of the Encounter flow accelerating design closure with the best PPA. CCOpt facilitates:
* 10 percent improvement in design performance and total power.
* 30 percent reduction in clock power and area.
* 30 percent reduction in IR drop.
The Encounter flow now allows 1 billion gates to be enabled on the designer’s desktop. This is done with the new GigaFlex abstraction technology. It is the first and only technology in EDA that enables flexible, accurate abstraction adaptable to the flow stage. There are 10X capacity and TAT gains on 100 million+ instance designs. GigaFlex abstraction technology allows accurate, early physical modeling, concurrent top-and-block interface optimization, and concurrent hierarchical closure and late-stage ECO.
The Cadence Executive Forum, titled, ‘Game Changers: New Paradigms for the Future of Electronic Product Realization’, was held this evening in Bangalore, India. The speakers were Lip-Bu Tan, president and CEO, Cadence, and Bhaskar Pramanik, chairman of Microsoft India.
In the opening address, Tan remarked that the next 12 months are likely to be challenging in the USA and Europe, and may also impact the Asia Pacific region. However, from an EDA perspective, there will be new design activity, as companies will be designing next-generation products and killer applications. Consolidation will also continue. Another trend is that the number of start-ups has dropped.
There are two main drivers — technology and market. The cloud is starting to present a big opportunity. Other key areas include green technology and power management. Video will be driving a lot of traffic. The impact on the electronics industry will be new product development, with IP having expanded beyond processor cores, an increase in collaborations and a changing EDA landscape — Cadence is investing to deliver on the EDA360 vision.
Some of the recent highlights include Cadence’s new software development suite that addresses the hardware-software design gap, expansion of the Palladium XP, and the release of the industry’s first DDR4 solution, which includes a controller, soft and hard PHY, drivers, verification IP (VIP), memory models and signal integrity reference designs.
He spoke about horizontal collaborations, such as application programming interfaces, and vertical collaborations, which create differentiation in the end markets and also engage foundries in EDA, IP, etc. As an example, Tan spoke of Spreadtrum achieving one-pass silicon realization for its first 40nm product. Other examples include Samsung designing and implementing a 20nm product, ARM and Cadence collaborating on a GHz implementation of the Cortex-A15, and ARM, TSMC and Cadence collaborating on the industry’s first 20nm Cortex-A15.
Speaking on ‘Consumerization of IT’, Bhaskar Pramanik touched upon consumer trends driving IT. These trends include the economic system of computers, natural interaction, data explosion, social computing, pervasive displays, ubiquitous connectivity, and cloud computing.
According to him, computers will adapt to us. They will enable computing interfaces that are far easier to use. The key business requirement is to balance user expectations with enterprise requirements.
Long-term trends are strong for semiconductors and electronics. According to a Databeans estimate (February 2011), semiconductor revenue will likely reach $450 billion by 2015 and electronics revenue will likely reach $2,800 billion by 2015.
Speaking at the CDNLive! 2011 event in Bangalore, India, Charlie Huang, SVP of Worldwide Field Operations, Cadence Design Systems Inc., said that the challenges in the near term are slowdown in Europe and USA. The weakness is driven by increasingly negative views on the global economy, end demand, orders and outlook. Key indicators are also showing that the economy is facing headwinds. The 2011 GDP growth projections have deteriorated since the beginning of the year. The economy has been marred by high unemployment and low consumer confidence.
As of now, innovation has been driving growth. Apps have been driving innovation, followed by video, mobility, cloud and green technology. The impact on the electronics industry is multi-fold. There is a new development paradigm and collaboration has been increasing. IP is also expanding beyond cores and EDA is changing.
The new development paradigm for system companies is to differentiate on applications and semiconductor companies must deliver on application-driven hardware-software platforms. IP has now expanded well beyond the core. EDA is also changing, and Cadence is investing to deliver on the EDA360 vision. There are multiple silicon realization challenges. Cadence silicon realization solutions enable fast, deterministic, end-to-end path to silicon success.
As an example, ARM and Cadence have collaborated on the GHz implementation of Cortex-A15. ARM chose ARM Artisan physical IP, evaluated the Cortex-A15 RTL, and supported CPF. Cadence optimized the EDA flow, experienced support at EAC, and provided EDA tool releases and iRM.
ARM, TSMC and Cadence also collaborated on the industry’s first 20nm Cortex-A15. TSMC provided the 20nm process qualification and A15 learnings. ARM brought its 20nm implementation experience, A15 considerations in 20nm and the TSMC 20nm readiness milestone. Cadence took the 20nm work from research to reality, contributed to and grew the A15 expertise, and supported the TSMC 20nm readiness milestone.
The end result: the industry’s first 20nm Cortex-A15 tapeout, thanks to a successful three-way vertical collaboration. ARM, Cadence and TSMC engineers worked side-by-side. The project priorities included 20nm DPT implementation schedule and 20nm readiness milestone.
Magma Design Automation has introduced the Silicon One technology solutions for Magma users in India. This was announced by Rajeev Madhavan, chairman and CEO, Magma, on the sidelines of the MUSIC 2011 in Bangalore, India, this afternoon.
Silicon One aims at making silicon profitable, especially for Magma’s customers. It is a presentation of innovative solutions for advanced analog and digital design challenges. Magma outlined five technologies: Talus, Tekton, Titan, FineSim and Excalibur. The solutions work off a unified database for designing chips that combine analog, digital, memory, etc. Just about a week ago, Magma launched the global Silicon One seminar series in the US, Canada, Korea, China, Taiwan, Japan, Israel and Europe, from Sept. 20 to Nov. 10.
“We are making solutions that customers can use. The global EDA industry is currently worth $4.5-$5 billion, growing at a rate of 10 percent,” said Madhavan. As of now, 21 of the top 25 customers use Magma tools, and Magma happens to be a key EDA supplier to some household names in wireless.
Magma currently employs 696 people, of whom 77 percent are in application engineering or R&D. India accounts for 220 employees (32 percent) as of now.
According to Madhavan, Silicon One is a platform of EDA solutions for emerging silicon. The three main pillars are: integration — with a unified data model comprising capacity, concurrent optimization and chip finishing; completeness — comprising full flow IP characterization, design implementation and design verification; and throughput — comprising concurrent analysis and verification.
Magma has built three categories of solutions. These are:
SoC/ASSP: Building killer applications with an entire SoC.
AMS: Building analog mixed-signal chips for the mobility market.
Memory: Building high-speed memory chips for consumer applications.
Madhavan added: “We are mapping Silicon One solutions to the market. We are touching every single point of the silicon. We are providing a series of platforms — such as digital design (Talus), analog verification (FineSim), analog design (Titan), digital sign-off (Tekton) and yield management (Excalibur). We have the opportunity to be a dominant yield management company.”
Thanks to my friend, Veeresh Shetty at Mentor Graphics, I was able to meet up with Dr. Walden (Wally) C. Rhines, chairman and CEO, Mentor Graphics, as well as with Hanns Windele, VP Mentor Graphics (Europe & India), for a short conversation regarding the global semiconductor industry.
Growth of global semicon industry
First, I sought Dr. Rhines’ views on the growth of the global semiconductor industry. Dr. Rhines said: “Capital investment in the foundries has been quite high. TSMC, GlobalFoundries, Samsung, etc., have doubled their investments. In 2012, some of the foundries will run at a lower percentage of capacity. If that happens, foundry wafer prices might fall. However, equipment prices would not decrease.”
So, what has the industry learned from the previous recession? He said: “Capacity in the semicon industry was relatively tight in Q408. In 2009, we called it an inventory correction. If we had not had a recession, there would have been a capacity shortage.
“Now, companies seem to have caught up. There was large investment in the manufacturing capacity in 2010, and that has continued into 2011. There is more new capacity coming into foundries by 2012. Investment in memory has been modest. However, fabless companies should find more capacity in 2012.”
Hanns Windele added: “The automotive industry was contributing to all of this as well. As of now, about 45 percent of semiconductor consumption comes from the computer industry and 20 percent from the communications industry, while consumer electronics and automotive account for approximately 5-10 percent.”
It appears that everything one buys today has communications attached to it. Dr. Rhines also reckoned that PC shipments were holding up well, for now.