Search Results

Keyword: ‘integration’

Pick video IP as close to "plug-and-play" as possible for SoC integration

November 8, 2007 Comments off

While designing, it is critical to pick the appropriate codec or formats that can be handled by a video IP to support any given application. It is also very important to select the correct video IP with proper and standard interfaces so that it can be as close as possible to ‘plug-and-play’ in terms of System on a Chip (SoC) integration.

Ravishankar Ganesan, VP, SoC IP Business Unit, Ittiam Systems, commenting on the selection of video IP for SoC designs, said that SoCs use the divide-and-conquer strategy very well.

Today's SoC truly defines and integrates multiple specialized blocks or subsystems, keeping the target application of the SoC in mind. Each of these specialized subsystems needs to be the best in terms of its performance, area and power so that the SoC itself can be competitive and well suited to the target market.

The video intellectual property (IP) is one of these specialized subsystems, and hence critically important for SoCs targeted at video-based applications. Needless to say, there is no one video IP that fits all video SoCs.

So what should any SoC designer look for in terms of supported video profiles and codecs? This really depends on the application(s) the SoC is meant to address. If you are targeting video IP for a mobile TV application in a cellular phone, the profiles and codecs will be determined by the appropriate broadcasting system.

Similarly, if the SoC is targeting the high-definition (HD) DVD player segment, the video codecs and their profiles/levels need to be determined based on the video encoder configuration that was used to create the content on the DVD disc.

There has to be a way of going about selecting and understanding video codecs. In this context, it is very critical to pick the appropriate codec or formats that can be handled by the video IP to support the given application.

It is also very important to pick the video IP with proper and standard interfaces so that it can be as close to "plug-and-play" as possible in terms of SoC integration. The area and power dissipation are important as well, so that the SoC can be sold at a competitive price in the market.

At high pixel rates, what would be the situation with the video subsystem? Simply put, higher resolutions result in an explosion of data. The video subsystem needs to be highly efficient in order to handle the high data movement. It also needs very efficient video processing engines to meet the real-time requirements.

As for the amount of off-chip video bandwidth that is actually needed by an IP block, Ganesan said that it depends a lot on the resolution that the video IP is likely to handle. The video resolution, profiles and levels will be determined by the application. Trade-offs between silicon real estate and off-chip video bandwidth play a very critical role.
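To see how quickly resolution drives up off-chip traffic, here is a rough, back-of-the-envelope sketch in Python for a simple decoder. The 4:2:0 8-bit format, frame rates, two-accesses-per-frame model and 1.3x overhead factor are illustrative assumptions, not figures from Ittiam.

# Rough off-chip bandwidth estimate for a video decoder (illustrative only).
# Assumes 4:2:0 8-bit video (1.5 bytes/pixel), one reference-frame read and
# one reconstructed-frame write per output frame, plus an overhead factor
# for burst/page inefficiency. Real IPs differ widely.

def offchip_bw_mb_per_s(width, height, fps, bytes_per_pixel=1.5,
                        accesses_per_frame=2, overhead=1.3):
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes * accesses_per_frame * fps * overhead / 1e6

for name, (w, h, fps) in {"QVGA mobile TV": (320, 240, 30),
                          "1080p HD": (1920, 1080, 30)}.items():
    print(f"{name}: ~{offchip_bw_mb_per_s(w, h, fps):.0f} MB/s")

Even with these simple assumptions, moving from QVGA to 1080p takes the estimate from roughly 9 MB/s to well over 200 MB/s, which is the "explosion of data" Ganesan refers to.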

Improving video performance
Video performance is said to deteriorate as the off-chip memory latency increases. What can be done to improve this? Internal buffering will definitely help to reduce this impact. However, it can affect the silicon size of the device. Hence, care needs to be taken and trade-offs made depending upon the video system requirements.
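As a rough feel for the silicon cost of such buffering, the sketch below (Python) sizes a hypothetical on-chip reference cache that holds a few macroblock rows of a 1080p frame so they need not be re-fetched from DRAM. The cache structure and numbers are illustrative assumptions, not Ittiam's design.

# Illustrative sizing of an internal reference cache for motion compensation.
# Holding a few 16-pixel-high macroblock rows of the reference frame on chip
# reduces off-chip traffic and latency sensitivity, at the cost of SRAM area.

def ref_cache_kb(width, rows_of_macroblocks, mb_height=16, bytes_per_pixel=1.5):
    return width * mb_height * rows_of_macroblocks * bytes_per_pixel / 1024.0

# e.g. caching 3 macroblock rows of a 1920-pixel-wide (1080p) reference frame:
print(f"~{ref_cache_kb(1920, 3):.0f} KB of on-chip buffering")

At roughly 135 KB for just three macroblock rows, it is easy to see why the buffering-versus-area trade-off matters.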

Finally, let's examine how best a designer can integrate the video IP core into an SoC design. Depending upon the interfaces, the video IP can slide easily into the SoC. The IP could be just an engine, a processor-core-based soft IP, or a combination of both.

So, the SoC designer needs to evaluate the application requirements, and determine the right interfaces and the appropriate processor core, along with the memory subsystem. There could be peripheral interface IPs [that are either part of the video IP or separate], which also need to be inserted as part of the SoC, and the data flow on the device needs good management.

Accelerating EDA innovation through SoC design methodology convergence

September 26, 2014 Comments off

According to Dr. Walden C. Rhines, chairman and CEO, Mentor Graphics Corp., verification has to improve and change every year just to keep up with the rapidly changing semiconductor technology. Fortunately, the innovations are running ahead of the technology and there are no fundamental reasons why we cannot adequately verify the most complex chips and systems of the future. He was speaking at the recently held DVCON 2014 in Bangalore, India.

DVCON India 2014.

A design engineer's project time spent on design has reduced by 15 percent from 2007 to 2014, while time spent on verification has seen a 17 percent increase over the same period. At this rate, in about 40 years, all of a designer's time will be devoted to verification. At the current rate, there is almost no chance of getting even a single-gate design correct on the first pass!

Looking at the crossover of verification engineers vs. design engineers, the CAGR for designers is 4.55 percent, while for verification engineers it is 12.62 percent.
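As a quick illustration of what those growth rates imply, the sketch below (Python) simply compounds the two CAGRs from an assumed starting point; only the 4.55 percent and 12.62 percent rates come from the talk, the headcounts themselves are hypothetical.

# Compound the reported CAGRs (4.55% for designers, 12.62% for verifiers)
# from hypothetical 2007 headcounts to see when verifiers overtake designers.

designers, verifiers = 100.0, 60.0   # hypothetical 2007 headcounts
year = 2007
while verifiers < designers:
    designers *= 1.0455
    verifiers *= 1.1262
    year += 1
print(f"Verification headcount overtakes design headcount around {year}")

With these assumed starting values the crossover lands around 2014, which is consistent with the trend Rhines described.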

On-time completion remains roughly constant: looking at non-FPGA projects' schedule completion trends, 67 percent were behind schedule in 2007, 66 percent in 2010, 67 percent in 2012, and 59 percent in 2014. There has also been an increase in the average number of embedded processors per design, from 1.12 to 4.05.

Macro trends
Looking at the macro trends, there has been standardization of verification languages. SystemVerilog is the only verification language growing. Now, interestingly, India leads the world in SystemVerilog adoption. It is also remarkable that the industry converged on IEEE 1800. SystemVerilog is now mainstream.

There has been standardization in base class libraries as well. UVM usage grew 56 percent between 2012 and 2014, and 13 percent growth in UVM is projected for the next year. Again, India leads the world in UVM adoption.

The second macro trend is standardization of the SoC verification flow. It is emerging from ad hoc approaches to systematic processes. The verification paradox is: a good verification process lets you get the most out of best-in-class verification tools.

The goal of unit-level checking is to verify that the functionality is correct for each IP, while achieving high coverage. Use of advanced verification techniques has also increased from 2007 to 2014.

Next, the goal of connectivity checking is to ensure that the IP blocks are connected correctly, a common goal with IP integration and data path checking.

The goal of system-level checking is performance, power analysis and SoC functionality. Also, there are SoC 'features' that need to be verified.

A third macro trend is coverage and power across all aspects of verification. The Unified Coverage Interoperability Standard (UCIS) was announced by Accellera at DAC 2012. Standards accelerate EDA innovation!

The fourth trend is active power management. Now, low-power design requires multiple verification approaches. Trends in power management verification include things like Hypervisor/OS control of power management, application-level power management, operation in each system power state, interactions between power domains, hardware power control sequence generation, transitions between system power states, power domain state reset/restoration, and power domain power down/power up.

Macro enablers in verification
Looking at the macro enablers in verification, there is the intelligent test bench, multi-engine verification platforms, and application-specific formal. The intelligent test bench technology accelerates coverage closure. It has also seen the emergence of intelligent software driven verification.

Embedded software headcount surges with every node. Clock-speed scaling slows simulation performance improvement. Growing at over 30 percent CAGR from 2010 to 2014, emulation is the fastest-growing segment of EDA.

As for system-level checking, as design sizes increase, emulation use goes up and FPGA prototyping goes down. Modern emulation performance makes virtual debug fast. Virtual stimulus turns the emulator into a server and moves it from the lab to the datacenter, thereby delivering more productivity, flexibility, and reliability. Effective 100MHz embedded software debug makes a virtual prototype behave like real silicon. Now, integrated simulation/emulation/software verification environments have emerged.

Lastly, for application-specific formal, larger designs use more formal. Application-specific formal includes checking clock domain crossings.

SEMICON Europa 2014 calls for innovators

September 16, 2014 1 comment

SEMICON Europa 2014 will be held in Grenoble, France, on October 7-9, 2014. The event will see over 400 exhibitors, which means the exhibition area has expanded by over 40 percent vs. 2013. There will be over 70 programs featuring 300+ speakers. SEMI expects 6,000+ visitors.


A feature of the event will be the Innovation Village that will feature 35 start-ups. I have just been informed that four start-ups have cancelled. So, that leaves 31 start-ups: ActLight, Aryballe, Avalun, Bluwireless Technology, CALAO Systems, Enerstone, Euresis, Epigan, Evaderis, Exagan, Feeligreen, Genes’Ink, Grapheat, Gridbee, Heyday, Hotblock Onboard, Imagsa Technology, Irlynx, Madci, Metablue Solution, Nessos, Nocilis Materials, Noivion, PETsys Electronics, Pollen Technology, Scint-X, Sepcell, Silicon Line GmbH, Smoltek, Sol Voltaics and Wavelens.

A few start-ups are given below:

ActLight SA: It focuses on the field of CMOS photonics.

AVALUN SAS: It currently develops the LabPad®, a next-generation mobile point-of-care (POC) device.

CALAO Systems: It is the specialist of onboard connected computers.

eVaderis: It offers energy-efficient, low-power, mixed-signal, data-centric control processors.

Enerstone: It works with rechargeable battery manufacturers and integrators to improve the charge quality of their batteries.

Exagan: It is a leading supplier of Gallium Nitride based transistor devices.

Feeligreen: It provides micro-current devices for dermo-cosmetics and dermo-therapeutics.

Grapheat: It is a young startup specializing in the production and integration of monolayer, high-quality graphene on wafers and substrates for specific applications.

Gridbee Communications: It is developing an innovative long-range mesh network solution for connected objects.

Heyday: It develops semiconductor ICs for the power conversion market.

Irlynx: It develops and commercializes infrared sensors.

Nessos Information Technologies SA: Nessos is a highly qualified software development company.

Nocilis Materials: It offers various silicon based semiconductor materials.

Noivion: It developed and patented a new thin film deposition technique named Ionized Jet Deposition (IJD).

PETsys Electronics SA: It developed new PET detectors for the next generation of medical PET scanners.

Pollen Technology: It is a software company.

Scint-X: It develops and produces cutting-edge structured scintillators.

Silicon Line: A leading global provider of innovative ultra-low-power optical link technology for mobile and consumer electronics markets.

Smoltek AB: It offers a proprietary conductive nano-scale carbon technology.

Wavelens: It is developing disruptive optical MEMS solutions.

On October 7, there will be a five-minute pitch from each participating start-up. It will be followed by a panel discussion: ‘Fundraising for the Future Champions of European Electronics: Strategies, Challenges and Opportunities’. Day two will host the Innovation conference.

Categories: Semiconductors

Innovating in system of systems: Lip-Bu Tan

August 17, 2014 Comments off

Lip-Bu Tan, president and CEO, Cadence Design Systems Inc.

There have been several innovations happening in the global technology industry. The IoT, mobility, cloud computing, etc., are creating opportunities for the system of systems, according to Lip-Bu Tan, president and CEO, Cadence Design Systems Inc.

Tan was delivering the main keynote at the recently held CDNLive 2014 in Bangalore, India.

Some of the trends driving the global semiconductor market growth in the end markets include automotive at $24 billion, computers at $76 billion, industrial electronics at $14.1 billion, medical electronics at $12.5 billion, and mobile phones at $100 billion. In India, especially, a lot of fabless companies are said to be coming up.

The tablet is a system of systems. It has communications, navigation, recording and photography, etc. Even the automotive vehicle is a convincing example. Next, there is the IoT. There are said to be diverse needs for the IoT.

There are said to be several challenges for the system of systems. Some of these are more IP and software requirements, and more needs for low power and mixed signal. System design enablement requires system integration, packaging and board, etc.

Cadence has a comprehensive SoC IP solution. The mixed-signal verification solution ensures functionality, reliability and performance. Cadence also introduced the Voltus-Fi custom power integrity solution in Shanghai the week before. Its Quantus QRC extraction solution delivers up to 5X better performance.

Next, the Jasper acquisition expands the Cadence development suite. Cadence also provides the FPGA-based prototyping with Palladium flow for software development.

Tan concluded that new technologies always require closer collaboration — from IP through manufacturing. Cadence is here to help designers innovate — from systems to silicon.

Categories: Semiconductors

How Intel competes in today’s fabless ecosystem


The SEMI/Gartner Market Symposium was held at SEMICON West 2014 in San Francisco, on July 7. I am grateful to Ms. Becky Tonnesen, Gartner, and Ms. Agnes Cobar, SEMI, for providing me the presentations. Thanks are also due to Ms. Deborah Geiger, SEMI.

Dean Freeman, research VP, Gartner, outlined the speakers:

• Sunit Rikhi, VP, Technology and Manufacturing Group, and GM, Intel Custom Foundry, Intel, presented on Competing in Today’s Fabless Ecosystem.

• Bob Johnson, VP Research, Gartner, presented the Semiconductor Capital Spending Outlook.

• Christian Gregor Dieseldorff, director, Market Research, SEMI, presented the SEMI World Fab Forecast: Analysis and Forecast for Fab Spending, Capacity and Technology.

• Sam Wang, VP Research Analyst, Gartner, presented on How Foundries will Compete in a 3D World.

• Jim Walker, VP Research, Gartner, presented on Foundry versus SATS: The Battle for 3D and Wafer Level Supremacy.

• Dr. Dan Tracy, senior director, Industry Research & Statistics, SEMI, presented on Semiconductor Materials Market Outlook.

Let’s start with Sunit Rikhi at Intel.

As a new player in the fabless ecosystem, Intel focuses on:
* The value it brings to the table.
* How it delivers on platforms of capability and services.
* How it leverages the advantages of being inside the world’s leading Integrated Device Manufacturer (IDM).
* How it faces the challenges of being inside the world’s leading IDM.

Intel has leadership in silicon technologies. Transistor performance per watt is the critical enabler for all. Density improvements offset wafer cost trends. Intel currently has a ~3.5-year lead in introducing revolutionary transistor technologies.

In foundry capabilities and services platforms, Intel brings differentiated value on industry standard platforms. 22nm was started in 2011, while 14nm was started in 2013. 10nm will be starting in 2015. To date, 125 prototype designs have been processed.

Intel offers broad capability and services on industry-standard platforms. It also has a fuller array of co-optimized end-to-end services. As for packaging technology, Intel has been building better products through multi-component integration. Intel has also been starting high on the yield learning curve.

Regarding IDM challenges, such as a high-mix, low-volume configuration, Intel has been doing configuration optimization in tooling and set-up. It has also been separating the priority and planning process for customers. Intel has been providing an effective response for every challenge.

Some of Intel Custom Foundry’s announced customers include Achronix, Altera, Microsemi, Netronome, Panasonic and Tabula.

Set up strong methodology teams to create better verification infrastructure: Synopsys

April 21, 2014 Comments off

Arindam Ghosh

This is the third installment on verification, now taken up by Synopsys. Regarding the biggest verification mistakes today, Arindam Ghosh, director – Global Technical Services, Synopsys India, listed these as:

* Spending no time on verification planning (not documenting what needs to be verified) and focusing more on running simulations or on execution.
* No or very low investment in building better verification environments (based on best/new methodologies and best practices); instead maintaining older verification environments.
* Compromising on verification completeness because of tape out pressures and time-to-market considerations.

Would you agree that many companies STILL do not know how to verify a chip?

He said that it could be true for smaller companies or start-ups, but most of the major semiconductor design engineers know about the better approaches/methodologies to verify their chips. However, they may not be investing in implementing the new methodologies for multiple reasons and may instead continue to follow the traditional flows.

One way to address these mistakes would be to set up strong methodology teams to create a better verification infrastructure for future chips. However, few companies are doing this.

Are companies realizing this and building an infrastructure that gets you business advantage? He added that some companies do realize this and are investing in building a better infrastructure (in terms of better methodology and flows) for verification.

When should good verification start?
When should good verification start — after design; as you are designing and architecting your design environment? Ghosh said that good verification starts as soon as we start designing and architecting the design. Verification leads should start discussing the verification environment components with the lead architect and also start writing the verification plan.

Are folks mistaking by looking at tools and not at the verification process itself? According to him, tools play a major role in the effectiveness of any verification process, but we still see a lot of scope in methodology improvements beyond the tools.

What all needs to get into verification planning, given that the ‘right’ verification path is fraught with complexities? Ghosh said that there is no single, foolproof recipe for a ‘right’ verification path. It depends on multiple factors, including whether the design is a new product or a derivative, the design application, etc. But yes, it is very important to do comprehensive verification planning before starting the verification process.

Synopsys is said to be building the comprehensive, unified and integrated verification environment required for today’s revolutionary SoCs, one that would offer a fundamental shift forward in productivity, performance, capacity and functionality. Synopsys’ Verification Compiler provides the software capabilities, technology, methodologies and VIP required for the functional verification of advanced SoC designs in one solution.

Verification Compiler includes:
* Better capacity and compile and runtime performance.
* Next-generation static and formal technology delivering performance improvement and the capacity to analyze a complete SoC (Property checking, LP, CDC, connectivity).
* Comprehensive low power verification solution.
* Verification planning and management.
* Next-generation verification IP and a deep integration between VIP and the simulation engine, which in turn can greatly improve productivity. The constraint engine is tuned for optimal performance with its VIP library. It has integrated debug solutions for VIP, so one can do protocol-level analysis and transaction-based analysis with the rest of the testbench.
* Support for industry standard verification methodologies.
* X-propagation simulation with both RTL and low power simulations.
* Common debug platform with better debug technology having new capabilities, tight integrations with simulation, emulation, testbench, transaction debug, power-aware debug, HW/SW debug, formal, VIP and coverage.

Top five recommendations for verification
What would be Synopsys’ top five recommendations for verification?

* Spend a meaningful amount of time and effort on verification planning before execution.
* Continuously invest in building a better verification infrastructure and methodologies across the company for better productivity.
* Collaborate with EDA companies to develop, evaluate and deploy new technologies and flows, which can bring more productivity to verification processes.
* Nurture fresh talent through regular on and off-the-job trainings (on flows, methodologies, tools, technology).
* Conduct regular reviews of the completed verification projects with the goal of trying to improve the verification process after every tapeout through methodology enhancements.

Categories: Semiconductors

Cadence: Plan verification to avoid mistakes!


Apurva Kalia

Following Mentor Graphics, Cadence Design Systems Inc. has entered the verification debate. ;)  I met Apurva Kalia, VP R&D – System & Verification Group, Cadence Design Systems. In a nutshell, he advised that there needs to be proper verification planning in order to avoid mistakes. First, let’s try to find out the biggest verification mistakes.

Top verification mistakes
Kalia said that the biggest verification mistakes made today are:
* Verification engineers do not define a structured notion of verification completeness.
* Verification planning is not done up front and is carried out as verification is going along.
* A well-defined reusable verification methodology is not applied.
* Legacy tools continue to be used for verification; new tools and technologies are not adopted.

In that case, why do some companies STILL not know how to verify a chip?

He added: “I would not describe the situation as companies not knowing how to verify a chip. Instead, I think a more accurate description of the problem is that the verification complexity has increased so much that companies do not know how to meet their verification goals.

“For example, the number of cycles needed to verify a current generation processor – as calculated by traditional methods of doing verification – is too prohibitive to be done in any reasonable timeframe using legacy verification methodologies. Hence, new methodologies and tools are needed. Designs today need to be verified together with software. This also requires new tools and methodologies. Companies are not moving fast enough to define, adopt and use these new tools and methodologies thereby leading to challenges in verifying a chip.”
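To give a feel for why the "traditional methods" Kalia mentions blow up, here is a tiny, illustrative Python calculation of brute-force state coverage; the flip-flop count and simulation rate are arbitrary assumptions, not Kalia's numbers.

# Illustrative only: even a modest block with a few hundred state bits has far
# too many states to visit exhaustively, which is why structured methodologies
# and coverage-driven approaches are needed instead.

import math

flops = 200                       # assumed state bits in a small block
states = 2 ** flops               # upper bound on reachable states
cycles_per_second = 1e9           # an optimistic, emulation-class rate
seconds_per_year = 3600 * 24 * 365
years = states / cycles_per_second / seconds_per_year
print(f"Visiting every state once would take ~1e{math.log10(years):.0f} years")

The point is not the exact figure but the order of magnitude: exhaustive simulation is hopeless, so completeness has to be defined and planned, which is exactly Kalia's argument.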

Addressing challenges
How are companies trying to address the challenges?

Companies are trying to address the challenges in various ways:
* Companies at the cutting edge of designs and verification are indeed trying to adopt structured verification methodologies to address these challenges.

* Smaller companies are trying to address these challenges by outsourcing their verification to experts and by hiring more verification experts.

* Verification acceleration and prototyping solutions are being adopted to get faster verification, which allows companies to do more verification in the same amount of time.

* Verification environment re-use helps to cut down the time required to develop verification environments.

* Key requirements of SoC integration and verification—including functionality, compliance, power, performance, etc.—are hardware/software debug efficiency, multi-language verification, low power, mixed signal, fast time to debug, and execution speed.

Cadence has the widest portfolio of tools to help companies meet verification challenges, including:

Incisive Enterprise Manager, which provides hierarchical verification technology for multiple IPs, interconnects, hardware/software, and plans to improve management productivity and visibility;

The recently launched vManager solution, a verification planning and management solution enabled by client/server technology to address the growing verification closure challenge driven by increasing design size and complexity;

Incisive Enterprise Verifier, which delivers dual power from tightly integrated formal analysis and simulation engines; and

Incisive Enterprise Simulator, which provides the most comprehensive IEEE language support with unique capabilities supporting the intent, abstraction, and convergence needed to speed silicon realization.

Are companies building an infrastructure that gets them a business advantage? Yes, companies are realizing the problems. It is these companies that are the winners in managing today’s design and verification challenges, he said.

Good verification
When should good verification start?

Kalia noted: “Good verification should start right at the time of the high level architecture of the design. A verification strategy should be defined at that time, and an overall verification plan should be written at that time. This is where a comprehensive solution like Incisive vManager can help companies manage their verification challenges by ensuring that SoC developers have a consistent methodology for design quality enhancements.”

Are folks mistaking by looking at tools and not at the verification process itself?

He added that the right tools and methodology are needed to resolve today’s verification challenges. Users need to work on defining verification methodologies and at the same time look at the tools that are needed to achieve verification goals.

Verification planning
Finally, there’s verification planning! What should be the ‘right’ verification path?

Verification planning needs to include:

* A formal definition of verification goals;
* A formal definition of coverage goals at all levels – starting with code coverage all the way to functional coverage;
* Required resources – human and compute;
* Verification timelines;
* All the verification tools to be used for verification; and
* Minimum and maximum signoff criteria.
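Pulling those items together, here is one way a team might capture such a plan as a machine-readable skeleton. This is a hypothetical structure, sketched in Python purely for illustration; it is not a Cadence or vManager format, and the names and targets are assumptions.

# A minimal, hypothetical verification-plan skeleton covering the items above.
verification_plan = {
    "verification_goals": ["all IP-level features exercised",
                           "SoC boot and power-on reset verified"],
    "coverage_goals": {"code_coverage": 0.95, "functional_coverage": 0.90},
    "resources": {"verification_engineers": 6, "compute_cores": 500},
    "timeline_weeks": {"testbench_bringup": 4, "regression_and_closure": 12},
    "tools": ["simulation", "formal", "emulation"],
    "signoff_criteria": {"minimum": "must-have features pass, no open showstoppers",
                         "maximum": "all coverage goals met across all corners"},
}

# Example check: report whether the stated coverage goals have been reached.
achieved = {"code_coverage": 0.97, "functional_coverage": 0.88}
for metric, target in verification_plan["coverage_goals"].items():
    status = "met" if achieved[metric] >= target else "NOT met"
    print(f"{metric}: target {target:.0%}, achieved {achieved[metric]:.0%} -> {status}")

Writing the plan down in some explicit, checkable form, whatever the actual format, is what makes the "formal definition of goals" and signoff criteria above enforceable rather than aspirational.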

Are we about to reach the end of Moore’s Law?


Here is the concluding part of my discussion with Sam Fuller, CTO, Analog Devices. We discussed the technology aspects of Moore’s Law and ‘More than Moore’, among other things.

Sam Fuller

Are we at the end of Moore’s Law?
First, I asked Fuller: as Gordon Moore himself has suggested, are we about to reach the end of Moore’s Law? What will it mean for personal computing?

Fuller replied: “There is definitely still life left in Moore’s law, but we’re leaving the golden age after the wonderful ride that we have had for the last 40 years. We will continue to make chips denser, but it is becoming difficult to continue to improve the performance as well as lower the power and cost.

“Therefore, as Moore’s law goes forward, more innovation is required with each new generation. As we move from planar CMOS to FinFET (a new multi-gate transistor architecture), and from silicon to more advanced materials, Moore’s law will still have life for the next decade, but we are definitely moving into its final stages.

“For personal computing, there is still a lot of innovation left before we begin to run out of ideas. There will continue to be great advances in smart phones, mobile computing and tablets because software applications are really just beginning to take advantage of the phenomenal power and capacity of today’s semiconductors. The whole concept of ‘Internet of things’ will also throw up plenty of new opportunities.

“As we put more and more sensors in our personal gadgets, in factories, in industries, in infrastructures, in hospitals, and in homes and in vehicles, it will open up a completely new set of applications. The huge amount of data generated out of these sensors and wirelessly connected to the Internet will feed into the big data and analytics. This would create a plethora of application innovations.”

What’s happening with planar scaling?
The planar scaling opportunity – 90nm – 65nm – 45nm – 22nm – 20nm – 14/18nm – is starting to get difficult and probably won’t work at 12nm, for purely physics reasons. What is Analog Devices’ take on this?

Fuller said: “You are right! We have been going from 45nm down to lower nodes, and it’ll probably go down to 10nm, but we are beginning to run into some fundamental physics issues here. After all, it’s a relatively finite number of atoms that make up the channels in these transistors. So, you’re going to have to look at innovations beyond simply going down to finer dimensions.

“There are FinFETs and other ways that can help move you into the third dimension. We’re getting to a point where we can put a lot of complexity and a number of functions on a single die. We have moved beyond purely digital design to having more analog and mixed signal components in the same chip. There are also options such as stacked dies and multiple dies.

“Beyond integration on a single chip, Analog Devices leads in advanced packaging technologies for System in a Package (SiP), where sensors, digital and analog/mixed-signal components are all in a single package, as the individual components would typically use different technology nodes and it might not be practical to do such integration on a single die.

“So, the challenge often gets described as “More than Moore”, which is going beyond Moore’s law, bringing those capabilities to do analog processing as well as digital and then integrating sensors for temperature sensing, pressure sensing, motion sensing and a whole range of sensors integrated for enabling the ‘Internet of Things’.

“At Analog Devices, we have the capability in analog as well as digital, and having worked for over 20 years on MEMS devices, we are particularly well positioned as we get into ‘More than Moore’.”
Read more…

3D remains a central theme for Applied in 2014!

February 10, 2014 Comments off

Om Nalamasu

Following a host of forecasts for 2014, it is now the turn of Applied Materials with its forecast for the year. First, I asked Om Nalamasu, senior VP and CTO, Applied Materials, about the outlook for the global semiconductor industry in 2014.

Semicon outlook 2014
He said that Gartner expects the semiconductor industry to grow in mid-single digits to over $330 billion in 2014.

“In our industry – the semiconductor wafer fab equipment sector – we are at the beginning of major technology transitions, driven by FinFET and 3D NAND. Based on a wide range of analyst projections, wafer fab equipment investment is expected to be up 10-20 percent in 2014. We expect to see a year-over-year increase in foundry, NAND, and DRAM investment, with logic and other spending flat to down.”

Five trends for 2014
Next, what are the top five trends likely to rule the industry in 2014?

Nalamasu said that the key trends continuing to drive technology in 2014 and beyond include 3D transistors, 3D NAND, and 3D packaging. 3D remains a central theme. In logic, foundries will ramp to 20nm production and begin the early stages of the transition to 3D FinFET transistors.

With respect to 3D NAND, some products will be commercially available, but most memory manufacturers plan to cross over from planar NAND to vertical NAND starting this year. In wafer-level packaging, critical mechanical and electrical characterization work is bringing the manufacturability of 3D-integrated stacked chips closer to reality.

These device architecture inflections require significant advances in precision materials engineering. This spans such critical steps as precision film deposition, precision materials removal, materials modification and interface engineering. Smaller features and atomic-level thin films also make interface engineering and process integration more critical than ever.

Driving these technology innovations are mobility applications, which need high-performance, low-power semiconductors. Smartphones, smart watches, tablets and wearable gadgets continue to propel industry growth. Our customers are engaged in a fierce battle for mobility leadership as they race to be first to market with new products that improve the performance, battery life, form factor and user experience of mobile devices.

How is the global semiconductor industry managing the move to the sub 20nm era?

He said that extensive R&D work is underway to move the industry into the sub-20nm realm. For the 1x nodes, more complex architectures and structures as well as new higher performance materials will be required.

Some specific areas where changes and technology innovations are needed include new hard mask and channel materials, selective material deposition and removal, patterning, inspection, and advanced interface engineering. For the memory space, different memory architectures like MRAM are being explored.

FinFETs in 20nm!
By the way, have FinFETs gone to 20nm? Are those looking for power reduction now benefiting?

FinFET transistors are in production in the most advanced 2x designs by a leading IDM, while the foundries are in limited R&D production. In addition to the disruptive 3D architecture, FinFET transistors incorporate new materials such as high-k metal gate (HKMG) that help to drastically reduce power leakage.

Based on public statements, HKMG FinFET designs are expected to deliver more than a 20 percent improvement in speed and a 30 percent reduction in power consumption compared to 28nm devices. These are significant advantages for mobile applications.

Status of 3D ICs
Finally, what’s the status with 3D ICs? How is Applied helping with true 3D stacking integration?

Nalamasu replied that vertically stacked 3D ICs are expected to enter into production first for niche applications. This is due primarily to the higher cost associated with building 3D wafer-level-packaged (WLP) devices. While such applications are limited today, Applied Materials expects greater utilization and demand to grow in the future.

Applied is an industry leader in WLP, having spearheaded the industry’s development of through-silicon via (TSV) technology. Applied offers a suite of systems that enable customers to implement a variety of packaging techniques, from bumping to redistribution layer (RDL) to TSV. Because of its work in this area, Applied is strongly positioned to support customers as they begin to adopt this technology.

To manufacture a robust integrated 3D stack, several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics.

Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs high quality laser etching to replace mechanical drilling techniques.

FinFETs delivering on promise of power reduction: Synopsys

February 1, 2014 Comments off

Here is the concluding part of my conversation with Synopsys’ Rich Goldman on the global semiconductor industry.

Rich Goldman

Global semicon in sub 20nm era
How is the global semicon industry performing after entering the sub 20nm era? Rich Goldman, VP, corporate marketing and strategic alliances, Synopsys, said that driving the fastest pace of change in the history of mankind is not for the faint of heart. Keeping up with Moore’s Law has always required significant investment and ingenuity.

“The sub-20nm era brings additional challenges in device structures (namely FinFETs), materials and methodologies. As costs rise, a dwindling number of semiconductor companies can afford to build fabs at the leading edge. Those thriving include foundries, which spread capital expenses over the revenue from many customers, and fabless companies, which leverage foundries’ capital investment rather than risking their own. Thriving, leading-edge IDMs are now the exception.

“Semiconductor companies focused on mobile and the Internet of Things are also thriving as their market quickly expands. Semiconductor companies who dominate their space in such segments as automotive, mil/aero and medical are also doing quite well, while non-leaders find rough waters.”

Performance of FinFETs
Have FinFETs gone to below 20nm? Also, are those looking for power reduction now benefiting?

He added that 20nm was a pivotal point in advanced process development. The 20nm process node’s new set of challenges, including double patterning and very leaky transistors due to short channel effects, negated the benefits of transistor scaling.

To further complicate matters, the migration from 28nm to 20nm lacked the performance and area gains seen with prior generations, making it economically questionable. While planar FET may be nearing the end of its scalable lifespan at 20nm, FinFETs provide a viable alternative for advanced processes at emerging nodes.

The industry’s experience with 20nm paved the way for an easier FinFET transition. FinFET processes are in production today, and many IC design companies are rapidly moving to manufacture their devices on the emerging 16nm and 14nm FinFET-based process geometries due to the compelling power and performance benefits. Numerous test chips have taped out, and results are coming in.

“FinFET is delivering on its promise of power reduction. With 20nm planar FET technologies, leakage current can flow across the channel between the source and the drain, making it very difficult to completely turn the transistor off. FinFETs provide better channel control, allowing very little current to leak when the device is in the “off” state. This enables the use of lower threshold voltages, resulting in better power and performance. FinFET devices also operate at a lower nominal voltage supply, significantly improving dynamic power.”
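The lower-supply-voltage point follows from the standard dynamic power relation, where switching power scales with the square of the supply voltage. The short Python sketch below shows that scaling; the two supply voltages are illustrative assumptions, not Synopsys figures.

# Dynamic switching power scales roughly as alpha * C * Vdd^2 * f, so with
# activity, capacitance and frequency held constant, the ratio between two
# supply voltages is simply (Vnew / Vold)^2. Voltages here are hypothetical.

def dynamic_power_ratio(v_new, v_old):
    return (v_new / v_old) ** 2

v_planar, v_finfet = 0.9, 0.8   # hypothetical nominal supplies, in volts
print(f"Dynamic power at {v_finfet} V is about "
      f"{dynamic_power_ratio(v_finfet, v_planar):.0%} of that at {v_planar} V")

Even a 0.1 V reduction in nominal supply, in this illustrative case, trims dynamic power by roughly a fifth, which is why the lower operating voltage of FinFET devices matters so much for mobile parts.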
Read more…
