Search Results

Keyword: ‘performance’

ST intros STM32F4 series high-performance Cortex-M4 MCUs


STMicroelectronics has introduced the STM32F429/439 and STM32F401, new members of its STM32F4 series of high-performance Cortex-M4 microcontrollers (MCUs).

On the growth drivers for general-purpose (GP) MCUs, market growth is being driven by the accelerating migration to 32-bit platforms. ST was the first to bring an ARM Cortex-based solution to market, and now targets the leadership position in 32-bit MCUs. At the top of the STM32 portfolio sit high-performance MCUs with DSP and FPU, delivering up to 608 CoreMark and up to 180 MHz/225 DMIPS.

Among the STM32F4 product lines, the STM32F429/439 offers 180 MHz operation, 1 to 2 MB of Flash and 256 KB of SRAM. The entry-level STM32F401 offers 84 MHz operation, 128 to 256 KB of Flash and 64 KB of SRAM.

The STM32F401 provides the best balance of performance, power consumption, integration and cost, while the STM32F429/439 provides more resources, more performance and more features. There is close pin-to-pin and software compatibility across the STM32F4 series and the wider STM32 platform.

The STM32F429/439 high-performance MCUs with DSP and FPU offer:
• World’s highest-performance Cortex-M MCU executing from embedded Flash: a Cortex-M4 core with FPU running at up to 180 MHz/225 DMIPS.
• High integration thanks to ST’s 90nm process (the same platform as the F2 series): up to 2-MB Flash/256-KB SRAM.
• Advanced connectivity: USB OTG, Ethernet, CAN, SDRAM interface, LCD-TFT controller.
• Power efficiency, thanks to ST’s 90nm process and voltage scaling.

In terms of performance, the STM32F4 provides up to 180 MHz/225 DMIPS with the ART Accelerator, a CoreMark score of up to 608, and an ARM Cortex-M4 core with floating-point unit (FPU).
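As a side note (my illustration, not from ST’s announcement): on a Cortex-M4F core like the STM32F4’s, the FPU is disabled at reset and must be enabled before any floating-point instruction executes. A minimal sketch, assuming the CMSIS device header for an STM32F4 part:

#include "stm32f4xx.h"  /* assumption: CMSIS device header for an STM32F4 part */

/* Grant full access to coprocessors CP10 and CP11 (the FPU) before any
   float or DSP instruction runs; this is the standard sequence from
   ARM's Cortex-M4 documentation. */
static void fpu_enable(void)
{
    SCB->CPACR |= (3UL << (10 * 2)) | (3UL << (11 * 2)); /* CP10/CP11: full access */
    __DSB();  /* complete outstanding memory accesses */
    __ISB();  /* flush the pipeline so the new setting takes effect */
}

In practice, toolchain startup code usually does this for you when building with hard-float options (e.g., -mfpu=fpv4-sp-d16 -mfloat-abi=hard under GCC).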

The STM32F427/429 highlights include:
• 180 MHz/225 DMIPS.
• Dual-bank Flash (in both 1-MB and 2-MB densities), 256-KB SRAM.
• SDRAM Interface (up to 32-bit).
• LCD-TFT controller supporting up to SVGA (800×600).
• Better graphics with the ST Chrom-ART Accelerator (see the sketch after this list):
– 2× more performance vs. the CPU alone
– Offloads the CPU for graphical data generation
* Raw data copy
* Pixel format conversion
* Image blending (image mixing with some transparency).
• 100 μA typ. in Stop mode.
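As promised above, here is a minimal sketch of what offloading a blend to the Chrom-ART Accelerator (the DMA2D peripheral) looks like with the STM32Cube HAL. Treat it as an illustration rather than ST’s reference code; constant names vary slightly across Cube versions:

#include "stm32f4xx_hal.h"  /* assumption: STM32Cube HAL with the DMA2D driver */

/* Blend a foreground ARGB8888 image over a background one and write the
   result to a destination buffer, offloading copy/convert/blend from the CPU. */
HAL_StatusTypeDef blend_images(uint32_t fg, uint32_t bg, uint32_t dst,
                               uint32_t width, uint32_t height)
{
    static DMA2D_HandleTypeDef hdma2d;

    __HAL_RCC_DMA2D_CLK_ENABLE();                 /* clock the peripheral */

    hdma2d.Instance          = DMA2D;
    hdma2d.Init.Mode         = DMA2D_M2M_BLEND;   /* memory-to-memory blend */
    hdma2d.Init.ColorMode    = DMA2D_ARGB8888;    /* output pixel format */
    hdma2d.Init.OutputOffset = 0;

    /* Layer 1 is the foreground, layer 0 the background */
    hdma2d.LayerCfg[1].InputColorMode = DMA2D_INPUT_ARGB8888;
    hdma2d.LayerCfg[1].AlphaMode      = DMA2D_NO_MODIF_ALPHA; /* keep pixel alpha */
    hdma2d.LayerCfg[1].InputAlpha     = 0xFF;
    hdma2d.LayerCfg[1].InputOffset    = 0;
    hdma2d.LayerCfg[0] = hdma2d.LayerCfg[1];

    if (HAL_DMA2D_Init(&hdma2d) != HAL_OK) return HAL_ERROR;
    HAL_DMA2D_ConfigLayer(&hdma2d, 0);
    HAL_DMA2D_ConfigLayer(&hdma2d, 1);

    /* Start the blend and poll for completion (interrupt mode also exists) */
    if (HAL_DMA2D_BlendingStart(&hdma2d, fg, bg, dst, width, height) != HAL_OK)
        return HAL_ERROR;
    return HAL_DMA2D_PollForTransfer(&hdma2d, 100 /* ms timeout */);
}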

Some real-life applications of the STM32F4 include the smart watch, where it is the main application controller or sensor hub; smartphones, tablets and monitors, where it is the sensor hub for MEMS and optical touch; and industrial/home-automation panels, where it is the main application controller. These MCUs can also be used in Wi-Fi modules for the Internet of Things (IoT), in appliances, door cameras, home thermostats, etc.

These MCUs offer outstanding dynamic power consumption thanks to ST’s 90nm process, as well as low leakage current made possible by advanced design techniques and architecture (voltage scaling). ST offers a large range of evaluation boards and Discovery kits, and the STM32F4 is supported by new firmware libraries. SEGGER and ST have signed an agreement around the emWin graphical stack; the resulting solution is called STemWin.
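For flavor, a bare-bones STemWin program might look like the sketch below. This assumes the board support package has already configured clocks, SDRAM and the LCD-TFT controller (STemWin on STM32 also expects the hardware CRC clock to be enabled); GUI_Init(), GUI_DispStringAt() and GUI_Delay() are standard emWin calls:

#include "GUI.h"  /* assumption: STemWin (SEGGER emWin) headers on the include path */

int main(void)
{
    /* Clocks, SDRAM, CRC and the display driver are assumed to be set up
       by the board support package before this point. */
    GUI_Init();                                      /* start the emWin core */
    GUI_DispStringAt("Hello from STemWin", 10, 10);  /* draw text at (10, 10) */

    for (;;) {
        GUI_Delay(100);  /* yield to emWin's background processing */
    }
}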

How semicon firms can achieve high performance — Part II


Friends, as promised, here is the second part of the discussion I had with Accenture’s Scott Grant, based on Accenture’s recent study: Managing Through Challenging Times!

4. Reducing the time to cash for new products.
When companies industrialize a market concept and pursue design-win opportunities, we tend to see three critical components involved: a) maintaining traceability of requirements from market analysis through the final manufacturing build plan; b) leaders who apply consistent lifecycle management across the product development flow; and c) IP management with integrated roadmap and portfolio capabilities.

“Firms at times are not able to convert concepts to cash quickly. The process to integrate them has several gaps including innovation lifecycles, conversion of R&D concepts to volume products, and ability to optimize the engineering capacity constraints within their P&Ls.”

Product lifecycle management, portfolio and market analytics, and engineering skills/human-resource management help to address these gaps. A portfolio management and roadmap planning process is a must. Once these are in place, semiconductor companies can quickly map their plans to customer and market insights.

5. Sharpening customer focus through more in-depth and accurate customer insight.
Most firms won’t survive if they are unable to gain rapid adoption of their product offering. From our experience, high-performing companies build detailed customer usage models and insight into end-device markets early in their R&D process.

The challenge many find is that without this baseline of understanding it is difficult to convert concepts into cash once the end-product is delivered to the market.

Many of these insights are available from point-of-sale trends; the data may reside with either an OEM (PC, handset, etc.) or a distributor. High performers have strengthened relationships with their collaborators and customers to gain access to this data. They also build a “Trusted Advisor” relationship, constructing scenarios for each end market to better predict what their end-customers may desire in features or functions.

It is difficult for a semicon firm to know how a product will ultimately be used; that knowledge is really the beginning of gaining insight into utilization, the consumer, and the usage model that should be employed. So a semicon firm should study carefully how its products can be used in the market. User behavior is crucial, and companies that don’t understand it may be missing out.

6. Pursuing alliances to share the cost burden of new product development.
The point here is for semiconductor companies to take a strategic view and look in the right places to pursue alliances. Pursuing alliances can have a lot of impact: companies can absolutely share the cost burden, but doing so can also affect the operating model.

Other recommendations for the industry
What other recommendations does Accenture have for the semiconductor industry going forward?

Grant recommends that the industry focus on achieving high-performance business results, including sustained leadership in financial metrics such as return to shareholders, profits, and revenue growth.

“Recognize and adapt to the reality that we are now living in a multi-polar world. This is a world in which a growing number of emerging countries and economies are becoming more financially powerful, competitive and relevant in competing against the traditionally more developed parts of the world such as North America, Asia and Europe. This means there are a multitude of growing business opportunities in these emerging nations for semiconductor companies to capitalize on.

“Proactively invest during a recession rather than pull back investments and just wait until the economy pulls out of this down cycle. History has shown that those companies that invest the most perform better in the years after the market recovers.”

Companies repeating mistakes?
Now, these recessions have a bad habit of occurring cyclically! Why, then, do semiconductor (and other) companies tend to repeat the same mistakes again and again?

According to Grant, one reason is they tend to indiscriminately and rapidly cut costs without thinking more strategically and carefully about what costs to cut. “They tend to lay off workers who they need when the market recovers, but they can’t hire them back because those employees have moved on with their careers. These semiconductor companies don’t think hard enough about what employees and assets they will need when the market recovers.”

Layoffs? What about design and development?
Finally, are layoffs the only solution to combat recession? What happens to design and development?

Grant agrees that layoffs are absolutely not the only solution to combat recession. Investing in core competencies is crucial, and spending less time and effort on non-core capabilities is important.

“Employee morale tends to fall within design and development during a recession because they see some of their colleagues lose their jobs and they take on more work. And they lose more control of what work they are assigned to do. And they’re less secure about their job security.

“But, much of this can be alleviated by giving employees a chance to share their ideas and concerns at regularly scheduled Town Hall meetings, to communicate with them regularly and candidly, and to focus them on achieving high performance business results.”

CONCLUDED

Categories: Accenture, Scott Grant

How semicon firms can achieve high performance by simplifying business!


Engineers in the global semiconductor industry have typically had considerable control over their work. Processes are straightforward, sequential, and logical, and satisfying for an honest day’s work.

However, due to the ongoing global economic downturn, many of these engineers are rapidly losing control of more of their professional lives. Caught like the rest of the world in a recession, they are losing more control of what work they are assigned to do, how they do it, in what sequence, by when and with whom.

Given these inter-related problems, many semiconductor companies need to make rapid and fundamental changes in their business operations, strategies and workforce management practices to emerge from this downturn, and for years beyond, as high performers.

Once this recession ends, these people will be entering a market with a different landscape than the market that existed when the downturn began. They need to figure out how to restart their businesses, regain their footing and connect to a new purpose.

They need to address the so-called ‘soft’ aspects of business, such as the engineers who design chips and how they feel. It’s time for them to pay more attention to the little things that may seem innocuous but are actually central to achieving high performance.

Thanks to Charlie Hartley, Accenture, US, I was able to get hold of Accenture’s recent study: Managing Through Challenging Times!! Quite an interesting read!

Naturally, it led to a conversation with Scott Grant, Executive Global Lead of Accenture’s Semiconductor Operating Unit, who led the research and analysis behind this newly released Accenture report on these issues and its recommended solutions.

Accenture’s report has seven suggestions or recommendations.

1. Divesting the business of unproductive assets.
2. Infusing a higher degree of operational excellence into the business.
3. Maintaining morale and energy in the workforce, especially in the key area of innovation.
4. Reducing the time to cash for new products.
5. Sharpening customer focus through more in-depth and accurate customer insight.
6. Pursuing alliances to share the cost burden of new product development.
7. Acquiring key assets.

Let’s take a look at those, one by one!

1. Divesting the business of unproductive assets.
From Accenture’s perspective, it has become evident during the past few years that a growing number of the top 20 semiconductor companies are fabless. That trend will continue, mainly because fabless companies have more competitive cost structures than semiconductor manufacturing companies, which incur high fixed-asset costs for their operations. Accenture’s clients are seeking to understand the business operating model that best fits their desired position in the market. Our assessment typically points to a leaner product portfolio.

The first thing we look at, in depth, is true cost. Traditionally, the industry looks at cost-per-wafer metrics; Accenture studies what the hidden costs are. We look at Total Cost to Land, including NPI re-spin costs, complete organization costs, and advanced manufacturing process costs, plus the traditional material and labor costs. The goal is a fair comparison with an external manufacturing model that reveals key improvement opportunities.

We also look for an integrated roadmap for manufacturing, design technology and intellectual property (IP). There are opportunities to make better use of IP investments across both leading products and derivatives, reducing the cost of product ramp/readiness. To divest unproductive assets, high-performing firms build an accurate and balanced cost baseline for comparison.

In addition, we look at strategic sourcing. Semiconductor companies often ask how they can lower costs; sometimes this has an adverse effect on material quality. Strategic sourcing is an important factor in balancing both sides of this equation. We suggest that our clients compare costs objectively against their peer groups and external suppliers. Many times we see lower direct material costs through the use of external manufacturing models, because of the manufacturing suppliers’ economies of scale.

2. Infusing a higher degree of operational excellence into the business.
Traditionally, semiconductor companies were all about operational excellence. In the late 1990s and early 2000s, the industry was about R&D excellence. Now, we see operational excellence in terms of sales and marketing: the number of feet on the ground, and the amount of time invested per design win. Accenture strives to understand how companies can better integrate sales operations into the manufacturing and production operations process.

Given the focus on external manufacturing, operational excellence is now also being applied to the IP ecosystem. IP management is critical in the current industry landscape; semiconductor companies need a compelling argument to differentiate their IP. IP management and external manufacturing management have become the crux of strategy. Companies see the importance of design growing, and they see their clients’ requests shifting towards a focus on sales operations and the IP ecosystem.

We see a few shifts in sales operations. Many of Accenture’s clients are challenged when they take emerging products into certain regional and local markets. One key challenge is the ability to maintain consistency in quoting, contracting and ordering. The other challenge is training and investing in sales. Salespeople are being asked to do more: they seem to spend 45 percent of their time on non-sales activities such as administrative tasks. They need to spend much more of their time on sales activities and have others handle more of the administration.

When Accenture examines the sales cycles of semiconductor companies, we tend to see limited performance metrics being tracked. These companies tend to adhere to regional sales models, and the complexity arises in staying consistent across quoting, contracting and ordering.

3. Maintaining morale and energy in the workforce, especially in the key area of innovation.
One of the key decisions during a downturn is workforce reduction. For employees remaining with a company after reductions, it is key for the company to reinforce their connection to the new strategy, and to readjust from a training perspective to prepare them for innovation.

Investing in innovation is a huge priority. Among the transitions Accenture sees in workforce reductions is engineers feeling a loss of control. To maintain morale and energy, semiconductor executives need to keep communicating strategic objectives to all employees.

Sometimes amid the change, a semiconductor company needs to ask whether it has thought beyond the change event (portfolio, workforce or facility reductions) and also focused on the complete organizational transition. This is a process of communication — to help employees reconnect with their companies. Getting employees to understand, adapt and connect to the new direction takes a lot longer, and it also impacts productivity. Yet it must be emphasized.

Part II continues tomorrow. Stay tuned, folks!

Measuring performance of carbon nanotubes as building blocks for ultra-tiny computer chips of the future


There is this really great story from IBM Research Labs that I simply have to seed here for my readers.

IBM’s scientists have created a method to measure the performance of carbon nanotubes as building blocks for ultra-tiny computer chips of the future. Of course, you can also read it on IBM Research Lab’s site as well as on CIOL’s semicon site.

IBM scientists have measured the distribution of electrical charges in tubes of carbon that measure less than 2nm in diameter, 50,000 times thinner than a strand of human hair.

This novel technique, which relies on the interactions between electrons and phonons, provides a detailed understanding of the electrical behavior of carbon nanotubes, a material that shows promise as a building block for much smaller, faster and lower power computer chips compared to today’s conventional silicon transistors.

Phonons are the atomic vibrations that occur inside a material, and they can determine the material’s thermal and electrical conductivity. Electrons carry and produce the current. Both are important features of materials that can be used to carry electrical signals and perform computations.

The interaction between electrons and phonons can release heat and impede electrical flow inside computer chips. By understanding the interaction of electrons and phonons in carbon nanotubes, the researchers have developed a better way to measure their suitability as wires and semiconductors inside of future computer chips.

In order to make carbon nanotubes useful in building logic circuitry, scientists are pushing to demonstrate their high speed, high packing density and low power consumption capabilities as well as the ability to make them viable for potential mass production.

Dr. Phaedon Avouris, IBM Fellow and lead researcher for IBM’s carbon nanotube efforts, said: “The success of nanoelectronics will largely depend on the ability to prepare well characterized and reproducible nano-structures, such as carbon nanotubes. Using this technique, we are now able to see and understand the local electronic behavior of individual carbon nanotubes.”

To date, researchers have been able to build carbon nanotube transistors with superior performance, but have been challenged with reproducibility issues. Carbon nanotubes are sensitive to environmental influences.

For example, their properties can be altered by foreign substances, affecting the flow of electrical current and changing device performance. These interactions are typically local and change the density of electrons in the various devices of an integrated circuit, and even along a single nanotube.

Set up strong methodology teams to create better verification infrastructure: Synopsys


Arindam Ghosh

This is the third installment on verification, now taken up by Synopsys. Asked about the biggest verification mistakes today, Arindam Ghosh, director – Global Technical Services, Synopsys India, listed these:

* Spending no time on verification planning (not documenting what needs to be verified) and focusing more on running simulations or on execution.
* No or very low investment in building better verification environments (based on best/new methodologies and best practices); instead maintaining older verification environments.
* Compromising on verification completeness because of tape out pressures and time-to-market considerations.

Would you agree that many companies STILL do not know how to verify a chip?

He said that it could be true for smaller companies or start-ups, but most of the major semiconductor design engineers know about the better approaches/methodologies to verify their chips. However, they may not be investing in implementing the new methodologies for multiple reasons and may instead continue to follow the traditional flows.

One way to address these mistakes would be to set up strong methodology teams to create a better verification infrastructure for future chips. However, few companies are doing this.

Are companies realizing this and building an infrastructure that gets them a business advantage? He added that some companies do realize this and are investing in building a better infrastructure (in terms of better methodology and flows) for verification.

When should good verification start?
When should good verification start: after the design is done, or as you are designing and architecting the design environment? Ghosh said that good verification starts as soon as you start designing and architecting the design. Verification leads should start discussing the verification environment components with the lead architect, and also start writing the verification plan.

Are folks making a mistake by looking at tools and not at the verification process itself? According to him, tools play a major role in the effectiveness of any verification process, but there is still a lot of scope for methodology improvements beyond the tools.

What needs to go into verification planning, given that the ‘right’ verification path is fraught with complexities? Ghosh said that there is no single, foolproof recipe for a ‘right’ verification path. It depends on multiple factors, including whether the design is a new product or a derivative, the design’s application, etc. But yes, it is very important to do comprehensive verification planning before starting the verification process.

Synopsys says a comprehensive, unified and integrated verification environment is required for today’s revolutionary SoCs, and would offer a fundamental shift forward in productivity, performance, capacity and functionality. Synopsys’ Verification Compiler provides the software capabilities, technologies, methodologies and VIP required for the functional verification of advanced SoC designs in one solution.

Verification Compiler includes:
* Better capacity and compile and runtime performance.
* Next-generation static and formal technology delivering performance improvement and the capacity to analyze a complete SoC (Property checking, LP, CDC, connectivity).
* Comprehensive low power verification solution.
* Verification planning and management.
* Next-generation verification IP and a deep integration between VIP and the simulation engine, which in turn can greatly improve productivity. The constraint engine is tuned for optimal performance with its VIP library. It has integrated debug solutions for VIP, so one can do protocol-level analysis and transaction-based analysis with the rest of the testbench.
* Support for industry standard verification methodologies.
* X-propagation simulation with both RTL and low power simulations.
* Common debug platform with better debug technology and new capabilities: tight integrations with simulation, emulation, testbench, transaction debug, power-aware debug, hw/sw debug, formal, VIP and coverage.

Top five recommendations for verification
What would be Synopsys’ top five recommendations for verification?

* Spend a meaningful amount of time and effort on verification planning before execution.
* Continuously invest in building a better verification infrastructure and methodologies across the company for better productivity.
* Collaborate with EDA companies to develop, evaluate and deploy new technologies and flows, which can bring more productivity to verification processes.
* Nurture fresh talent through regular on and off-the-job trainings (on flows, methodologies, tools, technology).
* Conduct regular reviews of the completed verification projects with the goal of trying to improve the verification process after every tapeout through methodology enhancements.

Categories: Semiconductors

Cadence: Plan verification to avoid mistakes!


Apurva Kalia

Following Mentor Graphics, Cadence Design Systems Inc. has entered the verification debate. ;) I met Apurva Kalia, VP R&D – System & Verification Group, Cadence Design Systems. In a nutshell, he advised that there needs to be proper verification planning in order to avoid mistakes. First, let’s try to find out the biggest verification mistakes.

Top verification mistakes
Kalia said that the biggest verification mistakes made today are:
* Verification engineers do not define a structured notion of verification completeness.
* Verification planning is not done up front and is carried out as verification is going along.
* A well-defined reusable verification methodology is not applied.
* Legacy tools continue to be used for verification; new tools and technologies are not adopted.

In that case, why do some companies STILL not know how to verify a chip?

He added: “I would not describe the situation as companies not knowing how to verify a chip. Instead, I think a more accurate description of the problem is that the verification complexity has increased so much that companies do not know how to meet their verification goals.

“For example, the number of cycles needed to verify a current generation processor – as calculated by traditional methods of doing verification – is too prohibitive to be done in any reasonable timeframe using legacy verification methodologies. Hence, new methodologies and tools are needed. Designs today need to be verified together with software. This also requires new tools and methodologies. Companies are not moving fast enough to define, adopt and use these new tools and methodologies thereby leading to challenges in verifying a chip.”

Addressing challenges
How are companies trying to address the challenges?

Companies are trying to address the challenges in various ways:
* Companies at the cutting edge of designs and verification are indeed trying to adopt structured verification methodologies to address these challenges.

* Smaller companies are trying to address these challenges by outsourcing their verification to experts and by hiring more verification experts.

* Verification acceleration and prototyping solutions are being adopted to get faster verification, allowing companies to do more verification in the same amount of time.

* Verification environment re-use helps to cut down the time required to develop verification environments.

* Key requirements of SoC integration and verification—including functionality, compliance, power, performance, etc.—are hardware/software debug efficiency, multi-language verification, low power, mixed signal, fast time to debug, and execution speed.

Cadence has the widest portfolio of tools to help companies meet verification challenges, including:

Incisive Enterprise Manager, which provides hierarchical verification technology for multiple IPs, interconnects, hardware/software, and plans to improve management productivity and visibility;

The recently launched vManager solution, a verification planning and management solution enabled by client/server technology to address the growing verification closure challenge driven by increasing design size and complexity;

Incisive Enterprise Verifier, which delivers dual power from tightly integrated formal analysis and simulation engines; and

Incisive Enterprise Simulator, which provides the most comprehensive IEEE language support with unique capabilities supporting the intent, abstraction, and convergence needed to speed silicon realization.

Are companies building an infrastructure that gets them a business advantage? Yes, companies are realizing the problems. It is these companies that are the winners in managing today’s design and verification challenges, he said.

Good verification
When should good verification start?

Kalia noted: “Good verification should start right at the time of the high level architecture of the design. A verification strategy should be defined at that time, and an overall verification plan should be written at that time. This is where a comprehensive solution like Incisive vManager can help companies manage their verification challenges by ensuring that SoC developers have a consistent methodology for design quality enhancements.”

Are folks making a mistake by looking at tools and not at the verification process itself?

He added that the right tools and methodology are needed to resolve today’s verification challenges. Users need to work on defining verification methodologies and, at the same time, look at the tools needed to achieve their verification goals.

Verification planning
Finally, there’s verification planning! What should be the ‘right’ verification path?

Verification planning needs to include:

* A formal definition of verification goals;
* A formal definition of coverage goals at all levels – starting with code coverage all the way to functional coverage;
* Required resources – human and compute;
* Verification timelines;
* All the verification tools to be used for verification; and
* Minimum and maximum signoff criteria.

Are we at an inflection point in verification?


Are we at an inflection point in verification today? Delivering the guest keynote at the UVM 1.2 day, Vikas Gautam, senior director, Verification Group, Synopsys, said that today, mobile and the Internet of Things are driving growth. Naturally, SoCs are becoming even more complex. That is also opening up new verification challenges, such as power efficiency, more software, and shrinking time-to-market. There is a need to shift left to be able to meet time-to-market goals.

The goal is to complete your verification as early as possible. There have been breakthrough verification innovations: SystemVerilog brought in a single language. Every 10-15 years, there has been a need to upgrade verification.

Today, many verification technologies are needed, and there is a growing demand for smarter verification: much more upfront verification planning, automated setup and re-use with VIP, and the deployment of new technologies and different debug environments. Current flows are limiting smart verification, with disjointed environments spanning many tools and vendors.

Synopsys has introduced the Verification Compiler. You get access to each required technology, as well as next-gen technology. These technologies are natively integrated. All of this enables 3X verification productivity.

Regarding the next-generation static and formal platforms, there will be capacity and performance for SoCs, compatibility with implementation products and flows, and a comprehensive set of applications. NLP+X-Prop can help find tough wake-up bugs at RTL. Simulation is tuned for the VIP, yielding a ~50 percent runtime improvement.

SystemVerilog has brought in many new changes. Now, we have the Verification Compiler. Verdi is an open platform; it offers VIA, a platform for customizing Verdi, which improves debug efficiency.

Are we about to reach the end of Moore’s Law?


Here is the concluding part of my discussion with Sam Fuller, CTO, Analog Devices. We discussed the technology aspects of Moore’s Law and ‘More than Moore’, among other things.

Are we at the end of Moore’s Law?
First, I asked Fuller: as Gordon Moore himself has suggested, are we about to reach the end of Moore’s Law? And what will it mean for personal computing?

Fuller replied: “There is definitely still life left in Moore’s law, but we’re leaving the golden age after the wonderful ride that we have had for the last 40 years. We will continue to make chips denser, but it is becoming difficult to continue to improve the performance as well as lower the power and cost.

“Therefore, as Moore’s law goes forward, more innovation is required with each new generation. As we move from planar CMOS to FinFET (a new multi-gate transistor architecture), and from silicon to more advanced materials, Moore’s law will still have life for the next decade, but we are definitely moving into its final stages.

“For personal computing, there is still a lot of innovation left before we begin to run out of ideas. There will continue to be great advances in smart phones, mobile computing and tablets because software applications are really just beginning to take advantage of the phenomenal power and capacity of today’s semiconductors. The whole concept of ‘Internet of things’ will also throw up plenty of new opportunities.

“As we put more and more sensors in our personal gadgets, in factories, in industries, in infrastructures, in hospitals, and in homes and in vehicles, it will open up a completely new set of applications. The huge amount of data generated out of these sensors and wirelessly connected to the Internet will feed into the big data and analytics. This would create a plethora of application innovations.”

What’s happening with planar scaling?
The planar scaling opportunity – 90nm – 65nm – 45nm – 22nm – 20nm – 14/18nm – is starting to get difficult and probably won’t work at 12nm, for purely physics reasons. What is Analog Devices’ take on this?

Fuller said: “You are right! We have been going from 45 nm down to lower nodes, and it’ll probably go down to 10 nm, but we are beginning to run into some fundamental physics issues here. After all, it’s a relatively finite number of atoms that make up the channels in these transistors. So, you’re going to have to look at innovations beyond simply going down to finer dimensions.

“There are FinFETs and other ways that can help move you into the third dimension. We’re getting to a point where we can put a lot of complexity and a number of functions on a single die. We have moved beyond purely digital design to having more analog and mixed-signal components on the same chip. There are also options such as stacked dies and multiple dies.

“Beyond integration on a single chip, Analog Devices leads in advanced packaging technologies for System in a Package (SiP), where sensors, digital and analog/mixed-signal components are all in a single package, as the individual components would typically use different technology nodes and it might not be practical to do such integration on a single die.

“So, the challenge often gets described as “More than Moore”, which is going beyond Moore’s law, bringing those capabilities to do analog processing as well as digital and then integrating sensors for temperature sensing, pressure sensing, motion sensing and a whole range of sensors integrated for enabling the ‘Internet of Things’.

“At Analog Devices, we have the capability in analog as well as digital, and having worked for over 20 years on MEMS devices, we are particularly well positioned as we get into ‘More than Moore’.”

Global semicon industry trends in 2014: Analog Devices


Sam Fuller

I recently met Sam Fuller, CTO, Analog Devices, and had an interesting conversation. First, I asked him about the state of the global semicon industry in 2013.

Industry in 2013
He said: “Due to the uncertainties in the global economy over the last couple of years, growth in the global semiconductor industry has been quite modest. Because of that modest growth, there has been a build-up in demand. As global economies become more robust going forward, we expect to see more growth.”

Industry in 2014?
How does Analog Devices see the industry going forward in 2014? What are the five key trends?

He added: “I would talk about the trends more from an ecosystem and applications perspective. Increased capability on a single chip: given all the advances in Moore’s law, the capability of a chip has increased considerably in all dimensions, not just performance, be it the horsepower we see in today’s smartphones or the miniaturization and power consumption of the wearable gadgets on show this year at CES.

“In Analog Devices’ case, as we are focused on high-performance signal processing, we can put more of the entire signal chain on a single die. For our customers, the challenge is to provide their customers with a more capable product, which means a more complex product, but with a simpler interface.

“A classic example is our AD9361 chip, a single-chip wideband radio transceiver for Software Defined Radio (SDR). It is a very capable ASSP (Application Specific Standard Product) as well as an RF front end, with a wide operating frequency range of 70 MHz to 6 GHz.

“This chip, coupled with an all-purpose FPGA, can build a very flexible SDR operating across different radio protocols, wide frequency ranges and bandwidth requirements, all controlled via software configuration. It finds a number of applications in wireless communication infrastructure and small-cell base stations, as well as a whole range of custom radios in the industrial and aerospace businesses.”
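As an aside (my illustration, not from the interview): with the AD9361’s Linux IIO driver, retuning across that 70 MHz to 6 GHz range from software takes only a few lines of libiio. A hedged sketch; the context URI and the “altvoltage0”/“frequency” names follow the public libiio examples and will differ per board:

#include <stdio.h>
#include <stdbool.h>
#include <iio.h>  /* assumption: libiio and the ad9361-phy IIO driver are present */

int main(void)
{
    /* Connect to a network-attached radio; the address is a placeholder. */
    struct iio_context *ctx = iio_create_context_from_uri("ip:192.168.2.1");
    if (!ctx) { fprintf(stderr, "no IIO context\n"); return 1; }

    struct iio_device *phy = iio_context_find_device(ctx, "ad9361-phy");
    /* "altvoltage0" is the RX local oscillator channel in this driver */
    struct iio_channel *rx_lo =
        phy ? iio_device_find_channel(phy, "altvoltage0", true) : NULL;
    if (!rx_lo) { iio_context_destroy(ctx); return 1; }

    /* Retune the receiver to 2.4 GHz, purely in software */
    iio_channel_attr_write_longlong(rx_lo, "frequency", 2400000000LL);

    iio_context_destroy(ctx);
    return 0;
}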

Now, let’s see the trends for 2014!

More collaboration with customers: There is a greater emphasis on understanding customers’ end applications to provide a complete signal chain, all in a System on a Chip (SoC) or a System in a Package (SiP). The relationship with our customers is changing as we move more towards ASSPs, focused with a few lead customers on target markets and target applications. While this has already been ongoing in the consumer industry with PCs and laptops, customers in other vertical markets like healthcare, automotive and industrial are, and will be, collaborating more with semiconductor companies like Analog Devices to innovate at a solutions level.

More complete products: We have evolved from delivering just the silicon at a component level to delivering more complete products with more advanced packaging, for various 3D chips or multi-die packages. Our solutions now typically include much more software, making it easier to configure or program the chips. The solution is a combination of more advanced silicon, advanced packaging and the appropriate software.

In providing the complete solution, the products become more application-specific; hence the need for more collaboration with customers. For example, there may be one product focused on Software Defined Radio, one for motor control, and one for vital-signs monitoring for consumer health, which we launched recently.

We need each product to be generic enough that multiple customers can use it, but as tailored as possible to customers’ needs in specific market segments. While volume and standardization have made complete reference designs the norm in the consumer world, other market segments are now demanding more complete products, notwithstanding the huge variation in protocols and applications.

Truly global industry: The semiconductor and electronics industry has become truly global, with multiple design sites around the globe collaborating to create products. For example, for Analog Devices, one of our premier design sites is our Bangalore product design center, where we have quite literally developed our most complex and capable chips. At the same time, our customers are also global.

We see large multinational companies like GE, Honeywell, Cisco, Juniper, ABB, Schneider and many of our top strategic customers globally doing substantial system design work in Bangalore, along with a multitude of Indian design houses. Our fastest-growing region is Asia, but we have substantial engagement with customers in North America and Europe. And our competition is also global, which means the industry moves ever faster.

Smarter design tools: The final trend worth talking about is the need for smarter design tools. As our products and our customers’ products become more complex and capable, design tools must advance rapidly for us to be able to design them.

This cannot be done by brute force, but by designing smarter and better tools. A lot of innovation goes into developing better tool suites, and ever more capable software caters to a market moving from hundreds of transistors to literally billions of transistors per application.

3D remains central theme for Applied in 2014!


Om Nalamasu

Following a host of forecasts for 2014, it is now the turn of Applied Materials with its forecast for the year. First, I asked Om Nalamasu, senior VP and CTO, Applied Materials, about the outlook for the global semicon industry in 2014.

Semicon outlook 2014
He said that Gartner expects the semiconductor industry to grow in mid-single digits to over $330 billion in 2014.

“In our industry – the semiconductor wafer fab equipment sector – we are at the beginning of major technology transitions, driven by FinFET and 3D NAND. Based on a wide range of analyst projections, wafer fab equipment investment is expected to be up 10-20 percent in 2014. We expect to see a year-over-year increase in foundry, NAND, and DRAM investment, with logic and other spending flat to down.”

Five trends for 2014
Next, what are the top five trends likely to rule the industry in 2014?

Nalamasu said that the key trends continuing to drive technology in 2014 and beyond include 3D transistors, 3D NAND, and 3D packaging. 3D remains a central theme. In logic, foundries will ramp 20nm production and begin the early stages of the transition to 3D finFET transistors.

With respect to 3D NAND, some products will be commercially available, but most memory manufacturers plan to crossover from planar NAND to vertical NAND starting this year. In wafer level packaging, critical mechanical and electrical characterization work is bringing the manufacturability of 3D-integrated stacked chips closer to reality.

These device architecture inflections require significant advances in precision materials engineering. This spans such critical steps as precision film deposition, precision materials removal, materials modification and interface engineering. Smaller features and atomic-level thin films also make interface engineering and process integration more critical than ever.

Driving technology innovations are mobility applications which need high performance, low power semiconductors. Smartphones, smart watches, tablets and wearable gadgets continue to propel industry growth. Our customers are engaged in a fierce battle for mobility leadership as they race to be the first to market with new products that improve the performance, battery-life, form-factor and user experience of mobile devices.

How is the global semiconductor industry managing the move to the sub 20nm era?

He said that extensive R&D work is underway to move the industry into the sub-20nm realm. For the 1x nodes, more complex architectures and structures as well as new higher performance materials will be required.

Some specific areas where changes and technology innovations are needed include new hard mask and channel materials, selective material deposition and removal, patterning, inspection, and advanced interface engineering. For the memory space, different memory architectures like MRAM are being explored.

FinFETs in 20nm!
By the way, have FinFETs gone to 20nm? Are those looking for power reduction now benefiting?

FinFET transistors are in production in the most advanced 2x designs by a leading IDM, while the foundries are in limited R&D production. In addition to the disruptive 3D architecture, finFET transistors incorporate new materials such as high-k metal gate (HKMG) that help to drastically reduce power leakage.

Based on public statements, HKMG FinFET designs are expected to deliver more than a 20 percent improvement in speed and a 30 percent reduction in power consumption compared to 28nm devices. These are significant advantages for mobile applications.

Status of 3D ICs
Finally, what’s the status with 3D ICs? How is Applied helping with true 3D stacking integration?

Nalamasu replied that vertically stacked 3D ICs are expected to enter into production first for niche applications. This is due primarily to the higher cost associated with building 3D wafer-level-packaged (WLP) devices. While such applications are limited today, Applied Materials expects greater utilization and demand to grow in the future.

Applied is an industry leader in WLP, having spearheaded the industry’s development of through-silicon via (TSV) technology. Applied offers a suite of systems that enable customers to implement a variety of packaging techniques, from bumping to redistribution layer (RDL) to TSV. Because of its work in this area, Applied is strongly positioned to support customers as they begin to adopt this technology.

To manufacture a robust integrated 3D stack, several fundamental innovations are needed. These include improving defect density and developing new materials such as low warpage laminates and less hygroscopic dielectrics.

Another essential requirement is supporting finer copper line/spacing. Important considerations here are maintaining good adhesion while watching out for corrosion. Finally, for creating the necessary smaller vias, the industry needs high quality laser etching to replace mechanical drilling techniques.
