Search Results

Keyword: ‘performance’

ST intros STM32F4 series high-performance Cortex-M4 MCUs

September 18, 2013 Comments off

STMicroelectronics has introduced the STM32F4x9 and STM32F401 lines, the newest members of its STM32F4 series of high-performance Cortex-M4 microcontrollers (MCUs).

On the growth drivers for general-purpose MCUs, market growth is being driven by faster migration to 32-bit platforms. ST was the first to bring an ARM Cortex-based solution to market and now targets the leadership position in 32-bit MCUs. At the top of the STM32 portfolio sit high-performance MCUs with DSP and FPU, delivering up to 608 CoreMark and up to 180 MHz/225 DMIPS.

Features of the STM32F4 product lines, specifically the STM32F429/439, include 180 MHz operation, 1 to 2 MB of Flash and 256 KB of SRAM. The low-end STM32F401 offers 84 MHz operation, 128 KB to 256 KB of Flash and 64 KB of SRAM.

The STM32F401 provides the best balance of performance, power consumption, integration and cost, while the STM32F429/439 provides more resources, more performance and more features. There is close pin-to-pin and software compatibility within the STM32F4 series and across the STM32 platform.

The STM32F429/439 high-performance MCUs with DSP and FPU offer:
• World's highest-performance Cortex-M MCU executing from embedded Flash: a Cortex-M4 core with FPU running at up to 180 MHz/225 DMIPS.
• High integration thanks to ST's 90nm process (the same platform as the F2 series): up to 2-MB Flash/256-KB SRAM.
• Advanced connectivity: USB OTG, Ethernet, CAN, SDRAM interface and LCD-TFT controller.
• Power efficiency, thanks to ST's 90nm process and voltage scaling.

In terms of performance, the STM32F4 provides up to 180 MHz/225 DMIPS with the ART Accelerator, a CoreMark score of up to 608, and an ARM Cortex-M4 core with floating-point unit (FPU).
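As a small firmware-level illustration of that FPU: on Cortex-M4 parts the FPU is disabled out of reset and must be switched on before any floating-point instruction executes. The following CMSIS-style sketch assumes the usual ST device header and the standard SCB register definition; it is illustrative only, since most vendor startup/SystemInit code already performs this step.

```c
/* Hedged sketch: enable the Cortex-M4 FPU (CP10/CP11 full access) early in boot.
 * Assumes the CMSIS device header ("stm32f4xx.h") provides SCB, __DSB() and __ISB(). */
#include "stm32f4xx.h"

void enable_fpu(void)
{
    SCB->CPACR |= (3UL << (10 * 2)) |   /* CP10: full access */
                  (3UL << (11 * 2));    /* CP11: full access */
    __DSB();   /* wait for the register write to complete */
    __ISB();   /* flush the pipeline so later FP instructions see the new setting */
}
```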

The STM32F427/429 highlights include:
• 180 MHz/225 DMIPS.
• Dual-bank Flash (in both 1-MB and 2-MB versions), 256-KB SRAM.
• SDRAM Interface (up to 32-bit).
• LCD-TFT controller supporting up to SVGA (800×600).
• Better graphics with the ST Chrom-ART Accelerator (a usage sketch follows this list):
— 2x more performance vs. the CPU alone
— Offloads the CPU for graphical data generation
* Raw data copy
* Pixel format conversion
* Image blending (image mixing with some transparency).
• 100 μA typ. in Stop mode.
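To make the Chrom-ART bullet concrete, here is a minimal sketch of a pixel-format conversion driven through ST's HAL DMA2D driver, the peripheral behind the Chrom-ART Accelerator. It assumes the STM32F4 HAL is available; the exact color-mode macro spellings differ between HAL releases, so treat it as illustrative rather than copy-paste ready.

```c
/* Hedged sketch: use the Chrom-ART (DMA2D) engine, via ST's HAL, to convert an
 * RGB565 buffer to ARGB8888 while the CPU stays free for other work.
 * Macro names follow the STM32F4 HAL convention; some (e.g. the color-mode
 * defines) were renamed in later HAL releases, so check your HAL version. */
#include "stm32f4xx_hal.h"

HAL_StatusTypeDef copy_rgb565_to_argb8888(uint32_t src, uint32_t dst,
                                          uint32_t width, uint32_t height)
{
    DMA2D_HandleTypeDef hdma2d = {0};

    hdma2d.Instance          = DMA2D;
    hdma2d.Init.Mode         = DMA2D_M2M_PFC;   /* memory-to-memory with pixel-format conversion */
    hdma2d.Init.ColorMode    = DMA2D_ARGB8888;  /* output format (DMA2D_OUTPUT_ARGB8888 in newer HALs) */
    hdma2d.Init.OutputOffset = 0;

    hdma2d.LayerCfg[1].InputColorMode = CM_RGB565;            /* DMA2D_INPUT_RGB565 in newer HALs */
    hdma2d.LayerCfg[1].InputOffset    = 0;
    hdma2d.LayerCfg[1].AlphaMode      = DMA2D_NO_MODIF_ALPHA; /* keep source alpha unchanged */
    hdma2d.LayerCfg[1].InputAlpha     = 0xFF;

    if (HAL_DMA2D_Init(&hdma2d) != HAL_OK)           return HAL_ERROR;
    if (HAL_DMA2D_ConfigLayer(&hdma2d, 1) != HAL_OK) return HAL_ERROR;

    /* Start the transfer; the CPU is offloaded until we poll (or use the transfer-complete IRQ). */
    if (HAL_DMA2D_Start(&hdma2d, src, dst, width, height) != HAL_OK) return HAL_ERROR;
    return HAL_DMA2D_PollForTransfer(&hdma2d, 100 /* ms timeout */);
}
```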

Real-life applications of the STM32F4 include the smart watch, where it serves as the main application controller or sensor hub; smartphones, tablets and monitors, where it acts as the sensor hub for MEMS and optical touch; and industrial/home-automation panels, where it is the main application controller. It can also be used in Wi-Fi modules for the Internet of Things (IoT), such as appliances, door cameras and home thermostats.

These devices offer outstanding dynamic power consumption thanks to ST's 90nm process, as well as low leakage current made possible by advanced design techniques and architecture (voltage scaling). ST is offering a large range of evaluation boards and Discovery kits, and the STM32F4 also comes with new firmware libraries. SEGGER and ST have signed an agreement around the emWin graphical stack; the resulting solution is called STemWin.

How semicon firms can achieve high performance — Part II

May 6, 2009 Comments off

Friends, as promised, here is the second part of the discussion I had with Accenture’s Scott Grant, based on Accenture’s recent study: Managing Through Challenging Times!

4. Reducing the time to cash for new products.
When companies industrialize a market concept and pursue design-win opportunities, we tend to see three critical components involved: a) maintaining traceability of requirements from market analysis through the final manufacturing build plan; b) leaders who apply consistent lifecycle management across the product development flow; and c) IP management with integrated roadmap and portfolio capabilities.

“Firms at times are not able to convert concepts to cash quickly. The process to integrate them has several gaps including innovation lifecycles, conversion of R&D concepts to volume products, and ability to optimize the engineering capacity constraints within their P&Ls.”

Product lifecycle management, portfolio and market analytics, and engineering skills/human resource management help to address these gaps. A portfolio management and roadmap planning process is a must. With these in place, semiconductor companies can quickly map their plans to customer and market insights.

5. Sharpening customer focus through more in-depth and accurate customer insight.
Most firms won't survive if they are unable to gain rapid adoption of their product offering. In our experience, high-performing companies build detailed customer usage models and insight into end-device markets early in their R&D process.

The challenge many find is that without this baseline of understanding it is difficult to convert concepts into cash once the end-product is delivered to the market.

Many of these insights are available from point-of-sale trends, whether that data sits with an OEM (PC, handset, etc.) or with a distributor. High performers have strengthened relationships with their collaborators and customers to gain access to this data. They also build a "Trusted Advisor" relationship, constructing scenarios for each end market to better predict the features or functions their end-customers may desire.

It is difficult for a semicon firm to know how a product will be used; that knowledge is really the beginning of gaining insight into utilization, the consumer, and the usage model that should be employed. So a semicon firm should study carefully how its products can be used in the market. User behavior is crucial, and companies that don't understand it may be missing out.

6. Pursuing alliances to share the cost burden of new product development.
The point here is to make sure that semiconductor companies take a strategic view and look in the right places to pursue alliances. Alliances can have a lot of impact: when semicon companies pursue them, they can absolutely share the cost burden, but it can also affect the operating model.

Other recommendations for the industry
What other recommendations does Accenture have for the semiconductor industry going forward?

Grant recommends that the industry focus on achieving high-performance business results, including sustained leadership in financial metrics such as return to shareholders, profits and revenue growth.

“Recognize and adapt to the reality that we are now living in a multi-polar world. This is a world in which a growing number of emerging countries and economies are becoming more financially powerful, competitive and relevant in competing against the traditionally more developed parts of the world such as North America, Asia and Europe. This means there are a multitude of growing business opportunities in these emerging nations for semiconductor companies to capitalize on.

“Proactively invest during a recession rather than pull back investments and just wait until the economy pulls out of this down cycle. History has shown that those companies that invest the most perform better in the years after the market recovers.”

Companies repeating mistakes?
Now, these recessions have a bad habit of occurring cyclically! Why, then, do semiconductor (and other) companies tend to repeat the same mistakes again and again?

According to Grant, one reason is they tend to indiscriminately and rapidly cut costs without thinking more strategically and carefully about what costs to cut. “They tend to lay off workers who they need when the market recovers, but they can’t hire them back because those employees have moved on with their careers. These semiconductor companies don’t think hard enough about what employees and assets they will need when the market recovers.”

Layoffs? What about design and development?
Finally, are layoffs the only solution to combat recession? What happens to design and development?

Grant agrees that layoffs are absolutely not the only solution to combat recession. Investing in core competencies is crucial, and spending less time and effort on non-core capabilities is important.

“Employee morale tends to fall within design and development during a recession because they see some of their colleagues lose their jobs and they take on more work. And they lose more control of what work they are assigned to do. And they’re less secure about their job security.

“But, much of this can be alleviated by giving employees a chance to share their ideas and concerns at regularly scheduled Town Hall meetings, to communicate with them regularly and candidly, and to focus them on achieving high performance business results.”

CONCLUDED

Categories: Accenture, Scott Grant

How semicon firms can achieve high performance by simplifying business!

May 4, 2009 Comments off

Engineers in the global semiconductor industry have typically had considerable control over their work. Processes are pretty straightforward, sequential and logical, and satisfying for an honest day's work.

However, due to the ongoing global economic downturn, many of these engineers are rapidly losing control of more of their professional lives. Caught like the rest of the world in a recession, they are losing more control of what work they are assigned to do, how they do it, in what sequence, by when and with whom.

Given these inter-related problems, many semiconductor companies need to make rapid and fundamental changes in their business operations, strategies and workforce management practices to emerge from this downturn, and for the years beyond, as high performers.

Once this recession ends, these people will be entering a market with a different landscape than the market that existed when the downturn began. They need to figure out how to restart their businesses, regain their footing and connect to a new purpose.

They need to address the so-called ‘soft’ aspects of business, such as the engineers who design chips and how they feel. It’s time for them to pay more attention to the little things that may seem innocuous but are actually central to achieving high performance.

Thanks to Charlie Hartley, Accenture, US, I was able to get hold of Accenture’s recent study: Managing Through Challenging Times!! Quite an interesting read!

Naturally, it led to a conversation with Scott Grant, Executive Global Lead of Accenture's Semiconductor Operating Unit, who led the research and analysis behind this newly released Accenture report on these issues and its recommended solutions.

Accenture’s report has seven suggestions or recommendations.

1. Divesting the business of unproductive assets.
2. Infusing a higher degree of operational excellence into the business.
3. Maintaining morale and energy in the workforce, especially in the key area of innovation.
4. Reducing the time to cash for new products.
5. Sharpening customer focus through more in-depth and accurate customer insight.
6. Pursuing alliances to share the cost burden of new product development.
7. Acquiring key assets.

Let’s take a look at those, one by one!

1. Divesting the business of unproductive assets.
From Accenture's perspective, it has become evident during the past few years that a growing number of the top 20 semiconductor companies are fabless. That trend will continue, mainly because fabless companies have more competitive cost structures than semiconductor manufacturers that carry high fixed-asset costs for their operations. Accenture's clients are seeking to understand the business operating model that best fits their desired position in the market. Our assessment typically points toward a leaner product portfolio.

The first thing we look at, in depth, is true cost. Traditionally, the industry looks at cost-per-wafer metrics; Accenture studies the hidden costs. We look at Total Cost to Land, including NPI re-spin costs, complete organization costs and advanced manufacturing process costs, plus the traditional material and labor costs. The goal is a fair comparison with an external manufacturing model that exposes the key improvement opportunities.

We also look for an integrated roadmap for manufacturing, design technology and intellectual property (IP). There are opportunities to make better use of IP investments across both leading products and derivatives, reducing product ramp/readiness costs. To divest unproductive assets, high-performing firms build an accurate and balanced cost baseline for comparison.

In addition, we also look at strategic sourcing. Semiconductor companies often ask how they can lower costs, and sometimes this has an adverse effect on material quality. Strategic sourcing is an important factor in balancing both sides of this equation. We suggest that our clients compare costs objectively against their peer groups and external suppliers. Many times we see lower direct material costs through the use of external manufacturing models, because of the manufacturing supplier's economies of scale.

2. Infusing a higher degree of operational excellence into the business.
Traditionally, semiconductor companies were all about operational excellence. In the late 90s and early 2000s, the industry was about R&D excellence. Now, we see operational excellence in terms of sales and marketing — the number of feet on the ground, the amount of time invested per design win. Accenture strives to understand how companies better integrate sales operations into the manufacturing and production operations process.

Given the focus on external manufacturing, operational excellence is now being applied to the IP ecosystem. IP management is critical in the current industry landscape, and semiconductor companies need a compelling argument to differentiate their IP. IP management and external manufacturing management have become the crux of the strategy. Companies see the importance of design growing, and they see their clients' requests shifting toward a focus on sales operations and the IP ecosystem.

We see a few shifts in sales operations. Many of Accenture's clients are challenged when they take emerging products into certain regional and local markets. One key challenge is the ability to maintain consistency in quoting, contracting and ordering. The other challenge is training and investing in sales. Sales is being asked to do more, yet salespeople seem to spend 45 percent of their time on non-sales activities such as administrative tasks. They need to spend much more of their total time on sales activities and have others do more of the administration.

When Accenture examines the sales cycles of semiconductor companies, we tend to see limited performance metrics being tracked. These companies tend to adhere to regional sales models, and the complexity arises in staying consistent with quoting, contracting and ordering.

3. Maintaining morale and energy in the workforce, especially in the key area of innovation.
One of the key decisions during a downturn is workforce reduction. For the employees remaining after reductions, it's key for these companies to reinforce their connection to the new strategy, and to re-adjust training to prepare those employees for innovation.

Investing in innovation is a huge priority. The transition Accenture sees in workforce reduction includes engineers feeling a loss of control. To maintain morale and energy, semiconductor executives need to keep communicating strategic objectives to all employees.

Sometimes amid the change, a semiconductor company needs to ask whether it has thought beyond the change event (portfolio, workforce or facility reductions) and also focused on the complete organizational transition. This is a process of communication, helping employees reconnect with their companies. Getting employees to understand, adapt and connect to the new direction takes much longer, and it also impacts productivity. Yet it must be emphasized.

Part II continues tomorrow. Stay tuned, folks!

Measuring performance of carbon nanotubes as building blocks for ultra-tiny computer chips of the future

October 15, 2007 Comments off

There is this really great story from IBM Research Labs that I simply have to seed here for my readers.

IBM’s scientists have created a method to measure the performance of carbon nanotubes as building blocks for ultra-tiny computer chips of the future. Of course, you can also read it on IBM Research Lab’s site as well as on CIOL’s semicon site.

IBM scientists have measured the distribution of electrical charges in tubes of carbon that measure less than 2nm in diameter, 50,000 times thinner than a strand of human hair.

This novel technique, which relies on the interactions between electrons and phonons, provides a detailed understanding of the electrical behavior of carbon nanotubes, a material that shows promise as a building block for much smaller, faster and lower power computer chips compared to today’s conventional silicon transistors.

Phonons are the atomic vibrations that occur inside a material, and they can determine the material's thermal and electrical conductivity. Electrons carry and produce the current. Both are important features of materials that can be used to carry electrical signals and perform computations.

The interaction between electrons and phonons can release heat and impede electrical flow inside computer chips. By understanding the interaction of electrons and phonons in carbon nanotubes, the researchers have developed a better way to measure their suitability as wires and semiconductors inside of future computer chips.

In order to make carbon nanotubes useful in building logic circuitry, scientists are pushing to demonstrate their high speed, high packing density and low power consumption capabilities as well as the ability to make them viable for potential mass production.

Dr. Phaedon Avouris, IBM Fellow and lead researcher for IBM’s carbon nanotube efforts, said: “The success of nanoelectronics will largely depend on the ability to prepare well characterized and reproducible nano-structures, such as carbon nanotubes. Using this technique, we are now able to see and understand the local electronic behavior of individual carbon nanotubes.”

To date, researchers have been able to build carbon nanotube transistors with superior performance, but have been challenged with reproducibility issues. Carbon nanotubes are sensitive to environmental influences.

For example, their properties can be altered by foreign substances, affecting the flow of electrical current and changing device performance. These interactions are typically local and change the density of electrons in the various devices of an integrated circuit, and even along a single nanotube.

Accelerating EDA innovation through SoC design methodology convergence

September 26, 2014 Comments off

According to Dr. Walden C. Rhines, chairman and CEO, Mentor Graphics Corp., verification has to improve and change every year just to keep up with the rapidly changing semiconductor technology. Fortunately, the innovations are running ahead of the technology and there are no fundamental reasons why we cannot adequately verify the most complex chips and systems of the future. He was speaking at the recently held DVCON 2014 in Bangalore, India.

DVCON India 2014.

A design engineer's project time spent doing design fell by 15 percent from 2007 to 2014, while the time spent doing verification rose by 17 percent over the same period. At this rate, in about 40 years, all of a designer's time will be devoted to verification. At the current rate, there is almost no chance of getting a single-gate design correct on first pass!

Looking at the crossover of verification engineers vs. design engineers, the headcount CAGR for designers is 4.55 percent, while for verification engineers it is 12.62 percent.
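A quick back-of-the-envelope check of what those growth rates imply (the 2007 starting headcounts below are illustrative assumptions, not figures from the talk): compounding 4.55 percent against 12.62 percent lets the verification headcount overtake the design headcount within a handful of years.

```c
/* Back-of-the-envelope compound-growth check of the quoted CAGRs:
 * 4.55% for design engineers vs. 12.62% for verification engineers.
 * The 2007 starting values are illustrative assumptions, not data from the talk. */
#include <stdio.h>

int main(void)
{
    double designers = 100.0;          /* assumed index: designers = 100 in 2007 */
    double verifiers = 60.0;           /* assumed: noticeably fewer verifiers in 2007 */
    const double cagr_design = 0.0455;
    const double cagr_verif  = 0.1262;

    for (int year = 2007; year <= 2020; ++year) {
        printf("%d  designers %6.1f  verification %6.1f\n", year, designers, verifiers);
        if (verifiers >= designers) {
            printf("Verification headcount overtakes design in %d\n", year);
            break;
        }
        designers *= 1.0 + cagr_design;
        verifiers *= 1.0 + cagr_verif;
    }
    return 0;
}
```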

On-time completion remains roughly constant. Looking at non-FPGA projects' schedule completion trends, 67 percent were behind schedule in 2007, 66 percent in 2010, 67 percent in 2012 and 59 percent in 2014. Meanwhile, the average number of embedded processors per design has risen from 1.12 to 4.05.

Macro trends
Looking at the macro trends, there has been standardization of verification languages. SystemVerilog is the only verification language growing. Now, interestingly, India leads the world in SystemVerilog adoption. It is also remarkable that the industry converged on IEEE 1800. SystemVerilog is now mainstream.

There has been standardization in base class libraries as well. There was 56 percent UVM growth between 2012 and 2014, and 13 percent growth is projected for UVM over the next year. Again, India leads the world in UVM adoption.

The second macro trend is standardization of the SoC verification flow. It is emerging from ad hoc approaches to systematic processes. The verification paradox is: a good verification process lets you get the most out of best-in-class verification tools.

The goal of unit-level checking is to verify that the functionality is correct for each IP, while achieving high coverage. Use of advanced verification techniques has also increased from 2007 to 2014.

Next, the goal of connectivity checking is to ensure that the IP blocks are connected correctly, a common goal with IP integration and data path checking.

The goal of system-level checking is performance, power analysis and SoC functionality. Also, there are SoC ‘features’ that need to be verified.

A third macro trend is the coverage and power across all aspects of verification. The Unified Coverage Interoperability Standard or UCIS standard was announced at DAC 2012 by Accellera. Standards accelerate the EDA innovation!

The fourth trend is active power management. Now, low-power design requires multiple verification approaches. Trends in power management verification include things like Hypervisor/OS control of power management, application-level power management, operation in each system power state, interactions between power domains, hardware power control sequence generation, transitions between system power states, power domain state reset/restoration, and power domain power down/power up.

Macro enablers in verification
Looking at the macro enablers in verification, there is the intelligent test bench, multi-engine verification platforms, and application-specific formal. The intelligent test bench technology accelerates coverage closure. It has also seen the emergence of intelligent software driven verification.

Embedded software headcount surges with every node. Clock speed scaling slows the simulation performance improvement. Growing at over 30 percent CAGR from 2010-14, emulation is the fastest growing segment of EDA.

As for system-level checking, as design sizes increase, emulation use goes up and FPGA prototyping goes down. Modern emulation performance makes virtual debug fast. Virtual stimulus turns the emulator into a server and moves it from the lab to the datacenter, delivering more productivity, flexibility and reliability. Effective 100MHz embedded software debug makes a virtual prototype behave like real silicon. Integrated simulation/emulation/software verification environments have now emerged.

Lastly, on application-specific formal: larger designs use more formal, and application-specific formal includes checking clock-domain crossings.

DVCon India 2014 aims to bring Indian design, verification and ESL community closer!

September 11, 2014 5 comments

DVCon India 2014 has come to Bangalore, India, for the first time. It will be held at the Hotel Park Plaza in Bangalore, on Sept. 25-26. Dr. Wally Rhines, CEO, Mentor Graphics will open the proceedings with his inaugural keynote.

Other keynotes will be from Dr. Mahesh Mehendale, MCU chief technologist, TI; Janick Bergeron, verification fellow, Synopsys; and Vishwas Vaidya, assistant GM, Electronics, Tata Motors.

Gaurav Jalan, SmartPlay, chair of the promotions committee, took time to speak about DVCon India 2014.

Focus of DVCon India 2014

First, what's the focus of DVCon India 2014? According to Jalan, DVCon has been a premier conference in the US, contributing quality tutorials and papers and an excellent platform for networking. DVCon India focuses on filling the void of a vendor-neutral, quality conference in the neighbourhood – one that will grow over time.

The idea is to bring together the hitherto dispersed, yet substantial, design, verification and ESL community and give it a voice. Engineers get a chance to learn solutions to verification problems, share the effectiveness of solutions they have experimented with, understand off-the-shelf solutions available in the market, and meet a vendor-agnostic user fraternity. Moving forward, the expectation is to get users involved as early adopters of upcoming standards and as active contributors to them.

Trends in design

Next, what are the trends today in design? Jalan said that while designs continue to march along the lines of Moore's law, there is a lot happening beyond mere gate count. Defining and developing IPs with wide configuration options serving a variety of application domains is a challenge.

SoCs are crossing into multi-billion-transistor designs (the A8 in the iPhone 6 has 2 billion), with a multi-fold increase in complexity due to multiple clock domains, multiple power domains and multiple voltage domains, while delivering the required performance in different application modes with a sleek footprint.

Trends in verification

Now, let's examine the trends today in verification. When design complexity increases linearly, verification effort jumps exponentially. While UVM has settled the dust to some extent at the IP verification level, a huge set of challenges still awaits. The IP itself is growing in size, limiting the simulator and encouraging users to move to emulators. And while UVM settled the methodology war, the available VIPs are still not simulator-agnostic, and an emulator-agnostic VIP portfolio is still a distant dream.

SoC verification is still a challenge, not just due to sheer size but because porting an environment from block to SoC is difficult. Test-plan definition and development at the SoC level is itself a challenge; the Portable Stimulus group at Accellera is addressing this.

Similarly, coverage collected from different tools is difficult to merge; the unified coverage group at Accellera is addressing this. Low power is the norm today, and verifying a power-aware design is quite challenging; UPF is an attempt to standardize this.

Porting an SoC to an emulator to enable hardware acceleration, so as to run use cases, is another trend picking up; teams are now able to boot Android on an SoC even before the silicon arrives. With growing analog content on chip, the onus is on verification engineers to ensure the digital and analog sides of the chip work together as per spec. Formal apps have picked up to address connectivity tests, register-spec testing, low-power static checks and much more.

Accelerating EDA innovation

So, how will EDA innovation get accelerated? According to Jalan, the semiconductor industry has always seen startups and smaller companies lead innovation. Given the plethora of challenges around, there are multiple opportunities to be addressed by both the biggies and the start-ups.

The evolution of standards at Accellera is definitely a great step, bringing the focus onto real innovation in the tools while providing a platform for the user community to come forward, share challenges and propose alternatives. With a standard baseline defined in collaboration with all partners in the ecosystem, the EDA companies can focus on competing on performance, user interface, increased tool capacity and enabling faster time to market.

Forums like DVCon India help grow awareness of the standards promoted by Accellera while encouraging participants from different organizations and geographies to join and contribute. Apart from tools, areas where EDA innovation should pick up include new IT technologies and platforms – cloud and mobile devices.

Next level of verification productivity

Where is the next level of verification productivity likely to come from? To this, Jalan replied that verification productivity improves along several dimensions.

While faster tools with increased capacity come from innovation at the EDA end, standards have played an excellent role in addressing productivity too. UVM has helped displace vendor-specific technologies, improving interoperability, quick ramp-up for engineers and reusability. Similarly, on the power-format side, UPF has played an important role in bridging the gaps.

Unified coverage is another aspect that will help close coverage-driven verification earlier. The IP-XACT and SystemRDL standards further help in packaging IPs and easing hand-off to enable reuse. Similarly, other standards for ESL, AMS, etc., help close the loopholes that hold back productivity.

A new portable stimulus specification is now being developed under Accellera that will help ease test development at different levels, from IP to subsystem to SoC. For faster simulations, increasing adoption of hardware acceleration platforms is helping verification engineers improve regression turnaround time.

Formal technologies play an important role in providing mathematical proofs for common verification challenges at an accelerated pace compared with simulation. Finally, events like DVCon enable users to share their experiences and knowledge, encouraging others to try out solutions instead of struggling to discover or invent their own.

More Indian start-ups

Finally, do the organizers expect to see more Indian start-ups post this event? Yes, says Jalan. “We even have a special incubation booth that is encouraging young startups to come forth and exhibit at a reduced cost (only $300). We are creating a platform and soon we will see new players in all areas of Semiconductor.

“Also, the Indian government’s push in the semiconductor space will give new startups further incentive to mushroom. These conferences help entrepreneurs to talk to everyone in the community about problems, vet potential solutions and seek blessings from gurus.”

Categories: Semiconductors

Innovating in system of systems: Lip-Bu Tan

August 17, 2014 Comments off

Lip-Bu Tan, president and CEO, Cadence Design Systems Inc.

Several innovations are happening across the global technology industry. The IoT, mobility, cloud computing, etc., are creating opportunities for the system of systems, according to Lip-Bu Tan, president and CEO, Cadence Design Systems Inc.

Tan was delivering the main keynote at the recently held CDNLive 2014 in Bangalore, India.

Some of the trends driving global semiconductor market growth in the end markets include automotive at $24 billion, computers at $76 billion, industrial electronics at $14.1 billion, medical electronics at $12.5 billion, and mobile phones at $100 billion. In India especially, a lot of fabless companies are said to be coming up.

The tablet is a system of systems. It has communications, navigation, recording and photography, etc. Even the automotive vehicle is a convincing example. Next, there is the IoT. There are said to be diverse needs for the IoT.

There are said to be several challenges for the system of systems, among them greater IP and software requirements and a greater need for low power and mixed signal. System design enablement requires system integration, packaging and board design, etc.

Cadence has a comprehensive SoC IP solution. Its mixed-signal verification solution ensures functionality, reliability and performance. Cadence also introduced the Voltus-Fi custom power integrity solution in Shanghai the week before, and its Quantus QRC extraction solution delivers up to 5X better performance.

Next, the Jasper acquisition expands the Cadence development suite. Cadence also provides FPGA-based prototyping along with the Palladium flow for software development.

Tan concluded that new technologies always require closer collaboration — from IP through manufacturing. Cadence is here to help designers innovate — from systems to silicon.

Categories: Semiconductors

Cadence Quantus solution meets 16nm FinFET challenges


Cadence Design Systems Inc. recently announced that its Quantus QRC Extraction Solution has been certified for TSMC's 16nm FinFET process.

So, what’s the uniqueness about the Cadence Quantus QRC extraction solution?


KT Moore, senior group director – Product Marketing, Digital and Signoff Group, Cadence Design Systems, said: “There are several parasitic challenges that are associated with advanced node designs — especially FinFET – and it’s not just about tighter geometries and new design rules. We can bucket these challenges into two main categories: increasing complexity and modeling challenges.

“The number of process corners is exploding, and for FinFET devices specifically, there is an explosion in the parasitic coupling capacitances and resistances. This increases the design complexity and sizes. The netlist is getting bigger and bigger, and as a result, there is an increase in extraction runtimes for SoC designs and post-layout simulation and characterization runtimes for custom/analog designs.

“Our customers consistently tell us that, for advanced nodes, and especially for FinFET designs, while their extraction runtimes and time-to-signoff is increasing, their actual time-to-market is shrinking and putting an enormous amount of pressure on designers to deliver on-time tapeout. In order to address these market pressures, we have employed the massively parallel technology that was first introduced in our Tempus Timing Signoff Solution and Voltus IC Power Integrity Solution to our next-generation extraction tool, Quantus QRC Extraction Solution.

“Quantus QRC Extraction Solution enables us to deliver up to 5X better performance than competing solutions and allows scalability of up to 100s of CPUs and machines.”

Support for FinFET features
How is Quantus providing significant enhancements to support FinFET features?

Parasitic extraction is at the forefront with the introduction of any new technology node. For FinFET designs, it's a bit more challenging due to the introduction of non-planar FinFET devices. There are more layers to be handled, more RC effects that need to be modeled, and the introduction of local interconnects. There are also secondary- and third-order manufacturing effects that need to be modeled, and all of these new features have to be modeled with precise accuracy.

Performance and turnaround times are absolutely important, but if you can’t provide accuracy for these devices — especially in correlation to the foundry golden data — designers would have to over-margin their designs and leave performance on the table.

Best-in-class accuracy
How can Cadence claim that it has the ‘tightest correlation to foundry golden data at TSMC vs. competing solutions’? And, why 16nm only?

According to Moore, the foundry partner, TSMC, asserts that Quantus QRC Extraction Solution provides best-in-class accuracy, which was referenced in the recent press announcement:

“Cadence Quantus QRC Extraction Solution successfully passed TSMC’s rigorous parasitic extraction certification requirements to achieve best-in-class accuracy against the foundry golden data for FinFET technology.”

FinFET structures present unique challenges since they are non-planar devices, as opposed to their planar CMOS predecessors. We partnered with TSMC from the very beginning to address the modeling challenges, and we've seen many complex shapes and structures over the years that we've modeled accurately.

“We’re not surprised that TSMC has recognized our best-in-class accuracy because we’re the leader in providing extraction solutions for RF designs. Cadence Quantus QRC Extraction Solution has been certified for TSMC 16nm FinFET, however, it’s important to note that we’ve been certified for all other technology nodes and our QRC techfiles are available to our customers from TSMC today.”

SEMI materials outlook: Semicon West 2014


Source: SEMI, USA.

At the recently held Semicon West 2014, Daniel P. Tracy, senior director, Industry Research and Statistics, SEMI, presented the SEMI materials outlook. He estimated that semiconductor materials will see unit growth of 6 percent or more, though revenue growth may be low in a large number of segments due to pricing pressures and changes in materials.

For semiconductor equipment, he estimated ~20 percent growth this year, following two years of spending decline, with ~11 percent spending growth currently estimated for 2015.

Overall, the year-to-date estimate shows positive growth versus the same period in 2013 for unit and materials shipments, and for equipment billings.

The equipment outlook points to ~18 percent growth in equipment for 2014, with total equipment orders up ~17 percent year-to-date.

For the wafer fab materials outlook, monthly silicon area shipments are currently at an all-time high. Lithography process chemicals saw a 7 percent sales decline in 2013, and the 2014 outlook is for downward pressure on ASPs for some chemicals. 193nm resists are approaching $600 million, and ARC has been growing 5-7 percent.

For packaging materials, flip chip is a growth driver, with flip-chip unit growth of ~25 percent expected from 2012 to 2017. There are trends toward copper pillar and micro-bumps for TSV. Future flip-chip growth in wireless products is driven by form factor and performance, and baseband and application processors are also moving to flip chip.

There has been growth in WLP shipments. Major applications for WLP are driven by mobile products such as smartphones and tablets, and WLP should grow at a CAGR of ~11 percent in units (2012-2017).

Solder balls were a $280 million market in 2013, and shipments of lead-free solder balls continue to increase. Underfills were $208 million in 2013, including underfills for flip chip and packages. Increased use of underfills for CSPs and WLPs helps them pass the drop test in high-end mobile devices.

Wafer-level dielectrics were a $94 million market in 2013. Materials and structures are likely to enhance board-level reliability performance.

Die-attach materials have over a dozen suppliers; Hitachi Chemical and Henkel account for the major share of the total die-attach market, while new players continue to emerge in China and Korea. Stacked-die CSP package applications have been increasing, and industry acceptance of film (flow)-over-wire (FOW) and dicing die-attach film (DDF) technologies is also growing.

 

Semiconductor capital spending outlook 2013-18: Gartner

July 11, 2014 Comments off

At Semicon West 2014, Bob Johnson, VP Research, Gartner, presented the Semiconductor Capital Spending Outlook at the SEMI/Gartner Market Symposium on July 7.

First, a look at the semiconductor revenue forecast: it is likely to grow at a 4.3 percent CAGR from 2013 to 2018. Logic continues to dominate, but its growth falters. As per the 2013-2018 CAGRs, logic will grow at 3.5 percent, memory at 4.5 percent and other segments at 6.3 percent.

Bob Johnson

As for the memory forecast, NAND should surpass DRAM. At 2013-2018 CAGRs, DRAM should decline at -1.1 percent while NAND grows at 10.8 percent. Smartphones, SSDs and ultramobiles are the applications driving growth through 2018, with SSDs powering the NAND market.

Among ultramobiles, tablets should dominate through 2018 and take share from PCs. Smartphones, meanwhile, have been dominating mobile phones.

Looking at the critical markets for capital investment: smartphones are the largest growth segment, but they are showing signs of saturation, and revenue growth could slow dramatically by 2018. Ultramobiles have the highest overall CAGR, but at the expense of the PC market, and tablets are driving down semiconductor content. Desktop and notebook PCs are a large but declining market that still provides critical revenue to fund logic capex. Lastly, SSDs are driving NAND Flash growth, and the move to data centers is driving sustainable growth.

In capital spending, memory is strong but logic is weak through 2018. Spending in 2014 is up 7.1 percent, driven by a strong memory market. Strength in NAND spending will drive future growth, although memory oversupply in 2016 could trigger the next cycle. NAND is the capex growth driver in memory spending.

The major semiconductor markets that justify investment in leading-edge logic capacity are now running out of gas. Ultramobiles are cannibalizing PCs, smartphones are saturating, and both are moving to lower-cost alternatives. It is increasingly difficult to manufacture complex SoCs successfully at the absolute leading edge. Moore's Law is slowing down while costs are going up, and breakthrough technologies (i.e., EUV) are not ready when needed. Much of the intelligence of future applications is moving to the cloud, and the data centers' need for fast, low-power storage solutions is creating sustainable growth for NAND Flash.

The traditional two-year-per-node pace of Moore's Law will continue to slow. Only a few high-volume/high-performance applications will be able to justify the costs of 20nm and beyond, and whether this will require new or upgraded capacity is uncertain. 28nm will be a long-lived node, as mid-range mobility products demand higher levels of performance. Finally, the cloud will continue to grow in size and influence, creating demand for new NAND Flash capacity and technology.

Categories: Semiconductors