ARM describes the spirit of innovation as collective intelligence at every level: within devices, between people, through technology and across the world. The company is still pushing the boundaries of mobile devices.
Speaking at the ARM Summit in Bangalore, Dr Mark Brass, corporate VP, Operations, ARM, said that the first challenge was the number of people on the planet. Technology development and innovation also pose challenges.
According to him, mobile phones are forecast to grow 7.3 percent in 2013 driven by 1 billion smartphones. Mobile data will ramp up 12 times between now and 2018. Mobile and connectivity are creating further innovation.
August, a company, has introduced an electronic door lock controlled by the smartphone. Another is Proteus, which focuses on healthcare. The smartphone is becoming the center of our world, and all sorts of sensors are making their way into it. Next, mobile and connectivity are growing in automobiles, where companies like TomTom are competing with automobile makers. Connectivity is also transforming infrastructure and data centers, which are now building off the mobile experience.
According to an IoT survey cited by ARM, 76 percent of companies are already dealing with IoT. As more things carry information, there will be far more data. The IoT runs on ARM.
“There’s more going on than just what you think. IoT is not just about things. Skills development should not be an afterthought. Co-operation is critical. Solutions will emerge. All sorts of things are going to happen. Three years from now, only 4 percent of companies won’t have IoT in the business at all,” Dr. Brass added.
IoT will be present in industrial applications (especially motors), transportation, energy, and healthcare. Smart meters are coming in to help with energy management. There is a move from Little Data to Big Data.
Challenges in 2020 will be in transportation, energy, healthcare and education, and ARM and the ARM partnership are addressing them. "We are delivering an unmatched diversity of solutions. We are scaling from sensors to servers, connecting our world," Dr. Brass concluded.
Future Horizons hosted the 22nd Annual International Electronics Forum, in association with IDA Ireland, on Oct. 2-4, 2013, at Blanchardstown, Dublin, Ireland. The forum was titled 'New Markets and Opportunities in the Sub-20nm Era: Business as Usual OR It's Different This Time'. Here are excerpts from some of the sessions. Those keen to find out much more should contact Malcolm Penn, CEO, Future Horizons.
The global interest in graphene research has greatly advanced our understanding of this rather unique material. However, the transition from the laboratory to the factory has hit some challenging obstacles. In this talk I will review the current state of graphene research, focusing on the techniques that allow large-scale production.
I will then discuss various aspects of our research based on more complex structures beyond graphene. Firstly, hexagonal boron nitride can be used as a thin dielectric material through which electrons can tunnel. Secondly, graphene-boron nitride stacks can be used as tunnelling transistor devices with promising characteristics. The same devices show interesting physics; for example, negative differential conductivity can be found at higher biases. Finally, graphene can be stacked with thin semiconducting layers, which show promising results in photodetection.
I will conclude by speculating on the fields where graphene may realistically find applications and discussing the role of the National Graphene Institute in commercializing graphene.
The key challenge for future high-end computing chips is energy efficiency, in addition to traditional challenges such as yield/cost, static power and data transfer. By 2020, in order to keep the overall power consumption of all computing systems at an acceptable level, a gain in power efficiency of 1000x will be required.
To reach this objective, we need to work not only at the process and technology level, but also to propose disruptive multi-processor SoC architectures and to drive major evolutions in software and in the development of applications. Some key semiconductor technologies will definitely play a key role, such as low-power CMOS technologies, 3D stacking, silicon photonics and embedded non-volatile memory.
To reach this goal, the involvement of the semiconductor industry will be necessary, and a new ecosystem has to be put in place to establish stronger partnerships between the semiconductor industry (IDMs, foundries), IP providers, EDA providers, design houses, and the systems and software industries.
This presentation looks at the development of the semiconductor and electronics industries from an African perspective, both globally and in Africa, and at understanding the challenges associated with the wide-scale adoption of new electronics on the African continent.
Electronics have taken over the world, and it is unthinkable in today's modern life to operate without utilising some form of electronics on a daily basis. Similarly, in Africa the development and adoption of electronics and utilisation of semiconductors have grown exponentially, driven largely by the rapid uptake of mobile communications. However, this has thrown into stark relief the main challenge facing increased adoption of electronics in Africa, namely power consumption.
This background is central to the thesis that the industry needs to address the twin challenges of low-powered and low-cost devices. In Africa there are limits on the ability to frequently and consistently charge electronics or keep them connected to a reliable electricity grid. Therefore, the current advances in electronics have resulted in the power industry being the biggest beneficiary of the growth in the adoption of electronics.
What needs to be done is for the industry to support and foster research on this subject in Africa, working as a global community. The challenge is to create electronics that meet these cost and power constraints. Importantly, the solution needs to be driven by the semiconductor industry, not the power industry. The focus should be on operating in an off-grid environment and on building sustainable solutions to the continuing challenge of the absence of reliable and available power.
It is my contention that Africa, as it has done with mobile communications and the adoption of LED lighting, will leapfrog in developing and adopting low-powered and cost-effective electronics.
Personalized, preventive, predictive and participatory healthcare is on the horizon. Many nano-electronics research groups have included the quest for more efficient healthcare in their mission statements. Electronic systems are proposed to assist in ambulatory monitoring of so-called 'markers' for wellness and health.
New life science tools deliver the prospect of personal diagnostics and therapy in, e.g., the cardiac, neurological and oncology fields. Early diagnosis, detailed and fast screening technology, and companion devices that deliver evidence of therapy effectiveness could indeed spur a desperately needed healthcare revolution. This talk addresses the exciting trends in 'PPPP' healthcare and relates them to an innovation roadmap in process technology, electronic circuits and system concepts.
POET Technologies Inc., based in Storrs Mansfield, Connecticut, USA, and formerly OPEL Technologies Inc., is the developer of an integrated circuit platform that will power the next wave of innovation in integrated circuits by combining electronics and optics on a single chip for massive improvements in size, power, speed and cost.
POET's current IP portfolio includes more than 34 patents, with seven pending. POET's core principles have been in development by director and chief scientist Dr. Geoff Taylor and his team at the University of Connecticut for the past 18 years, and are now nearing readiness for commercialization. The company recently managed to integrate optics and electronics onto one monolithic chip.
Elaborating, Dr. Geoff Taylor, said: “POET stands for Planar Opto Electronic Technology. The POET platform is a patented semiconductor fabrication process, which provides integrated circuit devices containing both electronic and optical elements on a single chip. This has significant advantages over today’s solutions in terms of density, reliability and power, at a lower cost.
“POET removes the need for retooling, while providing lower costs, power savings and increased reliability. For example, an optoelectronic device using POET technology can achieve estimated cost savings back to the manufacturer of 80 percent compared to the hybrid silicon devices that are widely used today.
"The POET platform is a flexible one that can be applied to virtually any market, including memory, digital/mobile, sensor/laser and electro-optical, among many others. The platform uses gallium arsenide, a compound of gallium and arsenic, which will allow semiconductor manufacturers to make microchips that are faster and more energy efficient than current silicon devices, and less expensive to produce.
“The core POET research and development team has spent more than 20 years on components of the platform, including 32 patents (and six patents pending).”
Moore’s Law to end next decade?
Is silicon dead, and how much more is there to Moore's Law?
According to Dr. Taylor, POET Technologies’ view is that Moore’s Law could come to an end within the next decade, particularly as semiconductor companies have recently highlighted difficulties in transitioning to the next generation of chipsets, or can only see two to three generations ahead.
Transistor density and its impact on product cost have been the traditional guideline for advancing computer technology, because density gains have been accomplished by device shrinkage that translates into performance improvement. Moore's Law begins to fail when device shrinkage translates less and less into performance improvement, and this is occurring now at an increasing rate.
He added: "For POET Technologies, however, the question to answer is not when Moore's Law will end, but what comes next. Rather than focus on how many more years we can expect Moore's Law to last, or pinpoint a specific stumbling block to achieving the next generation of chipsets, POET looks at the opportunities for new developments and solutions to continue advancements in computing.
“So, for POET Technologies, we’re focusing less on existing integrated circuit materials and processes and more towards a different track with significant future runway. Our platform is a patented semiconductor fabrication process, which concentrates on delivering increases in performance at lower cost – and meets ongoing consumer appetites for faster, smaller and more power efficient computing.”
The European Commission is said to have a goal: to reach a 20 percent world share in chip manufacturing by 2020! Heinz Kundert, president, SEMI Europe, has even laid out an industrial strategy covering three complementary lines:
* Transition to 450mm, expected to primarily benefit equipment and material manufacturers in Europe.
* "More than Moore" on 200mm and 300mm.
* “More Moore” for ultimate miniaturization on 300mm wafers.
Investment will focus on Europe's clusters of excellence in manufacturing and design — Grenoble, Dresden and Eindhoven-Leuven — and support partnerships and alliances across the value chain in Europe.
The key question of why Europe needs 450mm wafers has been answered by Mike Bryant of Future Horizons. The European semiconductor industry’s vision is to recover a leading position in the world throughout the entire value chain and to reverse the current negative trend of its worldwide competitiveness.
The many strategies the EC is planning to adopt include:
* Benefit from a single explicit European semiconductor industry policy.
* Maintain a high level of R&D effort, balanced between the 150/200/300/450mm fields and between "More Moore" and "More than Moore".
* Strengthen all elements of the value chain, from design to application.
* Develop co-operation programs and synergy initiatives between all semiconductor actors operating in Europe.
Europe has always stressed stronger co-operation with other industry segments, such as automotive, energy, healthcare and well-being, and security and safety.
Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle goes into verification. Only one in three SoCs achieves first-silicon success.
Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors, and three out of four designs re-use existing IP. Almost all embedded processor IPs have power controllability, and almost all SoCs have multiple asynchronous clock domains.
On average, 75 percent of designs are less than 20 million gates. A significant increase in formal checking is approaching. The average number of tests performed has increased exponentially, and regression runs now span days and weeks. Hardware emulation and FPGA prototyping are rising exponentially. There has been a significant increase in the number of verification engineers involved, and a lot of HVLs and methodologies are now available.
Verification challenges include unexpected conflicts in accessing the shared resource. Complexities can arise due to an interaction between standalone systems. Next, there are arbitration priority related issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to the hardware/software sequencing, and long loops and unoptimized code segments. The leakage power management and thermal management also pose problems.
There also needs to be verification of performance and of system power management: multiple power regions are turned ON and OFF, and multiple clocks are gated ON and OFF. Next, there is asynchronous clock domain crossing, plus issues related to protocol compliance for standard interfaces, system stability and component reliability. Other challenges include voltage level translators and isolation cells.
Where are we now? The industry is at clock gating, power gating with or without retention, multi-threshold (multi-Vt) transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D/3D stacking, and fab process and substrate-level bias control.
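To illustrate why DVFS earns its place on that list, here is a rough back-of-envelope sketch (mine, not from the talk) of the standard dynamic power relation P ≈ C·V²·f; the capacitance and operating points below are hypothetical.

```python
# Back-of-envelope sketch (not from the talk) of why DVFS is effective:
# dynamic switching power scales roughly as C * V^2 * f, so lowering
# voltage and frequency together gives an outsized reduction in power.

def dynamic_power(c_eff, v_dd, freq):
    """Approximate dynamic switching power: P = C_eff * Vdd^2 * f."""
    return c_eff * v_dd ** 2 * freq

# Hypothetical operating points for illustration only.
nominal = dynamic_power(c_eff=1.0e-9, v_dd=1.0, freq=1.0e9)   # 1 GHz at 1.0 V
scaled  = dynamic_power(c_eff=1.0e-9, v_dd=0.8, freq=0.6e9)   # 600 MHz at 0.8 V

print(f"nominal: {nominal:.3f} W, scaled: {scaled:.3f} W, "
      f"saving: {100 * (1 - scaled / nominal):.0f}%")
```

With these illustrative numbers, dropping from 1 GHz at 1.0 V to 600 MHz at 0.8 V cuts dynamic power by roughly 60 percent, which is the kind of lever DVFS gives designers.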
So, what's needed? There must be low-power methods that do not impact performance. Careful design partitioning is needed, and clock trees must be optimized. Crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization. There should be register duplication and branch prediction optimization. Finally, there should be a big-little processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and a power information format (CPF or UPF). Low-power verification covers behavior on power-down and on power-up; on power-up, the behavioral processes are re-enabled for evaluation.
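The power-down/power-up behavior is easier to see with a small illustration. The sketch below is a toy Python model (not a real UPF flow or any vendor's tool): on power-down, non-retained state is corrupted, isolation cells clamp the domain's outputs, and on power-up the retained values are restored and evaluation resumes.

```python
# Illustrative toy model of power-down / power-up semantics (Python sketch,
# not a real UPF flow): when a domain powers down, its non-retained state is
# corrupted to 'X'; isolation cells clamp its outputs; on power-up, retained
# values are restored and normal evaluation can resume.

X = "X"  # unknown value after corruption

class PowerDomain:
    def __init__(self, regs, isolation_clamp=0):
        self.regs = dict(regs)          # register name -> value
        self.retained = {}              # values saved by retention cells
        self.powered = True
        self.isolation_clamp = isolation_clamp

    def power_down(self, retain=()):
        self.retained = {r: self.regs[r] for r in retain}
        self.regs = {r: X for r in self.regs}   # corrupt non-retained state
        self.powered = False

    def power_up(self):
        self.powered = True
        self.regs.update(self.retained)          # restore retained registers
        # a simulator would now re-enable the behavioral processes

    def output(self, reg):
        # isolation cell clamps outputs of a powered-down domain
        return self.regs[reg] if self.powered else self.isolation_clamp

dom = PowerDomain({"ctrl": 1, "count": 42})
dom.power_down(retain=["ctrl"])
assert dom.output("count") == 0                 # clamped by isolation
dom.power_up()
assert dom.regs == {"ctrl": 1, "count": X}      # non-retained state was lost
```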
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open-source implementations without much documentation. There can be multiple, confusing implementation options as well. In some cases, no open-source tools are available at all, and tech support for open-source tools is limited.
Power-aware simulation steps perform register/latch recognition from the RTL design and identify power elements and power control signals. They support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
Common pitfalls include wrapper-on-wrapper bugs, e.g., Verilog + e wrapper + SystemVerilog. There is also a dependency on machine-generated functional coverage goals. There may be a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy due to wrapper conditions.
Tips and tricks
Some early planning tips: certain steps need to be completed, including code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs.
Tips and tricks include bridging the gap between the design language and the verification language. Use minimal wrappers to avoid wrapper-level bugs. Review the coverage goals thoroughly. Ensure better interaction between design and verification engineers. Run using basic EDA tool versions to lower costs.
Flip-Chip is a chip packaging technique in which the active area of the chip is 'flipped over' to face downward, instead of facing up and being bonded to the package leads with wires from the outside edges of the chip.
Any surface area of the Flip-Chip can be used for interconnection, which is typically done through metal bumps. These bumps are soldered onto the package and underfilled with epoxy. The Flip-Chip allows for a large number of interconnects with shorter distances than wire, which greatly reduces inductance.
According to Lionel Cadix, market and technology analyst, Yole Developpement, France, metal bumps can be made of solder (tin, tin-lead or lead-free alloys), copper, gold, and copper-tin or gold-tin alloys. The package substrates are epoxy-based (organic substrates), ceramic-based, copper-based (leadframe substrates), and silicon- or glass-based.
In the period 2010-2018, Flip-Chip will likely grow at a CAGR of 19 percent. In 2012, laptop and desktop PCs were the top end products using Flip-Chip, representing 50 percent of the Flip-Chip market by end product with more than 6.2 million wafer starts. PCs are followed by smart TVs and LCD TVs (for LCD drivers), smartphones and high-performance computers.
The Flip-Chip market in 2012 was around $20 billion, with approximately 20 billion units sold on 12-inch-equivalent wafers. Taiwan is so far the no. 1 producer, and at least 50 percent of Flip-Chip devices get into end products. By 2018, the Flip-Chip market should grow to $35 billion, selling 68 billion units.
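For readers who want to sanity-check growth figures of this kind, the compound annual growth rate is straightforward to compute; the snippet below is simple illustrative arithmetic applied to the revenue and unit figures quoted above, not Yole's own model.

```python
# Simple CAGR helper to sanity-check growth figures of this kind
# (illustrative arithmetic only, not Yole's model).

def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Revenue growing from about $20 billion (2012) to $35 billion (2018):
print(f"revenue CAGR 2012-2018: {cagr(20, 35, 6):.1%}")
# Unit shipments growing from about 20 billion to 68 billion over the same period:
print(f"unit CAGR 2012-2018: {cagr(20, 68, 6):.1%}")
```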
Applications and market focus
Looking at the applications and market focus, Flip-Chip technology is already present in a wide range of applications, from high-volume consumer applications to low-volume, high-end applications. All these applications have their own requirements, specifications and challenges!
In low volumes, these include military and aerospace, medical devices, automobiles, HPC, servers, networks, base stations, etc. In high volumes, Flip-Chip is present in set-top boxes, game stations, smart TVs/displays, desktops/laptops and smartphones/tablets. Flip-Chip applications also span imaging, logic 2D SoCs, HB-LEDs, RF, power, analog and mixed-signal, stacked memories, and logic 3D-SiP/SoCs.
In computing applications, for instance, the Intel Core i5 is the first MCM combining a 77mm2 CPU with a 115mm2 GPU in a 37.5mm-side package. Solder bumps with a pitch of 185μm are used for the silicon-to-substrate (first-level) interconnect. This MCM configuration is suitable for office applications, with relatively modest processing-power demands. For mobile/wireless applications, there are opportunities for MEMS in smartphones/feature phones. Similarly, Flip-Chip is available for consumer applications.
For microbumping in interposers for FPGAs, the focus is on the Xilinx Virtex 7 HT. Last year, Xilinx announced a single-layer, multi-chip silicon interposer for its 28nm 7 series FPGAs. Key features include two million logic cells for a high level of computational performance and high bandwidth, four slices processed in 28nm, a 25 x 31mm, 100μm-thick silicon interposer, 45μm-pitch microbumps and 10μm TSVs, and a 35 x 35mm BGA with 180μm-pitch C4 bumps.
Even if the infrastructure had been ready for full 3D stacking, the 2.5D interposer would still have been the right choice for FPGAs, since the '10,000 routing connections' would have used up valuable chip area, making the chip slices larger and more costly than they are now. The Virtex 7 HT will consist of three FPGA slices and two 28 Gbps SerDes chips on an interposer capable of operating at 2.8 Tb/s.
Sensor fusion encompasses hardware and software elements. There can be many data sources, such as MEMS, non-MEMS, etc.
The obvious question: why sensor fusion? Tony Massimini, chief of technology, Semico Research Corp., USA, said that it is useful for power savings, and that the initial reason was to improve the accuracy and reliability of inertial measurement units (IMUs). Looking at the progression from sensors to sensor fusion, there have been simple interrupts such as screen orientation, tap detection, fall detection, and so on. IMUs are now available for location-based services (LBS) and navigation, and are being combined with other data sources.
Sensor fusion enhances the user experience with portable devices. The growth is driven by smartphones; competing devices such as tablets and notebooks (ultraportables) will add more features to keep up with smartphones. Key growth markets today will provide the basis for future end-use markets (see graph: systems with sensor fusion). The market will likely grow at a CAGR of 58.8 percent through 2016.
New end use markets and applications include areas such as gaming, HUD (heads-up display), sports, health and fitness, personal navigation, personal medical, context awareness, voice recognition, visual recognition, augmented reality and automation.
Sensor fusion is used for enhancing the user experience; for instance, it adds sensor data to a 3D-axis frame of reference. Sensor fusion offers always-on operation and low latency. You can also connect to external sensors, such as wearables for health and fitness. Life tagging is possible too, e.g., a photo and video library for context-aware services. Next, there is improved security with biometrics.
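As a concrete, if simplified, illustration of what such fusion looks like in code (a generic textbook approach, not any vendor's algorithm), a complementary filter blends a drifting-but-smooth gyroscope estimate with a noisy-but-stable accelerometer estimate of tilt:

```python
import math

# Minimal complementary-filter sketch (illustrative only, not any vendor's
# fusion library): the gyroscope gives a smooth but drifting angle estimate,
# the accelerometer gives a noisy but drift-free one; blending the two
# yields a stable tilt angle for features such as screen orientation.

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    gyro_angle = angle + gyro_rate * dt                        # integrate gyro rate
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # gravity-based angle
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
# Made-up samples: (gyro rate in deg/s, accel x in g, accel z in g).
samples = [(1.5, 0.02, 0.98), (1.4, 0.05, 0.97), (1.6, 0.07, 0.99)]
for gyro_rate, ax, az in samples:
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
print(f"estimated tilt: {angle:.2f} degrees")
```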
Summarizing the sensor fusion market: MEMS sensor ASPs continue to erode; the number of sensors is increasing; there are improved MEMS sensors, including hardware accelerators; there is interaction with the cloud for data; sensor fusion enables application innovations; and there are new end-use markets.
SLIMbus is a multi-drop, time division multiplexed serial bus. It has one clock and one data line, with CMOS signalling and no analog PHY. It is targeted for low bandwidth connectivity between the AP/modem and audio/Bluetooth/haptic. SLIMbus was originally specified by the MIPI Alliance in 2007. Arasan’s total IP solution delivery demystifies the adoption of SLIMbus.
According to Ajay Jain, director, Mobile Connectivity Products, Arasan Chip Systems, the SLIMbus system overview includes a host component (e.g., apps processor), a device component (e.g., a broadband modem), and a SLIMbus device component (e.g., audio processor, Bluetooth modem). The logical implementation of SLIMbus system features is realized through devices within the SLIMbus IPs.
The AP/modem has the software infrastructure and an active manager device that manages the SLIMbus. Any component can have a framer device activated to drive the SLIMbus CLK. Each component can have one or more generic devices to buffer and transmit/receive audio and other data.
The physical layer enables TDM. The data line is NRZI encoded. The active framer can drive clock gears 1 to 10 for power management. Control and data are interleaved on the SLIMbus frames.
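NRZI line coding itself is easy to illustrate: a '1' bit is signalled by a transition on the line, a '0' by the absence of one. The sketch below is a generic encoder for illustration only; SLIMbus's exact polarity and idle conventions are not reproduced here.

```python
# Generic NRZI encoder sketch (illustrative; SLIMbus polarity/idle
# conventions are not reproduced here). In NRZI, a '1' bit is signalled
# by a transition on the line and a '0' bit by the absence of one.

def nrzi_encode(bits, level=0):
    out = []
    for b in bits:
        if b:                 # '1' -> toggle the line level
            level ^= 1
        out.append(level)     # '0' -> hold the previous level
    return out

print(nrzi_encode([1, 0, 1, 1, 0, 0, 1]))   # -> [1, 1, 0, 1, 1, 1, 0]
```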
As far as device evaluation and enumeration are concerned, each component initializes its devices in the correct order under the direction of the interface device. The active framer drives the SLIMbus CLK and the framing channels with default values. All components perform frame, superframe and message synchronization. All active devices report their presence and characteristics with broadcast messages. Arasan provides the software stack to perform these SLIMbus operations.
The SLIMbus allows a finite set of channel rate multipliers (data segments per superframe). Given the SLIMbus CLK frequency, a channel rate multiplier is chosen to match the audio data rate. Other transfer protocols may be preferred in certain cases, e.g., where flow control is required, or a pushed or pulled protocol is needed. All transfer protocols are programmable through the Arasan software stack.
Each port-to-port connection needs to be mapped onto a SLIMbus data channel. In the example, two-channel audio is carried on SLIMbus data channels 6 and 7, and a subframe length of 32 slots is assumed. The SLIMbus is amazing, yet complex, with a finite set of parameters. Arasan's IPs have addressed the low-level complexities of implementation.
It is always a pleasure to chat with Dr. Wally (Walden C.) Rhines, chairman and CEO of Mentor Graphics. I chatted with him, trying to understand gigascale design, verification trends, strategy for power-aware verification, SERDES design challenges, migrating to 3D FinFET transistors, and Moore's Law getting to be "Moore Stress"!
Chip design: gigascale, gigahertz, gigacomplex
First, I asked him to elaborate on how implementation of chip design will evolve, with respect to gigascale design, gigahertz and gigacomplex geometries.
He said: “Thanks to close co-operation among members of the foundry ecosystem, as well as cooperation between IDMs and their suppliers, serious development of design methods and software tools is running two to three generations ahead of volume manufacturing capability. For most applications, “Gigascale” power dissipation is a bigger challenge than managing the complexity but “system-level” power optimization tools will continue to allow rapid progress. Thermal analysis is becoming part of the designer’s toolkit.”
Functional verification is continually challenged by complexity, but there have been, and continue to be, many orders of magnitude of improvement in performance just from the adoption of emulation, intelligent test benches and formal methods, so this will not be a major limitation.
The complexity of new physical design problems will, however, be very challenging. Design problems ranging from basic ESD analysis, made more complex due to multiple power domains, to EMI, electromigration and intra-die variability are now being addressed with new design approaches. Fortunately, programmable electrical rule checking is being widely adopted and will help to minimize the impact of these physical effects.
Is verification keeping up?
How is the innovation in verification keeping up with trends?
Dr. Rhines added that over the past decade, microprocessor clock speeds have leveled out at 3 to 4 GHz and server performance improvement has come mostly from multi-core architectures. Although some innovative approaches have allowed simulators to gain some advantage from multi-core architectures, the speed of simulators hasn’t kept up with the growing complexity of leading edge chips.
Emulators have more than made up the difference. Emulators offer more than four orders of magnitude faster performance than simulators, and do so at about 0.005X the cost per cycle of simulation. The cost of power per year is more than one third the cost of hardware in a large simulation farm today, while emulation offers a 12X saving in power per verification clock cycle. For those who design really complex chips, a combination of emulation and simulation, along with formal methods and intelligent test benches, has become standard.
At the block and subsystem level, high level synthesis is enabling the next move up in design and verification abstraction. Since verification complexity grows at about the square of component count, we have plenty of room to handle larger chips by taking advantage of the four orders of magnitude improvement through emulation plus another three or four orders of magnitude through formal verification techniques, two to three orders of magnitude from intelligent test benches and three orders of magnitude from higher levels of abstraction.
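A back-of-envelope reading of those multipliers (my arithmetic, not Mentor's figures) shows why the combination matters: stacking the individual gains multiplies into enormous headroom in verification throughput.

```python
# Back-of-envelope reading of the orders-of-magnitude quoted above
# (my arithmetic, not Mentor's own figures). Multiplying the individual
# gains together shows the combined headroom in verification throughput.

gains_in_orders_of_magnitude = {
    "emulation": 4,
    "formal verification": 3,       # quoted as three to four; lower bound used
    "intelligent test benches": 2,  # quoted as two to three; lower bound used
    "higher abstraction": 3,
}

total_orders = sum(gains_in_orders_of_magnitude.values())
print(f"combined gain: ~10^{total_orders} = {10 ** total_orders:,}x")
# If verification effort grows roughly with the square of component count,
# a ~10^12 throughput gain would, in principle, cover a ~10^6 increase in
# component count.
```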
By applying multiple engines and multiple abstraction levels to the challenge of verifying chips, the pressure is on to integrate the flow. Easily transitioning and reusing verification efforts from every level—including tests and coverage models, from high level models to RTL and from simulation to emulation—is being enabled through more powerful and adaptable verification IP and high level, graph-based test specification capabilities. These are keys to driving verification reuse to match the level of design reuse.
Powerful verification management solutions enable the collection of coverage information from all engines and abstraction levels, tracking progress against functional specifications and verification plans. Combining verification cycle productivity growth from emulation, formal, simulation and intelligent testing with higher verification abstraction, re-use and process management provides a path forward to economically verifying even the largest, most complex chips on time and within budget.
Good power-aware verification strategy for SoCs
What should be a good power-aware verification strategy for SoCs?
According to him, the most important guideline is to start power-aware design at the highest possible level of system description. The opportunity to reduce system power is typically an order of magnitude greater at the system level than at the RTL level. For most chips today, that means at least the transaction level when the design is still described in C++ or SystemC.
Significant experience and effort should then be invested at the RTL level using synthesis and UPF-enabled simulation. Verification solutions typically automate the generation of correctness checks for power-control sequences and power-state coverage metrics. As SoC power is typically managed by software, the value of a hardware/software co-verification and co-debug solution in simulation and emulation becomes apparent in power-management verification at this level.
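One simple form such automated checks can take is a monitor that flags illegal power-state transitions and records which legal transitions have been exercised. The sketch below uses a hypothetical three-state table purely for illustration; real flows derive the legal states and transitions from the UPF description and the design's power specification.

```python
# Minimal sketch of a power-state sequence checker (hypothetical state
# table for illustration; real flows derive this from UPF and the design's
# power specification). It flags illegal transitions and tracks coverage.

LEGAL_TRANSITIONS = {
    "OFF":       {"ON"},
    "ON":        {"RETENTION", "OFF"},
    "RETENTION": {"ON", "OFF"},
}

def check_power_sequence(trace):
    covered, errors = set(), []
    for prev, curr in zip(trace, trace[1:]):
        if curr in LEGAL_TRANSITIONS.get(prev, set()):
            covered.add((prev, curr))        # coverage of legal transitions
        else:
            errors.append(f"illegal transition {prev} -> {curr}")
    return covered, errors

# Example trace: the OFF -> RETENTION step is illegal in this toy table.
trace = ["ON", "RETENTION", "ON", "OFF", "RETENTION", "ON"]
covered, errors = check_power_sequence(trace)
print(f"covered {len(covered)} of "
      f"{sum(len(v) for v in LEGAL_TRANSITIONS.values())} legal transitions")
print(errors if errors else "no illegal transitions")
```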
As designers proceed to the gate and transistor levels, the accuracy of power estimation improves. That is why gate-level analysis and verification of the fully implemented power management architecture is important. Finally, at the physical layout level, designers traditionally were stuck with whatever power budget was passed down to them. Now, they increasingly have power goals that can be achieved using dozens of physical design techniques that are built into the place and route tools.