ARM describes the spirit of innovation as collective intelligence at every level: within devices, between people, through technology and across the world. The boundaries of mobile devices are still being pushed.
Speaking at the ARM Summit in Bangalore, Dr Mark Brass, corporate VP, Operations, ARM, said that the first challenge was the number of people on the planet. Technology development and innovation also pose challenges.
According to him, mobile phones are forecast to grow 7.3 percent in 2013 driven by 1 billion smartphones. Mobile data will ramp up 12 times between now and 2018. Mobile and connectivity are creating further innovation.
August, a company, has introduced an electronic door lock controlled from a smartphone. Another is Proteus, which looks at healthcare. The smartphone is becoming the center of our world, and all sorts of sensors are finding their way into it. Next, mobile and connectivity are growing in automobiles: companies like TomTom are competing with automobile companies. Connectivity is also transforming infrastructure and data centers, which are now building off the mobile experience.
According to an IoT survey commissioned by ARM, 76 percent of companies are dealing with IoT. As more things carry information, there will be much more data. The IoT runs on ARM.
“There’s more going on than just what you think. IoT is not just about things. Skills development should not be an afterthought. Co-operation is critical. Solutions will emerge. All sorts of things are going to happen. Three years from now, only 4 percent of companies won’t have IoT in the business at all,” Dr. Brass added.
IoT will be present in industrial applications, especially motors, as well as transportation, energy and healthcare. Smart meters are coming in to help with energy management. There is a move from Little Data to Big Data.
Challenges in 2020 will be in transportation, energy, healthcare and education, and ARM and the ARM partnership are addressing them. “We are delivering an unmatched diversity of solutions. We are scaling from sensors to servers, connecting our world,” Dr. Brass concluded.
Today, in San Francisco, Apple introduced the iPad Air, which features a 9.7-inch Retina display in a new, thinner and lighter design!
A day earlier, in India, Samsung announced the GALAXY Note 10.1. However, it isn’t any match for the iPad Air!
The release says that, precision-engineered to weigh just one pound, the iPad Air is 20 percent thinner and 28 percent lighter than the fourth-generation iPad, and with a narrower bezel, the borders of the iPad Air are dramatically thinner, making content even more immersive.
Apple also introduced the new iPad mini featuring Retina display and 64-bit Apple-designed A7 chip.
The iPads feature two antennas to support multiple-input, multiple-output (MIMO) technology, bringing twice the Wi-Fi performance to the iPad Air and iPad mini with Retina display, at a blazingly fast data rate of up to 300 Mbps. The fact that Apple will be offering 128GB in both models shows that it really means business!
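As an aside, the 300 Mbps figure lines up with the peak 802.11n rate for two spatial streams. The sketch below is my own back-of-the-envelope check, not from Apple's release; it assumes the top 802.11n modulation setting (40 MHz channel with 108 data subcarriers, 64-QAM at 6 bits per symbol, rate-5/6 coding, 3.6 µs symbols with the short guard interval).

```python
# Illustrative check of how two-stream 802.11n reaches 300 Mbps.
# All parameter values are assumptions (peak 802.11n settings), not
# figures from the press release.

def phy_rate_mbps(streams, subcarriers=108, bits_per_symbol=6,
                  coding_rate=5 / 6, symbol_time_us=3.6):
    """Peak 802.11n PHY rate in Mbps for the given parameters."""
    return streams * subcarriers * bits_per_symbol * coding_rate / symbol_time_us

print(round(phy_rate_mbps(1), 1))  # 150.0 Mbps per stream
print(round(phy_rate_mbps(2), 1))  # 300.0 Mbps with 2x2 MIMO, as quoted
```

Doubling the antennas doubles the spatial streams, which is where the "twice the Wi-Fi performance" claim comes from.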
And, what will it do to Apple’s business? Well, there will definitely be many more buyers for sure. Gosh, what will I do? I have a third-gen iPad and this is a fifth-gen model! Never mind! All the best to those who would be buying the iPad Air!
Besides the iPad Air, Apple made some other announcements. First, it introduced the next-gen iWork and iLife apps for OS X, as well as OS X Mavericks, which is now available free from the Mac App Store. Apple also introduced the all-new Mac Pro! Brilliant!
Future Horizons hosted the 22nd Annual International Electronics Forum, in association with IDA Ireland, on Oct. 2-4, 2013, in Blanchardstown, Dublin, Ireland. The forum was titled ‘New Markets and Opportunities in the Sub-20nm Era: Business as Usual OR It’s Different This Time’. Here are excerpts from some of the sessions. Those desirous of finding out more should contact Malcolm Penn, CEO, Future Horizons.
The global interest in graphene research has advanced our understanding of this rather unique material. However, the transition from laboratory to factory has hit some challenging obstacles. In this talk I will review the current state of graphene research, focusing on the techniques that allow large-scale production.
I will then discuss various aspects of our research based on more complex structures beyond graphene. Firstly, hexagonal boron nitride can be used as a thin dielectric material through which electrons can tunnel. Secondly, graphene-boron nitride stacks can be used as tunnelling transistor devices with promising characteristics. The same devices show interesting physics; for example, negative differential conductivity can be found at higher biases. Finally, graphene can be stacked with thin semiconducting layers, which show promising results in photodetection.
I will conclude by speculating on the fields where graphene may realistically find applications, and discuss the role of the National Graphene Institute in commercializing graphene.
The key challenge for future high-end computing chips is energy efficiency, in addition to traditional challenges such as yield/cost, static power and data transfer. By 2020, in order to maintain the overall power consumption of all computing systems at an acceptable level, a 1,000x gain in power efficiency will be required.
To reach this objective, we need to work not only at the process and technology level, but also to propose disruptive multi-processor SoC architectures and to make some major evolutions in software and in the development of applications. Some key semiconductor technologies will definitely play a key role, such as low-power CMOS technologies, 3D stacking, silicon photonics and embedded non-volatile memory.
To reach this goal, the involvement of the semiconductor industry will be necessary, and a new ecosystem has to be put in place to establish stronger partnerships between the semiconductor industry (IDMs, foundries), IP providers, EDA providers, design houses, and the systems and software industries.
This presentation looks at the development of the semiconductor and electronics industries from an African perspective, both globally and in Africa, examining the challenges associated with the wide-scale adoption of new electronics on the African continent.
Electronics have taken over the world, and it is unthinkable in today’s modern life to operate without utilising some form of electronics on a daily basis. Similarly, in Africa the development and adoption of electronics and the utilisation of semiconductors have grown exponentially, driven by the rapid uptake of mobile communications. However, this has placed in stark relief the main challenge facing increased adoption of electronics in Africa, namely power consumption.
This background is central to the thesis that the industry needs to address the twin challenges of low-powered and low-cost devices. In Africa, there are limits to the ability to frequently and consistently charge electronics or keep them connected to a reliable electricity grid. As a result, the current advances in electronics have made the power industry the biggest beneficiary of the growth in the adoption of electronics.
What needs to be done is for the industry to support and foster research on this subject in Africa, working as a global community. The challenge is creating electronics that meet these cost and power constraints. Importantly, the solution needs to be driven by the semiconductor industry, not the power industry. The focus should be on operating in an off-grid environment and building sustainable solutions to the continuing challenge of the absence of reliable, available power.
It is my contention that Africa, as it has done with the mobile communications industry and the adoption of LED lighting, will leapfrog in developing and adopting low-powered, cost-effective electronics.
Personalized, preventive, predictive and participatory healthcare is on the horizon. Many nano-electronics research groups have included the quest for more efficient healthcare in their mission statements. Electronic systems are proposed to assist in ambulatory monitoring of so-called ‘markers’ for wellness and health.
New life science tools deliver the prospect of personal diagnostics and therapy in, e.g., the cardiac, neurological and oncology fields. Early diagnosis, detailed and fast screening technology, and companion devices that deliver evidence of therapy effectiveness could indeed stir a desperately needed healthcare revolution. This talk addresses the exciting trends in ‘PPPP’ healthcare and relates them to an innovation roadmap in process technology, electronic circuits and system concepts.
On the growth drivers for general-purpose MCUs, market growth is driven by faster migration to 32-bit platforms. ST was the first to bring an ARM Cortex-based solution to market, and now targets the leadership position in 32-bit MCUs. An overview of the STM32 portfolio indicates high-performance MCUs with DSP and FPU, up to 608 CoreMark and up to 180 MHz/225 DMIPS.
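The headline figures are self-consistent, which a one-line sanity check shows: 225 DMIPS at 180 MHz works out to the 1.25 DMIPS/MHz that ARM publishes for the Cortex-M4 core.

```python
# Sanity check on the quoted STM32F4 performance numbers:
# 225 DMIPS at 180 MHz is exactly 1.25 DMIPS/MHz, the published
# Cortex-M4 per-clock figure.

dmips, mhz = 225, 180
print(dmips / mhz)  # 1.25
```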
Features of the STM32F4 product lines, specifically the STM32F429/439, include 180 MHz, 1- to 2-MB Flash and 256-KB SRAM. The low-end STM32F401 offers 84 MHz, 128- to 256-KB Flash and 64-KB SRAM.
The STM32F401 provides the best balance of performance, power consumption, integration and cost, while the STM32F429/439 provides more resources, more performance and more features. There is close pin-to-pin and software compatibility within the STM32F4 series and across the STM32 platform.
The STM32 F429-F439 high-performance MCUs with DSP and FPU are:
• World’s highest-performance Cortex-M MCU executing from embedded Flash: Cortex-M4 core with FPU, up to 180 MHz/225 DMIPS.
• High integration thanks to ST’s 90nm process (same platform as the F2 series): up to 2-MB Flash/256-KB SRAM.
• Advanced connectivity: USB OTG, Ethernet, CAN, SDRAM interface and LCD-TFT controller.
• Power efficiency, thanks to ST’s 90nm process and voltage scaling.
In terms of providing more performance, the STM32F4 provides up to 180 MHz/225 DMIPS with ART Accelerator, up to 608 CoreMark result, and ARM Cortex-M4 with floating-point unit (FPU).
The STM32F427/429 highlights include:
• 180 MHz/225 DMIPS.
• Dual-bank Flash (in both 1-MB and 2-MB versions), 256-KB SRAM.
• SDRAM Interface (up to 32-bit).
• LCD-TFT controller supporting up to SVGA (800×600).
• Better graphics with the ST Chrom-ART Accelerator:
– 2x more performance vs. the CPU alone
– Offloads the CPU for graphical data generation
* Raw data copy
* Pixel format conversion
* Image blending (image mixing with some transparency).
• 100 μA typ. in Stop mode.
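To make the "image blending with some transparency" bullet concrete, here is a software sketch of the per-pixel operation that ST's Chrom-ART (DMA2D) accelerator performs in hardware over whole buffers. The function and its 8-bit alpha convention are my own illustration, not ST's API.

```python
# Software model of the pixel blending that Chrom-ART offloads from the
# CPU: mixing a foreground pixel over a background pixel with some
# transparency. Illustrative only; the real peripheral streams buffers.

def blend_pixel(fg, bg, alpha):
    """Blend two 8-bit-per-channel RGB pixels; alpha is foreground
    opacity in 0..255 (255 = fully opaque)."""
    return tuple((f * alpha + b * (255 - alpha)) // 255 for f, b in zip(fg, bg))

print(blend_pixel((255, 0, 0), (0, 0, 255), 255))  # (255, 0, 0): opaque fg wins
print(blend_pixel((255, 0, 0), (0, 0, 255), 0))    # (0, 0, 255): bg untouched
```

Doing this multiply-per-channel for every pixel of a display is exactly the kind of repetitive work worth offloading, which is where the "2x more performance vs. the CPU alone" claim comes from.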
Some real-life examples of the STM32F4 include the smart watch, where it is the main application controller or sensor hub, the smartphone, tablets and monitors, where it is the sensor hub for MEMS and optical touch, and the industrial/home automation panel, where it is the main application controller. These can also be used in Wi-Fi modules for the Internet of Things (IoT), such as appliances, door cameras, home thermostats, etc.
These offer outstanding dynamic power consumption thanks to ST’s 90nm process, as well as low leakage current made possible by advanced design techniques and architecture (voltage scaling). ST offers a large range of evaluation boards and Discovery kits, and the STM32F4 also gets new firmware libraries. SEGGER and ST have signed an agreement around the emWin graphical stack; the solution is called STemWin.
San Jose, USA-based Atrenta’s SpyGlass Predictive Analyzer gives engineers a powerful guidance dashboard that enables efficient verification and optimization of SoC designs early, before expensive and time-consuming traditional EDA tools are deployed. I recently met up with Dr. Ajoy Bose, chairman, president and CEO, Atrenta, to find out more.
I started by asking how Atrenta provides early design analysis for logic designers. He said: “The key ingredient is something we call predictive analysis. That is, we need to analyze a design at a high level of abstraction and predict what will happen when it undergoes detailed implementation. We have a rich library of algorithms that provide highly accurate ‘predictions’, without the time and cost required to actually send a design through detailed implementation.”
There’s a saying that electronic system level (ESL) design is where the future of EDA lies. Why? It’s because the lower levels of abstraction (detailed implementation) in the EDA market are undergoing commoditization and consolidation. There are fewer solutions, and less differentiation between them. At the upper levels of abstraction (ESL), this is not the case; there still exists ample opportunity to provide new and innovative solutions.
Now, how will this help EDA move up into the embedded software space? According to Dr. Bose, true hardware/software co-design is still not a solved problem. Once viable solutions are developed, EDA will be able to sell to the embedded software engineer. This will be a new market, and new revenue, for EDA.
How are the SpyGlass and GenSys platforms helping the industry, and what problems are they solving? Dr. Ajoy Bose said: “SpyGlass is Atrenta’s platform for RTL signoff. It is used by virtually all SoC design teams to ensure the power, performance and cost of their SoC is as good as it can be prior to handoff to detailed implementation. SpyGlass is also used to select and qualify semiconductor IP – a major challenge for all SoC design teams.
“GenSys provides a way to easily assemble and modify designs at the RTL level of abstraction. As a lot of each SoC is re-used design data, the need to modify this data to fit the new design is very prevalent. GenSys provides an easy, correct-by-construction way to get this job done.”
How does SpyGlass solve RTL design issues, ensuring high-quality RTL with fewer design bugs? He added that it’s the predictive analysis technology: SpyGlass provides accurate and relevant information about what will happen when a design is implemented and tested. By fixing these problems early, at RTL, a much higher-quality design is handed off to detailed implementation, with fewer bugs and associated schedule challenges.
On another note, I asked him why Apple’s choice of chips is a factor in influencing the global chip industry. The primary reason is Apple’s volume and buying power: Apple is something of a “King Maker” when it comes to who manufactures its chips. Apple is also a thought leader and trend setter, so its decisions affect the decisions of others.
Finally, the global semiconductor industry! How is it doing in H1-2013? As per Dr. Bose: “We see strong growth. Our customers are undertaking many new designs at advanced process technology nodes. We think that this speaks well for future growth of the industry. At a macro level, the consumer sector will drive a lot of the growth ahead. For EDA, the higher levels of abstraction is where the growth will be.”
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
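The single-specification idea can be sketched in a few lines: one register description drives several always-consistent views. The spec format, field names and output shapes below are entirely invented for illustration; the actual tool reads Word/Excel/IP-XACT and emits UVM, RTL and more.

```python
# Hypothetical sketch of generating multiple views (a C header and a
# UVM-style field list) from one register specification. All names and
# formats here are made up for illustration.

REGS = [
    {"name": "CTRL",   "offset": 0x00, "fields": [("EN", 0, 1), ("MODE", 1, 2)]},
    {"name": "STATUS", "offset": 0x04, "fields": [("READY", 0, 1)]},
]

def c_header(regs, base=0x40000000):
    """One view: address #defines for firmware."""
    return "\n".join(f"#define {r['name']}_REG 0x{base + r['offset']:08X}"
                     for r in regs)

def uvm_fields(regs):
    """Another view: field descriptions for the verification environment."""
    return "\n".join(f"{r['name']}.{name}: lsb={lsb} width={width}"
                     for r in regs for name, lsb, width in r["fields"])

print(c_header(REGS))
print(uvm_fields(REGS))
```

Because both views are regenerated from the same source, a spec change can never leave the hardware and software teams out of sync, which is the synchronization benefit described above.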
Register verification and sequences can consume 40 percent of project time or more, and register errors are a source of SoC silicon re-spins and of an increased number of FPGA builds. The IDesignSpec family of products is available in various flavors, such as IDSWord, IDSExcel, IDSOO and IDSBatch.
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions in the IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing sequences, the user can now capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, interrupt registers can now be identified by the user and appropriate RTL generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high-level constraint specification. These assertions can be created within the RTL or in external files, such that they can optionally be bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep down in the simulation hierarchy.
The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also automatically generated.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How is IDesignSpec working as chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
SiC is implemented in several power systems and is gaining momentum and credibility.
Yole Developpement remains convinced that the most pertinent market for SiC lies in high and very high voltage (more than 1.2kV), where applications are less cost-driven and where incumbent technologies cannot compete on performance. This transition is on its way, as several device/module makers have already planned such products in the short term.
Even if EV/HEV skips SiC, the industry could expand into other apps. Only one question remains: is there enough business for so many contenders to live decently? Probably yes, as green techs are expanding fast and strongly demand SiC. Newcomers should carefully manage strategy and properly size capex according to the market size.
Power electronics industry outlook
Electronic systems were worth $122 billion in 2012, and will likely grow to $144 billion by 2020 at a CAGR of 1.9 percent. Power inverters will grow from $41 billion in 2012 to over $70 billion by 2020 at a CAGR of 7.2 percent. Semiconductor power devices (discretes and modules) will grow from $12.5 billion in 2012 to $21.9 billion by 2020 at a CAGR of 7.9 percent. Power wafers will grow from $912 million in 2012 to $1.3 billion by 2020 at a CAGR of 5.6 percent.
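For readers who want to sanity-check such forecasts, CAGR over n years is simply (end/start)^(1/n) − 1. Recomputing from the 2012 and 2020 endpoint figures above gives slightly lower values than Yole's quoted rates; the difference presumably comes from rounding of the endpoints or a different base year in their model, so treat this as arithmetic illustration only.

```python
# Implied CAGR from the quoted endpoint figures, taking 2012-2020 as an
# eight-year span (my assumption; the source may use a different base).

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(round(cagr(12.5, 21.9, 8) * 100, 1))   # power devices: ~7.3 percent
print(round(cagr(0.912, 1.3, 8) * 100, 1))   # power wafers:  ~4.5 percent
```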
Looking at the power electronics market in 2012 by application and the main expectations to 2015, computer and office will account for 25 percent, industry and energy 24 percent, consumer electronics 18 percent, automotive and transport 17 percent, telecom 7 percent and others 9 percent.
The main trends expected for 2013-2015 are:
* Significant increase of automotive sector following EV and HEV ramp-up.
* Renewable energies and smart-grid implementation will drive industry sector ramp-up.
* Steady erosion of the consumer segment due to pressure on price (however, volumes in units will keep increasing).
The 2011 power device sales by region reveal that, overall, Asia is still the landing field for more than 65 percent of power products. Most of the integrators are located in China, Japan or Korea. Europe is very dynamic as well, with top players in traction, grid, PV inverters, motor control, etc. Asia leads with 39 percent, followed by Japan with 27 percent, Europe with 21 percent and North America with 13 percent.
The 2011 revenues by company headquarters location reveal that the big names of the power electronics industry are historically from Japan: nine of the top-20 companies are Japanese. There are very few power manufacturers in Asia outside Japan, while Europe and the US share four of the top five companies. Japan leads with 42 percent, followed by Europe and North America with 28 percent each, and Asia with 2 percent.
Looking at the TAM comparison for SiC (and GaN), very high voltage, high voltage of 2kV and medium voltage of 1.2kV appear to be a more comfortable area for SiC: the apps are less cost-driven, and SiC’s added value is obvious. Low voltage, from 0-900V, brings strong competition from traditional silicon technologies, SJ MOSFETs and GaN, as these are cost-driven apps.
Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle is verification, and only one in three SoCs achieves first-silicon success.
Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors, and three out of four designs re-use existing IPs. Almost all embedded processor IPs have power controllability, and almost all SoCs have multiple asynchronous clock domains.
Around 75 percent of designs are less than 20 million gates. Formal checking is increasing significantly. The average number of tests performed has grown exponentially, and regression runs now span days and weeks. Hardware emulation and FPGA prototyping are rising exponentially. There has been a significant increase in the number of verification engineers involved, and a lot of HVLs and methodologies are now available.
Verification challenges include unexpected conflicts in accessing the shared resource. Complexities can arise due to an interaction between standalone systems. Next, there are arbitration priority related issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to the hardware/software sequencing, and long loops and unoptimized code segments. The leakage power management and thermal management also pose problems.
There needs to be verification of performance and system power management. Multiple power regions are turned ON and OFF. Multiple clocks are also gated ON and OFF. Next, asynchronous clock domain crossing, and issues related to protocol compliance for standard interfaces. There are issues related to system stability and component reliability. Some other challenges include voltage level translators and isolation cells.
Where are we now? It is at clock gating, power gating with or without retention, multi-switching (multi-Vt) threshold transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D-3D stacking, and fab process and substrate level bias control.
So, what’s needed? There must be low-power methods that do not impact performance. Careful design partitioning is needed, and the clock trees must be optimized. Crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization, along with register duplication and branch-prediction optimization. Finally, there should be a big.LITTLE processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and a power information format – CPF or UPF. Low-power-related verification covers both power-down and power-up behavior; on power-up, the behavioral processes are re-enabled for evaluation.
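The power-down/power-up behavior can be illustrated with a toy model, not from the talk: on power-down a power-aware simulator corrupts non-retained state (shown here as 'x'), while retention registers keep their values and are restored on power-up, when behavioral processes are re-enabled. The class and register names are invented for illustration.

```python
# Toy model of power gating with retention, mimicking what a power-aware
# simulator does: corrupt non-retained state on power-down, restore
# retained state on power-up. Names are illustrative only.

class PowerDomain:
    def __init__(self):
        self.regs = {"cfg": 0b1010, "scratch": 0xFF}
        self.retained = {"cfg"}       # only 'cfg' has a retention flop
        self.on = True

    def power_down(self):
        self.on = False
        self._shadow = {k: v for k, v in self.regs.items() if k in self.retained}
        self.regs = {k: "x" for k in self.regs}   # simulate state corruption

    def power_up(self):
        self.on = True
        self.regs.update(self._shadow)            # restore retained state only

d = PowerDomain()
d.power_down()
d.power_up()
print(d.regs)   # {'cfg': 10, 'scratch': 'x'} -- scratch lost, cfg retained
```

This is exactly the class of behavior that UPF/CPF describe declaratively, and that verification must confirm: anything not marked for retention must be assumed lost across a power cycle.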
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open-source implementations without much documentation, and there can be multiple, confusing implementation options as well. In some cases, no open-source tools are available, and tech support is limited because the code is open source.
Power-aware simulation steps perform register/latch recognition from the RTL design, identify power elements and power control signals, and support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
Common pitfalls include wrapper-on-wrapper bugs (e.g., Verilog + e wrapper + SV). There is also a dependency on machine-generated functional coverage goals, and there may be a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy due to a wrapper condition.
Tips and tricks
Some early planning is needed, and certain steps need to be completed: code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs, etc.
Tips and tricks include bridging the gap between the design language and the verification language, using minimal wrappers to avoid wrapper-level bugs, thoroughly reviewing the coverage goals, and ensuring better interaction between the design and verification engineers. Running with basic EDA tool versions also lowers costs.
Flip-Chip is a chip packaging technique in which the active area of the chip is ‘flipped over’ to face downward, instead of facing up and being bonded to the package leads with wires from the outside edges of the chip.
Any surface area of the Flip-Chip can be used for interconnection, which is typically done through metal bumps. These bumps are soldered onto the package and underfilled with epoxy. The Flip-Chip allows for a large number of interconnects with shorter distances than wire, which greatly reduces inductance.
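The inductance reduction can be made tangible with the classic straight-round-wire approximation L[nH] ≈ 2·l·(ln(2l/r) − 0.75), with length l and radius r in centimetres. The dimensions below are my own illustrative choices (a 1 mm, 25-µm-diameter bond wire vs. a 100-µm solder bump), not figures from Yole.

```python
import math

# Rough comparison of the partial self-inductance of a bond wire vs. a
# solder bump, using L[nH] = 2*l*(ln(2*l/r) - 0.75) with l, r in cm.
# Dimensions are assumed, typical-looking values for illustration.

def wire_inductance_nh(length_cm, radius_cm):
    return 2 * length_cm * (math.log(2 * length_cm / radius_cm) - 0.75)

bond_wire = wire_inductance_nh(0.1, 0.00125)   # 1 mm wire, 25 um diameter
bump = wire_inductance_nh(0.01, 0.005)         # ~100 um tall, 100 um wide bump
print(round(bond_wire, 2), "nH")               # ~0.87 nH (the ~1 nH/mm rule)
print(round(bump, 3), "nH")                    # ~0.013 nH
```

Even with these rough numbers, the bump is tens of times less inductive than the wire, which is why Flip-Chip interconnects matter for high-speed signalling and power delivery.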
According to Lionel Cadix, market and technology analyst, Yole Developpement, France, metal bumps can be made of solder (tin, tin-lead or lead-free alloys), copper, gold, and copper-tin or gold-tin alloys. The package substrates are epoxy-based (organic substrates), ceramic-based, copper-based (leadframe substrates), and silicon- or glass-based.
In the period 2010-2018, Flip-Chip will likely grow at a CAGR of 19 percent. In 2012, laptop and desktop PCs were the top end products using Flip-Chip, representing 50 percent of the Flip-Chip market by end product, with more than 6.2 million wafer starts. PCs are followed by smart TVs and LCD TVs (for LCD drivers), smartphones and high-performance computers.
The Flip-Chip market in 2012 was around $20 billion, with approximately 20 billion units sold on 12-inch-equivalent wafers. Taiwan is so far the no. 1 producer, and at least 50 percent of Flip-Chip devices get into end products. By 2018, the Flip-Chip market should grow to $35 billion, with 68 billion units sold.
Applications and market focus
Looking at the applications and market focus, Flip-Chip technology is already present in a wide range of applications, from high-volume/consumer applications to low-volume/high-end applications. All these applications have their own requirements, specifications and challenges!
In low volumes, these include military and aerospace, medical devices, automobiles, HPC, servers, networks, base stations, etc. In high volumes, Flip-Chip is present in set-top boxes, game stations, smart TVs/displays, desktops/laptops and smartphones/tablets. Flip-Chip applications also span imaging, logic 2D SoCs, HB-LEDs, RF, power, analog and mixed-signal, stacked memories, and logic 3D-SiP/SoCs.
In computing applications, for instance, the Intel Core i5 is the first MCM combining a 77mm2 CPU with a 115mm2 GPU in a 37.5mm-side package. Solder bumps with a pitch of 185μm are used for the silicon-to-substrate (first-level) interconnect. This MCM configuration is suitable for office applications, with relatively undemanding processing loads. For mobile/wireless applications, there are opportunities for MEMS in smartphones/feature phones. Similarly, Flip-Chip is available for consumer applications.
For microbumping in interposers for FPGAs, the focus is on the Xilinx Virtex-7 HT. Last year, Xilinx announced a single-layer, multi-chip silicon interposer for its 28nm 7 series FPGAs. Key features include two million logic cells for a high level of computational performance and high bandwidth; four slices processed in 28nm; a 25 x 31mm, 100-μm-thick silicon interposer; 45-μm-pitch microbumps and 10-μm TSVs; and a 35 x 35mm BGA with 180-μm-pitch C4 bumps.
Even if the infrastructure had been ready for full 3D stacking, the 2.5D interposer would still have been the right choice for FPGAs, since the ‘10,000 routing connections’ would have used up valuable chip area, making the chip slices larger and more costly than they are now. The Virtex-7 HT will consist of three FPGA slices and two 28-Gbps SerDes chips on an interposer capable of operating at 2.8 Tb/s.