POET Technologies Inc., based in Storrs Mansfield, Connecticut, USA, and formerly known as OPEL Technologies Inc., is developing an integrated circuit platform that combines electronics and optics on a single chip, promising major improvements in size, power, speed and cost.
POET’s current IP portfolio includes more than 34 patents, with seven more pending. POET’s core technology has been developed by director and chief scientist Dr. Geoff Taylor and his team at the University of Connecticut over the past 18 years, and is now nearing readiness for commercialization. The team recently succeeded in integrating optics and electronics onto one monolithic chip.
Elaborating, Dr. Geoff Taylor said: “POET stands for Planar Opto Electronic Technology. The POET platform is a patented semiconductor fabrication process, which provides integrated circuit devices containing both electronic and optical elements on a single chip. This has significant advantages over today’s solutions in terms of density, reliability and power, at a lower cost.
“POET removes the need for retooling, while providing lower costs, power savings and increased reliability. For example, an optoelectronic device using POET technology can achieve estimated cost savings to the manufacturer of 80 percent compared with the hybrid silicon devices that are widely used today.
“The POET platform is a flexible one that can be applied to virtually any market, including memory, digital/mobile, sensor/laser and electro-optical, among many others. The platform uses the compound semiconductor gallium arsenide, which will allow semiconductor manufacturers to make microchips that are faster and more energy efficient than current silicon devices, and less expensive to produce.
“The core POET research and development team has spent more than 20 years developing components of the platform, work covered by 32 patents (with six more pending).”
Moore’s Law to end next decade?
Is silicon dead, and how much more is there to Moore’s Law?
According to Dr. Taylor, POET Technologies’ view is that Moore’s Law could come to an end within the next decade, particularly as semiconductor companies have recently highlighted difficulties in transitioning to the next generation of chipsets, or can only see two to three generations ahead.
Transistor density, and its impact on product cost, has been the traditional guideline for advancing computer technology, because greater density has been achieved through device shrinkage, which in turn translated into performance improvement. Moore’s Law begins to fail when device shrinkage translates less and less into performance improvement, and this is now occurring at an increasing rate.
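As a back-of-the-envelope illustration of the density scaling behind Moore’s Law, the arithmetic can be sketched as follows (assuming the classic two-year doubling cadence, which is a simplification; real cadences have drifted between roughly 18 and 30 months):

```python
# Rough sketch of Moore's Law density scaling. Assumes a fixed
# doubling cadence, which is an idealization of the historical trend.

def projected_density(initial_density, years, doubling_period=2.0):
    """Project transistor density after `years`, doubling every `doubling_period` years."""
    return initial_density * 2 ** (years / doubling_period)

# Starting from a nominal 100 million transistors per mm^2:
print(projected_density(100e6, 4))   # two doublings: 400 million per mm^2
print(projected_density(100e6, 10))  # five doublings: 3.2 billion per mm^2
```

The point of the sketch is that each shrink compounds: the trouble begins when each shrink stops delivering a matching performance gain, not when shrinking itself stops.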
He added: “For POET Technologies, however, the question to answer is not when Moore’s Law will end, but what comes next. Rather than focus on how many more years we can expect Moore’s Law to last, or pinpoint a specific stumbling block to achieving the next generation of chipsets, POET looks at the opportunities for new developments and solutions to continue advancements in computing.
“So, for POET Technologies, we’re focusing less on existing integrated circuit materials and processes and more on a different track with a significant future runway. Our platform is a patented semiconductor fabrication process, which concentrates on delivering increases in performance at lower cost, and meets ongoing consumer appetites for faster, smaller and more power-efficient computing.”
About 318 engineers and managers completed a blind, anonymous survey on On-Chip Communications Networks (OCCNs), also referred to as “on-chip networks,” defined as the entire interconnect fabric of an SoC. The survey was conducted by Sonics Inc. A summary of some of the highlights follows.
The average estimated time spent on designing, modifying and/or verifying on-chip communications networks was 28 percent (among respondents who knew their estimated time).
The two biggest challenges for implementing OCCNs were meeting product specifications and balancing frequency, latency and throughput. Second tier challenges were integrating IP elements/sub-systems and getting timing closure.
As for 2013 SoC design expectations, a majority of respondents are targeting a core speed of at least 1 GHz for SoC design starts within the next 12 months, based on those respondents who knew their target core speeds. Forty percent of respondents expect to have 2-5 power domain partitions for their next SoC design.
A variety of topologies are being considered for respondents’ next on-chip communications networks, led by NoCs (half of respondents), followed by crossbars, multi-layer bus matrices and peripheral interconnects; respondents who knew their plans were seriously considering an average of 1.7 different topologies.
Twenty percent of respondents stated they already had a commercial Network-on-Chip (NoC) implemented or plan to implement one in the next 12 months, while over a quarter plan to evaluate a NoC over the next 12 months. A NoC was defined as a configurable network interconnect that packetizes address/data for multicore SoCs.
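To illustrate what packetizing address/data means in this context, here is a minimal conceptual sketch of a network interface splitting one transaction into flits and reassembling it; the flit format and field widths are invented for illustration and do not correspond to any commercial NoC:

```python
# Conceptual sketch of NoC packetization: an address/data transaction is
# split into flits (flow-control units) with a small routing header.
# The header layout and field widths here are invented for illustration.

def packetize(src_id, dst_id, address, data_words):
    """Split one write transaction into a list of (kind, payload) flits."""
    flits = [("HEAD", (src_id << 8) | dst_id)]   # routing header flit
    flits.append(("ADDR", address))              # address flit
    for word in data_words[:-1]:
        flits.append(("BODY", word))
    flits.append(("TAIL", data_words[-1]))       # tail flit closes the packet
    return flits

def depacketize(flits):
    """Reassemble the transaction at the destination network interface."""
    header = flits[0][1]
    src, dst = header >> 8, header & 0xFF
    address = flits[1][1]
    data = [payload for kind, payload in flits[2:]]
    return src, dst, address, data

flits = packetize(src_id=3, dst_id=7, address=0x1000, data_words=[0xAA, 0xBB])
assert depacketize(flits) == (3, 7, 0x1000, [0xAA, 0xBB])
```

Because the packet carries its own routing information, many such packets can share the same physical links, which is what lets a NoC scale to large multicore SoCs.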
For respondents who had an opinion when commercial Networks-on-Chip became an important consideration versus internal development when implementing an SoC, 43 percent said they would consider commercial NoCs at 10 or fewer cores; approximately two-thirds said they would consider commercial NoCs at 20 or fewer cores.
The survey participants’ top three criteria for selecting a Network-on-Chip were scalability/adaptability, quality of service and system verification, followed by layout friendliness and support for power domain partitioning. Half of the respondents saw reduced wiring congestion as the primary reason to use virtual channels, followed by increased throughput and meeting system concurrency with limited bandwidth.
Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle goes into verification. Only one in three SoCs achieves first-silicon success.
Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors, and three out of four re-use existing IP. Almost all embedded processor IPs have power controllability, and almost all SoCs have multiple asynchronous clock domains.
About 75 percent of designs are less than 20 million gates. A significant increase in formal checking is approaching. The average number of tests performed has increased dramatically, and regression runs now span days or even weeks. Hardware emulation and FPGA prototyping are rising sharply, there has been a significant increase in the number of verification engineers involved, and many HVLs and methodologies are now available.
Verification challenges include unexpected conflicts in accessing shared resources. Complexities can arise from interactions between standalone systems. Next, there are arbitration priority issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to hardware/software sequencing, and to long loops and unoptimized code segments. Leakage power management and thermal management also pose problems.
Performance and system power management also need verification: multiple power regions are turned on and off, and multiple clocks are gated on and off. Next come asynchronous clock domain crossings and protocol compliance for standard interfaces, along with issues of system stability and component reliability. Other challenges include voltage level translators and isolation cells.
Where are we now? Current techniques include clock gating, power gating (with or without retention), multi-threshold (multi-Vt) transistors, multi-supply multi-voltage (MSMV) design, DVFS, logic optimization, thermal compensation, 2D/3D stacking, and fab-process and substrate-level bias control.
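One of these techniques, DVFS, exploits the fact that dynamic power scales roughly with C·V²·f. A rough sketch of the savings, using an idealized model that ignores leakage and the detailed voltage/frequency coupling:

```python
# Rough sketch of why DVFS saves power: dynamic switching power scales as
# C * V^2 * f. Idealized; ignores leakage power and the fact that the
# minimum usable voltage depends on the target frequency.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic switching power (watts) for effective capacitance C (farads)."""
    return capacitance * voltage ** 2 * frequency

nominal = dynamic_power(1e-9, 1.0, 1e9)    # 1 nF switched at 1.0 V, 1 GHz
scaled  = dynamic_power(1e-9, 0.8, 0.5e9)  # scaled down to 0.8 V, 500 MHz

# Halving frequency and dropping voltage 20 percent cuts dynamic power ~68 percent:
print(round(scaled / nominal, 2))  # -> 0.32
```

The quadratic voltage term is why voltage scaling, not frequency scaling alone, delivers most of the benefit.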
So, what’s needed? There must be low-power methods that do not impact performance. Careful design partitioning is needed, and clock trees must be optimized. Crucial software operations need to be identified at an early stage, and functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization, along with register duplication and branch prediction optimization. Finally, there should be a big-little processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controllers, clock domain managers, and a power information format – CPF or UPF. Low-power verification covers both power-down and power-up behavior; on power-up, the behavioral processes are re-enabled for evaluation.
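The role of an isolation cell across power-down and power-up can be illustrated with a small behavioral model. This is a conceptual Python sketch, not real UPF/CPF or simulator code, and the clamp-to-zero and reset-on-power-up behavior are assumptions for the example:

```python
# Conceptual sketch of power gating with an isolation cell: when a power
# domain is switched off, its output is clamped to a safe value so that
# still-powered downstream logic never sees a floating/unknown signal.
# Behavioral illustration only; not real UPF/CPF semantics.

class PowerDomain:
    def __init__(self, isolation_clamp=0):
        self.powered = True
        self.isolation_clamp = isolation_clamp  # value driven while off
        self._state = 0                         # internal register value

    def power_down(self):
        self.powered = False
        self._state = None   # state is lost (no retention in this sketch)

    def power_up(self):
        self.powered = True
        self._state = 0      # assume reset to 0 on power-up

    def write(self, value):
        if self.powered:
            self._state = value

    def output(self):
        # Isolation cell: clamp the output while the domain is off.
        return self._state if self.powered else self.isolation_clamp

dom = PowerDomain(isolation_clamp=0)
dom.write(42)
assert dom.output() == 42
dom.power_down()
assert dom.output() == 0   # clamped by the isolation cell
dom.power_up()
assert dom.output() == 0   # state was lost; reset value after power-up
```

A retention variant would save `_state` before power-down and restore it on power-up, which is the "with retention" option mentioned above.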
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open-source implementations without much documentation, and there can be multiple, confusing implementation options. In some cases, no open-source tools are available at all, and tech support is limited because the tools are open source.
Power-aware simulation steps perform register/latch recognition from the RTL design, identify power elements and power control signals, and support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
Common pitfalls include wrapper-on-wrapper bugs, e.g. Verilog + e wrapper + SV. There is also a dependency on machine-generated functional coverage goals, and there may be a disconnect between the design and verification languages. Other pitfalls are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Even proven IP can become buggy because of a wrapper condition.
Tips and tricks
Some early planning helps, and certain steps need to be completed: code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs.
Tips and tricks include bridging the gap between the design language and the verification language. Minimal wrappers must be used to avoid wrapper-level bugs, and coverage goals should be reviewed thoroughly. There should be better interaction between designers and verification engineers. Running with basic EDA tool versions also lowers costs.
Today, EDA requires specialization. Elaborating on EDA over the past decade, Dr. Walden (Wally) C. Rhines, chairman and CEO of Mentor Graphics, and vice chairman of the EDA Consortium, USA, said that PCB design has been flat despite growth in analysis, DFM and new emerging markets. Front-end design has seen growth from RF/analog design, simulation and analysis. As design methodologies mature, EDA expenditures stop growing. He was speaking at Mentor Graphics’ U2U (User2User) conference in Bangalore, India.
Most of the EDA revenue growth comes from major new design methodologies, such as ESL, DFM, analog/mixed-signal and RF. The PCB design trend, which includes license and maintenance revenue, continues to be flat. The IC layout verification market is pointing to a 2.1 percent CAGR at the end of 2011. The RTL simulation market has been growing at a 1.3 percent CAGR for the last decade, and the IC physical implementation market at a 3.4 percent CAGR over the same period.
Growth areas in EDA from 2000-2011 include DFM at 28 percent CAGR, formal verification at 12 percent, ESL at 11 percent, and IC/ASIC analysis at 9 percent.
What will generate the next wave of electronic product design challenges, and the future growth of EDA? This would involve solving new problems that are not part of traditional EDA, and ‘doing what others don’t do!’
Methodology changes that may change EDA
There are five factors that can make this happen. These are:
* Low power design beyond RTL (and even ESL).
* Functional verification beyond simulation.
* Physical verification beyond design for manufacturability.
* Design for test beyond compression.
* System design beyond PCBs.
Low power design at higher levels
Power affects every design stage. Sometimes, designing for low power at system level is required. System level optimization has the biggest impact on power/performance. And, embedded software is a major point of leverage.
Embedded software has an increasing share of the design effort. Here, Mentor’s Nucleus power management framework is key. It has a unique API for power management, enables software engineers to optimize power consumption, and reduces lines of application code. Power-aware design also optimizes code efficiency.
Functional verification beyond RTL simulation
The verification methodology standards war is over. UVM is expected to grow by 286 percent in the next 12 months. Mentor Graphics’ Questa inFact is the industry’s most advanced testbench automation solution. It enables testbench re-use and accelerates time-to-coverage, and intelligent testbench facilitates a linear transition to multi-processing.
Questa accelerates the hardware/software verification environment. In-circuit emulation has been evolving toward virtual hardware acceleration and embedded software development. Offline debug increases development productivity: a four-hour on-emulator software debug session drops to a 30-minute batch run, and offline debug allows 150 software designers to jumpstart the debug process on source code. Virtual stimulus increases the flexibility of the emulator. As an example, Veloce is 700x more efficient than large simulation farms.
Physical verification beyond design for manufacturability
Calibre PERC is a new approach to circuit verification, while Calibre 3DSTACK is the verification flow for 3D stacked designs.
CH2M HILL is a global leader in consulting, design, design-build, operations, and program management. Its ultimate goal is to link nanotechnologies to high-tech manufacturing.
Nanomanufacturing techniques for scale include photolithography, e-beam lithography, ion-beam lithography, nano-imprint lithography, nanofabrication by self-assembly, and laser technology processes.
There are three major challenges for cost-effective nanomanufacturing: flexibility, critical environment scale-up, and safety, sustainability and health (SSH). Nanomanufacturing requires high flexibility, and nanofacility critical environments involve electromagnetic interference, cleanliness, vibration, temperature and humidity control, adaptive HVAC zones, airborne molecular contamination (AMC) and acoustics.
There are nanofacility site planning challenges such as surface transit, direct-current light rail, high-voltage lines, and truck and bus traffic. A detailed ambient conditions study and subsurface vibration testing, 3-4 meters below grade, need to be analyzed. Solutions include ‘no-build’ zones for vibration, EMI and RFI; building outside these zones; identifying the ‘sweet spot’; placing VC-E areas on the lower/first level; and remediation by mass, such as slab sizing on the lower and first levels.
The proposed model for CH2M HILL’s China nanomanufacturing includes top-level R&D labs, stacked cleanrooms for pilot and manufacturing, nano/MEMS/NEMS, ISO 5 and 7 cleanrooms, VC-D and VC-C vibration criteria, e-beam metrology, TEM suite capability and a remote bulk gas pad. The proposed China baseline site is in Suzhou, China.
Headquartered in Englewood, Colorado, USA, CH2M HILL has nearly 30,000 employees. Broadly diversified across multiple business sectors, it had $6.4 billion in revenue (2011).
Yole Développement of France recently organized a seminar on next-generation MEMS. The speakers were Dr. Eric Mounier, project manager, Yole Développement, and Dr. Adrian Devasahayam, senior director, technology, Veeco Instruments.
As performance requirements for MEMS and other devices become more stringent, the industry is encountering etch challenges that cannot be overcome with existing toolsets. The use of materials that are not readily etched reactively, combined with higher sensitivities to post etch corrosion in smaller devices, is driving a search for a more suitable etch solution for certain applications.
According to Dr. Mounier of Yole, it is estimated that until 2015 the ferroelectric thin-film business will grow at a rate of 7.5 percent per year, with many current or new applications. In the MEMS field, these applications could be wafer-level autofocus, IR sensors, RF switches and medical ultrasonic transducers. In other markets, applications would include IPD tunable capacitors, IPD hearing aids, FeRAM, optical switches, etc.
Dr. Mounier added that growth in the global ferroelectric thin-film market until 2015 is mainly driven by two high-growth-rate MEMS applications, namely IR sensors and wafer-level optical autofocus. Many other applications are expected to emerge in 2014-2015. These would include RF MEMS and ultrasonic thin-film technologies that are under development by large groups such as IBM, Philips, Toshiba, etc. IPD high-density planar capacitors with thin films are being evaluated worldwide by key companies such as STMicroelectronics, Ipdia, On Semi, Maxim, etc.
Magnetometers using MEMS technologies are currently under development, for instance at Bosch, VTT, etc. They are likely to be integrated with accelerometers to create inertial sensing modules (combo sensors) for consumer/auto applications. Read more…
Moshe Gavrielov, president and CEO, Xilinx Inc., presented the keynote on day 2 of the ongoing ISA Vision Summit 2011 here in Bangalore.
More later! ;)
According to Dr. Prem Kalra, director, IIT Rajasthan, becoming educated should enable one to solve problems. To make students job creators, you need to empower them!
More later! ;)
The India Semiconductor Association (ISA) has released a sector report on the opportunities in the Indian medical electronics field, titled: “Current status and potential for medical electronics in India”, 2010, at the Narayana Hrudayalaya campus in Bangalore.
The Indian healthcare market (FY ’09) has been valued at Rs. 300,000 crores ($63 billion). Of this, healthcare delivery makes up 72 percent, the pharmaceutical industry 20 percent, health insurance 5 percent, medical equipment 1.4 percent, medical consumables 1.1 percent, and medical IT 0.2 percent.
Medical electronics has been valued at Rs. 3,850 crores ($820 million) of the overall Indian healthcare market of Rs. 300,000 crores. The Indian medical equipment market is estimated to grow at around 17 percent CAGR over the next five years and reach about Rs. 9,735 crores ($2.075 billion).
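As a quick sanity check on projections like these, compound annual growth arithmetic can be sketched as follows; the figures in the example are round illustrative numbers, not taken from the ISA report:

```python
# Generic compound annual growth rate (CAGR) arithmetic, useful for
# sanity-checking market projections. The numbers below are round
# illustrative values, not figures from the ISA report.

def project(value, cagr_rate, years):
    """Future value after `years` of growth at `cagr_rate` (e.g. 0.17 for 17%)."""
    return value * (1 + cagr_rate) ** years

def implied_cagr(start, end, years):
    """Implied compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# A market of 1,000 growing at 17 percent a year more than doubles in five years:
print(round(project(1000, 0.17, 5)))          # -> 2192
# Conversely, doubling in five years implies roughly 15 percent a year:
print(round(implied_cagr(1000, 2000, 5), 3))  # -> 0.149
```

Running the projected figure backwards through `implied_cagr` is a simple way to check whether a quoted growth rate and end value are mutually consistent.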
As per the ISA report, the Indian healthcare industry currently contributes to 5.6 percent of GDP, which is estimated to increase to 8-8.5 percent in FY 13.
The domestic market for medical equipment currently stands at Rs. 3,850 crores ($820 million). Annually, medical equipment worth Rs. 2,450 crores ($520 million) is manufactured in India, out of which Rs. 350 crore ($75 million) is exported.
Growth of the medical equipment market is directly proportional to growth of healthcare delivery, which was valued at Rs. 216,000 crores ($45.36 billion) in 2009. Siemens, Wipro GE and Philips are the leaders in the space with 18 percent, 17 percent and 10 percent shares, respectively. However, 45 percent of the market is addressed by smaller, niche domestic players.
The report was released by Dr. Devi Prasad Shetty, CMD, Narayana Hrudayalaya, in the presence of Dr. Bobby Mitra, ISA chairman, Poornima Shenoy, ISA president and Vivek Sharma, convener of the ISA Medical Electronics Segment. Read more…