The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog. Semiconductors, especially, are very technical in nature, and therefore, I did not think many people would visit my blog. Many, many thanks are due to all of my readers for making this blog so very successful. I would also like to thank WordPress, especially since I was unable to blog as much as I would have liked for nearly eight months. I hope to keep blogging regularly in the new year. Happy new year to all of my readers, well-wishers and friends!
Here’s an excerpt:
19,000 people fit into the new Barclays Center to see Jay-Z perform. This blog was viewed about 95,000 times in 2012. If it were a concert at the Barclays Center, it would take about 5 sold-out performances for that many people to see it.
Friends, I did mention some time ago that I shall start blogging or talking about photonics — yet another subject close to my heart!
Well, that dream has been realized, thanks to Photonics.com — the world’s leading site on the subject, from Laurin Publishing, USA.
How many of you are aware that some of the best work done in photonics in Asia is carried out in India? Are you aware that one of the best institutes in the country is located down south — in Cochin — known as the International School of Photonics at Cochin University of Science and Technology?
It is in this very institute that the Photonics Society of India (PSI) was founded in 2000, and the institute continues to administer the society. The PSI is a professional organization of people, institutions and companies working with photonics in India.
The PSI has some very distinguished gentlemen at the helm. Professor P. Radhakrishnan, International School of Photonics, is its president. He is assisted by Dr. Reji Philip, vice president, and associate professor, Optics Group, Raman Research Institute, Bangalore. Professor V. P. N. Nampoori, International School of Photonics, is general secretary.
In fact, it was a pleasure to recently visit the famous Raman Research Institute (RRI) in Bangalore, where I had the good fortune of interacting with some of the best and most renowned researchers that India has on this subject.
The various kinds of equipment the RRI has in its labs are mind-boggling! It makes you wonder — these folks are really bright and highly talented to be doing such exemplary work.
Enough said! May I take this opportunity to thank Laurin Publishing for helping me realize another dream. I hope you all enjoy my Photonics Blog! Thanks for your support, as always, dear friends.
We are now entering the sub-20nm era. So, will it be business as usual or is it going to be different this time? With DAC 2013 around the corner, I met up with John Chilton, senior VP, Marketing and Strategic Development for Synopsys to find out more regarding the impact of new transistor structures on design and manufacturing, 450mm wafers and the impact of transistor variability.
Impact of new transistor structures on design and manufacturing
First, let us understand what will be the impact of new transistor structures on design and manufacturing.
Chilton said: “Most of the impact is really on the manufacturing end since they are effectively 3D transistors. Traditional lithography methods would not work for manufacturing the tall and thin fins where self-aligned double patterning steps are now required.
“Our broad, production-proven products have all been updated to handle the complexity of FinFETs from both the manufacturing and the designer’s end.
“From the design implementation perspective, the foundries’ and Synopsys’ goal is to provide a transparent adoption process where the methodology (from Metal 1 and above) remains essentially the same as that of previous nodes where products have been updated to handle the process complexity.”
Given the scenario, will it be possible to introduce 450mm wafer handling and new lithography successfully?
According to Chilton: “This is a question best asked of the semiconductor manufacturers and equipment vendors. Our opinion is ‘very likely’.” The semiconductor manufacturers, equipment vendors, and the EDA tool providers have a long history of introducing new technology successfully when the economics of deploying the technology is favorable.
The 300mm wafer deployment was quite complex, for example, but was completed. The introduction of double patterning at 20nm is another recent example in which manufacturers, equipment vendors and EDA companies worked together to deploy a new technology.
Impact of transistor variability and other physics issues
Finally, what will be the impact of transistor variability and other physics issues?
Chilton said that as transistor scaling progresses into FinFET technologies and beyond, the variability of device behavior becomes more prominent. There are several sources of device variability.
Random doping fluctuations (RDF) are a result of the statistical nature of the position and the discreteness of the electrical charge of the dopant atoms. Whereas in past technologies the effect of the dopant atoms could be treated as a continuum of charge, FinFETs are so small that the charge distribution of the dopant atoms becomes ‘lumpy’ and variable from one transistor to the next.
With the introduction of metal gates in the advanced CMOS processes, random work function fluctuations arising from the formation of finite-sized metal grains with different lattice orientations have also become important. In this effect, each metal grain in the gate, whose crystalline orientation is random, interacts with the underlying gate dielectric and silicon in a different way, with the consequence that the channel electrons no longer see a uniform gate potential.
The other key sources of variability are due to the random location of traps and the etching and lithography processes which produce slightly different dimensions in critical shapes such as fin width and gate length.
“The impact of these variability sources is evident in the output characteristics of FinFETs and circuits, and the systematic analysis of these effects has become a priority for technology development and IP design teams alike,” he added.
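The "lumpy" dopant statistics described above follow directly from Poisson counting: the fewer dopant atoms a channel holds, the larger the relative spread in their number. The sketch below is a minimal Monte Carlo illustration of this scaling trend only; the mean dopant counts are hypothetical round numbers, not figures from Synopsys or any foundry.

```python
import math
import random

def dopant_count_cv(mean_dopants, n_devices=50000, seed=1):
    """Monte Carlo sketch of random doping fluctuations (RDF).

    The number of dopant atoms in a transistor channel is Poisson-
    distributed; for large means we approximate Poisson with a Gaussian
    of equal mean and variance. Returns the coefficient of variation
    (sigma/mean) of the dopant count across devices.
    """
    rng = random.Random(seed)
    counts = [rng.gauss(mean_dopants, math.sqrt(mean_dopants))
              for _ in range(n_devices)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
    return math.sqrt(var) / mean

# Poisson statistics predict sigma/mean = 1/sqrt(N): the fewer dopants
# a device holds, the "lumpier" its charge, so device-to-device
# variability grows as transistors shrink (hypothetical dopant counts).
planar = dopant_count_cv(1000)   # older planar node: ~3% spread
finfet = dopant_count_cv(40)     # FinFET-scale channel: ~16% spread
```

The simulation simply confirms the 1/sqrt(N) rule: shrinking the channel from ~1000 dopants to ~40 multiplies the relative fluctuation by a factor of five, which is why RDF moved from a second-order concern to a first-order one at FinFET dimensions.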
Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys' IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec's patented technology improves engineers' productivity and design quality.
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
Register verification and sequences can consume 40 percent of project time or more, and register errors can force re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors: IDSWord, IDSExcel, IDSOO and IDSBatch.
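The single-source idea behind such tools — one register map specification, many generated views — can be sketched in a few lines. This is a deliberately tiny, hypothetical example, not Agnisys's actual input format or output code; the `uart` block and its two registers are invented for illustration.

```python
# Hypothetical mini register spec from which multiple "views" are
# generated -- a sketch of the single-source-of-truth idea, NOT the
# IDesignSpec format itself.

REG_MAP = {
    "block": "uart",
    "registers": [
        {"name": "CTRL",   "offset": 0x00, "width": 32, "access": "RW"},
        {"name": "STATUS", "offset": 0x04, "width": 32, "access": "RO"},
    ],
}

def to_c_header(spec):
    """Generate a C header view (the software team's copy of the spec)."""
    lines = [f"/* auto-generated from the {spec['block']} register map */"]
    for reg in spec["registers"]:
        lines.append(
            f"#define {spec['block'].upper()}_{reg['name']}_OFFSET "
            f"0x{reg['offset']:02X}"
        )
    return "\n".join(lines)

def to_uvm_stub(spec):
    """Generate a skeletal UVM register model view (verification's copy)."""
    lines = [f"class {spec['block']}_reg_block extends uvm_reg_block;"]
    for reg in spec["registers"]:
        lines.append(f"  // {reg['name']} @ 0x{reg['offset']:02X}, {reg['access']}")
    lines.append("endclass")
    return "\n".join(lines)

header = to_c_header(REG_MAP)
uvm = to_uvm_stub(REG_MAP)
```

Because both views are derived from the same dictionary, a change to an offset or access type propagates to hardware, verification and software outputs in one step — the property that keeps the teams synchronized.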
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions in the IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing sequences, the user is now able to capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high level constraint specification. These assertions can be created with the RTL or in the external files such that they can be optionally bound to the RTL. Unit level assertions are good for SoC level verification and debug, and help the user in identifying issues deep down in the simulation hierarchy.
The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also automatically generated.
Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How is IDesignSpec working as chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
The number of MEMS and sensors going into mobile, consumer and gaming applications is expected to continue to skyrocket. As a result, OSAT and wafer foundry players are taking more and more interest in MEMS module packaging, as the volume and complexity of MEMS SiP modules are increasing dramatically, said Dr. Eric Mourier, Yole Developpement.
It implies that IDMs need to find second-source partners and qualify some OSATs in order to secure their supply chain. Also, standardization (coming from foundries, OSATs, WLP houses or substrate suppliers) is critical and necessary to implement in order to keep the packaging, assembly and test cost of MEMS modules under control. There are many different players with different designs, and it's not likely we'll see one solution adopted by all the players.
As for wafer-level packaging (WLP) for LEDs, WLP has not been strongly deployed in the LED industry due to associated technical challenges. In the short term, there is ESD integration in the Si substrate. In the long term, LED drivers could be integrated at the package level for intelligent lighting. Ultimately, there are wafer-to-wafer manufacturing schemes for certain package types.
Real production of HB-LEDs with a mixed approach of WLP+through silicon vias (TSV) is just starting. There are some Taiwanese players such as TSMC, Xintec, Visera, Touch MicroTech and Sibdi, and South Korea-based LG Innotek. Additional players in the semiconductor and MEMS industry are seeking to enter the field.
What exactly is smart energy profile (SEP 2) IP-based energy management for the home? Introducing SEP 2, Tobin Richardson, chairman and CEO, ZigBee Alliance, said ZigBee smart energy is the standard of choice for home area networks (HANs).
More than 40 million ZigBee electric meters are being deployed. ZigBee smart energy is being enhanced by network/communications options, support for forward-looking developments, etc. SEP 2 is a joint effort with the HomePlug Alliance. There is a vision of a MAC/PHY-agnostic Smart Energy profile.
Robby Simpson, SEP 2 Technical Working Group Chair, system architect, GE Digital Energy, provided the features and benefits of Smart Energy. Features include price communication, demand response and load control, energy usage information/metering data, prepayment metering, text messaging, plug-in electric vehicles, distributed energy resources, billing communication, etc.
Example applications are many, such as smartphones, ESI in the sky, tablets, TVs, plug-in electric vehicles, PCs, solar inverters, thermostats, energy management systems, smart meters, building management systems, smart appliances, etc. There is support for a variety of architectures. The use of IP eases convergence and architecture changes. A consortium for SEP 2 interoperability (CSEP) has been established.
Skip Ashton, ZigBee Arch. review committee chair, senior apps director, Silicon Labs said implementations of SEP 2 are available from a number of companies and across several MAC/PHYs. All standard documents are available for review.
Jeff Gooding, Southern California Edison (SCE), spoke about creating SEP 2 energy ecosystems. SEP 2 can bridge multi-platform customer technologies to create a rich ecosystem. SEP 2 customer-focused solutions can allow utilities and energy service providers to use any customer communication channel. SEP 2 pilots at SCE include a gateway pilot and a separate smart charging pilot.
Selection of the right on-chip network is critical to meeting the requirements of today's advanced SoCs. There is easy IP integration with IP cores from many sources with different protocols, and a UVM verification environment.
John Bainbridge, staff technologist, CTO Office, Sonics Inc., said that the right on-chip network optimizes system performance. Virtual channels offer efficient resource usage, saving gates and wires. The non-blocking network leads to improved system performance. There are flexible topology choices, with the network tailored to match requirements.
Power management is key with advanced system partitioning, and an improved design flow and timing closure. Finally, the development environment allows easy design capture and has performance analysis tools.
For the record, there are several SoC integration challenges that need to be addressed, such as IP integration, frequency, throughput, physical design, power management, security, time-to-market and development costs.
SGN exceeds requirements
SGN met the tablet performance requirement with a fabric frequency of 1066MHz. It has an efficient gate count of 508K gates. There are features such as advanced system partitioning, security and I/O coherency. There is support for system concurrency as well as advanced power management.
Sonics offers system IP solutions such as SGN, a router-based NoC solution with flexible partitioning and VC (virtual channel) support. The frequency is optimized with credit-based flow control.
SSX/SLX are message-based crossbar/ShareLink solutions based on interleaved multi-channel technology. They have target-based QoS with three arbitration levels. SonicsExpress is for power-centric clock domain crossing. There is sub-system re-use and decoupling. MemMax manages and optimizes DRAM efficiency while maintaining system QoS. There is run-time programmability for all traffic types. SonicsConnect is a non-blocking peripheral interconnect.
SEMI, USA recently hosted a seminar on ‘Convergence of PV Materials, Test and Reliability: What Really Matters?’
Reliability in growing PV industry
Speaking on the importance of reliability to a growing PV industry, Sarah Kurtz, principal scientist and Reliability group manager, NREL, said that confidence in long-term performance is a necessity in the PV industry. Current failure rates are low, but there is a need to demonstrate confidence so that failure rates will stay low. There has been exponential growth of the PV industry so far, and PV now represents a significant fraction of new electricity-generating installations of all kinds.
How does one predict the lifetime of PV modules? There has been a qualification test evolution for JPL block buys. Most studies of c-Si modules show module failures are small. Internal electrical current issues often dominate.
The vast majority of installations show very low PV module failure rates (often less than 0.1 percent). There has been evidence that PV is low risk compared to other investments. To sustain the current installation rate, we need to demonstrate confidence that justifies the annual investment of $100 million or so.
Critical factors in economic viability of PV
DuPont has broad capabilities under one roof. It offers materials, solar cell design, and processes integrated with panel engineering. Speaking on ‘Critical factors in economic viability of PV – materials matter’, Conrad Burke, global marketing director, DuPont PV Solutions, said that material suppliers have a distinct advantage in viewing trends. The industry can expect consolidation among large PV module producers and large materials suppliers.
There is an increasing dependence on materials suppliers for processes, tech support and roadmap. There is renewed attention to long-term reliability and quality of materials in PV products.
There is a race for survival among panel producers. There are dropping prices for solar panels, and quality is getting compromised. There are reduced incentives in established markets. The market will continue to grow. Key factors that determine investment return for PV include lifetime, efficiency and cost.
When materials fail, the consequences are dire. There are failures such as encapsulant discoloration, backsheet failure, glass delamination, etc. Average defect rates in new-build modules have been increasing. A significant number of PV installations do not deliver the projected RoI. The system lifetime is as important as cost and incentives.
Solar cell power continues to improve. There have been improvements from metal pastes and processes. Performance loss impacts the RoI. The US Department of Energy hired JPL to develop 30-year PV modules. Recent cost pressures have led to the dramatic changes in module materials and a lack of transparency.
Analyzing modules from recent service environments shows performance issues. Certification does not mitigate risk. Tests do not predict actual field performance. He showed tier-1 solar panel manufacturing problems from China, Japan and the USA. The backsheet is critical to protecting solar panels. Few materials have lengthy field experience. We will continue to see a drop in prices for solar panels and the opening of new markets. The focus for PV module makers will remain on efficiency.
Here is a view from Mike Bryant of Future Horizons, taken from the Enable450 newsletter, for which I must thank Malcolm Penn, chairman and CEO.
This is a question often asked by journalists and others not directly involved in 450mm technology, and indeed was one of the questions that formed the basis of the SMART 2010/062 report Future Horizons produced for the European Commission.
It is also a question every new 450mm project has to answer in its funding request to the European Commission, and whilst working on the Bridge450 submission we realised the arguments have become rather unclear over time. The following gives some insight and clarity into the question.
In 1970, Gordon Moore re-formulated predictions on computer storage by Turing and others into a simple statement that the number of transistors per unit area of an IC will double every two years for at least the next ten years. This became known as “Moore’s Law” and apart from the occasional hiccup has in fact been followed for the past forty years. Note that Moore never suggested a doubling in density every 18 months, this time period coming from a different statement concerning transistor performance.
Of course, doubling the number of transistors would not be that helpful if the price per unit area also doubled. The semiconductor industry has thus strived to keep the cost of manufacturing per unit area constant, and over time has done a remarkable job of it: the ASP of logic devices has sat at around $9 per square centimetre for this whole period, while the cost of everything else used to make the IC (equipment, materials and labour) has increased, labour costs in particular by a factor of around five.
The actual cost of processing a wafer appreciates by around 6 percent per annum due to technology cycle upgrades and insertions, for example in the past the replacement of aluminium interconnects with copper or more recently the move to double patterning for lithography of critical layers. Several approaches have been used to maintain a constant area cost, these being:
Improvements in yield – this obviously reduces wastage, and vast improvements have been made in this field, though yields are now so good that the problem is more one of maintaining these levels with each new process node than of improving them further.
Increasing levels of automation – this is still an area undergoing improvement but again we have entered an area of diminishing returns on the investment required.
Introducing larger wafer sizes – this has been performed on an irregular basis over the history of the semiconductor industry. The increase in surface area reduces many but not all of the processing costs whilst material costs tend to stay fairly constant per unit area. Thus at the 300mm transition the increase in area by 2.25 times gave a cost per unit area reduction of 30 percent, approximately compensating for the increased processing costs acquired over the 90nm and 65nm nodes.
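The 300mm arithmetic in the last item can be checked in a few lines. The figures used are the ones quoted in the text (2.25x area, 30 percent cost-per-area reduction, ~6 percent per annum cost appreciation); the span of eight years over the two nodes is an illustrative assumption.

```python
# Check the 300mm-transition arithmetic from the text: wafer area grows
# (300/200)^2 = 2.25x, and a ~30% cost-per-area reduction implies the
# per-wafer processing cost could rise ~57% while still netting the gain.

area_ratio = (300 / 200) ** 2            # 2.25x more usable area per wafer
cost_per_area_target = 0.70              # a 30 percent reduction
allowed_wafer_cost_ratio = cost_per_area_target * area_ratio  # ~1.575

# The text cites ~6% per-annum wafer-cost appreciation; compounded over
# an assumed eight years spanning the 90nm and 65nm nodes this gives
# 1.06**8 ~= 1.59 -- about the same headroom, which is the point of the
# wafer-size transition.
accumulated_cost_growth = 1.06 ** 8
```

In other words, the larger wafer buys back roughly the processing-cost growth accumulated over the intervening nodes, keeping cost per unit area flat.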
A team of scientists at the Massachusetts Institute of Technology (MIT), comprising principally Dr. Ishan Barman, Dr. Narahara Chari Dingari and Dr. Jaqueline Soares, and their clinical collaborators at University Hospitals, Cleveland, have developed Raman scattering-based concomitant diagnosis of breast cancer lesions and related micro-calcifications.
Let’s find out more about this new breast cancer research done by the team at MIT.
Early detection necessary!
According to MIT, one in eight women in the US will suffer from breast cancer in her lifetime and breast cancer is the second leading cause of cancer death in women. Worldwide, breast cancer accounts for 22.9 percent of all cancers (excluding non-melanoma skin cancers) in women. In 2008, breast cancer caused 458,503 deaths worldwide (13.7 percent of cancer deaths in women).
Therefore, technological advancements for its early detection and subsequent treatment can make a significant impact by preventing patient morbidity and mortality and reducing healthcare costs, and are thus of utmost importance to society. Currently, mammography followed by stereotactic breast biopsy serves as the most promising route for screening and early detection of cancer lesions.
Nearly 1.6 million breast biopsies are performed and roughly 250,000 new breast cancers are diagnosed in the US each year. One of the most frequent reasons for breast biopsy is microcalcifications seen on screening mammography, the initial step in early detection of breast cancer. Microcalcifications are micron-scale deposits of calcium minerals in breast tissue that are considered one of the early mammographic signs of breast cancer and are, therefore, a target for stereotactic breast needle biopsy.
However, despite stereotactic guidance, needle biopsy fails to retrieve microcalcifications in one of five breast biopsy patients. In such cases, the resulting breast biopsies are either non-diagnostic or false-negative, thereby placing the patient at risk and potentially necessitating a repeat biopsy, often as a surgical procedure.
There is an unmet clinical need for a tool to detect microcalcifications in real time and provide feedback to the radiologist during the stereotactic needle biopsy procedure as to whether the microcalcifications seen on mammography will be retrieved or the needle should be re-positioned, without the need to wait for a confirmatory specimen radiograph.
Such a tool could enable more efficient retrieval of microcalcifications, which would, in turn, minimize the number of x-rays and tissue cores required to achieve a diagnostic biopsy, shorten procedure time, and reduce patient anxiety, distress and discomfort. It would also prevent complications, such as bleeding into the biopsy site seen after multiple biopsy passes, and ultimately reduce the morbidity and mortality associated with non-diagnostic and false-negative biopsies and the need for follow-up surgical biopsy.
If 200,000 repeat biopsies were avoided, at a cost of $5,000 per biopsy (a conservative estimate; the cost would be much higher for surgical biopsies), a billion dollars per year could be saved by the US healthcare system. The MIT Laser Biomedical Research Center has recently performed pioneering studies to address this need by proposing, developing and validating Raman and diffuse reflectance spectroscopy as powerful guidance tools, due to their ability to provide exquisite molecular information with minimal perturbation.
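The billion-dollar figure follows directly from the two numbers quoted above:

```python
# The savings estimate from the text, spelled out: avoiding 200,000
# repeat biopsies at a conservative $5,000 each.

avoided_biopsies = 200_000
cost_per_biopsy = 5_000          # conservative; surgical biopsies cost more
annual_savings = avoided_biopsies * cost_per_biopsy
# 200,000 x $5,000 = $1.0 billion per year
```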
Specifics of the technique
Stating the specifics of the technique developed by MIT, the team said that their research focuses on the development of Raman spectroscopy as a clinical tool for the real time diagnosis of breast cancer at the patient bedside. “We report for the first time development of a novel Raman spectroscopy algorithm to simultaneously determine microcalcification status and diagnose the underlying breast lesion, in real time, during stereotactic breast core needle biopsy procedures.”
In this study, Raman spectra were obtained ex vivo from fresh stereotactic breast needle biopsies using a compact clinical Raman system, modeled and analyzed using support vector machines to develop a single-step, Raman spectroscopy based diagnostic algorithm to distinguish normal breast tissue, fibrocystic change, fibroadenoma and breast cancer, with and without microcalcifications.
The developed decision algorithm exhibits a positive and negative predictive value of 100 percent and 96 percent, respectively, for the diagnosis of breast cancer with or without microcalcifications in the clinical dataset of nearly 50 patients.
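Positive and negative predictive values are simple ratios over the confusion-matrix counts of the classifier's decisions. The sketch below computes them from counts; the specific counts used are hypothetical numbers chosen to be consistent with the reported 100 percent PPV and 96 percent NPV, since the paper's per-lesion counts are not given in the text.

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix
    counts: PPV = TP/(TP+FP), NPV = TN/(TN+FN)."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical counts consistent with the reported 100% PPV / 96% NPV
# for a cohort of roughly the stated size (~50 patients, multiple
# spectra); the actual study counts are not given in the text.
ppv, npv = ppv_npv(tp=10, fp=0, tn=48, fn=2)
# ppv = 1.0 (no false positives), npv = 0.96
```

Note that a 100 percent PPV means every lesion the algorithm called cancer was cancer, while the 96 percent NPV reflects a small number of cancers missed among the negative calls — the quantity the real-time guidance is meant to minimize.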
Significantly, the majority of breast cancers diagnosed using this Raman algorithm are ductal carcinoma in situ (DCIS), the most common lesion associated with microcalcifications, which has classically presented considerable diagnostic challenges.
This study demonstrates the potential of Raman spectroscopy to provide real-time feedback to radiologists during stereotactic breast needle biopsy procedures, reducing non-diagnostic and false negative biopsies. Indeed, the proposed approach lends itself to facile assembly of a side-viewing probe that could be inserted into the central channel of the biopsy needle for intermittent acquisition of the spectra, which would, in turn, reveal whether or not the tissue to be biopsied contains the targeted microcalcifications.