Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.
Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL, IP-XACT, etc. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.
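To make the single-source idea concrete, here is a minimal Python sketch; the spec format, field names and output templates are invented for illustration and are not IDesignSpec's actual syntax:

```python
# Hypothetical register description; IDesignSpec captures richer specs
# in Word/Excel, but the single-source principle is the same.
SPEC = {
    "name": "CTRL",
    "offset": 0x0000,
    "fields": [  # (field name, bit offset, width, access)
        ("ENABLE", 0, 1, "RW"),
        ("MODE",   1, 2, "RW"),
        ("BUSY",   3, 1, "RO"),
    ],
}

def to_c_header(reg):
    """Render the register as C #define lines (the software view)."""
    lines = [f"#define {reg['name']}_OFFSET 0x{reg['offset']:04X}"]
    for name, lsb, width, _access in reg["fields"]:
        mask = ((1 << width) - 1) << lsb
        lines.append(f"#define {reg['name']}_{name}_MASK 0x{mask:X}")
    return "\n".join(lines)

def to_uvm_stub(reg):
    """Render the same spec as a skeleton UVM register class (the
    verification view); a real generator would also emit configure()."""
    fields = "\n".join(
        f"  rand uvm_reg_field {name};" for name, *_ in reg["fields"]
    )
    return f"class {reg['name']}_reg extends uvm_reg;\n{fields}\nendclass"

print(to_c_header(SPEC))
print(to_uvm_stub(SPEC))
```

Because both views come from one description, a change such as widening the MODE field propagates to the hardware and software code together.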
Register verification and sequences can consume 40 percent or more of project time when errors cause re-spins of SoC silicon or an increase in the number of FPGA builds. The IDesignSpec family of products is available in various flavors, such as IDSWord, IDSExcel, IDSOO and IDSBatch.
IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”
The latest additions to IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.
“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
Using an add-on function for capturing sequences, the user is now able to capture various programming sequences in the spec, which are translated into C++ and UVM sequences. Further, the interrupt registers can now be identified by the user and appropriate RTL can be generated from the spec. Both edge-sensitive and level interrupts can be handled, and interrupts from various blocks can be stacked.
Assertions can be automatically generated from the high-level constraint specification. These assertions can be created within the RTL or in external files, such that they can be optionally bound to the RTL. Unit-level assertions are good for SoC-level verification and debug, and help the user identify issues deep down in the simulation hierarchy.
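The constraints-to-assertions flow can be pictured with a small sketch; the generator below, and the signal and clock names in it, are invented for illustration and are not IDesignSpec's actual output:

```python
# A user constraint of the form "this field may only hold these values"
# can be mechanically turned into a SystemVerilog assertion string.
def constraint_to_sva(signal, legal_values):
    """Emit an SVA check that 'signal' stays inside its legal set."""
    allowed = ", ".join(str(v) for v in legal_values)
    return (
        f"assert property (@(posedge clk) "
        f"{signal} inside {{{allowed}}});"
    )

print(constraint_to_sva("ctrl_reg.mode", [0, 1, 2]))
# -> assert property (@(posedge clk) ctrl_reg.mode inside {0, 1, 2});
```

The point is the one made in the quote above: the user writes only the constraint; the assertion is derived from it.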
The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from it. Special registers, such as shadow registers and register aliases, are also automatically generated.
Finally, all of the outputs, such as RTL and UVM, can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.
How is IDesignSpec working as chip-level assertion-based verification?
Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”
Selection of the right on-chip network is critical to meeting the requirements of today’s advanced SoCs. It enables easy integration with IP cores from many sources with different protocols, along with a UVM verification environment.
John Bainbridge, staff technologist, CTO Office, Sonics Inc., said that the right network optimizes system performance. Virtual channels offer efficient resource usage, saving gates and wires. A non-blocking network leads to improved system performance. There are flexible topology choices, with an optimal network to match requirements.
Power management is key with advanced system partitioning, and an improved design flow and timing closure. Finally, the development environment allows easy design capture and has performance analysis tools.
For the record, there are several SoC integration challenges that need to be addressed, such as IP integration, frequency, throughput, physical design, power management, security, time-to-market and development costs.
SGN exceeds requirements
SGN met the tablet performance requirement with a fabric frequency of 1066MHz and an efficient gate count of 508K gates. It offers features such as advanced system partitioning, security and I/O coherency, and supports system concurrency as well as advanced power management.
Sonics offers system IP solutions such as SGN, a router-based NoC solution with flexible partitioning and VC (virtual channel) support. Its frequency is optimized with credit-based flow control.
SSX/SLX are message-based crossbar/ShareLink solutions based on interleaved multi-channel technology, with target-based QoS and three arbitration levels. SonicsExpress provides power-centric clock domain crossing, enabling sub-system re-use and decoupling. MemMax manages and optimizes DRAM efficiency while maintaining system QoS, with run-time programmability for all traffic types. SonicsConnect is a non-blocking peripheral interconnect.
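Target-based QoS with three arbitration levels, as mentioned for SSX/SLX, can be sketched roughly as follows; the priority names and request format are assumptions for illustration, not Sonics' actual scheme:

```python
# Three QoS levels; a lower number wins arbitration.
PRIORITY = {"high": 0, "medium": 1, "low": 2}

def arbitrate(requests):
    """Grant the pending request with the best QoS level; ties go to
    the earliest request in the list (min() keeps the first minimum)."""
    if not requests:
        return None
    return min(requests, key=lambda r: PRIORITY[r["qos"]])

reqs = [
    {"initiator": "dma", "qos": "low"},
    {"initiator": "cpu", "qos": "high"},
    {"initiator": "gpu", "qos": "high"},
]
print(arbitrate(reqs)["initiator"])  # -> cpu (first high-priority request)
```

A real fabric arbiter would of course be hardware, and would add starvation protection for the lower levels; this only shows the level-ordering idea.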
About 318 engineers and managers completed a blind, anonymous survey on ‘On-Chip Communications Networks’ (OCCNs), also referred to as “on-chip networks”, defined as the entire interconnect fabric for an SoC. The on-chip communications network report was done by Sonics Inc. A summary of some of the highlights follows.
The average estimated time spent on designing, modifying and/or verifying on-chip communications networks was 28 percent (for the respondents who knew their estimated time).
The two biggest challenges for implementing OCCNs were meeting product specifications and balancing frequency, latency and throughput. Second tier challenges were integrating IP elements/sub-systems and getting timing closure.
As for 2013 SoC design expectations, a majority of respondents are targeting a core speed of at least 1GHz for SoC design starts within the next 12 months, based on those respondents who knew their target core speeds. Forty percent of respondents expect to have 2-5 power domain partitions for their next SoC design.
A variety of topologies are being considered for respondents’ next on-chip communications networks, including NoCs (half), followed by crossbars, multi-layer bus matrices and peripheral interconnects; respondents who knew their plans were seriously considering an average of 1.7 different topologies.
Twenty percent of respondents stated they already had a commercial Network-on-Chip (NoC) implemented or plan to implement one in the next 12 months, while over a quarter plan to evaluate a NoC over the next 12 months. A NoC was defined as a configurable network interconnect that packetizes address/data for multicore SoCs.
For respondents who had an opinion when commercial Networks-on-Chip became an important consideration versus internal development when implementing an SoC, 43 percent said they would consider commercial NoCs at 10 or fewer cores; approximately two-thirds said they would consider commercial NoCs at 20 or fewer cores.
The survey participants’ top three criteria for selecting a network-on-chip were scalability-adaptability, quality of service and system verification, followed by layout friendliness and support for power domain partitioning. Half of the respondents saw reduced wiring congestion as the primary reason to use virtual channels, followed by increased throughput and meeting system concurrency with limited bandwidth.
Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle is verification. Only one in three SoCs achieves first-silicon success.
Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors. Three out of four designs re-use existing IPs. Almost all embedded processor IPs have power controllability. Almost all SoCs have multiple asynchronous clock domains.
About 75 percent of designs are less than 20 million gates. A significant increase in formal checking is approaching. The average number of tests performed has increased exponentially. Regression runs now span several days and weeks. Hardware emulation and FPGA prototyping are rising exponentially. There has been a significant increase in the number of verification engineers involved. A lot of HVLs and methodologies are now available.
Verification challenges include unexpected conflicts in accessing the shared resource. Complexities can arise due to an interaction between standalone systems. Next, there are arbitration priority related issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to the hardware/software sequencing, and long loops and unoptimized code segments. The leakage power management and thermal management also pose problems.
There needs to be verification of performance and system power management. Multiple power regions are turned ON and OFF. Multiple clocks are also gated ON and OFF. Next, asynchronous clock domain crossing, and issues related to protocol compliance for standard interfaces. There are issues related to system stability and component reliability. Some other challenges include voltage level translators and isolation cells.
Where are we now? The industry is at clock gating, power gating with or without retention, multi-threshold (multi-Vt) transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D-3D stacking, and fab process and substrate-level bias control.
So, what’s needed? There must be low-power methods that do not impact performance. Careful design partitions are needed. The clock trees must be optimized. Crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.
Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization. There should be duplicate registers and branch prediction optimization. Finally, there should be a big-little processor approach.
Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers-deserializers, power controller, clock domain manager, and power information format – CPF or UPF. In low-power related verification, there is on power-down and on power-up. In the latter, the behavioral processes are re-enabled for evaluation.
Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes unfit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open source implementations without much documentation. There can be multiple, confusing implementation options as well. In some cases, no open source tools are available. There is limited tech support, given the open source nature.
Power-aware simulation steps perform register/latch recognition from the RTL design, and identify power elements and power control signals. They support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.
Common pitfalls include wrapper-on-wrapper bugs, e.g., Verilog + e wrapper + SV. There is also a dependency on machine-generated functional coverage goals. There may be a disconnect between the design and verification languages. There are meaningless coverage reports and defective reference models, as well as unclear and ambiguous specification definitions. Proven IP can become buggy due to wrapper conditions.
Tips and tricks
Some early planning is needed, and certain steps must be completed: code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs.
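Those closure criteria amount to a checklist that can be computed mechanically; the target percentages below are made-up numbers for illustration, not recommended values:

```python
# Coverage closure targets, in percent (illustrative figures only).
TARGETS = {"code": 100.0, "functional": 95.0, "checker": 90.0}

def closure_report(achieved):
    """Return the coverage categories still below their targets."""
    return [cat for cat, goal in TARGETS.items()
            if achieved.get(cat, 0.0) < goal]

status = {"code": 100.0, "functional": 96.2, "checker": 88.0}
print(closure_report(status))  # -> ['checker']
```

In practice the achieved numbers would come from the simulator's coverage database rather than a hand-written dictionary.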
Tips and tricks include bridging the gap between the design language and the verification language. There must be use of minimal wrappers to avoid wrapper-level bugs. There should be a thorough review of the coverage goals, and better interaction between the design and verification engineers. Running basic EDA tool versions also lowers costs.
Milpitas, USA-based Sonics Inc. participated in TSMC’s Soft IP Alliance 2.0 beta program. Delivering high-quality soft IP eases customer integration and expedites time-to-market.
Sonics’ role in TSMC beta program
Speaking on the beta program and Sonics’ role, Frank Ferro, director of Product Marketing, Sonics, said: “TSMC’s Soft IP kit 2.0 beta program is part of TSMC’s Open Innovation Platform program that creates a complete ecosystem for customers with the overall goal of shortening design time. This is done by providing a large catalog of partner provided IP that is silicon-verified and production-proven.
For vendors like Sonics, TSMC has extended this ecosystem to include Soft-IP (IP not designed for a specific process, but delivered as RTL). The program allows Soft-IP partners to access and leverage TSMC’s process technologies to optimize power, performance and area for their IP.
IP cores are checked through TSMC’s foundry checklist to ensure the customers have optimized design results with fast IP integration built into their design. This flow also facilitates easy IP reuse for subsequent designs. The soft IP Kit beta 2.0 program is an extension of the current program through implementing additional quality checks, improving results and making the flow easier for customers.
There are several advantages to Sonics as a participant in this program. First, customers of TSMC will have access to Sonics IP through TSMC’s IP library. Given TSMC’s strong market share, this will make Sonics IP visible to a large customer base. In addition, TSMC’s customers will feel secure using Sonics IP because they know that it has been put through a rigorous series of IP checks that meet the highest quality standards. It also allows Sonics early access to TSMC’s process libraries, allowing Sonics to optimize performance and area for each IP product.
So, what can the TSMC’s Soft IP Kit 2.0 do? How does Sonics enhance its capabilities? The Soft IP Kit 2.0 provides a specific RTL design flow methodology and hand-off which includes: lint (RTL coding consistency), clock domain crossings (CDC), power (CPF/UPF), physical design (routing congestion), design for test (DFT), constraints and documentation.
Using this flow enhances Sonics IP quality and reliability because many RTL errors can be caught at an early stage. As mentioned above, this flow ensures lowest power and best performance of the IP for a given process node.
Atrenta SpyGlass improves packaging
There is a role played by Atrenta SpyGlass. According to Ferro, Atrenta SpyGlass is the tool used to run all the tests. The flow was developed to TSMC’s standards and implemented by Atrenta. Given Sonics’ strong relationship with TSMC and Atrenta, the company was invited to be a beta partner, using its IP to test the new flow. A number of companies participate in the program, although only Sonics has announced participation in the beta 2.0 program to date.
This tie up with Atrenta will likely improve IP packaging. As part of the overall flow, the final step, after all basic and advanced IP checks, is IP packaging. This step includes providing the IP with information on the design intent, set-up and analysis reports. Again, this is done using the SpyGlass tool from Atrenta.
This IP packaging was available to customers in the past via the Soft IP 1.0 program. The attraction of this type of IP packaging is a result of the growing number of IP cores being integrated into complex SoCs. As the number of third-party IPs grew, the need for a better, broader methodology arose.
The VLSI Society of India recently organized a two-day faculty development workshop on SoC design (a Train-the-Trainer program) on Oct. 30-31, 2010, at the Texas Instruments India office, in co-operation with PragaTI (TI India Technical University) and Visvesvaraya Technological University (VTU).
I am highly obliged and very grateful to the VLSI Society of India and Dr. C.P. Ravikumar, technical director, University Relations, Texas Instruments India, for extending an invitation. Here is a report on the workshop, which the VSI Secretariat and Dr. Ravikumar have been most kind to share.
System-on-chip (SoC) refers to the technological revolution, which allows semiconductor manufacturers to integrate electronic systems on the same chip. System-on-board, which has been the conventional implementation of electronic systems, uses semiconductor chips soldered onto printed circuit boards (PCBs) to realize system functionality.
Systems typically include sensors, analog front-ends, digital processors, memories and peripherals. Thanks to the advances in VLSI technology, these sub-systems can be integrated on the same chip, reducing the footprint, cutting down the cost, and improving performance and power efficiency.
While the industry has adopted SoC design for many years, the academic community around the world (India not being an exception) has not caught up with the state-of-the-art. Electrical/electronics engineering departments continue to teach a course on VLSI design, where the level of design abstraction is device-level, transistor-level, or gate-level.
Register-transfer-level (RTL) design using hardware description languages is taught in some Masters’ programs, but colleges often do not have the lab infrastructure to carry out large design projects; very few Indian universities have tie-ups with foundry services to get samples. A semester is too short a time to complete a large project.
The complexity of modern-day design flow is not easy to impart in a single undergraduate course. Masters’ programs are particularly relevant in VLSI, but the M.Tech programs in the country languish due to several reasons.
“M.Tech programs do not attract top students who are highly motivated,” said a professor who attended the two-day faculty development program organized by VLSI Society of India. “Almost all undergraduate programs today have a course on VLSI technology and design. But since we get students from different backgrounds, they do not have the pre-requisites. So, a course on VLSI design at M.Tech level will have a significant overlap with an undergraduate course on VLSI design.”
“Faculty members need training,” said another teacher. “When a new course is introduced, significant time is needed for preparation. Prescribed textbooks for a new course are often not available. Internet search for course materials often returns too much material and it is hard to decide what to use. Colleges that have autonomy can decide their own curriculum, but in a university setup, the faculty face a major challenge. We are evaluated on how well our students fare in the exams. Yet, our students have to face an exam made by a central committee.”
“Having a common exam poses many problems in setting up a relevant question paper. The format of the question paper is fixed. The students get a choice of answering five questions from a set of eight. Due to the common nature of the question paper, the questions tend to demand descriptive answers.”
Faculty development workshop on SoC design
About 30 faculty members interested in system-on-chip design took part in the faculty development workshop. The attendees came from about 25 different colleges under VTU, VIT University, and Anna University. The workshop was conducted in co-operation with the Visvesvaraya Technological University (VTU) and sponsored by Texas Instruments, India.
The premise for the workshop was that a course on SoC design is required at the Masters’ level, since industrial practice has clearly moved in that direction. While the RTL-to-layout flow continues to be relevant for the IPs that constitute an SoC, the aspects of SoC design that rely on IP integration are not covered in any course.
The workshop provided a forum for industry-academia interaction. Several professionals from the industry took part in the workshop and answered questions from the faculty members.
Folks, here’s the full report on the India Semiconductor Association – Frost & Sullivan study on the Indian semiconductor industry. I’ve already provided my views on the Indian semiconductor industry report in an earlier post, for those who would like to know more.
First, the findings:
• The Total Semiconductor Market (TM) revenues are poised to grow from $5.9 billion in 2008 to $7.59 billion in 2010. The market is estimated to grow at a CAGR of 13.4 percent.
• The corresponding period is likely to witness a CAGR of 13.1 percent in the Total Semiconductor Available Market (TAM). TAM revenues are anticipated to climb to $3.24 billion in 2010 from $2.53 billion in 2008.
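These growth figures can be sanity-checked with the standard formula, CAGR = (end/start)^(1/years) - 1:

```python
def cagr(start, end, years):
    """Compound annual growth rate, as a fraction."""
    return (end / start) ** (1 / years) - 1

tm = cagr(5.9, 7.59, 2)    # TM: $5.9B in 2008 -> $7.59B in 2010
tam = cagr(2.53, 3.24, 2)  # TAM: $2.53B in 2008 -> $3.24B in 2010
print(f"TM CAGR:  {tm:.1%}")   # ~13.4%, matching the report
print(f"TAM CAGR: {tam:.1%}")  # ~13.2%, close to the reported 13.1%
```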
According to the study:
• Memory and MPU are the leaders in the TM and TAM revenues, respectively.
• IT/OA, wireless handsets and communications are the top three contributors to the TM revenues.
• IT/OA, wireless handsets and consumer are the mainstay of TAM revenue generation.
• Greater affordability of notebooks, netbooks, government IT initiatives, increased usage of memory cards to drive TM revenues from IT/OA. Ratio of desktops to notebooks reduces to 1:5
• Emphasis on rural mobile telephony and decline in handset prices to drive demand; economically priced handsets in GSM and CDMA to witness higher growth. Mid-priced handset segment, with enhanced features, to benefit.
• Rollout of 3G and WiMAX services to act as harbinger of associated infrastructure equipment TM. SDH 64 to increasingly replace SDH 4 and SDH 16. Increased manufacturing expected to favor TAM revenues.
• Evolving lifestyle expected to assist consumer electronics related semiconductor TM. DTH revolution creates demand for STB like never before. The market is expected to sustain as technology upgrades from MPEG2 to MPEG4.
• Projects like national ID cards, bank cards and kisan cards are likely to favor the semiconductor usage in emerging segment of smart cards.
• Low manufacturing index leads to an opportunity loss of $3.37 billion in semiconductor market revenues. This loss is anticipated to increase to $4.35 billion by 2010.
• Immense, yet untapped, opportunities exist for semiconductors in STBs, LCD TVs, digital cameras and storage Flash memory markets.
• Decline in semiconductor product prices results in lower revenue realization; key semiconductor products impacted are memory, MCU and discrete. Increase in memory usage in a variety of products to offset revenue loss on account of decline in prices.
• Increased usage of system-on-chip (SoC) leads to decline in the overall revenues. Though the decline is not proportionate to the reduction of components, the impact is significant.
• Higher penetration of notebooks to impact the market for desktops and offline UPS.
• Current slowdown to impact overall growth and manufacturing investment prospects for 2009; uncertainty in government decision-making adversely affects growth.
Some of the other forecasts of the report indicate that India will likely improve its share to 2.8 percent of the global semiconductor market by 2010. Also, the India market CAGR forecast is at 6.4 times the global market CAGR over the next two years!
Again, do not get carried away by these statistics!
Further, in an update to the 2007 forecast, the previous study did not include select product segments, such as digital cameras, power supplies, CFL, CCTV, PoS and weighing scales, which have now been added. This update sees the entry of new players and an unprecedented expansion of the DTH market. Migration of select products manufacturing outside the country has also taken place.
The total TM and TAM revenue constituents (2008) are: TM revenues: $5,901.8 million; and TAM revenues: $2,531.8 million. Now, for the segment wise break-ups and segment drivers, respectively.
IT/OA semiconductor constituents (2008)
TM revenues: $2,503.4 million; TAM revenues: $1,161.3 million.
* Notebooks, desktops and servers were the key contributors to the MPU, memory and ASSP TM revenues.
* Desktops are key revenue generators for MPU TAM revenues.
* CAGR for IT/OA is TM at 13.5 percent and TAM at 7.4 percent for 2008-10.
* Key drivers for TM are government IT initiatives, low priced notebooks, netbooks and storage flash memory; while low priced desktops and LCD monitors are the drivers for TAM.
Wireless handsets semiconductor constituents (2008)
TM revenues: $1,738.3 million; TAM revenues: $791 million.
* DSP and ASSP to ride on growth of economically priced handsets in GSM and CDMA.
* Smartphones in GSM to drive growth of TM revenues for memory, DSP and ASSP.
* CAGR for wireless handsets is TM at 5.7 percent and TAM at 5.1 percent for 2008-10.
* Key drivers for TM and TAM include GSM handsets priced <$125 and between $125-250; CDMA handsets priced $250 are also a key driver.
Communications semiconductor constituents (2008)
TM revenues: $754 million; TAM Revenues: $153.9 million.
* WiMAX BTS is the driver for ASIC market.
* Infrastructure equipment like WiMAX and STM were the key factors behind analog power’s TM and TAM revenues.
* Logic/FPGA rode on the STM and BTS markets.
* Low manufacturing index conspicuous in this key segment.
* CAGR for communications is TM at 27.9 percent and TAM at 64.1 percent for 2008-10.
* Key drivers for TM and TAM include the rollout of 3G, WiMAX and penetration of broadband services. For TAM, BTS, STM and WiMAX are the major drivers.
Consumer semiconductor constituents (2008)
TM revenues: $432.9 million; TAM revenues: $165.6 million.
* ASSP market growth on account of LCD penetration over CRT TVs, plus STBs and DVD players.
* Low manufacturing index indicates lost opportunity for semiconductor revenues.
* CAGR for consumer equipment is TM at 12.2 percent and TAM at 18.7 percent for 2008-10.
* Key drivers for TM include STBs, LCD TVs and digital cameras, while those for TAM include STBs, LCD TVs and water purifiers.
Industrial semiconductor constituents (2008)
TM revenues: $144.9 million; TAM revenues: $106.7 million.
* Energy meters, UPS and weighing scales are the contributors to the MCUs.
* Discrete and analog power are omnipresent products across applications.
* CAGR for industrial electronics segment is TM at 12.5 percent and TAM at 14.9 percent for the period 2008-10.
* Key drivers for TM include online UPS, CFL, energy meters and power supplies. Those for TAM include energy meters, CFL and power supplies.
Automotive semiconductor constituents (2008)
TM revenues: $76.5 million; TAM revenues: $50.8 million.
* The MCU market has a high dependence on the EMS and body electronics markets.
* The Nano car, statutory regulations on emission norms, and safety features are likely to sustain demand.
* CAGR for automotive electronics is TM at 23.1 percent and TAM at 24.8 percent.
* Key drivers include two-wheeler instrument clusters, EMS and immobilizers.
Other electronics semiconductor constituents (2008)
TM revenues: $251.7 million; TAM revenues: $102.5 million.
* Applications like smart cards, and aerospace and defence are driving the ASSP TM and TAM revenues, respectively.
* CAGR for this segment is TM at 16.8 percent and TAM at 23.8 percent.
* Smart cards and government space research programs are the key drivers.
While designing, it is critical to pick the appropriate codec or formats that can be handled by a video IP to support any given application. It is also very important to select the correct video IP with proper and standard interfaces so that it can be as close as possible to ‘plug-and-play’ in terms of System on a Chip (SoC) integration.
Ravishankar Ganesan, VP, SoC IP Business Unit, Ittiam Systems, commenting on the selection of the video IP for SoC designs, said that SoCs use the divide and conquer strategy very well.
The SoC is today truly defining and integrating multiple specialized blocks or subsystems keeping the target application of the SoC in mind. Each one of these specialized subsystems needs to be the best in terms of its performance, area and power so that the SoC can be the best, competitive and well suited for the target market.
The video intellectual property (IP) is one of these specialized subsystems, and hence, critically important for SoCs, which are targeted for video based applications. Needless to mention, there is no one video IP that ‘fits all’ video SoCs.
So what should any SoC designer look for in terms of supported video profiles and codecs? This really depends on the application(s) the SoC is likely to address. If you are targeting video IP for a mobile TV application in a cellular phone, the profiles and codecs will be determined by the appropriate broadcasting system.
Similarly, if the SoC is targeting the high-definition (HD) DVD player segment, the video codecs and their profiles/levels need to be determined based on the video encoder configuration that was used to create the content on the DVD disc.
There has to be a way of going about selecting and understanding video codecs. In this context, it is very critical to pick the appropriate codecs or formats that can be handled by the video IP to support the given application.
It is also very important to pick the video IP with proper and standard interfaces, so that it can be as close as possible to “plug-and-play” in terms of SoC integration. The area and power dissipation are important as well, so that the SoC can be sold at a competitive price in the market.
At high pixel rates, what would be the situation with the video subsystem? Simply put, the higher resolutions result in the explosion of data. The video subsystem needs to be highly efficient in order to handle the high data movement. It also needs to have very efficient video processing engines to meet the real-time requirements.
As for the amount of off-chip video bandwidth that is actually needed by an IP block, Ganesan said that it depends a lot on the resolution that the video IP is likely to handle. The video resolution, profiles and levels will be determined by the application. Trade-offs between silicon real estate and off-chip video bandwidth play a very critical role.
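A rough back-of-envelope calculation shows why resolution drives bandwidth; this counts only raw decoded-frame writes in YUV 4:2:0, so real traffic (reference-frame reads, non-ideal DRAM efficiency) is higher:

```python
def raw_write_bandwidth_mb_s(width, height, fps, bytes_per_pixel=1.5):
    """Raw frame-write traffic; YUV 4:2:0 averages 1.5 bytes/pixel."""
    return width * height * bytes_per_pixel * fps / 1e6

sd = raw_write_bandwidth_mb_s(720, 576, 25)    # SD (PAL)
hd = raw_write_bandwidth_mb_s(1920, 1080, 30)  # 1080p30
print(f"SD: {sd:.1f} MB/s, HD: {hd:.1f} MB/s")  # HD is ~6x SD
```

Moving from SD to 1080p multiplies the raw traffic roughly sixfold, which is the "explosion of data" the video subsystem has to absorb.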
Improving video performance
Video performance is said to deteriorate as the off-chip memory latency increases. What can be done to improve this? Internal buffering will definitely help to reduce this impact. However, that can affect the silicon size of the device. Hence, care needs to be taken and trade-offs need to be made, depending upon the video system requirements.
Finally, let’s examine how best a designer can integrate the video IP core into an SoC design. Depending upon the interfaces, the video IP can slide easily into the SoC. The IP could be just an engine, a processor-core-based soft IP, or a combination of both.
So, the SoC designer needs to evaluate the application requirements, and determine the right interfaces and the appropriate processor core, along with the memory sub-system. There could be peripheral interface IPs (either part of the video IP or separate), which also need to be inserted as part of the SoC, and the data flow on the device needs good management.
Michael J. Fister, president and CEO, Cadence Design Systems Inc., who was in India for the CDNLive event, delivered a wonderful keynote. Here’s what he had to say!
The semiconductor industry is maturing. Since 2000, the industry’s annual growth rate has experienced extreme highs and lows.
Though the semiconductor industry’s revenue growth will be low in 2007, the good news is that growth rates are smoothing out as costly fabs demand consistent production. Wireless communications, computers, and consumer products continue to be the growth drivers for semiconductors. A couple of the semiconductor technology trends driving electronic design and product development are:
* More designs at advanced nodes — Beginning this year, 90nm designs will outnumber those at 130nm. Meanwhile, 65nm design activity is ramping up and advanced designs are targeting 45nm.
* Growth in transistor count and logic — Not only are transistor counts increasing according to Moore’s Law, those transistors are being used to create more functions, and therefore more complexity, on a single chip, not just to add memory to existing designs.
A related trend is that the amount of chip production outsourced to foundries continues to grow, with many Integrated Device Manufacturers (IDMs) moving to a ‘Fab-lite’ strategy for advanced nodes. This is happening as design is becoming a greater product differentiation than production.
Note that Fister’s reference to fab-lite is interesting, even though a lot of new investment is said to be going into fabs, and he himself says, “costly fabs demand consistent production.” Another point that should not be overlooked is that Qualcomm, a fabless company, made it to the Top 10 semicon companies for the first time.
Coming back to the Cadence CEO, all of these trends create two kinds of challenges for chip design: 1) manufacturability at advanced process nodes like 90nm and below, and 2) the increased complexity and scale of system-on-chip (SoC) design.
Design solutions today must address these challenges, and increase team productivity and schedule predictability. To accomplish this, Cadence is focused on a holistic approach to the design flow. The Cadence Low-Power Solution and the Encounter Timing System are good examples of this holistic approach addressing the challenges of escalating scale and complexity.
The same holistic approach is shown in Cadence’s approach to manufacturability, which is to integrate design for manufacturability (DFM) into all aspects of the design flow, rather than just apply DFM techniques as a post-design step.