
Archive for the ‘SoC’ Category

Agnisys makes design verification process extremely efficient!


Agnisys Inc. was established in 2007 in Massachusetts, USA, with a mission to deliver innovative automation to the semiconductor industry. The company offers affordable VLSI design and verification tools for SoCs, FPGAs and IPs that make the design verification process extremely efficient.

Agnisys’ IDesignSpec is an award-winning engineering tool that allows an IP, chip or system designer to create the register map specification once and automatically generate all possible views from it. Various outputs are possible, such as UVM, OVM, RALF, SystemRDL and IP-XACT. User-defined outputs can be created using Tcl or XSLT scripts. IDesignSpec’s patented technology improves engineers’ productivity and design quality.
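The single-source idea can be illustrated with a small sketch. This is not Agnisys’ actual implementation, and all names (the `SPEC` layout, `to_uvm`, `to_c_header`) are invented for illustration: one register description is rendered into two of the many views such a tool generates, a UVM register class for the verification team and a C header for the software team.

```python
# Hypothetical sketch (not Agnisys' implementation): one register
# specification rendered into two generated views.

SPEC = {
    "name": "CTRL",
    "width": 32,
    "fields": [  # (name, lsb, width, access)
        ("ENABLE", 0, 1, "RW"),
        ("MODE",   1, 2, "RW"),
        ("STATUS", 3, 1, "RO"),
    ],
}

def to_uvm(reg):
    """Emit a SystemVerilog/UVM register class from the spec."""
    lines = [f"class {reg['name'].lower()}_reg extends uvm_reg;"]
    for name, _lsb, _width, _access in reg["fields"]:
        lines.append(f"  rand uvm_reg_field {name};")
    lines.append("  virtual function void build();")
    for name, lsb, width, access in reg["fields"]:
        lines.append(
            f"    {name}.configure(this, {width}, {lsb}, \"{access}\", 0, 0, 1, 1, 0);"
        )
    lines.append("  endfunction")
    lines.append("endclass")
    return "\n".join(lines)

def to_c_header(reg):
    """Emit field shifts/masks for the software team from the same spec."""
    lines = []
    for name, lsb, width, _access in reg["fields"]:
        mask = ((1 << width) - 1) << lsb
        lines.append(f"#define {reg['name']}_{name}_SHIFT {lsb}")
        lines.append(f"#define {reg['name']}_{name}_MASK  0x{mask:08X}")
    return "\n".join(lines)

print(to_uvm(SPEC))
print(to_c_header(SPEC))
```

Because both views come from the same `SPEC`, a field that moves or grows is updated everywhere at once, which is the consistency argument the tool makes.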

IDesignSpec automates the creation of registers and sequences, guaranteeing higher quality and consistent results across hardware and software teams. As your ASIC or FPGA design specification changes, IDesignSpec automatically adjusts your design and verification code, keeping the critical integration milestones of your design engineering projects synchronized.

Register verification and sequence development can consume 40 percent or more of project time, and register errors are a frequent source of SoC silicon re-spins and additional FPGA builds. The IDesignSpec family of products is available in various flavors, such as IDSWord, IDSExcel, IDSOO and IDSBatch.

IDesignSpec: more than a tool for creating register models!
Anupam Bakshi, founder, CEO and chairman, Agnisys, said: “IDesignSpec is more than a tool for creating register models. It is now a complete Executable Design Specification tool. The underlying theme is always to capture the specification in an executable form and generate as much code in the output as possible.”

The latest additions to IDesignSpec are Constraints, Coverage, Interrupts, Sequences, Assertions, Multiple Bus Domains, Special Registers and Parameterization of outputs.

“IDesignSpec offers a simple and intuitive way to specify constraints. These constraints, specified by the user, are used to capture the design intent. This design intent is transformed into code for design, verification and software. Functional Coverage models can be automatically generated from the spec so that once again the intent is captured and converted into appropriate coverage models,” added Bakshi.
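As a rough illustration of the constraint-to-coverage idea, the sketch below (invented names, not the tool’s real output format) turns a legal-range constraint on a register field into a SystemVerilog covergroup that bins legal values and flags illegal ones:

```python
# Illustrative only: a field range constraint rendered as a
# SystemVerilog covergroup. Names and formatting are assumptions.

def field_to_covergroup(reg, field, width, legal_max):
    """Generate a covergroup binning legal vs. illegal field values."""
    return "\n".join([
        f"covergroup cg_{reg}_{field} @(posedge clk);",
        f"  cp_{field}: coverpoint {reg}.{field} {{",
        f"    bins legal[] = {{[0:{legal_max}]}};",
        f"    illegal_bins bad = {{[{legal_max + 1}:{2**width - 1}]}};",
        "  }",
        "endgroup",
    ])

print(field_to_covergroup("CTRL", "MODE", 2, 2))
```

The point is that the same captured intent (MODE must stay in 0..2) drives both the coverage model and, as described below, design and verification code.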

Using an add-on function for capturing sequences, the user can now capture various programming sequences in the spec, which are translated into both C++ and UVM sequences. Further, interrupt registers can now be identified by the user, and appropriate RTL can be generated from the spec. Both edge-sensitive and level-sensitive interrupts can be handled, and interrupts from various blocks can be stacked.
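A minimal sketch of that dual translation, under assumed naming (`write_reg`, `regmodel`, `ADDR_*` are all hypothetical), shows one captured programming sequence emitted as both a UVM sequence body and bare-metal C:

```python
# Hypothetical sketch: one captured (register, value) write sequence
# rendered as a UVM sequence body and as C register writes.

SEQUENCE = [("CTRL", 0x1), ("MODE", 0x2)]  # (register, value) writes

def to_uvm_body(steps):
    """Render the sequence as UVM-style register writes."""
    return "\n".join(
        f"write_reg(regmodel.{reg}, 32'h{val:X});" for reg, val in steps
    )

def to_c(steps):
    """Render the same sequence as memory-mapped C writes."""
    return "\n".join(
        f"*(volatile uint32_t *)ADDR_{reg} = 0x{val:X}u;" for reg, val in steps
    )

print(to_uvm_body(SEQUENCE))
print(to_c(SEQUENCE))
```

Keeping the sequence in the spec means the firmware and testbench teams cannot drift apart on programming order.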

Assertions can be automatically generated from the high-level constraint specification. These assertions can be created within the RTL or in external files such that they can optionally be bound to the RTL. Unit-level assertions are useful for SoC-level verification and debug, and help the user identify issues deep in the simulation hierarchy.
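Continuing the same toy example (signal and clock/reset names are assumptions, not the tool’s conventions), a range constraint can be mechanically transformed into a SystemVerilog assertion suitable for binding to the RTL:

```python
# Sketch: a field range constraint transformed into an SVA property.
# clk/rst_n and the hierarchical field name are invented for illustration.

def constraint_to_sva(reg, field, lo, hi):
    """Emit an assertion that the field always stays within [lo, hi]."""
    return (
        f"assert property (@(posedge clk) disable iff (!rst_n)\n"
        f"  {reg}.{field} inside {{[{lo}:{hi}]}});"
    )

print(constraint_to_sva("CTRL", "MODE", 0, 2))
```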

The user can now identify one or more bus domains associated with registers and blocks, and generate appropriate code from them. Special registers, such as shadow registers and register aliases, are also automatically generated.

Finally, all of the outputs, such as RTL, UVM, etc., can now be parameterized, so that a single master specification can be used to create outputs that are parameterized at elaboration time.

How is IDesignSpec working as chip-level assertion-based verification?

Bakshi said: “It really isn’t an assertion tool! The only assertion that we automatically generate is from the constraints that the user specifies. The user does not need to specify the assertions. We transform the constraints into assertions.”

On-chip networks: Future of SoC design


Selection of the right on-chip network is critical to meeting the requirements of today’s advanced SoCs. The right network enables easy integration of IP cores from many sources with different protocols, and comes with a UVM verification environment.

John Bainbridge, staff technologist, CTO Office, Sonics Inc., said that an on-chip network optimizes system performance. Virtual channels offer efficient resource usage, saving gates and wires. A non-blocking network improves system performance, and flexible topology choices allow the network to be matched optimally to the requirements.

Power management is key, with advanced system partitioning and an improved design flow and timing closure. Finally, the development environment allows easy design capture and includes performance analysis tools.

For the record, there are several SoC integration challenges that need to be addressed, such as IP integration, frequency, throughput, physical design, power management, security, time-to-market and development costs.

SGN exceeds requirements
SGN met the tablet performance requirement with a fabric frequency of 1066MHz and an efficient gate count of 508K gates. Sonics features include advanced system partitioning, security and I/O coherency, with support for system concurrency as well as advanced power management.

Sonics offers system IP solutions such as SGN, a router-based NoC solution with flexible partitioning and virtual channel (VC) support. Frequency is optimized with credit-based flow control.

SSX/SLX are message-based crossbar/ShareLink solutions based on interleaved multi-channel technology, with target-based QoS and three arbitration levels. SonicsExpress handles power-centric clock domain crossing, enabling sub-system re-use and decoupling. MemMax manages and optimizes DRAM efficiency while maintaining system QoS, with run-time programmability for all traffic types. SonicsConnect is a non-blocking peripheral interconnect.

What technology do SoC engineers need for next-gen chips?


Some 318 engineers and managers completed a blind, anonymous survey on ‘On-Chip Communications Networks’ (OCCN), also referred to as ‘on-chip networks’, defined as the entire interconnect fabric of an SoC. The on-chip communications network report was produced by Sonics Inc. A summary of some of the highlights follows.

The average estimated time spent on designing, modifying and/or verifying on-chip communications networks was 28 percent (among respondents who knew their estimated time).

The two biggest challenges for implementing OCCNs were meeting product specifications and balancing frequency, latency and throughput. Second tier challenges were integrating IP elements/sub-systems and getting timing closure.

As for 2013 SoC design expectations, a majority of respondents are targeting a core speed of at least 1GHz for SoC design starts within the next 12 months, based on those respondents who knew their target core speeds. Forty percent of respondents expect to have 2-5 power domain partitions in their next SoC design.

A variety of topologies are being considered for respondents’ next on-chip communications networks, led by NoCs (half), followed by crossbars, multi-layer bus matrices and peripheral interconnects; respondents who knew their plans were seriously considering an average of 1.7 different topologies.

Source: Sonics Inc., USA.


Twenty percent of respondents stated they already had a commercial Network-on-Chip (NoC) implemented or plan to implement one in the next 12 months, while over a quarter plan to evaluate a NoC over the next 12 months. A NoC was defined as a configurable network interconnect that packetizes address/data for multicore SoCs.

For respondents who had an opinion when commercial Networks-on-Chip became an important consideration versus internal development when implementing an SoC, 43 percent said they would consider commercial NoCs at 10 or fewer cores; approximately two-thirds said they would consider commercial NoCs at 20 or fewer cores.

The survey participants’ top three criteria for selecting a network-on-chip were scalability/adaptability, quality of service and system verification, followed by layout friendliness and support for power domain partitioning. Half of respondents saw reduced wiring congestion as the primary reason to use virtual channels, followed by increased throughput and meeting system concurrency requirements with limited bandwidth.

What’s next in complex SoC verification?


Functional verification is critical in advanced SoC designs. Abey Thomas, verification competency manager, Embitel Technologies, said that over 70 percent of the effort in the SoC lifecycle goes into verification. Only one in three SoCs achieves first-silicon success.

Thirty percent of designs needed three or more re-spins. Three out of four designs are SoCs with one or more processors, and three out of four designs re-use existing IPs. Almost all embedded processor IPs have power controllability, and almost all SoCs have multiple asynchronous clock domains.

On average, 75 percent of designs are less than 20 million gates. A significant increase in formal checking is approaching. The average number of tests performed has increased exponentially, and regression runs now span several days or weeks. Hardware emulation and FPGA prototyping are rising rapidly, there has been a significant increase in the number of verification engineers involved, and many HVLs and methodologies are now available.

Verification challenges
Verification challenges include unexpected conflicts in accessing shared resources. Complexities can arise from interactions between standalone systems. Next, there are arbitration priority issues and access deadlocks, as well as exception handling priority conflicts. There are issues related to hardware/software sequencing, and long loops and unoptimized code segments. Leakage power management and thermal management also pose problems.

Performance and system power management also need verification: multiple power regions are turned on and off, and multiple clocks are gated on and off. Next come asynchronous clock domain crossing and protocol compliance for standard interfaces. There are issues related to system stability and component reliability. Other challenges include voltage level translators and isolation cells.

Where are we now? Current techniques include clock gating, power gating with or without retention, multi-threshold (multi-Vt) transistors, multi-supply multi-voltage (MSMV), DVFS, logic optimization, thermal compensation, 2D/3D stacking, and fab-process and substrate-level bias control.

So, what’s needed? There must be low-power methods that do not impact performance. Careful design partitioning is needed, clock trees must be optimized, and crucial software operations need to be identified at early stages. Also, functional verification needs to be thorough.

Power-hungry processes must be shortlisted. There needs to be compiler-level optimization as well as hardware-acceleration-based optimization, along with duplicate-register and branch-prediction optimization. Finally, there should be a big.LITTLE processor approach.

Present verification trends and methodologies include clock partitions, power partitions, isolation cells, level shifters and translators, serializers/deserializers, power controllers, clock domain managers, and a power information format (CPF or UPF). Low-power-related verification covers both power-down and power-up; in the latter, the behavioral processes are re-enabled for evaluation.

Open source verification challenges
First, the EDA vendor decides what to support! Too many versions are released in a short time frame. Object-oriented concepts are used that are sometimes a poor fit for hardware. Modelling is sometimes done by an engineer who does not know the difference between a clock cycle and a motorcycle! Next, there are too many open source implementations without much documentation, and there can be multiple, confusing implementation options. In some cases, no open source tools are available at all, and tech support is limited.

Power-aware simulation steps perform register/latch recognition from the RTL design, identify power elements and power control signals, and support UPF- or CPF-based simulation. Power reports are generated, which can be exported to a unique coverage database.

Common pitfalls include wrapper-on-wrapper bugs, e.g., Verilog + e wrapper + SystemVerilog. There is also a dependency on machine-generated functional coverage goals, and there may be a disconnect between the design and verification languages. Other pitfalls are meaningless coverage reports, defective reference models, and unclear, ambiguous specification definitions. Even proven IP can become buggy due to wrapper conditions.

Tips and tricks
Early planning helps. Certain steps need to be completed: code coverage targets, functional coverage targets, targeted checker coverage, correlation between the functional coverage and checker coverage lists, and a complete review of all known bugs.

Tips and tricks include bridging the gap between the design language and the verification language. Minimal wrappers should be used to avoid wrapper-level bugs. There should be a thorough review of the coverage goals, and better interaction between design and verification engineers. Running with basic EDA tool versions also lowers costs.

Sonics participates in TSMC’s Soft IP Alliance 2.0 beta program

November 30, 2012

Milpitas, USA-based Sonics Inc. participated in TSMC’s Soft IP Alliance 2.0 beta program. Driving high-quality soft IP eases customer integration and expedites time-to-market.

Sonic’s role in TSMC beta program
Speaking on the beta program and Sonics’ role, Frank Ferro, director of Product Marketing, Sonics, said: “TSMC’s Soft IP Kit 2.0 beta program is part of TSMC’s Open Innovation Platform program that creates a complete ecosystem for customers with the overall goal of shortening design time. This is done by providing a large catalog of partner-provided IP that is silicon-verified and production-proven.

A complex SoC with Sonics’ SGN on-chip network.

For vendors like Sonics, TSMC has extended this ecosystem to include Soft-IP (IP not designed for a specific process, but delivered as RTL). The program allows Soft-IP partners to access and leverage TSMC’s process technologies to optimize power, performance and area for their IP.

IP cores are checked through TSMC’s foundry checklist to ensure that customers have optimized design results with fast IP integration built into their design. This flow also facilitates easy IP reuse for subsequent designs. The Soft IP Kit 2.0 beta program is an extension of the current program, implementing additional quality checks, improving results and making the flow easier for customers.

There are several advantages to Sonics as a participant in this program. First, customers of TSMC will have access to Sonics IP through TSMC’s IP library. Given TSMC’s strong market share, this will make Sonics IP visible to a large customer base. In addition, TSMC’s customers will feel secure using Sonics IP because they know it has been put through a rigorous series of IP checks that meet the highest quality standards. The program also gives Sonics early access to TSMC’s process libraries, allowing it to optimize performance and area for each IP product.

So, what can TSMC’s Soft IP Kit 2.0 do? How does Sonics enhance its capabilities? The Soft IP Kit 2.0 provides a specific RTL design flow methodology and hand-off, which includes: lint (RTL coding consistency), clock domain crossings (CDC), power (CPF/UPF), physical design (routing congestion), design for test (DFT), constraints and documentation.

Using this flow enhances Sonics IP quality and reliability because many RTL errors can be caught at an early stage. As mentioned above, this flow ensures lowest power and best performance of the IP for a given process node.

Atrenta SpyGlass improves packaging
There is a role played by Atrenta SpyGlass. According to Ferro, Atrenta SpyGlass is the tool used to run all the tests. The flow was developed to TSMC’s standards and implemented by Atrenta. Given Sonics’ strong relationship with TSMC and Atrenta, the company was invited to be a beta partner, using its IP to test the new flow. A number of companies participate in the program, although only Sonics has announced participation in the beta 2.0 program to date.

This tie-up with Atrenta will likely improve IP packaging. As part of the overall flow, the final step, after all basic and advanced IP checks, is IP packaging. This step includes providing the IP with information on the design intent, set-up and analysis reports. Again, this is done using the SpyGlass tool from Atrenta.

This IP packaging was available to customers in the past via the Soft IP 1.0 program. The attraction of this type of IP packaging stems from the growing number of IP cores being integrated into complex SoCs. As the number of third-party IP cores grew, a better, broader methodology was needed.

Smarter systems in third era of computing!

September 19, 2011

Jeff Chu, director of Consumer, Client Computing at ARM.


Over 1.8 billion ARM cores were shipped in chips during Q1-2011. Consumers are now driving computing. The Internet of things envisages 100 billion+ units by 2020, according to Jeff Chu, director of Consumer, Client Computing at ARM, who was speaking on ‘Smarter systems for smarter consumers: 3rd era of computing’ at the ARM Technical Symposium.

ARM’s ecosystem has benefitted. Tablets have changed the competitive landscape. New OSs such as Android Honeycomb, Google Chrome OS and RIM QNX are enabling innovation. Also, Microsoft Windows 8 will likely transform PCs forever.

Consumers are always demanding more as they want choices. There are a range of devices available. These come in a lot of cool form factors, along with applications and services. There is a growing software ecosystem as well. It is all about smarter systems.

Smarter systems require a balanced approach. High-performance, low-power CPUs are critical. The GPU is now critical too, in some workloads more important than the CPU, and video is moving to 3D. All of these functions require processors that perform. ARM multicore enables the best of both worlds, allowing a balance of peak performance and optimum power.

ARM offers a broad range of application processors. It also has power optimized MALI GPUs. ARM is providing choices in silicon solutions — such as ARM Cortex A8, A9 or ARMv7A. ARM also has the TrustZone security to keep everything safe. A whole lot of software is also required. ARM’s application diversity really delivers here. ARM also maintains a leadership in Android with over 550K ARM devices shipped.

Momentum is leading to innovation. New devices and user experiences are based on open source hardware, and local innovation has led to regional designs. As a result, we are now witnessing broader adoption and expanding markets. Enterprise needs are being met by thin clients, and there is a growing number of ARM SoCs.

ARM is building on the smartphone ecosystem. ARM works with OEMs and software developers to create an ecosystem.

India’s teaching community contemplates SoC design

November 4, 2010

The VLSI Society of India recently organized a two-day faculty development workshop on SoC design (a Train-the-Trainer program) on Oct. 30-31, 2010, at the Texas Instruments India office, in co-operation with PragaTI (TI India Technical University) and Visvesvaraya Technological University (VTU).


Dr. C.P. Ravikumar, TI, addressing the teachers at the workshop on SoC design.

I am highly obliged and very grateful to the VLSI Society of India and Dr. C.P. Ravikumar, technical director, University Relations, Texas Instruments India, for extending an invitation. Here is a report on the workshop, which the VSI Secretariat and Dr. Ravikumar have been most kind to share.

System-on-chip (SoC) refers to the technological revolution that allows semiconductor manufacturers to integrate complete electronic systems on a single chip. System-on-board, the conventional implementation of electronic systems, uses semiconductor chips soldered onto printed circuit boards (PCBs) to realize system functionality.

Systems typically include sensors, analog front-ends, digital processors, memories and peripherals. Thanks to advances in VLSI technology, these sub-systems can be integrated on the same chip, reducing footprint, cutting cost, and improving performance and power efficiency.

While the industry has adopted SoC design for many years, the academic community around the world (India not being an exception) has not caught up with the state-of-the-art. Electrical/electronics engineering departments continue to teach a course on VLSI design, where the level of design abstraction is device-level, transistor-level, or gate-level.

Register-transfer-level (RTL) design using hardware description languages is taught in some Masters’ programs, but colleges often do not have the lab infrastructure to carry out large design projects; very few Indian universities have tie-ups with foundry services to get samples. A semester is too short a time to complete a large project.

The complexity of modern-day design flow is not easy to impart in a single undergraduate course. Masters’ programs are particularly relevant in VLSI, but the M.Tech programs in the country languish due to several reasons.

Ground realities
“M.Tech programs do not attract top students who are highly motivated,” said a professor who attended the two-day faculty development program organized by VLSI Society of India. “Almost all undergraduate programs today have a course on VLSI technology and design. But since we get students from different backgrounds, they do not have the pre-requisites. So, a course on VLSI design at M.Tech level will have a significant overlap with an undergraduate course on VLSI design.”

“Faculty members need training,” said another teacher. “When a new course is introduced, significant time is needed for preparation.  Prescribed textbooks for a new course are often not available. Internet search for course materials often returns too much material and it is hard to decide what to use. Colleges that have autonomy can decide their own curriculum, but in a university setup, the faculty face a major challenge. We are evaluated on how well our students fare in the exams. Yet, our students have to face an exam made by a central committee.”

“Having a common exam poses many problems in setting up a relevant question paper. The format of the question paper is fixed. The students get a choice of answering five questions from a set of eight. Due to the common nature of the question paper, the questions tend to demand descriptive answers.”

Faculty development workshop on SoC design
About 30 faculty members interested in system-on-chip design took part in the faculty development workshop. The attendees came from about 25 different colleges under VTU, VIT University and Anna University. The workshop was conducted in co-operation with Visvesvaraya Technological University (VTU) and sponsored by Texas Instruments India.

The premise for the workshop was that a course on SoC design is required at the Masters’ level, since industrial practice has clearly moved in that direction. While the RTL-to-layout flow continues to be relevant for the IPs that constitute an SoC, aspects of SoC design, which relies on IP integration, are not covered in any course.

The workshop provided a forum for industry-academia interaction. Several professionals from the industry took part in the workshop and answered questions from the faculty members.
