Argon Design, a leading developer of high performance software applications for manycore communications processors, launched Argon Blaster, the industry’s first flow simulation solution for generating realistic, Internet scale traffic loads and applications to test networking and security equipment.
Blaster delivers a line-rate, timing-accurate flow simulation application on an affordable PCIe acceleration card for use in standard x86 platforms. This enables OEMs to cost-effectively distribute a high-performance simulation and traffic generation solution throughout the engineering organization. The approach significantly reduces development time and cost, while simultaneously increasing product quality.
Blaster is designed for enterprise and carrier network operators for performance testing of flow based cyber security and network analytics applications. It enables network managers to verify that these systems are designed and deployed in a manner to match expected network loads.
High performance, accuracy rule!
Elaborating on the features, Daniel Proch, director of product management, Netronome, said: “Argon Blaster is the industry’s highest-performance and most-accurate flow simulation solution, in an affordable package. Developed by Argon Design, Blaster enables a standard x86 PC with a Netronome flow processor PCIe card to generate realistic, Internet-scale traffic loads and application mixes.
“For many networking applications, the ability to classify and manage traffic flows is key to enabling the highest level of performance and scalability. Quality of Service, Load Balancing, Firewall, Intrusion Detection, Content Inspection, Data Loss Prevention and similar applications all typically require flow-aware processing capabilities and this flow-aware traffic generation solution for development and QA. Blaster is the first traffic generation tool designed specifically for flow simulation applications. With Blaster, you can emulate up to a million unique flows with accurate, consistent, per-flow rate control.”
It will be interesting to know how Blaster will help the ISVs and OEMs generate realistic, Internet-scale traffic loads and applications to test networking and security equipment.
Blaster can be installed in any modern PC running Linux. It installs as a KVM virtual machine and can be operated from within the virtual machine or externally. It replays one or more .pcap files and can shape that traffic to emulate any type of traffic profile from those pcaps. The user can change the number of flows per pcap file and the addressing scheme (the number of clients and servers, based on MAC and/or IP addresses).
From this set of knobs, and given a set of pcaps with appropriate application traffic, users can construct any traffic load and application mix desired. Organizations can then perform:
* Performance benchmarking to isolate bottlenecks.
* Stress testing with real-world loads.
* Security testing with background, application and attack traffic.
* Quality assurance with a broad spectrum of applications and protocols.
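The knobs described above (number of flows per pcap, client/server addressing scheme, per-flow rate control) can be illustrated with a minimal Python sketch. This is a hypothetical illustration of the concept, not Blaster's actual API: it expands a single template flow into N unique flows by incrementing the client IP, and computes a per-flow pacing interval for consistent per-flow rate control.

```python
import ipaddress

def expand_flows(base_client, server, n_flows, flow_rate_pps):
    """Expand one template flow into n_flows unique 5-tuples by
    incrementing the client IP; attach a per-flow pacing interval."""
    base = int(ipaddress.IPv4Address(base_client))
    interval = 1.0 / flow_rate_pps  # seconds between packets, per flow
    flows = []
    for i in range(n_flows):
        client = str(ipaddress.IPv4Address(base + i))
        flows.append({"src": client, "dst": server,
                      "sport": 1024 + (i % 64000), "dport": 80,
                      "interval_s": interval})
    return flows

# Four unique client flows toward one server, each paced at 100 pps
flows = expand_flows("10.0.0.1", "192.168.1.10", 4, 100)
for f in flows:
    print(f["src"], "->", f["dst"], "every", f["interval_s"], "s")
```

Scaled to a million flows, the same idea is what per-flow rate control amounts to: each flow keeps its own transmit schedule instead of sharing one aggregate rate.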
Let’s find out a bit more about the role played by Netronome as well as Argon Design. Proch said: “The product is an Argon branded product that is a joint development with Argon Design. Netronome provides the accelerated flow processing hardware for the solution in the form of a standard PCIe card, and Argon designed and engineered the software. Netronome will be handling sales and marketing of the product. Software and support will be handled by Argon.”
Will there be an upgrade sometime later, next year, perhaps? “Most certainly,” he continued. “Our early access customers and internal use has already developed a robust roadmap and we anticipate these features and others to be rolled out over several subsequent software releases. We also expect to have a new hardware version based on our recently announced NFP-6xxx family of flow processors when available.”
June 8 happens to be World IPv6 Day. On this day, tomorrow, Google, Facebook, Yahoo!, Akamai and Limelight Networks will be among the major global organizations offering content over IPv6 networks on a 24-hour test flight! World IPv6 Day's goal is to motivate organizations (ISPs, hardware vendors, OS vendors, web companies, etc.) to prepare their services for IPv6, as IPv4 addresses run out!
Internet Protocol version 6 (IPv6) is a version of the Internet Protocol (IP) that is designed to succeed Internet Protocol version 4 (IPv4). The growth of the Internet has mandated a need for more addresses than is possible with IPv4. IPv6 allows for vastly more addresses.
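The scale difference is easy to quantify: IPv4 uses 32-bit addresses while IPv6 uses 128-bit addresses, so the respective address spaces are 2^32 versus 2^128.

```python
# IPv4: 32-bit addresses; IPv6: 128-bit addresses
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # roughly 4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")  # roughly 3.4e38

# IPv6 offers 2**96 times as many addresses as IPv4
print(f"Ratio: 2**96 = {ipv6_addresses // ipv4_addresses:.3e}")
```

That 4.3-billion ceiling is the exhaustion problem World IPv6 Day is meant to highlight.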
Thanks to Lauren Willard at Sterling Communications, I got into a conversation with Dave Kresse, CEO of Mu Dynamics, on the eve of the IPv6 Day. Mu has been working with network operators and service providers for years now to ensure that their networks are up for IPv6.
Wednesday, the company will be announcing a free solution for network operators and service providers to ensure that their networks will operate smoothly both during the transition to IPv6 and once it’s complete. Mu is doing all of this in conjunction with the nation’s leading lab for IPv6 testing, the University of New Hampshire InterOperability Laboratory (UNH-IOL).
Talking about the significance of World IPv6 Day from Mu’s perspective, Kresse says that everybody has been talking about IPv6 for the longest time, and a majority of Mu’s customers have been focusing on it for a while. World IPv6 Day brings additional visibility to the exhaustion of IPv4 addresses, and those who have not started the transition are definitely behind the game.
As for Mu’s role in IPv6, he adds: “For the last several years, our proven testing solution has helped network equipment manufacturers and operators around the world with their IPv6 testing and certification. The Mu Test Suite for IPv6 is a comprehensive suite of automated testing solutions and test content assisting customers and prospects to test, certify and validate their products and services for conformance, security and resiliency.”
Year 2010 has been a good year for the global electronics industry, rather, the technology industry, coming right after a couple of years of recession. Well, it is time to look back on 2010 and see the good, bad and ugly sides, if any, of electronics, telecom and technology.
Presenting my list of top posts for 2010 from these three segments.
Electronics for energy efficient powertrain
Best wishes for a very, very happy and prosperous 2011! :)
Early this month, The Defense Advanced Research Projects Agency (DARPA) awarded an $8.4 million grant to the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science for research on a technology known as non-volatile logic, which enables computers and electronic devices to keep their state even while powered off, then start up and run complex programs instantaneously.
The research has broad implications across a range of technologies, including portable electronics, remote sensors, unmanned aerial vehicles and high-performance computing. UCLA Engineering researchers will conduct studies into the materials, design, fabrication and tools used to develop such technologies.
“To achieve the ambitious goals of this program, we are planning to introduce key innovations in terms of both material and device structures. This is an opportunity to study new nano-magnetic physics while developing an exciting technology,” said research associate Pedram Khalili, who will be the project manager at UCLA, in a release.
Thanks to Ms Wileen Wong Kromhout, director of Media Relations & Marketing, UCLA Henry Samueli School of Engineering and Applied Science, I was able to connect with Pedram Khalili, research associate, Department of Electrical Engineering, UCLA, and project manager, UCLA-DARPA STT-RAM and NV Logic Programs.
Logic technology could lead to instant-on computers
First, I asked Khalili what’s this technology that is known as non-volatile logic all about? He said: “In a nutshell, it is a logic technology, which retains its state, while doing computation. That means, you can turn it off, and turn it on again, and it will resume the computation where it had left off. This is not the case with the current computers. Hence, it can lead to instant-on computers.”
UCLA Engineering researchers will also conduct studies into the materials, design, fabrication and tools used to develop such technologies. So, what are these materials, design, tools, etc. going to be? Khalili added: “The materials will be ferromagnetic, i.e., we will be using dynamic phenomena — known as spin waves — in magnetic thin films to perform logic. The memory effect (i.e., non-volatility) will also be provided by a magnetic memory bit.”
The UCLA researchers are said to be aiming to develop a prototype non-volatile logic circuit, which could lead to development of new classes of ultra–low-power, high-performance electronics. Khalili noted, “The prototype that we refer to will be a logic circuit performing a logic operation in a non-volatile manner.”
The researchers are also planning to introduce key innovations in terms of both material and device structures. This is said to be an opportunity to study new nano-magnetic physics, while developing an exciting technology. Khalili clarified, “Generally, we will be looking for new ways to control magnetization on the nanoscale, in a fast and energy-efficient manner.”
The project will be led by UCLA under principal investigators Kang Wang and Alex Khitun, an assistant research engineer, and will involve researchers from UCLA, UC Irvine, Yale University and the University of Massachusetts.
On a personal note, I am extremely delighted to touch base with such a renowned and globally acclaimed institution like the UCLA Henry Samueli School of Engineering and Applied Science and its researchers/faculty.
Am looking forward to many more interactions with UCLA and several other globally renowned institutes, and hopefully, with many such institutes across India, who are doing cutting-edge technology research.
Recently, I had the pleasure of interacting with Nataraj Kumar, director, Consumer Lifestyle, Philips Innovation Campus (PIC), where we discussed things such as Philips technology in interoperability, and the role of this technology in the Philips development ecosystem.
Content sharing platform and consumer behaviour are two key areas of focus for the Dutch electronics giant, Philips. As you know, connectivity and interoperability, as well as certification, play key roles in the overall make up of CE devices as well. To ensure that all devices work smoothly, consumer electronics manufacturers have to be very careful regarding testing and interoperability issues.
Last month, Philips had organized the Philips Connectivity Plugfest-02 at the Philips Innovation Campus in Bangalore, India. It attracted 31 companies who showcased 90 devices focusing on connectivity technologies — HDMI, USB, Bluetooth and DLNA.
As you can see, the focus was on content sharing over multiple devices — all of whom need to operate and function in unison — and that’s where the interoperability factor comes in!
In fact, more than 70 percent of the companies participating in the Plugfest-02 focused on HDMI. According to Nataraj Kumar, there were 42 products related to HDMI, while there were 23 products focused on USB. Bluetooth had 17 products and there were four related to DLNA (Digital Living Network Alliance).
In contrast, the Philips Connectivity Plugfest-01, held in June 2009 at the same venue, had attracted 15 companies who showcased 40 devices focusing on technologies such as Bluetooth, HDMI and DLNA!
Strong current focus on HDMI
As per Nataraj Kumar, HDMI 1.4 supports the audio return channel, provides 3D support, as well as an HDMI Ethernet channel.
Elaborating on the Plugfest-02, he said that there were a range of CE devices, such as TV sets, graphic cards, active HDMI cables, control boxes, products that get into DVD players, etc.
He said: “We made a matrix of every company, and presented each company 45 minutes. Within that period, each company had to pick up its product — or source — and carry it to a synchronization device, which receives and displays data. Then they evaluated a variety of test cases that were already pre-defined by Philips.”
Most of the participating companies at Plugfest-02 were able to test successfully for interoperability and perhaps, also identify problems that could be later resolved.
Just how well Philips is geared up for HDMI is visible from its well-equipped Interoperability and Certification Center (ICC) lab (sorry folks, no pictures).
The Philips ICC lab has the facility to handle HDMI 1.4 compliance testing. It also offers HDMI 1.4 CEC compliance testing, HDMI HDCP compliance testing, and HDMI and HDMI CEC interoperability testing.
The ICC lab offers interoperability testing with CE devices for Bluetooth, as well as Bluetooth profile testing. For USB, it offers USB interoperability testing, while for DLNA, it offers DLNA interoperability testing for 1.0 and 1.5, respectively. The lab offers RF4CE (radio frequency for consumer electronics) interoperability testing as well.
Now, I couldn’t find any company showcasing WHDI (wireless home digital interface) capabilities. Perhaps, the technology is still very new! And what about Philips’ interest in this technology?
On inquiring, Nataraj Kumar said that Philips is exploring opportunities as to what the WHDI standard can do for home entertainment. Should Philips participate in this specification, it would possibly look into WHDI’s standardization process as well.
Here are just two among the many data points. One, India, Brazil and Poland — all witnessed growth in malicious activity. In 2009, India accounted for 15 percent of all malicious activity in the APJ region, an increase from 10 percent in 2008. Also, 19 percent of the attacks targeting India, originated in India itself in 2009. So, India is rising — both as the country of origin and a target for attacks! Wonderful!
Another one: after the US, Brazil and India are prominent among the countries where Web-based attacks originate. Okay, India was also one of the highest ranked countries for Zeus infections in 2009!
So, the key findings of the threat landscape are as follows: The underground economy remains unaffected by the global economy. Hence, users are still plagued by Web-based attacks. Targeted attacks focus on enterprises — no surprise! Next, attack kits make it easier for novices to indulge in information theft. Finally, malicious activity takes place in emerging countries (read India, among them). I will deal with all of these a bit later.
Dhupar elaborated on some best practices as well that we all, enterprises and end users alike, need to follow. These include:
* Defense-in-depth strategies
* A proactive, policy-based approach to security
* Testing security, and updating definitions and patches
* Educating management on security
* Emergency response procedures with backup and restore
As for the way ahead, cybercriminals will continue to innovate to fuel the underground economy. New age Internet technologies and usage will encourage novel propagation vectors. The global scale and origin of attacks requires international co-operation.
Interesting! Melzoo is the latest search engine on the block!
I tried various searches on this site, starting by typing my name ;) ! Well, the first result was of this very blog… and another surprise — the results appear on a split page! The first (or left) part of the page lists the results, while the second (or right) part of the page opened my blog’s page!
When I moved the mouse over to the second search result — my Newsvine page — the page on the right changed automatically — to my Newsvine index page. And then on to the next result — my page on BlogCatalog!
Okay, so the search lists all results on one side, and previews the page for each result — as you move the mouse over — on the other side.
One of the search results led me to an EDN Global Roundtable of early 2004! This page, especially, brought back several memories! I’d participated in this roundtable sometime in Q3 of 2003, and it was published early January of 2004. I’ve lost track of some of those participants, but it was good to see this roundtable all over again!
Another search result led me to an EDN Global Report 2, which had been done before I left for Hong Kong in Q3-05.
Some pages were unable to show the preview, though my guess is, either the Internet was slow or the pages were taking time to download, or, the previews were not yet fixed. Am sure, those would be taken care of!
By the way, I was extremely thrilled to see this blog’s name appear on the very first page, when I tried another search using ‘semicon blogs’. Also, when I tried searching for TD-SCDMA, the results on the first page displayed, among others, my own article, written for Wireless Week, US, back in 2001.
However, I’m wondering what is the MelZoo team going to do to make this search engine popular among users? Google, like Yahoo, spread by word of mouth. Later, Google added several features (and keeps adding) that greatly enhanced its image! We keep comparing any search engine invariably with Google, don’t we?
There have been other search engines like iRazoo, Cuil, etc., but they don’t seem to have gained many followers so far! On Alexa, Cuil was ranked 9,596 and iRazoo 21,865. Melzoo does not have a rank yet, so it is a brand new starter!
Also, the names of these popular search engines! The names of the new search engines are a bit difficult to remember, aren’t they? Yahoo and Google, and perhaps even Alta Vista, Lycos, etc., before them, have been fairly easy to remember. Even the logos, I guess, were more user friendly.
Perhaps, MelZoo would do something different over the next few weeks or months! Since the site is still in beta, expect some more goodies to be added. And, even better search results to show up!
After all, any search engine is known for the quality of results it throws up! This means showing the interesting and the latest, along with less interesting results that may be equally relevant to the search. How recent are the updates? How frequently are new articles added on to the search results?
For instance, any online media would check MelZoo to see if the site is picking up all of its news with each passing hour. If MelZoo can cope with this challenge, it would have done its job admirably. Best of luck to the team!
Yes, I believe so! The numbers, if one were to contend with those alone, DO NOT meet the expectations. Broadband was and is considered to be the new paradigm of India. However, are we anywhere near whatever growth we have been expecting? Let’s see the stats for the various telecom segments.
According to the statistics made available by the Telecom Regulatory Authority of India (TRAI), the total number of telephone subscribers was 232.87 million at the end of July 2007, and the overall teledensity had increased to 20.52!
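Teledensity is the number of telephone connections per 100 inhabitants, so the two figures above imply the population base TRAI is working with. A quick sanity check (the definition and rounding here are my assumptions, not TRAI's stated methodology):

```python
# TRAI figures for end of July 2007
subscribers_millions = 232.87
teledensity = 20.52  # connections per 100 people

# teledensity = subscribers / population * 100
# => population = subscribers / teledensity * 100
implied_population_millions = subscribers_millions / teledensity * 100
print(f"Implied population: {implied_population_millions:,.0f} million")
```

The result comes out to roughly 1,135 million, consistent with India's population at the time, which suggests the two published numbers hang together.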
In the wireless segment, 8.06 million subscribers were added in July 2007 and the total wireless subscribers (GSM, CDMA and WLL (F)) base was 192.98 million. The wireline segment subscriber base stood at 39.89 million, with a decline of 0.20 million in July 2007.
And what about broadband? For broadband (≥256Kbps downloads), the total broadband connections in the country had reached only 2.47 million by the end of July 2007. In fact, during July 2007 there was an addition of 0.05 million connections!
Let’s go back a few months! Venkat Kedalya of Convergent Communications had pointed out in an article to CIOL that India was nowhere on course to reach a target of 9 million broadband subscribers by this year! India has a target of achieving 20 million broadband subscribers by 2010, which now seems to be highly ambitious and well, unachievable!
Allocation of frequencies for BWA (broadband wireless access) is the immediate need of the moment. There is a need to look at WiMax and broadband over powerline (BPL) as far as technology is concerned. Some folks have entered the IPTV domain, so hopefully, we will get to see some content over broadband.
Even TRAI has urged the government to boost broadband growth. One of its suggestions has been to ask BSNL and MTNL to adopt a franchisee model so that local players may use their copper cables and offer high-speed Internet services. Decisions need to be taken for allocating spectrum for WiMax as well as making the National Internet Exchange of India more effective.
TRAI said: “Only 0.47 million broadband subscribers have been added in first six months of 2007, which is far below the growth trend required to achieve broadband policy targets. This necessitated an analysis of regulatory and policy frameworks, and to formulate new approach necessary for rapid roll-out of broadband in the country.”
TRAI also accepts that while the growth of Internet subscribers was satisfactory, we are seriously lagging behind as far as broadband is concerned. It adds: “The government should ensure availability of more number of Ku-band transponders to roll out broadband services through DTH platform and utilize Universal Service Obligation (USO) fund to provide subsidy for providing broadband services through satellite in remote and hilly areas.”
I’m not really sure how all of this will help. You do need at least a PC to access the Internet services. Am not sure how many folks are still willing to invest in home PCs and broadband, given that watching TV is a favorite pastime. Broadband over cable TV has not been a success either. What are we doing about this?
In engineering, it is imperative that all cogs of the wheel come together, so that the wheel rolls smoothly. Similarly, it is imperative that all key IT processes in an organization gel together and work as one.
Imagine the nightmare that enterprises, small and large, would have to go through should this not happen!
The first basic IT asset is your company’s network, or the intranet. We have seen several times that a company’s network goes down for some reason and mails can’t be sent or received.
In such cases, the organizations or the enterprises who are ‘stuck’ with this situation, are literally crippled. Mails can’t be received, mails can’t be sent out, important mails are missed, business-critical processes are waylaid, and so on and so forth. I’ve been part of this nightmare several times.
Once, the undersea cable snapped during my stay in a company. I don’t need to add the problem we had to face for at least half a day, as service providers worked furiously to rectify the cable and restore normal service.
Some advocate satellite as the best medium for managing data transmission. Maybe! Some others cite wireless. Perhaps!! Then, I hear from many that there are issues related with security and storage. However, those would only come into play once your basic network is operational smoothly.
In telecom, they have something called five nines, or 99.999 percent, which means the network is up and running for that percentage of time! You’ve noticed how people go berserk and start cursing their phones or the network, should they fail to get a signal, or are unable to connect to the network!
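Those availability figures translate directly into allowed downtime. Five nines (99.999 percent) permits only a few minutes of outage per year, as a back-of-the-envelope calculation shows (assuming a 365-day year):

```python
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

# Allowed downtime per year at three, four and five nines of availability
for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): "
          f"{downtime_min:.1f} minutes of downtime per year")
```

Five nines works out to about 5.3 minutes of downtime a year, which is why that half-day office outage feels so far from telecom-grade service.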
What they don’t know or realize is the hard work that’s involved in setting up, maintaining and operating a network! It’s similar to what sometimes happens in offices when the network breaks down and we are unable to send/receive mails.
Maybe, it would be prudent to first manage the internal network as best as possible, before moving on to bigger, better things. The cogs in the wheel have got to move smoothly.