Archive for the ‘data centers’ Category

Netronome and Argon Design launch Blaster flow simulation solution

November 23, 2012

Blaster flow simulation solution.

Argon Design, a leading developer of high-performance software applications for manycore communications processors, launched Argon Blaster, the industry’s first flow simulation solution for generating realistic, Internet-scale traffic loads and application mixes to test networking and security equipment.

Blaster delivers a line-rate, timing-accurate flow simulation application on an affordable PCIe acceleration card for use in standard x86 platforms. This enables OEMs to cost-effectively distribute a high-performance simulation and traffic generation solution throughout the engineering organization. The approach significantly reduces development time and cost, while simultaneously increasing product quality.

Blaster is designed for enterprise and carrier network operators for performance testing of flow based cyber security and network analytics applications. It enables network managers to verify that these systems are designed and deployed in a manner to match expected network loads.

High performance, accuracy rule!
Elaborating on the features, Daniel Proch, director of product management, Netronome, said: “Argon Blaster is the industry’s highest-performance and most-accurate flow simulation solution, in an affordable package. Developed by Argon Design, Blaster enables a standard x86 PC with a Netronome flow processor PCIe card to generate realistic, Internet-scale traffic loads and application mixes.

“For many networking applications, the ability to classify and manage traffic flows is key to enabling the highest level of performance and scalability. Quality of Service, Load Balancing, Firewall, Intrusion Detection, Content Inspection, Data Loss Prevention and similar applications all typically require flow-aware processing capabilities and this flow-aware traffic generation solution for development and QA. Blaster is the first traffic generation tool designed specifically for flow simulation applications. With Blaster, you can emulate up to a million unique flows with accurate, consistent, per-flow rate control.”

It will be interesting to see how Blaster helps ISVs and OEMs generate realistic, Internet-scale traffic loads and application mixes to test networking and security equipment.

Blaster can be installed in any modern PC running Linux. It installs as a KVM virtual machine and can be operated from within the virtual machine or externally. It replays one or more .pcap files and can emulate any type of traffic profile from those captures. The user can change the number of flows per pcap file and the addressing scheme (the number of clients and servers, based on MAC and/or IP addresses).

From this set of knobs, and given a set of pcaps with appropriate application traffic, users can construct any traffic load and application mix desired (a simplified pcap-replay sketch follows the list below). Organizations can then perform:
* Performance benchmarking to isolate bottlenecks.
* Stress testing with real-world loads.
* Security testing with background, application and attack traffic.
* Quality assurance with a broad spectrum of applications and protocols.
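
The sketch below is illustrative only: Blaster’s own interfaces are not documented here, so it approximates the pcap-replay idea in plain Python with Scapy, fanning a single capture out into many unique flows by rewriting source addresses. The interface name, address range, pcap file and flow count are assumptions, and a software replay like this offers none of the line-rate, timing-accurate behavior the accelerated card is claimed to provide.

```python
# Hypothetical illustration of pcap replay with address rewriting (not Blaster's API).
from scapy.all import rdpcap, sendp, IP

def replay_with_fanout(pcap_path, iface="eth0", num_clients=100):
    """Replay a capture once per simulated client, each copy with a unique source IP."""
    template = rdpcap(pcap_path)
    for client in range(num_clients):
        src_ip = f"10.0.{client // 254}.{client % 254 + 1}"   # synthetic client address
        flow = []
        for pkt in template:
            if IP in pkt:
                clone = pkt.copy()
                clone[IP].src = src_ip      # change the addressing scheme per flow
                del clone[IP].chksum        # let Scapy recompute the checksum
                flow.append(clone)
        sendp(flow, iface=iface, verbose=False)   # no accurate per-flow rate control here

if __name__ == "__main__":
    replay_with_fanout("http_session.pcap", iface="eth0", num_clients=1000)
```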

Let’s find out a bit more about the role played by Netronome as well as Argon Design. Proch said: “The product is an Argon branded product that is a joint development with Argon Design. Netronome provides the accelerated flow processing hardware for the solution in the form of a standard PCIe card, and Argon designed and engineered the software. Netronome will be handling sales and marketing of the product. Software and support will be handled by Argon.”

Will there be an upgrade sometime later, next year, perhaps? “Most certainly,” he continued. “Our early access customers and internal use has already developed a robust roadmap and we anticipate these features and others to be rolled out over several subsequent software releases. We also expect to have a new hardware version based on our recently announced NFP-6xxx family of flow processors when available.”

CIOs unfurl newer efficiencies out of IT operations


How can today’s CIOs unfurl newer efficiencies out of their respective IT operations? At the same time, how can they present solutions that also empower their businesses even as those businesses work toward achieving their organizational goals? The answer, perhaps, lies in infrastructure optimization!

IT spending is said to be increasing, in line with the overall aim of accelerating a company’s business performance. A balance between innovation and maintenance is said to be critical for the effective functioning of IT.

According to Greg Crider, senior director of technology product marketing at Oracle, IT leaders need to embrace an alternative model for optimizing data center performance that eliminates much of the time and cost associated with integrating, tuning, and maintaining complex multi-tiered environments.

“IT leaders need to know that they have choices other than integrating these pieces themselves or paying a service partner to do some of the integration. Oracle, for instance, has thousands of examples of how to squeeze cost and complexity out of IT infrastructures by doing optimization projects at each layer,” he says.

There is typically a huge difference between installing a system and having it production-ready. Why? Ask any IT manager at any company, in case you do not believe this!

Crider says: “Many organizations just don’t have expertise in every dimension of a complex architecture. So, they have to rely on outside resources or make do with default configurations that don’t take into account everything else that is going on.

“Fortunately, many important business processes are now available as optimized end-to-end solutions based on open standards.  Enterprises are beginning to realize that they can deploy customized, secure, high performance applications without taking on all the cost of integration, tuning and maintenance.”


Driving a sustainable, future-focused transformation across an IT infrastructure is a layered process that requires the IT leaders to optimize the entire spectrum of their data center hardware and software operations. This includes servers, databases, middleware and business process management software, and so on.

Standardization, virtualization, consolidation, and rock-solid cloud orchestration (management) capabilities are necessary steps organizations must take to improve the application lifecycle management process, according to Mike Palmeter, director of product management with Oracle.

“Many companies have started down the virtualization path, and have even consolidated some of the tier 2 and 3 workloads, but many still have yet to best determine how to standardize, what to standardize upon, and how to best manage all these disparate applications and workloads,” he says.

“These are key considerations especially as companies start moving mission critical workloads to a shared infrastructure. Furthermore, availability, data and app mobility, as well as performance become paramount as applications are moved from dedicated silos to a shared infrastructure.”

With the automation components that are inherently part of a properly deployed private cloud, IT administrators can install complex applications without going through all of the traditional configuration steps. As IT leaders look to get out from under these complexities, the benefits of highly integrated and engineered solutions become strikingly clear.

IT optimization and database consolidation!


How does an organization kick-start its transformation and achieve an optimized data center ready for the future? Does an organization adopt a futuristic, focused program to achieve immediate wins?

There is a need for CIOs to formulate a winning, if not at least a workable, strategy! In a white paper titled “Planning for Tomorrow’s Data Center through Strategic Infrastructure Optimization”, Greg Crider, senior director of technology product marketing at Oracle, recommends that companies take a look at their existing IT infrastructure and explore their strongest business needs. Companies can also find out where it is possible to realize immediate business benefits.

For instance, a shared database platform gives IT the elasticity it needs to move resources where demand is greatest. Standard configurations mean fewer moving parts, which means faster provisioning, notes Willie Hardie, VP of database product marketing at Oracle, in the same white paper.

A quick poll on the challenges of database consolidation within an organization, cited in the same white paper, is interesting. At least 55 percent feel that IT resources are focused on managing existing systems, while 41 percent say there is no IT budget to embark on a consolidation project.

According to Crider, IT leaders need to embrace an alternative model, optimized from end-to-end by taking advantage of collective expertise and experiences throughout deployment.

Organizations also face a number of challenges, such as IT resources focused on managing existing systems, limited IT budgets to embark on consolidation projects, and running the risk of compromising enterprise information security. Hence, there is a need for database consolidation.

Benefits of database consolidation
The benefits of database consolidation are huge. According to a survey, 74 percent say it reduces IT costs, while 67 percent say it reduces complexity in the data center. It was found that 29 percent had already consolidated some or all of the databases, and 22 percent had started the process.

“Private cloud computing is about consolidation, standardization and rationalization of the hardware, storage and software portfolio,” explains Hardie. As IT leaders move toward transforming data centers, a well-planned database consolidation strategy can help drive sustainable success. Similarly, application consolidation plays a key role in helping IT leaders establish and support manageable environments, transforming data centers so they are better prepared for an uncertain future.

Considering the potential benefits associated with application consolidation and optimization, it’s easy to understand why IT leaders are serious about embracing well-crafted plans as a key component of their data center transformation. Of course, IT leaders need to arm themselves with the right tools to overcome the potential obstacles head-on.

Specifically, it’s crucial to start by gaining an understanding of the potential challenges, developing a strategic plan, establishing a well defined end goal, securing senior support early in the process and staying determined throughout the process. That’s the surest route to an optimized infrastructure and a data center designed for the future.

Standardization, virtualization, consolidation, and cloud orchestration capabilities are necessary steps for organizations as they work to improve the application lifecycle management process, explains Mike Palmeter, director of product management with Oracle. IT leaders are starting to realize that even though they could create a list and build a system with the best-of-breed components, there is still a need to account for system efficiency.

Cloud deployment trends in APAC: IDC


Avneesh Saxena, Group VP, Domain Research Group, IDC Asia/Pacific.

Enterprises should start thinking about what needs to go into their respective clouds. Transformative IT is upending the rules. There will be a cloud-based convergence era in the future, according to Avneesh Saxena, Group VP, Domain Research Group, IDC Asia/Pacific. He was presenting on cloud deployment trends in the Asia Pacific region at the recently held Intel APAC Cloud Summit.

According to him, China and India, at 9 percent each, lead in GDP growth and are the key change agents. He referred to four mega trends:
* Information – exploding data. Just under 50 percent of TB shipped in 2014 will be in the public cloud.
* Mobility – third platform for industry growth. Mobile devices, services and applications will grow from 2011. This will be the intelligent economy.
* The technology catalyst. Servers, blades, cores, VMs, data transmission, 10G ports — all will grow, some, by at least 5-10 times.
* IT spending (external spend only) will be worth $282 billion in Asia Pacific excluding Japan (APeJ). Also, 31 percent of CIOs and 25 percent of LoBs (line of business) plan to spend 11-30 percent more.

The top three priorities for CIOs and LoBs are as follows:
* Simplify the IT infrastructure.
* Lower the overall cost structure.
* Harness IT for competitive edge.

Enterprises will be investing more in mobility and analytics. There will be a move toward consolidation, virtualization and better efficiency, and toward a more flexible, agile and scalable infrastructure. Saxena outlined three key transformational trends:
* Behavior/access — mobility/analytics.
* Infrastructure/devices — convergence, virtualization.
* Delivery/consumption — cloud.

“Mobilution” is a confluence of factors: it is mobile everything. A lot of the distribution channels are also cloud driven. Analytics-led competitive acceleration is the primary objective of business analytics projects. Saxena added there could yet be another disruption, in the form of micro servers; the idea is to lower the cost of computing per unit of work. Even Intel’s infrastructure will be 75 percent virtualized three to four years from now.

There will also be converged infrastructure for private clouds. Besides, server virtualization is ramping up fast. There will be a huge increase in server shipments by 2014. Next, there will be device proliferation impact on client virtualization. There is a demand to connect all of our devices — smartphones, iPads, BlackBerrys, tablets, etc.

Evolving cloud business models include C2C. Consumer clouds are the most popular, such as Hotmail, Gmail, Google Docs, etc. B2C clouds are next, such as Netflix, Apple, Skype, etc. Finally, there are B2B clouds, the enterprise clouds, where security and SLAs are the differentiators.

Security/regulation are critical for public clouds. As of now, private clouds are deemed to be more secure than public clouds. Solving cloud security and compliance is a huge revenue opportunity for vendors.

Reference architecture — starting point to build and optimize cloud infrastructure


Rekha Raghu, Strategic Program Manager, Intel, Software and Services Group.

Rekha Raghu, Strategic Program Manager, Intel, Software and Services Group, discussed some reference architecture (RA) case studies. The Intel Cloud Builders program provides RAs: a starting point from which to build and optimize cloud infrastructure.

The RA development process takes anywhere from two to three weeks. It involves exploration, planning, integration, testing and development. The RAs are said to provide:
* Detailed know-how guides.
* Practical guidance for building and enhancing cloud infrastructure.
* Best-known methods learned through hands-on lab work.

RA case study # 1 – efficient power management
Data center power management involves monitoring and controlling server power and, later, managing and coordinating at the data center level. Dynamic power management operates at the server, rack, and data center levels.

Power management use cases help save money via real-time monitoring, optimized workloads and energy reduction. They allow scaling further via power guard rails and optimization of rack density. They also help prepare for the worst in terms of disaster recovery/business continuity.

Intel also presented a power management RA overview as well as an implementation view. The monitoring, reporting and analysis provide insight into energy use and efficiency, as well as CO2 emissions. Rack density optimization and power guard rails enable more servers to be deployed per rack. This improves the opex cost of power delivery per system and extends the capex data center investment with increased node deployments.
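
A minimal sketch of the power guard rail idea, under stated assumptions: poll per-node power, and when the rack exceeds its budget, cap the nodes proportionally until it fits. The read_node_power() and set_power_cap() helpers, the rack size and the budget are hypothetical stand-ins for whatever management interface (Node Manager, IPMI or similar) a real deployment would use.

```python
# Hypothetical power guard rail loop; the node-power helpers are stand-ins.
import random
import time

RACK_BUDGET_WATTS = 8000
NODES = [f"node{i:02d}" for i in range(1, 33)]        # an assumed 32-server rack

def read_node_power(node):
    # Stand-in: a real deployment would query the node's BMC / Node Manager here.
    return random.randint(150, 350)

def set_power_cap(node, watts):
    # Stand-in: a real deployment would push the cap through the management interface.
    print(f"{node}: cap set to {watts} W")

def guard_rail(cycles=3, poll_seconds=1):
    """Poll rack power and cap nodes proportionally whenever the budget is exceeded."""
    for _ in range(cycles):
        draw = {n: read_node_power(n) for n in NODES}
        total = sum(draw.values())
        if total > RACK_BUDGET_WATTS:
            scale = RACK_BUDGET_WATTS / total
            for node, watts in draw.items():
                set_power_cap(node, int(watts * scale))
        time.sleep(poll_seconds)

guard_rail()
```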

As for disaster recovery/business continuity, there is policy-based power throttling per node to bring the data center back to life more quickly and safely. The next step involves inlet temperature monitoring and response based on thermal events (already available in Intel Intelligent Power Node Manager).

Workload-power optimization identifies optimal power reduction without performance impact. Customized analysis is required as each workload draws power differently.

RA case study # 2 – enhanced cloud security
If one looks at the trends in enterprise security, there are shifts in the types of attacks. The platform itself is now a target, not just the software. Stealth and control are the attackers’ objectives.

There are increased compliance concerns. HIPAA, Payment Card Industry (PCI) requirements, etc., demand security enforcement and auditing. Changes in architectures require new protections as well. These include virtualization and multi-tenancy, third-party dependencies, and location identification.

Trusted compute pool usage models lead to compliance and trust in the cloud. Multi-tenancy can complicate compliance, and there is a need for software trust despite physical abstraction. Compliance also requires effective reporting, and VM migration needs to be enforced based on security policy.

Intel-VMware-HyTrust enables trusted compute pools. The outcome is that data integrity is secure and there is no compliance violation.

Intel Trusted Execution Technology (TXT) enforces platform control. It allows greater control of launch stack and enables isolation in boot process. It also complements runtime protections, and reduces support and remediation costs. Hardware based trust provides verification useful in compliance.

HyTrust appliance enforces policy. It is a virtual appliance that provides unified access control, policy enforcement, and audit-quality logging for the administration of virtual infrastructure.
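
A minimal sketch of how such a policy gate might look, assuming a host trust table fed by an attestation service: regulated workloads may only be placed on, or migrated to, hosts whose measured launch has been attested as trusted. The host names, labels and lookup table are invented for illustration and are not HyTrust’s or VMware’s actual interfaces.

```python
# Hypothetical trusted-compute-pool placement check; trust data is assumed to
# come from an attestation service backed by a measured (TXT-style) launch.
ATTESTED_TRUSTED = {"esx-01": True, "esx-02": False, "esx-03": True}

def allowed_targets(vm_labels, hosts):
    """Hosts a VM may run on: regulated workloads stay inside the trusted pool."""
    if {"pci-scope", "regulated"} & set(vm_labels):
        return [h for h in hosts if ATTESTED_TRUSTED.get(h, False)]
    return list(hosts)

def authorize_migration(vm_labels, destination):
    if destination not in allowed_targets(vm_labels, ATTESTED_TRUSTED):
        raise PermissionError(f"policy violation: {destination} is not in the trusted pool")
    return True

print(allowed_targets(["pci-scope"], ATTESTED_TRUSTED))   # ['esx-01', 'esx-03']
print(authorize_migration(["pci-scope"], "esx-03"))       # True
```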

Intel provides solutions to proactively control and audit virtualized data centers.

Intel Cloud Builders program helps utilize proven reference solutions to ease deployments


Billy Cox, director, Cloud Strategy, Software and Services Group, Intel.

Billy Cox, director, Cloud Strategy, Software and Services Group, Intel, said that IT and service providers need to define and prioritize IT requirements. Products and technologies go on to take advantage of new capabilities in Intel platforms. The Intel Cloud Builders program helps utilize proven reference solutions to ease your deployments.

Technology to optimize the cloud node requires orchestration and automation. This involves compliance and security, high-performance I/O, and density and efficiency.

So, what is the need for reference architectures now? Where do you draw the lines for a reference architecture? Enterprises have mostly relied on build-to-order architectures. With the advent of cloud, there is a shift toward configure-to-order architectures.

Cloud hardware architectures generally focus on homogeneous compute, flat networks and distributed storage. The cloud software IaaS stack looks at horizontal management roles. The focus is on service delivery.

The Open Data Center usage models include:
* Secure federation – provider assurance and compliance monitoring.
* Automation – VM interoperability and ID control.
* Common management and policy – regulatory framework.
* Transparency – service catalog, standard unit of measurement, and carbon footprint, where cloud services become “CO2 aware”.

Cox also referred to data center usage models for 2011, where Intel is delivering products and technologies to address them.

The Intel Cloud Builders program reference architectures are a starting point from which to build and optimize cloud infrastructure. Solutions are available today to make it easier to build and optimize cloud infrastructure. Intel offers proven, open, interoperable solutions optimized for IA capabilities. It is also establishing the foundation for more secure clouds.

Data center efficiency priorities involve achieving efficiency and reliability by maximizing available capacity and modular build out for growth. Intel has a holistic approach – systems, rack, design and monitoring.

For instance, the Unified Network consolidates traffic on a 10G Ethernet fabric. It simplifies the network by migrating to 10GbE and lowers TCO by consolidating data and storage networks. A flexible network is the foundation of cloud architecture.

Intel Cloud Builders is easing cloud deployments via proven, interoperable solutions for IT.

Intel’s vision for the cloud!


Allyson Klein, director, Leadership Marketing, Data Center Group, Intel Corp.

According to Allyson Klein, director, Leadership Marketing, Data Center Group, Intel Corp., the compute continuum has arrived. The connected world is becoming larger and more diverse. There will be more than 1 billion new users by 2015.

We are witnessing a sea of new devices, limited only by our creativity. More than 15 billion devices are estimated to be connected by 2015. All of these devices are creating a renaissance of the compute experience, that is, pervasive and simple computing. These will once again change the ways we work and live.

It is also a new frontier of insight, simplifying our lives and making our world more efficient. So, what about the cloud? The cloud will be the performance engine of the compute continuum.

A new economic model for computing has been introduced: roughly every 600 Apple iPhones will need a new server, and so will roughly every 120 iPads. And this is said to be only the beginning!
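
As a back-of-the-envelope illustration of that ratio (only the quoted ratios come from the talk; the device counts below are made up):

```python
# Worked arithmetic using the quoted ratios; input volumes are illustrative.
IPHONES_PER_SERVER = 600
IPADS_PER_SERVER = 120

def servers_needed(new_iphones, new_ipads):
    return new_iphones / IPHONES_PER_SERVER + new_ipads / IPADS_PER_SERVER

# 60 million new iPhones and 12 million new iPads would imply roughly
# 100,000 + 100,000 = 200,000 additional servers.
print(f"{servers_needed(60_000_000, 12_000_000):,.0f} servers")
```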

The data center processor growth has been >2X in 10 years. Data center acceleration is estimated to be >2X in the next five years. Cloud’s contribution to data center growth will be significant. In 2010, cloud was contributing 10 percent. This should double to 20 percent in 2015.

Intel’s strategy for creating the cloud includes:
* IT and service providers – define and prioritize IT requirements.
* Products and technologies – take advantage of new capabilities in Intel platforms.
* Intel Cloud Builders – utilize proven reference solutions to ease your deployments.

The Open Data Center Alliance is a catalyst for change, given that open and interoperable solutions are essential. In October 2010, the Alliance established the first user-driven organization for cloud requirements. There were 70 IT leaders joined by technical advisor Intel. Five technical working groups were formed.

In June 2011, the Open Data Center Alliance released the first user-driven requirements for the cloud. It now has four times as many members, representing over $100 billion in annual IT spend. There have been new technical collaborations as well: four organizations and four initial solutions providers. The Alliance endorses immediate use to guide member planning and purchasing decisions.

Cloud key strategy for Intel: Liam Keating


Liam Keating, Intel APAC IT director and China IT country manager.

“The benefits of cloud are real! We have so far seen $17 million savings to date from our internal cloud efforts,” said Liam Keating, Intel APAC IT director and China IT country manager. He was speaking at the Intel APAC Cloud Summit in Malaysia.

Intel currently runs 91 data centers globally, down from over 140 a couple of years ago. Cloud has now become a key strategy for Intel.

If one views Intel’s data center profile, it looks like this:
D – Design – Expanded HPC solutions.
O – Office – Enterprise private cloud.
M – Manufacturing – Factory automation.
E – Enterprise – Enterprise private cloud.
S – Services – Enterprise private cloud.

In the past (in 2009), Intel had 12 percent virtualization, and it had a design grid as well. According to Keating, Intel’s experience with grid computing helped shape the company’s cloud computing strategy. Currently, Intel boasts over 50 percent virtualization; in future, this would move to over 75 percent. Keating added that Intel will continue to experiment with and evolve the public cloud.

As for the applications residing on the internal cloud, these include: engineering 5 percent, sales/marketing 19 percent, ERP 13 percent, HR/finance/legal 22 percent, operations/security/manageability 26 percent, and productivity/collaboration 15 percent.

The business benefits are immense. “We are improving the velocity and availability of IT services,” Keating said. He outlined five strategic benefits, as below:
* Agility – immediate provisioning.
* Higher responsiveness.
*  Lower business costs.
* Flexible configurations.
* Secured infrastructure.

In terms of business velocity, provisioning time has been reduced from 90 days to three hours, and Intel is now on its way to minutes! As for efficiency, server consolidation is at a 20:1 ratio. In terms of capacity, there has been a shift from capacity planning to demand forecast modes. Finally, on quality, standard configurations have improved consistency and enabled automation.

Intel has learned best-practice lessons from implementing the cloud. First, the cloud terminology itself had to be settled. There has been leadership support as well as IT business partnerships. Intel has also set short-term priorities: pervasive virtualization and faster provisioning. Intel has also learned to manage with data – P2V ROI, measured services, business intelligence (BI) collection, server sizing, etc.

Current challenges facing Intel include asset management and utilization. There is a need to be cognizant of performance saturation, and also understand the degrees of separation. The integrated management view is critical in all of this.

Another challenge is presented by capacity planning, which is shifting to demand forecasting. Quicker provisioning requires a view into the future cloud as well. Next, automation reinforces the workforce too!

Intel’s IT division has successfully developed a private cloud. This has aligned the IT strategy to business needs. Business benefits will generate value. Cloud transition has now become a multi-year journey at Intel.

Intel leads industry transformation to open data centers and cloud computing


Intel India held a demonstration of “The-Cloud-in-a-Box,” conducted by Nick Knupffer, marketing program manager, Intel Corp.

According to him, user experience is the driving force in our industry: both device and the cloud. Innovation starts with the best transistors. He added that cloud computing is not only inevitable; it is imperative. Intel is said to have the right solutions required to enable a connected world.

By 2015, there will be more users, over 15 billion connected devices, and naturally, more data: 1 zettabyte of Internet traffic. Internet and device expansion drives new requirements for IT solutions.

Intel’s Cloud 2015 vision.

Intel’s Cloud 2015 vision is one of federated, automated and client aware networks. Federated, so that data can be shared securely across public and private clouds. Client aware, so that services can be optimized based on device capability. Automated, so that IT can focus more on innovation and less on management.
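
A toy sketch of the “client aware” leg of that vision, with an invented capability table and thresholds: the service looks at what the requesting device and its link can handle and adapts the response accordingly.

```python
# Hypothetical client-aware service adaptation; profiles and thresholds are invented.
DEVICE_PROFILES = {
    "smartphone": {"max_resolution": "720p",  "codec": "h264-baseline"},
    "tablet":     {"max_resolution": "1080p", "codec": "h264-main"},
    "desktop":    {"max_resolution": "1080p", "codec": "h264-high"},
}

def negotiate_stream(device_type, link_mbps):
    profile = DEVICE_PROFILES.get(device_type, DEVICE_PROFILES["smartphone"])
    if link_mbps < 3:
        # Fall back to a lighter encode when the access link is constrained.
        return {"resolution": "480p", "codec": profile["codec"]}
    return {"resolution": profile["max_resolution"], "codec": profile["codec"]}

print(negotiate_stream("tablet", link_mbps=10))   # {'resolution': '1080p', 'codec': 'h264-main'}
```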

Data center processor growth has been >2X in five years. The mobile Internet that runs on Intel is also growing its data center business. Intel has moved to more advanced process nodes, and 22nm is a revolutionary leap in process technology. Today, 70 percent of global CIOs have cloud security top of mind.

Intel is now building the ecosystem around better, faster and stronger security based on Xeon. Here, the Intel Advanced Encryption Standard New Instructions (Intel AES-NI) and the Intel Trusted Execution Technology (Intel TXT) are currently prominent.
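
A small, hedged illustration of why AES-NI matters in practice: common crypto stacks (here, the Python cryptography package on top of OpenSSL) use the AES-NI instructions transparently when the CPU exposes them, so bulk AES-GCM encryption of tenant data gets the hardware speedup without code changes. The key handling below is deliberately simplistic.

```python
# AES-GCM via the "cryptography" package; OpenSSL picks up AES-NI automatically
# on CPUs that support it. Key management here is simplified for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
plaintext = b"tenant record" * 1024
ciphertext = aesgcm.encrypt(nonce, plaintext, b"tenant-42")   # associated data binds a tenant ID
assert aesgcm.decrypt(nonce, ciphertext, b"tenant-42") == plaintext
```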

Brocade launches VDX switches for virtualized, cloud-optimized data centers

November 25, 2010

Brocade recently launched what it claims is the industry’s first true Ethernet fabric switching solutions that are purpose-built for highly virtualized and cloud-optimized data centers.

Its VDX product family of Ethernet fabric switches makes use of Virtual Cluster Switching (VCS) technology. The switches scale virtualized environments without adding network complexity and enable building flexible, open and hypervisor-agnostic networks.

Brocade also launched the VDX 6720 switches, the first in the VDX family. These feature 10 GbE wire speed, low latency, and LAN/SAN convergence. They run on a sixth-generation fabric ASIC and proven OS technology. The key differentiators: pay-as-you-grow ports on demand and low power usage.

What’s new?

Rajesh Kaul, country manager, Brocade.

So, what’s new about this switch? Rajesh Kaul, country manager, Brocade, said: “The technology underlying the Ethernet fabric has all of the resilience of the fiber fabric and the simplicity of Ethernet built onto it.

“Every point of the network is connected to every other point on the network, rather than in classical Ethernet. Also, we don’t use the spanning tree protocol (STP). We use the TRILL protocol. In this case, every path is active.”

Brocade is working with the Internet Engineering Task Force (IETF) on a standard called Transparent Interconnection of Lots of Links (TRILL). This provides multiple paths via load splitting.

TRILL will allow reclaiming network bandwidth and improving utilization by establishing the shortest path through Layer 2 networks and spreading traffic more evenly. Hence, the network can respond faster to failures.
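
A minimal sketch of the multipath idea behind TRILL, not Brocade’s actual implementation: hash each flow’s 5-tuple to pick one of several equal-cost next hops, so load is spread across links while all packets of a single flow stay on one path and arrive in order. The link names and the hash choice are illustrative.

```python
# Illustrative equal-cost multipath selection by flow hash (not Brocade's code).
import zlib

EQUAL_COST_LINKS = ["link-a", "link-b", "link-c", "link-d"]

def pick_link(src_ip, dst_ip, proto, src_port, dst_port):
    flow_key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return EQUAL_COST_LINKS[zlib.crc32(flow_key) % len(EQUAL_COST_LINKS)]

# Two different flows between the same hosts may take different links,
# but every packet of a given flow maps to the same one.
print(pick_link("10.0.0.1", "10.0.1.1", "tcp", 49152, 80))
print(pick_link("10.0.0.1", "10.0.1.1", "tcp", 49153, 80))
```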

Kaul added that these devices operate at Layer 2. “Every device is intelligent and a master device. So, this is a masterless switch.”
