In a network-centric view of the data centre, the network admin rules it all. Cisco’s so-called unified computing requires a change of control in which all data centre traffic is piped through the centre of the network, adding yet another coordination touch point. This scheme requires investing in a very large, complex, and expensive networking system, effectively “one giant switch” in the middle of the data centre.
Before committing to a giant switch from a giant switch vendor, please consider the following issues:
Issue: ease of purchase/installation
Purchasing a Cisco UCS involves specifying a complex set of blade servers, blade chassis, UCS fabric extenders, and UCS fabric interconnects (1), not to mention the associated network and storage infrastructure. It’s no surprise that this complexity has forced Cisco to create new certifications for UCS design and support. (2)
When a customer adds the Cisco Nexus 1000v for VMware vSphere 4 Enterprise Plus with 24×7, 3-year support, it adds US$1,138.70 per processor. (3) This extra cost adds up fast: a rack of 48 two-processor servers would cost an additional US$109,315.20 just for the Nexus 1000v software.
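As a quick sanity check on that figure, the arithmetic below simply multiplies the quoted per-processor price by the number of processors in the rack described above:

```python
# Rough check of the Nexus 1000v licensing figure quoted above.
price_per_processor = 1138.70      # US$ per processor (from the cited VMware price list)
servers_per_rack = 48              # two-processor servers in one rack
processors_per_server = 2

rack_total = price_per_processor * servers_per_rack * processors_per_server
print(f"Nexus 1000v software per rack: US${rack_total:,.2f}")
# -> Nexus 1000v software per rack: US$109,315.20
```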
HP BladeSystem Matrix: One platform. One service engagement. One part number.
Our goal was to make the BladeSystem Matrix simple to deploy and expand so you can focus on the job at hand, not on all the moving parts. Matrix is a converged infrastructure consisting of pools of compute, storage, virtual fabrics, and power and cooling that can be purchased and delivered as “chunks” of capacity as large as racks at a time.
To learn more: HP BladeSystems
Issue: complexity
In Cisco’s one-giant-switch model, all traffic must travel over a physical wire to a physical switch for every operation. (4) Consequently, it appears that even traffic between two virtual servers running next to each other on the same physical server would have to traverse the network, making an elaborate “hairpin turn” within the physical switch, only to traverse the network again before reaching the other virtual server on the same physical machine. Return traffic (a “response” from the second virtual machine) would have to do the same. Each of these packet traversals logically accounts for multiple interrupts, data copies, and delays for your multi-core processor.
In a distributed switch model, the packet never leaves the physical server, enabling copy avoidance and high-performance I/O.
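To make the contrast concrete, here is a minimal sketch, using purely illustrative hop counts rather than measurements of any product, that counts the wire traversals a request/response exchange would incur in each model when both virtual machines share one physical server:

```python
# Minimal sketch: count wire traversals for a request/response between two
# VMs on the SAME physical host. Hop counts are illustrative assumptions,
# not measurements of any specific product.

def hairpin_model_traversals() -> int:
    """Centralised giant-switch model: every packet exits the server,
    hairpins inside the physical switch, and comes back in."""
    request = 2   # server -> switch, then switch -> same server
    response = 2  # same path in reverse
    return request + response

def distributed_switch_traversals() -> int:
    """Distributed (host-resident) switch: the packet is switched in
    server memory and never touches the physical wire."""
    return 0

print("giant-switch model: ", hairpin_model_traversals(), "wire traversals")
print("distributed model:  ", distributed_switch_traversals(), "wire traversals")
```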
Issue: control
The Cisco UCS model gives the network admin unprecedented control of the data centre. (5) This change of control does not take into account the roles and responsibilities of the different centres of expertise within the data centre. In this view, manageability is stripped from everyone except the networking administrators.
In our opinion, Cisco is attempting to make itself, through the Cisco network, the control point for everything in the data centre, using an unproven proprietary approach that requires massive investments and difficult migrations by customers.
When we at HP developed Virtual Connect, we were thinking about how to simplify the way customers connect their servers to LANs and SANs. HP’s Virtual Connect makes the server administrator self-sufficient by allowing the complete LAN and SAN connection information and physical connections to move with the workload from one server bay to another without impacting the LAN or SAN.
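Conceptually, a Virtual Connect server profile behaves something like the sketch below; the class and field names are hypothetical illustrations, not the actual Virtual Connect interface, but they show the idea of network and storage identity being assigned to a profile rather than burned into a bay:

```python
# Illustrative sketch only: hypothetical names, not the Virtual Connect interface.
# The point is that LAN/SAN identity lives in a profile, not in the server hardware.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class ServerProfile:
    name: str
    ethernet_macs: list[str]         # MAC addresses the LAN sees
    fc_wwns: list[str]               # WWNs the SAN sees
    assigned_bay: int | None = None  # enclosure bay currently carrying this identity

def move_profile(profile: ServerProfile, new_bay: int) -> None:
    """Reassign the profile to another bay; the MACs and WWNs are unchanged,
    so neither the LAN nor the SAN sees any difference."""
    profile.assigned_bay = new_bay

web_server = ServerProfile(
    name="web-tier-01",
    ethernet_macs=["00:17:A4:77:00:10"],
    fc_wwns=["50:06:0B:00:00:C2:62:00"],
    assigned_bay=3,
)
move_profile(web_server, new_bay=7)  # workload now runs in bay 7 with the same identity
print(web_server)
```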
Issue: security
In the Cisco UCS data centre approach, security happens at the core of the network, which makes security a major concern. In the article “Questions Arise About Security for Cisco UCS,” (6) experts said:
“This is an integrated solution, so I guess if you crack part of it, you crack all of it.”
“All a [knowledgeable] hacker has to do to get into this UCS system is to hack into the [Cisco] switch, which controls the data flow and the data itself,” Desai said. “For some [sophisticated] hackers, this is not that hard to do.”
Issue: scalability
Cisco’s centralised approach is similar to a legacy mainframe: everything must run on one massive component, which becomes an expensive, hard-to-scale bottleneck. One of the most important hidden dangers of a centralised giant-switch model lies in scaling your data centre. The central switch must handle all of the cross-sectional bandwidth and perform every packet operation at the full cross-sectional bandwidth. In this model, you need to plan and pay for all the ports up front, whether you end up using them or not. Data centres grow by adding compute and storage resources to the edge of the network, so when implementing a giant switch you need enough virtual ports, address table space, and packet processing available to allow for future growth. This could be challenging for the Nexus 5020, which supports only 16K MAC addresses. Scaling bandwidth at the edge of the network simply makes more sense.
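As a back-of-the-envelope illustration, and assuming a hypothetical consolidation ratio of 20 virtual machines per server rather than any sizing guidance, the sketch below shows how quickly a 16K MAC table can be consumed once every virtual NIC appears at the central switch:

```python
# Back-of-the-envelope sketch: how fast a 16K MAC table fills when every
# virtual NIC is visible at the central switch. Densities are assumptions.
mac_table_size = 16 * 1024       # 16K entries (Nexus 5020 figure cited above)
servers_per_rack = 48
vms_per_server = 20              # assumed consolidation ratio
macs_per_vm = 1                  # at least one vNIC per VM

macs_per_rack = servers_per_rack * vms_per_server * macs_per_vm
racks_supported = mac_table_size // macs_per_rack
print(f"MAC entries per rack: {macs_per_rack}")
print(f"Racks before the table is exhausted: {racks_supported}")
# -> 960 entries per rack, roughly 17 racks before the table is full
```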
Cisco’s vision seems simply to require more Cisco network equipment, not less. HP Virtual Connect Flex-10 delivers 4-to-1 consolidation today. (For more, see the IDC whitepaper on realizing TCO savings with HP BladeSystem Virtual Connect Flex-10 technology.)
Issue: vendor lock-in
A report stated, “UCS will not accept blade servers from other vendors like HP and IBM. Nor will the Cisco-developed blade server within UCS work in any other vendors’ data centre unification or consolidation platform.”
Cisco has defined a new proprietary frame protocol, VN-Tag, for UCS’s Network Interface Virtualization model, such that, according to Cisco, the attached physical switch cannot be just any IEEE 802.1D-compliant Ethernet switch. (7)
Another example: if a customer wants to connect an existing blade environment, such as an HP BladeSystem with a Cisco 3120 switch integrated in it, a Nexus 1000v soft switch would be unable to pass a VN-Tag to an upstream Nexus 5000 switch. In other words, Cisco’s VN-Tag approach doesn’t even work with Cisco’s own switches! (8)
Cisco’s new server vision is largely dependent on new proprietary features in its Nexus switches, UCS Fabric Extenders, and Fabric Interconnects, limiting customers’ choices to this approach. HP is committed to industry-standard protocols. Because HP Virtual Connect uses native, industry-standard Ethernet and Fibre Channel protocols, it can attach to any industry-standard Ethernet or NPIV-enabled Fibre Channel switch from vendors such as HP ProCurve, Brocade, Blade Network Technologies, and even Cisco.
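To make the interoperability point concrete, here is a purely conceptual sketch; the per-frame tag and function names are assumptions for illustration, not the actual VN-Tag encapsulation, but they show why a frame that depends on an extra, non-standard tag needs special support at both ends of the link, while an untagged standard frame does not:

```python
# Conceptual sketch only: the per-frame tag below is an illustration of the
# idea, NOT the real VN-Tag format or any actual switch behaviour.

def standard_8021d_forward(frame: dict) -> str:
    """Any IEEE 802.1D-compliant switch forwards on the destination MAC alone."""
    return f"forwarded toward {frame['dst_mac']}"

def extension_dependent_forward(frame: dict) -> str:
    """A switch relying on the proprietary extension also needs a per-VM tag
    to tell apart virtual interfaces multiplexed over one physical link."""
    if "vm_tag" not in frame:
        raise ValueError("cannot identify which virtual interface sent this frame")
    return f"forwarded for virtual interface {frame['vm_tag']}"

plain_frame = {"dst_mac": "00:1B:44:11:3A:B7", "payload": b"..."}
print(standard_8021d_forward(plain_frame))        # works with any standard switch
try:
    print(extension_dependent_forward(plain_frame))
except ValueError as err:
    print("extension-dependent switch:", err)     # both ends must speak the extension
```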
To learn more: HP Virtual Connect Technology
The HP alternative
The question still remains: what should a data centre look like? We believe the ideal data centre model should have a simple, high-performance, highly available core, with intelligence and automation at the network edge. If this approach sounds familiar, it should. At HP ProCurve, we’ve been talking about the fundamentals of our Adaptive Edge Architecture (AEA) for many years now. With the AEA, and the Adaptive Networks vision based on it, policy control is managed from the centre of the network, while policy enforcement occurs at the edge, at the point where users and devices attach.
Extended to the data centre environment, a distributed-intelligence approach such as the AEA offers an alternative to the one-giant-switch view of the data centre.
Building solutions on industry standards, and driving those standards, has been a focus of HP and ProCurve Networking for 30 years.
HP’s position in the data centre has been built over decades of innovation, experience, and market leadership. HP’s Adaptive Infrastructure (AI) strategy provides proven methodologies that enable customers to move from high-cost IT islands to an automated, service-ready technology infrastructure that drives more value to their businesses. Built on an industry-standard platform, HP BladeSystem Matrix is the most advanced, best-integrated, and easiest way for businesses to achieve an AI, and it can be integrated into existing infrastructure.
To learn more about HP data centre solutions, please visit the following:
» HP’s Data Center Transformation
» HP ProCurve Data Center Solutions
» HP BladeSystem Matrix
(1) “Cisco California Pricing Revealed”
(2) Cisco Computing Design, Cisco Computing Support
(3) VMware price list
(4) Cisco Seminar
(5) See Cisco, “Virtual Networking Features of the VMware vNetwork Distributed Switch and Cisco Nexus 1000V Series Switches,” which says: “Configuration and management console and interface: Virtual networking with VMware vSwitches is configured through the VI Client interface. A VMware vCenter Server must be used when configuring and using the VMware vDS. The Cisco Nexus 1000V Series uses a combination of the Cisco command-line interface (CLI) to allow the network administrator to configure network policy and VMware vCenter Server to preserve the virtual machine provisioning workflow.”
(6) “Questions Arise About Security for Cisco UCS”
(7) Source: “Cisco VN-Link: Virtualization-Aware Networking”
Cisco explains: “An important consequence of the NIV [Network Interface Virtualization] model is that the VIS [Virtual Interface Switch] cannot be just any IEEE 802.1D-compliant Ethernet switch, but it must implement some extensions to support the newly defined satellite relationships. These extensions are link local and must be implemented both in the switch and in the interface virtualizer. Without such extensions, the portions of traffic belonging to different virtual machines cannot be identified because the virtual machines are multiplexed over a single physical link.”
(8) HP Virtual Connect technology implementation for the HP BladeSystem c-Class