HP StoreVirtual NetworkRAID explained

Since HP decided to deliver a free 1TB version of the StoreVirtual VSA with (almost) every HP ProLiant Gen8 server, I notice that more and more of my customers are interested in ‘Lefthand’ storage.
The fact that my VSA-based SAN volumes in my home lab/datacenter have an uptime of 4.5 years now also helps them to get started with this, IMHO, powerful storage platform.

So over the last months I have had to explain quite often what is so specific about Lefthand, aka StoreVirtual, and the so-called Network RAID that is unique to this platform.

Therefore I decided to put some time into this article as a general reference… Enjoy!


NetworkRAID (NR) is similar to RAID functionality on spinning media, but it is applied across the network on network-attached storage appliances.

In the rest of this explanation I will call them nodes. These nodes can be physical appliances (4130/4330/4335/4530…) or the software-defined version, the Virtual SAN Appliance (VSA).


Several NetworkRAID levels exist. These levels define how many copies of each block of data will be written across the nodes, and so deliver a certain level of data protection.


NR0 means every block of data will be written once (just like RAID0), striped across the nodes. Failure of 1 node means loss of data, since no copy of the blocks on the failed node exists elsewhere.

NR10 is the most commonly used data protection level. All blocks of data will be written twice, striped across 2 nodes in a sequential way, as shown in the picture above.

NR10+1 provides an even higher data protection level since every block of data will be written 3 times.

NR10+2 gives the highest available data protection level. Every block of data will be written 4 times. This enables a specific scenario: in a multi-site situation, a complete site can fail, plus 1 additional node in the surviving site, without data loss.
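The placement described above can be sketched in a few lines of Python. This is only an illustration of the copy counts and the sequential striping, not the actual LeftHand OS algorithm; the node names and the `place_blocks` helper are hypothetical.

```python
# Copies written per NetworkRAID level (as described in the article).
REPLICAS = {"NR0": 1, "NR10": 2, "NR10+1": 3, "NR10+2": 4}

def place_blocks(blocks, nodes, level):
    """Return {block: [nodes holding a copy]} using sequential striping."""
    copies = REPLICAS[level]
    layout = {}
    for i, block in enumerate(blocks):
        # Each next block starts one node further; copies land on
        # consecutive nodes, wrapping around at the end of the cluster.
        layout[block] = [nodes[(i + r) % len(nodes)] for r in range(copies)]
    return layout

nodes = ["node1", "node2", "node3", "node4"]
print(place_blocks(["A", "B", "C", "D"], nodes, "NR10"))
# A -> node1+node2, B -> node2+node3, C -> node3+node4, D -> node4+node1
```

With 4 nodes and NR10 this reproduces the layout from the picture: each block lives on two consecutive nodes, wrapping around at node4.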

I always draw this picture with 4 nodes; for me it is the easiest way to explain the distribution of the data blocks.
Of course the minimum configuration is a setup with 2 nodes. With NR10 on these 2 nodes, both nodes will be mirrors of each other.


To avoid (useless) discussions: yes, the minimum config is 1 node. BUT then you can only use NR0, which effectively gives you an iSCSI-based DAS solution with no redundancy. An MSA would be a cheaper solution and is more redundant, since the MSA has 2 controllers. So for me a 2-node setup is the absolute minimum.

It is obvious that for NR10 you need a minimum of 2 nodes, for NR10+1 a minimum of 3 nodes, and for NR10+2 a minimum of 4 nodes.
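These minimums follow from one simple rule: a level is only selectable when the cluster has at least as many nodes as the level writes copies. A minimal sketch (level names as in the article; the `selectable_levels` helper is hypothetical, not a StoreVirtual API):

```python
# Copies written per NetworkRAID level.
REPLICAS = {"NR0": 1, "NR10": 2, "NR10+1": 3, "NR10+2": 4}

def selectable_levels(node_count):
    """Levels a cluster of the given size can support."""
    return [lvl for lvl, copies in REPLICAS.items() if node_count >= copies]

print(selectable_levels(2))  # ['NR0', 'NR10']
print(selectable_levels(4))  # ['NR0', 'NR10', 'NR10+1', 'NR10+2']
```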

There is also the option to select NR5 and NR6. Just like RAID5 and RAID6, parity will be calculated over the data blocks and striped across the nodes. Be aware that performance is much lower; these 2 parity-based levels are only recommended for read-mostly (archiving) environments.

The setup of NR10 gives us a nice additional feature.
By moving the odd OR the even nodes (the sequence is highly important here!) to a second site, we get site redundancy. You may notice (I hope) that all blocks A, B, C and D are available in both sites.
This means a complete site failure can occur without losing data. This is built into the StoreVirtual platform by default; no licenses whatsoever are needed, unlike with MSA and 3PAR!
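The site-redundancy claim is easy to verify with a small Python sketch. The layout and the site mapping below are hypothetical, mirroring the 4-node picture (odd nodes in site A, even nodes in site B); this is not how the managers actually track replicas.

```python
# Block -> the 2 nodes holding its NR10 copies (sequential striping).
LAYOUT = {
    "A": ["node1", "node2"],
    "B": ["node2", "node3"],
    "C": ["node3", "node4"],
    "D": ["node4", "node1"],
}
# Odd nodes in site A, even nodes in site B.
SITE = {"node1": "siteA", "node2": "siteB", "node3": "siteA", "node4": "siteB"}

def site_redundant(layout, site):
    """True if every block has at least one copy in every site."""
    all_sites = set(site.values())
    return all({site[n] for n in nodes} == all_sites for nodes in layout.values())

print(site_redundant(LAYOUT, SITE))  # True: a full site can fail without data loss
```

Swap two nodes between the sites (breaking the odd/even ordering) and the check fails, which is exactly why the article stresses that the node sequence matters.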


By implementing sites inside the CMC (Centralized Management Console) you don’t have to worry about the order of the nodes; the managers running on the nodes will make sure that both sites have a copy of each block of data.

The second nice feature is that all nodes are active in the data path. This means all volumes are active on all nodes.
In a clustered environment with, for instance, VMware servers, this means no volumes need to fail over from one SAN to the other (as with all traditional SAN boxes), and no HBA rescans whatsoever. Life can be easy for a VMware admin with StoreVirtual storage ;-).

In this multi-site scenario the inter-site link between the 2 sites is the weakness of this setup.
However, to avoid split-brain situations (where both sites ‘can’ become active, and so data corruption can occur after the link comes up again), HP provides 2 solutions to create ‘quorum’, aka majority.
If no majority is available in a site, data I/O will be stopped.
This quorum arbitrator can work in a reactive way (Virtual Manager) or proactively (Failover Manager, aka FOM). This functionality is out of scope for this article; if you want more information, you know where to find me. Or just use the Contact pages.
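The majority rule itself can be sketched in a few lines. This only mimics the "I/O continues where a majority of managers is reachable" idea, not the real manager election protocol; the counts in the example are an assumption (2 manager nodes per site plus 1 FOM at a third location).

```python
def has_quorum(reachable, total):
    """A site keeps serving I/O only if it reaches a strict majority of managers."""
    return reachable > total // 2

# Example: 2 managers per site + 1 FOM at a 3rd location = 5 managers total.
TOTAL = 5
print(has_quorum(3, TOTAL))  # True  -> surviving site (2) + FOM keeps I/O running
print(has_quorum(2, TOTAL))  # False -> an isolated site alone stops I/O
```

Note that with an even manager count (say 2 + 2, no FOM) neither site alone reaches a majority after a link failure, which is exactly why the odd-numbered arbitrator matters.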

Finally, you should know that NetworkRAID (the data protection level) can be set per volume.
So multiple volumes with different NetworkRAID levels can co-exist on a StoreVirtual cluster.
Know also that you can change the data protection level online: you can go from NR10 to NR0 or vice versa on the fly, and re-striping will occur in the background.

Author: Bart Heungens
