On SDN, NFV, and OpenFlow

Introduction

Network virtualisation and network function virtualisation are the talk of the town at the moment. The sheer number of vendor solutions is staggering, and the projected market opportunity is enough to make anyone’s head spin. I’ve been working in ‘traditional’ networking all my career, and it looks like there finally is something new and exciting, dare I say sexy, to play with.

Because of all the buzz, and because the topic reaches far beyond pure networking folks, there seems to be a lot of confusion around some of the terminology and which piece of technology is applicable where. In this post, which I hope will serve as an introduction, I want to touch upon the high-level differences between SDN and NFV and what they represent.

[Image: SDN_NFV]

History

Let’s start with a little history but feel free to skip to the topic sections below.

I’m going to choose the early work done at Stanford as the starting point. You can argue that the history of network virtualisation goes back way further, and in theory you would be right, but in order not to muddy the water too much I’ll start at Stanford.

Martin Casado, a 2007 Stanford graduate now at VMware, is generally credited with starting the software defined networking revolution. He and co-conspirators Nick McKeown (Stanford), Teemu Koponen (VMware), and Scott Shenker (Berkeley) built on earlier work that attempted to separate the control plane from the data plane, but SDN more or less started there.

Martin Casado, together with his colleagues, created Ethane, which was intended as a way of centrally managing global policy; it used a “flow-based network and controller with a focus on network security”.

The Ethane abstract reads:

“Ethane is a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows.”

If you look at Aruba and Meraki in the wireless space, for instance, you can see that they also have this model of separating the control and data planes. The networking models that companies like Yahoo!, Google, and Amazon had started to use are also very closely related to this concept of SDN.

The “official” starting point of SDN is when NOX and the OpenFlow specification emerged in 2008, but again, its legacy goes back a lot further.

The human middleware problem.

If you look at a traditional network setup, you most likely encounter a 3-tier network architecture, with the Core network at the top, the Distribution network in the middle, and the Access network close to the end-users and the applications. This model is highly biased towards North-South traffic, since Layer 3 decisions are made higher up instead of close to the access network. Port security and VLANs are applied at the Access layer, policy and ACLs are applied at the Distribution layer, and so on. All of this has redundancy and scalability limits due to operational complexity. Furthermore, you cannot use all the capacity of the links because of the design (spanning tree etc.).

[Image: accessdistribcore]

Now if you look at the practical aspects of packet forwarding: packets come in, they are evaluated against a table, and the table determines what happens to the packet, i.e. does the packet get dropped, forwarded, replicated, etc., before it goes out the other end. The tables against which the packets get evaluated are populated in one of two ways. Some tables (the L2 and L3 tables) are populated by dynamic (routing) algorithms; these algorithms run on every network node, and when there is a change in the network they recalculate state, communicate with all the other nodes, and repopulate the tables. This is actually rather functional: not a lot of human interaction, it should be very stable and able to dynamically react to changes.
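
To make that table idea concrete, here is a tiny Python sketch (purely illustrative, nothing like a real ASIC pipeline) of how a packet’s headers are matched against a table and the first matching entry decides the action:

# Toy illustration of table-based forwarding: a packet's header fields are
# matched against a table and the first matching entry decides the action.
# Purely illustrative; real switches do this in hardware (TCAM/ASIC).

FORWARDING_TABLE = [
    # (match fields, action)
    ({"dst_mac": "aa:bb:cc:dd:ee:01"}, {"type": "forward", "port": 1}),
    ({"dst_mac": "aa:bb:cc:dd:ee:02"}, {"type": "forward", "port": 2}),
    ({"vlan": 999},                    {"type": "drop"}),
]

def lookup(packet):
    """Return the action of the first table entry whose fields all match."""
    for match, action in FORWARDING_TABLE:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return {"type": "flood"}  # default behaviour when nothing matches

print(lookup({"dst_mac": "aa:bb:cc:dd:ee:02", "vlan": 10}))  # forward to port 2
print(lookup({"dst_mac": "ff:ff:ff:ff:ff:ff", "vlan": 10}))  # flood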

But there are other tables, like ACLs, QoS, port groups, etc., and these tables are populated using very different methods: human beings, scripts, and so on. These are slow and error prone. What happens if your network admin doesn’t show up Monday..?

Now there are management APIs and SDKs to programmatically deal with these tables, but there is no real standardised way, and certainly not one that works at scale: the APIs expose functions of the hardware chips, and these provide a fixed function. Moreover, the APIs have limited consistency semantics, i.e. you cannot be sure that if you programmed state it would get consistently implemented on all the network devices.

This problem led to the creation of OpenFlow, in order to programmatically manage all of the “datapath” state in the network.

OpenFlow

If you look at networking devices today you generally see 3 levels of abstraction (3 planes). The Data plane is where fast packet forwarding takes place (ASICs live here) based on the information located in the forwarding table; that forwarding table has to be populated with state from the Control plane. The Control plane is where your routing protocols live, like OSPF and BGP, which figure out the topology of the network by exchanging information with neighbouring devices. A subset of the “best” forwarding (IP routing) table in the Control plane gets pushed/programmed into the forwarding table in the Data plane. On top of both the Data and Control plane we have the Management plane. This is where configuration happens, through a command line interface, GUI, or API.

In an OpenFlow based network there is a physical separation between the Control plane and the Data plane. Control is usually centralised in a dedicated controller that establishes a TCP connection (OpenFlow runs over TCP) with each networking device, and it is the controller that runs topology discovery rather than a distributed Control plane on every device. The controller computes the forwarding information that needs to get pushed/programmed to all networking devices and uses the OpenFlow protocol to distribute this information to the flow tables of the devices.
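
A rough, hypothetical sketch of that controller model in Python (this is not the actual OpenFlow wire protocol or any real controller API, just the shape of the idea: one central brain programming the flow tables of many devices):

# Conceptual sketch of the controller model: one central process computes the
# forwarding state and pushes "flow entries" down to every switch. This is NOT
# the real OpenFlow wire protocol (a binary format over TCP, usually port
# 6633/6653); it only mimics the control flow.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_flow(self, match, actions, priority=100):
        # In real OpenFlow this would be a flow-mod message arriving over the
        # controller's TCP session with this switch.
        self.flow_table.append({"match": match, "actions": actions,
                                "priority": priority})

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, actions):
        """Program the same datapath state onto every device, consistently."""
        for sw in self.switches:
            sw.install_flow(match, actions)

switches = [Switch("leaf1"), Switch("leaf2"), Switch("spine1")]
ctrl = Controller(switches)
ctrl.push_policy({"eth_type": 0x0800, "ip_dst": "10.0.0.42"},
                 [{"output": "port-to-server-42"}])

for sw in switches:
    print(sw.name, sw.flow_table)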

[Image: floodlight_scaled]

The OpenFlow protocol is published by the Open Networking Foundation, and which version of OpenFlow a switch implements depends somewhat on the switch itself (because some feel the rate of innovation is too high ;-) ). In other words, not all OpenFlow capable devices implement the same version, so interoperability with features of later versions is not always guaranteed.

OpenFlow is an interface to your existing switches; it does not really enable features that are not already there.

So why do we have OpenFlow then?

OpenFlow provides you with the ability to innovate faster within the network, allowing you to gain a competitive advantage. It also moves the industry towards standards, which creates more market efficiency. And if you decouple layers, they can evolve independently; fixed monolithic systems are slower to evolve.

Software Defined Networking (SDN)

Obviously SDN and OpenFlow are closely related. You can think of OpenFlow as an enabler of SDN, since SDN requires a method for the Control plane to talk to the Data plane, and OpenFlow is one such mechanism.

In SDN we also talk about decoupling the Control and Data planes, but SDN also looks at what happens above the Control plane, i.e. how we take the information we get from the application space and use it as input to dynamically program the network below it.

Usually the Control plane functionality is handled by a controller cluster that is implemented as a virtual machine cluster running on top of a hypervisor for maximum flexibility.

[Image: nvp-architecture]

We also use the concept of overlay networking (called the encapsulation protocol in the picture above) to dynamically create connections between switches (or application tiers, or users, …) independent of the physical network. This way we can link applications using the optimal path independent of the physical topology.
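
For a feel of what such an encapsulation looks like, here is a minimal Python sketch in the spirit of VXLAN (header layout per RFC 7348, everything else simplified for illustration): the original frame is wrapped with a 24-bit virtual network identifier and carried over the physical underlay as ordinary traffic.

import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    flags = 0x08 << 24           # "I" flag set: the VNI field is valid
    header = struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)
    return header + inner_frame  # in reality this then goes inside UDP/IP

def vxlan_decap(payload: bytes):
    flags, vni_field = struct.unpack("!II", payload[:8])
    return (vni_field >> 8), payload[8:]

frame = b"\xaa\xbb\xcc\xdd\xee\x01" + b"\xaa\xbb\xcc\xdd\xee\x02" + b"..."
encapsulated = vxlan_encap(frame, vni=5001)
vni, original = vxlan_decap(encapsulated)
print(vni, original == frame)    # 5001 True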

As a crazy high-level example let’s look at Hadoop. Generally speaking there are 2 phases in the Hadoop process: the map phase and the reduce phase.

[Image: hadoopfig2]

What if the flow the traffic takes between these 2 phases is best served by another path, i.e. the compute nodes with different capabilities are located in different parts of the network and we want traffic to take the shortest path possible? In a physical world it would be very difficult to accomplish this dynamic reorganisation of the network, but in SDN land you potentially could.

A more traditional example: I need to spin up a new virtual machine, and this VM requires network connectivity, so I ask the network team to place the physical switch port in the right VLAN so the VM can start to communicate. In most organisations this could take some time, so what if the application itself, based on the SDN policies set in place (once), was able to direct this action?
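
Purely as a hypothetical illustration (the controller URL, token and payload below are made up; every SDN controller has its own API), the provisioning workflow could look something like this instead of a ticket to the network team:

import requests

# Hypothetical example of "the application directs the network": instead of
# asking for a switch port to be moved into a VLAN, the provisioning workflow
# calls the SDN controller's API to attach the new VM to a logical network.
# The endpoint, token and payload are invented for illustration only.

CONTROLLER = "https://sdn-controller.example.com/api/v1"   # hypothetical
TOKEN = "REPLACE_ME"

def attach_vm_to_logical_network(vm_id: str, network_id: str):
    response = requests.post(
        f"{CONTROLLER}/logical-ports",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"vm": vm_id, "logical_network": network_id,
              "policy": "default-web-tier"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# attach_vm_to_logical_network("vm-1234", "ls-web-frontend")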

Another benefit most SDN solutions implement is the distribution of routing (and potentially other services) to the hypervisor layer, either with an agent (virtual machine) or integrated in the kernel or virtual switch. This means the traditional north-south bias of a physical 3-tier network architecture no longer forces traffic to hairpin: if you can route Layer 3 traffic from within the hypervisor layer, you can move traffic across the server “backplane” between 2 virtual machines living on separate Layer 3 networks but on the same host, or between hosts across the overlay network, efficiently moving east-west traffic in the data center.

Now of course once the network exists as a software construct, you can just as easily manipulate it as a virtual machine, i.e. snapshot it, roll it back, clone it, etc.

If you have a background in VMware administration you understand that normally you apply configuration in vCenter and have it distributed to all your ESXi hosts; you don’t need to log into each host individually to configure them. The same thing applies to SDN: you set policy at the controller layer and it feeds down to the switches.

In terms of physical architecture, the traditional 3-tier (core, aggregation, and access) design is being replaced by a flatter network like leaf-spine, whereby all the links are utilised for more bandwidth and resiliency.

[Image: fabricpath_spine_leaf_topo]

In this type of architecture spanning tree is being replaced by things like TRILL and SPB to enable all links to forward traffic, eliminate loops and create meshes to let traffic take the shortest path between switches.
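
As a toy example of “shortest path between switches” in such a fabric, here is a small Python sketch: every leaf connects to every spine, so any leaf reaches any other leaf in two hops over one of the equal-cost spine paths (real fabrics get there with IS-IS/ECMP style mechanisms such as TRILL and SPB, not a BFS in software).

from collections import deque

# Tiny leaf-spine fabric: every leaf connects to every spine.
FABRIC = {
    "leaf1":  {"spine1", "spine2"},
    "leaf2":  {"spine1", "spine2"},
    "leaf3":  {"spine1", "spine2"},
    "spine1": {"leaf1", "leaf2", "leaf3"},
    "spine2": {"leaf1", "leaf2", "leaf3"},
}

def shortest_path(graph, src, dst):
    """Plain breadth-first search used only to illustrate the topology."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])

print(shortest_path(FABRIC, "leaf1", "leaf3"))  # e.g. ['leaf1', 'spine1', 'leaf3']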

Network Function Virtualisation (NFV)

While SDN has grown out of the enterprise networking space, NFV has a background in the telco and service provider world. It started with CSPs looking to avoid the need to deploy more hardware middle-boxes when rolling out more functionality in the network, i.e. specialised load-balancers, firewalls, caches, DNS, etc.

So taking a page out of the server virtualisation book, what if some of those higher layer services could be implemented as virtual machines? What would that mean in terms of flexibility and cost improvement (reduced CAPEX, OPEX, space and power consumption)?

Again SDN and NFV are closely related. NFV builds virtualised network functions, but how the network is built (the interconnectivity) is the job of SDN.

These virtualised building blocks can then be connected (chained) together to create services.
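
A bare-bones sketch of that chaining idea, with each virtualised function reduced to a Python function and the chain to an ordered list (entirely illustrative; real service chaining involves actual traffic steering, not function calls):

# Each "VNF" takes a packet (a dict) and returns it, possibly modified,
# or None if it drops the packet. A service chain is an ordered list of VNFs.

def firewall(packet):
    return None if packet.get("dst_port") == 23 else packet   # drop telnet

def nat(packet):
    packet["src_ip"] = "203.0.113.10"                         # rewrite source
    return packet

def load_balancer(packet):
    packet["dst_ip"] = ["10.0.0.11", "10.0.0.12"][hash(packet["src_ip"]) % 2]
    return packet

SERVICE_CHAIN = [firewall, nat, load_balancer]

def apply_chain(packet, chain=SERVICE_CHAIN):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:          # a VNF dropped the packet
            return None
    return packet

print(apply_chain({"src_ip": "192.168.1.5", "dst_ip": "1.2.3.4", "dst_port": 443}))
print(apply_chain({"src_ip": "192.168.1.5", "dst_ip": "1.2.3.4", "dst_port": 23}))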

[Image: NFV-Alcatel-Lucent]

Like I mentioned in the beginning of the post, I wanted to provide a high level, abstracted overview, and I will attempt to go into some more detail on each of these in subsequent articles.

Posted in Networking, NFV, NSX, OpenFlow, SDN

Horizon Branch Office Desktop Architecture

VMware has a number of virtual desktop architectures that give a prescriptive approach to matching a company’s specific use case to a validated design. These architectures are not price-list bundles; they combine VMware’s own products with 3rd party solutions, with the goal of bringing customers from the pilot phase all the way into production.

At the moment there are 4 different architectures focused on different use cases: the Mobile Secure Workplace, the AlwaysOn Workplace, the Branch Office Desktop, and the Business Process Desktop.

[Image: horizonArch]

In this article I want to focus on the Branch Office Desktop, but in the interest of completeness the partner solutions for the other use cases are referenced at the end of this post.

Seeing that there are over 11 million branch offices across the globe, a lot of people are working with remote, or distributed, IT infrastructures, which potentially have a lot of downsides (no remote IT staff, slow and unreliable connectivity, no centralised management, …).

[Image: branchofficevmw]

With the Horizon Branch Office Desktop you have some options to alleviate those concerns and bring the remote workers into the fold. Depending on your specific needs you could look at several options.

If you have plenty of bandwidth and low latency, using a traditional centralised Horizon View environment is going to be the most cost effective and easy path to pursue. There are of course additional options if you have bandwidth concerns but still want to provide a centralised approach.

Optimized WAN connectivity delivered by F5 Networks.

The F5 solution offers simplified access management, hardened security, and optimized WAN connectivity between the branch locations and the primary datacenter, using a Virtual Edition of F5’s Traffic Manager in the branch combined with a physical appliance in the datacenter.

[Image: F5]

The solution provides secure access management via BIG-IP APM (Access Policy Manager), an SSL-VPN solution with integrated AAA services and SSO capabilities. BIG-IP LTM (Local Traffic Manager) is an application delivery networking solution that provides load-balancing for the Horizon View Security and Connection servers. The solution can also provide WAN optimisation through its WAN Optimization Manager (WOM) module, in this case focused on other, non-PCoIP branch traffic.

If you find that ample bandwidth is not available, however, you still have other options, like the architectures combining Horizon with Riverbed, Cisco, and IBM, which I’ll focus on in this article.

Riverbed for the VMware (Horizon) Branch Office Desktop.

With Riverbed’s architecture we essentially take your centralised storage (a LUN from your existing SAN array) and “project” this storage across the WAN towards the branch office. In the branch we have an appliance, called the Granite Edge (Steelhead EX + Granite in the picture below), which then presents this “projected” LUN to any server, including itself (the Granite Edge appliance is also an x86 server running VMware ESXi). If we install the virtual desktops on the LUN we have just “projected” out from the central SAN environment, then these desktops are essentially locally available in the branch office. This means that from the point of view of the end-user, they set up a local (LAN) PCoIP connection towards the virtual desktop and can work with the same local performance one would expect in the datacenter location.

[Image: granite]

The end result is that from a management perspective you keep (or gain) centralised control, and from an end-user perspective you get the same performance as if you were local. For more details on this architecture you can download a deployment guide here: Deployment Guide: Riverbed for the VMware Branch Office Desktop …

Cisco Office in a Box.

With Cisco’s Office in a Box architecture you take their Integrated Services Routers Generation 2 (ISR G2) platforms (Cisco 2900 and 3900 Series ISRs) and the Cisco UCS E-Series Servers, and combine those into one physical platform that can host up to 50 virtual desktops in a Branch Office.

[Image: cisco office in a box]

In this case you have essentially built a remote desktop appliance that sits in the branch office; all virtual machines share the direct-attached storage (DAS) of the Cisco UCS E-Series blade. So the management domain is not stretched across the WAN; instead you have a “pod-like” design that includes everything you need to run virtual desktops in the branch.

[Image: ciscoLogical]

For more information on Cisco’s architecture please see: http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-e-series-servers/white_paper_c11-715347.html

IBM Branch Office Desktop.

IBM has another validated approach that combines VMware Mirage and VMware Horizon View technologies to address the varying requirements within the branch office.

With VMware Mirage you can centrally manage OS images for both persistent virtual desktops and physical endpoints, while ensuring employees have fast, secure access to applications and data. With centralized images and layered single image management, the same image can be deployed in a server-hosted virtual desktop for remote execution and natively to a physical PC or client hypervisor for local execution.

This approach lets you deliver centrally managed desktops with LAN-like performance and disaster recovery capabilities to locations with robust and reliable as well as unreliable wide area networks.

These components run on IBM’s (Lenovo’s) System x and FlexSystems compute nodes, IBM storage and IBM System networking components.

[Image: ibmbranch]

For more information on the IBM architecture please see: http://thoughtsoncloud.com/2012/10/vmware-robo-solution-ibm-vmworld/

Alternatively (or in conjunction with all the architectures mentioned) we can also independently leverage Horizon Mirage for the branch office, specifically if you have to deal with frequently disconnected users (laptop users that are not always in the office, for example) or physical devices.

For more information on all these Branch Office architectures please see: http://www.vmware.com/remote-branch/remote-branch-office  and http://www.vmware.com/be/nl/remote-branch/partners.html for the partner extended capabilities.

Posted in Cisco, IBM, Riverbed, vmware

VMware Horizon DaaS and Google Chromebooks

Yesterday at VMware’s Partner Exchange (PEX), VMware announced that it is joining forces with Google to modernize corporate desktops for the Mobile Cloud Era by providing businesses with secure, cloud access to Windows applications, data, and desktops on Google Chromebooks.

In this post I want to provide an overview of what this means for VMware’s customers.

So first things first, what exactly is the “Mobile Cloud Era”?

If we look back (and generalise a bit) we can define two big computing architectures that defined IT in the past: the Mainframe era and the Client-Server era. More recently we are moving more and more towards a third computing architecture, the Mobile-Cloud era.

[Image: IT eras]

The mainframe era was defined by highly centralised, highly controlled IT infrastructures that were meant to connect a relatively small user population to a small number of applications (mainly data processing).
In the client-server era the focus shifted to decentralisation and powering a large number of different types of applications, each requiring (or so the thinking was) its own silo. The mobile-cloud era is all about letting billions of users consume IT as a Service, enabling access to any application, be it on-premises or in the cloud, from any location on any device.

[Image: keynote vmworld]

So why are VMware and Google doing this?

I don’t think the actual delivery mechanism matters that much; we are in the fast-moving mobile cloud era and device choices are fluid. What really matters is the application. And, like it or not, most corporate customers still rely heavily on the Microsoft Office suite for their day-to-day business.

Now, there is a transition under way (the cloud piece of the mobile cloud era) towards SaaS based productivity applications like Google Apps, which is why ChromeOS and the Chromebook exist in the first place. This transition will be neither all-encompassing (in my humble opinion) nor immediate, so until we get there this is a great option.

What is VMware Horizon DaaS?

During last year’s VMworld Europe in Barcelona, VMware announced the acquisition of a company called Desktone. Desktone delivers desktops (DaaS) and applications as a service. The desktops can either be persistent Windows 7/8 desktops (the actual Windows client OS, compatible with all your applications; it can also be a Linux desktop if you want), Windows RDSH based desktops (here you can use the Windows server OS to get around the Microsoft VDI licensing snafu and potentially allow for additional cost savings), or remote applications. The Desktone solutions have recently been brought under the Horizon umbrella and renamed to VMware Horizon DaaS.

[Image: desktone]

You can access these services using PCoIP or RDP, in contrast to the joint Google solution announced yesterday, which will leverage VMware’s HTML5-based Blast protocol for remote access.

What is the solution announced yesterday?

The combination of Horizon DaaS as the “back-end” and ChromeOS/Chromebooks as the “front-end” allows browser-based access, via the HTML5 Blast protocol, to Windows desktops and applications on Google Chromebooks.

The Chromebooks are a low-cost, “cloud first” form factor that, combined with Horizon DaaS, you can fully utilise in a corporate setting. Different low-cost hardware options exist from vendors like Dell, HP, Samsung and Acer. Google itself also has a higher-end Chromebook called the Pixel (if you want to maintain a low TCO for this combined solution, maybe not the one for you).

[Image: pixel]

Initially the solution will be available as an on-premises service; the joint solution is also expected to be delivered as a fully managed, subscription DaaS offering by VMware and other vCloud Service Provider Partners, in the public cloud or within hybrid deployments.

Demo of Windows Desktops and Apps as a Cloud service on Chromebooks

Posted in Cloud, Google, SaaS, vmware

VMware NSX – because hairpins are for old ladies

Server virtualization has increased the amount of server-to-server network traffic, commonly described as east-west traffic. Let’s assume that you have 2 VMs living on the same host and both VMs are in different Layer 3 networks; in a traditional network the traffic flow would be:

[Image: l3noNSX]

essentially “hairpinning” the traffic, meaning the traffic goes out of the VMware host and across the network only to end up back at the same VMware host.

With the NSX distributed Logical Router (DLR) embedded in the VMware kernel the traffic flow would be:

[Image: l3withNSX]

Essentially freeing up the external network and using the kernel to move traffic between VM1 and VM2: east-west traffic is no longer externalized and you end up with fewer hops, less traffic, and more speed.

Posted in NSX, vmware

vCenter Log Insight – Turn data into intelligence

I haven’t been blogging a lot the last couple of months; I’m still trying to drink from the water-hose having recently started at VMware, and of course customers come before blogs…

One of the products in VMware’s portfolio which in my humble opinion doesn’t get enough attention externally is vCenter Log Insight, but maybe that’s just because I got really excited when I first learned about Splunk and log analytics.

In IT every device, OS, and application (and everything in between) generates lots of unstructured log data, so much so that most of it probably gets discarded. I remember when I first started out in IT support 14 years ago, grep’ing my way through Unix logs, or even worse, clicking away all those yellow warning signs on Windows boxes.

The basic premise of vCenter Log Insight is turning all that unstructured log data into actionable data, building relevant dashboards that add meaning instead of meaning more chores for the IT department (or business).

As an example let’s look at building a dashboard out of troubleshooting data, in our case latency issues with storage in a virtual environment.

[Image: loginsight1]

As you can see above, we have collected about 30 million log entries in total, about a million every 12 hours. The graph in the upper half of the screen is just a visual representation of the collected data in the lower half of the screen.

We could then do a simple text search on all of the collected log data.

[Image: loginsight2]

By doing this we have cut down the results from 30+ million, most of them unrelated to our storage performance issues, to 443, all of them related to SCSI performance.

[Image: loginsight3]

Since we are chasing a problem, we can further fine-tune the results by searching for performance deterioration, cutting the results down to 252 events.

[Image: loginsight4]

Because we need to identify the actual devices that are impacted by (or impacting) the performance issue, we can click on the SCSI ID in the text results and vCenter Log Insight will automatically try to find a pattern in all filtered results.

[Image: loginsight6]

We then click on “Extract Field”; in this case there is a match and we can now define a user-definable field.

[Image: loginsight7]

In the interactive analytics section we can now use any of the built-in fields (in this case – latency in microseconds) to build a visual representation of our filtered results.

[Image: loginsight12]

We can now refine our query based on the user-defined field “scsci_device” we defined earlier.

[Image: loginsight13]

Since this gives us some useful information (we see several devices, some with higher latency than others) we can now opt to save this query to the dashboard by clicking “Add to dashboard”.

[Image: loginsight14]

We can fine-tune further by adding some constraints to the results; in this case we are looking for devices with a high latency.

[Image: loginsight15]

So now we have a new graph excluding all devices with a latency of less than 100,000 microseconds (100 ms), which we can also add to the dashboard.
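
Outside the product, the same idea of extracting a field from unstructured log lines and filtering on it boils down to something like the Python sketch below (the log format and field names are invented for the example; the 100,000-microsecond threshold mirrors the one above):

import re

LOG_LINES = [
    "scsi device naa.600508b1001c5e43 performance has deteriorated, latency 185000 microseconds",
    "scsi device naa.600508b1001c9d21 performance has deteriorated, latency 42000 microseconds",
    "user root logged in from 10.1.1.5",
]

# Extract a device name and a latency value from each matching line.
PATTERN = re.compile(r"scsi device (?P<scsi_device>\S+) .* latency (?P<latency>\d+) microseconds")

THRESHOLD_US = 100_000   # 100,000 microseconds = 100 ms

for line in LOG_LINES:
    match = PATTERN.search(line)
    if match and int(match.group("latency")) > THRESHOLD_US:
        print(match.group("scsi_device"), match.group("latency"))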

So if we go to the dashboard view we can see the 2 graphs we just added.

[Image: loginsight16]

The dashboard is highly customisable so we can change the naming, position, and size of the graphs.

[Image: loginsight17]

Besides these user-created dashboards there are already a great number of predefined dashboards available in the form of content packs from different vendors. In our case we just have the vSphere content pack installed, which we can switch to by clicking on “My dashboards”.

[Image: loginsight18]

So now you can easily look at the predefined dashboard by making a selection on the left, in this case “ESXi Hosts”.

[Image: loginsight19]

You can download the 3rd party content packs from the link below:
https://solutionexchange.vmware.com/store/loginsight

[Image: contentpacks]

Of course there is also integration between vCOPS and vCenter Log Insight so you can have analytics on both structured (vCOPS) and unstructured (Log Insight) data to help you find the proverbial needle in the haystack.

Posted in vmware

vSphere hypervisor-based replication

vSphere Replication is VMware’s hypervisor-based (as opposed to storage array based) asynchronous replication solution (minimum RPO of 15 minutes) that works at the virtual machine (VMDK) level, whereas storage array replication usually works at the datastore (VMFS) level. (This is one of the reasons why vVols will be interesting going forward.)

Since vSphere Replication happens at the host level it is storage agnostic, meaning that you can replicate between different storage arrays, for instance from your enterprise-level array in production to a cheaper array in the DR site, or even from a storage array to local disk. Obviously VSAN could also be a good replication partner use case.

Within the RPO you specify as an admin (again, 15 minutes is the minimum here, and you can set it per VMDK), the vSphere Replication engine looks at the changed blocks that need to be sent over the network to the other site.

Now, sending (potentially) lots of data over a network can be tricky depending on bandwidth and latency. vSphere Replication uses CUBIC TCP to better cope with high-bandwidth links where you have lots of bandwidth but the round-trip time is high. It is mainly about optimizing the congestion control algorithm in order to avoid low link utilization because of the way TCP generally works (see my post on TCP throughput for more info).
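
As a back-of-the-envelope reminder of why this matters: a single TCP flow can never push more than its window size divided by the round-trip time, so on a long, fat link a small window caps your throughput no matter how big the pipe is. A quick illustrative calculation (numbers made up):

# Upper bound on single-flow TCP throughput: window / RTT.
def max_tcp_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    return (window_bytes * 8) / rtt_seconds / 1_000_000

print(max_tcp_throughput_mbps(64 * 1024, 0.100))        # ~5.2 Mbit/s with a 64 KB window and 100 ms RTT
print(max_tcp_throughput_mbps(4 * 1024 * 1024, 0.100))  # ~335 Mbit/s with a 4 MB scaled window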

In terms of bandwidth requirements you need to take into account that vSphere Replication works on a per-VM model; it will not consume all the bandwidth that is available but, depending on the data set, change rate and your RPO, will intelligently decide what to replicate and when. There is a KB article that goes into detail about calculating the bandwidth requirements.
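
Without replacing the KB article, the core of such a calculation is simple: take the amount of changed data per RPO window and work out the average rate needed to ship it before the next cycle. An illustrative sketch with made-up numbers:

# Rough average-rate estimate; real sizing must account for peaks,
# multiple VMs and compression.
def required_mbps(changed_gb_per_rpo: float, rpo_minutes: float) -> float:
    bits = changed_gb_per_rpo * 1024 ** 3 * 8
    return bits / (rpo_minutes * 60) / 1_000_000

# 2 GB of changed blocks per 15-minute RPO window:
print(round(required_mbps(2, 15), 1), "Mbit/s")   # ~19.1 Mbit/s on average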

[Image: Screen Shot 2013-10-31 at 4.53.48 PM]

It is included with vSphere Essentials Plus or higher. You need to download a single virtual appliance that performs a dual role: the vSphere Replication Management Server (VRMS) and the vSphere Replication Server (VRS).

You can only have one VRMS per site, but you can have multiple VRSs (up to 10). The VRMS performs configuration management and the VRS manages replica instances.

Initially you select the source and target location and VR will scan the entire disk on both sites (you could already have a copy of the VM sitting in the target site if you want) and generate a checksum for each block. It then compares both and figures out what needs to be replicated initially to get both sites in sync. This initial sync happens over TCP port 31031 and is called a full sync.
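
Conceptually the full sync boils down to something like the following Python sketch: checksum each block on both sides, compare, and only ship the blocks that differ (block size and hashing details are invented for the example):

import hashlib

BLOCK_SIZE = 4096

def block_checksums(disk: bytes):
    # One checksum per fixed-size block of the disk.
    return [hashlib.sha1(disk[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(disk), BLOCK_SIZE)]

def blocks_to_replicate(source: bytes, target: bytes):
    # Only the blocks whose checksums differ need to cross the WAN.
    src, dst = block_checksums(source), block_checksums(target)
    return [i for i, (a, b) in enumerate(zip(src, dst)) if a != b]

source_disk = bytearray(b"\x00" * BLOCK_SIZE * 8)
target_disk = bytearray(source_disk)
source_disk[BLOCK_SIZE * 3] = 0xFF          # block 3 changed at the source only

print(blocks_to_replicate(bytes(source_disk), bytes(target_disk)))  # [3]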

After the initial sync, the VR agent (VRA, inside the ESXi kernel) will, together with a passive virtual SCSI filter, track all I/O and keep an in-memory bitmap of all changed blocks; this bitmap is backed by a persistent state file (psf) in the home directory of the VM. The psf file contains pointers to the changed blocks. The replication engine then figures out, based on your RPO, when the time is right to send the changed blocks to the replication site (you cannot schedule this yourself). This “ongoing” replication happens over TCP port 44046, initiated from the vmkernel management NICs.

At the recovery site you have deployed your VRS, to which the changed blocks will be sent. After the VRS has received all changed blocks for that replication cycle it will pass them off to the ESXi host’s network file copy (NFC) service to write the blocks to the target storage. This means that the vSphere Replication process is completely abstracted from the underlying storage, and as such gives you flexibility in terms of the underlying hardware being different at both ends.

Posted in replication, Storage, vmware

VMware’s Storage Portfolio

I recently joined VMware as an SE. One of the reasons that motivated and influenced my decision to join is a thirst for technology; at VMware you get to work on such a broad and interesting technology stack that anyone is bound to find one or more things that are deeply interesting.

One (out of many) of those things that I find compelling is storage and as such my first blog post as an employee on storage seemed fitting.

In this post I’ll briefly cover VSAN, vFRC, Virsto, and vVols.

I won’t cover vSphere data services like VMFS, VAAI, VASA, Storage vMotion, Storage DRS, VADP, and vSphere Replication here.

[Image: vmware sds]

VSAN

VSAN is a software-based distributed storage solution that is integrated directly into the hypervisor (in contrast to running as a virtual appliance, like the vSphere Storage Appliance (VSA)). It uses directly connected storage (a combination of SSD for performance and spinning disks for capacity) and creates a distributed storage layer across multiple ESXi hosts (up to 8 hosts in the beta, rumored to be 16 hosts at GA in 2014).

One of the most obvious use cases for VSAN will be virtual desktops; in fact VSAN will be a part of the new Horizon View 5.3.

For more information on VSAN I suggest heading over to Cormac Hogan’s blog for an entire series on VSAN, or (AND!) Duncan Epping’s blog

vFRC

vSphere Flash Read Cache takes your direct attached flash resources (SSD and/or PCIe flash devices) on your vSphere host and allows you to utilize them as a dedicated (per VM) read cache for your virtual workloads. Like VSAN this solution is integrated into the hypervisor itself.

For more information on vFRC I recommend heading over to the Punching Clouds blog

vVols

vVols, or Virtual Volumes, is in technology preview at the moment (i.e. not available). The idea behind vVols is to make storage aware of individual VMs (VMDKs), in contrast to the VMFS level (usually LUN or volume based), for things like clones, snapshots, and replication.

For more information on Virtual Volumes I recommend reading Duncan Epping’s blog post.

You can also see a (tech preview) demo of vVols by NetApp here.

Virsto

VMware acquired Virsto earlier this year in an effort to optimize external block storage (i.e. external SAN only) performance and space utilization. It is no secret that block-based storage performs best with sequential I/O, but when you start using virtualization you end up with mostly random I/O (if you have more than 1 VM, that is). Virsto tries to solve the random I/O issue by intercepting the random I/O, writing it to a serialized log file, and later de-staging it to your SAN storage, resulting in mostly sequential data. Virsto is installed as a (set of) virtual appliance(s). Virsto presented at Storage Field Day 2 before they were acquired by VMware.
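
The serialization trick itself is easy to picture; here is an entirely illustrative Python sketch (the real product is of course far more involved): random writes are appended to a sequential log as they arrive and later de-staged towards the array in address order.

incoming_writes = [(8723, b"a"), (15, b"b"), (9001, b"c"), (16, b"d")]  # (LBA, data)

log = []
for lba, data in incoming_writes:
    log.append((lba, data))          # step 1: append-only log, always sequential on disk

def destage(log_entries):
    # step 2: later, replay the log toward the SAN in LBA order,
    # turning the originally random writes into mostly sequential I/O.
    for lba, data in sorted(log_entries, key=lambda entry: entry[0]):
        print(f"write {data!r} at LBA {lba}")

destage(log)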

Posted in Storage, vmware