
Rubrik and Microsoft, rest assured.

Rubrik has been working with Microsoft’s solutions in various ways since version 2.3, which introduced support for Microsoft Azure as an archive location for long-term retention data. You could even argue the relationship started earlier than that, since we have supported application-consistent backups for Microsoft applications through our own VSS provider and requester (for Windows guests on virtual machines) from the beginning.

For this overview I’ll focus on the Microsoft specifics from version 2.3 through version 4.0, which is currently GA.

Azure Blob storage as an archive location

At the heart of Rubrik sit our SLA policies, which govern how we handle your data end-to-end. As part of an SLA policy you can define archival settings, and these determine when we start moving data off the local Rubrik cluster and onto Azure Blob storage.

Azure Blob storage is a massively scalable object store for unstructured data. The nice thing about using it as an archive with Rubrik is that, because we index all data, we can search for specific files and folders within that unstructured data and pull back only those specific objects instead of re-hydrating the entire fileset. Furthermore, the data we send to Azure Blob storage is encrypted.
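To make that object-storage access pattern concrete, here is a minimal sketch using the Azure SDK for Python (azure-storage-blob). Rubrik handles the archive format, indexing, and encryption internally, so this is purely illustrative; the connection string, container name, and blob prefix are hypothetical.

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical storage account and container; Rubrik manages its own archive layout.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_container_client("rubrik-archive")

# Enumerate only the objects under a specific prefix instead of the whole archive...
for blob in container.list_blobs(name_starts_with="fileset-42/finance/"):
    print(blob.name, blob.size)

# ...and pull back one specific object rather than re-hydrating the entire fileset.
data = container.download_blob("fileset-42/finance/report-2017.xlsx").readall()
print(len(data), "bytes downloaded")
```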

Screen Shot 2017-09-03 at 21.22.03.png

Support for physical Windows and Microsoft SQL Server

In Rubrik CDM version 3.0 we added support for physical Windows and Microsoft SQL Server via the Rubrik backup service. The backup service is a very lightweight agent that you install on your physical Windows server; it runs in user space and doesn’t require a reboot, and once installed its lifecycle is managed by the Rubrik cluster itself. In other words, once the backup service is installed you don’t need to manage it manually. It is also valid for all applications, meaning it provides filesystem backup as well as SQL backup capabilities.

Screen Shot 2017-09-05 at 15.53.05

Rubrik also provides protection and data management for SQL Server databases installed across nodes in a Windows Server Failover Cluster. We additionally support Always On availability groups: Rubrik detects that a database is part of an availability group, and in the event of a failover, protection is transferred to another availability database in the same availability group. When protection is transferred, the Rubrik cluster carries over the existing metadata for history and data protection to the replacement database.

Rubrik Cloud Cluster in Microsoft Azure

Since Rubrik CDM v3.2 we support running a 4-node (minimum) cluster in Microsoft Azure. We use DSv2-series VM sizes, which give us 4 vCPUs, 14 GiB RAM, 400 GiB of SSD, and a maximum of 8 HDDs per VM.

Screen Shot 2017-09-06 at 12.00.56

Through the Rubrik backup service we can support native Azure workloads running either Windows or Linux, as well as native SQL Server. Since Cloud Cluster in Azure has the same capabilities, we can also archive to another Azure Blob storage location within Azure for long-term retention data (i.e. move backup data off the DSv2-based instances and onto more cost-effective Blob storage), and even replicate to another Rubrik cluster, whether in Azure, in another public cloud provider, or to a physical Rubrik cluster on-premises.

Microsoft SQL Server Live Mount

One of the coolest new features in version 4.0 of Rubrik CDM, in my humble opinion, is the ability to Live Mount SQL databases. Rubrik uses the SSD tier to rapidly materialize any point-in-time copy of the SQL database and then exposes this fully writable database to the SQL Server instance as a new database. In other words, you are not consuming any space on your production storage system to do this.

Screen Shot 2017-09-06 at 12.13.16

As a SQL DBA you can now easily restore individual objects between the original and the copy. Recovery time is greatly reduced, and the original backup copy is still maintained as an immutable object on Rubrik, safeguarding it against ransomware.
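Live Mounts can also be driven programmatically through the Rubrik REST API. The sketch below is only a rough illustration of what that could look like from Python; the endpoint path, field names, and object IDs are assumptions for illustration, so check the API documentation on your own cluster for the exact contract.

```python
import requests

CLUSTER = "https://rubrik.example.com"   # hypothetical cluster address
AUTH = ("admin", "<password>")           # an API token is preferable in practice

# Hypothetical request body: which point in time to materialize, which SQL Server
# instance to mount it on, and the name the new writable database should get.
payload = {
    "recoveryPoint": {"timestampMs": 1504692000000},
    "targetInstanceId": "MssqlInstance:::<example-id>",
    "mountedDatabaseName": "BankingApp_LiveMount",
}

resp = requests.post(
    f"{CLUSTER}/api/v1/mssql/db/MssqlDatabase:::<example-id>/mount",
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip TLS verification for a self-signed certificate
)
resp.raise_for_status()
print(resp.json())
```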

Microsoft Hyper-V

Last, but not least, is our support for Microsoft Hyper-V based workloads, which we achieve by integrating directly with the Hyper-V 2016 WMI-based APIs. This works independently of the underlying storage layer, giving us the broadest possible support.

Screen Shot 2017-09-06 at 12.43.17

We leverage Hyper-V’s Resilient Change Tracking (RCT) to perform incremental forever backups. Older versions of Hyper-V are also supported through the use of the Rubrik backup service.

Independent of the source, be it virtual or physical, we can leverage the same SLA policy-based system and avoid having to set individual, manual backup policies.

 

Marrying consumer convenience with enterprise prowess

If you look at the consumer applications we interface with on a daily basis, things like Facebook, Google, and Twitter, these all tend to be very easy to use and understand. Typically little to no explanation is needed on how to use them: you simply sign up and get going.

But behind the covers of these very straightforward interactions lies a pretty complex world of intricate components, a lot of moving parts that make the application tick. For a little insight into the back-end architecture of Facebook, for example, check out these videos: https://developers.facebook.com/videos/f8-2016/inside-facebooks-infrastructure-part-1-the-system-that-serves-billions/

The typical target audience of Facebook I would guess is completely unaware of this, and rightfully so.

Now think about your typical enterprise applications: probably a lot less straightforward to interact with, with a lot of nerd knobs to turn and a lot of certifications to attain in mastering how to make the best use of them.

“Well of course it is a lot more complex, because I need it to be!” I hear you say. But does it really need to be?

What if most of the heavy lifting were taken care of by the system itself, with internal algorithms that govern the state of the solution and make the application perform the way you need it to, minimizing the interaction with the end user and making that interaction as enjoyable as possible?

That is what the Rubrik Cloud Data Management solution is trying to achieve. Under the hood it is a very, very capable piece of equipment, but most of the complexity that comes with these enterprise capabilities has been automated away, and the little interaction that is left to the end user is very straightforward, and dare I say enjoyable?


Matching enterprise data management capabilities with the simplicity of a consumer application is a lofty goal, but one worth pursuing in my humble opinion. After all…

“Simplicity is the Ultimate Sophistication”

 

 

Data on tape is no longer useful

Data is valuable; I think most people would agree with that. But not all data is treated equally, and the key is to enable all of your data to remain active, in the most economical way possible.

Analyst firm IDC predicts that the amount of data will more than double every two years until 2020, to an astounding 44 trillion gigabytes. Meanwhile, data scientists are finding new and interesting ways to activate this enormous amount of information. IDC furthermore states that 86% of businesses believe all data has value, but at the same time 48% of businesses are not (capable of) storing all available data.

Screen Shot 2016-10-27 at 13.15.46.png

Organisations have a real need to store all this data, including historical information, especially as that data can now be activated through things like big data analytics. One of those sources of data can be your backups, which are typically not active, especially when stored on tape, and even more so when stored on tape and archived off-site.

What we are focusing on at Rubrik is managing data through its entire lifecycle, independent of its location. We do this in a couple of ways. First, we back up your primary data and store it on our appliances (we are both the backup software and the backup target). Next, instead of archiving that data off to tape and essentially rendering it useless (except for slow restores), we can archive/tier it to an object storage system so you can still store it in an economically feasible way.

screen-shot-2016-10-27-at-13-30-30

 

When the data is sitting in the archive (on object storage, in the public cloud, or on NFS, depending on your needs) we can still “enable” it. Right now we can instantly search your data independent of its location (and obviously restore it), but we can also use backup copies of your data to provide them (using what we call Live Mount) to your in-house developers, testers, and QA people, or provide copies to your data scientists who want to run analytics jobs against them, all without the need to store these copies on your primary storage system. Compare that to backup copies sitting on tape, in the dark, providing no additional value whatsoever; this way you can still extract value out of your data no matter when you created it.

Docker networking overview

Introduction

There are of course a lot of blog posts out there already regarding Docker networking. I don’t want to replicate that work, but instead want to provide a clear overview of what is possible with Docker networking today by showing some examples of the different options.

In general the networking piece of Docker, and arguably Docker itself, is still quite young, so things move fast and will likely change over time. A lot of progress has been made since the SocketPlane acquisition last year and its subsequent pluggable model, but more about that later.

Docker containers are ephemeral by design (pets vs. cattle), which leads to several potential issues. Not the least of these is not being able to keep your firewall configuration up to date because of difficult IP address management; it’s also hard to connect to services that might disappear at any moment, and no, using DNS as a stopgap is not a good solution (DNS as a SPOF, don’t go there). Of course there are several options and methods available to overcome these things.

Single host Docker networking

You basically have four options for single-host Docker networking: Bridge mode, Host mode, Container mode, and No networking.

Bridge mode (the default Docker networking mode)

The Docker daemon creates “docker0”, a virtual Ethernet bridge that forwards packets between all interfaces attached to it. All containers on the host are attached to this internal bridge, which assigns one interface as the container’s “eth0” interface and leaves the other interface in the host’s namespace (think VRF). The container gets a private IP address assignment. To prevent ARP collisions on the local network, the Docker daemon generates a random MAC address from the allocated IP address. In the example below Docker assigns the private IP 172.17.0.1 to the container.

docker1

Host mode

In this mode the container shares the networking namespace of the host, directly exposing it to the outside world. This means services inside the container are reachable on the host’s IP address and ports without any port mapping; in Bridge mode, by contrast, you need to publish ports (Docker can assign them automatically) to make services reachable from outside. In the example below the Docker host has the IP 10.0.0.4 and, as you can see, the container shares this IP address.

docker2

Container mode

This mode forces Docker to reuse the networking namespace of another container. It is used if you want to provide custom networking from said container; this is, for example, what Kubernetes uses to provide networking for the multiple containers in a pod. In the example below, the container into which we are going to connect the subsequent containers has the IP 172.17.0.2, and as you can see the container being launched gets the same IP address.

docker3
docker4

No networking

This mode does not configure networking. It is useful for containers that don’t require network access, but it can also be used to set up custom networking.
This is the mode Nuage Networks leveraged pre-Docker 1.9 (more info here).
In the example below you can see that our new container did not get any IP address assigned.

docker5
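If you prefer to experiment with these modes from code rather than the CLI, the sketch below runs a throwaway container in each of the four modes using the Docker SDK for Python (`pip install docker`); it is the programmatic equivalent of `docker run --net=<mode>`.

```python
import docker

client = docker.from_env()

# Bridge (default): the container gets its own eth0 on docker0 with a private IP.
print(client.containers.run("alpine", "ip addr show eth0", remove=True).decode())

# Host: the container shares the host's network namespace and addresses.
print(client.containers.run("alpine", "ip addr", network_mode="host", remove=True).decode())

# Container: reuse the network namespace of an existing container.
owner = client.containers.run("alpine", "sleep 60", detach=True)
print(client.containers.run("alpine", "ip addr show eth0",
                            network_mode="container:" + owner.id, remove=True).decode())
owner.remove(force=True)

# None: no networking is configured at all, only a loopback interface.
print(client.containers.run("alpine", "ip addr", network_mode="none", remove=True).decode())
```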

By default Docker has inter-container communication enabled (--icc=true), meaning that containers on a host are free to communicate without restrictions, which could be a security concern. Communication to the outside world is controlled via iptables and IP forwarding.

Multi-host Docker networking

In a real-world scenario you will most likely end up using Docker containers across multiple hosts, depending on the needs of your containerized application. So now you need to build container networks across these hosts to have your distributed application communicate internally and externally.

As alluded to above, in March 2015 Docker, Inc. acquired the SDN startup SocketPlane, which has given rise to Libnetwork and the Container Network Model, meant to be the default multi-host networking setup going forward.

Libnetwork

Libnetwork provides a native Go implementation for connecting containers. The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.

One of the benefits of Libnetwork is that it uses a driver/plugin model to support many underlying network technologies while still exposing a simple and consistent network model to the end user (a common API). Nuage Networks leverages this model by providing a remote plugin.

Libnetwork also introduces the Container Network Model (CNM) to provide interoperation between networks and containers.


The CNM defines a Network Sandbox, an Endpoint, and a Network. The Network Sandbox is an isolated environment where the networking configuration for a Docker container lives. The Endpoint is a network interface that can be used for communication over a specific network; endpoints join exactly one network, and multiple endpoints can exist within a single Network Sandbox. The Network is a uniquely identifiable group of endpoints that are able to communicate with each other. You could, for example, create a “Frontend” and a “Backend” network and they would be completely isolated.
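As a rough illustration of those three CNM concepts, the sketch below uses the Docker SDK for Python to create two isolated networks and give one container’s sandbox an endpoint on each; the network and image names are just examples.

```python
import docker

client = docker.from_env()

# Two isolated networks: containers on "frontend" cannot reach "backend" by default.
frontend = client.networks.create("frontend", driver="bridge")
backend = client.networks.create("backend", driver="bridge")

# The container's network sandbox starts out with a single endpoint on "frontend"...
app = client.containers.run("alpine", "sleep 300", detach=True, network="frontend")

# ...and gains a second endpoint when we connect it to "backend".
backend.connect(app)

app.reload()
print(list(app.attrs["NetworkSettings"]["Networks"]))  # ['frontend', 'backend']
```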

SDN with Underlay – Overlay visibility via VSAP

When using virtualization of any kind you introduce a layer of abstraction that people in operations need to deal with; for example, if you run a couple of virtual machines on a physical server you need to be able to understand the impact that changes (or failures) in one layer have on the other. The same is true for network virtualization: we need to be able to understand the impact that, say, a failure of a physical router port has on the overlay virtual network.

To make this possible with Nuage Networks we can use our Virtualized Services Assurance Platform or VSAP for short. VSAP builds upon intellectual property from our parent company, Alcatel-Lucent (soon to be Nokia), using the 5620 Service Aware Manager as a base to combine knowledge of both the physical and virtual networks.

VSAP Key Components

VSAP communicates with Nuage Networks firstly via SNMP, connecting into the Virtualized Services Controller (VSC, the centralized SDN controller); this gives VSAP an understanding of the network topology in the Nuage SDN solution. Secondly, it uses a CPAM (Control Plane Assurance Manager) module via the 7701 CPAA, which acts as a listening device in the network to collect all the IGP and BGP updates, allowing it to model the underlay network based on IGP.

Screen Shot 2016-01-10 at 16.55.15

The solution has both a Web GUI and a Java GUI. Pictured below is the SAM Java GUI, which in this example is displaying the physical network, including the data center routers, the VSCs, and the top-of-rack switches (again, all learned via SNMP).

Screen Shot 2016-01-10 at 17.02.35.png

You can also look at the IGP (in this case OSPF) topology (pictured below); again, this is built by the CPAM based on the routing information in the network. We can use this model to map the underlay (what you see in the picture below) to the overlay information from the Nuage Networks SDN solution.

Screen Shot 2016-01-10 at 17.05.04

You can also get a closer look at the services in the network, in the example below the VPLS (L2) and VPRN (L3) constructs. These services are created by the Nuage Networks SDN solution.

Screen Shot 2016-01-10 at 17.08.57

We can also drill deeper into the Virtual Switch constructs provided by Nuage (for example, to see which virtual ports and access ports are attached, or which faults are recorded at the vSwitch level, etc.).

Screen Shot 2016-01-10 at 17.12.30

You can also see how traffic is passed across the virtual service; you have the option to highlight it on the IGP topology view and see the virtual service superimposed on top of the physical underlay.

Screen Shot 2016-01-10 at 17.15.37

On the picture below you can see the virtual overlay service path on top of the physical network.

Screen Shot 2016-01-10 at 17.18.12

As mentioned before, there is also the option to use a Web-based GUI if you don’t like a Java-based client (ahem 😉). In the example below we are looking at the VRSs (virtual switches) inside the L3 VPRN construct, so you can see where the virtual machines connect to the Open vSwitch based VRS on each hypervisor host.

Screen Shot 2016-01-10 at 17.20.27

You also have an inventory view allowing you to search on all objects in the physical and virtual space.

Screen Shot 2016-01-10 at 17.30.13

You can also look at the inventory topology.

Screen Shot 2016-01-10 at 17.31.24

Fault correlation

One of the powerful options you now have is that you can perform fault correlation between the physical and virtual constructs. In the example below a switch failure is introduced, and you can see the impact on overlay and underlay, up to and including the virtual machine level.

Pictured below is the overview in the Web GUI showing the alarm that has been raised, its impact, and the probable cause.

Screen Shot 2016-01-10 at 17.34.38

The same type of indication on the Java client is depicted below.

Screen Shot 2016-01-10 at 17.35.41

For additional information and a demo of how this works, I recommend having a look at the recorded Tech Field Day Extra session from ONUG Spring 2015 here.

Policy Based Abstractions through SDN

As I’m sure you’re tired of hearing by now, IT is typically divided into multiple silos which don’t always see eye to eye. Sometimes people are afraid of needing to adjust perceived best practices in their own domain to better collaborate with the rest of the organization; in many cases, though, it’s simply a matter of not understanding each other because you are not speaking the same language.

The ideal scenario would be a world where each practice exposes its infrastructure, built on best practices, through APIs so other teams can interact with it in the most optimal way.

At Nuage Networks we provide API-based access to our components, making full-scale automation a possibility, but we can also bring together teams speaking different languages via our abstraction-based policies.
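As a small example of that API-based access, the sketch below uses Nuage’s Python SDK (vspk) to log in to the VSD and walk the enterprises and domains it manages; the URL, credentials, and imported API version are placeholders for illustration.

```python
from vspk import v4_0 as vsdk

# Placeholder credentials and VSD address; adjust the imported version to your release.
session = vsdk.NUVSDSession(username="csproot", password="csproot",
                            enterprise="csp", api_url="https://vsd.example.com:8443")
session.start()

# Walk the enterprises and the domains defined in each one.
for enterprise in session.user.enterprises.get():
    print("Enterprise:", enterprise.name)
    for domain in enterprise.domains.get():
        print("  Domain:", domain.name)
```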

Nuage Networks Application Designer 

Application Designer is built for use by people with an understanding of application constructs who don’t necessarily need to understand, or care about, the underlying networking constructs; these are automatically abstracted by the Nuage platform.

In this example we initially start off with a clean slate; no network constructs have been created beyond the L3 domain.

Screen Shot 2015-12-15 at 09.52.51

If we go to Application Designer we can see the application services that are available. These would typically be created by the network team, and each is an abstract representation of a network service. For example, below we are creating the application service https, providing TCP communication to port 443.

Screen Shot 2015-12-15 at 10.01.42

The application team can now use these application service abstractions to build out their application. In the example below we start by creating a 3-tier application called Banking App.

Screen Shot 2015-12-15 at 10.02.56

Next we can start to add our application tiers and interconnect them by using the application services abstractions that were previously created by the networking team. You do this by dragging and dropping items from the library onto the canvas.

Screen Shot 2015-12-15 at 10.06.53

Once you have your application tiers mapped out you can use the application services to create a flow security policy (what type of traffic is allowed between these two points) simply by drawing a line between the two tiers.

Screen Shot 2015-12-15 at 10.10.15

In this case we are indicating we want HTTPS to be allowed from the Internet to the front-end application tier.

Once you have your application mapped out and interconnected (you could also drag and drop other complete applications onto the canvas and specify connectivity between those as well) you can add workloads to the tiers; these will then automatically adhere to the policies you have applied.

Screen Shot 2015-12-15 at 10.15.10

Since the system will translate these different abstractions into the correct networking constructs, we can look at the network design and verify that our application model has been completely mapped to a set of networking policies.

Screen Shot 2015-12-15 at 10.19.41

Furthermore, looking at the security policies we can see these have been translated as well, making it easy for different teams with different knowledge domains to focus on their area of expertise while at the same time tying everything together via our policy-based abstractions.

Screen Shot 2015-12-15 at 10.22.45

Belgian VMUG session on NSX

I will be presenting a session on VMware NSX at the 21st Belgian VMUG meeting on the 21st of November in Antwerp. You can register for it here.

The VMUG crew did an excellent job in getting an amazing line-up of speakers including (in no particular order)  Chris Wahl, Cormac Hogan, Mike Laverick, Duncan Epping, Eric Sloof, and Luc Dekens. Besides being able to meet these guys you can also participate in Mike’s VMworld 2014 SwagBag (EVO:RAIL edition) charity raffle.

Running the best virtualization software AND donating to charity, that’s double Karma points right there 😉

Screen Shot 2014-12-10 at 07.37.31

VMware Integrated OpenStack (VIO)

VMware just announced the VMware Integrated OpenStack (VIO) solution at VMworld.

What is it?

VIO is not a VMware OpenStack distribution; it is a fully validated (and supported) architecture for using VMware components with OpenStack programs, providing pre-packaged VMware integration with OpenStack components to allow for easy deployment and management. In other words, if you want to use OpenStack with VMware, this is the easiest way to do it. The basic integration with OpenStack existed before the VIO packaging; VIO just makes it easier to consume and provides a supported implementation option from VMware.

Bv5_mw_CUAA02SX.jpg-large

If you think about how you can consume OpenStack today you basically end up with three choices: either you go with the DIY approach and start from the OpenStack source code to build your own solution (a bit like building your own Linux distro from source code, far-fetched and not really a path many people tend to walk down), or you use a well-known distro from companies like Canonical, Mirantis, or SUSE, or you use a pre-integrated product like VIO.

VMware will allow you to leverage the VMware components either in the VIO product (tightly integrated and fully validated) or by using the components of your choice (including VMware and non-VMware ones) to build your own (loosely integrated) stack. That is, after all, the beauty of the OpenStack APIs: choice AND standardisation.

Bv6A_3ZIMAE7k_d.jpg-large

What VMware components are we talking about?

Initially the focus is on vSphere (vCenter) for compute, NSX for networking, VSAN for storage, and vCenter Operations Manager for monitoring. As mentioned above, the integration was already there before; looking at the picture below you can see which OpenStack components we integrate with. Components in red are OpenStack programs, components in blue are VMware’s.

VIO001

For the integration, VMware provides open-source drivers: the vCenter driver for Nova, the NSX plugin for Neutron, and the vCenter VMDK driver for Cinder and Glance.
Note: the ESX VMDK driver has been deprecated since the Icehouse release.

The OpenStack documentation provides a good starting point to see how the integration actually takes place.

OpenStack Nova – vSphere Integration

OpenStack Compute (Nova) supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features. The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store.
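In practice, enabling the vCenter driver mostly comes down to a handful of options in nova.conf on the node running the nova-compute service. The snippet below is indicative of the Icehouse/Juno-era option names and uses placeholder values; check the OpenStack configuration reference for your release.

```ini
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter server IP or hostname>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vSphere cluster presented to the Compute scheduler>
datastore_regex = <optional regex limiting which datastores Nova may use>
```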

VMware driver architecture

vmware-nova-driver-architecture

OpenStack – vSphere integration documentation

OpenStack Cinder – VMDK driver Integration

The VMware VMDK driver enables management of OpenStack Block Storage volumes on vCenter-managed datastores. Volumes are backed by VMDK files on datastores using any VMware-compatible storage technology (such as NFS, iSCSI, Fibre Channel, and VSAN).
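Similarly, a Cinder backend using the VMDK driver is configured with a few options in cinder.conf; the option names below reflect the driver’s documentation of that era and the values are placeholders.

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = <vCenter server IP or hostname>
vmware_host_username = <vCenter username>
vmware_host_password = <vCenter password>
```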

OpenStack – VMDK driver integration documentation

OpenStack Neutron – NSX Integration

OpenStack Networking uses the NSX plugin for Networking to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plugin enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they’re added as hypervisors to the NSX controller.

The diagram below depicts an example NSX deployment and illustrates the route inter-VM traffic takes between separate Compute nodes. Note the placement of the VMware NSX plugin and the neutron-server service on the network node. The NSX controller features centrally with a green line to the network node to indicate the management relationship:

vmware_nsx

OpenStack – NSX plug-in configuration

VIO

As mentioned before, the VMware-supported VIO solution will allow you to use VMware’s infrastructure components without needing to do all the required configuration manually. VIO will initially be available as a private beta, for which you can sign up here.
