VMware NSX with Trend Micro Deep Security 9.5

I recorded a brief demonstration showing the integration between Trend Micro Deep Security and VMware NSX.


vSphere 6 – vMotion nonpareil

I remember when I was first asked to install ESX 2.x for a small internal project, it must have been somewhere around 2004-2005. I was working at a local systems integrator and got the task because no one else was around at the time. While researching what this thing was and how it actually worked I got sucked in, and some time later, when I saw VMotion (sic) for the first time, I was hooked.

Sure I’ve implemented some competing hypervisors over the years, and even worked for the competition for a brief period of time (in my defence, I was in the application networking team ;-) ). But somehow, at the time, it was like driving a kit car when you knew the real deal was parked just around the corner.

So today VMware announced the upcoming release of vSphere 6, arguably the most recognised product in the VMware stable and the foundation for many of its other solutions.

A lot of new and improved features are available, but I wanted to focus specifically on the feature that impressed me the most so many years ago: vMotion.

In terms of vMotion in vSphere 6 we now have:

  • Cross vSwitch vMotion
  • Cross vCenter vMotion
  • Long Distance vMotion
  • Increased vMotion network flexibility

Cross vSwitch vMotion

Cross vSwitch vMotion allows you to seamlessly migrate a VM across different virtual switches while performing a vMotion operation. This means you are no longer restricted by the networks you created on the vSwitches in order to vMotion a virtual machine. It also works across a mix of standard and distributed virtual switches. Previously, you could only vMotion from vSS to vSS or within a single vDS.


The following Cross vSwitch vMotion migrations are possible:

  • vSS to vSS
  • vSS to vDS
  • vDS to vDS (including metadata, i.e. statistics)

Note that vDS to vSS is not possible.

The main use case for this is data center migrations whereby you can migrate VMs to a new vSphere cluster with a new virtual switch without disruption. It does require the source and destination portgroups to share the same L2. The IP address within the VM will not change.
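To make the mechanics a bit more concrete, here is a minimal pyVmomi sketch of a Cross vSwitch vMotion driven through the vSphere 6.0 API: the RelocateSpec carries a deviceChange entry that re-wires the VM’s NIC to a portgroup on a different (distributed) switch as part of the migration. All hostnames, credentials, object names, and the disabled certificate verification are lab placeholders, not a recommendation.

```python
# Hedged pyVmomi sketch of a Cross vSwitch vMotion (vSphere 6.0 API).
# Hostnames, credentials, and object names below are placeholders for a lab.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "web-01")
target_host = find_by_name(vim.HostSystem, "esx02.lab.local")
target_pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, "dvPG-Web")

# Take the VM's existing NIC and point its backing at the destination portgroup
# (here a vDS portgroup, i.e. a vSS -> vDS migration).
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=target_pg.key,
        switchUuid=target_pg.config.distributedVirtualSwitch.uuid))

# New in the vSphere 6.0 API: RelocateSpec.deviceChange lets the NIC re-wiring
# happen as part of the vMotion itself.
spec = vim.vm.RelocateSpec(
    host=target_host,
    deviceChange=[vim.vm.device.VirtualDeviceSpec(operation="edit", device=nic)])
task = vm.RelocateVM_Task(spec)
print("Relocate task started:", task.info.key)
```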

Cross vCenter vMotion

Expanding on the Cross vSwitch vMotion enhancement, vSphere 6 also introduces support for Cross vCenter vMotion.

vMotion can now perform the following changes simultaneously:

  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.

and finally…

  • Change vCenter (Cross vCenter vMotion) – Performs the migration of the vCenter which manages the VM.

As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not change. This functionality builds upon Enhanced vMotion, and shared storage is not required.


The use cases include migrating from a Windows-based vCenter to the vCenter Server Appliance (also take a look at the scale improvements of the vCSA in vSphere 6), i.e. no more Windows and SQL licenses needed, replacing vCenters without disruption, and the possibility to migrate VMs across local, metro, and cross-continental distances.
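For completeness, a hedged pyVmomi sketch of what a Cross vCenter vMotion looks like at the API level: a RelocateSpec whose service property is a ServiceLocator pointing at the destination vCenter. Hostnames, credentials, the SSL thumbprint, and object names are placeholders, and error handling is omitted.

```python
# Hedged pyVmomi sketch of a Cross vCenter vMotion (vSphere 6.0 API).
# All hostnames, credentials, thumbprints, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

def connect(host):
    return SmartConnect(host=host, user="administrator@vsphere.local", pwd="secret",
                        sslContext=ssl._create_unverified_context())

def find_by_name(si, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

src = connect("vcenter-a.lab.local")   # vCenter that currently manages the VM
dst = connect("vcenter-b.lab.local")   # vCenter we are migrating to

vm = find_by_name(src, vim.VirtualMachine, "web-01")
spec = vim.vm.RelocateSpec(
    host=find_by_name(dst, vim.HostSystem, "esx10.lab.local"),
    pool=find_by_name(dst, vim.ResourcePool, "Resources"),
    datastore=find_by_name(dst, vim.Datastore, "datastore-b"),
    folder=find_by_name(dst, vim.Folder, "vm"),
    # The ServiceLocator is what turns this into a Cross vCenter vMotion.
    service=vim.ServiceLocator(
        url="https://vcenter-b.lab.local",
        instanceUuid=dst.content.about.instanceUuid,
        sslThumbprint="AA:BB:CC:DD:...",  # SHA-1 thumbprint of vcenter-b's certificate
        credential=vim.ServiceLocator.NamePassword(
            username="administrator@vsphere.local", password="secret")))

# The VM keeps its IP address, so the destination portgroup must offer the same L2.
task = vm.RelocateVM_Task(spec)
print("Cross vCenter vMotion started:", task.info.key)
```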

Long Distance vMotion

Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter servers are spread across large geographic distances and the latency between sites is 100 ms or less.

The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with the addition that the maximum latency between the source and destination sites must be 100 ms or less and that 250 Mbps of bandwidth must be available.

The operation is serialised, i.e. it is a VM-per-VM operation, each migration requiring 250 Mbps.

The VM network will need to be a stretched L2 because the IP of the guest OS will not change; if the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means that in some topologies, such as metro or cross-continental distances, you will need a stretched L2 technology in place. The stretched L2 technology is not specified (VXLAN is an option here, as are NSX L2 gateway services); any technology that can present the L2 network to the vSphere hosts will work, as ESXi is unaware of how the physical network is configured.

Increased vMotion network flexibility

ESXi 6 will have multiple TCP/IP stacks. This enables vSphere to improve scalability and offers flexibility by isolating vSphere services on their own stacks. It also allows vMotion to work over a dedicated Layer 3 network, since vMotion can now have its own memory heap, ARP and routing tables, and default gateway.

In other words, the VM network still needs L2 connectivity since the virtual machine has to retain its IP, while the vMotion, management, and NFC networks can all be L3 networks.
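A small, hedged sketch of what using the dedicated vMotion TCP/IP stack looks like from the host side: creating a VMkernel interface on the “vmotion” netstack with esxcli, driven over SSH with paramiko. The host, portgroup, and addressing are placeholders, and the exact esxcli option names can vary between ESXi builds, so treat the commands as assumptions and verify them with `esxcli network ip interface add --help` on your host.

```python
# Hedged sketch: put a new VMkernel interface on ESXi 6.0's dedicated "vmotion"
# TCP/IP stack using esxcli over SSH. Placeholder host/portgroup/addresses; the
# esxcli options shown are an assumption to verify on your build.
import paramiko

HOST, USER, PASSWORD = "esx01.lab.local", "root", "secret"

commands = [
    "esxcli network ip netstack list",                                   # the 'vmotion' stack should be listed
    "esxcli network ip interface add -i vmk2 -p PG-vMotion -N vmotion",  # new vmknic bound to the vMotion stack
    "esxcli network ip interface ipv4 set -i vmk2 -I 10.10.60.11 -N 255.255.255.0 -t static",
]

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)
for cmd in commands:
    _, stdout, stderr = client.exec_command(cmd)
    print(cmd, "->", (stdout.read() or stderr.read()).decode().strip())
client.close()
```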


Introducing VMware vCloud Air Advanced Networking Services

VMware just announced some additions to its public cloud service, vCloud Air; one of the additions is advanced networking services powered by VMware NSX. Today the networking capabilities of vCloud Air are based on vCNS features; moving forward these will be provided by NSX.

Looking at the connectivity options from your data center towards vCloud Air today, you have:

  • Direct Connect, a private circuit such as MPLS or Metro Ethernet
  • IPsec VPN
  • Access via the WAN to a public IP address (think web hosting)

By switching from vCNS Edge devices to NSX Edge devices vCloud Air is adding SSL VPN connectivity from client devices to the vCloud Air Edge.


By using the NSX Edge Gateway, VMware is adding dynamic routing support (OSPF, BGP) and a full-fledged L4-L7 load balancer (based on HAProxy) that also provides SSL offloading.
As mentioned before, SSL VPN to the vCloud Air network is also available, with clients for Mac OS X, Windows, and Linux.
Furthermore, the number of available interfaces has been greatly increased from 9 to 200 sub-interfaces, and the system now also provides distributed firewall capabilities (firewall policy linked to the virtual NIC of the VM).


The NSX Edge can use BGP to exchange routes between vCloud Air and your on-premises equipment over Direct Connect. NSX Edge can also use OSPF to exchange internal routes between NSX Edges or with an L3 virtual network appliance.


NSX also introduces the concept of Micro-Segmentation to vCloud Air. This allows implementation of firewall policy at the virtual NIC level. In the example below we have a typical 3-tier application with each tier placed on its own L2 subnet.


With micro-segmentation you can easily restrict traffic to the application and database tiers while allowing traffic to the web tier, even though (just as an example) all these VMs sit on the same host. Assuming that at some point you will move these VMs from one host to another, security policy will follow the VM without the need for reconfiguration in the network. Or you can implement policy that does not allow VMs to talk to each other even though they sit on the same L2 segment.


If you combine this with the security policy capabilities in the NSX Edge Gateway, you can easily implement firewall rules for both north-south and east-west traffic. The system also allows you to build a “container” by grouping certain VMs together and applying policies to the group as a whole. For example, you could create two applications, each consisting of one VM from each tier (web, app, db), and set policies at a granular level. As a service provider you can very easily create a system that supports multiple tenants in a secure fashion. Furthermore, this also allows you to set policies and move VMs from on-premises to vCloud Air while retaining the network and security configuration.
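To illustrate the model (a toy example in plain Python, not the NSX API or rule format), the vNIC-level policy for the 3-tier application described above boils down to a small rule table that is evaluated regardless of which host or L2 segment the VMs happen to share:

```python
# Toy model of vNIC-level micro-segmentation for a 3-tier app. This only
# illustrates the rule set; it is not NSX's actual API or policy language.
ALLOW, DENY = "allow", "deny"

RULES = [
    # (source tier, destination tier, destination port, action)
    ("any", "web", 443,   ALLOW),   # north-south: anyone may reach the web tier on HTTPS
    ("web", "app", 8443,  ALLOW),   # web tier may talk to the app tier
    ("app", "db",  3306,  ALLOW),   # app tier may talk to the database
    ("web", "web", "any", DENY),    # block east-west traffic inside the web tier
]

def evaluate(src_tier, dst_tier, port, default=DENY):
    """Return the action of the first matching rule, defaulting to deny."""
    for src, dst, p, action in RULES:
        if src in (src_tier, "any") and dst in (dst_tier, "any") and p in (port, "any"):
            return action
    return default

print(evaluate("internet", "web", 443))  # allow
print(evaluate("web", "db", 3306))       # deny -- web servers cannot bypass the app tier
print(evaluate("web", "web", 22))        # deny -- same L2 segment, still blocked at the vNIC
```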


New Year, New Job.

I’m super excited to be taking on a new role in the NSBU at VMware: as of the 1st of January I’ll officially be joining the team as a Sr. Systems Engineer for the Benelux. I’ll be focused mainly on VMware NSX, including its integrations with other solutions (like vRA and OpenStack, for example).

Unofficially I’ve been combining this function with my “real” job for a couple of months now, ever since a dear and well-respected colleague decided to leave VMware. Recently I was fortunate enough to get the opportunity to attend a two-week training at our Palo Alto campus on NSX-v, NSX-MH, OpenStack, VIO, OVS,…


The experience was face-meltingly good, I definitely learned a lot and got the opportunity to meet many wonderful people. One conclusion is that the NSX team certainly is a very interesting and exciting place to be in the company.

In the last few months I got my feet wet by training some of our partner community on NSX (most are very excited about the possibilities, even the die-hard hardware fanatics), staffing the NSX booth at VMworld Europe, and having some speaking engagements, like my NSX session at the Belgian VMUG.


So why NSX?

In the past I’ve worked on a wide variety of technologies (being in a very small country and working for small systems integrators you need to be flexible, and I guess it’s also just the way my mind works #squirrel!), but networking and virtualisation are my two main fields of interest, so how convenient that both are colliding!
I’ve been a pure networking consultant in the past, mainly working with Cisco and Foundry/HP ProCurve, and then moved more into application networking at Citrix ANG and Riverbed.

The whole network virtualisation and SDN field (let’s hold off the discussion of what’s what for another day) is on fire at the moment and is making the rather tedious and boring (actually I’ve never really felt that, but I’m a bit of a geek) field of networking exciting again. The possibilities and promise of SDN have lots of potential to be disruptive and change an industry, and I’d like to wholeheartedly and passionately contribute and be a part of that.

As NSX is an enabling technology for a lot of other technologies, it needs to integrate with a wide variety of solutions. Two VMware solutions that will have NSX integrated, for example, are EVO:RACK and VIO. I look forward to also working on those and hopefully finding some time to blog about them as well.

Other fields are also looking to the promise of SDN to enable new ways of getting things done, like SocketPlane for example, which is trying to bring together Open vSwitch and Docker to provide pragmatic Software-Defined Networking for container-based clouds. As VMware is taking on a bigger and bigger role in the Cloud Native Apps space, it certainly will be interesting to help support all these efforts.

“if you don’t cannibalise yourself, someone else will”
-Steve Jobs

I’m enjoying a few days off with my family and look forward to returning in 2015 to support the network virtualisation revolution!



NSX Logical Load Balancer

VMware NSX is a network virtualisation and security platform that enables the inclusion of 3rd-party integrations for specific functionality, like advanced firewalling, load balancing, anti-virus, etc. That said, VMware NSX also provides a lot of services out of the box that, depending on the customer use case, can cover a lot of requirements.

One of these functionalities is load balancing using the NSX Edge virtual machine that is part of a standard NSX environment. The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests evenly (or otherwise, depending on your specific load balancing algorithm) among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilisation, maximising throughput, minimising response time, and avoiding overload. NSX Edge provides load balancing up to Layer 7.


The technology behind our NSX load balancer is based on HAProxy and as such you can leverage a lot of the HAProxy documentation to build advanced rulesets (Application Rules) in NSX.

As the picture above indicates, you can deploy the logical load balancer either “one-armed” (Tenant B example) or transparently inline (Tenant A example).

To start using the load balancing functionality you first need to deploy an NSX edge in your vSphere environment.

Go to Networking & Security in your vSphere Web Client and select NSX Edges, then click on the green plus sign to start adding a new NSX Edge. Select Edge Services Gateway as the install type and give your load balancer an indicative name.


Next you can enter a read-only CLI username and select a password of your choice. You can also choose to enable SSH access to the NSX Edge VM.


Now you need to select the size of the Edge VM. Because this is a lab environment I’ll opt for Compact, but depending on your specific requirements you should select the most appropriate size. Generally speaking, more vCPU drives more bandwidth and more RAM drives more connections per second. The available options are:

  • Compact: 1 vCPU, 512 MB
  • Large: 2 vCPU, 1024 MB
  • Quad Large: 4 vCPU, 1024 MB
  • X-Large: 6 vCPU, 8192 MB

Click on the green plus sign to add an Edge VM; you need to select a cluster/resource pool to place the VM in and select the datastore. Typically we would place NSX Edge VMs in a dedicated Edge rack cluster. For more information on NSX vSphere cluster design recommendations, please refer to the NSX design guide available here: https://communities.vmware.com/servlet/JiveServlet/previewBody/27683-102-3-37383/NSXvSphereDesignGuidev2.1.pdf

Since we are deploying a one-armed load balancer we only need to define one interface. Click the green plus sign to configure the interface: give it a name, select the logical switch it needs to connect to (i.e. where the VMs that I want to load balance are located), configure an IP address in this specific subnet (the VXLAN virtual switch), and you are good to go.


Next up configure a default gateway.

Configure the firewall default policy and select Accept to allow traffic.

Verify all settings and click Finish to start deploying the Edge VM.

After a few minutes the new Edge VM will be deployed, and you can start to configure load balancing for the VMs located on the Web-Tier-01 VXLAN virtual switch (as we selected before) by double-clicking the new Edge VM.

Click on Manage and select Load Balancer, then click Edit and select the Enable Load Balancer check box.


The next thing we need is an application profile. You create an application profile to define the behaviour of a particular type of network traffic. After configuring a profile, you associate it with a virtual server; the virtual server then processes traffic according to the values specified in the profile.

Click on Manage and select Load Balancer, then click Application Profiles and give the profile a name. In our case we are going to load balance two HTTPS web servers but are not going to proxy the SSL traffic, so we’ll select Enable SSL Passthrough.

Now we need to create a pool containing the web servers that will serve the actual content.

Select Pools, give the pool a name, and select the load balancing algorithm.

Today we can choose from:

  • Round-robin
  • IP Hash
  • Least connection
  • URI
  • HTTP Header
  • URL

Then select a monitor to check whether the servers are serving HTTPS requests (in this case).
You can add the members of the pool from the same interface.


Now we create the virtual server that will accept the front-end connections from the web clients and forward them to the actual web servers. Note that the protocol of the virtual server can be different from the protocol of the back-end servers, i.e. you could do HTTPS from the client web browser to the virtual server and HTTP from the virtual server to the back-end, for example.
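For those who prefer automation over clicking through the Web Client, the same configuration can also be pushed to the Edge through the NSX Manager REST API. The sketch below uses python-requests; the edge ID, credentials, and addresses are placeholders, and the XML bodies are abbreviated and approximate, so check the NSX vSphere API Guide for the exact schema of your NSX version before relying on it.

```python
# Hedged sketch: configure the Edge load balancer via the NSX Manager REST API.
# Edge ID, credentials, and addresses are placeholders; the XML bodies are
# abbreviated/approximate -- verify against the NSX vSphere API Guide.
import requests

NSX = "https://nsxmgr.lab.local"
AUTH = ("admin", "secret")
EDGE = "edge-1"   # the Edge Services Gateway deployed earlier

def post(path, xml):
    r = requests.post(NSX + path, data=xml, auth=AUTH, verify=False,
                      headers={"Content-Type": "application/xml"})
    r.raise_for_status()
    return r

# 1. HTTPS application profile with SSL passthrough enabled.
post("/api/4.0/edges/%s/loadbalancer/config/applicationprofiles" % EDGE, """
<applicationProfile>
  <name>https-passthrough</name>
  <template>HTTPS</template>
  <sslPassthrough>true</sslPassthrough>
</applicationProfile>""")

# 2. Round-robin pool containing the two web servers.
post("/api/4.0/edges/%s/loadbalancer/config/pools" % EDGE, """
<pool>
  <name>web-pool</name>
  <algorithm>round-robin</algorithm>
  <member><name>web-01</name><ipAddress>172.16.10.11</ipAddress><port>443</port></member>
  <member><name>web-02</name><ipAddress>172.16.10.12</ipAddress><port>443</port></member>
</pool>""")

# 3. Virtual server tying the VIP, the profile, and the pool together.
#    The pool/profile IDs are returned by the calls above; hard-coded here for brevity.
post("/api/4.0/edges/%s/loadbalancer/config/virtualservers" % EDGE, """
<virtualServer>
  <name>web-vip</name>
  <ipAddress>172.16.10.10</ipAddress>
  <protocol>https</protocol>
  <port>443</port>
  <defaultPoolId>pool-1</defaultPoolId>
  <applicationProfileId>applicationProfile-1</applicationProfileId>
  <enabled>true</enabled>
</virtualServer>""")
```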


Now you should be able to reach the IP (VIP) of the virtual server we just created and start load balancing your web application.

Since we selected round-robin as our load balancing algorithm, each time you hit refresh in your browser (use Control and click on refresh to avoid using the browser’s cache) you should see the other web server.
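A quick way to verify the round-robin behaviour from the command line instead of the browser (the VIP is a placeholder, verify=False is only acceptable with the lab’s self-signed certificates, and it assumes each web server returns a page identifying itself):

```python
# Hit the VIP a few times and print a marker from each response body.
import requests

VIP = "https://172.16.10.10/"   # the virtual server created above (placeholder)

for i in range(4):
    body = requests.get(VIP, verify=False).text
    print("request %d served by -> %s" % (i + 1, body.strip()[:60]))
# With round-robin and two pool members the responses should alternate.
```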


Belgian VMUG session on NSX

I will be presenting a session on VMware NSX at the 21st Belgian VMUG meeting on the 21st of November in Antwerp. You can register for it here.

The VMUG crew did an excellent job in getting an amazing line-up of speakers, including (in no particular order) Chris Wahl, Cormac Hogan, Mike Laverick, Duncan Epping, Eric Sloof, and Luc Dekens. Besides being able to meet these guys, you can also participate in Mike’s VMworld 2014 SwagBag (EVO:RAIL edition) charity raffle.

Running the best virtualization software AND donating to charity, that’s double Karma points right there ;-)



VMware Integrated OpenStack (VIO)

VMware just announced the VMware Integrated OpenStack (VIO) solution at VMworld.

What is it?

VIO is not a VMware OpenStack distribution; it is a fully validated (and supported) architecture for using VMware components in OpenStack programs. It provides pre-packaged VMware integration with OpenStack components to allow for easy deployment and management. In other words, if you want to use OpenStack with VMware this is the easiest way to do it. The basic integration with OpenStack existed before the VIO packaging; VIO just makes it easier to consume and provides a supported implementation option from VMware.


If you think about how you can consume OpenStack today, you basically end up with three choices: either you go with the DIY approach and start from the OpenStack source code to build your own solution (a bit like building your own Linux distro from source code, far-fetched and not really a path a lot of people tend to walk down), or you use a well-known distro from companies like Canonical, Mirantis, SUSE,… or you use a pre-integrated product like VIO.

VMware will allow you to leverage the VMware components either in the VIO product (tightly integrated and fully validated) or by using the components of your choice (including VMware and non-VMware ones) to build your own, loosely integrated stack. That is, after all, the beauty of the OpenStack APIs: choice AND standardisation.


What VMware components are we talking about?

Initially the focus is on vSphere (vCenter) for compute, NSX for networking, VSAN for storage, and vCenter Operations Manager for monitoring. As mentioned above, the integration was already there before; looking at the picture below you can see which OpenStack components we integrate with. Components in red are OpenStack programs, components in blue are VMware’s.


For the integration VMware provides open-source drivers: the vCenter driver for Nova, the NSX plugin for Neutron, and the vCenter VMDK driver for Cinder and Glance.
Note: the ESX VMDK driver has been deprecated since the Icehouse release.

The OpenStack documentation provides a good starting point to see how the integration actually takes place.

OpenStack Nova – vSphere Integration

OpenStack Compute (Nova) supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features. The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store.
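As a rough illustration, the nova.conf settings that point nova-compute at a vCenter cluster look something like the sketch below. Option names follow the Icehouse/Juno-era documentation, the values are placeholders, and configparser is only used to keep the example runnable as Python; verify the options against the release you deploy.

```python
# Hedged sketch of the nova-compute settings for the vCenter driver.
# Option names per the Icehouse/Juno-era docs; values are placeholders.
import configparser

nova_conf = configparser.ConfigParser()
nova_conf["DEFAULT"] = {
    "compute_driver": "vmwareapi.VMwareVCDriver",    # the vCenter driver (the ESX driver is deprecated)
}
nova_conf["vmware"] = {
    "host_ip": "vcenter.lab.local",                  # vCenter managing the ESXi cluster(s)
    "host_username": "administrator@vsphere.local",
    "host_password": "secret",
    "cluster_name": "Compute-Cluster-A",             # exposed to Nova as one big hypervisor; DRS picks the host
    "datastore_regex": "nfs-ds.*",                   # optional: restrict the datastores Nova may use
}

with open("nova.conf.sample", "w") as f:
    nova_conf.write(f)
```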

VMware driver architecture


OpenStack – vSphere integration documentation

OpenStack Cinder – VMDK driver Integration

The VMware VMDK driver enables management of OpenStack Block Storage volumes on vCenter-managed datastores. Volumes are backed by VMDK files on datastores using any VMware-compatible storage technology (such as NFS, iSCSI, Fibre Channel, and VSAN).
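The corresponding cinder.conf settings for the VMDK driver are along these lines (again, option names per the documentation of that era and values are placeholders; verify for your release):

```python
# Hedged sketch of the Cinder VMDK driver settings; values are placeholders.
import configparser

cinder_conf = configparser.ConfigParser()
cinder_conf["DEFAULT"] = {
    "volume_driver": "cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver",
    "vmware_host_ip": "vcenter.lab.local",
    "vmware_host_username": "administrator@vsphere.local",
    "vmware_host_password": "secret",
    "vmware_volume_folder": "cinder-volumes",   # optional: vCenter folder holding the volume backings
}

with open("cinder.conf.sample", "w") as f:
    cinder_conf.write(f)
```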

OpenStack – VMDK driver integration documentation

OpenStack Neutron – NSX Integration

OpenStack Networking uses the NSX plugin to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plugin enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they’re added as hypervisors to the NSX controller.
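A hedged sketch of the Neutron side for that era (Icehouse/Juno): the core plugin set in neutron.conf plus the NSX controller connection details, typically kept in nsx.ini. Plugin paths and option names moved around between releases, so treat these as assumptions and check the installation guide for your release.

```python
# Hedged sketch of the Neutron / NSX plugin settings (Icehouse/Juno era);
# plugin path and option names are assumptions to verify for your release.
import configparser

neutron_conf = configparser.ConfigParser()
neutron_conf["DEFAULT"] = {
    "core_plugin": "neutron.plugins.vmware.plugin.NsxPlugin",
}

nsx_ini = configparser.ConfigParser()
nsx_ini["DEFAULT"] = {
    "nsx_user": "admin",
    "nsx_password": "secret",
    "nsx_controllers": "192.168.110.201:443,192.168.110.202:443",   # NSX controller endpoints
    "default_tz_uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",      # transport zone for tenant networks
}

with open("neutron.conf.sample", "w") as f:
    neutron_conf.write(f)
with open("nsx.ini.sample", "w") as f:
    nsx_ini.write(f)
```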

The diagram below depicts an example NSX deployment and illustrates the route inter-VM traffic takes between separate Compute nodes. Note the placement of the VMware NSX plugin and the neutron-server service on the network node, and the management relationship between the NSX controller and the network node.


OpenStack – NSX plug-in configuration

VIO

As mentioned before, the VMware-supported VIO solution will allow you to use VMware’s infrastructure components without needing to do all the required configuration manually. VIO will initially be available as a private beta, for which you can sign up here.

