NSX Logical Load Balancer

VMware NSX is a network virtualisation and security platform that enables integration of 3rd party solutions for specific functionality, like advanced firewalling, load balancing, anti-virus, etc. That said, VMware NSX also provides a lot of services out of the box that, depending on the customer use case, can cover a lot of requirements.

One of these functionalities is load balancing using the NSX Edge Virtual Machine that is part of the standard NSX environment. The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests among multiple servers, evenly or according to the configured load balancing algorithm, in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to Layer 7.


The technology behind our NSX load balancer is based on HAProxy and as such you can leverage a lot of the HAProxy documentation to build advanced rulesets (Application Rules) in NSX.
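Since the Edge load balancer is HAProxy-based, Application Rules follow HAProxy configuration syntax. As a hypothetical sketch (the ACL name and the pool name `api-pool` below are made up for illustration), a rule that sends API traffic to a dedicated pool could look like this:

```
# Route requests whose path starts with /api to a separate pool
acl is_api path_beg /api
use_backend api-pool if is_api
```

The pool referenced by `use_backend` must exist as a pool on the same Edge; the HAProxy configuration manual documents the available `acl` criteria.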

As the picture above indicates, you can deploy the logical load balancer either “one-armed” (Tenant B example) or transparently inline (Tenant A example).

To start using the load balancing functionality you first need to deploy an NSX edge in your vSphere environment.

Go to Networking & Security in your vSphere Web Client and select NSX Edges, then click on the green plus sign to start adding a new NSX Edge. Select Edge Services Gateway as the install type and give your load balancer an indicative name.


Next you can enter a read-only CLI username and select a password of your choice. You can also choose to enable SSH access to the NSX Edge VM.


Now you need to select the size of the Edge VM. Because this is a lab environment I’ll opt for Compact, but depending on your specific requirements you should select the most appropriate size. Generally speaking, more vCPUs drive more bandwidth and more RAM drives more connections per second. The available options are:

  • Compact: 1 vCPU, 512 MB
  • Large: 2 vCPU, 1024 MB
  • Quad Large: 4 vCPU, 1024 MB
  • X-Large: 6 vCPU, 8192 MB

Click on the green plus sign to add an Edge VM; you need to select a cluster/resource pool to place the VM and select the datastore. Typically we would place NSX Edge VMs in a dedicated Edge rack cluster. For more information on NSX vSphere cluster design recommendations please refer to the NSX design guide available here: https://communities.vmware.com/servlet/JiveServlet/previewBody/27683-102-3-37383/NSXvSphereDesignGuidev2.1.pdf

Since we are deploying a one-armed load balancer we only need to define one interface. Click the green plus sign to configure the interface. Give it a name, select the logical switch it needs to connect to (i.e. where the VMs that you want to load balance are located), configure an IP address in that specific subnet (VXLAN virtual switch), and you are good to go.


Next up configure a default gateway.

Configure the firewall default policy and select Accept to allow traffic.

Verify all settings and click Finish to start deploying the Edge VM.

After a few minutes the new Edge VM will be deployed and you can start to configure load balancing for the VMs located on the Web-Tier-01 VXLAN virtual switch (as we selected before) by double-clicking the new Edge VM.

Click on Manage and select Load Balancer, next click on Edit and select the Enable Load Balancer check box.


The next thing we need is an application profile. You create an application profile to define the behavior of a particular type of network traffic. After configuring a profile, you associate the profile with a virtual server. The virtual server then processes traffic according to the values specified in the profile.

Click on Manage and select Load Balancer, then click on Application Profiles and give the profile a name. In our case we are going to load balance two HTTPS web servers but are not going to proxy the SSL traffic, so we’ll select Enable SSL Passthrough.

Now we need to create a pool containing the web servers that will serve the actual content.

Select Pools, give the pool a name, and select the load balancing algorithm.

Today we can choose between:

  • Round-robin
  • IP Hash
  • Least connection
  • URI
  • HTTP Header
  • URL

Then select a monitor to check whether the servers are serving HTTPS (in this case) requests.
You can add the members of the pool from the same interface.
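To make the round-robin algorithm concrete, here is a minimal Python sketch (the member names are made up for illustration); each new request is simply handed to the next pool member in turn:

```python
from itertools import cycle

# Hypothetical pool members; in NSX these would be the web servers
# you added to the pool.
pool = ["web-01", "web-02"]

# Round-robin hands each new request to the next member in turn.
rr = cycle(pool)
assignments = [next(rr) for _ in range(4)]
print(assignments)  # ['web-01', 'web-02', 'web-01', 'web-02']
```

With equal member weights the load alternates strictly between the two servers, which is exactly the behaviour we will observe in the browser test at the end of this post.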


Now we create the virtual server that will accept the front-end connections from the web clients and forward them to the actual web servers. Note that the protocol of the virtual server can be different from the protocol of the back-end servers; you could, for example, use HTTPS from the client web browser to the virtual server and HTTP from the virtual server to the back end.


Now you should be able to reach the IP (VIP) of the virtual server we just created and start load balancing your web application.

Since we selected round-robin as our load balancing algorithm, each time you hit refresh in your browser (use Control-click on refresh to avoid using the browser’s cache) you should see the other web server.


Belgian VMUG session on NSX

I will be presenting a session on VMware NSX at the 21st Belgian VMUG meeting on the 21st of November in Antwerp. You can register for it here.

The VMUG crew did an excellent job in getting an amazing line-up of speakers including (in no particular order)  Chris Wahl, Cormac Hogan, Mike Laverick, Duncan Epping, Eric Sloof, and Luc Dekens. Besides being able to meet these guys you can also participate in Mike’s VMworld 2014 SwagBag (EVO:RAIL edition) charity raffle.

Running the best virtualization software AND donating to charity, that’s double Karma points right there ;-)



VMware Integrated OpenStack (VIO)

VMware just announced the VMware Integrated OpenStack (VIO) solution at VMworld.

What is it?

VIO is not a VMware OpenStack distribution; it is a fully validated (and supported) architecture for using VMware components with OpenStack programs. It provides pre-packaged VMware integration with OpenStack components to allow for easy deployment and management. In other words, if you want to use OpenStack with VMware this is the easiest way to do it. The basic integration with OpenStack existed before the VIO packaging; VIO just makes it easier to consume and provides a supported implementation option from VMware.


If you think about how you can consume OpenStack today you basically end up with three choices: you go with the DIY approach and build your own solution from the OpenStack source code (a bit like building your own Linux distro from source, far-fetched and not really a path many people tend to walk down), or you use a well-known distro from companies like Canonical, Mirantis, SUSE, etc., or you use a pre-integrated product like VIO.

VMware will allow you to leverage the VMware components in either the VIO product (tightly integrated/fully validated) or by using the components of your choice (including VMware and non-VMware ones) to build your own (loosely integrated) stack. That is, after all, the beauty of the OpenStack APIs: choice AND standardisation.


What VMware components are we talking about?

Initially the focus is on vSphere (vCenter) for compute, NSX for networking, VSAN for storage, and vCenter Operations Manager for monitoring. As mentioned above, the integration was already there before; looking at the picture below you can see which OpenStack components we integrate with. Components in red are OpenStack programs, components in blue are VMware’s.


For the integration VMware provides open-source drivers: the vCenter driver for Nova, the NSX plugin for Neutron, and the vCenter VMDK driver for Cinder and Glance.
Note: the ESX VMDK driver has been deprecated since the Icehouse release.

The OpenStack documentation provides a good starting point to see how the integration actually takes place.

OpenStack Nova – vSphere Integration

OpenStack Compute (Nova) supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features. The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store.
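Based on the driver description above, the Nova side of the integration boils down to a few configuration options. A hypothetical nova.conf fragment (option names as documented for the Icehouse era; the placeholder values are assumptions, so verify against the release you are running):

```ini
[DEFAULT]
# Use the vCenter driver so nova-compute talks to vCenter, not ESX directly
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vcenter-ip>
host_username = <vcenter-user>
host_password = <vcenter-password>
# The cluster that will be presented to the scheduler as one hypervisor
cluster_name = <cluster-name>
```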

VMware driver architecture


OpenStack – vSphere integration documentation

OpenStack Cinder – VMDK driver Integration

The VMware VMDK driver enables management of OpenStack Block Storage volumes on vCenter-managed datastores. Volumes are backed by VMDK files on datastores using any VMware-compatible storage technology (such as NFS, iSCSI, Fibre Channel, and VSAN).
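A hypothetical cinder.conf fragment enabling the VMDK driver (Icehouse-era option names; placeholder values are assumptions):

```ini
[DEFAULT]
# vCenter-backed VMDK driver for Block Storage
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = <vcenter-ip>
vmware_host_username = <vcenter-user>
vmware_host_password = <vcenter-password>
```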

OpenStack – VMDK driver integration documentation

OpenStack Neutron – NSX Integration

OpenStack Networking uses the NSX plugin to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plugin enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they are added as hypervisors to the NSX controller.

The diagram below depicts an example NSX deployment and illustrates the route inter-VM traffic takes between separate Compute nodes. Note the placement of the VMware NSX plugin and the neutron-server service on the network node. The NSX controller features centrally with a green line to the network node to indicate the management relationship:
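For reference, the plug-in is enabled through Neutron’s configuration. A hypothetical fragment (Icehouse-era names; all values are placeholders, so check the plug-in documentation for your release):

```ini
# neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.vmware.plugin.NsxPlugin

# nsx.ini
nsx_user = <nsx-admin-user>
nsx_password = <nsx-password>
nsx_controllers = <controller-ip>:443
default_tz_uuid = <transport-zone-uuid>
```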


OpenStack – NSX plug-in configuration


As mentioned before, the VMware-supported VIO solution will allow you to use VMware’s infrastructure components without needing to do all the required configuration manually. The VIO beta release will initially be available as a private beta, for which you can sign up here.



VMware OpenStack Virtual Appliance

VMware provides an OpenStack Virtual Appliance, VOVA for short, to help VMware admins get some hands-on experience with using OpenStack in a VMware environment. It is however purely a proof of concept appliance and is not supported in any way by VMware. To find out more about the OpenStack effort at VMware in general please visit https://communities.vmware.com/community/vmtn/openstack

You can download the VOVA appliance OVF package (OpenStack Icehouse release) here.


VOVA only works with vCenter 5.1 and above and only supports a single Datacenter; as a precaution you should also not run production workloads on this cluster. If you are running multiple hosts in your cluster you should enable DRS in “fully automated” mode, and the cluster should have only one shared datastore for all the hosts. It is recommended to deploy the VOVA appliance on a host that is not part of the cluster managed by the appliance (so you can manage multiple clusters in your vCenter instance).

When the OVF package is deployed it will show you the OpenStack Dashboard URL on first boot.


You can log in with the credentials demo/vmware.


The appliance comes pre-loaded with a Debian disk image which allows you to easily launch new instances.


Spawning the first VM can take a while because the 1 GB Debian disk image must be copied from the file system of the VOVA appliance to your cluster’s Datastore. All subsequent instances should be significantly faster (under 30 seconds).

VOVA also allows you to test the OpenStack CLI tools, which directly access the OpenStack APIs; you need to SSH (root/vmware) into the VOVA console and run the CLI commands from there.


The vCenter Web Client plug-in for OpenStack is also included, allowing you to see OpenStack instances from the Web Client.


Currently the VOVA appliance has some limitations:

  • No Neutron support: Neutron with vSphere requires the VMware NSX solution. We plan to release a future version of VOVA that can optionally leverage NSX.
  • No Security Groups support. With vSphere, VMware NSX is required for security groups network filtering. We plan to release a future version of VOVA that can optionally leverage NSX.
  • No exposed options to configure floating IPs. This is possible with the current appliance, but it has not been exposed via the OVF options.
  • No support for sparse disks. If you try to upload your own disk images, the images must be flat, not sparse.
  • No support for Swift (object storage). VOVA has no plans to leverage OpenStack swift for object storage. You are free to deploy swift on your own in another VM.

Also keep in mind that VOVA is not a product and will likely be discontinued once production-quality solutions with similar ease of use are made available (remember that there is nothing “special” about VOVA; it is just the open-source OpenStack code running on Ubuntu, with proper configuration for using vSphere). In the months following, however, expect to see updated versions of VOVA with the option of using Neutron + security groups via VMware NSX.


OpenStack Summer Reading

Seeing that I’m now getting an out-of-office hit rate of above 50% (I’m based in EMEA) I thought it might be interesting to share a summer reading list. I’m going (or already went, in most cases) through this material myself during my planned downtime in the second half of August.

I’m very interested in OpenStack and have been fiddling with it for quite some time. I like how it touches a lot of technologies and evolves at breakneck speed (it has become the fastest-growing open source project of all time).

Recently a team from VMware, EMC, Cisco, Cloudscaling, Comcast, Mirantis, Rackspace, Red Hat, and Verizon completed a 5-day book sprint (for more info on book sprints visit booksprints.net), writing the OpenStack Architecture Design Guide.


I think this is a good resource for expanding your OpenStack knowledge if you are planning a design. For getting started with OpenStack without any background I recommend the OpenStack website, as it has some great introductions to the different technologies. The architecture design guide explains some common use cases and what to look out for in each of them. As with most cloud automation/orchestration frameworks, it is not really feasible to give a lot of prescriptive step-by-step advice, since the system is so versatile that you really need to look at your specific use case; the book, however, attempts, successfully, to provide some. It does not cover installation and operations; for this another great resource is available for free, the OpenStack Operations Guide.

If you are more interested in the security aspect of running OpenStack there is another guide available that was also the result of a book sprint, called the OpenStack Security Guide.

The OpenStack Wiki is also a great place to get started, but I often find a nicely packaged book more suited to learning.

The complete set of current OpenStack documentation can be found here. You can also contribute to OpenStack without needing to provide code by helping out the documentation effort, more information on how this works can be found here.

You can find an immense amount of OpenStack sessions on YouTube as well, but since video has no place in a reading list… (just google it).

VMware also supports OpenStack integration by providing open source drivers (for Nova and Cinder) and plugins (for Quantum/Neutron) to integrate with our products. You can find, and read about, these on the VMware OpenStack community site.


VMware also provides an (unsupported) vSphere OpenStack Virtual Appliance (VOVA) to allow for easy testing, proofs of concept, and educational purposes. It is a single Ubuntu Linux appliance running Nova, Glance, Cinder, Neutron, Keystone, and Horizon. There is also a VMware + OpenStack Hands-On Lab available for you to experience the integration first hand.

Enjoy the summer!


Zero Trust Network Architecture and Micro-Segmentation

A killer application

As defined by Wikipedia: In marketing terminology, a killer application (commonly shortened to killer app) is any computer program that is so necessary or desirable that it proves the core value of some larger technology, such as computer hardware, gaming console, software, a programming language, software platform, or an operating system. In other words, customers would buy the underlying technology just to run that application.

The Zero Trust Network Architecture

There is a simple philosophy at the core of Zero Trust: Security professionals must stop trusting packets as if they were people. Instead, they must eliminate the idea of a trusted network (usually the internal network) and an untrusted network (external networks). In Zero Trust, all network traffic is untrusted. Thus, security professionals must verify and secure all resources, limit and strictly enforce access control, and inspect and log all network traffic.

The core concepts of Zero Trust are:

  • There is no longer a trusted and an untrusted interface on our security devices.
  • There is no longer a trusted and an untrusted network.
  • There are no longer trusted and untrusted users.

Zero Trust mandates that information security pros treat all network traffic as untrusted. Zero Trust doesn’t say that employees are untrustworthy but that trust is a concept that information security pros should not apply to packets, network traffic, and data. The malicious insider reality demands a new trust model. By changing the trust model, we reduce the temptation for insiders to abuse or misuse the network, and we improve our chances of discovering security breaches before they impact the environment.


Micro-segmentation wraps security controls around much smaller groups of resources – often down to a small group of virtualized resources or individual VMs. It has long been understood to be a best-practice approach from a security perspective, but has been difficult to apply in traditional environments.


Traditionally, network segmentation is a function of a switch. From a network perspective micro-segmentation is a design where each device on a network gets its own dedicated segment (collision domain) to the switch. Each network device gets the full bandwidth of the segment and does not have to share the segment with other devices. Micro-segmentation reduces and can even eliminate collisions because each segment is its own collision domain.

From a security perspective, traditional segmentation works by implementing (next-generation) firewalls that act as “choke points” on the network. When application traffic is directed towards the firewall, it enforces its rule set and packets are blocked or allowed through. This is a completely workable solution if you are just implementing controls (choke points) at a limited number of places in the network, i.e. between the internal and external network, between business networks and the production network, etc. If you applied this same method to micro-segmentation, however, you would need to invest in large network boxes, and with the advent of VM mobility you would potentially need to constantly update security rules to keep your policies up to date.

VMware NSX and micro-segmentation

In virtual machine land we would normally connect VMs to a virtual switch (port group) and pipe that combined traffic to a network choke point, i.e. a firewall; this breaks the idea of the Zero Trust architecture, as you are already grouping certain VMs together. With NSX, policy can be applied at the individual VM level, independent of placement. Every VM is literally first connected to the in-kernel stateful firewall before traffic goes out on the network. This implies that security can be implemented independently of the way the logical network is architected: it does not matter whether a particular VM is grouped with other VMs in, say, a DMZ segment; traffic will be filtered between VMs, making sure Zero Trust is maintained.
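To make the per-VM filtering idea concrete, here is a toy Python sketch of default-deny, per-VM policy (this is not the NSX API; the rule format and VM names are invented for illustration):

```python
# Toy illustration of zero-trust, per-VM filtering: every flow between
# VMs is checked against explicit rules, regardless of which segment
# the VMs share.
rules = [
    # (source VM, destination VM, destination port, action)
    ("web-01", "db-01", 3306, "allow"),
]

def filter_packet(src, dst, port):
    """Default deny: traffic passes only if an explicit rule allows it."""
    for r_src, r_dst, r_port, action in rules:
        if (r_src, r_dst, r_port) == (src, dst, port):
            return action
    return "block"

print(filter_packet("web-01", "db-01", 3306))  # allow
print(filter_packet("web-01", "web-02", 22))   # block, even inside the same segment
```

Note how the second flow is blocked even though both VMs could sit on the same logical switch; placement plays no role in the decision.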


On job-hopping and naivety

If you’re only interested in my technical posts feel free to skip this one; this is the second in a series of Sunday posts where I try to take a step back and structure my thoughts on work and career.

When you look at my LinkedIn profile, it certainly looks like I like to change jobs on a regular basis, but in fact I hate that; I think of myself as a very committed and loyal employee and I (enjoy?) work(ing) long and crazy hours in service of my company. I subscribe to the (naive?) notion of building out a long-term career that is mutually beneficial to myself and my company; at the same time I also strive for authenticity, and sometimes both collide. Read on…

I always start a new job fully committed; if I can’t get excited about the prospect of working at company XYZ I will not even think about signing on, no matter how good the head-hunter, no matter how big the carrot. I believe herein, at least partly, lies the problem: high expectations and an idolized view of the company are rarely met, and then disillusionment sets in. A company is not some abstract concept; what you perceive on the outside are its products, its spokespeople, its community participation, etc. This forms a complete picture, which you then see through your own lens. Looking from the inside out is of course very different from looking from the outside in; every company has its warts and blind spots, they are usually just in different places in the organisation.


So where then does the train start to go off the track?

Like I said, and this can surely also be construed as naivety or a failure to see reality on my part, I’m always very much committed to doing my best work when I start someplace new. I did my research on the products and solutions, and something got me excited enough to start believing, very much like being committed to a cause.

As Horace Mann’s injunction states:

Until you have done something for humanity you should be ashamed to die. -Horace Mann

Not everyone around you feels the same way, not everyone is motivated by the same things, not everyone feels they need to invest a disproportionate part of their life into their career, and that is totally OK. I just can’t help feeling a little disappointed by it, and then I feel I need to get moving: look for other like-minded people, driven by a bigger sense of purpose, naive as it may sound.

I really like what Dan Pink has said about what motivates us in his TED talk “The puzzle of motivation.”


Autonomy, mastery, and purpose are indeed the driving concepts behind my career, and I would gamble this is true for most of us; maybe I would add a fourth one: sticking to one’s principles and having a deep sense of justice. When I say “one’s principles” I also mean the principles of the company. Oftentimes being on the outskirts (I live in Europe) of a multinational corporation seems to somewhat dilute the message set forth at corporate, like a game of Chinese whispers; if not that, at least it feels like having fewer believers and more cynics around.

The only thing necessary for the triumph of evil is that good men do nothing. – Edmund Burke

This is usually the one that gets me in trouble and ultimately makes me vote with my feet. If this translates to the outside world as giving up too easily and being a job-hopper, that’s unfortunate; to me it translates to standing up for your beliefs.

When you stand for nothing, you fall for everything -Alexander Hamilton

In terms of people, I think the late Randy Pausch put it beautifully:

Wait long enough and people will surprise and impress. When you’re pissed off at someone and you’re angry at them, you just haven’t given them enough time. Just give them a little more time and they almost always will impress you. -Randy Pausch

I want to believe that…

People who know me socially and on Facebook (see what I did there?) will corroborate that I like to joke around, regularly get on my high horse, and pick on stuff, people, and companies. This is not reality of course; I don’t really think your company is stupid and can do no right. One of my favourites to pick on is Microsoft:

Hyper-V, virtualization brought to you by the same geniuses who invented Internet Explorer

In reality I think Microsoft is a fine company, with lots of great people like Mark Russinovich, Scott Hanselman, Scott Guthrie, and many others whom I respect. I rarely subscribe to a Technology Religion just for the sake of religion. This translates to my employers as well: I’m perfectly capable of seeing the bigger picture, I understand the reason things sometimes are the way they are, and I get why a certain decision makes sense at a certain point in time even if it goes against core principles and values, but that does not mean I have to agree with it.

Another example that perfectly describes my sentiment of what usually happens when the idea of working somewhere has little in common with reality is a scene from the episode “And It’s Surely to Their Credit” of the acclaimed TV series The West Wing, in which Republican Ainsley Hayes takes a job working in a Democrat-led White House out of respect for the institution and ends up, temporarily at least, feeling let down:

Sam Seaborn: See, I was told you were just going to be working in the Majority Counsel’s office, which I wasn’t wild about to begin with, but it’s my understanding I’d be talking to Brookline and Joyce, seeing as how they work for me.
Ainsley Hayes: I was taking initiative.
Sam Seaborn: Well, wasn’t that spunky of you.
Ainsley Hayes: Sam, do you think there’s any chance that you could be rude to me tomorrow? Tomorrow is Saturday. I will be here. You can call me and be rude by phone or you can stop by and do it in person. ‘Cause I think if I have to endure another disappointment today from this place that I have worshipped, I am gonna lose it. So if you could wait until tomorrow, I would appreciate it.

Looking back, I think I can come to the conclusion that I feel more at home in a “start-uppy” environment; this can be a real start-up or a specific division inside a bigger company that is going against the norm and trying to disrupt by trying something new. I like taking the road less travelled, I like pulling threads to see where they lead, I like doing something that goes against corporate dogma. I hate “this is not how it works here”, “we’ve always done it like this”, “just give it a couple of months, you’ll see”.

People who say it cannot be done should not interrupt those who are doing it. – George Bernard Shaw

So next time you throw away a resume because the person applying has had too many jobs in the past, you could very well be denying yourself one of your most committed and motivated employees, if only you could figure out how to better enable him or her.
