Burn the heretic!

Talking about new ways to “fix” old problems, or about enabling functionality that wasn’t even possible before by introducing something that goes against established doctrine, can be an interesting experience.

The flip side of the coin is that a lot of “new ways” are greatly oversold; of course a certain technology or product can’t fix ALL your problems, so keep asking the tough questions.

But by keeping an open mind maybe you can see value that wasn’t there before; maybe by embracing change your world can become a whole lot more interesting; maybe you can become the automator instead of eventually becoming the automated.

“You can either be a victim of the world or an adventurer in search of treasure.”

-Paulo Coelho

But, but, if we were to implement that we would lose x-y-z…

Back in the days of the mainframe (wait, didn’t IBM just release the z13? Anyway…) you could do end-to-end performance tracing. You could issue an I/O and follow the transaction throughout the system (connect time, disconnect time, seek time, rotational delay, …). This worked because the mainframe was a single monolithic system: it had a single clock against which the I/O transaction could be measured, and the protocol carrying the I/O cared about this metadata and allowed it to be traced. Today we have distributed systems, and tracing something end to end is a whole lot trickier, but that hasn’t stopped us from evolving, because we saw the value and promise of the “new way”. What we have gained is much greater than what we have lost. Where you differentiate yourself has changed, the value you can get from IT has moved up the stack, and we live in a world of abundance now, not a world of scarcity.


It’s all about the use case, stupid!

I work for a vendor, and I evangelise a new way of looking at networking through network virtualisation/software-defined networking (God, I hate that I need to write both terms in order not to upset people; who cares what we call it, seriously). Obviously this stirs up a lot of controversy among “traditional” networking people, some of it warranted, some of it not. Just like it did when we first started talking about server virtualisation. In the end it comes down to the use case: every technology ever created was designed with a specific set of use cases in mind. If those use cases make sense for your organisation, if they can somehow move your business forward, maybe it is worth a (second) look.

I won’t spend time talking about my specific solution’s use cases; that’s not what this blog post is about.

A very interesting new way (in my humble opinion, at least) of looking at things in a software-defined world is the concept of machine learning: systems that use data to make predictions or decisions rather than requiring explicit programming. What if the network could look at what is happening on the network, combine this with historical data (historical analytics), and make autonomous decisions about the best future configuration? It could, for example, redirect traffic by predicting congestion (using near-real-time analytics), or rearrange paths based on a workload’s requirements (using predictive analytics).
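To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of near-real-time prediction described above. The class name, window size, and threshold are all hypothetical, and a real system would use far richer telemetry and models than a moving average:

```python
# Toy congestion predictor: all names and thresholds are made up for
# illustration; this is NOT how any real SDN controller is implemented.

from collections import deque

class CongestionPredictor:
    """Predicts congestion on a link from a sliding window of
    utilisation samples (0.0 - 1.0) using a simple moving average."""

    def __init__(self, window=5, threshold=0.8):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, utilisation):
        # Record the most recent utilisation sample; the deque
        # automatically discards samples older than the window.
        self.samples.append(utilisation)

    def congestion_likely(self):
        # Don't predict until we have a full window of history.
        if len(self.samples) < self.samples.maxlen:
            return False
        avg = sum(self.samples) / len(self.samples)
        return avg >= self.threshold

predictor = CongestionPredictor(window=3, threshold=0.8)
for u in [0.5, 0.9, 0.95, 0.95]:
    predictor.observe(u)
print(predictor.congestion_likely())  # True: recent average is above 0.8
```

A controller could act on such a signal by pre-emptively steering new flows to a less loaded path, which is the “autonomous decision” part of the idea.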

This kind of thinking requires a capable and unbound platform, something that can quickly adapt and incorporate new functionality. We now have big data platforms that can give us these insights using analytics; combine this with a programmable network and we have a potent solution for future networking.

Someone who is very active and vocal in this space is David Meyer, currently CTO for Service Providers and Chief Scientist at Brocade. I highly recommend checking out some of his recent talks, transcripts of which you can find on his webpage at http://www.1-4-5.net/~dmm/vita.html, or having a look at the YouTube video below for his presentation during Network Field Day 8, where he talks about the concept of Software Defined Intelligence.

*Regarding the title: yes, I am a Warhammer 40k fanboy ;-)

Posted in Networking

VMware NSX with Trend Micro Deep Security 9.5

I recorded a brief demonstration showing the integration between Trend Micro Deep Security and VMware NSX.

Posted in Networking, NSX, SDN, vmware

vSphere 6 – vMotion nonpareil

I remember when I was first asked to install ESX 2.x for a small internal project; it must have been somewhere around 2004-2005. I was working at a local systems integrator and got the task because no one else was around at the time. While researching what this thing was and how it actually worked I got sucked in, and some time later, when I saw VMotion (sic) for the first time, I was hooked.

Sure, I’ve implemented some competing hypervisors over the years, and I even worked for the competition for a brief period of time (in my defence, I was in the application networking team ;-) ). But somehow, at the time, it was like driving a kit car while knowing the real deal was parked just around the corner.

So today VMware announced the upcoming release of vSphere 6, arguably the most recognised product in the VMware stable and the foundation for many of its other solutions.

A lot of new and improved features are available, but I wanted to focus specifically on the feature that impressed me the most so many years ago: vMotion.

In terms of vMotion, in vSphere 6 we now have:

  • Cross vSwitch vMotion
  • Cross vCenter vMotion
  • Long Distance vMotion
  • Increased vMotion network flexibility

Cross vSwitch vMotion

Cross vSwitch vMotion allows you to seamlessly migrate a VM across different virtual switches while performing a vMotion operation; this means that you are no longer restricted by the network you created on the vSwitches when you vMotion a virtual machine. It also works across a mix of standard and distributed virtual switches. Previously, you could only vMotion from vSS to vSS or within a single vDS.


The following Cross vSwitch vMotion migrations are possible:

  • vSS to vSS
  • vSS to vDS
  • vDS to vDS (including metadata, i.e. statistics)

Note that vDS to vSS is not possible.

The main use case for this is data center migrations, whereby you can migrate VMs to a new vSphere cluster with a new virtual switch without disruption. It does require the source and destination port groups to share the same L2 network, because the IP address within the VM will not change.

Cross vCenter vMotion

Expanding on the Cross vSwitch vMotion enhancement, vSphere 6 also introduces support for Cross vCenter vMotion.

vMotion can now perform the following changes simultaneously:

  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.

and finally…

  • Change vCenter (Cross vCenter vMotion) – Migrates the VM to a different vCenter Server instance, i.e. changes which vCenter manages the VM.

As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity, since the IP of the VM will not be changed. This functionality builds upon Enhanced vMotion, and shared storage is not required.


The use cases include migrating a Windows-based vCenter to the vCenter Server Appliance (also take a look at the scale improvements of the vCSA in vSphere 6), i.e. no more Windows and SQL licenses needed; replacing vCenters without disruption; and the possibility to migrate VMs across local, metro, and cross-continental distances.

Long Distance vMotion

Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter Servers are spread across large geographic distances and where the latency across sites is 100 ms or less.

The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with the addition that the maximum latency between the source and destination sites must be 100 ms or less and that 250 Mbps of bandwidth is available.

The operation is serialized, i.e. it is a VM-per-VM operation, with each migration requiring 250 Mbps.

The VM network will need to be a stretched L2, because the IP of the guest OS will not change; if the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means that in some topologies, such as metro or cross-continental, you will need a stretched L2 technology in place. The stretched L2 technology is not specified (VXLAN is an option here, as are NSX L2 gateway services); any technology that can present the L2 network to the vSphere hosts will work, as ESXi is unaware of how the physical network is configured.

Increased vMotion network flexibility

ESXi 6 will have multiple TCP/IP stacks; this enables vSphere to improve scalability and offers flexibility by isolating vSphere services in their own stacks. It also allows vMotion to work over a dedicated Layer 3 network, since vMotion can now have its own memory heap, ARP and routing tables, and default gateway.


In other words, the VM network still needs L2 connectivity, since the virtual machine has to retain its IP address. The vMotion, management, and NFC networks can all be L3 networks.

Posted in Networking, vmware

Introducing VMware vCloud Air Advanced Networking Services

VMware just announced some additions to its public cloud service, vCloud Air; one of these is Advanced Networking Services, powered by VMware NSX. Today the networking capabilities of vCloud Air are based on vCNS features; moving forward these will be provided by NSX.

If you look at the connectivity options from your Data Center towards vCloud Air today, you have:

  • Direct Connect, a private circuit such as MPLS or Metro Ethernet
  • IPsec VPN
  • access over the WAN to a public IP address (think web hosting)

By switching from vCNS Edge devices to NSX Edge devices vCloud Air is adding SSL VPN connectivity from client devices to the vCloud Air Edge.


By using the NSX Edge Gateway, VMware is adding dynamic routing support (OSPF, BGP) and a full-fledged L4/7 load balancer (based on HAProxy) that also provides SSL offloading.
As mentioned before, SSL VPN to the vCloud Air network is also available, with clients for Mac OS X, Windows, and Linux.
Furthermore, the number of available interfaces has been greatly increased from 9 to 200 sub-interfaces, and the system now also provides distributed firewall capabilities (firewall policy linked to the virtual NIC of the VM).


The NSX Edge can use BGP to exchange routes between vCloud Air and your on-premises equipment over Direct Connect. It can also use OSPF to exchange internal routes between NSX Edges or with an L3 virtual network appliance.


NSX also introduces the concept of micro-segmentation to vCloud Air. This allows the implementation of firewall policy at the virtual NIC level. In the example below we have a typical 3-tier application, with each tier placed on its own L2 subnet.


With micro-segmentation you can easily restrict traffic to the application and database tiers while allowing traffic to the web tier, even though (just as an example) all these VMs sit on the same host. Assuming that at some point you will move these VMs from one host to another, the security policy will follow the VMs without the need for reconfiguration in the network. Or you can implement policy that does not allow VMs to talk to each other even though they sit on the same L2 segment.


If you combine this with the security policy capabilities of the NSX Edge Gateway, you can easily implement firewall rules for both north-south and east-west traffic. The system will also allow you to build a “container” by grouping certain VMs together and applying policies to the group as a whole. For example, you could create two applications, each consisting of one VM from each tier (web, app, db), and set policies at a granular level. As a service provider you can very easily create a system that supports multiple tenants in a secure fashion. Furthermore, this would also allow you to set policies and move VMs from on-premises to vCloud Air while still retaining their network and security configurations.
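As a purely illustrative sketch of the grouping idea (this is not NSX code; every name is made up), the key point is that policy is expressed against groups of VMs rather than against IP addresses or network locations, which is why it survives a VM moving between hosts or sites:

```python
# Toy whitelist firewall model: policy references groups, not addresses.

def allowed(policies, groups, src_vm, dst_vm, port):
    """Return True if any policy permits src_vm -> dst_vm on port.
    Default is deny, mirroring a whitelist firewall model."""
    for src_group, dst_group, allowed_port in policies:
        if (src_vm in groups[src_group]
                and dst_vm in groups[dst_group]
                and port == allowed_port):
            return True
    return False

# A hypothetical three-tier application.
groups = {
    "web": {"web-01", "web-02"},
    "app": {"app-01"},
    "db":  {"db-01"},
}
policies = [
    ("web", "app", 8080),  # web tier may reach app tier on 8080
    ("app", "db", 3306),   # app tier may reach db tier on 3306
]

print(allowed(policies, groups, "web-01", "app-01", 8080))  # True
print(allowed(policies, groups, "web-01", "db-01", 3306))   # False: web may not reach db
```

Moving "web-01" to another host changes nothing here, since the policy never mentions where the VM lives; that is the essence of policy following the VM.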

Posted in Networking, NSX, vmware

New Year, New Job.

I’m super excited to be taking on a new role in the NSBU at VMware; as of the 1st of January I’ll officially be joining the team as a Sr. Systems Engineer for the Benelux. I’ll be focused mainly on VMware NSX, including its integrations with other solutions (like vRA and OpenStack, for example).

Unofficially I’ve been combining this function with my “real” job for a couple of months now, ever since a dear and well-respected colleague decided to leave VMware. Recently I was fortunate enough to get the opportunity to attend a two-week training at our Palo Alto campus covering NSX-v, NSX-MH, OpenStack, VIO, OVS,…


The experience was face-meltingly good, I definitely learned a lot and got the opportunity to meet many wonderful people. One conclusion is that the NSX team certainly is a very interesting and exciting place to be in the company.

In the last few months I got my feet wet by training some of our partner community on NSX (most are very excited about the possibilities, even the die-hard hardware fanatics), staffing the NSX booth at VMworld Europe, and by having some speaking engagements like my NSX session at the Belgian VMUG.


So why NSX?

In the past I’ve worked on a wide variety of technologies (being in a very small country and working for small systems integrators you need to be flexible, and I guess it’s also just the way my mind works #squirrel!), but networking and virtualisation are my two main fields of interest, so how convenient that the two are colliding!
I’ve been a pure networking consultant in the past, mainly working with Cisco and Foundry/HP ProCurve, and then moved more into application networking at Citrix ANG and Riverbed.

The whole network virtualisation and SDN field (let’s hold off on the discussion of what’s what for another day) is on fire at the moment, making the rather tedious and boring (actually I’ve never really felt that, but I’m a bit of a geek) field of networking exciting again. The possibilities and promise of SDN have lots of potential to disrupt and change an industry, and I’d like to wholeheartedly and passionately contribute and be a part of that.

As NSX is an enabling technology for a lot of other technologies, it needs to integrate with a wide variety of solutions. Two solutions from VMware that will have NSX integrated, for example, are EVO:RACK and VIO. I look forward to working on those as well and hopefully finding some time to blog about them.

Other fields are also looking to the promise of SDN to enable new ways of getting things done; SocketPlane, for example, is trying to bring together Open vSwitch and Docker to provide pragmatic software-defined networking for container-based clouds. As VMware takes on a bigger and bigger role in the Cloud Native Apps space, it certainly will be interesting to help support all these efforts.

“if you don’t cannibalise yourself, someone else will”
-Steve Jobs

I’m enjoying a few days off with my family and look forward to returning in 2015 to support the network virtualisation revolution!


Posted in Cisco, Networking, NSX, OpenStack, Riverbed, SDN, vmware

NSX Logical Load Balancer

VMware NSX is a network virtualisation and security platform that enables the inclusion of 3rd-party integrations for specific functionality, like advanced firewalling, load balancing, anti-virus, etc. Having said that, VMware NSX also provides a lot of services out of the box which, depending on the customer use case, can cover a lot of requirements.

One of these functionalities is load balancing, using the NSX Edge virtual machine that is part of the standard NSX environment. The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests evenly (or according to your specific load balancing algorithm) among multiple servers, in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to Layer 7.


The technology behind the NSX load balancer is HAProxy, and as such you can leverage a lot of the HAProxy documentation to build advanced rule sets (Application Rules) in NSX.
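As a small illustration, Application Rules use HAProxy syntax; a rule like the following (the backend and path names here are made up for the example, so check the HAProxy documentation for the full syntax) sends requests for a given URL path to a dedicated pool:

```
# Hypothetical example: route requests whose path begins with /images
# to a dedicated pool; everything else falls through to the default.
acl is_images path_beg /images
use_backend images-pool if is_images
```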

Like the picture above indicates you can deploy the logical load balancer either “one-armed” (Tenant B example) or transparently inline (Tenant A example).

To start using the load balancing functionality you first need to deploy an NSX edge in your vSphere environment.

Go to Networking & Security in your vSphere Web Client and select NSX Edges, then click on the green plus sign to start adding a new NSX Edge. Select Edge Services Gateway as the install type and give your load balancer an indicative name.


Next you can enter a read-only CLI username and select a password of your choice. You can also choose to enable SSH access to the NSX Edge VM.


Now you need to select the size of the Edge VM; because this is a lab environment I’ll opt for Compact, but depending on your specific requirements you should select the most appropriate size. Generally speaking, more vCPUs drive more bandwidth and more RAM drives more connections per second. The available options are:

  • Compact: 1 vCPU, 512 MB
  • Large: 2 vCPU, 1024 MB
  • Quad Large: 4 vCPU, 1024 MB
  • X-Large: 6 vCPU, 8192 MB

Click on the green plus sign to add an Edge VM; you need to select a cluster/resource pool in which to place the VM, and select the datastore. Typically we would place NSX Edge VMs in a dedicated Edge rack cluster. For more information on NSX vSphere cluster design recommendations, please refer to the NSX design guide available here: https://communities.vmware.com/servlet/JiveServlet/previewBody/27683-102-3-37383/NSXvSphereDesignGuidev2.1.pdf

Since we are deploying a one-armed load balancer, we only need to define one interface. Click the green plus sign to configure the interface: give it a name, select the logical switch it needs to connect to (i.e. the one where the VMs you want to load balance are located), configure an IP address in that specific subnet (VXLAN virtual switch), and you are good to go.


Next up configure a default gateway.

Configure the firewall default policy and select Accept to allow traffic.

Verify all settings and click Finish to start deploying the Edge VM.

After a few minutes the new Edge VM will be deployed, and you can start configuring load balancing for the VMs located on the Web-Tier-01 VXLAN virtual switch (as we selected before) by double-clicking the new Edge VM.

Click on Manage and select Load Balancer, next click on Edit and select the Enable Load Balancer check box.


The next thing we need is an application profile. You create an application profile to define the behavior of a particular type of network traffic. After configuring a profile, you associate it with a virtual server; the virtual server then processes traffic according to the values specified in the profile.

Click on Manage and select Load Balancer, then click on Application Profiles and give the profile a name. In our case we are going to load balance two HTTPS web servers but are not going to proxy the SSL traffic, so we’ll select Enable SSL Passthrough.

Now we need to create a pool containing the web servers that will serve the actual content.

Select Pools, give the pool a name, and select the load balancing algorithm.

Today we can choose between:

  • Round-robin
  • IP Hash
  • Least connection
  • URI
  • HTTP Header
  • URL

Then select a monitor to check whether the servers are serving HTTPS (in this case) requests.
You can add the members of the pool from the same interface.


Now we create the virtual server that will accept front-end connections from the web clients and forward them to the actual web servers. Note that the protocol of the virtual server can be different from the protocol of the back-end servers, i.e. you could do HTTPS from the client web browser to the virtual server and HTTP from the virtual server to the back end, for example.


Now you should be able to reach the IP (VIP) of the virtual server we just created and start load balancing your web application.

Since we selected round-robin as our load balancing algorithm, each time you hit refresh in your browser (use Control-click on refresh to avoid using the browser’s cache) you should see the other web server.

Posted in Networking, NSX, SDN, vmware

Belgian VMUG session on NSX

I will be presenting a session on VMware NSX at the 21st Belgian VMUG meeting on the 21st of November in Antwerp. You can register for it here.

The VMUG crew did an excellent job in getting an amazing line-up of speakers, including (in no particular order) Chris Wahl, Cormac Hogan, Mike Laverick, Duncan Epping, Eric Sloof, and Luc Dekens. Besides being able to meet these guys, you can also participate in Mike’s VMworld 2014 SwagBag (EVO:RAIL edition) charity raffle.

Running the best virtualization software AND donating to charity, that’s double Karma points right there ;-)


Posted in Uncategorized