Category: vmware

VMworld US – Sunday

This is my 7th VMworld event, but my first time attending VMworld in the US. I’ve been to Vegas before, but arriving this Saturday evening it felt different somehow; maybe it was the scorching heat or the onslaught of UFC/boxing fans at the MGM Grand, both of which were very “in your face”.

[Photo: VMware signage welcoming you to Las Vegas]

I decided to check out the OpeningActs event at the Beerhaus, which was a pretty neat experience. The first panel session, moderated by John Troyer (aka John Trooper), was on VMware’s continued relevance in today’s rapidly changing tech world, which spurred a lot of Amazon AWS remarks. I think the overall consensus was that VMware might be in “sustainability” mode but that it is still an excellent foundation in a lot of customer environments. “Basic infrastructure” might be boring, but it is of great importance nevertheless. I think the other angle that might have been overlooked a bit here is that Amazon needs this relationship with VMware too; it needs a way into that “boring” enterprise market.

Second panel session was on “How failing made me better”, moderated by Jody Tyrus.
My favourite quote from the session was, and I’m paraphrasing here: “I sometimes still feel like I’m a 12-year-old in the body of a 42-year-old that is about to be exposed”. Rebecca Fitzhugh wrote an excellent blog on her experience with dealing with failure here.

Unfortunately I had to skip the third session because I needed to get to the Expo for a Rubrik all-hands briefing before the Welcome Reception. After the reception I headed to the VMUG party at the House of Blues, which was pretty cool (because they served Stella Artois 😉). Michael Dell made an appearance and talked about his love for the VMUG and hinted at some cool Pivotal announcements coming this week.

After the party I needed to get my jet-lagged self to bed. Looking forward to a great VMworld week.

VMware NSX and Palo Alto NGFW

VMware NSX is a platform for network and security virtualization, and as such it can integrate onto its platform certain functionalities that are not delivered by VMware itself. One such integration point is Palo Alto Networks’ Next-Generation Firewall (NGFW).

VMware NSX has built-in L2-4 stateful firewall capabilities, both in the distributed firewall running directly in the ESXi hypervisor for east-west traffic, and in the Edge Services Gateway VM for north-south traffic. If L2-4 is not sufficient for your specific use case, you can use VMware NSX’s Service Composer to steer traffic towards a third-party solution provider for additional inspection.

At a high level the solution requires three components: VMware NSX, the Palo Alto Networks VM-Series firewall (the VM-1000-HV), and Palo Alto Networks’ central management system, Panorama.

Currently the VM-1000-HV supports 250,000 sessions (8,000 new sessions per second) and 1 Gbps of firewall throughput (with App-ID enabled). The VM-Series firewall is installed on each host of the cluster where you want to protect virtual machines with Palo Alto’s NGFW. Each VM-Series firewall takes 2 vCPUs and 5 GB of RAM.

If you run summarize-dvfilter on each ESXi host after installation, you should see the VM-Series appliance show up in the dvfilter slowpath section.

We can also look at the Panorama central management console and verify that our VM-Series firewalls are listed under managed devices.

Which traffic to pass to the VM-Series firewall is configured using the Service Composer in NSX. The Service Composer provides a framework that lets you dictate what you want to protect by creating security groups, and then decide how to protect the members of each group by creating and linking security policies.
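
As a side note for the automation-minded, security groups do not have to be created by hand in the Web Client; they can also be created through the NSX Manager REST API. Below is a minimal Python sketch of that call, assuming an NSX-v 6.x environment; the manager address, credentials, and the exact XML layout are placeholders/assumptions to verify against the NSX API guide.

    # Hedged sketch: create an NSX security group via the NSX-v REST API.
    # The manager address, credentials, and XML layout are assumptions.
    import requests

    NSX_MGR = "https://nsxmgr.lab.local"
    AUTH = ("admin", "password")

    payload = """
    <securitygroup>
      <name>SG-PAN-WEB</name>
      <description>Web tier protected by the VM-Series</description>
    </securitygroup>"""

    # POST to the global scope; the response body contains the new group ID.
    r = requests.post(
        NSX_MGR + "/api/2.0/services/securitygroup/bulk/globalroot-0",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only; NSX Manager typically has a self-signed cert
    )
    r.raise_for_status()
    print("Created security group:", r.text)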

It is perfectly feasible to use security policy to first let NSX’s distributed firewall deal with certain types of traffic (up to layer 4) and only steer other “interesting” traffic towards the Palo Alto VM-Series; this way you can simultaneously benefit from the distributed throughput of the DFW and the higher-level capabilities of the Palo Alto Networks NGFW.

Using the Service Composer, we create a security policy and use the Network Introspection Service to select the external third-party service that we want to steer traffic to. In this case we select the Palo Alto Networks NGFW and can further select the source, destination, and specific traffic (protocol/port) that we want handled by the VM-Series.

Today only the traffic itself is passed to the external service, but it is feasible to pass along more metadata that the third-party provider could additionally act upon. For example, what if we could pass along that the VM we are protecting is running Windows Server 2003 and thus needs certain additional security measures applied?

So now that we have a policy that redirects traffic to the VM-Series, we need to apply it to a specific group. The power of combining NSX with Palo Alto Networks lies in the fact that we can use dynamic groups (both in NSX and in Panorama) and that the members of the dynamic groups are synced (about every 60 seconds) between both solutions. This means that if we add or remove VMs from groups, the firewall rules are automatically updated. No more dealing with long lists of outdated firewall rules for decommissioned applications that nobody is willing to risk deleting because no one is sure what the impact would be.

For example, we could create a security group using dynamic membership based on a security tag. This security tag could easily be applied as metadata by a cloud management platform (vRealize Automation for example) at VM creation time, or you can manually add/remove security tags using the vSphere Web Client.
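
To illustrate how a cloud management platform could apply such a tag programmatically, here is a hedged Python sketch against the NSX-v security tag API; the tag ID and VM managed object ID are placeholders, and the endpoint path should be double-checked against your NSX version’s API guide.

    # Hedged sketch: attach an existing NSX security tag to a VM.
    # Tag ID and VM MoRef ID are placeholders for illustration.
    import requests

    NSX_MGR = "https://nsxmgr.lab.local"
    AUTH = ("admin", "password")

    tag_id = "securitytag-10"  # e.g. found via GET /api/2.0/services/securitytags
    vm_id = "vm-1234"          # the VM's vCenter managed object reference

    # Attaching the tag makes the VM a member of any security group whose
    # dynamic membership criteria match this tag.
    url = NSX_MGR + "/api/2.0/services/securitytags/tag/" + tag_id + "/vm/" + vm_id
    r = requests.put(url, auth=AUTH, verify=False)
    r.raise_for_status()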

In Panorama we also have the concept of dynamic address groups; these are linked in a one-to-one fashion with security groups in NSX.

If we look at the group membership of the address groups in Panorama, we will see the IP address of the VM; this can then be leveraged to apply firewall rules in Palo Alto Networks.

NOTE: if I were to remove the VM from the security group in NSX, the IP address in Panorama would disappear about 60 seconds later.

Traffic is redirected by the filtering and traffic redirection modules that run between the VM and the vNIC. The filtering module is an extension of the NSX distributed firewall; the traffic redirection module defines which traffic is steered to the third-party service VM (the VM-Series VM in our case).

If we use the same dvfilter command (summarize-dvfilter) on the ESXi host as before, we can see which slots are occupied:

Slot 0: implements vDS Access Lists.
Slot 1: Switch Security module (swsec); captures DHCP ACK and ARP messages and forwards this information to the NSX Controller.
Slot 2: NSX Distributed Firewall.
Slot 4: Palo Alto Networks VM-Series.

Now that we are able to steer traffic towards the Palo Alto Networks NGFW, we can apply security policies. As an example we have built some firewall rules blocking ICMP and allowing SSH between two security groups.

As seen earlier, the VM in the SG-PAN-WEB group has IP address 172.16.10.11 (matching the member IP of the dynamic group DAG-WEB in Panorama).

We are not allowed to ping a member of the dynamic group DAG-APP as dictated by the firewall rules on the VM-series firewall.

Since SSH is allowed, we can test this by trying to connect to a VM in the DAG-APP group.

We can also verify that this session shows up on the VM-Series firewall by opening its console in the vSphere Web Client.

And finally if we look at the monitoring tab on Panorama we can verify that our firewall rules are working as expected.

So that’s it for this brief overview of using the Palo Alto Networks NGFW in combination with VMware NSX. NSX allows a broad list of third-party solutions to be integrated, so the solution is very extensible and true to its goal of being a network and security platform for the next-generation data center.

vSphere 6 – vMotion nonpareil

I remember when I was first asked to install ESX 2.x for a small internal project; it must have been somewhere around 2004-2005. I was working at a local systems integrator and got the task because no one else was around at the time. While researching what this thing was and how it actually worked I got sucked in, and some time later, when I saw VMotion (sic) for the first time, I was hooked.

Sure, I’ve implemented some competing hypervisors over the years, and even worked for the competition for a brief period of time (in my defence, I was in the application networking team 😉). But somehow, at the time, it was like driving a kit car when you knew the real deal was parked just around the corner.

So today VMware announced the upcoming release of vSphere 6, arguably the most recognised product in the VMware stable and the foundation of many of its other solutions.

A lot of new and improved features are available, but I wanted to focus specifically on the feature that impressed me the most so many years ago: vMotion.

In terms of vMotion, vSphere 6 now gives us:

  • Cross vSwitch vMotion
  • Cross vCenter vMotion
  • Long Distance vMotion
  • Increased vMotion network flexibility

Cross vSwitch vMotion

Cross vSwitch vMotion allows you to seamlessly migrate a VM across different virtual switches while performing a vMotion operation; this means that you are no longer restricted by the networks you created on the vSwitches when you vMotion a virtual machine. It also works across a mix of standard and distributed virtual switches. Previously, you could only vMotion from vSS to vSS or within a single vDS.

The following Cross vSwitch vMotion migrations are possible:

  • vSS to vSS
  • vSS to vDS
  • vDS to vDS (including metadata, i.e. statistics)

Note that vDS to vSS is not possible.

The main use case for this is data center migrations, whereby you can migrate VMs to a new vSphere cluster with a new virtual switch without disruption. It does require the source and destination portgroups to share the same L2 domain, because the IP address within the VM will not change.

Cross vCenter vMotion

Expanding on the Cross vSwitch vMotion enhancement, vSphere 6 also introduces support for Cross vCenter vMotion.

vMotion can now perform the following changes simultaneously:

  • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.

and finally…

  • Change vCenter (Cross vCenter vMotion) – Migrates the VM to a different vCenter Server instance.

As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not change. This functionality builds upon Enhanced vMotion, so shared storage is not required.
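
For those who want to script this, a cross-vCenter migration is driven through the VM’s RelocateVM_Task with a service locator pointing at the destination vCenter. Below is a minimal pyVmomi sketch under that assumption; every hostname, credential, thumbprint, and object name is a placeholder, and in practice you would also set a destination folder.

    # Minimal pyVmomi sketch of a Cross vCenter vMotion (vSphere 6 APIs).
    # All hostnames, credentials, and object names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si_a = SmartConnect(host="vc-a.lab.local", user="administrator@vsphere.local",
                        pwd="***", sslContext=ctx)
    si_b = SmartConnect(host="vc-b.lab.local", user="administrator@vsphere.local",
                        pwd="***", sslContext=ctx)

    def find(si, vimtype, name):
        # Tiny inventory helper: first object of the given type with this name.
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    vm = find(si_a, vim.VirtualMachine, "web-01")

    # The service locator tells the source vCenter how to reach the destination.
    locator = vim.ServiceLocator(
        instanceUuid=si_b.content.about.instanceUuid,
        url="https://vc-b.lab.local",
        credential=vim.ServiceLocatorNamePassword(
            username="administrator@vsphere.local", password="***"),
        sslThumbprint="AA:BB:...",  # destination vCenter's SSL thumbprint
    )

    spec = vim.vm.RelocateSpec(
        service=locator,
        host=find(si_b, vim.HostSystem, "esx-b1.lab.local"),
        pool=find(si_b, vim.ResourcePool, "Resources"),
        datastore=find(si_b, vim.Datastore, "datastore-b"),
    )
    # In practice spec.folder (a VM folder in the destination datacenter)
    # is also required for cross-vCenter relocations.
    task = vm.RelocateVM_Task(spec, vim.VirtualMachine.MovePriority.defaultPriority)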

The use cases include migration from a Windows-based vCenter to the vCenter Server Appliance (also take a look at the scale improvements of the vCSA in vSphere 6), i.e. no more Windows and SQL licenses needed, replacement of vCenters without disruption, and the possibility to migrate VMs across local, metro, and cross-continental distances.

Long Distance vMotion

Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter servers are spread across large geographic distances and where the latency across sites is 150 ms or less.

The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with two additions: the maximum latency between the source and destination sites must be 150 ms or less, and there must be 250 Mbps of available bandwidth.

The operation is serialized, i.e. it is a VM per VM operation requiring 250 Mbps.

The VM network will need to be a stretched L2 because the IP of the guest OS will not change; if the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means that in some topologies, such as metro or cross-continental, you will need a stretched L2 technology in place. The stretched L2 technology is not specified (VXLAN is an option here, as are NSX L2 gateway services); any technology that can present the L2 network to the vSphere hosts will work, as ESXi is unaware of how the physical network is configured.

Increased vMotion network flexibility

ESXi 6 has multiple TCP/IP stacks, which improves vSphere scalability and offers flexibility by isolating vSphere services onto their own stacks. This also allows vMotion to work over a dedicated Layer 3 network, since the vMotion stack can now have its own memory heap, ARP and routing tables, and default gateway.

In other words, the VM network still needs L2 connectivity since the virtual machine has to retain its IP address, but the vMotion, management, and NFC networks can all be L3 networks.

Introducing VMware vCloud Air Advanced Networking Services

VMware just announced some additions to its public cloud service, vCloud Air; one of these additions is advanced networking services powered by VMware NSX. Today the networking capabilities of vCloud Air are based on vCNS features; moving forward these will be provided by NSX.

If you look at the connectivity options from your data center towards vCloud Air today, you have:

  • Direct Connect, which is a private circuit such as MPLS or Metro Ethernet
  • IPsec VPN
  • or access via the WAN to a public IP address (think webhosting)

By switching from vCNS Edge devices to NSX Edge devices, vCloud Air is adding SSL VPN connectivity from client devices to the vCloud Air Edge.

By using the NSX Edge Gateway, VMware is adding dynamic routing support (OSPF, BGP) and a full-fledged L4-L7 load balancer (based on HAProxy) that also provides SSL offloading.
As mentioned before, SSL VPN to the vCloud Air network is also available, with clients for Mac OS X, Windows, and Linux.
Furthermore, the number of available interfaces has been greatly increased from 9 to 200 sub-interfaces, and the system now also provides distributed firewall capabilities (firewall policy linked to the virtual NIC of the VM).

The NSX Edge can use BGP to exchange routes between vCloud Air and your on-premises equipment over Direct Connect. The NSX Edge can also use OSPF to exchange internal routes between NSX Edges or with an L3 virtual network appliance.
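
As a rough idea of what driving such a configuration programmatically could look like in an NSX environment, the Python sketch below pushes a minimal BGP configuration to an Edge through the NSX REST API; the Edge ID, AS numbers, and neighbour address are placeholders, and the XML schema should be verified against the NSX Edge routing API documentation.

    # Hedged sketch: enable BGP on an NSX Edge via the routing API.
    # Edge ID, AS numbers, and neighbour address are placeholders.
    import requests

    NSX_MGR = "https://nsxmgr.lab.local"
    AUTH = ("admin", "password")

    bgp_config = """
    <bgp>
      <enabled>true</enabled>
      <localAS>65001</localAS>
      <bgpNeighbours>
        <bgpNeighbour>
          <ipAddress>192.0.2.1</ipAddress>
          <remoteAS>65000</remoteAS>
        </bgpNeighbour>
      </bgpNeighbours>
    </bgp>"""

    r = requests.put(
        NSX_MGR + "/api/4.0/edges/edge-1/routing/config/bgp",
        data=bgp_config,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only
    )
    r.raise_for_status()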

NSX also introduces the concept of Micro-Segmentation to vCloud Air. This allows implementation of firewall policy at the virtual NIC level. In the example below we have a typical 3-tier application with each tier placed on its own L2 subnet.

With micro-segmentation you can easily restrict traffic to the application and database tiers while allowing traffic to the web tier, even though (just as an example) all these VMs sit on the same host. Assuming that at some point you will move these VMs from one host to another, the security policy will follow the VMs without the need for reconfiguration in the network. Or you can implement policy that does not allow VMs to talk to each other even though they sit on the same L2 segment.

If you combine this with the security policy capabilities of the NSX Edge Gateway, you can easily implement firewall rules for both north-south and east-west traffic. The system will also allow you to build a “container” by grouping certain VMs together and applying policies to the group as a whole. For example, you could create two applications, each consisting of one VM from each tier (web, app, db), and set policies at a granular level. As a service provider you can very easily create a system that supports multiple tenants in a secure fashion. Furthermore, this would also allow you to set policies and move VMs from on-premises to vCloud Air while still retaining network and security configurations.

New Year, New Job.

I’m super excited to be taking on a new role in the NSBU at VMware: as of the 1st of January I’ll officially be joining the team as a Sr. Systems Engineer for the Benelux. I’ll be focused mainly on VMware NSX, including its integrations with other solutions (like vRA and OpenStack, for example).

Unofficially I’ve been combining this function with my “real” job for a couple of months now, ever since a dear and well-respected colleague decided to leave VMware. Recently I was fortunate enough to get the opportunity to attend a two-week training at our Palo Alto campus on NSX-v, NSX-MH, OpenStack, VIO, OVS,…

The experience was face-meltingly good, I definitely learned a lot and got the opportunity to meet many wonderful people. One conclusion is that the NSX team certainly is a very interesting and exciting place to be in the company.

In the last few months I got my feet wet by training some of our partner community on NSX (most are very excited about the possibilities, even the die-hard hardware fanatics), staffing the NSX booth at VMworld Europe, and taking on some speaking engagements like my NSX session at the Belgian VMUG.

So why NSX?

In the past I’ve worked on a wide variety of technologies (being in a very small country and working for small systems integrators you need to be flexible, and I guess it’s also just the way my mind works #squirrel!), but networking and virtualisation are my two main fields of interest, so how convenient that both are colliding!
I’ve been a pure networking consultant in the past, mainly working with Cisco and Foundry/HP ProCurve, and then moved more into application networking at Citrix ANG and Riverbed.

The whole network virtualisation and SDN field (let’s hold off the discussion of what’s what for another day) is on fire at the moment and is making the rather tedious and boring (actually I’ve never really felt that, but I’m a bit of a geek) field of networking exciting again. The possibilities and promise of SDN have lots of potential to be disruptive and change an industry, and I’d like to wholeheartedly and passionately contribute and be a part of that.

As NSX is an enabling technology for a lot of other technologies, it needs to integrate with a wide variety of solutions. Two solutions from VMware that will have NSX integrated, for example, are EVO:RACK and VIO. I look forward to also working on those and hopefully finding some time to blog about them as well.

Other fields are also looking to the promise of SDN to enable new ways of getting things done; SocketPlane, for example, is trying to bring together Open vSwitch and Docker to provide pragmatic software-defined networking for container-based clouds. As VMware takes on a bigger and bigger role in the cloud-native apps space, it certainly will be interesting to help support all these efforts.

“if you don’t cannibalise yourself, someone else will”
-Steve Jobs

I’m enjoying a few days off with my family and look forward to returning in 2015 to support the network virtualisation revolution!

NSX Logical Load Balancer

VMware NSX is a network virtualisation and security platform that enables the inclusion of third-party integrations for specific functionality, like advanced firewalling, load balancing, anti-virus, etc. Having said that, VMware NSX also provides a lot of services out of the box that, depending on the customer use case, can cover a lot of requirements.

One of these functionalities is load balancing using the NSX Edge virtual machine that is part of the standard NSX environment. The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests evenly (or according to your specific load-balancing algorithm) among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to Layer 7.

The technology behind the NSX load balancer is HAProxy, and as such you can leverage a lot of the HAProxy documentation to build advanced rulesets (application rules) in NSX.
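
As a small illustration, application rules use plain HAProxy syntax; the sketch below is made up (the pool name must match an existing pool on the Edge) and would route any request whose path starts with /static to a dedicated pool.

    # Illustrative application rule in HAProxy syntax; "pool-static" is a
    # placeholder that must match an existing pool name on the Edge.
    acl is_static path_beg /static
    use_backend pool-static if is_static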

You can deploy the logical load balancer either “one-armed” (in proxy mode) or transparently inline.

To start using the load balancing functionality you first need to deploy an NSX edge in your vSphere environment.

Go to Networking & Security in your vSphere Web Client and select NSX Edges, then click on the green plus sign to start adding a new NSX Edge. Select Edge Services Gateway as the install type and give your load balancer an indicative name.

Next you can enter a read-only CLI username and select a password of your choice. You can also choose to enable SSH access to the NSX Edge VM.

Now you need to select the size of the Edge VM. Because this is a lab environment I’ll opt for Compact, but depending on your specific requirements you should select the most appropriate size; generally speaking, more vCPUs drive more bandwidth and more RAM drives more connections per second. The available options are:

  • Compact: 1 vCPU, 512 MB
  • Large: 2 vCPU, 1024 MB
  • Quad Large: 4 vCPU, 1024 MB
  • X-Large: 6 vCPU, 8192 MB

Click on the green plus sign to add an Edge VM; you need to select a cluster/resource pool to place the VM in and select the datastore. Typically we would place NSX Edge VMs in a dedicated edge rack cluster. For more information on NSX vSphere cluster design recommendations, please refer to the NSX design guide available here: https://communities.vmware.com/servlet/JiveServlet/previewBody/27683-102-3-37383/NSXvSphereDesignGuidev2.1.pdf

Since we are deploying a one-armed load balancer, we only need to define one interface. Click the green plus sign to configure the interface: give it a name, select the logical switch it needs to connect to (i.e. where the VMs that you want to load balance are located), configure an IP address in that specific subnet (VXLAN virtual switch), and you are good to go.

Next up configure a default gateway.

Configure the firewall default policy and select Accept to allow traffic.

Verify all settings and click Finish to start deploying the Edge VM.

After a few minutes the new Edge VM will be deployed, and you can start configuring load balancing for the VMs located on the Web-Tier-01 VXLAN virtual switch (as we selected before) by double-clicking the new Edge VM.

Click on Manage and select Load Balancer, next click on Edit and select the Enable Load Balancer check box.

Next thing we need is an application profile, you create an application profile to define the behavior of a particular type of network traffic. After configuring a profile, you associate the profile with a virtual server. The virtual server then processes traffic according to the values specified in the profile.

Click on Manage and select Load Balancer, then click on Application Profiles and give the profile a name. In our case we are going to load balance two HTTPS webservers but are not going to proxy the SSL traffic, so we’ll select Enable SSL Passthrough.

Now we need to create a pool containing the webservers that will serve the actual content.

Select Pools, give the pool a name, and select the load-balancing algorithm.

Today we can choose between:

  • Round-robin
  • IP Hash
  • Least connection
  • URI
  • HTTP Header
  • URL

Then select a monitor to check whether the servers are serving HTTPS (in this case) requests.
You can add the members of the pool from the same interface.

Now we create the virtual server that will accept front-end connections from web clients and forward them to the actual webservers. Note that the protocol of the virtual server can be different from the protocol of the back-end servers; you could, for example, do HTTPS from the client browser to the virtual server and HTTP from the virtual server to the back-end.

Now you should be able to reach the IP (the VIP) of the virtual server we just created and start load balancing your web application.

Since we selected round-robin as our load-balancing algorithm, each time you hit refresh in your browser (use Ctrl-click on refresh to avoid the browser cache) you should see the other webserver.
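
If you would rather automate these steps than click through the Web Client, the same configuration can be pushed to the Edge through the NSX REST API. Here is a hedged Python sketch; the Edge ID, names, and member IPs are placeholders, and the XML body is abbreviated, so flesh it out from the NSX API guide before use.

    # Hedged sketch: push a minimal load balancer config to an NSX Edge.
    # Edge ID, pool members, and the abbreviated XML are placeholders.
    import requests

    NSX_MGR = "https://nsxmgr.lab.local"
    AUTH = ("admin", "password")

    lb_config = """
    <loadBalancer>
      <enabled>true</enabled>
      <pool>
        <name>pool-web</name>
        <algorithm>round-robin</algorithm>
        <member><ipAddress>172.16.10.11</ipAddress><port>443</port></member>
        <member><ipAddress>172.16.10.12</ipAddress><port>443</port></member>
      </pool>
      <virtualServer>
        <name>vip-web</name>
        <ipAddress>10.0.0.10</ipAddress>
        <protocol>https</protocol>
        <port>443</port>
        <defaultPoolId>pool-1</defaultPoolId>  <!-- the pool's generated ID -->
      </virtualServer>
    </loadBalancer>"""

    r = requests.put(
        NSX_MGR + "/api/4.0/edges/edge-1/loadbalancer/config",
        data=lb_config,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only
    )
    r.raise_for_status()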

VMware OpenStack Virtual Appliance

VMware provides an OpenStack Virtual Appliance, VOVA for short, to help VMware admins get some hands-on experience with using OpenStack in a VMware environment. It is however purely a proof of concept appliance and is not supported in any way by VMware. To find out more about the OpenStack effort at VMware in general please visit https://communities.vmware.com/community/vmtn/openstack

You can download the VOVA appliance OVF package (OpenStack Icehouse release) here.

VOVA only works with vCenter 5.1 and above and only supports a single datacenter; as a precaution, you should also not run production workloads on the cluster it manages. If you are running multiple hosts in your cluster you should enable DRS in “fully automated” mode, and the cluster should have only one shared datastore for all of its hosts. It is recommended to deploy the VOVA appliance on a host that is not part of the cluster managed by the appliance (so you can manage multiple clusters in your vCenter instance).

When the OVF package is deployed it will show you the OpenStack Dashboard URL on first boot.

You can log in with the credentials demo/vmware.

The appliance comes pre-loaded with a Debian disk image which allows you to easily launch new instances.

Spawning the first VM can take a while because the 1 GB Debian disk image must be copied from the file system of the VOVA appliance to your cluster’s datastore. All subsequent instances should be significantly faster (under 30 seconds).

VOVA also allows you to test the OpenStack CLI tools, which directly access the OpenStack APIs; you need to SSH (root/vmware) into the VOVA console and run the CLI commands from there.
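
The same APIs are of course reachable from any machine that can talk to the appliance. As a small hedged sketch, here is what driving Nova from Python with the Icehouse-era python-novaclient could look like; the appliance address and the image and flavor names are assumptions for illustration.

    # Hedged sketch using the Icehouse-era python-novaclient against VOVA.
    # The appliance address and the image/flavor names are illustrative.
    from novaclient.v1_1 import client

    nova = client.Client(
        "demo", "vmware", "demo",           # username, password, tenant
        "http://vova.lab.local:5000/v2.0",  # Keystone auth URL (placeholder)
    )

    # List running instances, then boot one from the bundled Debian image.
    print(nova.servers.list())
    image = nova.images.find(name="debian")     # image name is an assumption
    flavor = nova.flavors.find(name="m1.small")
    nova.servers.create(name="test-vm", image=image, flavor=flavor)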

The vCenter Web Client plug-in for OpenStack is also included, allowing you to see OpenStack instances from the Web Client.

Currently the VOVA appliance has some limitations:

  • No Neutron support: Neutron with vSphere requires the VMware NSX solution. We plan to release a future version of VOVA that can optionally leverage NSX.
  • No Security Groups support: with vSphere, VMware NSX is required for security group network filtering. We plan to release a future version of VOVA that can optionally leverage NSX.
  • No exposed options to configure floating IPs. This is possible with the current appliance, but it has not been exposed via the OVF options.
  • No support for sparse disks. If you try to upload your own disk images, the images must be flat, not sparse.
  • No support for Swift (object storage). VOVA has no plans to leverage OpenStack Swift for object storage. You are free to deploy Swift on your own in another VM.

Also keep in mind that VOVA is not a product and will likely be discontinued once production-quality solutions with similar ease of use are made available (remember that there is nothing “special” about VOVA; it is just the open source OpenStack code running on Ubuntu, with the proper configuration for using vSphere). In the coming months, however, expect to see updated versions of VOVA with the option of using Neutron + Security Groups via VMware NSX.

OpenStack Summer Reading

Seeing that I’m now getting an out-of-office hit rate of above 50% (I’m based in EMEA), I thought it might be interesting to share a summer reading list. I’m going (or already went, in most cases) through this material myself during my planned downtime in the second half of August.

I’m very interested in OpenStack and have been fiddling with it for quite some time; I like how it touches a lot of technologies and evolves at breakneck speed (it has become the fastest-growing open source project of all time).

Recently a team from VMware, EMC, Cisco, Cloudscaling, Comcast, Mirantis, Rackspace, Red Hat, and Verizon completed a 5 day book sprint (for more info on book sprints visit booksprints.net) writing the OpenStack Architecture Design Guide.

I think this is a good resource to get started with expanding your OpenStack knowledge if you are planning a design. For getting started with OpenStack without any background I recommend the OpenStack website, as it has some great introductions to the different technologies. The architecture design guide explains some common use cases and what to look out for in each of them. As with most cloud automation/orchestration frameworks, it is not really feasible to give a lot of prescriptive step-by-step advice, since the system is so versatile that you really need to look at your specific use case; the book, however, successfully attempts to provide some. It does not cover installation and operations; for this another great resource is available for free, the OpenStack Operations Guide.

If you are more interested in the security aspects of running OpenStack, there is another guide available that was also the result of a book sprint, called the OpenStack Security Guide.

The OpenStack Wiki also provides a great place to get started, but I often find a nicely packaged book is more suited towards learning.

The complete set of current OpenStack documentation can be found here. You can also contribute to OpenStack without needing to provide code by helping out the documentation effort, more information on how this works can be found here.

You can find an immense number of OpenStack sessions on YouTube as well, but since video has no place in a reading list… (just google it).

VMware also supports OpenStack integration by providing open source drivers (for Nova and Cinder) and plugins (for Quantum/Neutron) to integrate with our products. You can find, and read about, these on the VMware OpenStack community site.

VMware also provides an (unsupported) vSphere OpenStack Virtual Appliance (VOVA) to allow for easy testing, proofs of concept, and educational purposes. It is a single Ubuntu Linux appliance running Nova, Glance, Cinder, Neutron, Keystone, and Horizon. There is also a VMware + OpenStack Hands-On Lab available for you to experience the integration first hand.

Enjoy the summer!

Zero Trust Network Architecture and Micro-Segmentation

A killer application

As defined by Wikipedia: In marketing terminology, a killer application (commonly shortened to killer app) is any computer program that is so necessary or desirable that it proves the core value of some larger technology, such as computer hardware, gaming console, software, a programming language, software platform, or an operating system. In other words, customers would buy the underlying technology just to run that application.

The Zero Trust Network Architecture

There is a simple philosophy at the core of Zero Trust: Security professionals must stop trusting packets as if they were people. Instead, they must eliminate the idea of a trusted network (usually the internal network) and an untrusted network (external networks). In Zero Trust, all network traffic is untrusted. Thus, security professionals must verify and secure all resources, limit and strictly enforce access control, and inspect and log all network traffic.

The core concepts of Zero Trust are:

  • There is no longer a trusted and an untrusted interface on our security devices.
  • There is no longer a trusted and an untrusted network.
  • There are no longer trusted and untrusted users.

Zero Trust mandates that information security pros treat all network traffic as untrusted. Zero Trust doesn’t say that employees are untrustworthy, but that trust is a concept that information security pros should not apply to packets, network traffic, or data. The malicious-insider reality demands a new trust model; by changing the trust model, we reduce the temptation for insiders to abuse or misuse the network, and we improve our chances of discovering security breaches before they impact the environment.

Zero Trust approaches wrap security controls around much smaller groups of resources – often down to a small group of virtualized resources or individual VMs. Micro-segmentation has long been understood to be a best-practice approach from a security perspective, but it has been difficult to apply in traditional environments.

Micro-segmentation

Traditionally, network segmentation is a function of a switch. From a network perspective micro-segmentation is a design where each device on a network gets its own dedicated segment (collision domain) to the switch. Each network device gets the full bandwidth of the segment and does not have to share the segment with other devices. Micro-segmentation reduces and can even eliminate collisions because each segment is its own collision domain.

From a security perspective, traditional segmentation works by implementing (next-generation) firewalls that act as “choke points” on the network. When application traffic is directed towards the firewall, it enforces its rule-set and packets are blocked or allowed through. This is a completely workable solution if you are implementing controls (choke points) at only a limited number of places in the network, i.e. between the internal and external network, between business networks and the production network, etc. If you applied this same method to micro-segmentation, however, you would need to invest in large network boxes, and with the advent of VM mobility you would potentially need to constantly update security rules to keep your policies up to date.

VMware NSX and micro-segmentation

In virtual machine land we would normally connect VMs to a virtual switch (port group) and pipe that combined traffic to a network choke point, i.e. a firewall; this breaks the idea of the Zero Trust architecture, as you are already grouping certain VMs together. With NSX, policy can be applied at the individual VM level, independent of placement: every VM is literally first connected to the in-kernel stateful firewall before its traffic goes out on the network. This implies that security can be implemented independently of how the logical network is architected, i.e. it does not matter whether a particular VM is grouped with other VMs in, say, a DMZ segment; traffic will be filtered between VMs, making sure Zero Trust is maintained.
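
To make this concrete, here is a hedged Python sketch of what publishing a per-group deny rule to the distributed firewall could look like through the NSX-v REST API; the manager address, security group IDs, and the abbreviated XML are all placeholders to validate against the NSX API guide.

    # Hedged sketch: add a DFW section with a deny rule between two
    # security groups via the NSX-v REST API. IDs and XML are placeholders.
    import requests

    NSX_MGR = "https://nsxmgr.lab.local"
    AUTH = ("admin", "password")

    section = """
    <section name="zero-trust-demo">
      <rule disabled="false" logged="true">
        <name>block-web-to-app</name>
        <action>deny</action>
        <sources excluded="false">
          <source><type>SecurityGroup</type><value>securitygroup-10</value></source>
        </sources>
        <destinations excluded="false">
          <destination><type>SecurityGroup</type><value>securitygroup-11</value></destination>
        </destinations>
      </rule>
    </section>"""

    r = requests.post(
        NSX_MGR + "/api/4.0/firewall/globalroot-0/config/layer3sections",
        data=section,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab only
    )
    r.raise_for_status()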