Fortinet integration with Nuage Networks SDN

Introduction

Nuage Networks VSP, with the emphasis on P for Platform, provides many integration points for third-party network and security providers, among others. This way the customer can leverage the SDN platform to build end-to-end automated services in support of his or her application needs.

One of these integration partners is Fortinet: we can integrate with the FortiGate Virtual Appliances to provide NGFW services, with automated firewall management via FortiManager.

Integration example

In the example setup below we are using OpenStack as the Cloud Management System and KVM as the hypervisor on the compute hosts.
We have the FortiGate Virtual Appliance connected to a management network (orange), an untrusted interface (red), and a trusted/internal interface (purple).
On the untrusted network we have a couple of virtual machines connected, and on the internal network we have the internal (trusted) server.

[Image: example topology overview]

Dynamic group membership

Since we are working in a cloud environment where new workloads are spun up and torn down at a regular pace, resulting in a dynamic allocation of IP addresses, we need to make sure the firewall policy stays intact. To do this we use dynamic group membership, which adds and deletes the IP addresses of the virtual machines based on group membership on both platforms. The added benefit is that the security policy does not go stale: when workloads are decommissioned in the cloud environment, their address information is automatically removed from the security policies, resulting in a more secure and stable environment overall.
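As a rough sketch of what such a synchronisation could look like under the hood, the Python snippet below polls a policy group's membership and pushes it into a FortiGate address group over the FortiOS REST API. This is illustrative only: the FortiGate address, the API token, and the get_policy_group_members() helper (which in reality would query the Nuage VSD API) are assumptions, not the actual integration code.

    import requests

    FGT = "https://fortigate.example.com"              # hypothetical FortiGate address
    HEADERS = {"Authorization": "Bearer <api-token>"}  # FortiOS REST API token

    def get_policy_group_members():
        # Hypothetical helper: in reality this list would come from the
        # Nuage VSD API (policy group -> member VM IP addresses).
        return ["10.0.1.11", "10.0.1.12"]

    def sync_address_group(group_name):
        members = get_policy_group_members()
        # One address object per VM IP.
        for ip in members:
            requests.post(f"{FGT}/api/v2/cmdb/firewall/address",
                          headers=HEADERS, verify=False,
                          json={"name": f"vm-{ip}", "type": "ipmask",
                                "subnet": f"{ip} 255.255.255.255"})
        # Point the address group at the current member set; decommissioned
        # workloads simply drop out on the next sync.
        requests.put(f"{FGT}/api/v2/cmdb/firewall/addrgrp/{group_name}",
                     headers=HEADERS, verify=False,
                     json={"member": [{"name": f"vm-{ip}"} for ip in members]})

    sync_address_group("PG1 - Address group 1 - Internal")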

Looking at the FortiGate below, we are using dynamic address groups to define source and destination policy rules. The membership of the address groups is synchronised between the Nuage VSP environment and Fortinet.

[Image: FortiGate policy using dynamic address groups]

If we look at group membership in Fortinet we can see the virtual machines that are currently members of the group. As you can see in the image below, the group “PG1 – Address group 1 – Internal” currently has one virtual machine member.

[Image: FortiGate address group membership]

If we now create a new virtual machine instance in OpenStack and make it a member of the corresponding policy group in Nuage Networks VSP, it will automatically be synced back to Fortinet.

[Image: launching a new instance in OpenStack]

Looking at Nuage Networks VSP we can see that the new OpenStack virtual machine is a member of the Policy Group “PG1 – Address Group 1 – Internal”.

[Image: policy group membership in Nuage Networks VSP]

If we now go back to our FortiGate we can see the membership of the corresponding group has been updated.

[Image: updated address group membership on the FortiGate]

Traffic Redirection

Now that we have dynamic address groups we can use them to create dynamic security policy. To selectively forward traffic from Nuage Networks VSP to the FortiGate, we need to create a forward policy in VSP.

In the picture below we define a new redirection target pointing to the FortiGate Virtual Appliance. In this case we opted for L3 service insertion; this could also be virtual-wire based.

[Image: redirection target definition in VSP]

Now we need to define which traffic to classify as “interesting” and forward to the FortiGate. Because Nuage Networks VSP has a built-in distributed stateful L4 firewall, we can create a security baseline that handles common east-west traffic locally and only forwards traffic that demands a higher level of inspection to the FortiGate virtual appliance.

[Image: forward policy definition in VSP]

In the picture above we can select the protocol; in this case I’m forwarding all traffic, but we could just as easily select, for example, TCP and define interesting traffic based on source and destination ports. We also need to select the origin and destination networks; here we use the dynamic address groups that are synced with Fortinet, but this could also be based on more static network information. Finally, we select the forward action and point to the FortiGate virtual appliance.
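For those who prefer automation over the GUI, here is a hedged sketch of the same two steps against the VSD REST API. The endpoint paths and field names follow the general style of the VSD API but are illustrative assumptions, and the IDs are left as placeholders.

    import requests

    VSD = "https://vsd.example.com:8443/nuage/api/v5_0"   # placeholder VSD endpoint
    HEADERS = {"X-Nuage-Organization": "csp",
               "Authorization": "Bearer <token>",
               "Content-Type": "application/json"}

    # Step 1: create an L3 redirection target pointing at the FortiGate appliance.
    # (Resource and field names follow the VSD API style but are not verified.)
    resp = requests.post(f"{VSD}/domains/<domain-id>/redirectiontargets",
                         headers=HEADERS, verify=False,
                         json={"name": "fortigate-ngfw",
                               "endPointType": "L3",      # L3 service insertion
                               "redundancyEnabled": False})
    target_id = resp.json()[0]["ID"]   # VSD returns the created object(s)

    # Step 2: redirect "interesting" traffic between the two policy groups to
    # that target; everything else stays on the distributed L4 firewall.
    requests.post(f"{VSD}/ingressadvfwdtemplates/<policy-id>/ingressadvfwdentrytemplates",
                  headers=HEADERS, verify=False,
                  json={"description": "untrusted -> internal via FortiGate",
                        "protocol": "ANY",
                        "locationType": "POLICYGROUP",
                        "locationID": "<pg-untrusted-id>",
                        "networkType": "POLICYGROUP",
                        "networkID": "<pg-internal-id>",
                        "action": "REDIRECT",
                        "redirectVPortTagID": target_id})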

We have a couple of policies defined; as you could see in the picture at the top of the post, we are allowing ICMP traffic between the untrusted network and the internal network. In the picture below I’ve logged on to one of the untrusted clients and am pinging the internal server.

[Image: ping from an untrusted client to the internal server succeeding]

Since this traffic matches the ACL redirect rule we configured in Nuage Networks VSP, we can see a flow redirection at the Open vSwitch level pointing to the FortiGate virtual appliance.

[Image: Open vSwitch flow redirection towards the FortiGate]

We can also see that the forward statistics counters in VSP are increasing, and that the traffic is logged for future reference.

[Image: forward statistics counters and logging in VSP]

If we look at FortiGate we can see that the traffic is indeed received and allowed by the virtual appliance.

[Image: FortiGate log showing the allowed traffic]

The same quick test with SSH, which is blocked by our FortiGate security policy.

[Image: SSH attempt from the untrusted client blocked by the FortiGate security policy]

So, as you can see, this is a very solid integration between the Nuage Networks datacenter SDN solution and Fortinet for securing dynamic cloud environments in an automated way.

SDN with Underlay – Overlay visibility via VSAP

When using virtualization of any kind you introduce a layer of abstraction that people in operations need to deal with: if you run a couple of virtual machines on a physical server, you need to be able to understand the impact that changes (or failures) in one layer have on the other. The same is true for network virtualization; we need to be able to understand the impact that, say, a failure of a physical router port has on the overlay virtual network.

To make this possible with Nuage Networks we can use our Virtualized Services Assurance Platform or VSAP for short. VSAP builds upon intellectual property from our parent company, Alcatel-Lucent (soon to be Nokia), using the 5620 Service Aware Manager as a base to combine knowledge of both the physical and virtual networks.

VSAP Key Components

VSAP communicates with Nuage Networks firstly via SNMP, connecting into the Virtualized Services Controller (VSC, the centralized SDN controller); this gives VSAP an understanding of the network topology in the Nuage SDN solution. Secondly, it uses the CPAM (Control Plane Assurance Manager) module via the 7701 CPAA, which acts as a listening device in the network, collecting all the IGP and BGP updates; this allows it to model the underlay network based on the IGP.

[Image: VSAP key components]
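To make the SNMP side of this concrete, here is a minimal sketch of the kind of poll a management platform performs against a device. The target address and community string are placeholders, and VSAP's real discovery of course goes far beyond a sysName query.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Poll a device (e.g. a VSC) for sysName/sysDescr, the same class of SNMP
    # data a platform like VSAP uses as raw input for topology discovery.
    iterator = getCmd(SnmpEngine(),
                      CommunityData("public"),                       # placeholder community
                      UdpTransportTarget(("vsc.example.com", 161)),  # placeholder target
                      ContextData(),
                      ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
                      ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)))

    error_indication, error_status, _, var_binds = next(iterator)
    if not error_indication and not error_status:
        for name, value in var_binds:
            print(f"{name.prettyPrint()} = {value.prettyPrint()}")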

The solution has both a Web GUI and a Java GUI. Pictured below is the SAM Java GUI, which in this example is displaying the physical network, including the data center routers, the VSCs, and the top-of-rack switches (again, all learned via SNMP).

[Image: SAM Java GUI showing the physical network topology]

You can also look at the IGP (in this case OSPF) topology (picture below); again, this is built by the CPAM based on the routing information in the network. We can use this model to map the underlay (what you see in the picture below) to the overlay information from the Nuage Networks SDN solution.

[Image: OSPF topology built by the CPAM]

You can also get a closer look at the services in the network; in the example below, the VPLS (L2) and VPRN (L3) constructs. These services are created by the Nuage Networks SDN solution.

[Image: VPLS and VPRN services]

We can also drill deeper into the virtual switch constructs provided by Nuage, for example to see which virtual ports and access ports are attached, or which faults are recorded at the vSwitch level, and so on.

[Image: virtual switch details]

You can also see how traffic is passed across the virtual service; you have the option to highlight it on the IGP topology view and see the virtual service superimposed on the physical underlay.

[Image: highlighting a virtual service on the IGP topology view]

In the picture below you can see the virtual overlay service path on top of the physical network.

[Image: overlay service path superimposed on the physical network]

As mentioned before, there is also the option to use a web-based GUI if you don’t like a Java-based client (ahem 😉 ). In the example below we are looking at the VRSs (virtual switches) inside the L3 VPRN construct, so you can see where the virtual machines connect to the Open vSwitch-based VRS on each hypervisor host.

[Image: Web GUI showing the VRSs inside the L3 VPRN]

You also have an inventory view allowing you to search on all objects in the physical and virtual space.

[Image: inventory search view]

You can also look at the inventory topology.

[Image: inventory topology view]

Fault correlation

One of the powerful options you now have is fault correlation between the physical and virtual constructs. In the example below a switch failure is introduced, and you can see the impact on both overlay and underlay, up to and including the virtual machine level.

Pictured below is the overview in the Web GUI, showing the alarm that has been raised, the impact, and the probable cause.

[Image: Web GUI alarm overview with impact and probable cause]

The same type of indication in the Java client is depicted below.

[Image: the same alarm in the Java client]

For additional information and a demo of how this works, I recommend having a look at the recorded Tech Field Day Extra session from ONUG Spring 2015 here.

New Year, New Job.

I’m super excited to be taking on a new role in the NSBU at VMware: as of the 1st of January I’ll officially be joining the team as a Sr. Systems Engineer for the Benelux. I’ll be focused mainly on VMware NSX, including its integrations with other solutions (like vRA and OpenStack, for example).

Unofficially I’ve been combining this function with my “real” job for a couple of months now, ever since a dear and well-respected colleague decided to leave VMware. Recently I was fortunate enough to get the opportunity to attend a two-week training at our Palo Alto campus on NSX-v, NSX-MH, OpenStack, VIO, OVS,…

[Image: VMware Palo Alto campus]

The experience was face-meltingly good, I definitely learned a lot and got the opportunity to meet many wonderful people. One conclusion is that the NSX team certainly is a very interesting and exciting place to be in the company.

In the last few months I got my feet wet by training some of our partner community on NSX (most are very excited about the possibilities, even the die-hard hardware fanatics), staffing the NSX booth at VMworld Europe, and by having some speaking engagements like my NSX session at the Belgian VMUG.

[Image: presenting at the Belgian VMUG]

So why NSX?

In the past I’ve worked on a wide variety of technologies (being in a very small country and working for small system integrators you need to be flexible, and I guess it’s also just the way my mind works #squirrel!), but networking and virtualisation are my two main fields of interest, so how convenient that the two are colliding!
I’ve been a pure networking consultant in the past, mainly working with Cisco and Foundry/HP ProCurve, before moving into application networking at Citrix ANG and Riverbed.

The whole network virtualisation and SDN field (let’s hold off on the discussion of what’s what for another day) is on fire at the moment, making the rather tedious and boring (actually, I’ve never really felt that, but I’m a bit of a geek) field of networking exciting again. The possibilities and promise of SDN have lots of potential to disrupt and change an industry, and I’d like to wholeheartedly and passionately contribute to and be a part of that.

As NSX is an enabling technology for a lot of other technologies, it needs to integrate with a wide variety of solutions. Two solutions from VMware that will have NSX integrated, for example, are EVO:RACK and VIO. I look forward to also working on those and hopefully finding some time to blog about them as well.

Other fields are also looking to the promise of SDN to enable new ways of getting things done. SocketPlane, for example, is trying to bring together Open vSwitch and Docker to provide pragmatic software-defined networking for container-based clouds. As VMware takes on a bigger and bigger role in the cloud-native apps space, it certainly will be interesting to help support all these efforts.

“if you don’t cannibalise yourself, someone else will”
-Steve Jobs

I’m enjoying a few days off with my family and look forward to returning in 2015 to support the network virtualisation revolution!

[Image: NSX dragon]

NSX Logical Load Balancer

VMware NSX is a network virtualisation and security platform that enables third-party integration for specific functionality, like advanced firewalling, load balancing, anti-virus, etc. That said, VMware NSX also provides a lot of services out of the box that, depending on the customer use case, can cover a lot of requirements.

One of these functionalities is load balancing, using the NSX Edge virtual machine that is part of the standard NSX environment. The NSX Edge load balancer enables network traffic to follow multiple paths to a specific destination. It distributes incoming service requests among multiple servers (evenly, or according to your specific load-balancing algorithm) in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to Layer 7.

[Image: NSX Edge load balancer deployment topologies]

The technology behind the NSX load balancer is based on HAProxy, and as such you can leverage a lot of the HAProxy documentation to build advanced rulesets (Application Rules) in NSX.
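As an illustration, because Application Rules are essentially HAProxy snippets, they can also be pushed to an Edge programmatically. The sketch below posts a simple HAProxy-style rule through the NSX Manager REST API; the manager address, edge ID, credentials, and rule content are placeholders, and the exact payload shape should be checked against the NSX API guide for your version.

    import requests

    NSX_MGR = "https://nsxmanager.example.com"   # placeholder NSX Manager address
    EDGE_ID = "edge-1"                           # placeholder Edge identifier

    # An HAProxy-style rule: send requests whose path starts with /images
    # to a dedicated pool (rule syntax per the HAProxy ACL documentation).
    rule_script = "acl is_images path_beg /images\nuse_backend pool-images if is_images"

    xml = f"""<applicationRule>
      <name>redirect-images</name>
      <script>{rule_script}</script>
    </applicationRule>"""

    requests.post(f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/applicationrules",
                  auth=("admin", "<password>"), verify=False,
                  headers={"Content-Type": "application/xml"}, data=xml)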

As the picture above indicates, you can deploy the logical load balancer either “one-armed” (Tenant B example) or transparently inline (Tenant A example).

To start using the load balancing functionality you first need to deploy an NSX edge in your vSphere environment.

Go to Networking & Security in your vSphere Web Client and select NSX Edges, then click on the green plus sign to start adding a new NSX Edge. Select Edge Services Gateway as the install type and give your load balancer an indicative name.

[Image: new NSX Edge wizard, name and description]

Next you can enter a read-only CLI username and select a password of your choice. You can also choose to enable SSH access to the NSX Edge VM.

[Image: CLI credentials and SSH access settings]

Now you need to select the size of the Edge VM. Because this is a lab environment I’ll opt for Compact, but depending on your specific requirements you should select the most appropriate size. Generally speaking, more vCPU drives more bandwidth and more RAM drives more connections per second. The available options are:

  • Compact: 1 vCPU, 512 MB
  • Large: 2 vCPU, 1024 MB
  • Quad Large: 4 vCPU, 1024 MB
  • X-Large: 6 vCPU, 8192 MB

Click on the green plus sign to add an Edge VM; you need to select a cluster/resource pool to place the VM in, and a datastore. Typically we would place NSX Edge VMs in a dedicated edge rack cluster. For more information on NSX vSphere cluster design recommendations please refer to the NSX design guide available here: https://communities.vmware.com/servlet/JiveServlet/previewBody/27683-102-3-37383/NSXvSphereDesignGuidev2.1.pdf

[Image: Edge VM cluster and datastore placement]

Since we are deploying a one-armed load balancer we only need to define one interface next. Click the green plus sign to configure the interface: give it a name, select the logical switch it needs to connect to (i.e. where the VMs you want to load balance are located), configure an IP address in that specific subnet (VXLAN virtual switch), and you are good to go.

[Image: Edge interface configuration]

Next up configure a default gateway.

[Image: default gateway configuration]

Configure the firewall default policy and select Accept to allow traffic.

[Image: firewall default policy]

Verify all settings and click Finish to start deploying the Edge VM.

[Image: ready-to-complete summary]

After a few minutes the new Edge VM will be deployed, and you can start to configure load balancing for the VMs located on the Web-Tier-01 VXLAN virtual switch (as we selected before) by double-clicking the new Edge VM.

Click on Manage and select Load Balancer, next click on Edit and select the Enable Load Balancer check box.

[Image: enabling the load balancer on the NSX Edge]

The next thing we need is an application profile. You create an application profile to define the behavior of a particular type of network traffic. After configuring a profile, you associate it with a virtual server; the virtual server then processes traffic according to the values specified in the profile.

Click on Manage and select Load Balancer, then click on Application Profiles and give the profile a name. In our case we are going to load balance two HTTPS web servers but are not going to proxy the SSL traffic, so we’ll select Enable SSL Passthrough.

[Image: application profile settings]

Now we need to create a pool containing the web servers that will serve the actual content.

Select Pools, give the pool a name, and select the load-balancing algorithm.

Today we can choose between:

  • Round-robin
  • IP Hash
  • Least connection
  • URI
  • HTTP Header
  • URL

Then select a monitor to check whether the servers are serving HTTPS (in this case) requests.
You can add the members of the pool from the same interface.

[Image: pool configuration]
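To make the behaviour of the first two algorithms concrete, the toy sketch below shows how round-robin and IP-hash member selection differ. It is purely illustrative, not NSX code.

    import itertools
    import hashlib

    members = ["web-01", "web-02"]

    # Round-robin: each new request goes to the next member in turn.
    rr = itertools.cycle(members)
    def pick_round_robin():
        return next(rr)

    # IP hash: the same client IP always lands on the same member,
    # which gives a simple form of session persistence.
    def pick_ip_hash(client_ip):
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return members[int(digest, 16) % len(members)]

    print([pick_round_robin() for _ in range(4)])   # web-01, web-02, web-01, web-02
    print(pick_ip_hash("203.0.113.10"))             # stable for this client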

Now we create the virtual server that will take the front-end connections from the web clients and forward them to the actual web servers. Note that the protocol of the virtual server can be different from the protocol of the back-end servers; you could, for example, do HTTPS from the client web browser to the virtual server and HTTP from the virtual server to the back end.

[Image: virtual server configuration]

Now you should be able to reach the IP (VIP) of the virtual server we just created and start load balancing your web application.

[Image: load-balanced web application responding on the VIP]

Since we selected round-robin as our load-balancing algorithm, each time you hit refresh in your browser (use Ctrl-click on refresh to avoid the browser’s cache) you should see the other web server.
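A quick way to verify the alternation without fighting the browser cache is a small client-side loop; the VIP address below is a placeholder for the virtual server IP in your lab.

    import requests

    VIP = "https://192.168.10.100"   # placeholder: the virtual server (VIP) address

    # With round-robin active, consecutive requests should alternate between
    # the two pool members; print a fragment of each page to tell them apart.
    for _ in range(4):
        resp = requests.get(VIP, verify=False)   # lab certificate is self-signed
        print(resp.text[:60])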

NSX Service Chaining

When NSX was introduced to the public at large during VMworld 2013, there was a logo slide including all the initial NSX partners that had a working demo together with NSX. In order to have both solutions work together in a meaningful way, we often resort to service insertion or service chaining.

[Image: NSX partner logos at VMworld 2013]

What is service chaining?

Several different definitions exist, which is expected in what is a very active (SDN) market right now. I’ll liken it to a traditional network with lots of middle boxes (load balancer, WAN accelerator, firewall, etc.): you have to cable up these appliances and then manually configure the traffic flow to “route” via these boxes for a specific service. For example, you want your external website to be load balanced across multiple front-end servers, so you point your web clients towards the load balancer, which then spreads the traffic across multiple servers. In the SDN/NFV space this is somewhat similar but more rapid and dynamic (on-demand): you can create policies that govern when certain services are required, and these services (usually delivered through VMs) are then put into the data path if and when needed. Since it is fast to provision a new VM/application, provisioning network services should be equally fast, and we are not limited to the standard network plumbing (L3-4) either, which is quite important. In the world of virtualization, where virtual machines can move around physical hosts and physical network devices, it is imperative that service chaining or service insertion can deal with these events; this is somewhat trickier in a purely physical world.
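One way to picture such a policy is as a classifier plus an ordered list of service functions that a matching flow must traverse. The sketch below models only that idea; it is not any vendor’s API, and the names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class ServiceFunction:
        name: str        # e.g. "ngfw", "wan-opt", "load-balancer"
        vm: str          # the VM (or appliance) implementing the function

    @dataclass
    class ServiceChainPolicy:
        match: dict                                 # classifier, e.g. {"proto": "tcp", "dport": 443}
        chain: list = field(default_factory=list)   # ordered service functions

        def path_for(self, flow):
            # If the flow matches the classifier, steer it through the chain;
            # otherwise it takes the normal forwarding path.
            if all(flow.get(k) == v for k, v in self.match.items()):
                return [sf.vm for sf in self.chain]
            return []

    policy = ServiceChainPolicy(
        match={"proto": "tcp", "dport": 443},
        chain=[ServiceFunction("ngfw", "paloalto-vm-1"),
               ServiceFunction("adc", "f5-vm-1")])

    print(policy.path_for({"proto": "tcp", "dport": 443}))  # ['paloalto-vm-1', 'f5-vm-1']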

Some service chaining/service insertion solution examples:

Security with Palo Alto Networks

If and when the added security capabilities of the virtual Palo Alto security appliance are needed, traffic from the VM is transparently (you don’t need to do any network configuration) steered through the virtual security appliance. Context is shared between NSX and Palo Alto Panorama (centralized management) to keep track of VM movement and make sure that security policies can still be enforced.

WAN optimisation with Silver Peak

The Silver Peak integration (Agility for VMware Software Defined Data Centers / point-and-click workload optimization) provides traffic redirection for workload optimization: if WAN optimization is required, you can enable redirection for a specific application/VM toward the Silver Peak VX VM, which then performs the optimization across the WAN.

ADC services with F5 Networks

Trend Micro Deep Security

Many more examples, in various stages of development, exist.

On SDN, NFV, and OpenFlow

Introduction

Network virtualisation and network function virtualisation are the talk of the town at the moment; the sheer number of vendor solutions is staggering, and the projected market opportunity is enough to make anyone’s head spin. I’ve been working in ‘traditional’ networking all my career, and it looks like there is finally something new and exciting, dare I say sexy, to play with.

Because of all the buzz, and because the topic reaches far beyond pure networking folks, there seems to be a lot of confusion around some of the terminology and which piece of technology is applicable where. In this introductory post I want to touch upon the high-level differences between SDN and NFV and what each represents.

[Image: SDN and NFV]

History

Let’s start with a little history but feel free to skip to the topic sections below.

I’m going to pick the early work done at Stanford as the starting point. You can argue that the history of network virtualisation goes back way further, and in theory you would be right, but in order not to muddy the water too much I’ll start at Stanford.

Stanford graduate Martin Casado, now at VMware, is generally credited with starting the software-defined networking revolution around 2007. He and co-conspirators Nick McKeown (Stanford), Teemu Koponen (VMware), and Scott Shenker (Berkeley) built on earlier work that attempted to separate the control plane from the data plane, but SDN more or less started there.

Martin Casado and his colleagues created Ethane, which was intended as a way of centrally managing global policy; it used a “flow-based network and controller with a focus on network security”.

The Ethane abstract reads:

“Ethane is intended as a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy, and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows.”

If you look at Aruba and Meraki in the wireless space, for instance, you can see that they also have this model of separating the control and data planes. The networking models that companies like Yahoo!, Google, and Amazon had started to use are also very closely related to this concept of SDN.

The “official” starting point of SDN is when the NOX + OpenFlow specification emerged in 2008, but again, its legacy goes back a lot further.

The human middleware problem.

If you look at a traditional network setup, you will most likely encounter a three-tier network architecture, with the Core network at the top, the Distribution network in the middle, and the Access network close to the end users and the applications. This model is highly biased towards north-south traffic, since Layer 3 decisions are made higher up instead of close to the access network. Port security and VLANs are applied at the Access layer, policy and ACLs are applied at the Distribution layer, and so on; all of this has redundancy and scalability limits due to operational complexity. Furthermore, you cannot use all the capacity of the links because of the design (spanning tree, etc.).

[Image: three-tier core/distribution/access design]

Now look at the practical aspects of packet forwarding: packets come in, they are evaluated against a table, the table determines what happens to each packet (does it get dropped, forwarded, replicated, etc.), and then it goes out the other end. The tables against which the packets are evaluated are populated in one of two ways. Some tables (the L2 and L3 tables) are populated by dynamic (routing) algorithms; these algorithms run on every network node, and when there is a change in the network they recalculate state and communicate with all the other nodes to populate the tables. This is actually rather functional: not a lot of human interaction, very stable, and able to react dynamically to changes.
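Conceptually, each of these tables is a match-action structure. As a toy illustration, here is a longest-prefix-match lookup of the kind an L3 table performs (purely illustrative, not how an ASIC implements it):

    import ipaddress

    # A toy L3 forwarding table: longest-prefix match decides the action.
    forwarding_table = [
        (ipaddress.ip_network("10.0.0.0/8"),  "forward port 1"),
        (ipaddress.ip_network("10.1.0.0/16"), "forward port 2"),
        (ipaddress.ip_network("0.0.0.0/0"),   "forward port 3"),  # default route
    ]

    def lookup(dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        # The most specific (longest) matching prefix wins, as in a real L3 table.
        matches = [(net, action) for net, action in forwarding_table if dst in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(lookup("10.1.2.3"))   # forward port 2
    print(lookup("8.8.8.8"))    # forward port 3 (default)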

But there are other tables, like ACLs, QoS, and port groups, and these tables are populated using very different methods: human beings, scripts, etc. These are slow and error prone. What happens if your network admin doesn’t show up on Monday?

Now there are management APIs and SDKs to programmatically deal with these tables, but there is no real standardised way, and certainly no way to do it at scale; the APIs expose functions of the hardware chips, and these provide fixed functionality. Moreover, there are limited consistency semantics in the APIs, i.e. you cannot be sure that if you programmed state it would get consistently implemented on all the network devices.

This problem led to the creation of OpenFlow, in order to programmatically manage all of the “datapath” state in the network.

OpenFlow

If you look at networking devices today, you generally see three levels of abstraction (three planes). In the Data plane, fast packet forwarding takes place (this is where the ASICs live) based on the information in the forwarding table. The forwarding table is populated from the Control plane. The Control plane is where your routing protocols live, like OSPF and BGP; they figure out the topology of the network by exchanging information with neighbouring devices. A subset of the “best” forwarding (IP routing) table in the Control plane gets pushed/programmed into the forwarding table in the Data plane. On top of both the Data and Control planes we have the Management plane, where configuration happens through a command-line interface, GUI, or API.

In an OpenFlow-based network there is a physical separation between the Control plane and the Data plane. The Control plane typically runs on a dedicated controller that establishes a TCP connection (OpenFlow runs over TCP) with each networking device; the controller, not the individual devices, runs topology discovery. The controller computes the forwarding information that needs to be pushed/programmed to all networking devices and uses the OpenFlow protocol to distribute this information to the flow tables of the devices.

[Image: OpenFlow controller and switches]
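As a minimal concrete example, here is the skeleton of an OpenFlow 1.3 application written for the Ryu controller framework. It installs a table-miss flow entry that punts unmatched packets to the controller, typically the first entry a controller programs on a switch; treat it as a sketch, not a complete switch application.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Table-miss entry: anything no other flow matches is sent to the
            # controller, which can then decide and program more specific flows.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

Run with ryu-manager against an OpenFlow 1.3 switch, this is the starting point from which a controller programs the datapath state discussed above.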

The OpenFlow protocol is published by the Open Networking Foundation, and which version of OpenFlow a switch implements somewhat depends on the switch (because some feel the rate of innovation is too high 😉 ). In other words, not all OpenFlow-capable devices have implemented the same version, so interoperability with features of later versions is not always guaranteed.

OpenFlow is an interface to your existing switches; it does not really enable features that are not already there.

So why do we have OpenFlow then?

OpenFlow provides you with the ability to innovate faster within the network, allowing you to gain a competitive advantage. It also moves the industry towards standards, which creates more market efficiency. And if you decouple layers, they can evolve independently; fixed monolithic systems are slower to evolve.

Software Defined Networking (SDN)

Obviously SDN and OpenFlow are closely related. You can think of OpenFlow as an enabler of SDN: SDN requires a method for the Control plane to talk to the Data plane, and OpenFlow is one such mechanism.

In SDN we also talk about decoupling the Control and Data planes, but SDN also looks at what happens above the Control plane, i.e. how we take the information we get from the application space and use it as input to dynamically program the network below it.

Usually the Control plane functionality is handled by a controller cluster that is implemented as a virtual machine cluster running on top of a hypervisor for maximum flexibility.

[Image: SDN controller cluster architecture]

We also use the concept of overlay networking (called an encapsulation protocol in the picture above) to dynamically create connections between switches (or application tiers, or users, …) independent of the physical network. This way we can link applications using the most optimal path independent of the physical topology.
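To see what such an encapsulation looks like on the wire, here is a Scapy sketch of a VXLAN-encapsulated frame (VXLAN being one common overlay encapsulation; all addresses and the VNI are placeholders):

    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner frame: the VM-to-VM traffic as the guests see it.
    inner = Ether(src="00:00:00:aa:00:01", dst="00:00:00:aa:00:02") / \
            IP(src="172.16.0.10", dst="172.16.0.20")

    # Outer headers: hypervisor-to-hypervisor transport over the physical underlay.
    # VNI 5001 identifies the virtual network; 4789 is the standard VXLAN UDP port.
    frame = Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / \
            UDP(sport=49152, dport=4789) / VXLAN(vni=5001) / inner

    frame.show()   # prints the layered packet: underlay headers wrapping the overlay frame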

As a crazy high-level example, let’s look at Hadoop. Generally speaking there are two phases in the Hadoop process: the map phase and the reduce phase.

[Image: Hadoop map and reduce phases]

What if the flow the traffic takes between these two phases is best served by another path, i.e. the compute nodes with different capabilities are located in different parts of the network and we want traffic to take the shortest path possible? In a physical world it would be very difficult to accomplish this dynamic reorganisation of the network, but in SDN land you potentially could.

Or, a more traditional example: I need to spin up a new virtual machine, this VM requires network connectivity, so I ask the network team to place the physical switch port in the right VLAN so the VM can start to communicate. In most organisations this could take some time, so what if the application itself, based on the SDN policies put in place (once), was able to direct this action?

Another benefit most SDN solutions implement is the distribution of routing (and potentially other services) to the hypervisor layer, either with an agent (virtual machine) or integrated in the kernel or virtual switch. This means the traditional north-south bias of a physical three-tier network architecture no longer hairpins traffic: if you can route Layer 3 traffic from within the hypervisor layer, you can move traffic across the server “backplane” between two virtual machines living on separate Layer 3 networks but on the same host, or between hosts across the overlay network, efficiently moving east-west traffic in the data center.

Now of course, once the network exists as a software construct, you can manipulate it just as easily as a virtual machine, i.e. snapshot it, roll it back, clone it, etc.

If you have a background as a VMware administrator, you understand that normally you would apply configuration in vCenter and have it distributed to all your ESXi hosts; you don’t need to log into each host individually to configure it. The same thing applies to SDN: you set policy at the controller layer and it feeds down to the switches.

In terms of physical architecture, the traditional three-tier (core, aggregation, and access) design is being replaced by a flatter network like leaf-spine, whereby all links are utilised for more bandwidth and resiliency.

[Image: leaf-spine topology]

In this type of architecture, spanning tree is being replaced by things like TRILL and SPB to let all links forward traffic, eliminate loops, and create meshes so traffic takes the shortest path between switches.

Network Function Virtualisation (NFV)

While SDN has grown out of the enterprise networking space, NFV has its background in the telco and service provider world. It grew out of CSPs looking to avoid the need to implement more hardware middle-boxes when rolling out more functionality in the network, i.e. specialised load balancers, firewalls, caches, DNS, etc.

So, taking a page out of the server virtualisation book: what if some of those higher-layer services could be implemented as virtual machines? What would that mean in terms of flexibility and cost improvement (reduced CAPEX, OPEX, space, and power consumption)?

Again SDN and NFV are closely related. NFV builds virtualised network functions, but how the network is built (the interconnectivity) is the job of SDN.

These virtualised building blocks can then be connected (chained) together to create services.

[Image: NFV architecture]

As I mentioned at the beginning of the post, I wanted to provide a high-level, abstracted overview, and I will attempt to go into more detail on each of these topics in subsequent articles.