
Fortinet integration with Nuage Networks SDN

Introduction

Nuage Networks VSP, with the emphasis on P for Platform, provides many integration points for third-party network and security providers, among others. This way customers can leverage the SDN platform to build end-to-end automated services in support of their application needs.

One of those integration partners is Fortinet: we can integrate with the FortiGate virtual appliances to provide NGFW services, with automated firewall management via FortiManager.

Integration example

In the example setup below we are using OpenStack as the Cloud Management System and KVM as the hypervisor on the compute hosts.
We have the FortiGate virtual appliance connected to a management network (orange), an untrusted interface (red), and a trusted/internal interface (purple).
On the untrusted network we have a couple of virtual machines connected, and on the internal network the server workloads they need to reach.

[Image: lab topology overview]

Dynamic group membership

Since we are working in a cloud environment where new workloads are spun up and torn down at a regular pace, resulting in dynamic allocation of IP addresses, we need to make sure the firewall policy stays intact. To do this we use dynamic group membership that adds and deletes the IP addresses of the virtual machines based on group membership on both platforms. The added benefit is that the security policy does not go stale: when workloads are decommissioned in the cloud environment, their address information is automatically removed from the security policies, resulting in a more secure and stable environment overall.

Looking at the FortiGate below, we are using dynamic address groups to define source and destination policy rules. The membership of the address groups is synchronised between the Nuage VSP environment and Fortinet.

[Screenshot: FortiGate policy rules using dynamic address groups]

If we look at group membership in Fortinet we can see the virtual machines that are currently members of the group. As you can see in the image below, the group “PG1 – Address group 1 – Internal” currently has one virtual machine as a member.

[Screenshot: FortiGate address group membership showing one virtual machine]

If we now create a new virtual machine instance in OpenStack and make that instance a member of the corresponding policy group in Nuage Networks VSP, the membership will automatically be synced back to Fortinet.

[Screenshot: launching a new instance in OpenStack]

Looking at Nuage Networks VSP, we can see the new OpenStack virtual machine is a member of the Policy Group “PG1 – Address Group 1 – Internal”.

[Screenshot: Nuage VSP showing the new virtual machine as a member of the policy group]

If we now go back to our FortiGate we can see the membership of the corresponding group has been updated.

[Screenshot: FortiGate address group updated with the new member]

Traffic Redirection

Now that we have dynamic address groups, we can use them to create dynamic security policy. To selectively forward traffic from Nuage Networks VSP to the FortiGate, we need to create a forwarding policy in VSP.

In the picture below we define a new redirection target pointing to the FortiGate virtual appliance. In this case we opted for L3 service insertion; this could also be virtual-wire based.

[Screenshot: defining a redirection target pointing to the FortiGate virtual appliance]

Now we need to define which traffic to classify as “interesting” and forward to the FortiGate. Because Nuage Networks VSP has a built-in distributed stateful L4 firewall, we can create a security baseline that handles common east-west traffic locally and only forwards traffic that demands higher-level inspection to the FortiGate virtual appliance.

[Screenshot: creating a forward policy rule in Nuage VSP]

In the picture above we can select the protocol. In this case I’m forwarding all traffic, but we could just as easily select, for example, TCP and define interesting traffic based on source and destination ports. We also need to select the origin and destination networks; here we use the dynamic address groups that are synced with Fortinet, but this could also be based on more static network information. Finally we select the forward action and point it to the FortiGate virtual appliance.

We have a couple of policies defined; as you could see in the picture at the top of the post, we are allowing ICMP traffic between the untrusted network and the internal network. In the pictures below I’ve logged on to one of the untrusted clients and am pinging the internal server.

[Screenshots: pinging the internal server from an untrusted client]

Since this traffic matches the ACL redirect rule we configured in Nuage Networks VSP, we can see a flow redirection at the Open vSwitch level pointing to the FortiGate virtual appliance.

[Screenshot: Open vSwitch flow table showing the redirect to the FortiGate]
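
If you want to verify this yourself on the compute node, you can inspect the flows directly. Below is a minimal sketch; the bridge name alubr0 is the VRS default (an assumption, adjust to your environment) and <client-ip> is a placeholder.

# On the KVM compute node running the Nuage VRS (Open vSwitch)
# Confirm the bridge and its ports (the FortiGate-facing port should be listed)
ovs-vsctl show

# Dump the installed flows and look for the entry matching the client's traffic
ovs-appctl bridge/dump-flows alubr0 | grep <client-ip>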

We can also see that the forwarding statistics counters in VSP are increasing, and furthermore that the traffic is logged for reference.

[Screenshot: forward policy statistics and logging in Nuage VSP]

If we look at FortiGate we can see that the traffic is indeed received and allowed by the virtual appliance.

[Screenshot: FortiGate log showing the ICMP traffic being received and allowed]

Same quick test with SSH, which is blocked by our FortiGate security policy.

[Screenshots: SSH attempt blocked by the FortiGate policy]

So as you can see, this is a very solid integration between the Nuage Networks datacenter SDN solution and Fortinet to secure dynamic cloud environments in an automated way.

Docker networking overview

Introduction

There are of course a lot of blog posts out there already regarding Docker networking; I don’t want to replicate that work, but instead want to provide a clear overview of what is possible with Docker networking today by showing some examples of the different options.

In general the networking piece of Docker, and arguably Docker itself, is still quite young, so things move fast and will likely change over time. A lot of progress has been made since the SocketPlane acquisition last year and its subsequent pluggable model, but more about that later.

Docker containers are ephemeral by design (pets vs. cattle), which leads to several potential issues. Not the least of these is keeping your firewall configuration up to date, because IP address management becomes difficult. It’s also hard to connect to services that might disappear at any moment, and no, using DNS as a stopgap is not a good solution (DNS as a SPOF, don’t go there). Of course there are several options and methods available to overcome these issues.

Single host Docker networking

You basically have four options for single-host Docker networking: bridge mode, host mode, container mode, and no networking.

Bridge mode (the default Docker networking mode)

The Docker daemon creates “docker0”, a virtual Ethernet bridge that forwards packets between all interfaces attached to it. All containers on the host are attached to this internal bridge, which assigns one interface as the container’s “eth0” interface and keeps another interface in the host’s namespace (think VRF). The container gets a private IP address assigned. To prevent ARP collisions on the local network, the Docker daemon generates the MAC address from the allocated IP address. In the example below Docker assigns the private IP 172.17.0.1 to the container.

[Screenshot: container in bridge mode with private IP 172.17.0.1]
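
For reference, a quick way to reproduce this yourself (a sketch; the image and container names are illustrative):

# Bridge mode is the default, so no --net flag is needed
docker run -d --name c1 busybox sleep 3600

# Show the private IP Docker allocated on the docker0 bridge
docker inspect --format '{{ .NetworkSettings.IPAddress }}' c1

# The bridge itself lives in the host's network stack
ip addr show docker0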

Host mode

In this mode the container shares the networking namespace of the host, directly exposing it to the outside world. This means you don’t need port mapping to reach services inside the container, whereas in bridge mode Docker maps (and can automatically assign) ports to make them reachable. In the example below the Docker host has the IP 10.0.0.4, and as you can see the container shares this IP address.

[Screenshot: container in host mode sharing the host IP 10.0.0.4]
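
A minimal sketch to try this yourself (container name illustrative):

# Host mode: share the host's network namespace
docker run -d --name c2 --net=host busybox sleep 3600

# The container sees the host's own interfaces and IP
docker exec c2 ip addr show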

Container mode

This mode forces Docker to reuse the networking namespace of another container. It is used if you want to provide custom networking from that container; this is, for example, what Kubernetes uses to provide networking for the multiple containers in a pod. In the example below, the container we are going to connect subsequent containers to has the IP 172.17.0.2, and as you can see the newly launched container has the same IP address.

[Screenshots: two containers sharing one network namespace and the IP 172.17.0.2]
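
A minimal sketch of the same flow (names illustrative):

# Container mode: reuse the network namespace of an existing container
docker run -d --name netholder busybox sleep 3600
docker run --rm --net=container:netholder busybox ip addr show
# Both containers report the same eth0 and IP address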

No networking

This mode does not configure networking, which is useful for containers that don’t require network access, but it can also be used to set up custom networking.
This is the mode Nuage Networks leverages pre-Docker 1.9 (more info here).
In the example below you can see that our new container did not get an IP address assigned.

[Screenshot: container started with no networking and no IP address]
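
To reproduce (a sketch):

# No networking: Docker only creates the loopback interface
docker run --rm --net=none busybox ip addr show
# Expect to see only "lo"; there is no eth0 until you attach one yourself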

By default Docker has inter-container communication enabled (--icc=true), meaning that containers on a host are free to communicate without restrictions, which could be a security concern. Communication to the outside world is controlled via iptables and IP forwarding.
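
If that is a concern, inter-container communication can be switched off at the daemon level; a sketch using the Docker 1.x daemon syntax:

# Start the daemon with inter-container communication disabled
# (Docker 1.8/1.9 era syntax; older releases used "docker -d")
docker daemon --icc=false

# Docker then programs a DROP rule for traffic between containers on docker0;
# verify the generated rules and that IP forwarding is enabled:
iptables -L FORWARD -v -n
sysctl net.ipv4.ip_forward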

Multi-host Docker networking

In a real-world scenario you will most likely end up using Docker containers across multiple hosts, depending on the needs of your containerized application. So now you need to build container networks across these hosts to let your distributed application communicate internally and externally.

As alluded to above, in March of 2015 Docker, Inc. acquired the SDN startup SocketPlane. That acquisition has given rise to Libnetwork and the Container Network Model, meant to be the default multi-host networking setup going forward.

Libnetwork

Libnetwork provides a native Go implementation for connecting containers.
The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.

One of the benefits of Libnetwork is that it uses a driver/plugin model to support many underlying network technologies while still exposing a simple and consistent network model to the end user (a common API). Nuage Networks leverages this model with a remote plugin.

Libnetwork also introduces the Container Network Model (CNM) to provide interoperation between networks and containers.

[Diagram: the Container Network Model]

The CNM defines a Network Sandbox, an Endpoint, and a Network. The Network Sandbox is an isolated environment where the networking configuration for a Docker container lives. An Endpoint is a network interface that can be used for communication over a specific network; endpoints join exactly one network, and multiple endpoints can exist within a single Network Sandbox. A Network is a uniquely identifiable group of endpoints that are able to communicate with each other. You could create a “Frontend” and a “Backend” network, and they would be completely isolated.
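
With the libnetwork-backed CLI in Docker 1.9+, that example looks roughly like this (a sketch; names are illustrative):

# Two isolated networks
docker network create frontend
docker network create backend

# Each endpoint joins exactly one network at run time
docker run -d --name web --net=frontend busybox sleep 3600
docker run -d --name db  --net=backend  busybox sleep 3600

# A sandbox can hold multiple endpoints: give "web" a second one on backend
docker network connect backend web
# web can now reach db, while other frontend-only containers cannot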

SDN with Underlay – Overlay visibility via VSAP

When using virtualization of any kind you introduce a layer of abstraction that people in operations need to deal with: if you run a couple of virtual machines on a physical server, you need to understand the impact that changes (or failures) in one layer have on the other. The same is true for network virtualization; we need to be able to understand the impact that, say, a failure of a physical router port has on the overlay virtual network.

To make this possible with Nuage Networks we can use our Virtualized Services Assurance Platform, or VSAP for short. VSAP builds upon intellectual property from our parent company, Alcatel-Lucent (soon to be Nokia), using the 5620 Service Aware Manager as a base to combine knowledge of both the physical and virtual networks.

VSAP Key Components

VSAP communicates with Nuage Networks firstly via SNMP, connecting into the Virtualized Services Controller (VSC, the centralized SDN controller); this gives VSAP an understanding of the network topology in the Nuage SDN solution. Secondly, it uses the CPAM (Control Plane Assurance Manager) module via the 7701 CPAA, which acts as a listening device in the network to collect all the IGP and BGP updates, allowing it to model the underlay network based on the IGP.

[Diagram: VSAP key components]

The solution has both a Web GUI and a Java GUI. Pictured below is the SAM Java GUI, which in this example displays the physical network, including the data center routers, the VSCs, and the top-of-rack switches (again, all learned via SNMP).

[Screenshot: SAM Java GUI showing the physical network topology]

You can also look at the IGP (in this case OSPF) topology (picture below); again, this is built by the CPAM based on the routing information in the network. We can use this model to map the underlay (what you see in the picture below) to the overlay information from the Nuage Networks SDN solution.

[Screenshot: OSPF topology view built by the CPAM]

You can also get a closer look at the services in the network, in the example below the VPLS (L2) and VPRN (L3) constructs. These services are created by the Nuage Networks SDN solution.

[Screenshot: VPLS (L2) and VPRN (L3) service view]

We can also drill deeper into the virtual switch constructs provided by Nuage, for example to see which virtual ports and access ports are attached, or which faults are recorded at the vSwitch level.

[Screenshot: drilling into a virtual switch construct]

You can also see how traffic is passed across the virtual service; you have the option to highlight it on the IGP topology view and see the virtual service superimposed on top of the physical underlay.

[Screenshot: virtual service highlighted on the IGP topology view]

In the picture below you can see the virtual overlay service path on top of the physical network.

[Screenshot: virtual overlay service path on top of the physical network]

As mentioned before, there is also the option to use a Web-based GUI if you don’t like a Java-based client (ahem 😉). In the example below we are looking at the VRSs (virtual switches) inside the L3 VPRN construct, so you can see where the virtual machines connect to the Open vSwitch based VRS on each hypervisor host.

[Screenshot: Web GUI view of the VRSs inside the L3 VPRN construct]

You also have an inventory view, allowing you to search across all objects in the physical and virtual space.

[Screenshot: inventory view]

You can also look at the inventory topology.

[Screenshot: inventory topology view]

Fault correlation

One of the powerful options you now have is that you can perform fault correlation between the physical and virtual constructs. In the example below a switch failure is introduced, and you can see the impact on overlay and underlay, up to and including the virtual machine level.

Pictured below is the overview in the Web GUI showing the alarm that has been raised, the impact, and the probable cause.

[Screenshot: Web GUI alarm with impact and probable cause]

The same type of indication in the Java client is depicted below.

[Screenshot: Java client alarm view]

For additional information and a demo of how this works, I recommend having a look at the recorded Tech Field Day Extra session from ONUG Spring 2015 here.

Unifying Docker Container and VM networking

Introduction

Most environments are not homogeneous; typically you have multiple types of workloads, and I believe this will only increase in the near future with the rise of containers, PaaS, VMs, bare metal, etc. In this brief overview I wanted to demonstrate how you can connect virtual machines and containers on the same overlay network in an automated manner via our SDN solution. This way, every time you spin up a new workload it automatically gets its network and security policy applied and behaves like any other endpoint on the network.

[Diagram: VMs and containers on the same overlay network]

Docker networking

There are multiple options for networking in Docker; typically a container (running a specific service) is exposed externally by mapping an internal port to an external one. When you install Docker, it creates three networks automatically (bridge, none, and host); when you run a container you can use the --net flag to specify which network you want to run it on. By default the Docker daemon connects containers to the bridge network. If you run ifconfig on the host you can see the bridge as part of the host’s network stack.
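
You can list those default networks and pick one explicitly at run time; a quick sketch:

# The three networks created at install time
docker network ls

# Pick one explicitly with --net (bridge is used when the flag is omitted)
docker run --rm --net=host busybox ip addr show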

The none network adds a container to a container-specific network stack; this is what we will use in the case of Nuage Networks to connect the Docker container to our VRS (Open vSwitch).

The host network adds a container to the host’s network stack. You’ll find that the network configuration inside the container is identical to the host.

Nuage Networks and Docker containers

In the case of Nuage Networks we attach every container to a tenant (overlay) network which is provided by our centralised management (VSD) and control (VSC) planes and configured on the Docker host in our VRS (Open vSwitch). This allows us to use our centralised networking and security policies, providing IP configuration, firewall rules, QoS, etc. If traffic leaves the Docker host it is encapsulated in VXLAN, so from a management point of view this is no different from how we work with virtual machines.

Demo

I’ve created an L2 network (called DockerSN below) in Nuage (synced to OpenStack) where I’m connecting both my container and VM workloads. The subnet has a range of 192.168.200.0/24.

[Screenshot: the DockerSN L2 network in Nuage, synced to OpenStack]

So when I spin up a new container on my Docker host and connect it to the Nuage VRS, it automatically gets the policies from that construct applied.

[Screenshot: starting a container attached to the Nuage VRS]
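
On the Docker host the flow looks roughly like the sketch below. This is illustrative only: the container is started without networking and the Nuage integration then attaches it to the VRS; the NUAGE_* metadata variable names shown are hypothetical placeholders, not the actual product API.

# Start the container without a network stack; the Nuage integration
# then wires its interface into the VRS (Open vSwitch) and applies policy.
# The NUAGE_* variable names below are hypothetical placeholders.
docker run -d --name app1 --net=none \
  -e "NUAGE_ENTERPRISE=MyOrg" \
  -e "NUAGE_NETWORK=DockerSN" \
  busybox sleep 3600

# Once attached, the container shows its overlay-assigned address
docker exec app1 ip addr show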

So as you can see above, my new container (gloomy_jang) has been assigned the IP address 192.168.200.190. If we go back to our Virtualized Services Architect interface we can see two containers (one I created earlier) and a VM attached to the same subnet (they could of course also be on separate subnets).

[Screenshot: Virtualized Services Architect showing two containers and a VM on the subnet]

We can drill down on the newly created container and get all the network and security policy details.

[Screenshot: network and security policy details for the new container]

We now have connectivity between our container workloads and the VM (192.168.200.2).

# docker exec 152f5660e56a ping -c 3 192.168.200.2
PING 192.168.200.2 (192.168.200.2) 56(84) bytes of data.
64 bytes from 192.168.200.2: icmp_seq=1 ttl=64 time=0.650 ms
64 bytes from 192.168.200.2: icmp_seq=2 ttl=64 time=0.450 ms
64 bytes from 192.168.200.2: icmp_seq=3 ttl=64 time=0.450 ms

Nuage Networks Service Insertion demos

During the OpenStack Summit in Tokyo, Nuage Networks announced five new partnerships (Fortinet, vArmour, Citrix, GuardiCore, CounterTack). To give a quick overview of what these new (and some existing) partnerships represent from a technical point of view, some demo movies were made available on YouTube.

Nuage Networks VSP and Citrix NetScaler VPX Demo In a Red Hat OpenStack environment

Nuage Networks VSP and F5 Big-IP Virtual Edition demo in a Red Hat OpenStack environment

Nuage Networks VSP and Palo Alto Networks Virtualized Firewall Demo in a Red Hat OpenStack environment

Nuage Networks VSP, and both Palo Alto Networks Virtualized Firewall and F5 Big-IP VE Demonstration

Nuage Networks VSP and Fortinet FortiGate Virtual Appliance Demo in a Red Hat OpenStack environment

Nuage Networks VSP and GuardiCore Data Center Security Suite Demo in a Red Hat OpenStack environment

Nuage Networks VSP and CounterTack Sentinel Demo in a Red Hat OpenStack environment

Networking Field Day 10 – Nuage Networks

Introduction

Networking Field Day 10 was held from August 19 to 21, 2015, in Silicon Valley. NFD is part of the Tech Field Day series of events organized by Stephen Foskett and team, and aims to bring together independent bloggers and IT product vendors to share information and opinions in a presentation format. In this setting, demos and whiteboard sessions are usually appreciated more than slideware and marketing pitches. Assuming you are an independent blogger/influencer (i.e. you don’t work for a vendor), you can ask to become a TFD delegate and, when selected, experience these sessions first hand.

One of the vendors at NFD 10 was Nuage Networks. Nuage has appeared at various TFD events on numerous occasions (10 times by my count), and that’s how I first heard about and got interested in their solutions.

So what did they show at NFD 10?

First up was Sunil Khandekar, Founder and CEO, with an introduction to Nuage Networks and an update on the products. He talked about building software-defined, programmable, automated data centers and how Nuage, through its declarative policy-based automation, is applicable to all types of workloads, be they bare metal, virtual machines, or containers. Further, he mentioned how Nuage is a complete SDN solution, seeing that it hits all the key tenets: abstraction, automation, control, and visibility. He then went on to give an update on Nuage Networks.

[Slide: Nuage Networks company update]

Nuage was started in January 2012 with the idea that networking should be as instantaneous and consumable as compute has become. The solution, VSP, was launched about a year later in April 2013, delivering on this thesis. Currently Nuage is on its 4th release and has seen great customer traction.

The Virtualized Services Platform (VSP) has three main components: the Virtualized Services Directory (VSD), the Virtualized Services Controller (VSC), and the Virtual Routing & Switching engine (VRS). Additionally, Nuage optionally provides a hardware VTEP gateway, the 7850 Virtualized Services Gateway (VSG), to connect legacy networking to VXLAN overlays.

[Slide: VSP components: VSD, VSC, VRS, and the 7850 VSG]

To get a more detailed overview of the solution please see my Nuage Compendium page*.

Sunil also announced the VSP SDK (VSPK), available on https://github.com/nuagenetworks, with the idea of fostering open collaboration around the platform. Through this proposed GitHub collaboration, custom scripts for network automation, control, or visibility can be developed and shared by the customer community.
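
As a quick sketch, getting started with the SDK could be as simple as the following; note that the PyPI package name “vspk” is an assumption on my part, check the GitHub organisation above for the current repositories.

# Install the Python flavour of the VSPK (package name "vspk" is an assumption)
pip install vspk
pip show vspk   # confirm the installed version and metadata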

It is also important to understand that Nuage takes the concept of network automation beyond the data center and extends it to the WAN (branch office) with Virtual Network Services (VNS).

Virtual Network Services (VNS) basics

Next up was Rotem Salomonovitch, head of product management for VNS, talking about how it works in more detail. The idea of VNS is to make setting up a new branch stupidly easy and independent of the underlying (carrier) technology: essentially SD-WAN with a lot of automation. VNS is available as a hardware box, a VM, and as a software-only package to install on a bare metal server.

[Slide: Virtual Network Services overview]

VNS uses the same control plane components as the data center (VSD / VSC) but has a different data plane, i.e. a forwarding entity called the Network Services Gateway (NSG). The reason is that data planes in branches typically differ from those in data centers, where you have GigE, 10GigE, 40GigE and beyond; in the branch you have different interface types and things like encryption in the WAN, additional security requirements, etc. The NSG is meant to be a platform: the initial applications are networking services (routing, QoS, FW, …), but the goal is to go beyond network connectivity and enable application flexibility (see the section below).
The appliance has multiple WAN ports and also USB connectivity, so you could for example extend it with external LTE connectivity.

Rotem then moved on to talk about different abstractions for different types of audiences, e.g. a developer just wants his application to be “connected”, whereas the network design team is concerned with connection points, bandwidth consumption, etc. To support this, he talked about both vertical abstractions for multi-tenancy and horizontal abstractions for different audience types within each tenant. The idea is to expose only those abstractions that a particular audience is interested in (until we break all IT silos and everyone is a unicorn, of course).

[Slide: vertical and horizontal abstractions]

The idea of abstractions is not to have each team work in their own silo, but rather to have the system interpret a certain audience’s set of abstractions and translate them into the complete end-to-end policy setup. For example, the application developer uses the Application Designer to create a new app: he only defines the application concepts (e.g. front-end, middle-tier, database) and the system translates those into subnets, ACLs, etc.

Another example is the VPN Designer, where you can bring up a new site by linking the device object in the GUI to a location. Then, depending on the authentication method, the only thing that needs to happen is physically connecting the WAN and LAN ports on the device at the branch; the device will “bootstrap” automatically and pick up its configuration. The forwarding engine in the device is multi-tenant capable: each subnet is created as an L2 EVPN construct (remember that the idea of VNS is to be independent of the WAN technology) and the access ports of the box are piped into those.

Application Flexibility at the Branch

The VNS does not only provide network connectivity but also allows you to run containerized workloads on top of it (it’s an Intel Atom based system). The main idea is to automate the attachment of existing container-based applications to the branch network. One example would be to run external network operations tools to perform local logging of LAN elements and then set up a single encrypted connection over the WAN, or to do local auditing of running configurations. Another would be to run a user simulation at the branch before go-live to validate the user experience and adjust as needed.

[Slide: application flexibility at the branch]

Theoretically you can run any containerized app on VNS (pull it from Docker Hub, provided you have enough resources to run it); Nuage takes care of the multi-tenant network aspects of running multiple containers on a single host.

Boundary-less Wide Area Networking

Next up was Hussein Khazaal, solution director, talking about extending connectivity from the data center to the branch and to the public cloud, making a VPC your own personal branch office with consistent business policies across all of them.

[Slide: boundary-less wide area networking across data center, branch, and public cloud]

The way it works is that you get (in this case) a virtual NSG from Nuage, which comes as an Amazon AMI (Amazon Machine Image), and deploy it like any other instance in your VPC. You can then use the NSG-v in Amazon just like any other NSG from your Application Designer / VPN Designer. The classic use case would be to load-balance your front-end (web) application between your data center and Amazon in times of increased load: essentially cloud bursting made real. If you wanted to do something like this without Nuage, you would need to figure out how to translate your business policies to the available constructs at each public cloud provider; with the NSG-v, it consumes the centralized policies from the VSD just like any other forwarding engine.

*The Nuage Compendium page is under construction, and will be expanded over time.