NSX-T Manager Clustering

In NSX-T 2.4 the NSX-T Manager is a converged appliance where the Policy, Management and Control roles are available on each NSX-T Manager node, and three NSX-T Manager nodes form a cluster. The NSX-T Managers in the cluster also share a distributed persistent datastore where the desired state is stored. This brings the benefit of having all management services available across the cluster, simplifies the install and upgrade process, and makes operations easier with fewer systems to monitor and maintain.

NSX-T Datacenter Components

Management Plane

Starting in NSX-T 2.4 the Management and Control Planes are converged into one three-node cluster for scale and high availability. The user interacts with the NSX-T Manager using the Graphical User Interface (GUI), the REST API framework or a Cloud Management Platform (CMP).
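
As a small, hedged illustration of the API interaction (the manager FQDN, credentials and the exact endpoint shown here are assumptions for a lab setup; verify them against the NSX-T API guide for your version), a simple read of the cluster status could look like this:

```python
# Minimal sketch of talking to the NSX-T Manager REST API.
# Hostname, credentials and endpoint are placeholders for a lab setup;
# check the exact API path in the NSX-T API guide for your version.
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-mgr-01.lab.local"   # assumed lab FQDN

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/cluster/status",    # cluster status endpoint
    auth=HTTPBasicAuth("admin", "VMware1!VMware1!"),
    verify=False,                              # lab only: self-signed certificate
)
resp.raise_for_status()
print(resp.json())
```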

When a user configures, for example, a firewall policy rule, the NSX Manager validates the configuration and stores it persistently on the NSX Manager. The NSX Manager then pushes the published policies to the Central Control Plane (CCP), which in turn pushes them to the data plane.
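
To make this a bit more concrete, the sketch below reads back the Distributed Firewall configuration that the NSX Manager stores persistently. It assumes the same lab manager and credentials as above; the endpoint should be checked against the API guide for your NSX-T version:

```python
# Sketch: list the Distributed Firewall sections stored on the NSX Manager.
# Endpoint, host and credentials are assumptions for a lab environment.
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/firewall/sections",
    auth=("admin", "VMware1!VMware1!"),
    verify=False,  # lab only
)
resp.raise_for_status()
for section in resp.json().get("results", []):
    print(section["display_name"], section["id"])
```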

For monitoring and troubleshooting, the NSX-T Manager interacts with the Management Plane Agent (MPA) on the hosts to get Distributed Firewall (DFW) status as well as rule and flow statistics.
The NSX-T Management Plane also collects the VM inventory in order to maintain an inventory of all virtualised workloads (VMs) hosted on NSX-T transport nodes. This inventory is dynamically collected and updated from all NSX-T transport nodes.
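
The collected inventory can also be consumed through the API. As a rough sketch (host, credentials and endpoint are again lab assumptions to verify against the API guide), the virtual machines known to the Management Plane could be listed like this:

```python
# Sketch: list the VM inventory collected by the NSX-T Management Plane.
# Host, credentials and endpoint are assumptions; check the API guide.
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/fabric/virtual-machines",
    auth=("admin", "VMware1!VMware1!"),
    verify=False,  # lab only
)
resp.raise_for_status()
for vm in resp.json().get("results", []):
    print(vm["display_name"], vm.get("power_state"))
```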

Control Plane

The NSX-T Control Plane consists of two components: the Central Control Plane (CCP), which includes the NSX-T Controller cluster, and the Local Control Plane (LCP), which consists of the userspace modules on all of the NSX-T transport nodes that interact with the CCP to exchange configuration and state information.

When a user configures a DFW policy, the NSX-T Controllers (CCP) receive the policy rules pushed by the NSX-T Manager. If the policy contains objects such as Logical Switches or Security Groups, the CCP converts the objects used in the rule into IP addresses using the object-to-IP mapping table, which is maintained by the Controller and updated via the IP Discovery mechanism. Once the policy is converted into a set of rules based on actual IP addresses, the CCP will then push the rules to the Local Control Plane (LCP) on all NSX-T transport nodes (hypervisors).
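
The conversion the CCP performs can be pictured with the purely illustrative sketch below; the mapping table and rule format are hypothetical and only meant to show how a group-based rule is expanded into IP-based rules:

```python
# Purely conceptual sketch of the object-to-IP conversion done by the CCP.
# The mapping table and rule format are hypothetical, not the real NSX-T data model.
object_to_ip = {
    "SecurityGroup:Web": ["10.0.1.11", "10.0.1.12"],
    "SecurityGroup:App": ["10.0.2.21"],
}

rule = {"source": "SecurityGroup:Web", "destination": "SecurityGroup:App",
        "service": "TCP/8443", "action": "ALLOW"}

# Expand the group-based rule into rules based on actual IP addresses.
expanded = [
    {"source": src, "destination": dst,
     "service": rule["service"], "action": rule["action"]}
    for src in object_to_ip[rule["source"]]
    for dst in object_to_ip[rule["destination"]]
]

for r in expanded:
    print(r)
```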

Data Plane

The NSX-T transport nodes make up the distributed data plane, with DFW enforcement done at the hypervisor kernel level. Each transport node, at any given time, connects to only one of the CCP controllers, based on mastership for that node. On each transport node, once the Local Control Plane (LCP) has received the policy configuration from the CCP, it pushes the firewall policy and rules to the data plane filters (in kernel) for each of the virtual NICs. Taking into account the "Applied To" field in the rule or section, which defines the scope of enforcement, the LCP makes sure only the relevant DFW rules are programmed on the relevant virtual NICs, instead of programming every rule everywhere, which would not be an optimal use of hypervisor resources.
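
The effect of the "Applied To" scope can be sketched in the same illustrative fashion; the structures below are hypothetical and only show why each vNIC ends up with just the rules relevant to it:

```python
# Conceptual sketch of "Applied To" scoping: each vNIC only gets the rules
# whose scope includes it. Names and structures are hypothetical.
rules = [
    {"id": 1, "applied_to": {"vnic-web-01", "vnic-web-02"}, "action": "ALLOW"},
    {"id": 2, "applied_to": {"vnic-db-01"}, "action": "DROP"},
]

def rules_for_vnic(vnic, all_rules):
    """Return only the rules whose Applied To scope contains this vNIC."""
    return [r for r in all_rules if vnic in r["applied_to"]]

print(rules_for_vnic("vnic-web-01", rules))  # rule 1 only
print(rules_for_vnic("vnic-db-01", rules))   # rule 2 only
```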

NSX-T Manager Clustering with Virtual IP

Starting with NSX-T 2.4 it is possible to create an NSX-T cluster consisting of three NSX-T Managers using a Virtual IP, which provides a highly available Management Plane for the GUI, API and CMP platform. This feature reduces the likelihood of operational failures with NSX and provides API and GUI clients with a single highly available VIP. A requirement here is that all NSX-T Managers must be in the same Layer 2 network and subnet.

NSX-T Clustering using a Load Balancer

With NSX-T 2.4 it is also possible to create a highly available NSX-T cluster using an external load balancer, which can balance traffic from GUI and API clients and CMP platforms across the NSX-T Managers. In this configuration the NSX-T Managers can be in different subnets.
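
As a rough sketch of what the external load balancer needs to do (the node names and the probed URL are assumptions; use the health monitor recommended in the VMware documentation for your load balancer), each NSX-T Manager node is probed individually and traffic is only sent to the healthy ones:

```python
# Sketch of a health probe across the three NSX-T Manager nodes behind an
# external load balancer. FQDNs, credentials and the probed URL are assumptions.
import requests

MANAGERS = [
    "https://nsx-mgr-01.lab.local",
    "https://nsx-mgr-02.lab.local",
    "https://nsx-mgr-03.lab.local",
]

def healthy(manager):
    """Consider a node healthy if its API answers successfully."""
    try:
        r = requests.get(f"{manager}/api/v1/cluster/status",
                         auth=("admin", "VMware1!VMware1!"),
                         verify=False, timeout=5)  # lab only
        return r.status_code == 200
    except requests.RequestException:
        return False

print([m for m in MANAGERS if healthy(m)])
```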

Configuration of NSX-T Clustering with Virtual IP

Configuration of NSX-T clustering with a Virtual IP is very easy. First go to the Overview section and click Edit next to Virtual IP: Not Set.

Enter the Virtual IP and click Save.

The new Virtual IP has been assigned.

The new Virtual IP is now displayed and you can see which NSX-T Manager it is currently associated with.
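
The same Virtual IP can also be set through the API instead of the GUI. The sketch below uses what I believe is the cluster virtual IP endpoint, but the manager address, credentials and the VIP itself are assumptions, and the call should be verified against the API guide for your NSX-T version:

```python
# Sketch: assign the cluster Virtual IP via the API instead of the GUI.
# Manager address, credentials and the VIP itself are lab assumptions.
import requests

NSX_MANAGER = "https://nsx-mgr-01.lab.local"
VIP = "192.168.110.200"  # assumed Virtual IP in the managers' subnet

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/cluster/api-virtual-ip"
    f"?action=set_virtual_ip&ip_address={VIP}",
    auth=("admin", "VMware1!VMware1!"),
    verify=False,  # lab only
)
resp.raise_for_status()

# Read the VIP back to confirm it has been assigned.
print(requests.get(f"{NSX_MANAGER}/api/v1/cluster/api-virtual-ip",
                   auth=("admin", "VMware1!VMware1!"), verify=False).json())
```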

Summary

With NSX-T 2.4, where the Management and Control Planes are consolidated into a cluster of three nodes, we have two design options for creating a highly available NSX-T platform for GUI, API or customer CMP usage, which reduces operational complexity with fewer systems to maintain.
