Category Archives: Network Virtualization with VMware NSX

Syslog Configuration for NSX-T Components using API

In this post I'll quickly walk through how to configure the NSX-T components to forward log events to vRealize Log Insight using the API.

Once you have VMware vRealize Log Insight (vRLI) designed and deployed, you can use API calls to configure your NSX-T components to forward logs to the log management servers. In this case I am going to push the vRLI VIP FQDN to the NSX-T Managers and NSX-T Edges through API calls.

  • nsx01a
  • nsx01b
  • nsx01c
1. Open Postman and configure Authorization –> select Basic Auth under TYPE and provide the NSX-T Manager username and password to allow Postman to talk to the NSX-T Managers.

2. Next, select Headers and set KEY to Content-Type and VALUE to application/json.

3. Next, select Body –> raw –> and provide the syslog server, protocol, port, and log level you want to send from the NSX-T Managers to Log Insight (a sample body is shown below).

4. Next, select POST, enter https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters, and click Send.

In the lower Body section, it will display content confirming that the syslog settings have been successfully pushed to the NSX-T Manager.

5. Repeat this for the other NSX-T Manager nodes, nsx01b and nsx01c.

POST – https://xx-m01-nsx01b.xxxxx.com/api/v1/node/services/syslog/exporters

POST – https://xx-m01-nsx01c.xxxxx.com/api/v1/node/services/syslog/exporters

6. Now it's time to verify. Clear the text from the Body section and send a GET request to retrieve the configuration data from the NSX-T Managers.

GET – https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters

In the lower Body section, it retrieves the configured syslog settings from the NSX-T Manager.

Configure the NSX-T Edges to Forward Log Events to vRealize Log Insight 

Now we will configure the NSX-T Edge nodes to send audit logs and system events to vRealize Log Insight.

To configure the NSX-T Edge nodes, first retrieve the ID of each edge transport node using the NSX-T Manager user interface. Then use Postman to configure log forwarding for all edge transport nodes by sending a POST request to each NSX-T Edge request URL.

  1. Log in to NSX-T Manager to retrieve the ID of each edge node.

  • nsxedge-01 — 16420ffa-d159-41a2-9f02-b4ac30d32636
  • nsxedge-02 — 39fe9748-c6ae-4a32-9023-ad610ea87249

2. Here is the syntax for an edge node – POST – https://xx-m01-nsx01.xxxxx.com/api/v1/transport-nodes/16420ffa-d159-41a2-9f02-b4ac30d32636/node/services/syslog/exporters – then click Send.

3. Now it's time to verify. Clear the text from the Body section and send a GET request to the same URL to retrieve the configuration data from the NSX-T Edge node.

Repeat this for the rest of the NSX-T Edge nodes (a scripted sketch follows below).

That’s all.  Hope you enjoyed reading this post. Feel free to share 🙂

 

How to Configure centralized logging for the NSX Manager 6.x.x, NSX Controllers and NSX Edge devices

My previous article discussed VMware NSX Manager 6.x.x Backup and Restore, and in this article I am going to discuss how to configure centralized logging for NSX Manager 6.x.x, NSX Controllers, and NSX Edge devices.

In a production environment it is always recommended to have a remote log collector configured, so that NSX Manager 6.x.x, the NSX Controllers, and the NSX Edge devices send all audit logs and system events to the syslog server. This is handy for troubleshooting and for producing a final RCA in the event of an issue.

Let's start with configuring the syslog server for NSX Manager:-

1. Log in to the VMware NSX Manager Virtual Appliance with the admin account.
2. Go to Manage –> General –> click Edit in the Syslog Server section.

3. Provide the Syslog Server, Port, and Protocol details in the syslog server window and click OK to test and save the settings.

4. Once saved, it will show the settings like below.

This is how we can configure the syslog server for NSX Manager.


Next is how to configure the syslog server for the VMware NSX Controllers:-

For NSX Controllers, the only supported method to configure a syslog server is through the NSX API, and using the REST API we need to push the syslog server details to each NSX Controller one by one.

Before we go ahead and push the syslog server settings to the NSX Controllers through the REST API, we need to add a REST API client to the browser. You can search for a REST API client extension for Chrome or Mozilla and add it to the browser.


Once you are done adding the REST API plug-in to your browser, there are a couple of things to remember.

REST API requests require an Authentication header and Content-Type set to application/xml to send the HTTP body.


Now we are ready to send the request body to configure Syslog Server for NSX controllers.

Open the REST client and set the request body to configure syslog for the NSX for vSphere Controllers. Make sure you have selected POST as the method and https://<NSX Manager IP>/api/2.0/vdn/controller/{controller-id}/syslog as the URL, where controller-id is the ID of the NSX Controller and can be found on the NSX Installation page.

HTTP Request body has to be this:

<controllerSyslogServer>
<syslogServer>x.x.x.x</syslogServer>
<port>514</port>
<protocol>UDP</protocol>
<level>INFO</level>
</controllerSyslogServer>


This is how we can configure the syslog server on the NSX Controllers. If you want to DELETE the syslog exporter, use the request below:-

Method :- DELETE and URL:- https://<NSX-Manager-IP>/api/2.0/vdn/controller/{controller-ID}/syslog.


How to configure Syslog Server for Distributed Logical Router.

1. Log in to vCenter Server using the vSphere Web Client and choose Networking and Security –> NSX Edges –> and double-click the Logical Router.

2. Under Manage –> Settings –> Configuration, click Change under Syslog Servers.

3. Enter the Syslog Server and Protocol details in the Edit Syslog Server Configuration page and click OK.

4. Now we can see syslog is configured and ready to send all the logs to the remote server.


How to configure Syslog Server for NSX Edge.

1. Log in to vCenter Server using the vSphere Web Client and choose Networking and Security –> NSX Edges –> and double-click the NSX Edge.

2. Under Manage –> Settings –> Configuration, click Change under Syslog Servers.

3. Enter the Syslog Server and Protocol details and Click OK.


That’s All. This is how you can configure Syslog Server for NSX Manager, NSX Controllers and NSX Edges.

Thank you and Happy learning 🙂

 

VMware NSX Manager 6.x.x Backup and Restore

In this post I am going to discuss how to configure backup for NSX Manager 6.x.x, schedule NSX Manager backups, take an on-demand backup, and restore the NSX Manager configuration from a backup.

We can back up and restore NSX Manager data, which includes the system configuration, events, and audit log tables. Backups are saved to a remote location that must be accessible by the NSX Manager.

We can back up NSX Manager data on demand, or we can schedule backups as per plan.

Let's start with how to configure the remote server that stores the NSX Manager backup.

  1. Log in to the VMware NSX Manager Virtual Appliance with the admin account.

2. Under the NSX Manager Virtual Appliance Management, click Backup & Restore.

3. To store the NSX Manager backup we can use an FTP server with the FTP or SFTP transfer protocol. To configure the FTP server settings, click Change next to FTP Server Settings.

4. The Backup Location window will open up:

  • Enter the IP/host name of the FTP server.
  • Choose the transfer protocol, either FTP or SFTP, based on what the destination server supports.
  • Enter the port number for the transfer protocol.
  • Enter the user name and password to connect to the backup server.
  • Enter the Backup Directory where you want to store the backup.
  • Enter the Filename Prefix; the prefix will be added to every NSX Manager backup file.
  • Type the Pass Phrase to secure the backup.
  • Click OK to test the connection between the NSX Manager and the FTP server and save the settings.


5. Once the connection test is done, the settings are saved and shown as below.

6. After configuring the FTP server settings, we can schedule the backup. Click Change next to Scheduling. We can schedule backups on an hourly, daily, or weekly basis. Choose your option as per plan (daily is recommended), and click Schedule to save the settings.


7. The backup will run as per the schedule, and you can see an entry for every day.


8. We can also perform an on-demand backup of NSX Manager. For an on-demand backup, click Backup next to Backup History.

9. The Create Backup window will open up to confirm that you want to start a backup now. Click Start to begin the backup immediately.

10. It will take a few minutes to complete the backup process.

11. You can see the new backup entry in Backup History.

Now we will discuss how to restore from a backup.

We can restore a backup only on a freshly deployed NSX Manager appliance. So let's assume that the current NSX Manager has an issue and cannot be recovered.

In this scenario we can deploy a new NSX Manager Virtual Appliance and configure the FTP server settings to identify the location of the backup to be restored. Select the backup from the backup history, click Restore, and click OK to confirm.

That's it. This is how we can configure a remote server to store the NSX Manager backup, schedule NSX Manager backups, perform an on-demand backup, and restore from a backup.

Thank you and Keep spreading the knowledge  🙂

 

 

VMware Released NSX for vSphere 6.2.3

VMware released NSX for vSphere 6.2.3 last month with many changes, and it also includes a number of bug fixes over the previous version of NSX.

 

Here are the changes introduced in NSX for vSphere 6.2.3:-

  • Logical Switching and Routing
    • NSX Hardware Layer 2 Gateway Integration: expands physical connectivity options by integrating 3rd-party hardware gateway switches into the NSX logical network
    • New VXLAN Port 4789 in NSX 6.2.3 and later: Before version 6.2.3, the default VXLAN UDP port number was 8472. See the NSX Upgrade Guide for details.
  • Networking and Edge Services
    • New Edge DHCP Options: DHCP Option 121 supports static route option, which is used for DHCP server to publish static routes to DHCP client; DHCP Options 66, 67, 150 supports DHCP options for PXE Boot; and DHCP Option 26 supports configuration of DHCP client network interface MTU by DHCP server.
    • Increase in DHCP Pool, static binding limits: The following are the new limit numbers for various form factors: Compact: 2048; Large: 4096; Quad large: 4096; and X-large: 8192.
    • Edge Firewall adds SYN flood protection: Avoid service disruptions by enabling SYN flood protection for transit traffic. Feature is disabled by default, use the NSX REST API to enable it.
    • NSX Edge — On Demand Failover: Enables users to initiate on-demand failover when needed.
    • NSX Edge — Resource Reservation: Reserves CPU/Memory for NSX Edge during creation. You can change the default CPU and memory resource reservation percentages using this API. The CPU/Memory percentage can be set to 0 percent each to disable resource reservation.

      PUT https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration
                  <tuningConfiguration>
                     <lockUpdatesOnEdge>false</lockUpdatesOnEdge>
                     <aggregatePublishing>true</aggregatePublishing>
                     <edgeVMHealthCheckIntervalInMin>0</edgeVMHealthCheckIntervalInMin>
                     <healthCheckCommandTimeoutInMs>120000</healthCheckCommandTimeoutInMs>
                     <maxParallelVixCallsForHealthCheck>25</maxParallelVixCallsForHealthCheck>
                     <publishingTimeoutInMs>1200000</publishingTimeoutInMs>
                     <edgeVCpuReservationPercentage>0</edgeVCpuReservationPercentage>
                     <edgeMemoryReservationPercentage>0</edgeMemoryReservationPercentage>
                     <megaHertzPerVCpu>1000</megaHertzPerVCpu>
                  </tuningConfiguration>
      
    • Change in NSX Edge Upgrade Behavior: Replacement NSX Edge VMs are deployed before upgrade or redeploy. The host must have sufficient resources for four NSX Edge VMs during the upgrade or redeploy of an Edge HA pair. Default value for TCP connection timeout is changed to 21600 seconds from the previous value of 3600 seconds.
    • Cross VC NSX — Universal Distributed Logical Router (DLR) Upgrade: Auto upgrade of Universal DLR on secondary NSX Manager, once upgraded on primary NSX Manager.
    • Flexible SNAT / DNAT rule creation: vnicId no longer needed as an input parameter; removed requirement that the DNAT address must be the address of an NSX Edge VNIC.
    • NSX Edge VM (ESG, DLR) now shows both Live Location and Desired Location. NSX Manager and NSX APIs including GET api/4.0/edges//appliances now return configuredResourcePool and configuredDataStore in addition to current location.
    • NSX Manager exposes the ESXi hostname on which the 3rd-party VM Series firewall SVM is running to improve operational manageability in large-scale environments.
    • NAT rule now can be applied to a VNIC interface and not only an IP address.

For complete details, please refer to the release notes:- http://pubs.vmware.com/Release_Notes/en/nsx/6.2.3/releasenotes_nsx_vsphere_623.html

Thank you and Keep sharing 🙂

NSX Troubleshooting – VMs out of Network on VNI 5XXX

Currently I am working for a customer running Network Virtualization (NSX) in their SDDC environment. A few weeks ago we faced an issue where multiple VMs were out of network in one of the compute clusters, so I wanted to share the experience in the hope it will be useful for folks working on NSX. The customer is running NSX 6.1.1 with multiple VNIs managing networks for multiple environments (e.g. Prod, DR, DEV, QA, Test, etc.).

Here are the steps:-

  1. After receiving the issue, we tried to ping random VMs from the list, and the VMs were not reachable.
  2. The next step was to find out the VNI number for those VMs and see if all were part of the same VNI. And yes, those VMs were part of the same VNI (e.g. 5XXX).
  3. Once we knew the VNI number, the next step was to find out whether all VMs connected to VNI 5XXX were impacted or only a few.
  4. From step 3 we came to know that only a few VMs were impacted, not all. After drilling down we found that the impacted VMs were running on one of the ESXi hosts in the cluster, and the VNI was working fine on the other hosts in the cluster.
  5. To bring the VMs online we moved them to another host; after migrating, the VMs were reachable and users were able to connect to the applications.
  6. Next was to find the Root Cause Analysis (RCA) for why VMs connected to VNI 5XXX on ESXi host XXXXXXXXXX lost network.
  7. SSH (PuTTY) to the ESXi host and run the following command to check the VNI status on the host:- net-vdl2 -l. You can see in the output screen below that VXLAN network 5002 is DOWN, and all impacted VMs were part of it.

9. To fix the issue we need to restart the netcpa daemon on the host. Here is the list of commands to stop, start, and check the status of the netcpa daemon.

1) Stop the netcpa daemon by running –> /etc/init.d/netcpad stop

2) Start the netcpa daemon by running –> /etc/init.d/netcpad start

3) Check the status of the service by running –> /etc/init.d/netcpad status

10. After starting the netcpa daemon, check the VNI status by running the command:- net-vdl2 -l. Now you can see that VXLAN 5002 is UP.

11. The next step was to move a few VMs from VNI 5002 onto this host and check the connectivity status of the VMs and applications. All were perfectly fine after moving them onto this host.

Note:- This issue has been addressed in NSX version 6.1.4e. If you are running NSX 6.1.4e, you may not hit this issue, as the Controller will monitor the netcpad daemon and start it if it fails on any of the hosts.

That’s it ….SHARE & SPREAD THE KNOWLEDGE!!

NSX 6.2 released last week with many enhancements and cool new features

A new surprise came last week with the NSX 6.2 release. The expectation was that NSX 6.2 would be released the following week at VMworld 2015 US, but VMware released the much-awaited NSX 6.2 a week before VMworld 2015 US.

NSX vSphere 6.2 includes the following new and changed features:

  • Cross vCenter Networking and Security
    • NSX 6.2 with vSphere 6.0 supports Cross vCenter NSX: where logical switches (LS), distributed logical routers (DLR) and distributed firewalls (DFW) can be deployed across multiple vCenters, thereby enabling logical networking and security for applications with workloads (VMs) that span multiple vCenters or multiple physical locations.
    • Consistent firewall policy across multiple vCenters: Firewall Rule Sections in NSX can now be marked as “Universal” whereby the rules defined in these sections get replicated across multiple NSX managers. This simplifies the workflows involving defining consistent firewall policy spanning multiple NSX installations
    • Cross vCenter vMotion with DFW: Virtual Machines that have policies defined in the “Universal” sections can be moved across hosts that belong to different vCenters with consistent security policy enforcement.
    • Universal Security Groups: Security Groups in NSX 6.2 that are based on IP Address, IP Set, MAC Address and MAC Set can now be used in Universal rules whereby the groups and group memberships are synced up across multiple NSX managers. This improves the consistency in object group definitions across multiple NSX managers, and enables consistent policy enforcement
    • Universal Logical Switch (ULS): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of logical switches that can span multiple vCenters, allowing the network administrator to create a contiguous L2 domain for an application or tenant.
    • Universal Distributed Logical Router (UDLR): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of distributed logical routers that can span multiple vCenters. The universal distributed logical routers enable routing across the universal logical switches described earlier. In addition, NSX UDLR is capable of localized north-south routing based on the physical location of the workload
  • Operations and Troubleshooting Enhancements
    • New traceflow troubleshooting tool: Traceflow is a troubleshooting tool that helps identify if the problem is in the virtual or physical network. It provides the ability to trace a packet from source to destination and helps observe how that packet passes through the various network functions in the virtual network.
    • Flow monitoring and IPFIX separation: In NSX 6.1.x, NSX supported IPFIX reporting, but IPFIX reporting could be enabled only if flow reporting to NSX Manager was also enabled. Starting in NSX 6.2.0, these features are decoupled. In NSX 6.2.0 and later, you can enable IPFIX independent of flow monitoring on NSX Manager.
    • New CLI monitoring and troubleshooting commands in 6.2: See the knowledge base article for more information.
    • Central CLI: Central CLI reduces troubleshooting time for distributed network functions. Commands are run from the NSX Edge command line and retrieve information from controllers, hosts, and the NSX Manager. This allows you to quickly access and compare information from multiple sources. The central CLI provides information about logical switches, logical routers, distributed firewall and edges.
    • CLI ping command adds configurable packet size and do-not-fragment flag: Starting in NSX 6.2.0, the NSX CLI ‘ping’ command offers options to specify the data packet size (not including the ICMP header) and to set the do-not-fragment flag. See the NSX CLI Reference for details.
    • Show health of the communication channels: NSX 6.2.0 adds the ability to monitor communication channel health. The channel health status between NSX Manager and the firewall agent, between NSX Manager and the control plane agent, and between host and the NSX Controller can be seen from the NSX Manager UI. In addition, this feature detects when configuration messages from the NSX Manager have been lost before being applied to a host, and it instructs the host to reload its NSX configuration when such message failures occur.
    • Standalone Edge L2 VPN client CLI: Prior to NSX 6.2, a standalone NSX Edge L2 VPN client could be configured only through OVF parameters. Commands specific to standalone NSX Edge have been added to allow configuration using the command line interface. The OVF is now used for initial configuration only.
  • Logical Networking and Routing
    • L2 Bridging Interoperability with Distributed Logical Router: With VMware NSX for vSphere 6.2, L2 bridging can now participate in distributed logical routing. The VXLAN network to which the bridge instance is connected, will be used to connect the routing instance and the bridge instance together.
    • Support of /31 prefixes on ESG and DLR interfaces per RFC 3021
    • Support of relayed DHCP request on the ESG DHCP server
    • Ability to keep VLAN tags over VXLAN
    • Exact Match for Redistribution Filters: The redistribution filter has same matching algorithm as ACL, so exact prefix match by default (except if le or ge options are used).
    • Support of administrative distance for static route
    • Ability to enable/disable uRPF check per interface on Edge
    • Display AS path in CLI command show ip bgp
    • HA interface exclusion from redistribution into routing protocols on the DLR control VM
    • Distributed logical router (DLR) force-sync avoids data loss for east-west routing traffic across the DLR.
    • View active edge in HA pair: In the NSX 6.2 web client, you can find out if an NSX Edge appliance is the active or backup in an HA pair.
    • REST API supports reverse path filter(rp_filter) on Edge: Using the system control REST API, rp_filter sysctl can be configured, and is not exposed on REST API for vnic interfaces and sub-interfaces. See the NSX API Guide for more information.
    • Behavior of the IP prefix ‘GE’ and IP prefix ‘LE’ BGP route filters: In NSX 6.2, the following enhancements have been made to BGP route filters:
      • LE / GE keywords not allowed: For the null route network address (defined as ANY or in CIDR format 0.0.0.0/0), less-than-or-equal-to (LE) and greater-than-or-equal-to (GE) keywords are no longer allowed. In previous releases, these keywords were allowed.
      • LE and GE values in the range 0-7 are now treated as valid. In previous releases, this range was not valid.
      • For a given route prefix, you can no longer specify a GE value that is greater than the specified LE value.
  • Networking and Edge Services
    • The management interface of the DLR has been renamed to HA interface. This has been done to highlight the fact that the HA keepalives travel through this interface and that interruptions in traffic on this interface can result in a split-brain condition.
    • Load balancer health monitoring improvements: Delivers granular health monitoring, that reports information on failure, keeps track of last health check and status change, and reports failure reasons.
    • Support VIP and pool port range: Enables load balancer support for applications that require a range of ports.
    • Increased maximum number of virtual IP addresses (VIPs): VIP support rises to 1024.
  • Security Service Enhancements
    • New IP address discovery mechanisms for VMs: Authoritative enforcement of security policies based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. In NSX 6.1 and earlier, IP address discovery for each VM relied on the presence of VMware Tools (vmtools) on that VM or the manual authorization of the IP address for that VM. NSX 6.2 introduces the option to discover the VM’s IP address using DHCP snooping or ARP snooping. These new discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.
  • Solution Interoperability
    • Support for vSphere 6.0 Platform Services Controller topologies: NSX now supports external Platform Services Controllers (PSC), in addition to the already supported embedded PSC configurations.
    • Support for vRealize Orchestrator Plug-in for NSX 1.0.2: With NSX 6.2 release, NSX-vRO plug-in v1.0.2 is introduced in vRealize Automation (vRA).

For more details please refer to VMware NSX 6.2 for vSphere Documentation Center :- http://pubs.vmware.com/NSX-62/index.jsp

Thank you 🙂

Network Virtualization with VMware NSX – Part 8

Let's get back into NSX mode again 🙂 In my last blog, Network Virtualization with VMware NSX – Part 7, we discussed Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway. Here in Network Virtualization with VMware NSX – Part 8 we will discuss High Availability of the NSX Edge.

High Availability

High Availability (HA) ensures that the NSX Edge appliance is always available by installing a pair of Edge appliances (active and standby) on your virtualized infrastructure. We can enable HA either when installing the NSX Edge appliance or afterwards.

The primary NSX Edge appliance is in the active state and the secondary appliance is in the standby state. NSX Edge replicates the configuration of the primary appliance to the standby appliance. VMware recommends creating the primary and secondary appliances on separate datastores. If you create the primary and secondary appliances on the same datastore, the datastore must be shared across all hosts in the cluster for the HA appliance pair to be deployed on different ESX hosts.

All NSX Edge services run on the active appliance. The primary appliance maintains a heartbeat with the standby appliance and sends service updates through an internal interface. If a heartbeat is not received from the primary appliance within the specified time (the default value is 15 seconds), the primary appliance is declared dead. The standby appliance moves to the active state, takes over the interface configuration of the primary appliance, and starts the NSX Edge services that were running on the primary appliance. After switchover, load balancer and VPN services need to re-establish TCP connections with the NSX Edge, so those services are disrupted for a short while. Logical switch connections and firewall sessions are synced between the primary and standby appliances, so there is no service disruption for them during switchover.

If the NSX Edge appliance fails and a bad state is reported, high availability force-synchronizes the failed appliance to revive it. When the appliance is revived, it takes on the configuration of the now-active appliance and stays in a standby state. If the NSX Edge appliance is dead, you must delete the appliance and add a new one.

NSX Edge ensures that the two HA NSX Edge virtual machines are not on the same ESX host even after you use DRS and vMotion (unless you manually vMotion them to the same host).

Now let's verify the HA settings and configure High Availability for the NSX Edge:-

1. Log in to the Web Client –> Home –> Networking and Security –> NSX Edges –> double-click either the Logical Router or the NSX Edge Services Router.

2. It will open up the selected device. Click Manage –> Settings –> Configuration, and under HA Configuration you can see the HA Status is DISABLED. You can check the Logical Router the same way.

3. The same can be verified from the Management Cluster where we have deployed the NSX Edge appliances. You can see in the screenshot below that only one instance of the Edge Services Router (Edge Services Router-0) and one instance of the Logical Router (Logical-Router-0) are running.

4. Now let's enable HA for the NSX Edge. Click Manage –> Settings –> Configuration, and under HA Configuration click Change.

5. The Change HA Configuration window will open up. Set HA Status –> Enable, select the vNIC, enter the Declare Dead Time (the default is 15 seconds), enter the management IPs for the heartbeat on both nodes, and click OK.

6. It will take a few seconds, and then the HA Status under HA Configuration shows Enabled.

7. Let's go to the Management Cluster to see the number of nodes. Now you can see that there are two instances up and running: Edge Services Router-0 and Edge Services Router-1.

8. That's it. Now the NSX Edge Services Router is running in HA mode; if the active node fails, the standby node will take over after 15 seconds. The same way, we can enable HA for the Logical Router. I have added screenshots for the Logical Router.


9. Once you have enabled HA for the NSX Edge, you can PuTTY to the NSX Edge and verify the active node and standby node by running the show service highavailability command. Let me connect and run this command to verify.

You can see in the result below that this node (vshield-edge-4-0) is active and vshield-edge-4-1 is the peer host, meaning the standby node.

10. Now let's shut down vshield-edge-4-0 and run the show service highavailability command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is unreachable.

11. Now let's power on vshield-edge-4-0 and run the command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is the peer host, meaning the standby node.

That’s It !! This is how we can enable HA and test failover for NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8

Network Virtualization with VMware NSX – Part 7

In my last blog, Network Virtualization with VMware NSX – Part 6, we discussed static and dynamic routing. Here in Network Virtualization with VMware NSX – Part 7 we will discuss Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway.

Network Address Translation (NAT)

Network Address Translation (NAT) is the process where a network device assigns a public address to a computer (or group of computers) inside a private network. The main use of NAT is to limit the number of public IP addresses an organization or company must use, for both economy and security purposes.

Three blocks of IP addresses are reserved for private use and these Private IP addresses cannot be advertised in the public Internet.

10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255.

The private addressing scheme works well for computers that only have to access resources inside the network, like workstations needing access to file servers and printers. Routers inside the private network can route traffic between private addresses with no trouble. However, to access resources outside the network, like the Internet, these computers have to have a public address in order for responses to their requests to return to them. This is where NAT comes into play.

Another example is the public cloud, where multiple tenants run their workloads with private IP address ranges. Hosts assigned private IP addresses cannot communicate with other hosts through the Internet. The solution to this problem is to use network address translation (NAT) with private addressing.

NSX Edge provides a network address translation (NAT) service to assign a public address to a computer or group of computers in a private network. The NSX Edge service supports two types of NAT: SNAT and DNAT.

Source NAT (SNAT) is used to translate a private internal IP address into a public IP address for outbound traffic. The picture below depicts the NSX Edge gateway translating the Test-Network addresses 192.168.1.2 through 192.168.1.4 to 10.20.181.171. This technique is called masquerading, where multiple private IP addresses are translated into a single host IP address.

Destination NAT (DNAT) is commonly used to publish a service located in a private network on a publicly accessible IP address. The picture below depicts the NSX Edge NAT publishing the web server 192.168.1.2 on an external network as 10.20.181.171. The rule translates the destination IP address in the inbound packet to an internal IP address and forwards the packet.

Configuring Network Address Translation (SNAT and DNAT) on an NSX Edge Services Gateway:-

1. Connect to vCenter Server through the vSphere Web Client –> click the Home tab –> Inventories –> Networking & Security –> NSX Edges –> and double-click the NSX Edge.

2. Under the NSX Edge router, click the Manage tab –> click the NAT tab –> click the green plus sign (+) and select Add DNAT Rule or Add SNAT Rule, whichever you would like to add.

3. In the Add DNAT Rule dialog box, select the Uplink-Interface from the Applied On drop-down menu. Enter the public IP address in the Original IP/Range text box and enter the destination Translated IP/Range. Select the Enable DNAT rule check box and click OK to add the rule.

4. Click Publish Changes to push the rule.

5. Once the rule is published, you can see it has been added to the rule list.

6. To test connectivity using the destination NAT translation, PuTTY to the NSX Edge router with the admin account and run a command to begin capturing packets on the Transit-Interface.

debug packet display interface vNic_1 port_80 or debug packet display interface vNic_0 icmp

The first command captures packets on interface 1 for TCP port 80, and the second command captures packets on interface 0 for the ICMP protocol.

In the same way, we can add SNAT rules for outgoing traffic (see the API sketch below for a scripted alternative).

——————————————————————————————————

NSX Edge Load Balancer

Load Balancing is another network service available within NSX that can be natively enabled on the NSX Edge device. The two main drivers for deploying a load balancer are scaling out an application (load is distributed across multiple backend servers) and improving its high-availability characteristics (servers or applications that fail are automatically removed from the pool).

The NSX Edge load balancer distributes incoming service requests evenly among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource use, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to layer 7.

Note:- The NSX platform can integrate load-balancing services offered by 3rd-party vendors as well.

NSX Edge offers support for two types of deployment: One-arm mode (called proxy mode) and Inline mode (called transparent mode)

One-arm mode (called proxy mode)

The one-arm load balancer has several advantages and disadvantages. The advantages are that the design is simple and can be deployed easily. The main disadvantage is that you must have a load balancer per segment, leading to a large number of load balancers.

So when you design and deploy, you need to weigh both factors and choose the mode that fits your requirement.

Inline mode (called transparent mode)

The advantage of using Inline mode is that the client IP address is preserved because the proxies are not doing source NAT. This design also requires fewer load balancers because a single NSX Edge instance can service multiple segments.
With this configuration, you cannot have a distributed router because the Web servers must point at the NSX Edge instance as the default gateway.

Configuring Load Balancing with NSX Edge Gateway

1. Connect to vCenter Server through vSphere Web Client —> Click Home tab –> Inventories –> Networking & Security –> NSX Edges –> and Double Click NSX Edge.

2.  Under the Manage tab, click Load Balancer. In the load balancer category panel, select Global Configuration.

3. Under Load balancer global configuration, click Edit to open the Edit load balancer global configuration page, check the Enable Load Balancer box, and click OK.

4. Once the load balancer has been enabled, you can see the green tick mark for Enable Load Balancer.

5. Next we need to create application profiles. In the load balancer category panel, select Application Profiles and click the green plus sign (+) to open the New Profile dialog box.

6. In the New Profile dialog box, enter the Name –> select the Protocol Type (HTTPS) –> select the Enable SSL Passthrough check box and click OK.

7. Once the application profile has been created, you can see the profile ID and name in the list.

8. Next we have to Create a Server Pool. I am going to create a round-robin server pool that contains the two Web server virtual machines as members providing HTTPS.

9. In the load balancer category panel, select Pools –> Click the green plus sign (+) to open the New Pool dialog box.

10. In the New Pool dialog box, enter the server pool name in the text box –> select the Algorithm – ROUND-ROBIN –> and below Members, click the green plus sign (+) to open the New Member dialog box and add all the web servers as members.

11. Once all members have been added to the server pool, verify them and click OK.

12. Once the pool has been added, you can see the pool ID and pool name with the configured algorithm in the list.

13. Next we need to create a virtual server. Select Virtual Servers –> click the green plus sign (+) to open the New Virtual Server dialog box.

14. In the New Virtual Server dialog box, select the Enabled box –> enter the virtual server name –> enter the IP address of the interface –> select the protocol (HTTPS) –> the port number for HTTPS (443) –> select the pool name and application profile created earlier and click OK.

15. Once done, you can see the virtual server name with all the configured details in the list.

That’s It 🙂 This is how we can configure NAT and Load balancer using NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8

Network Virtualization with VMware NSX – Part 6

In Network Virtualization with VMware NSX – Part 5 we discussed VXLAN to VLAN layer 2 bridging, configuring and deploying an NSX Edge Gateway, and configuring static routes on the NSX Edge Gateway and on the Distributed Router. Here in Network Virtualization with VMware NSX – Part 6 we will discuss configuring dynamic routing (OSPF) on the Perimeter Gateway and on the Distributed Router.

We discussed and configured static routing on both the Perimeter Gateway and the Distributed Router in Network Virtualization with VMware NSX – Part 5, so before configuring dynamic routing we need to delete those static routes.

Remove Static Routes from Perimeter Gateway and from Distributed Router:-

1. Connect to vCenter Server through the vSphere Web Client –> click the Home tab –> Inventories –> Networking & Security and select NSX Edges.

2. In the edge list, double-click the Perimeter Gateway to open and manage that object. In the middle pane, click the Manage tab –> click Routing and click Static Routes.

3. In the Static Routing list, select the route to delete and click the red X icon. Click Publish Changes for the changes to take effect.

4. Once done, you'll see the selected static route has been deleted from the list.

5. Repeat steps 1-4 to delete the static routes from the Distributed Router.


So now we have deleted the static routes from both the Perimeter Gateway and the Distributed Router.

———————————————————————————————————-

Now we will configure Dynamic Routing (OSPF) on the Perimeter Gateway:-

1. Click the Home tab –> Inventories –> Networking & Security and NSX Edges. Double-click the Perimeter Gateway router to open and manage it.

2. Select Manage –> Routing –> Global Configuration, and under Dynamic Routing Configuration click Edit to edit the dynamic routing configuration.

3. In the Edit Dynamic Routing Configuration dialog box, select the Router ID from the list and click OK.

4. Click Publish Changes to apply the changes.

5. Once the changes are applied, you can see the Router ID and OSPF enabled under Dynamic Routing Configuration.

6. Next we need to configure OSPF. To do so, in the routing category select OSPF, and under Area Definitions verify that Area 0 exists. If Area 0 does not exist, we need to create it.

7. We can add more areas as needed. To add an area, click the green plus sign (+) under Area Definitions.

8. In the New Area Definition dialog box, enter the Area ID and click OK.

9. Click Publish Changes to apply the changes.

10. Once the changes are applied, you can see the Area ID in the Area Definitions list.

11. Once the Area ID has been created, we need to map an interface to the specified area. To map an interface to an area, click the green plus sign (+) under Area to Interface Mapping.

12. Select the required vNIC, enter the Area ID into the Area box, and click OK.

13. Click Publish Changes to apply the changes.

14. Once the changes have been applied, you can see that the interface has been mapped to the specified area.

15. Repeat steps 11-14 to map all the required interfaces to the Area ID.

 

16. Once all the interfaces have been mapped to the required Area ID, we need to redistribute the Perimeter Gateway subnets. To do so, in the routing category select Route Redistribution, and under the Route Redistribution Table click the green plus sign (+) to open the New Redistribution criteria dialog box.

17. In the New Redistribution criteria dialog box, under Allow learning from, select the Connected check box, set the Action to Permit, and click OK.

18. Click Publish Changes to apply the changes.

19. In the Route Redistribution Status at the top of the page, determine whether a green check mark appears next to OSPF. If a green check mark does not appear, click Edit to edit the settings and enable OSPF.

20. In the Change Redistribution settings dialog box, check the OSPF check box and click OK.

21. Once the changes are done, you can see a green check mark appear next to OSPF.

———————————————————————————————

Now we will configure OSPF on the Distributed Router:-

1. Click the Home tab –> Inventories –> Networking & Security and NSX Edges. Double-click the Distributed Router to open and manage it.

2. Select Manage –> Routing –> Global Configuration, and under Dynamic Routing Configuration click Edit to edit the dynamic routing configuration.

3. In the Edit Dynamic Routing Configuration dialog box, select the Router ID from the list and click OK.

4. Click Publish Changes to apply the changes.

5. Once the changes are applied, you can see the Router ID and OSPF enabled under Dynamic Routing Configuration.

6. Next we need to configure OSPF. To do so, in the routing category select OSPF, and on the right side of the OSPF Configuration panel click Edit to open the OSPF Configuration dialog box.

7. In the OSPF Configuration dialog box, select the Enable OSPF check box, enter the Protocol Address and the Forwarding Address, and click OK.

8. We can add more areas as needed. To add an area, click the green plus sign (+) under Area Definitions.

9. In the New Area Definition dialog box, enter the Area ID and click OK. Then click Publish Changes to apply the changes.

10. Once the Area ID has been created, we need to map an interface to the specified area. To map an interface to an area, click the green plus sign (+) under Area to Interface Mapping.

11. Select the required interface, enter the Area ID into the Area box, and click OK. Then click Publish Changes to apply the changes.

12. After the changes have been published, verify that the OSPF Configuration Status is Enabled.

13. Once all the interfaces have been mapped to the required Area ID, we need to redistribute the Distributed Router internal subnets. To do so, in the routing category select Route Redistribution, and under the Route Redistribution Table click the pencil icon to open the Edit Redistribution criteria dialog box. Verify that the settings are configured as: Prefix Name: Any, Learner Protocol: OSPF, Allow Learning From: Connected, and Action: Permit.

If the default route redistribution entry does not appear in the list, we need to create a new route redistribution entry by clicking the green plus sign (+) and configuring the table.

That's it! We are done with configuring dynamic routing (OSPF) on the Perimeter Gateway and on the Distributed Router.

In the next part, Network Virtualization with VMware NSX – Part 7, we will discuss Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway.

Thank You and Keep sharing 🙂

————————————————————————————————————–

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 5

In Network Virtualization with VMware NSX – Part 4 we discussed configuring and deploying an NSX Distributed Router. Here in Network Virtualization with VMware NSX – Part 5 we will discuss VXLAN to VLAN layer 2 bridging, configuring and deploying an NSX Edge Gateway, and configuring static routes on the NSX Edge Gateway and on the Distributed Router.

VXLAN to VLAN Layer 2 Bridging

A VXLAN to VLAN bridge enables direct Ethernet connectivity between virtual machines in a logical switch and virtual machines in a distributed port group. This connectivity is called layer 2 bridging.

We can create a layer 2 bridge between a logical switch and a VLAN, which enables migrating virtual workloads to physical devices with no effect on IP addresses. A logical network can leverage a physical gateway and access existing physical network and security resources by bridging the logical switch broadcast domain to the VLAN broadcast domain. Bridging can also be used in a migration strategy where you might be using P2V and you do not want to change subnets.

Note:- VXLAN to VXLAN bridging or VLAN to VLAN bridging is not supported. Bridging between different data centers is also not supported. All participants of the VLAN and VXLAN bridge must be in the same data center.

NSX Edge Services Gateway

The services gateway gives you access to all NSX Edge services such as firewall, NAT, DHCP, VPN, load balancing, and high availability. You can install multiple NSX Edge services gateway virtual appliances in a datacenter. Each NSX Edge virtual appliance can have a total of ten uplink and internal network interfaces.


The NSX Edge logical router provides east-west routing, and the NSX Edge Services Gateway provides north-south routing.

NSX Edge Services Gateway Sizing:-

NSX Edge can be deployed in four different configurations. When we deploy the NSX Edge gateway we need to choose the right size as per the load/requirements. We can also convert the size of the ESG later from Compact to Large, X-Large, or Quad Large, as you can see in the picture.

Note:- A service interruption might occur when the old NSX Edge gateway instance is removed and the new NSX Edge gateway instance is redeployed with the new size, i.e. when we convert the size of the ESG.

NSX Edge Services Gateway features:-

For resiliency and high availability, the NSX Edge Services Gateway can be deployed as a pair of active/standby units (HA mode).

When we deploy the ESG/DLR in HA mode, NSX Manager deploys the pair of NSX Edges/DLRs on different hosts (anti-affinity rule). Heartbeat keepalives are exchanged every second between the active and standby edge instances to monitor each other's health status.

If the ESXi server hosting the active NSX Edge fails, at the expiration of a “Declare Dead Time” timer, the standby node takes over the active duties. The default value for this timer is 15 seconds, but it can be tuned down (via UI or API calls) to 6 seconds.

The NSX Manager also monitors the health of the deployed NSX Edges and ensures that a failed unit is restarted on another ESXi host.

The NSX Edge appliance supports static and dynamic routing (OSPF, IS-IS, BGP, and Route redistribution).

Deploy the NSX Edge gateway and configure static routing:

1. Connect to vCenter Server through the vSphere Web Client –> click the Home tab –> Inventories –> Networking & Security and select NSX Edges.

2. Click the green plus sign (+) to open the New NSX Edge dialog box. On the Name and description page, select Edge Services Gateway. (If you want to enable HA for the ESG, select the Enable High Availability check box; otherwise leave it unchecked.) Enter the name of the ESG as per your company standard and click Next.

3. On the CLI credentials page, enter the password for the ESG in the password text box. Check the Enable SSH Access box to enable SSH access to the ESG appliance.

Note:- The password length must be at least 12 characters.

4. Select the datacenter where you want to deploy this appliance. Select the appliance size depending on your requirement (we can also convert to any size later). Check Enable auto rule generation to automatically generate service rules to allow the flow of control traffic.

Under NSX Edge Appliances, click the green plus sign (+) to open the Add NSX Edge Appliance dialog box.

5. In the Add NSX Edge Appliance dialog box, select the cluster and datastore to deploy the NSX Edge appliance in the required location and on the designated datastore, and click OK.

6. Verify all the settings on the Configure deployment page and click Next.

7. On the Configure Interfaces page, click the green plus sign (+) to open the Add NSX Edge Interface dialog box.

8. Enter the interface name in the Name text box, choose the Type, click the Connected To –> Select link, and choose the required distributed port group. Click the green plus sign (+) under Configure Subnets to add a subnet for the interface.

9. In the Add Subnet dialog box, click the green plus sign (+) to add an IP address field. Enter the required IP address (192.168.100.3) in the IP Address text box and click OK to confirm the entry. Enter the subnet prefix length (24) in the Subnet prefix length text box and click OK.

10. Verify all the settings in the Add NSX Edge Interface dialog box and click OK.

11. Repeat steps 7-10 to add all the required interfaces for the ESG, then click Next.


12. Once all the interfaces have been added, verify the settings on the Configure Interfaces page and click Next.

13. On the Default gateway settings page, select the Configure Default Gateway check box. Verify that the vNIC selection is Uplink-Interface, enter the default gateway address (192.168.100.2) in the Gateway IP text box, and click Next.

14. On the Firewall and HA page, select the Configure Firewall default policy check box and set the Default Traffic Policy to Accept. You can see that the Configure HA parameters are grayed out because we did not check the Enable High Availability check box in step 2. Click Next.

15. On the Ready to Complete page, verify all the settings (if you want to change anything, go back and change it) and click Finish to complete the NSX Edge deployment.

16. It will take a few minutes to complete the deployment. Now under NSX Edges you can see that it shows Deployed.

17. Double-click the NSX Edge, and you can see the configuration settings we chose while deploying it.

Now we will configure static routes on the NSX Edge Gateway:-

1. Double-click the NSX Edge to browse it –> click the Manage tab –> click Routing and select Static Routes. Click the green plus sign (+) to open the Add Static Route dialog box.

2. Select the interface connected to the DLR (Transit-Interface), enter the network ID with subnet mask (172.16.0.0/24) for which you want to add routing and the next-hop address for the configured network (in my case 192.168.10.2), and click OK.

3. After every setting or modification we need to publish the changes. Click Publish Changes.

4. Once publishing has finished, you can see the entry under Static Routes.


Configure Static Routes on the Distributed Router:-

1. Under Networking & Security –> NSX Edges, double-click the Distributed Router entry to manage that object.

2. After browsing to the DLR, click the Manage tab and then the Routing tab. In the routing category panel, select Static Routes and click the green plus sign (+) to add static routes on the DLR.


3. Select the interface connected to the ESG (Transit-Interface), enter the network ID with subnet mask (192.168.110.0/24) for which you want to add routing and the next-hop address for the configured network (in my case 192.168.10.1), and click OK.

4. After every setting or modification we need to publish the changes. Click Publish Changes. Once done, you can see the routes in the Static Routes list.


Once static routing has been configured, the logical switch networks will be able to reach the external network and vice versa, e.g. from the external network host 192.168.110.10 to the three logical switch networks created in Part 2 (172.16.0.0/24).


That's it. We are done with deploying the NSX Distributed Router and the NSX Edge Services Gateway, and with configuring static routing on the DLR and ESG.

In the next part (Network Virtualization with VMware NSX – Part 6) we will discuss how to configure dynamic routing on the NSX Edge appliances and the NSX Distributed Router.

Thank you and stay tuned for next part. Keep sharing the knowledge 🙂

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

VMware NSX- How to Delete/Remove NSX Logical Switch

Recently I was trying to remove/delete one of the NSX logical switches in my lab. While trying to remove/delete the logical switch, I got this error:

As per the error message, some resources are still connected to this logical switch, which is why we get the error: DB-Tier resources are still in use. So we need to remove any connected virtual machines from this logical switch, and then we'll be able to remove/delete this NSX logical switch.

So the first thing we need to check is which VMs/resources are utilizing the NSX logical switch.

Connect to vCenter (vSphere Web Client) –> Networking and Security –> Logical Switches –> and from the right pane double-click the logical switch we are trying to remove.

As you can see in the screen below, one virtual machine is connected to the DB-Tier NSX logical switch. We have two options to remove this VM from this logical switch:

1. Migrate the virtual machine to another port group, or

2. Delete the virtual NIC from the VM (which is not good practice).

So here we are going to migrate the DB-sv-01a VM from the DB-Tier logical switch to another virtual machine port group.

Now, after migrating the VM, we try to remove/delete this NSX logical switch.


Now we are able to remove/delete this NSX logical switch.


That’s All. I hope this will be informative for others. Thank you !!

Network Virtualization with VMware NSX – Part 4

We discussed Virtual LAN (VLAN), Virtual Extensible LAN (VXLAN), Virtual Tunnel End Point (VTEP), VXLAN Replication Modes, and NSX Logical Switching in Network Virtualization with VMware NSX – Part 3. Here in Part 4 we will discuss NSX Routing.

NSX Routing :-

The TCP/IP protocol suite offers different routing protocols that provide a router with methods for building valid routes. The following routing protocols are supported by NSX:

Open Shortest Path First (OSPF): This protocol is a link-state protocol that uses a link-state routing algorithm. This protocol is an interior routing protocol.
Intermediate System to Intermediate System (IS-IS): This protocol determines the best route for datagrams through a packet switched network.
Border Gateway Protocol (BGP): This protocol is an exterior gateway protocol that is designed to exchange routing information between autonomous systems (AS) on the Internet.

NSX Logical Router:-

The NSX Edge logical router provides East-West distributed routing with tenant IP address space and data path isolation. Virtual machines or workloads that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface. A logical router can have eight uplink interfaces and up to a thousand internal interfaces.

During the configuration process, NSX Manager deploys the logical router control virtual machine and pushes the logical interface configurations to each host through the controller cluster. The logical router control virtual machine is the control plane component of the routing process and supports the OSPF and BGP protocols. The distributed logical router data path runs at the kernel module level.

The NSX Controller cluster is responsible for distributing routes learned from the logical router control virtual machine across the hypervisors. Each control node in the cluster takes responsibility for distributing the information for a particular distributed logical router instance. In a deployment where multiple distributed logical router instances are deployed, the load is distributed across the NSX Controller nodes.

The distributed logical router owns the logical interfaces (LIFs). The concept is similar to interfaces on a physical router, except that a distributed logical router can have a maximum of 1,000 LIFs. For each segment that the distributed logical router connects to, it maintains one ARP table.

When the LIF is connected to a VLAN, the LIF has a pMAC and when the LIF is connected to a VXLAN, the LIF has a vMAC.

NOTE :- You can have only one VXLAN LIF connecting to a logical switch. Only one distributed logical router can be connected to a logical switch.

DLR high availability:- When high availability is enabled, NSX Manager has the vCenter Server system deploy two logical router control virtual machines and designates one as active and one as passive. If the active control virtual machine fails, the passive one takes about 15 seconds to take over. Because the control virtual machine is not in the data plane, data plane traffic is not affected.

Configuring and Deploying an NSX Distributed Router:-

1. Connect vCenter Server through vSphere Web Client –> Home –> Inventories –> Networking & Security.

2. In the left navigation pane, select NSX Edges.

3. In the center pane, click the green plus sign (+) to open the New NSX Edge dialog box.

4. In the New NSX Edge dialog box, on the Name and description page, click the Logical (Distributed) Router button. Enter the name of the distributed router in the Name text box, enter the hostname, a description, and the tenant name, and click Next.
5. On the Settings page, enter the password for the DLR and enable SSH access if needed. If you want the DLR in high availability mode, check the Enable High Availability box, then click Next.

Note:- The password must be at least 12 characters long.

6. On the Configure Deployment page, verify that you have selected the required datacenter.

7. Under NSX Edge Appliances, click the green plus sign (+) to open the Add NSX Edge Appliance dialog box. Select the required cluster/resource pool, datastore, host, and folder to deploy the DLR (if you checked the High Availability option, two appliances will be deployed), then click OK to close the Add NSX Edge Appliance dialog box.

8. Verify the NSX Edge appliance settings and click Next.

9. On the Configure Interfaces page, under Management Interface Configuration, click the Connected To –> Select link, select the required distributed port group, and click OK.

10. Under Configure Interfaces of this NSX Edge, click the green plus sign (+) to open the Add Interface dialog box.

Note:- As discussed in Part 3, we are configuring the DLR with the requirements below, so we need to add four interfaces.

11. In the Add Interface dialog box, enter the name of the interface, select the type, click the Select link next to Connected To, choose the desired logical switch, and click OK.

12. Now click the green plus sign (+) under Configure Subnets to add a subnet for the interface. In the Add Subnet box, click the green plus sign (+) to add the IP address and subnet mask, and click OK.

13. Once the subnets have been added, click OK to complete the Add Interface dialog.

14. Repeat steps 11-13 to add and configure the other three interfaces (Web, App, and Database).

15. Once the remaining three interfaces have been added and configured, click Next to proceed.

16. On the Ready to Complete page, review the configuration and click Finish to start deploying the Logical (Distributed) Router.

17. It will take some time to complete the deployment of the Logical (Distributed) Router.

18. Verify that the distributed router entry has a type of Logical Router. Double-click the entry to manage the object, then click the Manage tab –> Settings –> Interfaces and confirm the status of all four interfaces is green.

19. Under Configuration you can see that two logical router appliances are deployed, because we chose to deploy in HA mode. You can also verify this from the cluster.


20. Now, after deploying the DLR with all four interfaces, you can test connectivity between all the VMs using ping.
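You can also double-check the deployment outside the Web Client by asking NSX Manager for its edge inventory over the REST API. Here is a minimal Python sketch, with a hypothetical manager FQDN and credentials:

import requests

NSX_MGR = "https://nsxmgr.corp.local"    # hypothetical NSX Manager FQDN

resp = requests.get(
    f"{NSX_MGR}/api/4.0/edges",
    auth=("admin", "VMware1!VMware1!"),  # hypothetical credentials
    verify=False,                        # lab only: self-signed certificate
)
# The XML response contains an <edgeSummary> per edge with its <objectId>
# (e.g. edge-1) and type, so you can confirm the DLR deployed successfully.
print(resp.status_code)
print(resp.text[:500])                   # first part of the XML inventory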

In the next part (Network Virtualization with VMware NSX – Part 5) we will discuss how to configure and deploy an NSX Edge gateway, and how to configure routes on the NSX Edge gateway and on the distributed router.

—————————————————————————————————-

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Please share if useful …..Thank You :)

 

Network Virtualization with VMware NSX – Part 3

In Network Virtualization with VMware NSX – Part 2 we discussed the NSX Controller cluster, how to deploy the NSX Controller instances, how to create an IP pool, and how to install the network virtualization components (prepare hosts) on vSphere hosts.

In this part we will discuss logical switch networks and VXLAN overlays.

Before discussing VXLAN, let’s talk a bit about Virtual LAN (VLAN):-

A VLAN is a group of devices on one or more LANs that are configured to communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible.

VLANs address scalability, security, and network management by enabling a switch to serve multiple virtual subnets from its LAN ports.

VLANs split a switch into separate virtual switches (broadcast domains). Only members of a VLAN can see that VLAN’s traffic, and traffic between VLANs must go through a router.

By default, all ports on a switch are in a single broadcast domain. VLANs enable a single switch to serve multiple switching domains, with the forwarding table on the switch partitioned between the ports belonging to a common VLAN. Out of the box, all ports on a switch are part of a single default VLAN (VLAN 1 on most switches), which also serves as the native, untagged VLAN.

Virtual Extensible LAN (VXLAN) enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks.

VXLAN is an Ethernet-in-IP overlay technology, where the original layer 2 frame is encapsulated in a User Datagram Protocol (UDP) packet and delivered over a transport network. This technology provides the ability to extend layer 2 networks across layer 3 boundaries and to consume capacity across clusters. VXLAN adds 50 to 54 bytes of information to the frame, depending on whether VLAN tagging is used, which is why VMware recommends increasing the MTU to at least 1,600 bytes to support NSX.

A VXLAN Network Identifier (VNI) is a 24-bit number that is added to the VXLAN frame. The 24-bit address space theoretically enables up to 16 million VXLAN networks, and each VXLAN network is an isolated logical network. VMware NSX™ starts with VNI 5000.
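These numbers are easy to sanity-check. The short calculation below shows where the 1,600-byte MTU recommendation and the 16 million figure come from:

# Where the VXLAN MTU recommendation and VNI count come from.
inner_mtu      = 1500        # standard MTU of the encapsulated guest frame
vxlan_overhead = 54          # worst case: outer MAC + IP + UDP + VXLAN, with VLAN tag

print(inner_mtu + vxlan_overhead)   # 1554 -> hence "at least 1,600 bytes"

vni_bits = 24
print(2 ** vni_bits)                # 16777216 -> roughly 16 million VXLAN networks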

A Virtual Tunnel End Point (VTEP) is an entity that encapsulates an Ethernet frame in a VXLAN frame or de-encapsulates a VXLAN frame and forwards the inner Ethernet frame.

VXLAN Frame :-

The top frame is the original frame from the virtual machine, minus the Frame Check Sequence (FCS), encapsulated in a VXLAN frame. A new FCS is created by the VTEP to cover the entire VXLAN frame. The VLAN tag in the outer layer 2 Ethernet header exists if the port group that your VXLAN VMkernel port is connected to has an associated VLAN number; when it does, the port group tags the VXLAN frame with that VLAN number.

VXLAN Replication Modes:-

Three modes of traffic replication exist: two are NSX Controller based and one is data plane based.

Unicast mode has no physical network requirements apart from the MTU; all traffic is replicated by the VTEPs. In NSX, unicast is the default mode of traffic replication. Unicast has higher overhead on the source VTEP and UTEP.

Multicast mode relies on the physical network for replication. In multicast mode, the VTEP never goes to the NSX Controller instance; as soon as the VTEP receives broadcast traffic, it multicasts the traffic to all devices. Multicast has the lowest overhead on the source VTEP.

Hybrid mode is not the default mode of operation in NSX for vSphere, but it is important for larger-scale operations. The configuration overhead and complexity of L2 IGMP is also significantly lower than that of multicast routing.

In Network Virtualization with VMware NSX – Part 2 we prepared the hosts, so now let’s configure VXLAN on the ESXi hosts.

1. Connect to vCenter using web client.

2. Click Networking & Security and then click Installation.

3. Click the Host Preparation tab and, under the VXLAN column, click Configure to start configuring VXLAN on the ESXi hosts.

4. In the Configure VXLAN networking dialog box, select the switch and VLAN, set the MTU to 1600, and for VMKNic IP addressing choose an existing IP pool from the list if you have created one, or click to create a new pool, then click OK.


5. It will take a few minutes to configure, depending on the number of hosts in the cluster. If an error is indicated, it is a transitory condition that occurs early in the process of applying the VXLAN configuration to the cluster; the vSphere Web Client interface has simply not yet updated to display the actual status. Click Refresh to update the console.

6. Repeat the steps to configure all the clusters. Once configuration is done on all clusters, verify that the VXLAN status is Enabled with a green check mark.

7. Once the VXLAN configuration is done for all the clusters and the VXLAN status is Enabled with a green check mark, click the Logical Network Preparation tab and verify that VXLAN Transport is selected. In the Clusters and Hosts list, expand each of the clusters and confirm each host has a vmk# interface created with an IP address from the IP pool we created for it.
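If you want to perform the same verification programmatically, NSX Manager exposes a fabric status call that reports per-cluster feature status (host preparation, VXLAN, and so on). Here is a minimal Python sketch; the manager FQDN, credentials, and cluster MOID are placeholders, and the endpoint and parameter are from the NSX 6.x API guide as best recalled, so verify them for your version:

import requests

NSX_MGR = "https://nsxmgr.corp.local"    # hypothetical NSX Manager FQDN
CLUSTER = "domain-c7"                    # hypothetical vCenter cluster MOID

resp = requests.get(
    f"{NSX_MGR}/api/2.0/nwfabric/status",
    params={"resource": CLUSTER},        # assumed query parameter; check the API guide
    auth=("admin", "VMware1!VMware1!"),  # hypothetical credentials
    verify=False,                        # lab only: self-signed certificate
)
# The XML lists each fabric feature with a status (e.g. GREEN/RED), mirroring
# the check marks shown on the Host Preparation tab.
print(resp.text)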

Once we have finished configuring VXLAN and verified the VXLAN configuration for all the clusters, we next need to configure the VXLAN segment ID pool that identifies VXLAN networks:-

1. On the Logical Network Preparation tab, click the Segment ID button, then click Edit to open the Segment ID pool dialog box.

2. Enter the segment ID pool and click OK to complete. VMware NSX™ VNIs start from 5000.
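The segment ID pool can also be created over the API rather than the UI. Here is a minimal Python sketch for a 5000-5999 range; the endpoint and element names follow the NSX 6.x API guide as best recalled, so double-check them for your version:

import requests

NSX_MGR = "https://nsxmgr.corp.local"   # hypothetical NSX Manager FQDN

segment_xml = """<segmentRange>
  <name>Segment-ID-Pool</name>
  <begin>5000</begin>
  <end>5999</end>
</segmentRange>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/config/segments",    # assumed endpoint; check the API guide
    auth=("admin", "VMware1!VMware1!"),          # hypothetical credentials
    headers={"Content-Type": "application/xml"},
    data=segment_xml,
    verify=False,                                # lab only: self-signed certificate
)
print(resp.status_code)                          # a 2xx status indicates success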

Next we need to configure a global transport zone:-

A transport zone specifies the hosts and clusters that are associated with logical switches created in the zone. Hosts in a transport zone are automatically added to the logical switches that you create. The process is very similar to manually adding hosts to a VMware vSphere Distributed Switch.

1. On the Logical Network Preparation tab, click Transport Zones and click the green plus sign to open the New Transport Zone dialog box.

2. Enter the name for the transport zone, select the control plane mode, select the clusters to add to the transport zone, and click OK to complete the creation.

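For completeness, the transport zone (a vdnScope in the API) can be created the same way. Here is a minimal Python sketch with a hypothetical cluster MOID; the doubly nested <cluster> elements follow the 6.x schema as best recalled, so validate the body against the API guide for your version:

import requests

NSX_MGR = "https://nsxmgr.corp.local"   # hypothetical NSX Manager FQDN

scope_xml = """<vdnScope>
  <name>Global-Transport-Zone</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c7</objectId>   <!-- hypothetical cluster MOID -->
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes",
    auth=("admin", "VMware1!VMware1!"),          # hypothetical credentials
    headers={"Content-Type": "application/xml"},
    data=scope_xml,
    verify=False,                                # lab only: self-signed certificate
)
print(resp.status_code)                          # a 2xx status indicates success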

———————————————————————————————————-

NSX Logical Switching

The logical switching capability in the NSX platform gives customers the ability to spin up isolated logical L2 networks with the same flexibility and agility with which they spin up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently of the specific location where they are deployed in the data center network. This is possible because of the decoupling between the network infrastructure and the logical networks that NSX network virtualization provides. Each logical switch gets its own unique VNI.

Deploying the NSX virtualization components enables the agile and flexible creation of applications along with their required network connectivity and services. A typical example is the creation of a multi-tier application.

Configure Logical Switch Networks

We need to create logical switches for all the required networks (e.g. the Transit, Web-Tier, App-Tier, and DB-Tier networks in the picture above).

1. Connect to vCenter Server using the Web Client, click Networking & Security, and select Logical Switches in the left navigation pane.

2. Click the green plus sign to open the New Logical Switch dialog box. Enter the logical switch name, select the global transport zone we created earlier, choose the control plane mode, and click OK to complete the switch creation.

3. Wait for the update to complete and confirm Transit-Network appears with a status of Normal. Repeat the steps to create all the required logical switches and confirm they are all Normal.
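Since we need several switches (Transit, Web-Tier, App-Tier, and DB-Tier), scripting the creation against the API can be quicker than repeating the dialog. Here is a minimal Python sketch, assuming a hypothetical manager FQDN and transport zone scope ID (vdnscope-1):

import requests

NSX_MGR  = "https://nsxmgr.corp.local"   # hypothetical NSX Manager FQDN
SCOPE_ID = "vdnscope-1"                  # hypothetical transport zone ID

for name in ["Transit-Network", "Web-Tier", "App-Tier", "DB-Tier"]:
    vwire_xml = f"""<virtualWireCreateSpec>
      <name>{name}</name>
      <tenantId>default</tenantId>
      <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
    </virtualWireCreateSpec>"""
    resp = requests.post(
        f"{NSX_MGR}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
        auth=("admin", "VMware1!VMware1!"),          # hypothetical credentials
        headers={"Content-Type": "application/xml"},
        data=vwire_xml,
        verify=False,                                # lab only: self-signed cert
    )
    # The response body contains the new virtual wire ID (e.g. virtualwire-10).
    print(name, resp.status_code, resp.text)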

Once the logical switches have been created, we need to migrate virtual machines to the logical switches:-

1. In the left pane under Networking & Security, select Logical Switches. In the center pane, select the logical switch (e.g. Web-Tier), right-click it, and choose Add VM.

2. Select the virtual machines you want to add to the logical switch and click Next.

3. Select the vNIC you want to add to the network and click Next.

4. In the Ready to Complete box, verify the settings and click Finish to complete adding the VMs to the desired network.

5. To verify that the VMs have been added to the logical switch, double-click the logical switch.

6. Click Related Objects and then the Virtual Machines tab, and you can see the list of VMs added to this specific logical switch.

7. Repeat the same steps for all the logical switches to add VMs. Once done, try to ping between VMs on the same switch and between switches.

At this point you can only ping VMs connected to the same switch. To communicate with VMs on another switch we need to configure routing, which we will discuss in the next part.

======================================================

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5


Please share if useful …..Thank You 🙂