NSX Troubleshooting – VMs out of Network on VNI 5XXX

I am currently working with a customer running Network Virtualization (NSX) in their SDDC environment. A few weeks ago we faced an issue where multiple VMs lost network connectivity in one of the compute clusters. I wanted to share the troubleshooting steps in the hope that they will be useful to the many folks working on NSX. The customer is running NSX 6.1.1 with multiple VNIs managing networks for multiple environments (e.g. Prod, DR, DEV, QA, Test).

Here are the steps:-

  1. After receiving the report, we tried to ping random VMs from the list; the VMs were not reachable.
  2. The next step was to find the VNI number for those VMs and see if they were all part of the same VNI. And yes, those VMs were all part of the same VNI (e.g. 5XXX).
  3. Once we knew the VNI number, the next step was to find out whether all VMs connected to VNI 5XXX were impacted or only a few.
  4. From step 3 we learned that only a few VMs were impacted, not all. After drilling down we found that the impacted VMs were all running on one of the ESXi hosts in the cluster, and the VNI was working fine on the other hosts in the cluster.
  5. To bring the VMs online we moved them to another host; after the migration the VMs were reachable and users were able to connect to the applications.
  6. Next was the Root Cause Analysis (RCA): why did the VMs connected to VNI 5XXX on ESXi host XXXXXXXXXX lose network?
  7. PuTTY (SSH) to the ESXi host and run the following command to check the VNI status on the host:- net-vdl2 -l. In the output below you can see that VXLAN network 5002 is DOWN, and all impacted VMs were part of it.
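Since the original output screenshot is not reproduced here, the per-VNI section of net-vdl2 -l looks roughly like this (a sketch from memory; field names and exact formatting vary by NSX/ESXi build):

net-vdl2 -l
...
VXLAN network: 5002
Multicast IP: N/A
Control plane: Enabled ()
Controller connection: 192.168.110.201 (down)    <– the impacted VNI shows its controller connection down
Port count: 4
...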

8. To fix the issue we need to restart the NETCPA daemon on the host. Here is the list of commands to STOP / START and CHECK the STATUS of the NETCPA daemon.

1) Stop the netcpa daemon by running –> /etc/init.d/netcpad stop.

2) Start the netcpa daemon by running –> /etc/init.d/netcpad start.

3) Check the status of the service by running –> /etc/init.d/netcpad status.

9. After starting the NETCPA daemon, check the VNI status by running the command:- net-vdl2 -l. Now you can see that VXLAN 5002 is UP.
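Putting it together, the full fix-and-verify sequence on the affected host was (the commands are exactly the ones above; only the grouping and comments are mine):

/etc/init.d/netcpad stop      # stop the control plane agent
/etc/init.d/netcpad start     # start it again
/etc/init.d/netcpad status    # confirm netcpad is running
net-vdl2 -l                   # confirm VNI 5002 now shows as UP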

10. The next step was to move a few of the VMs on VNI 5002 back to this host and check the connectivity status of the VMs and applications. All were perfectly fine after moving back onto this host.

Note:- This issue has been addressed in NSX version 6.1.4e. If you are running NSX 6.1.4e, you should not hit this issue, as the Controller monitors the netcpad daemon and restarts it if it fails on any of the hosts.

That’s it ….SHARE & SPREAD THE KNOWLEDGE!!

NSX 6.2 has been released with many enhancements and cool new features

A surprise came last week with the NSX 6.2 release. The expectation was that NSX 6.2 would be released the following week at VMworld 2015 US, but VMware released the much-awaited NSX 6.2 a week before VMworld 2015 US.

NSX vSphere 6.2 includes the following new and changed features:

  • Cross vCenter Networking and Security
    • NSX 6.2 with vSphere 6.0 supports Cross vCenter NSX: where logical switches (LS), distributed logical routers (DLR) and distributed firewalls (DFW) can be deployed across multiple vCenters, thereby enabling logical networking and security for applications with workloads (VMs) that span multiple vCenters or multiple physical locations.
    • Consistent firewall policy across multiple vCenters: Firewall Rule Sections in NSX can now be marked as “Universal” whereby the rules defined in these sections get replicated across multiple NSX managers. This simplifies the workflows involving defining consistent firewall policy spanning multiple NSX installations.
    • Cross vCenter vMotion with DFW: Virtual Machines that have policies defined in the “Universal” sections can be moved across hosts that belong to different vCenters with consistent security policy enforcement.
    • Universal Security Groups: Security Groups in NSX 6.2 that are based on IP Address, IP Set, MAC Address and MAC Set can now be used in Universal rules whereby the groups and group memberships are synced up across multiple NSX managers. This improves the consistency in object group definitions across multiple NSX managers, and enables consistent policy enforcement.
    • Universal Logical Switch (ULS): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of logical switches that can span multiple vCenters, allowing the network administrator to create a contiguous L2 domain for an application or tenant.
    • Universal Distributed Logical Router (UDLR): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of distributed logical routers that can span multiple vCenters. The universal distributed logical routers enable routing across the universal logical switches described earlier. In addition, NSX UDLR is capable of localized north-south routing based on the physical location of the workload.
  • Operations and Troubleshooting Enhancements
    • New traceflow troubleshooting tool: Traceflow is a troubleshooting tool that helps identify if the problem is in the virtual or physical network. It provides the ability to trace a packet from source to destination and helps observe how that packet passes through the various network functions in the virtual network.
    • Flow monitoring and IPFIX separation: In NSX 6.1.x, NSX supported IPFIX reporting, but IPFIX reporting could be enabled only if flow reporting to NSX Manager was also enabled. Starting in NSX 6.2.0, these features are decoupled. In NSX 6.2.0 and later, you can enable IPFIX independent of flow monitoring on NSX Manager.
    • New CLI monitoring and troubleshooting commands in 6.2: See the knowledge base article for more information.
    • Central CLI: Central CLI reduces troubleshooting time for distributed network functions. Commands are run from the NSX Edge command line and retrieve information from controllers, hosts, and the NSX Manager. This allows you to quickly access and compare information from multiple sources. The central CLI provides information about logical switches, logical routers, distributed firewall and edges.
    • CLI ping command adds configurable packet size and do-not-fragment flag: Starting in NSX 6.2.0, the NSX CLI ‘ping’ command offers options to specify the data packet size (not including the ICMP header) and to set the do-not-fragment flag. See the NSX CLI Reference for details.
    • Show health of the communication channels: NSX 6.2.0 adds the ability to monitor communication channel health. The channel health status between NSX Manager and the firewall agent, between NSX Manager and the control plane agent, and between host and the NSX Controller can be seen from the NSX Manager UI. In addition, this feature detects when configuration messages from the NSX Manager have been lost before being applied to a host, and it instructs the host to reload its NSX configuration when such message failures occur.
    • Standalone Edge L2 VPN client CLI: Prior to NSX 6.2, a standalone NSX Edge L2 VPN client could be configured only through OVF parameters. Commands specific to standalone NSX Edge have been added to allow configuration using the command line interface. The OVF is now used for initial configuration only.
  • Logical Networking and Routing
    • L2 Bridging Interoperability with Distributed Logical Router: With VMware NSX for vSphere 6.2, L2 bridging can now participate in distributed logical routing. The VXLAN network to which the bridge instance is connected will be used to connect the routing instance and the bridge instance together.
    • Support of /31 prefixes on ESG and DLR interfaces per RFC 3021
    • Support of relayed DHCP request on the ESG DHCP server
    • Ability to keep VLAN tags over VXLAN
    • Exact Match for Redistribution Filters: The redistribution filter uses the same matching algorithm as an ACL, i.e. exact prefix match by default (unless the le or ge options are used).
    • Support of administrative distance for static route
    • Ability to enable/disable uRPF check per interface on Edge
    • Display AS path in CLI command show ip bgp
    • HA interface exclusion from redistribution into routing protocols on the DLR control VM
    • Distributed logical router (DLR) force-sync avoids data loss for east-west routing traffic across the DLR.
    • View active edge in HA pair: In the NSX 6.2 web client, you can find out if an NSX Edge appliance is the active or backup in an HA pair.
    • REST API supports reverse path filter (rp_filter) on Edge: Using the system control REST API, the rp_filter sysctl can be configured; it is not exposed on the REST API for vnic interfaces and sub-interfaces. See the NSX API Guide for more information.
    • Behavior of the IP prefix ‘GE’ and IP prefix ‘LE’ BGP route filters: In NSX 6.2, the following enhancements have been made to BGP route filters:
      • LE / GE keywords not allowed: For the null route network address (defined as ANY or in CIDR format 0.0.0.0/0), less-than-or-equal-to (LE) and greater-than-or-equal-to (GE) keywords are no longer allowed. In previous releases, these keywords were allowed.
      • LE and GE values in the range 0-7 are now treated as valid. In previous releases, this range was not valid.
      • For a given route prefix, you can no longer specify a GE value that is greater than the specified LE value.
  • Networking and Edge Services
    • The management interface of the DLR has been renamed to HA interface. This has been done to highlight the fact that the HA keepalives travel through this interface and that interruptions in traffic on this interface can result in a split-brain condition.
    • Load balancer health monitoring improvements: Delivers granular health monitoring that reports information on failure, keeps track of the last health check and status change, and reports failure reasons.
    • Support VIP and pool port range: Enables load balancer support for applications that require a range of ports.
    • Increased maximum number of virtual IP addresses (VIPs): VIP support rises to 1024.
  • Security Service Enhancements
    • New IP address discovery mechanisms for VMs: Authoritative enforcement of security policies based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. In NSX 6.1 and earlier, IP address discovery for each VM relied on the presence of VMware Tools (vmtools) on that VM or the manual authorization of the IP address for that VM. NSX 6.2 introduces the option to discover the VM’s IP address using DHCP snooping or ARP snooping. These new discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.
  • Solution Interoperability
    • Support for vSphere 6.0 Platform Services Controller topologies: NSX now supports external Platform Services Controllers (PSC), in addition to the already supported embedded PSC configurations.
    • Support for vRealize Orchestrator Plug-in for NSX 1.0.2: With NSX 6.2 release, NSX-vRO plug-in v1.0.2 is introduced in vRealize Automation (vRA).

For more details please refer to VMware NSX 6.2 for vSphere Documentation Center :- http://pubs.vmware.com/NSX-62/index.jsp

Thank you 🙂

Network Virtualization with VMware NSX – Part 8

Let’s get back into NSX mode again 🙂 In my last blog, Network Virtualization with VMware NSX – Part 7, we discussed Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway. Here in Network Virtualization with VMware NSX – Part 8 we will discuss High Availability for the NSX Edge.

High Availability

High Availability (HA) ensures that the NSX Edge appliance is always available, by installing an active/standby pair of Edges in your virtualized infrastructure. We can enable HA either when installing the NSX Edge appliance or afterwards.

The primary NSX Edge appliance is in the active state and the secondary appliance is in the standby state. NSX Edge replicates the configuration of the primary appliance to the standby appliance. VMware recommends creating the primary and secondary appliances on separate datastores. If you create the primary and secondary appliances on the same datastore, the datastore must be shared across all hosts in the cluster for the HA appliance pair to be deployed on different ESXi hosts.

All NSX Edge services run on the active appliance. The primary appliance maintains a heartbeat with the standby appliance and sends service updates through an internal interface. If a heartbeat is not received from the primary appliance within the specified time (the default is 15 seconds), the primary appliance is declared dead. The standby appliance moves to the active state, takes over the interface configuration of the primary appliance, and starts the NSX Edge services that were running on the primary. After the switchover, Load Balancer and VPN services need to re-establish TCP connections with the NSX Edge, so service is disrupted for a short while. Logical switch connections and firewall sessions are synced between the primary and standby appliances, so for those there is no service disruption during the switchover.

If the NSX Edge appliance fails and a bad state is reported, high availability force-synchronizes the failed appliance to revive it. When the appliance is revived, it takes on the configuration of the now-active appliance and stays in the standby state. If the NSX Edge appliance is dead, you must delete the appliance and add a new one.

NSX Edge ensures that the two HA NSX Edge virtual machines are not on the same ESX host even after you use DRS and vMotion (unless you manually vMotion them to the same host).

Now let’s verify HA settings and Configure High Availability for NSX Edge :-

1. Log in to the Web Client –> Home –> Networking and Security –> NSX Edges –> Double-click either the Logical Router or the NSX Edge Services Router.

2. This opens the selected device. Click Manage –> Settings –> Configuration. Under HA Configuration you can see that the HA Status is DISABLED. You can check the Logical Router the same way.

3. The same can be verified from the Management Cluster where we have deployed the NSX Edge appliances. You can see in the screenshot below that only one instance of the Edge Services Router (Edge Services Router-0) and one instance of the Logical Router (Logical-Router-0) is running.

4. Now let’s enable HA for the NSX Edge. Click Manage –> Settings –> Configuration –> Under HA Configuration, click Change.

5. The Change HA Configuration window opens. Set HA Status to Enable, select the vNIC, enter the Declare Dead Time (the default is 15 seconds), enter the management IPs for the heartbeat for both nodes, and click OK.

6. It takes a few seconds, and then the HA Status under HA Configuration shows Enabled.

7. Let’s go back to the Management Cluster to see the number of nodes. Now you can see that there are two instances up and running: Edge Services Router-0 and Edge Services Router-1.

8. That’s it. The NSX Edge Services Router is now running in HA mode; if the active node fails, the standby node will take over after 15 seconds. We can enable HA for the Logical Router the same way; I have added screenshots for the Logical Router.

9. Once you have enabled HA for the NSX Edge, you can PuTTY (SSH) to the NSX Edge and verify the active node and standby node by running the show service highavailability command. Let me connect and run this command to verify.

You can see in the result below that this node (vshield-edge-4-0) is active and vshield-edge-4-1 is the peer host, i.e. the standby node.

10. Now let’s shut down vshield-edge-4-0 and run the show service highavailability command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is unreachable.

11. Now let’s power on vshield-edge-4-0 and run the command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is the peer host, i.e. the standby node.
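For reference, the whole failover test is just this one command run on each node in turn (the expected output wording below is approximate, quoted from memory of the lab):

show service highavailability    # on vshield-edge-4-0: reports this node as active, vshield-edge-4-1 as peer/standby
(power off vshield-edge-4-0 from vCenter and wait past the Declare Dead Time of 15 seconds)
show service highavailability    # on vshield-edge-4-1: reports itself active, peer unreachable
(power vshield-edge-4-0 back on)
show service highavailability    # vshield-edge-4-1 stays active, vshield-edge-4-0 rejoins as standby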

That’s it!! This is how we can enable HA and test failover for the NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8

Backing up and Restoring the vCenter Server 6.0 with vSphere Data Protection 6 – Part 1

A few days ago I wrote a blog about how to back up and restore the vCenter Server 6.0 embedded vPostgres database. I am extending the same topic and will discuss here how to back up and restore vCenter Server using vSphere Data Protection.

One of the most critical aspects of any network, not only a virtualized infrastructure, is a solid backup strategy, and VMware offers its own backup tool, vSphere Data Protection, to back up and restore a virtual machine (VM) that contains vCenter Server, a vCenter Server Appliance, or a Platform Services Controller.

vSphere Data Protection (VDP) is a robust, simple-to-deploy, disk-based backup and recovery solution powered by EMC Avamar. VDP is fully integrated with VMware vCenter Server and enables centralized and efficient management of backup jobs while storing backups in deduplicated destination storage locations.

The VMware vSphere Web Client interface is used to select, schedule, configure, and manage backups and recoveries of virtual machines.

VDP provides fast and efficient data protection for all of your virtual machines, even those powered off or migrated between vSphere hosts.

VDP reduces the cost of backing up virtual machines and minimizes the backup window by using Changed Block Tracking (CBT) and VMware virtual machine snapshots.

VDP allows easy backups without the need for third-party agents installed in each virtual machine. To take a full image backup of vCenter Server or any VM using vSphere Data Protection (VDP), the minimum requirements are that the VM must have VMware Tools installed and running, and that the VM must use a fully qualified domain name (FQDN) with correct DNS resolution or be configured with a static IP address.

The main purpose of this blog is to show how to use vSphere Data Protection to restore a VM that contains a vCenter Server instance directly on the ESXi host that is running the vSphere Data Protection appliance, when the vCenter Server service becomes unavailable or when you cannot access the vSphere Data Protection user interface by using the vSphere Web Client.

One VDP appliance supports up to 400 virtual machines, and one vCenter Server can support up to 20 VDP appliances.

VDP supports image-level backup and restore, VMDK backup and restore, and guest-level backups for Microsoft SQL Server, Exchange Server, and SharePoint Server. With guest-level backups, client agents (VMware VDP for SQL Server Client, VMware VDP for Exchange Server Client, or VMware VDP for SharePoint Server Client) are installed on the SQL Server, Exchange Server, or SharePoint Server in the same manner that backup agents are typically installed on physical servers.

vSphere Data Protection Architecture

VDP can be deployed to any storage supported by vSphere, e.g. VMFS, NFS, and VSAN datastores. Management of VDP is performed by using the vSphere Web Client.

I’ll only discuss here how to deploy VDP, the initial configuration of VDP, scheduled backup of the vCenter Server VM, and how to restore directly to an ESXi host when vCenter Server becomes unavailable or when the user cannot access the vSphere Data Protection user interface by using the vSphere Web Client.

vSphere Data Protection (VDP) installation

1. Connect to the vCenter Server using the vSphere Web Client (https://vcenter6.vcix.nv:9443/vsphere-client/) with administrative privileges.

2. Go to Home –> Hosts and Clusters –> Right-click the Cluster/ESXi host and choose Deploy OVF Template.

3. The Deploy OVF Template window opens. If you are deploying an OVF for the first time, it will ask you to install the VMware Client Integration Plug-in. Click Download the Client Integration Plug-in; make sure the machine from which you want to deploy the OVF has Internet access (or download it on a machine that does and copy it over), and install it there.

4. Once you click Download the Client Integration Plug-in, it will ask you to save the file. Click Save File.

5. Once the download finishes, double-click the file to start the Client Integration Plug-in installation.

6. It is very simple and straightforward; just follow the screens to finish the installation.

7. Once the Client Integration Plug-in installation has finished, connect to the vCenter Server using the vSphere Web Client –> Home –> Hosts and Clusters –> Right-click the Cluster/ESXi host and choose Deploy OVF Template.

8. The Deploy OVF Template window opens. Select Local file and click Browse to navigate to the location of the VDP appliance .ova file. Confirm that you have selected the appropriate file and click OK, then click Next to validate the .ova file.

9. It will validate the OVA file. Review the template details and click Next.

10. On the Accept EULAs screen, read the license agreement (if you want 😉), click Accept, and then click Next.

11. On the Select name and folder screen, enter the VDP appliance name, select the folder or datacenter where you want to deploy the VDP appliance, and then click Next.

12. On the Select storage screen, select the virtual disk format and the datastore to store the VDP appliance, then click Next.

13. On the Setup networks screen, select the network port group for the VDP appliance and click Next.

14. On the Customize template screen, enter the default gateway, DNS, the Network 1 IP address of the VDP appliance, and the Network 1 netmask/subnet mask. Click Next.

NOTE :- The VDP appliance does not support DHCP. A static IP address is required.

15. On the Ready to complete screen, confirm that all of the deployment options are correct. Check the Power on after deployment box and click Finish.

16. It will take some time to deploy the OVF template. You can monitor the progress in Recent Tasks.

17. Once the deployment has completed, you can see the VDP VM under Hosts and Clusters, and its details on the virtual machine Summary tab.

18. vCenter deploys the VDP appliance and boots it into install mode.

==================================================

VDP has been deployed and is now ready for initial configuration.

Initial Configuration

1. Open a web browser and go to https://vdp6.vcix.nv:8543/vdp-configure/ to access the VDP console. The username is root and the default password is changeme.

2. The VDP Welcome screen appears. Click Next to start configuring VDP.

3. The next screen is for the network settings of the VDP appliance. Enter the IP address, subnet mask, gateway, DNS IP address, and the name and domain name of the VDP appliance, and click Next.

4. On the Time Zone screen, select the time zone for the VDP appliance and click Next.

5. On the VDP Credentials screen, enter a password for the root account and click Next.

6. On the vCenter Registration screen, enter the vCenter Server user name and the password for that user, enter the vCenter Server you want to register VDP with, and click Test Connection to test the connection to the entered vCenter Server.

7. Click OK on the connection status box and click Next.

8. Next is the Create Storage page. Select the type and capacity for VDP and click Next.

9. On the Device Allocation page, select the storage provisioning type, the datastore for VDP, and the number of disks, and click Next.

10. On the CPU and Memory page, select the number of virtual CPUs and the amount of memory, and click Next.

11. On the Product Improvement page, select Enable Customer Experience Improvement Program if you want, or leave it unchecked, and click Next.

12. On the Ready to Complete page, select the Run Performance Analysis and Restart Appliance if Successful check boxes and click Next to start the configuration of VDP with the provided details.

13. Click OK on the warning dialog box and wait for the configuration to finish.

14. You can see on the screen below that the configuration of VDP has started.

15. Once the configuration is done, on the Complete page click Restart Appliance to finish the configuration and start the VDP appliance.

16. As you can see, we can monitor this in Recent Tasks.

17. Once the VDP appliance has reloaded, you can log in with the root account and the changed password.

That’s it. The deployment and initial configuration of the VDP appliance have been completed. In Backing up and Restoring the vCenter Server 6.0 with vSphere Data Protection 6 – Part 2 we will discuss how to take a backup of a VM and how to perform an emergency restore of vCenter Server in the absence of vCenter Server.

Thank You 🙂 Keep Learning and Keep Sharing !!

 

Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC)

A great blog written by Spas Kaloferov, Solutions Architect and member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team.

An excellent one, a must-read. Please click the link below to read the full use case:-

Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC)

This is a great design use case for local load balancing with Local Traffic Manager (LTM) and geo-location load balancing with Global Traffic Manager (GTM).

 Thank You 🙂

Network Virtualization with VMware NSX – Part 7

In my last blog, Network Virtualization with VMware NSX – Part 6, we discussed static and dynamic routing. Here in Network Virtualization with VMware NSX – Part 7 we will discuss Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway.

Network Address Translation (NAT)

Network Address Translation (NAT) is the process where a network device assigns a public address to a computer (or group of computers) inside a private network. The main use of NAT is to limit the number of public IP addresses an organization or company must use, for both economy and security purposes.

Three blocks of IP addresses are reserved for private use, and these private IP addresses cannot be advertised on the public Internet:

10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255.

The private addressing scheme works well for computers that only have to access resources inside the network, like workstations needing access to file servers and printers. Routers inside the private network can route traffic between private addresses with no trouble. However, to access resources outside the network, like the Internet, these computers have to have a public address in order for responses to their requests to return to them. This is where NAT comes into play.

Another example is the public cloud, where multiple tenants run their workloads with private IP address ranges. Hosts assigned private IP addresses cannot communicate with other hosts through the Internet. The solution to this problem is to use network address translation (NAT) with private addressing.

NSX Edge provides a network address translation (NAT) service to assign a public address to a computer or group of computers in a private network. The NSX Edge service supports two types of NAT: SNAT and DNAT.

Source NAT (SNAT) is used to translate a private internal IP address into a public IP address for outbound traffic. The picture below depicts the NSX Edge gateway translating the Test-Network addresses 192.168.1.2 through 192.168.1.4 to 10.20.181.171. This technique, where multiple private IP addresses are translated to a single host IP address, is called masquerading.

Destination NAT (DNAT) is commonly used to publish a service located in a private network on a publicly accessible IP address. The picture below depicts NSX Edge NAT publishing the web server 192.168.1.2 on an external network as 10.20.181.171. The rule translates the destination IP address in the inbound packet to an internal IP address and forwards the packet.
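In table form, the two example rules from the pictures above would look like this (the IPs come from the figures; the Description column is just my annotation):

Type | Applied On       | Original IP/Range        | Translated IP/Range | Description
SNAT | Uplink-Interface | 192.168.1.2-192.168.1.4  | 10.20.181.171       | masquerade Test-Network outbound traffic
DNAT | Uplink-Interface | 10.20.181.171            | 192.168.1.2         | publish the internal web server externally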

Configuring Network Address Translation (SNAT and DNAT) on an NSX Edge Services Gateway:-

1. Connect to vCenter Server through the vSphere Web Client –> Click the Home tab –> Inventories –> Networking & Security –> NSX Edges –> and double-click the NSX Edge.

2. Under the NSX Edge router –> click the Manage tab –> click the NAT tab –> click the green plus sign (+) and select Add DNAT Rule or Add SNAT Rule, whichever you would like to add.

3. In the Add DNAT Rule dialog box, select the Uplink-Interface from the Applied On drop-down menu. Enter the public IP address in the Original IP/Range text box and enter the destination in the Translated IP/Range text box. Select the Enabled check box and click OK to add the rule.

4. Click Publish Changes to push the rule.

5. Once the rules are pushed, you can see that one rule has been added to the rule list.

6. To test connectivity using the destination NAT translation, PuTTY (SSH) to the NSX Edge router with the admin account and run a command to begin capturing packets on the Transit-Interface.

debug packet display interface vNic_1 port_80
debug packet display interface vNic_0 icmp

The first command captures packets on interface vNic_1 for TCP port 80; the second captures packets on interface vNic_0 for the ICMP protocol.
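A handy detail when building these capture filters: as I recall from the NSX Edge CLI, the expression after the interface name is a tcpdump-style filter with underscores standing in for spaces, so more specific captures are possible too, for example:

debug packet display interface vNic_0 host_192.168.110.10_and_port_80

(The host and port values here are illustrative, not from the original test.)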

In the same way, we can add SNAT rules for outgoing traffic.

——————————————————————————————————

NSX Edge Load Balancer

Load Balancing is another network service available within NSX that can be natively enabled on the NSX Edge device. The two main drivers for deploying a load balancer are scaling out an application (load is distributed across multiple backend servers) and improving its high-availability characteristics (servers or applications that fail are automatically removed from the pool).

The NSX Edge load balancer distributes incoming service requests evenly among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource use, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to layer 7.

Note :- The NSX platform can also integrate load-balancing services offered by third-party vendors.

NSX Edge offers support for two types of deployment: one-arm mode (called proxy mode) and inline mode (called transparent mode).

One-arm mode (called proxy mode)

The one-arm load balancer has advantages and disadvantages. The advantage is that the design is simple and can be deployed easily. The main disadvantage is that you must have a load balancer per segment, leading to a large number of load balancers.

So when you design and deploy, you need to weigh both factors and choose the mode that fits your requirements.

Inline mode (called transparent mode)

The advantage of using Inline mode is that the client IP address is preserved because the proxies are not doing source NAT. This design also requires fewer load balancers because a single NSX Edge instance can service multiple segments.
With this configuration, you cannot have a distributed router because the Web servers must point at the NSX Edge instance as the default gateway.

Configuring Load Balancing with the NSX Edge Gateway

1. Connect to vCenter Server through vSphere Web Client —> Click Home tab –> Inventories –> Networking & Security –> NSX Edges –> and Double Click NSX Edge.

2.  Under the Manage tab, click Load Balancer. In the load balancer category panel, select Global Configuration.

3. Under Load balancer global configuration, click Edit to open the Edit load balancer global configuration page. Check the Enable Load Balancer box and click OK.

4. Once the load balancer has been enabled, you can see the green tick mark for Enable Load Balancer.

5. Next we need to create an application profile. In the load balancer category panel, select Application Profiles and click the green plus sign (+) to open the New Profile dialog box.

6. In the New Profile dialog box, enter the name, select the protocol type (HTTPS), select the Enable SSL Passthrough check box, and click OK.

7. Once the application profile has been created, you can see its profile ID and name in the list.

8. Next we have to create a server pool. I am going to create a round-robin server pool that contains the two web server virtual machines as members serving HTTPS.

9. In the load balancer category panel, select Pools and click the green plus sign (+) to open the New Pool dialog box.

10. In the New Pool dialog box, enter the server pool name in the text box, select the ROUND-ROBIN algorithm, and under Members click the green plus sign (+) to open the New Member dialog box and add all the web servers as members.

11. Once all members have been added to the server pool, verify them and click OK.

12. Once the pool has been added, you can see the pool ID and pool name with the configured algorithm in the list.

13. Next we need to create a virtual server. Select Virtual Servers and click the green plus sign (+) to open the New Virtual Server dialog box.

14. In the New Virtual Server dialog box, select the Enabled box, enter the virtual server name, enter the IP address of the interface, select the protocol (HTTPS) and port number (443), select the pool name and application profile created earlier, and click OK.

15. Once done, you can see the virtual server name with all the configured details in the list.

That’s it 🙂 This is how we can configure NAT and load balancing using NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8

Backing up and Restoring the vCenter Server 6.0 embedded vPostgres database

I am currently working on a VMware infrastructure design on vSphere 6.0 for one of my customers. This is a small infrastructure with 5-6 hosts and expected growth of 10-20% in the next 3-4 years. So, as per the requirements, we decided that the vCenter management server will be deployed as a virtual machine running vCenter Server with an embedded Platform Services Controller, and that the bundled PostgreSQL database will be used, since the embedded PostgreSQL database with vSphere 6.0 can support up to 20 hosts and 200 virtual machines, which fulfils our requirements for the next 4-5 years.

We had a design review meeting last week and a question came up from the customer: can we back up the bundled PostgreSQL database? If yes, how can we back up and restore the vCenter Server bundled PostgreSQL database?

So I wanted to share this with everyone, hoping it will be useful for many.

Back Up the Embedded vCenter Server Database:-

1. Log in to the vCenter Server with administrative privileges. I have logged in as the service account used for vCenter Server.

2. Browse to the C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\ folder. If you changed the default installation location, it will be different for you.

3. Locate the vcdb.properties file in C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\ and open the file in a text editor.

4. In the vcdb.properties file, locate the password of the vc database user and record it, as we will need it when we take the backup.
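For orientation, vcdb.properties is a small key = value file; the relevant entry looks something like this (key names from memory; values redacted/illustrative):

driver = org.postgresql.Driver
dbtype = PostgreSQL
url = jdbc:postgresql://localhost:5432/VCDB
username = vc
password = xxxxxxxx    <– this is the value to record for the backup script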

5. Download the Windows backup and restore package windows_backup_restore.zip attached to KB article 2091961 and unzip it on the host machine.

6. After unzipping you can see that there are two Python scripts, one for backup and another for restore.

7. Once the download and unzip are done, we are ready to run the command to take a backup of the database.

8. Before taking the backup, I just wanted to show my current inventory setup, for the record.

9. Open command prompt and navigate to C:\Program Files\VMware\vCenter Server\python directory to run the backup_win.py script.

C:\Program Files\VMware\vCenter Server\python> python.exe “c:\vCenter Server\backup_win.py” -p “<password recorded in step 4>” -f “c:\vcdb_backup_02july2015.bak”

10. When the backup completes, you will see a message that the backup completed successfully.

11. Now that the backup of the vCenter database has been taken, let me modify/delete inventory entries and then restore to get them back. As you can see in the screenshot below, I have deleted the cluster and datacenter entries.

Restore the vCenter Server vPostgres Database:-

1. Log in to the vCenter Server with administrative privileges. I have logged in as the service account used for vCenter Server.

2. Stop the vCenter Server service and the VMware Content Library service. When you stop the vCenter Server service, it stops all other services dependent on vCenter Server.

3. Locate the vcdb.properties file in C:\ProgramData\VMware\vCenterServer\cfg\vmware-vpx\ and open it in a text editor. Locate the password of the vc database user and record it, as we will need it for the restore.

4. Open a command prompt and run the restore script to restore the database from the last backup.

C:\Program Files\VMware\vCenter Server\python> python.exe “c:\vCenter Server\restore_win.py” -p “<password recorded in step 3>” -f “c:\vcdb_backup_02july2015.bak”

5. When the restore completes, you will see a message that the restore completed successfully.

6. Start the VMware Content Library service, the vCenter Server service, and all other related services.

7. Connect to the vCenter Server Web Client to check the status.
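If you prefer the command line over the Services console, the same stop/restore/start flow can be scripted with the bundled service-control tool (a sketch only; the service names vpxd and vdcs are as I recall them, so verify them first with service-control --help or in the Services console):

cd "C:\Program Files\VMware\vCenter Server\bin"
service-control --stop vpxd     REM stops vCenter Server and its dependent services
service-control --stop vdcs     REM Content Library service
"C:\Program Files\VMware\vCenter Server\python\python.exe" "c:\vCenter Server\restore_win.py" -p "<vc password>" -f "c:\vcdb_backup_02july2015.bak"
service-control --start vdcs
service-control --start vpxd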

That’s all 🙂 We are done with how to back up and restore the vCenter Server 6.0 embedded vPostgres database. Thank you 🙂

———————————————————————————————

Reference VMware KB to download the Windows backup and restore package/script:- http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2091961

Network Virtualization with VMware NSX – Part 6

In Network Virtualization with VMware NSX – Part 5 we discussed VXLAN to VLAN Layer 2 bridging, configuring and deploying an NSX Edge Gateway, and configuring routes (static routing) on the NSX Edge Gateway and on the Distributed Router. Here in Network Virtualization with VMware NSX – Part 6 we will discuss configuring dynamic routing (OSPF) on the Perimeter Gateway and on the Distributed Router.

As discussed, we configured static routing on both the Perimeter Gateway and the Distributed Router in Network Virtualization with VMware NSX – Part 5, so before configuring dynamic routing we need to delete those static routes.

Remove Static Routes from Perimeter Gateway and from Distributed Router:-

1. Connect to vCenter Server through the vSphere Web Client –> Click the Home tab –> Inventories –> Networking & Security and select NSX Edges.

2. In the edge list, double-click the Perimeter Gateway to open and manage that object. In the middle pane, click the Manage tab –> click Routing and click Static Routes.

3. In the Static Routes list, select the route to delete and click the red X icon. Click Publish Changes for the changes to take effect.

4. Once done, you’ll see that the selected static route has been deleted from the list.

5. Repeat steps 1-4 to delete the static routes from the Distributed Router.

So now we have deleted the static routes from both the Perimeter Gateway and the Distributed Router.

———————————————————————————————————-

Now we will configure dynamic routing (OSPF) on the Perimeter Gateway:-

1. Click the Home tab –> Inventories –> Networking & Security and NSX Edges. Double-click the Perimeter Gateway router to open and manage it.

2. Select Manage –> Routing –> Global Configuration, and under Dynamic Routing Configuration click Edit to edit the dynamic routing configuration.

3. In the Edit Dynamic Routing Configuration dialog box, select the Router ID from the list and click OK.

4. Click Publish Changes to apply the changes.

5. Once the changes are applied, you can see the Router ID and OSPF Enabled under Dynamic Routing Configuration.

6. Next we need to configure OSPF. In the routing category, select OSPF, and under Area Definitions verify that Area 0 exists. If Area 0 does not exist, we need to create it.

7. We can add more areas as needed. To add an area, click the green plus sign (+) under Area Definitions.

8. In the New Area Definition dialog box, enter the Area ID and click OK.

9. Click Publish Changes to apply the changes.

10. Once the changes are applied, you can see the Area ID in the Area Definitions list.

11. Once the Area ID has been created, we need to map interfaces to the specified area. To map an interface to an area, click the green plus sign (+) under Area to Interface Mapping.

12. Select the required vNIC, enter the Area ID into the Area box, and click OK.

13. Click Publish Changes to apply the changes.

14. Once the changes have been applied, you can see that the interface has been mapped to the specified area.

15. Repeat steps 11-14 to map all the required interfaces to the Area ID.

 

16. Once all the interfaces have been mapped to the required Area ID, we need to redistribute the Perimeter Gateway subnets. To do so, in the routing category select Route Redistribution, and under the Route Redistribution table click the green plus sign (+) to open the New Redistribution criteria dialog box.

17. In the New Redistribution criteria dialog box, under Allow learning from, select the Connected check box, set the Action to Permit, and click OK.

18. Click Publish Changes to apply the changes.

19. In the Route Redistribution Status at the top of the page, check whether a green check mark appears next to OSPF. If it does not, click Edit and enable OSPF.

20. In the Change redistribution settings dialog box, check the OSPF check box and click OK.

21. Once the changes are done, you can see the green check mark next to OSPF.

———————————————————————————————

Now we will configure OSPF on the Distributed Router:-

1. Click the Home tab –> Inventories –> Networking & Security and NSX Edges. Double-click the Distributed Router to open and manage it.

2. Select Manage –> Routing –> Global Configuration, and under Dynamic Routing Configuration click Edit to edit the dynamic routing configuration.

3. In the Edit Dynamic Routing Configuration dialog box, select the Router ID from the list and click OK.

4. Click Publish Changes to apply the changes.

5. Once the changes are applied, you can see the Router ID and OSPF Enabled under Dynamic Routing Configuration.

6. Next we need to configure OSPF. In the routing category, select OSPF, and on the right side of the OSPF Configuration panel click Edit to open the OSPF Configuration dialog box.

7. In the OSPF Configuration dialog box, select the Enable OSPF check box, enter the Protocol Address and the Forwarding Address, and click OK. (The protocol address is used by the DLR control VM to form OSPF adjacencies, while the forwarding address is the data-path address of the interface.)

8. We can add more areas as needed. To add an area, click the green plus sign (+) under Area Definitions.

9. In the New Area Definition dialog box, enter the Area ID and click OK. Then click Publish Changes to apply the changes.

10. Once the Area ID has been created, we need to map interfaces to the specified area. To map an interface to an area, click the green plus sign (+) under Area to Interface Mapping.

11. Select the required interface, enter the Area ID into the Area box, and click OK. Then click Publish Changes to apply the changes.

12. After the changes have been published, verify that the OSPF Configuration Status is Enabled.

13. Once all the interfaces have been mapped to the required Area ID, we need to redistribute the Distributed Router internal subnets. To do so, in the routing category select Route Redistribution, and under the Route Redistribution table click the pencil icon to open the Edit Redistribution criteria dialog box; verify that the settings are configured as: Prefix Name: Any, Learner Protocol: OSPF, Allow Learning From: Connected, and Action: Permit.

If the default route redistribution entry does not appear in the list, we need to create a new entry by clicking the green plus sign (+) and configuring the table accordingly.

That’s it! We have finished configuring dynamic routing (OSPF) on the Perimeter Gateway and on the Distributed Router.

In the next part, Network Virtualization with VMware NSX – Part 7, we will discuss Network Address Translation (NAT) and Load Balancing with the NSX Edge Gateway.

Thank You and Keep sharing 🙂

————————————————————————————————————–

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 5

In Network Virtualization with VMware NSX – Part 4 we discussed configuring and deploying an NSX Distributed Router. Here in Network Virtualization with VMware NSX – Part 5 we will discuss VXLAN to VLAN Layer 2 bridging, configuring and deploying an NSX Edge Gateway, and configuring routes (static routing) on the NSX Edge Gateway and on the Distributed Router.

VXLAN to VLAN Layer 2 Bridging

A VXLAN to VLAN bridge enables direct Ethernet connectivity between virtual machines in a logical switch and virtual machines in a distributed port group. This connectivity is called Layer 2 bridging.

We can create a Layer 2 bridge between a logical switch and a VLAN, which enables us to migrate virtual workloads to physical devices with no effect on IP addresses. A logical network can leverage a physical gateway and access existing physical network and security resources by bridging the logical switch broadcast domain to the VLAN broadcast domain. Bridging can also be used in a migration strategy where you might be using P2V and do not want to change subnets.

Note:- VXLAN to VXLAN bridging or VLAN to VLAN bridging is not supported. Bridging between different data centers is also not supported. All participants of the VLAN and VXLAN bridge must be in the same data center.

NSX Edge Services Gateway

The services gateway gives you access to all NSX Edge services such as firewall, NAT, DHCP, VPN, load balancing, and high availability. You can install multiple NSX Edge services gateway virtual appliances in a datacenter. Each NSX Edge virtual appliance can have a total of ten uplink and internal network interfaces.

The NSX Edge logical router provides East-West routing, while the NSX Edge Services Gateway provides North-South routing.

NSX Edge Services Gateway Sizing:-

NSX Edge can be deployed in four different configurations. When we deploy an NSX Edge gateway we need to choose the right size as per the load/requirements. We can also convert the size of the ESG later from Compact to Large, X-Large, or Quad Large, as you can see in the picture.

Note :- A service interruption might occur while the old NSX Edge gateway instance is removed and the new NSX Edge gateway instance is redeployed with the new size, i.e. when we convert the size of the ESG.

NSX Edge Services Gateway features:-

For resiliency and high availability, the NSX Edge Services Gateway can be deployed as a pair of active/standby units (HA mode).

When we deploy the ESG/DLR in HA mode, NSX Manager deploys the pair of NSX Edges/DLRs on different hosts (anti-affinity rule). Heartbeat keepalives are exchanged every second between the active and standby Edge instances to monitor each other’s health status.

If the ESXi server hosting the active NSX Edge fails, then at the expiration of the “Declare Dead Time” timer the standby node takes over the active duties. The default value for this timer is 15 seconds, but it can be tuned down (via the UI or API calls) to 6 seconds.
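For completeness, the same timer can also be changed through the NSX REST API; a sketch (the endpoint and XML element names are given as I remember them from the NSX 6.x API guide — double-check there before use):

curl -k -u admin:password -X PUT -H "Content-Type: application/xml" \
  -d "<highAvailability><declareDeadTime>6</declareDeadTime></highAvailability>" \
  https://nsx-manager/api/4.0/edges/edge-1/highavailability/config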

NSX Manager also monitors the state of health of the deployed NSX Edges and ensures that a failed unit is restarted on another ESXi host.

The NSX Edge appliance supports static and dynamic routing (OSPF, IS-IS, BGP, and Route redistribution).

Deploying the NSX Edge gateway and configuring static routing:

1. Connect to vCenter Server through the vSphere Web Client –> Click the Home tab –> Inventories –> Networking & Security and select NSX Edges.

2. Click the green plus sign (+) to open the New NSX Edge dialog box. On the Name and description page, select Edge Services Gateway. (If you want to enable HA for the ESG, select the Enable High Availability check box; otherwise leave it unchecked.) Enter the name of the ESG as per your company standard and click Next.

3. On the CLI credentials page, enter the password for the ESG in the password text box. Check the Enable SSH Access box to enable SSH access to the ESG appliance. Note:- The password length must be at least 12 characters.

4. Select the datacenter where you want to deploy this appliance. Select the appliance size depending on your requirements (we can also convert it to any size later). Check Enable auto rule generation to automatically generate service rules to allow the flow of control traffic.

Under NSX Edge Appliances, click the green plus sign (+) to open the Add NSX Edge Appliance dialog box.

5. In the Add NSX Edge Appliance dialog box, select the cluster and datastore to deploy the NSX Edge appliance in the required location and designated datastore, and click OK.

6. Verify all the settings on the Configure deployment page and click Next.

7. On the Configure interfaces page, click the green plus sign (+) to open the Add NSX Edge Interface dialog box.

8. Enter the interface name in the Name text box, choose the type, click the Connected To –> Select link, and choose the required distributed port group. Click the green plus sign (+) under Configure Subnets to add a subnet for the interface.

9. In the Add Subnet dialog box, click the green plus sign (+) to add an IP address field. Enter the required IP address (192.168.100.3) in the IP Address text box and click OK to confirm the entry. Enter the subnet prefix length (24) in the Subnet prefix length text box and click OK.

10. Verify all the settings in the Add NSX Edge Interface dialog box and click OK.

11. Repeat steps 7-10 to add all the required interfaces for the ESG and click Next.

12. Once all the interfaces have been added, verify the settings on the Configure interfaces page and click Next.

13. On the Default gateway settings page, select the Configure Default Gateway check box. Verify that the vNIC selection is Uplink-Interface, enter the default gateway address (192.168.100.2) in the Gateway IP text box, and click Next.

14. On the Firewall and HA page, select the Configure Firewall default policy check box and set the default traffic policy to Accept. You can see that the Configure HA parameters are grayed out because we did not check the Enable High Availability check box in step 2. Click Next.

15. On the Ready to complete page, verify all the settings (if you want to change anything, go back and change it) and click Finish to complete the deployment of the NSX Edge.

16. It will take a few minutes to complete the deployment. Under NSX Edges you can now see that it is showing as Deployed.

17. Double-click the NSX Edge and you can see the configuration settings we chose while deploying it.

Now we will configure static routes on the NSX Edge Gateway:-

1. Double-click the NSX Edge –> click the Manage tab –> click Routing and select Static Routes. Click the green plus sign (+) to open the Add Static Route dialog box.

2. Select the interface connected to the DLR (Transit-Interface), enter the network ID with subnet mask (172.16.0.0/24) for which you want to add routing and the next hop address for the configured network (in my case 192.168.10.2), and click OK.

3. After every setting or modification we need to publish the changes. Click Publish Changes.

4. Once publishing has finished, you can see the entry under Static Routes.

Configure Static Routes on the Distributed Router:-

1. Under Networking & Security –> NSX Edges –> double-click the Distributed Router entry to manage that object.

2. After browsing to the DLR, click the Manage and Routing tabs. In the routing category panel, select Static Routes and click the green plus sign (+) to add static routes on the DLR.

3. Select the interface connected to the ESG (Transit-Interface), enter the network ID with subnet mask (192.168.110.0/24) for which you want to add routing and the next hop address for the configured network (in my case 192.168.10.1), and click OK.

4. After every setting or modification we need to publish the changes. Click Publish Changes. Once done, you can see the static route in the Static Routes list.

Once static routing has been configured, we will be able to ping the logical switch networks from the external network, e.g. from the external network host 192.168.110.10 to the three logical switch networks created in Part 2 (172.16.0.0/24). A quick verification sketch follows.
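From a machine on the external network (the gateway IPs below are the ones configured earlier in this post; the web server address 172.16.10.11 is a hypothetical example of a VM on one of the logical switches):

ping 172.16.10.11       REM VM on the Web-Tier logical switch (hypothetical address)
tracert 172.16.10.11    REM should show the ESG uplink (192.168.100.3) and the DLR transit interface (192.168.10.2) as intermediate hops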

That’s it. We are done with deploying the NSX Distributed Router and NSX Edge Services Gateway, and with configuring static routing on the DLR and ESG.

In the next part (Network Virtualization with VMware NSX – Part 6) we will discuss how to configure dynamic routing on the NSX Edge appliances and the NSX Distributed Router.

Thank you and stay tuned for next part. Keep sharing the knowledge 🙂

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

VMware NSX- How to Delete/Remove NSX Logical Switch

Recently I was trying to remove/delete one of the NSX logical switches in my lab. While trying to remove/delete the logical switch, I got this error:

As per the error message, some resources are still connected to this logical switch, which is why we get the error that the DB-Tier resources are still in use. We need to remove any connected virtual machines from this logical switch, and then we’ll be able to remove/delete the NSX logical switch.

So the first thing we need to check is which VMs/resources are using the NSX logical switch.

Connect to vCenter (vSphere Web Client) –> Networking and Security –> Logical Switches –> and from the right pane double-click the logical switch we are trying to remove. As you can see in the screen below, one virtual machine is connected to the DB-Tier NSX logical switch. We have two options to remove this VM from the logical switch:

1. Migrate the virtual machine to another port group, or

2. Delete the virtual NIC from the VM (which is not good practice).

So here we are going to migrate the DB-sv-01a VM from the DB-Tier logical switch to another virtual machine port group.

Now, after migrating the VM, we try to remove/delete the NSX logical switch again.

Now we are able to remove/delete the NSX logical switch.

That’s all. I hope this will be informative for others. Thank you!!

Network Virtualization with VMware NSX – Part 4

We discussed Virtual LAN (VLAN), Virtual Extensible LAN (VXLAN), Virtual Tunnel End Point (VTEP), VXLAN replication modes, and NSX Logical Switching in Network Virtualization with VMware NSX – Part 3. Here in Part 4 we will discuss NSX Routing.

NSX Routing :-

The TCP/IP protocol suite offers different routing protocols that provide a router with methods for building valid routes. The following routing protocols are supported by NSX:

Open Shortest Path First (OSPF): This protocol is a link-state protocol that uses a link-state routing algorithm. This protocol is an interior routing protocol.
Intermediate System to Intermediate System (IS-IS): This protocol determines the best route for datagrams through a packet switched network.
Border Gateway Protocol (BGP): This protocol is an exterior gateway protocol that is designed to exchange routing information between autonomous systems (AS) on the Internet.

NSX Logical Router:-

The NSX Edge logical router provides East-West distributed routing with tenant IP address space and data path isolation. Virtual machines or workloads that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface. A logical router can have eight uplink interfaces and up to a thousand internal interfaces.

During the configuration process, NSX Manager deploys the logical router control virtual machine and pushes the logical interface configurations to each host through the control cluster. The logical router control virtual machine is the control plane component of the routing process and supports the OSPF and BGP protocols. The distributed logical router itself runs as a kernel module on each host.

The NSX Controller cluster is responsible for distributing routes learned from the logical router control virtual machine across the hypervisors. Each control node in the cluster takes responsibility for distributing the information for a particular distributed logical router instance. In a deployment where multiple distributed logical router instances are deployed, the load is distributed across the NSX Controller nodes.
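You can observe this slicing from the controller CLI; a minimal sketch (command as documented for NSX-v controller troubleshooting; the instance names in the output will vary per deployment):

# On an NSX Controller node, list the DLR instances known to the
# cluster and see which controller node is serving each instance:
show control-cluster logical-routers instance all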

The distributed logical router owns the logical interfaces (LIFs). This concept is similar to interfaces on a physical router, except that a distributed logical router can have a maximum of 1,000 LIFs. For each segment that the distributed logical router is connected to, it has one ARP table.

When a LIF is connected to a VLAN, the LIF has a pMAC; when a LIF is connected to a VXLAN, the LIF has a vMAC.

NOTE :- You can have only one VXLAN LIF connecting to a logical switch. Only one distributed logical router can be connected to a logical switch.

DLR high availability:- When high availability is enabled, NSX Manager has the vCenter Server system deploy two logical router control virtual machines and designates one as active and one as passive. If the active logical router control virtual machine fails, the passive one takes over in about 15 seconds. Because the control virtual machine is not in the data plane, data plane traffic is not affected.
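If SSH was enabled during deployment, you can later confirm the HA state from the control VM console; a minimal sketch using the standard NSX Edge CLI:

# On the logical router control VM, display the active/standby HA status:
show service highavailability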

Configuring and Deploying an NSX Distributed Router:-

1. Connect to vCenter Server through the vSphere Web Client –> Home –> Inventories –> Networking & Security.

2. In the left navigation pane, select NSX Edges.

3. In the center pane, click the green plus sign (+) to open the New NSX Edge dialog box.

4. In the New NSX Edge dialog box, on the Name and description page, click the Logical (Distributed) Router button. Enter the name of the distributed router in the Name text box, enter the hostname, description, and tenant name for the DLR, and click Next.
5. On the Settings page, enter the password for the DLR and enable SSH access if required. If you want the DLR in high availability mode, check the Enable High Availability box, then click Next. Note:- The password must be at least 12 characters long.

6. On the Configure deployment page, verify that you have selected the required datacenter.

7. Under NSX Edge Appliances, click the green plus sign (+) to open the Add NSX Edge Appliance dialog box. Select the required cluster/resource pool, datastore, host, and folder to deploy the DLR. (If you checked the High Availability option, two distributed router appliances will be deployed.) Click OK to close the Add NSX Edge Appliance dialog box.

8. Verify the NSX Edge appliance settings and click Next.

9. On the Configure interfaces page, click the Connected To –> Select link under Management Interface Configuration, select the required port group under Distributed Portgroup, and click OK.

10. Under Configure Interfaces of this NSX Edge, click the green plus sign (+) to open the Add Interface dialog box. Note:- As discussed in Part 3, we are configuring the DLR with four interfaces (Transit, Web, App, and Database), so we need to add four interfaces.

11. In the Add Interface dialog box, enter the name of the interface, select the type, click the Select link next to Connected To, choose the desired logical switch, and click OK.

12. Now click the green plus sign (+) under Configure Subnets to add a subnet for the interface. In the Add Subnet box, click the green plus sign (+) to add the IP address and subnet mask, and click OK.

13. Once the subnets have been added, click OK to complete the Add Interface dialog.

14. Repeat steps 11-13 to add and configure the other three interfaces (Web, App, and Database).

15. Once the remaining three interfaces have been added and configured, click Next to proceed.

16. On the Ready to complete page, review the configuration and click Finish to start deploying the Logical (Distributed) Router.

17. It will take some time to complete the deployment of the Logical (Distributed) Router.

18. Verify that the Distributed Router entry has a type of Logical Router. Double-click the Distributed Router entry to manage that object, click the Manage tab –> Settings –> Interfaces, and check that the status of all four interfaces is green.

19. Under Configuration you can see that two logical router appliances have been deployed, because we chose to deploy in HA mode. You can also verify this from the cluster view.

20. Now, after deploying the DLR with all four interfaces, you can test connectivity between all the VMs using ping.
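For example, a quick check (the VM address is an assumption from this lab's addressing plan, and the DLR instance name will differ per deployment):

# From a web-tier VM, ping an app-tier VM on a different subnet;
# the DLR now routes between the logical switches:
ping 172.16.20.11

# On any prepared ESXi host, list the DLR instances and their LIFs
# (the instance name shown is an assumption):
net-vdr --instance -l
net-vdr --lif -l default+edge-1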

In the next part (Network Virtualization with VMware NSX – Part 5) we will discuss how to configure and deploy an NSX Edge Gateway and how to configure routes on the NSX Edge Gateway and on the Distributed Router.

—————————————————————————————————-

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Please share if useful …..Thank You :)

 

Network Virtualization with VMware NSX – Part 3

In Network Virtualization with VMware NSX – Part 2 we discussed the NSX Controller cluster, how to deploy the NSX Controller instances, how to create an IP pool, and how to install the network virtualization components (prepare hosts) on vSphere hosts.

In this part we will discuss logical switch networks and VXLAN overlays.

Before discussing VXLAN, let's talk a bit about Virtual LAN (VLAN):-

A VLAN is a group of devices on one or more LANs that are configured to communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible.

VLANs address scalability, security, and network management by enabling a switch to serve multiple virtual subnets from its LAN ports.

VLANs split a switch into separate virtual switches (broadcast domains). Only members of a VLAN can see that VLAN’s traffic. Traffic between VLANs must go through a router.

By default, all ports on a switch are in a single broadcast domain. VLANs enable a single switch to serve multiple switching domains. The forwarding table on the switch is partitioned between the ports belonging to a common VLAN. By default, all ports on a switch are part of a single default VLAN (VLAN 1 on most switches), and this default VLAN acts as the native VLAN.

Virtual Extensible LAN (VXLAN) enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks.

VXLAN is an Ethernet-in-IP overlay technology, where the original layer 2 frame is encapsulated in a User Datagram Protocol (UDP) packet and delivered over a transport network. This technology provides the ability to extend layer 2 networks across layer 3 boundaries and consume capacity across clusters. VXLAN adds 50 to 54 bytes of information to the frame, depending on whether VLAN tagging is used. VMware recommends increasing the MTU to at least 1,600 bytes to support NSX.
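You can verify that the transport network actually carries the larger frames from any prepared host; a minimal sketch (the destination VTEP IP is an assumption for this lab):

# Send a non-fragmentable 1600-byte packet through the VXLAN netstack:
# 1572 bytes of data + 8-byte ICMP header + 20-byte IP header = 1600 bytes.
vmkping ++netstack=vxlan -d -s 1572 192.168.250.52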

A VXLAN Network Identifier (VNI) is a 24-bit number that gets added to the VXLAN frame. The 24-bit address space theoretically enables up to 16 million VXLAN networks. Each VXLAN network is an isolated logical network. VMware NSX™ starts with VNI 5000.

A Virtual Tunnel End Point (VTEP) is an entity that encapsulates an Ethernet frame in a VXLAN frame or de-encapsulates a VXLAN frame and forwards the inner Ethernet frame.

VXLAN Frame :-

The top frame is the original frame from the virtual machine, minus the Frame Check Sequence (FCS), encapsulated in a VXLAN frame. A new FCS is created by the VTEP to cover the entire VXLAN frame. The VLAN tag in the outer layer 2 Ethernet header exists if the port group that your VXLAN VMkernel port is connected to has an associated VLAN number; in that case, the port group tags the VXLAN frame with that VLAN number.

VXLAN Replication Modes:-

Three modes of traffic replication exist: two modes rely on the VMware NSX Controller™ and one mode relies on the data plane.

Unicast mode has no physical network requirements apart from the MTU. All traffic is replicated by the VTEPs. In NSX, the default mode of traffic replication is unicast. Unicast has higher overhead on the source VTEP and UTEP.

In multicast mode, the VTEP never contacts the NSX Controller instance. As soon as the VTEP receives broadcast traffic, it sends the traffic to the multicast group and the physical network replicates it to all other VTEPs. Multicast has the lowest overhead on the source VTEP.

Hybrid mode is not the default mode of operation in NSX for vSphere, but it is important for larger-scale deployments: local replication is offloaded to the physical network using L2 multicast, while unicast is used between VTEP segments. The configuration overhead and complexity of L2 IGMP is significantly lower than that of multicast routing.

In Network Virtualization with VMware NSX – Part 2 we prepared the hosts, so now let's configure VXLAN on the ESXi hosts.

1. Connect to vCenter using the Web Client.

2. Click Networking & Security and then click Installation.

3. Click the Host Preparation tab and, under the VXLAN column, click Configure to start configuring VXLAN on the ESXi hosts.

4. In the Configure VXLAN networking dialog box, select the switch and VLAN, set the MTU to 1600, and for VMKNic IP addressing either choose an existing IP pool from the list or click New IP Pool to create a new one. Then click OK.


5. It will take a few minutes to configure, depending on the number of hosts in the cluster. If an error is indicated, it is a transitory condition that occurs early in the process of applying the VXLAN configuration to the cluster; the vSphere Web Client interface has simply not updated to display the actual status. Click Refresh to update the console.

6. Repeat the steps to configure all the clusters. Once configuration is done on all clusters, verify that the VXLAN status is Enabled with a green check mark.

7. Once the VXLAN configuration is done for all the clusters and the VXLAN status is Enabled with a green check mark, click the Logical Network Preparation tab and verify that VXLAN Transport is selected. In the Clusters and Hosts list, expand each of the clusters and confirm that each host has a vmk# interface created with an IP address from the IP pool we created.
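The same verification can be done from the ESXi shell; a small sketch (assuming the VTEP vmknics live in the dedicated vxlan netstack, which NSX configures by default):

# List the VMkernel interfaces in the VXLAN netstack and confirm each
# host received a VTEP vmknic with an IP address from the pool:
esxcli network ip interface list --netstack=vxlan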

Once we have finished configuring VXLAN and verified the VXLAN configuration for all the clusters, we next need to configure the segment ID pool to identify VXLAN networks:-

1. On the Logical Network Preparation tab, click the Segment ID button, then click Edit to open the Segment ID pool dialog box and configure the ID pool.

2. Enter the segment ID pool and click OK to complete. VMware NSX™ starts VNI IDs at 5000.

Next we need to configure a global transport zone:-

A transport zone specifies the hosts and clusters that are associated with logical switches created in the zone. Hosts in a transport zone are automatically added to the logical switches that you create. This process is very similar to manually adding hosts to VMware vSphere Distributed Switch.

1. On the Logical Network Preparation tab, click Transport Zones and click the green plus sign to open the New Transport Zone dialog box.

2. Enter the name for the transport zone, select the control plane mode, select the clusters to add to the transport zone, and click OK to complete the creation.

———————————————————————————————————-

NSX Logical Switching

The logical switching capability in the NSX platform provides customers the ability to spin up isolated logical L2 networks with the same flexibility and agility they have when spinning up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently of the specific location where they are deployed in the data center network. This is possible because of the decoupling between the network infrastructure and the logical networks provided by NSX network virtualization. Each logical switch gets its own unique VNI.

The deployment of the NSX virtualization components enables the agile and flexible creation of applications with their required network connectivity and services. A typical example is the creation of a multi-tier application.

Configure Logical Switch Networks

We need to create logical switches for all the required networks (e.g. the Transit, Web-Tier, App-Tier, and DB-Tier networks, as per the picture above).

1. Connect to vCenter Server using the Web Client, click Networking and Security, and select Logical Switches in the left navigation pane.

2. Click the green plus sign to open the New Logical Switch dialog box. Enter the logical switch name, select the global transport zone we created earlier, choose the control plane mode, and click OK to complete the switch creation.

3. Wait for the update to complete and confirm that Transit-Network appears with a status of Normal. Repeat the steps to create all required logical switches and confirm they are all Normal.

Once the logical switches have been created, we need to migrate virtual machines to the logical switches:-

1. In the left pane under Networking & Security, select Logical Switches. In the center pane, select the logical switch (e.g. Web-Tier) –> right-click and choose Add VM.

2. Select the virtual machines you want to add to the logical switch and click Next.

3. Select the vNIC you want to add to the network and click Next.

4. In the Ready to complete box, verify the settings and click Finish to complete adding the VMs to the desired network.

5. To verify that the VMs have been added to the logical switch, double-click the logical switch.

6. Click Related Objects and the Virtual Machines tab, and you can see the list of VMs added to this specific logical switch.

7. Repeat the same steps for all the logical switches to add VMs. Once done, try to ping between VMs on the same switch and between switches.

At this point you can only ping VMs connected to the same switch. For VMs on different switches to communicate, we need to configure routing, which we will discuss in the next part.
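A quick illustration of the expected behavior (the addresses are assumptions from this lab's addressing plan):

# From a web-tier VM, a neighbor on the same logical switch replies:
ping 172.16.10.12

# A VM on the DB-Tier logical switch does not respond until routing
# is configured in the next part:
ping 172.16.30.11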

======================================================

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5


Please share if useful …..Thank You 🙂

VMware VCP6-DCV Certification is available now!!

The much-awaited VCP6-DCV certification has been announced by VMware.

VCP6-DCV Logo

The VMware Certified Professional 6 – Data Center Virtualization (VCP6-DCV) exam validates the skills required to install, configure, administer, and scale a virtualized data center on VMware vSphere 6.

For existing VCP5-DCV certified professionals, the course is recommended but not required. You can go directly for the VCP6-DCV beta exam (VMware Certified Professional 6 – Data Center Virtualization Beta Exam).

To know more about the beta exam, Click Here. And to request Exam Authorization, Click Here.

 

Version 6 Certification Roadmap:-

 


Good time to upgrade yourself .. Good Luck !!!!!

Customers will only pay for value and not technology