Network Virtualization with VMware NSX – Part 7

In my last blog, Network Virtualization with VMware NSX – Part 6, we discussed static and dynamic routing. Here in Network Virtualization with VMware NSX – Part 7, we will discuss Network Address Translation (NAT) and load balancing with the NSX Edge Gateway.

Network Address Translation (NAT)

Network Address Translation (NAT) is the process by which a network device assigns a public address to a computer (or group of computers) inside a private network. The main use of NAT is to limit the number of public IP addresses an organization must use, for both economic and security reasons.

Three blocks of IP addresses are reserved for private use, and these private IP addresses cannot be advertised on the public Internet:

10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255.

The private addressing scheme works well for computers that only have to access resources inside the network, such as workstations needing access to file servers and printers. Routers inside the private network can route traffic between private addresses with no trouble. However, to access resources outside the network, such as the Internet, these computers must have a public address so that responses to their requests can return to them. This is where NAT comes into play.

Another example is a public cloud, where multiple tenants run their workloads in private IP address ranges. Hosts assigned private IP addresses cannot communicate with other hosts across the Internet. The solution to this problem is to use network address translation (NAT) with private addressing.

NSX Edge provides a network address translation (NAT) service to assign a public address to a computer or group of computers in a private network. The NSX Edge service supports two types of NAT: SNAT and DNAT.

Source NAT (SNAT) is used to translate a private internal IP address into a public IP address for outbound traffic. The picture below depicts the NSX Edge gateway translating the Test-Network addresses 192.168.1.2 through 192.168.1.4 to 10.20.181.171. This technique is called masquerading, where multiple private IP addresses are translated into a single host IP address.

Destination NAT (DNAT) is commonly used to publish a service located in a private network on a publicly accessible IP address. The picture below depicts NSX Edge NAT publishing the Web server 192.168.1.2 on an external network as 10.20.181.171. The rule translates the destination IP address in the inbound packet to the internal IP address and forwards the packet.
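To make the distinction concrete, here is a minimal conceptual sketch in Python of the two translations described above. It is purely illustrative (the addresses come from the example figures) and is not NSX code.

# Conceptual sketch of the SNAT/DNAT behaviour described above -- illustrative only,
# not NSX code. Addresses are taken from the example figures.

SNAT_INTERNAL = {"192.168.1.2", "192.168.1.3", "192.168.1.4"}   # Test-Network hosts
PUBLIC_IP = "10.20.181.171"                                     # Edge uplink address
PUBLISHED_SERVER = "192.168.1.2"                                # internal Web server

def snat(source_ip: str) -> str:
    """Outbound: many private source addresses masquerade as one public address."""
    return PUBLIC_IP if source_ip in SNAT_INTERNAL else source_ip

def dnat(destination_ip: str) -> str:
    """Inbound: traffic for the published public address is forwarded to the internal server."""
    return PUBLISHED_SERVER if destination_ip == PUBLIC_IP else destination_ip

print(snat("192.168.1.3"))    # -> 10.20.181.171 (masquerading)
print(dnat("10.20.181.171"))  # -> 192.168.1.2  (published Web server)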

Configuring Network Address Translation (SNAT and DNAT) on an NSX Edge Services Gateway:

1. Connect to vCenter Server through the vSphere Web Client –> click the Home tab –> Inventories –> Networking & Security –> NSX Edges –> and double-click the NSX Edge.

2. Under the NSX Edge router –> click the Manage tab –> click the NAT tab –> click the green plus sign (+) and select Add DNAT Rule or Add SNAT Rule, whichever you would like to add.

3. In the Add DNAT Rule dialog box, select the Uplink-Interface from the Applied On drop-down menu. Enter the public IP address in the Original IP/Range text box, enter the internal destination in the Translated IP/Range text box, select the Enabled check box to enable the DNAT rule, and click OK to add the rule.

4. Click Publish Changes to push the rule to the NSX Edge.

5. Once the changes are published, you can see that the rule has been added to the rule list.

6. To test connectivity using the destination NAT translation, SSH (PuTTY) to the NSX Edge router with the admin account and run one of the following commands to begin capturing packets on the Transit-Interface.

debug packet display interface vNic_1 port_80
debug packet display interface vNic_0 icmp

The first command captures packets on interface vNic_1 for TCP port 80; the second captures packets on interface vNic_0 for ICMP traffic.
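While the capture is running, you can generate matching traffic from a client outside the Edge. The short Python sketch below sends an HTTP request to the published address used in this example (10.20.181.171); substitute the original IP of your own DNAT rule.

# Generate test traffic toward the published (DNAT) address while the
# "debug packet display" capture runs on the Edge. The address below is the
# example used in this post; replace it with your own DNAT original IP.
import urllib.request

try:
    response = urllib.request.urlopen("http://10.20.181.171/", timeout=5)
    print("HTTP status:", response.status)
except OSError as error:
    print("Request failed:", error)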

In the same way, we can add SNAT rules for outgoing traffic.
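If you prefer automation over the Web Client, the same rules can be pushed through the NSX Manager REST API. The sketch below is an approximation only: the NSX Manager address, edge ID, credentials, endpoint path, and XML field names are assumptions based on the NSX for vSphere API guide, so verify them against the API documentation for your NSX release before using it.

# Hedged sketch: adding the DNAT/SNAT rules through the NSX Manager REST API
# instead of the Web Client. The endpoint path, edge ID, credentials and XML
# field names are assumptions -- check them against the NSX API guide.
import requests

NSX_MGR = "https://nsxmgr.example.local"       # assumed NSX Manager address
EDGE_ID = "edge-1"                             # assumed Edge identifier
AUTH = ("admin", "password")                   # replace with real credentials

NAT_RULES_XML = """
<natRules>
  <natRule>
    <action>dnat</action>
    <vnic>0</vnic>
    <originalAddress>10.20.181.171</originalAddress>
    <translatedAddress>192.168.1.2</translatedAddress>
    <enabled>true</enabled>
  </natRule>
  <natRule>
    <action>snat</action>
    <vnic>0</vnic>
    <originalAddress>192.168.1.2-192.168.1.4</originalAddress>
    <translatedAddress>10.20.181.171</translatedAddress>
    <enabled>true</enabled>
  </natRule>
</natRules>
"""

response = requests.post(
    f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/nat/config/rules",  # assumed endpoint path
    data=NAT_RULES_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab only; use proper certificates in production
)
print(response.status_code)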

——————————————————————————————————

NSX Edge Load Balancer

Load balancing is another network service available within NSX that can be natively enabled on the NSX Edge device. The two main drivers for deploying a load balancer are scaling out an application (load is distributed across multiple backend servers) and improving its high-availability characteristics (servers or applications that fail are automatically removed from the pool).

The NSX Edge load balancer distributes incoming service requests evenly among multiple servers in such a way that the load distribution is transparent to users. Load balancing thus helps in achieving optimal resource use, maximizing throughput, minimizing response time, and avoiding overload. NSX Edge provides load balancing up to layer 7.
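As a quick illustration of the round-robin distribution we will configure for the server pool later in this post, here is a minimal Python sketch; it is purely conceptual and not NSX code.

# Minimal illustration of round-robin distribution (the algorithm chosen for the
# server pool later in this post). Purely conceptual -- not NSX code.
from itertools import cycle

web_servers = ["192.168.1.2", "192.168.1.3"]   # example pool members
next_server = cycle(web_servers)

for request_id in range(4):
    print(f"request {request_id} -> {next(next_server)}")
# request 0 -> 192.168.1.2
# request 1 -> 192.168.1.3
# request 2 -> 192.168.1.2
# request 3 -> 192.168.1.3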

Note: The NSX platform can also integrate load-balancing services offered by third-party vendors.

NSX Edge supports two types of deployment: one-arm mode (also called proxy mode) and inline mode (also called transparent mode).

One-arm mode (called proxy mode)

The one-arm load balancer has advantages and disadvantages. The advantage is that the design is simple and can be deployed easily. The main disadvantage is that you must have a load balancer per segment, which can lead to a large number of load balancer instances.

So when you design and deploy, weigh both factors and choose the mode that fits your requirements.

Inline mode (called transparent mode)

The advantage of using Inline mode is that the client IP address is preserved because the proxies are not doing source NAT. This design also requires fewer load balancers because a single NSX Edge instance can service multiple segments.
With this configuration, you cannot have a distributed router because the Web servers must point at the NSX Edge instance as the default gateway.

Configuring Load Balancing with NSX Edge Gateway

1. Connect to vCenter Server through the vSphere Web Client –> click the Home tab –> Inventories –> Networking & Security –> NSX Edges –> and double-click the NSX Edge.

2.  Under the Manage tab, click Load Balancer. In the load balancer category panel, select Global Configuration.

3. Under Load balancer global configuration, click Edit to open the Edit load balancer global configuration page, select the Enable Loadbalancer check box, and click OK.

4. Once the load balancer has been enabled, you can see the green tick mark for Enable Loadbalancer.

5. Next, we need to create an application profile. In the load balancer category panel, select Application Profiles –> click the green plus sign (+) to open the New Profile dialog box.

6. In the New Profile dialog box, enter the name –> select the protocol type (HTTPS) –> select the Enable SSL Passthrough check box and click OK.

7. Once the application profile has been created, you can see its profile ID and name in the list.

8. Next, we have to create a server pool. I am going to create a round-robin server pool that contains the two Web server virtual machines as members serving HTTPS.

9. In the load balancer category panel, select Pools –> Click the green plus sign (+) to open the New Pool dialog box.

10. In the New Pool dialog box, enter the server pool name in the text box –> select the algorithm ROUND-ROBIN –> under Members, click the green plus sign (+) to open the New Member dialog box, and add all Web servers as members.

11. Once all members have been added to the server pool, verify the entries and click OK.

12. Once the pool has been added, you can see the pool ID and pool name with the configured algorithm in the list.

13. Next, we need to create a virtual server. Select Virtual Servers –> click the green plus sign (+) to open the New Virtual Server dialog box.

14. In the New Virtual Server dialog box, select the Enabled check box –> enter the virtual server name –> enter the IP address of the interface –> select the protocol (HTTPS) –> enter the port number for HTTPS (443) –> select the pool and application profile created earlier and click OK.

15. Once done, you can see the virtual server name with all its configured details in the list.
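For completeness, here is a hedged sketch of creating the same three objects (application profile, pool, and virtual server) through the NSX Manager REST API instead of the Web Client. The NSX Manager address, edge ID, credentials, endpoint paths, object IDs, and XML field names are assumptions based on the NSX for vSphere API guide; verify them against the documentation for your NSX release.

# Hedged sketch: load balancer objects via the NSX Manager REST API. Endpoint
# paths, object IDs and XML field names are assumptions -- check the API guide.
import requests

NSX_MGR = "https://nsxmgr.example.local"   # assumed NSX Manager address
EDGE_ID = "edge-1"                         # assumed Edge identifier
AUTH = ("admin", "password")               # replace with real credentials
BASE = f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/loadbalancer/config"

def post_xml(path: str, xml: str) -> None:
    """POST an XML payload to a load balancer sub-resource (lab use, no cert check)."""
    r = requests.post(f"{BASE}/{path}", data=xml,
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    print(path, r.status_code)

# 1. Application profile: HTTPS with SSL passthrough (as in step 6 above).
post_xml("applicationprofiles", """
<applicationProfile>
  <name>Web-HTTPS-Profile</name>
  <template>HTTPS</template>
  <sslPassthrough>true</sslPassthrough>
</applicationProfile>
""")

# 2. Round-robin pool containing the two Web servers (as in steps 8-12 above).
post_xml("pools", """
<pool>
  <name>Web-Pool</name>
  <algorithm>round-robin</algorithm>
  <member><name>web-01</name><ipAddress>192.168.1.2</ipAddress><port>443</port></member>
  <member><name>web-02</name><ipAddress>192.168.1.3</ipAddress><port>443</port></member>
</pool>
""")

# 3. Virtual server tying the profile and pool to the uplink IP on port 443
#    (as in step 14 above). The profile and pool IDs must reference the IDs
#    returned when those objects were created.
post_xml("virtualservers", """
<virtualServer>
  <name>Web-VIP</name>
  <enabled>true</enabled>
  <ipAddress>10.20.181.171</ipAddress>
  <protocol>https</protocol>
  <port>443</port>
  <applicationProfileId>applicationProfile-1</applicationProfileId>
  <defaultPoolId>pool-1</defaultPoolId>
</virtualServer>
""")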

That’s it 🙂 This is how we can configure NAT and load balancing using NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8
