VMware NSX- How to Delete/Remove NSX Logical Switch

Recently I was trying to remove one of the NSX logical switches in my lab. While trying to delete the logical switch, I got this error:

As per the error message, some resources are still connected to this logical switch, which is why we get the error: DB-Tier resources are still in use. So we need to remove any connected virtual machines from this logical switch, and then we will be able to delete the NSX logical switch.

So the first thing we need to check is which VMs or resources are using the NSX logical switch.

Connect to vCenter (vSphere Web Client) –> Networking and Security –> Logical Switches –> and in the right pane double-click the logical switch we are trying to remove. As you can see in the screen below, one virtual machine is connected to the DB-Tier NSX logical switch. We have two options to remove this VM from the logical switch (a PowerCLI alternative is sketched after the list):

1. Migrate the virtual machine to another port group, or

2. Delete the virtual NIC from the VM (which is not good practice).
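If you prefer to do this check from PowerCLI instead of the Web Client, a minimal sketch along the following lines can list (and optionally migrate) the attached virtual machines. The vCenter name, the DB-Tier switch name, and the target port group VM-Network are assumptions from this lab; NSX backs each logical switch with an auto-generated distributed port group whose name contains the switch name.

# List VM network adapters whose backing port group belongs to the DB-Tier logical switch
Connect-VIServer vc.corp.local          # assumed vCenter name
$adapters = Get-VM | Get-NetworkAdapter | Where-Object { $_.NetworkName -match "DB-Tier" }
$adapters | Select-Object Parent, Name, NetworkName

# Optionally move those adapters to another port group so the logical switch can be deleted
$adapters | Set-NetworkAdapter -Portgroup (Get-VDPortgroup -Name "VM-Network") -Confirm:$false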

So here we are going to migrate the DB-sv-01a VM from the DB-Tier logical switch to another virtual machine port group.

Now, after migrating the VM, we try to delete the NSX logical switch again.

Now we are able to delete the NSX logical switch.

That's all. I hope this is informative for others. Thank you!

Network Virtualization with VMware NSX – Part 4

We discussed Virtual LAN (VLAN), Virtual Extensible LAN (VXLAN), Virtual Tunnel End Point (VTEP), VXLAN Replication Modes, and NSX Logical Switching in Network Virtualization with VMware NSX – Part 3. Here in Part 4 we will discuss NSX routing.

NSX Routing :-

The TCP/IP protocol suite offers different routing protocols that provide a router with methods for building valid routes. The following routing protocols are supported by NSX:

Open Shortest Path First (OSPF): This protocol is an interior gateway protocol that uses a link-state routing algorithm.
Intermediate System to Intermediate System (IS-IS): This protocol determines the best route for datagrams through a packet-switched network.
Border Gateway Protocol (BGP): This protocol is an exterior gateway protocol designed to exchange routing information between autonomous systems (AS) on the Internet.

NSX Logical Router:-

The NSX Edge logical router provides East-West distributed routing with tenant IP address space and data path isolation. Virtual machines or workloads that reside on the same host on different subnets can communicate with one another without having to traverse a traditional routing interface. A logical router can have eight uplink interfaces and up to a thousand internal interfaces.

During the configuration process, NSX Manager deploys the logical router control virtual machine and pushes the logical interface configurations to each host through the controller cluster. The logical router control virtual machine is the control-plane component of the routing process and supports the OSPF and BGP protocols. The distributed logical router itself runs as a kernel module on each host.

The NSX Controller cluster is responsible for distributing routes learned from the logical router control virtual machine across the hypervisors. Each control node in the cluster takes responsibility for distributing the information for a particular distributed logical router instance. In a deployment where multiple distributed logical router instances are deployed, the load is distributed across the NSX Controller nodes.

The distributed logical router owns the logical interfaces (LIFs). This concept is similar to interfaces on a physical router, but a distributed logical router can have a maximum of 1,000 LIFs. For each segment that the distributed logical router is connected to, it maintains one ARP table.

When the LIF is connected to a VLAN, the LIF has a pMAC and when the LIF is connected to a VXLAN, the LIF has a vMAC.

NOTE :- You can have only one VXLAN LIF connecting to a logical switch. Only one distributed logical router can be connected to a logical switch.

DLR high availability:- When high availability is enabled, NSX Manager has the vCenter Server system deploy two logical router control virtual machines and designates one as active and one as passive. If the active logical router control virtual machine fails, the passive one takes over within about 15 seconds. Because the control virtual machine is not in the data plane, data-plane traffic is not affected.

Configuring and Deploying an NSX Distributed Router:-

1. Connect to the vCenter Server through the vSphere Web Client –> Home –> Inventories –> Networking & Security.

2. In the left navigation pane, select NSX Edges.

3. In the center pane, click the green plus sign (+) to open the New NSX Edge dialog box.

4. In the New NSX Edge dialog box, on the Name and description page, click the Logical (Distributed) Router button. Enter the name of the distributed router in the Name text box, enter the hostname, description, and tenant name for the DLR, and click Next.

5. On the Settings page, enter the password for the DLR and enable SSH access if required. If you want the DLR in high availability mode, check the Enable High Availability box, and click Next. Note: the password must be at least 12 characters long.

6. On the Configure Deployment page, verify that you have selected the required datacenter.

7. Under NSX Edge Appliances, click the green plus sign (+) to open the Add NSX Edge Appliance dialog box. Select the required cluster/resource pool, datastore, host, and folder to deploy the DLR (if you checked the High Availability option, two distributed router appliances will be deployed), and click OK to close the Add NSX Edge Appliance dialog box.

8. Verify the NSX Edge Appliances settings and click Next.

9. On the Configure interfaces page, under Management Interface Configuration, click Select next to Connected To, select the required distributed port group, and click OK.

10. Under Configure Interfaces of this NSX Edge, click the green plus sign (+) to open the Add Interface dialog box. Note: as discussed in Part 3, we are configuring the DLR for the requirement below, so we need to add 4 interfaces.

11. In the Add Interface dialog box, enter the name of the interface, select the type, click Select next to Connected To, choose the desired logical switch, and click OK.

12. Now click the green plus sign (+) under Configure Subnets to add a subnet for the interface. In the Add Subnet box, click the green plus sign (+) to add the IP address and subnet mask, and click OK.

13. Once the subnets have been added, click OK to complete Add Interface.

14. Repeat steps 11-13 to add and configure the other three interfaces (Web, App, and Database).

DLR16

DLR17

DLR18

DLR19

DLR20

15. Once the remaining three interfaces have been added and configured, click Next to proceed.

16. On the Ready to complete page, review the configuration and click Finish to start deploying the Logical (Distributed) Router.

17. It will take some time to complete the deployment of the Logical (Distributed) Router.

18. Verify that the distributed router entry has a type of Logical Router. Double-click the entry to manage it, click the Manage tab –> Settings –> Interfaces, and check that the status of all four interfaces is green.

19. Under Configuration you can see that two logical router appliances have been deployed, because we chose to deploy in HA mode. You can also verify this from the cluster.

20. Now, after deploying the DLR with all four interfaces, you can test connectivity between all the VMs using ping.

In the next part (Network Virtualization with VMware NSX – Part 5) we will discuss how to configure and deploy an NSX Edge gateway and how to configure routes on the NSX Edge gateway and on the distributed router.

—————————————————————————————————-

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Please share if useful …..Thank You :)

 

Network Virtualization with VMware NSX – Part 3

In Network Virtualization with VMware NSX – Part 2 we discussed the NSX Controller cluster, how to deploy the NSX Controller instances, how to create an IP pool, and how to install the network virtualization components (prepare hosts) on the vSphere hosts.

In this part we will discuss logical switch networks and VXLAN overlays.

Before discussing VXLAN, let's talk a bit about Virtual LAN (VLAN):-

A VLAN is a group of devices on one or more LANs that are configured to communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible.

VLANs address scalability, security, and network management by enabling a switch to serve multiple virtual subnets from its LAN ports.

VLANs split a switch into separate virtual switches (broadcast domains). Only members of a VLAN can see that VLAN's traffic; traffic between VLANs must go through a router.

By default, all ports on a switch are in a single broadcast domain. VLANs enable a single switch to serve multiple switching domains. The forwarding table on the switch is partitioned between the ports belonging to a common VLAN. By default all ports on a switch are part of a single default VLAN (typically VLAN 1), which is called the native VLAN.

Virtual Extensible LAN (VXLAN) enables you to create a logical network for your virtual machines across different networks. You can create a layer 2 network on top of your layer 3 networks.

VXLAN is an Ethernet-in-IP overlay technology, where the original layer 2 frame is encapsulated in a User Datagram Protocol (UDP) packet and delivered over a transport network. This technology provides the ability to extend layer 2 networks across layer 3 boundaries and to consume capacity across clusters. VXLAN adds 50 to 54 bytes of information to the frame, depending on whether VLAN tagging is used, so VMware recommends increasing the MTU to at least 1,600 bytes to support NSX.
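As a quick sanity check, the MTU on the distributed switch that will carry VXLAN traffic can be verified (and raised) from PowerCLI as well; a small sketch, where the switch name dvSwitch-Compute is an assumption and the physical network must also allow the larger frame size:

# Show the current MTU on all distributed switches, then raise one to 1600 for VXLAN
Get-VDSwitch | Select-Object Name, Mtu
Get-VDSwitch -Name "dvSwitch-Compute" | Set-VDSwitch -Mtu 1600    # assumed switch name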

A VXLAN Network Identifier (VNI) is a 24-bit number that is added to the VXLAN frame. The 24-bit address space theoretically enables up to 16 million VXLAN networks. Each VXLAN network is an isolated logical network. VMware NSX™ starts VNI numbering at 5000.

A Virtual Tunnel End Point (VTEP) is an entity that encapsulates an Ethernet frame in a VXLAN frame or de-encapsulates a VXLAN frame and forwards the inner Ethernet frame.

VXLAN Frame :-

The top frame is the original frame from the virtual machine, minus the Frame Check Sequence (FCS), encapsulated in a VXLAN frame. A new FCS is created by the VTEP to cover the entire VXLAN frame. The VLAN tag in the outer layer 2 Ethernet frame exists if the port group that your VXLAN VMkernel port is connected to has an associated VLAN number; when the port group is associated with a VLAN number, the port group tags the VXLAN frame with that VLAN number.

VXLAN Replication Modes:-

Three modes of traffic replication exist: two modes rely on the VMware NSX Controller™ cluster and one mode relies on the data plane.

Unicast mode has no physical network requirements apart from the MTU. All traffic is replicated by the VTEPs. In NSX, the default mode of traffic replication is unicast. Unicast has higher overhead on the source VTEP and UTEP.

Multicast mode uses the VTEP as a proxy. In multicast, the VTEP never goes to the NSX Controller instance. As soon as the VTEP receives the broadcast traffic, the VTEP multicasts the traffic to all devices. Multicast has lowest overhead on the source VTEP.

Hybrid mode is not the default mode of operation in NSX for vSphere, but is important for larger scale operations. Also the configuration overhead or complexity of L2 IGMP is significantly lower than multicast routing.

In Network Virtualization with VMware NSX – Part 2 we prepared the hosts, so now let's configure VXLAN on the ESXi hosts.

1. Connect to vCenter using web client.

2. Click Networking & Security and then click Installation.

3. Click the Host Preparation tab and under VXLAN column Click Configure to start Configuring VXLAN on the ESXi Hosts.

4. In the Configure VXLAN networking dialog box, select the switch and VLAN, set the MTU to 1600, and for VMKNic IP Addressing choose an existing IP pool from the list (or click New IP Pool to create a new one), then click OK.

vxlan3

5. It will take a few minutes to configure, depending on the number of hosts in the cluster. If an error is indicated, it is a transitory condition that occurs early in the process of applying the VXLAN configuration to the cluster, and the vSphere Web Client interface has simply not yet updated to display the actual status. Click Refresh to update the console.

6. Repeat the steps to configure all the clusters. Once configuration is done on all clusters, verify that the VXLAN status is Enabled with a green check mark.

7. Once the VXLAN configuration is done for all the clusters and the VXLAN status is Enabled with a green check mark, click the Logical Network Preparation tab and verify that VXLAN Transport is selected. In the Clusters and Hosts list, expand each of the clusters and confirm that each host has a vmk# interface created with an IP address from the IP pool we created.

Once we have finished configuring VXLAN and verified the VXLAN configuration for all the clusters, we next need to configure the segment ID pool to identify VXLAN networks:-

1.  On the Logical Network Preparation tab, click the Segment ID button and Click Edit to open the Segment ID pool dialog box to configure ID Pool.

2. Enter the segment ID pool and click OK to complete. VMware NSX™ starts VNI IDs at 5000.

Next we need to configure a global transport zone:-

A transport zone specifies the hosts and clusters that are associated with logical switches created in the zone. Hosts in a transport zone are automatically added to the logical switches that you create. This process is very similar to manually adding hosts to VMware vSphere Distributed Switch.

1. On the Logical Network Preparation tab, click Transport Zones and Click the green plus sign to open the New Transport Zone dialog box.

2. Enter the name for the transport zone, select the control plane mode, select the clusters to add to the transport zone, and click OK to complete the creation.

vxlan10

vxlan11

———————————————————————————————————-

NSX Logical Switching

The logical switching capability in the NSX platform gives customers the ability to spin up isolated logical L2 networks with the same flexibility and agility with which they spin up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently of the specific location where they are deployed in the data center network. This is possible because of the decoupling between the network infrastructure and the logical networks provided by NSX network virtualization. Each logical switch gets its own unique VNI.

Deploying the NSX network virtualization components enables the agile and flexible creation of applications together with their required network connectivity and services. A typical example is the creation of a multi-tier application.

Configure Logical Switch Networks

We need to create logical switches for all the required networks (e.g., the Transit, Web-Tier, App-Tier, and DB-Tier networks shown in the picture above).

1. Connect to the vCenter Server using the Web Client, click Networking and Security, and select Logical Switches in the left navigation pane.

2. Click the green plus sign to open the New Logical Switch dialog box. Enter the logical switch name, select the global transport zone we created earlier, choose the control plane mode, and click OK to complete the switch creation.

3. Wait for the update to complete and confirm that Transit-Network appears with a status of Normal. Repeat the steps to create all required logical switches and confirm that all are Normal.

Once the logical switches have been created, we need to migrate virtual machines to the logical switches:-

1. In the left pane under Networking & Security, select Logical Switches. In the center pane, select the logical switch (e.g., Web-Tier), right-click it, and choose Add VM.

2. Select the virtual machines you want to add to the logical switch and click Next.

3. Select the vNIC you want to add to the network and click Next.

4. In the Ready to complete box, verify the settings and click Finish to complete adding the VMs to the desired network.

5. To verify that the VMs have been added to the logical switch, double-click the logical switch.

6. Click Related Objects and then the Virtual Machines tab, and you can see the list of VMs added to this specific logical switch.

7. Repeat the same steps for all the logical switches to add VMs. Once done, try to ping between VMs on the same switch and between switches.

For now you can only ping VMs connected to the same switch. To communicate with VMs on another switch we need to configure routing, which we will discuss in the next part.

======================================================

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5


Please share if useful …..Thank You 🙂

VMware VCP6-DCV Certification is available now!!

The much-awaited VCP6-DCV certification has been announced by VMware.

VCP6-DCV Logo

The VMware Certified Professional 6 – Data Center Virtualization (VCP6-DCV) exam validates your ability to install, configure, administer, and scale a vSphere virtualized data center on VMware vSphere 6.

If you are already VCP5-DCV certified, the course is recommended but not required; you can go directly for the VCP6-DCV beta exam (VMware Certified Professional 6 – Data Center Virtualization Beta Exam).

To know more about the beta exam, click here, and to request exam authorization, click here.

 

Version 6 Certification Roadmap:-

 

EXAM RoadMap

Good time to upgrade yourself .. Good Luck !!!!!

Network Virtualization with VMware NSX – Part 2

We finished the NSX Manager deployment and configuration in Network Virtualization with VMware NSX – Part 1, so let's start deploying and configuring the NSX Manager components.

NSX Controller Cluster

The Controller cluster in the NSX platform is the control-plane component responsible for managing the switching and routing modules in the hypervisors. The Controller cluster consists of controller nodes that manage specific logical switches. Using a controller cluster to manage VXLAN-based logical switches eliminates the need for multicast support in the physical network infrastructure.

NSX Controller stores four types of tables:

  • The ARP table
  • The MAC table
  • The VTEP (VXLAN Tunnel End Point) table
  • The routing table

Note :- VMware recommends adding three controllers for scale and redundancy. Currently NSX Manager supports a maximum of a 3-node cluster; even if you deploy a 4th NSX Controller, it will not show up in the NSX Controller nodes list.

Let’s Deploy the First NSX Controller Instance:-

1. Log in to the vCenter Server through Web Client and Click Networking & Security.

2. In the left navigation pane, select Installation.

3. On the Management tab, under NSX Controller nodes, you can see that no node is listed yet. To add the first NSX Controller node, click the green plus sign (+).

4. The Add Controller dialog box will appear. Provide all the required details (NSX Manager name, datacenter, cluster name, datastore to hold the node, ESXi host name, the network port group to connect the node to, and an IP pool; select an existing IP pool or create a new one with the New IP Pool option, then enter and confirm the password for the NSX Controller nodes) and click OK to deploy the first NSX Controller node.

Note:- The password option only appears for the first NSX Controller node deployment; the 2nd and 3rd nodes use the same password, so there will be no password field for them.

5. Monitor the deployment until the status changes from Deploying to Normal. It will take a few minutes to complete.

6. Repeat steps 3 and 4 to add 2 more NSX Controller nodes.

Note:- You will notice that my controllers are not numbered 1, 2, and 3. That is because some of my controller deployments failed due to a misconfiguration of the IP pools, and I deleted a few just to test something; that is why my controllers are named 15, 16, and 17. This is a bug in NSX 6.0: when you add a new NSX Controller node, the numbering continues from the last one deployed, even if it failed or was deleted.

7. To verify that the NSX Controller nodes have been deployed and are working fine, go to the management cluster where we deployed all three nodes.

NSX Controller nodes are deployed as virtual appliances from the NSX Manager UI. Each appliance is characterized by an IP address used for all control-plane interactions and by specific settings (4 vCPUs, 4 GB of RAM) that cannot currently be modified.

8. We can also PuTTY into each of the controllers to check the status, roles, connections, and startup nodes.

We have deployed and verified the NSX Controller nodes; all three are up and running fine.

=======================================

Now we need to Install Network Virtualization Components/ Prepare ESXi Hosts :-

NSX installs three vSphere Installation Bundles (VIBs) that enable NSX functionality on the host. One VIB enables the layer 2 VXLAN functionality, the second enables the distributed router, and the third enables the distributed firewall. After the VIBs are added to a distributed switch, that distributed switch is called a VMware NSX Virtual Switch.

NSXC16

Note :- Removing the VIBs from an ESXi host requires a reboot of the host.

You install the network infrastructure components in your virtual environment on a per-cluster level for each vCenter server, which deploys the required software on all hosts in the cluster. When a new host is added to this cluster, the required software is automatically installed on the newly added host. After the network infrastructure is installed on a cluster, Logical Firewall is enabled on that cluster.
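To double-check that the VIBs actually landed on a host, one option is a quick PowerCLI query; this is only a sketch, the host name is a lab example, and the VIB names matched here (esx-vxlan, esx-vsip, esx-dvfilter-switch-security) are the ones typically seen with NSX 6.x and can differ between releases:

# List the NSX-related VIBs installed on a prepared host (VIB names vary by NSX version)
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx-01a.corp.local")    # assumed host name
$esxcli.software.vib.list() | Where-Object { $_.Name -match "vxlan|vsip|dvfilter" } | Select-Object Name, Version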

As you can see in the screen below, the Firewall column shows Not Enabled. When the installation is complete, the Installation Status column displays 6.0 and the Firewall column displays Enabled, both with a green check mark.

Let's install the network virtualization components on the clusters now:-

1. Connect to vCenter using web client.

2. Click Networking & Security and then click Installation.

3. Click the Host Preparation tab.

4. For each cluster, click Install and click Yes to start the installation for that cluster.

5. Monitor the installation until the Installation Status column displays a green check mark.

Troubleshooting:- If the Installation Status column displays a red warning icon and says Not Ready, click Resolve. Clicking Resolve might result in a reboot of the host. If the installation is still not successful, click the warning icon; all errors are displayed. Take the required action and click Resolve again.

=============================================================

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Thank You!

Network Virtualization with VMware NSX – Part 1

Overview of VMware NSX

VMware NSX is a network virtualization platform that enables you to build a rich set of logical networking services such as logical switching, logical routing, logical firewall, logical load balancer, and logical virtual private network (VPN). NSX lets you start with your existing network and server hardware in the data center; it adds nothing to the physical switching environment. NSX runs in the ESXi environment and is independent of the network hardware.

NSX is a software networking and security virtualization platform that delivers the operational model of a virtual machine for the network. Virtual networks reproduce the Layer 2 – Layer 7 network model in software. By virtualizing the network, NSX delivers a new operational model for networking that breaks through current physical network barriers and enables data center operators to achieve better speed and agility with reduced costs.

With VMware NSX, virtualization now delivers for networking what it has already delivered for compute and storage. In much the same way that server virtualization programmatically creates, snapshots, deletes and restores software-based virtual machines (VMs), VMware NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks.

NSX can be configured through the vSphere Web Client, a command line interface (CLI), and REST API.
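As a small illustration of the REST API, a read-only call can be made from PowerShell with Invoke-RestMethod. This is a sketch under a few assumptions: the NSX Manager name matches the one used later in this post, the account is the NSX admin user, and the /api/2.0/vdn/controller endpoint (which NSX-v uses to list controller nodes) is available in your version:

# Minimal read-only REST call against NSX Manager (assumes its certificate is already trusted)
$cred = Get-Credential                                    # NSX Manager admin account
$pair = "{0}:{1}" -f $cred.UserName, $cred.GetNetworkCredential().Password
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair))
Invoke-RestMethod -Uri "https://nsxmanager.vdca550.com/api/2.0/vdn/controller" -Headers @{ Authorization = "Basic $auth" } -Method Get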

An NSX-v deployment consists of a data plane, control plane and management plane:

nsx9

NSX Functional Services

NSX provides a faithful reproduction of network and security services in software, for example:

NSX10

Preparing for Installation

NSX has the following requirements:

  • vCenter Server 5.5 or later
  • ESXi 5.0 or later for each server
  • VMware Tools

NSX requires the following ports for installation and daily operations (a quick reachability check is sketched after the list):

  • 443 between the ESXi hosts, vCenter Server, and NSX Manager.
  • 443 between the REST client and NSX Manager.
  • TCP 902 and 903 between the vSphere Web Client and ESXi hosts.
  • TCP 80 and 443 to access the NSX Manager management user interface and initialize the vSphere and NSX Manager connection.
  • TCP 1234 between the ESXi hosts and the NSX Controller cluster.
  • TCP 22 for CLI troubleshooting.
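A quick way to confirm that these ports are reachable from a Windows management host is Test-NetConnection (available on Windows 8.1 / Server 2012 R2 and later); the host names below are only lab examples:

# Spot-check TCP reachability of a few of the required ports (host names are lab examples)
Test-NetConnection -ComputerName nsxmanager.vdca550.com -Port 443
Test-NetConnection -ComputerName vcenter.corp.local -Port 443
Test-NetConnection -ComputerName esx-01a.corp.local -Port 902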

NSX Manager

The NSX Manager is the centralized management component of NSX and runs as a virtual appliance on an ESXi host. Each NSX Manager manages a single vCenter Server environment. The NSX Manager requires connectivity to the vCenter Server, ESXi hosts, NSX Edge instances, the vShield Endpoint module, and the NSX Data Security virtual machine. NSX components can communicate over routed connections as well as across different LANs.

The NSX Manager virtual machine is packaged as an Open Virtualization Appliance (OVA) file, which allows you to use the vSphere Web Client to import the NSX Manager into the datastore and virtual machine inventory.

In the NSX for vSphere architecture, the NSX Manager is tightly connected to the vCenter Server managing the compute infrastructure. In fact, there is a 1:1 relationship between the NSX Manager and vCenter; upon installation, the NSX Manager registers with vCenter and injects a plug-in into the vSphere Web Client for consumption within the web management platform.

NSX Manager Components Plugin and Integration inside vSphere Web Client :-

NSX11

Note :- You can install the NSX Manager in a different vCenter than the one that the NSX Manager will be interoperating with. A single NSX Manager serves a single vCenter Server environment only.

Note :- Each NSX virtual appliance includes VMware Tools. Do not upgrade or uninstall the version of VMware Tools included with a NSX virtual appliance.

Deploy NSX Manager Virtual Appliance :-

1. Download the NSX Manager Open Virtualization Appliance (OVA) from https://my.vmware.com/web/vmware/downloads.

2. Under the Networking & Security section, click Download Product for VMware NSX.

3. Select your version and click Go to Downloads.

4. On the Download VMware NSX for vSphere 6.X page, click Download Now to start downloading the NSX Manager Open Virtualization Appliance (OVA) file.

5. Place the NSX Manager Open Virtualization Appliance (OVA) file in a location accessible to your vCenter server and ESXi hosts.

6. Log in to the vSphere Web Client where you want to import/run the NSX Manager.

7. Right-click the Cluster/Host where you want to install NSX Manager and select Deploy OVF Template.

8. If this is the first time you are deploying an OVF file, you will be asked to download the Client Integration Plug-in. Click the Download the Client Integration Plug-in link to download and install it. (Close all browsers before the installation; once it completes, log in to the vSphere Web Client again and navigate to the host where you were installing NSX Manager.)

9. On the Select Source window, click Browse to locate the folder on your computer that contains the NSX Manager OVA file, select the OVA, click Open, and click Next.

NSXM2

10. It will take a few seconds to validate the OVA. Once validated, click Next to continue.

11. Review the OVF template details and click Next.

12. Click Accept to accept the VMware license agreements and click Next.

13. Name the NSX Manager, select the location for the NSX Manager that you are installing, and click Next.

14. Select the storage and click Next.

15. On the Setup networks page, confirm that the NSX Manager adapter has been mapped to the correct host network and click Next.

16. On the Customize template page, specify the passwords, network properties, DNS, NTP, and SSH settings and click Next.

17. On the Ready to complete page, review the NSX Manager settings, check Power On after Deployment, and click Finish.

The NSX Manager is installed as a virtual machine in the inventory. Once the deployment of the NSX Manager has finished, we need to log in to the NSX Manager virtual appliance and configure it.

Log In to the NSX Manager Virtual Appliance:-

1. Open a web browser and enter the name/IP address assigned to the NSX Manager, for example https://nsxmanager.vdca550.com (in my case). Accept the security certificate. The NSX Manager login screen appears.

2. Use the user name admin and the password you set during installation. If you did not set a password during installation, type default as the password. Click Log In.

3. Below is the home screen of the NSX Manager. As you can see, from here we can manage the appliance settings, manage the vCenter registration, back up and restore the NSX Manager, and upgrade the NSX Manager appliance.

4. Click View Summary to view and configure the NSX Manager.

5. Click the Manage tab. From General Settings you can configure the Time (NTP) and syslog server settings. Click Edit to enter the details and click OK.

Time (NTP) Settings:-

Syslog Server Settings:-

6. Click Network. You can review/edit the NSX Manager network settings and the DNS server settings for the NSX Manager. Click Edit to edit the settings and click OK.

NSXM20

7. Click the SSL Certificates option to configure the SSL certificate for the NSX Manager.

8. Click the Backups and Restore option to take or schedule a backup of the NSX Manager data.

Note :- Currently there is no option to have multiple NSX Managers for redundancy, so backup is very critical for the NSX Manager. In the case of an NSX Manager failure, you need to deploy a new NSX Manager and restore the configuration from the last backup.

9. To upgrade your NSX Manager appliance to the latest version, first download the upgrade bundle from the VMware website; then, from the Upgrade option in NSX Manager, you can upgrade to the latest version. Click Upgrade under Upgrade NSX Management Service, click Browse to select the upgrade bundle, and click Upgrade to start the upgrade.

NSXM23

10. The last and most important option is NSX Management Service. Click NSX Management Service, and under the vCenter Server section click Configure to register the vCenter Server with the NSX Manager. Enter the vCenter Server name, user name, and password, and click OK to add/register the vCenter Server with the NSX Manager.

11. Once the vCenter Server registration with the NSX Manager is done, we can connect to the vCenter Server and verify that the Networking & Security icon appears under the Inventories list.

12. Click Networking & Security to open the NSX home page.

And now we are all set to start using the NSX features.

In the next part we will discuss installing and configuring the NSX components. Please leave your questions/comments/suggestions. Thank you!

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

VMware Certification Upgrade / Migration Path from v5 to v6

VMware has announced the new upgrade/migration path from v5 to v6; here are the latest paths for upgrading/migrating from version 5 to the new version 6. The biggest news is that you do not need to go through new training to upgrade to v6.

Upgrading from VCP5 to VCP6: Only one exam is required. There is no course attendance required (but it is recommended, to get to know the new products better). There are name changes from the previous version; here are the new names:-

VCP-DCV = Data Center Virtualization
VCP-DTM = Desktop & Mobility (Previously VCP-DT = Desktop)
VCP-CMA = Cloud Management & Automation (Previously VCP-Cloud)
VCP-NV = Network Virtualization

  • There are five VCP6 migration options across the four technology tracks: Data Center Virtualization (DCV), Desktop and Mobility (DTM), Cloud Management and Automation (CMA), and Network Virtualization (NV):
  • VCP6: Data Center Virtualization Exam (exam number: 2V0-621) – Beta yet to be released.
  • VCP6: Data Center Virtualization Delta Exam (exam number: 2V0-621D) – Beta yet to be released.
  • VCP6: Desktop & Mobility Exam (exam number: 2V0-651) – Yet to be released.
  • VCP6: Cloud Management and Automation Exam (exam number: 2V0-631) – Yet to be released.
  • VCP6: Network Virtualization Exam (exam number: 2V0-641) – Yet to be released.

If you hold one of the following v6 certifications it will be automatically upgraded to the equivalent new VCP6 certification:

  • VCP6-Cloud = VCP6-Cloud Management and Automation
  • VCP6-Desktop = VCP6-Desktop and Mobility
  • VCP-Network Virtualization = VCP6–Network Virtualization

Upgrading to VCIX6: VCIX6 consists of two exams, Design and Administration, and this is similar for all three tracks: DCV, DTM, and CMA.

If you hold either a VCAP Administration or a VCAP Design certification, you will need to take the VCIX6 exam for the one you do not have. For example, if you hold VCAP5-DCD, you will need to take the VCIX6 Administration exam, and if you hold VCAP5-DCA, you would only need to take the VCIX6 Data Center Design exam to earn your VCIX6-DCV certification.

If you hold both VCAP5 certifications, you can choose to take either VCIX6 exam: Design or Administration.

If you hold VCIX-NV, you will be upgraded automatically to VCIX6-NV.

Upgrading to VCDX6: If you hold a VCDX5, you are only required to take the VCIX6 Design exam in the corresponding solution track.

Thank You!

Save 50% off Your Network Virtualization Certification Exam through June 30

The future of networking is virtual. Keep your skills relevant and future-proof your career by earning your VMware Certified Professional – Network Virtualization (VCP-NV) certification for half price through June 30, 2015.

Plus, if you hold certain Cisco certifications*, the course requirement will be waived in recognition of your previous certification through January 31, 2016. Visit the VCP-NV certification requirements page for complete details.

Whether you are earning your first VMware certification or seeking recertification, this is a terrific opportunity to discover cutting-edge NSX technology and save on your exam.

Certification: VCP-NV
Exam Code: VCPN610
Discount Code: VCPNV50

You must complete your exam by June 30, 2015 to save 50%…Enjoy 🙂

For more information visit :- https://mylearn.vmware.com/mgrReg/plan.cfm?plan=63030&ui=www_edu

MY VCAP-DCA (VDCA550) EXPERIENCE

vcap-dca-Image

Dear All,

Happy to share that I cleared my VCAP-DCA (VDCA550) certification yesterday. I finished writing my exam around 6:00 PM Singapore time and waited for my score report to see these wonderful lines –

“Congratulations on passing the VMware Certified Advanced Professional – Data Center Administration exam! You will receive an email notification from certification@vmware.com once your certification status has been confirmed and added to your VMware Education Transcript (allow one week).”

After completing my VCAP5-DCD certification last year on 21st August, I started preparing for my VCAP5-DCA exam, but because of a hectic schedule I could not find enough time to finish it. But finally I achieved it.

Overview:-

The VCAP5-DCA (VDCA550) exam consists of 23 live lab activities and a short pre-exam survey of 9 questions. The total time for this exam is 180 minutes, but as I was writing the exam in Singapore, where English is not the primary language, an additional 30 minutes was added, bringing the exam time to 210 minutes; with the 15 minutes for the short pre-exam survey, the total time was 225 minutes.

The passing score for this exam is 300 out of 500.

Exam Experience

Although I have been working on VMware products for a long time, for the VCAP-DCA on 5.5 exam I knew I had to brush up on and practice, at least once in my lab, all the areas in the blueprint (VCAP5-DCA-VDCA550-Exam-Blueprint-v3_3).

I started practicing through the blueprint. Going through the blueprint is compulsory, as the exam covers all the topics discussed in it. The second-best source was going through the various blogs sharing experience and tips for the VCAP-DCA exam.

There are a lot of blog posts with study notes that will certainly help a great deal. Here are a few blogs that I followed:-

http://thinkingloudoncloud.com/

http://stretch-cloud.info/

http://longwhiteclouds.com/

vSphere Optimize & Scale Course on Trainsignal by Jason Nash: Official recommended course for VCAP5-DCA and Jason Nash one of the most excellent instructors and he deep dives through each point in the blueprint.

The ProfessionalVMware #vBrownBag sessions on Youtube.com.

Exam Environment

The format has changed significantly from the 510 exam, based on my research and on asking others about their experience. The exam consists of a number of tasks that are performed using an environment consisting of five ESXi 5.5 hosts, two vCenter 5.5 Servers, vCenter Orchestrator 5.5 and vSphere Replication 5.5 appliances, plus an Active Directory domain controller and shared storage. A number of pre-configured virtual machines are also present for use with certain tasks. Throughout the exam you are repeatedly warned not to change anything other than what is stated in the question.

I solved 22 questions and was left with 7 minutes for a question I had no idea where to begin with. So after 5 minutes of desperate clicking around the vSphere Client and the provided documentation, I gave up and finished my examination session.

Final Thoughts

I am so happy to have cleared my VCAP-DCA (VDCA550) exam. I was relieved and fairly happy that the plan had worked and all the hard work of the past months had paid off.

Good luck to all those taking the VCAP-DCA in the future 🙂

Deploying VMware ESXi 5.5. with vSphere Auto Deploy 5.5 – Part 2

We discussed installing and configuring vCenter Server, the Auto Deploy server, the TFTP server, and the DHCP server configuration in Deploying VMware ESXi 5.5 with vSphere Auto Deploy 5.5 – Part 1.

Let’s discuss remaining parts here –

  • Download Offline Bundle for ESXi 5.5 with all other VIBs.
  • Install VMware PowerCLI 5.5.
  • Create Software Depot / Image Profile / Deploy Rule / Create Host profiles / Update Rules with Host Profile.

Download Offline Bundle for ESXi 5.5 with all other VIBs.

1. To download ESXi 5.5 Offline Bundle Go To https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/5_5

2. Under product type (Enterprise Plus) –> VMware ESXi 5.5.0 Update 2, click Go to Downloads.

3. Under Product Downloads –> ESXi 5.5 Update 2 Offline Bundle –> click Download Now.

4. Enter your My VMware login credentials and click Log In.

5. Tick the check box to agree to the VMware End User License Agreement and click Accept.

6. The ESXi 5.5 offline bundle ZIP file will start downloading; it will take a few minutes.

7. Download any other agents/drivers/VIBs you need to customize the bundle.

——————————————————————————————————

Now let’s Install VMware PowerCLI 5.5

1. Download VMware PowerCLI 5.5 from here https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere_with_operations_management/5_5?productId=352#drivers_tools

2. Start the installation by double-clicking VMware-PowerCLI-5.5.0-1295336.exe.

3. The VMware vSphere PowerCLI installation will start and will give a security warning that the PowerShell execution policy is not set to RemoteSigned.

4. Open Windows PowerShell and run the Set-ExecutionPolicy RemoteSigned command to change the execution policy, then click Continue to continue the installation.

5. On the Welcome to the VMware vSphere PowerCLI Installation screen, click Next to continue.

6. Select the radio button to accept the license agreement and click Next to continue.

7. Select the vSphere PowerCLI program, change the installation location if required, and click Next to continue.

8. On the Ready to Install the Program screen, click Install to begin the installation.

9. It will take a few minutes to install VMware vSphere PowerCLI on the machine.

10. On the InstallShield Wizard Completed screen, click Finish to exit the wizard.

11. Launch VMware vSphere PowerCLI by clicking VMware vSphere PowerCLI.

12. We are all set with the VMware vSphere PowerCLI installation.

We have fulfilled all the other requirements for Auto Deploy. Now we will discuss how to create the software depot, image profile, and deploy rule, create host profiles, and update the rules with the host profile.

1. Connect to the vCenter server: Connect-VIServer vc.dca.com. It will ask for login credentials for the vCenter Server; provide the user name and password and click OK to connect.

2. As you can see, we are connected to the vCenter Server with the provided user name.

3. The first thing we need to do is create a software depot, but before creating one we can check for any existing depots with the command: Get-EsxSoftwareDepot

4. To create the software depot, use this command: Add-EsxSoftwareDepot "C:\data\update-from-esxi5.5-5.5_update02-2068190.zip"

5. Run Get-EsxSoftwareDepot again to see the software depot.

6. The next thing we need to do is create an image profile: New-EsxImageProfile -CloneProfile "ESXi-5.5.0-20140902001-stan*" -Name "Roshtestprofile"

7. Use Get-EsxImageProfile to verify that the image profile was created.

8. After creating the image profile we need to create a deploy rule: New-DeployRule -Name "PreHostProfile" -Item Roshtestprofile -Pattern "ipv4=192.168.174.206-192.168.174.220" (the pattern can be an IP range, MAC address, vendor, etc.)

9. This creates the deploy rule (PreHostProfile) with a pattern (the IP range) and an item (the image profile Roshtestprofile).

10. After creating the deploy rule we need to add/activate it: Add-DeployRule -DeployRule "PreHostProfile"
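Putting steps 1 to 10 together, the same preparation can be run as one short PowerCLI sequence; this is just a consolidation of the commands above, using the same lab-specific bundle path, profile name, and IP range:

# Consolidated Auto Deploy preparation (same values as the individual steps above)
Connect-VIServer vc.dca.com
Add-EsxSoftwareDepot "C:\data\update-from-esxi5.5-5.5_update02-2068190.zip"
New-EsxImageProfile -CloneProfile "ESXi-5.5.0-20140902001-stan*" -Name "Roshtestprofile"
New-DeployRule -Name "PreHostProfile" -Item "Roshtestprofile" -Pattern "ipv4=192.168.174.206-192.168.174.220"
Add-DeployRule -DeployRule "PreHostProfile"
Get-DeployRuleSet        # verify that the rule is now in the active rule set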

AD911. As you can see there is only one ESXi Host (esxi1.dca.com) is connected to this vCenter Server.

AD1012. We have created Rule with Pattern for IP Range, We’ll have to create Reservation for hosts with those IP Addresses. As you can see below i have created Virtual Machine (Auto_ESXi1) and reserved 192.168.174.210 IP address for host.

AD1113. Let’s Power ON the Virtual Machine Now.                                                                         (Note :- This is Stateless VM as have not allocated Hard Disk to VM)

AD1214. Host booted and get 192.168.174.210 IP Address from DHCP Server as per Reservation and trying to connect to TFTP Server to get Boot Image …

AD1315. Walla… Connected to Auto Deploy Server and loading Image into Memory….

AD1416. Installation is in Process…

AD1517. Once Installation finished host will be added to vCenter Server by Auto Deploy.

AD1718. Next thing we need to do to Create Host profile, Edit Host Profile and Update Deploy Rule to apply Host profile after deploying the ESXi host.

To create Host Profile Go To Home –> Management –> Host Profile

HP119. In Create Profile Wizard select Create Profile from existing Host and Click Next to continue..

HP220. Name the Host Profile and Click Next to continue ..

HP421. On the ready to complete the Profile Window Review and Click Finish to Create Host profile.

HP522. After creating Host Profile we need to edit the Profile to configure all the things want to deploy on hosts. (e.g. Syslog Server, Coredump collector, Stateless chashing , NTP settings, DNS configuration etc.)

HP6 HP7 HP823. Once profile is created and edited as per requirements. We need to Create New DeployRule with Host Profile and Cluster Name to Deploy host with Specific requirement and Add to specific Cluster. And Add/Active new created Deploy Rule.
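As a sketch of what that second rule can look like (the host profile name AutoDeploy-HP is an assumption; the cluster DCA and the IP range are the ones used in this lab):

# Deploy rule that combines the image profile, a host profile and a target cluster
New-DeployRule -Name "ProdHostProfile" `
    -Item (Get-EsxImageProfile -Name "Roshtestprofile"), (Get-VMHostProfile -Name "AutoDeploy-HP"), (Get-Cluster -Name "DCA") `
    -Pattern "ipv4=192.168.174.206-192.168.174.220"
Add-DeployRule -DeployRule "ProdHostProfile"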

24. Again I have created one more VM, named Auto_ESXi2, and reserved the IP address 192.168.174.211 for it. Now let's boot this VM to deploy it with the newly created rule.

25. As you can see in the screen below, the host profile is being applied on the host.

26. Once everything is done, the host is added to the specified cluster, DCA.

That's all. I hope this helps you.

Cheers….Roshan Jha 🙂

Deploying VMware ESXi 5.5. with vSphere Auto Deploy 5.5 – Part 1

Auto Deploy allows rapid deployment and configuration of a large number of ESXi hosts. vSphere Auto Deploy can be configured in one of three modes:

  • Stateless
  • Stateless Caching
  • Stateful Install

Stateless :- When you deploy ESXi using Auto Deploy, you are not installing ESXi onto a local disk or a SAN boot LUN; ESXi is loaded directly into memory on a host as it boots.

Stateless Caching :- You deploy ESXi using Auto Deploy just as with stateless, but the image is cached on the server's local disk or SAN boot LUN. In the event that the Auto Deploy infrastructure is not available, the host boots from a local cache of the image.

Stateful Install :- You can provision a host with Auto Deploy and set up the host to store the image to disk. On subsequent boots, the host boots from disk. This process is similar to performing a scripted installation. With a scripted installation, the script provisions a host and the host then boots from disk. In this case, Auto Deploy provisions a host and the host then boots from disk.

Auto Deploy Requirements:-

1.  vCenter Server 5.5 (Install/Upgrade vCenter Server 5.5)

2. Install Auto Deploy Server 5.5.

3. Install TFTP server (SolarWinds).

4. Configure TFTP Server and Boot Loader Data.

5. Configure DHCP server with 66 and 67 Options.

6. Install VMware PowerCLI 5.5.

7. Download Offline Bundle 5.5 with all other VIBs

8. Create Software Depot / Image Profile / Deploy Rule / Create Host profiles / Update Rules

How to Install Auto Deploy server

You can install vSphere Auto Deploy on the same system as vCenter Server or on a separate Windows-based system. If you are installing Auto Deploy on a system separate from vCenter Server, specify the IP address or name of the vCenter Server with which this Auto Deploy server should register.

1. Launch the VMware vCenter Installer media and Select vSphere Auto Deploy and then click Install to start the installer.

2. Select the appropriate language and click OK.

3. The installer will prepare the setup process.

4. On the vSphere Auto Deploy installer welcome screen, click Next to continue.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Select the vSphere Auto Deploy installation and repository directory and the size of the repository, and click Next to continue.

7. Provide the vCenter Server details and credentials to register Auto Deploy with the vCenter Server and click Next to continue.

8. Select the default Auto Deploy server and management ports and click Next (do not change these unless there is a conflict with the port numbers).

9. The next screen lets you choose how the vSphere Auto Deploy server will be identified on the network. It will detect the host name of the machine on which we are installing Auto Deploy; choose the default name and click Next.

10. On the Ready to Install screen, click Install to begin the installation.

11. Ignore the security warning and click Finish to complete the installation.

12. Once the installation is complete, connect to the vCenter Server –> Home –> Administration –> Auto Deploy. See the screenshot below.

13. Click Auto Deploy to open it.

We have completed the Auto Deploy installation; now let's install and configure the TFTP server.

=============================================================

Install and Configure TFTP Server

There are many TFTP servers available, but I am going to use the SolarWinds TFTP Server here.

1. Download the SolarWinds TFTP server software and double-click SolarWindsTFTPServer.exe to launch the TFTP installer.

2. On the Open File – Security Warning page, click Run to start the installation.

3. On the Welcome to the SolarWinds TFTP Server Setup screen, click Next to continue.

4. Tick the check box to accept the End-User License Agreement and click Next to continue.

5. On the Ready to Install screen, click Install to begin the installation.

6. Once the installation is complete, click Finish to exit the setup wizard.

TFTP5

We have installed the TFTP server; now let's configure it.

1. Open the TFTP server by going to Start –> All Programs –> SolarWinds TFTP Server –> TFTP Server.

2. We need to download the TFTP Boot Zip from the Auto Deploy server, so let's connect to the vCenter Server and open the Auto Deploy server.

3. Click Download TFTP Boot Zip and save Deploy-TFTP.zip to a local drive.

4. Choose the location and click Save to save it to the specified location.

5. Open the Deploy-tftp.zip folder, extract it, and copy all the files inside to the TFTP server root location.

6. In my case the TFTP root is C:\TFTP-Root, so paste the copied files there.

7. Now we have to configure C:\TFTP-Root as the root directory for the TFTP server, so choose File –> Configure.

8. Under Storage –> TFTP Server Root Directory –> click Browse, choose C:\TFTP-Root, and click OK.

9. After setting up the root directory, stop and start the TFTP Server service and click OK to close the configuration window.

10. The TFTP server configuration is complete and the TFTP Server service is now up and running fine.

PCLI27

We have installed and configured the Auto Deploy server and the TFTP server; now we will configure the DHCP server (a reservation and options 66 and 67).

Configure DHCP Server

1. Open the DHCP server console, right-click Reservations, and choose New Reservation.

2. We need to specify the MAC address of the server for which we are reserving an IP address. You can also see that no disk has been assigned to this virtual machine (a stateless host).

3. Provide a reservation name (the same as the host name), the IP address you want to reserve for this host, and the MAC address of the host (replacing the colons (:) with dashes (-)), then click Add to add the reservation.

4. Now we need to configure options 66 (TFTP Server Name) and 67 (Bootfile Name) for the reservation. Right-click the reservation and choose Configure Options.

5. Scroll down to option 66, tick the check box for 66, specify the TFTP server name under Data entry –> String value, and click Apply.

6. Tick the check box for option 67 (Bootfile Name) and in the String value field enter the name of the boot file from the TFTP server root directory.

DHCP6

DHCP7

Done.
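If the DHCP server runs Windows Server 2012 or later, the same reservation and options 66/67 can also be set from PowerShell with the DhcpServer module; a sketch only, where the scope, MAC address, TFTP server address, and boot file name are placeholders for your own values (the boot file name comes from the Deploy-TFTP zip extracted earlier):

# Reservation plus options 66/67 for one Auto Deploy host (all values are lab placeholders)
Add-DhcpServerv4Reservation -ScopeId 192.168.174.0 -IPAddress 192.168.174.210 -ClientId "00-50-56-xx-xx-xx" -Name "Auto_ESXi1"
Set-DhcpServerv4OptionValue -ReservedIP 192.168.174.210 -OptionId 66 -Value "192.168.174.204"
Set-DhcpServerv4OptionValue -ReservedIP 192.168.174.210 -OptionId 67 -Value "undionly.kpxe.vmw-hardwired"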

So far we have discussed installing and configuring vCenter Server, the Auto Deploy server, the TFTP server, and the DHCP scope.

We will discuss downloading the offline bundle for ESXi 5.5 with all other VIBs, installing VMware PowerCLI 5.5, and creating the software depot, image profile, deploy rule, and host profiles and updating the rules with the host profile in Deploying VMware ESXi 5.5 with vSphere Auto Deploy 5.5 – Part 2.

Click here for Part 2 – Deploying VMware ESXi 5.5 with vSphere Auto Deploy 5.5

Thank You!

Roshan Jha 

vSphere Syslog Collector 5.5 – Install and Configure

Syslog Collector

Syslog is a way for network devices to send event messages to a logging server, usually known as a syslog server. The syslog protocol is supported by a wide range of devices and can be used to log different types of events. An ESXi host will by default save its log files locally. This is particularly important for hosts deployed without a persistent scratch partition, such as a stateless host provisioned by Auto Deploy. The Syslog Collector addresses the issue of an Auto Deployed host not having a local disk: with no local disk, the log files are stored on a RAM disk, which means that each time the server boots the logs are lost. Not having persistent logs can complicate troubleshooting, so use the Syslog Collector to capture the ESXi hosts' logs on a network server.

Syslog Collector on VCSA

A Syslog Collector is bundled with the vCenter Server Appliance (VCSA) and requires no extra setup. By default logs are saved in /var/log/remote/<HostName>. Just configure the hosts to send their logs to the Syslog Collector.

Syslog Collector on a Windows Server

Syslog Collector can be installed on vCenter Server or on a standalone Windows Server.

1. From the VMware vCenter Installer media, choose vSphere Syslog Collector and click Install to start the installation process.

2. Select the appropriate language for the Syslog Collector and click OK.

3. The installer will prepare the setup process to guide you through installing the Syslog Collector.

4. On the Welcome screen, click Next to continue.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Select where to install the application, where to store the logs, the size of a log file before rotation, and the number of logs to keep on the Syslog Collector server. Unless you have specific requirements, keep the default settings and click Next.

7. The Setup Type screen lets you register the Syslog Collector instance with a vCenter Server instance. Select VMware vCenter Server Installation and click Next.

8. On the VMware vCenter Server Information screen, provide the vCenter Server name, port, and appropriate account credentials to register the Syslog Collector with the vCenter Server, and click Next.

9. Accept the default port settings and click Next.

10. The next screen lets you choose how the Syslog Collector will be identified on the network and by the ESXi hosts. It will detect the host name of the machine on which we are installing the Syslog Collector; choose the default name and click Next.

11. On the Ready to Install screen, click Install to begin the installation.

12. On the Installation Completed screen, click Finish to complete the installation.

13. Once the installation is complete, connect to the vCenter Server –> Home –> Administration –> VMware Syslog Collector –> double-click to open the Syslog Collector.

SLS12

SLS13===========================================================

Configuring ESXi Hosts to Redirect to a Syslog Collector

There are several ways to configure ESXi hosts to redirect logs to a Syslog Collector:

  • Advanced Configuration Options on the ESXi host
  • Via Host’s command Line
  • Host Profile

Configuring ESXi Hosts using the Advanced Configuration Options

1. Connect to vCenter Server using vSphere Client or Web Client –> Home –> Select Host and Clusters.

2. Select the ESXi host –> Configuration –> under Software, Advanced Settings.

3. Under Advanced Settings –> Syslog –> Global –> Syslog.global.logHost, enter the Syslog Collector host name and click OK to complete the configuration.

SLS15===============================================================

Configuring ESXi Hosts using Host’s Command Line

1. Connect to the ESXi host using PuTTY.

2. Enter the root credentials to log in to the host.

3. Review the existing syslog configuration using this command: esxcli system syslog config get

4. If you do not remember the configuration parameters/options, use this command to get help: esxcli system syslog config set --help

5. To configure the remote log host and reload the syslog configuration on the host, use these commands:

esxcli system syslog config set --loghost=vum.dca.com --logdir-unique=true

esxcli system syslog reload

6. Verify the configuration using this command: esxcli system syslog config get
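The same redirection can also be pushed from PowerCLI without opening an SSH session; a sketch, assuming the collector is vum.dca.com as above, listening on the default UDP port 514, and that esxi1.dca.com is one of the lab hosts:

# Point a host at the syslog collector and open the host's outbound syslog firewall rule
Get-VMHost "esxi1.dca.com" | Set-VMHostSysLogServer -SysLogServer "udp://vum.dca.com:514"
Get-VMHost "esxi1.dca.com" | Get-VMHostFirewallException -Name "syslog" | Set-VMHostFirewallException -Enabled:$true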

SLS22=============================================================

Configuring ESXi Hosts using Host Profile.

1. Edit the host profile with the settings below.

Advanced Configuration Option –> Syslog.global.logHost –> enter the Syslog Collector host name and click OK. Apply this host profile to the other hosts and check compliance.

SLS23

Done. We are all set now 🙂

 

Cheers..Roshan Jha

Setting up the ESXi 5.5 Dump Collector

The ESXi Dump Collector is a centralized service that can receive and store memory dumps from ESXi servers when they crash unexpectedly. These memory dumps occur when an ESXi host crashes with a PSOD (Purple Screen of Death): the kernel grabs the contents of memory and dumps them to nonvolatile disk storage before the server reboots. By default, a core dump is saved to the local disk. Where there is no local disk, the core dump is saved to a RAM disk in memory, which is a problem because the core dump is lost when the host reboots.

To solve this, vSphere 5.0 introduced a feature called the ESXi Dump Collector. The Dump Collector enables you to redirect ESXi host core dumps onto a network server.

The dump collector is included as part of the vCenter Server Appliance (VCSA) and requires no extra setup.

CDC1

How to Install ESXi Dump Collector on Windows.

1. To install the Dump Collector on Windows, simply load the VMware vCenter installation media, launch autorun.exe, and from the main install menu choose "vSphere ESXi Dump Collector".

2. Select the appropriate language for the ESXi Dump Collector and click OK.

3. The installer will prepare the setup process for the ESXi Dump Collector.

4. On the Welcome screen, click Next to start the installation process.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Select where to install the ESXi Dump Collector and where to store the dumps (the repository directory); if desired, change the location and repository size, then click Next.

7. The Setup Type screen lets you register the ESXi Dump Collector instance with a vCenter Server instance. Select VMware vCenter Server Installation and click Next.

8. On the VMware vCenter Server Information screen, provide the vCenter Server name, port, and appropriate account credentials to register the ESXi Dump Collector with the vCenter Server, and click Next.

9. Accept the default port 6500 and click Next.

10. The next screen lets you choose how the ESXi Dump Collector will be identified on the network and by the ESXi hosts. It will detect the host name of the machine on which we are installing the Dump Collector; choose the default name and click Next.

11. On the Ready to Install screen, click Install to begin the installation.

12. On the Installation Completed screen, click Finish to complete the installation.

13. Once the installation is complete, connect to the vCenter Server –> Home –> Administration –> VMware ESXi Dump Collector –> double-click to open the ESXi Dump Collector.

You can see the Dump Collector's details and port number.

DC14=============================================================

Now we need to configure the ESXi hosts to redirect their core dumps.

There are two methods to configure ESXi hosts to redirect core dumps to the ESXi Dump Collector server:

  • Using ESXCLI command-line Tools
  • Using Host Profile.

1. Log in to the ESXi host via SSH.

2. Enter the root credentials to log in to the host.

3. Review the existing Dump Collector configuration using this command: esxcli system coredump network get

4. If you do not remember the configuration parameters/options, use this command to get help: esxcli system coredump network set --help

5. Use this command to configure the host's dump redirection settings: esxcli system coredump network set -v vmk0 -i 192.168.174.204 -o 6500

6. Turn on/enable the network core dump using this command: esxcli system coredump network set -e true

7. Finally, verify the Dump Collector service status with this command: esxcli system coredump network check
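If you would rather not SSH into every host, the same esxcli namespace can be driven from PowerCLI via Get-EsxCli; a sketch only: the -V2 switch needs a reasonably recent PowerCLI release, the host name is a lab example, and the argument key names mirror the esxcli options and may differ slightly between ESXi versions.

# Configure, enable and check network coredump on one host through PowerCLI
$esxcli = Get-EsxCli -VMHost "esxi1.dca.com" -V2
$arg = $esxcli.system.coredump.network.set.CreateArgs()
$arg.interfacename = "vmk0"
$arg.serveripv4    = "192.168.174.204"     # key name may be 'serverip' on newer builds
$arg.serverport    = 6500
$esxcli.system.coredump.network.set.Invoke($arg)
$esxcli.system.coredump.network.set.Invoke(@{ enable = $true })
$esxcli.system.coredump.network.check.Invoke()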

DC21

Done!

===========================================================

Now we will configure the ESXi Dump Collector on hosts using a host profile.

1. Create a host profile and edit it with the settings below to enable and configure the network coredump settings. Once done, apply this profile to the rest of the hosts to make them compliant.

DC22

We are all set now.

 

Thank You!

Roshan Jha

Customers will only pay for value and not technology