Category Archives: Virtual Cloud Solutions – A New Beginning

What’s New in VMware vRealize Log Insight 4.0


Last week VMware released vRealize Log Insight 4.0. It features a new, redesigned user interface, offers enhanced alert management functionality, and is natively built for and works seamlessly with vSphere, vRealize, and other VMware products.

vRealize Log Insight 4.0 delivers real-time and archive log management for VMware environments. It can be configured to pull data for tasks, events, and alarms that occur in vCenter Server instances, and it integrates with vRealize Operations Manager.

What’s New in vRealize Log Insight 4.0?

  • vSphere 6.5 compatibility.
  • System notification enhancements.
  • New overall User Interface based on the VMware Clarity standard.
  • New Admin Alert Management tool and User Interface (UI) to view and manage all user alerts.
  • New “blur” on session timeout.
  • Support for Syslog octet-framing over TCP.
  • SLES 11 SP3 and SLES 12 SP1 are supported for Linux agents.
  • New Agent installations have SSL enabled by default. Previously, Agent installs defaulted to SSL off. Upgrading does not affect current SSL settings.
  • For content pack alerts instantiated in 4.0, content pack updates now automatically update alert definitions. If needed, you can preserve customizations by exporting them and then importing them back into the user profile after the update is applied.

Note:- Support for VMware vCenter Server 5.0 and 5.1 has been removed.
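One of the new features above, support for syslog octet framing over TCP, is worth a quick illustration. Octet counting (described in RFC 6587) prefixes each syslog message with its length in bytes, so a receiver can split messages reliably over a TCP stream. A minimal Python sketch (the function names are illustrative, not part of any VMware API):

```python
def frame_syslog(msg: bytes) -> bytes:
    """RFC 6587 octet-counted framing: "<length><SP><message>"."""
    return str(len(msg)).encode("ascii") + b" " + msg

def split_frames(stream: bytes) -> list:
    """Recover individual messages from a stream of octet-counted frames."""
    msgs = []
    while stream:
        length, _, rest = stream.partition(b" ")
        n = int(length)          # declared message length in bytes
        msgs.append(rest[:n])
        stream = rest[n:]
    return msgs

framed = frame_syslog(b"<14>host app: hello")
# framed == b"19 <14>host app: hello"
```

A sender would write `frame_syslog(...)` to a TCP socket connected to the Log Insight syslog port; without the length prefix, receivers have to fall back on newline-delimited (non-transparent) framing, which breaks on multi-line messages.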

To find out more about vRealize Log Insight 4.0, please see the release notes and the VMware Log Insight documentation.

Thank you and Keep Sharing 🙂

What’s New in vRealize Operations Manager 6.4


A few days ago vRealize Operations Manager 6.4 was released with enhanced product usability and stability, and with support for vSphere 6.5 environments. The best part is that this release adds 13 new out-of-the-box dashboards, which are very useful from a capacity, performance, and troubleshooting standpoint.

Here is the list of enhancements made in vRealize Operations Manager 6.4:

  • Pinpoints and helps you clean up alerts in your environments:
    • Groups alerts by definition to identify the noisiest alerts.
    • Allows single-click disabling of alerts across the system or for a given policy.
  • Reflects business context based on vSphere VM folders:
    • vSphere VM folders are now incorporated into the navigation tree for quick searching.
  • Thirteen new installed dashboards to display status, identify problems, and share data at a glance with your peers:
    • A getting-started dashboard to introduce you to using our dashboards.
    • Environment and capacity overview dashboards to get a summary of your environments.
    • VM troubleshooting dashboard that helps you diagnose problems in a VM and start solving them.
    • Infrastructure capacity and performance dashboards to view status and see problems across your datacenter.
    • VM and infrastructure configuration dashboards to highlight inconsistencies and violations of VMware best practices in your environment.
  • Enhanced All Metrics tab to ease troubleshooting:
    • Set of key KPIs per object and resource for a quick start of your investigation process.
    • Ability to correlate properties with metrics.
  • Predictive DRS (pDRS) enables vRealize Operations to provide long-term forecast metrics to augment the short-term placement criteria of the vSphere Distributed Resource Scheduler (DRS). This capability only works with vSphere 6.5 or later.

Supported Deployment Mode for vROps 6.4

You can deploy vRealize Operations Manager 6.4 with any of the following installation modes:

  • VMware virtual appliance
  • RHEL installation package
  • Microsoft Windows installation package

Note: VMware recommends using the virtual appliance option rather than a Linux- or Windows-based deployment. vRealize Operations Manager 6.4 is the final version of the product to support Microsoft Windows installations. The RHEL-based installation option is fully supported in vRealize Operations Manager 6.4, but this support is being deprecated, and the future availability of the RHEL option is not guaranteed.

Please visit the vRealize Operations Manager 6.4 VMware product page for more details.

Thank you and Keep sharing 🙂

VMware NSX Manager 6.x.x Backup and Restore

In this post I am going to discuss how to configure backup for NSX Manager 6.x.x, schedule NSX Manager backups, take an on-demand backup, and restore the NSX Manager configuration from a backup.

We can back up and restore NSX Manager data, which includes the system configuration, events, and audit log tables. Backups are saved to a remote location that must be accessible by the NSX Manager.

We can back up NSX Manager data on demand, or we can schedule backups as per plan.

Let’s start with how to configure a remote server to store the NSX Manager backup.

  1. Log in to the VMware NSX Manager virtual appliance with the admin account.

2. Under NSX Manager Virtual Appliance Management, click Backup & Restore.

3. To store the NSX Manager backup we can use an FTP server with the FTP or SFTP transport protocol. To configure the FTP server settings, click Change next to FTP Server Settings.

4. The Backup Location window will open:

  • Enter the IP/host name of the FTP server.
  • Choose the transfer protocol, either FTP or SFTP, based on what the destination server supports.
  • Enter the port number for the transfer protocol.
  • Enter the user name and password used to connect to the backup server.
  • Enter the backup directory where you want to store the backup.
  • Enter the filename prefix; the prefix will be added to every backup file created for the NSX Manager.
  • Type the pass phrase to secure the backup.
  • Click OK to test the connection between NSX Manager and the FTP server and save the settings.
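These same fields can be sanity-checked before clicking OK. A small sketch, assuming a plain dictionary that mirrors the Backup Location form; the `validate_backup_target` helper is hypothetical, not an NSX API:

```python
def validate_backup_target(cfg: dict) -> bool:
    # Fields mirroring the Backup Location window (all are required by the UI).
    required = ("host", "protocol", "port", "username", "password",
                "backup_dir", "filename_prefix", "passphrase")
    missing = [k for k in required if not cfg.get(k)]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if cfg["protocol"] not in ("FTP", "SFTP"):
        raise ValueError("transfer protocol must be FTP or SFTP")
    if not 0 < int(cfg["port"]) < 65536:
        raise ValueError("port must be between 1 and 65535")
    return True

# Example target matching the fields above (illustrative values only).
target = {
    "host": "192.168.1.50", "protocol": "SFTP", "port": 22,
    "username": "backup", "password": "secret",
    "backup_dir": "/backups/nsx", "filename_prefix": "nsxmgr",
    "passphrase": "encrypt-me",
}
```

Catching a typo in the protocol or a missing passphrase this way is cheaper than waiting for NSX Manager's own connection test to fail.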


5. Once the connection test is done, it will save the settings and display them as shown below.

6. After configuring the FTP server settings, we can schedule the backup. Click Change next to Scheduling. We can schedule backups on an hourly, daily, or weekly basis. Choose your option as per plan (daily is recommended), and click Schedule to save the settings.


7. The backup will run as per the schedule, and you can see an entry for every day.


8. We can also perform an on-demand backup of NSX Manager. For an on-demand backup, click Backup next to Backup History.

9. The Create Backup window will open to confirm that you want to start a backup now. Click Start to begin the backup immediately.

10. It will take a few minutes to complete the backup process.

11. You can see the new backup entry in Backup History.

Now we will discuss how to restore from a backup.

We can restore a backup only on a freshly deployed NSX Manager appliance. So let’s assume that we have an issue with the current NSX Manager and it cannot be recovered.

In this scenario we can deploy a new NSX Manager virtual appliance and configure the FTP server settings to identify the location of the backup to be restored. Select the backup from the backup history, click Restore, and click OK to confirm.

That’s it. This is how we can configure a remote server to store NSX Manager backups, schedule NSX Manager backups, perform on-demand backups, and restore from a backup.

Thank you and Keep spreading the knowledge  🙂

 

 

VMware Released NSX for vSphere 6.2.3

VMware released NSX for vSphere 6.2.3 last month with many changes; it also includes a number of bug fixes over the previous version of NSX.

 

Here are the changes introduced in NSX for vSphere 6.2.3:

  • Logical Switching and Routing
    • NSX Hardware Layer 2 Gateway Integration: expands physical connectivity options by integrating 3rd-party hardware gateway switches into the NSX logical network
    • New VXLAN Port 4789 in NSX 6.2.3 and later: Before version 6.2.3, the default VXLAN UDP port number was 8472. See the NSX Upgrade Guide for details.
  • Networking and Edge Services
    • New Edge DHCP Options: DHCP Option 121 supports the static route option, which is used by the DHCP server to publish static routes to DHCP clients; DHCP Options 66, 67, and 150 support PXE Boot; and DHCP Option 26 supports configuration of the DHCP client network interface MTU by the DHCP server.
    • Increase in DHCP Pool, static binding limits: The following are the new limit numbers for various form factors: Compact: 2048; Large: 4096; Quad large: 4096; and X-large: 8192.
    • Edge Firewall adds SYN flood protection: Avoid service disruptions by enabling SYN flood protection for transit traffic. The feature is disabled by default; use the NSX REST API to enable it.
    • NSX Edge — On Demand Failover: Enables users to initiate on-demand failover when needed.
    • NSX Edge — Resource Reservation: Reserves CPU/Memory for NSX Edge during creation. You can change the default CPU and memory resource reservation percentages using this API. The CPU/Memory percentage can be set to 0 percent each to disable resource reservation.

                  PUT https://<NSXManager>/api/4.0/edgePublish/tuningConfiguration
                  <tuningConfiguration>
                     <lockUpdatesOnEdge>false</lockUpdatesOnEdge>
                     <aggregatePublishing>true</aggregatePublishing>
                     <edgeVMHealthCheckIntervalInMin>0</edgeVMHealthCheckIntervalInMin>
                     <healthCheckCommandTimeoutInMs>120000</healthCheckCommandTimeoutInMs>
                     <maxParallelVixCallsForHealthCheck>25</maxParallelVixCallsForHealthCheck>
                     <publishingTimeoutInMs>1200000</publishingTimeoutInMs>
                     <edgeVCpuReservationPercentage>0</edgeVCpuReservationPercentage>
                     <edgeMemoryReservationPercentage>0</edgeMemoryReservationPercentage>
                     <megaHertzPerVCpu>1000</megaHertzPerVCpu>
                  </tuningConfiguration>
      
    • Change in NSX Edge Upgrade Behavior: Replacement NSX Edge VMs are deployed before upgrade or redeploy. The host must have sufficient resources for four NSX Edge VMs during the upgrade or redeploy of an Edge HA pair. Default value for TCP connection timeout is changed to 21600 seconds from the previous value of 3600 seconds.
    • Cross VC NSX — Universal Distributed Logical Router (DLR) Upgrade: Auto upgrade of Universal DLR on secondary NSX Manager, once upgraded on primary NSX Manager.
    • Flexible SNAT / DNAT rule creation: vnicId no longer needed as an input parameter; removed requirement that the DNAT address must be the address of an NSX Edge VNIC.
    • NSX Edge VM (ESG, DLR) now shows both Live Location and Desired Location. NSX Manager and NSX APIs including GET api/4.0/edges//appliances now return configuredResourcePool and configuredDataStore in addition to current location.
    • NSX Manager exposes the ESXi hostname on which the 3rd-party VM Series firewall SVM is running to improve operational manageability in large-scale environments.
    • NAT rule now can be applied to a VNIC interface and not only an IP address.
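The tuningConfiguration payload shown above is plain XML, so it is easy to inspect or adjust programmatically before PUTting it back. A short sketch using Python's standard library that reads the reservation percentages (sending the actual request to NSX Manager is left out):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the tuningConfiguration document from the post above.
TUNING = """<tuningConfiguration>
   <edgeVCpuReservationPercentage>0</edgeVCpuReservationPercentage>
   <edgeMemoryReservationPercentage>0</edgeMemoryReservationPercentage>
   <megaHertzPerVCpu>1000</megaHertzPerVCpu>
</tuningConfiguration>"""

def reservation_settings(xml_text: str) -> dict:
    """Extract the Edge CPU/memory reservation percentages from the payload."""
    root = ET.fromstring(xml_text)
    return {
        "cpu_pct": int(root.findtext("edgeVCpuReservationPercentage")),
        "mem_pct": int(root.findtext("edgeMemoryReservationPercentage")),
    }

# Both percentages at 0 means Edge resource reservation is disabled.
```

A real call would PUT the full document (including the publishing and health-check elements) to the `edgePublish/tuningConfiguration` endpoint with NSX Manager credentials.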

For complete details please refer to the release notes: http://pubs.vmware.com/Release_Notes/en/nsx/6.2.3/releasenotes_nsx_vsphere_623.html

Thank you and Keep sharing 🙂

vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6.0 – Part 3

This is Part 3 of vCenter Architecture Changes in vSphere 6.0 and Deploying vCenter 6. In Part 1 we discussed the vCenter architecture changes in vSphere 6.0, the vCenter deployment modes, and how to install a Windows-based vCenter Server with an Embedded Platform Services Controller.

In vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6 – Part 2 we discussed how to deploy VCSA 6.0 with an Embedded Platform Services Controller.

Here in vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6.0 – Part 3 we’ll discuss how to deploy VCSA 6.0 with an External Platform Services Controller.

To deploy vCenter Server 6.0 with an External Platform Services Controller, we need to deploy the PSC (Platform Services Controller) first and then install vCenter Server. So let’s start by deploying the PSC.

External PSC (Platform Services Controller) deployment using VCSA 6.0 :- 

1. Log in to any Windows server and copy/mount the ISO image to start the deployment.

2. We can start the VCSA deployment by double-clicking the ‘vcsa-setup.html’ file. If you have not installed VMware-ClientIntegrationPlugin-6.0 on this server, you need to install it before starting the deployment of VCSA 6.0.


3. Click ‘Install’ to start the vCenter Server Appliance 6.0 deployment.


4. On the ‘End User Licence Agreement’ window select ‘Accept the License Agreement’ and Click Next


5. On the ‘Connect to the target Server’ window, enter the ESXi host name or IP address on which you want to deploy the PSC (Platform Services Controller) appliance. Enter the root password of the ESXi host and click ‘Next’

Note:- Make sure lockdown mode is disabled on the ESXi host, NTP is configured, and there is time synchronization between the ESXi host and the NTP server.


6. On the ‘Certificate Warning’ Window click ‘YES’ to accept and continue with deployment.


7. On the ‘Set up Virtual Machine’ window Enter the Appliance Name and root password for the Appliance and Click ‘Next’


8. On the ‘Select deployment type’ window select ‘Install Platform Services Controller’ under External Platform Services Controller and click ‘Next’


9. On the ‘Set up Single Sign-On (SSO)’ window select ‘Create a new SSO domain’, Enter SSO Administrator password, SSO Domain Name and SSO site Name and Click ‘Next’


10. On the ‘Select Appliance Size’ window click ‘Next’ to continue,

Note :- There is no option to select the PSC appliance size; this will deploy an external Platform Services Controller VM with 2 vCPUs and 2 GB of memory, requiring 30 GB of disk space.


11. On the ‘Select datastore’ window select the datastore to store the PSC (we can enable thin disk mode by checking the Enable Thin Disk Mode check box, but for production the recommendation is to deploy in thick disk mode) and click ‘Next’


12. On the ‘Network Settings’ Window Enter the PSC appliance IP address, PSC Name, Subnet Mask, Network Gateway, DNS Server, and NTP server settings and Click ‘Next’


13. On the ‘Ready to Complete’ window review the settings and click ‘Finish’ to start the deployment.

14. It will take 15-20 minutes to download and deploy the appliance, configure the machine, and complete setup.


15. Here we go: ‘Installation of Platform Services Controller completed successfully’. You can see on the screen below that we are now ready to run the installer to install vCenter Server and connect it to this Platform Services Controller using 192.168.201.131.


We have deployed the PSC (Platform Services Controller); now we need to run the installer again to deploy vCenter Server:

External vCenter Server deployment using VCSA 6.0 :-

1. Start the VCSA deployment again by double-clicking the ‘vcsa-setup.html’ file.


2. Click ‘Install’ to start the vCenter Server Appliance 6.0 deployment.


3. On the ‘End User Licence Agreement’ window select ‘Accept the License Agreement’ and Click Next


4. On the ‘Connect to the target Server’ window, enter the ESXi host name or IP address on which you want to deploy the vCenter Server Appliance. Enter the root password of the ESXi host and click ‘Next’

Note:- Make sure lockdown mode is disabled on the ESXi host, NTP is configured, and there is time synchronization between the ESXi host and the NTP server.


5. On the ‘Certificate Warning’ Window click ‘YES’ to accept and continue with deployment.


6. On the ‘Setup Virtual Machine’ window Enter the vCenter server Virtual Machine Name and root password for the Appliance and Click ‘Next’


7. On the ‘Select deployment type’ window select ‘Install vCenter Server (Requires External Platform Services Controller)’ under External Platform Services Controller and click ‘Next’


8. On the ‘Configure Single Sign-On (SSO)’ window Enter Platform Services Controller FQDN or IP Address, Enter SSO Administrator password and Click ‘Next’


9. On the ‘Select Appliance Size’ window select the appliance size (Tiny, Small, Medium, or Large) depending on the number of ESXi hosts and VMs this vCenter Server will be managing, and click ‘Next’


10. On the ‘Select datastore’ window select the datastore to store the VM configuration files and all of the virtual disks (we can enable thin disk mode by checking the Enable Thin Disk Mode check box, but for production the recommendation is to deploy in thick disk mode) and click ‘Next’


11. On the ‘Configure Database’ window select the desired database type and click ‘Next’

Note :- The vCenter Server Appliance can use either the embedded PostgreSQL database, which is recommended, or an external database (Oracle Database 11g or Oracle Database 12c). Unlike the Windows version’s PostgreSQL support, the vCenter Server Appliance supports up to 1,000 hosts or 10,000 virtual machines at full vCenter Server scale with the embedded database. External database support is being deprecated; this is the last release that supports the use of an external database with the vCenter Server Appliance.


12. On the ‘Network Settings’ Window Enter the vCenter Server appliance IP address, vCenter Server Name, Subnet Mask, Gateway, DNS Server, and NTP server settings and Click ‘Next’


13. On the ‘Ready to Complete’ window Review the settings and Click ‘Finish’ to start the deployment.


14. It will take 15-20 minutes to download and deploy the appliance, configure the machine, and complete setup.


15. Here we go: ‘Installation of vCenter Server completed successfully’. You can see on the screen below that the installation is complete and ready to use. Start using the vSphere Web Client at https://192.168.201.132/vsphere-client as the SSO administrator (administrator@vsphere.local)


16. We can connect to the ESXi host directly using the VI Client and see that both VMs (Platform Services Controller and vCenter Server) are up and running and ready to use.

That’s it for now. Stay tuned for the next topic 🙂

Until then Share & Spread The Knowledge !!!!

vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6.0 – Part 2

In my last blog, vCenter Architecture Changes in vSphere 6.0 and Deploying vCenter 6 – Part 1, we discussed the architectural changes in vSphere 6.0. We also discussed the vCenter deployment modes (vCenter Server with an Embedded Platform Services Controller and vCenter Server with an External Platform Services Controller), and how to install a Windows-based vCenter Server with an Embedded Platform Services Controller.

In this vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6 – Part 2 we will discuss how to deploy VCSA 6.0 with an Embedded Platform Services Controller.

VCSA 6.0 deployment :-

1. Download the VCSA installer from the VMware website and copy/mount the ISO image to any Windows PC to start the deployment.

2. Before starting the VCSA deployment we need to install the VMware Client Integration Plug-in.

3. The best part is that the VMware Client Integration Plug-in is part of the VCSA installer. Browse the VCSA folder and double-click VMware-ClientIntegrationPlugin-6.0.0 to start the installation.

4. It is a pretty simple Windows installation process. Just follow the screens to complete the VMware Client Integration Plug-in installation.

5. On the ‘Welcome to the Installation wizard for the VMware Client Integration Plug-in 6.0.0’ window click ‘Next’

6. On the ‘End-User License Agreement’ window accept the license agreement and click ‘Next’


7. On the ‘Destination Folder’ window change the folder if required, or click ‘Next’

8. On the ‘Ready to Install the Plug-in’ window click ‘Install’ to start the Installation.


9. It will take 1-2 minutes to finish the installation.


10. Here we go: ‘Installation Completed’. Click ‘Finish’ to close the window.


11. Once we have finished the VMware Client Integration Plug-in installation, we can start the VCSA deployment by double-clicking the ‘vcsa-setup.html’ file.

12. Click ‘Install’ to start the deployment.


13. ‘Accept the License Agreement’ and Click ‘Next’


14. On the ‘Connect to the target Server’ window, enter the ESXi host name or IP address on which you want to deploy the vCenter Server Appliance. Enter the root password of the ESXi host and click ‘Next’

Note:- Make sure lockdown mode is disabled on the ESXi host, NTP is configured, and there is time synchronization between the ESXi host and the NTP server.


15. On the ‘Certificate Warning’ Window click ‘YES’ to accept and continue with deployment.


16. On the ‘Setup Virtual Machine’ window Enter the VM name and root password for the Appliance and Click ‘Next’


17. On the ‘Select deployment type’ window select ‘Install vCenter Server with an Embedded Platform Services Controller’ under Embedded Platform Services Controller and click ‘Next’


18. On the ‘Set up Single Sign-On’ window select ‘Create a new SSO domain’, Enter SSO Administrator password, SSO Domain Name and SSO site name and Click ‘Next’


19. On the ‘Select Appliance Size’ window select the appliance size per the size of the infrastructure to be managed by the vCenter Server; for demo purposes I am choosing ‘Tiny (Up to 10 hosts, 100 VMs)’. You can see that this will deploy a tiny VM configured with 2 vCPUs and 8 GB of memory, requiring 120 GB of disk space.


20. On the ‘Select datastore’ window select the datastore to store the vCenter Server (we can enable thin disk mode by checking the Enable Thin Disk Mode check box, but for production the recommendation is to deploy in thick disk mode) and click ‘Next’


21. On the ‘Configure Database’ window select the desired database type and click ‘Next’

The vCenter Server Appliance can use either the embedded PostgreSQL database, which is recommended, or an external database (Oracle Database 11g or Oracle Database 12c). Unlike the Windows version’s PostgreSQL support, the vCenter Server Appliance supports up to 1,000 hosts or 10,000 virtual machines at full vCenter Server scale with the embedded database. External database support is being deprecated; this is the last release that supports the use of an external database with the vCenter Server Appliance.


22. On the ‘Network Settings’ Window Enter the vCenter Server appliance IP address, Subnet Mask, Gateway, DNS Server, and NTP server settings and Click ‘Next’


23. On the ‘Ready to Complete’ window Review the settings and Click ‘Finish’ to start the deployment


24. It will take several minutes to finish the deployment.


25. Here we go: installation of vCenter Server is successfully completed and it is ready for use. We can access the vSphere Web Client using https://<IP address>/vsphere-client and log in with the SSO administrator credentials.


26. That’s it. We can connect to vCenter Server and start using and configuring it.


That’s it 🙂 In this Part 2 we discussed how to deploy VCSA 6.0 with an Embedded Platform Services Controller. In the next part, vCenter Architecture Changes in vSphere 6.0 and Deploying VCSA 6.0 – Part 3, we will discuss how to deploy VCSA 6.0 with an External Platform Services Controller. Until then, please stay tuned and keep spreading the knowledge 🙂

 

vCenter Architecture Changes in vSphere 6.0 and Deploying vCenter 6 – Part 1

In this post we will discuss the vCenter Server architecture changes from vSphere 5.5 to vSphere 6. We will also discuss different use cases, deploying vCenter Server 6 with the installable Windows-based version, and deploying vCenter Server 6 with the VCSA.

First, let me briefly describe the VMware vCenter Server 5.5 components; multiple individual components are used to deliver the vCenter Server management solution:

  1. vCenter Single Sign-On
  2. vSphere Web Client
  3. vCenter Inventory Service
  4. vCenter Server

When deploying a vCenter Server there are two deployment processes: Simple Install and Custom Install.

Simple Install :- The Simple Install deployment option deploys vCenter Server with its default options to a single physical or virtual machine. It installs all four components of vCenter Server (vCenter Single Sign-On, vSphere Web Client, vCenter Inventory Service, and the vCenter Server instance) on a single Windows server. This is ideal for small customers.

There are several limitations with the Simple Install:

When you choose Simple Install, it installs to the default location with no option to change the destination folder. Many customers prefer to install their applications to a volume other than the system volume.

Simple Install provides only the vCenter Single Sign-On deployment option of “vCenter Single Sign-On for your first vCenter Server.” Therefore it cannot be used for additional vCenter Servers.

Custom Install :- The Custom Install gives you the option to install each individual component independently on the same or different servers. Because each component is installed individually, it is important that the components are installed in the following order:

1. vCenter Single Sign-On

2. vSphere Web Client (not compulsory; can be installed later)

3. vCenter Inventory Service

4. vCenter Server

With the release of vSphere 6.0, vCenter Server installation and configuration has been simplified dramatically. The installation of vCenter now consists of only two components, instead of the four components in vSphere 5.5, and these provide all services for the virtual datacenter:

  • Platform Services Controller :- vSphere 6.0 introduces a new component called the Platform Services Controller (PSC). There is no longer a need to install these components individually; when we choose to install the Platform Services Controller, all the services included in it are installed:
    • vCenter Single Sign-On
    • License Service
    • Lookup Service
    • VMware Directory Service
    • VMware Certificate Authority
  • vCenter Services :- Similarly, when we install vCenter Services, a group of services is installed, including:
    • vCenter Server
    • vSphere Web Client
    • vCenter Inventory Service
    • vSphere Auto Deploy
    • vSphere ESXi Dump Collector
    • vSphere Syslog Collector (Microsoft Windows)/VMware Syslog Service (Appliance)

vCenter Deployment Modes:- 

There are two basic architectures that can be used when deploying vSphere 6.0 :-

1. Install vCenter Server with an Embedded Platform Services Controller :- This mode installs all services on the same virtual machine or physical server as vCenter Server. It installs the PSC components first, followed by the vCenter Server services. This mode is ideal for small environments.

2. Install vCenter Server with an External Platform Services Controller :- This mode installs the platform services on a system that is separate from where the vCenter services are installed. Installing the platform services is a prerequisite for installing vCenter. This mode is ideal for larger environments, where there are multiple vCenter Servers.

There are several use cases, depending on business requirements, for deploying vCenter Server either with an Embedded Platform Services Controller or with an External Platform Services Controller. Here is a list of recommended use cases for VMware vSphere 6.0 deployment. You can also find them here :- http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2108548

vCenter Server comes not only as an installable Windows-based application but also as a SUSE Linux-based virtual appliance. Will you use the Windows-based version or the virtual appliance? There are advantages and disadvantages to each approach; I am not going to discuss which to use here.

Here we will discuss both approaches: how to install a Windows-based vCenter Server, and how to deploy the VCSA (VMware vCenter Server Virtual Appliance) with an Embedded and an External PSC.

Install Windows-based vCenter Server 6 with an Embedded Platform Services Controller:-

  • To install the Windows-based vCenter Server we need to have a Windows server ready.
  • The server must be a member of a domain.
  • Create a new account in Active Directory for the vCenter Server before the deployment and assign the required permissions on the host where you want to install vCenter Server.
  • Create a DNS entry for the server.

Now we are ready to start vCenter Server installation.

1. Log in to the Windows server with the service account and copy/mount the installer VMware-VIMSetup-all-6.0.0-2800571.iso

2. Browse to the location of the installer and double-click Autorun to start the VMware vCenter Installer.

3. The VMware vCenter Installer will start. Select ‘vCenter Server for Windows’ under VMware vCenter Server and click ‘Install’.

4. The VMware vCenter Server installation will start. Click ‘Next’ on the ‘Welcome to the VMware vCenter Server 6.0 Installer’ window.

5. Check the ‘I accept the terms of the license agreement’ box and then click ‘Next’

6. Here we are installing vCenter with an Embedded Platform Services Controller, so click ‘vCenter Server and Embedded Platform Services Controller’ and then click ‘Next’.

7. Enter the ‘System Name’ on the ‘System Network Name’ window and then click ‘Next’. In most cases it will auto-select the FQDN of the server on which we are installing vCenter Server.

8. On the ‘vCenter Single Sign-On Configuration’ page enter the ‘Domain Name’ (by default vsphere.local), the SSO administrator password, and the ‘Site Name’, and then click ‘Next’

9. On the ‘vCenter Server Service Account’ page select ‘Specify a user service account’, enter the password for the service account, and then click ‘Next’. If you are logged in as the service account it will automatically fill in the ‘Account user name’.

10. Choose the ‘Database Settings’ and then click ‘Next’. In a production environment it is always recommended to use an external database.

11. On the ‘Configure Ports’ page you would normally keep the default ports, but if required you can change the port numbers, then click ‘Next’

12. On the ‘Destination Directory’ page change the location if required, or click ‘Next’

13. On the ‘Ready to Install’ page review the settings and click ‘Install’ to start the installation process.

14. It will take several minutes to install the vCenter Server Platform Services Controller and the vCenter Server services. Wait for the installation to finish.

15. Installation in progress…

16. Here we go: ‘Setup Completed’ and vCenter Server has been successfully installed. Click ‘Finish’ to complete the installation.


We will discuss deploying a new vCenter 6 with the VCSA (Embedded and External) in the next part… so stay tuned 🙂

SHARE & SPREAD THE KNOWLEDGE!!

 

NSX Troubleshooting – VMs out of Network on VNI 5XXX

Currently I am working for a customer running network virtualization (NSX) in their SDDC environment. A few weeks ago we faced an issue where multiple VMs lost network connectivity in one of the compute clusters, so I wanted to share the experience and hope it will be useful for the many folks working on NSX. The customer is running NSX 6.1.1 with multiple VNIs managing networks for multiple environments (e.g. Prod, DR, DEV, QA, Test, etc.).

Here are the steps:-

  1. After receiving the report we tried to ping random VMs from the list, and the VMs were not reachable.
  2. The next step was to find the VNI number for those VMs and see if they were all part of the same VNI. And yes, those VMs were all part of the same VNI (e.g. 5XXX).
  3. Once we knew the VNI number, the next step was to find out whether all VMs connected to VNI 5XXX were impacted or only a few.
  4. From step 3 we learned that only a few VMs were impacted, not all. After drilling down we found that the impacted VMs were all running on one ESXi host in the cluster, while the VNI was working fine on the other hosts.
  5. To bring the VMs online we moved them to another host; after the migration the VMs were reachable and users were able to connect to the applications.
  6. Next came the Root Cause Analysis (RCA): why did VMs connected to VNI 5XXX on ESXi host XXXXXXXXXX lose network?
  7. SSH to the ESXi host and run the following command to check the VNI status on the host: net-vdl2 -l. You can see in the output screen below that VXLAN network 5002 is DOWN, and all the impacted VMs were part of it.

9. To fix the issue we need to restart the netcpa daemon on the host. Here are the commands to stop, start, and check the status of the netcpa daemon:

1) Stop the netcpa daemon: /etc/init.d/netcpad stop

2) Start the netcpa daemon: /etc/init.d/netcpad start

3) Check the status of the service: /etc/init.d/netcpad status

10. After starting the netcpa daemon, check the VNI status again by running net-vdl2 -l. Now you can see that VXLAN 5002 is UP.

11. The next step was to move a few VMs on VNI 5002 back to this host and check the connectivity of the VMs and applications. All were perfectly fine after moving back to this host.

Note:- This issue has been addressed in NSX version 6.1.4e. If you are running NSX 6.1.4e or later you should not hit this issue, as the controller monitors the netcpad daemon and restarts it if it fails on any of the hosts.
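The host-side check from steps 7 and 10 can be scripted so that DOWN VNIs are flagged automatically. The snippet below is an illustrative Python sketch only: the sample text is a simplified, hypothetical rendering of ‘net-vdl2 -l’ style output (real formatting varies by ESXi/NSX version).

```python
# Illustrative sketch only: SAMPLE_LISTING is a simplified, hypothetical
# rendering of 'net-vdl2 -l' output, not the real on-host format.
SAMPLE_LISTING = """\
VXLAN network: 5001 State: UP
VXLAN network: 5002 State: DOWN
VXLAN network: 5003 State: UP
"""

def down_vnis(listing):
    """Return the VNI numbers whose state is reported as DOWN."""
    down = []
    for line in listing.splitlines():
        if line.endswith("DOWN"):
            # The VNI number is the token right after 'network:'
            vni = int(line.split("network:")[1].split()[0])
            down.append(vni)
    return down

print(down_vnis(SAMPLE_LISTING))  # [5002]
```

If any VNI comes back DOWN, the netcpad stop/start sequence above is the fix on that host.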

That’s it ….SHARE & SPREAD THE KNOWLEDGE!!

NSX 6.2 has been released with many enhancements and cool new features

A nice surprise came last week with the NSX 6.2 release. The expectation was that NSX 6.2 would be released during VMworld 2015 US, but VMware released the much-awaited NSX 6.2 a week before VMworld 2015 US.

NSX vSphere 6.2 includes the following new and changed features:

  • Cross vCenter Networking and Security
    • NSX 6.2 with vSphere 6.0 supports Cross vCenter NSX: where logical switches (LS), distributed logical routers (DLR) and distributed firewalls (DFW) can be deployed across multiple vCenters, thereby enabling logical networking and security for applications with workloads (VMs) that span multiple vCenters or multiple physical locations.
    • Consistent firewall policy across multiple vCenters: Firewall Rule Sections in NSX can now be marked as “Universal” whereby the rules defined in these sections get replicated across multiple NSX managers. This simplifies the workflows involved in defining a consistent firewall policy spanning multiple NSX installations.
    • Cross vCenter vMotion with DFW: Virtual Machines that have policies defined in the “Universal” sections can be moved across hosts that belong to different vCenters with consistent security policy enforcement.
    • Universal Security Groups: Security Groups in NSX 6.2 that are based on IP Address, IP Set, MAC Address and MAC Set can now be used in Universal rules whereby the groups and group memberships are synced up across multiple NSX managers. This improves the consistency in object group definitions across multiple NSX managers, and enables consistent policy enforcement
    • Universal Logical Switch (ULS): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of logical switches that can span multiple vCenters, allowing the network administrator to create a contiguous L2 domain for an application or tenant.
    • Universal Distributed Logical Router (UDLR): This new functionality introduced in NSX 6.2 as a part of Cross vCenter NSX allows creation of distributed logical routers that can span multiple vCenters. The universal distributed logical routers enable routing across the universal logical switches described earlier. In addition, NSX UDLR is capable of localized north-south routing based on the physical location of the workload
  • Operations and Troubleshooting Enhancements
    • New traceflow troubleshooting tool: Traceflow is a troubleshooting tool that helps identify if the problem is in the virtual or physical network. It provides the ability to trace a packet from source to destination and helps observe how that packet passes through the various network functions in the virtual network.
    • Flow monitoring and IPFIX separation: In NSX 6.1.x, NSX supported IPFIX reporting, but IPFIX reporting could be enabled only if flow reporting to NSX Manager was also enabled. Starting in NSX 6.2.0, these features are decoupled. In NSX 6.2.0 and later, you can enable IPFIX independent of flow monitoring on NSX Manager.
    • New CLI monitoring and troubleshooting commands in 6.2: See the knowledge base article for more information.
    • Central CLI: Central CLI reduces troubleshooting time for distributed network functions. Commands are run from the NSX Edge command line and retrieve information from controllers, hosts, and the NSX Manager. This allows you to quickly access and compare information from multiple sources. The central CLI provides information about logical switches, logical routers, distributed firewall and edges.
    • CLI ping command adds configurable packet size and do-not-fragment flag: Starting in NSX 6.2.0, the NSX CLI ‘ping’ command offers options to specify the data packet size (not including the ICMP header) and to set the do-not-fragment flag. See the NSX CLI Reference for details.
    • Show health of the communication channels: NSX 6.2.0 adds the ability to monitor communication channel health. The channel health status between NSX Manager and the firewall agent, between NSX Manager and the control plane agent, and between host and the NSX Controller can be seen from the NSX Manager UI. In addition, this feature detects when configuration messages from the NSX Manager have been lost before being applied to a host, and it instructs the host to reload its NSX configuration when such message failures occur.
    • Standalone Edge L2 VPN client CLI: Prior to NSX 6.2, a standalone NSX Edge L2 VPN client could be configured only through OVF parameters. Commands specific to standalone NSX Edge have been added to allow configuration using the command line interface. The OVF is now used for initial configuration only.
  • Logical Networking and Routing
    • L2 Bridging Interoperability with Distributed Logical Router: With VMware NSX for vSphere 6.2, L2 bridging can now participate in distributed logical routing. The VXLAN network to which the bridge instance is connected, will be used to connect the routing instance and the bridge instance together.
    • Support of /31 prefixes on ESG and DLR interfaces per RFC 3021
    • Support of relayed DHCP request on the ESG DHCP server
    • Ability to keep VLAN tags over VXLAN
    • Exact Match for Redistribution Filters: The redistribution filter uses the same matching algorithm as an ACL, so it is an exact prefix match by default (unless the le or ge options are used).
    • Support of administrative distance for static route
    • Ability to enable/disable uRPF check per interface on Edge
    • Display AS path in CLI command show ip bgp
    • HA interface exclusion from redistribution into routing protocols on the DLR control VM
    • Distributed logical router (DLR) force-sync avoids data loss for east-west routing traffic across the DLR.
    • View active edge in HA pair: In the NSX 6.2 web client, you can find out if an NSX Edge appliance is the active or backup in an HA pair.
    • REST API supports reverse path filter (rp_filter) on Edge: Using the system control REST API, the rp_filter sysctl can be configured; it is not exposed on the REST API for vnic interfaces and sub-interfaces. See the NSX API Guide for more information.
    • Behavior of the IP prefix ‘GE’ and IP prefix ‘LE’ BGP route filters: In NSX 6.2, the following enhancements have been made to BGP route filters:
      • LE / GE keywords not allowed: For the null route network address (defined as ANY or in CIDR format 0.0.0.0/0), less-than-or-equal-to (LE) and greater-than-or-equal-to (GE) keywords are no longer allowed. In previous releases, these keywords were allowed.
      • LE and GE values in the range 0-7 are now treated as valid. In previous releases, this range was not valid.
      • For a given route prefix, you can no longer specify a GE value that is greater than the specified LE value.
  • Networking and Edge Services
    • The management interface of the DLR has been renamed to HA interface. This has been done to highlight the fact that the HA keepalives travel through this interface and that interruptions in traffic on this interface can result in a split-brain condition.
    • Load balancer health monitoring improvements: Delivers granular health monitoring, that reports information on failure, keeps track of last health check and status change, and reports failure reasons.
    • Support VIP and pool port range: Enables load balancer support for applications that require a range of ports.
    • Increased maximum number of virtual IP addresses (VIPs): VIP support rises to 1024.
  • Security Service Enhancements
    • New IP address discovery mechanisms for VMs: Authoritative enforcement of security policies based on VM names or other vCenter-based attributes requires that NSX know the IP address of the VM. In NSX 6.1 and earlier, IP address discovery for each VM relied on the presence of VMware Tools (vmtools) on that VM or the manual authorization of the IP address for that VM. NSX 6.2 introduces the option to discover the VM’s IP address using DHCP snooping or ARP snooping. These new discovery mechanisms enable NSX to enforce IP address-based security rules on VMs that do not have VMware Tools installed.
  • Solution Interoperability
    • Support for vSphere 6.0 Platform Services Controller topologies: NSX now supports external Platform Services Controllers (PSC), in addition to the already supported embedded PSC configurations.
    • Support for vRealize Orchestrator Plug-in for NSX 1.0.2: With NSX 6.2 release, NSX-vRO plug-in v1.0.2 is introduced in vRealize Automation (vRA).
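The GE/LE rule changes for BGP route filters described above can be condensed into a small validity check. This is a hedged Python sketch of the rules as stated, not an NSX API; the 0-32 upper bound is an assumption based on IPv4 prefix lengths.

```python
def valid_ge_le_filter(prefix, ge=None, le=None):
    """Check a BGP route filter against the NSX 6.2 GE/LE rules:

    - No GE/LE keywords on the null route (ANY or 0.0.0.0/0).
    - Values 0-7 are now valid (0-32 assumed overall for IPv4).
    - GE must not exceed LE for a given prefix.
    """
    if prefix in ("ANY", "0.0.0.0/0") and (ge is not None or le is not None):
        return False
    for value in (ge, le):
        if value is not None and not 0 <= value <= 32:
            return False
    if ge is not None and le is not None and ge > le:
        return False
    return True

print(valid_ge_le_filter("10.0.0.0/8", ge=16, le=24))  # True
print(valid_ge_le_filter("0.0.0.0/0", le=24))          # False: LE on null route
print(valid_ge_le_filter("10.0.0.0/8", ge=28, le=24))  # False: GE > LE
```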

For more details please refer to the VMware NSX 6.2 for vSphere Documentation Center: http://pubs.vmware.com/NSX-62/index.jsp

Thank you 🙂

Network Virtualization with VMware NSX – Part 8

Let’s get back into NSX mode 🙂 In my last blog, Network Virtualization with VMware NSX – Part 7, we discussed Network Address Translation (NAT) and load balancing with the NSX Edge gateway. Here in Network Virtualization with VMware NSX – Part 8 we will discuss high availability of the NSX Edge.

High Availability

High Availability (HA) ensures that the NSX Edge appliance is always available, by deploying a pair of Edge appliances on your virtualized infrastructure. We can enable HA either while installing the NSX Edge appliance or afterwards.

The primary NSX Edge appliance is in the active state and the secondary appliance is in the standby state. NSX Edge replicates the configuration of the primary appliance to the standby appliance. VMware recommends creating the primary and secondary appliances on separate datastores. If you create the primary and secondary appliances on the same datastore, the datastore must be shared across all hosts in the cluster for the HA appliance pair to be deployed on different ESXi hosts.

All NSX Edge services run on the active appliance. The primary appliance maintains a heartbeat with the standby appliance and sends service updates through an internal interface. If a heartbeat is not received from the primary appliance within the specified time (the default is 15 seconds), the primary appliance is declared dead. The standby appliance then moves to the active state, takes over the interface configuration of the primary appliance, and starts the NSX Edge services that were running on it. After the switchover, load balancer and VPN services need to re-establish their TCP connections with the NSX Edge, so those services are disrupted for a short while. Logical switch connections and firewall sessions are synced between the primary and standby appliances, so there is no service disruption for them during the switchover.
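The declare-dead logic can be illustrated in a few lines of Python (a conceptual sketch, not NSX code), using the 15-second default:

```python
NSX_DEFAULT_DEAD_TIME = 15  # seconds; the 'Declare Dead Time' default

def primary_is_dead(last_heartbeat_at, now, dead_time=NSX_DEFAULT_DEAD_TIME):
    """The standby declares the primary dead when no heartbeat
    has arrived within dead_time seconds."""
    return (now - last_heartbeat_at) > dead_time

# Heartbeat seen 10 s ago: primary is still considered alive.
print(primary_is_dead(last_heartbeat_at=100.0, now=110.0))  # False
# Heartbeat seen 16 s ago: the standby takes over as active.
print(primary_is_dead(last_heartbeat_at=100.0, now=116.0))  # True
```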

If the NSX Edge appliance fails and a bad state is reported, high availability force-synchronizes the failed appliance to revive it. When the appliance is revived, it takes on the configuration of the now-active appliance and stays in the standby state. If the NSX Edge appliance is dead, you must delete the appliance and add a new one.

NSX Edge ensures that the two HA NSX Edge virtual machines are not on the same ESX host even after you use DRS and vMotion (unless you manually vMotion them to the same host).

Now let’s verify the HA settings and configure high availability for the NSX Edge:

1. Log in to the Web Client –> Home –> Networking and Security –> NSX Edges –> Double-click either the Logical Router or the NSX Edge Services Router.

2. This opens the selected device. Click Manage –> Settings –> Configuration, and under HA Configuration you can see that the HA Status is DISABLED. You can check the Logical Router the same way.

3. The same can be verified from the management cluster where we have deployed the NSX Edge appliances. You can see in the screenshot below that only one instance of the Edge Services Router (Edge Services Router-0) and one instance of the Logical Router (Logical-Router-0) is running.

4. Now let’s enable HA for the NSX Edge. Click Manage –> Settings –> Configuration, and under HA Configuration click Change.

5. The Change HA Configuration window opens. Set HA Status to Enable, select the vNIC, enter the Declare Dead Time (the default is 15 seconds), enter the management IPs used for the heartbeat on both nodes, and click OK.

6. After a few seconds you can see that the HA Status under HA Configuration now shows Enabled.

7. Let’s go back to the management cluster to see the number of nodes. Now you can see that there are two instances up and running: Edge Services Router-0 and Edge Services Router-1.

8. That’s it. The NSX Edge Services Router is now running in HA mode; if the active node fails, the standby node takes over after 15 seconds. We can enable HA for the Logical Router the same way. I have added screenshots for the Logical Router.


9. Once you have enabled HA for the NSX Edge, you can SSH to the NSX Edge and verify the active and standby nodes by running the show service highavailability command. Let me connect and run this command to verify.

You can see in the result below that this node (vshield-edge-4-0) is active and vshield-edge-4-1 is the peer host, i.e. the standby node.

10. Now let’s shut down vshield-edge-4-0 and run the show service highavailability command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is unreachable.

11. Now let’s power on vshield-edge-4-0 and run the command again.

Now you can see in the result below that vshield-edge-4-1 is active and vshield-edge-4-0 is the peer host, i.e. the standby node.

That’s it! This is how we can enable HA and test failover for the NSX Edge.

Thank You and Keep sharing :)

—————————————————————————————————

Other NSX Parts:-

Network Virtualization with VMware NSX – Part 1

Network Virtualization with VMware NSX – Part 2

Network Virtualization with VMware NSX – Part 3

Network Virtualization with VMware NSX – Part 4

Network Virtualization with VMware NSX – Part 5

Network Virtualization with VMware NSX – Part 6

Network Virtualization with VMware NSX – Part 7

Network Virtualization with VMware NSX – Part 8

vSphere Syslog Collector 5.5 – Install and Configure

Syslog Collector

Syslog is a way for network devices to send event messages to a logging server, usually known as a syslog server. The syslog protocol is supported by a wide range of devices and can be used to log different types of events. An ESXi host will by default save its log files locally. This matters for hosts deployed without a persistent scratch partition, such as stateless hosts provisioned by Auto Deploy. With no local disk, the log files are stored on a ramdisk, which means the logs are lost each time the server reboots. Not having persistent logs can complicate troubleshooting. Use the Syslog Collector to capture the ESXi hosts’ logs on a network server.
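To show what sending an event message to a syslog server looks like on the wire, here is a minimal, hedged Python sketch of a classic BSD-syslog (RFC 3164 style) datagram. The host name and collector address are placeholders, and ESXi log forwarding is actually configured through esxcli or the Syslog.global.logHost option, not hand-rolled code like this:

```python
import socket

def bsd_syslog_packet(facility, severity, hostname, message):
    """Build a minimal RFC 3164 style payload: <PRI> is facility*8 + severity."""
    pri = facility * 8 + severity
    return f"<{pri}>{hostname} {message}".encode()

# facility 16 (local0), severity 6 (informational) -> PRI 134
packet = bsd_syslog_packet(16, 6, "esxi01.lab.local", "vmkernel: example event")
print(packet[:5])  # b'<134>'

# Delivery is a single UDP datagram to port 514 (placeholder collector name):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("syslog-collector.lab.local", 514))
```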

Syslog Collector on VCSA

A Syslog Collector is bundled with the vCenter Server Appliance (VCSA) and requires no extra setup. By default, logs are saved in /var/log/remote/<HostName>. Just configure the hosts to send their logs to the Syslog Collector.

Syslog Collector on a Windows Server

The Syslog Collector can be installed on the vCenter Server itself or on a standalone Windows server.

1. From the VMware vCenter installer media choose vSphere Syslog Collector and click Install to start the installation process.

2. Select the appropriate language for the Syslog Collector and click OK.

3. The installer will prepare the setup process to guide you through installing the Syslog Collector.

4. On the Welcome screen click Next to continue.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Select where to install the application, where to store the logs, the size of a log file before rotation, and the number of logs to keep on the Syslog Collector server. Unless you have specific requirements, accept the default settings and click Next.

7. The Setup Type screen allows you to register the Syslog Collector instance with a vCenter Server instance. Select VMware vCenter Server installation and click Next.

8. On the VMware vCenter Server Information screen provide the vCenter Server name, port, and appropriate account credentials to register the Syslog Collector with vCenter Server, and click Next.

9. Accept the default port settings and click Next.

10. The next screen allows you to choose how the Syslog Collector will be identified on the network and by the ESXi hosts. It will detect the host name of the server on which we are installing the Syslog Collector; choose the default name and click Next.

11. On the Ready to Install screen click Install to begin the installation.

12. On the Installation Completed screen click Finish to complete the installation.

13. Once the installation is complete, connect to vCenter Server –> Home –> Administration –> VMware Syslog Collector –> Double-click to open the Syslog Collector.


===========================================================

Configuring ESXi Hosts to Redirect to a Syslog Collector

There are several ways to configure ESXi hosts to redirect logs to a Syslog Collector:

  • Advanced configuration options on the ESXi host
  • Via the host’s command line
  • Via a host profile

Configuring ESXi Hosts using the Advanced Configuration Options

1. Connect to vCenter Server using the vSphere Client or Web Client –> Home –> Select Hosts and Clusters.

2. Select the ESXi host –> Configuration –> Under Software, Advanced Settings.

3. Under Advanced Settings –> Syslog –> Global –> Syslog.global.logHost, enter the Syslog Collector host name and click OK to complete the configuration.

===============================================================

Configuring ESXi Hosts Using the Host’s Command Line

1. Connect to the ESXi host using PuTTY.

2. Enter the root credentials to log in to the host.

3. Review the existing syslog configuration using the command below:
esxcli system syslog config get

4. If you do not remember the configuration parameters/options, use the command below to get help:
esxcli system syslog config set --help

5. To configure the remote log host and reload the syslog configuration on the host, use these commands:

esxcli system syslog config set --loghost=vum.dca.com --logdir-unique=true

esxcli system syslog reload

6. Verify the configuration using the command below:
esxcli system syslog config get

=============================================================

Configuring ESXi Hosts Using a Host Profile

1. Edit the host profile with the settings below.

Advanced Configuration Option –> Syslog.global.logHost –> Enter the Syslog Collector host name and click OK. Apply this host profile to the other hosts and remediate them to make them compliant.

Done. We are all set now 🙂

 

Cheers..Roshan Jha

Setting up the ESXi 5.5 Dump Collector

The ESXi Dump Collector is a centralized service that can receive and store memory dumps from ESXi servers when they crash unexpectedly. These memory dumps occur when an ESXi host crashes with a PSOD (Purple Screen of Death): the kernel grabs the contents of memory and dumps them to nonvolatile disk storage before the server reboots. By default, a core dump is saved to the local disk. Where there is no local disk, the core dump is saved to a ramdisk in memory, which is a problem because the core dump is lost when the host reboots.

To solve this, vSphere 5.0 introduced a feature called the ESXi Dump Collector. The Dump Collector enables you to redirect ESXi host core dumps to a network server.

The dump collector is included as part of the vCenter Server Appliance (VCSA) and requires no extra setup.


How to Install ESXi Dump Collector on Windows.

1. To install the Dump Collector on Windows, simply load the VMware vCenter installation media, launch autorun.exe, and from the main install menu choose “vSphere ESXi Dump Collector”.

2. Select the appropriate language for the ESXi Dump Collector and click OK.

3. The installer will prepare the setup process for the ESXi Dump Collector.

4. On the Welcome screen click Next to start the installation process.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Select where to install the ESXi Dump Collector and where to store the dumps (the repository directory). If desired, change the location and repository size, then click Next.

7. The Setup Type screen allows you to register the ESXi Dump Collector instance with a vCenter Server instance. Select VMware vCenter Server installation and click Next.

8. On the VMware vCenter Server Information screen provide the vCenter Server name, port, and appropriate account credentials to register the ESXi Dump Collector with vCenter Server, and click Next.

9. Accept the default port 6500 and click Next.

10. The next screen allows you to choose how the ESXi Dump Collector will be identified on the network and by the ESXi hosts. It will detect the host name of the server on which we are installing the Dump Collector; choose the default name and click Next.

11. On the Ready to Install screen click Install to begin the installation.

12. On the Installation Completed screen click Finish to complete the installation.

13. Once the installation is complete, connect to vCenter Server –> Home –> Administration –> VMware ESXi Dump Collector –> Double-click to open the ESXi Dump Collector.

You can see the Dump Collector’s details and port number.

=============================================================

Now we need to configure the ESXi hosts to redirect their core dumps.

There are two methods to configure ESXi hosts to redirect core dumps to the ESXi Dump Collector server:

  • Using the ESXCLI command-line tools
  • Using a host profile

1. Log in to the ESXi host via SSH.

2. Enter the root credentials to log in to the host.

3. Review the existing Dump Collector configuration using the command below:
esxcli system coredump network get

4. If you do not remember the configuration parameters/options, use the command below to get help:
esxcli system coredump network set --help

5. Use the command below to configure the host’s dump redirection settings:
esxcli system coredump network set -v vmk0 -i 192.168.174.204 -o 6500

6. Enable the Dump Collector using the command below:
esxcli system coredump network set -e true

7. Finally, verify the Dump Collector service status with this command:
esxcli system coredump network check


Done!

===========================================================

Now we will configure the ESXi Dump Collector on hosts using a host profile.

1. Create a host profile and edit it with the settings below to enable and configure the network core dump settings. Once done, apply this profile to the rest of the hosts to make them compliant.


We are all set now.

 

Thank You!

Roshan Jha

Upgrade vCenter Server 4.1 to 5.5

Last night I upgraded my home lab from vSphere 4.1 to vSphere 5.5.

The vCenter upgrade process is actually fairly simple, all things considered. You basically back up your existing database (SQL or Oracle), snapshot (or clone) your existing vCenter (if virtual), mount the ISO, and start the upgrade process.

vCenter Server 5.1 and later have prerequisites (vCenter Single Sign-On and vCenter Inventory Service) that do not exist in vCenter Server 5.0 and earlier, so we need to install these two components before starting the upgrade from 4.1 to 5.5.

You can install vCenter Single Sign-On, the vSphere Web Client, vCenter Inventory Service, and vCenter Server on the same host machine (as with vCenter Simple Install) or on different machines as a custom install.

Let’s start by installing vCenter Single Sign-On.

1. Download or copy the installation media (VMware-VIMSetup-all-5.5.0-2105955-20140901-update02) to the server where you want to install vCenter Single Sign-On and double-click autorun.exe to start the installer.

2. From the VMware vCenter installer, select vCenter Single Sign-On and then click the Install button to start the vCenter Single Sign-On installation.

3. On the Welcome to vCenter Single Sign-On Setup screen click Next to continue.

4. Select the tick box to accept the License Agreement and click Next.

5. The installer will automatically check the prerequisites for vCenter Single Sign-On (e.g. host name, FQDN of the host, IP address of the host, whether the machine is part of the domain, and whether DNS can resolve the host name), as you can see in the screen below. If you want to add this domain as an identity source in SSO, check the tick box and click Next.

6. As this is the first vCenter Server, choose Standalone vCenter Single Sign-On Server and click Next. (For more details on these options please refer to http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-upgrade-guide.pdf)

7. The installation will create the default SSO domain. Choose a password for the SSO administrator (administrator@vsphere.local), the “master SSO password”. Enter the password and then click Next.

8. Choose an appropriate site name for this installation and then click Next.

9. On the SSO port settings page enter the port number and then click Next. (Unless there is a conflict in the environment, we recommend not changing the default port number.)

10. Change the installation directory if desired and click Next.

11. Finally, review the installation options and click Install to start the installation.

12. It will take a few minutes to install. Once the installation is complete, click Finish to close the installer.

==========================================================

Installing vCenter Inventory Service

The second prerequisite for the vCenter Server installation is the vCenter Inventory Service:

1. From the VMware vCenter installer, select vCenter Inventory Service and then click Install to start the installer.

2. Select the language for the installation and click OK.

3. The VMware vCenter Inventory Service installer will start.

4. From the vCenter Inventory Service installer Welcome screen, click Next to continue.

5. Select the radio button to accept the End User License Agreement and click Next.

6. Change the installation directory if desired and click Next to continue.

7. Enter the vCenter Inventory Service local system name or accept the default, and click Next.

8. On the Configure Ports settings page enter the port number and then click Next. (Unless there is a conflict in the environment, we recommend not changing the default port number.)

9. The JVM Memory screen asks how big your vCenter inventory will be once it is fully configured. Select the desired option depending on your requirements and click Next.

10. On the vCenter Single Sign-On Information screen enter the “master password” you chose during the SSO installation. Enter the password (change the port number in the Lookup Service URL if you changed it during the SSO installation) and click Next to continue.

11. Because this is the first installation of vCenter Server, the default security certificates need to be accepted. These can be replaced with self-signed certificates at a later time. Click Yes to accept, then click Install Certificates to install the default certificates.

12. Finally, click Install to commence the installation and start the services.

13. It will take a few minutes to install and register with SSO.

14. Once the installation is complete, click Finish to close the installer.

=============================================================

 Installing vCenter Server

We have installed both prerequisites for vCenter Server and are now ready to upgrade vCenter Server 4.1 to 5.5.

Log on as an administrative user (Administrator or a service account) to the computer that will run vCenter Server.

1. Start the vCenter Server installation process by selecting vCenter Server and then Click Install.

VC32. As you can see that minimum RAM required is 4GB and i was started installation on host with 2GB So got failed with NOT ENOUGH SYSTEM RAM. Increased RAM to 4 GB on this VM and restarted installation process.

VC23. Select the language for the installation and click OK.

VC44. Setup will Prepare the InstallSheild Wizard to guide through the setup process…

VC55. On the Welcome screen you can see that An earlier version of vCenter Server is already installed on this computer and will be upgraded to vCenter Server 5.5.warning message. Click Next to continue upgrade process.

VC66. Select the Radio Button to Accept End User License Agreement and click Next.

VC77. Enter vCenter Server license key or leave it blank to install in evaluation mode for 60 days trail click Next to continue.

VC88. At this point you must select whether to use SQL Server 2008 Express Edition or a Separate Database server. I am using embedded database for my Home lab.

UP49. You will get Database Upgrade Warning choose as per your requirement and click Next.

VC1110.  By default the vCenter Server service use Windows Local System Account, if you are using another administrative user service account, provide the credentials and click Next to continue.

VC1311. On the vCenter Agent Upgrade screen choose how do you want to upgrade vCenter Agent on connected ESXi Host and Click Next.

VC1212. The next screen provides the option to change the default TCP and UDP ports on which vCenter server communicates. (Unless there is conflict in the environment, recommend not to change default Port Numbers)

VC1413. On the JVM Memory screen asks how big your vCenter Server will be once it’s fully configured. Select the desired option depending on your requirement and click Next.

VC1514. On the vCenter Single Sign-On Information Screen enter the “Master Password” you choosed during SSO installation. Enter the Password (changed port number in the Lookup Service URL if you changed in SSO installation) and click Next to Continue..

15. Click Yes to accept the certificates and continue.

16. Change the Inventory Service address and port if required and click Next to continue.

17. As we are upgrading from an earlier version, the installation directory cannot be changed. Click Next to accept the location and continue.

18. On the Ready to Install screen, click Install to start the installation process. (If you want to participate in the Customer Experience Improvement Program, tick the Enable Data Collection box; by enabling this we agree to send technical data to VMware weekly.)

19. As you can see on the Installing VMware vCenter Server screen, it will take 15-20 minutes (depending on how big your environment is) to complete the upgrade.

20. The selected options and features are being installed.

21. Once the installation completes, click Finish to close the wizard.

===============================================================

Installing vSphere Web Client

There are 2 different clients that can be used to administer a vCenter Server.

  • vSphere C# Client
  • vSphere Web Client

From vSphere 5.1 onward, VMware stated that it would no longer add features to the .NET vSphere Client: only the vSphere Web Client would gain new feature capabilities.

A few features that are part of the vSphere 5.5 release are not available from the vSphere Client.

1. From the VMware vCenter Installer, select vSphere Web Client and then click Install to start the process.

2. Select the appropriate language and click OK.

3. The installer will prepare the setup process.

4. On the Welcome screen, click Next to continue.

5. Select the radio button to accept the End User License Agreement and click Next to continue.

6. Change the installation directory if required and click Next to continue.

7. The next screen provides the option to change or accept the default HTTP and HTTPS ports on which the vSphere Web Client will communicate. (Unless there is a conflict in the environment, it is recommended not to change the default port numbers.)

8. On the vCenter Single Sign-On Information screen, enter the master password you chose during SSO installation (change the port number in the Lookup Service URL if you changed it during the SSO installation) and click Next to continue.

9. Click Yes to accept the certificate.

10. On the Ready to Install screen, click Install to start the installation.

11. It will take a few minutes to install.

12. Once the installation completes, click Finish to exit the wizard.

Done!
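Once the wizard finishes, you can sanity-check the result by browsing to the Web Client login page. In vSphere 5.5 the Web Client listens on HTTPS port 9443 by default (or whatever port you chose in step 7). A small helper to build that URL; `vc01.lab.local` is a placeholder host name:

```python
def web_client_url(host, https_port=9443):
    """Build the vSphere Web Client login URL (default HTTPS port 9443)."""
    return "https://{0}:{1}/vsphere-client/".format(host, https_port)

print(web_client_url("vc01.lab.local"))  # → https://vc01.lab.local:9443/vsphere-client/
```

Expect a browser certificate warning on first visit, since the installation uses a self-signed certificate by default.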

==================================================================