All posts by Roshan Jha

Consultant with VMware's Professional Services Organization, Singapore, since September 2015. With more than 15 years of industry experience, works on large-scale virtualization and cloud deployments in various industry verticals, helping enterprise customers architect private and hybrid clouds based on the SDDC (software-defined data center), SDN (software-defined networking), VVD (VMware Validated Design), and VCF (VMware Cloud Foundation).

Getting Started with VMware Integrated OpenStack (VIO) – Part 2

VMware Integrated OpenStack (VIO) is an OpenStack distribution that is built and tested by VMware. VIO is compliant with the OpenStack Foundation guidelines for an OpenStack distribution and is API-compatible for all OpenStack services running on enterprise-level virtual infrastructure. VMware ensures platform stability through rigorous testing and by ensuring interoperability. VIO leverages vSphere, NSX, and storage functionality as the core of its infrastructure, and VMware places priority on packaging the OpenStack core projects in the most stable manner through relentless functional and interoperability testing.

VMware Integrated OpenStack provides the following key features:
• Fastest deployment with simple installation using an OVA file
• Simplified operations through the API and web interface
• Distributed Resource Scheduler (DRS) and Storage DRS for workload rebalancing and datastore load balancing
• vSphere High Availability (HA) to protect and automatically restart workloads
• In-house expertise and skill set with existing vSphere technology
• Runs on the proven VMware software-defined data center
• Production-ready container management, natively integrated using VMware capabilities
• Advanced networking functionality through NSX
• Integration with vRealize Operations Manager and vRealize Log Insight for greater performance and capacity management, alerting, and troubleshooting
• Trusted single vendor for both infrastructure and OpenStack
• Compliant with the OpenStack Foundation's 2019.11 interoperability guideline

OpenStack Model

The OpenStack model comprises core projects and supplemental projects. In addition to the core OpenStack projects, customers can choose supplemental projects for additional services and functionality based on their requirements.

VMware Integrated OpenStack Components

VMware Integrated OpenStack (VIO) is made up of two main building blocks: the VIO Manager and the OpenStack components. VIO is packaged as an OVA file that contains the VIO Manager server and an Ubuntu Linux virtual machine to be used as the template for the different OpenStack components.

VMware Integrated OpenStack is designed to run over vSphere and NSX-T Data Center, leveraging existing virtualization functionality to provide security, stability, performance, and reliability.

Plug-in drivers are available in Nova for interaction with vCenter Server and in Neutron for interaction with NSX-T Data Center (or the vSphere Distributed Switch). Glance and Cinder interact with storage through the vCenter Server system and the OpenStack plug-in driver.

VMware Integrated OpenStack and the VMware SDDC Integration 

VMware Integrated OpenStack (VIO) provides full-stack integration with the VMware Software-Defined Data Center (SDDC), giving customers a one-stop shop for enterprise-grade OpenStack solutions.

Stay tuned for VMware Integrated OpenStack (VIO) – Part 3, where we will discuss VMware Integrated OpenStack (VIO) deployment!!

 

Monthly Webinar Series – #1 – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive

Thank you so much, everyone, for joining the monthly webinar series: #1 – Virtual TechTalk – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive.

Here is the video recording of the session –

 

Please feel free to share and subscribe to the YouTube channel (Virtual Cloud Solutions by Roshan Jha). Thanks!

 

Syslog Configuration for NSX-T Components using API

In this post, I'll quickly walk through how to configure the NSX-T components to forward log events to vRealize Log Insight using the API.

Once you have VMware vRealize Log Insight (vRLI) designed and deployed, you can use API calls to configure your NSX-T components to forward logs to the log management servers. In this case, I am going to push the vRLI VIP FQDN through API calls to the following NSX-T Managers and the NSX-T Edges:

  • nsx01a
  • nsx01b
  • nsx01c

1. Open Postman and configure Authorization: select Basic Auth under TYPE, and provide the NSX-T Manager username and password to allow Postman to talk to the NSX-T Managers.

2. Next, select Headers and set the KEY to Content-Type and the VALUE to application/json.

3. Next, select Body –> raw –> and provide the syslog server, protocol, port, and log level you want to send from the NSX-T Managers to Log Insight, as shown in the sample body below.
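For reference, a raw request body along these lines should work against the NSX-T node syslog exporter API. This is a sketch: the exporter name, vRLI VIP FQDN, protocol, port, and level are placeholders to adjust for your environment, and you should verify the field names against the API guide for your NSX-T version.

{
"exporter_name": "vrli",
"server": "vrli-vip.xxxxx.com",
"port": 514,
"protocol": "TCP",
"level": "INFO"
}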

4. Next, select POST –> https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters and click Send.

In the lower Body section, Postman will display content confirming that the syslog settings have been successfully pushed to the NSX-T Manager.

5. Repeat this for the other NSX-T Manager nodes, nsx01b and nsx01c.

POST – https://xx-m01-nsx01b.xxxxx.com/api/v1/node/services/syslog/exporters

POST – https://xx-m01-nsx01c.xxxxx.com/api/v1/node/services/syslog/exporters

6. Now it is time to verify. Clear the text from the Body section and send a GET to retrieve the configuration data from the NSX-T Managers.

GET – https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters

In the lower Body section, Postman retrieves the configured syslog settings from the NSX-T Manager.

Configure the NSX-T Edges to Forward Log Events to vRealize Log Insight 

Now we will configure the NSX-T Edge nodes to send audit logs and system events to vRealize Log Insight.

To configure the NSX-T Edge nodes, you first retrieve the ID of each edge transport node by using the NSX-T Manager user interface. Then you use Postman to configure log forwarding for all edge transport nodes by sending a POST request to each NSX-T Edge request URL.

  1. Log in to NSX-T Manager to retrieve the ID of each edge node:

  • nsxedge-01 — 16420ffa-d159-41a2-9f02-b4ac30d32636
  • nsxedge-02 — 39fe9748-c6ae-4a32-9023-ad610ea87249

2. Here is the syntax for an edge node: POST – https://xx-m01-nsx01.xxxxx.com/api/v1/transport-nodes/16420ffa-d159-41a2-9f02-b4ac30d32636/node/services/syslog/exporters with the same request body as for the manager nodes, then click Send.

3. Now it is time to verify. Clear the text from the Body section and send a GET to the same URL to retrieve the configuration data from the NSX-T edge node.

Repeat this for the rest of the NSX-T edge nodes.

That’s all.  Hope you enjoyed reading this post. Feel free to share 🙂

 

VCF 4.X – NSX-T Manager Sizing for VI Workload Domain (WLD) – Default Size is LARGE

I got an interesting question today related to NSX-T Manager sizing for a VI Workload Domain (WLD). While bringing up the management domain, there is an option in the bring-up sheet to choose the size of the NSX-T Manager.

But when we deploy a VI Workload Domain (WLD), there is no option to choose the NSX-T Manager size (it only asks for the NSX-T Manager name and IP details), and by default three large-size NSX-T Managers will be deployed.

If you need to deploy medium-size NSX-T Managers for a VI Workload Domain (WLD), here are the steps to perform on SDDC Manager before deploying the VI Workload Domain (WLD):
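As a rough sketch only (the property name and file path below are from memory, so verify them against the documentation for your exact VCF 4.x version before relying on them), the idea is to set the NSX-T Manager form factor in the domain manager properties on the SDDC Manager appliance and restart the service:

# On the SDDC Manager appliance, add this line to
# /etc/vmware/vcf/domainmanager/application-prod.properties:
nsxt.manager.formfactor=medium

# Restart the domain manager service so the change takes effect:
systemctl restart domainmanager

With this in place, the next VI Workload Domain deployment should pick up the medium form factor instead of the default large.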

If you have already deployed the VI Workload Domain (WLD) and want to change the NSX-T Manager size after deployment, you can follow the VMware NSX docs:

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/administration/GUID-B1B0CB39-7C51-410D-A964-C03D99E39C19.html

Hope this helps, and keep sharing the knowledge!

 

Why VMware Integrated OpenStack (VIO) – Part 1

Time to move out of the comfort zone, explore, and deep dive into OpenStack, especially VMware Integrated OpenStack (VIO), vSphere with Kubernetes, VMware Tanzu Kubernetes Grid (TKG), and VMware Tanzu Kubernetes Grid Integrated (TKGI) (formerly known as VMware Enterprise PKS).

Let's start with VMware Integrated OpenStack (VIO).

VMware Integrated OpenStack (VIO) is a VMware supported enterprise grade OpenStack distribution that makes it easy to run OpenStack cloud on top of VMware virtualization technologies. With VIO, customers can rapidly build production-grade private and public OpenStack clouds on top of VMware technologies, leveraging their existing VMware investment and expertise.

VMware Integrated OpenStack is ideal for many different use cases, including building an IaaS platform, providing standard OpenStack API access to developers, leveraging edge computing, and deploying NFV services on OpenStack.

VMware Integrated OpenStack (VIO) can be deployed and run on your existing vSphere, NSX-T, and vSAN, simplifying operations and offering better performance and stability.

VMware Integrated OpenStack (VIO) Architecture

VMware Integrated OpenStack (VIO) connects vSphere resources to the OpenStack Compute, Networking, Block Storage, Image Service, Identity Service, and Orchestration components.

VMware Integrated OpenStack is designed and implemented as separate management and compute clusters. The management cluster contains the OpenStack components, and the compute cluster runs tenant or application workloads.

The VMware Integrated OpenStack (VIO) core components are:

Nova (compute) – Compute clusters in vSphere are used as Nova compute nodes. Nova provides a way to provision compute instances (aka virtual servers) in these clusters.

Neutron (networking) – Neutron allows you to create and attach network interface devices managed by OpenStack. Neutron provides networking functions by communicating with NSX Manager (for NSX-T Data Center deployments) or with vCenter Server (for VDS-only deployments).

Cinder (block storage) – Cinder is designed to create and manage a service that provides persistent data storage to applications. Cinder executes block volume operations through the VMDK driver, causing the desired volumes to be created in vSphere.

Glance (image service) – Glance enables users to discover, register, and retrieve virtual machine images through the Image service from a variety of locations, from simple file systems to object-storage systems like OpenStack Object Storage. Glance images are stored and cached in a dedicated image service datastore when the virtual machines that use them are booted.

Keystone (identity management) – Authentication and authorization in OpenStack are managed by Keystone.

Heat (orchestration) – Heat provides the orchestration service to orchestrate composite cloud applications through an OpenStack API call (see the sample template after this list).

Ceilometer (telemetry) – Ceilometer collects data on the utilization of the physical and virtual resources in deployed clouds, persists the data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
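To make the orchestration piece concrete, here is a minimal Heat Orchestration Template (HOT) of the kind Heat can deploy through an OpenStack API call against VIO. This is an illustrative sketch only; the image, flavor, and network names are placeholders for whatever exists in your environment:

heat_template_version: 2013-05-23
description: Minimal single-server stack (placeholder names)
resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-20.04        # placeholder Glance image
      flavor: m1.small           # placeholder Nova flavor
      networks:
        - network: tenant-net    # placeholder Neutron network

Because VIO is API-compatible, a template like this can be deployed with any standard OpenStack Orchestration client.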

VMware also simplifies OpenStack operations through vRealize Operations Manager (vROps) integration for performance monitoring, capacity planning, and troubleshooting, and through vRealize Log Insight (vRLI) for diagnostics across the OpenStack service logs.

Stay tuned for VMware Integrated OpenStack (VIO) – Part 2!!

 

VMware Cloud Foundation (VCF) 4.1 – What’s new?

Last week was a big release week from a VMware perspective: VMware released vSphere 7 Update 1, vSAN 7 Update 1, and VMware Cloud Foundation (VCF) 4.1. There are some nice new enhancements in VCF 4.1. In this post, I'll highlight the big features that customers and architects were looking for in this release.

Rename Objects
With VMware Cloud Foundation 4.1, you can now rename domains, clusters, and network pools. Domain and network pool objects can be renamed from the SDDC Manager UI, and cluster objects can be renamed from vCenter Server. Once you do, go back to SDDC Manager and refresh the UI, and the new cluster name will be retrieved by SDDC Manager.

SDDC Manager Backup Enhancements
With the VCF 4.1 release, backups can be scheduled on a recurring basis. Customers can also enable state-change-based backups, where an SDDC Manager backup occurs 10 minutes after the successful completion of an event, such as the creation of a workload domain.

Support for vVols as Principal Storage for Workload Domains

With Cloud Foundation 4.1, vVols can now be used as principal storage for workload domains and as secondary storage for both the management domain and workload domains.

If you want to read about vVols in detail, please refer to the blog written by Cormac Hogan (Director and Chief Technologist in the Office of the CTO in the Cloud Platform Business Unit (CPBU) at VMware): https://cormachogan.com/2015/02/17/vsphere-6-0-storage-features-part-5-virtual-volumes/

Support for Remote Clusters (Extends VCF to the Remote/Edge)

We continue to see growing demand for remote and edge sites, where customers want a small infrastructure footprint at the remote or edge site but still want automated deployment and lifecycle management under unified management.

With the release of VCF 4.1, remote clusters are supported in a vSAN Ready Node configuration of a minimum of 3 and a maximum of 4 nodes. Remote clusters can be implemented in two different designs. The first is where each remote site is managed as a separate workload domain; in this design, each remote site has a dedicated vCenter Server instance. The second is where each remote site is managed as a cluster within a single workload domain; in this design, the remote sites share a single vCenter Server instance. Day 2 operations (such as lifecycle management and adding and removing clusters) can be performed centrally from the data center to the remote sites.

Improved Lifecycle Management (VCF Upgrade Process)

In previous editions of VCF, the upgrade process was sequential in nature. For example, if you started at Cloud Foundation version 4.0 and wanted to go to Cloud Foundation version 4.1, you had to first upgrade to any versions that existed in between before eventually upgrading to the desired version. This resulted in the need to schedule multiple maintenance windows and took more time to get to the desired state.

VCF 4.1 adds the ability to perform skip-level upgrades for the SDDC Manager. With this feature, you can schedule a single maintenance window and update to the desired state in a single action, reducing the time needed to perform the upgrades.

vRealize Suite for VCF

With Cloud Foundation 4.1, VCF now deploys a 'VCF-aware' vRSLCM appliance. The first enhancement is that there is no need to manually download and deploy vRSLCM: once the management domain bring-up is done and SDDC Manager is up and running, you can initiate the installation of vRSLCM from SDDC Manager.

VCF 4.1 also introduces a bidirectional vRSLCM and SDDC Manager relationship, providing a unified product experience. Users can log in to vRSLCM to perform operations, and SDDC Manager can now discover whether vRSLCM was used to deploy vRealize Suite products such as vRealize Automation (vRA), vRealize Operations Manager (vROps), and vRealize Log Insight (vRLI). This eases deployment for customers and avoids potential interoperability issues between vRSLCM and SDDC Manager.

Hybrid Cloud Extension (HCX) Integration 

With the release of VCF 4.1, HCX R143 now has native support for Cloud Foundation 4.1 with the Converged Virtual Distributed Switch (CVDS). This will be extremely helpful for customers who need to migrate existing workloads to a new Cloud Foundation installation.

Role-Based Access Control for VCF

A New VCF User Role – ‘viewer’

A new 'view-only' role has been added in VCF 4.1. Previous editions of VCF had only two roles, Administrator and Operator; now a third role called 'Viewer' is available. As the name suggests, users assigned this view-only role have no ability to create, delete, or modify objects, and they may also see a message saying they are unauthorized to perform certain actions.

 

VCF Local Account

With VCF 4.1, customers can have a local account that can be used during an SSO failure.

What happens when the SSO domain is unavailable for some reason? In that case, users would not be able to log in. To address this, customers can now create a VCF local account called admin@local. This account allows them to perform certain actions until the SSO domain is functional again.

This VCF local account can be defined in the deployment bring-up worksheet.

Summary

I have tried to cover all the new enhancements in the VCF 4.1 release, but always refer to the official documentation for complete details: https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html

 

 

#1 – TECHTALK – VSAN STANDARD AND STRETCH CLUSTER DESIGN AND DEPLOY DEEP DIVE WITH VCF

Hello There,                                                                                      

Join me for #1 – Virtual TECHTALK             

Register Now!!👇✅

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive with VCF

Time: 16th September, Wednesday at 5-6 PM SGT ⏰

https://vmware.zoom.us/webinar/register/WN_8zrTPX7hTG-isQ5pzsUHhg

#vcf #vmware #vSAN #TechTalk

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive With VCF

Hello There,

I am starting a monthly TechTalk focusing on the SDDC, based on VVD and VCF. In the very first TechTalk of the series, I am planning to deep dive into vSAN Standard and Stretched Cluster design and deployment with VCF.

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive with VCF (16th September 2020 – 5-6 PM SGT)

I will share the Zoom meeting details later. Stay tuned and happy learning!!

 

Accelerate your Private/Hybrid Cloud Journey With VMware Cloud Foundation (VCF) – By Roshan Jha

Last week I was invited to present at the VMUG Delhi virtual webinar, and 100+ participants joined the session. Thank you, #VMUG Delhi team, for the invitation to share with the community where my journey started. #vmugdelhi #vmware #VMwareCloudFoundation #vcf

What’s New in VMware vRealize Log Insight 4.0


Last week VMware released vRealize Log Insight 4.0, which comes with a new, improved, and redesigned user interface, offers enhanced alert management functionality, and is natively built for and works seamlessly with vSphere, vRealize, and other VMware products.

vRealize Log Insight 4.0 delivers the best real-time and archive log management for VMware environments. It can be configured to pull data for tasks, events, and alarms that occur in vCenter Server instances, and it integrates with vRealize Operations Manager.

What’s New in vRealize Log Insight 4.0?

  • vSphere 6.5 compatibility.
  • System notification enhancements.
  • New overall User Interface based on the VMware Clarity standard.
  • New Admin Alert Management tool and User Interface (UI) to view and manage all user alerts.
  • New “blur” on session timeout.
  • Support for Syslog octet-framing over TCP.
  • SLES 11 SP3 and SLES 12 SP1 are supported for Linux agents.
  • New Agent installations have SSL enabled by default. Previously, Agent installs defaulted to SSL off. Upgrading does not affect current SSL settings.
  • For content pack alerts instantiated in 4.0, content pack updates now automatically update alert definitions. If needed, you can preserve customizations by exporting them and then importing them back into the user profile after the update is applied.

Note: Support for VMware vCenter Server 5.0 and 5.1 has been removed.

To find out more, please visit the vRealize Log Insight release notes and the VMware Log Insight documentation.

Thank you and Keep Sharing 🙂

What’s New in vRealize Operations Manager 6.4


A few days ago, vRealize Operations Manager 6.4 was released with enhanced product usability and stability and with support for vSphere 6.5 environments. The best part is that this release adds 13 new customized dashboards, which will be very useful from a capacity, performance, and troubleshooting standpoint.

Here is the list of enhancements made in vRealize Operations Manager 6.4:

  • Pinpoints and helps you clean up alerts in your environments:
    • Groups alerts by the definition to identify noisiest alerts.
    • Allows single-click disabling of alerts across the system or for a given policy.
  • Reflects business context based on vSphere VM folders:
    • vSphere VM folders are now incorporated into the navigation tree for quick searching.
  • Thirteen new installed dashboards to display status, identify problems, and share data at a glance with your peers:
    • A getting started dashboard to introduce you to using our dashboards
    • Environment and capacity overview dashboards to get a summary of your environments.
    • VM troubleshooting dashboard that helps you diagnose problems in a VM and start solving them.
    • Infrastructure capacity and performance dashboards to view status and see problems across your datacenter.
    • VM and infrastructure configuration dashboards to highlight inconsistencies and violations of VMware best practices in your environment.
  • Enhanced All Metrics tab to ease troubleshooting:
    • Set of key KPIs per object and resource for a quick start of your investigation process.
    • Ability to correlate properties with metrics
  • Predictive DRS (pDRS) enables vRealize Operations to provide long-term forecast metrics to augment the short-term placement criteria of the vSphere Distributed Resource Scheduler (DRS). This capability only works within vSphere version 6.5 or above.

Supported Deployment Mode for vROps 6.4

You can deploy vRealize Operations Manager 6.4 with any of the following installation modes:

  • VMware virtual appliance
  • RHEL installation package
  • Microsoft Windows installation package

Note: VMware recommends using the virtual appliance option rather than a Linux- or Windows-based deployment. For Windows, vRealize Operations Manager 6.4 is the final version of the product to support Microsoft Windows installations. The RHEL-based installation option is fully supported in vRealize Operations Manager 6.4, but this support will be deprecated, and the future availability of the RHEL option is not guaranteed.

Please visit the vRealize Operations Manager 6.4 VMware product page for more details.

Thank you and Keep sharing 🙂

How to Configure centralized logging for the NSX Manager 6.x.x, NSX Controllers and NSX Edge devices

In my previous article, I discussed VMware NSX Manager 6.x.x backup and restore. In this article, I am going to discuss how to configure centralized logging for NSX Manager 6.x.x, NSX Controllers, and NSX Edge devices.

In a production environment, it is always recommended to have a remote log collector configured, so that NSX Manager 6.x.x, the NSX Controllers, and the NSX Edge devices send all audit logs and system events to the syslog server. This comes in handy when troubleshooting or producing the final RCA in the event of an issue.

Let's start with configuring the syslog server for NSX Manager:

1. Log in to the VMware NSX Manager virtual appliance with the admin account.

2. Go to Manage –> General –> click Edit in the Syslog Server section.

3. Provide the syslog server, port, and protocol details in the Syslog Server window and click OK to test and save the settings.

4. Once it is saved, the settings are displayed. This is how we configure the syslog server for NSX Manager.


Next is how to configure the syslog server for the VMware NSX Controllers:

For the NSX Controllers, the only supported method for configuring the syslog server is through the NSX API. Using the REST API, we need to push the syslog server details to each of the NSX Controllers one by one.

Before we go ahead and push the syslog server settings to the NSX Controllers through the REST API, we need to add a REST API client to the browser. You can search for a REST API client for Chrome or Mozilla and add it to the browser.


Once you are done adding the REST API plug-in to your browser, there are a couple of things to remember.

REST API requests require an authentication header, and the Content-Type must be set to application/xml to send the HTTP body.
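For example, the request line and headers end up looking like this (the manager IP and controller-1 are placeholders, and your REST client builds the Basic Authorization value for you from the admin username and password):

POST https://<NSX-Manager-IP>/api/2.0/vdn/controller/controller-1/syslog
Authorization: Basic <base64 of admin:password>
Content-Type: application/xml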


Now we are ready to send the request body to configure Syslog Server for NSX controllers.

Open the REST client to set the request body to configure syslog for the NSX for vSphere Controllers. Make sure you have selected POST as the method and https://<NSX Manager IP>/api/2.0/vdn/controller/{controller-id}/syslog as the URL, where controller-id is the name of the NSX Controller and can be found on the NSX Installation page.

The HTTP request body has to be this:

<controllerSyslogServer>
<syslogServer>x.x.x.x</syslogServer>
<port>514</port>
<protocol>UDP</protocol>
<level>INFO</level>
</controllerSyslogServer>


This is how we can configure the syslog server on the NSX Controllers. If you want to delete the syslog exporter, use the following request:

Method: DELETE, URL: https://<NSX-Manager-IP>/api/2.0/vdn/controller/{controller-ID}/syslog


How to configure the Syslog Server for a Distributed Logical Router:

1. Log in to vCenter Server using the vSphere Web Client and choose Networking & Security –> NSX Edges –> and double-click the Logical Router.

2. Under Manage –> Settings –> Configuration, click Change under Syslog Servers.

3. Enter the syslog server and protocol details on the Edit Syslog Server Configuration page and click OK.

4. Now we can see that syslog is configured and ready to send all the logs to the remote server.


How to configure the Syslog Server for an NSX Edge:

1. Log in to vCenter Server using the vSphere Web Client and choose Networking & Security –> NSX Edges –> and double-click the NSX Edge.

2. Under Manage –> Settings –> Configuration, click Change under Syslog Servers.

3. Enter the syslog server and protocol details and click OK.

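As a side note, the same edge syslog settings can also be pushed through the NSX API rather than the UI. The endpoint and body below are a sketch from memory, so treat them as assumptions to verify against the NSX 6.x API guide for your version:

PUT https://<NSX-Manager-IP>/api/4.0/edges/{edge-id}/syslog/config
Content-Type: application/xml

<syslog>
<enabled>true</enabled>
<protocol>udp</protocol>
<serverAddresses>
<ipAddress>x.x.x.x</ipAddress>
</serverAddresses>
</syslog>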

That's all. This is how you can configure the syslog server for NSX Manager, the NSX Controllers, and the NSX Edges.

Thank you and Happy learning 🙂

 

VMware NSX Manager 6.x.x Backup and Restore

In this post, I am going to discuss how to configure backup for NSX Manager 6.x.x, schedule backups for NSX Manager 6.x.x, take an on-demand backup of NSX Manager 6.x.x, and restore the NSX Manager configuration from a backup.

We can back up and restore NSX Manager data, which includes the system configuration, events, and audit log tables. Backups are saved to a remote location that must be accessible by the NSX Manager.

We can back up NSX Manager data on demand, or we can schedule backups as per our plan.

Let's start with how to configure the remote server to store the NSX Manager backup.

1. Log in to the VMware NSX Manager virtual appliance with the admin account.

2. Under NSX Manager Virtual Appliance Management, click Backup & Restore.

3. To store the NSX Manager backup, we can use an FTP server with the FTP or SFTP transfer protocol. To configure the FTP server settings, click Change next to FTP Server Settings.

4. The Backup Location window will open up:

  • Enter the IP/host name of the FTP server.
  • Choose the transfer protocol, either FTP or SFTP, based on what the destination server supports.
  • Enter the port number for the transfer protocol.
  • Enter the user name and password to connect to the backup server.
  • Enter the backup directory where you want to store the backup.
  • Enter the filename prefix; the prefix will be added to the backup file every time a backup runs for the NSX Manager.
  • Type the pass phrase to secure the backup.
  • Click OK to test the connection between the NSX Manager and the FTP server and save the settings.


5. Once the connection test is done, it will save the settings and display them.

6. After configuring the FTP server settings, we can schedule the backup. Click Change next to Scheduling. We can schedule the backup on an hourly, daily, or weekly basis. Choose your option as per your plan (daily is recommended) and click Schedule to save the settings.


7. The backup will run as per the schedule, and you can see an entry for every day.


8. We can also perform an on-demand backup of the NSX Manager. For an on-demand backup of NSX Manager, click Backup next to Backup History.

9. The Create Backup window will open to confirm that you want to start a backup process now. Click Start to start the backup immediately.

10. It will take a few minutes to complete the backup process.

11. You can see the new backup entry in the Backup History.

Now we will discuss how to restore from a backup.

We can restore a backup only on a freshly deployed NSX Manager appliance. So let's assume that we have an issue with the current NSX Manager and it cannot be recovered.

In this scenario, we can deploy a new NSX Manager virtual appliance and configure the FTP server settings to point to the location of the backup to be restored. Select the backup from the backup history, click Restore, and click OK to confirm.

That's it. This is how we can configure a remote server to store NSX Manager backups, schedule NSX Manager backups, perform an on-demand backup of NSX Manager, and restore from a backup.

Thank you and Keep spreading the knowledge  🙂