Tag Archives: vSphere

Getting Started with VMware Integrated OpenStack (VIO) – Part 2

VMware Integrated OpenStack (VIO) is an OpenStack distribution built and tested by VMware. VIO complies with the OpenStack Foundation guidelines for an OpenStack distribution and is API-compatible for all OpenStack services, running on enterprise-grade virtual infrastructure. VMware ensures platform stability through rigorous testing and interoperability validation. VIO leverages vSphere, NSX, and VMware storage functionality as the core of its infrastructure, and places priority on packaging the OpenStack core projects in the most stable manner through relentless functional and interoperability testing.

VMware Integrated OpenStack provides the following key features:
• Fastest deployment, with a simple installation from an OVA file
• Simplified operations through an API and web interface
• Distributed Resource Scheduler (DRS) and Storage DRS for workload rebalancing and datastore load balancing
• vSphere High Availability (HA) to protect and automatically restart workloads
• Reuse of in-house expertise and skills with existing vSphere technology
• Runs on the proven VMware software-defined data center
• Production-ready container management, natively integrated using VMware capabilities
• Advanced networking functionality through NSX
• Integration with vRealize Operations Manager and vRealize Log Insight for performance and capacity management, alerting, and troubleshooting
• A single, trusted vendor for both infrastructure and OpenStack
• Compliant with the OpenStack Foundation’s 2019.11 interoperability guideline

OpenStack Model

The OpenStack model is composed of core projects and supplemental projects. In addition to the core OpenStack projects, customers can choose supplemental projects for additional services and functionality based on their requirements.

VMware Integrated OpenStack Components

VMware Integrated OpenStack (VIO) consists of two main building blocks: the VIO Manager and the OpenStack components. VIO is packaged as an OVA file that contains the VIO Manager server and an Ubuntu Linux virtual machine used as the template for the different OpenStack components.

VMware Integrated OpenStack is designed to run over vSphere and NSX-T Data Center, leveraging existing virtualization functionality to provide security, stability, performance, and reliability.

Plug-in drivers are available in Nova to interact with vCenter Server and in Neutron to interact with NSX-T Data Center (or the vSphere Distributed Switch). Glance and Cinder interact with storage through the vCenter Server system and the OpenStack plug-in driver.

VMware Integrated OpenStack and the VMware SDDC Integration 

VMware Integrated OpenStack (VIO) provides full-stack integration with the VMware Software-Defined Data Center (SDDC), giving customers a one-stop-shop, enterprise-grade OpenStack solution.

Stay tuned for VMware Integrated OpenStack (VIO) – Part 3, where we will discuss VIO deployment!

 

Monthly Webinar Series – #1 – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive

Thank you, everyone, for joining the monthly webinar series: #1 – Virtual TechTalk – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive.

Here is the video recording of the session –

 

Please feel free to share and subscribe to the YouTube channel (Virtual Cloud Solutions by Roshan Jha). Thanks!

 

VCF 4.X – NSX-T Manager Sizing for VI Workload Domain (WLD) – Default Size is LARGE

I got an interesting question today about NSX-T Manager sizing for a VI Workload Domain (WLD). During management domain bring-up, the bring-up sheet has an option to choose the size of the NSX-T Managers.

But when you deploy a VI Workload Domain (WLD), there is no option to choose the NSX-T Manager size (you are only asked for the NSX-T Manager names and IP details), and by default three large-size NSX-T Managers are deployed.

If you need to deploy medium-size NSX-T Managers for a VI Workload Domain (WLD), here are the steps to perform on the SDDC Manager before deploying the workload domain:
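As a hedged sketch of those steps (the file path and property name below are assumptions drawn from community reports, not from official documentation, so verify against your VCF build):

```
# On the SDDC Manager appliance, edit the domain manager properties file
# (assumed path): /opt/vmware/vcf/domainmanager/config/application-prod.properties
# Add the line below, then restart the domain manager service
# before starting the VI Workload Domain deployment:
nsxt.manager.formfactor=medium
```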

If you have already deployed the VI Workload Domain (WLD) and want to change the NSX-T Manager size after deployment, you can follow the VMware NSX docs:

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/administration/GUID-B1B0CB39-7C51-410D-A964-C03D99E39C19.html

Hope this helps, and keep sharing the knowledge!

 

Why VMware Integrated OpenStack (VIO) – Part 1

Time to move out of the comfort zone, explore, and deep dive into OpenStack, especially VMware Integrated OpenStack (VIO), vSphere with Kubernetes, VMware Tanzu Kubernetes Grid (TKG), and VMware Tanzu Kubernetes Grid Integrated (TKGI), formerly known as VMware Enterprise PKS.

Let’s start with VMware Integrated OpenStack (VIO).

VMware Integrated OpenStack (VIO) is a VMware supported enterprise grade OpenStack distribution that makes it easy to run OpenStack cloud on top of VMware virtualization technologies. With VIO, customers can rapidly build production-grade private and public OpenStack clouds on top of VMware technologies, leveraging their existing VMware investment and expertise.

VMware Integrated OpenStack is ideal for many different use cases, including building an IaaS platform, providing standard OpenStack API access to developers, leveraging edge computing, and deploying NFV services on OpenStack.

VMware Integrated OpenStack (VIO) can be deployed and run on your existing vSphere, NSX-T, and vSAN infrastructure, simplifying operations and offering better performance and stability.

VMware Integrated OpenStack (VIO) Architecture

VMware Integrated OpenStack (VIO) connects vSphere resources to the OpenStack Compute, Networking, Block Storage, Image Service, Identity Service, and Orchestration components.

VMware Integrated OpenStack is designed and implemented as separate management and compute clusters. The management cluster contains the OpenStack components, and the compute cluster runs tenant or application workloads.

The VMware Integrated OpenStack (VIO) core components are:

Nova (compute) – Compute clusters in vSphere are used as Nova compute nodes. Nova provides a way to provision compute instances (virtual servers) in these clusters.
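As a sketch of what Nova provisioning looks like at the API level, the function below builds the request body that the OpenStack Compute "create server" API expects; the name and UUID values are placeholders for illustration, not values from this post:

```python
# Minimal sketch of the JSON body sent to Nova's create-server endpoint
# (POST /servers in the OpenStack Compute API).

def build_server_request(name, image_ref, flavor_ref, network_uuids):
    """Build the request body for creating a compute instance."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,       # Glance image UUID
            "flavorRef": flavor_ref,     # flavor UUID
            "networks": [{"uuid": uuid} for uuid in network_uuids],
        }
    }

# Placeholder identifiers, for illustration only.
body = build_server_request("web-01", "IMAGE-UUID", "FLAVOR-UUID", ["NET-UUID"])
```

In a VIO deployment, Nova's VMware driver turns this request into a virtual machine in the mapped vSphere compute cluster.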

Neutron (networking) – Neutron allows you to create and attach network interface devices managed by OpenStack. Neutron provides networking functions by communicating with NSX Manager (for NSX-T Data Center deployments) or with vCenter Server (for VDS-only deployments).

Cinder (block storage) – Cinder is designed to create and manage a service that provides persistent data storage to applications. Cinder executes block volume operations through the VMDK driver, causing the desired volumes to be created in vSphere.

Glance (image service) – Glance enables users to discover, register, and retrieve virtual machine images through the Image service in a variety of locations, from simple file systems to object-storage systems such as OpenStack Object Storage. Glance images are stored and cached in a dedicated image service datastore when the virtual machines that use them are booted.

Keystone (identity management) – Authentication and authorization in OpenStack are managed by Keystone.

Heat (orchestration) – Heat provides orchestration service to orchestrate composite cloud applications through an OpenStack API call.
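For example, a minimal Heat Orchestration Template (HOT) that Heat could deploy through such an API call might look like the sketch below; the image, flavor, and network names are assumptions for illustration:

```yaml
# Minimal HOT template sketch; resource property values are placeholders.
heat_template_version: 2018-08-31

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      name: web-01
      image: ubuntu-20.04       # assumed Glance image name
      flavor: m1.small          # assumed flavor
      networks:
        - network: tenant-net   # assumed Neutron network
```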

Ceilometer (telemetry) – Ceilometer collects data on the utilization of the physical and virtual resources in deployed clouds, persists the data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.

VMware also simplifies OpenStack operations through vRealize Operations Manager (vROps) integration for performance monitoring, capacity planning, and troubleshooting, and through vRealize Log Insight (vRLI) for diagnostics across OpenStack service logs.

Stay tuned for VMware Integrated OpenStack (VIO) – PART 2 !!

 

VMware Cloud Foundation (VCF) 4.1 – What’s new?

Last week was a big release week from a VMware perspective: VMware released vSphere 7 Update 1, vSAN 7 Update 1, and VMware Cloud Foundation (VCF) 4.1. There are some nice enhancements in VCF 4.1. In this post, I’ll highlight the big features that customers and architects were looking for in this release.

Rename Objects
With VMware Cloud Foundation 4.1, you can now rename domains, clusters, and network pools. Domain and network pool objects can be renamed from the SDDC Manager UI, and cluster objects can be renamed from vCenter Server. Once you do, go back to SDDC Manager and refresh the UI; the new cluster name will be retrieved by SDDC Manager.

SDDC Manager Backup Enhancements
With VCF 4.1, backups can be scheduled on a recurring basis. Customers can also enable state-change backups, where an SDDC Manager backup occurs 10 minutes after the successful completion of an event, such as the creation of a workload domain.

Support for vVols as Principal Storage for Workload Domains

With Cloud Foundation 4.1, vVols can now be used as principal storage for workload domains and as secondary storage for both the management domain and workload domains.

If you want to read about vVols in detail, please refer to the blog written by Cormac Hogan (Director and Chief Technologist in the Office of the CTO in the Cloud Platform Business Unit at VMware): https://cormachogan.com/2015/02/17/vsphere-6-0-storage-features-part-5-virtual-volumes/

Support for Remote Clusters (Extends VCF to the Remote/Edge)

We continue to see growing demand for remote and edge sites, where customers want a small infrastructure footprint at the remote or edge site but still want automated deployment, lifecycle management, and unified management.

With the release of VCF 4.1, remote clusters are supported with a minimum of 3 nodes and a maximum of 4 nodes in a vSAN ReadyNode configuration. Remote clusters can be implemented in two different designs. In the first, each remote site is managed as a separate workload domain and has a dedicated vCenter Server instance. In the second, each remote site is managed as a cluster within a single workload domain, and the remote sites share a single vCenter Server instance. Day-2 operations (such as lifecycle management and adding or removing clusters) can be performed centrally from the data center to the remote sites.

Improved Lifecycle Management (VCF Upgrade Process)

In previous editions of VCF, the upgrade process was sequential. For example, if you started at Cloud Foundation 4.0 and wanted to get to 4.1, you first had to upgrade to any versions that existed in between before eventually upgrading to the desired version. This resulted in the need to schedule multiple maintenance windows and took more time to reach the desired state.

VCF 4.1 adds the ability to perform skip-level upgrades for the SDDC Manager. With this feature, you can schedule a single maintenance window and update to the desired state in a single action, reducing the time needed to perform upgrades.

vRealize Suite for VCF

With Cloud Foundation 4.1, VCF deploys a ‘VCF-aware’ vRSLCM appliance. The first enhancement is that there is no need to manually download and deploy vRSLCM: once management domain bring-up is done and SDDC Manager is up and running, you can initiate the installation of vRSLCM from SDDC Manager.

VCF 4.1 also introduces a bidirectional vRSLCM and SDDC Manager relationship, providing a unified product experience. Users can log in to vRSLCM to perform operations, and SDDC Manager can now discover whether vRSLCM was used to deploy vRealize Suite products such as vRealize Automation (vRA), vRealize Operations Manager (vROps), and vRealize Log Insight (vRLI). This eases deployment for customers and avoids potential interoperability issues between vRSLCM and SDDC Manager.

Hybrid Cloud Extension (HCX) Integration 

With the release of VCF 4.1, HCX R143 now has native support for Cloud Foundation 4.1 with the Converged Virtual Distributed Switch (CVDS). This will be extremely helpful for customers who need to migrate existing workloads to a new Cloud Foundation installation.

Role-Based Access Control for VCF

A New VCF User Role – ‘viewer’

A new ‘view-only’ role has been added in VCF 4.1. Previous editions of VCF had only two roles, Administrator and Operator; a third role, ‘Viewer’, is now available. As the name suggests, users with this view-only role cannot create, delete, or modify objects, and they may see a message saying they are unauthorized to perform certain actions.

 

VCF Local Account

With VCF 4.1, customers can have a local account that can be used during an SSO failure.

What happens when the SSO domain is unavailable for some reason? In that case, users would not be able to log in. To address this, customers can now create a VCF local account called admin@local. This account allows them to perform certain actions until the SSO domain is functional again.

This VCF local account can be defined in the deployment bring up worksheet. 

Summary

I have tried to cover all the new enhancements in the VCF 4.1 release, but always refer to the official documentation for complete details: https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html

 

 

#1 – TECHTALK – VSAN STANDARD AND STRETCH CLUSTER DESIGN AND DEPLOY DEEP DIVE WITH VCF

Hello There,                                                                                      

Join me for #1 – Virtual TECHTALK             

Register Now!!👇✅

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive with VCF

Time: 16th September, Wednesday at 5-6 PM SGT ⏰

https://vmware.zoom.us/webinar/register/WN_8zrTPX7hTG-isQ5pzsUHhg

#vcf #vmware #vSAN #TechTalk

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive With VCF

Hello There,

I am starting a monthly TechTalk focusing on the SDDC, based on VVD and VCF. In the very first TechTalk, I am planning a deep dive into vSAN Standard and Stretched Cluster design and deployment with VCF.

#1 – TechTalk – vSAN Standard and Stretch Cluster Design and Deploy Deep Dive with VCF (16th September 2020, 5-6 PM SGT)

I will share the Zoom meeting details later. Stay tuned and happy learning!!

 

vCenter Site Recovery Manager (SRM) 5.X – Part 6

First we are going to discuss replication with VR, and then we will cover array-based replication.

To protect a single virtual machine or a group of virtual machines, the virtual machine files need to be replicated from the Protected Site to the Recovery Site. Since we have already set up a VR infrastructure, we will proceed using this mechanism. We will skip the replication configuration of a single VM and concentrate on configuring replication for multiple VMs, since that is the most common setup in a virtual infrastructure.

Configure Replication for multiple VMs

1). Open the vSphere Client and from the Home page go to VMs and Templates.

2). Select a folder in the left pane and select the Virtual Machines tab. Select all the VMs you want to configure replication for, right-click the selected VMs, and choose vSphere Replication from the drop-down menu.

3). In the Configure Replication wizard, select the desired Recovery Point Objective (RPO; minimum 15 minutes, maximum 24 hours), and leave the “Initial copies of .vmdk files have been placed on the target datastore” option un-checked, since we didn’t copy the files to the Recovery Site. Click Next.

4). Select the appropriate VR Server (if you have multiple VR Servers) at the Recovery Site, or leave the setting at Auto-assign VR Server. Click Next to proceed.

5). Review the settings and click Finish.

6). Click Close when the Configuring Replication process is completed.

============================================================

Configure Datastore Mapping

Now that the replication of VMs has been set up, we need to create the datastore mapping for the replicated VMs on the Recovery Site.

1). On the Protected Site, open the Site Recovery Manager and select vSphere Replication in the left pane. Select the Protected Site and go to the Datastore Mappings tab.

2). Select datastores under the Source Datastore column and map each to the appropriate datastore at the Recovery Site by clicking the Configure Mapping button.

3). In the Datastore Mapping window, select the appropriate datastore and click OK.

4). Verify the mapping under the Target Datastore column on the Datastore Mappings tab.

5). To check if the replication is running, select the Recovery Site in the vSphere Replication view and select the Virtual Machines tab. You should see something similar to the screenshot below.

OK, so the files of the virtual machines are now synchronized between the Protected Site and the Recovery Site. Let’s proceed with the configuration of the Protection Group.

===========================================================

Now let’s discuss and configure VMs with array-based replication.

Datastore Groups

A datastore group is a container that aggregates one or more replication-enabled datastores. Datastore groups are created by SRM and cannot be manually altered. A replication-enabled datastore is a datastore whose LUN has a replication schedule enabled at the array.

A datastore group will contain only a single datastore if that datastore doesn’t store files of virtual machines that span other datastores. See the preceding single-datastore datastore group conceptual diagram.

A datastore group can also contain more than one datastore. SRM aggregates multiple datastores into a single group if they host virtual machines whose files are distributed across those datastores. For example, if VM-A has two VMDKs, one placed on Datastore-M and one on Datastore-N, then both datastores become part of the same datastore group.
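The aggregation rule above amounts to computing connected components over VM file placements: any two datastores sharing a VM's files end up in the same group. A minimal sketch (VM and datastore names are illustrative):

```python
# Group datastores the way the text describes: datastores that share a
# VM's files are linked into one datastore group (connected component).

def datastore_groups(vm_placements):
    """vm_placements: dict mapping VM name -> set of datastores holding its files.
    Returns a list of frozensets, one per datastore group."""
    parent = {}  # union-find forest over datastore names

    def find(ds):
        parent.setdefault(ds, ds)
        while parent[ds] != ds:
            parent[ds] = parent[parent[ds]]  # path halving
            ds = parent[ds]
        return ds

    def union(a, b):
        parent[find(a)] = find(b)

    for datastores in vm_placements.values():
        ds_list = sorted(datastores)
        for ds in ds_list:
            find(ds)                 # register every datastore
        for ds in ds_list[1:]:
            union(ds_list[0], ds)    # link datastores spanned by one VM

    groups = {}
    for ds in parent:
        groups.setdefault(find(ds), set()).add(ds)
    return [frozenset(g) for g in groups.values()]

groups = datastore_groups({
    "VM-A": {"Datastore-M", "Datastore-N"},  # VM-A spans two datastores
    "VM-B": {"Datastore-P"},                 # standalone datastore
})
```

Here Datastore-M and Datastore-N land in one group because VM-A spans both, while Datastore-P forms a single-datastore group.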

Protection Groups

“A protection group is a group of virtual machines that fail over together to the recovery site during a test or a recovery procedure.”

Unlike vSphere Replication, SRM cannot enable protection on individual virtual machines. All the virtual machines hosted on the datastores in a datastore group are protected; in other words, with SRM, protection is enabled at the datastore group level. This is because, with array-based replication, the LUNs backing the datastores are replicated. The array doesn’t know which VMs are hosted on a datastore; it just replicates the LUN, block by block. So, at the SRM layer, protection is enabled at the datastore level. In a way, a Protection Group is nothing but a software construct to which datastore groups are added, which in turn includes all the VMs stored on them in the Protection Group.

When creating a Protection Group, you will have to choose the datastore groups to include. Keep in mind that you cannot individually select the datastores in a datastore group. If you were allowed to do so, you could end up with virtual machines that do not have all of their files protected. Let’s assume you have a virtual machine, VM-A, with two disks (VMDK-1 and VMDK-2) placed on two different datastores: VMDK-1 on Datastore-X and VMDK-2 on Datastore-Y. When creating a Protection Group, if you could select individual datastores and chose only one of them, you would leave the remaining disks of the VM unprotected. Hence, SRM doesn’t allow selecting individual datastores from a datastore group, as a measure to prevent such a scenario. The following diagram shows the modified conceptual structure of the datastore group:

Note: a datastore group cannot be part of two Protection Groups at the same time.

===========================================================

Creating Protection Group Based on VR (vSphere Replication)

The protection group can be used in one or multiple recovery plans, which we will create later. Protection groups can be array-based or VR-based. In this case we will create a protection group based on vSphere Replication, since it was configured in the previous part.

Create Protection Group

1). On the Protected Site, open the Site Recovery Manager and select Protection Groups in the left pane. Click on Create Protection Group to start the wizard.

2). In the Create Protection Group wizard, select the Protected Site and vSphere Replication as the protection group type. Click Next.

3). In the Select Virtual Machines window, select the VMs you want to add to this Protection Group and click Next.

4). Provide a descriptive Protection Group Name and optionally a Description for this Protection Group, and click Next.

5). Review the settings and click on Finish when ready.

6). Wait until the Protection Group is configured. You can monitor the progress in the task pane.

Now that the VMs are protected, we can start building a recovery plan for them in the next part.

==========================================================

Creating Protection Group Based on Array-based Replication

A Protection Group is created in the SRM UI at the protected site.

The following procedure will guide you through the steps required to create a Protection Group:

1. On the Protected Site, open the Site Recovery Manager and select Protection Groups in the left pane. Click on Create Protection Group to start the wizard.

2). In the Create Protection Group wizard, select the Protected Site and Array Based Replication (SAN) as the protection group type, select the correct array pair, and click Next.

3). On the next screen, choose a datastore group that you would like to protect. When you select a datastore group, the bottom pane will list all the VMs hosted on the datastores in the group. You cannot individually select the VMs, though. Although I have selected only a single datastore group, you can select multiple datastore groups to become part of the Protection Group. Click Next to continue.

4). On the next screen, provide the Protection Group Name and an optional description, and click Next to continue.

The Protection Group Name can be any name you prefer to identify the Protection Group with. A common naming convention indicates the type or purpose of the VMs. For instance, if you were protecting SQL Server VMs, you might name the Protection Group SQL Server Protection Group; or, if it were a set of high-priority VMs, you might name it High Priority VMs Protection Group.

5). On the Ready to Complete screen, review the selected wizard options and click Finish to create the Protection Group:


=========================================================

So, what exactly happens when you create a Protection Group?

When you create a Protection Group, it enables protection on all the VMs in the chosen datastore group and creates shadow VMs at the recovery site. In detail, this means that at the protected site vCenter Server, you should see a Create Protection Group task complete; subsequently a Protect VM task completes successfully for each of the VMs in the Protection Group. See the following screenshot for reference:

At the recovery site vCenter Server, you should see the Create Protection Group, Protect VM (one for each VM), Create virtual machine (one for each VM), and Recompute Datastore Groups tasks completed successfully.

As shown in the following screenshot, the shadow VMs appear in the vCenter Server’s inventory at the recovery site:

As they are solely placeholders, you cannot perform any power operations on them. Other operations are possible but not recommended; hence, a warning will be displayed requesting confirmation, as shown in the following screenshot:

The placeholder datastores will only have the configuration file (.vmx), teaming configuration file (.vmxf), and a snapshot metadata file (.vmsd) for each VM.

These files will be automatically deleted when you delete the Protection Group.

==============================================================

In the next part we will discuss recovery plans, and testing and performing a failover and failback. Click here to continue to part 7.

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 1

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 2

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 3

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 4

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 5

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 6

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 7

Note: I have used pictures in this post from the SRM book written by Abhilash GB and from the blog (http://defaultreasoning.com) by Marek.Z, and would like to thank both of them 🙂

vCenter Site Recovery Manager (SRM) 5.X – Part 5

Now that the SRA (Array Manager) or VR infrastructure is up and running, we can start to configure the inventory mappings. The inventory mappings provide a convenient way to specify how the resources at the Protected Site map to the resources at the Recovery Site. You can create a mapping of the following objects in the vCenter Server:

  • Resources
  • VM folders
  • Networks
  • Datastores

Inventory mappings are not mandatory, but they are highly recommended. Let’s start by configuring the resource mapping, folder mapping, network mapping, and a placeholder datastore.

Resource Mapping

We need to provide a correlation between the compute resource containers on both sites. The compute resource containers are clusters, resource pools, and ESXi hosts. This is achieved with the help of resource mappings. Resource mappings respect the presence of these containers: if there is a cluster or resource pool at the site, the ESXi hosts are not made available as selectable compute containers.
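That selection rule can be sketched as a simple filter (the inventory structure below is illustrative, not an SRM API):

```python
# Sketch of the rule above: if clusters or resource pools exist at a site,
# individual ESXi hosts are not offered as mapping targets.

def selectable_compute_containers(site_inventory):
    """site_inventory: list of dicts with 'type' and 'name' keys."""
    containers = [obj for obj in site_inventory
                  if obj["type"] in ("cluster", "resource_pool")]
    if containers:
        return containers          # clusters/pools hide the hosts
    return [obj for obj in site_inventory if obj["type"] == "host"]

inv = [
    {"type": "cluster", "name": "Prod-Cluster"},
    {"type": "host", "name": "esxi-01"},
]
selected = selectable_compute_containers(inv)  # only the cluster is offered
```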

This is how you configure resource mappings:

1. Navigate to vCenter Server’s inventory home page and click on Site Recovery.

2. Click on Sites in the left pane, select a site, and navigate to the Resource Mappings tab. Select the resource container (a cluster, resource pool, vAPP, or host) you want to map, and click on Configure Mapping to bring up the Mapping window.

This is an example if you want a Cluster as the Resource Mapping:

RM1

This is an example if you want a vAPP as the Resource Mapping:

3). In the Mapping window, browse the resource inventory of the recovery site, select the destination resource container (a cluster, resource pool, vAPP, or host), and click OK to confirm.

(Select vAPP as Resource) 

RM5

 

(Select Cluster as Resource)

4). The selected resource should now appear as mapped in the Recovery Site Resource column on the Resource Mappings tab.

=======================================================

Folder mappings

Folders are inventory containers that can only be created using vCenter Server. They are used to group inventory objects of the same type for easier management. There are different types of folders; the folder type is determined by the inventory-hierarchy level at which they are created. The folder types are as follows:

• Datacenter folder

• Hosts and clusters folder

• Virtual machine and template folder

• Network folder

• Storage folder

This is how you configure Folder mappings: 

1). Click on Protected Sites in the left pane and navigate to the Folder Mappings tab. Select the virtual machine folder that you want to map, and click on Configure Mapping to bring up the Mapping window:

2). In the mapping window, create or select a recovery folder and click OK.

3). The selected folder should now appear as mapped in the Recovery Site Resource column on the Folder Mappings tab.

RM10

 

========================================================

Network mappings

Network configurations at the protected and recovery sites need not be identical. Network mappings provide a method to form a correlation between the port groups (standard or distributed) of the protected and recovery sites. Let’s say we have a port group named VM Network at the protected site, and it is mapped to a port group named Recovery Network at the recovery site. In this case, a virtual machine connected to VM Network will be reconfigured to use Recovery Network when failed over.
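The reconfiguration described above amounts to a simple lookup; a minimal sketch, reusing the example names from the text:

```python
# Sketch: swap a failed-over VM's port groups according to the
# protected-site -> recovery-site network mapping.

def remap_networks(vm_port_groups, network_mapping):
    """Return recovery-site port groups for a VM; unmapped ones stay as-is."""
    return [network_mapping.get(pg, pg) for pg in vm_port_groups]

mapping = {"VM Network": "Recovery Network"}
recovered = remap_networks(["VM Network", "Mgmt Network"], mapping)
```

A port group with no mapping entry is left unchanged, mirroring the fact that only mapped networks are rewired at failover.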

This is how you configure network mappings: 

1). With the Protected Site still selected, go to the Network Mappings tab, select the network to map, and click on Configure Mapping.

2). In the mapping window, select the appropriate network or dvSwitch port group at the Recovery Site and click OK.

3). The selected network should now appear as mapped in the Recovery Site Resource column on the Network Mappings tab.

RM13

=================================================================

Configuring Placeholder Datastores

For every virtual machine that becomes part of a Protection Group, SRM creates a shadow virtual machine. A placeholder datastore is used to store the files for the shadow virtual machines. The datastore used for this purpose should be accessible to all the hosts in the datacenter/cluster serving the role of a recovery-host. Configuring placeholder datastores is an essential step in forming an SRM environment. Assuming that each of these paired sites is geographically separated, each site will have its own placeholder datastore. The following figure shows the site and placeholder datastore relationship:

This is how you configure placeholder datastores:

1). With the Protected Site still selected, go to the Placeholder Datastores tab and click on Configure Placeholder Datastore.

2). In the mapping window, select the appropriate datastore and click OK.

3). The selected datastore should now appear under the Datastore column in the Placeholder Datastores tab.

Remember to configure the resource mapping, folder mapping, network mapping, and placeholder datastore at the Recovery Site as well.

The inventory mappings are now configured and ready to use. Continue to SRM Part 6, where we will configure vSphere Replication for our VMs.

========================================================

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 1

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 2

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 3

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 4

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 5

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 6

Click here to go to vCenter Site Recovery Manager (SRM) 5.X – Part 7

Note: I have used pictures in this post from the SRM book written by Abhilash GB and from the blog (http://defaultreasoning.com) by Marek.Z, and would like to thank both of them 🙂

vCenter Site Recovery Manager (SRM) 5.X – Part 4

Now that we have deployed and configured the VRM servers at the Protected Site and the Recovery Site, it’s time to pair them. Once the pairing is completed, we will be able to deploy the VR Servers, one for each site. This will enable us to fail over from the Protected Site to the Recovery Site. In SRM 5, each VR Server can manage 50 replication schedules and protect a maximum of 500 VMs. Let’s start the VRM Server pairing process.
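Those per-VR-Server limits make sizing a quick ceiling calculation; a minimal sketch using the numbers quoted above:

```python
# Sketch: how many VR Servers are needed, given the SRM 5 limits quoted
# above (50 replication schedules and 500 protected VMs per VR Server).
import math

def vr_servers_needed(protected_vms, schedules):
    """Return the minimum VR Server count satisfying both limits."""
    return max(math.ceil(protected_vms / 500), math.ceil(schedules / 50))
```

For example, 600 protected VMs need two VR Servers even with only a handful of schedules, because the 500-VM limit binds first.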

VRM Server Pairing

1). Open the Site Recovery Manager from the Solutions and Applications in the vSphere Client on the Protected Site.

2). Go to vSphere Replication, make sure that the Protected Site is selected and click on Configure VRMS Connection.

3). Click Yes if asked to configure the VRMS connection.

4). Click OK to accept the certificate error.

5). Provide the username and password for the Recovery Site and click OK.

6). Once again, click OK to accept the certificate error.

7). You will be presented with a configuration progress window. Click OK when the configuration of VRMS connection succeeds.

8). The status of the connection under the Summary tab should now display "Connected" on both sites.

=========================================================

Deploying VR Server

Now that the VRM Servers are connected, we can start the deployment of the VR Server.

1). From the Protected Site, select the Recovery site in the left pane and click on Deploy VR Server.

2). Click OK to launch the OVF wizard.

3). Click Next in the Source window and Next again in the OVF Template Details window.

4). In the Name field, provide an FQDN for the VR Server appliance and select the location. Click Next to proceed.

5). Next, select the Cluster, Host, and appropriate datastore for the VR Server. Click Next to proceed at each step.

6). Select the desired disk format and click Next.

7). In the Properties window, provide a Default Gateway, DNS Server, IP address and Subnet Mask for the VR Server and click Next.

8). Review the settings in the Ready to Complete window and press Finish to start the deployment.

Repeat this process for the Recovery Site.

Note :- The VR Server is deployed at both sites so that you can replicate back to the Protected Site. You could replicate only to the Recovery Site, but without a VR Server at the Protected Site, you will not be able to reverse replication after a failover.
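A mistyped IP address, gateway, or subnet mask in step 7 is a common reason a VR Server never comes online. As a quick sanity check, Python's standard ipaddress module can confirm the appliance IP and its default gateway sit in the same subnet (the addresses below are made-up examples, not values from this setup):

```python
import ipaddress

def same_subnet(ip, gateway, netmask):
    """True if the appliance IP and its default gateway share a subnet."""
    net_ip = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    net_gw = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    return net_ip == net_gw

print(same_subnet("192.168.10.50", "192.168.10.1", "255.255.255.0"))  # True
print(same_subnet("192.168.10.50", "192.168.20.1", "255.255.255.0"))  # False
```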

==========================================================

Register VR Server

After the VR Server is deployed, it has to be registered with the vCenter Server.

1). Go back to the Site Recovery Manager and click on Register VR Server in the left pane under Commands.

2). Expand the Datacenter object and select the VR Server VM. Click OK to proceed.

3). Click Yes when asked if you want to register the VR Server.

4). Ignore the remote server certificate error and click on OK.

5). Click on OK to close the window when the registration completes successfully.

6). Verify that the VR Server is now visible in the left pane and the Summary tab shows the status: Connected.

Repeat these steps for the Recovery Site.

OK, all done now. The VR infrastructure is now up and running. Continue to Part 5 where we will take a look at the inventory mappings between the Protected Site and the Recovery Site.

=========================================================


vCenter Site Recovery Manager (SRM) 5.X – Part 3

Now that the connection between the Protected and Recovery Sites is established, and having covered in Part 2 how to download, install, and rescan the SRA, add an Array Manager, and enable an Array Pair, it is time to deploy vSphere Replication (VR). The VR infrastructure consists of the following components:

  • VRM Server: manages the VR Servers across the ESXi hosts
  • VR Server: replicates virtual machines between sites

vCenter Server Managed IP Address

First, we need to configure the vCenter Server Managed IP address, which the VRM Server uses to communicate with the extension service on the vCenter Server.

1. In the vSphere Client, go to the Administration menu and select vCenter Server Settings.

2. Go to Runtime Settings and fill in the vCenter Server Managed IP address.

3. Click OK to save and close the window.

Repeat this step for the Recovery Site as well.
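Behind the Runtime Settings field, the managed IP corresponds to the vCenter advanced setting VirtualCenter.ManagedIP. The sketch below only validates the address and builds the key/value pair as plain data; actually applying the change would go through the vSphere API (for example pyVmomi's OptionManager), which is outside the scope of this illustration.

```python
import ipaddress

def managed_ip_option(ip):
    """Build the advanced-setting change for the vCenter Managed IP.

    VirtualCenter.ManagedIP is the vpxd advanced setting behind the
    Runtime Settings field. Applying it would require a vSphere API
    call (not shown); this helper only validates the input address.
    """
    ipaddress.ip_address(ip)  # raises ValueError on a malformed address
    return {"key": "VirtualCenter.ManagedIP", "value": ip}

print(managed_ip_option("192.168.10.20"))
```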

===========================================================

VRM Database

Before deploying the VRM Server, a database must be created. Follow these steps to create the database for the VRM Server:

  1. Log in to SQL Server Management Studio, right-click Databases, and select New Database.
  2. Provide a descriptive name (e.g. VRMS_DB) and change the owner (e.g. to srmadmin). Click OK to create the database.
  3. Close the SQL Server Management Studio and proceed to the next step.
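The same database can also be created with T-SQL instead of the Management Studio GUI. The snippet below only builds the statements as strings, using the example names from above (VRMS_DB, srmadmin); executing them would require a live SQL Server connection, e.g. via a driver such as pyodbc.

```python
def vrm_db_script(db_name="VRMS_DB", owner="srmadmin"):
    """Return T-SQL statements to create the VRM database and set its owner."""
    return [
        f"CREATE DATABASE [{db_name}];",
        f"ALTER AUTHORIZATION ON DATABASE::[{db_name}] TO [{owner}];",
    ]

for stmt in vrm_db_script():
    print(stmt)
```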

Deploy VRM Server

The next step is deploying the VRM Server, which ships as a virtual appliance and is deployed from within Site Recovery Manager.

1. Connect to the Protected Site vCenter Server using the vSphere Client.

2. Navigate to Home –> Solutions and Applications –> Site Recovery.

3. Click vSphere Replication in the left pane and make sure the Protected Site is selected. In the left pane, under Commands, click Deploy VRM Server.

4. The Deploy VRM Server wizard starts; click OK to begin.

5. Press Next in the Source window of the deployment wizard.

6. In the next step, review the OVF Template Details and press Next to continue.

7. Provide an FQDN and a location for the VRM Server. Click Next.

8. Select the appropriate cluster and a datastore to store the VRM Server appliance. Click Next.

9. Select the disk format; in this case, I used the Thin Provision disk type. Click Next to proceed.

10. Provide the Password for the root account, Default Gateway, DNS server, IP Address and Subnet Mask for the appliance. Click Next when ready.

11. Click Next on the Configure Service Bindings window.

12. Review the settings and click Finish to start the deployment process.

13. When the deployment finishes, the VRM Server VM will be powered on.

Remember to deploy a VRM Server instance on the Recovery Site as well. Wait until the VRM Server is fully started before proceeding to the next step. 

Configure VRM Server

When the VRM Server for the Protected Site has fully started, the VMware Appliance Management Interface (VAMI) is displayed along with a URL to manage the VRM Server.

1. Next, click on Configure VRM Server under the Commands pane in the Site Recovery Manager.

2. Ignore the certificate security warning and log in with the root account and the password entered during the deployment wizard.

3. In the Getting Started window, Go to option 1: Startup Configuration. Click on Configuration Page.

4. Leave the Configuration Mode at its default (Manual Configuration).

5. Select SQL Server as DB Type.

6. Provide the FQDN for the DB Host, in my case the vCenter Server.

7. Leave the DB Port at its default (1433).

8. Enter the DB Username (e.g. srmadmin) and DB Password.

9. Provide the DB Name (e.g. VRMS_DB) created earlier.

10. Leave the VRM Host value at its default.

11. Provide a VRM Site Name.

12. Enter an FQDN for the vCenter Server address and leave the vCenter Server Port at its default (80).

13. Enter the vCenter Server Username, Password and e-mail address.

14. Scroll down and click Generate and Install under Generate a self-signed certificate. You should receive a message stating that the certificate was successfully generated and installed.

15. On the right side under the Actions menu, click the Save and Restart Service button.

16. Wait until the process completes; if successful, you will see the following message.

17. Note that the status of the VRM Service is now running under the VRM Service Status section.

18. Also, you will be presented with a security warning; select Install this Certificate and click the Ignore button.

19. Go back to the Site Recovery Manager and verify that the VRM Server is now configured.

Next, deploy and configure the VRM Server at the Recovery Site by following the steps above.

Continue to Part 4, where we will pair the VRM Servers between the Protected Site and the Recovery Site and deploy the VR Servers.
