VMware Tanzu Kubernetes Grid (TKG) Architecture – Part 4

In the previous part, VMware Tanzu Kubernetes Grid (TKG) Bootstrap – Part 3, we discussed bootstrapping for a TKG cluster.

In this Part 4 we will discuss the TKG architecture and then move on to how to install and initialize the Tanzu command line interface (CLI) on a bootstrap machine.

In a nutshell, VMware Tanzu Kubernetes Grid (TKG) provides customers with a consistent Kubernetes environment that is ready for end-user workloads and ecosystem integrations. Customers have the option to deploy Tanzu Kubernetes Grid (TKG) across on-premises software-defined data centers (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2.

Tanzu Kubernetes Grid Architecture

Tanzu Kubernetes Grid (TKG) delivers a Kubernetes platform that is engineered and supported by VMware, so that you do not have to
build your Kubernetes environment by yourself.

In addition to Kubernetes binaries that are tested, signed, and supported by VMware, Tanzu Kubernetes Grid provides services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

Tanzu Kubernetes Grid (TKG) is an implementation of several open-source projects tested and supported by VMware to provide automated provisioning and lifecycle management of Kubernetes clusters.

These include:

  • Declarative API 
  • Networking with Antrea and Calico
  • Ingress Control with Contour
  • Log Forwarding with Fluentbit
  • Harbor Registry
  • User Authentication with Pinniped
  • Load Balancing with VMware NSX Advanced LB (AVI Networks)
  • Backup and Restore with Velero

The TKG architecture design and deployment includes:

  1. Management Cluster 
  2. Tanzu Kubernetes (Workload) Cluster 
  3. Bootstrap Machine running Tanzu CLI

Management Cluster

The management cluster is the first element that you design and deploy when you create a Tanzu Kubernetes Grid instance. The management cluster performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance. This is where Cluster API runs to create the Tanzu Kubernetes clusters in which your application workloads run, and where you configure the shared and in-cluster services that the clusters use.
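
Because the management cluster hosts Cluster API, you can inspect the cluster objects it manages with plain kubectl once your context points at the management cluster. This is a minimal, illustrative sketch; the context and cluster names are placeholders.

  # Point kubectl at the management cluster context (placeholder name)
  kubectl config use-context tkg-mgmt-admin@tkg-mgmt

  # Cluster API resources representing the clusters and their machines
  kubectl get clusters -A
  kubectl get machinedeployments,machines -A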

When you deploy a management cluster, networking with Antrea is automatically enabled in the management cluster. The management cluster is responsible for operating the platform and
managing the lifecycle of Tanzu Kubernetes clusters. 

Tanzu Kubernetes (Workload) Clusters
Once the management cluster is ready, you deploy the Kubernetes clusters that handle your application workloads and that you manage through the management cluster. Tanzu Kubernetes clusters can run different versions of Kubernetes, depending on the needs of the applications you run. You can manage the entire lifecycle of Tanzu Kubernetes clusters by using the Tanzu CLI.

Tanzu Kubernetes clusters implement Antrea for pod-to-pod networking by default.

Bootstrap Machine running Tanzu CLI

The bootstrap machine is the host from which you deploy the management and workload clusters, and it keeps the Tanzu and Kubernetes configuration files for your deployments.

That’s all about the TKG architecture. We will discuss installation of the TKG CLI in VMware Tanzu Kubernetes Grid (TKG) CLI Installation – Part 5. Stay tuned for the next part.

Hope this will be informative. Happy learning and happy sharing 🙂 

Certified Kubernetes Administrator (CKA) Exam Experience Sharing

First of all, Happy New Year 2021 everyone, and have a blessed 2021 🙂

The last 3-4 months have been great learning for cloud native. I started my cloud native journey with the VMware Tanzu portfolio, covering:

  • VMware Tanzu Kubernetes Grid (TKG)
  • Tanzu Kubernetes Grid Service (TKGS)
  • VMware Tanzu Mission Control (TMC)
  • VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)
  • VMware vSphere with Kubernetes
  • VMware Enterprise PKS (Pivotal Container Service)

While going through all of the above, where every discussion starts and ends with cloud native, aka Kubernetes (K8s), I was drawn to the Certified Kubernetes Administrator (CKA) exam.

The Certified Kubernetes Administrator (CKA) certification was created by the Cloud Native Computing Foundation (CNCF) in collaboration with The Linux Foundation.

I started exploring the best course for this, one that covers all the topics in the CKA exam and also provides labs to practice, as this is a completely hands-on exam and needs a lot of practice to clear.

I came across these two super awesome courses developed by Mumshad Mannambeth on Udemy.

1. Kubernetes for the Absolute Beginners – Hands-on

2. Certified Kubernetes Administrator (CKA) with Practice Tests (Kubernetes v1.19)

My focus was more on the practice tests in these courses, and frankly speaking, I did all the labs 4-5 times to be confident enough before booking my exam. It is not easy to complete all the labs 4-5 times; it takes many days, and I know how many sleepless nights it took me to finish them.

The exam fee is US$ 300, and you can often find 30-40% discount coupons for it.

There were a total of 17 questions with 120 minutes to complete. In my experience, I attempted all 17 questions; I know I got a few wrong 😉 but I tried to finish them all. In general, the questions are not too difficult if you practice enough and your fundamentals are clear, but it is still hard and needs good time management to complete all 17 questions in 120 minutes (2 hours).

For the CKA exam, a score of 66% or above must be earned to pass. I scored 75%, not bad for a first attempt 🙂

Kubectl Cheat Sheet

It is highly recommended to go through all the commands at least once before the exam; it helps you become more familiar with the frequently used commands and makes them easier to find during the exam. https://kubernetes.io/docs/reference/kubectl/cheatsheet
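
For example, a handful of commands come up again and again in the exam. These are standard kubectl usage, shown here only as a quick refresher:

  # Create a deployment and expose it
  kubectl create deployment web --image=nginx
  kubectl expose deployment web --port=80 --type=ClusterIP

  # Generate YAML without creating the object (handy for editing and re-applying)
  kubectl run test-pod --image=nginx --dry-run=client -o yaml > pod.yaml

  # Quick inspection
  kubectl get pods -A -o wide
  kubectl describe node <node-name>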

The best part is that you can access this during the exam for reference, and you can copy the YAML examples from it – https://kubernetes.io/docs/home/

Results will be emailed to you within 36 hours from the time the exam is completed. Clearing it on the first attempt is a great achievement; if not, you get a free retake, another chance to pass it.

If you have a friend or colleague who has passed this exam, do have a conversation with them and discuss time management and some insider tips 🙂

Whether you are working on cloud native, aka Kubernetes (K8s), or not, I suggest you MUST do this; it is the future and good to know and to keep yourself updated 🙂

Thank you, and I hope this helps. Do reach out to me if you need any help or want to discuss further. Cheers 🙂

#3 – Virtual Webinar – vRealize Suite on VMware Cloud Foundation (VCF)

Once again, thank you everyone for joining #3 – Virtual Webinar – vRealize Suite on VMware Cloud Foundation (VCF) Architecture and Deployment Tech Deep Dive.

Here is the recording of the session:

This has also been shared at Virtual Cloud Solutions by Roshan Jha.

Hope this is informative and useful for many of you. Stay tuned for the #4 Virtual Webinar topic and date. Keep sharing 🙂

VMware Tanzu Kubernetes Grid (TKG) Bootstrap – Part 3

In the second part, VMware Tanzu Kubernetes Grid (TKG) – Part 2, I covered the basics of the VMware Tanzu portfolio, the supported platforms for Tanzu, whether VMware cloud (on-premises or VCPP cloud) or public cloud (AWS or Azure), and the Kubernetes versions supported with the latest VMware Tanzu Kubernetes Grid 1.2 release.

Before deep diving into the Tanzu Kubernetes Grid (TKG) architecture, let’s discuss the bootstrap environment.

The bootstrap environment is typically the VM on which you run the Tanzu Kubernetes Grid CLI and from which you plan to deploy the management or workload cluster(s). When you initiate a Tanzu Kubernetes Grid instance, it is bootstrapped on the local VM first and then transferred to the cloud infrastructure of your choice, whether VMware cloud (on-premises or VCPP cloud) or public cloud (AWS or Azure). After bootstrapping the management cluster, this VM can be used to manage the Tanzu Kubernetes Grid instance.

The TKG CLI is used to initialize the management cluster, as well as to create, scale, upgrade, and delete Tanzu Kubernetes clusters. Basically, an administrator or developer needs the TKG CLI to administer and manage the Kubernetes infrastructure and applications.

When you initialize the TKG cluster, a cluster plan runs within the bootstrap VM to provide a set of configurable values for the deployment, for example, the number of control plane machines, the number of worker machines, the number of vCPUs, the amount of memory, and other parameters you want for your TKG cluster.
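
As an illustration, with the TKG 1.2 CLI the plan and several of those values can be supplied as flags (or via the cluster configuration file). The names below are placeholders and the exact flag names can differ between TKG releases, so treat this as a sketch rather than a reference:

  # Initialize a management cluster on vSphere using the dev plan (placeholder values)
  tkg init --infrastructure vsphere --plan dev

  # Create a workload cluster with a custom number of control plane and worker nodes
  tkg create cluster tkg-workload-01 --plan prod \
    --controlplane-machine-count 3 --worker-machine-count 5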

TKG Management Cluster

The management cluster is the first cluster that you deploy when you create a Tanzu Kubernetes Grid instance. This is the Kubernetes cluster that performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance. This cluster is where Cluster API runs.

TKG Workload Cluster

Once you have deployed the management cluster, you can initiate the deployment of Tanzu Kubernetes cluster(s) from the management cluster by using the Tanzu Kubernetes Grid CLI. These clusters are also called workload clusters. These are the clusters where pods/containers run and where your applications are hosted.
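
Once a workload cluster is up, you typically pull its kubeconfig from the management cluster and then work with it using plain kubectl. A minimal sketch with placeholder cluster and context names (the commands are from the TKG 1.2 CLI and may differ in later releases):

  # Retrieve the kubeconfig for a workload cluster and switch to it
  tkg get credentials tkg-workload-01
  kubectl config use-context tkg-workload-01-admin@tkg-workload-01

  # Your application workloads then run here
  kubectl get nodes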

A single bootstrap environment can be used to bootstrap as many instances of Tanzu Kubernetes Grid as you want for different environments, e.g. test, dev, and production, running on different IaaS providers – vSphere, AWS, or Azure.

That’s all about the bootstrap environment. We will discuss the architecture further in VMware Tanzu Kubernetes Grid (TKG) Architecture – Part 4. Stay tuned for the next part.

Hope this will be informative. Happy learning and happy sharing 🙂 

VMware Tanzu Kubernetes Grid (TKG) – PART2

In the first part, Getting Started with VMware Tanzu Portfolio – Part 1, I tried to cover the basics of the VMware Tanzu portfolio, the basics of cloud native, and the products VMware offers to customers under the VMware Tanzu portfolio.

Let me recap before moving on to the TKG architecture. VMware provides the following products for complete Kubernetes lifecycle management (LCM):

  • VMware Tanzu Kubernetes Grid, informally known as TKG
  • VMware Tanzu Kubernetes Grid Integrated Edition, informally known as TKGI (formerly known as Enterprise PKS) – rebranded after VMware’s acquisition of Pivotal.
  • VMware Tanzu Kubernetes Grid Service, informally known as TKGS. This is the native Kubernetes in vSphere 7 (vSphere with Tanzu).
  • VMware Tanzu Mission Control, informally known as TMC.

This is the complete Tanzu portfolio, which provides a broad range of options and covers a variety of use cases, giving customers a single platform to run traditional applications and cloud native or modern applications.

Tanzu Supported Platform

With the latest release, VMware Tanzu Kubernetes Grid 1.2, customers can run it on on-premises software-defined data centers (SDDC) and public cloud environments, including vSphere, Microsoft Azure, and Amazon EC2.

When we say vSphere, it can run within your on-premises (private cloud) data center or on a vSphere-based VMware certified cloud run and managed by VMware partners. To know more about VMware certified clouds or VCPP partners, please refer to the VMware service provider portal – https://www.vmware.com/partners/service-provider.html

Up to VMware Tanzu Kubernetes Grid 1.1, it was only supported on vSphere (either private cloud or VMware certified service provider cloud) and AWS. With the release of VMware Tanzu Kubernetes Grid 1.2, deployment is now also supported on the Microsoft Azure public cloud.

With the VMware Tanzu Kubernetes Grid 1.2 release, customers can choose to run these Kubernetes versions:

  • 1.19.1
  • 1.18.8
  • 1.17.11

For more details on what’s new in VMware Tanzu Kubernetes Grid 1.2, please refer to the release notes – https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.2/rn/VMware-Tanzu-Kubernetes-Grid-12-Release-Notes.html

We will discuss more on the architecture in VMware Tanzu Kubernetes Grid (TKG) Bootstrap – Part 3. Stay tuned for the next part.

Hope this will be informative. Please feel free to share if you wish to 🙂 

Getting Started with VMware Tanzu Portfolio – Part 1

For the last few months I have been learning about cloud native applications and how the VMware Tanzu portfolio is helping thousands of customers around the world build and run modern applications on their existing private clouds running on the VMware software-defined data center (SDDC).

Customers are looking for a one-stop shop to run traditional enterprise apps and cloud native apps on one platform. They want to limit CapEx and OpEx by having a single infrastructure that can run both modern and traditional applications, and they want to focus more on application modernization and business development, which helps them generate more revenue.

This is where the VMware Tanzu portfolio helps, providing one infrastructure where customers can run traditional enterprise apps and cloud native apps side by side. It gives freedom and visibility to software developers and IT operations, with the goal of delivering high-quality software that solves business challenges.

How do you build and run cloud native applications?

Whenever we talk about cloud native applications, a few important terms come to mind; let’s briefly discuss them.

Microservices – Microservices are an architectural approach to developing an application. The microservices approach is the opposite of traditional monolithic software, which consists of tightly integrated modules shipped as a single unit. Microservices have become popular with companies that need greater agility and scalability for their applications.

Microservices characteristics and operations are:

  • A collection of small services where each service implements business capabilities
  • Runs in its own process and communicates via an HTTP API
  • Can be deployed, upgraded, scaled, and restarted independent of other services in the application

Containers – A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers offer both efficiency and speed compared with standard virtual machines (VMs). Using operating-system-level virtualization, a single OS instance is dynamically divided among one or more isolated containers, each with a unique writable file system and resource quota. The low overhead of creating and destroying containers, combined with the high packing density in a single VM, makes containers an ideal compute vehicle for deploying individual microservices.

The most popular container platform is Docker. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.
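
As a quick, generic illustration of that packaging idea (the image name, registry, and port below are placeholders):

  # Build an image from a Dockerfile in the current directory and run it locally
  docker build -t demo-app:1.0 .
  docker run -d -p 8080:8080 --name demo-app demo-app:1.0

  # Tag and push the same image to a registry so it runs unchanged on any other host
  docker tag demo-app:1.0 registry.example.com/demo/demo-app:1.0
  docker push registry.example.com/demo/demo-app:1.0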

VMware Tanzu Portfolio

The goal of the VMware Tanzu portfolio is to provide a modern application platform that helps customers transform the business, not just IT. VMware Tanzu can run on vSphere with Tanzu, vSphere, public cloud, and edge environments. Edge refers to branch offices or remote locations outside of the data center.

The VMware Tanzu portfolio provides complete end-to-end solutions for customers to RUN and MANAGE their cloud native applications.

Under Tanzu RUN, VMware provides the following products for complete Kubernetes lifecycle management (LCM):

  • VMware Tanzu Kubernetes Grid
  • VMware Tanzu Kubernetes Grid Integrated Edition (Formerly known as Enterprise PKS)
  • VMware vSphere with Tanzu

Under Tanzu MANAGE, VMware provides VMware Tanzu Mission Control, a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications running across multiple clouds. In a nutshell, it provides unified management for all your Kubernetes infrastructure, whether running on-premises, in a public cloud, or across multiple public clouds.

What is Kubernetes and why do we need Kubernetes?

When running containers at scale in production – thousands of containers across your enterprise – things get complex and out of reach for developers or the DevOps team to manage efficiently. In such environments you need a unified, centralized way to automate the deployment and management of all those containers. This is where a container orchestration engine is needed.

Kubernetes is the industry standard for container management and provides the orchestration engine for containers. Kubernetes streamlines container orchestration to avoid the complexities of interdependent system architectures.

VMware Tanzu Kubernetes Grid is a CNCF-certified, enterprise-ready Kubernetes runtime that streamlines and simplifies installation and Day 2 operations of Kubernetes across the enterprise. It is tightly integrated with vSphere and can be extended to run consistently across your public cloud and edge environments.

VMware Tanzu Kubernetes Grid is a multi-cloud Kubernetes distribution that you can run on VMware vSphere and Amazon Web Services. TKG is tested, signed, and supported by VMware, and it includes signed and supported versions of open-source applications to provide the networking, authentication, ingress control, and logging services that a production Kubernetes environment requires.

For more details, please visit the VMware Tanzu documentation site – https://tanzu.vmware.com/ or https://docs.pivotal.io/

I will leave it here for this post. In VMware Tanzu Kubernetes Grid (TKG) – Part 2 we will discuss more on the TKG architecture.

Stay tuned, keep learning, and keep sharing 🙂

#2 – Virtual Webinar – Multi-tenant Tanzu Run with VMware Cloud Director (VCD)

Thank you everyone, thank you so much for joining the 2nd webinar last week, #2 – Virtual Webinar – Multi-tenant Tanzu Run with VMware Cloud Director (VCD).

Here is the video recording of the session –

Thank you, Avnish Tripathi, Staff Cloud Solutions Architect at VMware, for partnering and sharing this session with me. Looking forward to more partnership in the coming months.

Please feel free to share and subscribe to the YouTube channel (Virtual Cloud Solutions by Roshan Jha). Thank you, and stay tuned for the #3 Virtual Webinar topic and date in mid-November 2020.

Getting Started with VMware Integrated OpenStack (VIO) – Part 2

VMware Integrated OpenStack (VIO) is an OpenStack distribution that is built and tested by VMware. VIO is compliant with the OpenStack Foundation guidelines for an OpenStack distribution and is API-compatible for all OpenStack services running on enterprise-level virtual infrastructure. VMware ensures platform stability through rigorous testing and by ensuring interoperability. VIO leverages vSphere, NSX, and storage functionality as the core of its infrastructure. VMware places priority on packaging the OpenStack core projects in the most stable manner through relentless testing (functional and interoperability).

VMware Integrated OpenStack provides the following key features:
• Fastest deployment with simple installation using an OVA file
• Simplified operations through API and web interface
• Distributed Resource Scheduler (DRS) and Storage DRS for workload rebalancing and datastore load balancing
• vSphere High Availability (HA) to protect and automatically restart workloads
• In-house expertise and skillset with existing vSphere technology
• Runs on the proven VMware software-defined data center
• Production-ready container management that is natively integrated by using VMware capabilities
• Advanced networking functionality through NSX
• Integration with vRealize Operations Manager and vRealize Log Insight for greater performance and capacity management, alerting, and troubleshooting
• Trusted single vendor for infrastructure and OpenStack
• Compliant with the OpenStack Foundation’s 2019.11 interoperability guidelines

OpenStack Model

The OpenStack model comprises core projects and supplemental projects. In addition to the core OpenStack projects, customers can choose supplemental projects for additional services and functionality based on their requirements.

VMware Integrated OpenStack Components

VMware Integrated OpenStack (VIO) is made up of two main building blocks: the VIO Manager and the OpenStack components. VIO is packaged as an OVA file that contains the VIO Manager server and an Ubuntu Linux virtual machine used as the template for the different OpenStack components.

VMware Integrated OpenStack is designed to run over vSphere and NSX-T Data Center, leveraging existing virtualization functionality to provide security, stability, performance, and reliability.

Plug-in drivers are available in Nova to interact with vCenter Server and in Neutron to interact with NSX-T Data Center (or the vSphere Distributed Switch). Glance and Cinder interact with storage through the vCenter Server system and the OpenStack plug-in drivers.

VMware Integrated OpenStack and the VMware SDDC Integration 

VMware Integrated OpenStack (VIO) provides full-stack integration with the VMware Software-Defined Data Center (SDDC), which gives customers a one-stop-shop, enterprise-grade OpenStack solution.

Stay tuned for VMware Integrated OpenStack (VIO) – Part 3, where we will discuss more on VMware Integrated OpenStack (VIO) deployment!

Monthly Webinar Series – #1 – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive

Thank you everyone, thank you so much for joining the monthly webinar series, #1 – Virtual TechTalk – VCF Multi Availability Zone (vSAN Stretched) Design and Deploy Deep Dive.

Here is the video recording of the session –

Please feel free to share and subscribe to the YouTube channel (Virtual Cloud Solutions by Roshan Jha). Thanks!

Syslog Configuration for NSX-T Components using API

In this post, I’ll quickly walk through how to configure the NSX-T components to forward log events to vRealize Log Insight using the API.

Once you have VMware vRealize Log Insight (vRLI) designed and deployed, you can use API calls to configure your NSX-T components to forward logs to the log management servers. In this case I am going to push the vRLI VIP FQDN through API calls to the NSX-T Managers and NSX-T Edges.

The NSX-T Manager nodes in this example are:

  • nsx01a
  • nsx01b
  • nsx01c

1. Open Postman and configure Authorization –> select Basic Auth under TYPE and provide the NSX-T Manager username and password to allow Postman to talk to the NSX-T Managers.

2. Next, select Headers and set the KEY to Content-Type and the VALUE to application/json.

3. Next, select Body –> raw and provide the syslog server, protocol, port, and log level you want to send from the NSX-T Managers to Log Insight (an example request body is shown after these steps).

4. Next, select POST –> https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters and click Send.

In the lower Body section, the response confirms that the syslog settings have been successfully pushed to the NSX-T Manager.

5. Repeat this for the other NSX-T Manager nodes, nsx01b and nsx01c.

POST – https://xx-m01-nsx01b.xxxxx.com/api/v1/node/services/syslog/exporters

POST – https://xx-m01-nsx01c.xxxxx.com/api/v1/node/services/syslog/exporters

6. Now it is time to verify. Clear the text from the Body section and send a GET request to retrieve the configuration data from the NSX-T Managers.

GET – https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters

In the lower Body section, the response shows the configured syslog settings retrieved from the NSX-T Manager.
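
If you prefer the command line over Postman, the same calls can be made with curl. This is an illustrative sketch only – the hostname, credentials, and vRLI FQDN are placeholders, and the body fields shown (exporter_name, server, port, protocol, level) should be confirmed against the NSX-T API reference for your version:

  # Push a syslog exporter to an NSX-T Manager node
  curl -k -u 'admin:<password>' \
    -H 'Content-Type: application/json' \
    -X POST https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters \
    -d '{"exporter_name": "vrli", "server": "vrli.xxxxx.com", "port": 514, "protocol": "TCP", "level": "INFO"}'

  # Verify the configured exporters
  curl -k -u 'admin:<password>' \
    https://xx-m01-nsx01a.xxxxx.com/api/v1/node/services/syslog/exporters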

Configure the NSX-T Edges to Forward Log Events to vRealize Log Insight 

Now we will configure the NSX-T Edge nodes to send audit logs and system events to vRealize Log Insight.

To configure the NSX-T Edge nodes, first retrieve the ID of each edge transport node by using the NSX-T Manager user interface (an API alternative for this lookup is shown after the list of IDs below). Then use the Postman application to configure log forwarding for all edge transport nodes by sending a POST request to each NSX-T Edge request URL.

1. Log in to NSX-T Manager to retrieve the ID of each edge node.

  • nsxedge-01 — 16420ffa-d159-41a2-9f02-b4ac30d32636
  • nsxedge-02 — 39fe9748-c6ae-4a32-9023-ad610ea87249
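
As an alternative to looking the IDs up in the UI, the transport node list can also be retrieved over the API. The hostname and credentials below are placeholders:

  # List transport nodes and note the ID of each edge node
  curl -k -u 'admin:<password>' \
    https://xx-m01-nsx01.xxxxx.com/api/v1/transport-nodes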

2. Here is the syntax for an edge node – POST – https://xx-m01-nsx01.xxxxx.com/api/v1/transport-nodes/16420ffa-d159-41a2-9f02-b4ac30d32636/node/services/syslog/exporters – and click Send.

3. Now it is time to verify. Clear the text from the Body section and send a GET request to the same URL to retrieve the configuration data from the NSX-T Edge node.

Repeat this for the rest of the NSX-T Edge nodes.

That’s all.  Hope you enjoyed reading this post. Feel free to share 🙂

VCF 4.X – NSX-T Manager Sizing for VI Workload Domain (WLD) – Default Size is LARGE

I got an interesting question today related to NSX-T Manager sizing for the VI Workload Domain (WLD). While bringing up the management domain, there is an option in the bring-up sheet to choose the size of the NSX-T Managers.

But when we deploy a VI Workload Domain (WLD), there is no option to choose the NSX-T Manager size (it only asks for the NSX-T Manager names and IP details), and by default three large-size NSX-T Managers will be deployed.

If you need to deploy medium-size NSX-T Managers for a VI Workload Domain (WLD), here are the steps to perform on the SDDC Manager before deploying the VI Workload Domain (WLD):

If you have already deployed the VI Workload Domain (WLD) and want to change the NSX-T Manager size after deployment, you can follow the VMware NSX docs:

https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/administration/GUID-B1B0CB39-7C51-410D-A964-C03D99E39C19.html

Hope this helps, and keep sharing the knowledge!

Why VMware Integrated OpenStack (VIO) – Part 1

Time to move out of the comfort zone and explore and deep dive into OpenStack, especially VMware Integrated OpenStack (VIO), vSphere with Kubernetes, VMware Tanzu Kubernetes Grid (TKG), and VMware Tanzu Kubernetes Grid Integrated (TKGI) (formerly known as VMware Enterprise PKS).

Let’s start with VMware Integrated OpenStack (VIO).

VMware Integrated OpenStack (VIO) is a VMware-supported, enterprise-grade OpenStack distribution that makes it easy to run an OpenStack cloud on top of VMware virtualization technologies. With VIO, customers can rapidly build production-grade private and public OpenStack clouds on top of VMware technologies, leveraging their existing VMware investment and expertise.

VMware Integrated OpenStack is ideal for many different use cases, including building an IaaS platform, providing standard OpenStack API access to developers, leveraging edge computing, and deploying NFV services on OpenStack.

VMware Integrated OpenStack (VIO) can be deployed and run on your existing vSphere, NSX-T, and vSAN environment, simplifying operations and offering better performance and stability.

VMware Integrated OpenStack (VIO) Architecture

VMware Integrated OpenStack (VIO) connects vSphere resources to the OpenStack Compute, Networking, Block Storage, Image Service, Identity Service, and Orchestration components.

VMware Integrated OpenStack is designed and implemented with separate management and compute clusters. The management cluster contains the OpenStack components, and the compute cluster runs tenant or application workloads.

The VMware Integrated OpenStack (VIO) core components are:

Nova (compute) – Compute clusters in vSphere are used as Nova compute nodes. Nova provides a way to provision compute instances (aka virtual servers) in these clusters.

Neutron (networking) – Neutron allows you to create and attach network interface devices managed by OpenStack. Neutron provides networking functions by communicating with the NSX Manager (for NSX-T Data Center deployments) or with vCenter Server (for VDS-only deployments).

Cinder (block storage) – Cinder is designed to create and manage a service that provides persistent block storage to applications. Cinder executes block volume operations through the VMDK driver, causing the desired volumes to be created in vSphere.

Glance (image service) – Glance enables users to discover, register, and retrieve virtual machine images through the Image service in a variety of locations, from simple file systems to object-storage systems like OpenStack Object Storage. Glance images are stored and cached in a dedicated image service datastore when the virtual machines that use them are booted.

Keystone (identity management) – Authentication and authorization in OpenStack are managed by Keystone.

Heat (orchestration) – Heat provides the orchestration service to orchestrate composite cloud applications through OpenStack API calls.

Ceilometer (telemetry) – Telemetry collects data on the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
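
To make the roles of these services concrete, here is a minimal sketch of the standard OpenStack CLI flow a tenant might run against a VIO cloud. The credentials file, image, flavor, network, and resource names are placeholders:

  # Authenticate (credentials usually come from an openrc file downloaded from Horizon)
  source demo-openrc.sh

  # Glance: list the available images
  openstack image list

  # Nova: boot an instance from an image on a tenant network
  openstack server create --image ubuntu-20.04 --flavor m1.small \
    --network tenant-net-01 demo-vm01

  # Cinder: create a 20 GB volume and attach it to the instance
  openstack volume create --size 20 demo-vol01
  openstack server add volume demo-vm01 demo-vol01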

VMware also simplifies OpenStack operations with vRealize Operations Manager (vROps) integration for performance monitoring, capacity planning, and troubleshooting, and with vRealize Log Insight (vRLI) for diagnostics across the OpenStack service logs.

Stay tuned for VMware Integrated OpenStack (VIO) – Part 2!

VMware Cloud Foundation (VCF) 4.1 – What’s new?

Last week was a big release week from a VMware perspective: VMware released vSphere 7 Update 1, vSAN 7 Update 1, and VMware Cloud Foundation (VCF) 4.1. There are some nice new enhancements in VCF 4.1. In this post, I’ll highlight the big features that customers and architects were looking for in this release.

Rename Objects
With VMware Cloud Foundation 4.1, you can now rename domains, clusters, and network pools. Domain and network pool objects can be renamed from the SDDC Manager UI, and cluster objects can be renamed from vCenter Server. Once you do, go back to the SDDC Manager and refresh the UI, and the new cluster name will be retrieved by the SDDC Manager.

SDDC Manager Backup Enhancements
With this release of VCF 4.1, backups can be scheduled on a recurring basis. Customers can also enable backups on state change, so an SDDC Manager backup will occur 10 minutes after the successful completion of an event, such as the creation of a workload domain.

Support for vVols as Principal Storage for Workload Domains
With Cloud Foundation 4.1, vVols can now be used as principal storage for workload domains and as secondary storage for both the management domain and workload domains.

If you want to read about vVols in detail, please refer to the blog written by Cormac Hogan (Director and Chief Technologist in the Office of the CTO in the Cloud Platform Business Unit (CPBU) at VMware): https://cormachogan.com/2015/02/17/vsphere-6-0-storage-features-part-5-virtual-volumes/

Support for Remote Clusters (Extends VCF to the Remote/Edge)
We continue to see growing demand for remote or edge sites, where customers want a small infrastructure footprint at remote or edge locations but still want automated deployment, lifecycle management, and unified management.

With the release of VCF 4.1, support for remote clusters covers a minimum of 3 nodes and a maximum of 4 nodes in a vSAN ReadyNode configuration. Remote clusters can be implemented in two different designs. The first is when each remote site is managed as a separate workload domain; in this design, each remote site has a dedicated vCenter Server instance. The second is when each remote site is managed as a cluster within a single workload domain; in this design, all remote sites share a single vCenter Server instance. Day 2 operations (such as lifecycle management and adding and removing clusters) can be performed centrally from the data center to the remote sites.

Improved Lifecycle Management (VCF Upgrade Process)
In previous editions of VCF, the upgrade process was sequential in nature. For example, if you started at Cloud Foundation version 4.0 and wanted to go to Cloud Foundation version 4.1, you had to go through a process where you first upgraded to any versions that existed in between before eventually upgrading to the desired version. This resulted in the need to schedule multiple maintenance windows and took more time to get to the desired state.

Now, the release of VCF 4.1 adds the ability to perform skip-level upgrades for the SDDC Manager. With this feature, you can schedule a single maintenance window and update to the desired state in a single action. This can result in a reduction in the time needed to perform the upgrades.

vRealize Suite for VCF
With Cloud Foundation 4.1, VCF now deploys a ‘VCF-aware’ vRSLCM appliance. The first enhancement is that there is no need to manually download and deploy vRSLCM. Once the management domain bring-up is done and the SDDC Manager is up and running, you can initiate the installation of vRSLCM from the SDDC Manager.

With VCF 4.1 there is also a bidirectional vRSLCM and SDDC Manager relationship, which provides a unified product experience. Users can log in to vRSLCM to perform operations, and the SDDC Manager can now discover if vRSLCM was used to deploy vRealize Suite products such as vRealize Automation (vRA), vRealize Operations Manager (vROps), and vRealize Log Insight (vRLI). This eases deployment for customers and reduces potential interoperability issues between vRSLCM and the SDDC Manager.

Hybrid Cloud Extension (HCX) Integration 

With the release of VCF 4.1, HCX R143 now has native support for Cloud Foundation 4.1 with Converged Virtual Distributed Switches (CVDS). This will be extremely helpful for customers who need to migrate existing workloads to a new Cloud Foundation installation.

Role-Based Access Control for VCF

A New VCF User Role – ‘viewer’

A new view-only role has been added in VCF 4.1. Previous editions of VCF had only two roles, Administrator and Operator; now a third role is available, called ‘Viewer’. As the name suggests, users with this view-only role have no ability to create, delete, or modify objects. Users assigned this limited view-only role may also see a message saying they are unauthorized to perform certain actions.

VCF Local Account

With VCF 4.1, customers can have a local account that can be used during an SSO failure.

What happens when the SSO domain is unavailable for some reason? In this case, the user would not be able to log in. To address this, customers can now create a VCF local account called admin@local. This account allows them to perform certain actions until the SSO domain is functional again.

This VCF local account can be defined in the deployment bring-up worksheet.

Summary

I have tried to cover all the new enhancements in the VCF 4.1 release, but always refer to the official documentation for more complete details: https://docs.vmware.com/en/VMware-Cloud-Foundation/index.html

Provide Solutions for the Business, Do Not Sell Products