Welcome back to this module on the VMware product overview. Let's start with vSphere, the hypervisor layer. vSphere is a distributed software system whose features are enabled by the ESXi hypervisor and the vCenter management server working together. vSphere separates the virtual machine from the hardware by presenting a complete x86 platform to the virtual machine's guest operating system. A vSphere cluster is a group of ESXi nodes that partition and aggregate compute resources in a distributed manner. For example, the distributed virtual switch is a logical switch created from the network adapters and uplinks of the ESXi hosts to maintain a consistent network configuration across them. The VMs deployed in a cluster share resources such as CPU, memory, datastore, and network, but at the same time vSphere has intelligent resource management techniques to reclaim resources and give them to the VMs that are in demand.

There are two primary features of a vSphere cluster: High Availability, or HA, and the Distributed Resource Scheduler, or DRS. vSphere HA provides high availability for virtual machines within the cluster: if a host within the cluster fails, the VMs residing on that host are restarted on another host in the same cluster. vSphere DRS is a distributed resource scheduling mechanism that spreads virtual machine workloads across the vSphere hosts and monitors the available resources. Based on the automation level, you can set VMs to live migrate manually or automatically to other hosts that have lower resource consumption. vMotion refers to the live migration of a running virtual machine from one physical server to another without any downtime; the virtual machine retains its network identity and connections. With Storage vMotion, you can also migrate a virtual machine's disk files from one datastore to another while the virtual machine is running. In the Oracle Cloud VMware Solution, the minimum number of hosts required is three and the maximum is 64 for all your production purposes.

If you are using vSphere 7.0 Update 2 or a newer version, it introduces a feature called vSphere Cluster Services, or vCLS. The vCLS feature is enabled by default and runs on all vSphere clusters. vCLS ensures that if vCenter becomes unavailable, cluster services such as DRS and HA remain available to maintain the resources and the health of the workloads running in those clusters. vCLS uses agent virtual machines to maintain cluster service health. The vCLS agent virtual machines, or vCLS VMs, are created when you provision the SDDC stack. Three vCLS VMs are deployed and are required to run on each vSphere cluster, and vSphere DRS in a DRS-enabled cluster depends on the availability of at least one vCLS VM. Unlike your application VMs, vCLS VMs should be treated as system VMs, which means it is highly recommended not to perform any operations on them unless the operation is explicitly listed as supported.
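To make the DRS idea above a little more concrete, here is a minimal Python sketch of initial placement that simply picks the host with the most free resources that can still fit the VM. It is only an illustration of the concept, not VMware's actual DRS algorithm, and the host and VM figures are made up.

```python
# Simplified illustration of DRS-style initial placement: choose the host
# with the lowest current resource consumption that still fits the VM.
# NOT VMware's real algorithm -- a conceptual sketch with made-up numbers.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_free_mhz: int
    mem_free_mb: int

@dataclass
class VM:
    name: str
    cpu_demand_mhz: int
    mem_demand_mb: int

def place_vm(vm: VM, hosts: list[Host]) -> Host | None:
    # Keep only hosts that can satisfy the VM's CPU and memory demand.
    candidates = [h for h in hosts
                  if h.cpu_free_mhz >= vm.cpu_demand_mhz
                  and h.mem_free_mb >= vm.mem_demand_mb]
    if not candidates:
        return None
    # Prefer the host with the most headroom, i.e. the least consumption.
    return max(candidates, key=lambda h: (h.cpu_free_mhz, h.mem_free_mb))

hosts = [Host("esxi-1", 4000, 16384),
         Host("esxi-2", 12000, 65536),
         Host("esxi-3", 8000, 32768)]
print(place_vm(VM("app-vm", 2000, 8192), hosts).name)  # -> esxi-2
```

The real DRS weighs many more factors, such as affinity rules, the cost of a vMotion, and cluster imbalance over time, but the basic goal is the same: keep workloads on hosts that have resources to spare.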
vSAN is the hyper-converged storage part of the solution. Hyper-converged here means having high-performance, all-flash NVMe drives attached directly to the bare metal compute nodes, and those drives become the primary storage for your VMs. With a software-defined approach, you can pool these direct-attached devices across the vSphere cluster to create a distributed shared datastore for the VMs. A VM is really a set of objects, and vSAN is the object store for those objects and their components.

vSAN uses a construct called disk groups to organize the devices into two tiers: the capacity tier and the cache tier. The capacity tier is used as the persistent storage for the VMs and also serves reads. The cache tier in this all-flash architecture is dedicated to write buffering: the write buffer absorbs the highest rate of write operations directly in the cache tier, while a much smaller stream of data is destaged to the capacity tier. This two-tier design gives great performance to the VMs while ensuring that data is written to the devices in the most efficient way possible.

vSAN implements a concept of fault domains; this is different from an Oracle Cloud Infrastructure fault domain. A vSAN fault domain groups multiple hosts into a logical boundary, and the fault domains make sure there are at least two replica copies of the storage objects distributed across the domains. vSAN storage policies determine the availability of individual VMs: you can configure different policies to set the number of host and device failures that a VM can tolerate. FTT stands for Failures to Tolerate; with FTT equal to one, you can accommodate one node failure within the cluster and the VMs can still remain functional. FTM stands for Failure Tolerance Method, and here we use RAID 1 (mirroring), which means a replica of each object is always maintained. The witness node is a dedicated host used for monitoring the availability of an object. When there are at least two replicas of an object, a failure could leave the application's data object active in both vSAN fault domains at once, which would be disastrous for any application. To avoid this split-brain condition, a witness node is configured. This node is not meant for deploying VMs; it stores only metadata and is used exclusively for the witness components that determine which side has actually failed.

NSX-T is the software-defined networking and security product in OCVS. It is heterogeneous, which means NSX-T can be deployed not just for your vSphere environment but also for your multi-cloud environment; it can extend features to multiple hypervisors, bare metal servers, containers, and cloud-native application frameworks. Some of the common security services are a firewall [INAUDIBLE], appliance load balancing for your workload VMs, distributed and logical routing and switching, NAT for external inbound and outbound access, and VPN tunnels for connecting between environments. One of the top use cases is automation: there are REST APIs with JSON support for scripting operational tasks, and NSX-T is also compatible with Terraform and OpenStack Heat orchestration for provisioning purposes. With all these capabilities and a software-defined approach, NSX-T is very similar to OCI's Virtual Cloud Network.

Let's look into some of the components of NSX-T and some of its logical constructs. NSX-T works by implementing three integrated planes: management, control, and data. These three planes are implemented as a set of processes, modules, and agents residing on three different types of nodes: the manager, the controller, and the transport nodes. The NSX Manager node hosts the API services; it provides a graphical user interface as well as REST APIs for creating, configuring, and monitoring NSX-T Data Center components. The NSX Controller nodes host the central control plane cluster services. The transport nodes are responsible for performing stateless forwarding of packets based on the tables populated by the control plane, and a transport zone is a container that defines the potential reach of transport nodes. Transport nodes are classified into host nodes and edge nodes: host transport nodes are ESXi hosts that participate within the zone, while edge transport nodes run the control plane daemons and the forwarding engines that implement the NSX-T data plane.
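Because the NSX Manager exposes those REST APIs, day-to-day operational tasks can be scripted. Below is a minimal Python sketch that asks the manager for the transport nodes it knows about. The hostname, credentials, and certificate handling are placeholders, and the endpoint follows the NSX-T Manager REST API pattern as I recall it, so treat the details as assumptions to verify against your NSX-T version's API reference.

```python
# Hedged sketch: list the transport nodes registered with NSX Manager.
# Address, credentials, and endpoint are placeholders/assumptions to verify.
import requests

NSX_MANAGER = "https://nsx-manager.example.local"   # placeholder manager address
AUTH = ("admin", "REPLACE_ME")                      # basic auth, for the sketch only

resp = requests.get(
    f"{NSX_MANAGER}/api/v1/transport-nodes",
    auth=AUTH,
    verify=False,   # lab-only: skip TLS verification for a self-signed certificate
    timeout=30,
)
resp.raise_for_status()

for node in resp.json().get("results", []):
    # Each entry describes a host or edge transport node.
    print(node.get("display_name"), node.get("id"))
```

The same style of call can be wrapped in Terraform or other orchestration tooling, which is what makes NSX-T convenient to automate alongside the rest of the stack.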
There are primarily two types of gateways that you configure for your virtual machine communication. The Tier-0 gateway processes the traffic between the logical and the physical network, which we call north-south traffic. The Tier-1 gateway is for east-west traffic, the traffic between VMs within the same infrastructure. To enable access between your VMs and the outside world, you can configure an external or internal BGP connection between a Tier-0 gateway and a router in your physical infrastructure. Remember that when configuring BGP, you must configure a local and a remote autonomous system (AS) number for your Tier-0 gateway. OSPF is an interior gateway protocol that can also be configured on a Tier-0 gateway; it operates within a single autonomous system.

Segments are defined as virtual Layer 2 domains. There are two types of segments in NSX-T: VLAN-backed segments and overlay-backed segments. A VLAN-backed segment is a Layer 2 broadcast domain implemented as a traditional VLAN in the physical infrastructure, which means the traffic between two VMs on two different hosts but attached to the same VLAN-backed segment is carried over a VLAN between the two hosts. In an overlay-backed segment, the traffic between two VMs on two different hosts attached to the same overlay segment has its Layer 2 traffic carried by a tunnel between the hosts. Geneve is the network encapsulation protocol used here: it works by creating Layer 2 logical networks encapsulated in UDP packets, and it provides the overlay capability by creating isolated, multi-tenant broadcast domains across the data center fabric.
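To picture what that encapsulation looks like on the wire, here is a small Python sketch that builds the 8-byte Geneve base header (as defined in RFC 8926) in front of a stand-in Ethernet frame. It is only a conceptual illustration; a real transport node also adds outer Ethernet, IP, and UDP headers and may append Geneve options, which are omitted here.

```python
# Conceptual sketch of Geneve encapsulation: the VM's Layer 2 frame is
# prefixed with the Geneve base header and carried inside a UDP packet
# between hosts. Geneve uses UDP destination port 6081 (IANA-assigned).
import struct

GENEVE_UDP_PORT = 6081
ETH_BRIDGING = 0x6558            # protocol type: encapsulated Ethernet frame

def geneve_base_header(vni: int) -> bytes:
    ver_optlen = 0               # version 0, no options appended in this sketch
    flags = 0                    # O and C bits cleared
    vni_and_reserved = (vni & 0xFFFFFF) << 8   # 24-bit VNI, low byte reserved
    return struct.pack("!BBHI", ver_optlen, flags, ETH_BRIDGING, vni_and_reserved)

inner_frame = b"\xaa" * 64       # stand-in for the VM's original Layer 2 frame
udp_payload = geneve_base_header(vni=5001) + inner_frame

print(f"UDP dst port {GENEVE_UDP_PORT}: 8-byte header + {len(inner_frame)}-byte frame "
      f"= {len(udp_payload)} bytes of payload")
```

The 24-bit VNI in that header is what keeps different overlay segments isolated from each other even though they all share the same physical fabric.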
HCX, or Hybrid Cloud Extension, is an application mobility platform that simplifies the migration and rebalancing of application workloads and helps you achieve business continuity between an on-premises environment and the Oracle Cloud VMware Solution. The HCX Advanced edition can be enabled as part of the OCVS deployment, and it has a wide range of features. Network extension with the hybrid interconnect is the top feature of HCX: it allows Layer 2 networks, like the VLANs in your data center, to be extended into the OCVS environment. Cross-cloud connectivity is another feature; you can do a site pairing and create a secure channel between the environments. WAN optimization is a feature to optimize your network traffic with deduplication, compression, and line conditioning. If you run a legacy vSphere version, HCX can be used to migrate your workloads to a newer vSphere version. One of the key features of HCX is cloud-to-cloud migration. There are different migration types; it could be an online live migration or an offline migration, and we will look into the different migration types in one of the later modules. HCX also supports disaster recovery features.

HCX Enterprise is an upgrade option with additional features. Among them are migration from a non-vSphere-based environment to vSphere, and large-scale bulk migration. You can extend the disaster recovery features with SRM, the Site Recovery Manager product, which helps you orchestrate your DR workflows. Traffic engineering allows you to optimize the resiliency of your network paths and use them more efficiently. Mobility groups are about structuring your migration waves based on the functionality of your application networks, without any service disruption. Finally, mobility-optimized networking ensures that the traffic between environments uses an optimal path while the flow remains symmetric.

To wrap up, we looked into the core VMware vSphere product and some of its features, like HA and DRS. We also looked into vSAN, the hyper-converged storage with all-flash drives, and how it provides a fault-tolerant architecture. NSX-T is the software-defined networking and security product; we looked into its architecture, the different integrated planes, the configuration of transport zones and transport nodes, the Tier-0 and Tier-1 gateways, the BGP and OSPF configurations, and also the Geneve encapsulation. Finally, HCX, the application mobility platform, allows you to extend your Layer 2 networks, with features like mobility-optimized networking, and helps you achieve migrations and disaster recovery. So that's an overview of the VMware products, and let's move on to the next lecture.