[MUSIC] Welcome to this lesson on Oracle Container Engine for Kubernetes, also referred to as OKE. Let's get started. Before we dive deeper into OKE and look at all its details, let's first look at the difference between virtual machines and containers. In the case of virtual machines we have a hypervisor, like ESXi, and virtual machines running on top of the hypervisor. Each virtual machine has its own operating system inside it, and then the libraries, the dependencies, and the application.

Now, in the case of containers, we have a different kind of environment. We have the underlying hardware, which is the same as in the case of virtual machines, and then we have the operating system. On top of that we have what is referred to as a container runtime, like Docker, installed on the operating system. This container runtime, in this case Docker, manages the containers, which run with just the libraries and the dependencies, and of course the application, but there is no operating system inside the containers.

Why would we do that? What are the advantages? Well, if you compare with virtual machines, in the case of VMs there is extra overhead, which causes higher utilization of the underlying resources, as there are multiple virtual machines running here, and multiple operating systems, each with its own kernel and so on. So there is higher utilization here. VMs also consume more disk space, as each VM is heavy, usually gigabytes in size. They also take longer to boot up, because every operating system has to boot up, right? So those are some of the disadvantages with VMs: higher utilization, bulkier images, and longer boot times.

In the case of containers, because we took the operating system out of each of these containers, they boot up faster; there is no operating system boot-up time, so they usually start in a matter of seconds compared to minutes for virtual machines. They are also lightweight, usually megabytes in size, because the whole operating system, which can be heavy, sits outside the containers. So they are much faster to boot up and much more lightweight. That's the advantage with containers: smaller images, faster startup, and so on.

The main reason people use containers, other than these advantages, is that they are portable. You build your container, you have a build file which goes with your image, and then you can deploy it anywhere, and the whole development and operations team can work in sync. That portability really makes containers suitable for building cloud native applications and microservices.

Once you create these containers, they need connectivity between them, they need to talk to each other. They also need to scale up or down based on the load, and there are several other things these containers need to do once you have containerized your applications. So how do you deploy them, manage them, connect them, scale them up and down? This whole process of automatically deploying and managing containers is known as container orchestration, and Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. So what are some of the advantages? Well, using Kubernetes, you can run containerized applications of any scale with no downtime. You can self-heal applications, thereby providing resiliency.
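To make the container runtime part a bit more concrete, here is a minimal sketch using the Docker SDK for Python; it is not from the lesson itself, and the image name nginx:alpine and host port 8080 are illustrative assumptions only.

    # Minimal sketch: starting and inspecting a container through the Docker SDK for Python.
    # Assumes Docker is installed and running locally and the "docker" package
    # (pip install docker) is available; image and port below are illustrative only.
    import docker

    client = docker.from_env()  # talk to the local Docker daemon (the container runtime)

    # Run a container from an image: no guest OS boots here, so this takes seconds, not minutes.
    container = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
    print("started:", container.name)

    # List the containers the runtime is currently managing.
    for c in client.containers.list():
        print(c.name, c.image.tags)

    container.stop()
    container.remove()

The point of the sketch is simply that the runtime ships only the image layers (libraries, dependencies, application), which is why containers stay in the megabyte range and start in seconds.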
You can auto scale containerized applications, ensure optimal utilization, and this whole orchestration simplifies deployment to a large extent. So as you see here, you can use Docker to build and manage the containers, and then Kubernetes helps with the orchestration: containers can talk to each other, you can scale them up and down, and so on. There is a lot of advanced functionality you get with a container orchestration system like Kubernetes.

So what is Container Engine for Kubernetes, or Oracle Container Engine for Kubernetes, also referred to as OKE? Well, it's a fully managed, scalable, and highly available Kubernetes service. It's based on the open source Kubernetes system, and it has a lot of features for developers, like one-click cluster creation, CLI and API support, and support for running on Arm-based and GPU-based instances. It also has a lot of DevOps advantages: there is autoscaling support, there are automatic Kubernetes cluster upgrades, and there are self-healing cluster nodes. Some of the core characteristics of Kubernetes, a few of which are listed here, are supported in this managed environment.

So how does this work? Well, this is a fairly advanced topic, so I'm just going to cover it at a very high level; for more details you should check out our other DevOps and developer courses. The first thing is you start with nodes. A node is a machine on which Kubernetes is installed, also referred to as a worker node. A worker node is where containers are launched by Kubernetes. So you see these worker nodes here, and we group them together to create what is called a node pool. You also see this thing called a pod here. A pod is a group of one or more containers with shared storage and network resources and a specification for how to run the containers inside the pod. Think of it as the smallest unit of compute within a managed Kubernetes environment.

So now we have this cluster of nodes, and the nodes have pods within them. Now how do we manage the cluster? How do we schedule the containers? How do we manage high availability? This is where the control plane comes into the picture. The control plane nodes manage the worker nodes and the pods in the cluster. As you can see here, the control plane is highly available and it's managed by Oracle, and the great thing is you don't pay for it: there is no charge for running the control plane nodes.

Now, in the control plane you see several components: the controller manager, the API server, etcd, and so on. Again, we're not going to cover all of these in this foundational course, but these components make decisions about the cluster, for example how to schedule containers, as well as detecting and responding to cluster events: scale up, scale down, resiliency, node failure, etc. You also see a database called etcd; it's a key-value store used by Kubernetes as the backing store for all cluster data. There are lots of other components, and we have other courses where we go into a lot more detail than this. But this slide gives you a quick overview of what is Oracle managed as far as OKE is concerned, and what is customer managed: the control plane nodes are managed by Oracle, and customers manage their own worker nodes, which is where they actually run their containerized applications.
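As an illustration of what "managing the cluster" looks like from the outside, here is a minimal sketch using the official Kubernetes Python client against any cluster you have a kubeconfig for, such as one generated for an OKE cluster. This is not part of the lesson; the deployment name my-app and the replica count are illustrative assumptions.

    # Minimal sketch, assuming you have a kubeconfig for the cluster (for OKE, the
    # OCI CLI can generate one) and the "kubernetes" Python package installed.
    # The deployment name "my-app" and the replica count are illustrative only.
    from kubernetes import client, config

    config.load_kube_config()  # reads the local kubeconfig

    # The API server (a control plane component) answers questions about the cluster...
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

    # ...and accepts desired state; here we ask for 5 replicas of a deployment and let
    # the control plane schedule the extra pods onto the worker nodes.
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="my-app", namespace="default", body={"spec": {"replicas": 5}}
    )

The design point to notice is that you only declare the desired state; the control plane components (API server, controller manager, scheduler, backed by etcd) do the scheduling, scaling, and self-healing on the worker nodes.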
So this was just a quick overview of Oracle Container Engine for Kubernetes, also referred to as OKE. In some of the other, more advanced courses we go into much more detail. I hope you found this lesson useful. Thanks for your time.