Hi everyone. In the last lesson, we did a quick overview of the AWS services. Now, we want to showcase how we can take those separate services and combine them into large-scale applications. So, what we want to go over is some basic architecture, starting with the very basic starter apps that were typically used maybe 20 years ago, and then the evolution of that architecture into the IoT world.

So, the image you see here is a typical web architecture: a user connects to some website and gets some feedback from it. The original model used to be that you had one web server connected to one database; a user would type in the URL, connect to that one server, and it would do all the computation. With the scale we have now, with a bunch of users trying to access the same website at once, you need to have scale built into your architecture. So, in the image here, when the user sends a request, it goes through a load balancer, which chooses which web server to send that request to. It keeps track of which web servers are overloaded and which ones are not doing any work at all, and sends traffic accordingly. From those web servers, the request goes through another elastic load balancer, which chooses a database based on which ones are overloaded right now. Typically, those databases follow a master-slave model, where the master holds the ground truth, and in case that master database goes down, its secondary databases can take over the traffic.

Now, recently, because of the big-data-centric explosion that happened, always-on servers don't really fit what we want to do anymore. If you keep just one virtual machine online for 24 hours, you're not going to see the same traffic throughout the entire day. So, what if instead, every single time a request is made, we create a resource that does the computation, and after it is finished, we tear it down? That is essentially what AWS Lambda tries to do. It gives you these small, simple functions that do these little different things, and when you connect to the endpoint, it creates the resources, a quick little compute environment, does the computation, and then tears everything down. This actually proved to be pretty useful for the initial stages of the IoT world: any time you had IoT devices that sent data, but not consistently, you could just connect to a Lambda service; it would create the resources and then tear them down.

It is stateless though, right? Exactly. So, there are pros and cons to these different approaches. With a server, as you said, you have a stateful service, where you can keep track of everything. Whereas with Lambda, once an invocation finishes, you can't access any of the previous data; it has to be stored elsewhere, and you have to go fetch it. Also, if you have a continuous stream of data, Lambda might not be the best option, because that tear-down and bring-up of resources takes some time. So, if you make enough requests, that start-up latency becomes a real factor. Whereas if you have a regular EC2 server, or a bunch of EC2 servers behind an elastic load balancer, latency won't be as much of an issue because the resources are already there.

I guess to simplify these Lambda services, what [inaudible] is saying is: imagine you have a little piece of code. A good example is, imagine you want to write a function, Sum, which adds two numbers.
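To make that concrete, here is a minimal sketch of such a Sum function written as an AWS Lambda handler in Python. The handler signature is Lambda's standard Python convention; the event field names ("a", "b") are our own illustrative assumptions, not anything fixed by the lesson.

```python
# A minimal, stateless AWS Lambda handler in Python.
# AWS invokes lambda_handler(event, context) once per request;
# nothing is kept between invocations.

def lambda_handler(event, context):
    # 'event' carries the request payload, e.g. {"a": 2, "b": 3}
    # (field names are assumed for this sketch).
    a = event["a"]
    b = event["b"]
    # Do the computation and return the result; once this returns,
    # the execution environment may be torn down.
    return {"sum": a + b}
```

Notice there is nowhere to stash state between calls: anything you want to remember has to go to an external store, which is exactly the stateless trade-off discussed above.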
Right now, you'd have to figure out which server to run this on, set up the environment, and so on. What Lambda offers is that you don't have to set up anything: imagine you just type this function into a document, you get an identifier for that code, and it runs. Lambda is like that. You don't worry about servers or environments; it supports certain languages, you just type in your little snippet of a function, and you get an Amazon Resource Name (ARN) for it. You call that, passing in the parameters; it runs, gives you an output, and there is no state maintained. Classic example: Sum(a, b). It adds the inputs and returns the output. It doesn't store what was sent last time, so it is stateless. It is used a lot in the Amazon Echo world, where you ask the Echo, "Alexa, what's the capital of something?" That call triggers the Lambda function, which looks up the capital, returns it to you, and that is it. It does not remember anything else. We will build an Amazon Echo app as one of the projects in this course, and you will get to use Lambda functions there. So it is kind of stateless. Yeah. Great.

So, Lambda was kind of the introduction into the IoT world, but since then, our needs have changed and we need a better architecture to handle them. So, Amazon actually introduced its IoT module, I think a few years ago; we'll go more into depth in a later lesson, or later module, rather. But essentially, what it is, is its own message broker, basically a channel where you can have a bunch of devices connect to the cloud, and then behind the cloud, you can have different services connect to that channel. Whenever they see some sort of update, let's say there's a temperature sensor over there sending temperature data to the cloud, you could have one web server be like, "Hey, we have new temperature data, let's go ahead and take that data and start doing some stuff." This is a lot different from the previous model, where every single time we try to connect to the cloud, we have to create something, do an operation, and then tear it down. Here, we have this channel that's constantly listening for these requests. But it's not quite the same as a regular EC2 instance, which is a full operating system; this one is simply a message queue.

So, AWS IoT was still a cloud-centric worldview. However, we do not want to do all of our computation in the cloud, because when you have millions of devices trying to connect to the cloud at the same time to do these small little operations, even the cloud is going to get a little overworked. So, now there's this new concept of edge computing. The idea is that you have these mini clouds at the edge of your system. So, let's say you have a bunch of Bluetooth devices: instead of them directly connecting to some cloud, you'll have these servers, these mini beacon masters, that the Bluetooth devices connect to. That master will do the computation there, if needed, and then either send that data back to its individual Bluetooth devices, or maybe even send it up to the cloud for some heavier computation. The point of this is that you are offloading the computation onto the edge. Instead of the cloud doing all the work, you are letting the edge devices do the work, because they have the resources necessary to do it.
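To give a feel for that message-broker model, here is a minimal publish/subscribe sketch using the generic paho-mqtt client, which speaks the same MQTT protocol the AWS IoT message broker uses. The endpoint, certificate paths, and topic name below are all placeholders and assumptions, not values from the lesson.

```python
# Minimal publish/subscribe sketch against an MQTT broker such as
# AWS IoT, using paho-mqtt (1.x-style callback signatures).
import json
import paho.mqtt.client as mqtt

TOPIC = "sensors/temperature"  # hypothetical topic name

def on_connect(client, userdata, flags, rc):
    # Subscribe once connected; the broker pushes every new
    # message on this topic to us -- no per-request setup/teardown.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # "Hey, we have new temperature data" -- react to each update.
    reading = json.loads(msg.payload)
    print("New temperature:", reading["celsius"])

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

# AWS IoT requires mutual TLS with X.509 certificates on port 8883;
# the file names and endpoint here are placeholders.
client.tls_set(ca_certs="root-ca.pem",
               certfile="device-cert.pem",
               keyfile="device-key.pem")
client.connect("YOUR-ENDPOINT.iot.us-east-1.amazonaws.com", 8883)

# Elsewhere, a sensor would publish readings to the same topic:
#   client.publish(TOPIC, json.dumps({"celsius": 21.5}))
client.loop_forever()
```

The contrast with Lambda is the channel itself: the subscription stays open and listening, rather than resources being created and destroyed per request.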
So, Amazon released a new resource, which I believe is still in beta, or at least still pretty new, called Greengrass, which essentially does this. You have a local server, or local hardware, where you are able to upload your code, or a specific operation that you want to run on the edge, and your devices will connect to that server and, if needed, communicate with the cloud at the same time. There is a small sketch of what such edge code can look like at the end of this section.

So, that's kind of the basic gist of the evolution that the architecture for different applications has gone through. We started off with a very server-client model, with a typical web server and database architecture. Then we moved over to stateless Lambda functions. Now, we are moving over to a message-queue-centric model, where we are just listening for messages and have different services watching that channel, and finally to doing a lot of the computation away from the cloud and onto the edge.
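As a final concrete illustration, here is a sketch of what a function deployed to a Greengrass core might look like, using the Greengrass Core SDK for Python (greengrasssdk). The topic name and the aggregate-every-ten-readings logic are illustrative assumptions, not part of the lesson.

```python
# Sketch of a Lambda-style function running on an AWS Greengrass core,
# using the Greengrass Core SDK for Python.
import json
import greengrasssdk

# Publishes through the local Greengrass core, which decides whether
# a message stays on the edge or is forwarded to the cloud.
client = greengrasssdk.client("iot-data")

readings = []

def function_handler(event, context):
    # Each local sensor message lands here -- the computation
    # happens on the edge, not in the cloud.
    readings.append(event["celsius"])

    # Only send a small aggregate up to the cloud every 10 readings,
    # instead of shipping every raw data point upstream.
    if len(readings) >= 10:
        avg = sum(readings) / len(readings)
        client.publish(
            topic="home/temperature/aggregate",  # hypothetical topic
            payload=json.dumps({"avg_celsius": avg}),
        )
        readings.clear()
```

This is the offloading idea from above in miniature: the edge node absorbs the raw traffic and the cloud only sees the summarized result.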