The data center network is basically an adaptation of this Clos network. Typically, data centers are arranged with servers at the bottom. These are server racks, and a rack may contain 20 to 40 servers, and the servers connect to what are called access switches. So this particular figure is showing a three-tiered design of a data center network. There is also a two-tiered design where there is only a core level and an edge level; we won't talk about that because it is less prevalent. So in the more prevalent version, there are three tiers: the core level, the aggregation level, and the edge level. The edge level is the one that connects the servers to the rest of the network topology. At the edge level, what you have is layer 2 switching, that is, the link layer protocol is being used at this level, and these access switches connect to the aggregation switches. The aggregation switches are the ones that change from layer 2 to layer 3 toward the core level. So the core level is the one that is going out to the Internet as well. And when I say layer 3, what I mean is the network level, and the common network level protocol is IP. So that is the switching that is happening, going from layer 2 to layer 3 through these aggregation switches. So this is an example of a three-tiered design. One of the things you will observe is that there is a lot of bandwidth at the edge, but as you go up the tree, the bandwidth is narrowing. And that's not good, because if you want uniform capacity, this is not giving you the bandwidth scaling that you need to make sure that all servers can communicate with one another with layer 2 semantics. Layer 2 semantics means that you have constant latency between any two servers connecting and talking to each other, in spite of the fact that all of them want to talk simultaneously.
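To make the "bandwidth narrows as you go up the tree" point concrete, here is a minimal sketch of the oversubscription arithmetic. The port counts and link speeds are hypothetical, chosen only to illustrate how a three-tier tree ends up oversubscribed; they are not from the lecture's figure.

```python
# Oversubscription at a switch = bandwidth entering from below / bandwidth
# available on the uplinks. A ratio above 1.0 means the layer narrows.

def oversubscription(down_links: int, down_gbps: float,
                     up_links: int, up_gbps: float) -> float:
    """Ratio of downlink bandwidth to uplink bandwidth at one switch."""
    return (down_links * down_gbps) / (up_links * up_gbps)

# Hypothetical access switch: 40 servers at 1 Gbps below, 2 x 10 Gbps uplinks.
edge = oversubscription(40, 1, 2, 10)   # 2.0 -> 2:1 oversubscribed at the edge

# Hypothetical aggregation switch: 8 x 10 Gbps down, 2 x 10 Gbps up to core.
agg = oversubscription(8, 10, 2, 10)    # 4.0 -> another 4:1 at aggregation

print(f"edge {edge}:1, aggregation {agg}:1, end-to-end {edge * agg}:1")
```

With these example numbers, two servers in different racks can only count on 1/8 of their line rate when everyone talks at once, which is exactly the uniform-capacity problem the fat-tree fixes.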
So what you do is actually have a special form of Clos network called a fat-tree, which is just a different arrangement of switches, giving you the same property as a Clos network but arranged as a fat-tree. In this fat-tree configuration, what you've got is the servers, again, at the bottom, and every one of these switches is a k-port switch; in this particular example, k is equal to 4. With k equal to 4, what you get is k pods: one, two, three, four, a set of four pods in this one. And in each pod, you've got (k/2)² servers. So the total computational capacity of the entire server farm built from k-port switches is k³/4 servers. That's the total capacity that you have for the entire server farm. The way the network fabric is arranged, every one of these access switches connects to two server ports at the bottom and to two aggregation switches at the top. So two connections at the bottom and two connections at the top, and this is because k is equal to 4. In other words, there are k/2 connections to the servers and k/2 connections to the aggregation switches for every one of these access switches. Similarly, if you take the aggregation switch, it has k/2 connections to the core and k/2 connections to the access switches. So that's the aggregation switch layer. The key thing that should pop out is the fact that this corrects for all the deficiencies that I mentioned in the earlier slide. When you have a traditional tree formation, the bandwidth actually decreases as you go up the tree, but here the aggregate bandwidth at each layer is the same. So if you take any particular layer, the aggregate bandwidth is the same, and there is identical bisection bandwidth: at any bisection, you have identical bandwidth.
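The fat-tree arithmetic above can be sketched in a few lines. This simply encodes the counts the lecture states for a k-ary fat-tree built from identical k-port switches: k pods, (k/2)² servers per pod, and k³/4 servers total; the (k/2)² core switch count follows from the standard k-ary fat-tree construction.

```python
# Parameters of a k-ary fat-tree built entirely from k-port switches.
#   - k pods, each with (k/2)^2 servers
#   - each edge (access) switch: k/2 ports down to servers, k/2 up to aggregation
#   - each aggregation switch: k/2 ports down to edge, k/2 up to core
#   - (k/2)^2 core switches, k^3/4 servers in total

def fat_tree(k: int) -> dict:
    assert k % 2 == 0, "k must be even"
    half = k // 2
    return {
        "pods": k,
        "servers_per_pod": half * half,
        "total_servers": k ** 3 // 4,
        "edge_switches_per_pod": half,
        "agg_switches_per_pod": half,
        "core_switches": half * half,
    }

print(fat_tree(4))   # the k = 4 example from the figure: 4 pods, 16 servers
print(fat_tree(48))  # a commodity 48-port switch: 27,648 servers
```

Notice that because every switch splits its ports evenly between its layer below and the layer above, the aggregate bandwidth crossing each layer is identical, which is exactly the full-bisection property the lecture points out.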
Which means that, because of the redundancy, similar to the Clos network, we've got enough bandwidth going up the tree that you can have simultaneous connections between any pair of servers, just like the Clos network, without any blocking. So with a k-ary fat-tree configuration like I've shown here, you can have k³/4 total servers arranged in k pods: 1, 2, 3, and so on up to k pods. Then you've got, at the core level, (k/2)² switches, and you've got all of these intermediate aggregation switches, each of them having k/2 connections to the core level and k/2 connections to the access level. And similarly, the access level has k/2 connections to the servers at the bottom. So with this arrangement, all devices can transmit at line speed if the packet traffic is distributed uniformly over all the available paths. If that is arranged, then you can make sure that that happens. The next thing that I want to show you is just an example of routing. So for instance, let's say that from this pod, both of these servers want to communicate with two servers over here. There is no problem with that, because even though both of them go to the same access switch, the access switch can route one flow like this and the other flow like this. Similarly, the other communication can be routed like this, through these sets of switches and then down to this destination. So you can see that even though they are originating in the same pod and going through the same set of switches, the traffic quickly gets distributed enough that you can have simultaneous communication among all the elements that want to talk to one another in any one cycle.
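The routing example above relies on spreading flows over the equal-cost uplinks. Here is a toy sketch of one common way to do that, hashing a flow's endpoints to pick an uplink, in the style of ECMP. This is an illustrative assumption, not the specific routing scheme from the lecture; the switch names and server names are made up.

```python
# Toy flow-to-uplink spreading: hash the flow's endpoints so that a given
# flow always takes the same uplink (preserving packet order), while
# different flows tend to land on different uplinks.

import hashlib

def pick_uplink(src: str, dst: str, uplinks: list) -> str:
    """Deterministically choose one of the equal-cost uplinks for this flow."""
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return uplinks[digest[0] % len(uplinks)]

# k = 4: an access switch has k/2 = 2 uplinks, to aggregation switches.
uplinks = ["agg0", "agg1"]
flows = [("server0", "server8"), ("server1", "server9")]
for src, dst in flows:
    print(src, "->", dst, "via", pick_uplink(src, dst, uplinks))
```

The key property is determinism per flow plus spreading across flows: two servers behind the same access switch can talk to the same remote pod simultaneously, with their traffic taking different aggregation and core switches.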