Welcome back for our next exercise. In our previous model, we had the structure that we have here: a convolution layer, another convolution layer, a max pool to bring down the size, flattening it out, then that dense connection, and then the final classification with the activation functions and the dropouts that we had specified. Now we want to try building a more complicated model, and it's going to have the following structure: convolution layer, convolution layer, max pool, and then two more convolution layers. So we're adding on an extra two convolution layers, another max pool, and then that flatten, that dense connection, and our final classification. We're also going to use strides of one for each of our convolution layers. So rather than moving that kernel along two to the right and then two down as we did before, we're only going to move it across and down by one each time.

We're then going to see how many parameters our new model has and compare that to our old model, and then we're going to train it for only five epochs. It will be more complicated, so it will take some more time, and then we can look at the loss and accuracy numbers for both the training and validation sets. On your own, you can go ahead and try different structures and runtimes and see how accurate you can get your model to be.

So we're going to run this with the new framework we've specified. We're again going to have 32 different filters. Here our grid is going to be 3 by 3; above, if you recall, we had 5 by 5, so this kernel will also move across a bit quicker. And before, we had set the strides equal to 2 by 2; now they're going to be at their default of 1 by 1. We're also having padding here, which will add on some extra weight, some extra learning that we'll have to do, since the model will have to go through more convolution operations as we have that padding. Then we have a relu activation. We have another convolution layer, this time without padding, another relu activation, some max pooling, and then another convolution layer, this time with 64 different filters. We're going to do that with padding, and then again without padding, again using a 3 by 3 grid. Then we'll flatten, have our dense layer, and then our final dense layer to predict the classes, with a softmax activation.

So we run this to set up our new framework, and when we look down at the number of total parameters that we have to learn, we're up to 1.25 million total parameters to train. If you recall, before we only had around 181 thousand to train. So if we think about the timing this will take as we start to run it here, each epoch is going to take a lot longer. We see that the ETA is going down pretty quickly, but each of our epochs is hovering around the three-minute mark; it's reading three minutes and 20 seconds here. That's a lot of time per epoch compared to what we had before, which ran through each epoch in around 27 seconds. Now I'm going to pause the video here and come back when this is done running; it will take some time to run, even longer than it did before. But it's something that we want to make sure you take into account as you start to build out your deep neural networks: as you use a more complex structure, you'll probably need a stronger machine, or some way of parallelizing across multiple machines as you build these out.
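For reference, here is a minimal sketch of the model just described, using the Keras Sequential API. The input shape (32x32 RGB images with 10 classes, as in CIFAR-10), the 512-unit dense layer, the dropout rates, the optimizer, and the preloaded x_train/y_train/x_test/y_test arrays are all assumptions on my part, chosen so that model.summary() reports roughly the 1.25 million parameters mentioned above.

```python
# A sketch of the architecture described above, not the exact course notebook.
# Assumed: CIFAR-10-sized inputs (32x32x3), 10 classes, one-hot encoded labels.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    # Two 3x3 convolutions at the default 1x1 stride: first padded ('same'),
    # then unpadded, followed by max pooling to shrink the feature maps.
    Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),  # dropout rate is an assumption, carried over from the earlier model
    # Two more convolutions, now with 64 filters: again padded, then unpadded.
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    # Flatten, the dense connection, and the final softmax classification layer.
    Flatten(),
    Dense(512, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])

model.summary()  # with these sizes, total trainable parameters come to ~1.25 million

# Optimizer choice is an assumption; the transcript only specifies five epochs
# and tracking both training and validation loss/accuracy.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```

The parameter count is dominated by the flatten-to-dense connection: after the two max pools, the 64 feature maps flatten to 2,304 values, and connecting those to 512 dense units alone accounts for about 1.18 million of the weights.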
Alright, I'll see you in just a bit. So hopefully you were able to run that on your own. As we see here, it took quite a bit of time to run: a bit under three minutes for each epoch, so for five epochs we're getting close to 15 minutes to run through and fit the model. But what we also see, if we look at the accuracy, and specifically the validation accuracy for that holdout set, is that after the fourth epoch we reached a higher accuracy than we ever got with the other architecture. So this more complex framework was able to fit our actual data set better (a quick sketch of reading those per-epoch numbers follows below). Now we can play around with different frameworks, adding extra convolution layers, moving convolution layers around, changing the stride, and so on. But as we saw here, that can take some time. And because of all this flexibility, there are actually some architectures, some frameworks, that are best practices, or most common practices, used throughout the field, and we'll discuss those in just a bit. Before that, in our next video we'll discuss how we can take something trained on a specific data set, such as what we did here, and use that training to supplement the classification of images from a completely different data set. We'll see what we mean when we discuss the idea of transfer learning in the next lecture. Alright, I'll see you there.
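As a small practical aside: if you kept the History object returned by model.fit (as in the sketch above), you can read those per-epoch training and validation accuracies directly rather than scanning the console output. The key names below follow TensorFlow 2's Keras; older standalone Keras versions used 'acc' and 'val_acc' instead.

```python
# Assumes `history` is the object returned by model.fit() in the sketch above.
for epoch, (acc, val_acc) in enumerate(zip(history.history['accuracy'],
                                           history.history['val_accuracy']), start=1):
    print(f"epoch {epoch}: train accuracy {acc:.4f}, validation accuracy {val_acc:.4f}")
```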