So let's take a look at the toxicity example. First of all, this is the full HTML page containing everything, and here you can see I have my two scripts: one is loading TensorFlow.js and one is loading the toxicity model itself. I then set the threshold to be 0.9, and I load the toxicity model, passing it that threshold. From that, I get back a model. I'm going to use this model to classify some sentences, and here's the first sentence that I'm going to try. It's a common thing to yell at sports games when somebody disapproves of a player, and you can see it's quite toxic and insulting. Those sentences I'm going to pass to model.classify, and from that I'm going to get back some predictions. I'll log out those predictions so that we can take a look at them. But as we discussed in the lecture, I'm also going to iterate through the full list of predictions. There are seven of them, and in the case of the match being true, that means a positive hit on that toxic behavior. Then I'm going to log out the label of the toxic behavior that was found, along with the actual probability that was measured for it. Simple as that, and that's all the code.

Now let's run it; this is running within the browser, and I just have the developer tools turned on. This was a previous run. I'll run it and take a look at the console, and here's my output. We see that toxicity was hit with about a 97 percent probability, and within that, an insult was found with about a 94 percent probability. Now, if I look through all of the labels, we'll see them. The identity attack label, as we can see, had a very low probability of there being an identity attack, because I didn't name anybody; the sentence just said, "you suck." The insult label, as we've seen, came back with a very high probability, since the words we used are clearly insulting: there was about a five or six percent chance that it was not insulting and a 94 percent chance that it was, so on this one we had a true match. We'll see the same thing as we go through all of the others. For example, take threat: is there an implicit threat in saying this? It was flagged as no, there was no implicit threat, because there was a 92.5 percent chance of that, and as a result the match was false. So the code that I wrote just iterates through the seven of them, finds that the insult label matched and reports its probability, and finds that the toxicity label matched and reports its probability.

Now let's try it with something that's not toxic and see what the results look like. For example, I'll change the sentence to say, "You are nice," save it, go back and rerun it, and we'll take a look at the results. Sometimes this happens with the internal web server. When I take a look at my results now, it didn't actually print out anything. That's because there was no toxicity found. If I look at the toxicity prediction itself in the logged results, we can see false there. Because toxicity is false, meaning no toxicity at all was found, every one of these labels turns out to be false. If I look at another one, for example insult, and take a look at that, we'll see that it's also false, and because my code only iterates through and prints out what it found to be true, nothing was printed here. You can try other examples, and we'll have some in the exercises too.
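For reference, here is a minimal sketch of the kind of page described above, reconstructed from the walkthrough rather than copied from the course files, so the exact sentence, script versions, and log wording may differ. It loads TensorFlow.js and the toxicity model from CDN script tags (the unpinned jsdelivr URLs are an assumption), loads the model with a 0.9 threshold, classifies one sentence, and prints the label and toxic probability for every prediction whose match is true.

```html
<html>
<head>
  <!-- Assumed CDN URLs; the course page may pin specific versions -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity"></script>
</head>
<body>
  <script>
    const threshold = 0.9;
    // Load the toxicity model, passing it the threshold
    toxicity.load(threshold).then(model => {
      const sentences = ['you suck'];   // example sentence from the walkthrough
      model.classify(sentences).then(predictions => {
        // Log the raw predictions so we can inspect all seven labels
        console.log(predictions);
        // Iterate through the seven predictions; match is true only when
        // the toxic probability exceeds the threshold
        for (let i = 0; i < predictions.length; i++) {
          if (predictions[i].results[0].match === true) {
            // probabilities[1] is the probability that the label applies
            console.log(predictions[i].label +
              ' was found with probability ' +
              predictions[i].results[0].probabilities[1]);
          }
        }
      });
    });
  </script>
</body>
</html>
```

With the insulting sentence, this prints the toxicity and insult labels with their probabilities; with "You are nice," every match comes back false and nothing is printed, which is the behavior shown in the console above.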