So how do we get started? Well, let's begin with the scripts to include. Here are the ones that load the latest TensorFlow.js and the toxicity model. I often get questions about how to find these models. It's kind of hard to search for them if you don't know what they are. So the rule of thumb I'd recommend is to take a look at a URL like this, take the name of the model at the end, then go back to the GitHub repo we shared earlier and look at the models, and, hey, they match. Here's toxicity. Similarly, if you want to use the universal sentence encoder, you'll know what the URL of its script will look like.

Okay, back to our code. Here's a super simple HTML page containing the scripts (a sketch of this page appears below). Then add another script block where you'll put the rest of the code for this example.

Now, the first thing you're going to need when using toxicity is a threshold. This value is the minimum prediction confidence: a prediction only counts as a match if it comes in over this value. Every prediction has two values. So, for example, on the insult prediction you'll get two values back, one for insult and one for not-insult. If the not-insult value is greater than the threshold, toxicity will report that it's not an insult by setting match to false. Similarly, if the insult value is greater than the threshold, toxicity will report that it is an insult by setting match to true. If neither is greater, toxicity will set match to null. So that's the role of the threshold.

Now let's see how to do a prediction on a sentence. Here's the code (also sketched below), and we'll unpack it line by line. First, we load the model, passing it the threshold value we just specified to initialize it. Then, once it's loaded, we'll have a model. We'll create an array of sentences to classify, and I'll use the most common insult that I hear at sports games, namely "you suck." We'll then call model.classify, passing it the sentences, and we'll get a set of predictions back that we can handle. At this point, if I console.log the predictions, I'll get these results, and we'll see that there are seven different prediction labels. Each of these contains a set of results which, if I expand, you can then see. For example, in this case, on the insult label I have a match of true and probabilities of 0.06 for not-insult and 0.94 for insult, clearly showing that in this case the text was insulting.

So now let's take a look at extracting this information. We saw that insult was the entry at index one in predictions. So, to extract the label, we just use the .label property. Then there's an array of results, each entry of which contains an array of probabilities, so the syntax to access them is as simple as that. It's similar for the match value. Now, of course, this is hard-coded for insults, so let's write some code to iterate across all the labels and report back on the ones where match was true. When I run it, here's the output that I'll see in the console. This is, of course, with just one sentence to be predicted. If I were to classify multiple sentences, then each results array would have more than one element, so I could look at the results for sentence n like this.

So that's it for the toxicity example and reusing the toxicity pre-trained model. Let's check out a screencast of it in action, and then in the next video, you'll see how to build a JavaScript model like this for yourself.
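As a rough sketch of the HTML page described above, assuming the standard jsDelivr CDN paths for TensorFlow.js and the toxicity model (the exact URLs and versions in the video may differ), it could look like this:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Load TensorFlow.js and the pre-trained toxicity model from a CDN
         (paths assumed for illustration; check the model's GitHub page for the current ones). -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity"></script>
  </head>
  <body>
    <script>
      // The rest of the example code goes in this script block.
    </script>
  </body>
</html>
```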
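Here is a minimal sketch of the classification step walked through above, using the toxicity model's load and classify calls; the threshold value of 0.9 is an assumption for illustration, not necessarily the one used in the video:

```javascript
// Minimum prediction confidence: a label only gets match set to true or false
// when one of its two probabilities exceeds this value; otherwise match is null.
const threshold = 0.9; // example value

toxicity.load(threshold).then(model => {
  // The sentences to classify.
  const sentences = ['you suck'];

  model.classify(sentences).then(predictions => {
    // `predictions` is an array of seven objects, one per label
    // (identity_attack, insult, obscene, severe_toxicity,
    //  sexual_explicit, threat, toxicity).
    console.log(predictions);
  });
});
```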
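Continuing inside the classify callback from the previous sketch, reading the results back out might look like this; the console messages are illustrative rather than the exact output shown in the video:

```javascript
// Inside model.classify(sentences).then(predictions => { ... })

// Hard-coded access to the insult entry, which sits at index 1 of `predictions`.
console.log(predictions[1].label);                     // "insult"
console.log(predictions[1].results[0].probabilities);  // e.g. [0.06..., 0.94...]
console.log(predictions[1].results[0].match);          // true

// Iterate across all labels and report the ones where match was true.
predictions.forEach(prediction => {
  prediction.results.forEach((result, n) => {
    // With multiple input sentences, results[n] is the result for sentence n.
    if (result.match === true) {
      console.log(`Sentence ${n} matched label "${prediction.label}"`);
    }
  });
});
```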