You can compile your model as before, and print out the summary with this code. Training is then as simple as passing the padded sequences and your training labels as the training set, specifying the number of epochs, and passing the testing padded sequences and testing labels as your validation set. Here are the results of training: the training set gives us 1.00 accuracy while the validation set sits at 0.8259, so there's a good chance that we're overfitting. We'll look at some strategies to avoid this later, but you should expect results a little bit like this.

Okay. Now we need to talk about and demonstrate the embeddings, so you can visualize them like you did right back at the beginning of this lesson. We'll start by getting the results of the embedding layer, which is layer zero. We can get its weights and print out their shape like this. We can see that this is a 10,000 by 16 array: we have 10,000 words in our corpus, and we're working in 16 dimensions, so our embedding will have that shape.

To be able to plot it, we need a helper function to reverse our word index. As it currently stands, our word index has the key being the word and the value being the token for the word. We'll need to flip this around so we can look through the padded list and decode the tokens back into words, so we've written this helper function.

Now it's time to write the vectors and their metadata out to files. The TensorFlow Projector reads this file type and uses it to plot the vectors in 3D space so we can visualize them. To the vectors file, we simply write out the value of each of the items in the array of embeddings, i.e., the coefficient of each dimension of the vector for this word. To the metadata file, we just write out the words. If you're working in Colab, this code will download the two files. To now render the results, go to the TensorFlow Embedding Projector at projector.tensorflow.org and press the ''Load data'' button on the left.
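The steps just described can be sketched end to end in code. This is a minimal, self-contained sketch, not the lesson's exact notebook: the toy data, vocabulary size of 100, and model architecture here are assumptions for illustration (the lesson's real run uses 10,000 words and 16 dimensions), and variable names like `padded` and `training_labels_final` follow the naming used earlier in the lesson.

```python
import io
import numpy as np
import tensorflow as tf

# Toy stand-ins for the lesson's data (shapes and sizes are assumptions;
# the lesson itself uses vocab_size=10000, embedding_dim=16).
vocab_size, embedding_dim, max_length = 100, 16, 10
padded = np.random.randint(1, vocab_size, size=(200, max_length))
training_labels_final = np.random.randint(0, 2, size=(200,))
testing_padded = np.random.randint(1, vocab_size, size=(50, max_length))
testing_labels_final = np.random.randint(0, 2, size=(50,))
# Word index maps word -> token, as produced by the tokenizer earlier.
word_index = {f"word{i}": i for i in range(1, vocab_size)}

# Compile the model as before and print the summary.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_length,)),
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()

# Train: padded sequences and labels as the training set, the testing
# equivalents as the validation set.
model.fit(padded, training_labels_final, epochs=2,
          validation_data=(testing_padded, testing_labels_final),
          verbose=0)

# Layer zero is the Embedding layer; its weights form a
# (vocab_size, embedding_dim) array -- (10000, 16) in the lesson.
weights = model.layers[0].get_weights()[0]
print(weights.shape)

# Helper: flip the word index so tokens can be decoded back into words.
reverse_word_index = {value: key for (key, value) in word_index.items()}

# Write the vectors and their metadata out to TSV files for the
# TensorFlow Embedding Projector: one embedding row per line in the
# vectors file, the matching word per line in the metadata file.
with io.open('vector.tsv', 'w', encoding='utf-8') as out_v, \
     io.open('meta.tsv', 'w', encoding='utf-8') as out_m:
    for word_num in range(1, vocab_size):
        out_m.write(reverse_word_index[word_num] + "\n")
        out_v.write('\t'.join(str(x) for x in weights[word_num]) + "\n")
```

In Colab, you would then call `google.colab.files.download('vector.tsv')` and `google.colab.files.download('meta.tsv')` to pull the two files down to your computer before loading them into the projector.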
You'll see a dialog asking you to load data from your computer. Use vector.TSV for the first one and meta.TSV for the second. Once they're loaded, you should see something like this. Click the ''Sphereize data'' checkbox at the top left, and you'll see the binary clustering of the data. Experiment by searching for words, or by clicking on the blue dots in the chart that represent words. Above all, have some fun with it. Next up, we'll step through a screencast of what you've just seen, so you can explore it in action. After that, you'll look at how TFDS has built-in tokenizers that save you from having to write a lot of the tokenizing code that we've just used.