Assignment 1: Sentiment with Deep Neural Networks¶

Welcome to the first assignment of Course 3. This is a practice assignment, which means that the grade you receive won't count towards your final grade for the course. However, you can still submit your solutions and receive a grade along with feedback from the grader. Before getting started, take some time to read the following tips:

TIPS FOR SUCCESSFUL GRADING OF YOUR ASSIGNMENT:¶

  • All cells are frozen except for the ones where you need to submit your solutions.

  • You can add new cells to experiment, but these will be omitted by the grader, so don't rely on newly created cells to host your solution code; use the provided places for this.

  • You can add the comment # grade-up-to-here in any graded cell to signal the grader that it must only evaluate up to that point. This is helpful if you want to check whether you are on the right track even if you haven't finished the whole assignment. Remember to delete the comment afterwards!

  • To submit your notebook, save it and then click on the blue submit button at the beginning of the page.

In this assignment, you will explore sentiment analysis using deep neural networks.

Table of Contents¶

  • 1 - Import the Libraries
  • 2 - Importing the Data
    • 2.1 - Load and split the Data
    • 2.2 - Build the Vocabulary
      • Exercise 1 - build_vocabulary
    • 2.3 - Convert a Tweet to a Tensor
      • Exercise 2 - max_len
      • Exercise 3 - padded_sequences
  • 3 - Define the structure of the neural network layers
    • 3.1 - ReLU
      • Exercise 4 - relu
    • 3.2 - Sigmoid
      • Exercise 5 - sigmoid
    • 3.3 - Dense class
      • Exercise 6 - Dense
    • 3.4 - Model
      • Exercise 7 - create_model
  • 4 - Evaluate the model
    • 4.1 Predict on Data
  • 5 - Test With Your Own Input
    • 5.1 Create the Prediction Function
      • Exercise 8 - graded_very_positive_tweet
  • 6 - Word Embeddings

In Course 1, you implemented logistic regression and Naive Bayes for sentiment analysis. Even though the two models performed very well on the dataset of tweets, they failed to capture any meaning beyond the meanings of individual words. For this, you can use neural networks. In this assignment, you will write a program that uses a simple deep neural network to identify sentiment in text. By completing this assignment, you will:

  • Understand how you can design a neural network using tensorflow
  • Build and train a model
  • Use a binary cross-entropy loss function
  • Compute the accuracy of your model
  • Predict using your own input

As you can tell, this model follows a similar structure to the one you previously implemented in the second course of this specialization.

  • Indeed, most of the deep nets you will implement have a similar structure; the only things that change are the model architecture, the inputs, and the outputs. In this assignment, you will first create the neural network layers from scratch using numpy to better understand what is going on. After this, you will use the tensorflow library to build and train the model.
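
For orientation, here is a rough sketch of the kind of tf.keras model the later sections work towards. This is a minimal sketch under assumed layer types and sizes, not the graded solution:

import tensorflow as tf

# Illustrative architecture (sizes are assumptions): map token indices to
# embedding vectors, average them over the sequence, and squash the result
# to a probability with a single sigmoid unit.
model_sketch = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# Binary cross-entropy matches the 0/1 sentiment labels used in this assignment.
model_sketch.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])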

1 - Import the Libraries¶

Run the next cell to import the Python packages you'll need for this assignment.

Note the from utils import ... line. This line imports the functions that were specifically written for this assignment. If you want to look at what these functions are, go to File -> Open... and open the utils.py file to have a look.

In [1]:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

from utils import load_tweets, process_tweet

%matplotlib inline
[nltk_data] Downloading package twitter_samples to
[nltk_data]     /usr/local/share/nltk_data...
[nltk_data]   Package twitter_samples is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data]     /usr/local/share/nltk_data...
[nltk_data]   Unzipping corpora/stopwords.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     /usr/local/share/nltk_data...
[nltk_data]   Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package wordnet to
[nltk_data]     /usr/local/share/nltk_data...
In [2]:
import w1_unittest

2 - Importing the Data¶

2.1 - Load and split the Data¶

  • Import the positive and negative tweets
  • Have a look at some examples of the tweets
  • Split the data into the training and validation sets
  • Create labels for the data
In [3]:
# Load positive and negative tweets
all_positive_tweets, all_negative_tweets = load_tweets()

# View the total number of positive and negative tweets.
print(f"The number of positive tweets: {len(all_positive_tweets)}")
print(f"The number of negative tweets: {len(all_negative_tweets)}")
The number of positive tweets: 5000
The number of negative tweets: 5000

Now you can have a look at some examples of tweets.

In [4]:
# Change the tweet number to any number between 0 and 4999 to see a different pair of tweets.
tweet_number = 4
print('Positive tweet example:')
print(all_positive_tweets[tweet_number])
print('\nNegative tweet example:')
print(all_negative_tweets[tweet_number])
Positive tweet example:
yeaaaah yippppy!!!  my accnt verified rqst has succeed got a blue tick mark on my fb profile :) in 15 days

Negative tweet example:
Dang starting next week I have "work" :(

Here you will process the tweets. This part of the code has been implemented for you. The processing includes:

  • tokenizing the sentence (splitting it into words)
  • removing stock market tickers like $GE
  • removing old style retweet text "RT"
  • removing hyperlinks
  • removing hashtags
  • lowercasing
  • removing stopwords and punctuation
  • stemming

Some of these are general steps you would take when processing any text, while others are very tweet-specific. The details of the process_tweet function are available in the utils.py file.
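
Since the exact implementation lives in utils.py, the sketch below is only illustrative; it assumes nltk's TweetTokenizer, PorterStemmer, and English stopword list (process_tweet_sketch and its regular expressions are assumptions, not the graded code):

import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

def process_tweet_sketch(tweet):
    # Remove stock market tickers like $GE, old-style "RT" text,
    # hyperlinks, and the '#' sign (keeping the hashtag word itself)
    tweet = re.sub(r'\$\w*', '', tweet)
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?://\S+', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    # Tokenize while lowercasing and stripping @handles
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)
    tokens = tokenizer.tokenize(tweet)
    # Drop stopwords and punctuation, then stem what remains
    stemmer = PorterStemmer()
    stop_words = set(stopwords.words('english'))
    return [stemmer.stem(token) for token in tokens
            if token not in stop_words and token not in string.punctuation]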

In [5]:
# Process all the tweets: tokenize the string, remove tickers, handles, punctuation and stopwords, stem the words
all_positive_tweets_processed = [process_tweet(tweet) for tweet in all_positive_tweets]
all_negative_tweets_processed = [process_tweet(tweet) for tweet in all_negative_tweets]

Now you can have a look at some examples of what the tweets look like after being processed.

In [6]:
# Change the tweet number to any number between 0 and 4999 to see a different pair of tweets.
tweet_number = 4
print('Positive processed tweet example:')
print(all_positive_tweets_processed[tweet_number])
print('\nNegative processed tweet example:')
print(all_negative_tweets_processed[tweet_number])
Positive processed tweet example:
['yeaaah', 'yipppy', 'accnt', 'verify', 'rqst', 'succeed', 'get', 'blue', 'tick', 'mark', 'fb', 'profile', ':)', '15', 'day']

Negative processed tweet example:
['dang', 'start', 'next', 'week', 'work', ':(']

Next, you split the tweets into the training and validation datasets. For this example, you can use 80% of the data for training and 20% for validation.

In [7]:
# Split positive set into validation and training
val_pos = all_positive_tweets_processed[4000:]
train_pos = all_positive_tweets_processed[:4000]
# Split negative set into validation and training
val_neg = all_negative_tweets_processed[4000:]
train_neg = all_negative_tweets_processed[:4000]

train_x = train_pos + train_neg 
val_x  = val_pos + val_neg

# Set the labels for the training and validation set (1 for positive, 0 for negative)
train_y = [[1] for _ in train_pos] + [[0] for _ in train_neg]
val_y  = [[1] for _ in val_pos] + [[0] for _ in val_neg]

print(f"There are {len(train_x)} sentences for training.")
print(f"There are {len(train_y)} labels for training.\n")
print(f"There are {len(val_x)} sentences for validation.")
print(f"There are {len(val_y)} labels for validation.")
There are 8000 sentences for training.
There are 8000 labels for training.

There are 2000 sentences for validation.
There are 2000 labels for validation.
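
Note that each label is a nested list such as [1] rather than a bare integer. This is so that, once converted to an array, the labels have shape (num_examples, 1), matching a model with a single output unit. A quick illustrative check (not part of the graded code):

import numpy as np

# Each inner list has length 1, so the array is (8000, 1) rather than (8000,)
train_y_arr = np.array(train_y)
print(train_y_arr.shape)  # (8000, 1)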

2.2 - Build the Vocabulary¶

Now build the vocabulary.

  • Map each word in each tweet to an integer (an "index").
  • Note that you will build the vocabulary based on the training data.
  • To do so, you will assign an index to every word by iterating over your training set.

The vocabulary will also include two special tokens:

  • '': padding
  • '[UNK]': a token representing any word that is not in the vocabulary.

Exercise 1 - build_vocabulary¶

Build the vocabulary from all of the tweets in the training set.

In [11]:
# GRADED FUNCTION: build_vocabulary
def build_vocabulary(corpus):
    '''Function that builds a vocabulary from the given corpus
    Input: 
        - corpus (list): the corpus
    Output:
        - vocab (dict): Dictionary of all the words in the corpus.
                The keys are the words and the values are integers.
    '''

    # The vocabulary includes special tokens like padding token and token for unknown words
    # Keys are words and values are distinct integers (increasing by one from 0)
    vocab = {'': 0, '[UNK]': 1} 
    # Initialize a counter for the vocabulary index
    index = 2
    ### START CODE HERE ###
    
    # For each tweet in the training set
    for tweet in corpus:
        # For each word in the tweet
        for word in tweet:
            # If the word is not in vocabulary yet, add it to vocabulary
            if word not in vocab:
                vocab[word] = index
                index += 1
    
    ### END CODE HERE ###
    
    return vocab


vocab = build_vocabulary(train_x)
num_words = len(vocab)

print(f"Vocabulary contains {num_words} words\n")
print(vocab)
Vocabulary contains 9535 words

{'': 0, '[UNK]': 1, 'followfriday': 2, 'top': 3, 'engage': 4, 'member': 5, 'community': 6, 'week': 7, ':)': 8, 'hey': 9, 'james': 10, 'odd': 11, ':/': 12, 'please': 13, 'call': 14, 'contact': 15, 'centre': 16, '02392441234': 17, 'able': 18, 'assist': 19, 'many': 20, 'thanks': 21, 'listen': 22, 'last': 23, 'night': 24, 'bleed': 25, 'amazing': 26, 'track': 27, 'scotland': 28, 'congrats': 29, 'yeaaah': 30, 'yipppy': 31, 'accnt': 32, 'verify': 33, 'rqst': 34, 'succeed': 35, 'get': 36, 'blue': 37, 'tick': 38, 'mark': 39, 'fb': 40, 'profile': 41, '15': 42, 'day': 43, ...}
(output truncated; the full dictionary contains all 9535 entries)
'wwooo': 3606, 'nite': 3607, 'drawing': 3608, 'laiten': 3609, 'arond': 3610, '1:30': 3611, 'consider': 3612, 'exhaust': 3613, 'mature': 3614, 'journeyps': 3615, 'foam': 3616, "lady's": 3617, 'mob': 3618, 'false': 3619, 'bulletin': 3620, 'spring': 3621, 'fiesta': 3622, 'noise': 3623, 'awuuu': 3624, 'aich': 3625, 'sept': 3626, 'rudramadevi': 3627, 'anushka': 3628, 'gunashekar': 3629, 'harryxhood': 3630, 'upset': 3631, 'ooh': 3632, 'humanist': 3633, 'magazine': 3634, 'username': 3635, 'rape': 3636, 'csrracing': 3637, 'lack': 3638, 'hygiene': 3639, 'tose': 3640, 'clothes': 3641, 'temperature': 3642, 'planet': 3643, 'brave': 3644, 'ge': 3645, '2015kenya': 3646, 'ryan': 3647, 'tidy': 3648, 'hagergang': 3649, 'chanhun': 3650, 'photoshoot': 3651, 'afterall': 3652, 'sadkaay': 3653, 'tharkness': 3654, 'peak': 3655, 'heatwave': 3656, 'lower': 3657, 'standard': 3658, 'x25': 3659, 'exams': 3660, 'recruit': 3661, 'doom': 3662, 'nasty': 3663, 'affiliate': 3664, '>:)': 3665, 'situate': 3666, '64': 3667, '74': 3668, '40': 3669, '00': 3670, 'hall': 3671, 'ted': 3672, 'pixgram': 3673, 'creative': 3674, 'slideshow': 3675, 'tentatively': 3676, 'nibble': 3677, 'ivy': 3678, 'sho': 3679, 'superpower': 3680, 'obsess': 3681, 'oth': 3682, 'third': 3683, 'ngarepfollbackdarinabilahjkt': 3684, '48': 3685, 'sunglasses': 3686, 'jackie': 3687, 'sunnies': 3688, 'style': 3689, 'jlo': 3690, 'jlovers': 3691, 'turkey': 3692, 'goodafternoon': 3693, 'collage': 3694, 'furry': 3695, 'bruce': 3696, 'kunoriforceo': 3697, 'aayegi': 3698, 'timming': 3699, 'wiw': 3700, 'bips': 3701, 'zareen': 3702, 'daisy': 3703, "b'coz": 3704, 'karte': 3705, 'mak': 3706, '∗': 3707, 'lega': 3708, 'branding': 3709, 'spag': 3710, 'boat': 3711, 'outboarding': 3712, 'spell': 3713, 'reboarding': 3714, 'fire': 3715, 'offboarding': 3716, 'sn16': 3717, '9dg': 3718, 'following': 3719, 'bnf': 3720, '50': 3721, 'jason': 3722, 'rob': 3723, 'feb': 3724, 'victoriasecret': 3725, 'finland': 3726, 'helsinki': 3727, 'airport': 3728, 'plane': 3729, 'beyond': 3730, 'onting': 3731, 'tiis': 3732, 'lng': 3733, 'yan': 3734, "u'll": 3735, 'steve': 3736, 'bell': 3737, 'prescott': 3738, 'leadership': 3739, 'cartoon': 3740, 'upside': 3741, 'statement': 3742, 'selamathariraya': 3743, 'lovesummertime': 3744, 'dumont': 3745, 'jax': 3746, 'jones': 3747, 'awesomeee': 3748, 'x24': 3749, 'geoff': 3750, 'packing': 3751, 'stick': 3752, 'amazingly': 3753, 'talanted': 3754, 'vsco': 3755, 'thankies': 3756, 'hash': 3757, 'tag': 3758, 'ifimeetanalien': 3759, 'bff': 3760, 'section': 3761, 'follbaaack': 3762, 'az': 3763, 'cauliflower': 3764, 'attempt': 3765, 'prinsesa': 3766, 'yaaah': 3767, 'law': 3768, 'toy': 3769, 'sonaaa': 3770, 'beautifull': 3771, "josephine's": 3772, 'mirror': 3773, 'cretaperfect': 3774, '4me': 3775, 'cretaperfectsuv': 3776, 'creta': 3777, 'load': 3778, 'telecom': 3779, 'judy': 3780, 'superb': 3781, 'slightly': 3782, 'rakna': 3783, 'ew': 3784, 'whose': 3785, 'fifa': 3786, 'lineup': 3787, 'survive': 3788, 'p90x': 3789, 'p90': 3790, 'dishoom': 3791, 'rajnigandha': 3792, 'minju': 3793, 'rapper': 3794, 'lead': 3795, 'vocal': 3796, 'yujin': 3797, 'visual': 3798, 'maknae': 3799, 'jane': 3800, 'hah': 3801, 'hawk': 3802, 'history': 3803, 'along': 3804, 'talkback': 3805, 'process': 3806, 'feature': 3807, 'mostly': 3808, "cinema's": 3809, 'defend': 3810, 'fashion': 3811, 'atrocity': 3812, 'pandimensional': 3813, 'manifestation': 3814, 'argos': 3815, 'ring': 3816, '640': 3817, 'nad': 3818, 'plezzz': 3819, 'asthma': 3820, 'inhaler': 3821, 'breathe': 3822, 'goodluck': 3823, 'hunger': 
3824, 'mockingjay': 3825, 'thehungergames': 3826, 'adore': 3827, 'x23': 3828, 'reina': 3829, 'felt': 3830, 'blogged': 3831, 'excuse': 3832, 'attender': 3833, 'whn': 3834, 'andre': 3835, 'mamayang': 3836, '11pm': 3837, '1d': 3838, '89.9': 3839, 'powys': 3840, 'shropshire': 3841, 'border': 3842, "school's": 3843, 'san': 3844, 'diego': 3845, 'jump': 3846, 'source': 3847, 'appeasement': 3848, '¦': 3849, 'aj': 3850, 'action': 3851, 'grunt': 3852, 'sc': 3853, 'anti-christ': 3854, 'm8': 3855, 'ju': 3856, 'halfway': 3857, 'ex': 3858, 'postive': 3859, 'opinion': 3860, 'avi': 3861, 'dare': 3862, 'corridor': 3863, '👯': 3864, 'neither': 3865, 'rundown': 3866, 'yah': 3867, 'leviboard': 3868, 'kleper': 3869, ':(': 3870, 'impeccable': 3871, 'setokido': 3872, 'shoulda': 3873, 'hippo': 3874, 'materialistic': 3875, 'showpo': 3876, 'cough': 3877, '@artofsleepingin': 3878, 'x22': 3879, '☺': 3880, 'makesme': 3881, 'santorini': 3882, 'escape': 3883, 'beatport': 3884, '👊🏻': 3885, 'trmdhesitant': 3886, 'manuel': 3887, 'valls': 3888, 'king': 3889, 'seven': 3890, 'kingdom': 3891, 'andals': 3892, 'privacy': 3893, 'wise': 3894, 'natsuki': 3895, 'often': 3896, 'catchy': 3897, 'neil': 3898, 'emirate': 3899, 'brill': 3900, 'urquhart': 3901, 'castle': 3902, 'simple': 3903, 'generally': 3904, 'shatter': 3905, 'contrast': 3906, 'educampakl': 3907, 'rotorua': 3908, 'pehly': 3909, 'phir': 3910, 'somi': 3911, 'burfday': 3912, 'university': 3913, 'santo': 3914, 'tomas': 3915, 'norhing': 3916, 'dialogue': 3917, 'chainsaw': 3918, 'amusement': 3919, 'awe': 3920, 'protect': 3921, 'pop': 3922, '2ish': 3923, 'fahad': 3924, 'bhai': 3925, 'iqrar': 3926, 'waseem': 3927, 'abroad': 3928, 'rotation': 3929, 'moviee': 3930, 'chef': 3931, 'grogol': 3932, 'long-distance': 3933, 'rhys': 3934, 'pwrfl': 3935, 'benefit': 3936, 'b2b': 3937, 'b2c': 3938, "else's": 3939, 'soo': 3940, 'enterprison': 3941, 'schoolsoutforsummer': 3942, 'fellow': 3943, 'juggle': 3944, 'purrthos': 3945, 'cathos': 3946, 'catamis': 3947, 'fourfiveseconds': 3948, 'deaf': 3949, 'drug': 3950, 'alcohol': 3951, 'apexis': 3952, 'crystal': 3953, 'meth': 3954, 'champagne': 3955, 'fc': 3956, 'streamer': 3957, 'juice': 3958, 'correct': 3959, 'portrait': 3960, 'izumi': 3961, 'fugiwara': 3962, 'clonmel': 3963, 'refreshing': 3964, 'vibrant': 3965, 'estimate': 3966, 'server': 3967, 'quiet': 3968, 'yey': 3969, "insha'allah": 3970, 'wil': 3971, 'pleased': 3972, 'x21': 3973, 'trend': 3974, 'akshaymostlovedsuperstarever': 3975, 'indirecting': 3976, 'askurban': 3977, 'lyka': 3978, 'sits': 3979, 'nap': 3980, 'aff': 3981, 'uname': 3982, 'jonginuh': 3983, 'billie': 3984, 'forecast': 3985, '10am': 3986, '5am': 3987, 'soothe': 3988, 'vii': 3989, 'sweetheart': 3990, 'freak': 3991, 'original': 3992, 'zayn': 3993, 'fucker': 3994, 'pet': 3995, 'illustration': 3996, 'wohoo': 3997, 'gleam': 3998, 'painting': 3999, 'deal': 4000, 'prime': 4001, 'minister': 4002, 'sunjam': 4003, 'industry': 4004, 'present': 4005, 'practicing': 4006, 'proactive': 4007, 'environment': 4008, 'unreal': 4009, 'zaine': 4010, 'zac': 4011, 'isaac': 4012, 'os': 4013, 'frank': 4014, 'iero': 4015, 'phase': 4016, 'david': 4017, 'beginner': 4018, 'shin': 4019, 'sunflower': 4020, 'sunny': 4021, 'favourites': 4022, 'tommarow': 4023, 'yall': 4024, 'rank': 4025, 'birthdaymonth': 4026, 'vianey': 4027, 'bffs': 4028, 'july': 4029, 'birthdaygirl': 4030, "town's": 4031, 'andrew': 4032, 'checkout': 4033, 'otwol': 4034, 'awhile': 4035, 'x20': 4036, 'all-time': 4037, 'julia': 4038, 'robert': 4039, 'awwhh': 4040, 'bulldog': 4041, 'unfortunate': 
4042, '02079': 4043, '490': 4044, '132': 4045, 'caring': 4046, 'fightstickfriday': 4047, 'extravagant': 4048, 'tearout': 4049, 'selektion': 4050, 'yoot': 4051, 'cross': 4052, 'deserved': 4053, 'gudday': 4054, 'dave': 4055, 'haileyhelps': 4056, 'eid': 4057, 'mubarak': 4058, 'brotheeerrr': 4059, 'adventure': 4060, 'tokyo': 4061, 'kansai': 4062, 'l': 4063, 'uppe': 4064, 'om': 4065, '60': 4066, 'minuter': 4067, 'detailed': 4068, 'data': 4069, 'jesus': 4070, 'amsterdam': 4071, '3rd': 4072, 'nextweek': 4073, 'sends': 4074, 'booty': 4075, 'bcuz': 4076, 'step': 4077, 'option': 4078, 'stable': 4079, 'sturdy': 4080, 'lukkkee': 4081, 'again.ensoi': 4082, 'tc': 4083, 'madam': 4084, 'siddi': 4085, 'unknown': 4086, 'roomie': 4087, 'gn': 4088, 'gf': 4089, 'consent': 4090, 'mister': 4091, 'supportive': 4092, 'vine': 4093, 'peyton': 4094, 'nagato': 4095, 'yuki-chan': 4096, 'shoushitsu': 4097, 'archdbanterbury': 4098, 'experttradesmen': 4099, 'banter': 4100, 'quiz': 4101, 'tradetalk': 4102, 'floofs': 4103, 'face': 4104, 'muahah': 4105, 'x19': 4106, 'anticipation': 4107, 'jds': 4108, 'laro': 4109, 'tayo': 4110, 'answer': 4111, 'ht': 4112, 'angelica': 4113, 'anghel': 4114, 'aa': 4115, 'kkk': 4116, 'macbook': 4117, 'rehearse': 4118, 'youthcelebrate': 4119, 'mute': 4120, '29th': 4121, 'gohf': 4122, 'invited': 4123, 'vegetarian': 4124, "she'll": 4125, 'gooday': 4126, '101': 4127, '12000': 4128, 'oshieer': 4129, 'realreviews': 4130, 'happycustomers': 4131, 'realoshi': 4132, 'dealsuthaonotebachao': 4133, 'dime': 4134, 'uhuh': 4135, '🎵': 4136, 'code': 4137, 'pleasant': 4138, 'on-board': 4139, 'raheel': 4140, 'flyhigh': 4141, 'bother': 4142, 'everette': 4143, 'taylor': 4144, 'ha-ha': 4145, 'peachyloans': 4146, 'fridayfreebie': 4147, 'noe': 4148, 'yi': 4149, 'bindingofissac': 4150, 'xboxone': 4151, 'console': 4152, 'justin': 4153, 'gladly': 4154, 'son': 4155, 'morocco': 4156, 'peru': 4157, 'nxt': 4158, 'bps': 4159, 'resort': 4160, 'x18': 4161, 'havuuuloveyou': 4162, 'uuu': 4163, 'possitve': 4164, 'hopeyou': 4165, 'sweetie': 4166, 'throwbackfriday': 4167, 'christen': 4168, 'ki': 4169, 'yaad': 4170, 'gayi': 4171, 'opossum': 4172, 'running': 4173, 'belated': 4174, 'yeahh': 4175, 'kuffar': 4176, 'computer': 4177, 'cell': 4178, 'diarrhea': 4179, 'immigrant': 4180, 'louse': 4181, 'goictived': 4182, '70685': 4183, 'tagsforlikes': 4184, 'trapmusic': 4185, 'hotmusicdelocos': 4186, 'kinickers': 4187, '01282': 4188, '452096': 4189, 'shady': 4190, 'management': 4191, 'reservation': 4192, 'tkts': 4193, 'likewise': 4194, 'overgeneralization': 4195, 'ikr': 4196, '😍': 4197, 'consumerism': 4198, 'rid': 4199, 'recently': 4200, 'fics': 4201, 'ouch': 4202, 'slip': 4203, 'disc': 4204, 'thw': 4205, 'swimming': 4206, 'chute': 4207, 'chalut': 4208, 'minute': 4209, 'replay': 4210, 'iplayer': 4211, '11am': 4212, 'unneeded': 4213, 'megamoh': 4214, '7/29': 4215, 'power': 4216, 'tool': 4217, 'zealand': 4218, 'pile': 4219, 'dump': 4220, 'couscous': 4221, "women's": 4222, 'fiction': 4223, 'wahahaah': 4224, 'x17': 4225, 'orhan': 4226, 'pamuk': 4227, 'hero': 4228, 'canopy': 4229, 'maple': 4230, 'leaf': 4231, 'syrup': 4232, 'farm': 4233, 'stephanie': 4234, '💖': 4235, 'congrtaualtions': 4236, 'phileas': 4237, 'club': 4238, 'inc': 4239, 'photograph': 4240, 'phonegraphs': 4241, 'srsly': 4242, '10:17': 4243, 'ripaaa': 4244, 'banate': 4245, 'ray': 4246, 'dept': 4247, 'hospital': 4248, 'grt': 4249, 'infographic': 4250, "o'clock": 4251, 'habit': 4252, '1dfor': 4253, 'roadtrip': 4254, '19:30': 4255, 'ifc': 4256, 'whip': 4257, 'lilsisbro': 4258, 
'pre-ordered': 4259, "pixar's": 4260, 'steelbook': 4261, 'hmm': 4262, 'pegell': 4263, 'lemess': 4264, 'kyle': 4265, 'paypal': 4266, 'confirmation': 4267, 'oct': 4268, 'tud': 4269, 'jst': 4270, 'addictive': 4271, 'humphrey': 4272, 'yell': 4273, 'erm': 4274, 'breach': 4275, 'lemon': 4276, 'yogurt': 4277, 'pot': 4278, 'discover': 4279, 'liquorice': 4280, 'pud': 4281, 'cajun': 4282, 'spiced': 4283, 'yum': 4284, 'cajunchicken': 4285, 'infinite': 4286, 'gern': 4287, 'cikaaa': 4288, 'maaf': 4289, 'telat': 4290, 'ngucapinnya': 4291, 'maaay': 4292, 'x16': 4293, 'viparita': 4294, 'karani': 4295, 'legsupthewall': 4296, 'unwind': 4297, 'coco': 4298, 'comfy': 4299, 'jalulu': 4300, 'rosh': 4301, 'gla': 4302, 'avail': 4303, 'suit': 4304, 'pallavi': 4305, 'nairobi': 4306, 'hrdstellobama': 4307, 'regional': 4308, 'civil': 4309, 'society': 4310, 'region': 4311, 'globe': 4312, 'hajur': 4313, 'yayy': 4314, "must've": 4315, 'nerve': 4316, 'prelim': 4317, 'costacc': 4318, 'nwb': 4319, 'shud': 4320, 'begin': 4321, 'cold': 4322, 'hmu': 4323, 'cala': 4324, 'brush': 4325, 'ego': 4326, 'wherever': 4327, 'interaction': 4328, 'dongsaeng': 4329, 'chorong': 4330, 'friendship': 4331, 'ffs': 4332, 'impressive': 4333, 'dragon': 4334, 'duck': 4335, 'mix': 4336, 'cheetah': 4337, 'wagga': 4338, 'coursework': 4339, 'lorna': 4340, 'scan': 4341, 'x12': 4342, 'canvas': 4343, 'paint': 4344, 'iqbal': 4345, 'ima': 4346, 'knowing': 4347, 'hon': 4348, 'aja': 4349, 'besi': 4350, 'chati': 4351, 'phulani': 4352, 'swasa': 4353, 'bahari': 4354, 'jiba': 4355, 'mumbai': 4356, 'gujarat': 4357, 'distrubed': 4358, 'otherwise': 4359, '190cr': 4360, 'inspite': 4361, 'holder': 4362, 'threatens': 4363, 'daily': 4364, 'basis': 4365, 'vr': 4366, 'angelo': 4367, 'quezon': 4368, 'sweatpants': 4369, 'breath': 4370, 'tripping': 4371, 'farbridges': 4372, 'segalakatakata': 4373, 'nixus': 4374, 'flint': 4375, '🍰': 4376, 'separately': 4377, 'criticise': 4378, 'gesture': 4379, 'pedal': 4380, 'stroke': 4381, 'attentive': 4382, 'caro': 4383, 'deposit': 4384, 'secure': 4385, 'shock': 4386, 'coffe': 4387, 'tenerina': 4388, 'auguri': 4389, 'iso': 4390, 'certification': 4391, 'paralyze': 4392, 'anxiety': 4393, 'sadness': 4394, "it'd": 4395, 'development': 4396, 'spain': 4397, 'def': 4398, 'bantime': 4399, 'fail': 4400, '2ban': 4401, 'x15': 4402, 'awkward': 4403, 'abs': 4404, 'galing': 4405, 'founder': 4406, 'loveyaaah': 4407, '⅛': 4408, '⅞': 4409, '∞': 4410, 'specialist': 4411, 'aw': 4412, 'babyyy': 4413, 'djstruthmate': 4414, 're-cap': 4415, 'flickr': 4416, 'tack': 4417, 'zephbot': 4418, 'hhahahahaha': 4419, 'blew': 4420, 'upp': 4421, 'entire': 4422, 'vega': 4423, 'strip': 4424, 'hahahahahhaha': 4425, "callie's": 4426, 'puppy': 4427, 'owner': 4428, 'callinganimalabusehotlineasap': 4429, 'gorefiend': 4430, 'mythic': 4431, 'reminder': 4432, '9:00': 4433, '▪': 4434, '️bea': 4435, 'miller': 4436, 'lockscreen': 4437, 'mbf': 4438, 'keesh': 4439, "yesterday's": 4440, 'groupie': 4441, 'bebe': 4442, 'sizams': 4443, 'color': 4444, 'invoice': 4445, 'kanina': 4446, 'pong': 4447, 'umaga': 4448, 'browser': 4449, 'typically': 4450, 'pleasse': 4451, 'leeteuk': 4452, 'pearl': 4453, 'thusi': 4454, 'pour': 4455, 'milk': 4456, 'tgv': 4457, 'paris': 4458, 'austerlitz': 4459, 'blois': 4460, 'mile': 4461, 'chateau': 4462, 'de': 4463, 'marais': 4464, 'taxi': 4465, 'x14': 4466, 'noms': 4467, 'enji': 4468, 'hater': 4469, 'purchase': 4470, 'specially-marked': 4471, 'custard': 4472, 'sm': 4473, 'on-pack': 4474, 'instruction': 4475, 'tile': 4476, 'downstairs': 4477, 'kelly': 4478, 'greek': 
4479, 'petra': 4480, 'shadowplaylouis': 4481, 'mutual': 4482, 'cuz': 4483, 'liveonstreamate': 4484, 'lani': 4485, 'graze': 4486, 'pride': 4487, 'bristolart': 4488, 'in-app': 4489, 'ensure': 4490, 'item': 4491, 'screw': 4492, 'amber': 4493, 'noticing': 4494, '43': 4495, 'hpc': 4496, 'wip': 4497, 'sws': 4498, 'newsround': 4499, 'hound': 4500, '7:40': 4501, 'ada': 4502, 'racist': 4503, 'hulk': 4504, 'tight': 4505, 'prayer': 4506, 'pardon': 4507, 'phl': 4508, 'abu': 4509, 'dhabi': 4510, 'blessing': 4511, 'hihihi': 4512, 'teamjanuaryclaims': 4513, 'godonna': 4514, 'msg': 4515, 'bowwowchicawowwow': 4516, 'settle': 4517, 'dkt': 4518, 'porch': 4519, 'uber': 4520, 'mobile': 4521, 'application': 4522, 'giggle': 4523, 'delight': 4524, 'bare': 4525, 'wind': 4526, 'kahlil': 4527, 'gibran': 4528, 'flash': 4529, 'stiff': 4530, 'upper': 4531, 'lip': 4532, 'britain': 4533, 'latmon': 4534, 'endeavour': 4535, 'anne': 4536, 'joy': 4537, 'exploit': 4538, 'ign': 4539, 'au': 4540, 'pubcast': 4541, 'tengaman': 4542, '21': 4543, 'celebratio': 4544, 'determine': 4545, 'install': 4546, 'glorify': 4547, 'infirmity': 4548, 'silly': 4549, 'suave': 4550, 'gentlemen': 4551, 'monthly': 4552, 'mileage': 4553, 'target': 4554, 'samsung': 4555, 'quality': 4556, 'ey': 4557, 'beth': 4558, 'watched': 4559, 'gangster': 4560, "athena's": 4561, 'fancy': 4562, 'wellington': 4563, 'rich': 4564, 'christina': 4565, 'newsletter': 4566, 'zy': 4567, 'olur': 4568, 'x13': 4569, 'flawless': 4570, 'remix': 4571, 'reaction': 4572, 'hayli': 4573, 'edwin': 4574, 'elvena': 4575, 'emc': 4576, 'rubber': 4577, 'swearword': 4578, 'infection': 4579, '10:16': 4580, 'christophe': 4581, 'gans': 4582, 'brotherhood': 4583, 'pill': 4584, 'nocturnal': 4585, 'rrp': 4586, '18.99': 4587, '13.99': 4588, 'jah': 4589, 'wobble': 4590, 'retard': 4591, '50notifications': 4592, 'check-up': 4593, 'pun': 4594, 'elite': 4595, 'camillus': 4596, 'pleaseee': 4597, 'spare': 4598, 'tyre': 4599, 'joke': 4600, 'ahahah': 4601, 'shame': 4602, 'abandon': 4603, 'disagree': 4604, 'nowhere': 4605, 'contradict': 4606, 'continuously': 4607, 'chaos': 4608, 'contain': 4609, 'cranium': 4610, 'sneaker': 4611, 'nike': 4612, 'nikeoriginal': 4613, 'nikeindonesia': 4614, 'pierojogger': 4615, 'skoy': 4616, 'winter': 4617, 'falklands': 4618, 'jamie-lee': 4619, 'congraaats': 4620, 'hooh': 4621, 'chrome': 4622, 'storm': 4623, 'thunderstorm': 4624, 'vegas': 4625, 'circuscircus': 4626, 'omgg': 4627, 'thankie': 4628, 'tdy': 4629, '(-:': 4630, 'peter': 4631, 'expel': 4632, 'boughy': 4633, 'kernel': 4634, 'paralysis': 4635, 'liza': 4636, 'lol.hook': 4637, 'vampire': 4638, 'diaries': 4639, 'twice': 4640, 'thanq': 4641, 'goodwill': 4642, 'vandr': 4643, 'ash': 4644, 'debatable': 4645, 'solar': 4646, '6-5': 4647, 'ek': 4648, 'taco': 4649, 'mexico': 4650, 'viva': 4651, 'méxico': 4652, 'burger': 4653, 'thebestangkapuso': 4654, 'tooth': 4655, 'korean': 4656, 'netizen': 4657, 'cruel': 4658, 'elephant': 4659, 'marula': 4660, 'tdif': 4661, 'shoutouts': 4662, 'shortly': 4663, 'itsamarvelthing': 4664, 'marvel': 4665, "japan's": 4666, 'artist': 4667, 'homework': 4668, 'marco': 4669, 'herb': 4670, 'pm': 4671, 'self': 4672, 'esteem': 4673, 'patience': 4674, 'sobtian': 4675, 'coworker': 4676, 'deathly': 4677, 'hallows': 4678, 'supernatural': 4679, 'consultant': 4680, 'himachal': 4681, '2.25': 4682, 'ashamed': 4683, 'inform': 4684, 'where.do.i.start': 4685, 'moviemarathon': 4686, 'conversational': 4687, 'skill': 4688, 'shadow': 4689, 'own': 4690, 'pair': 4691, 'typical': 4692, "it'll": 4693, 'cortez': 4694, 
'superstar': 4695, 'tthanks': 4696, 'colin': 4697, 'luxuous': 4698, 'tarryn': 4699, 'practice': 4700, 'goodness': 4701, 'hbdme': 4702, 'yeeeyyy': 4703, 'barsostay': 4704, 'malese': 4705, 'independent': 4706, 'sum': 4707, 'debacle': 4708, 'perfectly': 4709, 'amyjackson': 4710, 'omegle': 4711, 'countrymusic': 4712, 'five': 4713, "night's": 4714, "freddy's": 4715, 'demo': 4716, 'pump': 4717, 'fanboy': 4718, 'thegrandad': 4719, 'impression': 4720, 'grand': 4721, 'sidni': 4722, 'remarriage': 4723, 'occasion': 4724, 'completion': 4725, 'language': 4726, 'java': 4727, "php's": 4728, 'notion': 4729, 'reference': 4730, 'equally': 4731, 'confuse': 4732, 'ohioan': 4733, 'doctor': 4734, 'offline': 4735, 'thesims': 4736, 'mb': 4737, 'meaningless': 4738, 'common': 4739, 'celebrate': 4740, 'muertosatfringe': 4741, 'emulation': 4742, 'enemy': 4743, 'relax': 4744, 'ou': 4745, 'pink': 4746, 'cc': 4747, 'meooowww': 4748, 'barkkkiiideee': 4749, 'bark': 4750, 'x11': 4751, 'routine': 4752, 'aleks': 4753, 'awh': 4754, 'kumpul': 4755, 'cantik': 4756, 'ganteng': 4757, 'kresna': 4758, 'jelly': 4759, 'simon': 4760, 'lesley': 4761, 'blood': 4762, 'panty': 4763, 'lion': 4764, 'artworkbylie': 4765, 'judo': 4766, 'presentation': 4767, 'daredevil': 4768, 'despondently': 4769, 're-watch': 4770, 'welcoma.have': 4771, 'favor': 4772, 'tridon': 4773, '21pics': 4774, 'master': 4775, 'nim': 4776, "there're": 4777, '22pics': 4778, 'kebun': 4779, 'adorable': 4780, 'ubud': 4781, 'ladyposse': 4782, 'xoxoxo': 4783, 'sneak': 4784, 'peek': 4785, 'tuned': 4786, 'inbox': 4787, 'happyweekend': 4788, 'therealgolden': 4789, '47': 4790, 'girlfriendsmya': 4791, 'ppl': 4792, 'njoy': 4793, 'followingg': 4794, 'private': 4795, 'pusher': 4796, 'pri': 4797, 'stun': 4798, 'wooohooo': 4799, 'cuss': 4800, 'teenage': 4801, 'ace': 4802, 'sauce': 4803, 'livi': 4804, 'fowles': 4805, 'oliviafowles': 4806, '891': 4807, 'burnout': 4808, 'johnforceo': 4809, 'matthew': 4810, 'provoke': 4811, 'indiankulture': 4812, 'oppose': 4813, 'biker': 4814, 'dis': 4815, 'lyk': 4816, 'gud': 4817, 'weight': 4818, 'bcus': 4819, 'rubbish': 4820, 'veggie': 4821, 'steph': 4822, 'nj': 4823, 'x10': 4824, 'explore': 4825, 'listenable': 4826, 'cohesive': 4827, 'gossip': 4828, 'alex': 4829, 'heswifi': 4830, '7am': 4831, 'wub': 4832, 'cerbchan': 4833, 'jarraaa': 4834, 'morrrning': 4835, 'snooze': 4836, 'clicksco': 4837, 'gay': 4838, 'lesbian': 4839, 'rigid': 4840, 'theocratic': 4841, 'wing': 4842, 'fundamentalist': 4843, 'islamist': 4844, 'brianaaa': 4845, 'brianazabrocki': 4846, 'sky': 4847, 'batb': 4848, 'clap': 4849, 'working': 4850, 'whilst': 4851, 'aki': 4852, 'thencerest': 4853, '547': 4854, 'indiemusic': 4855, 'sexyjudy': 4856, 'pussy': 4857, 'sexo': 4858, 'humidity': 4859, '87': 4860, 'promotional': 4861, 'sloppy': 4862, "second's": 4863, 'stock': 4864, 'marmite': 4865, 'x9': 4866, 'nic': 4867, 'taft': 4868, 'finalist': 4869, 'lottery': 4870, 'award': 4871, 'usagi': 4872, 'looove': 4873, 'wowww': 4874, '💙': 4875, '💚': 4876, '💕': 4877, 'lepas': 4878, 'sembuh': 4879, 'sibuk': 4880, 'balik': 4881, 'kin': 4882, 'gotham': 4883, 'sunnyday': 4884, 'texting': 4885, 'dudettes': 4886, 'cost': 4887, 'flippin': 4888, 'fortune': 4889, 'divinediscontent': 4890, ';}': 4891, 'amnotness': 4892, 'autofollow': 4893, 'teamfollowback': 4894, 'geer': 4895, 'bat': 4896, 'mz': 4897, 'yang': 4898, 'deennya': 4899, 'jehwan': 4900, '11:00': 4901, 'julie': 4902, 'ashton': 4903, '✧': 4904, '。': 4905, 'chelny': 4906, 'datz': 4907, 'jeremy': 4908, 'fmt': 4909, 'dat': 4910, 'heartbeat': 4911, 'clutch': 
4912, '🐢': 4913, 'watching': 4914, 'besteverdoctorwhoepisode': 4915, 'relevant': 4916, 'puke': 4917, 'proper': 4918, 'x8': 4919, 'subliminal': 4920, 'eatmeat': 4921, 'brewproject': 4922, 'lovenafianna': 4923, 'mr': 4924, 'lewis': 4925, 'clock': 4926, '3:02': 4927, 'happens': 4928, 'muslim': 4929, 'prophet': 4930, 'غردلي': 4931, 'is.he': 4932, 'mistake': 4933, 'politician': 4934, 'argue': 4935, 'intellect': 4936, 'recommendation': 4937, 'shiva': 4938, 'mp3': 4939, 'apps': 4940, 'standrews': 4941, 'sandcastle': 4942, 'ewok': 4943, 'nate': 4944, 'brawl': 4945, 'rear': 4946, 'naked': 4947, 'choke': 4948, 'heck': 4949, 'gun': 4950, 'associate': 4951, 'um': 4952, 'endowment': 4953, 'ai': 4954, 'sikandar': 4955, 'pti': 4956, 'standwdik': 4957, 'westandwithik': 4958, 'starbucks': 4959, 'logo': 4960, 'obsession': 4961, 'addiction': 4962, 'renew': 4963, 'charity': 4964, 'جمعة_مباركة': 4965, 'hokies': 4966, 'biz': 4967, 'non': 4968, 'america': 4969, 'california': 4970, '01:16': 4971, '45gameplay': 4972, 'iloveyou': 4973, 'vex': 4974, 'igers': 4975, 'leicaq': 4976, 'leica': 4977, 'dudeee': 4978, 'persona': 4979, 'yepp': 4980, '5878e503': 4981, 'x7': 4982, 'greg': 4983, 'useful': 4984, 'posey': 4985, 'miami': 4986, 'james_yammouni': 4987, 'breakdown': 4988, 'material': 4989, 'thorin': 4990, 'hunt': 4991, 'choroo': 4992, 'nahi': 4993, 'aztec': 4994, 'princess': 4995, 'rainy': 4996, 'kingfisher': 4997, 'relaxed': 4998, 'chinua': 4999, 'achebe': 5000, 'intellectual': 5001, 'liquid': 5002, 'melbournetrip': 5003, 'taxikitchen': 5004, 'nooow': 5005, 'mcdo': 5006, 'everywhere': 5007, 'dreamer': 5008, 'tanisha': 5009, '1nonly': 5010, 'attitude': 5011, 'kindle': 5012, 'flame': 5013, 'conviction': 5014, 'bar': 5015, 'repath': 5016, 'adis': 5017, 'stefanie': 5018, 'sg1': 5019, 'lightbox': 5020, 'incorrect': 5021, 'spelling': 5022, 'apologist': 5023, 'x6': 5024, 'vuly': 5025, '01:15': 5026, 'batman': 5027, 'pearson': 5028, 'reputation': 5029, 'nikkei': 5030, 'woodford': 5031, 'vscocam': 5032, 'vscoph': 5033, 'vscogood': 5034, 'vscophil': 5035, 'vscocousins': 5036, 'yaap': 5037, 'urwelc': 5038, 'neon': 5039, 'pant': 5040, 'haaa': 5041, 'willing': 5042, 'auspost': 5043, 'openfollow': 5044, 'rp': 5045, 'eng': 5046, 'yūjō-cosplay': 5047, 'cosplayers': 5048, 'luxembourg': 5049, 'bunnys': 5050, 'broadcast': 5051, 'needa': 5052, 'gal': 5053, 'bend': 5054, 'heaven': 5055, 'proposal': 5056, 'score': 5057, 'january': 5058, 'hanabutle': 5059, 'kikhorny': 5060, 'interracial': 5061, 'makeup': 5062, 'chu': 5063, "weekend's": 5064, 'punt': 5065, 'horseracing': 5066, 'horse': 5067, 'horseracingtips': 5068, 'soulful': 5069, 'guitar': 5070, 'cocoared': 5071, 'salut': 5072, 'brief': 5073, 'introduction': 5074, 'indian': 5075, 'subcontinent': 5076, 'bfr': 5077, 'mauryas': 5078, 'jordanian': 5079, '00962778381': 5080, '838': 5081, 'tenyai': 5082, 'hee': 5083, 'ss': 5084, 'semi': 5085, 'atp': 5086, 'wimbledon': 5087, 'federer': 5088, 'nadal': 5089, 'monfils': 5090, 'handsome': 5091, 'cilic': 5092, 'firm': 5093, 'diary': 5094, 'potentially': 5095, 'nyc': 5096, 'chillin': 5097, 'lils': 5098, 'tail': 5099, 'kitten': 5100, 'garret': 5101, 'baz': 5102, 'leo': 5103, 'xst': 5104, 'centrifugal': 5105, 'haired': 5106, 'eternity': 5107, 'forgive': 5108, 'kangin': 5109, 'بندر': 5110, 'العنزي': 5111, 'kristin': 5112, 'ca': 5113, 'surajettan': 5114, 'kashi': 5115, 'ashwathy': 5116, 'mommy': 5117, 'tirth': 5118, 'brambhatt': 5119, 'playing': 5120, 'snooker': 5121, 'compensation': 5122, 'theoper': 5123, '479': 5124, 'premiostumundo': 5125, 'differ': 
5126, 'philosophical': 5127, 'x5': 5128, 'graphic': 5129, 'skills': 5130, 'level': 5131, 'thurs': 5132, 'aug': 5133, 'excl': 5134, 'raw': 5135, 'weenie': 5136, 'annoyingbaby': 5137, 'lazy': 5138, 'cosy': 5139, 'client_amends_edit': 5140, '_5_final_final_final': 5141, 'pdf': 5142, 'mauliate': 5143, 'ito': 5144, 'fridays': 5145, 'okkay': 5146, 'knock': 5147, "soloist's": 5148, 'ryu': 5149, 'saera': 5150, 'pinkeu': 5151, 'angry': 5152, 'animation': 5153, 'screencaps': 5154, 'jonghyun': 5155, 'seungyeon': 5156, 'cnblue': 5157, 'mbc': 5158, 'wgm': 5159, 'masa': 5160, 'entrepreneurship': 5161, 'empower': 5162, 'limpopo': 5163, 'picts': 5164, 'norapowel': 5165, 'hornykik': 5166, 'livesex': 5167, 'emirates': 5168, 'pumpkin': 5169, 'reserve': 5170, 'thrice': 5171, 'patron': 5172, 'venture': 5173, 'deathcure': 5174, 'boob': 5175, 'blame': 5176, 'dine': 5177, 'modern': 5178, 'grill': 5179, 'disk': 5180, 'nt4': 5181, 'iirc': 5182, 'ux': 5183, 'refinement': 5184, 'zdps': 5185, 'didnt': 5186, 'justice': 5187, 'daw': 5188, 'tine': 5189, 'gensan': 5190, 'frightlings': 5191, 'undead': 5192, 'plush': 5193, 'cushion': 5194, 'nba': 5195, '2k15': 5196, 'mypark': 5197, 'chronicle': 5198, 'gryph': 5199, 'volume': 5200, 'favorable': 5201, 'ellen': 5202, 'degeneres': 5203, 'shirt': 5204, 'mint': 5205, 'superdry': 5206, 'berangkaat': 5207, 'lagiii': 5208, 'siguro': 5209, 'un': 5210, 'kesa': 5211, 'lotsa': 5212, 'organisation': 5213, '4am': 5214, 'fingers-crossed': 5215, 'deep': 5216, 'htaccess': 5217, 'file': 5218, 'adf': 5219, 'womad': 5220, 'gran': 5221, 'canaria': 5222, 'gig': 5223, 'twist': 5224, 'define': 5225, 'youve': 5226, 'teamnatural': 5227, 'huni': 5228, 'yayayayay': 5229, 'yt': 5230, 'convention': 5231, 'barely': 5232, 'brighton': 5233, 'slay': 5234, 'nickname': 5235, 'babygirl': 5236, 'regard': 5237, 'himmat': 5238, 'karain': 5239, 'baat': 5240, 'meri': 5241, 'debate': 5242, 'hotee-my': 5243, 'uncle': 5244, 'tongue': 5245, 'pronounce': 5246, 'native': 5247, 'american': 5248, 'proverb': 5249, 'lovable': 5250, 'yesha': 5251, 'montoya': 5252, 'eagerly': 5253, 'waiting': 5254, 'payment': 5255, 'supreme': 5256, 'leon': 5257, 'randy': 5258, '9bis': 5259, 'physique': 5260, 'shave': 5261, 'uncut': 5262, 'boi': 5263, 'printing': 5264, 'regular': 5265, 'printer': 5266, 'nz': 5267, 'large': 5268, 'format': 5269, '10/10': 5270, 'senior': 5271, 'raid': 5272, 'conserve': 5273, 'battery': 5274, 'comfortable': 5275, 'swt': 5276, 'reservations@sandsbeach.eu': 5277, 'localgaragederby': 5278, 'campus': 5279, 'subgames': 5280, 'faceit': 5281, 'snpcaht': 5282, 'hakhakhak': 5283, 't___t': 5284, "kyungsoo's": 5285, 'animated': 5286, '3d': 5287, 'property': 5288, 'agent': 5289, 'accurate': 5290, 'description': 5291, 'theory': 5292, 'x4': 5293, '15.90': 5294, 'yvette': 5295, 'author': 5296, 'mwf': 5297, 'programme': 5298, 'taal': 5299, 'lake': 5300, '2emt': 5301, '«': 5302, 'scurri': 5303, 'agile': 5304, 'shipping': 5305, 'solution': 5306, 'sme': 5307, 'omar': 5308, 'kamaal': 5309, 'amm': 5310, '3am': 5311, 'hopehousekids': 5312, 'pitmantraining': 5313, 'walkersmithway': 5314, 'keepitlocal': 5315, 'sehun': 5316, 'se100leaders': 5317, 'uneventful': 5318, 'sofa': 5319, 'surf': 5320, 'cunt': 5321, 'unfollow': 5322, 'convos': 5323, 'rescoops': 5324, 'multiracial': 5325, 'fk': 5326, 'narrow': 5327, 'warlock': 5328, 'faithful': 5329, 'balloon': 5330, 'pas': 5331, 'mj': 5332, 'madison': 5333, 'beonknockknock': 5334, 'con-graduation': 5335, 'gent': 5336, 'bitchface': 5337, '😒': 5338, 'organic': 5339, '12pm': 5340, 'york': 5341, 
'lendal': 5342, 'pikami': 5343, 'capture': 5344, 'fulton': 5345, 'sheen': 5346, 'baloney': 5347, 'unvarnished': 5348, 'thick': 5349, 'blarney': 5350, 'flattery': 5351, 'laid': 5352, 'thin': 5353, 'sachin': 5354, 'unimportant': 5355, 'context': 5356, 'dampen': 5357, 'excitement': 5358, 'yu': 5359, 'compare': 5360, 'rocket': 5361, 'narendra': 5362, 'modi': 5363, 'aaaand': 5364, "team's": 5365, 'macauley': 5366, 'however': 5367, 'x3': 5368, 'wheeen': 5369, 'heechul': 5370, 'toast': 5371, 'coffee-weekdays': 5372, '9-11': 5373, 'sail': 5374, "friday's": 5375, 'commercial': 5376, 'insurance': 5377, 'requirement': 5378, 'lookfortheo': 5379, 'updated': 5380, 'cl': 5381, 'thou': 5382, 'april': 5383, 'airforce': 5384, 'clark': 5385, 'field': 5386, 'pampanga': 5387, 'troll': 5388, '⚡': 5389, 'brow': 5390, 'oily': 5391, 'maricarljanah': 5392, '6:15': 5393, 'degree': 5394, 'fahrenheit': 5395, '🍸': 5396, '╲': 5397, '─': 5398, '╱': 5399, '🍤': 5400, '╭': 5401, '╮': 5402, '┓': 5403, '┳': 5404, '┣': 5405, '╰': 5406, '╯': 5407, '┗': 5408, '┻': 5409, 'stool': 5410, 'topple': 5411, 'findyourfit': 5412, 'preferred': 5413, 'whomosexual': 5414, 'stack': 5415, 'pandora': 5416, 'attend': 5417, 'digitalexeter': 5418, 'digitalmarketing': 5419, 'sociamedia': 5420, 'nb': 5421, 'bom': 5422, 'dia': 5423, 'todos': 5424, 'forklift': 5425, 'worker': 5426, 'lsceens': 5427, 'immature': 5428, 'gandhi': 5429, 'grassy': 5430, 'feetblog': 5431, 'daughter': 5432, '4yrs': 5433, 'old-porridge': 5434, 'fiend': 5435, '2nite': 5436, 'comp': 5437, 'viking': 5438, 't20blast': 5439, 'np': 5440, 'tax': 5441, 'feets': 5442, 'ooohh': 5443, 'petjam': 5444, 'virtual': 5445, 'pounce': 5446, 'benteke': 5447, 'agnes': 5448, 'socialmedia@dpdgroup.co.uk': 5449, 'sam': 5450, 'fruity': 5451, 'vodka': 5452, 'sellyourcarin': 5453, '5words': 5454, 'chaloniklo': 5455, 'pic.twitter.com/jxz2lbv6o': 5456, "paperwhite's": 5457, 'laser-like': 5458, 'focus': 5459, 'ghost': 5460, 'tagsforlikesapp': 5461, 'instagood': 5462, 'tbt': 5463, 'socket': 5464, 'spanner': 5465, 'patiently': 5466, '😴': 5467, 'pglcsgo': 5468, 'x2': 5469, 'tend': 5470, 'crave': 5471, 'sjw': 5472, 'cakehamper': 5473, 'glow': 5474, 'yayyy': 5475, 'mercedes': 5476, 'hood': 5477, 'badge': 5478, 'hosted': 5479, 'drone': 5480, 'blow': 5481, 'ignore': 5482, 'retaliate': 5483, 'bollinger': 5484, "where's": 5485, 'planning': 5486, 'denmark': 5487, 'whitey': 5488, 'culture': 5489, 'coursee': 5490, 'intro': 5491, 'graphicdesign': 5492, 'videographer': 5493, 'youtuber': 5494, 'space': 5495, "ted's": 5496, 'bogus': 5497, '1000': 5498, 'hahahaaah': 5499, 'owly': 5500, 'afternon': 5501, 'whangarei': 5502, 'katie': 5503, 'nicholas': 5504, 'pauline': 5505, 'trafficker': 5506, 'daring': 5507, 'hence': 5508, 'expression': 5509, 'wot': 5510, 'hand-lettering': 5511, 'roof': 5512, 'ease': 5513, '2/2': 5514, 'sour': 5515, 'dough': 5516, 'egypt': 5517, 'hubby': 5518, 'mixed': 5519, 'sakin': 5520, 'six': 5521, 'christmas': 5522, 'avril': 5523, 'n04js': 5524, '25': 5525, 'prosecco': 5526, 'pech': 5527, 'micro': 5528, 'catspjs': 5529, '4:15': 5530, 'lazyweekend': 5531, 'overdue': 5532, '💃': 5533, 'jurassic': 5534, 'ding': 5535, 'nila': 5536, 'pairing': 5537, '8)': 5538, 'unfortunately': 5539, 'cookie': 5540, 'shir': 5541, '0': 5542, 'hale': 5543, 'cheshire': 5544, 'decorate': 5545, 'lemme': 5546, 'recs': 5547, 'ingat': 5548, 'din': 5549, 'mono': 5550, 'kathryn': 5551, 'jr': 5552, 'develop': 5553, 'hsr': 5554, 'base': 5555, 'major': 5556, 'sugarrush': 5557, 'knitting': 5558, 'partly': 5559, 'binge': 5560, 'homegirl': 
5561, 'nancy': 5562, 'fenja': 5563, 'aapke': 5564, 'benchmark': 5565, 'ke': 5566, 'hisaab': 5567, 'ho': 5568, 'gaya': 5569, 'ofc': 5570, 'influence': 5571, 'rtss': 5572, 'hwaiting': 5573, 'titanfall': 5574, 'xbox': 5575, 'ultimate': 5576, 'experiment': 5577, 'gastronomy': 5578, 'newblogpost': 5579, 'foodiefridays': 5580, 'foodie': 5581, 'yoghurt': 5582, 'pancake': 5583, 'sabah': 5584, 'kapima': 5585, 'gelen': 5586, 'guzel': 5587, 'bir': 5588, 'hediye': 5589, 'thanx': 5590, '💞': 5591, 'visa': 5592, 'parisa': 5593, 'epiphany': 5594, 'lit': 5595, 'em-con': 5596, 'defender': 5597, '0330 333 7234': 5598, 'retailer': 5599, 'kianweareproud': 5600, 'distract': 5601, 'dayofarch': 5602, '10-20': 5603, 'bapu': 5604, 'ivypowel': 5605, 'newmusic': 5606, 'sexchat': 5607, '🍅': 5608, 'pathway': 5609, 'balkan': 5610, 'gypsy': 5611, 'mayhem': 5612, 'burek': 5613, 'meat': 5614, 'gibanica': 5615, 'pie': 5616, 'likely': 5617, 'surrey': 5618, 'afterwards': 5619, '10.30': 5620, 'heard': 5621, 'temporal': 5622, 'void': 5623, 'stem': 5624, 'sf': 5625, 'ykr': 5626, 'sparky': 5627, '40mm': 5628, '3.5': 5629, 'grs': 5630, 'rockfishing': 5631, 'topwater': 5632, 'twitlonger': 5633, 'me.so': 5634, 'jummah': 5635, 'durood': 5636, 'pak': 5637, 'cjradacomateada': 5638, 'suprised': 5639, 'debut': 5640, 'shipper': 5641, 'aside': 5642, 'housemate': 5643, '737bigatingconcert': 5644, 'jedzjabłka': 5645, 'pijjabłka': 5646, 'polish': 5647, 'cider': 5648, 'mustread': 5649, 'cricket': 5650, 'origin': 5651, '5pm': 5652, 'query': 5653, 'abby': 5654, 'sumedh': 5655, 'sunnah': 5656, 'عن': 5657, 'quad': 5658, 'bike': 5659, 'carrie': 5660, 'propriety': 5661, 'chronic': 5662, 'illness': 5663, 'superday': 5664, 'chocolatey': 5665, 'yasu': 5666, 'ooooh': 5667, 'fucked': 5668, 'hallo': 5669, 'improvement': 5670, 'dylan': 5671, 'laura': 5672, 'patrice': 5673, 'keepin': 5674, 'mohr': 5675, 'guest': 5676, "o'neal": 5677, 'tks': 5678, 'luas': 5679, 'stone': 5680, 'quicker': 5681, 'diet': 5682, 'sosweet': 5683, 'nominiere': 5684, 'und': 5685, 'hardcore': 5686, '😌': 5687, 'ff__special': 5688, 'acha': 5689, 'banda': 5690, '✌': 5691, 'bhi': 5692, 'krta': 5693, 'beautifully-crafted': 5694, 'mockingbird': 5695, 'diploma': 5696, 'blend': 5697, 'numbero': 5698, 'lolz': 5699, 'ambrose': 5700, 'gwinett': 5701, 'bierce': 5702, 'suffer': 5703, 'ravage': 5704, 'illadvised': 5705, 'marriage': 5706, 'virginity': 5707, 'cynical': 5708, 'yahuda': 5709, 'nosmet': 5710, 'pony': 5711, 'cuuute': 5712, "f'ing": 5713, 'vacant': 5714, 'hauc': 5715, 'hiss': 5716, 'overnight': 5717, 'cornish': 5718, 'all-clear': 5719, 'complains': 5720, 'raincoat': 5721, 'measure': 5722, 'wealth': 5723, 'invest': 5724, 'garbi': 5725, 'wash': 5726, 'refuel': 5727, 'dunedin': 5728, 'kalle': 5729, 'rakhi': 5730, 'photographer': 5731, '12th': 5732, 'carry': 5733, 'represent': 5734, 'slovenia': 5735, 'fridge': 5736, 'fighting': 5737, 'ludlow': 5738, '28th': 5739, 'selway': 5740, 'submit': 5741, 'spanish': 5742, 'greet': 5743, '90210': 5744, 'oitnb': 5745, 'prepare': 5746, 'condition': 5747, 'msged': 5748, 'chiquitos': 5749, 'ohaha': 5750, 'delhi': 5751, '95': 5752, 'webtogsawards': 5753, 'grace': 5754, 'sheffield': 5755, 'tramline': 5756, 'tl': 5757, 'hack': 5758, 'lad': 5759, 'beeepin': 5760, 'duper': 5761, 'handle': 5762, 'critique': 5763, 'contectually': 5764, 'beliebers': 5765, 'ultor': 5766, 'mamaya': 5767, 'loiyals': 5768, 'para': 5769, 'truthfulwordsof': 5770, 'beanatividad': 5771, 'nknkkpagpapakumbaba': 5772, 'birthdaypresent': 5773, 'compliment': 5774, 'swerve': 5775, 'goodtime': 
5776, 'sinister': 5777, 'tryna': 5778, 'anonymous': 5779, 'dipsatch': 5780, 'aunt': 5781, 'dagga': 5782, 'burketeer': 5783, '2am': 5784, 'promo': 5785, 'required': 5786, 'twine': 5787, "diane's": 5788, 'happybirthday': 5789, 'thanksss': 5790, 'randomly': 5791, 'buckinghampalace': 5792, 'personality': 5793, 'chibi': 5794, 'maker': 5795, 'timog': 5796, '18th': 5797, 'otw': 5798, 'kami': 5799, 'feelinggood': 5800, 'demand': 5801, 'naman': 5802, 'barkin': 5803, 'yeap': 5804, 'onkey': 5805, 'umma': 5806, 'pervert': 5807, 'onyu': 5808, 'appa': 5809, 'lucy': 5810, 'horrible': 5811, 'quantum': 5812, 'blockchain': 5813, 'nowplaying': 5814, 'loftey': 5815, 'routte': 5816, 'assia': 5817, '.\n.\n.': 5818, 'joint': 5819, 'futurereleases': 5820, "look's": 5821, 'scary': 5822, 'murder': 5823, 'mystery': 5824, 'comma': 5825, "j's": 5826, 'hunny': 5827, 'diva': 5828, 'emily': 5829, 'nathan': 5830, 'meditation': 5831, 'alumnus': 5832, 'mba': 5833, 'representative': 5834, 'foto': 5835, 'what-is-your-fashionality': 5836, 'lorenangel': 5837, 'kw': 5838, 'tellanoldjokeday': 5839, 'reqd': 5840, 'speculation': 5841, 'consistency': 5842, 'tropic': 5843, 'startupph': 5844, 'zodiac': 5845, 'rapunzel': 5846, 'imaginative': 5847, 'therver': 5848, '85552': 5849, 'bestoftheday': 5850, 'oralsex': 5851, 'carly': 5852, 'happily': 5853, 'contract': 5854, 'matsu_bouzu': 5855, 'sonic': 5856, 'videogames': 5857, 'harana': 5858, 'belfast': 5859, 'danny': 5860, 'rare': 5861, 'sponsorship': 5862, 'aswell': 5863, 'gigi': 5864, 'nick': 5865, 'austin': 5866, 'youll': 5867, 'weak': 5868, '10,000': 5869, 'bravo': 5870, 'iamamonster': 5871, 'rxthedailysurveyvotes': 5872, 'as': 5873, 'roux': 5874, 'walkin': 5875, 'audience': 5876, 'pfb': 5877, 'jute': 5878, 'walangmakakapigilsakin': 5879, 'lori': 5880, 'ehm': 5881, 'trick': 5882, 'baekhyun': 5883, 'eyesmiles': 5884, 'borrow': 5885, 'knife': 5886, 'thek': 5887, 'widely': 5888, 'eventually': 5889, 'reaapearing': 5890, 'kno': 5891, 'whet': 5892, 'grattis': 5893, 'tweetin': 5894, 'inshallah': 5895, 'banana': 5896, 'raspberry': 5897, 'healthylifestyle': 5898, 'aint': 5899, 'skate': 5900, 'analyze': 5901, 'variety': 5902, 'twitching': 5903, '4:13': 5904, 'insomnia': 5905, 'medication': 5906, 'opposite': 5907, 'everlasting': 5908, 'yoga': 5909, 'massage': 5910, 'osteopath': 5911, 'trainer': 5912, 'sharm': 5913, 'al_master_band': 5914, 'tbc': 5915, 'univesity': 5916, 'architecture': 5917, 'random': 5918, 'isnt': 5919, 'typo': 5920, 'snark': 5921, 'lessions': 5922, 'drunk': 5923, 'bruuh': 5924, '2weeks': 5925, '50europe': 5926, '🇫🇷': 5927, 'iove': 5928, 'accord': 5929, 'mne': 5930, 'pchelok': 5931, 'ja': 5932, 'carefully': 5933, '=:': 5934, 'comet': 5935, 'ahah': 5936, 'candy': 5937, 'axio': 5938, 'remind': 5939, 'rabbit': 5940, 'nutshell': 5941, 'letshavecocktailsafternuclai': 5942, 'malik': 5943, 'umair': 5944, 'celebrity': 5945, 'canon': 5946, 'gang': 5947, 'grind': 5948, 'thoracicbridge': 5949, '5minute': 5950, 'nonscripted': 5951, 'password': 5952, 'shoshannavassil': 5953, 'addmeonsnapchat': 5954, 'dmme': 5955, 'mpoints': 5956, 'soph': 5957, 'subjective': 5958, 'painful': 5959, 'hopeless': 5960, 'ikea': 5961, 'slide': 5962, 'ban': 5963, 'neighbour': 5964, 'motor': 5965, 'search': 5966, 'sialan': 5967, 'athabasca': 5968, 'glacier': 5969, '1948': 5970, ':-(': 5971, 'jasper': 5972, 'jaspernationalpark': 5973, 'alberta': 5974, 'explorealberta': 5975, 'mare': 5976, 'ivan': 5977, 'hahahah': 5978, 'replacement': 5979, 'bi-polar': 5980, 'grasp': 5981, 'basic': 5982, 'digital': 5983, 'research': 
5984, '😩': 5985, 'choreographing': 5986, 'longer': 5987, 'mingming': 5988, 'truly': 5989, 'pee': 5990, 'newwine': 5991, 'doushite': 5992, 'gundam': 5993, 'dollar': 5994, 'pale': 5995, 'imitation': 5996, 'rash': 5997, 'absent': 5998, 'sequel': 5999, 'ouucchhh': 6000, 'wisdom': 6001, 'teeth': 6002, 'frighten': 6003, 'pret': 6004, 'wkwkw': 6005, 'verfied': 6006, 'sentir-se': 6007, 'incompleta': 6008, 'thwarting': 6009, 'zante': 6010, "when's": 6011, 'audraesar': 6012, 'craaazzyy': 6013, 'helium': 6014, 'climatechange': 6015, "california's": 6016, 'influential': 6017, 'pollution': 6018, 'watchdog': 6019, 'califor': 6020, 'elhaida': 6021, 'jury': 6022, '10th': 6023, 'televoting': 6024, 'idaho': 6025, 'drought-linked': 6026, 'die-of': 6027, 'abrupt': 6028, 'climate': 6029, 'mammoth': 6030, 'megafauna': 6031, "australia's": 6032, 'considers': 6033, 'energy': 6034, 'biomass': 6035, 'golf': 6036, 'ulti': 6037, 'prexy': 6038, 'kindergarten': 6039, 'sane': 6040, 'boss': 6041, 'hozier': 6042, 'soooner': 6043, 'respectlost': 6044, 'hypercholesteloremia': 6045, 'calibraska': 6046, 'genuine': 6047, 'contender': 6048, 'aos': 6049, 'yun': 6050, 'encore': 6051, '4thwin': 6052, 'baymax': 6053, 'mixer': 6054, 'wft': 6055, 'promotion': 6056, 'hub': 6057, 'finale': 6058, 'parasyte': 6059, 'alll': 6060, 'zayniscomingbackonjuly': 6061, '26': 6062, 'era': 6063, '。': 6064, 'ω': 6065, '」': 6066, '∠': 6067, 'nathann': 6068, 'dieididieieiei': 6069, 'netflix': 6070, 'ervin': 6071, 'accept': 6072, 'desperate': 6073, 'amargolonnard': 6074, 'batalladelosgallos': 6075, 'webcamsex': 6076, 'duty': 6077, "u've": 6078, 'alien': 6079, 'sorka': 6080, 'funeral': 6081, 'nonexistent': 6082, 'wowza': 6083, 'fah': 6084, 'boo': 6085, 'hyung': 6086, 'predict': 6087, 'sj': 6088, 'nominate': 6089, 'brace': 6090, 'omggg': 6091, 'whyyy': 6092, 'annoying': 6093, 'evan': 6094, 'opt': 6095, 'muster': 6096, 'merchs': 6097, 'drinking': 6098, 'savanna': 6099, 'straw': 6100, 'yester': 6101, 'whens': 6102, 'consume': 6103, 'werk': 6104, 'foreals': 6105, 'wesen': 6106, 'uwesiti': 6107, 'bosen': 6108, 'egg': 6109, 'benny': 6110, 'badly': 6111, '>:(': 6112, 'zaz': 6113, 'rehash': 6114, 'mushroom': 6115, 'piece': 6116, 'exception': 6117, 'vicky': 6118, '45': 6119, 'hahah': 6120, 'ninasty': 6121, 'tsktsk': 6122, 'dick': 6123, 'kawaii': 6124, 'manly': 6125, 'youu': 6126, 'potato': 6127, 'fry': 6128, 'asf': 6129, 'eish': 6130, 'ive': 6131, 'mojo': 6132, 'mara': 6133, 'neh': 6134, 'association': 6135, 'councillor': 6136, 'sholong': 6137, 'reject': 6138, 'gee': 6139, 'gidi': 6140, 'lagos': 6141, 'ehn': 6142, 'arrest': 6143, 'idk': 6144, 'anybody': 6145, 'disappear': 6146, 'daze': 6147, 'fragment': 6148, "would've": 6149, 'walao': 6150, 'kbs': 6151, 'djderek': 6152, 'legend': 6153, 'tragedy': 6154, '💪🏻': 6155, '🐒': 6156, 'strucked': 6157, 'belgium': 6158, 'fabian': 6159, 'delph': 6160, 'draking': 6161, 'silently': 6162, 'mumma': 6163, 'pray': 6164, 'appropriate': 6165, 'woke': 6166, 'across': 6167, 'hindi': 6168, 'disability': 6169, 'pension': 6170, 'ptsd': 6171, 'impossible': 6172, 'physically': 6173, 'financially': 6174, 'nooo': 6175, 'togheter': 6176, '21st': 6177, 'snsd': 6178, 'anna': 6179, 'akana': 6180, 'askip': 6181, "t'existes": 6182, 'decides': 6183, 'kei': 6184, 'kate': 6185, 'spade': 6186, 'pero': 6187, 'walang': 6188, 'agesss': 6189, 'corinehurleigh': 6190, 'snapchatme': 6191, 'sfs': 6192, 'zara': 6193, 'trouser': 6194, "it'okay": 6195, 'luckely': 6196, 'orcalove': 6197, 'lew': 6198, 'crumble': 6199, 'mcflurry': 6200, 'cable': 6201, 'scared': 
6202, "venus's": 6203, 'concept': 6204, 'rly': 6205, 'tagal': 6206, 'awful': 6207, 'appointment': 6208, 'cereal': 6209, 'previous': 6210, '73': 6211, 'fansign': 6212, 'expensive': 6213, 'zzzz': 6214, "bff's": 6215, 'extremely': 6216, "deosn't": 6217, 'liverpool': 6218, 'tacky': 6219, 'haaretz': 6220, 'israel': 6221, 'syria': 6222, 'chemical': 6223, 'wsj': 6224, 'rep': 6225, 'bts': 6226, 'wong': 6227, 'confiscate': 6228, 'icepack': 6229, 'dos': 6230, 'whimper': 6231, 'senpai': 6232, 'buttsex': 6233, "dn't": 6234, 'brk': 6235, ":'(": 6236, 'falsetto': 6237, 'zone': 6238, 'loveofmylife': 6239, 's2': 6240, 'untruth': 6241, 'ij': 6242, '💥': 6243, '💫': 6244, 'soshi': 6245, 'buttt': 6246, 'heyyy': 6247, 'yeols': 6248, 'danceee': 6249, 'nemanja': 6250, 'vidic': 6251, 'roma': 6252, "mom's": 6253, 'linguist': 6254, "dad's": 6255, 'dumb': 6256, 'wanted': 6257, 'pouring': 6258, 'crash': 6259, 'resource': 6260, 'vehicle': 6261, 'ayex': 6262, 'lamon': 6263, 'scroll': 6264, 'curve': 6265, 'cement': 6266, '10.3': 6267, 'jmu': 6268, 'camp': 6269, 'tease': 6270, 'awuna': 6271, 'mbulelo': 6272, 'although': 6273, 'crackling': 6274, 'plug': 6275, 'fuse': 6276, 'dammit': 6277, 'carlton': 6278, 'aflblueshawks': 6279, "alex's": 6280, 'motorsport': 6281, 'cheeky': 6282, 'seo': 6283, 'nls': 6284, 'christy': 6285, 'niece': 6286, 'bloody': 6287, 'sandwhich': 6288, 'buset': 6289, 'discrimination': 6290, 'pregnancy': 6291, 'maternity': 6292, 'kick': 6293, 'domesticviolence': 6294, 'domestic': 6295, 'violence': 6296, 'victim': 6297, '98fm': 6298, 'stressful': 6299, 'upsetting': 6300, 'government': 6301, 'sapiosexual': 6302, 'beta': 6303, 'hogan': 6304, 'scrub': 6305, 'wwe': 6306, 'irene': 6307, 'naa': 6308, 'h_my_king': 6309, 'valentine': 6310, 'et': 6311, "r'ships": 6312, 'btwn': 6313, 'homo': 6314, 'biphobic': 6315, 'discipline': 6316, 'incl': 6317, 'european': 6318, 'langs': 6319, 'fresherstofinals': 6320, '💔': 6321, 'realistic': 6322, 'liking': 6323, 'prefer': 6324, 'benzema': 6325, 'hahahahahaah': 6326, 'donno': 6327, 'russian': 6328, 'waaa': 6329, 'eidwithgrofers': 6330, 'boreddd': 6331, 'mug': 6332, 'tiddler': 6333, '에이핑크': 6334, '더쇼': 6335, 'clan': 6336, 'slot': 6337, 'pfff': 6338, 'bugbounty': 6339, 'self-xss': 6340, 'host': 6341, 'header': 6342, 'poison': 6343, 'execution': 6344, 'ktksbye': 6345, 'cancel': 6346, 'petite': 6347, '600': 6348, 'security': 6349, 'odoo': 6350, 'partner': 6351, 'effin': 6352, 'ap': 6353, 'living': 6354, 'elsewhere': 6355, 'branch': 6356, 'concern': 6357, 'pod': 6358, 'D;': 6359, 'talk-kama': 6360, 'hawako': 6361, 'waa': 6362, 'kimaaani': 6363, 'prisss': 6364, 'baggage': 6365, 'sugar': 6366, 'rarely': 6367, 'agh': 6368, 'undercoverboss': 6369, 'تكفى': 6370, 'ache': 6371, 'wrist': 6372, 'ligo': 6373, 'dozen': 6374, 'shark': 6375, 'heartache': 6376, 'zayniscomingback': 6377, 'sweden': 6378, 'elmhurst': 6379, 'etid': 6380, "chillin'with": 6381, 'father': 6382, 'istanya': 6383, '2supply': 6384, 'infrastructure': 6385, 'nurse': 6386, 'paramedic': 6387, 'countless': 6388, '2cope': 6389, 'bored': 6390, 'plea': 6391, 'arianator': 6392, 'streaming': 6393, 'kendall': 6394, 'kylie': 6395, "kylie's": 6396, 'manila': 6397, 'jeebus': 6398, 'reabsorbtion': 6399, 'abscess': 6400, 'threaten': 6401, 'crown': 6402, 'ooouch': 6403, 'barney': 6404, "be's": 6405, 'problematic': 6406, 'mess': 6407, 'maa': 6408, 'bangalore': 6409, 'luis': 6410, 'manzano': 6411, 'shaaa': 6412, 'convener': 6413, '2:30': 6414, 'delete': 6415, 'airfields': 6416, 'jet': 6417, "jack's": 6418, 'spamming': 6419, "mommy's": 6420, 
...(output truncated for brevity: the vocabulary dictionary continues in the same pattern and ends with 'soccer': 9534)}

The dictionary Vocab will look like this:

{'': 0,
 '[UNK]': 1,
 'followfriday': 2,
 'top': 3,
 'engage': 4,
 ...
  • Each unique word has a unique integer associated with it.
  • The total number of words in Vocab: 9535
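For example, a few lookups in the finished dictionary (assuming vocab is the dictionary returned by build_vocabulary above):

print(vocab['[UNK]'])         # 1
print(vocab['followfriday'])  # 2
print(len(vocab))             # 9535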
In [12]:
# Test the build_vocabulary function
w1_unittest.test_build_vocabulary(build_vocabulary)
 All tests passed

2.3 - Convert a Tweet to a Tensor¶

Next, you will write a function that will convert each tweet to a tensor (a list of integer IDs representing the processed tweet).

  • You already transformed each tweet to a list of tokens with the process_tweet function in order to make a vocabulary.
  • Now you will transform the tokens to integers and pad the tensors so they all have equal length.
  • Note: the returned data type will be a regular Python list
    • You won't use TensorFlow in this function
    • You also won't use a numpy array
  • For words in the tweet that are not in the vocabulary, set them to the unique ID for the token [UNK].
Example¶

You had the original tweet:

'@happypuppy, is Maria happy?'

The tweet is already converted into a list of tokens (including only relevant words).

['maria', 'happy']

Now you will convert each word into its unique integer.

[1, 55]
  • Notice that the word "maria" is not in the vocabulary, so it is assigned the unique integer associated with the [UNK] token, because it is considered "unknown."

After that, you will pad the tweet with zeros so that all the tweets have the same length.

[1, 55, 0, 0, ... , 0]
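A minimal sketch of this lookup-and-pad logic, using a toy vocabulary and a toy maximum length of 10 purely for illustration (the real vocab and max_len come from the cells below):

toy_vocab = {'': 0, '[UNK]': 1, 'happy': 55}
tokens = ['maria', 'happy']
ids = [toy_vocab.get(w, toy_vocab['[UNK]']) for w in tokens]  # 'maria' falls back to [UNK] -> [1, 55]
padded = ids + [0] * (10 - len(ids))                          # pad with zeros up to the toy max length
print(padded)  # [1, 55, 0, 0, 0, 0, 0, 0, 0, 0]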

First, let's have a look at the lengths of the processed tweets. You have to look at all tweets in the training and validation sets and find the longest one, so you can pad all of them to that maximum length.

In [13]:
# Tweet lengths
plt.hist([len(t) for t in train_x + val_x]);
[Figure: histogram of the processed tweet lengths]

Now find the length of the longest tweet. Remember to look at the training and the validation set.

Exercise 2 - max_len¶

Calculate the length of the longest tweet.

In [14]:
# GRADED FUNCTION: max_length
def max_length(training_x, validation_x):
    """Computes the length of the longest tweet in the training and validation sets.

    Args:
        training_x (list): The tweets in the training set.
        validation_x (list): The tweets in the validation set.

    Returns:
        int: Length of the longest tweet.
    """
    ### START CODE HERE ###

    max_len = max(max(len(tweet) for tweet in training_x), max(len(tweet) for tweet in validation_x))
    
    ### END CODE HERE ###
    return max_len

max_len = max_length(train_x, val_x)
print(f'The length of the longest tweet is {max_len} tokens.')
The length of the longest tweet is 51 tokens.

Expected output:

The length of the longest tweet is 51 tokens.

In [15]:
# Test your max_len function
w1_unittest.test_max_length(max_length)
 All tests passed

Exercise 3 - padded_sequence¶

Implement the padded_sequence function to transform sequences of words into padded sequences of numbers. A couple of things to notice:

  • The term tensor is used to refer to the encoded tweet, but the function should return a regular Python list, not a tf.Tensor
  • There is no need to truncate the tweet if it exceeds max_len as you already know the maximum length of the tweets beforehand
In [16]:
# GRADED FUNCTION: padded_sequence
def padded_sequence(tweet, vocab_dict, max_len, unk_token='[UNK]'):
    """transform sequences of words into padded sequences of numbers

    Args:
        tweet (list): A single tweet encoded as a list of strings.
        vocab_dict (dict): Vocabulary.
        max_len (int): Length of the longest tweet.
        unk_token (str, optional): Unknown token. Defaults to '[UNK]'.

    Returns:
        list: Padded tweet encoded as a list of int.
    """
    ### START CODE HERE ###
    
    # Find the ID of the UNK token, to use it when you encounter a new word
    unk_ID = vocab_dict[unk_token] 
    
    # First convert the words to integers by looking up the vocab_dict
    tweet_encoded = [vocab_dict.get(word, unk_ID) for word in tweet]

    # Then pad the tensor with zeroes up to the length max_len
    padded_tensor = tweet_encoded + [0] * (max_len - len(tweet_encoded))

    ### END CODE HERE ###

    return padded_tensor

Test the function

In [17]:
# Test your padded_sequence function
w1_unittest.test_padded_sequence(padded_sequence)
 All tests passed

Pad the train and validation datasets

In [18]:
train_x_padded = [padded_sequence(x, vocab, max_len) for x in train_x]
val_x_padded = [padded_sequence(x, vocab, max_len) for x in val_x]
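As a quick sanity check (assuming the cells above ran successfully), every padded tweet should now have exactly max_len entries:

assert all(len(t) == max_len for t in train_x_padded + val_x_padded)
print(len(train_x_padded[0]))  # 51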

3 - Define the structure of the neural network layers¶

In this part, you will write your own functions and layers for the neural network to test your understanding of the implementation. They will be similar to the ones used in Keras and PyTorch. Writing your own small framework will help you understand how these libraries work, so you can use them effectively in the future.

You will implement the ReLU and sigmoid functions, which you will use as activation functions for the neural network, as well as a fully connected (dense) layer.

3.1 - ReLU¶

You will now implement the ReLU activation in the function below. The ReLU function looks as follows:

[Figure: plot of the ReLU activation function]

$$ \mathrm{ReLU}(x) = \mathrm{max}(0,x) $$

Exercise 4 - relu¶

Instructions: Implement the ReLU activation function below. Your function should take in a matrix or vector and it should transform all the negative numbers into 0 while keeping all the positive numbers intact.

Notice you can get the maximum of two numbers by using np.maximum.

In [19]:
# GRADED FUNCTION: relu
def relu(x):
    '''Relu activation function implementation
    Input: 
        - x (numpy array)
    Output:
        - activation (numpy array): input with negative values set to zero
    '''
    ### START CODE HERE ###

    activation = np.maximum(0,x)

    ### END CODE HERE ###

    return activation
In [20]:
# Check the output of your function
x = np.array([[-2.0, -1.0, 0.0], [0.0, 1.0, 2.0]], dtype=float)
print("Test data is:")
print(x)
print("\nOutput of relu is:")
print(relu(x))
Test data is:
[[-2. -1.  0.]
 [ 0.  1.  2.]]

Output of relu is:
[[0. 0. 0.]
 [0. 1. 2.]]

Expected Output:

Test data is:
[[-2. -1.  0.]
 [ 0.  1.  2.]]
 
Output of relu is:
[[0. 0. 0.]
 [0. 1. 2.]]
In [21]:
# Test your relu function
w1_unittest.test_relu(relu)
 All tests passed

3.2 - Sigmoid¶

You will now implement the sigmoid activation in the function below. The sigmoid function looks as follows:

[Figure: plot of the sigmoid activation function]

$$ \mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}} $$

Exercise 5 - sigmoid¶

Instructions: Implement the sigmoid activation function below. Your function should take in a matrix or vector and it should transform all the numbers according to the formula above.

In [22]:
# GRADED FUNCTION: sigmoid
def sigmoid(x):
    '''Sigmoid activation function implementation
    Input: 
        - x (numpy array)
    Output:
        - activation (numpy array)
    '''
    ### START CODE HERE ###

    activation = 1 / (1 + np.exp(-x))

    ### END CODE HERE ###

    return activation    
In [23]:
# Check the output of your function
x = np.array([[-1000.0, -1.0, 0.0], [0.0, 1.0, 1000.0]], dtype=float)
print("Test data is:")
print(x)
print("\nOutput of sigmoid is:")
print(sigmoid(x))
Test data is:
[[-1000.    -1.     0.]
 [    0.     1.  1000.]]

Output of sigmoid is:
[[0.         0.26894142 0.5       ]
 [0.5        0.73105858 1.        ]]

Expected Output:

Test data is:
[[-1000.    -1.     0.]
 [    0.     1.  1000.]]

Output of sigmoid is:
[[0.         0.26894142 0.5       ]
 [0.5        0.73105858 1.        ]]
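One aside: evaluating sigmoid(-1000.0) computes np.exp(1000.0), which overflows to inf and may emit a RuntimeWarning; the returned value is still correct, because 1 / (1 + inf) evaluates to 0.0. If you ever need a warning-free sigmoid outside this assignment, SciPy provides one (shown here only as an illustration, not as part of the graded code):

from scipy.special import expit  # numerically robust logistic sigmoid
print(expit(np.array([-1000.0, 0.0, 1000.0])))  # [0.  0.5 1. ]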
In [24]:
# Test your sigmoid function
w1_unittest.test_sigmoid(sigmoid)
 All tests passed

3.3 - Dense Class¶

Implement the weight initialization in the __init__ method.

  • Weights are initialized with a seeded random generator.
  • The shape of the weights (num_rows, num_cols) should equal the number of columns in the input data (its last dimension) and the number of units, respectively.
    • The number of rows in the weight matrix should equal the number of columns in the input data x. Since x may have 2 dimensions if it represents a single training example (row, col), or three dimensions (batch_size, row, col), get the last dimension from the tuple that holds the dimensions of x.
    • The number of columns in the weight matrix is the number of units chosen for that dense layer.
  • The values generated should have a mean of 0 and standard deviation of stdev.
    • To initialize random weights, a random generator is created using random_generator = np.random.default_rng(seed=random_seed). This part is implemented for you. You will use random_generator.normal(...) to create your random weights. Check the NumPy documentation to see how the random generator works.
    • Please don't change the random_seed, so that the results are reproducible for testing (and you can be fairly graded).
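For reference, here is a standalone illustration (separate from the graded code) of drawing normally distributed values with NumPy's generator; the sizes here are arbitrary:

rng = np.random.default_rng(seed=42)
sample = rng.normal(loc=0.0, scale=0.1, size=(3, 10))  # mean 0, standard deviation 0.1
print(sample.shape)  # (3, 10)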

Implement the forward function of the Dense class.

  • The forward function multiplies the input to the layer (x) by the weight matrix (W)

$$\mathrm{forward}(\mathbf{x},\mathbf{W}) = \mathbf{xW} $$

  • You can use numpy.dot to perform the matrix multiplication.
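To make the shape bookkeeping concrete, here is a standalone check with illustrative sizes:

x = np.ones((1, 3))        # one example with 3 input features
W = np.ones((3, 10))       # weights of shape (input_features, n_units)
print(np.dot(x, W).shape)  # (1, 10): one example, 10 units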

Exercise 6 - Dense¶

Implement the Dense class. You might want to check the NumPy docs for how to generate normally distributed random numbers.

In [35]:
# GRADED CLASS: Dense
class Dense():
    """
    A dense (fully-connected) layer.
    """

    # Please implement '__init__'
    def __init__(self, n_units, input_shape, activation, stdev=0.1, random_seed=42):
        
        # Set the number of units in this layer
        self.n_units = n_units
        # Set the random key for initializing weights
        self.random_generator = np.random.default_rng(seed=random_seed)
        self.activation = activation
        
        ### START CODE HERE ###
        # The number of input features is the last dimension of the input shape
        input_features = input_shape[-1]
        # Generate the weight matrix from a normal distribution with a mean of 0 and standard deviation of 'stdev'
        # The size of the matrix w should be (input_features, n_units)
        w = self.random_generator.normal(scale=stdev, size=(input_features, n_units))
        
        ### END CODE HERE ###

        self.weights = w
        

    def __call__(self, x):
        return self.forward(x)
    
    
    # Please implement 'forward()'
    def forward(self, x):
        
        ### START CODE HERE ###

        # Matrix multiply x and the weight matrix
        dense = np.dot(x, self.weights)
        # Apply the activation function
        dense = self.activation(dense)
        
        ### END CODE HERE ###
        return dense
In [36]:
z = np.array([[2.0, 7.0, 25.0]]) # input array

# Testing your Dense layer
dense_layer = Dense(n_units=10, input_shape=z.shape, activation=relu)  # sets the number of units in the dense layer

print("Weights are:\n", dense_layer.weights)  # prints the randomly generated weights
print("Forward function output is:", dense_layer(z))  # prints the activated product of the input and the weights
Weights are:
 [[ 0.03047171 -0.10399841  0.07504512  0.09405647 -0.19510352 -0.13021795
   0.01278404 -0.03162426 -0.00168012 -0.08530439]
 [ 0.0879398   0.07777919  0.00660307  0.11272412  0.04675093 -0.08592925
   0.03687508 -0.09588826  0.08784503 -0.00499259]
 [-0.01848624 -0.06809295  0.12225413 -0.01545295 -0.04283278 -0.03521336
   0.05323092  0.03654441  0.04127326  0.0430821 ]]
Forward function output is: [[0.21436609 0.         3.25266507 0.59085808 0.         0.
  1.61446659 0.17914382 1.64338651 0.87149558]]

Expected Output:

Weights are:
 [[ 0.03047171 -0.10399841  0.07504512  0.09405647 -0.19510352 -0.13021795
   0.01278404 -0.03162426 -0.00168012 -0.08530439]
 [ 0.0879398   0.07777919  0.00660307  0.11272412  0.04675093 -0.08592925
   0.03687508 -0.09588826  0.08784503 -0.00499259]
 [-0.01848624 -0.06809295  0.12225413 -0.01545295 -0.04283278 -0.03521336
   0.05323092  0.03654441  0.04127326  0.0430821 ]]

Forward function output is: [[0.21436609 0.         3.25266507 0.59085808 0.         0.
  1.61446659 0.17914382 1.64338651 0.87149558]]

Test the Dense class

In [37]:
# Test your Dense class
w1_unittest.test_Dense(Dense)
 All tests passed

3.4 - Model¶

Now you will implement a classifier using neural networks. Here is the model architecture you will be implementing.

[Figure: model architecture (Embedding → GlobalAveragePooling1D → Dense)]

For the model implementation, you will use the TensorFlow module, imported as tf. Your model will consist of the same kinds of layers and activation functions you implemented above, but this time you will take them directly from the TensorFlow library.

You will use the tf.keras.Sequential module, which allows you to stack the layers in a sequence as you want them in the model. You will use the following layers:

  • tf.keras.layers.Embedding
    • Turns positive integers (word indices) into vectors of fixed size. You can imagine it as creating one-hot vectors out of indices and then running them through a fully-connected (dense) layer.
  • tf.keras.layers.GlobalAveragePooling1D
    • Averages the embedding vectors over the sequence dimension, producing one fixed-size vector per tweet
  • tf.keras.layers.Dense
    • Regular fully connected layer

Please use the help function to view documentation for each layer.
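To make the stacking concrete, here is a minimal sketch of how these three layers could be combined. This is not the graded create_model solution: the embedding dimension of 16 is an illustrative choice, and 9535 is simply the vocabulary size found in section 2.2.

example_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=9535, output_dim=16),  # word IDs -> 16-dim vectors
    tf.keras.layers.GlobalAveragePooling1D(),                  # average the word vectors per tweet
    tf.keras.layers.Dense(1, activation='sigmoid')             # single sigmoid unit for binary sentiment
])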

In [38]:
# View documentation on how to implement the layers in tf.
help(tf.keras.Sequential)
help(tf.keras.layers.Embedding)
help(tf.keras.layers.GlobalAveragePooling1D)
help(tf.keras.layers.Dense)
Help on class Sequential in module keras.src.engine.sequential:

class Sequential(keras.src.engine.functional.Functional)
 |  Sequential(*args, **kwargs)
 |  
 |  `Sequential` groups a linear stack of layers into a `tf.keras.Model`.
 |  
 |  `Sequential` provides training and inference features on this model.
 |  
 |  Examples:
 |  
 |  ```python
 |  # Optionally, the first layer can receive an `input_shape` argument:
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
 |  # Afterwards, we do automatic shape inference:
 |  model.add(tf.keras.layers.Dense(4))
 |  
 |  # This is identical to the following:
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.Input(shape=(16,)))
 |  model.add(tf.keras.layers.Dense(8))
 |  
 |  # Note that you can also omit the `input_shape` argument.
 |  # In that case the model doesn't have any weights until the first call
 |  # to a training/evaluation method (since it isn't yet built):
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.layers.Dense(8))
 |  model.add(tf.keras.layers.Dense(4))
 |  # model.weights not created yet
 |  
 |  # Whereas if you specify the input shape, the model gets built
 |  # continuously as you are adding layers:
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
 |  model.add(tf.keras.layers.Dense(4))
 |  len(model.weights)
 |  # Returns "4"
 |  
 |  # When using the delayed-build pattern (no input shape specified), you can
 |  # choose to manually build your model by calling
 |  # `build(batch_input_shape)`:
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.layers.Dense(8))
 |  model.add(tf.keras.layers.Dense(4))
 |  model.build((None, 16))
 |  len(model.weights)
 |  # Returns "4"
 |  
 |  # Note that when using the delayed-build pattern (no input shape specified),
 |  # the model gets built the first time you call `fit`, `eval`, or `predict`,
 |  # or the first time you call the model on some input data.
 |  model = tf.keras.Sequential()
 |  model.add(tf.keras.layers.Dense(8))
 |  model.add(tf.keras.layers.Dense(1))
 |  model.compile(optimizer='sgd', loss='mse')
 |  # This builds the model for the first time:
 |  model.fit(x, y, batch_size=32, epochs=10)
 |  ```
 |  
 |  Method resolution order:
 |      Sequential
 |      keras.src.engine.functional.Functional
 |      keras.src.engine.training.Model
 |      keras.src.engine.base_layer.Layer
 |      tensorflow.python.module.module.Module
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.utils.version_utils.LayerVersionSelector
 |      keras.src.utils.version_utils.ModelVersionSelector
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, layers=None, name=None)
 |      Creates a `Sequential` model instance.
 |      
 |      Args:
 |        layers: Optional list of layers to add to the model.
 |        name: Optional name for the model.
 |  
 |  add(self, layer)
 |      Adds a layer instance on top of the layer stack.
 |      
 |      Args:
 |          layer: layer instance.
 |      
 |      Raises:
 |          TypeError: If `layer` is not a layer instance.
 |          ValueError: In case the `layer` argument does not
 |              know its input shape.
 |          ValueError: In case the `layer` argument has
 |              multiple output tensors, or is already connected
 |              somewhere else (forbidden in `Sequential` models).
 |  
 |  build(self, input_shape=None)
 |      Builds the model based on input shapes received.
 |      
 |      This is to be used for subclassed models, which do not know at
 |      instantiation time what their inputs look like.
 |      
 |      This method only exists for users who want to call `model.build()` in a
 |      standalone way (as a substitute for calling the model on real data to
 |      build it). It will never be called by the framework (and thus it will
 |      never throw unexpected errors in an unrelated workflow).
 |      
 |      Args:
 |       input_shape: Single tuple, `TensorShape` instance, or list/dict of
 |         shapes, where shapes are tuples, integers, or `TensorShape`
 |         instances.
 |      
 |      Raises:
 |        ValueError:
 |          1. In case of invalid user-provided data (not of type tuple,
 |             list, `TensorShape`, or dict).
 |          2. If the model requires call arguments that are agnostic
 |             to the input shapes (positional or keyword arg in call
 |             signature).
 |          3. If not all layers were properly built.
 |          4. If float type inputs are not supported within the layers.
 |      
 |        In each of these cases, the user should build their model by calling
 |        it on real tensor data.
 |  
 |  call(self, inputs, training=None, mask=None)
 |      Calls the model on new inputs.
 |      
 |      In this case `call` just reapplies
 |      all ops in the graph to the new inputs
 |      (e.g. build a new computational graph from the provided inputs).
 |      
 |      Args:
 |          inputs: A tensor or list of tensors.
 |          training: Boolean or boolean scalar tensor, indicating whether to
 |              run the `Network` in training mode or inference mode.
 |          mask: A mask or list of masks. A mask can be
 |              either a tensor or None (no mask).
 |      
 |      Returns:
 |          A tensor if there is a single output, or
 |          a list of tensors if there are more than one outputs.
 |  
 |  compute_mask(self, inputs, mask)
 |      Computes an output mask tensor.
 |      
 |      Args:
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      Returns:
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      This method will cause the layer's state to be built, if that has not
 |      happened before. This requires that the layer will later be used with
 |      inputs that match the input shape provided here.
 |      
 |      Args:
 |          input_shape: Shape tuple (tuple of integers) or `tf.TensorShape`,
 |              or structure of shape tuples / `tf.TensorShape` instances
 |              (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      Returns:
 |          A `tf.TensorShape` instance
 |          or structure of `tf.TensorShape` instances.
 |  
 |  get_config(self)
 |      Returns the config of the `Model`.
 |      
 |      Config is a Python dictionary (serializable) containing the
 |      configuration of an object, which in this case is a `Model`. This allows
 |      the `Model` to be reinstantiated later (without its trained weights)
 |      from this configuration.
 |      
 |      Note that `get_config()` does not guarantee to return a fresh copy of
 |      dict every time it is called. The callers should make a copy of the
 |      returned dict if they want to modify it.
 |      
 |      Developers of subclassed `Model` are advised to override this method,
 |      and continue to update the dict from `super(MyModel, self).get_config()`
 |      to provide the proper configuration of this `Model`. The default config
 |      will return config dict for init parameters if they are basic types.
 |      Raises `NotImplementedError` when in cases where a custom
 |      `get_config()` implementation is required for the subclassed model.
 |      
 |      Returns:
 |          Python dictionary containing the configuration of this `Model`.
 |  
 |  pop(self)
 |      Removes the last layer in the model.
 |      
 |      Raises:
 |          TypeError: if there are no layers in the model.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |  
 |  from_config(config, custom_objects=None) from builtins.type
 |      Creates a layer from its config.
 |      
 |      This method is the reverse of `get_config`,
 |      capable of instantiating the same layer from the config
 |      dictionary. It does not handle layer connectivity
 |      (handled by Network), nor weights (handled by `set_weights`).
 |      
 |      Args:
 |          config: A Python dictionary, typically the
 |              output of get_config.
 |      
 |      Returns:
 |          A layer instance.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties defined here:
 |  
 |  layers
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  input_spec
 |      `InputSpec` instance(s) describing the input format for this layer.
 |      
 |      When you create a layer subclass, you can set `self.input_spec` to
 |      enable the layer to run input compatibility checks when it is called.
 |      Consider a `Conv2D` layer: it can only be called on a single input
 |      tensor of rank 4. As such, you can set, in `__init__()`:
 |      
 |      ```python
 |      self.input_spec = tf.keras.layers.InputSpec(ndim=4)
 |      ```
 |      
 |      Now, if you try to call the layer on an input that isn't rank 4
 |      (for instance, an input of shape `(2,)`), it will raise a
 |      nicely-formatted error:
 |      
 |      ```
 |      ValueError: Input 0 of layer conv2d is incompatible with the layer:
 |      expected ndim=4, found ndim=1. Full shape received: [2]
 |      ```
 |      
 |      Input checks that can be specified via `input_spec` include:
 |      - Structure (e.g. a single input, a list of 2 inputs, etc)
 |      - Shape
 |      - Rank (ndim)
 |      - Dtype
 |      
 |      For more information, see `tf.keras.layers.InputSpec`.
 |      
 |      Returns:
 |        A `tf.keras.layers.InputSpec` instance, or nested structure thereof.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.engine.functional.Functional:
 |  
 |  get_weight_paths(self)
 |      Retrieve all the variables and their paths for the model.
 |      
 |      The variable path (string) is a stable key to identify a `tf.Variable`
 |      instance owned by the model. It can be used to specify variable-specific
 |      configurations (e.g. DTensor, quantization) from a global view.
 |      
 |      This method returns a dict with weight object paths as keys
 |      and the corresponding `tf.Variable` instances as values.
 |      
 |      Note that if the model is a subclassed model and the weights haven't
 |      been initialized, an empty dict will be returned.
 |      
 |      Returns:
 |          A dict where keys are variable paths and values are `tf.Variable`
 |           instances.
 |      
 |      Example:
 |      
 |      ```python
 |      class SubclassModel(tf.keras.Model):
 |      
 |        def __init__(self, name=None):
 |          super().__init__(name=name)
 |          self.d1 = tf.keras.layers.Dense(10)
 |          self.d2 = tf.keras.layers.Dense(20)
 |      
 |        def call(self, inputs):
 |          x = self.d1(inputs)
 |          return self.d2(x)
 |      
 |      model = SubclassModel()
 |      model(tf.zeros((10, 10)))
 |      weight_paths = model.get_weight_paths()
 |      # weight_paths:
 |      # {
 |      #    'd1.kernel': model.d1.kernel,
 |      #    'd1.bias': model.d1.bias,
 |      #    'd2.kernel': model.d2.kernel,
 |      #    'd2.bias': model.d2.bias,
 |      # }
 |      
 |      # Functional model
 |      inputs = tf.keras.Input((10,), batch_size=10)
 |      x = tf.keras.layers.Dense(20, name='d1')(inputs)
 |      output = tf.keras.layers.Dense(30, name='d2')(x)
 |      model = tf.keras.Model(inputs, output)
 |      d1 = model.layers[1]
 |      d2 = model.layers[2]
 |      weight_paths = model.get_weight_paths()
 |      # weight_paths:
 |      # {
 |      #    'd1.kernel': d1.kernel,
 |      #    'd1.bias': d1.bias,
 |      #    'd2.kernel': d2.kernel,
 |      #    'd2.bias': d2.bias,
 |      # }
 |      ```
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.functional.Functional:
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input tensor or list of input tensors.
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |        AttributeError: If no inbound nodes are found.
 |  
 |  input_shape
 |      Retrieves the input shape(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer, or if all inputs
 |      have the same shape.
 |      
 |      Returns:
 |          Input shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per input tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined input_shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one output,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |        Output tensor or list of output tensors.
 |      
 |      Raises:
 |        AttributeError: if the layer is connected to more than one incoming
 |          layers.
 |        RuntimeError: if called in Eager mode.
 |  
 |  output_shape
 |      Retrieves the output shape(s) of a layer.
 |      
 |      Only applicable if the layer has one output,
 |      or if all outputs have the same shape.
 |      
 |      Returns:
 |          Output shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per output tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined output shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.engine.training.Model:
 |  
 |  __call__(self, *args, **kwargs)
 |  
 |  __copy__(self)
 |  
 |  __deepcopy__(self, memo)
 |  
 |  __reduce__(self)
 |      Helper for pickle.
 |  
 |  __setattr__(self, name, value)
 |      Support self.foo = trackable syntax.
 |  
 |  compile(self, optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, jit_compile=None, pss_evaluation_shards=0, **kwargs)
 |      Configures the model for training.
 |      
 |      Example:
 |      
 |      ```python
 |      model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
 |                    loss=tf.keras.losses.BinaryCrossentropy(),
 |                    metrics=[tf.keras.metrics.BinaryAccuracy(),
 |                             tf.keras.metrics.FalseNegatives()])
 |      ```
 |      
 |      Args:
 |          optimizer: String (name of optimizer) or optimizer instance. See
 |            `tf.keras.optimizers`.
 |          loss: Loss function. May be a string (name of loss function), or
 |            a `tf.keras.losses.Loss` instance. See `tf.keras.losses`. A loss
 |            function is any callable with the signature `loss = fn(y_true,
 |            y_pred)`, where `y_true` are the ground truth values, and
 |            `y_pred` are the model's predictions.
 |            `y_true` should have shape
 |            `(batch_size, d0, .. dN)` (except in the case of
 |            sparse loss functions such as
 |            sparse categorical crossentropy which expects integer arrays of
 |            shape `(batch_size, d0, .. dN-1)`).
 |            `y_pred` should have shape `(batch_size, d0, .. dN)`.
 |            The loss function should return a float tensor.
 |            If a custom `Loss` instance is
 |            used and reduction is set to `None`, return value has shape
 |            `(batch_size, d0, .. dN-1)` i.e. per-sample or per-timestep loss
 |            values; otherwise, it is a scalar. If the model has multiple
 |            outputs, you can use a different loss on each output by passing a
 |            dictionary or a list of losses. The loss value that will be
 |            minimized by the model will then be the sum of all individual
 |            losses, unless `loss_weights` is specified.
 |          metrics: List of metrics to be evaluated by the model during
 |            training and testing. Each of this can be a string (name of a
 |            built-in function), function or a `tf.keras.metrics.Metric`
 |            instance. See `tf.keras.metrics`. Typically you will use
 |            `metrics=['accuracy']`.
 |            A function is any callable with the signature `result = fn(y_true,
 |            y_pred)`. To specify different metrics for different outputs of a
 |            multi-output model, you could also pass a dictionary, such as
 |            `metrics={'output_a':'accuracy', 'output_b':['accuracy', 'mse']}`.
 |            You can also pass a list to specify a metric or a list of metrics
 |            for each output, such as
 |            `metrics=[['accuracy'], ['accuracy', 'mse']]`
 |            or `metrics=['accuracy', ['accuracy', 'mse']]`. When you pass the
 |            strings 'accuracy' or 'acc', we convert this to one of
 |            `tf.keras.metrics.BinaryAccuracy`,
 |            `tf.keras.metrics.CategoricalAccuracy`,
 |            `tf.keras.metrics.SparseCategoricalAccuracy` based on the shapes
 |            of the targets and of the model output. We do a similar
 |            conversion for the strings 'crossentropy' and 'ce' as well.
 |            The metrics passed here are evaluated without sample weighting; if
 |            you would like sample weighting to apply, you can specify your
 |            metrics via the `weighted_metrics` argument instead.
 |          loss_weights: Optional list or dictionary specifying scalar
 |            coefficients (Python floats) to weight the loss contributions of
 |            different model outputs. The loss value that will be minimized by
 |            the model will then be the *weighted sum* of all individual
 |            losses, weighted by the `loss_weights` coefficients.  If a list,
 |            it is expected to have a 1:1 mapping to the model's outputs. If a
 |            dict, it is expected to map output names (strings) to scalar
 |            coefficients.
 |          weighted_metrics: List of metrics to be evaluated and weighted by
 |            `sample_weight` or `class_weight` during training and testing.
 |          run_eagerly: Bool. If `True`, this `Model`'s logic will not be
 |            wrapped in a `tf.function`. Recommended to leave this as `None`
 |            unless your `Model` cannot be run inside a `tf.function`.
 |            `run_eagerly=True` is not supported when using
 |            `tf.distribute.experimental.ParameterServerStrategy`. Defaults to
 |             `False`.
 |          steps_per_execution: Int. The number of batches to
 |            run during each `tf.function` call. Running multiple batches
 |            inside a single `tf.function` call can greatly improve performance
 |            on TPUs or small models with a large Python overhead. At most, one
 |            full epoch will be run each execution. If a number larger than the
 |            size of the epoch is passed, the execution will be truncated to
 |            the size of the epoch. Note that if `steps_per_execution` is set
 |            to `N`, `Callback.on_batch_begin` and `Callback.on_batch_end`
 |            methods will only be called every `N` batches (i.e. before/after
 |            each `tf.function` execution). Defaults to `1`.
 |          jit_compile: If `True`, compile the model training step with XLA.
 |            [XLA](https://www.tensorflow.org/xla) is an optimizing compiler
 |            for machine learning.
 |            `jit_compile` is not enabled by default.
 |            Note that `jit_compile=True`
 |            may not necessarily work for all models.
 |            For more information on supported operations please refer to the
 |            [XLA documentation](https://www.tensorflow.org/xla).
 |            Also refer to
 |            [known XLA issues](https://www.tensorflow.org/xla/known_issues)
 |            for more details.
 |          pss_evaluation_shards: Integer or 'auto'. Used for
 |            `tf.distribute.ParameterServerStrategy` training only. This arg
 |            sets the number of shards to split the dataset into, to enable an
 |            exact visitation guarantee for evaluation, meaning the model will
 |            be applied to each dataset element exactly once, even if workers
 |            fail. The dataset must be sharded to ensure separate workers do
 |            not process the same data. The number of shards should be at least
 |            the number of workers for good performance. A value of 'auto'
 |            turns on exact evaluation and uses a heuristic for the number of
 |            shards based on the number of workers. A value of 0 means no
 |            visitation guarantee is provided. NOTE: Custom implementations of
 |            `Model.test_step` will be ignored when doing exact evaluation.
 |            Defaults to `0`.
 |          **kwargs: Arguments supported for backwards compatibility only.
 |  
 |  compile_from_config(self, config)
 |      Compiles the model with the information given in config.
 |      
 |      This method uses the information in the config (optimizer, loss,
 |      metrics, etc.) to compile the model.
 |      
 |      Args:
 |          config: Dict containing information for compiling the model.
 |  
 |  compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None)
 |      Compute the total loss, validate it, and return it.
 |      
 |      Subclasses can optionally override this method to provide custom loss
 |      computation logic.
 |      
 |      Example:
 |      ```python
 |      class MyModel(tf.keras.Model):
 |      
 |        def __init__(self, *args, **kwargs):
 |          super(MyModel, self).__init__(*args, **kwargs)
 |          self.loss_tracker = tf.keras.metrics.Mean(name='loss')
 |      
 |        def compute_loss(self, x, y, y_pred, sample_weight):
 |          loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
 |          loss += tf.add_n(self.losses)
 |          self.loss_tracker.update_state(loss)
 |          return loss
 |      
 |        def reset_metrics(self):
 |          self.loss_tracker.reset_states()
 |      
 |        @property
 |        def metrics(self):
 |          return [self.loss_tracker]
 |      
 |      tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
 |      dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)
 |      
 |      inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
 |      outputs = tf.keras.layers.Dense(10)(inputs)
 |      model = MyModel(inputs, outputs)
 |      model.add_loss(tf.reduce_sum(outputs))
 |      
 |      optimizer = tf.keras.optimizers.SGD()
 |      model.compile(optimizer, loss='mse', steps_per_execution=10)
 |      model.fit(dataset, epochs=2, steps_per_epoch=10)
 |      print('My custom loss: ', model.loss_tracker.result().numpy())
 |      ```
 |      
 |      Args:
 |        x: Input data.
 |        y: Target data.
 |        y_pred: Predictions returned by the model (output of `model(x)`)
 |        sample_weight: Sample weights for weighting the loss function.
 |      
 |      Returns:
 |        The total loss as a `tf.Tensor`, or `None` if no loss results (which
 |        is the case when called by `Model.test_step`).
 |  
 |  compute_metrics(self, x, y, y_pred, sample_weight)
 |      Update metric states and collect all metrics to be returned.
 |      
 |      Subclasses can optionally override this method to provide custom metric
 |      updating and collection logic.
 |      
 |      Example:
 |      ```python
 |      class MyModel(tf.keras.Sequential):
 |      
 |        def compute_metrics(self, x, y, y_pred, sample_weight):
 |      
 |          # This super call updates `self.compiled_metrics` and returns
 |          # results for all metrics listed in `self.metrics`.
 |          metric_results = super(MyModel, self).compute_metrics(
 |              x, y, y_pred, sample_weight)
 |      
 |          # Note that `self.custom_metric` is not listed in `self.metrics`.
 |          self.custom_metric.update_state(x, y, y_pred, sample_weight)
 |          metric_results['custom_metric_name'] = self.custom_metric.result()
 |          return metric_results
 |      ```
 |      
 |      Args:
 |        x: Input data.
 |        y: Target data.
 |        y_pred: Predictions returned by the model (output of `model.call(x)`)
 |        sample_weight: Sample weights for weighting the loss function.
 |      
 |      Returns:
 |        A `dict` containing values that will be passed to
 |        `tf.keras.callbacks.CallbackList.on_train_batch_end()`. Typically, the
 |        values of the metrics listed in `self.metrics` are returned. Example:
 |        `{'loss': 0.2, 'accuracy': 0.7}`.
 |  
 |  evaluate(self, x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)
 |      Returns the loss value & metrics values for the model in test mode.
 |      
 |      Computation is done in batches (see the `batch_size` arg.)
 |      
 |      Args:
 |          x: Input data. It could be:
 |            - A Numpy array (or array-like), or a list of arrays
 |              (in case the model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors
 |              (in case the model has multiple inputs).
 |            - A dict mapping input names to the corresponding array/tensors,
 |              if the model has named inputs.
 |            - A `tf.data` dataset. Should return a tuple
 |              of either `(inputs, targets)` or
 |              `(inputs, targets, sample_weights)`.
 |            - A generator or `keras.utils.Sequence` returning `(inputs,
 |              targets)` or `(inputs, targets, sample_weights)`.
 |            A more detailed description of unpacking behavior for iterator
 |            types (Dataset, generator, Sequence) is given in the `Unpacking
 |            behavior for iterator-like inputs` section of `Model.fit`.
 |          y: Target data. Like the input data `x`, it could be either Numpy
 |            array(s) or TensorFlow tensor(s). It should be consistent with `x`
 |            (you cannot have Numpy inputs and tensor targets, or inversely).
 |            If `x` is a dataset, generator or `keras.utils.Sequence` instance,
 |            `y` should not be specified (since targets will be obtained from
 |            the iterator/dataset).
 |          batch_size: Integer or `None`. Number of samples per batch of
 |            computation. If unspecified, `batch_size` will default to 32. Do
 |            not specify the `batch_size` if your data is in the form of a
 |            dataset, generators, or `keras.utils.Sequence` instances (since
 |            they generate batches).
 |          verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
 |              0 = silent, 1 = progress bar, 2 = single line.
 |              `"auto"` becomes 1 for most cases, and to 2 when used with
 |              `ParameterServerStrategy`. Note that the progress bar is not
 |              particularly useful when logged to a file, so `verbose=2` is
 |              recommended when not running interactively (e.g. in a production
 |              environment). Defaults to 'auto'.
 |          sample_weight: Optional Numpy array of weights for the test samples,
 |            used for weighting the loss function. You can either pass a flat
 |            (1D) Numpy array with the same length as the input samples
 |              (1:1 mapping between weights and samples), or in the case of
 |                temporal data, you can pass a 2D array with shape `(samples,
 |                sequence_length)`, to apply a different weight to every
 |                timestep of every sample. This argument is not supported when
 |                `x` is a dataset, instead pass sample weights as the third
 |                element of `x`.
 |          steps: Integer or `None`. Total number of steps (batches of samples)
 |            before declaring the evaluation round finished. Ignored with the
 |            default value of `None`. If x is a `tf.data` dataset and `steps`
 |            is None, 'evaluate' will run until the dataset is exhausted. This
 |            argument is not supported with array inputs.
 |          callbacks: List of `keras.callbacks.Callback` instances. List of
 |            callbacks to apply during evaluation. See
 |            [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).
 |          max_queue_size: Integer. Used for generator or
 |            `keras.utils.Sequence` input only. Maximum size for the generator
 |            queue. If unspecified, `max_queue_size` will default to 10.
 |          workers: Integer. Used for generator or `keras.utils.Sequence` input
 |            only. Maximum number of processes to spin up when using
 |            process-based threading. If unspecified, `workers` will default to
 |            1.
 |          use_multiprocessing: Boolean. Used for generator or
 |            `keras.utils.Sequence` input only. If `True`, use process-based
 |            threading. If unspecified, `use_multiprocessing` will default to
 |            `False`. Note that because this implementation relies on
 |            multiprocessing, you should not pass non-picklable arguments to
 |            the generator as they can't be passed easily to children
 |            processes.
 |          return_dict: If `True`, loss and metric results are returned as a
 |            dict, with each key being the name of the metric. If `False`, they
 |            are returned as a list.
 |          **kwargs: Unused at this time.
 |      
 |      See the discussion of `Unpacking behavior for iterator-like inputs` for
 |      `Model.fit`.
 |      
 |      Returns:
 |          Scalar test loss (if the model has a single output and no metrics)
 |          or list of scalars (if the model has multiple outputs
 |          and/or metrics). The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      Raises:
 |          RuntimeError: If `model.evaluate` is wrapped in a `tf.function`.
 |  
 |  evaluate_generator(self, generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
 |      Evaluates the model on a data generator.
 |      
 |      DEPRECATED:
 |        `Model.evaluate` now supports generators, so there is no longer any
 |        need to use this endpoint.
 |  
 |  export(self, filepath)
 |      Create a SavedModel artifact for inference (e.g. via TF-Serving).
 |      
 |      This method lets you export a model to a lightweight SavedModel artifact
 |      that contains the model's forward pass only (its `call()` method)
 |      and can be served via e.g. TF-Serving. The forward pass is registered
 |      under the name `serve()` (see example below).
 |      
 |      The original code of the model (including any custom layers you may
 |      have used) is *no longer* necessary to reload the artifact -- it is
 |      entirely standalone.
 |      
 |      Args:
 |          filepath: `str` or `pathlib.Path` object. Path where to save
 |              the artifact.
 |      
 |      Example:
 |      
 |      ```python
 |      # Create the artifact
 |      model.export("path/to/location")
 |      
 |      # Later, in a different process / environment...
 |      reloaded_artifact = tf.saved_model.load("path/to/location")
 |      predictions = reloaded_artifact.serve(input_data)
 |      ```
 |      
 |      If you would like to customize your serving endpoints, you can
 |      use the lower-level `keras.export.ExportArchive` class. The `export()`
 |      method relies on `ExportArchive` internally.
 |  
 |  fit(self, x=None, y=None, batch_size=None, epochs=1, verbose='auto', callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False)
 |      Trains the model for a fixed number of epochs (dataset iterations).
 |      
 |      Args:
 |          x: Input data. It could be:
 |            - A Numpy array (or array-like), or a list of arrays
 |              (in case the model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors
 |              (in case the model has multiple inputs).
 |            - A dict mapping input names to the corresponding array/tensors,
 |              if the model has named inputs.
 |            - A `tf.data` dataset. Should return a tuple
 |              of either `(inputs, targets)` or
 |              `(inputs, targets, sample_weights)`.
 |            - A generator or `keras.utils.Sequence` returning `(inputs,
 |              targets)` or `(inputs, targets, sample_weights)`.
 |            - A `tf.keras.utils.experimental.DatasetCreator`, which wraps a
 |              callable that takes a single argument of type
 |              `tf.distribute.InputContext`, and returns a `tf.data.Dataset`.
 |              `DatasetCreator` should be used when users prefer to specify the
 |              per-replica batching and sharding logic for the `Dataset`.
 |              See `tf.keras.utils.experimental.DatasetCreator` doc for more
 |              information.
 |            A more detailed description of unpacking behavior for iterator
 |            types (Dataset, generator, Sequence) is given below. If these
 |            include `sample_weights` as a third component, note that sample
 |            weighting applies to the `weighted_metrics` argument but not the
 |            `metrics` argument in `compile()`. If using
 |            `tf.distribute.experimental.ParameterServerStrategy`, only
 |            `DatasetCreator` type is supported for `x`.
 |          y: Target data. Like the input data `x`,
 |            it could be either Numpy array(s) or TensorFlow tensor(s).
 |            It should be consistent with `x` (you cannot have Numpy inputs and
 |            tensor targets, or inversely). If `x` is a dataset, generator,
 |            or `keras.utils.Sequence` instance, `y` should
 |            not be specified (since targets will be obtained from `x`).
 |          batch_size: Integer or `None`.
 |              Number of samples per gradient update.
 |              If unspecified, `batch_size` will default to 32.
 |              Do not specify the `batch_size` if your data is in the
 |              form of datasets, generators, or `keras.utils.Sequence`
 |              instances (since they generate batches).
 |          epochs: Integer. Number of epochs to train the model.
 |              An epoch is an iteration over the entire `x` and `y`
 |              data provided
 |              (unless the `steps_per_epoch` flag is set to
 |              something other than None).
 |              Note that in conjunction with `initial_epoch`,
 |              `epochs` is to be understood as "final epoch".
 |              The model is not trained for a number of iterations
 |              given by `epochs`, but merely until the epoch
 |              of index `epochs` is reached.
 |          verbose: 'auto', 0, 1, or 2. Verbosity mode.
 |              0 = silent, 1 = progress bar, 2 = one line per epoch.
 |              'auto' becomes 1 for most cases, but 2 when used with
 |              `ParameterServerStrategy`. Note that the progress bar is not
 |              particularly useful when logged to a file, so verbose=2 is
 |              recommended when not running interactively (eg, in a production
 |              environment). Defaults to 'auto'.
 |          callbacks: List of `keras.callbacks.Callback` instances.
 |              List of callbacks to apply during training.
 |              See `tf.keras.callbacks`. Note
 |              `tf.keras.callbacks.ProgbarLogger` and
 |              `tf.keras.callbacks.History` callbacks are created automatically
 |              and need not be passed into `model.fit`.
 |              `tf.keras.callbacks.ProgbarLogger` is created or not based on
 |              `verbose` argument to `model.fit`.
 |              Callbacks with batch-level calls are currently unsupported with
 |              `tf.distribute.experimental.ParameterServerStrategy`, and users
 |              are advised to implement epoch-level calls instead with an
 |              appropriate `steps_per_epoch` value.
 |          validation_split: Float between 0 and 1.
 |              Fraction of the training data to be used as validation data.
 |              The model will set apart this fraction of the training data,
 |              will not train on it, and will evaluate
 |              the loss and any model metrics
 |              on this data at the end of each epoch.
 |              The validation data is selected from the last samples
 |              in the `x` and `y` data provided, before shuffling. This
 |              argument is not supported when `x` is a dataset, generator or
 |              `keras.utils.Sequence` instance.
 |              If both `validation_data` and `validation_split` are provided,
 |              `validation_data` will override `validation_split`.
 |              `validation_split` is not yet supported with
 |              `tf.distribute.experimental.ParameterServerStrategy`.
 |          validation_data: Data on which to evaluate
 |              the loss and any model metrics at the end of each epoch.
 |              The model will not be trained on this data. Thus, note the fact
 |              that the validation loss of data provided using
 |              `validation_split` or `validation_data` is not affected by
 |              regularization layers like noise and dropout.
 |              `validation_data` will override `validation_split`.
 |              `validation_data` could be:
 |                - A tuple `(x_val, y_val)` of Numpy arrays or tensors.
 |                - A tuple `(x_val, y_val, val_sample_weights)` of NumPy
 |                  arrays.
 |                - A `tf.data.Dataset`.
 |                - A Python generator or `keras.utils.Sequence` returning
 |                `(inputs, targets)` or `(inputs, targets, sample_weights)`.
 |              `validation_data` is not yet supported with
 |              `tf.distribute.experimental.ParameterServerStrategy`.
 |          shuffle: Boolean (whether to shuffle the training data
 |              before each epoch) or str (for 'batch'). This argument is
 |              ignored when `x` is a generator or an object of tf.data.Dataset.
 |              'batch' is a special option for dealing
 |              with the limitations of HDF5 data; it shuffles in batch-sized
 |              chunks. Has no effect when `steps_per_epoch` is not `None`.
 |          class_weight: Optional dictionary mapping class indices (integers)
 |              to a weight (float) value, used for weighting the loss function
 |              (during training only).
 |              This can be useful to tell the model to
 |              "pay more attention" to samples from
 |              an under-represented class. When `class_weight` is specified
 |              and targets have a rank of 2 or greater, either `y` must be
 |              one-hot encoded, or an explicit final dimension of `1` must
 |              be included for sparse class labels.
 |          sample_weight: Optional Numpy array of weights for
 |              the training samples, used for weighting the loss function
 |              (during training only). You can either pass a flat (1D)
 |              Numpy array with the same length as the input samples
 |              (1:1 mapping between weights and samples),
 |              or in the case of temporal data,
 |              you can pass a 2D array with shape
 |              `(samples, sequence_length)`,
 |              to apply a different weight to every timestep of every sample.
 |              This argument is not supported when `x` is a dataset, generator,
 |              or `keras.utils.Sequence` instance, instead provide the
 |              sample_weights as the third element of `x`.
 |              Note that sample weighting does not apply to metrics specified
 |              via the `metrics` argument in `compile()`. To apply sample
 |              weighting to your metrics, you can specify them via the
 |              `weighted_metrics` in `compile()` instead.
 |          initial_epoch: Integer.
 |              Epoch at which to start training
 |              (useful for resuming a previous training run).
 |          steps_per_epoch: Integer or `None`.
 |              Total number of steps (batches of samples)
 |              before declaring one epoch finished and starting the
 |              next epoch. When training with input tensors such as
 |              TensorFlow data tensors, the default `None` is equal to
 |              the number of samples in your dataset divided by
 |              the batch size, or 1 if that cannot be determined. If x is a
 |              `tf.data` dataset, and 'steps_per_epoch'
 |              is None, the epoch will run until the input dataset is
 |              exhausted.  When passing an infinitely repeating dataset, you
 |              must specify the `steps_per_epoch` argument. If
 |              `steps_per_epoch=-1` the training will run indefinitely with an
 |              infinitely repeating dataset.  This argument is not supported
 |              with array inputs.
 |              When using `tf.distribute.experimental.ParameterServerStrategy`:
 |                * `steps_per_epoch=None` is not supported.
 |          validation_steps: Only relevant if `validation_data` is provided and
 |              is a `tf.data` dataset. Total number of steps (batches of
 |              samples) to draw before stopping when performing validation
 |              at the end of every epoch. If 'validation_steps' is None,
 |              validation will run until the `validation_data` dataset is
 |              exhausted. In the case of an infinitely repeated dataset, it
 |              will run into an infinite loop. If 'validation_steps' is
 |              specified and only part of the dataset will be consumed, the
 |              evaluation will start from the beginning of the dataset at each
 |              epoch. This ensures that the same validation samples are used
 |              every time.
 |          validation_batch_size: Integer or `None`.
 |              Number of samples per validation batch.
 |              If unspecified, will default to `batch_size`.
 |              Do not specify the `validation_batch_size` if your data is in
 |              the form of datasets, generators, or `keras.utils.Sequence`
 |              instances (since they generate batches).
 |          validation_freq: Only relevant if validation data is provided.
 |            Integer or `collections.abc.Container` instance (e.g. list, tuple,
 |            etc.).  If an integer, specifies how many training epochs to run
 |            before a new validation run is performed, e.g. `validation_freq=2`
 |            runs validation every 2 epochs. If a Container, specifies the
 |            epochs on which to run validation, e.g.
 |            `validation_freq=[1, 2, 10]` runs validation at the end of the
 |            1st, 2nd, and 10th epochs.
 |          max_queue_size: Integer. Used for generator or
 |            `keras.utils.Sequence` input only. Maximum size for the generator
 |            queue.  If unspecified, `max_queue_size` will default to 10.
 |          workers: Integer. Used for generator or `keras.utils.Sequence` input
 |              only. Maximum number of processes to spin up
 |              when using process-based threading. If unspecified, `workers`
 |              will default to 1.
 |          use_multiprocessing: Boolean. Used for generator or
 |              `keras.utils.Sequence` input only. If `True`, use process-based
 |              threading. If unspecified, `use_multiprocessing` will default to
 |              `False`. Note that because this implementation relies on
 |              multiprocessing, you should not pass non-picklable arguments to
 |              the generator as they can't be passed easily to children
 |              processes.
 |      
 |      Unpacking behavior for iterator-like inputs:
 |          A common pattern is to pass a tf.data.Dataset, generator, or
 |        tf.keras.utils.Sequence to the `x` argument of fit, which will in fact
 |        yield not only features (x) but optionally targets (y) and sample
 |        weights.  Keras requires that the output of such iterator-likes be
 |        unambiguous. The iterator should return a tuple of length 1, 2, or 3,
 |        where the optional second and third elements will be used for y and
 |        sample_weight respectively. Any other type provided will be wrapped in
 |        a length one tuple, effectively treating everything as 'x'. When
 |        yielding dicts, they should still adhere to the top-level tuple
 |        structure.
 |        e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate
 |        features, targets, and weights from the keys of a single dict.
 |          A notable unsupported data type is the namedtuple. The reason is
 |        that it behaves like both an ordered datatype (tuple) and a mapping
 |        datatype (dict). So given a namedtuple of the form:
 |            `namedtuple("example_tuple", ["y", "x"])`
 |        it is ambiguous whether to reverse the order of the elements when
 |        interpreting the value. Even worse is a tuple of the form:
 |            `namedtuple("other_tuple", ["x", "y", "z"])`
 |        where it is unclear if the tuple was intended to be unpacked into x,
 |        y, and sample_weight or passed through as a single element to `x`. As
 |        a result the data processing code will simply raise a ValueError if it
 |        encounters a namedtuple. (Along with instructions to remedy the
 |        issue.)
 |      
 |      Returns:
 |          A `History` object. Its `History.history` attribute is
 |          a record of training loss values and metrics values
 |          at successive epochs, as well as validation loss values
 |          and validation metrics values (if applicable).
 |      
 |      Raises:
 |          RuntimeError: 1. If the model was never compiled or,
 |          2. If `model.fit` is wrapped in `tf.function`.
 |      
 |          ValueError: In case of mismatch between the provided input data
 |              and what the model expects or when the input data is empty.
 |  
 |  fit_generator(self, generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)
 |      Fits the model on data yielded batch-by-batch by a Python generator.
 |      
 |      DEPRECATED:
 |        `Model.fit` now supports generators, so there is no longer any need to
 |        use this endpoint.
 |  
 |  get_compile_config(self)
 |      Returns a serialized config with information for compiling the model.
 |      
 |      This method returns a config dictionary containing all the information
 |      (optimizer, loss, metrics, etc.) with which the model was compiled.
 |      
 |      Returns:
 |          A dict containing information for compiling the model.
 |  
 |  get_layer(self, name=None, index=None)
 |      Retrieves a layer based on either its name (unique) or index.
 |      
 |      If `name` and `index` are both provided, `index` will take precedence.
 |      Indices are based on order of horizontal graph traversal (bottom-up).
 |      
 |      Args:
 |          name: String, name of layer.
 |          index: Integer, index of layer.
 |      
 |      Returns:
 |          A layer instance.
 |  
 |  get_metrics_result(self)
 |      Returns the model's metrics values as a dict.
 |      
 |      If any of the metric result is a dict (containing multiple metrics),
 |      each of them gets added to the top level returned dict of this method.
 |      
 |      Returns:
 |        A `dict` containing values of the metrics listed in `self.metrics`.
 |        Example:
 |        `{'loss': 0.2, 'accuracy': 0.7}`.
 |  
 |  get_weights(self)
 |      Retrieves the weights of the model.
 |      
 |      Returns:
 |          A flat list of Numpy arrays.
 |  
 |  load_weights(self, filepath, skip_mismatch=False, by_name=False, options=None)
 |      Loads all layer weights from a saved file.
 |      
 |      The saved file could be a SavedModel file, a `.keras` file (v3 saving
 |      format), or a file created via `model.save_weights()`.
 |      
 |      By default, weights are loaded based on the network's
 |      topology. This means the architecture should be the same as when the
 |      weights were saved. Note that layers that don't have weights are not
 |      taken into account in the topological ordering, so adding or removing
 |      layers is fine as long as they don't have weights.
 |      
 |      **Partial weight loading**
 |      
 |      If you have modified your model, for instance by adding a new layer
 |      (with weights) or by changing the shape of the weights of a layer,
 |      you can choose to ignore errors and continue loading
 |      by setting `skip_mismatch=True`. In this case any layer with
 |      mismatching weights will be skipped. A warning will be displayed
 |      for each skipped layer.
 |      
 |      **Weight loading by name**
 |      
 |      If your weights are saved as a `.h5` file created
 |      via `model.save_weights()`, you can use the argument `by_name=True`.
 |      
 |      In this case, weights are loaded into layers only if they share
 |      the same name. This is useful for fine-tuning or transfer-learning
 |      models where some of the layers have changed.
 |      
 |      Note that only topological loading (`by_name=False`) is supported when
 |      loading weights from the `.keras` v3 format or from the TensorFlow
 |      SavedModel format.
 |      
 |      Args:
 |          filepath: String, path to the weights file to load. For weight files
 |              in TensorFlow format, this is the file prefix (the same as was
 |              passed to `save_weights()`). This can also be a path to a
 |              SavedModel or a `.keras` file (v3 saving format) saved
 |              via `model.save()`.
 |          skip_mismatch: Boolean, whether to skip loading of layers where
 |              there is a mismatch in the number of weights, or a mismatch in
 |              the shape of the weights.
 |          by_name: Boolean, whether to load weights by name or by topological
 |              order. Only topological loading is supported for weight files in
 |              the `.keras` v3 format or in the TensorFlow SavedModel format.
 |          options: Optional `tf.train.CheckpointOptions` object that specifies
 |              options for loading weights (only valid for a SavedModel file).
 |  
 |  make_predict_function(self, force=False)
 |      Creates a function that executes one step of inference.
 |      
 |      This method can be overridden to support custom inference logic.
 |      This method is called by `Model.predict` and `Model.predict_on_batch`.
 |      
 |      Typically, this method directly controls `tf.function` and
 |      `tf.distribute.Strategy` settings, and delegates the actual evaluation
 |      logic to `Model.predict_step`.
 |      
 |      This function is cached the first time `Model.predict` or
 |      `Model.predict_on_batch` is called. The cache is cleared whenever
 |      `Model.compile` is called. You can skip the cache and generate again the
 |      function with `force=True`.
 |      
 |      Args:
 |        force: Whether to regenerate the predict function and skip the cached
 |          function if available.
 |      
 |      Returns:
 |        Function. The function created by this method should accept a
 |        `tf.data.Iterator`, and return the outputs of the `Model`.
 |  
 |  make_test_function(self, force=False)
 |      Creates a function that executes one step of evaluation.
 |      
 |      This method can be overridden to support custom evaluation logic.
 |      This method is called by `Model.evaluate` and `Model.test_on_batch`.
 |      
 |      Typically, this method directly controls `tf.function` and
 |      `tf.distribute.Strategy` settings, and delegates the actual evaluation
 |      logic to `Model.test_step`.
 |      
 |      This function is cached the first time `Model.evaluate` or
 |      `Model.test_on_batch` is called. The cache is cleared whenever
 |      `Model.compile` is called. You can skip the cache and generate again the
 |      function with `force=True`.
 |      
 |      Args:
 |        force: Whether to regenerate the test function and skip the cached
 |          function if available.
 |      
 |      Returns:
 |        Function. The function created by this method should accept a
 |        `tf.data.Iterator`, and return a `dict` containing values that will
 |        be passed to `tf.keras.Callbacks.on_test_batch_end`.
 |  
 |  make_train_function(self, force=False)
 |      Creates a function that executes one step of training.
 |      
 |      This method can be overridden to support custom training logic.
 |      This method is called by `Model.fit` and `Model.train_on_batch`.
 |      
 |      Typically, this method directly controls `tf.function` and
 |      `tf.distribute.Strategy` settings, and delegates the actual training
 |      logic to `Model.train_step`.
 |      
 |      This function is cached the first time `Model.fit` or
 |      `Model.train_on_batch` is called. The cache is cleared whenever
 |      `Model.compile` is called. You can skip the cache and generate again the
 |      function with `force=True`.
 |      
 |      Args:
 |        force: Whether to regenerate the train function and skip the cached
 |          function if available.
 |      
 |      Returns:
 |        Function. The function created by this method should accept a
 |        `tf.data.Iterator`, and return a `dict` containing values that will
 |        be passed to `tf.keras.Callbacks.on_train_batch_end`, such as
 |        `{'loss': 0.2, 'accuracy': 0.7}`.
 |  
 |  predict(self, x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)
 |      Generates output predictions for the input samples.
 |      
 |      Computation is done in batches. This method is designed for batch
 |      processing of large numbers of inputs. It is not intended for use inside
 |      of loops that iterate over your data and process small numbers of inputs
 |      at a time.
 |      
 |      For small numbers of inputs that fit in one batch,
 |      directly use `__call__()` for faster execution, e.g.,
 |      `model(x)`, or `model(x, training=False)` if you have layers such as
 |      `tf.keras.layers.BatchNormalization` that behave differently during
 |      inference. You may pair the individual model call with a `tf.function`
 |      for additional performance inside your inner loop.
 |      If you need access to numpy array values instead of tensors after your
 |      model call, you can use `tensor.numpy()` to get the numpy array value of
 |      an eager tensor.
 |      
 |      Also, note the fact that test loss is not affected by
 |      regularization layers like noise and dropout.
 |      
 |      Note: See [this FAQ entry](
 |      https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call)
 |      for more details about the difference between `Model` methods
 |      `predict()` and `__call__()`.
 |      
 |      Args:
 |          x: Input samples. It could be:
 |            - A Numpy array (or array-like), or a list of arrays
 |              (in case the model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors
 |              (in case the model has multiple inputs).
 |            - A `tf.data` dataset.
 |            - A generator or `keras.utils.Sequence` instance.
 |            A more detailed description of unpacking behavior for iterator
 |            types (Dataset, generator, Sequence) is given in the `Unpacking
 |            behavior for iterator-like inputs` section of `Model.fit`.
 |          batch_size: Integer or `None`.
 |              Number of samples per batch.
 |              If unspecified, `batch_size` will default to 32.
 |              Do not specify the `batch_size` if your data is in the
 |              form of datasets, generators, or `keras.utils.Sequence` instances
 |              (since they generate batches).
 |          verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
 |              0 = silent, 1 = progress bar, 2 = single line.
 |              `"auto"` becomes 1 for most cases, and to 2 when used with
 |              `ParameterServerStrategy`. Note that the progress bar is not
 |              particularly useful when logged to a file, so `verbose=2` is
 |              recommended when not running interactively (e.g. in a production
 |              environment). Defaults to 'auto'.
 |          steps: Total number of steps (batches of samples)
 |              before declaring the prediction round finished.
 |              Ignored with the default value of `None`. If x is a `tf.data`
 |              dataset and `steps` is None, `predict()` will
 |              run until the input dataset is exhausted.
 |          callbacks: List of `keras.callbacks.Callback` instances.
 |              List of callbacks to apply during prediction.
 |              See [callbacks](
 |              https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).
 |          max_queue_size: Integer. Used for generator or
 |              `keras.utils.Sequence` input only. Maximum size for the
 |              generator queue. If unspecified, `max_queue_size` will default
 |              to 10.
 |          workers: Integer. Used for generator or `keras.utils.Sequence` input
 |              only. Maximum number of processes to spin up when using
 |              process-based threading. If unspecified, `workers` will default
 |              to 1.
 |          use_multiprocessing: Boolean. Used for generator or
 |              `keras.utils.Sequence` input only. If `True`, use process-based
 |              threading. If unspecified, `use_multiprocessing` will default to
 |              `False`. Note that because this implementation relies on
 |              multiprocessing, you should not pass non-picklable arguments to
 |              the generator as they can't be passed easily to children
 |              processes.
 |      
 |      See the discussion of `Unpacking behavior for iterator-like inputs` for
 |      `Model.fit`. Note that Model.predict uses the same interpretation rules
 |      as `Model.fit` and `Model.evaluate`, so inputs must be unambiguous for
 |      all three methods.
 |      
 |      Returns:
 |          Numpy array(s) of predictions.
 |      
 |      Raises:
 |          RuntimeError: If `model.predict` is wrapped in a `tf.function`.
 |          ValueError: In case of mismatch between the provided
 |              input data and the model's expectations,
 |              or in case a stateful model receives a number of samples
 |              that is not a multiple of the batch size.
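 |
 |      A short sketch of the `predict()` vs `__call__()` distinction noted
 |      above (the model and data here are illustrative):
 |
 |      ```python
 |      import numpy as np
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
 |      x = np.random.random((4, 3)).astype("float32")
 |
 |      batch_preds = model.predict(x, verbose=0)  # batched loop, NumPy output
 |      direct = model(x, training=False)          # one eager call, returns a tensor
 |      direct_np = direct.numpy()                 # NumPy values, if needed
 |      ```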
 |  
 |  predict_generator(self, generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)
 |      Generates predictions for the input samples from a data generator.
 |      
 |      DEPRECATED:
 |        `Model.predict` now supports generators, so there is no longer any
 |        need to use this endpoint.
 |  
 |  predict_on_batch(self, x)
 |      Returns predictions for a single batch of samples.
 |      
 |      Args:
 |          x: Input data. It could be:
 |            - A Numpy array (or array-like), or a list of arrays (in case the
 |                model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors (in case the model has
 |                multiple inputs).
 |      
 |      Returns:
 |          Numpy array(s) of predictions.
 |      
 |      Raises:
 |          RuntimeError: If `model.predict_on_batch` is wrapped in a
 |            `tf.function`.
 |  
 |  predict_step(self, data)
 |      The logic for one inference step.
 |      
 |      This method can be overridden to support custom inference logic.
 |      This method is called by `Model.make_predict_function`.
 |      
 |      This method should contain the mathematical logic for one step of
 |      inference.  This typically includes the forward pass.
 |      
 |      Configuration details for *how* this logic is run (e.g. `tf.function`
 |      and `tf.distribute.Strategy` settings), should be left to
 |      `Model.make_predict_function`, which can also be overridden.
 |      
 |      Args:
 |        data: A nested structure of `Tensor`s.
 |      
 |      Returns:
 |        The result of one inference step, typically the output of calling the
 |        `Model` on data.
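 |
 |      A sketch of overriding this method (it assumes batches arrive as bare
 |      input tensors, as during a plain `predict(x)` call):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      class ThresholdedModel(tf.keras.Sequential):
 |          def predict_step(self, data):
 |              # Binarize sigmoid outputs during predict().
 |              probs = self(data, training=False)
 |              return tf.cast(probs > 0.5, tf.float32)
 |
 |      model = ThresholdedModel(
 |          [tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(3,))])
 |      ```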
 |  
 |  reset_metrics(self)
 |      Resets the state of all the metrics in the model.
 |      
 |      Examples:
 |      
 |      >>> inputs = tf.keras.layers.Input(shape=(3,))
 |      >>> outputs = tf.keras.layers.Dense(2)(inputs)
 |      >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
 |      >>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
 |      
 |      >>> x = np.random.random((2, 3))
 |      >>> y = np.random.randint(0, 2, (2, 2))
 |      >>> _ = model.fit(x, y, verbose=0)
 |      >>> assert all(float(m.result()) for m in model.metrics)
 |      
 |      >>> model.reset_metrics()
 |      >>> assert all(float(m.result()) == 0 for m in model.metrics)
 |  
 |  reset_states(self)
 |  
 |  save(self, filepath, overwrite=True, save_format=None, **kwargs)
 |      Saves a model as a TensorFlow SavedModel or HDF5 file.
 |      
 |      See the [Serialization and Saving guide](
 |          https://keras.io/guides/serialization_and_saving/) for details.
 |      
 |      Args:
 |          filepath: `str` or `pathlib.Path` object. Path where to save the
 |              model.
 |          overwrite: Whether we should overwrite any existing model at the
 |              target location, or instead ask the user via an interactive
 |              prompt.
 |          save_format: Either `"keras"`, `"tf"`, `"h5"`,
 |              indicating whether to save the model
 |              in the native Keras format (`.keras`),
 |              in the TensorFlow SavedModel format
 |              (referred to as "SavedModel" below),
 |              or in the legacy HDF5 format (`.h5`).
 |              Defaults to `"tf"` in TF 2.X, and `"h5"` in TF 1.X.
 |      
 |      SavedModel format arguments:
 |          include_optimizer: Only applied to SavedModel and legacy HDF5
 |              formats. If False, do not save the optimizer state.
 |              Defaults to `True`.
 |          signatures: Only applies to SavedModel format. Signatures to save
 |              with the SavedModel. See the `signatures` argument in
 |              `tf.saved_model.save` for details.
 |          options: Only applies to SavedModel format.
 |              `tf.saved_model.SaveOptions` object that specifies SavedModel
 |              saving options.
 |          save_traces: Only applies to SavedModel format. When enabled, the
 |              SavedModel will store the function traces for each layer. This
 |              can be disabled, so that only the configs of each layer are
 |              stored. Defaults to `True`.
 |              Disabling this will decrease serialization time
 |              and reduce file size, but it requires that all custom
 |              layers/models implement a `get_config()` method.
 |      
 |      Example:
 |      
 |      ```python
 |      model = tf.keras.Sequential([
 |          tf.keras.layers.Dense(5, input_shape=(3,)),
 |          tf.keras.layers.Softmax()])
 |      model.save("model.keras")
 |      loaded_model = tf.keras.models.load_model("model.keras")
 |      x = tf.random.uniform((10, 3))
 |      assert np.allclose(model.predict(x), loaded_model.predict(x))
 |      ```
 |      
 |      Note that `model.save()` is an alias for `tf.keras.models.save_model()`.
 |  
 |  save_spec(self, dynamic_batch=True)
 |      Returns the `tf.TensorSpec` of call args as a tuple `(args, kwargs)`.
 |      
 |      This value is automatically defined after calling the model for the
 |      first time. Afterwards, you can use it when exporting the model for
 |      serving:
 |      
 |      ```python
 |      model = tf.keras.Model(...)
 |      
 |      @tf.function
 |      def serve(*args, **kwargs):
 |        outputs = model(*args, **kwargs)
 |        # Apply postprocessing steps, or add additional outputs.
 |        ...
 |        return outputs
 |      
 |      # arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
 |      # example, is an empty dict since functional models do not use keyword
 |      # arguments.
 |      arg_specs, kwarg_specs = model.save_spec()
 |      
 |      model.save(path, signatures={
 |        'serving_default': serve.get_concrete_function(*arg_specs,
 |                                                       **kwarg_specs)
 |      })
 |      ```
 |      
 |      Args:
 |        dynamic_batch: Whether to set the batch sizes of all the returned
 |          `tf.TensorSpec` to `None`. (Note that when defining functional or
 |          Sequential models with `tf.keras.Input([...], batch_size=X)`, the
 |          batch size will always be preserved). Defaults to `True`.
 |      Returns:
 |        If the model inputs are defined, returns a tuple `(args, kwargs)`. All
 |        elements in `args` and `kwargs` are `tf.TensorSpec`.
 |        If the model inputs are not defined, returns `None`.
 |        The model inputs are automatically set when calling the model,
 |        `model.fit`, `model.evaluate` or `model.predict`.
 |  
 |  save_weights(self, filepath, overwrite=True, save_format=None, options=None)
 |      Saves all layer weights.
 |      
 |      Either saves in HDF5 or in TensorFlow format based on the `save_format`
 |      argument.
 |      
 |      When saving in HDF5 format, the weight file has:
 |        - `layer_names` (attribute), a list of strings
 |            (ordered names of model layers).
 |        - For every layer, a `group` named `layer.name`
 |            - For every such layer group, a group attribute `weight_names`,
 |                a list of strings
 |                (ordered names of weights tensor of the layer).
 |            - For every weight in the layer, a dataset
 |                storing the weight value, named after the weight tensor.
 |      
 |      When saving in TensorFlow format, all objects referenced by the network
 |      are saved in the same format as `tf.train.Checkpoint`, including any
 |      `Layer` instances or `Optimizer` instances assigned to object
 |      attributes. For networks constructed from inputs and outputs using
 |      `tf.keras.Model(inputs, outputs)`, `Layer` instances used by the network
 |      are tracked/saved automatically. For user-defined classes which inherit
 |      from `tf.keras.Model`, `Layer` instances must be assigned to object
 |      attributes, typically in the constructor. See the documentation of
 |      `tf.train.Checkpoint` and `tf.keras.Model` for details.
 |      
 |      While the formats are the same, do not mix `save_weights` and
 |      `tf.train.Checkpoint`. Checkpoints saved by `Model.save_weights` should
 |      be loaded using `Model.load_weights`. Checkpoints saved using
 |      `tf.train.Checkpoint.save` should be restored using the corresponding
 |      `tf.train.Checkpoint.restore`. Prefer `tf.train.Checkpoint` over
 |      `save_weights` for training checkpoints.
 |      
 |      The TensorFlow format matches objects and variables by starting at a
 |      root object, `self` for `save_weights`, and greedily matching attribute
 |      names. For `Model.save` this is the `Model`, and for `Checkpoint.save`
 |      this is the `Checkpoint` even if the `Checkpoint` has a model attached.
 |      This means saving a `tf.keras.Model` using `save_weights` and loading
 |      into a `tf.train.Checkpoint` with a `Model` attached (or vice versa)
 |      will not match the `Model`'s variables. See the
 |      [guide to training checkpoints](
 |      https://www.tensorflow.org/guide/checkpoint) for details on
 |      the TensorFlow format.
 |      
 |      Args:
 |          filepath: String or PathLike, path to the file to save the weights
 |              to. When saving in TensorFlow format, this is the prefix used
 |              for checkpoint files (multiple files are generated). Note that
 |              the '.h5' suffix causes weights to be saved in HDF5 format.
 |          overwrite: Whether to silently overwrite any existing file at the
 |              target location, or provide the user with a manual prompt.
 |          save_format: Either 'tf' or 'h5'. A `filepath` ending in '.h5' or
 |              '.keras' will default to HDF5 if `save_format` is `None`.
 |              Otherwise, `None` becomes 'tf'. Defaults to `None`.
 |          options: Optional `tf.train.CheckpointOptions` object that specifies
 |              options for saving weights.
 |      
 |      Raises:
 |          ImportError: If `h5py` is not available when attempting to save in
 |              HDF5 format.
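 |
 |      A minimal save/restore sketch (the paths are arbitrary examples):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
 |      model.save_weights("ckpt/weights")   # TensorFlow checkpoint prefix
 |      model.save_weights("weights.h5")     # '.h5' suffix selects HDF5
 |
 |      clone = tf.keras.models.clone_model(model)  # fresh, untrained copy
 |      clone.build((None, 3))
 |      clone.load_weights("ckpt/weights")          # restore the saved values
 |      ```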
 |  
 |  summary(self, line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)
 |      Prints a string summary of the network.
 |      
 |      Args:
 |          line_length: Total length of printed lines
 |              (e.g. set this to adapt the display to different
 |              terminal window sizes).
 |          positions: Relative or absolute positions of log elements
 |              in each line. If not provided, becomes
 |              `[0.3, 0.6, 0.7, 1.]`. Defaults to `None`.
 |          print_fn: Print function to use. By default, prints to `stdout`;
 |              if `stdout` doesn't work in your environment, pass `print`.
 |              It will be called on each line of the summary.
 |              You can set it to a custom function
 |              in order to capture the string summary.
 |          expand_nested: Whether to expand the nested models.
 |              Defaults to `False`.
 |          show_trainable: Whether to show if a layer is trainable.
 |              Defaults to `False`.
 |          layer_range: a list or tuple of 2 strings, the starting layer
 |              name and ending layer name (both inclusive) indicating the
 |              range of layers to be printed in the summary. It also accepts
 |              regex patterns instead of exact names. In that case, the start
 |              predicate is the first element that matches `layer_range[0]`
 |              and the end predicate is the last element that matches
 |              `layer_range[1]`. Defaults to `None`, which considers all
 |              layers of the model.
 |      
 |      Raises:
 |          ValueError: if `summary()` is called before the model is built.
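 |
 |      A small sketch of capturing the summary with `print_fn` (the names are
 |      illustrative):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
 |
 |      lines = []
 |      model.summary(print_fn=lines.append)  # collect lines instead of printing
 |      summary_text = "\n".join(lines)
 |      ```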
 |  
 |  test_on_batch(self, x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)
 |      Test the model on a single batch of samples.
 |      
 |      Args:
 |          x: Input data. It could be:
 |            - A Numpy array (or array-like), or a list of arrays (in case the
 |                model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors (in case the model has
 |                multiple inputs).
 |            - A dict mapping input names to the corresponding array/tensors,
 |                if the model has named inputs.
 |          y: Target data. Like the input data `x`, it could be either Numpy
 |            array(s) or TensorFlow tensor(s). It should be consistent with `x`
 |            (you cannot have Numpy inputs and tensor targets, or inversely).
 |          sample_weight: Optional array of the same length as x, containing
 |            weights to apply to the model's loss for each sample. In the case
 |            of temporal data, you can pass a 2D array with shape (samples,
 |            sequence_length), to apply a different weight to every timestep of
 |            every sample.
 |          reset_metrics: If `True`, the metrics returned will be only for this
 |            batch. If `False`, the metrics will be statefully accumulated
 |            across batches.
 |          return_dict: If `True`, loss and metric results are returned as a
 |            dict, with each key being the name of the metric. If `False`, they
 |            are returned as a list.
 |      
 |      Returns:
 |          Scalar test loss (if the model has a single output and no metrics)
 |          or list of scalars (if the model has multiple outputs
 |          and/or metrics). The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      Raises:
 |          RuntimeError: If `model.test_on_batch` is wrapped in a
 |            `tf.function`.
 |  
 |  test_step(self, data)
 |      The logic for one evaluation step.
 |      
 |      This method can be overridden to support custom evaluation logic.
 |      This method is called by `Model.make_test_function`.
 |      
 |      This function should contain the mathematical logic for one step of
 |      evaluation.
 |      This typically includes the forward pass, loss calculation, and metrics
 |      updates.
 |      
 |      Configuration details for *how* this logic is run (e.g. `tf.function`
 |      and `tf.distribute.Strategy` settings), should be left to
 |      `Model.make_test_function`, which can also be overridden.
 |      
 |      Args:
 |        data: A nested structure of `Tensor`s.
 |      
 |      Returns:
 |        A `dict` containing values that will be passed to
 |        `tf.keras.callbacks.CallbackList.on_test_batch_end`. Typically, the
 |        values of the `Model`'s metrics are returned.
 |  
 |  to_json(self, **kwargs)
 |      Returns a JSON string containing the network configuration.
 |      
 |      To load a network from a JSON save file, use
 |      `keras.models.model_from_json(json_string, custom_objects={})`.
 |      
 |      Args:
 |          **kwargs: Additional keyword arguments to be passed to
 |              `json.dumps()`.
 |      
 |      Returns:
 |          A JSON string.
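 |
 |      A minimal round-trip sketch (note that only the architecture lives in
 |      the JSON string; weights must be transferred separately):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
 |      json_config = model.to_json()
 |
 |      rebuilt = tf.keras.models.model_from_json(json_config)
 |      rebuilt.set_weights(model.get_weights())  # copy the weights over
 |      ```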
 |  
 |  to_yaml(self, **kwargs)
 |      Returns a yaml string containing the network configuration.
 |      
 |      Note: Since TF 2.6, this method is no longer supported and will raise a
 |      RuntimeError.
 |      
 |      To load a network from a yaml save file, use
 |      `keras.models.model_from_yaml(yaml_string, custom_objects={})`.
 |      
 |      `custom_objects` should be a dictionary mapping
 |      the names of custom losses / layers / etc to the corresponding
 |      functions / classes.
 |      
 |      Args:
 |          **kwargs: Additional keyword arguments
 |              to be passed to `yaml.dump()`.
 |      
 |      Returns:
 |          A YAML string.
 |      
 |      Raises:
 |          RuntimeError: always raised, because this method is no longer
 |              supported and poses a security risk.
 |  
 |  train_on_batch(self, x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)
 |      Runs a single gradient update on a single batch of data.
 |      
 |      Args:
 |          x: Input data. It could be:
 |            - A Numpy array (or array-like), or a list of arrays
 |                (in case the model has multiple inputs).
 |            - A TensorFlow tensor, or a list of tensors
 |                (in case the model has multiple inputs).
 |            - A dict mapping input names to the corresponding array/tensors,
 |                if the model has named inputs.
 |          y: Target data. Like the input data `x`, it could be either Numpy
 |            array(s) or TensorFlow tensor(s).
 |          sample_weight: Optional array of the same length as x, containing
 |            weights to apply to the model's loss for each sample. In the case
 |            of temporal data, you can pass a 2D array with shape (samples,
 |            sequence_length), to apply a different weight to every timestep of
 |            every sample.
 |          class_weight: Optional dictionary mapping class indices (integers)
 |            to a weight (float) to apply to the model's loss for the samples
 |            from this class during training. This can be useful to tell the
 |            model to "pay more attention" to samples from an under-represented
 |            class. When `class_weight` is specified and targets have a rank of
 |            2 or greater, either `y` must be one-hot encoded, or an explicit
 |            final dimension of `1` must be included for sparse class labels.
 |          reset_metrics: If `True`, the metrics returned will be only for this
 |            batch. If `False`, the metrics will be statefully accumulated
 |            across batches.
 |          return_dict: If `True`, loss and metric results are returned as a
 |            dict, with each key being the name of the metric. If `False`, they
 |            are returned as a list.
 |      
 |      Returns:
 |          Scalar training loss
 |          (if the model has a single output and no metrics)
 |          or list of scalars (if the model has multiple outputs
 |          and/or metrics). The attribute `model.metrics_names` will give you
 |          the display labels for the scalar outputs.
 |      
 |      Raises:
 |        RuntimeError: If `model.train_on_batch` is wrapped in a `tf.function`.
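 |
 |      A minimal manual-update sketch (the model and data are illustrative):
 |
 |      ```python
 |      import numpy as np
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
 |      model.compile(optimizer="adam", loss="mse")
 |
 |      x = np.random.random((8, 3))
 |      y = np.random.random((8, 1))
 |      results = model.train_on_batch(x, y, return_dict=True)  # {'loss': ...}
 |      ```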
 |  
 |  train_step(self, data)
 |      The logic for one training step.
 |      
 |      This method can be overridden to support custom training logic.
 |      For concrete examples of how to override this method see
 |      [Customizing what happens in fit](
 |      https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit).
 |      This method is called by `Model.make_train_function`.
 |      
 |      This method should contain the mathematical logic for one step of
 |      training.  This typically includes the forward pass, loss calculation,
 |      backpropagation, and metric updates.
 |      
 |      Configuration details for *how* this logic is run (e.g. `tf.function`
 |      and `tf.distribute.Strategy` settings), should be left to
 |      `Model.make_train_function`, which can also be overridden.
 |      
 |      Args:
 |        data: A nested structure of `Tensor`s.
 |      
 |      Returns:
 |        A `dict` containing values that will be passed to
 |        `tf.keras.callbacks.CallbackList.on_train_batch_end`. Typically, the
 |        values of the `Model`'s metrics are returned. Example:
 |        `{'loss': 0.2, 'accuracy': 0.7}`.
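 |
 |      A sketch of a custom `train_step` in the spirit of the guide linked
 |      above (it assumes batches unpack as `(inputs, targets)`):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      class CustomModel(tf.keras.Sequential):
 |          def train_step(self, data):
 |              x, y = data
 |              with tf.GradientTape() as tape:
 |                  y_pred = self(x, training=True)
 |                  loss = self.compiled_loss(y, y_pred)
 |              grads = tape.gradient(loss, self.trainable_variables)
 |              self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
 |              self.compiled_metrics.update_state(y, y_pred)
 |              return {m.name: m.result() for m in self.metrics}
 |      ```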
 |  
 |  ----------------------------------------------------------------------
 |  Static methods inherited from keras.src.engine.training.Model:
 |  
 |  __new__(cls, *args, **kwargs)
 |      Create and return a new object.  See help(type) for accurate signature.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.training.Model:
 |  
 |  distribute_strategy
 |      The `tf.distribute.Strategy` this model was created under.
 |  
 |  metrics
 |      Return metrics added using `compile()` or `add_metric()`.
 |      
 |      Note: Metrics passed to `compile()` are available only after a
 |      `keras.Model` has been trained/evaluated on actual data.
 |      
 |      Examples:
 |      
 |      >>> inputs = tf.keras.layers.Input(shape=(3,))
 |      >>> outputs = tf.keras.layers.Dense(2)(inputs)
 |      >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
 |      >>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
 |      >>> [m.name for m in model.metrics]
 |      []
 |      
 |      >>> x = np.random.random((2, 3))
 |      >>> y = np.random.randint(0, 2, (2, 2))
 |      >>> model.fit(x, y)
 |      >>> [m.name for m in model.metrics]
 |      ['loss', 'mae']
 |      
 |      >>> inputs = tf.keras.layers.Input(shape=(3,))
 |      >>> d = tf.keras.layers.Dense(2, name='out')
 |      >>> output_1 = d(inputs)
 |      >>> output_2 = d(inputs)
 |      >>> model = tf.keras.models.Model(
 |      ...    inputs=inputs, outputs=[output_1, output_2])
 |      >>> model.add_metric(
 |      ...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
 |      >>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
 |      >>> model.fit(x, (y, y))
 |      >>> [m.name for m in model.metrics]
 |      ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
 |      'out_1_acc', 'mean']
 |  
 |  metrics_names
 |      Returns the model's display labels for all outputs.
 |      
 |      Note: `metrics_names` are available only after a `keras.Model` has been
 |      trained/evaluated on actual data.
 |      
 |      Examples:
 |      
 |      >>> inputs = tf.keras.layers.Input(shape=(3,))
 |      >>> outputs = tf.keras.layers.Dense(2)(inputs)
 |      >>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
 |      >>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
 |      >>> model.metrics_names
 |      []
 |      
 |      >>> x = np.random.random((2, 3))
 |      >>> y = np.random.randint(0, 2, (2, 2))
 |      >>> model.fit(x, y)
 |      >>> model.metrics_names
 |      ['loss', 'mae']
 |      
 |      >>> inputs = tf.keras.layers.Input(shape=(3,))
 |      >>> d = tf.keras.layers.Dense(2, name='out')
 |      >>> output_1 = d(inputs)
 |      >>> output_2 = d(inputs)
 |      >>> model = tf.keras.models.Model(
 |      ...    inputs=inputs, outputs=[output_1, output_2])
 |      >>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
 |      >>> model.fit(x, (y, y))
 |      >>> model.metrics_names
 |      ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
 |      'out_1_acc']
 |  
 |  non_trainable_weights
 |      List of all non-trainable weights tracked by this layer.
 |      
 |      Non-trainable weights are *not* updated during training. They are
 |      expected to be updated manually in `call()`.
 |      
 |      Returns:
 |        A list of non-trainable variables.
 |  
 |  state_updates
 |      Deprecated, do NOT use!
 |      
 |      Returns the `updates` from all layers that are stateful.
 |      
 |      This is useful for separating training updates and
 |      state updates, e.g. when we need to update a layer's internal state
 |      during prediction.
 |      
 |      Returns:
 |          A list of update ops.
 |  
 |  trainable_weights
 |      List of all trainable weights tracked by this layer.
 |      
 |      Trainable weights are updated via gradient descent during training.
 |      
 |      Returns:
 |        A list of trainable variables.
 |  
 |  weights
 |      Returns the list of all layer variables/weights.
 |      
 |      Note: This will not track the weights of nested `tf.Modules` that are
 |      not themselves Keras layers.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.src.engine.training.Model:
 |  
 |  distribute_reduction_method
 |      The method employed to reduce per-replica values during training.
 |      
 |      Unless specified, the value "auto" will be assumed, indicating that
 |      the reduction strategy should be chosen based on the current
 |      running environment.
 |      See `reduce_per_replica` function for more details.
 |  
 |  jit_compile
 |      Specify whether to compile the model with XLA.
 |      
 |      [XLA](https://www.tensorflow.org/xla) is an optimizing compiler
 |      for machine learning. `jit_compile` is not enabled by default.
 |      Note that `jit_compile=True` may not necessarily work for all models.
 |      
 |      For more information on supported operations please refer to the
 |      [XLA documentation](https://www.tensorflow.org/xla). Also refer to
 |      [known XLA issues](https://www.tensorflow.org/xla/known_issues)
 |      for more details.
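 |
 |      A one-line opt-in sketch (XLA support varies by model and hardware):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
 |      model.compile(optimizer="adam", loss="mse", jit_compile=True)
 |      ```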
 |  
 |  run_eagerly
 |      Settable attribute indicating whether the model should run eagerly.
 |      
 |      Running eagerly means that your model will be run step by step,
 |      like Python code. Your model might run slower, but it should become
 |      easier for you to debug it by stepping into individual layer calls.
 |      
 |      By default, we will attempt to compile your model to a static graph to
 |      deliver the best execution performance.
 |      
 |      Returns:
 |        Boolean, whether the model should run eagerly.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  __delattr__(self, name)
 |      Implement delattr(self, name).
 |  
 |  __getstate__(self)
 |  
 |  __setstate__(self, state)
 |  
 |  add_loss(self, losses, **kwargs)
 |      Add loss tensor(s), potentially dependent on layer inputs.
 |      
 |      Some losses (for instance, activity regularization losses) may be
 |      dependent on the inputs passed when calling a layer. Hence, when reusing
 |      the same layer on different inputs `a` and `b`, some entries in
 |      `layer.losses` may be dependent on `a` and some on `b`. This method
 |      automatically keeps track of dependencies.
 |      
 |      This method can be used inside a subclassed layer or model's `call`
 |      function, in which case `losses` should be a Tensor or list of Tensors.
 |      
 |      Example:
 |      
 |      ```python
 |      class MyLayer(tf.keras.layers.Layer):
 |        def call(self, inputs):
 |          self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |          return inputs
 |      ```
 |      
 |      The same code works in distributed training: the input to `add_loss()`
 |      is treated like a regularization loss and averaged across replicas
 |      by the training loop (both built-in `Model.fit()` and compliant custom
 |      training loops).
 |      
 |      The `add_loss` method can also be called directly on a Functional Model
 |      during construction. In this case, any loss Tensors passed to this Model
 |      must be symbolic and be able to be traced back to the model's `Input`s.
 |      These losses become part of the model's topology and are tracked in
 |      `get_config`.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Activity regularization.
 |      model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      ```
 |      
 |      If this is not the case for your loss (if, for example, your loss
 |      references a `Variable` of one of the model's layers), you can wrap your
 |      loss in a zero-argument lambda. These losses are not tracked as part of
 |      the model's topology since they can't be serialized.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      d = tf.keras.layers.Dense(10)
 |      x = d(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Weight regularization.
 |      model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      ```
 |      
 |      Args:
 |        losses: Loss tensor, or list/tuple of tensors. Rather than tensors,
 |          losses may also be zero-argument callables which create a loss
 |          tensor.
 |        **kwargs: Used for backwards compatibility only.
 |  
 |  add_metric(self, value, name=None, **kwargs)
 |      Adds metric tensor to the layer.
 |      
 |      This method can be used inside the `call()` method of a subclassed layer
 |      or model.
 |      
 |      ```python
 |      class MyMetricLayer(tf.keras.layers.Layer):
 |        def __init__(self):
 |          super(MyMetricLayer, self).__init__(name='my_metric_layer')
 |          self.mean = tf.keras.metrics.Mean(name='metric_1')
 |      
 |        def call(self, inputs):
 |          self.add_metric(self.mean(inputs))
 |          self.add_metric(tf.reduce_sum(inputs), name='metric_2')
 |          return inputs
 |      ```
 |      
 |      This method can also be called directly on a Functional Model during
 |      construction. In this case, any tensor passed to this Model must
 |      be symbolic and be able to be traced back to the model's `Input`s. These
 |      metrics become part of the model's topology and are tracked when you
 |      save the model via `save()`.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(tf.reduce_sum(x), name='metric_1')
 |      ```
 |      
 |      Note: Calling `add_metric()` with the result of a metric object on a
 |      Functional Model, as shown in the example below, is not supported. This
 |      is because we cannot trace the metric result tensor back to the model's
 |      inputs.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
 |      ```
 |      
 |      Args:
 |        value: Metric tensor.
 |        name: String metric name.
 |        **kwargs: Additional keyword arguments for backward compatibility.
 |          Accepted values:
 |          `aggregation` - When the `value` tensor provided is not the result
 |          of calling a `keras.Metric` instance, it will be aggregated by
 |          default using a `keras.metrics.Mean`.
 |  
 |  add_update(self, updates)
 |      Add update op(s), potentially dependent on layer inputs.
 |      
 |      Weight updates (for instance, the updates of the moving mean and
 |      variance in a BatchNormalization layer) may be dependent on the inputs
 |      passed when calling a layer. Hence, when reusing the same layer on
 |      different inputs `a` and `b`, some entries in `layer.updates` may be
 |      dependent on `a` and some on `b`. This method automatically keeps track
 |      of dependencies.
 |      
 |      This call is ignored when eager execution is enabled (in that case,
 |      variable updates are run on the fly and thus do not need to be tracked
 |      for later execution).
 |      
 |      Args:
 |        updates: Update op, or list/tuple of update ops, or zero-arg callable
 |          that returns an update op. A zero-arg callable should be passed in
 |          order to disable running the updates by setting `trainable=False`
 |          on this Layer, when executing in Eager mode.
 |  
 |  add_variable(self, *args, **kwargs)
 |      Deprecated, do NOT use! Alias for `add_weight`.
 |  
 |  add_weight(self, name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=<VariableSynchronization.AUTO: 0>, aggregation=<VariableAggregationV2.NONE: 0>, **kwargs)
 |      Adds a new variable to the layer.
 |      
 |      Args:
 |        name: Variable name.
 |        shape: Variable shape. Defaults to scalar if unspecified.
 |        dtype: The type of the variable. Defaults to `self.dtype`.
 |        initializer: Initializer instance (callable).
 |        regularizer: Regularizer instance (callable).
 |        trainable: Boolean, whether the variable should be part of the layer's
 |          "trainable_variables" (e.g. variables, biases)
 |          or "non_trainable_variables" (e.g. BatchNorm mean and variance).
 |          Note that `trainable` cannot be `True` if `synchronization`
 |          is set to `ON_READ`.
 |        constraint: Constraint instance (callable).
 |        use_resource: Whether to use a `ResourceVariable` or not.
 |          See [this guide](
 |          https://www.tensorflow.org/guide/migrate/tf1_vs_tf2#resourcevariables_instead_of_referencevariables)
 |           for more information.
 |        synchronization: Indicates when a distributed variable will be
 |          aggregated. Accepted values are constants defined in the class
 |          `tf.VariableSynchronization`. By default the synchronization is set
 |          to `AUTO` and the current `DistributionStrategy` chooses when to
 |          synchronize. If `synchronization` is set to `ON_READ`, `trainable`
 |          must not be set to `True`.
 |        aggregation: Indicates how a distributed variable will be aggregated.
 |          Accepted values are constants defined in the class
 |          `tf.VariableAggregation`.
 |        **kwargs: Additional keyword arguments. Accepted values are `getter`,
 |          `collections`, `experimental_autocast` and `caching_device`.
 |      
 |      Returns:
 |        The variable created.
 |      
 |      Raises:
 |        ValueError: When giving unsupported dtype and no initializer or when
 |          trainable has been set to True with synchronization set as
 |          `ON_READ`.
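 |
 |      A sketch of a custom layer creating its variables with `add_weight()`
 |      (the layer name and shapes are illustrative):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      class SimpleDense(tf.keras.layers.Layer):
 |          def __init__(self, units):
 |              super().__init__()
 |              self.units = units
 |
 |          def build(self, input_shape):
 |              self.kernel = self.add_weight(
 |                  name="kernel", shape=(input_shape[-1], self.units),
 |                  initializer="glorot_uniform", trainable=True)
 |              self.bias = self.add_weight(
 |                  name="bias", shape=(self.units,),
 |                  initializer="zeros", trainable=True)
 |
 |          def call(self, inputs):
 |              return tf.matmul(inputs, self.kernel) + self.bias
 |      ```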
 |  
 |  build_from_config(self, config)
 |      Builds the layer's states with the supplied config dict.
 |      
 |      By default, this method calls the `build(config["input_shape"])` method,
 |      which creates weights based on the layer's input shape in the supplied
 |      config. If your config contains other information needed to load the
 |      layer's state, you should override this method.
 |      
 |      Args:
 |          config: Dict containing the input shape associated with this layer.
 |  
 |  compute_output_signature(self, input_signature)
 |      Compute the output tensor signature of the layer based on the inputs.
 |      
 |      Unlike a TensorShape object, a TensorSpec object contains both shape
 |      and dtype information for a tensor. This method allows layers to provide
 |      output dtype information if it is different from the input dtype.
 |      For any layer that doesn't implement this function,
 |      the framework will fall back to use `compute_output_shape`, and will
 |      assume that the output dtype matches the input dtype.
 |      
 |      Args:
 |        input_signature: Single TensorSpec or nested structure of TensorSpec
 |          objects, describing a candidate input for the layer.
 |      
 |      Returns:
 |        Single TensorSpec or nested structure of TensorSpec objects,
 |          describing how the layer would transform the provided input.
 |      
 |      Raises:
 |        TypeError: If input_signature contains a non-TensorSpec object.
 |  
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      Returns:
 |          An integer count.
 |      
 |      Raises:
 |          ValueError: if the layer isn't yet built
 |            (in which case its weights aren't yet defined).
 |  
 |  finalize_state(self)
 |      Finalizes the layer's state after updating layer weights.
 |      
 |      This function can be overridden in a layer subclass and will be called
 |      after updating a layer's weights. It can be used to finalize any
 |      additional layer state after a weight update.
 |      
 |      This function will be called after weights of a layer have been restored
 |      from a loaded model.
 |  
 |  get_build_config(self)
 |      Returns a dictionary with the layer's input shape.
 |      
 |      This method returns a config dict that can be used by
 |      `build_from_config(config)` to create all states (e.g. Variables and
 |      Lookup tables) needed by the layer.
 |      
 |      By default, the config only contains the input shape that the layer
 |      was built with. If you're writing a custom layer that creates state in
 |      an unusual way, you should override this method to make sure this state
 |      is already created when Keras attempts to load its value upon model
 |      loading.
 |      
 |      Returns:
 |          A dict containing the input shape associated with the layer.
 |  
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first input node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first output node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  load_own_variables(self, store)
 |      Loads the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is loaded upon calling `keras.models.load_model()`.
 |      
 |      Args:
 |          store: Dict from which the state of the model will be loaded.
 |  
 |  save_own_variables(self, store)
 |      Saves the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is saved upon calling `model.save()`.
 |      
 |      Args:
 |          store: Dict where the state of the model will be saved.
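 |
 |      A sketch pairing this hook with `load_own_variables` above (the
 |      "scale" key is an arbitrary choice for this example):
 |
 |      ```python
 |      import tensorflow as tf
 |
 |      class ScaledLayer(tf.keras.layers.Layer):
 |          def build(self, input_shape):
 |              self.scale = self.add_weight(
 |                  name="scale", shape=(), initializer="ones")
 |
 |          def call(self, inputs):
 |              return inputs * self.scale
 |
 |          def save_own_variables(self, store):
 |              store["scale"] = self.scale.numpy()
 |
 |          def load_own_variables(self, store):
 |              self.scale.assign(store["scale"])
 |      ```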
 |  
 |  set_weights(self, weights)
 |      Sets the weights of the layer, from NumPy arrays.
 |      
 |      The weights of a layer represent the state of the layer. This function
 |      sets the weight values from numpy arrays. The weight values should be
 |      passed in the order they are created by the layer. Note that the layer's
 |      weights must be instantiated before calling this function, by calling
 |      the layer.
 |      
 |      For example, a `Dense` layer returns a list of two values: the kernel
 |      matrix and the bias vector. These can be used to set the weights of
 |      another `Dense` layer:
 |      
 |      >>> layer_a = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(1.))
 |      >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
 |      >>> layer_a.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(2.))
 |      >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
 |      >>> layer_b.get_weights()
 |      [array([[2.],
 |             [2.],
 |             [2.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b.set_weights(layer_a.get_weights())
 |      >>> layer_b.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      
 |      Args:
 |        weights: a list of NumPy arrays. The number
 |          of arrays and their shapes must match the
 |          number and shapes of the layer's weights
 |          (i.e. it should match the output of
 |          `get_weights`).
 |      
 |      Raises:
 |        ValueError: If the provided weights list does not match the
 |          layer's specifications.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.base_layer.Layer:
 |  
 |  compute_dtype
 |      The dtype of the layer's computations.
 |      
 |      This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.dtype`, the dtype of
 |      the weights.
 |      
 |      Layers automatically cast their inputs to the compute dtype, which
 |      causes computations and the output to be in the compute dtype as well.
 |      This is done by the base Layer class in `Layer.__call__`, so you do not
 |      have to insert these casts if implementing your own layer.
 |      
 |      Layers often perform certain internal computations in higher precision
 |      when `compute_dtype` is float16 or bfloat16 for numeric stability. The
 |      output will still typically be float16 or bfloat16 in such cases.
 |      
 |      Returns:
 |        The layer's compute dtype.
 |  
 |  dtype
 |      The dtype of the layer weights.
 |      
 |      This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.compute_dtype`, the
 |      dtype of the layer's computations.
 |  
 |  dtype_policy
 |      The dtype policy associated with this layer.
 |      
 |      This is an instance of a `tf.keras.mixed_precision.Policy`.
 |  
 |  dynamic
 |      Whether the layer is dynamic (eager-only); set in the constructor.
 |  
 |  inbound_nodes
 |      Return Functional API nodes upstream of this layer.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  losses
 |      List of losses added using the `add_loss()` API.
 |      
 |      Variable regularization tensors are created when this property is
 |      accessed, so it is eager safe: accessing `losses` under a
 |      `tf.GradientTape` will propagate gradients back to the corresponding
 |      variables.
 |      
 |      Examples:
 |      
 |      >>> class MyLayer(tf.keras.layers.Layer):
 |      ...   def call(self, inputs):
 |      ...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |      ...     return inputs
 |      >>> l = MyLayer()
 |      >>> l(np.ones((10, 1)))
 |      >>> l.losses
 |      [1.0]
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> x = tf.keras.layers.Dense(10)(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Activity regularization.
 |      >>> len(model.losses)
 |      0
 |      >>> model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      >>> len(model.losses)
 |      1
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
 |      >>> x = d(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Weight regularization.
 |      >>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      >>> model.losses
 |      [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
 |      
 |      Returns:
 |        A list of tensors.
 |  
 |  name
 |      Name of the layer (string), set in the constructor.
 |  
 |  non_trainable_variables
 |      Sequence of non-trainable variables owned by this module and its submodules.
 |      
 |      Note: this method uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  outbound_nodes
 |      Return Functional API nodes downstream of this layer.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  trainable_variables
 |      Sequence of trainable variables owned by this module and its submodules.
 |      
 |      Note: this method uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  updates
 |  
 |  variable_dtype
 |      Alias of `Layer.dtype`, the dtype of the weights.
 |  
 |  variables
 |      Returns the list of all layer variables/weights.
 |      
 |      Alias of `self.weights`.
 |      
 |      Note: This will not track the weights of nested `tf.Modules` that are
 |      not themselves Keras layers.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.src.engine.base_layer.Layer:
 |  
 |  activity_regularizer
 |      Optional regularizer function for the output of this layer.
 |  
 |  stateful
 |  
 |  supports_masking
 |      Whether this layer supports computing a mask using `compute_mask`.
 |  
 |  trainable
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from tensorflow.python.module.module.Module:
 |  
 |  with_name_scope(method) from builtins.type
 |      Decorator to automatically enter the module name scope.
 |      
 |      >>> class MyModule(tf.Module):
 |      ...   @tf.Module.with_name_scope
 |      ...   def __call__(self, x):
 |      ...     if not hasattr(self, 'w'):
 |      ...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
 |      ...     return tf.matmul(x, self.w)
 |      
 |      Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose
 |      names included the module name:
 |      
 |      >>> mod = MyModule()
 |      >>> mod(tf.ones([1, 2]))
 |      <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
 |      >>> mod.w
 |      <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
 |      numpy=..., dtype=float32)>
 |      
 |      Args:
 |        method: The method to wrap.
 |      
 |      Returns:
 |        The original method wrapped such that it enters the module's name scope.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from tensorflow.python.module.module.Module:
 |  
 |  name_scope
 |      Returns a `tf.name_scope` instance for this class.
 |  
 |  submodules
 |      Sequence of all sub-modules.
 |      
 |      Submodules are modules which are properties of this module, or found as
 |      properties of modules which are properties of this module (and so on).
 |      
 |      >>> a = tf.Module()
 |      >>> b = tf.Module()
 |      >>> c = tf.Module()
 |      >>> a.b = b
 |      >>> b.c = c
 |      >>> list(a.submodules) == [b, c]
 |      True
 |      >>> list(b.submodules) == [c]
 |      True
 |      >>> list(c.submodules) == []
 |      True
 |      
 |      Returns:
 |        A sequence of all submodules.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from tensorflow.python.trackable.base.Trackable:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)

Help on class Embedding in module keras.src.layers.core.embedding:

class Embedding(keras.src.engine.base_layer.Layer)
 |  Embedding(*args, **kwargs)
 |  
 |  Turns positive integers (indexes) into dense vectors of fixed size.
 |  
 |  e.g. `[[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]`
 |  
 |  This layer can only be used on positive integer inputs of a fixed range. The
 |  `tf.keras.layers.TextVectorization`, `tf.keras.layers.StringLookup`,
 |  and `tf.keras.layers.IntegerLookup` preprocessing layers can help prepare
 |  inputs for an `Embedding` layer.
 |  
 |  This layer accepts `tf.Tensor`, `tf.RaggedTensor` and `tf.SparseTensor`
 |  input.
 |  
 |  Example:
 |  
 |  >>> model = tf.keras.Sequential()
 |  >>> model.add(tf.keras.layers.Embedding(1000, 64, input_length=10))
 |  >>> # The model will take as input an integer matrix of size (batch,
 |  >>> # input_length), and the largest integer (i.e. word index) in the input
 |  >>> # should be no larger than 999 (vocabulary size).
 |  >>> # Now model.output_shape is (None, 10, 64), where `None` is the batch
 |  >>> # dimension.
 |  >>> input_array = np.random.randint(1000, size=(32, 10))
 |  >>> model.compile('rmsprop', 'mse')
 |  >>> output_array = model.predict(input_array)
 |  >>> print(output_array.shape)
 |  (32, 10, 64)
 |  
 |  Args:
 |    input_dim: Integer. Size of the vocabulary,
 |      i.e. maximum integer index + 1.
 |    output_dim: Integer. Dimension of the dense embedding.
 |    embeddings_initializer: Initializer for the `embeddings`
 |      matrix (see `keras.initializers`).
 |    embeddings_regularizer: Regularizer function applied to
 |      the `embeddings` matrix (see `keras.regularizers`).
 |    embeddings_constraint: Constraint function applied to
 |      the `embeddings` matrix (see `keras.constraints`).
 |    mask_zero: Boolean, whether or not the input value 0 is a special
 |      "padding" value that should be masked out. This is useful when using
 |      recurrent layers which may take variable length input. If this is
 |      `True`, then all subsequent layers in the model need to support masking
 |      or an exception will be raised. As a consequence, if `mask_zero` is set
 |      to `True`, index 0 cannot be used in the vocabulary (`input_dim` should
 |      equal the vocabulary size + 1); see the sketch after the output shape.
 |    input_length: Length of input sequences, when it is constant.
 |      This argument is required if you are going to connect
 |      `Flatten` then `Dense` layers downstream of the embedding
 |      (without it, the shape of the dense outputs cannot be computed).
 |    sparse: If True, calling this layer returns a `tf.SparseTensor`. If False,
 |      the layer returns a dense `tf.Tensor`. For an entry with no features in
 |      a sparse tensor (entry with value 0), the embedding vector of index 0 is
 |      returned by default.
 |  
 |  Input shape:
 |    2D tensor with shape: `(batch_size, input_length)`.
 |  
 |  Output shape:
 |    3D tensor with shape: `(batch_size, input_length, output_dim)`.
 |  
 |  **Note on variable placement:**
 |  By default, if a GPU is available, the embedding matrix will be placed on
 |  the GPU. This achieves the best performance, but it might cause issues:
 |  
 |  - You may be using an optimizer that does not support sparse GPU kernels.
 |  In this case you will see an error upon training your model.
 |  - Your embedding matrix may be too large to fit on your GPU. In this case
 |  you will see an Out Of Memory (OOM) error.
 |  
 |  In such cases, you should place the embedding matrix on the CPU memory.
 |  You can do so with a device scope, as such:
 |  
 |  ```python
 |  with tf.device('cpu:0'):
 |    embedding_layer = Embedding(...)
 |    embedding_layer.build()
 |  ```
 |  
 |  The pre-built `embedding_layer` instance can then be added to a `Sequential`
 |  model (e.g. `model.add(embedding_layer)`), called in a Functional model
 |  (e.g. `x = embedding_layer(x)`), or used in a subclassed model.
 |  
 |  Method resolution order:
 |      Embedding
 |      keras.src.engine.base_layer.Layer
 |      tensorflow.python.module.module.Module
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.utils.version_utils.LayerVersionSelector
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None, sparse=False, **kwargs)
 |  
 |  build = wrapper(instance, input_shape)
 |  
 |  call(self, inputs)
 |      This is where the layer's logic lives.
 |      
 |      The `call()` method may not create state (except in its first
 |      invocation, wrapping the creation of variables or other resources in
 |      `tf.init_scope()`).  It is recommended to create state, including
 |      `tf.Variable` instances and nested `Layer` instances,
 |       in `__init__()`, or in the `build()` method that is
 |      called automatically before `call()` executes for the first time.
 |      
 |      Args:
 |        inputs: Input tensor, or dict/list/tuple of input tensors.
 |          The first positional `inputs` argument is subject to special rules:
 |          - `inputs` must be explicitly passed. A layer cannot have zero
 |            arguments, and `inputs` cannot be provided via the default value
 |            of a keyword argument.
 |          - NumPy array or Python scalar values in `inputs` get cast as
 |            tensors.
 |          - Keras mask metadata is only collected from `inputs`.
 |          - Layers are built (`build(input_shape)` method)
 |            using shape info from `inputs` only.
 |          - `input_spec` compatibility is only checked against `inputs`.
 |          - Mixed precision input casting is only applied to `inputs`.
 |            If a layer has tensor arguments in `*args` or `**kwargs`, their
 |            casting behavior in mixed precision should be handled manually.
 |          - The SavedModel input specification is generated using `inputs`
 |            only.
 |          - Integration with various ecosystem packages like TFMOT, TFLite,
 |            TF.js, etc is only supported for `inputs` and not for tensors in
 |            positional and keyword arguments.
 |        *args: Additional positional arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |        **kwargs: Additional keyword arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |          The following optional keyword arguments are reserved:
 |          - `training`: Boolean scalar tensor of Python boolean indicating
 |            whether the `call` is meant for training or inference.
 |          - `mask`: Boolean input mask. If the layer's `call()` method takes a
 |            `mask` argument, its default value will be set to the mask
 |            generated for `inputs` by the previous layer (if `input` did come
 |            from a layer that generated a corresponding mask, i.e. if it came
 |            from a Keras layer with masking support).
 |      
 |      Returns:
 |        A tensor or list/tuple of tensors.
 |  
 |  compute_mask(self, inputs, mask=None)
 |      Computes an output mask tensor.
 |      
 |      Args:
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      Returns:
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  compute_output_shape = wrapper(instance, input_shape)
 |  
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Network` (one layer of abstraction above).
 |      
 |      Note that `get_config()` does not guarantee to return a fresh copy of
 |      dict every time it is called. The callers should make a copy of the
 |      returned dict if they want to modify it.
 |      
 |      Returns:
 |          Python dictionary.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  __call__(self, *args, **kwargs)
 |      Wraps `call`, applying pre- and post-processing steps.
 |      
 |      Args:
 |        *args: Positional arguments to be passed to `self.call`.
 |        **kwargs: Keyword arguments to be passed to `self.call`.
 |      
 |      Returns:
 |        Output tensor(s).
 |      
 |      Note:
 |        - The following optional keyword arguments are reserved for specific
 |          uses:
 |          * `training`: Boolean scalar tensor of Python boolean indicating
 |            whether the `call` is meant for training or inference.
 |          * `mask`: Boolean input mask.
 |        - If the layer's `call` method takes a `mask` argument (as some Keras
 |          layers do), its default value will be set to the mask generated
 |          for `inputs` by the previous layer (if `input` did come from
 |          a layer that generated a corresponding mask, i.e. if it came from
 |          a Keras layer with masking support).
 |        - If the layer is not built, the method will call `build`.
 |      
 |      Raises:
 |        ValueError: if the layer's `call` method returns None (an invalid
 |          value).
 |        RuntimeError: if `super().__init__()` was not called in the
 |          constructor.
 |  
 |  __delattr__(self, name)
 |      Implement delattr(self, name).
 |  
 |  __getstate__(self)
 |  
 |  __setattr__(self, name, value)
 |      Support self.foo = trackable syntax.
 |  
 |  __setstate__(self, state)
 |  
 |  add_loss(self, losses, **kwargs)
 |      Add loss tensor(s), potentially dependent on layer inputs.
 |      
 |      Some losses (for instance, activity regularization losses) may be
 |      dependent on the inputs passed when calling a layer. Hence, when reusing
 |      the same layer on different inputs `a` and `b`, some entries in
 |      `layer.losses` may be dependent on `a` and some on `b`. This method
 |      automatically keeps track of dependencies.
 |      
 |      This method can be used inside a subclassed layer or model's `call`
 |      function, in which case `losses` should be a Tensor or list of Tensors.
 |      
 |      Example:
 |      
 |      ```python
 |      class MyLayer(tf.keras.layers.Layer):
 |        def call(self, inputs):
 |          self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |          return inputs
 |      ```
 |      
 |      The same code works in distributed training: the input to `add_loss()`
 |      is treated like a regularization loss and averaged across replicas
 |      by the training loop (both built-in `Model.fit()` and compliant custom
 |      training loops).
 |      
 |      The `add_loss` method can also be called directly on a Functional Model
 |      during construction. In this case, any loss Tensors passed to this Model
 |      must be symbolic and be able to be traced back to the model's `Input`s.
 |      These losses become part of the model's topology and are tracked in
 |      `get_config`.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Activity regularization.
 |      model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      ```
 |      
 |      If this is not the case for your loss (if, for example, your loss
 |      references a `Variable` of one of the model's layers), you can wrap your
 |      loss in a zero-argument lambda. These losses are not tracked as part of
 |      the model's topology since they can't be serialized.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      d = tf.keras.layers.Dense(10)
 |      x = d(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Weight regularization.
 |      model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      ```
 |      
 |      Args:
 |        losses: Loss tensor, or list/tuple of tensors. Rather than tensors,
 |          losses may also be zero-argument callables which create a loss
 |          tensor.
 |        **kwargs: Used for backwards compatibility only.
 |  
 |  add_metric(self, value, name=None, **kwargs)
 |      Adds metric tensor to the layer.
 |      
 |      This method can be used inside the `call()` method of a subclassed layer
 |      or model.
 |      
 |      ```python
 |      class MyMetricLayer(tf.keras.layers.Layer):
 |        def __init__(self):
 |          super(MyMetricLayer, self).__init__(name='my_metric_layer')
 |          self.mean = tf.keras.metrics.Mean(name='metric_1')
 |      
 |        def call(self, inputs):
 |          self.add_metric(self.mean(inputs))
 |          self.add_metric(tf.reduce_sum(inputs), name='metric_2')
 |          return inputs
 |      ```
 |      
 |      This method can also be called directly on a Functional Model during
 |      construction. In this case, any tensor passed to this Model must
 |      be symbolic and be able to be traced back to the model's `Input`s. These
 |      metrics become part of the model's topology and are tracked when you
 |      save the model via `save()`.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(math_ops.reduce_sum(x), name='metric_1')
 |      ```
 |      
 |      Note: Calling `add_metric()` with the result of a metric object on a
 |      Functional Model, as shown in the example below, is not supported. This
 |      is because we cannot trace the metric result tensor back to the model's
 |      inputs.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
 |      ```
 |      
 |      Args:
 |        value: Metric tensor.
 |        name: String metric name.
 |        **kwargs: Additional keyword arguments for backward compatibility.
 |          Accepted values:
 |          `aggregation` - When the `value` tensor provided is not the result
 |          of calling a `keras.Metric` instance, it will be aggregated by
 |          default using a `keras.Metric.Mean`.
 |  
 |  add_update(self, updates)
 |      Add update op(s), potentially dependent on layer inputs.
 |      
 |      Weight updates (for instance, the updates of the moving mean and
 |      variance in a BatchNormalization layer) may be dependent on the inputs
 |      passed when calling a layer. Hence, when reusing the same layer on
 |      different inputs `a` and `b`, some entries in `layer.updates` may be
 |      dependent on `a` and some on `b`. This method automatically keeps track
 |      of dependencies.
 |      
 |      This call is ignored when eager execution is enabled (in that case,
 |      variable updates are run on the fly and thus do not need to be tracked
 |      for later execution).
 |      
 |      Args:
 |        updates: Update op, or list/tuple of update ops, or zero-arg callable
 |          that returns an update op. A zero-arg callable should be passed in
 |          order to disable running the updates by setting `trainable=False`
 |          on this Layer, when executing in Eager mode.
 |  
 |  add_variable(self, *args, **kwargs)
 |      Deprecated, do NOT use! Alias for `add_weight`.
 |  
 |  add_weight(self, name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=<VariableSynchronization.AUTO: 0>, aggregation=<VariableAggregationV2.NONE: 0>, **kwargs)
 |      Adds a new variable to the layer.
 |      
 |      Args:
 |        name: Variable name.
 |        shape: Variable shape. Defaults to scalar if unspecified.
 |        dtype: The type of the variable. Defaults to `self.dtype`.
 |        initializer: Initializer instance (callable).
 |        regularizer: Regularizer instance (callable).
 |        trainable: Boolean, whether the variable should be part of the layer's
 |          "trainable_variables" (e.g. variables, biases)
 |          or "non_trainable_variables" (e.g. BatchNorm mean and variance).
 |          Note that `trainable` cannot be `True` if `synchronization`
 |          is set to `ON_READ`.
 |        constraint: Constraint instance (callable).
 |        use_resource: Whether to use a `ResourceVariable` or not.
 |          See [this guide](
 |          https://www.tensorflow.org/guide/migrate/tf1_vs_tf2#resourcevariables_instead_of_referencevariables)
 |           for more information.
 |        synchronization: Indicates when a distributed variable will be
 |          aggregated. Accepted values are constants defined in the class
 |          `tf.VariableSynchronization`. By default the synchronization is set
 |          to `AUTO` and the current `DistributionStrategy` chooses when to
 |          synchronize. If `synchronization` is set to `ON_READ`, `trainable`
 |          must not be set to `True`.
 |        aggregation: Indicates how a distributed variable will be aggregated.
 |          Accepted values are constants defined in the class
 |          `tf.VariableAggregation`.
 |        **kwargs: Additional keyword arguments. Accepted values are `getter`,
 |          `collections`, `experimental_autocast` and `caching_device`.
 |      
 |      Returns:
 |        The variable created.
 |      
 |      Raises:
 |        ValueError: When giving unsupported dtype and no initializer or when
 |          trainable has been set to True with synchronization set as
 |          `ON_READ`.
 |  
 |  build_from_config(self, config)
 |      Builds the layer's states with the supplied config dict.
 |      
 |      By default, this method calls the `build(config["input_shape"])` method,
 |      which creates weights based on the layer's input shape in the supplied
 |      config. If your config contains other information needed to load the
 |      layer's state, you should override this method.
 |      
 |      Args:
 |          config: Dict containing the input shape associated with this layer.
 |  
 |  compute_output_signature(self, input_signature)
 |      Compute the output tensor signature of the layer based on the inputs.
 |      
 |      Unlike a TensorShape object, a TensorSpec object contains both shape
 |      and dtype information for a tensor. This method allows layers to provide
 |      output dtype information if it is different from the input dtype.
 |      For any layer that doesn't implement this function,
 |      the framework will fall back to use `compute_output_shape`, and will
 |      assume that the output dtype matches the input dtype.
 |      
 |      Args:
 |        input_signature: Single TensorSpec or nested structure of TensorSpec
 |          objects, describing a candidate input for the layer.
 |      
 |      Returns:
 |        Single TensorSpec or nested structure of TensorSpec objects,
 |          describing how the layer would transform the provided input.
 |      
 |      Raises:
 |        TypeError: If input_signature contains a non-TensorSpec object.
 |  
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      Returns:
 |          An integer count.
 |      
 |      Raises:
 |          ValueError: if the layer isn't yet built
 |            (in which case its weights aren't yet defined).
 |  
 |  finalize_state(self)
 |      Finalizes the layer's state after its weights are updated.
 |      
 |      This function can be overridden in a layer subclass and will be called
 |      after updating the layer's weights, to finalize any
 |      additional layer state after a weight update.
 |      
 |      This function will be called after weights of a layer have been restored
 |      from a loaded model.
 |  
 |  get_build_config(self)
 |      Returns a dictionary with the layer's input shape.
 |      
 |      This method returns a config dict that can be used by
 |      `build_from_config(config)` to create all states (e.g. Variables and
 |      Lookup tables) needed by the layer.
 |      
 |      By default, the config only contains the input shape that the layer
 |      was built with. If you're writing a custom layer that creates state in
 |      an unusual way, you should override this method to make sure this state
 |      is already created when Keras attempts to load its value upon model
 |      loading.
 |      
 |      Returns:
 |          A dict containing the input shape associated with the layer.
 |  
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first input node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first output node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_weights(self)
 |      Returns the current weights of the layer, as NumPy arrays.
 |      
 |      The weights of a layer represent the state of the layer. This function
 |      returns both trainable and non-trainable weight values associated with
 |      this layer as a list of NumPy arrays, which can in turn be used to load
 |      state into similarly parameterized layers.
 |      
 |      For example, a `Dense` layer returns a list of two values: the kernel
 |      matrix and the bias vector. These can be used to set the weights of
 |      another `Dense` layer:
 |      
 |      >>> layer_a = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(1.))
 |      >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
 |      >>> layer_a.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(2.))
 |      >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
 |      >>> layer_b.get_weights()
 |      [array([[2.],
 |             [2.],
 |             [2.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b.set_weights(layer_a.get_weights())
 |      >>> layer_b.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      
 |      Returns:
 |          Weights values as a list of NumPy arrays.
 |  
 |  load_own_variables(self, store)
 |      Loads the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is loaded upon calling `keras.models.load_model()`.
 |      
 |      Args:
 |          store: Dict from which the state of the model will be loaded.
 |  
 |  save_own_variables(self, store)
 |      Saves the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is saved upon calling `model.save()`.
 |      
 |      Args:
 |          store: Dict where the state of the model will be saved.
 |  
 |  set_weights(self, weights)
 |      Sets the weights of the layer, from NumPy arrays.
 |      
 |      The weights of a layer represent the state of the layer. This function
 |      sets the weight values from numpy arrays. The weight values should be
 |      passed in the order they are created by the layer. Note that the layer's
 |      weights must be instantiated before calling this function, by calling
 |      the layer.
 |      
 |      For example, a `Dense` layer returns a list of two values: the kernel
 |      matrix and the bias vector. These can be used to set the weights of
 |      another `Dense` layer:
 |      
 |      >>> layer_a = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(1.))
 |      >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
 |      >>> layer_a.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(2.))
 |      >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
 |      >>> layer_b.get_weights()
 |      [array([[2.],
 |             [2.],
 |             [2.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b.set_weights(layer_a.get_weights())
 |      >>> layer_b.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      
 |      Args:
 |        weights: a list of NumPy arrays. The number
 |          of arrays and their shapes must match the
 |          number and shapes of the layer's weights
 |          (i.e. it should match the
 |          output of `get_weights`).
 |      
 |      Raises:
 |        ValueError: If the provided weights list does not match the
 |          layer's specifications.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  from_config(config) from builtins.type
 |      Creates a layer from its config.
 |      
 |      This method is the reverse of `get_config`,
 |      capable of instantiating the same layer from the config
 |      dictionary. It does not handle layer connectivity
 |      (handled by Network), nor weights (handled by `set_weights`).
 |      
 |      Args:
 |          config: A Python dictionary, typically the
 |              output of get_config.
 |      
 |      Returns:
 |          A layer instance.
 |  
 |  ----------------------------------------------------------------------
 |  Static methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  __new__(cls, *args, **kwargs)
 |      Create and return a new object.  See help(type) for accurate signature.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.base_layer.Layer:
 |  
 |  compute_dtype
 |      The dtype of the layer's computations.
 |      
 |      This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.dtype`, the dtype of
 |      the weights.
 |      
 |      Layers automatically cast their inputs to the compute dtype, which
 |      causes computations and the output to be in the compute dtype as well.
 |      This is done by the base Layer class in `Layer.__call__`, so you do not
 |      have to insert these casts if implementing your own layer.
 |      
 |      Layers often perform certain internal computations in higher precision
 |      when `compute_dtype` is float16 or bfloat16 for numeric stability. The
 |      output will still typically be float16 or bfloat16 in such cases.
 |      
 |      Returns:
 |        The layer's compute dtype.
 |  
 |  dtype
 |      The dtype of the layer weights.
 |      
 |      This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.compute_dtype`, the
 |      dtype of the layer's computations.
 |  
 |  dtype_policy
 |      The dtype policy associated with this layer.
 |      
 |      This is an instance of a `tf.keras.mixed_precision.Policy`.
 |  
 |  dynamic
 |      Whether the layer is dynamic (eager-only); set in the constructor.
 |  
 |  inbound_nodes
 |      Return Functional API nodes upstream of this layer.
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input tensor or list of input tensors.
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |        AttributeError: If no inbound nodes are found.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  input_shape
 |      Retrieves the input shape(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer, or if all inputs
 |      have the same shape.
 |      
 |      Returns:
 |          Input shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per input tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined input_shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  losses
 |      List of losses added using the `add_loss()` API.
 |      
 |      Variable regularization tensors are created when this property is
 |      accessed, so it is eager safe: accessing `losses` under a
 |      `tf.GradientTape` will propagate gradients back to the corresponding
 |      variables.
 |      
 |      Examples:
 |      
 |      >>> class MyLayer(tf.keras.layers.Layer):
 |      ...   def call(self, inputs):
 |      ...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |      ...     return inputs
 |      >>> l = MyLayer()
 |      >>> l(np.ones((10, 1)))
 |      >>> l.losses
 |      [1.0]
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> x = tf.keras.layers.Dense(10)(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Activity regularization.
 |      >>> len(model.losses)
 |      0
 |      >>> model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      >>> len(model.losses)
 |      1
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
 |      >>> x = d(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Weight regularization.
 |      >>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      >>> model.losses
 |      [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
 |      
 |      Returns:
 |        A list of tensors.
 |  
 |  metrics
 |      List of metrics added using the `add_metric()` API.
 |      
 |      Example:
 |      
 |      >>> input = tf.keras.layers.Input(shape=(3,))
 |      >>> d = tf.keras.layers.Dense(2)
 |      >>> output = d(input)
 |      >>> d.add_metric(tf.reduce_max(output), name='max')
 |      >>> d.add_metric(tf.reduce_min(output), name='min')
 |      >>> [m.name for m in d.metrics]
 |      ['max', 'min']
 |      
 |      Returns:
 |        A list of `Metric` objects.
 |  
 |  name
 |      Name of the layer (string), set in the constructor.
 |  
 |  non_trainable_variables
 |      Sequence of non-trainable variables owned by this module and its submodules.
 |      
 |      Note: this method uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  non_trainable_weights
 |      List of all non-trainable weights tracked by this layer.
 |      
 |      Non-trainable weights are *not* updated during training. They are
 |      expected to be updated manually in `call()`.
 |      
 |      Returns:
 |        A list of non-trainable variables.
 |  
 |  outbound_nodes
 |      Return Functional API nodes downstream of this layer.
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one output,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |        Output tensor or list of output tensors.
 |      
 |      Raises:
 |        AttributeError: if the layer is connected to more than one incoming
 |          layer.
 |        RuntimeError: if called in Eager mode.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  output_shape
 |      Retrieves the output shape(s) of a layer.
 |      
 |      Only applicable if the layer has one output,
 |      or if all outputs have the same shape.
 |      
 |      Returns:
 |          Output shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per output tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined output shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  trainable_variables
 |      Sequence of trainable variables owned by this module and its submodules.
 |      
 |      Note: this method uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  trainable_weights
 |      List of all trainable weights tracked by this layer.
 |      
 |      Trainable weights are updated via gradient descent during training.
 |      
 |      Returns:
 |        A list of trainable variables.
 |  
 |  updates
 |  
 |  variable_dtype
 |      Alias of `Layer.dtype`, the dtype of the weights.
 |  
 |  variables
 |      Returns the list of all layer variables/weights.
 |      
 |      Alias of `self.weights`.
 |      
 |      Note: This will not track the weights of nested `tf.Modules` that are
 |      not themselves Keras layers.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  weights
 |      Returns the list of all layer variables/weights.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.src.engine.base_layer.Layer:
 |  
 |  activity_regularizer
 |      Optional regularizer function for the output of this layer.
 |  
 |  input_spec
 |      `InputSpec` instance(s) describing the input format for this layer.
 |      
 |      When you create a layer subclass, you can set `self.input_spec` to
 |      enable the layer to run input compatibility checks when it is called.
 |      Consider a `Conv2D` layer: it can only be called on a single input
 |      tensor of rank 4. As such, you can set, in `__init__()`:
 |      
 |      ```python
 |      self.input_spec = tf.keras.layers.InputSpec(ndim=4)
 |      ```
 |      
 |      Now, if you try to call the layer on an input that isn't rank 4
 |      (for instance, an input of shape `(2,)`), it will raise a
 |      nicely-formatted error:
 |      
 |      ```
 |      ValueError: Input 0 of layer conv2d is incompatible with the layer:
 |      expected ndim=4, found ndim=1. Full shape received: [2]
 |      ```
 |      
 |      Input checks that can be specified via `input_spec` include:
 |      - Structure (e.g. a single input, a list of 2 inputs, etc)
 |      - Shape
 |      - Rank (ndim)
 |      - Dtype
 |      
 |      For more information, see `tf.keras.layers.InputSpec`.
 |      
 |      Returns:
 |        A `tf.keras.layers.InputSpec` instance, or nested structure thereof.
 |  
 |  stateful
 |  
 |  supports_masking
 |      Whether this layer supports computing a mask using `compute_mask`.
 |  
 |  trainable
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from tensorflow.python.module.module.Module:
 |  
 |  with_name_scope(method) from builtins.type
 |      Decorator to automatically enter the module name scope.
 |      
 |      >>> class MyModule(tf.Module):
 |      ...   @tf.Module.with_name_scope
 |      ...   def __call__(self, x):
 |      ...     if not hasattr(self, 'w'):
 |      ...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
 |      ...     return tf.matmul(x, self.w)
 |      
 |      Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose
 |      names included the module name:
 |      
 |      >>> mod = MyModule()
 |      >>> mod(tf.ones([1, 2]))
 |      <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
 |      >>> mod.w
 |      <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
 |      numpy=..., dtype=float32)>
 |      
 |      Args:
 |        method: The method to wrap.
 |      
 |      Returns:
 |        The original method wrapped such that it enters the module's name scope.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from tensorflow.python.module.module.Module:
 |  
 |  name_scope
 |      Returns a `tf.name_scope` instance for this class.
 |  
 |  submodules
 |      Sequence of all sub-modules.
 |      
 |      Submodules are modules which are properties of this module, or found as
 |      properties of modules which are properties of this module (and so on).
 |      
 |      >>> a = tf.Module()
 |      >>> b = tf.Module()
 |      >>> c = tf.Module()
 |      >>> a.b = b
 |      >>> b.c = c
 |      >>> list(a.submodules) == [b, c]
 |      True
 |      >>> list(b.submodules) == [c]
 |      True
 |      >>> list(c.submodules) == []
 |      True
 |      
 |      Returns:
 |        A sequence of all submodules.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from tensorflow.python.trackable.base.Trackable:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)

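To tie this back to the assignment: once `padded_sequences` turns each tweet into a fixed-length vector of vocabulary indices (with 0 used for padding), an `Embedding` layer maps those indices to dense vectors. Here is a minimal sketch, with made-up stand-ins for the vocabulary size, embedding dimension, and padded batch:

```python
import numpy as np
import tensorflow as tf

# Made-up stand-ins: 9 real tokens (indices 1..9) plus the padding index 0,
# so input_dim = 10; two tweets padded to a fixed length of 5.
padded_batch = np.array([[2, 7, 4, 0, 0],
                         [5, 1, 9, 3, 0]])

# mask_zero=True marks index 0 as padding so downstream layers can ignore it.
embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=16, mask_zero=True)
vectors = embedding(padded_batch)
print(vectors.shape)  # (2, 5, 16): (batch_size, input_length, output_dim)
```

The `GlobalAveragePooling1D` layer documented next averages over the length dimension, collapsing this `(batch_size, input_length, output_dim)` tensor down to `(batch_size, output_dim)`.
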
Help on class GlobalAveragePooling1D in module keras.src.layers.pooling.global_average_pooling1d:

class GlobalAveragePooling1D(keras.src.layers.pooling.base_global_pooling1d.GlobalPooling1D)
 |  GlobalAveragePooling1D(*args, **kwargs)
 |  
 |  Global average pooling operation for temporal data.
 |  
 |  Examples:
 |  
 |  >>> input_shape = (2, 3, 4)
 |  >>> x = tf.random.normal(input_shape)
 |  >>> y = tf.keras.layers.GlobalAveragePooling1D()(x)
 |  >>> print(y.shape)
 |  (2, 4)
 |  
 |  Args:
 |    data_format: A string,
 |      one of `channels_last` (default) or `channels_first`.
 |      The ordering of the dimensions in the inputs.
 |      `channels_last` corresponds to inputs with shape
 |      `(batch, steps, features)` while `channels_first`
 |      corresponds to inputs with shape
 |      `(batch, features, steps)`.
 |    keepdims: A boolean, whether to keep the temporal dimension or not.
 |      If `keepdims` is `False` (default), the rank of the tensor is reduced
 |      for spatial dimensions.
 |      If `keepdims` is `True`, the temporal dimension is retained with
 |      length 1.
 |      The behavior is the same as for `tf.reduce_mean` or `np.mean`.
 |  
 |  Call arguments:
 |    inputs: A 3D tensor.
 |    mask: Binary tensor of shape `(batch_size, steps)` indicating whether
 |      a given step should be masked (excluded from the average).
 |  
 |  Input shape:
 |    - If `data_format='channels_last'`:
 |      3D tensor with shape:
 |      `(batch_size, steps, features)`
 |    - If `data_format='channels_first'`:
 |      3D tensor with shape:
 |      `(batch_size, features, steps)`
 |  
 |  Output shape:
 |    - If `keepdims`=False:
 |      2D tensor with shape `(batch_size, features)`.
 |    - If `keepdims`=True:
 |      - If `data_format='channels_last'`:
 |        3D tensor with shape `(batch_size, 1, features)`
 |      - If `data_format='channels_first'`:
 |        3D tensor with shape `(batch_size, features, 1)`
 |  
 |  Method resolution order:
 |      GlobalAveragePooling1D
 |      keras.src.layers.pooling.base_global_pooling1d.GlobalPooling1D
 |      keras.src.engine.base_layer.Layer
 |      tensorflow.python.module.module.Module
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.utils.version_utils.LayerVersionSelector
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, data_format='channels_last', **kwargs)
 |  
 |  call(self, inputs, mask=None)
 |      This is where the layer's logic lives.
 |      
 |      The `call()` method may not create state (except in its first
 |      invocation, wrapping the creation of variables or other resources in
 |      `tf.init_scope()`).  It is recommended to create state, including
 |      `tf.Variable` instances and nested `Layer` instances,
 |       in `__init__()`, or in the `build()` method that is
 |      called automatically before `call()` executes for the first time.
 |      
 |      Args:
 |        inputs: Input tensor, or dict/list/tuple of input tensors.
 |          The first positional `inputs` argument is subject to special rules:
 |          - `inputs` must be explicitly passed. A layer cannot have zero
 |            arguments, and `inputs` cannot be provided via the default value
 |            of a keyword argument.
 |          - NumPy array or Python scalar values in `inputs` get cast as
 |            tensors.
 |          - Keras mask metadata is only collected from `inputs`.
 |          - Layers are built (`build(input_shape)` method)
 |            using shape info from `inputs` only.
 |          - `input_spec` compatibility is only checked against `inputs`.
 |          - Mixed precision input casting is only applied to `inputs`.
 |            If a layer has tensor arguments in `*args` or `**kwargs`, their
 |            casting behavior in mixed precision should be handled manually.
 |          - The SavedModel input specification is generated using `inputs`
 |            only.
 |          - Integration with various ecosystem packages like TFMOT, TFLite,
 |            TF.js, etc is only supported for `inputs` and not for tensors in
 |            positional and keyword arguments.
 |        *args: Additional positional arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |        **kwargs: Additional keyword arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |          The following optional keyword arguments are reserved:
 |          - `training`: Boolean scalar tensor of Python boolean indicating
 |            whether the `call` is meant for training or inference.
 |          - `mask`: Boolean input mask. If the layer's `call()` method takes a
 |            `mask` argument, its default value will be set to the mask
 |            generated for `inputs` by the previous layer (if `input` did come
 |            from a layer that generated a corresponding mask, i.e. if it came
 |            from a Keras layer with masking support).
 |      
 |      Returns:
 |        A tensor or list/tuple of tensors.
 |  
 |  compute_mask(self, inputs, mask=None)
 |      Computes an output mask tensor.
 |      
 |      Args:
 |          inputs: Tensor or list of tensors.
 |          mask: Tensor or list of tensors.
 |      
 |      Returns:
 |          None or a tensor (or list of tensors,
 |              one per output tensor of the layer).
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.layers.pooling.base_global_pooling1d.GlobalPooling1D:
 |  
 |  build(self, input_shape)
 |      Creates the variables of the layer (for subclass implementers).
 |      
 |      This is a method that implementers of subclasses of `Layer` or `Model`
 |      can override if they need a state-creation step in-between
 |      layer instantiation and layer call. It is invoked automatically before
 |      the first execution of `call()`.
 |      
 |      This is typically used to create the weights of `Layer` subclasses
 |      (at the discretion of the subclass implementer).
 |      
 |      Args:
 |        input_shape: Instance of `TensorShape`, or list of instances of
 |          `TensorShape` if the layer expects a list of inputs
 |          (one instance per input).
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      This method will cause the layer's state to be built, if that has not
 |      happened before. This requires that the layer will later be used with
 |      inputs that match the input shape provided here.
 |      
 |      Args:
 |          input_shape: Shape tuple (tuple of integers) or `tf.TensorShape`,
 |              or structure of shape tuples / `tf.TensorShape` instances
 |              (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      Returns:
 |          A `tf.TensorShape` instance
 |          or structure of `tf.TensorShape` instances.
 |  
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Network` (one layer of abstraction above).
 |      
 |      Note that `get_config()` does not guarantee to return a fresh copy of
 |      dict every time it is called. The callers should make a copy of the
 |      returned dict if they want to modify it.
 |      
 |      Returns:
 |          Python dictionary.
 |  
 |  ----------------------------------------------------------------------
 |  Methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  __call__(self, *args, **kwargs)
 |      Wraps `call`, applying pre- and post-processing steps.
 |      
 |      Args:
 |        *args: Positional arguments to be passed to `self.call`.
 |        **kwargs: Keyword arguments to be passed to `self.call`.
 |      
 |      Returns:
 |        Output tensor(s).
 |      
 |      Note:
 |        - The following optional keyword arguments are reserved for specific
 |          uses:
 |          * `training`: Boolean scalar tensor of Python boolean indicating
 |            whether the `call` is meant for training or inference.
 |          * `mask`: Boolean input mask.
 |        - If the layer's `call` method takes a `mask` argument (as some Keras
 |          layers do), its default value will be set to the mask generated
 |          for `inputs` by the previous layer (if `input` did come from
 |          a layer that generated a corresponding mask, i.e. if it came from
 |          a Keras layer with masking support).
 |        - If the layer is not built, the method will call `build`.
 |      
 |      Raises:
 |        ValueError: if the layer's `call` method returns None (an invalid
 |          value).
 |        RuntimeError: if `super().__init__()` was not called in the
 |          constructor.
 |  
 |  __delattr__(self, name)
 |      Implement delattr(self, name).
 |  
 |  __getstate__(self)
 |  
 |  __setattr__(self, name, value)
 |      Support self.foo = trackable syntax.
 |  
 |  __setstate__(self, state)
 |  
 |  add_loss(self, losses, **kwargs)
 |      Add loss tensor(s), potentially dependent on layer inputs.
 |      
 |      Some losses (for instance, activity regularization losses) may be
 |      dependent on the inputs passed when calling a layer. Hence, when reusing
 |      the same layer on different inputs `a` and `b`, some entries in
 |      `layer.losses` may be dependent on `a` and some on `b`. This method
 |      automatically keeps track of dependencies.
 |      
 |      This method can be used inside a subclassed layer or model's `call`
 |      function, in which case `losses` should be a Tensor or list of Tensors.
 |      
 |      Example:
 |      
 |      ```python
 |      class MyLayer(tf.keras.layers.Layer):
 |        def call(self, inputs):
 |          self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |          return inputs
 |      ```
 |      
 |      The same code works in distributed training: the input to `add_loss()`
 |      is treated like a regularization loss and averaged across replicas
 |      by the training loop (both built-in `Model.fit()` and compliant custom
 |      training loops).
 |      
 |      The `add_loss` method can also be called directly on a Functional Model
 |      during construction. In this case, any loss Tensors passed to this Model
 |      must be symbolic and be able to be traced back to the model's `Input`s.
 |      These losses become part of the model's topology and are tracked in
 |      `get_config`.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Activity regularization.
 |      model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      ```
 |      
 |      If this is not the case for your loss (if, for example, your loss
 |      references a `Variable` of one of the model's layers), you can wrap your
 |      loss in a zero-argument lambda. These losses are not tracked as part of
 |      the model's topology since they can't be serialized.
 |      
 |      Example:
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      d = tf.keras.layers.Dense(10)
 |      x = d(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      # Weight regularization.
 |      model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      ```
 |      
 |      Args:
 |        losses: Loss tensor, or list/tuple of tensors. Rather than tensors,
 |          losses may also be zero-argument callables which create a loss
 |          tensor.
 |        **kwargs: Used for backwards compatibility only.
 |  
 |  add_metric(self, value, name=None, **kwargs)
 |      Adds metric tensor to the layer.
 |      
 |      This method can be used inside the `call()` method of a subclassed layer
 |      or model.
 |      
 |      ```python
 |      class MyMetricLayer(tf.keras.layers.Layer):
 |        def __init__(self):
 |          super(MyMetricLayer, self).__init__(name='my_metric_layer')
 |          self.mean = tf.keras.metrics.Mean(name='metric_1')
 |      
 |        def call(self, inputs):
 |          self.add_metric(self.mean(inputs))
 |          self.add_metric(tf.reduce_sum(inputs), name='metric_2')
 |          return inputs
 |      ```
 |      
 |      This method can also be called directly on a Functional Model during
 |      construction. In this case, any tensor passed to this Model must
 |      be symbolic and be able to be traced back to the model's `Input`s. These
 |      metrics become part of the model's topology and are tracked when you
 |      save the model via `save()`.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(math_ops.reduce_sum(x), name='metric_1')
 |      ```
 |      
 |      Note: Calling `add_metric()` with the result of a metric object on a
 |      Functional Model, as shown in the example below, is not supported. This
 |      is because we cannot trace the metric result tensor back to the model's
 |      inputs.
 |      
 |      ```python
 |      inputs = tf.keras.Input(shape=(10,))
 |      x = tf.keras.layers.Dense(10)(inputs)
 |      outputs = tf.keras.layers.Dense(1)(x)
 |      model = tf.keras.Model(inputs, outputs)
 |      model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
 |      ```
 |      
 |      Args:
 |        value: Metric tensor.
 |        name: String metric name.
 |        **kwargs: Additional keyword arguments for backward compatibility.
 |          Accepted values:
 |          `aggregation` - When the `value` tensor provided is not the result
 |          of calling a `keras.Metric` instance, it will be aggregated by
 |          default using a `keras.Metric.Mean`.
 |  
 |  add_update(self, updates)
 |      Add update op(s), potentially dependent on layer inputs.
 |      
 |      Weight updates (for instance, the updates of the moving mean and
 |      variance in a BatchNormalization layer) may be dependent on the inputs
 |      passed when calling a layer. Hence, when reusing the same layer on
 |      different inputs `a` and `b`, some entries in `layer.updates` may be
 |      dependent on `a` and some on `b`. This method automatically keeps track
 |      of dependencies.
 |      
 |      This call is ignored when eager execution is enabled (in that case,
 |      variable updates are run on the fly and thus do not need to be tracked
 |      for later execution).
 |      
 |      Args:
 |        updates: Update op, or list/tuple of update ops, or zero-arg callable
 |          that returns an update op. A zero-arg callable should be passed in
 |          order to disable running the updates by setting `trainable=False`
 |          on this Layer, when executing in Eager mode.
 |  
 |  add_variable(self, *args, **kwargs)
 |      Deprecated, do NOT use! Alias for `add_weight`.
 |  
 |  add_weight(self, name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=<VariableSynchronization.AUTO: 0>, aggregation=<VariableAggregationV2.NONE: 0>, **kwargs)
 |      Adds a new variable to the layer.
 |      
 |      Args:
 |        name: Variable name.
 |        shape: Variable shape. Defaults to scalar if unspecified.
 |        dtype: The type of the variable. Defaults to `self.dtype`.
 |        initializer: Initializer instance (callable).
 |        regularizer: Regularizer instance (callable).
 |        trainable: Boolean, whether the variable should be part of the layer's
 |          "trainable_variables" (e.g. variables, biases)
 |          or "non_trainable_variables" (e.g. BatchNorm mean and variance).
 |          Note that `trainable` cannot be `True` if `synchronization`
 |          is set to `ON_READ`.
 |        constraint: Constraint instance (callable).
 |        use_resource: Whether to use a `ResourceVariable` or not.
 |          See [this guide](
 |          https://www.tensorflow.org/guide/migrate/tf1_vs_tf2#resourcevariables_instead_of_referencevariables)
 |           for more information.
 |        synchronization: Indicates when a distributed variable will be
 |          aggregated. Accepted values are constants defined in the class
 |          `tf.VariableSynchronization`. By default the synchronization is set
 |          to `AUTO` and the current `DistributionStrategy` chooses when to
 |          synchronize. If `synchronization` is set to `ON_READ`, `trainable`
 |          must not be set to `True`.
 |        aggregation: Indicates how a distributed variable will be aggregated.
 |          Accepted values are constants defined in the class
 |          `tf.VariableAggregation`.
 |        **kwargs: Additional keyword arguments. Accepted values are `getter`,
 |          `collections`, `experimental_autocast` and `caching_device`.
 |      
 |      Returns:
 |        The variable created.
 |      
 |      Raises:
 |        ValueError: When giving unsupported dtype and no initializer or when
 |          trainable has been set to True with synchronization set as
 |          `ON_READ`.
 |  
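
As a usage sketch (not part of the original help output): subclassed layers typically call `add_weight` inside `build()`, once the incoming feature dimension is known. The layer name `MyDense` below is illustrative.

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    """Minimal Dense-like layer built around `add_weight`."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Create the kernel and bias once the input dimension is known.
        self.kernel = self.add_weight(
            name='kernel', shape=(input_shape[-1], self.units),
            initializer='glorot_uniform', trainable=True)
        self.bias = self.add_weight(
            name='bias', shape=(self.units,),
            initializer='zeros', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel) + self.bias

layer = MyDense(4)
print(layer(tf.ones((2, 3))).shape)  # (2, 4)
```
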
 |  build_from_config(self, config)
 |      Builds the layer's states with the supplied config dict.
 |      
 |      By default, this method calls the `build(config["input_shape"])` method,
 |      which creates weights based on the layer's input shape in the supplied
 |      config. If your config contains other information needed to load the
 |      layer's state, you should override this method.
 |      
 |      Args:
 |          config: Dict containing the input shape associated with this layer.
 |  
 |  compute_output_signature(self, input_signature)
 |      Compute the output tensor signature of the layer based on the inputs.
 |      
 |      Unlike a TensorShape object, a TensorSpec object contains both shape
 |      and dtype information for a tensor. This method allows layers to provide
 |      output dtype information if it is different from the input dtype.
 |      For any layer that doesn't implement this function,
 |      the framework will fall back to use `compute_output_shape`, and will
 |      assume that the output dtype matches the input dtype.
 |      
 |      Args:
 |        input_signature: Single TensorSpec or nested structure of TensorSpec
 |          objects, describing a candidate input for the layer.
 |      
 |      Returns:
 |        Single TensorSpec or nested structure of TensorSpec objects,
 |          describing how the layer would transform the provided input.
 |      
 |      Raises:
 |        TypeError: If input_signature contains a non-TensorSpec object.
 |  
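
For illustration (a sketch assuming TF 2.x defaults): `Dense` does not override this method, so the base implementation falls back to `compute_output_shape` and keeps the input dtype.

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
spec = tf.TensorSpec(shape=(None, 8), dtype=tf.float32)

# Falls back to compute_output_shape and reuses the float32 dtype.
print(layer.compute_output_signature(spec))
# TensorSpec(shape=(None, 4), dtype=tf.float32, name=None)
```
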
 |  count_params(self)
 |      Count the total number of scalars composing the weights.
 |      
 |      Returns:
 |          An integer count.
 |      
 |      Raises:
 |          ValueError: if the layer isn't yet built
 |            (in which case its weights aren't yet defined).
 |  
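
A quick check (illustrative): the layer must be built before its parameters can be counted.

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 8))        # weights must exist before counting
print(layer.count_params())   # 8*4 kernel entries + 4 biases = 36
```
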
 |  finalize_state(self)
 |      Finalizes the layer's state after updating the layer's weights.
 |      
 |      This method can be overridden in a layer subclass and will be called
 |      after the layer's weights are updated. Use it to finalize any
 |      additional layer state after a weight update.
 |      
 |      This function will be called after weights of a layer have been restored
 |      from a loaded model.
 |  
 |  get_build_config(self)
 |      Returns a dictionary with the layer's input shape.
 |      
 |      This method returns a config dict that can be used by
 |      `build_from_config(config)` to create all states (e.g. Variables and
 |      Lookup tables) needed by the layer.
 |      
 |      By default, the config only contains the input shape that the layer
 |      was built with. If you're writing a custom layer that creates state in
 |      an unusual way, you should override this method to make sure this state
 |      is already created when Keras attempts to load its value upon model
 |      loading.
 |      
 |      Returns:
 |          A dict containing the input shape associated with the layer.
 |  
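
A sketch of how the two build-config methods pair up (assuming a Keras version that provides them, as the `keras.src` paths above suggest):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer(tf.zeros((1, 8)))   # building via __call__ records the input shape

# get_build_config() returns the recorded input shape; feeding it to
# build_from_config() recreates identically shaped (fresh) weights.
clone = tf.keras.layers.Dense.from_config(layer.get_config())
clone.build_from_config(layer.get_build_config())
print(clone.kernel.shape)  # (8, 4)
```
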
 |  get_input_at(self, node_index)
 |      Retrieves the input tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first input node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_input_mask_at(self, node_index)
 |      Retrieves the input mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple inputs).
 |  
 |  get_input_shape_at(self, node_index)
 |      Retrieves the input shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple inputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_at(self, node_index)
 |      Retrieves the output tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first output node of the layer.
 |      
 |      Returns:
 |          A tensor (or list of tensors if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_output_mask_at(self, node_index)
 |      Retrieves the output mask tensor(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A mask tensor
 |          (or list of tensors if the layer has multiple outputs).
 |  
 |  get_output_shape_at(self, node_index)
 |      Retrieves the output shape(s) of a layer at a given node.
 |      
 |      Args:
 |          node_index: Integer, index of the node
 |              from which to retrieve the attribute.
 |              E.g. `node_index=0` will correspond to the
 |              first time the layer was called.
 |      
 |      Returns:
 |          A shape tuple
 |          (or list of shape tuples if the layer has multiple outputs).
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |  
 |  get_weights(self)
 |      Returns the current weights of the layer, as NumPy arrays.
 |      
 |      The weights of a layer represent the state of the layer. This function
 |      returns both trainable and non-trainable weight values associated with
 |      this layer as a list of NumPy arrays, which can in turn be used to load
 |      state into similarly parameterized layers.
 |      
 |      For example, a `Dense` layer returns a list of two values: the kernel
 |      matrix and the bias vector. These can be used to set the weights of
 |      another `Dense` layer:
 |      
 |      >>> layer_a = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(1.))
 |      >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
 |      >>> layer_a.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(2.))
 |      >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
 |      >>> layer_b.get_weights()
 |      [array([[2.],
 |             [2.],
 |             [2.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b.set_weights(layer_a.get_weights())
 |      >>> layer_b.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      
 |      Returns:
 |          Weights values as a list of NumPy arrays.
 |  
 |  load_own_variables(self, store)
 |      Loads the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is loaded upon calling `keras.models.load_model()`.
 |      
 |      Args:
 |          store: Dict from which the state of the model will be loaded.
 |  
 |  save_own_variables(self, store)
 |      Saves the state of the layer.
 |      
 |      You can override this method to take full control of how the state of
 |      the layer is saved upon calling `model.save()`.
 |      
 |      Args:
 |          store: Dict where the state of the model will be saved.
 |  
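
A minimal sketch of the override pattern described above; the class name `PortableDense` and the explicit `store` keys (`'kernel'`, `'bias'`) are illustrative choices, not names required by Keras:

```python
import tensorflow as tf

class PortableDense(tf.keras.layers.Dense):
    """Sketch: store weights under explicit keys instead of the defaults."""

    def save_own_variables(self, store):
        store['kernel'] = self.kernel.numpy()
        store['bias'] = self.bias.numpy()

    def load_own_variables(self, store):
        # Keras builds the layer (so the weights exist) before loading.
        self.kernel.assign(store['kernel'])
        self.bias.assign(store['bias'])
```
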
 |  set_weights(self, weights)
 |      Sets the weights of the layer, from NumPy arrays.
 |      
 |      The weights of a layer represent the state of the layer. This function
 |      sets the weight values from numpy arrays. The weight values should be
 |      passed in the order they are created by the layer. Note that the layer's
 |      weights must be instantiated before calling this function, by calling
 |      the layer.
 |      
 |      For example, a `Dense` layer returns a list of two values: the kernel
 |      matrix and the bias vector. These can be used to set the weights of
 |      another `Dense` layer:
 |      
 |      >>> layer_a = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(1.))
 |      >>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
 |      >>> layer_a.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b = tf.keras.layers.Dense(1,
 |      ...   kernel_initializer=tf.constant_initializer(2.))
 |      >>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
 |      >>> layer_b.get_weights()
 |      [array([[2.],
 |             [2.],
 |             [2.]], dtype=float32), array([0.], dtype=float32)]
 |      >>> layer_b.set_weights(layer_a.get_weights())
 |      >>> layer_b.get_weights()
 |      [array([[1.],
 |             [1.],
 |             [1.]], dtype=float32), array([0.], dtype=float32)]
 |      
 |      Args:
 |        weights: a list of NumPy arrays. The number
 |          of arrays and their shape must match
 |          number of the dimensions of the weights
 |          of the layer (i.e. it should match the
 |          output of `get_weights`).
 |      
 |      Raises:
 |        ValueError: If the provided weights list does not match the
 |          layer's specifications.
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  from_config(config) from builtins.type
 |      Creates a layer from its config.
 |      
 |      This method is the reverse of `get_config`,
 |      capable of instantiating the same layer from the config
 |      dictionary. It does not handle layer connectivity
 |      (handled by Network), nor weights (handled by `set_weights`).
 |      
 |      Args:
 |          config: A Python dictionary, typically the
 |              output of get_config.
 |      
 |      Returns:
 |          A layer instance.
 |  
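
A round-trip sketch pairing `get_config` with `from_config`:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4, activation='relu', name='dense_a')

# The config dict captures the constructor arguments, not the weights.
clone = tf.keras.layers.Dense.from_config(layer.get_config())
print(clone.units, clone.activation.__name__)  # 4 relu
```
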
 |  ----------------------------------------------------------------------
 |  Static methods inherited from keras.src.engine.base_layer.Layer:
 |  
 |  __new__(cls, *args, **kwargs)
 |      Create and return a new object.  See help(type) for accurate signature.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.base_layer.Layer:
 |  
 |  compute_dtype
 |      The dtype of the layer's computations.
 |      
 |      This is equivalent to `Layer.dtype_policy.compute_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.dtype`, the dtype of
 |      the weights.
 |      
 |      Layers automatically cast their inputs to the compute dtype, which
 |      causes computations and the output to be in the compute dtype as well.
 |      This is done by the base Layer class in `Layer.__call__`, so you do not
 |      have to insert these casts if implementing your own layer.
 |      
 |      Layers often perform certain internal computations in higher precision
 |      when `compute_dtype` is float16 or bfloat16 for numeric stability. The
 |      output will still typically be float16 or bfloat16 in such cases.
 |      
 |      Returns:
 |        The layer's compute dtype.
 |  
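
To see the two dtypes diverge, one can enable mixed precision (illustrative; the global policy is restored afterwards):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')
layer = tf.keras.layers.Dense(4)
print(layer.compute_dtype)  # float16 - computations run in half precision
print(layer.dtype)          # float32 - weights stay in full precision

tf.keras.mixed_precision.set_global_policy('float32')  # restore default
```
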
 |  dtype
 |      The dtype of the layer weights.
 |      
 |      This is equivalent to `Layer.dtype_policy.variable_dtype`. Unless
 |      mixed precision is used, this is the same as `Layer.compute_dtype`, the
 |      dtype of the layer's computations.
 |  
 |  dtype_policy
 |      The dtype policy associated with this layer.
 |      
 |      This is an instance of a `tf.keras.mixed_precision.Policy`.
 |  
 |  dynamic
 |      Whether the layer is dynamic (eager-only); set in the constructor.
 |  
 |  inbound_nodes
 |      Return Functional API nodes upstream of this layer.
 |  
 |  input
 |      Retrieves the input tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input tensor or list of input tensors.
 |      
 |      Raises:
 |        RuntimeError: If called in Eager mode.
 |        AttributeError: If no inbound nodes are found.
 |  
 |  input_mask
 |      Retrieves the input mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Input mask tensor (potentially None) or list of input
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  input_shape
 |      Retrieves the input shape(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one input,
 |      i.e. if it is connected to one incoming layer, or if all inputs
 |      have the same shape.
 |      
 |      Returns:
 |          Input shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per input tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined input_shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  losses
 |      List of losses added using the `add_loss()` API.
 |      
 |      Variable regularization tensors are created when this property is
 |      accessed, so it is eager safe: accessing `losses` under a
 |      `tf.GradientTape` will propagate gradients back to the corresponding
 |      variables.
 |      
 |      Examples:
 |      
 |      >>> class MyLayer(tf.keras.layers.Layer):
 |      ...   def call(self, inputs):
 |      ...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
 |      ...     return inputs
 |      >>> l = MyLayer()
 |      >>> l(np.ones((10, 1)))
 |      >>> l.losses
 |      [1.0]
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> x = tf.keras.layers.Dense(10)(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Activity regularization.
 |      >>> len(model.losses)
 |      0
 |      >>> model.add_loss(tf.abs(tf.reduce_mean(x)))
 |      >>> len(model.losses)
 |      1
 |      
 |      >>> inputs = tf.keras.Input(shape=(10,))
 |      >>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
 |      >>> x = d(inputs)
 |      >>> outputs = tf.keras.layers.Dense(1)(x)
 |      >>> model = tf.keras.Model(inputs, outputs)
 |      >>> # Weight regularization.
 |      >>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
 |      >>> model.losses
 |      [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
 |      
 |      Returns:
 |        A list of tensors.
 |  
 |  metrics
 |      List of metrics added using the `add_metric()` API.
 |      
 |      Example:
 |      
 |      >>> input = tf.keras.layers.Input(shape=(3,))
 |      >>> d = tf.keras.layers.Dense(2)
 |      >>> output = d(input)
 |      >>> d.add_metric(tf.reduce_max(output), name='max')
 |      >>> d.add_metric(tf.reduce_min(output), name='min')
 |      >>> [m.name for m in d.metrics]
 |      ['max', 'min']
 |      
 |      Returns:
 |        A list of `Metric` objects.
 |  
 |  name
 |      Name of the layer (string), set in the constructor.
 |  
 |  non_trainable_variables
 |      Sequence of non-trainable variables owned by this module and its submodules.
 |      
 |      Note: this property uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  non_trainable_weights
 |      List of all non-trainable weights tracked by this layer.
 |      
 |      Non-trainable weights are *not* updated during training. They are
 |      expected to be updated manually in `call()`.
 |      
 |      Returns:
 |        A list of non-trainable variables.
 |  
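
A small check (illustrative) of how freezing a layer shifts its weights between the two lists:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
layer.build((None, 8))
print(len(layer.trainable_weights), len(layer.non_trainable_weights))  # 2 0

layer.trainable = False  # freezing moves the kernel and bias over
print(len(layer.trainable_weights), len(layer.non_trainable_weights))  # 0 2
```
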
 |  outbound_nodes
 |      Return Functional API nodes downstream of this layer.
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one output,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |        Output tensor or list of output tensors.
 |      
 |      Raises:
 |        AttributeError: if the layer is connected to more than one incoming
 |          layer.
 |        RuntimeError: if called in Eager mode.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layer.
 |  
 |  output_shape
 |      Retrieves the output shape(s) of a layer.
 |      
 |      Only applicable if the layer has one output,
 |      or if all outputs have the same shape.
 |      
 |      Returns:
 |          Output shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per output tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined output shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  trainable_variables
 |      Sequence of trainable variables owned by this module and its submodules.
 |      
 |      Note: this property uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  trainable_weights
 |      List of all trainable weights tracked by this layer.
 |      
 |      Trainable weights are updated via gradient descent during training.
 |      
 |      Returns:
 |        A list of trainable variables.
 |  
 |  updates
 |  
 |  variable_dtype
 |      Alias of `Layer.dtype`, the dtype of the weights.
 |  
 |  variables
 |      Returns the list of all layer variables/weights.
 |      
 |      Alias of `self.weights`.
 |      
 |      Note: This will not track the weights of nested `tf.Modules` that are
 |      not themselves Keras layers.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  weights
 |      Returns the list of all layer variables/weights.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.src.engine.base_layer.Layer:
 |  
 |  activity_regularizer
 |      Optional regularizer function for the output of this layer.
 |  
 |  input_spec
 |      `InputSpec` instance(s) describing the input format for this layer.
 |      
 |      When you create a layer subclass, you can set `self.input_spec` to
 |      enable the layer to run input compatibility checks when it is called.
 |      Consider a `Conv2D` layer: it can only be called on a single input
 |      tensor of rank 4. As such, you can set, in `__init__()`:
 |      
 |      ```python
 |      self.input_spec = tf.keras.layers.InputSpec(ndim=4)
 |      ```
 |      
 |      Now, if you try to call the layer on an input that isn't rank 4
 |      (for instance, an input of shape `(2,)`), it will raise a
 |      nicely-formatted error:
 |      
 |      ```
 |      ValueError: Input 0 of layer conv2d is incompatible with the layer:
 |      expected ndim=4, found ndim=1. Full shape received: [2]
 |      ```
 |      
 |      Input checks that can be specified via `input_spec` include:
 |      - Structure (e.g. a single input, a list of 2 inputs, etc)
 |      - Shape
 |      - Rank (ndim)
 |      - Dtype
 |      
 |      For more information, see `tf.keras.layers.InputSpec`.
 |      
 |      Returns:
 |        A `tf.keras.layers.InputSpec` instance, or nested structure thereof.
 |  
 |  stateful
 |  
 |  supports_masking
 |      Whether this layer supports computing a mask using `compute_mask`.
 |  
 |  trainable
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from tensorflow.python.module.module.Module:
 |  
 |  with_name_scope(method) from builtins.type
 |      Decorator to automatically enter the module name scope.
 |      
 |      >>> class MyModule(tf.Module):
 |      ...   @tf.Module.with_name_scope
 |      ...   def __call__(self, x):
 |      ...     if not hasattr(self, 'w'):
 |      ...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
 |      ...     return tf.matmul(x, self.w)
 |      
 |      Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose
 |      names included the module name:
 |      
 |      >>> mod = MyModule()
 |      >>> mod(tf.ones([1, 2]))
 |      <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
 |      >>> mod.w
 |      <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
 |      numpy=..., dtype=float32)>
 |      
 |      Args:
 |        method: The method to wrap.
 |      
 |      Returns:
 |        The original method wrapped such that it enters the module's name scope.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from tensorflow.python.module.module.Module:
 |  
 |  name_scope
 |      Returns a `tf.name_scope` instance for this class.
 |  
 |  submodules
 |      Sequence of all sub-modules.
 |      
 |      Submodules are modules which are properties of this module, or found as
 |      properties of modules which are properties of this module (and so on).
 |      
 |      >>> a = tf.Module()
 |      >>> b = tf.Module()
 |      >>> c = tf.Module()
 |      >>> a.b = b
 |      >>> b.c = c
 |      >>> list(a.submodules) == [b, c]
 |      True
 |      >>> list(b.submodules) == [c]
 |      True
 |      >>> list(c.submodules) == []
 |      True
 |      
 |      Returns:
 |        A sequence of all submodules.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from tensorflow.python.trackable.base.Trackable:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)

Help on class Dense in module keras.src.layers.core.dense:

class Dense(keras.src.engine.base_layer.Layer)
 |  Dense(*args, **kwargs)
 |  
 |  Just your regular densely-connected NN layer.
 |  
 |  `Dense` implements the operation:
 |  `output = activation(dot(input, kernel) + bias)`
 |  where `activation` is the element-wise activation function
 |  passed as the `activation` argument, `kernel` is a weights matrix
 |  created by the layer, and `bias` is a bias vector created by the layer
 |  (only applicable if `use_bias` is `True`). These are all attributes of
 |  `Dense`.
 |  
 |  Note: If the input to the layer has a rank greater than 2, then `Dense`
 |  computes the dot product between the `inputs` and the `kernel` along the
 |  last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`).
 |  For example, if input has dimensions `(batch_size, d0, d1)`, then we create
 |  a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2
 |  of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are
 |  `batch_size * d0` such sub-tensors).  The output in this case will have
 |  shape `(batch_size, d0, units)`.
 |  
 |  Also note that layer attributes cannot be modified after the layer has
 |  been called once (except the `trainable` attribute).
 |  When the popular kwarg `input_shape` is passed, Keras will create an
 |  input layer to insert before the current layer. This can be treated as
 |  equivalent to explicitly defining an `InputLayer`.
 |  
 |  Example:
 |  
 |  >>> # Create a `Sequential` model and add a Dense layer as the first layer.
 |  >>> model = tf.keras.models.Sequential()
 |  >>> model.add(tf.keras.Input(shape=(16,)))
 |  >>> model.add(tf.keras.layers.Dense(32, activation='relu'))
 |  >>> # Now the model will take as input arrays of shape (None, 16)
 |  >>> # and output arrays of shape (None, 32).
 |  >>> # Note that after the first layer, you don't need to specify
 |  >>> # the size of the input anymore:
 |  >>> model.add(tf.keras.layers.Dense(32))
 |  >>> model.output_shape
 |  (None, 32)
 |  
 |  Args:
 |      units: Positive integer, dimensionality of the output space.
 |      activation: Activation function to use.
 |          If you don't specify anything, no activation is applied
 |          (i.e. "linear" activation: `a(x) = x`).
 |      use_bias: Boolean, whether the layer uses a bias vector.
 |      kernel_initializer: Initializer for the `kernel` weights matrix.
 |      bias_initializer: Initializer for the bias vector.
 |      kernel_regularizer: Regularizer function applied to
 |          the `kernel` weights matrix.
 |      bias_regularizer: Regularizer function applied to the bias vector.
 |      activity_regularizer: Regularizer function applied to
 |          the output of the layer (its "activation").
 |      kernel_constraint: Constraint function applied to
 |          the `kernel` weights matrix.
 |      bias_constraint: Constraint function applied to the bias vector.
 |  
 |  Input shape:
 |      N-D tensor with shape: `(batch_size, ..., input_dim)`.
 |      The most common situation would be
 |      a 2D input with shape `(batch_size, input_dim)`.
 |  
 |  Output shape:
 |      N-D tensor with shape: `(batch_size, ..., units)`.
 |      For instance, for a 2D input with shape `(batch_size, input_dim)`,
 |      the output would have shape `(batch_size, units)`.
 |  
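
To make the higher-rank case above concrete, a short shape check (illustrative):

```python
import tensorflow as tf

# Input (batch_size, d0, d1) = (2, 3, 4): the kernel has shape (4, 5) and
# acts along the last axis, so the output is (batch_size, d0, units).
x = tf.zeros((2, 3, 4))
print(tf.keras.layers.Dense(5)(x).shape)  # (2, 3, 5)
```
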
 |  Method resolution order:
 |      Dense
 |      keras.src.engine.base_layer.Layer
 |      tensorflow.python.module.module.Module
 |      tensorflow.python.trackable.autotrackable.AutoTrackable
 |      tensorflow.python.trackable.base.Trackable
 |      keras.src.utils.version_utils.LayerVersionSelector
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
 |  
 |  build(self, input_shape)
 |      Creates the variables of the layer (for subclass implementers).
 |      
 |      This is a method that implementers of subclasses of `Layer` or `Model`
 |      can override if they need a state-creation step in-between
 |      layer instantiation and layer call. It is invoked automatically before
 |      the first execution of `call()`.
 |      
 |      This is typically used to create the weights of `Layer` subclasses
 |      (at the discretion of the subclass implementer).
 |      
 |      Args:
 |        input_shape: Instance of `TensorShape`, or list of instances of
 |          `TensorShape` if the layer expects a list of inputs
 |          (one instance per input).
 |  
 |  call(self, inputs)
 |      This is where the layer's logic lives.
 |      
 |      The `call()` method may not create state (except in its first
 |      invocation, wrapping the creation of variables or other resources in
 |      `tf.init_scope()`). It is recommended to create state, including
 |      `tf.Variable` instances and nested `Layer` instances, in
 |      `__init__()`, or in the `build()` method that is called
 |      automatically before `call()` executes for the first time.
 |      
 |      Args:
 |        inputs: Input tensor, or dict/list/tuple of input tensors.
 |          The first positional `inputs` argument is subject to special rules:
 |          - `inputs` must be explicitly passed. A layer cannot have zero
 |            arguments, and `inputs` cannot be provided via the default value
 |            of a keyword argument.
 |          - NumPy array or Python scalar values in `inputs` get cast as
 |            tensors.
 |          - Keras mask metadata is only collected from `inputs`.
 |          - Layers are built (`build(input_shape)` method)
 |            using shape info from `inputs` only.
 |          - `input_spec` compatibility is only checked against `inputs`.
 |          - Mixed precision input casting is only applied to `inputs`.
 |            If a layer has tensor arguments in `*args` or `**kwargs`, their
 |            casting behavior in mixed precision should be handled manually.
 |          - The SavedModel input specification is generated using `inputs`
 |            only.
 |          - Integration with various ecosystem packages like TFMOT, TFLite,
 |            TF.js, etc is only supported for `inputs` and not for tensors in
 |            positional and keyword arguments.
 |        *args: Additional positional arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |        **kwargs: Additional keyword arguments. May contain tensors, although
 |          this is not recommended, for the reasons above.
 |          The following optional keyword arguments are reserved:
 |          - `training`: Boolean scalar tensor of Python boolean indicating
 |            whether the `call` is meant for training or inference.
 |          - `mask`: Boolean input mask. If the layer's `call()` method takes a
 |            `mask` argument, its default value will be set to the mask
 |            generated for `inputs` by the previous layer (if `input` did come
 |            from a layer that generated a corresponding mask, i.e. if it came
 |            from a Keras layer with masking support).
 |      
 |      Returns:
 |        A tensor or list/tuple of tensors.
 |  
 |  compute_output_shape(self, input_shape)
 |      Computes the output shape of the layer.
 |      
 |      This method will cause the layer's state to be built, if that has not
 |      happened before. This requires that the layer will later be used with
 |      inputs that match the input shape provided here.
 |      
 |      Args:
 |          input_shape: Shape tuple (tuple of integers) or `tf.TensorShape`,
 |              or structure of shape tuples / `tf.TensorShape` instances
 |              (one per output tensor of the layer).
 |              Shape tuples can include None for free dimensions,
 |              instead of an integer.
 |      
 |      Returns:
 |          A `tf.TensorShape` instance
 |          or structure of `tf.TensorShape` instances.
 |  
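
A one-liner check (illustrative) of the shape arithmetic for a `Dense` layer:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
print(layer.compute_output_shape((None, 8)))  # (None, 4)
```
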
 |  get_config(self)
 |      Returns the config of the layer.
 |      
 |      A layer config is a Python dictionary (serializable)
 |      containing the configuration of a layer.
 |      The same layer can be reinstantiated later
 |      (without its trained weights) from this configuration.
 |      
 |      The config of a layer does not include connectivity
 |      information, nor the layer class name. These are handled
 |      by `Network` (one layer of abstraction above).
 |      
 |      Note that `get_config()` does not guarantee to return a fresh copy of
 |      dict every time it is called. The callers should make a copy of the
 |      returned dict if they want to modify it.
 |      
 |      Returns:
 |          Python dictionary.
 |  
 |  ----------------------------------------------------------------------
 |  [... the methods, class methods, static methods, and leading readonly
 |  properties that `Dense` inherits from keras.src.engine.base_layer.Layer
 |  are identical to those listed in the help output above ...]
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from keras.src.engine.base_layer.Layer:
 |  
 |  non_trainable_weights
 |      List of all non-trainable weights tracked by this layer.
 |      
 |      Non-trainable weights are *not* updated during training. They are
 |      expected to be updated manually in `call()`.
 |      
 |      Returns:
 |        A list of non-trainable variables.
 |  
 |  outbound_nodes
 |      Return Functional API nodes downstream of this layer.
 |  
 |  output
 |      Retrieves the output tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one output,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |        Output tensor or list of output tensors.
 |      
 |      Raises:
 |        AttributeError: if the layer is connected to more than one incoming
 |          layers.
 |        RuntimeError: if called in Eager mode.
 |  
 |  output_mask
 |      Retrieves the output mask tensor(s) of a layer.
 |      
 |      Only applicable if the layer has exactly one inbound node,
 |      i.e. if it is connected to one incoming layer.
 |      
 |      Returns:
 |          Output mask tensor (potentially None) or list of output
 |          mask tensors.
 |      
 |      Raises:
 |          AttributeError: if the layer is connected to
 |          more than one incoming layers.
 |  
 |  output_shape
 |      Retrieves the output shape(s) of a layer.
 |      
 |      Only applicable if the layer has one output,
 |      or if all outputs have the same shape.
 |      
 |      Returns:
 |          Output shape, as an integer shape tuple
 |          (or list of shape tuples, one tuple per output tensor).
 |      
 |      Raises:
 |          AttributeError: if the layer has no defined output shape.
 |          RuntimeError: if called in Eager mode.
 |  
 |  trainable_variables
 |      Sequence of trainable variables owned by this module and its submodules.
 |      
 |      Note: this method uses reflection to find variables on the current instance
 |      and submodules. For performance reasons you may wish to cache the result
 |      of calling this method if you don't expect the return value to change.
 |      
 |      Returns:
 |        A sequence of variables for the current module (sorted by attribute
 |        name) followed by variables from all submodules recursively (breadth
 |        first).
 |  
 |  trainable_weights
 |      List of all trainable weights tracked by this layer.
 |      
 |      Trainable weights are updated via gradient descent during training.
 |      
 |      Returns:
 |        A list of trainable variables.
 |  
 |  updates
 |  
 |  variable_dtype
 |      Alias of `Layer.dtype`, the dtype of the weights.
 |  
 |  variables
 |      Returns the list of all layer variables/weights.
 |      
 |      Alias of `self.weights`.
 |      
 |      Note: This will not track the weights of nested `tf.Modules` that are
 |      not themselves Keras layers.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  weights
 |      Returns the list of all layer variables/weights.
 |      
 |      Returns:
 |        A list of variables.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from keras.src.engine.base_layer.Layer:
 |  
 |  activity_regularizer
 |      Optional regularizer function for the output of this layer.
 |  
 |  input_spec
 |      `InputSpec` instance(s) describing the input format for this layer.
 |      
 |      When you create a layer subclass, you can set `self.input_spec` to
 |      enable the layer to run input compatibility checks when it is called.
 |      Consider a `Conv2D` layer: it can only be called on a single input
 |      tensor of rank 4. As such, you can set, in `__init__()`:
 |      
 |      ```python
 |      self.input_spec = tf.keras.layers.InputSpec(ndim=4)
 |      ```
 |      
 |      Now, if you try to call the layer on an input that isn't rank 4
 |      (for instance, an input of shape `(2,)`, it will raise a
 |      nicely-formatted error:
 |      
 |      ```
 |      ValueError: Input 0 of layer conv2d is incompatible with the layer:
 |      expected ndim=4, found ndim=1. Full shape received: [2]
 |      ```
 |      
 |      Input checks that can be specified via `input_spec` include:
 |      - Structure (e.g. a single input, a list of 2 inputs, etc)
 |      - Shape
 |      - Rank (ndim)
 |      - Dtype
 |      
 |      For more information, see `tf.keras.layers.InputSpec`.
 |      
 |      Returns:
 |        A `tf.keras.layers.InputSpec` instance, or nested structure thereof.
 |  
 |  stateful
 |  
 |  supports_masking
 |      Whether this layer supports computing a mask using `compute_mask`.
 |  
 |  trainable
 |  
 |  ----------------------------------------------------------------------
 |  Class methods inherited from tensorflow.python.module.module.Module:
 |  
 |  with_name_scope(method) from builtins.type
 |      Decorator to automatically enter the module name scope.
 |      
 |      >>> class MyModule(tf.Module):
 |      ...   @tf.Module.with_name_scope
 |      ...   def __call__(self, x):
 |      ...     if not hasattr(self, 'w'):
 |      ...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
 |      ...     return tf.matmul(x, self.w)
 |      
 |      Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose
 |      names included the module name:
 |      
 |      >>> mod = MyModule()
 |      >>> mod(tf.ones([1, 2]))
 |      <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
 |      >>> mod.w
 |      <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
 |      numpy=..., dtype=float32)>
 |      
 |      Args:
 |        method: The method to wrap.
 |      
 |      Returns:
 |        The original method wrapped such that it enters the module's name scope.
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties inherited from tensorflow.python.module.module.Module:
 |  
 |  name_scope
 |      Returns a `tf.name_scope` instance for this class.
 |  
 |  submodules
 |      Sequence of all sub-modules.
 |      
 |      Submodules are modules which are properties of this module, or found as
 |      properties of modules which are properties of this module (and so on).
 |      
 |      >>> a = tf.Module()
 |      >>> b = tf.Module()
 |      >>> c = tf.Module()
 |      >>> a.b = b
 |      >>> b.c = c
 |      >>> list(a.submodules) == [b, c]
 |      True
 |      >>> list(b.submodules) == [c]
 |      True
 |      >>> list(c.submodules) == []
 |      True
 |      
 |      Returns:
 |        A sequence of all submodules.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors inherited from tensorflow.python.trackable.base.Trackable:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)

Exercise 7 - create_model¶

Implement the create_model function.

First you need to create the model. The tf.keras.Sequential call has already been set up for you. Within it you should put the following layers:

  • tf.keras.layers.Embedding with input dimension num_words, output dimension embedding_dim, and the input_length set to the length of the input sequences (which is the length of the longest tweet).
  • tf.keras.layers.GlobalAveragePooling1D with no extra parameters (the sketch after this list shows what this layer does).
  • tf.keras.layers.Dense with a size of one (this is your classification output) and 'sigmoid' passed to the activation keyword parameter. Make sure to separate the layers with commas.
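If you are unsure what GlobalAveragePooling1D does, here is a minimal sketch (not part of the graded solution, using made-up toy shapes): it simply averages the embedding vectors over the sequence dimension, turning a batch of shape (batch, max_len, embedding_dim) into (batch, embedding_dim).

import tensorflow as tf

# Toy batch: 2 sequences of length 4, each token embedded in 3 dimensions.
x = tf.random.uniform((2, 4, 3))

pooled = tf.keras.layers.GlobalAveragePooling1D()(x)

# Averaging over the sequence axis (axis=1) gives the same result.
same = tf.reduce_mean(x, axis=1)
print(pooled.shape)  # (2, 3)
print(tf.reduce_all(tf.abs(pooled - same) < 1e-6).numpy())  # True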

Then you need to compile the model. You can look at all the parameters you can set when compiling the model here: tf.keras.Model. In this notebook, you just need to set the loss to 'binary_crossentropy' (because you are doing binary classification with a sigmoid function at the output), the optimizer to 'adam', and the metrics to 'accuracy' (so that you can track the accuracy on the training and validation sets).
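As a reminder, binary cross-entropy is -mean(y*log(p) + (1-y)*log(1-p)), where y is the true label and p the predicted probability. Here is a quick sketch, with made-up labels and predictions, checking the formula against the Keras implementation:

import numpy as np
import tensorflow as tf

# Made-up labels and predicted probabilities, just for illustration.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.6])

# Keras' binary cross-entropy...
bce = tf.keras.losses.BinaryCrossentropy()
print(bce(y_true, y_pred).numpy())

# ...matches -mean(y*log(p) + (1-y)*log(1-p)) computed by hand.
print(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))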

In [43]:
from tensorflow.keras import losses
In [59]:
# GRADED FUNCTION: create_model
def create_model(num_words, embedding_dim, max_len):
    """
    Creates a text classifier model
    
    Args:
        num_words (int): size of the vocabulary for the Embedding layer input
        embedding_dim (int): dimensionality of the Embedding layer output
        max_len (int): length of the input sequences
    
    Returns:
        model (tf.keras Model): the text classifier model
    """
    
    tf.random.set_seed(123)
    
    ### START CODE HERE
    
    model = tf.keras.Sequential([ 
        tf.keras.layers.Embedding(num_words, embedding_dim, input_length=max_len),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ]) 
    
    model.compile(loss="binary_crossentropy",
                  optimizer="adam",
                  metrics=['accuracy'])

    ### END CODE HERE

    return model
In [60]:
# Create the model
model = create_model(num_words=num_words, embedding_dim=16, max_len=max_len)

print('The model is created!\n')
The model is created!

In [61]:
# Test your create_model function
w1_unittest.test_model(create_model)
 All tests passed
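If you want to double-check the architecture before training, you can print a summary of the model. Since the embedding matrix turns out to have shape (9535, 16) (see section 6 below), you should expect 9535 * 16 = 152,560 weights in the embedding layer plus 16 + 1 = 17 in the dense layer:

# Inspect the layers and trainable parameter counts of the compiled model.
model.summary()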

Now you need to prepare the data to feed into the model. You already created the lists of x and y values; all you need to do now is convert them to NumPy arrays, as this is the format the model expects.

Then you can create a model with the function you defined above and train it. The trained model should reach about 99.6% accuracy on the validation set.

In [62]:
# Prepare the data
train_x_prepared = np.array(train_x_padded)
val_x_prepared = np.array(val_x_padded)

train_y_prepared = np.array(train_y)
val_y_prepared = np.array(val_y)

print('The data is prepared for training!\n')

# Fit the model
print('Training:')
history = model.fit(train_x_prepared, train_y_prepared, epochs=20, validation_data=(val_x_prepared, val_y_prepared))
The data is prepared for training!

Training:
Epoch 1/20
250/250 [==============================] - 16s 52ms/step - loss: 0.6818 - accuracy: 0.6631 - val_loss: 0.6652 - val_accuracy: 0.9920
Epoch 2/20
250/250 [==============================] - 3s 13ms/step - loss: 0.6310 - accuracy: 0.9564 - val_loss: 0.5967 - val_accuracy: 0.9765
Epoch 3/20
250/250 [==============================] - 1s 4ms/step - loss: 0.5402 - accuracy: 0.9874 - val_loss: 0.4996 - val_accuracy: 0.9895
Epoch 4/20
250/250 [==============================] - 1s 3ms/step - loss: 0.4342 - accuracy: 0.9899 - val_loss: 0.3991 - val_accuracy: 0.9935
Epoch 5/20
250/250 [==============================] - 1s 4ms/step - loss: 0.3371 - accuracy: 0.9944 - val_loss: 0.3126 - val_accuracy: 0.9920
Epoch 6/20
250/250 [==============================] - 1s 3ms/step - loss: 0.2584 - accuracy: 0.9942 - val_loss: 0.2450 - val_accuracy: 0.9960
Epoch 7/20
250/250 [==============================] - 1s 4ms/step - loss: 0.1986 - accuracy: 0.9955 - val_loss: 0.1920 - val_accuracy: 0.9945
Epoch 8/20
250/250 [==============================] - 1s 3ms/step - loss: 0.1540 - accuracy: 0.9962 - val_loss: 0.1527 - val_accuracy: 0.9945
Epoch 9/20
250/250 [==============================] - 1s 3ms/step - loss: 0.1213 - accuracy: 0.9964 - val_loss: 0.1232 - val_accuracy: 0.9950
Epoch 10/20
250/250 [==============================] - 1s 4ms/step - loss: 0.0967 - accuracy: 0.9970 - val_loss: 0.1003 - val_accuracy: 0.9950
Epoch 11/20
250/250 [==============================] - 1s 4ms/step - loss: 0.0783 - accuracy: 0.9969 - val_loss: 0.0832 - val_accuracy: 0.9960
Epoch 12/20
250/250 [==============================] - 1s 2ms/step - loss: 0.0641 - accuracy: 0.9971 - val_loss: 0.0694 - val_accuracy: 0.9960
Epoch 13/20
250/250 [==============================] - 1s 2ms/step - loss: 0.0533 - accuracy: 0.9975 - val_loss: 0.0588 - val_accuracy: 0.9965
Epoch 14/20
250/250 [==============================] - 1s 4ms/step - loss: 0.0447 - accuracy: 0.9977 - val_loss: 0.0503 - val_accuracy: 0.9965
Epoch 15/20
250/250 [==============================] - 1s 4ms/step - loss: 0.0380 - accuracy: 0.9980 - val_loss: 0.0434 - val_accuracy: 0.9965
Epoch 16/20
250/250 [==============================] - 1s 4ms/step - loss: 0.0325 - accuracy: 0.9980 - val_loss: 0.0378 - val_accuracy: 0.9960
Epoch 17/20
250/250 [==============================] - 1s 2ms/step - loss: 0.0280 - accuracy: 0.9980 - val_loss: 0.0329 - val_accuracy: 0.9960
Epoch 18/20
250/250 [==============================] - 1s 3ms/step - loss: 0.0244 - accuracy: 0.9984 - val_loss: 0.0291 - val_accuracy: 0.9960
Epoch 19/20
250/250 [==============================] - 1s 3ms/step - loss: 0.0215 - accuracy: 0.9984 - val_loss: 0.0261 - val_accuracy: 0.9960
Epoch 20/20
250/250 [==============================] - 1s 2ms/step - loss: 0.0189 - accuracy: 0.9983 - val_loss: 0.0234 - val_accuracy: 0.9960

4 - Evaluate the model¶

Now that you have trained the model, it is time to look at its performance. While training, you already saw a printout of the accuracy and loss on the training and validation sets. To get a better feel for how the model improved over training, you can plot these metrics below.

In [63]:
def plot_metrics(history, metric):
    plt.plot(history.history[metric])
    plt.plot(history.history[f'val_{metric}'])
    plt.xlabel("Epochs")
    plt.ylabel(metric.title())
    plt.legend([metric, f'val_{metric}'])
    plt.show()
    
plot_metrics(history, "accuracy")
plot_metrics(history, "loss")
[Plots: training and validation accuracy per epoch, and training and validation loss per epoch]

You can see that the model reached very high accuracy on both sets after just a few epochs. If you zoom in, though, you can see that performance was still improving slightly on the training set through all 20 epochs, while it stagnated a bit earlier on the validation set. The loss, on the other hand, kept decreasing through all 20 epochs, which means the model also grew more confident in its predictions.
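If the plots are hard to read at this scale, you can also inspect the raw numbers stored in the History object returned by model.fit. A quick sketch, reusing the history variable from the training cell above:

# Print the metrics of the final five epochs directly from the History object.
for metric in ('accuracy', 'val_accuracy', 'loss', 'val_loss'):
    print(metric, [round(v, 4) for v in history.history[metric][-5:]])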

4.1 - Predict on Data¶

Now you can use the model to predict on unseen tweets with model.predict(). This is as simple as passing the method an array of the sequences you want predictions for. In the cell below you prepare a small sample of positive and negative examples from the validation set (remember, the positive examples are at the beginning and the negative ones at the end) and predict their values with the model. Note that, ideally, you would draw this data from a separate test set to inspect the model's performance, but for this demonstration the validation set will do just as well.

In [64]:
# Prepare an example with 10 positive and 10 negative tweets.
example_for_prediction = np.append(val_x_prepared[0:10], val_x_prepared[-10:], axis=0)

# Make a prediction on the tweets.
model.predict(example_for_prediction)
1/1 [==============================] - 0s 64ms/step
Out[64]:
array([[0.8960061 ],
       [0.99425524],
       [0.9969176 ],
       [0.9509028 ],
       [0.99758005],
       [0.996095  ],
       [0.99190336],
       [0.9789588 ],
       [0.99836904],
       [0.9982948 ],
       [0.01093451],
       [0.04231445],
       [0.01320749],
       [0.01674573],
       [0.01814229],
       [0.00652216],
       [0.0163281 ],
       [0.00833105],
       [0.02320513],
       [0.03248583]], dtype=float32)
In [119]:
import numpy as np

# Assuming val_x_prepared is your validation dataset and is already preprocessed
# Selecting 10 positive and 10 negative tweets from the validation dataset
example_for_prediction = np.append(val_x_prepared[:10], val_x_prepared[-10:], axis=0)

# Make a prediction on the tweets
predictions = model.predict(example_for_prediction)

# Assuming you have access to the original tweets
original_tweets = val_x[:10] + val_x[-10:]

# Print the original tweets and their predictions
for tweet, prediction in zip(original_tweets, predictions):
    print(f"Tweet: {tweet}\nPrediction: {prediction}\n")
1/1 [==============================] - 0s 20ms/step
Tweet: ['bro', 'u', 'wan', 'cut', 'hair', 'anot', 'ur', 'hair', 'long', 'liao', 'bo', 'since', 'ord', 'liao', 'take', 'easy', 'lor', 'treat', 'save', 'leave', 'longer', ':)', 'bro', 'lol', 'sibei', 'xialan']
Prediction: [0.8960061]

Tweet: ['back', 'thnx', 'god', "i'm", 'happy', ':)']
Prediction: [0.99425524]

Tweet: ['think', 'ear', 'malfunction', 'thank', 'goodness', 'clear', 'one', 'apology', ':-)']
Prediction: [0.9969176]

Tweet: ['stick', 'centre', 'right', 'clown', 'right', 'joker', 'left', '...', ':)']
Prediction: [0.9509028]

Tweet: ['happy', 'friday', ':-)']
Prediction: [0.99758005]

Tweet: ['follow', ':)', 'x']
Prediction: [0.996095]

Tweet: ['teenchoice', 'choiceinternationalartist', 'superjunior', 'fight', 'oppa', ':D']
Prediction: [0.99190336]

Tweet: ['birthday', 'today', 'birthday', 'wish', 'hope', "there's", 'good', 'news', 'ben', 'soon', ':-)']
Prediction: [0.9789588]

Tweet: ['good', 'morning', ':-)', 'friday', '\U000fec00', 'plan', 'day', 'currently', 'play', 'shop', '...']
Prediction: [0.99836904]

Tweet: ['happy', 'friday', ':)']
Prediction: [0.9982948]

Tweet: ['want', 'birthday', 'already', ':(']
Prediction: [0.01093451]

Tweet: ['completely', 'agree', 'press', ':(']
Prediction: [0.04231445]

Tweet: ['im', 'super', 'duper', 'tire', ':(']
Prediction: [0.01320749]

Tweet: ['boring', 'time', ':(', 'know', '...']
Prediction: [0.01674573]

Tweet: ['ill', 'soon', 'promise', ':(', 'waaah']
Prediction: [0.01814229]

Tweet: ['wanna', 'change', 'avi', 'usanele', ':(']
Prediction: [0.00652216]

Tweet: ['puppy', 'break', 'foot', ':(']
Prediction: [0.0163281]

Tweet: ["where's", 'jaebum', 'baby', 'picture', ':(']
Prediction: [0.00833105]

Tweet: ['mr', 'ahmad', 'maslan', 'cook', ':(']
Prediction: [0.02320513]

Tweet: ['hull', 'supporter', 'expect', 'misserable', 'week', ':-(']
Prediction: [0.03248583]

You can see that the first 10 numbers are very close to 1, which means the model correctly predicted positive sentiment, and the last 10 numbers are all close to zero, which means it correctly predicted negative sentiment.
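Since the model outputs probabilities, you can turn them into hard labels by thresholding at 0.5. A minimal sketch, reusing the predictions array from the cell above:

import numpy as np

# Threshold the predicted probabilities at 0.5 to get hard class labels.
predicted_labels = (predictions > 0.5).astype(int).flatten()

# The first 10 examples came from the positive part of the validation set,
# the last 10 from the negative part.
expected_labels = np.array([1] * 10 + [0] * 10)

# Given the probabilities printed above, this should be 1.0.
print(np.mean(predicted_labels == expected_labels))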

5 - Test With Your Own Input¶

Finally, you will test the model with your own input. You will see that deep nets are more powerful than the older methods you have used before. Although you got close to 100% accuracy in the first two assignments, you can see even more improvement here.

5.1 - Create the Prediction Function¶

In [65]:
def get_prediction_from_tweet(tweet, model, vocab, max_len):
    tweet = process_tweet(tweet)
    tweet = padded_sequence(tweet, vocab, max_len)
    tweet = np.array([tweet])

    prediction = model.predict(tweet, verbose=False)
    
    return prediction[0][0]

Now you can write your own tweet and see how the model scores it. Try playing around with the words - for example, change gr8 to great in the sample tweet and see if the score gets higher or lower (a sketch of this comparison follows the next cell).

Also try writing your own tweet and see if you can find out what affects the output most.

In [66]:
unseen_tweet = '@DLAI @NLP_team_dlai OMG!!! what a daaay, wow, wow. This AsSiGnMeNt was gr8.'

prediction_unseen = get_prediction_from_tweet(unseen_tweet, model, vocab, max_len)
print(f"Model prediction on unseen tweet: {prediction_unseen}")
Model prediction on unseen tweet: 0.7467228174209595
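A quick way to run that comparison (a sketch reusing the variables defined above; whether the score goes up or down depends on which spellings made it into the vocabulary):

# Score the same tweet with 'gr8' replaced by 'great'.
variant = unseen_tweet.replace('gr8', 'great')

print(f"gr8:   {get_prediction_from_tweet(unseen_tweet, model, vocab, max_len)}")
print(f"great: {get_prediction_from_tweet(variant, model, vocab, max_len)}")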

Exercise 8 - graded_very_positive_tweet¶

Instructions: For your last exercise in this assignment, you need to write a very positive tweet. To pass this exercise, the tweet needs to score at least 0.99 with the model (which means the model thinks it is very positive).

Hint: try some positive words and/or happy smiley faces :)

In [128]:
# GRADED VARIABLE: graded_very_positive_tweet

### START CODE HERE ###

# Please replace this sad tweet with a happier tweet
graded_very_positive_tweet = 'thnx god i\'m happy :)'

### END CODE HERE ###

Test your positive tweet below

In [129]:
# Test your graded_very_positive_tweet tweet
prediction = get_prediction_from_tweet(graded_very_positive_tweet, model, vocab, max_len)
if prediction > 0.99:
    print("\033[92m All tests passed")
else:
    print("The model thinks your tweet is not positive enough.\nTry figuring out what makes some of the tweets in the validation set so positive.")
 All tests passed

The emoticon plays a crucial part here, as the sketch below shows.
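You can verify how much the smiley contributes by scoring the same tweet with and without it. A quick sketch, reusing the prediction function from above:

# Score the graded tweet with and without its smiley face.
without_smiley = graded_very_positive_tweet.replace(':)', '').strip()

print(f"with ':)':    {get_prediction_from_tweet(graded_very_positive_tweet, model, vocab, max_len)}")
print(f"without ':)': {get_prediction_from_tweet(without_smiley, model, vocab, max_len)}")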

6 - Word Embeddings¶

In this last section, you will visualize the word embeddings that your model learned for this sentiment analysis task. By using model.layers, you get a list of the layers in the model. The embeddings are saved in the first layer of the model (position 0). You can retrieve the layer's weights by calling its get_weights() method, which returns a list of weight arrays. The embedding layer holds only one such array, which contains your embeddings. Let's extract them.

In [69]:
# Get the embedding layer
embeddings_layer = model.layers[0]

# Get the weights of the embedding layer
embeddings = embeddings_layer.get_weights()[0]

print(f"Weights of embedding layer have shape: {embeddings.shape}")
Weights of embedding layer have shape: (9535, 16)

Since your embeddings are 16-dimensional (or a different size, if you chose another dimension), they are hard to visualize without some kind of transformation. Here, you'll use scikit-learn to perform dimensionality reduction of the word embeddings with PCA, which lets you reduce the number of dimensions to two while keeping as much information as possible. Then you can visualize the data to see what the vectors for different words look like.

In [70]:
# PCA with two dimensions
pca = PCA(n_components=2)

# Dimensionality reduction of the word embeddings
embeddings_2D = pca.fit_transform(embeddings)
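You can also check how much of the total variance the two components retain, using the fitted PCA object's explained_variance_ratio_ attribute:

# Fraction of the total variance captured by each of the two components.
print(pca.explained_variance_ratio_)
print(f"Total variance retained: {pca.explained_variance_ratio_.sum():.2%}")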

Now, everything is ready to plot a selection of words in 2D. Don't mind the axes of the plot - they point in the directions computed by the PCA algorithm. Pay attention to which words group together.

In [71]:
# Selection of negative and positive words
neg_words = ['bad', 'hurt', 'sad', 'hate', 'worst']
pos_words = ['best', 'good', 'nice', 'love', 'better', ':)']

# Index of each selected word
neg_n = [vocab[w] for w in neg_words]
pos_n = [vocab[w] for w in pos_words]

plt.figure()

# Scatter plot for negative words
plt.scatter(embeddings_2D[neg_n][:, 0], embeddings_2D[neg_n][:, 1], color='r')
for i, txt in enumerate(neg_words):
    plt.annotate(txt, (embeddings_2D[neg_n][i, 0], embeddings_2D[neg_n][i, 1]))

# Scatter plot for positive words
plt.scatter(embeddings_2D[pos_n][:, 0], embeddings_2D[pos_n][:, 1], color='g')
for i, txt in enumerate(pos_words):
    plt.annotate(txt, (embeddings_2D[pos_n][i, 0], embeddings_2D[pos_n][i, 1]))

plt.title('Word embeddings in 2D')

plt.show()
[Plot: 2D PCA projection of the word embeddings, with negative words in red and positive words in green]

As you can see, the word embeddings for this task seem to separate negative and positive meanings. However, similar words don't necessarily cluster together, since you only trained the model on overall sentiment. Notice how the smiley face is much further away from the negative words than any of the positive words are. It turns out that smiley faces are actually the most important predictors of sentiment in this dataset. Try removing them from the tweets (and consequently from the vocabulary) and see how well the model performs then; you should see quite a significant drop in performance. A sketch of this experiment follows below.
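If you want to try that experiment, here is a minimal sketch of the idea, assuming train_x and val_x hold the processed tweets as lists of tokens (as val_x does in the cells above). After this you would rebuild the vocabulary, re-pad the sequences, and retrain:

# Hypothetical experiment: remove common emoticon tokens from the processed
# tweets before rebuilding the vocabulary and retraining the model.
emoticons = {':)', ':-)', ':D', ':(', ':-('}

train_x_no_smileys = [[tok for tok in tweet if tok not in emoticons] for tweet in train_x]
val_x_no_smileys = [[tok for tok in tweet if tok not in emoticons] for tweet in val_x]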

Congratulations on finishing this assignment!

During this assignment you tested your theoretical and practical skills: you created a vocabulary from the words in the tweets and coded a neural network that learned word embeddings and classified tweets as positive or negative. Next week you will start coding some sequence models!

Keep up the good work!

In [ ]: