Do the Mariobot!
As AI technology grows and expands, it's becoming easier and easier to build some pretty niche applications with AI. One new program, called MarioGPT, uses GPT2 to generate Mario levels from text input.
Check out the developer's description of how this technology works below.
Our MarioGPT model is a fine-tuned version of the distilled GPT2 language model. Like GPT2, MarioGPT is trained to predict next token sequences. Levels are represented as strings, which are tokenized by a Byte-Pair Encoding, similar to the original GPT2 model. The level is then split by columns and flattened into a single vector (or batch of vectors for multiple levels).
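The level-as-string representation described above can be sketched in a few lines of Python. The toy level layout and the tiny character vocabulary here are assumptions for illustration; the real model tokenizes with Byte-Pair Encoding like GPT2, but the column split and flattening work the same way.

```python
# Sketch: turning an ASCII Mario level into a flat, column-major token
# sequence. '-' = air, 'X' = ground, '?' = question block (toy symbols).
level = [
    "--?-",
    "----",
    "XXXX",
]

# Split the level by columns, then flatten into a single vector.
columns = ["".join(row[c] for row in level) for c in range(len(level[0]))]
flat = "".join(columns)

# Map characters to integer token ids with a toy vocabulary
# (the real model uses a learned Byte-Pair Encoding instead).
vocab = {"-": 0, "X": 1, "?": 2}
token_ids = [vocab[ch] for ch in flat]

print(columns)    # ['--X', '--X', '?-X', '--X']
print(token_ids)  # [0, 0, 1, 0, 0, 1, 2, 0, 1, 0, 0, 1]
```

Reading the level column by column, rather than row by row, means the model predicts the level one vertical slice at a time, which matches how Mario levels scroll horizontally.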
To incorporate prompt information, we utilize a frozen text encoder in the form of a pre-trained bidirectional LLM (BART), and output the average hidden states of the model’s forward pass. This average hidden state is then used in the cross attention layers of the GPT2 architecture in combination with the actual level sequencing being passed into the model.
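The conditioning step above can be sketched in plain PyTorch. The `FrozenTextEncoder` class below is a stand-in for the pre-trained BART encoder (its name, layer sizes, and prompt length are assumptions for illustration); what matters is the pattern: freeze the encoder's weights, run the prompt through it, and average the hidden states over the sequence dimension.

```python
import torch
import torch.nn as nn

class FrozenTextEncoder(nn.Module):
    """Placeholder for the frozen bidirectional text encoder (BART in MarioGPT)."""
    def __init__(self, vocab_size=1000, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True
        )
        # Freeze: these weights are never updated while training the generator.
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, token_ids):
        return self.layer(self.embed(token_ids))  # (batch, seq_len, hidden_dim)

encoder = FrozenTextEncoder()
prompt_ids = torch.randint(0, 1000, (1, 8))  # a tokenized text prompt (toy ids)

with torch.no_grad():
    hidden_states = encoder(prompt_ids)       # (1, 8, 64)

# Average the hidden states over the sequence dimension; this vector is what
# the GPT2 generator's cross-attention layers attend to while it predicts
# the next level tokens.
prompt_embedding = hidden_states.mean(dim=1)  # (1, 64)
```

Because the text encoder is frozen, only the GPT2 side learns during fine-tuning, which keeps training cheap while still letting natural-language prompts steer level generation.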
If you would like to try your hand at some AI-generated Mario, check out the instructions here.