Booting your project faster by caching weights at build time


This is highly recommended.

Your Potassium app template comes with a dedicated model download script.

Banana runs this file to download your model's weights at build time (see the Dockerfile).

This ensures that the model weights are ready to use every time you call your project; they never need to be downloaded again on the request path.
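As an illustration of how this wiring typically works (the filenames here are assumptions, not taken from this page), the Dockerfile copies the download script into the image and runs it during the build, so the weights are baked into an image layer:

```docker
# Hypothetical sketch: adjust the script name to match your project
ADD download.py download.py
RUN python3 download.py
```

Because the weights land in an image layer, every replica that boots from this image already has them on disk.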

To make use of this optimization, adapt the existing file to download your model rather than the default BERT one.

After you've done so, save it and push to your GitHub repository to trigger a new build.

Here's an example, going from the default BERT model:

from transformers import pipeline

def download_model():
    # Building the pipeline downloads and caches the BERT weights locally
    pipeline('fill-mask', model='bert-base-uncased')

if __name__ == "__main__":
    download_model()

... to, for example, a MusicGen model:

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

def download_model():
    # Fetching the pretrained model downloads and caches the MusicGen weights locally
    MusicGen.get_pretrained('melody')

if __name__ == "__main__":
    download_model()

Both examples above assume that the get_pretrained and pipeline functions download the model weights onto the calling machine behind the scenes. If you aren't using functions like these, you may need to explicitly download the weights and then load them from disk, e.g. using torch.load("path/to/your/weights/directory...")
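The explicit download-then-load split can be sketched with the standard library. Everything below is a stand-in for illustration: the file name, the dummy weights dict, and the pickle format are assumptions; a real app would fetch a checkpoint over the network at build time and call something like torch.load at serve time.

```python
import pickle

# Hypothetical path; in a real app this would point at your checkpoint file
WEIGHTS_PATH = "weights.pkl"

def download_model():
    # Build time: fetch the weights and persist them to disk.
    # Here a stand-in dict plays the role of a downloaded checkpoint.
    weights = {"layer.0.weight": [0.1, 0.2, 0.3]}
    with open(WEIGHTS_PATH, "wb") as f:
        pickle.dump(weights, f)

def load_model():
    # Serve time: read the already-cached weights straight from disk,
    # so no download happens while handling a request.
    with open(WEIGHTS_PATH, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    download_model()   # runs once, during the image build
    model = load_model()  # runs on every boot, reading from the image layer
```

The key property is that download_model runs exactly once at build time, while load_model only ever touches local disk.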
