On V2, the biggest SDK change is that you basically don't need an SDK. What you deployed is a Potassium app, which is just a server, and you can call that server directly at your own model URL.
Do you remember the example curl script we used earlier when running a local Potassium app? You can just copy-paste that, change the localhost address to your model URL, and add your credentials:
```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -H 'X-Banana-API-Key: YOUR-API-KEY' \
  -H 'X-Banana-Model-Key: YOUR-MODEL-KEY' \
  -d '{"prompt": "Software developers start with a Hello, [MASK]! script."}' \
  https://<YOUR-MODEL-SPECIFIC-SLUG>.run.banana.dev/
```
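If you'd rather build that same request from Python than from the shell, here's a minimal sketch using only the standard library. The URL and keys are placeholders, so this constructs the request without sending it; swap in the real values from your dashboard before firing it off:

```python
import json
import urllib.request

# Placeholder slug and keys -- substitute the real values from your dashboard.
url = "https://YOUR-MODEL-SPECIFIC-SLUG.run.banana.dev/"
payload = {"prompt": "Software developers start with a Hello, [MASK]! script."}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Banana-API-Key": "YOUR-API-KEY",
        "X-Banana-Model-Key": "YOUR-MODEL-KEY",
    },
    method="POST",
)

# Inspect exactly what would go over the wire
print(req.get_method())  # POST
print(json.loads(req.data)["prompt"])

# With real credentials, actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```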
You'll find the model URL and an example script in the model view:
And that's basically it! But if you want some convenience... read on 😏
Python SDK example
So we said you don't need to use an SDK, but you can. We provide simple Python and Node SDKs that are essentially wrappers around that request, adding some convenience.
A Python example looks like this:
```python
from banana_dev import Client

# Create a reference to your model on Banana
my_model = Client(
    api_key="YOUR_API_KEY",    # Found in dashboard
    model_key="YOUR_MODEL_KEY",    # Found in model view in dashboard
    url="https://YOUR_URL.run.banana.dev",    # Found in model view in dashboard
    verbose=True,
)

# Specify the model's input JSON
inputs = {
    "prompt": "In the summer I like [MASK].",
}

# Call your model's inference endpoint on Banana
result, meta = my_model.call("/", inputs)

print(result)
```
You simply create a client object, which has the call() method. You specify which endpoint in your Potassium app you want to call, plus the request inputs.
And an example in Node
And the Node example is essentially the same. Copy, paste, and fire away 🔥
```javascript
import { Client } from "@banana-dev/banana-dev"

const my_model = new Client(
  "YOUR_API_KEY",    // Found in dashboard
  "YOUR_MODEL_KEY",  // Found in model view in dashboard
  "https://YOUR_URL.run.banana.dev",  // Found in model view in dashboard
  true               // verbosity
)

// Specify the model's input JSON
const inputs = {
  prompt: "I like [MASK].",
}

// Call your model's inference endpoint on Banana
const { json, meta } = await my_model.call("/", inputs)

console.log(json)
```