
Formulaic Python Library

NOTE: The Python library is in active development and changes frequently.

The Formulaic Python library makes it easy to use Formulas inside your generative AI applications. Formulas are JSON scripts that contain AI prompts, template variables, and model configuration. You can explore our collection of open-licensed reference Formulas for many popular language models at Formulaic.app.

Installation

We recommend setting up and activating a virtual environment before you start:

python3 -m venv venv
source venv/bin/activate

Install the Formulaic Python library:

pip install formulaic-ai

This tutorial also uses the OpenAI Python client. Install it if you don't already have it:

pip install openai

Quick Start

We're going to build this script step-by-step below, using a Formula retrieved with the Formulaic REST API. We also need a generative language model API; any OpenAI-compatible API will work. For simplicity we have chosen llamafile, a free and open-source LLM format that runs on your local machine and comes with a built-in API server at localhost:8080/v1.

You can also use any OpenAI large language model if you already have a key.

You can save this script as quickstart.py and run it in your terminal to see your results. Next we'll break it down step-by-step.

"""
This example works with any LLM inference API that uses the OpenAI format and
the OpenAI Python library.

For this demo we've chosen llamafile, an LLM that runs on your local
machine and includes a locally running OpenAI-compatible API endpoint.

You may substitute another provider such as Anyscale or OpenAI by changing
the values of endpoint_url and inference_api_key.
"""

from formulaic_ai import Formulaic 
import openai


formulaic_api_key = "your_personal_key"
endpoint_url = "http://localhost:8080/v1" # default for llamafile
inference_api_key = "sk-no-key-required"  # substitute if using another service


formula = Formulaic(formulaic_api_key)

formula.get_formula("2968bf58-a231-46ff-99de-923198c3864e")

# print the entire Formula script
print(formula.script)


# new values for the template variables
new_variables = {"occasion": "I'm scared of heights!", 'language': 'German'}

# render prompts by substituting the new values
formula.render(new_variables)

# print the prompts that contain our new values
print(formula.prompts)

# change values, render, and print the prompts
new_variables = {"occasion": "It's my birthday!", 'language': 'Greek'}
formula.render(new_variables)
print(formula.prompts)


# Send our latest prompts to an OpenAI compatible endpoint

# create an OpenAI client
client = openai.OpenAI(
    base_url = endpoint_url,  
    api_key = inference_api_key
)
messages = []

# iterate over the prompts and send the running conversation for completions
for p in formula.prompts:
    messages.append({"role": "user", "content": p})
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = completion.choices[0].message.content
    # keep the assistant's reply in the history so later prompts have context
    messages.append({"role": "assistant", "content": reply})
    # print the user prompt we sent
    print(f"\nUser: {p}")
    # print the Assistant's response
    print(f"\nAssistant: {reply}")

Step-by-step

Now we'll break it down step-by-step.

Do our imports

Import Formulaic, which we'll use to retrieve prompt templates from the Formulaic API and to substitute new variable values into our templates, making them usable in our application.

We'll also import the official OpenAI Python library for processing messages to and from compatible large language model APIs.

from formulaic_ai import Formulaic
import openai

Setup: create a Formulaic API key and get llamafile running

Formulas are in JSON format and contain meta information such as Formula name and update date, model information, prompt sequences, and variables. You can retrieve your private Formulas on Formulaic.app as well as any public Formulas using the Formulaic API. To use the Formulaic API, you need to first create an API key from your profile page on Formulaic.app.

We use llamafile for this tutorial, specifically the Mistral 7B Instruct llamafile. To get it running, download the file (about 5GB) and run it from the command line to start the local HTTP server. See the full llamafile documentation for download and setup instructions.

You can also use a cloud hosted language model provider, such as OpenAI or Anyscale.

Configure API keys and endpoints

  • Substitute your Formulaic API key into the script as the value of formulaic_api_key
  • If using llamafile, you can leave endpoint_url and inference_api_key as is. If using another provider like OpenAI or Anyscale, update those variables with the correct endpoint and key values.
formulaic_api_key = "your_personal_key"
endpoint_url = "http://localhost:8080/v1" # default for llamafile
inference_api_key = "sk-no-key-required"  # substitute if using another service
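For example, pointing those same two variables at OpenAI's hosted API instead of llamafile might look like this (the endpoint is OpenAI's standard base URL; the key shown is a placeholder, not a real value):

```python
# Hypothetical configuration for OpenAI's hosted API instead of llamafile
endpoint_url = "https://api.openai.com/v1"
inference_api_key = "sk-your-openai-key"  # replace with your real OpenAI key
```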

Instantiate Formulaic and retrieve a Formula

Create an instance of Formulaic and pass in our Formulaic API key:

formula = Formulaic(formulaic_api_key)

You can get the Formula ID for any of your private Formulas, and for any public Formulas published by the Formulaic team by looking at the GUID in the page URL of a Formula:

Screenshot of the page URL with the Formula ID

Now call get_formula() and pass in this Formula ID. This will query the API and store the Formula's prompt script in the script attribute, which we can print.

formula = Formulaic(formulaic_api_key)

formula.get_formula("2968bf58-a231-46ff-99de-923198c3864e")

# print the entire Formula script
print(formula.script)

Printing it we see:

{'id': '9f687aa2-fbb1-42f7-9278-1f247ade747a', 'recipe_id': '2968bf58-a231-46ff-99de-923198c3864e', 
'user_id': 'f61f6c2b-93a9-4782-b3e6-b5edff65e38b', 'created_at': '2024-03-28T18:19:33.227Z', 
'updated_at': '2024-03-28T18:25:34.418Z', 'script': {'model': {'id': 'mistralai/Mistral-7B-Instruct-v0.1', 
'name': 'Mistral 7B Instruct', 'vendor': 'Mistral', 'provider': 'Anyscale'}, 'sequences': 
[[{'text': 'You are a personal motivator assistant who is direct and believes that everyone can be their 
best. Generate a motivating slogan for the occasion of {{{occasion}}}'}, {'text': 'Now translate that 
slogan into {{{language}}}'}]], 'variables': [{'name': 'occasion', 'type': 'text', 'label': 'Occasion', 
'value': "I'm starting a new job today!", 'description': 'Why do you need motivation today? ', 
'example_value': 'Starting a new job today!'}, {'name': 'language', 'type': 'text', 'label': 'Language', 
'value': 'French', 'description': 'The language you want to translate your slogan into', 'example_value': 
'Latin'}]}}
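Since the script is plain Python data, you can pull fields out of it directly. A minimal sketch, using a trimmed-down copy of the dict printed above (the structure is inferred from that sample output, so the actual attributes may differ):

```python
# Trimmed-down copy of the script dict printed above (structure inferred
# from the sample output; actual fields may differ)
record = {
    "script": {
        "model": {"id": "mistralai/Mistral-7B-Instruct-v0.1", "name": "Mistral 7B Instruct"},
        "sequences": [[
            {"text": "Generate a motivating slogan for the occasion of {{{occasion}}}"},
            {"text": "Now translate that slogan into {{{language}}}"},
        ]],
        "variables": [
            {"name": "occasion", "type": "text", "value": "I'm starting a new job today!"},
            {"name": "language", "type": "text", "value": "French"},
        ],
    }
}

# pull out the model name and the template variable names
model_name = record["script"]["model"]["name"]
variable_names = [v["name"] for v in record["script"]["variables"]]
print(model_name)      # Mistral 7B Instruct
print(variable_names)  # ['occasion', 'language']
```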

Render prompts

The Formula we chose has two variables: occasion for our motivating slogan, and language for the language we want to translate it into. Let's call render with some new values for those variables.

new_variables = {"occasion": "I'm scared of heights!", 'language': 'German'}

# render prompts by substituting the new values
formula.render(new_variables)

Rendering replaces our template variables with our new values, generates prompts, and stores them in a new attribute called prompts. We can print those:

# print the prompts that contain our new values
print(formula.prompts)

And we see:

["You are a personal motivator assistant who is direct and believes that everyone can be their best. 
Generate a motivating slogan for the occasion of I'm scared of heights!", 
'Now translate that slogan into German']

You can change your input variables many times, and re-render to change the prompts in your Formulaic instance. This is common for applications that call the same Formula many times with different inputs.

# change values, render, and print the prompts
new_variables = {"occasion": "It's my birthday!", 'language': 'Greek'}
formula.render(new_variables)
print(formula.prompts)

And we see our new prompts:

["You are a personal motivator assistant who is direct and believes that everyone can be their best. 
Generate a motivating slogan for the occasion of It's my birthday!", 'Now translate that slogan into 
Greek']
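Under the hood, rendering is a simple template substitution. Here is a minimal sketch of the idea, assuming the triple-brace {{{name}}} placeholder syntax visible in the script above (render_template is a hypothetical stand-in, not the library's actual implementation):

```python
import re

def render_template(text, variables):
    # Replace each {{{name}}} placeholder with the matching value
    return re.sub(r"\{\{\{(\w+)\}\}\}", lambda m: str(variables[m.group(1)]), text)

prompt = "Now translate that slogan into {{{language}}}"
print(render_template(prompt, {"language": "Greek"}))
# Now translate that slogan into Greek
```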

Send our prompts to the LLM API and print our responses

We have prompts that are ready to be sent off to a language model. We create an OpenAI-compatible client using the endpoint_url and inference_api_key variables we set for our LLM provider, then iterate over our prompts, send each to the LLM, and print our messages and the LLM's responses.

# create an OpenAI client
client = openai.OpenAI(
    base_url = endpoint_url,  
    api_key = inference_api_key
)
messages = []

# iterate over the prompts and send the running conversation for completions
for p in formula.prompts:
    messages.append({"role": "user", "content": p})
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = completion.choices[0].message.content
    # keep the assistant's reply in the history so later prompts have context
    messages.append({"role": "assistant", "content": reply})
    # print the user prompt we sent
    print(f"\nUser: {p}")
    # print the Assistant's response
    print(f"\nAssistant: {reply}")

Your output may vary based on the LLM you choose. Printed out, we see this message exchange:

User: You are a personal motivator assistant who is direct and believes that everyone can be their 
best. Generate a motivating slogan for the occasion of It's my birthday!

Assistant: Absolutely, it's a wonderful day to celebrate YOU! Remember, every year is a new 
opportunity to grow, learn, and shine brighter. Happy Birthday! #EmbraceYourYear #BeYourBestSelf

User: Now translate that slogan into Greek

Assistant: "Για την μου γενέθλια, είμαι το πιο κρατάωμα μου!", which means "For my birthday, I 
am the best version of myself!"

That's your first Formulaic API call! Go build something great!