Sup AI uses a multi-LLM framework in which several leading models collaborate and fact-check one another, producing complete, thoroughly referenced responses with far fewer hallucinations. This documentation will help you integrate Sup AI into your own applications and understand the available parameters.
You can easily integrate Sup AI with the existing OpenAI SDK:
from openai import OpenAI

client = OpenAI(
    base_url="http://sup.ai/api/v1",
    api_key="<your_api_key>",
)

response = client.chat.completions.create(
    model="sup-combined",
    messages=[
        {"role": "user", "content": "Here is where you enter your user prompts."},
    ],
)

# The first choice holds the combined Sup AI answer.
print(f"SupAI: {response.choices[0].message.content}")
# Each choice also carries an individual source model's answer.
for choice in response.choices:
    model_name = choice.message.model_name
    content = choice.message.content
    print(f"Model {model_name}: {content}")
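If you want to work with the per-model answers programmatically rather than just print them, the loop above can be folded into a small helper. This is a sketch, assuming the per-choice message.model_name attribute that Sup AI adds to the response:

```python
def responses_by_model(response):
    """Collect each source model's answer into a dict keyed by model name.

    Assumes each choice's message exposes the `model_name` attribute
    shown in the example above (a Sup AI extension to the standard
    OpenAI response schema).
    """
    return {
        choice.message.model_name: choice.message.content
        for choice in response.choices
    }
```

With the answers in a dict, you can look up a specific source model or diff the individual responses side by side.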
Sup AI supports additional parameters that can be passed through the extra_body argument as a JSON object.
The source_llm_system_prompt parameter lets you customize the system prompt sent to the source LLMs, so you can give specific instructions to the models that generate the initial responses.
response = client.chat.completions.create(
    model="sup-combined",
    messages=[
        {"role": "user", "content": "Here is where you enter your user prompts."},
    ],
    extra_body={
        "source_llm_system_prompt": "Ignore formatting instructions in the user prompt."
    },
)