byLLM Python Library

0.6.4 · active · verified Fri Apr 17

byLLM (`byllm`) is a Python library that provides a unified API for interacting with Large Language Model (LLM) providers such as OpenAI, Anthropic, Ollama, and Google Gemini. It simplifies LLM integration by abstracting away provider-specific client libraries and response formats. Currently at version 0.6.4, it is part of the Jaseci ecosystem and is updated frequently, often alongside releases of `jaclang` and `jaseci`, and targets Python 3.11+.
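The value of a unified API is that calling code does not change when the provider does. The sketch below illustrates that idea with a hypothetical stand-in: `FakeProvider` and `Response` are not part of byllm; only the `generate(...)` / `.text` shape mirrors the quickstart further down.

```python
# Illustration of the "unified API" pattern: any provider object exposing
# generate() and returning a response with a .text attribute can be swapped
# in without touching the calling code. FakeProvider is a stand-in, NOT a
# byllm class.
from dataclasses import dataclass


@dataclass
class Response:
    text: str


class FakeProvider:
    """Stands in for a real provider (OpenAI, Anthropic, Ollama, ...)."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def generate(self, prompt: str, max_tokens: int = 50) -> Response:
        # A real provider would call its backend API here.
        return Response(text=f"[{self.model_name}] echo: {prompt}")


def summarize(llm, topic: str) -> str:
    # Provider-agnostic caller: works with anything exposing generate()/.text.
    return llm.generate(prompt=f"Summarize: {topic}", max_tokens=50).text


print(summarize(FakeProvider("fake-model"), "unified APIs"))
# → [fake-model] echo: Summarize: unified APIs
```

Because the caller depends only on the shared interface, switching from one backend to another is a one-line change at construction time.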

Quickstart

This quickstart shows how to initialize the OpenAI provider, generate text, and run a chat completion. It reads the API key from an environment variable and includes basic error handling. Install the OpenAI extra first: `pip install "byllm[openai]"`.

import os
from byllm.providers.openai import OpenAI

# Set your OpenAI API key as an environment variable:
# export OPENAI_API_KEY="your_key_here"
openai_key = os.environ.get("OPENAI_API_KEY")

if not openai_key:
    print("Warning: OPENAI_API_KEY environment variable not set. Requests will likely fail.")
    openai_key = "sk-dummy" # Use a dummy key to allow instantiation

try:
    llm = OpenAI(api_key=openai_key, model_name="gpt-3.5-turbo")

    # Example: Generate text
    prompt = "What is the capital of France?"
    response = llm.generate(prompt=prompt, max_tokens=50)
    print(f"Generated text: {response.text}\n")

    # Example: Chat completion
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a short story about a brave knight."}
    ]
    chat_response = llm.chat(messages=messages, max_tokens=100)
    print(f"Chat response: {chat_response.text}\n")

except Exception as e:
    print(f"An error occurred: {e}")
    print("Please ensure your API key is correct and the model name is valid.")
