Fireworks LangChain Integration

1.1.0 · active · verified Sun Apr 12

An integration package connecting Fireworks AI with LangChain, providing chat models (ChatFireworks) and embedding models (FireworksEmbeddings). The current version is 1.1.0, and the library follows a regular release cadence, often aligned with updates to the broader LangChain ecosystem.

Warnings

Install
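The package is published on PyPI under the same name as its import path, `langchain_fireworks`:

```shell
pip install -U langchain-fireworks
```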

Imports
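The quickstart below relies on these imports; `FireworksEmbeddings` is listed as well, since the package exposes it alongside `ChatFireworks`:

```python
from langchain_fireworks import ChatFireworks, FireworksEmbeddings
from langchain_core.messages import HumanMessage, SystemMessage
```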

Quickstart

Demonstrates how to initialize `ChatFireworks` with a specific model and make a simple chat completion call. Provide the API key via the `FIREWORKS_API_KEY` environment variable; the client reads it automatically. The model `llama-v3p3-70b-instruct` is an example; refer to Fireworks AI for the latest model IDs.

import os
from langchain_fireworks import ChatFireworks
from langchain_core.messages import HumanMessage, SystemMessage

# Ensure the FIREWORKS_API_KEY environment variable is set before running.
# In a real application, load your API key from a secure store rather than
# hard-coding it:
# os.environ["FIREWORKS_API_KEY"] = "YOUR_FIREWORKS_API_KEY"

llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3p3-70b-instruct",
    temperature=0,
    max_tokens=None,  # None uses the model's default completion limit
    # The API key is read from FIREWORKS_API_KEY automatically; passing an
    # empty string via fireworks_api_key would fail authentication.
)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
]

response = llm.invoke(messages)
print(response.content)
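The package's `FireworksEmbeddings` class follows the same pattern. A minimal sketch, assuming the `nomic-ai/nomic-embed-text-v1.5` model ID is still served (check Fireworks AI for currently available embedding models):

```python
from langchain_fireworks import FireworksEmbeddings

# Reads the API key from the FIREWORKS_API_KEY environment variable.
embeddings = FireworksEmbeddings(model="nomic-ai/nomic-embed-text-v1.5")

# embed_query returns a single embedding as a list of floats;
# embed_documents accepts a list of strings and returns one vector each.
vector = embeddings.embed_query("What is the capital of France?")
print(len(vector))
```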
