Groq Integration for LangChain

1.1.2 · active · verified Sat Apr 11

langchain-groq is an integration package that connects Groq's low-latency inference API with the LangChain framework. It lets developers use Groq's LPU-powered language models, such as Llama 3 and Mixtral, inside their LangChain applications. The library is actively maintained, with frequent updates that often track new features or fixes in the broader LangChain ecosystem.

Install
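
The package is published on PyPI and can be installed with pip; the `-U` flag upgrades any existing installation:

```shell
pip install -U langchain-groq
```
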

Imports
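
The chat model class and the message types used in the quickstart below are imported as follows (`ChatGroq` comes from this package; the message classes come from `langchain-core`, which is installed as a dependency):

```python
from langchain_groq import ChatGroq
from langchain_core.messages import HumanMessage, SystemMessage
```
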

Quickstart

This quickstart demonstrates how to initialize and use the `ChatGroq` model within LangChain. It shows both a single `invoke` call and streaming capabilities. Ensure your `GROQ_API_KEY` is set as an environment variable.

import os
from langchain_groq import ChatGroq
from langchain_core.messages import HumanMessage, SystemMessage

# Ensure your Groq API key is set as an environment variable
# os.environ["GROQ_API_KEY"] = "your_groq_api_key_here"
# Or pass it directly to the ChatGroq constructor (not recommended for production)

groq_api_key = os.environ.get("GROQ_API_KEY")

if not groq_api_key:
    raise RuntimeError("GROQ_API_KEY environment variable not set.")
else:
    llm = ChatGroq(
        model="llama-3.3-70b-versatile",
        temperature=0.0,
        max_retries=2,
        api_key=groq_api_key
    )

    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming."),
    ]

    # Invoke the model
    response = llm.invoke(messages)
    print("\n--- Invoke Response ---")
    print(f"Content: {response.content}")

    # Stream the response
    print("\n--- Streaming Response ---")
    for chunk in llm.stream(messages):
        print(chunk.content, end="")
    print()
