LangChain Experimental

0.4.1 · active · verified Sat Apr 11

LangChain Experimental is a Python package in the broader LangChain ecosystem that serves as a testing ground for novel concepts, advanced integrations, and speculative features for large language model (LLM) applications. It is intended for research and experimentation only: some of its components execute LLM-generated code and are dangerous unless run in a sandboxed environment. The current version is 0.4.1; while core LangChain follows semantic versioning with frequent patch and minor releases, the experimental package's release cadence tracks the rapid development of new LLM application patterns.

Warnings

Several components in this package, including the Pandas DataFrame agent used in the quickstart below, work by executing LLM-generated Python code. Run them only in a sandboxed environment and never expose them to untrusted input. Recent releases require an explicit opt-in (for example, `allow_dangerous_code=True` for the DataFrame agent) before such code will run.

Install
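A minimal install for the quickstart below might look like the following; the package names assume the current PyPI distribution names for the experimental package and the OpenAI integration:

```shell
# Install the experimental package plus the OpenAI integration and pandas,
# all of which the quickstart below depends on.
pip install -U langchain-experimental langchain-openai pandas
```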

Imports
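The quickstart uses the imports sketched below. The `try`/`except` wrapper is only a convenience for surfacing a missing dependency by name; in your own code you would import these at the top of the module directly:

```python
# Imports used throughout the quickstart. These assume langchain-experimental,
# langchain-openai, and pandas are installed (see the Install section).
try:
    import pandas as pd
    from langchain_openai import ChatOpenAI
    from langchain_experimental.agents.agent_toolkits import (
        create_pandas_dataframe_agent,
    )
    missing = None
except ImportError as exc:
    # Report which dependency is absent rather than failing later.
    missing = exc.name
    print(f"Missing dependency: {missing}")
```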

Quickstart

This quickstart demonstrates how to use the `create_pandas_dataframe_agent` from `langchain-experimental` to interact with a Pandas DataFrame using a Large Language Model. It sets up a sample DataFrame, initializes an OpenAI chat model, and then uses the experimental agent to answer questions about the data.

import os
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent

# The agent needs an OpenAI API key. Set OPENAI_API_KEY in your environment
# rather than hard-coding it in source.
if os.environ.get("OPENAI_API_KEY") is None:
    print("Warning: OPENAI_API_KEY environment variable is not set. Quickstart will fail.")
    # In a real application, handle this more robustly, e.g., raise an error.

# Build a sample DataFrame
data = {
    "name": ["Alice", "Bob", "Charlie", "David"],
    "age": [25, 30, 35, 28],
    "city": ["New York", "London", "Paris", "New York"]
}
df = pd.DataFrame(data)

# Initialize the LLM (requires langchain-openai to be installed)
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Create the pandas DataFrame agent.
# allow_dangerous_code=True is required in recent langchain-experimental
# releases; it acknowledges that the agent executes LLM-generated Python.
agent = create_pandas_dataframe_agent(llm, df, verbose=True, allow_dangerous_code=True)

# Run a query on the DataFrame
query = "What is the average age of the people from New York?"
print(f"\nQuery: {query}")
response = agent.invoke({"input": query})
print(f"Response: {response['output']}")

query_count = "How many people are from London?"
print(f"\nQuery: {query_count}")
response_count = agent.invoke({"input": query_count})
print(f"Response: {response_count['output']}")
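Under the hood, the agent writes and runs pandas code to answer each query. For reference, its expected answers can be checked with plain pandas, no LLM required:

```python
import pandas as pd

# Same sample data as the quickstart above.
df = pd.DataFrame({
    "name": ["Alice", "Bob", "Charlie", "David"],
    "age": [25, 30, 35, 28],
    "city": ["New York", "London", "Paris", "New York"],
})

# Average age of people from New York: (25 + 28) / 2 = 26.5
ny_mean_age = df.loc[df["city"] == "New York", "age"].mean()
print(ny_mean_age)  # 26.5

# Number of people from London
london_count = int((df["city"] == "London").sum())
print(london_count)  # 1
```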
