LangChain Experimental
LangChain Experimental is a Python package within the broader LangChain ecosystem that serves as a testing ground for novel concepts, advanced integrations, and speculative features related to large language models (LLMs). It is explicitly designed for research and experimental use, and users are warned that portions of its code may be dangerous unless deployed in a sandboxed environment. The current version is 0.4.1, and while core LangChain follows semantic versioning with frequent patch and minor releases, the experimental package's cadence is tied to the rapid development of new LLM application patterns.
Warnings
- breaking Components previously found under `langchain.experimental` have been moved to the `langchain_experimental` package. This requires updating import paths for any code referencing these modules.
- gotcha The `langchain-experimental` package is designed for 'research and experimental uses' and its components are subject to frequent changes. APIs within this package may not offer the same stability guarantees as core `langchain` and can introduce breaking changes even in minor versions.
- breaking Portions of the code in `langchain-experimental` may be dangerous if not deployed in a sandboxed environment. This is because experimental LLM applications can generate and execute code, interact with external systems, or process untrusted input.
- gotcha Historical vulnerabilities, such as CVE-2024-21513 affecting versions 0.0.15 to 0.0.20, have highlighted potential arbitrary code execution risks (e.g., through `eval` calls in `VectorSQLDatabaseChain`). While specific CVEs are patched, the experimental nature means similar risks could emerge in new features.
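The `eval`-style risk behind CVE-2024-21513 is easy to reproduce in plain Python, independent of LangChain: passing untrusted model output to `eval` executes it, while `ast.literal_eval` only accepts data literals. A minimal, standalone illustration of the pattern:

```python
import ast

# A model-derived string that looks like data but carries a payload.
untrusted = "__import__('os').getcwd()"

# eval() executes the payload. Here it merely reads the working directory,
# but it could do anything the host process can.
result = eval(untrusted)
print(type(result).__name__)  # str: the code ran and returned a path

# ast.literal_eval() refuses anything that is not a pure literal.
try:
    ast.literal_eval(untrusted)
except ValueError as exc:
    print("rejected:", type(exc).__name__)
```

This is why experimental chains that route LLM output into `eval` or a Python REPL should only run inside a sandbox with restricted permissions.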
Install
- pip install langchain-experimental
- pip install langchain-openai pandas
Imports
- create_pandas_dataframe_agent
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
Quickstart
import os
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
# Ensure OPENAI_API_KEY is set in your environment.
# For local testing you can uncomment the line below, but prefer environment
# variables in production and never commit keys to source control.
# os.environ["OPENAI_API_KEY"] = "sk-..."
if os.environ.get("OPENAI_API_KEY") is None:
    print("Warning: OPENAI_API_KEY environment variable is not set. Quickstart may fail.")
    # In a real application, handle this more robustly, e.g., raise an error.
# Load a sample DataFrame
data = {
    "name": ["Alice", "Bob", "Charlie", "David"],
    "age": [25, 30, 35, 28],
    "city": ["New York", "London", "Paris", "New York"],
}
df = pd.DataFrame(data)
# Initialize the LLM (requires langchain-openai to be installed)
llm = ChatOpenAI(model="gpt-4", temperature=0)
# Create the pandas dataframe agent.
# Recent versions require explicitly opting in to code execution, since the
# agent runs LLM-generated Python against your DataFrame.
agent = create_pandas_dataframe_agent(llm, df, verbose=True, allow_dangerous_code=True)
# Run a query on the DataFrame
query = "What is the average age of the people from New York?"
print(f"\nQuery: {query}")
response = agent.invoke({"input": query})
print(f"Response: {response['output']}")
query_count = "How many people are from London?"
print(f"\nQuery: {query_count}")
response_count = agent.invoke({"input": query_count})
print(f"Response: {response_count['output']}")
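Under the hood, the agent answers these questions by generating and running ordinary pandas code against the DataFrame. Reproducing the two queries directly in pandas is a useful sanity check on the agent's answers:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "Bob", "Charlie", "David"],
    "age": [25, 30, 35, 28],
    "city": ["New York", "London", "Paris", "New York"],
})

# Average age of people from New York: (25 + 28) / 2
avg_ny_age = df.loc[df["city"] == "New York", "age"].mean()
print(avg_ny_age)  # 26.5

# Number of people from London
london_count = int((df["city"] == "London").sum())
print(london_count)  # 1
```

If the agent's output diverges from these values, inspect the generated code in the verbose trace before trusting the answer.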