Prompty Python Runtime
Prompty is a new asset class and format for LLM prompts that aims to provide observability, understandability, and portability for developers. It includes a specification, tooling, and a runtime. The Python runtime (current version 0.1.50) allows developers to load, render, parse, and execute Prompty files within Python applications. It's actively developed by Microsoft, with distinct release cycles for its Python, C#, and TypeScript bindings.
Warnings
- breaking The Prompty specification evolved from v1 to v2, introducing changes in metadata structure. For example, `model.configuration` became `model.connection`, `model.parameters` became `model.options`, and `inputs` moved to `inputSchema.properties`. While Prompty v2 aims for backward compatibility by migrating v1 files, be aware of these structural changes when working with older `.prompty` assets.
- gotcha Prompty relies on specific 'extra' installations for different LLM providers (e.g., `[openai]` for OpenAI, `[azure]` for Azure OpenAI) and template engines (e.g., `[jinja2]`). Forgetting to install these can lead to `ImportError` or `ModuleNotFoundError` at runtime when trying to use a particular provider or templating feature.
- gotcha Prompty files frequently use environment variables (e.g., `${env:OPENAI_API_KEY}`, `${env:AZURE_OPENAI_ENDPOINT}`) for model connection and API keys. If these environment variables are not correctly set in the execution environment, the Prompty runtime will fail to authenticate or connect to the LLM, leading to runtime errors.
- gotcha While PyPI states `requires_python>=3.9`, the official documentation and code guidelines often recommend Python 3.10 or higher (and sometimes specifically 3.11 for modern syntax). Using Python 3.9 might limit access to newer features or cause unexpected behavior with examples written for later Python versions.
- gotcha Prompty is designed to pair with a VS Code extension for authoring, debugging, and testing `.prompty` files. The Python runtime can execute `.prompty` files on its own, but without the extension the initial prompt-engineering and debugging loop can be noticeably slower for some users.
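To make the v1→v2 renames concrete, here is a minimal frontmatter sketch in the v2 shape, with the corresponding v1 field names noted in comments. The field names come from the breaking-change warning above; the surrounding values are purely illustrative:

```yaml
# Prompty v2 frontmatter sketch (values illustrative)
model:
  connection:          # v1: model.configuration
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
  options:             # v1: model.parameters
    temperature: 0.7
inputSchema:           # v1: top-level `inputs`
  properties:
    name:
      kind: string
```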
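Because missing environment variables only surface as authentication or connection errors at run time, a small stdlib-only preflight check can fail fast instead. The variable names below are illustrative; check whatever `${env:...}` references appear in your own `.prompty` files:

```python
import os

def missing_env_vars(required):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Names here should mirror the ${env:...} references in your .prompty files.
missing = missing_env_vars(["OPENAI_API_KEY"])
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```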
Install
- Base runtime:
pip install prompty
- With extras for providers and templating (available extras include openai, azure, jinja2, and all):
pip install "prompty[openai,azure,jinja2]"
Imports
- prompty
import prompty
- Tracer
from prompty.tracer import Tracer, console_tracer, PromptyTracer, trace
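The tracer imports above are typically registered once at startup. A sketch, assuming the 0.1.x tracer API (`Tracer.add`, `console_tracer`, `PromptyTracer`), guarded against a missing `prompty` install per the extras gotcha above; with `prompty` installed, the guard can be dropped:

```python
import importlib.util

# Guard: prompty may be absent if the package (or an extra) isn't installed.
prompty_available = importlib.util.find_spec("prompty") is not None
if prompty_available:
    from prompty.tracer import Tracer, console_tracer, PromptyTracer

    # Stream trace events to stdout:
    Tracer.add("console", console_tracer)
    # Write JSON trace files, viewable in the Prompty VS Code extension:
    json_tracer = PromptyTracer()
    Tracer.add("PromptyTracer", json_tracer.tracer)
```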
Quickstart
import os
import prompty

# Create a dummy .prompty file for the example
prompty_content = """
---
name: greeting
model:
  id: gpt-4o-mini
  provider: openai
  connection:
    kind: key
    apiKey: ${env:OPENAI_API_KEY}
inputSchema:
  properties:
    name:
      kind: string
      default: World
template:
  format:
    kind: jinja2
  parser:
    kind: prompty
---
system:
You are a friendly assistant.

user:
Say hello to {{name}}.
"""
with open("greeting.prompty", "w") as f:
    f.write(prompty_content)

# Ensure OPENAI_API_KEY is set in your environment
# For local testing, you might set it like this:
# os.environ["OPENAI_API_KEY"] = "your_openai_api_key_here"
openai_api_key = os.environ.get("OPENAI_API_KEY", "")

if not openai_api_key:
    print("Warning: OPENAI_API_KEY environment variable not set. Skipping execution.")
    print("Please set OPENAI_API_KEY for the quickstart to run successfully.")
else:
    try:
        # One-shot: load + prepare + run
        result = prompty.execute(
            "greeting.prompty",
            inputs={"name": "Jane"},
        )
        print("\nPrompty execution result (one-shot):")
        print(result)

        # Step-by-step example
        agent = prompty.load("greeting.prompty")
        messages = prompty.prepare(
            agent,
            inputs={"name": "World"},
        )
        step_by_step_result = prompty.run(agent, messages)
        print("\nPrompty execution result (step-by-step):")
        print(step_by_step_result)
    except Exception as e:
        print(f"\nAn error occurred during Prompty execution: {e}")
        print("Ensure your OPENAI_API_KEY is valid and has access to gpt-4o-mini.")

# Clean up the dummy file
os.remove("greeting.prompty")