Llama API Client
The `llama-api-client` library is the official TypeScript client for the Llama API, providing convenient access to the REST API from both server-side TypeScript and JavaScript environments. Currently at version `0.3.0`, it is actively developed, with frequent (often weekly or bi-weekly) releases delivering feature enhancements and bug fixes. Key differentiators include a TypeScript-first design with comprehensive type definitions for all request parameters and response fields, streaming responses via Server-Sent Events (SSE), and flexible file uploads that accept several input types (e.g. `File`, `fs.ReadStream`, or the `toFile` helper). The client is generated with Stainless, giving it a consistent, well-documented API surface and robust error handling for both API and network failures. It streamlines integration with Llama models for applications requiring conversational AI, content generation, or other generative AI capabilities.
Common errors
- `Error: LLAMA_API_KEY is not set`
  - cause: The client was initialized without an API key, and the `LLAMA_API_KEY` environment variable was not found in the current process environment.
  - fix: Set the `LLAMA_API_KEY` environment variable in your system or shell, or explicitly pass the `apiKey` option to the `LlamaAPIClient` constructor: `new LlamaAPIClient({ apiKey: 'your_key' })`.
- `LlamaAPIClient.APIError: Request failed with status code 401`
  - cause: The provided API key is invalid, expired, revoked, or lacks the permissions required for the requested operation.
  - fix: Verify on your Llama API account's key-management page that `LLAMA_API_KEY` is correct, active, and has the appropriate access rights.
- `LlamaAPIClient.APIError: Request failed with status code 400 (Bad Request)`
  - cause: The request payload or parameters were malformed, contained invalid values, or were incomplete according to the Llama API's specification.
  - fix: Check the parameters you are sending against the official Llama API documentation and the TypeScript type definitions provided by the library.
- `TypeError: fetch failed`
  - cause: A low-level network problem (e.g. DNS resolution failure, a firewall block, or no internet connection) prevented the client from reaching the Llama API endpoint.
  - fix: Check your internet connection, confirm the Llama API service is operational (consult its status page), and verify that your network allows outgoing HTTPS connections to the API endpoint.
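The errors above can be distinguished at runtime. Below is a minimal sketch of an error classifier; it duck-types on a numeric `status` field rather than importing the client, on the assumption (standard for Stainless-generated clients, but worth verifying against your installed version) that `APIError` instances expose the HTTP status there.

```typescript
// Classify an error thrown by llama-api-client into a human-readable label.
// Assumes APIError instances expose a numeric `status` field; anything else
// is treated as a network/runtime failure (e.g. `TypeError: fetch failed`).
function classifyLlamaError(error: unknown): string {
  const status = (error as { status?: unknown }).status;
  if (typeof status === 'number') {
    if (status === 401) return 'invalid or missing API key';
    if (status === 400) return 'malformed request';
    return `API error ${status}`;
  }
  return 'network or runtime failure';
}
```

In application code you would typically combine this with a stricter `error instanceof LlamaAPIClient.APIError` check, as shown in the Quickstart.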
Warnings
- breaking: The library is currently below version 1.0.0. The API surface may introduce breaking changes in minor versions (e.g., `0.2.x` to `0.3.x`) without strict adherence to semantic versioning until a stable 1.0.0 release. Always review changelogs when updating.
- gotcha: Failure to configure the `LLAMA_API_KEY` (either via the `process.env['LLAMA_API_KEY']` environment variable or explicitly in the `LlamaAPIClient` constructor) will result in authentication errors (e.g., 401 Unauthorized responses).
- gotcha: While breaking out of a `for await...of` loop over a streaming response stops processing new chunks, it might not immediately abort the underlying HTTP request. This can leave resources open or cause unnecessary network usage.
- gotcha: The client throws `LlamaAPIClient.APIError` for non-success HTTP responses (4xx, 5xx). If you do not catch this specific error type, your application may not handle API-specific problems gracefully, leading to uncaught exceptions or generic error messages.
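The streaming gotcha above can be addressed by aborting the stream explicitly rather than just breaking out of the loop. The sketch below mocks the stream as a plain async generator so it is self-contained; with the real client you would obtain the stream from `client.chat.completions.create({ ..., stream: true })`, and the abort callback would be `stream.controller.abort()` (a `controller` property is standard on Stainless-generated streams, but verify it against your installed version).

```typescript
// Mock stand-in for a streaming response; the real stream is also an
// AsyncIterable, so the consumption pattern below carries over unchanged.
async function* mockChunks(): AsyncGenerator<string> {
  for (const piece of ['Once ', 'upon ', 'a ', 'time...']) yield piece;
}

// Consume at most `limit` chunks, then abort the underlying request.
async function readFirstChunks(
  chunks: AsyncIterable<string>,
  limit: number,
  abort: () => void, // with the real client: () => stream.controller.abort()
): Promise<string[]> {
  const seen: string[] = [];
  for await (const chunk of chunks) {
    seen.push(chunk);
    if (seen.length >= limit) {
      abort(); // cancel the HTTP request, not just the iteration
      break;   // breaking alone stops iteration but may leave the request open
    }
  }
  return seen;
}
```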
Install
- `npm install llama-api-client`
- `yarn add llama-api-client`
- `pnpm add llama-api-client`
Imports
- `LlamaAPIClient`
  - `import LlamaAPIClient from 'llama-api-client';`
  - `const LlamaAPIClient = require('llama-api-client');`
- `toFile`
  - `import { toFile } from 'llama-api-client';`
  - Also available as a static helper on the client (note that it returns a Promise): `import LlamaAPIClient from 'llama-api-client'; const file = await LlamaAPIClient.toFile(buffer, 'name.txt');`
- `Chat.CompletionCreateParams`
  - `import LlamaAPIClient from 'llama-api-client'; const params: LlamaAPIClient.Chat.CompletionCreateParams = { messages: [{ content: 'string', role: 'user' }], model: 'model' };`
- `APIError`
  - `import LlamaAPIClient from 'llama-api-client'; try { /* ... */ } catch (error) { if (error instanceof LlamaAPIClient.APIError) { /* ... */ } }`
Quickstart
import LlamaAPIClient from 'llama-api-client';
import { toFile } from 'llama-api-client'; // Only needed for the (commented-out) file upload example below

// Ensure LLAMA_API_KEY is set as an environment variable (e.g. in a .env file or production config).
// For local development you might load it with the 'dotenv' package: `require('dotenv').config();`
const client = new LlamaAPIClient({
  apiKey: process.env['LLAMA_API_KEY'] ?? '', // Fall back to an empty string, or validate and fail fast on a missing key
});

async function runLlamaClientExamples() {
  try {
    console.log('--- Creating a Chat Completion ---');
    const chatResponse = await client.chat.completions.create({
      messages: [{ content: 'Hello, what is the capital of France?', role: 'user' }],
      model: 'llama-3-8b-instruct', // Example model identifier
      max_tokens: 50,
      temperature: 0.7,
    });
    console.log('Chat completion response:', chatResponse.completion_message?.content);

    console.log('\n--- Streaming Response Example ---');
    const stream = await client.chat.completions.create({
      messages: [{ content: 'Tell me a short story about a brave knight.', role: 'user' }],
      model: 'llama-3-8b-instruct',
      stream: true,
      max_tokens: 100,
    });
    process.stdout.write('Streamed story: ');
    for await (const chunk of stream) {
      if (chunk.completion_message) {
        process.stdout.write(chunk.completion_message.content || '');
      }
    }
    process.stdout.write('\n'); // Newline after the stream finishes

    // Conceptual file upload example (requires an 'uploads' endpoint and real file data).
    // In practice you would pass fs.createReadStream(...), a web File object, or a Buffer.
    // const dummyFileContent = Buffer.from('This is a test file for upload.');
    // const dummyFile = await toFile(dummyFileContent, 'my-document.txt');
    // const uploadResponse = await client.uploads.create({ file: dummyFile, purpose: 'fine-tune' });
    // console.log('\nUpload initiated:', uploadResponse);
  } catch (error) {
    if (error instanceof LlamaAPIClient.APIError) {
      console.error('Llama API Error caught:', error.status, error.code, error.message, 'Details:', error.error);
    } else {
      console.error('An unexpected error occurred:', error);
    }
  }
}

runLlamaClientExamples();
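Rather than falling back to an empty string for the API key as the Quickstart does, you can fail fast at startup. A minimal sketch of such a guard (the helper name `requireEnv` is illustrative, not part of the library):

```typescript
// Throw immediately at startup if a required environment variable is missing,
// instead of letting the client fail later with a 401 Unauthorized response.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; export it or add it to your .env file`);
  }
  return value;
}

// Usage:
// const client = new LlamaAPIClient({ apiKey: requireEnv('LLAMA_API_KEY') });
```

Failing at construction time surfaces the configuration problem immediately, rather than as a confusing authentication error on the first API call.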