{"id":16105,"library":"llama-api-client","title":"Llama API Client","description":"The `llama-api-client` library is the official TypeScript client for the Llama API, providing convenient access to the REST API from server-side TypeScript and JavaScript environments. Currently at version `0.3.0`, it is under active development with frequent releases, so expect ongoing feature additions and bug fixes. Key differentiators include its TypeScript-first design, comprehensive type definitions for all request parameters and response fields, and first-class support for advanced features such as streaming responses via Server-Sent Events (SSE) and flexible file uploads from various input types (e.g., `File`, `fs.ReadStream`, or the `toFile` helper). The client is generated with Stainless, giving it a consistent, well-documented API surface, and includes structured error handling for API and network failures. It streamlines integration with Llama models for applications that need conversational AI, content generation, or other generative capabilities.","status":"active","version":"0.3.0","language":"javascript","source_language":"en","source_url":"https://github.com/meta-llama/llama-api-typescript","tags":["javascript","typescript"],"install":[{"cmd":"npm install llama-api-client","lang":"bash","label":"npm"},{"cmd":"yarn add llama-api-client","lang":"bash","label":"yarn"},{"cmd":"pnpm add llama-api-client","lang":"bash","label":"pnpm"}],"dependencies":[],"imports":[{"note":"This is the default export for the main client class. 
While CommonJS `require` might work in some transpiled environments, native ESM `import` is the idiomatic and recommended way, especially in modern TypeScript projects.","wrong":"const LlamaAPIClient = require('llama-api-client');","symbol":"LlamaAPIClient","correct":"import LlamaAPIClient from 'llama-api-client';"},{"note":"The `toFile` utility function, used for standardizing file upload inputs, is a named export, not a property of the default `LlamaAPIClient` instance or class.","wrong":"import LlamaAPIClient from 'llama-api-client';\nconst file = LlamaAPIClient.toFile(buffer, 'name.txt');","symbol":"toFile","correct":"import { toFile } from 'llama-api-client';"},{"note":"Type definitions for request parameters are exposed as nested types under the `LlamaAPIClient` namespace for clear categorization and IntelliSense.","symbol":"Chat.CompletionCreateParams","correct":"import LlamaAPIClient from 'llama-api-client';\n\nconst params: LlamaAPIClient.Chat.CompletionCreateParams = {\n  messages: [{ content: 'string', role: 'user' }],\n  model: 'model',\n};"},{"note":"The base class for all API-specific errors thrown by the client is nested under the main `LlamaAPIClient` object, allowing for structured error handling.","symbol":"APIError","correct":"import LlamaAPIClient from 'llama-api-client';\n\ntry { /* ... */ } catch (error) {\n  if (error instanceof LlamaAPIClient.APIError) { /* ... */ }\n}"}],"quickstart":{"code":"import LlamaAPIClient, { toFile } from 'llama-api-client'; // toFile is used only in the commented-out file-upload example below\n\n// Ensure your LLAMA_API_KEY is set as an environment variable (e.g., in a .env file or production config).\n// For local development, the 'dotenv' package can load it: `require('dotenv').config();`\nconst client = new LlamaAPIClient({\n  apiKey: process.env['LLAMA_API_KEY'] ?? 
'', // Provide an empty string fallback or handle validation for missing key\n});\n\nasync function runLlamaClientExamples() {\n  try {\n    console.log('--- Creating a Chat Completion ---');\n    const chatResponse = await client.chat.completions.create({\n      messages: [{ content: 'Hello, what is the capital of France?', role: 'user' }],\n      model: 'llama-3-8b-instruct', // Using an example model identifier\n      max_tokens: 50,\n      temperature: 0.7,\n    });\n    console.log('Chat completion response:', chatResponse.completion_message?.content);\n\n    console.log('\\n--- Streaming Response Example ---');\n    const stream = await client.chat.completions.create({\n      messages: [{ content: 'Tell me a short story about a brave knight.', role: 'user' }],\n      model: 'llama-3-8b-instruct',\n      stream: true,\n      max_tokens: 100,\n    });\n    process.stdout.write('Streamed story: ');\n    for await (const chunk of stream) {\n      if (chunk.completion_message) {\n        process.stdout.write(chunk.completion_message.content || '');\n      }\n    }\n    process.stdout.write('\\n'); // Newline after stream finishes\n\n    // Conceptual example for file upload (requires an 'uploads' endpoint and actual file data)\n    // For a real scenario, you would typically use fs.createReadStream, a web File object, or Buffer.\n    // const dummyFileContent = Buffer.from('This is a test file for upload.');\n    // const dummyFile = await toFile(dummyFileContent, 'my-document.txt');\n    // const uploadResponse = await client.uploads.create({ file: dummyFile, purpose: 'fine-tune' });\n    // console.log('\\nUpload initiated:', uploadResponse);\n\n  } catch (error) {\n    if (error instanceof LlamaAPIClient.APIError) {\n      console.error('Llama API Error caught:', error.status, error.code, error.message, 'Details:', error.error);\n    } else {\n      console.error('An unexpected error occurred:', error);\n    }\n  
}\n}\n\nrunLlamaClientExamples();\n","lang":"typescript","description":"This quickstart demonstrates how to initialize the Llama API client, perform a basic chat completion, consume a streaming response, and catch API-specific errors with `LlamaAPIClient.APIError`."},"warnings":[{"fix":"Pin an exact version in `package.json` (e.g. `0.3.0` with no range prefix; note that for `0.x` releases both `^0.x.y` and `~0.x.y` allow patch-level updates) and review the changelog before upgrading.","message":"The library is currently below version 1.0.0, so the API surface may introduce breaking changes in minor versions (e.g., `0.2.x` to `0.3.x`) without strict adherence to semantic versioning until a stable 1.0.0 release. Always review changelogs before updating.","severity":"breaking","affected_versions":"<1.0.0"},{"fix":"Ensure `process.env['LLAMA_API_KEY']` is set securely in your environment or provide the `apiKey` directly in the client constructor: `new LlamaAPIClient({ apiKey: 'YOUR_API_KEY_HERE' })`.","message":"Failure to configure the `LLAMA_API_KEY` (either via the environment variable `process.env['LLAMA_API_KEY']` or explicitly in the `LlamaAPIClient` constructor) will result in authentication errors (e.g., 401 Unauthorized responses).","severity":"gotcha","affected_versions":">=0.1.0"},{"fix":"For explicit and immediate cancellation of an active stream, access the `controller` property of the stream object and call `stream.controller.abort()`.","message":"While breaking out of a `for await...of` loop over a streaming response stops processing new chunks, it might not immediately abort the underlying HTTP request. 
This could leave resources open or cause unnecessary network usage.","severity":"gotcha","affected_versions":">=0.2.0"},{"fix":"Wrap API calls in `try-catch` blocks and use `if (error instanceof LlamaAPIClient.APIError)` to handle API-specific errors, allowing access to `error.status`, `error.code`, and `error.message` for detailed diagnostics.","message":"The client throws `LlamaAPIClient.APIError` for non-success HTTP responses (4xx, 5xx). Not catching this specific error type means your application might not gracefully handle API-specific problems, potentially leading to uncaught exceptions or generic error messages.","severity":"gotcha","affected_versions":">=0.1.0"}],"env_vars":null,"last_verified":"2026-04-21T00:00:00.000Z","next_check":"2026-07-20T00:00:00.000Z","problems":[{"fix":"Set the `LLAMA_API_KEY` environment variable in your system or shell, or explicitly pass the `apiKey` option to the `LlamaAPIClient` constructor: `new LlamaAPIClient({ apiKey: 'your_key' })`.","cause":"The Llama API client was initialized without an API key, and the `LLAMA_API_KEY` environment variable was not found in the current process environment.","error":"Error: LLAMA_API_KEY is not set"},{"fix":"Verify that your `LLAMA_API_KEY` is correct, active, and has appropriate access rights by checking your Llama API account and key management page.","cause":"The provided API key is either invalid, expired, revoked, or does not have the necessary permissions for the requested operation.","error":"LlamaAPIClient.APIError: Request failed with status code 401"},{"fix":"Review the request parameters you are sending to the API call against the official Llama API documentation and the TypeScript type definitions provided by the library for correctness.","cause":"The request payload or parameters sent to the API were malformed, contained invalid values, or were incomplete according to the Llama API's specification.","error":"LlamaAPIClient.APIError: Request failed with status code 400 (Bad 
Request)"},{"fix":"Check your internet connection, ensure the Llama API service is operational (consult their status page), and verify that your network environment allows outgoing HTTPS connections to the API endpoint.","cause":"A low-level network connectivity issue (e.g., DNS resolution failure, firewall block, or no internet connection) prevented the client from establishing a connection to the Llama API endpoint.","error":"TypeError: fetch failed"}],"ecosystem":"npm"}