{"id":5949,"library":"gptcache","title":"GPTCache","description":"GPTCache is a caching library designed to speed up and lower the cost of chat applications that rely on Large Language Model (LLM) services. It functions as a semantic cache, storing and retrieving responses for similar (not just exact) queries using embedding algorithms and vector stores. The library is actively maintained with frequent minor releases.","status":"active","version":"0.1.44","language":"en","source_language":"en","source_url":"https://github.com/zilliztech/GPTCache","tags":["LLM","cache","AI","performance","cost reduction","semantic cache"],"install":[{"cmd":"pip install gptcache","lang":"bash","label":"Install core library"},{"cmd":"pip install gptcache[openai]","lang":"bash","label":"Install with OpenAI support"},{"cmd":"pip install gptcache[langchain]","lang":"bash","label":"Install with LangChain support"},{"cmd":"pip install gptcache[redis]","lang":"bash","label":"Install with Redis support"}],"dependencies":[{"reason":"Required Python version.","package":"python","version":">=3.8.1"},{"reason":"Optional dependency for distributed caching or for using Redis as a cache store.","package":"redis","optional":true},{"reason":"Optional dependency for integration with LangChain.","package":"langchain","optional":true},{"reason":"Transitive dependency, usually pulled in by LangChain integrations; specific versions may cause conflicts.","package":"pydantic","optional":true}],"imports":[{"note":"Pre-configured global cache instance, commonly used for quick setup.","symbol":"cache","correct":"from gptcache import cache"},{"note":"The main class for creating a cache instance with custom configuration.","symbol":"Cache","correct":"from gptcache import Cache"},{"note":"Adapter that wraps OpenAI API calls so they go through GPTCache.","symbol":"openai","correct":"from gptcache.adapter import openai"}],"quickstart":{"code":"import os\nfrom gptcache import cache\nfrom gptcache.adapter import openai\n\n# Ensure your OpenAI API key is set (a placeholder is used if it is missing)\nos.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"sk-...\")\n\n# Initialize GPTCache\ncache.init()\n\n# The gptcache.adapter.openai module wraps the openai library, so\n# subsequent OpenAI API calls automatically use the cache\nresponse1 = openai.ChatCompletion.create(\n    model=\"gpt-3.5-turbo\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Hello, what is the capital of France?\"}\n    ]\n)\nprint(f\"First response (likely from LLM): {response1.choices[0].message.content}\")\n\n# An identical second request hits the cache for a faster, cheaper response\nresponse2 = openai.ChatCompletion.create(\n    model=\"gpt-3.5-turbo\",\n    messages=[\n        {\"role\": \"user\", \"content\": \"Hello, what is the capital of France?\"}\n    ]\n)\nprint(f\"Second response (likely from cache): {response2.choices[0].message.content}\")","lang":"python","description":"This quickstart shows how to integrate GPTCache with the OpenAI API. After the cache is initialized, subsequent OpenAI calls automatically use semantic caching: the first query goes to the LLM, while identical or semantically similar follow-up queries are served from the cache."},"warnings":[{"fix":"Upgrade GPTCache to version 0.1.43 or newer to resolve known compatibility issues with Pydantic v2 and LangChain.","message":"When integrating with LangChain, particularly under Pydantic v2, older versions of GPTCache could raise metaclass conflict errors and hit known Pydantic-related bugs in LangChain chat models.","severity":"gotcha","affected_versions":"<0.1.43"},{"fix":"Install with `pip install gptcache[redis]` if you plan to use Redis. Upgrade to at least 0.1.36 for critical Redis connection fixes, and to 0.1.43 to benefit from `redis` being an optional dependency, avoiding unnecessary installs.","message":"Features such as remote Redis cache stores or distributed caching require an explicit `redis` installation and can encounter connection issues in older versions.","severity":"gotcha","affected_versions":"<0.1.43 (optional Redis install), <0.1.36 (Redis connection fix)"},{"fix":"Regularly update GPTCache to its latest version to stay compatible with evolving LLM APIs. Version 0.1.38 addressed specific OpenAI API base changes.","message":"Changes in external LLM APIs (e.g., OpenAI's API base for embeddings) can cause unexpected behavior or errors if GPTCache is not updated to reflect them.","severity":"gotcha","affected_versions":"<0.1.38"}],"env_vars":null,"last_verified":"2026-04-14T00:00:00.000Z","next_check":"2026-07-13T00:00:00.000Z"}