GPT4All

2.8.2 · verified Fri May 01 · auth: no · python

Python bindings for GPT4All, a locally running, open-source LLM ecosystem. Version 2.8.2 supports local inference with models such as Mistral, Llama, and the GPT4All family. The package is released regularly with updates to model support and API improvements.

pip install gpt4all
error ValueError: Unknown model name 'my-model'
cause Trying to instantiate a model with an incorrect or unsupported name.
fix
Call GPT4All.list_models() to see available model names, or load a local model file by passing its file name together with the directory it lives in via model_path.
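
A minimal sketch of looking up valid names; it assumes each entry returned by `GPT4All.list_models()` carries a `filename` field, which is the string the constructor expects.

from gpt4all import GPT4All

# Print the exact strings that can be passed to the GPT4All() constructor.
for entry in GPT4All.list_models():
    print(entry["filename"])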
error ModuleNotFoundError: No module named 'gpt4all'
cause The package is not installed, or it was installed into a different Python environment than the one running your script.
fix
Run pip install gpt4all in your active Python environment.
error AttributeError: 'GPT4All' object has no attribute 'generate'
cause Using an older version of the library or a model that does not support text generation.
fix
Update to the latest version with pip install --upgrade gpt4all. Ensure the model is a text generation model (not an embedding model).
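
For embeddings the bindings expose a separate Embed4All class; a sketch of the split, assuming current (2.7+) APIs and using an illustrative generation model name:

from gpt4all import GPT4All, Embed4All

# Text generation: GPT4All with a chat/instruct model.
llm = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
print(llm.generate("Say hello", max_tokens=20))

# Embeddings: use Embed4All, not GPT4All.generate().
embedder = Embed4All()                       # downloads a default embedding model
print(len(embedder.embed("Hello, world")))   # embed() returns a list of floats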
breaking In version 2.5.0+, the GPT4All class constructor changed to require a model name string instead of a model object. Old code using `GPT4All(llmodel=...)` will break.
fix Use `GPT4All('model-file.gguf')`, or `GPT4All('model-file.gguf', model_path='/path/to/models/')` when the file lives outside the default cache directory.
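
A before/after sketch of the constructor change (the old keyword form is shown only as a comment, to illustrate what breaks):

from gpt4all import GPT4All

# Pre-2.5 style, no longer accepted:
# model = GPT4All(llmodel=...)

# 2.5.0+ style: pass the model file name as a string; it is downloaded
# to the cache directory if not already present.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")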
gotcha Model names in GPT4All v2.x must match the exact file name in the official model list (e.g., 'Meta-Llama-3-8B-Instruct.Q4_0.gguf'). Using a wrong name either downloads a different model or raises an exception.
fix Check available models with `GPT4All.list_models()` and use the exact `filename` field.
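
To fail fast instead of triggering a large download when a name is wrong, the constructor's allow_download flag can be set to False (the model file name below is illustrative):

from gpt4all import GPT4All

# Raises if the file is not already present locally, instead of downloading it.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", allow_download=False)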
deprecated The `generate()` method's `n_predict` parameter was renamed to `max_tokens` in v2.7.0. Old `n_predict` still works but is deprecated and will be removed.
fix Use `max_tokens` instead of `n_predict`.
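
A one-line migration sketch (the model name is illustrative):

from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# output = model.generate("Hello", n_predict=50)   # deprecated spelling
output = model.generate("Hello", max_tokens=50)     # preferred spelling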
gotcha By default, models are downloaded to the user's cache directory (~/.cache/gpt4all/). If you expect a model to be in the current directory, specify `model_path` explicitly.
fix Use `GPT4All('model.gguf', model_path='.')` to load a model file from the current directory.
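
A sketch of loading a model file that already sits in a local directory (the file name and directory are placeholders):

from gpt4all import GPT4All

# model_path is the directory containing the .gguf file; the first argument
# is the file name inside that directory.
model = GPT4All("my-local-model.gguf", model_path=".", allow_download=False)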

Initialize the model with a known model name (or a local file via model_path), then generate text.

from gpt4all import GPT4All

# Downloads the model to ~/.cache/gpt4all/ on first use, then loads it.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
output = model.generate("What is the capital of France?", max_tokens=50)
print(output)