RQ (Redis Queue)
2.7.0 · verified Tue May 12 · auth: no · python install: verified · quickstart: stale
Simple Python job queue backed by Redis or Valkey. Current version is 2.7.0. Requires Python >=3.9, Redis >=5 or Valkey >=7.2. Much simpler than Celery — no broker config, just a Redis connection. Key API change: job.result property removed in 1.12, replaced by job.return_value(). job.get_status() now returns a JobStatus enum, not a string.
pip install rq

Common errors
error ModuleNotFoundError: No module named 'redis' ↓
cause The `rq` library requires the `redis` Python client library as a dependency, but it is not installed in the current environment.
fix
Install the Redis Python client:
pip install redis

error AttributeError: 'Job' object has no attribute 'result' ↓
cause The `job.result` property was removed in RQ version 1.12; you should now use `job.return_value()` to retrieve the job's return value.
fix
Replace job.result with job.return_value().
# Old code: result = job.result
# New code:
result = job.return_value()

error redis.exceptions.ConnectionError: Error 111 connecting to 127.0.0.1:6379. Connection refused. ↓
cause The RQ client or worker is unable to connect to the Redis server, likely because Redis is not running or is configured to listen on a different host/port.
fix
Ensure your Redis server is running and accessible at the specified host and port (default is localhost:6379). You may need to start the Redis server (e.g., redis-server) or verify your REDIS_URL / connection parameters.

error AttributeError: 'JobStatus' object has no attribute 'lower' ↓
cause In RQ versions >= 1.12, `job.get_status()` returns a `JobStatus` Enum member, which does not have string methods like `lower()`.
fix
Compare the status with members of the JobStatus enum, or explicitly convert to a string using str(job.get_status().value) if a string representation is truly needed.
from rq.job import JobStatus
# Old code: if job.get_status().lower() == 'finished':
# New code:
if job.get_status() == JobStatus.FINISHED:
    pass

error rq: command not found ↓
cause The `rq` command-line utility is not found in the system's PATH. This often occurs if `rq` was installed in a virtual environment that is not activated, or if the installation failed.
fix
Activate the virtual environment where rq is installed (e.g., source venv/bin/activate or .\venv\Scripts\activate) before running rq worker. Alternatively, ensure rq is installed globally or in an environment whose bin directory is on your PATH.

Warnings
breaking job.result property removed in rq 1.12.0. Accessing job.result raises AttributeError. All tutorials and LLM-generated code using job.result are broken. ↓
fix Replace job.result with job.return_value(). Note it returns None until the job completes.
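Because return_value() stays None until the worker finishes the job, callers usually poll the job's status flags rather than testing the result for None. A minimal polling helper, sketched here for any object exposing RQ's is_finished / is_failed properties and return_value() (wait_for_result is a hypothetical name, not an RQ API):

```python
import time

def wait_for_result(job, timeout=10.0, interval=0.1):
    """Poll an RQ-style job until it finishes, then return its result.

    return_value() is None until the job completes, so poll the
    status flags instead of checking the result for None.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if job.is_finished:
            return job.return_value()
        if job.is_failed:
            raise RuntimeError("job failed")
        time.sleep(interval)
    raise TimeoutError("job did not finish within %.1fs" % timeout)
```
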
breaking job.get_status() returns a JobStatus enum (e.g. JobStatus.FINISHED) not a string ('finished'). Comparing with strings like if job.get_status() == 'finished' always evaluates to False. ↓
fix from rq.job import JobStatus; if job.get_status() == JobStatus.FINISHED: ... Or use job.is_finished, job.is_failed, job.is_started properties.
breaking Python 3.8 support dropped. Minimum is Python 3.9. ↓
fix Pin rq<2.0 for Python 3.8 environments.
breaking redis-py 6.0.0 is explicitly blocked — rq will refuse to install with redis==6.0.0 due to critical incompatibilities. Use redis>=4.0.0,!=6.0.0. ↓
fix Pin redis!=6.0.0 in requirements. Use redis>=4.0.0,!=6.0.0 or upgrade to a later redis-py patch once fixed.
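A requirements fragment combining the pins from the warnings above (a sketch; the exact upper bounds depend on your environment):

```
rq==2.7.0             # use rq<2.0 instead on Python 3.8
redis>=4.0.0,!=6.0.0  # redis-py 6.0.0 is blocked by rq
```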
gotcha Enqueued functions must be importable by the worker process. Functions defined in __main__ or interactively (notebooks, scripts run directly) cannot be pickled and will raise PicklingError. ↓
fix Define task functions in separate module files (e.g. tasks.py). The worker imports them by module path.
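The worker locates each task by its dotted module path, which is why functions defined in __main__ cannot be found. A simplified sketch of that lookup (resolve is a hypothetical helper, not RQ's internals), demonstrated with a stdlib function:

```python
from importlib import import_module

def resolve(dotted_path):
    # Split 'package.module.func' into module path and attribute, then
    # import the module and fetch the function -- the same shape of
    # lookup a worker performs for a name like 'myapp.tasks.process_data'.
    module_path, _, attr = dotted_path.rpartition('.')
    return getattr(import_module(module_path), attr)

sqrt = resolve('math.sqrt')
print(sqrt(9))  # 3.0
```

RQ also accepts the dotted-path string directly at enqueue time (e.g. q.enqueue('tasks.add', 4, 6)), which avoids importing the task module in the enqueuing process.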
gotcha Workers must be started in a separate process. Running rq worker in the same process as enqueuers causes deadlocks. Workers run: rq worker [queue_name] ↓
fix Start worker in a separate terminal: rq worker. Or use WorkerPool for multiple workers: from rq import WorkerPool; WorkerPool(queues=[q], connection=redis_conn).start()
gotcha On Windows, the default Worker's fork()-based model is unavailable, and on macOS fork() is unreliable with many system libraries. Use SpawnWorker instead, which uses multiprocessing spawn. ↓
fix from rq import SpawnWorker; SpawnWorker(queues=[q], connection=redis_conn).work(). Or use SimpleWorker for testing (no fork/spawn, runs in-process).
Install compatibility · verified · last tested: 2026-05-12

python  os / libc      status  wheel  install  import  disk
3.9     alpine (musl)          -      -        0.27s   23.5M
3.9     slim (glibc)           -      -        0.23s   24M
3.10    alpine (musl)          -      -        0.32s   24.7M
3.10    slim (glibc)           -      -        0.23s   25M
3.11    alpine (musl)          -      -        0.45s   27.8M
3.11    slim (glibc)           -      -        0.34s   28M
3.12    alpine (musl)          -      -        0.61s   19.4M
3.12    slim (glibc)           -      -        0.61s   20M
3.13    alpine (musl)          -      -        0.59s   19.0M
3.13    slim (glibc)           -      -        0.55s   19M
Imports
- Queue

wrong

# job.result removed in rq 1.12 — AttributeError on modern rq:
print(job.result)

# String comparison with job.get_status() broken — it returns enum now:
if job.get_status() == 'finished':  # always False in rq >= 1.x
    pass

correct

from redis import Redis
from rq import Queue

# Connection required at Queue creation time
redis_conn = Redis(host='localhost', port=6379, db=0)
q = Queue(connection=redis_conn)

# Functions must be importable — define in a module, not __main__
from myapp.tasks import process_data
job = q.enqueue(process_data, arg1, arg2)

# Get result (rq >= 1.12)
result = job.return_value()  # None until job completes

# Check status — returns JobStatus enum, not string
from rq.job import JobStatus
status = job.get_status()
if status == JobStatus.FINISHED:
    print(job.return_value())
Quickstart · stale · last tested: 2026-04-23
# tasks.py — functions must be in an importable module
def add(x, y):
    return x + y

def send_email(to, subject, body):
    # ... email logic
    return True
# enqueue.py — enqueue jobs
from redis import Redis
from rq import Queue
from tasks import add
redis_conn = Redis()
q = Queue(connection=redis_conn)
# Enqueue
job = q.enqueue(add, 4, 6)
print('Job ID:', job.id)
# Enqueue with options
# Enqueue with options — retry takes a Retry object, not an int
from rq import Retry
job2 = q.enqueue(
    add, 10, 20,
    job_timeout=300,     # seconds before job is killed
    result_ttl=500,      # seconds to keep result in Redis
    retry=Retry(max=3),  # retry up to 3 times on failure
)
# Check result (after worker runs)
import time
time.sleep(1)
print(job.return_value()) # 10
print(job.get_status()) # JobStatus.FINISHED
# --- Start worker in separate terminal ---
# rq worker