Loguru Python Logging

0.7.3 · active · verified Wed Mar 25

Simple Python logging library. Current version: 0.7.3 (Mar 2026).

- One global logger; no instantiation needed, just import it.
- The default sink is stderr (not stdout).
- diagnose=True is the DEFAULT: tracebacks show variable values, which can leak sensitive data in production.
- Call logger.remove() before reconfiguring to drop the default stderr handler.
- Library authors must never call logger.add(); use logger.disable() so the application controls output.
- enqueue=True requires logger.complete() on shutdown to flush queued messages.

Loguru — reconfigure, file rotation, exception catching, structured context.

# pip install loguru
import sys
from loguru import logger

# Reconfigure: remove default stderr, add stdout + file
logger.remove()
logger.add(sys.stdout, level='INFO', diagnose=False)
logger.add('app.log', level='DEBUG', rotation='50 MB', diagnose=False)

# Basic logging
logger.debug('Debug message')
logger.info('Server started on port 8000')
logger.warning('Low disk space')
logger.error('Connection failed')
logger.critical('Database unreachable')

# Exception logging with full traceback
try:
    result = 1 / 0
except ZeroDivisionError:
    logger.exception('Calculation failed')  # logs traceback automatically

# Catch decorator
@logger.catch
def risky_function(x):
    return 100 / x

risky_function(0)  # caught and logged automatically

# Structured context with bind()
request_logger = logger.bind(request_id='req-123', user_id='usr-456')
request_logger.info('Processing payment')

# JSON output for log aggregators
logger.add('app.json', serialize=True, diagnose=False)

# Async / multiprocess safe
logger.add('async.log', enqueue=True, diagnose=False)
# On shutdown: flush messages queued by enqueue=True sinks
logger.complete()
# In async code, `await logger.complete()` also waits on async sinks
