32. Logging and Debugging
Learn how to effectively log and debug your Python applications. Master the logging module, use pdb for debugging, and explore best practices for maintaining healthy code.
What we will learn in this post?
- Introduction to Logging
- Logging Basics and Levels
- Loggers, Handlers, and Formatters
- Advanced Logging Configuration
- Debugging with pdb
- Debugging Tools and IDEs
- Logging Best Practices
The Importance of Logging in Applications
Logging is a crucial part of software development. It helps developers understand what's happening in their applications. When things go wrong, logs provide valuable information to troubleshoot issues. In production environments, effective logging helps you monitor system health and performance in real time.
Print Statements vs. Logging
While print statements can show output during development, they have limitations:
- Print statements:
- Only show output on the console.
- Not suitable for production.
- Hard to manage in large applications.
- Logging:
- Can save messages to files or external systems.
- Offers different levels of severity (e.g., DEBUG, INFO, WARNING).
- Easier to filter and manage.
Logging solutions scale with your application, unlike print statements which become unmanageable.
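To make the contrast concrete, here is a small illustrative snippet (the order ID and message are invented for the example): the print call produces bare text on the console, while the logging call carries a severity level and a timestamp and can later be redirected to a file without changing the call site.

import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)

order_id = 42

# Print: plain text, console only, no severity or timestamp
print(f"Order {order_id} failed to process")

# Logging: timestamped, levelled, and easy to route elsewhere later
logging.error("Order %s failed to process", order_id)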
Overview of Python's Logging Module
Python's built-in logging module provides a flexible framework for emitting log messages from Python programs. Here's a quick overview:
- Loggers: Create log messages.
- Handlers: Send log messages to their final destination (console, files, etc.).
- Formatters: Define the layout of log messages.
Since it is a standard library module, you can implement robust logging without adding external dependencies.
import logging
logging.basicConfig(level=logging.INFO)
logging.info("This is an info message.")
Real-World Example: API Request Logger
import logging
import time
from functools import wraps

# Configure logging for API requests
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('api_requests.log'),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger('api_logger')

def log_api_request(func):
    """Decorator to log API request details"""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start_time = time.time()
        logger.info(f"API Request: {func.__name__} started")
        try:
            result = func(*args, **kwargs)
            duration = time.time() - start_time
            logger.info(f"API Request: {func.__name__} completed in {duration:.2f}s")
            return result
        except Exception as e:
            logger.error(f"API Request: {func.__name__} failed - {str(e)}")
            raise
    return wrapper

@log_api_request
def fetch_user_data(user_id):
    """Simulate fetching user data from API"""
    logger.debug(f"Fetching data for user_id: {user_id}")
    # Simulate API call
    return {"id": user_id, "name": "John Doe", "email": "john@example.com"}

# Usage in production
user = fetch_user_data(123)
Logging Hierarchy
The logging hierarchy consists of:
- Root Logger: The top-level logger.
- Child Loggers: Inherit settings from the root logger.
graph TD;
A[Root Logger]:::style1 --> B[Child Logger 1]:::style2;
A --> C[Child Logger 2]:::style3;
classDef style1 fill:#ff4f81,stroke:#c43e3e,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style2 fill:#6b5bff,stroke:#4a3f6b,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style3 fill:#ffd700,stroke:#d99120,color:#222,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
linkStyle default stroke:#e67e22,stroke-width:3px;
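Here is a minimal sketch of that hierarchy in code (the 'myapp' names are just examples): child loggers are addressed with dotted names and, unless propagation is disabled, pass their records up to the root logger's handlers.

import logging

logging.basicConfig(level=logging.INFO)  # configures the root logger

app_logger = logging.getLogger("myapp")          # child of the root logger
db_logger = logging.getLogger("myapp.database")  # child of "myapp"

# Neither child has its own handler, yet the message still appears,
# because the record propagates up to the root logger's handler.
db_logger.info("Connection pool initialised")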
Understanding Logging Levels
Logging is a way to track what's happening in your code. It helps you understand and debug your applications. There are five main logging levels:
Logging Levels Explained
- DEBUG: Use this for detailed information, mainly for developers. It helps in diagnosing problems.
- INFO: This level is for general information about the application's progress. It's useful for tracking the flow of the application.
- WARNING: Indicates something unexpected happened, but the application is still running. Use it to alert about potential issues.
- ERROR: This level is for serious problems that prevent a function from working. It's a sign that something went wrong.
- CRITICAL: This is the highest level, indicating a severe error that may cause the program to stop. Immediate attention is needed.
Choosing the appropriate level ensures you capture critical errors without drowning in unnecessary data.
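For reference, here is one call per level; with the root logger's default WARNING threshold, only the last three of these would actually be emitted.

import logging

logging.debug("Detailed diagnostic output")
logging.info("Normal application progress")
logging.warning("Something unexpected, but the app keeps running")
logging.error("A function failed to do its job")
logging.critical("Severe failure; the program may not continue")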
Using basicConfig()
You can set up logging easily with basicConfig(). Here's a simple example:
import logging
logging.basicConfig(level=logging.INFO)
logging.debug("This is a debug message") # Won't show
logging.info("This is an info message") # Will show
Filtering Messages
The logging level filters messages. For example, if you set the level to WARNING, only warnings, errors, and critical messages will appear.
logging.basicConfig(level=logging.WARNING)
logging.info("This won't show") # Ignored
logging.warning("This will show") # Displayed
Visual Summary
graph TD;
A[Logging Levels]:::style1 --> B[DEBUG]:::style2
A --> C[INFO]:::style3
A --> D[WARNING]:::style4
A --> E[ERROR]:::style5
A --> F[CRITICAL]:::style2
classDef style1 fill:#ff4f81,stroke:#c43e3e,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style2 fill:#6b5bff,stroke:#4a3f6b,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style3 fill:#ffd700,stroke:#d99120,color:#222,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style4 fill:#00bfae,stroke:#005f99,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style5 fill:#ff9800,stroke:#f57c00,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
linkStyle default stroke:#e67e22,stroke-width:3px;
Understanding Logging Architecture
Logging is essential for tracking events in your applications. Let's break down the key components: Logger, Handler, and Formatter. This modular design allows you to route logs to files, emails, or external services simultaneously.
Logger Objects
A Logger is like a diary for your application. It records messages at different levels (DEBUG, INFO, WARNING, ERROR, CRITICAL).
Example:
import logging
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
Handler Objects
Handlers send the log messages to their final destination. Common types include:
- StreamHandler: Sends logs to the console.
- FileHandler: Saves logs to a file.
Example:
# StreamHandler
stream_handler = logging.StreamHandler()
logger.addHandler(stream_handler)
# FileHandler
file_handler = logging.FileHandler('app.log')
logger.addHandler(file_handler)
Formatter Objects
Formatters define how the log messages look. You can customize the format to include timestamps, log levels, and messages.
Example:
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
stream_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
Putting It All Together:
import logging
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
stream_handler = logging.StreamHandler()
file_handler = logging.FileHandler('app.log')
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
stream_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
logger.addHandler(file_handler)
logger.info('This is an info message!')
Real-World Example: Multi-Handler Logging System
import logging
import logging.handlers
import sys

class ProductionLogger:
    """Production-ready logging configuration"""

    def __init__(self, name, log_file='app.log'):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(logging.DEBUG)

        # Formatter for structured logs
        formatter = logging.Formatter(
            '%(asctime)s | %(name)s | %(levelname)s | %(filename)s:%(lineno)d | %(message)s',
            datefmt='%Y-%m-%d %H:%M:%S'
        )

        # File handler with rotation
        file_handler = logging.handlers.RotatingFileHandler(
            log_file,
            maxBytes=10*1024*1024,  # 10 MB
            backupCount=5
        )
        file_handler.setLevel(logging.DEBUG)
        file_handler.setFormatter(formatter)

        # Console handler for warnings and above
        console_handler = logging.StreamHandler(sys.stdout)
        console_handler.setLevel(logging.WARNING)
        console_handler.setFormatter(formatter)

        # Add handlers
        self.logger.addHandler(file_handler)
        self.logger.addHandler(console_handler)

    def get_logger(self):
        return self.logger

# Usage in a production application
prod_logger = ProductionLogger('ecommerce_app')
logger = prod_logger.get_logger()

logger.info("Application started")
logger.warning("Database connection pool at 80% capacity")
logger.error("Payment gateway timeout")
Flowchart of Logging Architecture:
graph TD;
A[Logger]:::style1 --> B[Handler]:::style2
B --> C[StreamHandler]:::style3
B --> D[FileHandler]:::style4
B --> E[Formatter]:::style5
classDef style1 fill:#ff4f81,stroke:#c43e3e,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style2 fill:#6b5bff,stroke:#4a3f6b,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style3 fill:#ffd700,stroke:#d99120,color:#222,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style4 fill:#00bfae,stroke:#005f99,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style5 fill:#ff9800,stroke:#f57c00,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
linkStyle default stroke:#e67e22,stroke-width:3px;
Now you have a friendly overview of logging architecture! Happy coding!
Logging Configuration in Python
Logging is essential for tracking events in your application. Let's explore how to set it up using dictConfig and fileConfig, along with rotating file handlers and structured logging. Centralized configuration simplifies management across large projects and multiple environments.
Basic Logging Setup
You can configure logging using a dictionary. Here's a simple example:
import logging
import logging.config

logging_config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        },
    },
    'handlers': {
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'app.log',
            'maxBytes': 2000,
            'backupCount': 5,
            'formatter': 'simple',
        },
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'my_logger': {
            'handlers': ['file', 'console'],
            'level': 'DEBUG',
        },
    },
}

logging.config.dictConfig(logging_config)
logger = logging.getLogger('my_logger')
logger.debug('This is a debug message!')
Key Features
- Rotating File Handlers: Automatically manage log file sizes.
- Multiple Destinations: Log to both a file and the console.
- Structured Logging: Use formats to make logs easier to read.
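The same configuration can also live in an INI-style file read by fileConfig(), the older counterpart to dictConfig(). The sketch below writes the file from Python just to keep the example self-contained; the logger and handler names are illustrative.

import logging
import logging.config

INI_CONFIG = """
[loggers]
keys=root,my_logger

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=WARNING
handlers=console

[logger_my_logger]
level=DEBUG
handlers=console
qualname=my_logger
propagate=0

[handler_console]
class=StreamHandler
level=DEBUG
formatter=simple
args=(sys.stdout,)

[formatter_simple]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
"""

# Write the config file, then load it.
with open("logging.conf", "w") as f:
    f.write(INI_CONFIG)

logging.config.fileConfig("logging.conf")
logger = logging.getLogger("my_logger")
logger.debug("Configured from an INI file")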
Visual Representation
flowchart TD
A[Start Logging]:::style1 --> B{Choose Handler}:::style2
B -->|File| C[RotatingFileHandler]:::style3
B -->|Console| D[StreamHandler]:::style4
C --> E[Log to File]:::style5
D --> F[Log to Console]:::style1
classDef style1 fill:#ff4f81,stroke:#c43e3e,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style2 fill:#6b5bff,stroke:#4a3f6b,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style3 fill:#ffd700,stroke:#d99120,color:#222,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style4 fill:#00bfae,stroke:#005f99,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style5 fill:#ff9800,stroke:#f57c00,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
linkStyle default stroke:#e67e22,stroke-width:3px;
With this setup, you can easily track your application's behavior and troubleshoot issues effectively! Happy logging!
Introduction to Python's Built-in Debugger: pdb
Debugging is an essential skill for any programmer. Python offers a powerful built-in debugger called pdb that helps you find and fix bugs in your code. Let's explore some common commands and how to use them effectively! Mastering pdb saves you hours of trial-and-error debugging and lets you inspect state at runtime.
Common pdb Commands
Here are some key commands you'll use in pdb:
- n: Next line - move to the next line of code.
- s: Step into - go into a function call.
- c: Continue - resume execution until the next breakpoint.
- p: Print - display the value of a variable.
- l: List - show the current location in the code.
- b: Breakpoint - set a breakpoint at a specific line.
- q: Quit - exit the debugger.
Setting Breakpoints and Inspecting Variables
To set a breakpoint, use the command b <line_number>. This allows you to pause execution and inspect variables. For example:
def add(a, b):
    return a + b

result = add(2, 3)
print(result)
In pdb, you can set a breakpoint at the line return a + b and inspect a and b before the function returns.
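To actually reach the debugger for that snippet, you can run the file under pdb (python -m pdb add.py, then type b 2 and c), or, from Python 3.7 onward, drop a breakpoint() call where you want to stop, as in this small variant:

def add(a, b):
    breakpoint()  # execution pauses here and opens pdb (Python 3.7+)
    return a + b

result = add(2, 3)
print(result)

At the (Pdb) prompt you can then type p a, b to see the arguments before the function returns.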
Stepping Through Code
You can step through your code line by line using n and s. This helps you understand the flow and catch errors.
(Pdb) b 2
(Pdb) c
(Pdb) p a, b
Real-World Example: Debugging a Web Scraper
import pdb

def scrape_product_prices(url):
    """Scrape product prices from e-commerce site"""
    prices = []
    # Set breakpoint to inspect data
    pdb.set_trace()  # Debugger will pause here
    # Simulate data extraction
    raw_data = fetch_page(url)
    for item in raw_data:
        price = extract_price(item)
        prices.append(price)
    return prices

def extract_price(item):
    """Extract price from HTML element"""
    # Complex parsing logic
    price_text = item.find('span', class_='price').text
    # Remove currency symbols and convert
    return float(price_text.replace('$', '').replace(',', ''))

def fetch_page(url):
    """Simulate fetching page data"""
    return [
        type('obj', (object,), {'find': lambda *a, **k: type('span', (object,), {'text': '$1,234.56'})()})(),
        type('obj', (object,), {'find': lambda *a, **k: type('span', (object,), {'text': '$789.00'})()})()
    ]

# When running this, pdb will pause at set_trace()
# You can use:
# - 'n' to step to next line
# - 'p raw_data' to inspect the fetched data
# - 'c' to continue execution
prices = scrape_product_prices('https://example.com/products')
Debugging Tools in Popular IDEs
Visual Debugging in VS Code and PyCharm
Debugging helps you find and fix errors in your code. Both VS Code and PyCharm offer powerful debugging tools. Visual debuggers provide deeper insights into complex data structures without writing extra code.
Key Features
- Visual Debugging: See your code execution in real-time.
- Watch Expressions: Monitor variables as you step through your code.
- Conditional Breakpoints: Pause execution only when certain conditions are met.
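Conditional breakpoints are not an IDE-only feature: pdb's break command also accepts a condition, so execution pauses only when the expression is true. A minimal sketch, with an invented function and illustrative line numbers:

def apply_discount(price, discount):
    total = price - discount   # suppose this is line 2 of the file
    return total

# Inside pdb (for example after running: python -m pdb pricing.py):
#   (Pdb) b 2, discount > price    # pause only when the discount exceeds the price
#   (Pdb) c                        # otherwise the program runs straight through

An IDE's conditional breakpoint dialog performs the same check before stopping.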
Example Code
def add(a, b):
    return a + b

result = add(5, 3)  # Set a breakpoint here
print(result)       # Watch this variable
Debugging Techniques
- Step Over: Execute the next line without going into functions.
- Step Into: Dive into the function to see its inner workings.
- Step Out: Exit the current function and return to the caller.
Flowchart of Debugging Process
graph TD;
A[Start Debugging]:::style1 --> B{Breakpoint?}:::style2;
B -- Yes --> C[Inspect Variables]:::style3;
B -- No --> D[Step Over/Into]:::style4;
C --> E[Check Conditions]:::style5;
D --> E;
E --> F[Fix Issues]:::style1;
F --> G[Continue Execution]:::style2;
G --> H[End Debugging]:::style3;
classDef style1 fill:#ff4f81,stroke:#c43e3e,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style2 fill:#6b5bff,stroke:#4a3f6b,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style3 fill:#ffd700,stroke:#d99120,color:#222,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style4 fill:#00bfae,stroke:#005f99,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
classDef style5 fill:#ff9800,stroke:#f57c00,color:#fff,font-size:16px,stroke-width:3px,rx:14,shadow:6px;
linkStyle default stroke:#e67e22,stroke-width:3px;
Logging Best Practices
Logging is essential for understanding how your application behaves. Here are some best practices to keep in mind! Consistently applied practices make your logs actionable, reliable, and easier to search in production.
1. Appropriate Log Levels
Use different log levels to categorize messages:
- DEBUG: For detailed information during development.
- INFO: General information about application progress.
- WARNING: Indications of potential issues.
- ERROR: Errors that need attention.
- CRITICAL: Serious errors that may halt the application.
Example:
logger.info("User logged in successfully.")
logger.error("Database connection failed.")
2. Structured Logging with JSON
Formatting logs as JSON makes them easier to parse and analyze with log tooling. For example:
{
  "timestamp": "2023-10-01T12:00:00Z",
  "level": "ERROR",
  "message": "Failed to fetch user data",
  "userId": 12345
}
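One way to produce records like this from Python's logging module is a custom Formatter that renders each record as a JSON line. This is a minimal sketch (field names follow the example above; dedicated packages such as python-json-logger offer a more complete implementation):

import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Extra fields passed via logger.error(..., extra={"userId": 12345})
        if hasattr(record, "userId"):
            payload["userId"] = record.userId
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("json_demo")
logger.addHandler(handler)

logger.error("Failed to fetch user data", extra={"userId": 12345})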
3. Log Rotation
Rotate logs to prevent them from consuming too much disk space. Use tools like logrotate to manage this automatically.
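If you prefer to handle rotation inside the application rather than with an external tool, the standard library's handlers can do it. This sketch rotates at midnight and keeps a week of history (the file name and retention count are just examples):

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("rotation_demo")
logger.setLevel(logging.INFO)

# Rotate at midnight and keep the last 7 daily files (app.log.YYYY-MM-DD).
handler = TimedRotatingFileHandler("app.log", when="midnight", backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(handler)

logger.info("This entry goes into today's log file.")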
4. Handling Sensitive Data
Never log sensitive information like passwords or credit card numbers. Always sanitize logs to protect user privacy.
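A common way to enforce this is a logging.Filter that masks sensitive values before any handler sees them. The regular expression below is only a rough illustration; a real sanitizer needs patterns tuned to your own payloads:

import logging
import re

class RedactSensitiveData(logging.Filter):
    """Mask password/token values before the record is emitted."""
    PATTERN = re.compile(r"(password|token)=\S+", re.IGNORECASE)

    def filter(self, record):
        record.msg = self.PATTERN.sub(r"\1=***", str(record.msg))
        return True  # keep the record, just with the sensitive part masked

logger = logging.getLogger("secure_demo")
logger.addHandler(logging.StreamHandler())
logger.addFilter(RedactSensitiveData())

logger.warning("Login failed for user=alice password=hunter2")
# Output: Login failed for user=alice password=***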
5. Logging in Production
In production, ensure logs are written to a centralized system for easier monitoring. Use tools like ELK Stack or Splunk.
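As a hedged sketch of what "centralized" can mean at the code level, the standard library's SysLogHandler can ship records to a remote collector; the hostname and port below are placeholders for whatever aggregation endpoint your infrastructure provides (an ELK or Splunk ingest point, for instance):

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("central_demo")
logger.setLevel(logging.INFO)

# Send records over UDP to a central syslog collector (placeholder address).
syslog_handler = SysLogHandler(address=("logs.internal.example.com", 514))
syslog_handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
logger.addHandler(syslog_handler)

logger.info("Order 42 processed")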
6. Integrating with Monitoring Tools
Integrate your logs with monitoring tools like Prometheus or Grafana to visualize and alert on log data.
By following these practices, you can ensure your logging is effective and helpful! Happy logging!
Test Your Knowledge
Which logging level should you use for detailed diagnostic information during development?
DEBUG is the lowest logging level and provides detailed diagnostic information, perfect for development and troubleshooting.
What is the primary advantage of using Python's logging module over print statements?
The logging module offers flexibility to save messages to different destinations (files, console, remote servers) and categorize them by severity levels (DEBUG, INFO, WARNING, ERROR, CRITICAL), making it far superior to print statements for production use.
In the logging architecture, what component is responsible for sending log messages to their final destination?
Handlers are responsible for dispatching log messages to their specified destinations, such as files (FileHandler), console (StreamHandler), or external systems.
Which pdb command allows you to step INTO a function call to debug its internals?
The 's' (step) command in pdb steps into function calls, allowing you to debug the internal workings of functions. 'n' (next) would skip over the function call.
What is a critical security practice when implementing logging in production applications?
Never log sensitive information like passwords, API keys, credit card numbers, or personal identifiable information (PII). Always sanitize logs to protect user privacy and comply with security regulations.
Hands-On Assignment: Build a Production-Ready Logging System
Your Mission
Create a comprehensive logging system for a web application that handles user authentication, API requests, and error tracking. Build a production-ready solution with multiple handlers, log rotation, structured logging, and proper error handling that could be deployed to a real-world application.
Requirements
- Create a LoggerManager class that configures logging with:
  - RotatingFileHandler for general logs (max 10 MB, 5 backups)
  - FileHandler for error logs (errors only)
  - StreamHandler for console output (warnings and above)
  - Custom Formatter with timestamp, level, filename, and line number
- Implement logging decorators:
  - @log_execution_time - Logs function execution duration
  - @log_exceptions - Catches and logs exceptions with full traceback
  - @log_api_calls - Logs API endpoint access with parameters (sanitized)
- Create structured JSON logging for important events (authentication, payments)
- Implement a filter to sanitize sensitive data (passwords, tokens, credit cards)
- Write pytest test cases to verify logging behavior
Implementation Hints
- Use logging.config.dictConfig() for centralized configuration
- Create a custom logging.Filter subclass for sensitive data sanitization
- Use functools.wraps in decorators to preserve function metadata
- For JSON logging, use json.dumps() in a formatter or custom handler
- Test with the caplog fixture in pytest to capture and verify log messages
- Use logging.getLogger(__name__) for module-specific loggers
Example Input/Output
# Example: Using the logging system
from logger_manager import LoggerManager, log_execution_time, log_exceptions

# Initialize logging
manager = LoggerManager('myapp', log_dir='./logs')
logger = manager.get_logger()

@log_execution_time
@log_exceptions
def process_user_login(username, password):
    """Simulate user authentication"""
    logger.info(f"Login attempt for user: {username}")
    # Authentication logic here
    if authenticate(username, password):
        logger.info(f"User {username} logged in successfully")
        return {"status": "success", "user": username}
    else:
        logger.warning(f"Failed login attempt for user: {username}")
        return {"status": "failed"}

# Output in logs/app.log:
# 2026-01-11 14:23:45 | myapp | INFO | auth.py:15 | Login attempt for user: john_doe
# 2026-01-11 14:23:45 | myapp | INFO | auth.py:19 | User john_doe logged in successfully
# 2026-01-11 14:23:45 | myapp | INFO | decorators.py:25 | process_user_login executed in 0.125s

# Output in logs/errors.log (if error occurred):
# 2026-01-11 14:25:10 | myapp | ERROR | auth.py:22 | Database connection failed
# Traceback (most recent call last):
#   File "auth.py", line 22, in process_user_login
#     result = db.query(...)
# ConnectionError: Unable to connect to database
Bonus Challenges
- Level 2: Add SMTPHandler to send email alerts for CRITICAL errors
- Level 3: Implement HTTPHandler to send logs to an external monitoring service (e.g., Sentry, Datadog)
- Level 4: Create a ColoredFormatter for terminal output with ANSI color codes
- Level 5: Add a context manager for temporary log level changes: with log_level(logging.DEBUG):
- Level 6: Build a log aggregation dashboard using Flask to display real-time logs from multiple files
Learning Goals
- Master Python's logging module architecture (loggers, handlers, formatters)
- Implement production-ready logging configurations with rotation
- Create reusable logging decorators for cross-cutting concerns
- Understand structured logging and JSON formats for analysis
- Apply security best practices for sensitive data sanitization
- Test logging behavior with pytest and fixtures
Pro Tip: This logging pattern is used in production frameworks like Django, Flask, and FastAPI! Major companies use centralized logging with tools like the ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, or Datadog for real-time monitoring and alerting across distributed systems.
Share Your Solution!
Completed the project? Post your code in the comments below! Show us your Python logging mastery!
Conclusion: Master Logging and Debugging for Reliable Python Applications
Logging and debugging are essential skills that separate professional developers from beginners, transforming code from functional to production-ready with proper observability and maintainability. By mastering Python's logging module with handlers and formatters, using pdb for interactive debugging, and following best practices for security and performance, you'll build robust applications that are easier to monitor, troubleshoot, and scale in real-world production environments.