- ⚠️ Setting `propagate=False` on module loggers blocks logs from reaching the root logger.
- ⚒️ Adding handlers to every module creates duplicated log entries and misconfiguration.
- 📁 Centralizing your logging setup with `dictConfig` makes for structured, easy-to-manage log flow.
- 🧪 Tools like `caplog` in pytest help find and fix missing logs in tests.
- 🧼 Using a logger factory guarantees consistent setup and keeps code from being repeated.
Why Your Python Logger Isn't Sending Logs
You've set up logging in your Python project. You expect logs from every module to go into one file. But when you check your logs, entries are missing from helper modules or sub-packages. Sound familiar? You're not alone. This common problem usually comes down to Python logger settings—and a few small mistakes in how loggers are configured. Let's look at how Python logging, and logger propagation in particular, really works, and how to make sure your logs all end up in the right place.
How Python Logging Works
Python’s built-in logging module is powerful and flexible. It supports apps of any size, from small scripts to large systems. Python’s logging system has four main parts:
- Loggers: Named entry points that application code uses to write log messages. Each logger has a name, usually the module's `__name__`. Loggers form a hierarchy: they inherit behavior from parent loggers based on their dotted names.
- Handlers: Handlers decide where a log event goes. This can be a file via `FileHandler`, the console via `StreamHandler`, or even network destinations via the HTTP and SMTP handlers.
- Formatters: Formatters control exactly how the log message should look. You can add timestamps, module names, log levels, or other data using format strings like `'%(asctime)s - %(name)s - %(levelname)s - %(message)s'`.
- Filters: Filters give you fine-grained control. They let only specific log messages pass based on rules you define.
The Root Logger
The root logger sits at the top of the logging family tree. If a module has no specific logger setup, its messages go up to the root. The root logger is typically configured when you call `logging.basicConfig()`.
Think of the root logger as your main control center. Every logger in your app can report to it—unless you tell it not to.
What Is Logger Propagation?
Logger propagation in Python is how log messages from a child logger go up the family tree to its parents. Unless a logger is explicitly set with `propagate = False`, its output travels up to parent loggers and can reach the root logger and its handlers.
Default Behavior
By default:
```python
logger = logging.getLogger('myapp.module')
logger.propagate  # True
```
This default setting means you don’t need to attach handlers everywhere in your code. You set them once. Then all loggers can send their messages up to the right place.
As Python's official documentation explains, child loggers propagate messages up to the handlers associated with their ancestor loggers. This enables centralized logging, which is especially useful in projects with many source files.
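A minimal sketch of that behavior (the logger name `myapp.module` and the in-memory stream are illustrative; a real app would attach a file or console handler):

```python
import io
import logging

# Attach a single handler to the root logger only.
stream = io.StringIO()
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.StreamHandler(stream))

# A child logger with no handlers of its own...
child = logging.getLogger('myapp.module')
child.info("propagated upward")

# ...still reaches the root handler via propagation.
print(stream.getvalue())  # prints "propagated upward"
```

The child logger never needed its own handler: the record walked up the name hierarchy and was emitted by the root's handler.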
Logger Propagation in a Real Multi-Module Project
Let's look at an example project setup to see logger propagation in action.
```
myproj/
├── main.py
├── utils.py
└── services/
    └── data.py
```
Each of these modules needs to log, but we only want one main log output, ideally to a single file.
main.py
```python
import logging

from services import data
import utils

logging.basicConfig(
    level=logging.INFO,
    filename='app.log',
    format='%(name)s - %(levelname)s - %(message)s'
)

logger = logging.getLogger(__name__)
logger.info("Main starting")

data.load_data()
utils.helper_function()
```
utils.py
```python
import logging

logger = logging.getLogger(__name__)

def helper_function():
    logger.info("Running helper_function")
```
services/data.py
```python
import logging

logger = logging.getLogger(__name__)

def load_data():
    logger.info("Loading data")
```
If done correctly, all logs—no matter which file writes them—should appear in app.log.
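With propagation working, running `main.py` should leave `app.log` looking roughly like this (the names follow each module's `__name__`):

```
__main__ - INFO - Main starting
services.data - INFO - Loading data
utils - INFO - Running helper_function
```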
Mistake #1: Setting Up Handlers Per Module
❌ The Problem
Developers often make the mistake of adding handlers inside each module:
```python
# utils.py
logger = logging.getLogger(__name__)
handler = logging.FileHandler('app.log')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
```
Doing this in every module may seem fine, but it leads to:
- Duplicate log entries: Messages show up many times in the log file. This is because they are handled locally and also through propagation.
- Inconsistent formatting: If one module uses a different formatter than another, the log file becomes hard to read or understand.
- Error-prone repeated work: Setting up log details across many files is extra work and can cause errors.
✅ The Fix
Let all modules use the root logger’s setup. Remove handlers from individual modules:
```python
# utils.py
logger = logging.getLogger(__name__)
# no handlers, no configuration
```

Then in `main.py`, set up everything in one place using `basicConfig` or `dictConfig`.
Mistake #2: Accidentally Setting propagate=False
```python
logger = logging.getLogger(__name__)
logger.propagate = False
```
❌ The Problem
This stops the logger from sending its messages up to the root logger. The module's logs are silently dropped unless you attach a handler directly to it.

This mistake often creeps in when trying to silence noisy third-party libraries or when fixing duplicate logs. But it creates logging "dead zones" where messages simply vanish.
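A minimal sketch of the dead zone (the logger name and in-memory stream are illustrative):

```python
import io
import logging

# Root logger with a single handler, as in a normal centralized setup.
stream = io.StringIO()
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.StreamHandler(stream))

silenced = logging.getLogger('myapp.silenced')
silenced.propagate = False  # cuts this logger off from the root handler

silenced.info("you will never see this")
print(repr(stream.getvalue()))  # prints '' -- the message was dropped
```

The record is created and the logger's level allows it, but with propagation off and no local handler, it has nowhere to go.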
✅ The Fix
Leave propagation enabled unless you have a deliberate reason to turn it off. With a central setup, most of your application modules can rely on the root handler configuration and let propagation do its job.
Mistake #3: Duplicated Handlers = Duplicated Logs
❌ The Symptom
You're seeing each log message two, three, or even more times in your output file.
This usually happens because the same handler is added to many loggers in different modules.
❌ Example
```python
# main.py AND utils.py both have this
file_handler = logging.FileHandler('app.log')
logger.addHandler(file_handler)
```
✅ The Solution
Only set up handlers once—it is best to do this on the root logger:
```python
# main.py
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[logging.FileHandler('app.log')]
)
```
All other modules can use this setup by simply calling:
```python
logger = logging.getLogger(__name__)
```
How Log Levels Affect Logger Propagation
The log level decides if a message gets processed or ignored.
But, both the logger and the handler have their own levels. BOTH must approve the message before it gets written.
Example
```python
# Logger level set to DEBUG
logger.setLevel(logging.DEBUG)

# Handler level set to INFO
handler.setLevel(logging.INFO)
```
Here, the logger records a DEBUG message. But the handler discards it because of the handler's level setting.
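The same mismatch as a self-contained sketch (the logger name is illustrative):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)   # handler discards anything below INFO

logger = logging.getLogger('leveldemo')
logger.setLevel(logging.DEBUG)   # logger itself accepts DEBUG
logger.addHandler(handler)
logger.propagate = False         # keep the demo self-contained

logger.debug("dropped by the handler")
logger.info("written")
print(stream.getvalue())  # only "written" appears
```

Both gates must open: the logger level admits the record, then each handler applies its own level before emitting.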
Best Practice
Use the same levels unless you need to filter some messages. You can even set different handlers for different outputs—for example:
- DEBUG to `debug.log`
- INFO and above to `app.log`
- ERROR to stderr or an alerting system
This gives you more control when you need it.
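That split can be sketched with two handlers at different levels (in-memory streams stand in for the `debug.log` and `app.log` files so the idea is easy to verify):

```python
import io
import logging

debug_stream, app_stream = io.StringIO(), io.StringIO()

debug_handler = logging.StreamHandler(debug_stream)  # stands in for debug.log
debug_handler.setLevel(logging.DEBUG)

app_handler = logging.StreamHandler(app_stream)      # stands in for app.log
app_handler.setLevel(logging.INFO)

logger = logging.getLogger('splitdemo')
logger.setLevel(logging.DEBUG)
logger.addHandler(debug_handler)
logger.addHandler(app_handler)
logger.propagate = False

logger.debug("verbose detail")   # reaches the debug stream only
logger.info("normal operation")  # reaches both streams
```

Swap the `StreamHandler`s for `FileHandler('debug.log')` and `FileHandler('app.log')` to get the file-based split described above.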
Centralized Logging with dictConfig
Instead of spreading setup across many files, Python offers `logging.config.dictConfig()`, which centralizes logging in one structured dictionary.
Example
```python
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        },
    },
    'handlers': {
        'file_handler': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'formatter': 'standard',
            'level': 'INFO',
        },
    },
    'root': {
        'handlers': ['file_handler'],
        'level': 'INFO',
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
```
This method makes it easier to:
- Share setups across environments (for example, loading the dictionary from YAML with `PyYAML`)
- Control logger behavior in detail
- Review and change the whole system from one place
✅ All logs now go to a single file. This is great for main file outputs and connecting with log monitoring systems.
Testing Logger Propagation in Your App
If a log entry is missing, trace it step by step.
🛠️ How to Check
- Check the propagation flag: `logger = logging.getLogger('your.module')`, then `print(logger.name, logger.propagate)`.
- Print the attached handlers: `print(logger.handlers)`.
- Use pytest's `caplog` fixture: in tests, `caplog.text` lets you inspect captured logs and confirm their content.
- Temporarily send to the console: add a `StreamHandler()` to see logs on stdout while debugging.
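The first two checks can be bundled into one small diagnostic helper (the `diagnose` function name is made up for this sketch):

```python
import logging

def diagnose(name):
    """Walk a logger's ancestry, reporting propagation and handlers."""
    logger = logging.getLogger(name)
    report = []
    while logger is not None:
        report.append((logger.name, logger.propagate, list(logger.handlers)))
        if not logger.propagate:
            break  # messages stop here, just like real propagation
        logger = logger.parent
    return report

for name, propagate, handlers in diagnose('myapp.services.data'):
    print(f"{name}: propagate={propagate}, handlers={handlers}")
```

If the walk never reaches a logger with handlers, or stops early at a `propagate=False` logger, you have found your missing-logs culprit.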
A Reusable Logger Factory for Consistent Setup Across Modules
Don't do the same work in every Python file. Make a shared helper:
```python
# log_helper.py
import logging

def get_logger(name):
    logger = logging.getLogger(name)
    logger.propagate = True  # explicit, though this is already the default
    return logger
```
Usage:
```python
from log_helper import get_logger

logger = get_logger(__name__)
logger.info("Hello from helper")
```
Now your modules use the same logic and setup plan. This leads to consistent logs and fewer surprises.
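Because `logging.getLogger` caches loggers by name, the factory always hands back the same object for the same module, no matter how many times it runs (the factory is repeated here so the snippet is self-contained):

```python
import logging

def get_logger(name):
    logger = logging.getLogger(name)
    logger.propagate = True
    return logger

a = get_logger('myapp.utils')
b = get_logger('myapp.utils')
print(a is b)  # prints True -- one logger per name, always
```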
When Python's Built-In Logging Isn't Enough
The built-in logging module is very good. But modern apps with specific logging needs might do better with libraries like:
- Loguru: Simpler code, automatic handlers, colored output.
- Structlog: Good for structured logging (JSON output) and adding log details.
- Sentry/Datadog integrations: For real-time error monitoring with full details.
Choose these if you:
- Need to attach request or user IDs
- Need machine-readable logs
- Send logs straight to remote log servers
For most apps, though, using Python's logging module correctly and making sure logger propagation works does the job.
Devsolus Quick-Fix Toolbox: Centralized Logging in <50 Lines
Need centralized logging to one file with very little setup? Use this.
logging_setup.py
```python
import logging

def setup_logging(logfile='app.log', level=logging.INFO):
    logging.basicConfig(
        level=level,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler(logfile),
            logging.StreamHandler()
        ]
    )
```
Then in main.py:
```python
from logging_setup import setup_logging

setup_logging()
```
Now you’re logging to both the console and a file—from every part of your app, without repeated handlers.
Recap: Making Sure Python Logger Propagation Works Well
To keep logs from going missing, showing up twice, or being set up wrong:
- ☑️ Allow logger propagation unless you have a clear reason to stop it.
- 🧭 Centralize handler setup at the root logger or with `dictConfig`.
- 🔁 Don't set up handlers in every module; doing it once is enough.
- 🧪 Always test log levels, propagation, and output using tools like `caplog`.
- 🔌 Use a helper or factory pattern to get loggers the same way across your code.
By understanding Python logging, using a sound parent-child logger structure, and letting propagation work for you, you will have logging that is efficient, easy to maintain, and grows with your app.
Citations
Python Software Foundation. (n.d.). Logging HOWTO. https://docs.python.org/3/howto/logging.html