Python3.11
Gunicorn + Flask + logging
Debian 12
I'm building a small Python web app with Gunicorn + Flask, using the standard library's logging module for logs.
The business logic is extremely simple, but the request volume is high, so a lot of logs get written. I saw that the standard library ships a rotation handler, logging.handlers.TimedRotatingFileHandler, and used it as-is.
After a while, though, I noticed logs were going missing:
Say Gunicorn starts 3 worker processes with PIDs 1001, 1002 and 1003. Right after startup, all three workers write to app.log correctly. But as soon as a rotation happens, only one process (say 1001) keeps writing to the new app.log; logs from 1002 and 1003 never make it in again.
My guess is that this is a multi-process logging/rotation problem: during rotation only one process switches over to the new app.log, the others lose track of the file, and their logs are dropped? (That's a rough guess; I don't really understand the mechanics.)
The official documentation does mention this:
Its advice is to use a SocketHandler or a QueueHandler, i.e. to have a single dedicated process handle the log files.
What's a good solution for production?
And more concretely, what is the underlying mechanism behind the rotation log loss described above?
The logging config file logging.yaml:
```yaml
version: 1
formatters:
  brief:
    format: '%(asctime)s - %(levelname)s - %(name)s - %(message)s'
  detail:
    format: '%(asctime)s - %(levelname)s - %(process)d - %(processName)s - %(name)s - %(filename)s - %(funcName)s - %(message)s'
handlers:
  console_handler:
    class: logging.StreamHandler
    level: DEBUG
    formatter: brief
    stream: ext://sys.stdout
  info_handler:
    class: logging.handlers.TimedRotatingFileHandler
    level: INFO
    formatter: detail
    filename: logs/app.log
    when: midnight
    backupCount: 2
    encoding: utf-8
  error_handler:
    class: logging.handlers.TimedRotatingFileHandler
    level: ERROR
    formatter: detail
    filename: logs/error.log
    when: midnight
    backupCount: 2
    encoding: utf-8
loggers:
  study-flask:
    level: DEBUG
    handlers: [console_handler, info_handler, error_handler]
    propagate: False
root:
  level: DEBUG
  handlers: [console_handler]
```
The logging setup code in app.py:
```python
import logging.config
from pathlib import Path

import yaml
from flask import Flask


def log_config(log_config_file):
    dict_config = yaml.load(
        open(log_config_file, encoding='utf-8'),
        Loader=yaml.FullLoader
    )
    Path.mkdir(Path.cwd().joinpath("logs"), parents=True, exist_ok=True)
    logging.config.dictConfig(dict_config)


def create_app(config_mode):
    app = Flask(__name__)
    log_config('./logging.yaml')
    # ... rest omitted
```
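For context, the SocketHandler / QueueHandler approach the docs point at can be sketched roughly like this. This is a minimal illustration, not the app's actual code; the function names `worker_setup` and `start_listener` are made up for the sketch:

```python
import logging
import logging.handlers
import multiprocessing

# Sketch of the pattern: every worker sends records to a shared queue via a
# QueueHandler; a single QueueListener (in one process) owns the real file
# handler, so only one writer ever performs rotation.

def worker_setup(queue):
    # Called in each Gunicorn worker: route all records into the queue.
    root = logging.getLogger()
    root.handlers = [logging.handlers.QueueHandler(queue)]
    root.setLevel(logging.INFO)

def start_listener(queue, target_handler):
    # Called exactly once: drain the queue into the real handler
    # (e.g. a TimedRotatingFileHandler owned by this one process).
    listener = logging.handlers.QueueListener(queue, target_handler)
    listener.start()
    return listener
```

The fiddly part in a real Gunicorn deployment is that the queue must be created before the workers fork (e.g. from a server hook or a preloaded app) so all processes share it.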
1
dajj 11 hours 47 minutes ago
This definitely won't work in multi-process mode. Consider something else, e.g. Redis pub/sub, and hand logging off to a separate service.
2
daxin945 11 hours 44 minutes ago
I've always used Loguru for Python development; I recommend you give it a try.
3
Koril OP I re-tested this myself. Suppose 1001 rotates app.log:
1. app.log gets renamed to app.log.bak, but I noticed that app.log.bak's inode value does not change.
2. Process 1001 creates a new file called app.log (with a new inode) and writes its own logs into that new file, alone.
3. Processes 1002 and 1003 are fooled by 1001 and keep writing to app.log.bak.
Roughly, apparently, that's what happens...
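The rename-keeps-the-inode behaviour described in the steps above can be reproduced with a small standalone script (my own illustration of POSIX rename semantics, not code from the thread):

```python
import os
import tempfile

# Why renaming during rotation strands the other workers: rename() keeps the
# inode, so a process holding the old file descriptor keeps writing to the
# renamed file, not to the freshly created app.log.
d = tempfile.mkdtemp()
log = os.path.join(d, "app.log")
bak = os.path.join(d, "app.log.bak")

f = open(log, "a")             # worker 1002's long-lived handle
ino_before = os.stat(log).st_ino

os.rename(log, bak)            # worker 1001 rotates the file...
open(log, "a").close()         # ...and creates a brand-new app.log

f.write("still here\n")        # 1002 keeps writing through its old fd
f.close()

assert os.stat(bak).st_ino == ino_before   # the inode followed the rename
assert os.stat(log).st_ino != ino_before   # new app.log is a new inode
assert open(bak).read() == "still here\n"  # 1002's line landed in the .bak
```

So the "lost" logs are not dropped by the kernel; they silently accumulate in the rotated file until the worker reopens its handler.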
4
gcdsss 11 hours 40 minutes ago
I've hit this pit before. My fix: write a handler that tolerates multiple concurrent processes.

```python
import fcntl
import os
import time
from logging.handlers import TimedRotatingFileHandler


class MultiCompatibleTimedRotatingFileHandler(TimedRotatingFileHandler):
    def doRollover(self):
        if self.stream:
            self.stream.close()
            self.stream = None
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        # multi-process-safe LOG_ROTATE: take an exclusive lock and re-check,
        # so only the first process actually renames the file
        if not os.path.exists(dfn):
            f = open(self.baseFilename, "a")
            fcntl.lockf(f.fileno(), fcntl.LOCK_EX)
            if not os.path.exists(dfn):
                os.rename(self.baseFilename, dfn)
            # release the lock and the old log handle
            f.close()
        if self.backupCount > 0:
            for s in self.getFilesToDelete():
                os.remove(s)
        if not self.delay:
            self.stream = self._open()
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == "MIDNIGHT" or self.when.startswith("W")) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:
                    # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:
                    # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt
```
5
julyclyde 11 hours 35 minutes ago
Just write to a stream. Persisting logs to disk is a separate concern.
6
adoal 11 hours 32 minutes ago
Unless your log volume is terrifyingly large, I'd suggest sending logs to syslog and letting logrotate manage the splitting. Far less hassle than having the application or framework manage rotation itself.
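As a sketch of that setup, a logrotate rule using `copytruncate` (so the writing processes never have to reopen their file descriptors) might look like this; the path is illustrative:

```
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

Note `copytruncate` copies then truncates in place, so a small window of log lines can be lost at rotation; for strict delivery the syslog route avoids that.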
8
yumenlong 10 hours 11 minutes ago

```python
import os
import logging
from logging import getLogger, DEBUG
from concurrent_log_handler import ConcurrentRotatingFileHandler

formatter_log = logging.Formatter(
    '%(asctime)s %(filename)s[%(lineno)d] %(funcName)s %(levelname)s %(message)s')
logger = getLogger()
logfile = os.path.abspath("log/resource.log")
rotateHandler = ConcurrentRotatingFileHandler(logfile, "a", 3000 * 1024, 50)
rotateHandler.setFormatter(formatter_log)
logger.addHandler(rotateHandler)
logger.setLevel(DEBUG)
```

Mine is also a Gunicorn multi-process setup; with ConcurrentRotatingFileHandler multi-process logging works correctly.
9
yumenlong 10 hours 9 minutes ago
Using ConcurrentRotatingFileHandler solves the multi-process logging problem. ConcurrentRotatingFileHandler is part of the ConcurrentLogHandler package and is designed specifically for safe logging and rotation in multi-process environments.
10
yingxiangyu 9 hours 58 minutes ago
Solving this with Python's built-in logging library is a pain. The simplest approach is to not rotate inside Python at all: write straight to stdout and let something like supervisor capture the output and do the rotation.
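For reference, supervisor's stdout capture and rotation is just a few directives in the program section (program name and paths here are illustrative):

```ini
[program:myapp]
command=gunicorn -w 3 app:app
redirect_stderr=true
stdout_logfile=/var/log/myapp/app.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
```

Because supervisor owns the single write end of the pipe, the multi-writer rotation race disappears entirely.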
11
yingxiangyu 9 hours 55 minutes ago
Most deployments are Docker these days anyway: write to stdout and let Docker's own log management handle rotation.
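For example, Docker's default json-file driver rotates on its own with a couple of settings in daemon.json (or per-container via `--log-opt`):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```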
12
zhangchunjiiw 7 hours 37 minutes ago

```python
import os
import time
from logging.handlers import TimedRotatingFileHandler


class SafeRotatingFileHandler(TimedRotatingFileHandler):
    """
    Multi-process file handling. Override doRollover.
    Lines commented with "##" were changed by cc
    https://www.jianshu.com/p/d615bf01e37b
    """

    def __init__(self, filename, when='h', interval=1, backupCount=0,
                 encoding=None, delay=False, utc=False):
        TimedRotatingFileHandler.__init__(self, filename, when, interval,
                                          backupCount, encoding, delay, utc)

    def doRollover(self):
        """
        Do a rollover; a date/time stamp is appended to the filename when the
        rollover happens, named for the start of the interval, not the current
        time. If there is a backup count, remove the files with the oldest
        suffixes.

        Overrides:
        1. only rename if dfn does not exist yet
        2. _open with "a" mode
        """
        if self.stream:
            self.stream.close()
            self.stream = None
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        # multi-process file handling
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        ## if os.path.exists(dfn):
        ##     os.remove(dfn)
        # Issue 18940: A file may not have been created if delay is True.
        ## if os.path.exists(self.baseFilename):
        if not os.path.exists(dfn) and os.path.exists(self.baseFilename):
            os.rename(self.baseFilename, dfn)
        if self.backupCount > 0:
            for s in self.getFilesToDelete():
                os.remove(s)
        if not self.delay:
            self.mode = "a"
            self.stream = self._open()
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:
                    # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:
                    # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt
```

init:

```python
class Logger(object):
    def __init__(self):
        handlers.SafeRotatingFileHandler = SafeRotatingFileHandler
```

yaml:

```yaml
info:
  class: logging.handlers.SafeRotatingFileHandler
  level: INFO
  formatter: console
  filename: ./log/info.log
  when: MIDNIGHT
  interval: 1
  backupCount: 7
  encoding: utf8
```

The symptom was exactly this: logs get lost at rotation time. https://www.jianshu.com/p/d615bf01e37b
I hit a similar problem before and solved it by overriding TimedRotatingFileHandler like this; hope it helps. This was the approach around 2021, there may be better options now. :)
13
zhangchunjiiw 7 hours 36 minutes ago

```py
import datetime
```