
Python 3.12.3, Celery 5.3.6, Django 4.2.11, Ubuntu 22.04.4

I have Django and Celery servers running simultaneously on an Ubuntu server.

For logging I use DiscordHandler and a customized TimedRotatingFileHandler, defined as follows:

from logging.handlers import TimedRotatingFileHandler

class CustomizedTimedRotatingFileHandler(TimedRotatingFileHandler):
    '''
    Rename rotated files: log_name.log.<date> -> log_name.<date>.log
    '''
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Move the ".log" extension after the date suffix added on rollover.
        self.namer = lambda name: name.replace(".log", "") + ".log"
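
A quick check of what the namer does to the default rotated filename (the date here is just an example):

>>> (lambda name: name.replace(".log", "") + ".log")("log/sync.log.2025-01-08")
'log/sync.2025-01-08.log'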

The project logging is configured in the settings file:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        **{
            f"{k}_file": {
                "level": "INFO",
                "class": "api.logging.CustomizedTimedRotatingFileHandler",
                "filename": "log/sync.log",
                "when": "midnight",
                "backupCount": 2,
                "formatter": v,
            } for k, v in {"django": "verbose", "celery": "celery"}.items()
        }
    },
    'loggers': {
        'django': {
            'handlers': ['django_file'],
            'level': 'INFO',
            'propagate': True,
        },
        'django.server': {
            'handlers': ['django_file'],
            'level': 'INFO',
            'propagate': False,
        },
        'celery': {
            'handlers': ['celery_file'],
            'level': 'INFO',
            'propagate': False,
        }
    }
}

Right now I have two rotated log files plus today's file, which only receives logs from Django, not Celery.

I've looked through the Discord logs and seen that some Celery jobs were running at midnight. This is not the first occurrence; every time it has happened, there were tasks running at midnight, which interrupted the rollover.

How do I get file logging to work correctly? My only guess is that I have to call doRollover() manually (as a Celery task), roughly like the sketch below.
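
Something like this, where rotate_logs is just a placeholder name and I'd schedule it with Celery beat; it assumes the handlers from the config above stay attached to the 'django' and 'celery' loggers:

import logging
from logging.handlers import TimedRotatingFileHandler
from celery import shared_task

@shared_task
def rotate_logs():
    # Force the rollover that the midnight scheduling missed.
    for logger_name in ("django", "celery"):
        for handler in logging.getLogger(logger_name).handlers:
            if isinstance(handler, TimedRotatingFileHandler):
                handler.doRollover()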

Any help is appreciated


1 Answer

With Celery workers, you presumably have multiple worker processes logging to the same physical file, which is not supported: with multiple threads, locks take care of serialising access to the file, but there is no such protection across multiple processes. See the logging cookbook section on logging to a single file from multiple processes. You should probably set up a QueueHandler or SocketHandler that sends events from all workers to a separate logging process, which is the only one that writes to the log file.
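
A minimal sketch of the queue-based pattern, assuming the queue is created before the worker processes are forked (function and variable names here are illustrative, not from your project):

import logging
import logging.handlers
import multiprocessing

def run_listener(queue):
    # The listener is the only place that touches the file, so
    # TimedRotatingFileHandler can roll over safely at midnight.
    handler = logging.handlers.TimedRotatingFileHandler(
        "log/sync.log", when="midnight", backupCount=2
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    listener = logging.handlers.QueueListener(queue, handler)
    listener.start()
    return listener

def configure_worker(queue):
    # Worker processes only enqueue records; they never open the file.
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(logging.handlers.QueueHandler(queue))

if __name__ == "__main__":
    log_queue = multiprocessing.Queue(-1)
    listener = run_listener(log_queue)
    configure_worker(log_queue)
    logging.getLogger("celery").info("logged via the queue, from any process")
    listener.stop()

Since you're on Python 3.12, note that dictConfig() can also wire up a QueueHandler/QueueListener pair directly (a handler entry whose 'class' is 'logging.handlers.QueueHandler' can list downstream 'handlers'), but you still have to start the listener yourself, and only one process should own the file handler. For workers started independently of each other, the cookbook's SocketHandler variant is the more practical option.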
