How to deal with high I/O load caused by nginx logs
2014-02-17 16:31
We use Nginx to collect logs. Traffic is heavy, and writing the collected data directly to files on disk drives disk I/O so high that the machine simply goes down.
To reduce disk I/O, the logs can be written to a memory-backed partition instead; but with this log volume, memory fills up very quickly.
Nginx supports compressing logs as it writes them. The relevant documentation reads:
access_log path format gzip[=level] [buffer=size] [flush=time];
If either the buffer or gzip (1.3.10, 1.2.7) parameter is used, writes to log will be buffered.
The buffer size must not exceed the size of an atomic write to a disk file. For FreeBSD this size is unlimited.
When buffering is enabled, the data will be written to the file:
if the next log line does not fit into the buffer;
if the buffered data is older than specified by the flush parameter (1.3.10, 1.2.7);
when a worker process is re-opening log files or is shutting down.
If the gzip parameter is used, then the buffered data will be compressed before writing to the file. The compression level can be set between 1 (fastest, less compression) and 9 (slowest, best compression). By default, the buffer size is equal to 64K bytes,
and the compression level is set to 1. Since the data is compressed in atomic blocks, the log file can be decompressed or read by “zcat” at any time.
Example: access_log /path/to/log.gz combined gzip flush=5m;
For gzip compression to work, nginx must be built with the zlib library.
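Putting the documented parameters together, a minimal configuration sketch for this setup might look as follows. The tmpfs mount point /dev/shm/nginx and the concrete buffer/flush values are illustrative assumptions, not values from the original post:

```nginx
http {
    # Buffered, gzip-compressed access log on a memory-backed partition.
    # Assumes /dev/shm/nginx exists and is tmpfs.
    # 64k buffer, compression level 1, flush at least every 5 minutes.
    access_log /dev/shm/nginx/access.log.gz combined gzip=1 buffer=64k flush=5m;
}
```

Because the file lives in memory, it still has to be rotated to disk (or shipped off the machine) before the partition fills up, as described below.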
So nginx can compress logs before writing them. With this feature, log files can be kept on a memory partition, cutting disk I/O; the logs are then rotated on a schedule and either written to disk or shipped to a log-processing cluster.
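The "atomic blocks" property quoted above is plain gzip-member concatenation: each flush appends a complete gzip stream, which is why zcat can read the file at any time. This can be illustrated outside nginx with a small Python sketch (the file name and log lines are made up):

```python
import gzip
import os
import tempfile

# Each flush of nginx's log buffer appends a complete gzip member, so the
# log file is simply a concatenation of gzip streams. Simulate two flushes:
path = os.path.join(tempfile.mkdtemp(), "access.log.gz")
with open(path, "wb") as f:
    f.write(gzip.compress(b"GET /a HTTP/1.1 200\n"))  # first flushed buffer
    f.write(gzip.compress(b"GET /b HTTP/1.1 404\n"))  # second flushed buffer

# gzip (like zcat) transparently reads concatenated members, so the file
# stays readable at any moment, even between flushes.
with gzip.open(path, "rb") as f:
    print(f.read().decode(), end="")
```

This also means a rotation job can move or copy the file mid-write without producing a corrupt archive, as long as it handles only whole members.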