V2EX  ›  NGINX

Nginx responding slowly

  vegetableChick · 2021-03-09 10:41:59 +08:00 · 2171 views
    This topic was created 1356 days ago; the information in it may have since changed.

    The project uses django + uwsgi + nginx.

    Below are some of my uwsgi and nginx configs, plus a request log.

    1

    uwsgi.ini

    [uwsgi]
    pythonpath=/xxx
    static-map=/static=/xxx/static
    chdir=/xxx
    env=DJANGO_SETTINGS_MODULE=conf.settings
    module=xxx.wsgi
    master=True
    pidfile=logs/xxx.pid
    vacuum=True
    max-requests=100000
    enable-threads=true
    processes=16
    threads=32
    listen=1024
    log-slow=3000
    daemonize=logs/wsgi.log
    stats=/tmp/xxx/socket/stats.socket
    http=0.0.0.0:6187
    buffer-size=220000000
    socket-timeout=1500
    harakiri=1500
    http-timeout=1500
    

    request log

    [pid: 10550|app: 0|req: 549/6061] 103.218.240.105 () {50 vars in 1037 bytes} 
    [Mon Mar  8 15:24:30 2021] GET /api/v2/analysis/xxxx => generated 3890508 bytes in 397 msecs
     (HTTP/1.1 200) 5 headers in 222 bytes (1 switches on core 16)
    

    2

    nginx.conf

    worker_processes  12;
    
    
    events {
        use epoll;
        worker_connections  65535;
    }
    
    
    http {
        include       mime.types;
        include       log_format.conf;
        include       upstream.conf;
        default_type  application/octet-stream;
    
        sendfile        on;
        tcp_nopush     on;
    
        keepalive_timeout  1800;
        server_tokens off;
    
        client_max_body_size 100m;
        gzip  on;
        gzip_min_length 1k;
        gzip_buffers 4 16k;
        gzip_comp_level 5;
        gzip_types text/plain application/json application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
        gzip_vary off;
        include "site-enabled/*.conf";
    }
    
    

    upstream.conf

    
    upstream bv_crm_server_proxy_line {
            server proxy.xxxx.cn:6187  weight=100 fail_timeout=0;
            keepalive 500;
    }
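
    One thing worth checking against the site config later in the post: nginx only reuses upstream keepalive connections when the proxied location also speaks HTTP/1.1 and clears the Connection header, which the `location ^~ /api/` block here does not do. A minimal sketch of the two extra directives (not config from the thread):

    ```nginx
    location ^~ /api/ {
        proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # drop the default "Connection: close"
        proxy_pass http://bv_crm_server_proxy_line;
    }
    ```

    Without these, the `keepalive 500` above has no effect and every request opens a fresh connection to the upstream.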
    
    

    log_format.conf

    log_format upstream '$remote_addr - $host [$time_local] "$request" '
                        '$status $body_bytes_sent $request_time $upstream_response_time '
                        '"$http_user_agent" "$http_x_forwarded_for" ';
    
    
    

    site-enabled.xxx.conf

    server {
        listen 7020;
        server_name  xxxx.xx.cn;
        client_max_body_size 100M;
        access_log  logs/xxx.log  upstream;
        root /home/smb/web/xxx/dist;
        client_header_buffer_size 16k;
        large_client_header_buffers 4 16k;
    
        location ^~ /api/ {
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
            proxy_send_timeout 1800;
            proxy_connect_timeout 1800;
            proxy_read_timeout 1800;
    
            proxy_ignore_client_abort on;
            proxy_pass http://bv_crm_server_proxy_line;
        }
       
    
        location / {
            try_files $uri /index.html =404;
        }
    }
    
    
    192.168.12.12 - xxx.cn [08/Mar/2021:15:24:34 +0800] "GET /api/v2/analysis/xxx HTTP/1.1" 
    200 531500 4.714 4.714 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
    (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" "103.120.18.243"
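
    For reference, the two trailing numbers in that line are $request_time and $upstream_response_time from the `upstream` log_format above; a small sketch of pulling them out (the regex assumes that exact field order):

    ```python
    import re

    # The access-log line above, re-joined (user agent shortened for brevity).
    line = ('192.168.12.12 - xxx.cn [08/Mar/2021:15:24:34 +0800] '
            '"GET /api/v2/analysis/xxx HTTP/1.1" 200 531500 4.714 4.714 '
            '"Mozilla/5.0 (Windows NT 10.0; Win64; x64)" "103.120.18.243"')

    # $status $body_bytes_sent $request_time $upstream_response_time come
    # right after the quoted request, per log_format.conf above.
    m = re.search(r'" (\d{3}) (\d+) ([\d.]+) ([\d.]+) ', line)
    status, body_bytes, request_time, upstream_time = m.groups()

    # The two times are equal, so nearly all of the 4.7 s elapses before the
    # upstream finishes responding; nginx itself adds almost nothing.
    print(request_time, upstream_time)
    ```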
    
    
    
    

    Nginx is responding very slowly now; could someone take a look at whether anything in the config is wrong?

    Thanks

    8 replies    2021-03-09 11:31:27 +08:00
    brader · #1 · 2021-03-09 10:51:55 +08:00
    Could you first try to rule out everything other than nginx and test nginx alone?
    ```
    location / {
        default_type text/plain;
        return 200 "hello nginx!\n";
    }
    ```
    defunct9 · #2 · 2021-03-09 11:00:23 +08:00
    ```
    upstream fastcgi_backend {
        server 127.0.0.1:9000;

        keepalive 8;
    }

    server {
        ...

        location /fastcgi/ {
            fastcgi_pass fastcgi_backend;
            fastcgi_keep_conn on;
            ...
        }
    }
    ```
    vegetableChick (OP) · #3 · 2021-03-09 11:06:45 +08:00
    @brader I don't think I can experiment on the production machine... Other requests get normal responses from nginx; this one returns a JSON of about 3 MB, and I wonder whether that is related.
    vegetableChick (OP) · #4 · 2021-03-09 11:09:35 +08:00
    @defunct9 Thanks for the reply. I don't quite follow what this config does; could you briefly explain?
    chendy · #5 · 2021-03-09 11:16:14 +08:00
    How much bandwidth does the server have? A 3 MB JSON takes close to 3 s to send on a 10 Mbps machine.
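
    The arithmetic behind that estimate (a back-of-the-envelope sketch; 10 Mbps link, no protocol overhead):

    ```python
    def transfer_time_s(size_megabytes: float, bandwidth_mbps: float) -> float:
        """Ideal time to push size_megabytes through a bandwidth_mbps link."""
        return size_megabytes * 8 / bandwidth_mbps  # bytes -> bits, divide by rate

    # ~2.4 s ideal; TCP/HTTP overhead pushes the real figure toward 3 s.
    print(transfer_time_s(3, 10))
    ```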
    barrysn · #6 · 2021-03-09 11:23:14 +08:00
    Set up the log format and check which of the recorded times in the nginx log is long; first confirm where the problem actually is.

    $request_time – Full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body
    $upstream_connect_time – Time spent establishing a connection with an upstream server
    $upstream_header_time – Time between establishing a connection to an upstream server and receiving the first byte of the response header
    $upstream_response_time – Time between establishing a connection to an upstream server and receiving the last byte of the response body
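
    Dropped into this thread's setup, that could look like the following (a sketch extending the `upstream` format already defined in log_format.conf, not config from the thread):

    ```nginx
    log_format timing '$remote_addr - $host [$time_local] "$request" '
                      '$status $body_bytes_sent '
                      'rt=$request_time uct=$upstream_connect_time '
                      'uht=$upstream_header_time urt=$upstream_response_time';

    access_log logs/timing.log timing;
    ```

    Comparing rt against urt shows whether time is lost inside nginx or spent waiting on the upstream.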
    brader · #7 · 2021-03-09 11:30:32 +08:00
    @vegetableChick You can test it without touching what you already have: just add one extra location rule for testing.
    If you return something around 3 MB, it has a fair number of users, and they hit it frequently, insufficient server bandwidth really will make it very slow. By analogy: it is like distributing an app download straight off your server's bandwidth instead of offloading it to OSS.
    defunct9 · #8 · 2021-03-09 11:31:27 +08:00
    Oh, you're using uwsgi here. I'd suggest switching to fastcgi and using fastcgi's keepalive feature to speed things up.
    Also:
    #include uwsgi_params;
    #uwsgi_pass unix:///var/www/script/uwsgi.sock; # point all dynamic requests at uwsgi's sock file
    That approach is faster.
    Anyway, open up SSH and let me take a look.
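
    The two commented directives above expand to roughly this location block (a sketch; the socket path is the commenter's example):

    ```nginx
    location ^~ /api/ {
        include uwsgi_params;                          # pass the standard uwsgi variables
        uwsgi_pass unix:///var/www/script/uwsgi.sock;  # uwsgi protocol over a unix socket
    }
    ```

    On the uwsgi side this pairs with `socket = /var/www/script/uwsgi.sock` in uwsgi.ini in place of `http = 0.0.0.0:6187`, cutting out the extra HTTP hop.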