2012-05-16
6

Tornado, Nginx, Apache ab - apr_socket_recv: Connection reset by peer (104)

I am running nginx and Tornado on c1.medium instances.

When I run ab, I get the output below. Nginx will not perform. I tried tweaking the nginx config file, to no avail. If I hit a single Tornado port directly, bypassing nginx, e.g.

`http://127.0.0.1:8050/pixel?tt=ff`

then it is fast. See below. This must be an nginx problem, so how can I resolve it? My nginx conf file is also below.

[email protected]:/etc/service# ab -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff 
This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 
Benchmarking 127.0.0.1 (be patient) 
Completed 1000 requests 
Completed 2000 requests 
Completed 3000 requests 
Completed 4000 requests 
Completed 5000 requests 
Completed 6000 requests 
Completed 7000 requests 
Completed 8000 requests 
Completed 9000 requests 
apr_socket_recv: Connection reset by peer (104) 
Total of 9100 requests completed 

This setup should be smoking fast, but so far it is not.

I have set the following parameters:

ulimit is at 100000 

# General gigabit tuning: 
net.core.rmem_max = 16777216 
net.core.wmem_max = 16777216 
net.ipv4.tcp_rmem = 4096 87380 16777216 
net.ipv4.tcp_wmem = 4096 65536 16777216 
net.ipv4.tcp_syncookies = 1 
# this gives the kernel more memory for tcp 
# which you need with many (100k+) open socket connections 
net.ipv4.tcp_mem = 50576 64768 98152 
net.core.netdev_max_backlog = 2500 
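For reference, settings like these can be applied at runtime and persisted with the standard sysctl tooling. This is a minimal sketch assuming a typical Linux layout with `/etc/sysctl.conf`; the values are the ones from the question, not recommendations:

```shell
# Apply the tuning values at runtime (assumes the standard sysctl CLI)
sudo sysctl -w net.core.rmem_max=16777216
sudo sysctl -w net.core.wmem_max=16777216
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# To persist across reboots, put the same lines (key = value form) in
# /etc/sysctl.conf and reload them:
sudo sysctl -p
```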

Here is my nginx conf:

user www-data; 
worker_processes 1; # 2*number of cpus 
pid /var/run/nginx.pid; 
worker_rlimit_nofile 32768; 
events { 
     worker_connections 30000; 
     multi_accept on; 
     use epoll; 
} 

http { 
     upstream frontends { 
      server 127.0.0.1:8050; 
      server 127.0.0.1:8051; 
     } 
     sendfile on; 
     tcp_nopush on; 
     tcp_nodelay on; 
     keepalive_timeout 65; 
     types_hash_max_size 2048; 
     # server_tokens off; 
     # server_names_hash_bucket_size 64; 
     # server_name_in_redirect off; 

     include /etc/nginx/mime.types; 
     default_type application/octet-stream; 

     # Only retry if there was a communication error, not a timeout 
    # on the Tornado server (to avoid propagating "queries of death" 
    # to all frontends) 
    proxy_next_upstream error; 

     server { 
       listen 80; 
       server_name 127.0.0.1; 
       ## For tornado 
       location / { 
         proxy_pass_header Server; 
         proxy_set_header Host $http_host; 
         proxy_redirect off; 
         proxy_set_header X-Real-IP $remote_addr; 
         proxy_set_header X-Scheme $scheme; 
         proxy_pass http://frontends; 
       } 
     } 
} 
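After editing the conf, it can help to validate it and hot-reload the workers rather than restarting nginx; `-t` and `-s reload` are standard nginx command-line options:

```shell
# Check the edited configuration for syntax errors before applying it
sudo nginx -t
# Reload worker processes without dropping in-flight connections
sudo nginx -s reload
```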

If I run ab against Tornado directly, bypassing nginx:

[email protected]:/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer# ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff 
This is ApacheBench, Version 2.3 <$Revision: 655654 $> 
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ 
Licensed to The Apache Software Foundation, http://www.apache.org/ 

Benchmarking 127.0.0.1 (be patient) 
Completed 10000 requests 
Completed 20000 requests 
Completed 30000 requests 
Completed 40000 requests 
Completed 50000 requests 
Completed 60000 requests 
Completed 70000 requests 
Completed 80000 requests 
Completed 90000 requests 
Completed 100000 requests 
Finished 100000 requests 


Server Software:  TornadoServer/2.2.1 
Server Hostname:  127.0.0.1 
Server Port:   8050 

Document Path:   /pixel?tt=ff 
Document Length:  42 bytes 

Concurrency Level:  1000 
Time taken for tests: 52.436 seconds 
Complete requests:  100000 
Failed requests:  0 
Write errors:   0 
Total transferred:  31200000 bytes 
HTML transferred:  4200000 bytes 
Requests per second: 1907.08 [#/sec] (mean) 
Time per request:  524.363 [ms] (mean) 
Time per request:  0.524 [ms] (mean, across all concurrent requests) 
Transfer rate:   581.06 [Kbytes/sec] received 

Connection Times (ms) 
       min mean[+/-sd] median max 
Connect:  0 411 1821.7  0 21104 
Processing: 23 78 121.2  65 5368 
Waiting:  22 78 121.2  65 5368 
Total:   53 489 1845.0  65 23230 

Percentage of the requests served within a certain time (ms) 
    50%  65 
    66%  69 
    75%  78 
    80%  86 
    90% 137 
    95% 3078 
    98% 3327 
    99% 9094 
100% 23230 (longest request) 


2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1" 
2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1" 

Output when using the -v 10 option in ab:

GIF89a 
LOG: Response code = 200 
LOG: header received: 
HTTP/1.1 200 OK 
Date: Wed, 16 May 2012 21:56:50 GMT 
Content-Type: image/gif 
Content-Length: 42 
Connection: close 
Etag: "d5fceb6532643d0d84ffe09c40c481ecdf59e15a" 
Server: TornadoServer/2.2.1 
Set-Cookie: rtbhui=867bccde-2bc0-4518-b422-8673e07e19f6; Domain=rtb.rtbhui.com; expires=Fri, 16 May 2014 21:56:50 GMT; Path=/ 

I found my problem ... sadly ... haha. My Chef run was restarting and supervising each server every 30 seconds, and I had a bug there. Once it was fixed, the problems were resolved. – Tampa

Answer

2

I had the same problem using ApacheBench against a Sinatra application running on WEBrick. I found the answer here.

It is really a problem with your ApacheBench itself.

The bug has been fixed in later versions of Apache. Try downloading them here.

0

I had the same problem, and searching for information in the logs I found these lines:

Oct 15 10:41:30 bal1 kernel: [1031008.706185] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:31 bal1 kernel: [1031009.757069] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:32 bal1 kernel: [1031009.939489] nf_conntrack: table full, dropping packet. 
Oct 15 10:41:32 bal1 kernel: [1031010.255115] nf_conntrack: table full, dropping packet. 

In my particular case, the conntrack module was in use because iptables runs on the same server, which also acts as the firewall.

One solution would be to unload the connection-tracking module; another, easier one is to apply these two lines in the firewall policy:

iptables -t raw -I PREROUTING -p tcp -j NOTRACK 
iptables -t raw -I OUTPUT -p tcp -j NOTRACK 
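If you still need connection tracking (e.g. for stateful firewall rules), an alternative to disabling it entirely is to raise the table size. This is a sketch; the sysctl names assume a reasonably modern kernel with the nf_conntrack module loaded, and the limit shown is an illustrative value, not a tested recommendation:

```shell
# Inspect the current conntrack limit and usage
sysctl net.netfilter.nf_conntrack_max
cat /proc/sys/net/netfilter/nf_conntrack_count

# Raise the limit if the table keeps filling up under load
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
```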

Can you share which logs you are looking at? –
