WRITELOOP

NGINX TIPS

2016 June 26

How to configure the number of connections per second that can be handled by a server:

  1. Get the number of cores the server has available:
$ grep processor /proc/cpuinfo | wc -l
> 1

Ideally, we should set 1 worker process for each processor core:

$ vim /etc/nginx/nginx.conf
worker_processes 1;
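A quicker way to get the core count is nproc; a small sketch (the echo just prints the directive you would put into nginx.conf):

```shell
# Count available CPU cores; equivalent to grep-ing /proc/cpuinfo.
cores=$(nproc)
# The directive we would set in /etc/nginx/nginx.conf:
echo "worker_processes $cores;"
```

Recent nginx versions (1.2.5+) also accept "worker_processes auto;", which sizes the workers to the core count for you.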
  2. You must know your system limits. Most of them cannot be surpassed, except for one: open files. To list them:
$ ulimit -a
> core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 30
file size               (blocks, -f) unlimited
pending signals                 (-i) 23457
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 99
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 23457
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Here we are: “open files” has the value of 1024. That is the number of simultaneous connections that Nginx can serve. Keep in mind that every browser usually opens at least 2 connections per server, so that number can be halved. Also keep in mind that this halved number is multiplied by the number of cores. So:

$ vim /etc/nginx/nginx.conf
worker_connections 1024;
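The halving rule just described can be sketched as simple shell arithmetic (numbers taken from the example above):

```shell
# 1 worker with 1024 connections, halved because browsers typically
# open at least 2 connections per server.
worker_processes=1
worker_connections=1024
clients=$(( worker_processes * worker_connections / 2 ))
echo "$clients"   # 512 simultaneous clients
```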

CONSIDERATIONS:

A) In many cases the default here is 1024. If nginx hits the limit, it will log the error (24: Too many open files) and return an HTTP error status code to the client. Chances are nginx and your OS can handle a LOT more than 1024 “open files” (file descriptors), so that value can be safely increased. You can do that by setting a new value with ulimit.

  • For the current user session:
$ ulimit -n 600000
  • Permanently, and for all users / sessions:
$ vim /etc/security/limits.conf
* soft nofile 600000 (value the kernel enforces)
* hard nofile 600000 (ceiling for the value above - a "maximum")
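To confirm what the current shell actually got, the soft and hard limits can be read separately (read-only; no root needed):

```shell
# -S = soft limit (enforced), -H = hard limit (ceiling for the soft limit).
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```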

To check that the modification was applied, run “ulimit -a” again and look at the “open files” value. You must also set the corresponding kernel parameter with sysctl:

  • Temporarily, for the running kernel (until next reboot):
$ sysctl -w fs.file-max=600000
  • Permanently, and for all users / sessions:
$ vim /etc/sysctl.conf
fs.file-max=600000
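On Linux, the same value can be read straight from procfs, which is a quick way to confirm the sysctl took effect (the path is Linux-specific):

```shell
# Kernel-wide ceiling on open file descriptors.
fmax=$(cat /proc/sys/fs/file-max)
echo "$fmax"
```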

To check that the modification was applied, run “sysctl -a” again and search for the “fs.file-max” value. To ease finding that parameter, you can apply a grep filter:

$ sysctl -a | grep 'fs.file-max'

IMPORTANT: to ensure changes to these limits persist, there are two places where they need to be recorded:

  • /etc/sysctl.conf sets the system-wide ceiling (max open files for the whole system):
fs.file-max = 65536
  • /etc/security/limits.conf sets the per-user floor and ceiling. Ensure both a hard limit and a soft limit are set, otherwise the setting will not become active.

B) A nice formula to get an idea of the MAX number of connections is:

max = worker_processes * worker_connections * (total_of_current_active_connections / average_request_time)
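Plugging hypothetical numbers into the formula above makes it concrete (all values below are assumptions for illustration only):

```shell
# 1 worker * 1024 connections * (1000 active connections / 2s average request time)
worker_processes=1
worker_connections=1024
active_connections=1000
avg_request_time=2
max=$(( worker_processes * worker_connections * (active_connections / avg_request_time) ))
echo "$max"   # 1 * 1024 * 500 = 512000
```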

If you are using nginx as a reverse proxy (e.g. in front of uwsgi), each request will also open an additional connection to your backend. So, in that case, you must count each connection as in fact being 2.

  3. Configure the buffers. If any one of them is too low, nginx will have to write to a temporary file, causing high I/O. There are mainly 4 directives to control that:

  • client_body_buffer_size: the request body size (keep in mind POST requests here, and that they generally are form submissions).
  • client_header_buffer_size: the request header size. Generally 1K is enough here.
  • client_max_body_size: Maximum allowed size for a request. Keep in mind that, when exceeded, nginx will respond with http status code 413 (Request Entity Too Large).
  • large_client_header_buffers: Maximum number and size of buffers on large client headers. E.g.:
$ vim /etc/nginx/nginx.conf
client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 2 1k;
  4. Configure the timeouts. The following directives are available:
  • client_body_timeout, client_header_timeout: how long the server will wait for a client body or client header to be sent after the request. If that time expires and nothing is sent, nginx will return HTTP status code 408 (Request Timeout).
  • keepalive_timeout: nginx will close idle keep-alive connections with the client after the time specified here.
  • send_timeout: if the client receives nothing within this interval between two successive write operations, the connection will be shut down. E.g.:
$ vim /etc/nginx/nginx.conf
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
  5. If you have any other way to log every request made to the server (e.g., Google Analytics), it may be good to turn off the access_log:
$ vim /etc/nginx/nginx.conf
access_log off;
  6. Avoid disk I/O at all costs (in your backend and on the server). Take into consideration that a low-RAM machine will need to swap to disk, and swap is disk I/O.
  7. Speaking of disk I/O, it is nice to cache the open file descriptors. Here are the available directives for that: http://wiki.nginx.org/HttpCoreModule#open_file_cache
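A sketch of what such a cache block might look like, using the directives from the page linked above (the values are illustrative, not recommendations):

```nginx
# Cache up to 1000 open file descriptors; drop entries unused for 20s.
open_file_cache          max=1000 inactive=20s;
# Re-validate cached entries every 30s.
open_file_cache_valid    30s;
# Only cache descriptors that were requested at least twice.
open_file_cache_min_uses 2;
# Also cache file-lookup errors.
open_file_cache_errors   on;
```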
  8. Compress the data to be transferred through the network. Don’t worry about the CPU penalty, because Nginx is already optimized to deal with the required amount of CPU load. To do that:
$ vim /etc/nginx/nginx.conf
gzip             on;
gzip_min_length  1000;
gzip_types       text/plain application/xml application/json;
gzip_comp_level 5;
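The size/CPU tradeoff behind gzip_comp_level can be seen locally with the gzip CLI (a rough sketch; the sample text is made up):

```shell
# Compress the same repetitive sample at the lowest and highest levels.
sample=$(yes "GET /index.html HTTP/1.1" | head -n 200)
low=$(printf '%s' "$sample" | gzip -1 | wc -c)
high=$(printf '%s' "$sample" | gzip -9 | wc -c)
echo "level 1: $low bytes, level 9: $high bytes"
```

On text like this the higher level produces smaller output, but the returns diminish quickly, which is why a middle value such as 5 is a common pick.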
  9. Restart the nginx service (systemd / upstart / init.d) and run your measurements again (you can use siege, ab or wrk/wrk2 for that).

Basic http authorization with nginx:

  1. Create a file to hold the password (compatible with apache htpasswd format):
  • Download the script to generate the file:
$ wget http://trac.edgewall.org/export/10791/trunk/contrib/htpasswd.py
$ chmod 755 htpasswd.py
  • Execute the script. If the file does not exist yet:
$ htpasswd.py -c -b .htpasswd username password
If you just want to add another user to it (keeping its previous content):
$ htpasswd.py -b .htpasswd anotherusername anotherpassword
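If you would rather not fetch a script, an htpasswd-compatible entry can also be produced with openssl (assuming openssl is installed; -apr1 is the Apache MD5 scheme that nginx understands, and the username/password here are placeholders):

```shell
# Build a "user:hash" line and append it to the passwords file.
entry="username:$(openssl passwd -apr1 password)"
echo "$entry" >> .htpasswd
cat .htpasswd
```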
  2. Configure the desired nginx URL to use the generated passwords file:
location /test {
    auth_basic "Restricted";
    auth_basic_user_file /path/to/.htpasswd;
    …
}

Enable nginx status page:

$ systemctl stop nginx
$ vim /etc/nginx.conf
location /nginx_status {
    # Turn on stats
    stub_status on;
    access_log off;
    # Only allow access from 192.168.1.5
    allow 192.168.1.5;
    deny all;
}
$ systemctl start nginx

Now, go to “http://myhostname/nginx_status” with your browser. Sample output:

Active connections: 586
server accepts handled requests
9582571 9582571 21897888
Reading: 39 Writing: 3 Waiting: 544

where:

586 = number of all open connections
9582571 = accepted connections
9582571 = handled connections
21897888 = handled requests

Then, to calculate the requests per connection:

Requests per connection = handled requests / handled connections
Requests per connection = 21897888 / 9582571 ≈ 2.28

(You can compute this with: $ echo '21897888/9582571' | bc -l)
