NGINX+Windows how to serve a static html file (100kb) to 10k concurrent users

Amina asked:

This is the NGINX config file:

gzip on;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
gzip_proxied any;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 0;
reset_timedout_connection on;
client_body_timeout 10;

...

events {
    worker_connections 4000;
}
worker_processes 4;

The problem is that many users cannot get the file (they cannot connect, or they time out). The file is a push message for a desktop app.

So, I have two questions:
1. Does anyone know the maximum “worker_connections” that nginx supports on Windows 2008 R2?
2. Do I need to change something in the Windows Registry? I cannot find what to change, or the exact values.

I don’t want to be off-topic, but just to give some background: today I am serving the file from Amazon S3, and it costs almost $1000 per month. I have a dedicated server, so I want to save the money and serve the file myself. If you know of a cheaper alternative to S3, you can comment.

Thank you.


I answered:

On Windows, nginx has significant limitations:

  • You can only have 1024 worker_connections; any higher number will be ignored. And even if you configure more than one worker process, only one will actually do any work.
  • nginx can only use select(); there is no high-performance event handler.

These are the reasons why using nginx on Windows in high-performance, high-scalability environments is a bad idea.

Switch to nginx on a non-Windows operating system as soon as possible.
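
On Linux, for example, nginx can use epoll and run multiple workers. A minimal sketch of the equivalent configuration (nginx normally picks the best event method automatically, so use epoll is optional):

# minimal sketch for a Linux host
worker_processes auto;

events {
    use epoll;
    worker_connections 4000;
}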


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Bash: mv directory one at a time

NinjaCat asked:

I am trying to move all subdirectories of a folder to another share on the same server. If I do a mv *, I will run out of space, since the source folders are not removed until all of them have been transferred. So I’d like to create a short script that loops through them one at a time. Does anyone have an example that I can look at? I’ve searched around but can’t find exactly what I am looking for.


I answered:

You want for.

An example (this will just show what will be done):

for item in */; do    # glob rather than `ls`: safe with spaces, matches only directories
    echo mv "$item" /destination/directory
done

When you’re happy, remove echo to do it for real.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

What does this notice mean "SNI: extension not received from the client", in stunnel log

Kaustubh Khare asked:

I am working on multiple-domain certificates using stunnel. I have two domains, test.int and test1.int, and have given multiple certificates to each domain, plus one default certificate. I used the sni option of stunnel to provide the per-domain certificates. Using a JavaScript websocket I am trying to connect to the secure server, but the log file output shows

SNI: extension not received from the client

So I am not sure whether the sni option is supported or not. Could anyone help me work out whether it is working? What is the meaning of the “SNI: extension not received from the client” message?

Thanks in advance for your valuable answers.

My stunnel.config file

output=/var/log/stunnel.log
pid=
debug = 7
fips = no
compression = rle

options = NO_SSLv2

syslog = no

[websockets]
cert = /usr/local/etc/stunnel/default.crt
key = /usr/local/etc/stunnel/default.key
accept  = 0.0.0.0:9443
connect = 127.0.0.1:9000


[sni1]
sni = websockets:test.int
cert = /usr/local/etc/stunnel/test.int.crt
key = /usr/local/etc/stunnel/test.int.key
connect = 127.0.0.1:9000

[sni2]
sni = websockets:test1.int
cert = /usr/local/etc/stunnel/test1.int.crt
key = /usr/local/etc/stunnel/test1.int.key
connect = 127.0.0.1:9000

Log file output

Service [websockets] accepted (FD=9) from 192.168.0.132:38257
2014.04.14 18:30:32 LOG7[7085:139648669734672]: Service [websockets] started
2014.04.14 18:30:32 LOG5[7085:139648669734672]: Service [websockets] accepted connection from 192.168.0.132:38257
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): before/accept initialization
2014.04.14 18:30:32 LOG5[7085:139648669734672]: SNI: extension not received from the client
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 read client hello A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write server hello A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write change cipher spec A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write finished A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 flush data
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 read finished A
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    2 items in the session cache
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client connects (SSL_connect())
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client connects that finished
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client renegotiations requested
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   19 server connects (SSL_accept())
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   19 server connects that finished
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 server renegotiations requested
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   14 session cache hits
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 external session cache hits
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 session cache misses
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    2 session cache timeouts
2014.04.14 18:30:32 LOG6[7085:139648669734672]: SSL accepted: previous session reused
2014.04.14 18:30:32 LOG6[7085:139648669734672]: connect_blocking: connecting 127.0.0.1:9000
2014.04.14 18:30:32 LOG7[7085:139648669734672]: connect_blocking: s_poll_wait 127.0.0.1:9000: waiting 10 seconds
2014.04.14 18:30:32 LOG5[7085:139648669734672]: connect_blocking: connected 127.0.0.1:9000
2014.04.14 18:30:32 LOG5[7085:139648669734672]: Service [websockets] connected remote server from 127.0.0.1:44325
2014.04.14 18:30:32 LOG7[7085:139648669734672]: Remote socket (FD=10) initialized

JavaScript code to connect to the secure server:

wss://192.168.0.132:9443/bo/socket.bo.php

I am using Chrome 26 and Firefox 24 as browsers, and CentOS 6 as the operating system.


I answered:

You tried to connect directly to an IP address, rather than a hostname. So there wouldn’t be any point to SNI, as you didn’t provide a name. You’re meant to use the hostname.

For instance:

wss://example.com:9443/bo/socket.bo.php
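
If the clients cannot resolve those names via DNS, you could map them in the client’s hosts file for testing (a sketch, assuming the server is still at 192.168.0.132):

192.168.0.132   test.int test1.int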

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Setting up Nginx — redirecting requests to an auth server and getting redirected back to serve the request

Anup asked:

Any incoming request on (x.x.x.x) –> redirect to x.x.x.x.auth.domain.edu –> that authenticates the user and redirects back to the x.x.x.x server (with a cookie set; in my case it is an EZproxy server doing the cookie setting).

I have tried rewriting the request URL and also using proxy_pass, both resulting in a looping error (from the browser).

I must be missing some basic header or something; I have not been able to work out what since yesterday morning.
Any suggestions regarding how the config should look?


I answered:

Your root directive has two problems:

  1. It uses a relative path. When a relative path is used, it’s relative to a default directory compiled into nginx. Do you know which one that is? It’s best to specify absolute paths.
  2. It is in the wrong place. The root directive should be specified in the server block. This is one of the most common nginx misconfigurations.
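
For instance, a minimal sketch (the names and paths are hypothetical, since the config wasn't posted):

server {
    listen 80;
    server_name example.com;

    # Absolute path, set once at the server level and inherited by locations
    root /var/www/example.com/html;

    location / {
        # No root here; it is inherited from the server block
        try_files $uri $uri/ =404;
    }
}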

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Sharing unix socket via docker volume – permission denied

soupdiver asked:

I am trying to share my php5-fpm socket via a volume with my nginx webserver. FPM and nginx are running in different containers, and I want to get them working via a shared volume where I place the socket file from FPM.

2014/04/13 10:53:35 [crit] 33#0: *1 connect() to unix:/container/fpm/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.8.2, server: docker.dev, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/container/fpm/run/php5-fpm.sock:", host: "docker.dev"

I already tried setting chmod 777 on php5-fpm.sock and changing its group to www-data.

Dockerfile of fpm container

FROM ubuntu:13.10

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y php5-cli php5-common
RUN apt-get install -y php5-fpm php5-cgi

ADD ./php-fpm.conf /etc/php5/fpm/php-fpm.conf
ADD ./pool.d/www.conf /etc/php5/fpm/pool.d/www.conf
ADD ./php.ini /etc/php5/fpm/php.ini

CMD ["/usr/sbin/php5-fpm"]

Dockerfile of nginx container

FROM ubuntu:13.10

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx

ADD ./test.php /var/test/test.php
ADD ./test.html /var/test/test.html
ADD ./nginx.conf /etc/nginx/nginx.conf
ADD ./site /etc/nginx/sites-enabled/test

EXPOSE 80

CMD ["/usr/sbin/nginx"]

I can access test.html, but when accessing test.php I get 502 Bad Gateway.

Is there anything else I have to take care of regarding permissions when sharing things via volumes?


I answered:

Different containers cannot talk to each other via UNIX domain sockets, since they are in different network namespaces. There is an unofficial kernel patch that allows this, but you’re on your own if you use it.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Configure postfix to block PHP-sent mail() to certain recipients

Dr. Gianluigi Zane Zanettini asked:

I’m trying to prevent my CentOS 6.5 server from sending out emails to a certain list of recipients (dont_contact_me@hotmail.com, dont_contact_me@gmail.com, and so on).

I’ve configured postfix like this:

/etc/postfix/main.cf:

smtpd_recipient_restrictions = check_recipient_access hash:/etc/postfix/recipient_access

/etc/postfix/recipient_access:

dont_contact_me@hotmail.com REJECT
dont_contact_me@gmail.com REJECT

DB is built via:

postmap hash:recipient_access

postfix is reloaded

service postfix reload

php.ini is:

sendmail_path = /usr/sbin/sendmail -t -i

Unfortunately this doesn’t seem to work. If I use PHP mail() to send a mail to dont_contact_me@hotmail.com, it is delivered as always.

What am I missing?


I answered:

You may be able to abuse smtp_generic_maps to divert this mail. Unlike the other directives you mentioned, this one operates on outgoing mail.

While it’s not capable of dropping it, it can send it to a different mailbox, where you can then take appropriate action on it (such as suspending the customer who sent the mail).

In main.cf you would have:

smtp_generic_maps = hash:/etc/postfix/generic

And in /etc/postfix/generic:

banned_address@hotmail.com abuse@example.com
dont_contact_me@live.com abuse@example.com

This should send all such mail to your abuse mailbox for you to act on.
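
Note that, as with your recipient_access table, the generic map must be rebuilt and postfix reloaded after editing:

postmap /etc/postfix/generic
service postfix reload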


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Upgrade to secure openssl fails

user216141 asked:

Upgrade to secure openssl fails

Method:

have in /etc/apt/sources.list:

deb http://security.debian.org/ wheezy/updates main contrib non-free

Then do:

apt-get update
apt-cache policy openssl
apt-get install openssl

apt-cache policy openssl will show you the candidate updates; apt-get install openssl will upgrade to the latest openssl version.

Actual:

# uname -a
Linux XXX 3.10-3-amd64 #1 SMP Debian 3.10.11-1 (2013-09-10) x86_64 GNU/Linux

# cat /etc/apt/sources.list | sed '/^#/d' | sed '/^$/d'
deb http://security.debian.org/ wheezy/updates main contrib non-free

# apt-cache policy openssl
openssl:
  Installed: 1.0.1e-3
  Candidate: 1.0.1e-3
  Version table:
 *** 1.0.1e-3 0
        100 /var/lib/dpkg/status
     1.0.1e-2+deb7u6 0
        500 ... <cannot post more than 2 "links"> wheezy/updates/main amd64 Packages
        500 ... <cannot post more than 2 "links"> wheezy/updates/main amd64 Packages
     1.0.1e-2+deb7u4 0
        500 ... <cannot post more than 2 "links"> wheezy/main amd64 Packages
        500 ... <cannot post more than 2 "links"> wheezy/main amd64 Packages

# apt-get install openssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
openssl is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

What gives?


I answered:

I don’t know where you got openssl 1.0.1e-3. But since its version sorts higher than the versions actually available in the repositories, they are not considered upgrade candidates.
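
You can confirm how dpkg orders the two version strings:

dpkg --compare-versions 1.0.1e-3 gt 1.0.1e-2+deb7u6 && echo "1.0.1e-3 sorts higher"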

Install the update by selecting its version explicitly:

apt-get install openssl=1.0.1e-2+deb7u6

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

using nginx with SNI

justlovingIT asked:

So far I have not used SNI with nginx. But as IP address pools are nearly exhausted and commercial XP support is about to cease (finally), I’m thinking about converting a few sites to SNI.

I’m aware of the general limitations and pitfalls that might come along with SNI (XP issue, very old browsers). But beyond that is there anything I should be aware of?

Like
- nginx-related pitfalls when using SNI
- issues/bugs with recent (notable!) browsers


I answered:

If your version of nginx shows TLS SNI support when you run nginx -V, then you’re ready to go. Just don’t use an IP address in the SSL server’s listen directive for any virtual host that should use SNI.
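
A quick way to check (nginx -V prints its configure information to stderr, hence the redirection):

nginx -V 2>&1 | grep "TLS SNI"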

For the listen directive itself, change, for instance:

listen 198.51.100.206:443 ssl;

to:

listen 443 ssl;

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Yum update problems on AWS with missing requirements and duplicate sendmail

Christian asked:

I am running an AWS VM and tried:

sudo yum update

Result:

Error: Protected multilib versions: sendmail-8.14.4-8.12.amzn1.x86_64 != sendmail-8.14.4-7.9.amzn1.i386
 You could try using --skip-broken to work around the problem
** Found 2 pre-existing rpmdb problem(s), 'yum check' output follows:
kernel-2.6.34.7-56.40.amzn1.x86_64 has missing requires of mkinitrd
sendmail-8.14.4-8.11.amzn1.x86_64 is a duplicate with sendmail-8.14.4-7.9.amzn1.i386

I read elsewhere to try this:

sudo yum --exclude=kernel* update

But I got the same result.

This is a production server which I want to upgrade, so I have to take special care. I was not the one who prepared it. It looks like I am using the Amazon version of Red Hat 4.4.6-3.

Any suggestions how to fix this?


I answered:

Use yum distro-sync instead of yum update to fix package version mismatches in this scenario. This allows for packages to be downgraded if necessary to match the versions in the repositories.
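
For example:

sudo yum distro-sync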


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

rewrite rule on location in nginx

Christian asked:

When I write the location in literally, it works; when I use the same location as a rewrite rule, it doesn’t! I don’t understand the logic. Can someone please explain?

   location /v3/ {
            alias /var/www/api/v3/html/;
            echo $document_uri;
            echo $document_root;
            echo $request_filename;
            echo $request_uri;
            echo $fastcgi_script_name;
   }

When I do the above, I get

/v3/info.php
/var/www/api/v3/html/
/var/www/api/v3/html/info.php
/v3/info.php
/v3/info.php

But if I now change the location to use rewrite:

   location ~ ^/(v\d+)/ {
            alias /var/www/api/$1/html/;
            echo $document_uri;
            echo $document_root;
            echo $request_filename;
            echo $request_uri;
            echo $fastcgi_script_name;
   }

The paths all get screwed up:

/v3/info.php
/var/www/api/v3/html/
/var/www/api/v3/html/
/v3/info.php
/v3/info.php

How come??


I answered:

You will need to match the entire URI to do this: when alias is used inside a regular expression location, its value is taken as the complete file path, so the regex must capture the rest of the URI and the alias must append that capture.

For instance:

location ~ ^/(v\d+)/(.*) {
    alias /var/www/api/$1/html/$2;
}

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.