How to enable nfsd to support IPv6 on Ubuntu Linux?

Evan Li asked:

I need to enable IPv6 NFS support on an Ubuntu Linux server. This server already supports IPv4 NFS.

Linux info -

root@nimbus-nfsserver:~# uname -a
Linux nimbus-nfsserver 2.6.35-22-server #35-Ubuntu SMP Sat Oct 16 22:02:33 UTC 2010 x86_64 GNU/Linux

Both IPv4 and IPv6 addresses are up:

root@nimbus-nfsserver:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:b1:30:88
          inet addr:10.114.165.41  Bcast:10.114.191.255  Mask:255.255.224.0
          inet6 addr: fc00:10:114:191:250:56ff:feb1:3088/64 Scope:Global
          inet6 addr: fe80::250:56ff:feb1:3088/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25486 errors:0 dropped:0 overruns:0 frame:0
          TX packets:204 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1542572 (1.5 MB)  TX bytes:28300 (28.3 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:89 errors:0 dropped:0 overruns:0 frame:0
          TX packets:89 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8316 (8.3 KB)  TX bytes:8316 (8.3 KB)

Current /etc/exports file:

root@nimbus-nfsserver:~# cat /etc/exports
/store *(rw,async,no_root_squash,no_subtree_check)

This is a purely internal NFS server; it allows NFS mounts from any IP address.

What I want to do is also allow NFS mounts from any IPv6 client. How should I modify the /etc/exports file, and what additional procedures should I follow?

More info -

root@nimbus-nfsserver:/etc/rc3.d# netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:60077           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:54482           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:41335           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:57184           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 :::48318                :::*                    LISTEN
root@nimbus-nfsserver:/etc/rc3.d# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  52246  status
    100024    1   tcp  54482  status
    100021    1   udp  42170  nlockmgr
    100021    3   udp  42170  nlockmgr
    100021    4   udp  42170  nlockmgr
    100021    1   tcp  60077  nlockmgr
    100021    3   tcp  60077  nlockmgr
    100021    4   tcp  60077  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100005    1   udp  55594  mountd
    100005    1   tcp  57184  mountd
    100005    2   udp  55594  mountd
    100005    2   tcp  57184  mountd
    100005    3   udp  55594  mountd
    100005    3   tcp  57184  mountd

I answered:

NFS over IPv6 support first appeared in Ubuntu 10.10, Maverick Meerkat. Most of the work on NFS over IPv6 in Linux was actually done by early 2010, too late for inclusion in 10.04. Since you have 10.04, you cannot use NFS over IPv6. Your only option is to upgrade to a newer supported release (12.04 or 14.04 LTS).
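
For reference, on a release that does support NFS over IPv6, the existing * wildcard in /etc/exports matches IPv6 clients as well, and exports(5) accepts IPv6 prefixes directly. A sketch restricting the export to a specific prefix (2001:db8::/64 is an illustrative documentation prefix, not your network):

/store 2001:db8::/64(rw,async,no_root_squash,no_subtree_check)

After editing, reload the export table with exportfs -ra.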


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Set file description in apache2

felixphew asked:

In the default file listing page (mod_autoindex) on apache2 (OS X), there is a column marked “description”. Does anyone know how to actually set what appears there? All of mine are blank, as are most of the ones on other servers that I see.

Sorry if this is really obvious, but I can’t find the answer anywhere else, including Apache’s docs.


I answered:

I’m not sure you read Apache’s docs on mod_autoindex closely enough, as they do describe two ways to set descriptions. Probably not your fault; they are kind of buried…

First, you can use AddDescription to set individual descriptions for files or groups of files by partial match.

Second, you can set IndexOptions +ScanHTMLTitles and Apache will use the <title> of each HTML document as its Description. This is CPU and disk intensive though.

Descriptions given by AddDescription take precedence.
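
For concreteness, a minimal sketch of both approaches in Apache configuration (the directory and filenames are illustrative):

<Directory "/var/www/html/downloads">
    Options +Indexes
    # Per-file and partial-match descriptions:
    AddDescription "Quarterly report (PDF)" report-q3.pdf
    AddDescription "Gzipped tarball" .tar.gz
    # Or let Apache read each HTML file's <title> (CPU- and disk-heavy):
    IndexOptions +ScanHTMLTitles
</Directory>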


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

No shell and launch application on login

greyfox asked:

I am working on migrating an application from OpenVMS to Red Hat Linux 6. The application is a green-screen terminal application. The users will log into Linux via SSH, and the application should start automatically, but they should never have access to the shell. Once the application closes or crashes, it should automatically log them out. What is the best way to approach this?

I’ve tried creating a new user with the following command.

useradd -s /sbin/nologin test

I then added ftp & to the user’s .bash_profile in the hope that it would open the ftp console immediately and then log them out once they quit. However, upon SSH authentication the session is killed. Any ideas?


I answered:

Why don’t you just set the application as the user’s shell? That means it is the only thing that gets run when they log in, and (barring some sort of access within the application itself) they can’t really do anything else.
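
A minimal sketch, assuming the application lives at the hypothetical path /usr/local/bin/greenscreen:

# Make the application the user's login shell:
usermod -s /usr/local/bin/greenscreen test

# Some tools (chsh, certain FTP daemons) insist the shell be listed here:
echo /usr/local/bin/greenscreen >> /etc/shells

Since the application is then the only process in the session, the SSH session ends as soon as it exits or crashes, which also takes care of the automatic logout.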


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How, with a mounted disk image, can I migrate a MySQL database to a new server?

Jonathan Hayward asked:

One Ubuntu EC2 VPS is fried after an upgrade that went awry.

I don’t have the server live, but I have its disk mounted on another (Ubuntu EC2) VPS.

How can I migrate the contents of a database on the mounted filesystem so that I can load it on another machine?

I’ve used tar and sftp to move files and directories I’ve made myself. I don’t know which files would need to be copied to migrate MySQL, and I’ve gotten the impression that migrating a database properly means using database facilities to dump it on one machine, transferring the dump file, and then using database facilities to load it on the other machine: not simply cp.


I answered:

It’s generally safe to copy the files, so long as MySQL is not running. This doesn’t mean that you won’t have to repair them later, especially if MySQL had previously crashed.

You’ll find them in the specified datadir in MySQL’s config; the default is usually /var/lib/mysql.
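
A minimal sketch, assuming the old disk is mounted at /mnt/olddisk and both servers use the default datadir:

# On the new server, with MySQL stopped:
service mysql stop
rsync -a /mnt/olddisk/var/lib/mysql/ /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql
service mysql start

# Check the copied tables, repairing where possible:
mysqlcheck --all-databases --auto-repair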


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Understanding XP users accessing a UCC certificate on IIS8

Simon asked:

I have a single IIS website which hosts 3 different websites all using the same UCC SSL certificate. (The code of my site examines the host header itself to decide which variation of the site to show).

I already fixed my most fatal mistake for my Internet Explorer users on XP, which was to disable the ‘Require Server Name Indication’ setting in IIS. Since SNI is not supported on Windows XP, if it were enabled, an XP user switching to HTTPS would basically get kicked off the site with no warning (don’t tell my boss).

So I completely understand that when a user accesses any one of the 3 sites via HTTPS that all share the same IP, he is sent to the same IIS site and served the UCC certificate. Even XP supports UCC/SAN certificates so the client accepts the certificate and shows the site as expected.

Now… forget all that and consider a second, non-hypothetical scenario.

  • Let’s pretend I have two sites: cats.com and dogs.com
  • They’re different sites and are deployed as two different websites in IIS 8.
  • They’re both on the same UCC SSL certificate*
  • SNI is disabled
  • They are both bound to the same IP address

Now consider XP users accessing https://cats.com

Here’s how I understand things work:

  • DNS gives the browser back my IP; let’s say 1.2.3.4
  • An HTTPS connection is negotiated, but because XP doesn’t know anything about SNI, it isn’t sending an SSL host header (or whatever they’re called); it’s just accessing the secure site via the IP.
  • So IIS will look for an HTTPS website binding associated with that IP, and it will actually find TWO.
  • Let’s say IIS decides to just serve you the first one, so it gives you cats.com, which is what you’re expecting. So that’s fine.
  • BUT let’s say you go to https://dogs.com: shouldn’t you actually be served the site for cats.com, because SSL on XP doesn’t know about SNI and anyway it’s disabled in IIS?

Well, you don’t. You get dogs.com. I’ve tried this, and cats.com and dogs.com both work from a Windows XP Internet Explorer 8 client.

I just don’t understand why this works. I would have expected this behavior only in my first scenario when they all share a single IIS application.

My best guess is that IIS 8 is receiving the Host header anyway and is smart enough to route the request to the correct website. Is that what’s happening?

Can someone explain further?

* just to deliberately annoy technically minded cat and dog owners ;-)


I answered:

IIS uses the HTTP Host: header to determine which web site to serve, just as with any other request.
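
You can watch the two layers separately with openssl s_client: the certificate is chosen during the TLS handshake (by SNI, or by the IP:port binding when none is sent), while the website is chosen afterwards from the Host: header inside the encrypted stream. A sketch, using the placeholder IP and names from the question:

# Connecting to a bare IP sends no SNI, much like an XP client
# (newer OpenSSL builds accept -noservername to force this);
# IIS must pick the certificate from the IP:port binding alone,
# yet the Host: header still selects dogs.com:
printf 'GET / HTTP/1.1\r\nHost: dogs.com\r\nConnection: close\r\n\r\n' |
openssl s_client -connect 1.2.3.4:443 -quiet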


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

NGINX+Windows how to serve a static html file (100kb) to 10k concurrent users

Amina asked:

This is the NGINX config file:

gzip on;
gzip_disable "MSIE [1-6].";
gzip_vary on;
gzip_proxied any;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
access_log off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 0;
reset_timedout_connection on;
client_body_timeout 10;

...

events {
    worker_connections 4000;
}
worker_processes 4;

The problem is that many users cannot get the file (cannot connect/timeout).
The file is a push message for a desktop app.

So, I have two questions:
1. Does anyone know the maximum “worker_connections” that nginx supports on Windows 2008 R2?
2. Do I need to change something in the Windows Registry? I cannot find what to change, or the exact values.

I don’t want to be off-topic, but just to give the background: today I am serving the file from Amazon S3, and it costs almost $1000 per month. I have a dedicated server, so I want to save the $$$ and serve the file myself. If you know of a cheaper alternative to S3, you can comment.

Thank you.


I answered:

On Windows, nginx has significant limitations:

  • You can only have 1024 worker_connections; any higher number will be ignored. And even if you configure more than one worker process, only one will actually do any work.
  • nginx can only use select(); there is no high-performance event handler.

These are the reasons why using nginx on Windows in high-performance, high-scalability environments is a bad idea.

Switch to nginx on a non-Windows operating system as soon as possible.
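
For comparison, a sketch of the equivalent settings on a Linux host, where neither limitation applies:

worker_processes auto;

events {
    use epoll;                  # high-performance event handler, Linux only
    worker_connections 10240;   # not capped at 1024 as on Windows
}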


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Bash: mv directory one at a time

NinjaCat asked:

I am trying to move all subdirectories of a folder to another share on the same server. If I do a mv *, I will run out of space, since the folders are not removed until all of them have been transferred. So I’d like to create a short script that loops through each one. Does anyone have an example that I can look at? I’ve searched around but can’t find exactly what I am looking for.


I answered:

You want for.

An example (this will just show what will be done):

# the */ glob matches only directories and, unlike `ls`, handles spaces safely
for item in */; do
    echo mv "$item" /destination/directory
done

When you’re happy, remove echo to do it for real.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

What does this notice mean "SNI: extension not received from the client", in stunnel log

Kaustubh Khare asked:

I am working on multiple domain certificates using stunnel. I have two domains, test.int and test1.int, each with its own certificate, plus one default certificate. I used the sni option of stunnel to serve the per-domain certificates. Using a JavaScript WebSocket I am trying to connect to the secure server, but the log file output shows

SNI: extension not received from the client

So I am not sure whether the sni option is working or not. Could anyone help me work out whether it is? What is the meaning of the “SNI: extension not received from the client” message?

Thanks in advance for your valuable answers.

My stunnel.conf file

output=/var/log/stunnel.log
pid=
debug = 7
fips = no
compression = rle

options = NO_SSLv2

syslog = no

[websockets]
cert = /usr/local/etc/stunnel/default.crt
key = /usr/local/etc/stunnel/default.key
accept  = 0.0.0.0:9443
connect = 127.0.0.1:9000


[sni1]
sni = websockets:test.int
cert = /usr/local/etc/stunnel/test.int.crt
key = /usr/local/etc/stunnel/test.int.key
connect = 127.0.0.1:9000

[sni2]
sni = websockets:test1.int
cert = /usr/local/etc/stunnel/test1.int.crt
key = /usr/local/etc/stunnel/test1.int.key
connect = 127.0.0.1:9000

Log file output

Service [websockets] accepted (FD=9) from 192.168.0.132:38257
2014.04.14 18:30:32 LOG7[7085:139648669734672]: Service [websockets] started
2014.04.14 18:30:32 LOG5[7085:139648669734672]: Service [websockets] accepted connection from 192.168.0.132:38257
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): before/accept initialization
2014.04.14 18:30:32 LOG5[7085:139648669734672]: SNI: extension not received from the client
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 read client hello A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write server hello A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write change cipher spec A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 write finished A
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 flush data
2014.04.14 18:30:32 LOG7[7085:139648669734672]: SSL state (accept): SSLv3 read finished A
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    2 items in the session cache
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client connects (SSL_connect())
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client connects that finished
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 client renegotiations requested
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   19 server connects (SSL_accept())
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   19 server connects that finished
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 server renegotiations requested
2014.04.14 18:30:32 LOG7[7085:139648669734672]:   14 session cache hits
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 external session cache hits
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    0 session cache misses
2014.04.14 18:30:32 LOG7[7085:139648669734672]:    2 session cache timeouts
2014.04.14 18:30:32 LOG6[7085:139648669734672]: SSL accepted: previous session reused
2014.04.14 18:30:32 LOG6[7085:139648669734672]: connect_blocking: connecting 127.0.0.1:9000
2014.04.14 18:30:32 LOG7[7085:139648669734672]: connect_blocking: s_poll_wait 127.0.0.1:9000: waiting 10 seconds
2014.04.14 18:30:32 LOG5[7085:139648669734672]: connect_blocking: connected 127.0.0.1:9000
2014.04.14 18:30:32 LOG5[7085:139648669734672]: Service [websockets] connected remote server from 127.0.0.1:44325
2014.04.14 18:30:32 LOG7[7085:139648669734672]: Remote socket (FD=10) initialized

JavaScript code to connect to the secure server:

wss://192.168.0.132:9443/bo/socket.bo.php

I am using Chrome 26 and Firefox 24 as browsers, and CentOS 6 as the OS.


I answered:

You tried to connect directly to an IP address, rather than a hostname. So there wouldn’t be any point to SNI, as you didn’t provide a name. You’re meant to use the hostname.

For instance:

wss://example.com:9443/bo/socket.bo.php
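
If the test names don’t resolve yet, a quick sketch for testing is to map them to the server’s address in the client machine’s hosts file (the IP is the one from the question):

# /etc/hosts on the client (or C:\Windows\System32\drivers\etc\hosts):
192.168.0.132   test.int test1.int

Then connect with wss://test.int:9443/bo/socket.bo.php, and the browser will send SNI for test.int so stunnel can match the [sni1] section.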

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Setting up Nginx: redirecting requests to an auth server and getting redirected back to serve the request

Anup asked:

Any incoming request on (x.x.x.x.x) -> redirect to x.x.x.x.auth.domain.edu -> which authenticates the user and redirects back to the x.x.x.x server (with a cookie set; in my case it is an EZproxy server doing the cookie setting).

I have tried rewriting the request URL and also using proxy_pass, both resulting in a looping error (from the browser).

I must be missing some basic header or something; I have not been able to work out what since yesterday morning.
Any suggestions regarding how the config should look?


I answered:

Your root directive has two problems:

  1. It uses a relative path. When a relative path is used, it’s relative to a default directory compiled into nginx. Do you know which one that is? It’s best to specify absolute paths.
  2. It is in the wrong place. The root directive should be specified in the server block, as in the sketch below. This is one of the most common nginx misconfigurations.
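
A minimal sketch of the correct placement (the paths and names are illustrative):

server {
    listen 80;
    server_name example.edu;

    # Absolute path, declared once at server level...
    root /var/www/example;

    location / {
        # ...and inherited by every location block.
        try_files $uri $uri/ =404;
    }
}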

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Sharing unix socket via docker volume – permission denied

soupdiver asked:

I am trying to share my php5-fpm socket via a volume with my nginx webserver. FPM and nginx are running in different containers, and I want to get them working via a shared volume in which I place the socket file from FPM.

2014/04/13 10:53:35 [crit] 33#0: *1 connect() to unix:/container/fpm/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.8.2, server: docker.dev, request: "GET /test.php HTTP/1.1", upstream: "fastcgi://unix:/container/fpm/run/php5-fpm.sock:", host: "docker.dev"

I already tried setting chmod 777 on php5-fpm.sock and changing its group to www-data.

Dockerfile of fpm container

FROM ubuntu:13.10

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y php5-cli php5-common
RUN apt-get install -y php5-fpm php5-cgi

ADD ./php-fpm.conf /etc/php5/fpm/php-fpm.conf
ADD ./pool.d/www.conf /etc/php5/fpm/pool.d/www.conf
ADD ./php.ini /etc/php5/fpm/php.ini

CMD ["/usr/sbin/php5-fpm"]

Dockerfile of nginx container

FROM ubuntu:13.10

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx

ADD ./test.php /var/test/test.php
ADD ./test.html /var/test/test.html
ADD ./nginx.conf /etc/nginx/nginx.conf
ADD ./site /etc/nginx/sites-enabled/test

EXPOSE 80

CMD ["/usr/sbin/nginx"]

I can access test.html, but when accessing test.php I get 502 Bad Gateway.

Is there anything else I have to take care of regarding permissions when sharing things via volumes?


I answered:

Different containers cannot talk to each other via UNIX domain sockets, since they are in different network namespaces. There is an unofficial kernel patch that allows this, but you’re on your own if you use it.
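
Under that assumption, the usual workaround at the time was to connect the containers over TCP instead. A sketch, with illustrative image and container names:

# In the FPM pool config (www.conf), listen on TCP instead of a socket:
#   listen = 0.0.0.0:9000

# Run the containers with a link so nginx can resolve the fpm container:
docker run -d --name fpm my-fpm-image
docker run -d --name web --link fpm:fpm -p 80:80 my-nginx-image

# In the nginx site config, point FastCGI at the linked container:
#   fastcgi_pass fpm:9000;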


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.