Does Opcache fix eAccelerator's and XCache's memory leak bugs?

matteo asked:

Both XCache and eAccelerator have a memory leak bug: if you include the same file tens of times in a loop, memory leaks even though the code executed in the file does not itself consume any memory per execution. (The exact same script without eAccelerator or XCache does not leak memory.)

I replaced XCache with eAccelerator, and at first I did not observe the bug, but then it appeared.

Does OPCache have this fixed or does it suffer from the same bug?

I use PHP 5.4.29.

If OPcache has this fixed, can I install it from PECL, and will it be the same as the one that comes bundled with PHP 5.5+ (that is, without the memory leak)? Or do I have to upgrade to PHP 5.5 or higher?


I answered:

If you think eAccelerator and XCache leak badly, try using APC, which just plain crashes PHP entirely.

In the couple of years I’ve been using OPcache I’ve never seen a memory leak or crash.

That said, you should update PHP anyway, as 5.4 will be end of life in just a few days.
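
For what it's worth, OPcache for PHP 5.2–5.4 is distributed on PECL as zendopcache, and it is the same code base that later shipped bundled with PHP 5.5+. A minimal sketch of installing and enabling it (the extension path is an assumption; check where PECL drops opcache.so on your system):

pecl install zendopcache

# then enable it in php.ini (extension path varies by system):
zend_extension=/usr/lib64/php/modules/opcache.so
opcache.enable=1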


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Squid running out of filedescriptors on CentOS 7

Pandorica asked:

I’m running Squid 3.3 (EPEL) on CentOS 7 and recently I have been getting the following error message in my cache.log

WARNING! Your cache is running out of filedescriptors

I am slightly confused by this because I seem to have ample descriptors available:

squidclient mgr:info | grep 'file descri'
Maximum number of file descriptors:   16384
Available number of file descriptors: 16326
Reserved number of file descriptors:   100

Squid was also compiled with this flag:

--with-filedescriptors=16384

Squid confirms that these are actually available on startup:

2015/08/18 21:11:45 kid1| With 16384 file descriptors available

However, this error keeps occurring. Not long after it is logged, the squid process also seems to hit 100% CPU or consume nearly all of the system memory (over 90%), causing internet speeds to drop to a crawl or hang indefinitely. Killing the process and restarting resolves it, but eventually it happens again.

I have a total of 8 GB of memory available; these are the memory/cache-related parameters in my squid.conf:

cache_dir ufs /var/spool/squid 16000 16 256
cache_mem 1024 MB

I am also using ufdbguard and additional helper plugins for Kerberos and NTLM authentication.

Any advice?


I answered:

The number of file descriptors is set in the systemd unit file. By default this is 16384, as you can see in /usr/lib/systemd/system/squid.service.

To override this, create a local override at /etc/systemd/system/squid.service which changes the number of file descriptors. It should look something like this:

.include /usr/lib/systemd/system/squid.service

[Service]
LimitNOFILE=65536

Do not edit the default file /usr/lib/systemd/system/squid.service directly, as it will be overwritten whenever the package is updated. That is why the override goes in a local file.
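
Alternatively, on systemd versions recent enough to support drop-in directories (CentOS 7's is), you can get the same effect without copying the whole unit. A sketch; the drop-in file name is just a convention:

mkdir -p /etc/systemd/system/squid.service.d
cat > /etc/systemd/system/squid.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF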

After creating this file, tell systemd about it:

systemctl daemon-reload

and then restart squid.

systemctl restart squid

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

DTrace limited on Centos/Fedora

kainaw asked:

I wanted to work on a problem on a CentOS 6 box. I ran dtrace and it failed. It doesn't accept -n or -l or -P or any of the usual command line options. It claims to ONLY accept -h, -G, -C, -I, -s, and -o.

I figured this must be some weird CentOS thing, so I went to verify on a Fedora 22 box. Same issue: it only accepts a very limited set of command line options. I then tried an Oracle box, which is very much Red Hat-like, similar to CentOS and Fedora. It worked fine; I was able to run just dtrace and get a long list of all the command line options.

I went back to CentOS and Fedora. When I enter dtrace, the output is Usage /bin/dtrace [--help] [-h | -G] [-C [-I<Path>]] -s File.d [-o <File>].

So, after an hour of Googling, I've given up. How do you get dtrace to work properly on CentOS/Fedora? I've tried both as root and as a regular user. I've searched for packages that might add the missing functionality. I've tried removing and reinstalling dtrace. The only thing left is to remove the package and install dtrace from source.


I answered:

The dtrace you find on Oracle Linux is not the dtrace that comes with Linux SystemTap and that you will find on every other Linux distribution.

Rather, it is a port of Solaris dtrace provided by Oracle and only available on Oracle Linux.

The two commands are completely different and have different purposes.

The standard Linux kernel tracing facility is known as SystemTap, and the limited dtrace command you found on CentOS and Fedora is just a compatibility script that ships with it. You can always use systemtap directly.
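
For example, a SystemTap one-liner that traces open() calls looks like this (a sketch; you'll need the systemtap packages and kernel-devel/debuginfo packages matching your running kernel):

yum install systemtap systemtap-runtime kernel-devel-$(uname -r)
stap -e 'probe syscall.open { printf("%s(%d) open\n", execname(), pid()) }'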


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Can I directly register the output of a command in ansible as a boolean?

Kit Sunde asked:

I have some code that checks for the existence of something; if the output has 2 lines, the post exists. Can I register the variable from the check as a boolean directly in the first task, rather than needing to cast it in the second? My current solution:

- name: Check if home page has been created
  sudo_user: www-data
  shell: wp post list --post_type=page --post_title=Home --post_status=publish
    chdir={{wordpress_path}}
  register: is_homepage_created

- name: Booleanize homepage check
  set_fact:
    is_homepage_created={{is_homepage_created.stdout_lines|length >= 2}}

I answered:

After some playing around with wp, I could not get it to actually filter output by post title; it always displayed a list of every page. This may or may not be relevant to you.

Given this apparent bug, I’d rewrite the play as follows:

First, have wp output in CSV format, which is easier to work with, then check whether the desired output appears within it. In CSV format, if a page named Home exists, the string ,Home, will appear in the output and should not match anything else, so that is what we will look for.
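
For illustration only, the CSV output should look something like this (the exact columns depend on your wp-cli version, so treat the layout as an assumption):

ID,post_title,post_name,post_date,post_status
2,Home,home,2015-08-20 10:00:00,publish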

- name: Get list of WordPress pages
  sudo_user: www-data
  command: wp post list --post_type=page --post_title=Home --post_status=publish --format=csv
    chdir={{wordpress_path}}
  register: wordpress_pages

- name: Create the homepage if it doesn't exist
  sudo_user: www-data
  command: wp post create --post_type=page --post_title=Home --porcelain
    chdir={{wordpress_path}}
  when: "',Home,' not in wordpress_pages.stdout"

Finally, it's best practice to use command instead of shell unless you actually need the command to be processed through a shell.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Compile Node with GCC v4.9.2

Mick asked:

I can't seem to compile Node.js on CentOS 6.6 (64-bit) with GCC v4.9.2:

$ ./configure
Node.js configure error: No acceptable C compiler found!

        Please make sure you have a C compiler installed on your system and/or
        consider adjusting the CC environment variable if you installed
        it in a non-standard prefix.

More details:

$ which gcc
/usr/bin/gcc

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/local/gcc/libexec/gcc/x86_64-unknown-linux-gnu/4.9.2/lto-wrapper
Target: x86_64-unknown-linux-gnu
Configured with: ../gcc-current/configure --enable-languages=c,c++,fortran --enable-multilib --prefix=/usr/local/gcc
Thread model: posix
gcc version 4.9.2 (GCC)

$ which python
/usr/bin/python

$ python --version
Python 2.6.6

I have tried to set CC..

$ CC="/usr/local/gcc/bin/gcc"

$ set | grep CC
CC=/usr/local/gcc/bin/gcc

but it leads to the same error.

Any ideas?


Edit 1 – Michael’s Question

What happened to the compiler the system came up with?

I installed a newer version of the compiler (v4.9.2) in /usr/local/gcc, removed the original compiler (v4.4.7), and tried to replace it this way:

yum remove -y gcc gcc-c++ cpp

sudo mv /usr/bin/g++  /usr/bin/g++_old
sudo mv /usr/bin/c++ /usr/bin/c++_old

sudo ln -s -f /usr/local/gcc/bin/gcc  /usr/bin/gcc
sudo ln -s -f /usr/local/gcc/bin/g++  /usr/bin/g++
sudo ln -s -f /usr/local/gcc/bin/c++ /usr/bin/c++
sudo ln -s -f /usr/local/gcc/bin/cpp /usr/bin/cpp
sudo ln -s -f /usr/local/gcc/bin/gfortran /usr/bin/gfortran
sudo ln -s -f /usr/local/gcc/bin/gcov /usr/bin/gcov

sudo cp /usr/local/gcc/lib64/libstdc++.so.6.0.20 /usr/lib64/.
sudo mv /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6.bak
sudo ln -s -f /usr/lib64/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6

I am doing this because I am installing HHVM on this system which needs a recent compiler.


I answered:

That compiler setup may work for hhvm, but it's pretty useless for anything else. It's quite difficult to have two versions of gcc on the same system. You could use a Software Collection, but I personally don't like those, as they are not very easy to use.
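
If you do want to try a Software Collection, the devtoolset collections provide a newer gcc alongside the system one. A sketch, assuming the centos-release-scl repository package and the devtoolset-3 collection are available for your release:

yum install centos-release-scl
yum install devtoolset-3-gcc devtoolset-3-gcc-c++
scl enable devtoolset-3 bash    # starts a subshell with the newer gcc first in PATH
gcc --version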

You should be using CentOS 7 anyway, as it won't require you to replace the compiler, and more things will be current. Overall, basing the system on CentOS 7 will solve most of these problems and be more future-proof.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Issue with bind_ip on mongodb mon Virtualbox

black sensei asked:

I have been struggling with mongo's bind IP for a while; I think it's time to shout for help. I am using the host-only network 192.168.56.0/24. I have 4 boxes: 3 for mongo (192.168.56.111, 192.168.56.112, 192.168.56.113) and one for the app (192.168.56.114).

So the mongo bind parameter is bind_ip = 127.0.0.1,192.168.56.114,192.168.56.113,192.168.56.112,192.168.56.111,10.0.2.15, and this is what I get in the log:

2015-08-22T12:35:44.547+0000 E NETWORK  [initandlisten] listen(): bind() failed errno:99 Cannot assign requested address for socket: 192.168.56.114:27017
2015-08-22T12:35:44.553+0000 I JOURNAL  [initandlisten] journal dir=/var/lib/mongodb/journal
2015-08-22T12:35:44.554+0000 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
2015-08-22T12:35:44.623+0000 I JOURNAL  [durability] Durability thread started
2015-08-22T12:35:44.624+0000 I JOURNAL  [journal writer] Journal writer thread started
2015-08-22T12:35:44.624+0000 I CONTROL  [initandlisten] MongoDB starting : pid=3519 port=27017 dbpath=/var/lib/mongodb 64-bit host=vagrant-ubuntu-trusty-64
2015-08-22T12:35:44.625+0000 I CONTROL  [initandlisten] db version v3.0.5
2015-08-22T12:35:44.625+0000 I CONTROL  [initandlisten] git version: 8bc4ae20708dbb493cb09338d9e7be6698e4a3a3
2015-08-22T12:35:44.625+0000 I CONTROL  [initandlisten] build info: Linux ip-10-183-35-50 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2015-08-22T12:35:44.625+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2015-08-22T12:35:44.626+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1,192.168.56.114,192.168.56.113,192.168.56.112,192.168.56.111,10.0.2.15", port: 27017 }, storage: { dbPath: "/var/lib/mongodb" }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2015-08-22T12:35:44.631+0000 I CONTROL  [initandlisten] now exiting
2015-08-22T12:35:44.631+0000 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2015-08-22T12:35:44.632+0000 I NETWORK  [initandlisten] removing socket file: /tmp/mongodb-27017.sock
2015-08-22T12:35:44.632+0000 I NETWORK  [initandlisten] shutdown: going to flush diaglog...
2015-08-22T12:35:44.632+0000 I NETWORK  [initandlisten] shutdown: going to close sockets...
2015-08-22T12:35:44.632+0000 I STORAGE  [initandlisten] shutdown: waiting for fs preallocator...
2015-08-22T12:35:44.632+0000 I STORAGE  [initandlisten] shutdown: final commit...
2015-08-22T12:35:44.635+0000 I JOURNAL  [initandlisten] journalCleanup...
2015-08-22T12:35:44.635+0000 I JOURNAL  [initandlisten] removeJournalFiles
2015-08-22T12:35:44.636+0000 I JOURNAL  [initandlisten] Terminating durability thread ...
2015-08-22T12:35:44.735+0000 I JOURNAL  [journal writer] Journal writer thread stopped
2015-08-22T12:35:44.736+0000 I JOURNAL  [durability] Durability thread stopped
2015-08-22T12:35:44.736+0000 I STORAGE  [initandlisten] shutdown: closing all files...
2015-08-22T12:35:44.736+0000 I STORAGE  [initandlisten] closeAllFiles() finished
2015-08-22T12:35:44.737+0000 I STORAGE  [initandlisten] shutdown: removing fs lock...
2015-08-22T12:35:44.737+0000 I CONTROL  [initandlisten] dbexit:  rc: 48

I really don't see what's wrong. Aside from loopback, the box's own IP, and 0.0.0.0, every other IP causes mongodb to fail to start.


I answered:

The bind_ip directive specifies the IP addresses on that same system on which MongoDB listens for connections, not the IP addresses from which it accepts connections. As the MongoDB documentation puts it:

Set this option to configure the mongod or mongos process to bind to and listen for connections from applications on this address. You may attach mongod or mongos instances to any interface; however, if you attach the process to a publicly accessible interface, implement proper authentication or firewall restrictions to protect the integrity of your database.

So it should either be absent (recommended), contain only 127.0.0.1 to accept local connections exclusively, or list the IP addresses of the local network interfaces you want to listen on.

If you want to restrict what IP addresses can connect to MongoDB, you’ll need to use the firewall.
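
For instance, on the box at 192.168.56.111, a working sketch would bind only to addresses that box actually owns (the 10.0.2.15 NAT address is taken from your log; adjust per box):

bind_ip = 127.0.0.1,192.168.56.111,10.0.2.15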


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

What does 'Logged in without disclosing public key – Intrusion?' mean?

example asked:

I have set up a new Debian VM and installed gitlab-ce. There is not really much more on the VM.
Right from the beginning, the following message started to show up in the auth.log:

Mon 2015-08-24 21:47:36.154862 CEST [s=a93d5b0787f54cb68c24d8c7c55985a4;i=2c1bc8;b=567468ca921c4b52ba291
    _TRANSPORT=syslog
    _UID=0
    _GID=0
    _BOOT_ID=567468ca921c4b52ba2911c8b97e5f3a
    _MACHINE_ID=b6d23c0be1dbee31de2dd2b1553a4f0c
    _HOSTNAME=kraken
    SYSLOG_FACILITY=4
    PRIORITY=4
    SYSLOG_IDENTIFIER=root
    _COMM=logger
    MESSAGE=ssh/bash[9276]: Logged in without disclosing public key - Intrusion?
    _PID=9283
    _SOURCE_REALTIME_TIMESTAMP=1440445656154862

By now it appears a few hundred times a day.

What exactly does it mean? Should I be worried?


Update: the message does seem to come from sshd:

   1 23979 23979 23979 ?           -1 Ss       0   4:37 /usr/sbin/sshd -D  
23979  9274  9274  9274 ?           -1 Ss       0   0:00  _ sshd: root@pts/2    
 9274  9276  9276  9276 pts/2     9276 Ss+      0   0:00      _ -bash

It seems to be triggered at every login from root (at least), and then appears in the logs anywhere from once to 40 or so times.

OpenSSH_6.7p1 Debian-5, OpenSSL 1.0.1k 8 Jan 2015

I answered:

The journal entry indicates that, judging by the PID, bash posted the log message using the logger program. This means that something in your shell startup scripts is creating the message.
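
To track it down, you could grep the usual shell startup files for the message text. A sketch; the exact set of files worth checking is an assumption and depends on how the shell is started:

grep -rF "Logged in without disclosing" /etc/profile /etc/profile.d /etc/bash.bashrc /root/.bashrc /root/.profile 2>/dev/null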


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Can't connect to Nginx from remote browser (weird issue)

gnoirzox asked:

I’ve got a really weird issue with Nginx: I can’t access it from my browser.

I have installed a CentOS 7 virtual machine on my computer with Nginx, PHP-FPM and MariaDB installed and configured.

The Nginx configuration is the following:

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        root   /path/to/www;
        index  index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        try_files $uri $uri/ =404;
        root   /path/to/www/;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

I have also configured iptables with the following rules:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
INPUT_ZONES  all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:mysql

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
OUTPUT_direct  all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:http
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:https
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:mysql

And I have also decided to disable SELinux for the time being…

To finish, when executing "tcpdump port 80", I get this message while trying to access the web server:

listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes
19:39:51.574889 IP 192.168.56.1.59338 > 192.168.56.101.http: Flags [S], seq 2033938019, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 551897257 ecr 0,sackOK,eol], length 0

And my computer's web browser says that it can't connect to the specified server…

Do you have any idea what might cause this issue? Did I miss something?

Sorry for this long message, but I really have no idea what to do now…

Thanks


I answered:

Your firewall rules reject all incoming traffic.

You tried to deal with this by manually appending rules to allow HTTP, HTTPS, and MySQL connections, but this does not work, since incoming packets are already rejected by an earlier rule before your rules are ever reached.

Further, your system is running firewalld.

To resolve the problem, you should use firewalld to manage your firewall rules.

For example:

firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --add-service=mysql

To make them persist, run:

firewall-cmd --runtime-to-permanent

(This last command requires at least CentOS 7.1.)
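
If you're still on 7.0, a rough equivalent is to repeat each rule with --permanent and then reload:

firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --permanent --add-service=mysql
firewall-cmd --reload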


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Dell DRAC vs IPMI

Kevin Baker asked:

I currently have a Dell PE R610 with a DRAC card installed. I was looking at getting a Dell PE C6100, and the description says it has IPMI with a dedicated NIC. I have my server in a co-location facility about 1.5 hours away, so the DRAC has saved me a trip many times. Will the IPMI work like the DRAC?

Thanks,
Kevin


I answered:

DRAC is based on IPMI, so it will work approximately the same, but with fewer features. You will still be able to get a remote console, change BIOS and firmware settings, update firmware, have alerts emailed to you, and so on. All the most critical and basic stuff will work.
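
For day-to-day remote management over the network, the standard client is ipmitool. A sketch (the hostname, user, and password below are placeholders):

ipmitool -I lanplus -H 203.0.113.10 -U root -P changeme chassis power status
ipmitool -I lanplus -H 203.0.113.10 -U root -P changeme sol activate    # serial-over-LAN console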


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Copying ansible keypair public key to existing AWS instances

Justin asked:

We have around 10 instances already running on AWS using my personal AWS keypair justin.pem.

I am setting up an Ansible box, and generated a new AWS keypair called ansible.pem. I copied ansible.pem to the Ansible instance into .ssh and have it ready to use.

The problem is: how do I inject the ansible.pem public key into .ssh/authorized_keys on each of our existing AWS instances?

When we create new instances, I want to assign the justin.pem key pair, but then Ansible won't be able to ssh into the newly created instances either.

What is the solution to this? Seems like a chicken and egg problem.


I answered:

Create new instances with the ansible.pem key pair instead, and then use Ansible's authorized_key module to distribute any additional public keys that should have access to the instances, such as the public key corresponding to justin.pem.

- name: Install justin's ssh key
  authorized_key: user=ec2-user
                  key="{{lookup('file', '/home/justin/.ssh/justin.pub')}}"

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.