ipset not being applied to iptables

cardinalPilot asked:

I’m trying to filter out a country that keeps probing my SMTP server (CentOS 6), and I can’t seem to get the ipset to work right in iptables.

I downloaded that country’s IP addresses from ipdeny.com and installed the list as a text file. Originally, I had all my blacklist IP addresses in one big, long iptables chain, but that could really hurt CPU usage, hence my wanting to use an ipset.

Here’s an excerpt from that IP addresses file:

185.40.4.31
80.82.65.237
2.60.0.0/14

So now I’m trying to use that list in an ipset set. I verify the ipset set is populated using ‘ipset list’.

Name: blacklist
Type: hash:net
Header: family inet hashsize 2048 maxelem 65536
Size in memory: 108816
References: 1
Members:
....
185.40.4.31
185.40.152.0/22
...

With this ipset, I add it to iptables:

iptables -A INPUT -p tcp -m set --set blacklist src -j DROP

But when I test the set using hping3, the packets still get through.

hping3 --syn --destport 25 --count 3 -a 185.40.4.31 <server_ip>

When I was using the long iptables chain, things were working as expected.

Here’s the abbreviated output of iptables -L -n (I edited out most of the 6,200+ ipdeny entries):

Chain INPUT (policy DROP)
target     prot opt source               destination
DROP       all  --  217.199.240.0/20     0.0.0.0/0
DROP       all  --  217.199.208.0/20     0.0.0.0/0
...
DROP       all  --  2.60.0.0/14          0.0.0.0/0
DROP       all  --  94.102.50.41         0.0.0.0/0
DROP       all  --  80.82.65.237         0.0.0.0/0
DROP       all  --  185.40.4.31          0.0.0.0/0
ACCEPT     all  --  192.168.2.0/24       0.0.0.0/0
ACCEPT     all  --  192.168.1.0/24       0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DROP       tcp  --  0.0.0.0/0            0.0.0.0/0           tcp flags:!0x17/0x02 state NEW
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:27944 state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:21 state NEW
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:53
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80 state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:443 state NEW
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:25
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:587
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:993
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:995
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:143
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:27940
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:110
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           icmp type 8
LOG        all  --  0.0.0.0/0            0.0.0.0/0           LOG flags 0 level 4
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       tcp  --  0.0.0.0/0            0.0.0.0/0           match-set blacklist src

Chain FORWARD (policy DROP)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

I answered:

Your rule never takes effect because you added it to the end of the chain. Immediately preceding it is a rule that drops all traffic, so your rule is never reached. In iptables, rules are matched in order; this differs from many other firewalls.

To resolve the problem, move the rule earlier in the chain. If you really want to blacklist those addresses, it should be as early in the chain as possible, e.g. as the first rule.
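
For example, something like the following should delete the rule from its current position and re-insert it as the first rule (a sketch; note that --match-set is the current spelling of the option, and --set is a deprecated synonym for it):

iptables -D INPUT -p tcp -m set --match-set blacklist src -j DROP
iptables -I INPUT 1 -p tcp -m set --match-set blacklist src -j DROP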


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

word wrap in ssh session not working with grep

user53029 asked:

When I ssh to my Linux servers and use grep like this:

grep 'timeout exceeded' logfile | less

word wrap does not work.

However, if I use the same command but use less first, like this:

less logfile | grep 'timeout exceeded'

the lines wrap. I am not sure what the problem is, or whether this is normal, but it happens regardless of the SSH client I use. I have tried both PuTTY and an Ubuntu client. How can I fix this?


I answered:

This is not the default behavior of less. The default is to wrap long lines.

You are seeing this behavior because you have the -S option (and several others) set in your LESS environment variable.

       -S or --chop-long-lines
              Causes  lines  longer than the screen width to be chopped (trun‐
              cated) rather than wrapped.  That is, the portion of a long line
              that does not fit in the screen width is not shown.  The default
              is to wrap long lines; that is, display  the  remainder  on  the
              next line.

To resolve the problem, check your shell startup scripts (e.g. $HOME/.bash_profile, $HOME/.bashrc) and the system shell startup scripts (e.g. those in the /etc/profile.d directory) to see where the environment variable is being set, and make the desired changes.
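
As a quick sketch (the exact set of files to search is only a guess at a typical setup), something like this will show where LESS is being set:

grep -n 'LESS=' ~/.bash_profile ~/.bashrc ~/.profile /etc/profile /etc/profile.d/*.sh 2>/dev/null

You can also toggle the behavior for the current session by typing -S while inside less.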


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Linux mail(x) command. Can't open or use. Just spits out old message and returns to prompt

jaydisc asked:

(Debian Squeeze)

I was working on a shell script that piped output to the mail command. I must have done something wrong, as I am no longer able to use the command for anything. Even if I type “mail” with no arguments, it just spits out what appears to be the content I previously tried to email, but it then just returns me to a prompt. The same output occurs regardless of which arguments I use with the command.

We do not use local mail storage, and I have deleted all of the user files in /var/mail and /var/spool/mail (one is a link to the other), but for the life of me, I cannot figure out how to get use of this command back.

I’m struggling to search for this problem, as the search terms seem far too vague.


I answered:

I suspect that at some point you accidentally did something like:

....command... > /usr/bin/mail

instead of

....command... | /usr/bin/mail

thus replacing /usr/bin/mail with a copy of some data.
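
As a quick sanity check (a suggestion, assuming the binary really was overwritten), look at the file type of the target; an intact mailx binary reports as an ELF executable, while a clobbered one shows up as text or data:

file -L /usr/bin/mail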


I would suggest that you reinstall the mail program. Because Debian ships several alternatives, you can find the one you have installed with:

root@www:~# ls -l /usr/bin/mail
lrwxrwxrwx 1 root root 22 2011-04-04 02:48 /usr/bin/mail -> /etc/alternatives/mail

root@www:~# ls -l /etc/alternatives/mail
lrwxrwxrwx 1 root root 18 2011-04-04 02:48 /etc/alternatives/mail -> /usr/bin/bsd-mailx

So the mail program is really /usr/bin/bsd-mailx on this system. Let us find out which package it came from:

root@www:~# apt-file search /usr/bin/bsd-mailx
bsd-mailx: /usr/bin/bsd-mailx

And finally, we will reinstall that package:

root@www:~# apt-get install --reinstall bsd-mailx
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.
Need to get 155kB of archives.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]?

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Where to create an AF_LOCAL/AF_UNIX socket file when not allowed to write in /var/run?

hl037_ asked:

The FHS says that socket and PID files should go in /var/run.
However, for security reasons, only root can create files and subdirectories in this location.

A common solution is to create a subdirectory for the script in /var/run and then chmod it… But what do you do when you don’t have access to the root user?

Where should I put a .socket (and a .pid) file if I don’t have access to root?


I answered:

On systemd-based systems such as Arch Linux and (the latest) Debian, services are expected to tell systemd that they want a directory under /run by adding a tmpfiles.d configuration file to the system.

By default these are stored in /usr/lib/tmpfiles.d, though local additions can be placed in /etc/tmpfiles.d, which override the defaults.

The tmpfiles.d facility can be used to create and empty directories, create files, symlinks, device nodes, sockets, and more.

For example:

# cat /usr/lib/tmpfiles.d/php-fpm.conf
d /run/php-fpm 755 root root

This specifies to create a directory /run/php-fpm, with mode 0755, owned by root and group root. The directory will be created at system startup or whenever the systemd-tmpfiles-setup service is restarted. You can also run systemd-tmpfiles manually.
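
For example, to apply just this one configuration file immediately (the file name matches the example above):

systemd-tmpfiles --create php-fpm.conf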

There are many other options available; check the tmpfiles.d documentation for full details.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How do I know if these logs are normal, and if someone got into my server?

BlueStarry asked:

Last month I logged into my server as usual and it was a mess: programs not working, /home not mounting anymore, etc.

Now I’ve downloaded all the Ubuntu server logs and I’ve noticed that the auth log is full of lines like this:

Jun  7 06:57:01 ns375259 CRON[5663]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun  7 06:57:01 ns375259 CRON[5663]: pam_unix(cron:session): session closed for user root

I mean really full: more than two months of lines.

Root login over SSH is disabled, so I don’t really know what these lines are. What should I look for in the logs to find a security breach?


I answered:

The repeated occurrence of “cron” in these lines indicates that the sessions were started by cron jobs. This is not indicative of a compromise.
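
If you want to see which jobs are producing these entries, listing the system and root crontabs is a reasonable place to start (standard Ubuntu locations assumed):

cat /etc/crontab
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly
crontab -l -u root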


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How to disable AAAA lookups?

Nils Toedtmann asked:

… to compensate for broken DNS servers that are outside our control.

Our problem: We deploy embedded devices that collect sensor data at various, mostly IPv4-only sites. Some sites have poorly maintained networks, e.g. misconfigured or otherwise broken DNS caches and/or firewalls that either ignore AAAA queries altogether, or respond to them with broken replies (e.g. wrong source IP!). As an external supplier to the facilities department, we have next to no influence on the (sometimes reluctant) IT departments. The chances of them fixing their DNS servers/firewalls any time soon are minuscule.

The effect on our device is that with each gethostbyname(), the processes have to wait until the AAAA queries time out, at which point some processes have already timed out their connection attempts altogether.

I am looking for solutions that are …

  • system-wide. I cannot reconfigure dozens of applications individually
  • non-permanent and configurable. We need to (re-)enable IPv6 where/when it gets fixed/rolled out. Reboot is OK.
  • If a solution requires a core library like glibc to be replaced, the replacement library package should be available from a known-to-be-well-maintained repository (e.g. Debian Testing, Ubuntu universe, EPEL). Self-building is not an option for so many reasons that I don’t even know where to begin, so I just don’t list them at all…

The most obvious solution would be to configure the resolver library, e.g. via /etc/{resolv,nsswitch,gai}.conf, to not query AAAA records. A resolv.conf option no-inet6, as suggested here, would be exactly what I am looking for. Unfortunately it is not implemented, at least not on our systems (libc6-2.13-38+deb7u4 on Debian 7; libc6-2.19-0ubuntu6.3 on Ubuntu 14.04).

So how, then? One finds the following methods suggested on SF and elsewhere, but none of them work:

  • Disabling IPv6 altogether, e.g. by blacklisting the ipv6 LKM in /etc/modprobe.d/, or sysctl -w net.ipv6.conf.all.disable_ipv6=1. (Out of curiosity: why is the resolver asking for AAAA when IPv6 is disabled?)
  • Removing options inet6 from /etc/resolv.conf. It wasn’t there in the first place, inet6 is simply enabled by default these days.
  • Setting options single-request in /etc/resolv.conf. This only ensures that the A and AAAA queries are done sequentially rather than in parallel.
  • Changing precedence in /etc/gai.conf. That does not affect the DNS queries, only how multiple replies are processed.
  • Using external resolvers (or running a local resolver daemon that circumvents the broken DNS servers) would help, but is usually disallowed by the company’s firewall policies. And it can make internal resources inaccessible.

Alternative ugly ideas:

  • Run a DNS cache on localhost. Configure it to forward all non-AAAA queries, but to respond to AAAA queries with either NOERROR or NXDOMAIN (depending on the result of the corresponding A-query). I am not aware of a DNS cache able to do this though.
  • Use some clever iptables u32 match, or Ondrej Caletka’s iptables DNS module to match AAAA queries, in order to either icmp-reject them (how would the resolver lib react to that?), or to redirect them to a local DNS server that responds to everything with an empty NOERROR.

Note that there are similar, related questions on SE. My question differs insofar as it elaborates on the actual problem I am trying to solve, lists explicit requirements, blacklists some often-suggested non-working solutions, and is not specific to a single application. Following this discussion, I posted my question.


I answered:

Stop using gethostbyname(). You should be using getaddrinfo() instead, and should have been for years now. The man page even warns you of this:

The gethostbyname*(), gethostbyaddr*(), herror(), and hstrerror() functions are obsolete. Applications should use getaddrinfo(3), getnameinfo(3), and gai_strerror(3) instead.

Here is a quick sample program in C which demonstrates looking up only A records for a name, and a Wireshark capture showing that only A record lookups went over the network.

In particular, you need to set ai_family to AF_INET if you only want A record lookups done. This sample program only prints the returned IP addresses. See the getaddrinfo() man page for a more complete example of how to make outgoing connections.

In the Wireshark capture, 172.25.50.3 is the local DNS resolver; the capture was taken there, so you also see its outgoing queries and responses. Note that only an A record was requested. No AAAA lookup was ever done.

#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <stdlib.h>
#include <netdb.h>
#include <stdio.h>

int main(void) {
    struct addrinfo hints;
    struct addrinfo *result, *rp;
    int s;
    char host[256];

    memset(&hints, 0, sizeof(struct addrinfo));
    hints.ai_family = AF_INET;      /* IPv4 only, so only A records are looked up */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = 0;

    s = getaddrinfo("www.facebook.com", NULL, &hints, &result);
    if (s != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
        exit(EXIT_FAILURE);
    }

    /* Print each returned address in numeric form */
    for (rp = result; rp != NULL; rp = rp->ai_next) {
        getnameinfo(rp->ai_addr, rp->ai_addrlen, host, sizeof(host), NULL, 0, NI_NUMERICHOST);
        printf("%s\n", host);
    }
    freeaddrinfo(result);
    return 0;
}
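
To try it yourself, compiling and running it should be enough (the file name is arbitrary):

cc -o lookup4 lookup4.c
./lookup4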

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

RHEL6 installed wrong version of rpmforge

SUB asked:

I installed the wrong version of rpmforge, the one for el7. Then I ran an update, which installed this package on my system:

python-crypto-2.6.1-1.el7.rf.x86_64

Notice the el7, but I am on RHEL 6. I then realized my mistake, removed the wrong repository, and installed the right one for el6.

$ rpm -qa | grep rpmfor
rpmforge-release-0.5.2-2.el6.rf.x86_64

But the above has broken the update process, which I know I could work around using the --skip-broken option. How do I downgrade the above-mentioned package? I tried to uninstall it and install it back again, but I get this error:

Error: Trying to remove "c4ebpl", which is protected

It shows me some protected packages which can’t be removed.
Updating with sudo yum update gives me this error:

Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libgmp.so.10()(64bit)
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libc.so.6(GLIBC_2.14)(64bit)
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: python(abi) = 2.7
           Installed: python-2.6.6-52.el6.x86_64 (@el66/$releasever)
               python(abi) = 2.6
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libpython2.7.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem

Would anyone know how to downgrade to the original packages? Is there a way to do a factory reset? Or do I need to reinstall Linux?

Some things I tried:
I deleted the python-crypto.x86_64 package using this command

sudo rpm --nodeps -e python-crypto.x86_64

And the update went through. So I thought I should install the python-crypto.x86_64 package now, as I have the right el6 rpmforge repository. So I ran sudo yum install python-crypto.x86_64, but I got the same error:

Resolving Dependencies
--> Running transaction check
---> Package python-crypto.x86_64 0:2.6.1-1.el7.rf will be installed
--> Processing Dependency: python(abi) = 2.7 for package: python-crypto-2.6.1-1.el7.rf.x86_64
--> Processing Dependency: libc.so.6(GLIBC_2.14)(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64
--> Processing Dependency: libpython2.7.so.1.0()(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64
--> Processing Dependency: libgmp.so.10()(64bit) for package: python-crypto-2.6.1-1.el7.rf.x86_64
--> Finished Dependency Resolution
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libgmp.so.10()(64bit)
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libc.so.6(GLIBC_2.14)(64bit)
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: python(abi) = 2.7
           Installed: python-2.6.6-52.el6.x86_64 (@el66/$releasever)
               python(abi) = 2.6
Error: Package: python-crypto-2.6.1-1.el7.rf.x86_64 (rpmforge)
           Requires: libpython2.7.so.1.0()(64bit)
 You could try using --skip-broken to work around the problem

I don’t know why it’s trying to find the el7 package. This is what I have on my machine:

$ rpm -qa | grep rpmfor
rpmforge-release-0.5.3-1.el7.rf.x86_64

I answered:

First you need to install the correct rpmforge-release package. Download it and use rpm -U --oldpackage to install it over the wrong package.

Second, you need to clean the cached yum metadata that it had. Use yum clean all to get rid of everything.

Third, use yum distro-sync to downgrade any packages that were installed for the wrong distribution. (And note that this will also upgrade any out-of-date packages.)
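
Put together, the recovery would look something like this (the exact el6 rpmforge-release file name is an assumption; use whatever file you downloaded):

rpm -Uvh --oldpackage rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
yum clean all
yum distro-sync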


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Running TOR on Centos 6

Darkness.su asked:

I’m the operator of the XMPP server on darkness.su. The server runs on CentOS 6.

I installed Tor and configured it to provide hidden service access to the server. It was working fine at first, but ever since an update a few months ago it has been giving me these errors:

May 25 14:19:37.060 [warn] Permissions on directory /var/lib/tor/hidden_service are too permissive.
May 25 14:19:37.060 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
May 25 14:19:37.060 [err] Reading config failed--see warnings above.

I tried to check the logs, but I can’t find them, and setting one doesn’t seem to work. I’ve tried removing Tor and wiping all of its folders, then reinstalling it. Same thing.

I’m installing through yum from the Tor Project’s repository.

With chmod 700 on the hidden service directory (owned by the Tor user):

Jul 24 21:39:05.573 [warn] Directory /var/lib/tor/hidden_service/ cannot be read: Permission denied
Jul 24 21:39:05.573 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
Jul 24 21:39:05.573 [err] Reading config failed--see warnings above

After changing directory owner to root:

Jul 24 22:11:36.236 [warn] /var/lib/tor/hidden_service/ is not owned by this user (_tor, 496) but by root (0). Perhaps you are running Tor as the wrong user?
Jul 24 22:11:36.236 [warn] Failed to parse/validate config: Failed to configure rendezvous options. See logs for details.
Jul 24 22:11:36.236 [err] Reading config failed--see warnings above.

I answered:

You need to check three things:

  1. The file ownership should be correct.

    If you use Tor from torproject.org, this should be _tor. If you use Tor from EPEL or Fedora, this should be toranon.

    chown -R _tor:_tor /var/lib/tor
    

    or

    chown -R toranon:toranon /var/lib/tor
    
  2. The permissions should be correct.

    The hidden service directory must be readable only by the Tor user.

    find /var/lib/tor/hidden_service -type d | xargs chmod u+rwx,go=
    find /var/lib/tor/hidden_service -type f | xargs chmod u+rw,go=
    
  3. SELinux contexts must be set correctly. In recent releases of RHEL/CentOS, Tor has an SELinux policy applied to it.

    To fix broken SELinux labels:

    restorecon -r -v /var/lib/tor
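
After making these changes, ls -laZ shows owner, permissions, and SELinux context together, which makes it easy to double-check all three at once:

ls -laZ /var/lib/tor /var/lib/tor/hidden_service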
    

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Manipulate .conf file to rewrite https properly

mstoldt asked:

I have an nginx server and a domain, example.com. Each customer has their own subdomain, like wsb.example.com. The content changes via PHP, but all subdomains are served from the same folder on the web server.

The only problem I’m having is with accessing the domain without HTTPS: https://whatever.example.com/ works, but http://whatever.example.com/ redirects me to https://*.example.com/.

What am I doing wrong?

server {
        listen *:80;
        listen *:443 ssl;
        server_name *.alterament.de;

        index index.php;
        root /var/www/webapp.alterament.de/public;

        ssl_certificate         ssl/alterament.de.crt;
        ssl_certificate_key     ssl/alterament.de.key;

        if ($ssl_protocol = "") {
                rewrite ^/(.*) https://$server_name/$1 permanent;
        }

        location ~ \.php$ {
                fastcgi_pass   127.0.0.1:9000;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
        }

        location /resources {
                root /var/www/webapp.alterament.de;
        }
}

I answered:

You used $server_name in your redirect, which expands to the value of the server_name directive (here, the literal wildcard *.alterament.de). This is not what you want.

Instead, you should replace it with $host.
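
Applied to your configuration, the redirect block becomes:

if ($ssl_protocol = "") {
        rewrite ^/(.*) https://$host/$1 permanent;
}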


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How to create advanced rules with firewall-cmd?

user109322 asked:

I want to create a rule using firewalld that matches on the username or user ID, and maybe one other module’s criteria.

In iptables, I think you can do things like

iptables -A OUTPUT -m owner --uid-owner <UID> -j ACCEPT

(and you can add other -m modules or -p protocols to the same command)

But I read the firewall-cmd man page and I cannot find how to make the same kind of rule. Even “rich rules” don’t seem to support this. Do I have to use the “direct” feature? I can’t quite understand its syntax. It especially worries me that these return nothing:

firewall-cmd --direct --get-chains ipv4 filter
firewall-cmd --direct --get-rules ipv4 filter OUTPUT
firewall-cmd --direct --get-rules ipv4 filter INPUT

Of course, iptables -L shows that I have those tables and chains, with rules in them.

So how do I add a permanent rule with owner and maybe one more criteria using firewalld?


I answered:

You don’t need to add or even have custom direct chains (though you can, if you want to get really complicated). Just add to your existing chains directly.

After the IP version, table, chain, and priority, you simply specify the relevant iptables options:

firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 \
        -m owner --uid-owner $UID -j ACCEPT

Underneath, at iptables, this will actually be added to a firewalld-managed chain named OUTPUT_direct, which is called from the OUTPUT chain.
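
Note that a rule added with --permanent only takes effect after a reload; you can then confirm that firewalld has picked it up (both are standard firewall-cmd invocations):

firewall-cmd --reload
firewall-cmd --direct --get-all-rules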


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.