Debian – One Gateway 2 interfaces how to?

Ragnar asked:

I have this scenario.

4 Debian 8 VMs: 1 DHCP, 1 DNS, 1 GW, 1 client.

All my VMs (except the GW) can ping each other (by IP or hostname).

My GW has 2 interfaces (eth0->LAN / eth1->WAN). From it I can ping google.fr, but I cannot ping my LAN (except by IP).

In /etc/resolv.conf I have the DNS from my box on the WAN side. If I put in my LAN’s configuration instead, it’s the reverse (of course): I can ping my LAN but not the WAN.

I activated ip_forward, and I know I have to do some work with route, but I have to admit I don’t really understand the route command.

Can you explain the logic of this?

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.84.254  0.0.0.0         UG    0      0        0 eth0
0.0.0.0         192.168.10.2    0.0.0.0         UG    0      0        0 eth1
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.84.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0

I answered:

The LAN interface of your gateway VM should not have a gateway defined in /etc/network/interfaces. The gateway represents the default route to the Internet, and you have only one such route (via WAN, not LAN). Remove it, and then restart networking.
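
As a sketch, the relevant part of /etc/network/interfaces on the gateway would then look something like this (the gateway’s own addresses below are assumptions; the 192.168.10.2 next hop comes from your routing table):

# LAN interface: static address, but no "gateway" line
auto eth0
iface eth0 inet static
    address 192.168.84.1
    netmask 255.255.255.0

# WAN interface: the single default route belongs here
auto eth1
iface eth1 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    gateway 192.168.10.2

After restarting networking, netstat -rn should show a single default route, via eth1.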


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How to escape spaces in systemd unit files?

Cobra_Fast asked:

My unit file looks like this (already attempted to escape spaces as \x20 like the docs say):

[Unit]
Description=My Service

[Service]
Type=simple
WorkingDirectory=/home/cobra/my\x20service/
ExecStart=/home/cobra/my\x20service/start.sh

[Install]
WantedBy=multi-user.target

but when attempting to start it, it fails with the following message:

Failed at step CHDIR spawning /home/cobra/my service/start.sh: No such file or directory
myservice.service: main process exited, code=exited, status=200/CHDIR

Giving the path from this error message to stat returns:

  File: ‘/home/cobra/my service/start.sh’
  Size: 280             Blocks: 8          IO Block: 4096   regular file
Device: 903h/2307d      Inode: 4718912     Links: 1
Access: (0754/-rwxr-xr--)  Uid: ( 1000/   cobra)   Gid: ( 1000/   cobra)
Access: 2015-05-24 22:42:12.702657594 +0200
Modify: 2015-03-27 22:28:05.682531000 +0100
Change: 2015-05-24 22:40:58.830298787 +0200
 Birth: -

I cannot remove the spaces from the file name as the service I’m attempting to run requires them for some reason.

Where am I going wrong?


I answered:

The obvious thing to do is to use double quotes.

ExecStart="/home/cobra/my service/start.sh"
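
Putting it together, a sketch of the corrected unit. Note that WorkingDirectory takes the whole rest of the line as the path, so a literal space should be fine there; only the ExecStart command line is split into words and needs the quoting:

[Unit]
Description=My Service

[Service]
Type=simple
# Single-value path setting: the literal space is taken as part of the path
WorkingDirectory=/home/cobra/my service/
# The command line is word-split, so quote the path
ExecStart="/home/cobra/my service/start.sh"

[Install]
WantedBy=multi-user.target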

You also should get rid of the start.sh script and move any necessary logic into the unit.


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

ssh as user runs program with no authentication (neither password nor keys)

Peter Lyons asked:

I’d like to set up a user on my system (Ubuntu 14.04 x64) such that people can run ssh example@myhost.example.com and see some output from a program without being prompted for a password or having to have an ssh key. Here’s what I have done so far:

  • created the example user with my program as its shell via adduser example --shell /path/to/my/program
  • created an empty file at /home/example/.hushlogin to quiet the motd and other login messages

Things are working with the exception that I must complete the password prompt authentication challenge, which I’d like to bypass as this will be a publicly-available service.

Presumably customizing the PAM configuration under /etc/pam.d appropriately might do the trick but I need some guidance on the specifics. I want this change to only affect this specific user account, not every account on the system.


I answered:

Set PermitEmptyPasswords yes in /etc/ssh/sshd_config, and then make sure the user account has no password.
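
As a sketch, scope the sshd change to the one account with a Match block, per your requirement that no other account be affected. (Depending on the distribution’s PAM defaults, empty-password logins may also need PAM adjustments.)

# /etc/ssh/sshd_config — allow empty passwords for this user only
Match User example
    PermitEmptyPasswords yes

# Remove the account's password, then reload sshd
passwd -d example
service ssh reload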


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Are there any security benefits to deploying custom SSH DH groups to client-only systems?

Michael Kjörling asked:

One suggested mitigative strategy against Logjam-related attacks on SSH is to generate custom SSH Diffie-Hellman groups using something like (the below being for OpenSSH)

ssh-keygen -G moduli-2048.candidates -b 2048
ssh-keygen -T moduli-2048 -f moduli-2048.candidates

followed by replacing the system-wide moduli file with the output file moduli-2048. (ssh-keygen -G is used to generate candidate DH-GEX primes, and ssh-keygen -T to test the generated candidates for safety.)

This is pretty clearly a reasonable thing to do on SSH servers that otherwise would be using well-known groups that lend themselves well to precomputation, but are there any security benefits to deploying custom SSH DH groups onto client-only systems? (That is, systems that connect to SSH servers, but never act as an SSH server themselves.)

I am primarily interested in answers relating to OpenSSH on Linux, but more generic answers would be appreciated as well.


I answered:

You can if you really want, but I wouldn’t bother regenerating 2048-bit DH parameters for OpenSSH. There are much more important things you need to do to secure SSH, like disabling weak crypto.

What I would do is delete the existing ones which are less than 2048 bits.

awk '$5 >= 2000' /etc/ssh/moduli > /etc/ssh/moduli.strong && 
mv /etc/ssh/moduli.strong /etc/ssh/moduli
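
(The sizes recorded in the moduli file run one under the nominal bit length, e.g. 2047 for the 2048-bit group, which is why the filter uses 2000 rather than 2048.) To see what’s left afterwards:

# Count the remaining moduli by size (field 5), skipping comment lines
awk '!/^#/ {print $5}' /etc/ssh/moduli | sort -n | uniq -c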

In case you hadn’t noticed, OpenSSH ships with a large number of pre-generated moduli, all the way up to 8192 bits. While we’re certainly concerned about 1024-bit primes today, 2048-bit ones are believed to be safe for the foreseeable future. That will eventually change, and while “eventually” could be next week, it’s more likely to be long after we’ve become pensioners…

There is also this curious bit in the ssh-keygen man page:

It is important that this file contains moduli of a range of bit lengths and that both ends of a connection share common moduli.

This seems to argue against replacing the existing moduli, though it doesn’t really explain why.


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Do spam impersonators impact my server's reputation?

starbeamrainbowlabs asked:

Recently I opened my server’s webmaster email inbox and found 32 “Delivery status notification failed” emails, reporting the failed delivery of 32 emails I did not send.

After some investigation I have determined that several IP addresses are attempting to impersonate my web server by sending emails from random email addresses (that don’t exist) using their own mail servers.

I have a tight SPF record set:

 v=spf1 a mx ptr ip4:37.187.192.179 ip6:2001:41d0:52:a00::68e a:starbeamrainbowlabs.com a:mail.starbeamrainbowlabs.com -all

My Question: Do these spammers attempting to impersonate my mail server impact my mail server’s reputation? Will I get added to some blacklists and then be unable to send emails to certain domains?


I answered:

In the case you’ve described, the mails did not originate at your mail server, so they will not affect your mail server’s reputation.

The SPF record allows any mail server which is configured to check it to reject such messages or mark them as spam, but not all mail servers check SPF records, and some which do check them don’t actually do anything with the results.

The big providers like Gmail, Hotmail, etc., do check SPF records and use the results, so your record is already sparing you a lot of bounces you would otherwise get, not to mention killing a lot of this spam.
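
If you want to double-check what receiving servers will see, you can query your published record yourself:

# Look up the domain's TXT records, which include the SPF policy
dig +short TXT starbeamrainbowlabs.com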


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

NGINX, SSL Certificates and PCI-DSS 3.1

ron M. asked:

We are going to have to pass a PCI 3.1 audit for the web application we’re currently developing. It’s on Amazon EC2 running NGINX under Debian.

We’re in contact with Symantec for certificates, and we’re particularly interested in the Secure Site Pro with EV one and the Wildcard one (we would have one server with dynamic sub-domain names, and that’s why we’re thinking about the wildcard one).

I just wanted to make sure I’m not going to spend thousands of dollars and find out these aren’t adequate for PCI 3.1, or that somehow the combination of NGINX and Debian is not going to work with those types of certificates.

Does anyone have experience with trying to be PCI-DSS 3.1 compliant that can give some advice as to which SSL certificates we should be getting?


I answered:

TL;DR: PCI-DSS 3.1 is effective immediately, but the requirement to disable TLS 1.0 and SSL 3 takes effect after 30 June 2016.


In most cases you should have already disabled SSL 3 some months ago, or more, because of the POODLE vulnerability. So that isn’t a concern.

The interesting part of this requirement is the loss of TLS 1.0.

The official word is:

SSL and early TLS are not considered strong cryptography and cannot be used as a security control after 30th June, 2016. Prior to this
date, existing implementations that use SSL and/or early TLS must have a formal Risk Mitigation and Migration Plan in place. Effective
immediately, new implementations must not use SSL or early TLS. POS POI terminals (and the SSL/TLS termination points to which they
connect) that can be verified as not being susceptible to any known exploits for SSL and early TLS, may continue using these as a security
control after 30th June, 2016.

Migrating from SSL and Early TLS, PCI-DSS Information Supplement

Here “early TLS” is defined as TLS 1.0. Only TLS 1.1 and 1.2 will be permitted, and 1.2 is strongly recommended.
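
For the nginx deployment described in the question, that translates to something like the following (a sketch; choosing the cipher suites is a separate exercise):

# In the http or server block of the nginx configuration
ssl_protocols TLSv1.1 TLSv1.2;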

While you will still be allowed to use TLS 1.0 and SSL 3 for point of sale devices and their backends, provided you can prove you’ve mitigated every possible problem, you should strongly consider updating these as well.

As an aside, this is yet another nail in Windows XP’s coffin…


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How to install packages offline?

carlosvega asked:

Our server runs offline, and we need to install a lot of yum packages, like oracle-jdk, elasticsearch, nginx, etc.

Is there any way to download the RPM dependencies so we can install them offline on the server?


I answered:

Maintain a local CentOS mirror on an Internet-connected machine, using rsync against a public CentOS mirror that accepts rsync connections. You can then copy these directories to a USB stick and use them as installation sources. They already carry the necessary metadata to act as repositories, so you only need to point the installer at them.

$ du -sh /srv/www/mirrors/centos/7.1.1503/{os,updates}/x86_64 
7.1G    /srv/www/mirrors/centos/7.1.1503/os/x86_64
2.1G    /srv/www/mirrors/centos/7.1.1503/updates/x86_64
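
The initial sync looks something like this (mirror.example.org is a placeholder; pick a nearby mirror that offers rsync, and check its module path):

# Mirror the os and updates trees shown above
rsync -avSHP --delete \
    rsync://mirror.example.org/centos/7.1.1503/os/x86_64/ \
    /srv/www/mirrors/centos/7.1.1503/os/x86_64/
rsync -avSHP --delete \
    rsync://mirror.example.org/centos/7.1.1503/updates/x86_64/ \
    /srv/www/mirrors/centos/7.1.1503/updates/x86_64/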

In the case of third-party packages, you can also mirror those yourself using the reposync command line tool, which downloads the contents of yum repositories to a local filesystem.
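
A sketch, assuming the repository is configured in /etc/yum.repos.d/ on the connected machine under the id elasticsearch:

# Download the entire repo, then generate yum metadata for the local copy
reposync --repoid=elasticsearch --download_path=/srv/mirrors/
createrepo /srv/mirrors/elasticsearch/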


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

how to block all requests for URLs with MS-DOS device names using an ISAPI filter (CVE 2007-2897)

Musa Zargar asked:

I recently had an audit report on my Windows Server 2008 R2 machine, and it failed with the error/vulnerability:
Microsoft asp.net ms-dos device name DoS www (443/tcp).

I have not been able to find any solution to fix this vulnerability yet, as none of the solutions across Google explain exactly how to use an ISAPI filter to block all requests for URLs with MS-DOS device names, and there is no such particular string mentioned in the audit report.
There ought to be a string to add to an ISAPI filter to block all such requests and work around this vulnerability?

Any quick help would be appreciated!

Regards


I answered:

Tell the auditor you aren’t running IIS 6. There is nothing else you really need to do. This vulnerability only affected IIS 6 running on Windows XP and Server 2003.


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Combining partition types lvm

Luke asked:

I have an older disk setup that was made with msdos partition types. It appears they kept growing the same disk in order to grow the LVM, but ran into an issue when it reached 2TB and couldn’t grow any more. The client wanted to attach an additional 3TB disk to the LVM, but you can only do 2TB with the msdos partition type, so I instead made it GPT (also trying to keep future growth in mind, since historically they seem to grow the disks instead of attaching additional ones). It joined the LVM OK, and the fsck and everything went fine. The OS is not booting off this disk, and reboots went OK.

Does anyone see any potential issues with this setup, e.g. combining msdos and GPT partitions in the same LVM? Any roadblocks to future growth?


I answered:

If you used a partition as an LVM PV then it doesn’t matter whether the disk was partitioned as MSDOS or GPT. LVM does not operate at this level; it doesn’t care at all where the block device came from.
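
As a sketch, with hypothetical volume group and LV names, a partition from a GPT-labelled disk joins like any other PV:

# Make the new partition an LVM physical volume
pvcreate /dev/sdc1
# Add it to the existing volume group
vgextend myvg /dev/sdc1
# Grow the logical volume into the new space, then the filesystem
lvextend -l +100%FREE /dev/myvg/mylv
resize2fs /dev/myvg/mylv    # assuming ext4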


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How much of a performance penalty does using QEMU virtualization incur on Windows?

vfclists asked:

How much of a performance penalty does using QEMU virtualization incur on a Windows host, when both the host and the VM are running the same CPU?
Older articles indicate that using kqemu avoided the slowdown when both host and VM used the same instruction set, but it seems that kqemu is not used much any more, or is not available for 64-bit systems.

Are more recent versions capable of maintaining performance without the use of kqemu?


I answered:

On a Windows host, qemu isn’t actually a hypervisor, but is doing full machine emulation with dynamic translation, which is horrendously slow, and there’s little that can be done to speed it up.

It’s maybe useful as a demonstration or for debugging purposes, but for anything serious you will want to use an actual hypervisor, such as Hyper-V on Windows, or another hypervisor entirely (e.g. KVM on Linux).


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.