CPAN installation problem: HTTP::Message on CentOS

Joon asked:

I have a Perl script that needs a long list of prerequisites. While installing these, I am not getting past the installation of HTTP::Message.

When I run cpan HTTP::Message from the command line as root, I get the following output:

cpan HTTP::Message

Reading '/root/.cpan/Metadata'

Database was generated on Mon, 27 Apr 2015 12:53:26 GMT

Running install for module 'HTTP::Message'

Running make for G/GA/GAAS/HTTP-Message-6.06.tar.gz

Checksum for
/root/.cpan/sources/authors/id/G/GA/GAAS/HTTP-Message-6.06.tar.gz ok

CPAN.pm: Building G/GA/GAAS/HTTP-Message-6.06.tar.gz

Checking if your kit is complete...

Looks good

Warning: prerequisite Encode::Locale 1 not found.

Warning: prerequisite HTTP::Date 6 not found.

Warning: prerequisite IO::HTML 0 not found.

Warning: prerequisite LWP::MediaTypes 6 not found.

Warning: prerequisite URI 1.10 not found.

Writing Makefile for HTTP::Message

However, I have run cpan individually for all of those prerequisites (as root), and the install succeeded.

What am I missing here?

I am running CentOS Linux release 7.1.1503 (Core)


I answered:

What’s wrong with just doing yum install perl-HTTP-Message? Why are you trying to use CPAN at all? You should avoid using CPAN when the Perl modules are already packaged for your distribution.
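
To spell that out, a minimal sketch on CentOS 7 (the package names follow the distribution’s usual perl-Foo-Bar convention, and yum will pull in the packaged prerequisites automatically):

# Install the packaged module; its dependencies (perl-URI, perl-HTTP-Date,
# etc.) come along for free.
yum install perl-HTTP-Message

# To see what else is already packaged before falling back to CPAN:
yum list available 'perl-HTTP*' 'perl-LWP*' 'perl-URI'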


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How do I get puppet master to listen on IPv6?

Machoke asked:

So I have a RHEL 7 server on an internal network with IPv6. I am able to SSH into it via IPv6, and it is also serving DNS to other hosts over IPv6.

I have noticed that puppet master binds to IPv4 only:

$ netstat -n -l | grep 8140
tcp        0      0 0.0.0.0:8140            0.0.0.0:*               LISTEN

A quick Google search suggests that ruby on RHEL 7 is most likely compiled without IPv6 support.

So I just get it up and running with IPv4 for now.

Having everything else running on IPv6, though, I wonder what the best way is to get puppet master to listen on IPv6. Can I install ruby from the upstream RPMs with IPv6 turned on? Or should I install a separate IPv6-enabled ruby environment via rvm? But then how would I get puppet to use the one provided by rvm?


I answered:

In Puppet Enterprise the puppetmaster should be listening on a dual stack IPv6/IPv4 socket by default. Though PE has some other IPv6-related brokenness (my site) you’ll have to work around.

In open source Puppet, such as you may have obtained via EPEL, you need to set the bindaddress explicitly in the [main] section of /etc/puppet/puppet.conf:

[main]
bindaddress = ::

which will bind to a dual-stack socket and, with the kernel’s default settings, accept both IPv6 and IPv4 connections from anywhere.
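
As a quick sanity check (reusing the asker’s own netstat invocation), after restarting the master you should see it listening on the IPv6 wildcard; the exact service name depends on how Puppet was packaged, so treat this as a sketch:

systemctl restart puppetmaster    # service name may differ on your install
netstat -n -l | grep 8140
# tcp6       0      0 :::8140                 :::*                    LISTEN

The tcp6 socket on ::: accepts IPv4 clients as well, via IPv4-mapped addresses.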


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Why is it important that servers have the exact same time?

Jens Schauder asked:

I have read multiple times (although I can’t find it right now) that data centers take great effort to make sure that all servers have the exact same time. This includes, but is not limited to, worrying about leap seconds.

Why is it so important that servers have the same time? And what are the actual tolerances?


I answered:

Security

In general, timestamps are used in various authentication protocols to help prevent replay attacks, where an attacker can reuse an authentication token he was able to steal (e.g. by sniffing the network).

Kerberos authentication does exactly this, for instance. In the version of Kerberos used in Windows, the default tolerance is 5 minutes.

This is also used by various one-time password protocols used for two-factor authentication such as Google Authenticator, RSA SecurID, etc. In these cases the tolerance is usually around 30-60 seconds.

Without the time being in sync between client and server, it would not be possible to complete authentication. (This restriction is removed in the newest versions of MIT Kerberos, by having the client and server determine the offset between their clocks during authentication, but these changes occurred after Windows Server 2012 R2 and it will be a while before you see it in a Windows version. But some implementations of 2FA will probably always need synchronized clocks.)

Administration

Having clocks in sync makes it easier to work with disparate systems. For instance, correlating log entries from multiple servers is much easier if all systems have the same time. In these cases you can usually work with a tolerance of 1 second, which NTP will provide, but ideally you want the times to be as closely synchronized as you can afford. PTP, which provides much tighter tolerances, can be much more expensive to implement.
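
As an aside, checking how far off a server actually is only takes a moment. A minimal sketch, assuming the box runs ntpd (on systems using chrony, chronyc tracking reports the same information):

# Show the peers ntpd is tracking; the offset column is the estimated clock
# error relative to each peer, in milliseconds.
ntpq -p

# One-line summary of whether the clock is considered synchronized.
ntpstat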


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Convert VMware Server 2 virtual machine to Hyper-V: general road map needed

Mark asked:

I have taken over the management of an old VMware host. I believe it is VMware Server version 2, based on what it says in Control Panel/Programs on the Windows 2008 R2 server. I would like to convert/migrate one of the virtual machines to a Windows Server 2012 R2 Datacenter Hyper-V platform.

Using the Microsoft Virtual Machine Converter v3, I am unable to find the “Source”: the message “Unable to contact VMware host machine.” is displayed. The wizard is searching for a “vCenter server, ESX server, or ESXi Server”. There is nothing on the host server that indicates it is any of those; just VMware Server v2.0.0.2712 and VMware Remote Console Plug-in v2.5.0.122581. I suspect this to be the problem, but I am unsure.

Can anyone provide a migration path? Is there a way to upgrade VMware on the host server to a version that can be converted to Hyper-V using MVMC? Is there a different tool that I can use to convert the VM?
Any insight will be appreciated. Thank you.


I answered:

VMware Server 2.0 is rather old, and has little of the remote management capability we see in ESXi today. It was one of VMware’s evolutionary dead ends. Nevertheless, this ought to be possible.

What I would do is the following. Note that the VM must be powered off.

  1. Locate VMware Server’s datastore. The datastore details will tell you where the files are located.

  2. Use the Starwind V2V Converter (free but registration required) to create a VHD or VHDX from the existing VMDK.

  3. Create a new Hyper-V VM using the new VHD. You will not be able to import the virtual machine settings (e.g. CPU, RAM, etc.) from VMware Server and must recreate these manually.


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

CentOS 6 image waits for metadata response from remote servers

Madhavan Kumar asked:

A CentOS 6.6 VM image takes almost ten minutes to boot when run using virsh. I captured the logs using virsh console; they look like this:

ci-info: +-------+---------------+---------------+---------------+-----------+-------+
ci-info: | Route |  Destination  |    Gateway    |    Genmask    | Interface | Flags |
ci-info: +-------+---------------+---------------+---------------+-----------+-------+
ci-info: |   0   | 192.168.122.0 |    0.0.0.0    | 255.255.255.0 |    eth0   |   U   |
ci-info: |   1   |    0.0.0.0    | 192.168.122.1 |    0.0.0.0    |    eth0   |   UG  |
ci-info: +-------+---------------+---------------+---------------+-----------+-------+
2015-04-25 05:13:41,222 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: unexpected error ['Timeout' object has no attribute 'response']
2015-04-25 05:14:32,278 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: unexpected error ['Timeout' object has no attribute 'response']
2015-04-25 05:14:51,322 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
2015-04-25 05:14:51,990 - url_helper.py[WARNING]: Calling 'http://192.168.122.1//latest/meta-data/instance-id' failed [0/120s]: bad status code [404]
2015-04-25 05:14:53,008 - url_helper.py[WARNING]: Calling 'http://192.168.122.1//latest/meta-data/instance-id' failed [1/120s]: bad status code [404]
2015-04-25 05:14:54,022 - url_helper.py[WARNING]: Calling 'http://192.168.122.1//latest/meta-data/instance-id' failed [2/120s]: bad status code [404]

My image tries to get some metadata from the remote server. Once that fails, it tries to collect the information from the local gateway.

Is this to do with cloud-init? Can I configure it to turn off the remote server calls?


I answered:

By default cloud-init expects to receive metadata from an Amazon EC2-compatible metadata service, such as that included with OpenStack and possibly other services.

If you aren’t running your VM under such a service, you have two options:

  1. Disable or uninstall cloud-init. This is the easiest option, and if you aren’t running in a cloud service then you should do this (a minimal sketch follows this list).

    Or…

  2. Create a configuration drive as an ISO CD image containing the metadata which would have been obtained from the metadata service, if one had been present, and permanently attach the image to the virtual machine. You almost certainly do not need to do this.
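
For option 1, a sketch of how that might look on CentOS 6 (the service names are those used by the EL6 cloud-init packages; adjust as needed):

# Simplest: remove the package entirely.
yum remove cloud-init

# Or keep it installed but inert by turning off its init scripts.
for svc in cloud-init-local cloud-init cloud-config cloud-final; do
    chkconfig "$svc" off
done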


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

List guest machine names from command prompt

Madhavan Kumar asked:

How can I list the names of all guest machines from the command line using

virt-install

Something like virt-install list-vms would do.


I answered:

You mean virsh list or virsh list --all? See also man virsh for every option, or use the virt-manager GUI to manage your VMs.
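
A quick illustration (the guest names shown are hypothetical):

# Running guests only:
virsh list

# All defined guests, running or not:
virsh list --all
#  Id    Name                           State
# ----------------------------------------------------
#  1     web01                          running
#  -     centos6-test                   shut off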


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

iptables performance – iprange vs subnets

GioMac asked:

I want to allow a range of IP addresses – two /24 subnets which don’t fall under a single /23. I have two options:

  1. Use two rules with /24 masks and the -s option
  2. Use a single rule with -m iprange and specify the whole range of IPs

Which is the faster, more optimal way for performance?


I answered:

In this particular case, one iprange rule might be slightly faster than two CIDR rules, but the difference is so small it will likely be unnoticeable. Unless you’re routing multiple gigabits per second, it’s not worth trying to optimize here, and if you are, you should probably buy a purpose-built router anyway.

I recommend you use CIDR comparisons anyway, as this will be faster if you ever add disjoint ranges (and you probably will sooner or later). And it’s cleaner and easier to understand.
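
To make the two options concrete, a sketch using placeholder subnets (10.0.1.0/24 and 10.0.2.0/24, which are adjacent but not contained in a single /23):

# Option 1: two ordinary CIDR matches.
iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -s 10.0.2.0/24 -j ACCEPT

# Option 2: a single iprange match covering the same addresses.
iptables -A INPUT -m iprange --src-range 10.0.1.0-10.0.2.255 -j ACCEPT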


What’s going on here?

Let’s say you are comparing an IP address 192.0.2.87 to a network and prefix of 192.0.2.0/24. First, the prefix is used as the netmask, which is literally the 32-bit value with the bits on the network side set to 1 and on the host side set to 0. So /24 would have been written 255.255.255.0, though internally it, and the IP address and network, are stored as raw 32-bit values. (I’m ignoring endianness here as it’s not relevant to understanding how this works.)

What happens is that the IP address will be bitwise ANDed with the netmask, and the result compared to the network. If they are the same, then the IP address is within the network/prefix.
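
A worked example of that comparison, using shell arithmetic purely for illustration:

# For 192.0.2.87 to be inside 192.0.2.0/24, the address ANDed with the
# netmask must equal the network. Values are the raw 32-bit representations.
ip=0xC0000257     # 192.0.2.87
mask=0xFFFFFF00   # 255.255.255.0 (/24)
net=0xC0000200    # 192.0.2.0

if [ $(( ip & mask )) -eq $(( net )) ]; then
    echo "192.0.2.87 is inside 192.0.2.0/24"
fi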

With an IP range, you need two comparisons. The IP address is first compared to the starting address of the range; if it is greater than or equal to it, we continue on and compare it to the ending address. If it is less than or equal to that, we have a match.

The difference in processor time between these comparisons is negligible, especially since there’s a lot of overhead for each rule before you even get to the relevant machine language instructions which do the comparison (register loads, etc).


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Restart ESXi remotely while it has a PSOD

schlimpf asked:

I wondered if it is possible to access some functions of an ESXi server over the network while the server has a PSOD.
The server is running ESXi 5.1.

The server cannot be pinged, but is there some kind of network debug function?


I answered:

You’ll need to use the server’s out-of-band management interface (IPMI, iDRAC, iLO, etc.) to reboot the server.
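
If the machine exposes standard IPMI over LAN, a minimal sketch (the BMC address and credentials are placeholders):

# Power-cycle the host via its BMC; this works regardless of the state of
# the hypervisor, PSOD included.
ipmitool -I lanplus -H 192.0.2.10 -U admin -P 'secret' chassis power cycle

The vendor web interfaces (iDRAC, iLO) offer the same power control if you prefer a browser.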


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

AWS Ubuntu 14 – Can't SSH After Reboot

Rich Jones asked:

I had an issue where an instance became unresponsive and was forced into a reboot. After the reboot I could ping the server, but I could not SSH into it.

Ultimately I created a new instance and attached the volume to it. However, I would like to make sure that when I reboot this instance I don’t run into the same problem.

Looking at the console log I see:

[ 2.968537] EXT4-fs (xvda1): INFO: recovery required on readonly filesystem

[ 2.972324] EXT4-fs (xvda1): write access will be enabled during recovery

[ 3.095607] EXT4-fs (xvda1): orphan cleanup on readonly fs

[ 3.354696] EXT4-fs (xvda1): 40 orphan inodes deleted

[ 3.358010] EXT4-fs (xvda1): recovery complete

[ 3.465864] EXT4-fs (xvda1): mounted filesystem with ordered data mode. Opts: (null)

My sshd_config file has two changes from the default: UsePAM was set to no and PasswordAuthentication was set to yes. I don’t think that had anything to do with it.

My /etc/fstab file was set to:
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2
/dev/xvdg /hd3 auto noatime 0 0
/dev/xvdf /hd3 auto noatime 0 0
/dev/xvdf /hd2 auto noatime 0 0
/dev/xvdh /hd4 auto noatime 0 0
/dev/xvdh /hd4 auto noatime 0 0
/dev/xvdi /vol auto noatime 0 0
/dev/xvdi /hd5 auto noatime 0 0

I checked all the obvious things like making sure I’m connecting to the right IP, etc.

Any idea on why this did not allow SSH?


I answered:

You said:

My sshd_config file has two changes from the default: UsePAM was set to no and PasswordAuthentication was set to yes. I don’t think that had anything to do with it.

Actually, that almost certainly is your problem. On a Linux system with PAM, attempting to log in while bypassing PAM is not guaranteed to work, and generally does not.

Set UsePAM yes and try again.
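
A minimal sketch of the fix, assuming the stock Ubuntu 14.04 file locations and service name:

# Confirm the current setting, flip it to yes, then validate and restart.
grep -i '^UsePAM' /etc/ssh/sshd_config
sed -i 's/^UsePAM no/UsePAM yes/' /etc/ssh/sshd_config   # adjust if the spacing differs
sshd -t && service ssh restart    # "ssh" is the Upstart job name on Ubuntu 14.04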


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

How do I configure Nginx to run PHP in subdirectories?

Davuz asked:

I have two versions of a portfolio website for two languages. I put them in two directories under the same root directory.

/root/
    |----vi/
    |----en/
    |----index.php

Then I configured nginx as below:

server {
    listen 8080;
    server_name 192.168.0.117;

    access_log /u01/projects/company/log/access.log main;
    error_log /u01/projects/company/log/error.log;
    set $rootLocation /u01/projects/company/;

    root $rootLocation;
    index index.php;

    charset utf-8;
    client_max_body_size 100m;

    location ~* /(vi|en)/admin(/|/.*.php)$ {
       try_files $uri $uri/;
       gzip on;
       index index.php;
       include fastcgi_params;
       fastcgi_pass 127.0.0.1:9000;
       fastcgi_index index.php;
       fastcgi_param SCRIPT_FILENAME $rootLocation$fastcgi_script_name;
    }

    include common.conf;#some fastcgi config
}

Looking at the location block, I have an admin/ directory (where PHP runs directly) for each version. But when I access 192.168.0.117/en/admin/, the browser always redirects to 192.168.0.117/en/index.php. What is wrong with this configuration? How do I fix it?


I answered:

The problem is here:

       try_files $uri $uri/;

By doing this you explicitly told it to redirect to index.php. A trailing / in a try_files means to try the value of the index directive.

In a PHP-FPM location this should instead be:

       try_files $uri =404;
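
After making that change, a quick check (assuming the server block above, listening on 192.168.0.117:8080):

# Validate the configuration, reload, and confirm that /en/admin/ is now
# handled by PHP-FPM instead of redirecting to /en/index.php.
nginx -t && nginx -s reload
curl -I http://192.168.0.117:8080/en/admin/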

View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.