Ansible: SSH works, but ansible throws unreachable error

Created on 7 Apr 2016  ·  93 Comments  ·  Source: ansible/ansible

ISSUE TYPE

  • Bug Report
ANSIBLE VERSION
ansible 2.0.0.2
  config file = 
  configured module search path = Default w/o overrides
CONFIGURATION

No changes

OS / ENVIRONMENT

OS X El Capitan Version 10.11.3

SUMMARY

I can connect to my Raspberry Pi through SSH over an Ethernet cable via "ssh pi@169.254.0.2", but running Ansible with this IP address as a host fails.

I have successfully configured this Raspberry Pi with Ansible over wifi (using the wifi IP address), but now, trying to use Ansible via the direct Ethernet connection, I get the cryptic error message:

TASK [setup] *******************************************************************
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}

Because I _can_ successfully connect to this pi using that IP address through ssh from terminal, I am positing that this is a bug in Ansible.
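As background (my addition, not from the report): errno 25 in that message is ENOTTY, which is raised when a terminal ioctl is attempted on a file descriptor that is not a terminal; this is consistent with the getpass/tty warnings in the full output below. Assuming python3 is available, the mapping can be checked with:

```shell
# Look up errno 25: ENOTTY, "Inappropriate ioctl for device", i.e. a
# terminal ioctl was attempted on a non-terminal file descriptor.
code=$(python3 -c "import errno; print(errno.errorcode[25])")
msg=$(python3 -c "import os; print(os.strerror(25))")
echo "errno 25 is $code: $msg"
```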

STEPS TO REPRODUCE

I run this command to run the role:

ansible-playbook ansible-pi/playbook.yml -i ansible-pi/hosts --ask-pass --sudo -c paramiko -vvvv

I also tried

ansible-playbook ansible-pi/playbook.yml -i ansible-pi/hosts --ask-pass --sudo -vvvv

which led to the same error.

hosts file

[pis]
169.254.0.2

playbook


---

- name: Ansible Playbook for configuring brand new Raspberry Pi

  hosts: pis
  roles:
    - pi
  remote_user: pi
  sudo: yes

I assume that the role is actually unimportant because ansible is failing at the ssh connection step.

EXPECTED RESULTS

I expect Ansible to connect to the Pi and run the role (I have successfully done this connecting over an IP address through wifi).

ACTUAL RESULTS
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
No config file found; using defaults
  passwd = fallback_getpass(prompt, stream)
Warning: Password input may be echoed.
SSH password: raspberry

[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and 
make sure become_method is 'sudo' (default). This feature will be removed in a 
future release. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
Loaded callback default of type stdout, v2.0
1 plays in ansible-pi/playbook.yml

PLAY [Ansible Playbook for configuring brand new Raspberry Pi] *****************

TASK [setup] *******************************************************************
<169.254.0.2> ESTABLISH CONNECTION FOR USER: pi on PORT 22 TO 169.254.0.2
CONNECTION: pid 2118 waiting for lock on 10
CONNECTION: pid 2118 acquired lock on 10
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}

PLAY RECAP *********************************************************************
169.254.0.2                : ok=0    changed=0    unreachable=1    failed=0   
affects_2.0 affects_2.1 affects_2.2 affects_2.3 affects_2.4 affects_2.5 bug plugin connection ssh

Most helpful comment

This happened all of a sudden when I upgraded Ansible.

To successfully run I had to:

ansible-playbook --limit grunndata playbook.yml -c paramiko -u deploy

Earlier I have only run

ansible-playbook --limit grunndata playbook.yml

Normal SSH with the following works with no issues:

ssh deploy@grunndata

Something has changed.

What information can I provide to help debug this?

I am running the following:

  • Ubuntu 16.04
  • Ansible 2.1.0.0 installed via pip

All 93 comments

Hi!

Thanks very much for your submission to Ansible. It sincerely means a lot to us.

We have some questions we'd like to know about before we can get this request queued up. If you can help answer them, we'd greatly appreciate it:

  • Have you tried disabling fact gathering to check for more verbose errors from a task instead?
  • With fact gathering disabled, have you tested the raw module?

Just as a quick reminder of things, this is a really busy project. We have over 800 contributors and to manage the queue effectively
we assign things a priority between P1 (highest) and P5. We'd like to thank you very much for your time!
We'll work things in priority order, so just wanted you to be aware of the queue and know we haven't forgotten about you!

We will definitely see your comments on this issue when reading this ticket, but may not be able to reply promptly. You may also wish to join one of our two mailing lists, which are very active.

Thank you once again for this and your interest in Ansible!

@mhfowler: I was able to bypass this by providing ansible_password in my inventory

ansible_password worked for me too

[testServer]
192.168.33.10

[testServer:vars]
ansible_password=vagrant


+1

Needed to add -c paramiko because one of 5 hosts was failing, even though I could ssh into all of them successfully.

For me, I had an .ssh/config entry for my user to match to the remote hostname.

Host servername  
    User username

I could SSH directly to the server with ssh servername

However, with Ansible, I needed to add the -u parameter to the deploy command:

ansible-playbook -vvvv -i poc book_deploy.yml --ask-vault-pass --ask-become-pass -u username

After that, I could deploy OK.

A little odd it didn't use the .ssh/config file as it previously did, but the workaround works, thanks :)
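As a sketch (the host and user names are the placeholders from the comment above, not real values): instead of passing -u on every run, the user can be pinned per host in the inventory, which sidesteps the question of whether ~/.ssh/config is consulted:

```ini
# Hypothetical inventory entry; 'servername' and 'username' are the
# placeholders used in the comment above.
[poc]
servername ansible_user=username
```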

@mhfowler closure has been requested for this issue or it has timed out waiting for your response.
click here for bot help

Why is this? I was so happy doing ansible -m ping all; now I need to do -u user -c paramiko.

@roolo I also set 'ansible_password' and it began to work for me. What's it for? You can set it to literally anything you want and it will work now.

Same issue with ansible 2.1.2.0 and --ask-pass option.
OS X 10.11.6

@ohallors' fix didn't help.

I am away from my computer for a few weeks. Pls ping me about this after
3rd of November if I'll not reply by myself. Thx

Same.

$ ansible --version

ansible 2.2.0 (devel 6666d13654) last updated 2016/09/22 10:43:16 (GMT -700)
  lib/ansible/modules/core: (detached HEAD 0f505378c3) last updated 2016/09/23 17:20:56 (GMT -700)
  lib/ansible/modules/extras: (detached HEAD 935a3ab2cb) last updated 2016/09/23 17:20:56 (GMT -700)

Using -c paramiko seems to work better; it looks like -c smart is broken.

In case it helps anyone, I resolved this issue on Ubuntu 16.04 by replacing this line in my hosts file...

web1 ansible_ssh_host=my_remote_user@my_ip

with

web1 ansible_ssh_host=my_ip

and then making sure I had added

remote_user=my_remote_user

to my ansible.cfg

For me it was simply because I had added "my_remote_user@" in front of my IP address. This had worked before I upgraded.

I had the same issue and pinging the host first somehow resolved the issue.

ansible <host> -i <inventory-file> -m ping

UPD: I have to run the ping command almost every time before executing the playbook, as after a couple of minutes of inactivity the playbook fails again.

@cue232s It tells Ansible what password to use for the SSH connection.

http://docs.ansible.com/ansible/intro_inventory.html#list-of-behavioral-inventory-parameters (looks like the parameter is now called _ansible_ssh_pass_)

I resolved a similar issue on Mac OS X with ansible 2.1.2.0 that may help. Not sure where else to post it. I could ssh to the instance, but running my playbook resulted in:

fatal: [ec2-1-2-3-4.us-west-2.compute.amazonaws.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}

No other error output. But it worked with -c paramiko appended.

I downgraded to ansible 1.9.4 (pip install ansible==1.9.4) and now when I run it I get the error:

fatal: [ec2-1-2-3-4.us-west-2.compute.amazonaws.com] => SSH Error: unix_listener: "/Users/myname/.ansible/cp/ansible-ssh-ec2-1-2-3-4.us-west-2.compute.amazonaws.com-22-ubuntu.0o1S2DUmaWg7dLdF" too long for Unix domain socket

So I upgraded back to 2.1.2.0 and I added an ansible.cfg file to my project directory with this content:

[ssh_connection]
control_path=%(directory)s/%%h-%%r

And the connection worked.
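For reference (my addition): the "too long for Unix domain socket" failure comes from the OS cap on socket paths, roughly 104 bytes on macOS and 108 on Linux. The default ControlPath embeds host, port, and user, so a long EC2 hostname overflows it. A sketch that measures the expanded path for the hostname above (the home directory and 16-character random suffix are illustrative stand-ins):

```shell
# Unix domain socket paths are capped at roughly 104 bytes on macOS
# (108 on Linux). The default ControlPath embeds host, port, and user,
# so long EC2 hostnames overflow it.
host="ec2-1-2-3-4.us-west-2.compute.amazonaws.com"
user="ubuntu"
suffix="XXXXXXXXXXXXXXXX"   # stand-in for the random suffix ssh appends
path="/Users/myname/.ansible/cp/ansible-ssh-${host}-22-${user}.${suffix}"
len=${#path}
echo "control path would be $len bytes"
if [ "$len" -gt 104 ]; then
  echo "too long for a Unix domain socket on macOS"
fi
```

The shortened control_path above fixes this precisely because %h-%r drops most of those bytes.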

I am experiencing the same problem as https://github.com/ansible/ansible/issues/15321#issuecomment-256346976

ansible-playbook is failing to connect and is not creating the socket under ~/.ansible/cp. If I run ansible -m ping first, the socket is created and ansible-playbook will succeed if I run within 60 seconds.

Interestingly, if I run ansible-playbook with -vvv option and then copy the exact ssh command shown and run it, the connection succeeds and ansible-playbook will also succeed.

I'm having the problem on ansible-2.1.2.0 installed with Homebrew on macOS Sierra 10.12.1

Downgrading to 2.1.1.0 eliminates the problem for me.

I had the same issue

  • ansible: 2.1.2.0
  • Fedora 24 as management box
  • CentOS 7 as managed box

I resolved it by adding the key used for authentication to ssh-agent. The key I used had no passphrase.

Having the same issue. Standard OpenSSH attempt fails but paramiko works.

Running Ansible inside Vagrant/VirtualBox on Windows to provision remote VMs. Both machines are running Ubuntu 16.04. Ansible version is 2.1.2.0, and the Ansible config file is located in /ansible/ansible.cfg.

hosts line:

raw1  ansible_host=xx.xx.xx.xx  ansible_port=22  ansible_user=root  ansible_ssh_pass=wer32dw

This fails:

ubuntu@devbox:/ansible$ sudo ansible raw1 -vvvv -m ping
Using /ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<xx.xx.xx.xx> ESTABLISH SSH CONNECTION FOR USER: root
<66.23.245.125> SSH: EXEC sshpass -d12 ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o Port=22 -o User=root -o ConnectTimeout=10 -o ControlPath=/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r 66.23.245.125 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477987158.35-58855315932449 `" && echo ansible-tmp-1477987158.35-58855315932449="` echo $HOME/.ansible/tmp/ansible-tmp-1477987158.35-58855315932449 `" ) && sleep 0'"'"''
raw1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

This works:

ubuntu@tgpdevbox:/ansible$ sudo ansible raw1 -vvvv -m ping -c paramiko
Using /ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<xx.xx.xx.xx> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO xx.xx.xx.xx
<xx.xx.xx.xx> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806 `" && echo ansible-tmp-1477987431.74-236753198598806="` echo $HOME/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806 `" ) && sleep 0'
<xx.xx.xx.xx> PUT /tmp/tmp7oXJF4 TO /root/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806/ping
<xx.xx.xx.xx> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806/ /root/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806/ping && sleep 0'
<xx.xx.xx.xx> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806/ping; rm -rf "/root/.ansible/tmp/ansible-tmp-1477987431.74-236753198598806/" > /dev/null 2>&1 && sleep 0'
raw1 | SUCCESS => {
    "changed": false,
    "invocation": {
        "module_args": {
            "data": null
        },
        "module_name": "ping"
    },
    "ping": "pong"
}
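As an aside, the EXEC lines in the paramiko trace above are just Ansible creating a private staging directory for the module payload. The equivalent shell is runnable locally (the directory name is an illustrative stand-in):

```shell
# Reproduce Ansible's first EXEC step: umask 77 makes the new directory
# mode 700, readable only by the connecting user.
tmpname="ansible-tmp-demo-$$"
tmpdir=$( umask 77 && mkdir -p "$HOME/.ansible/tmp/$tmpname" && echo "$HOME/.ansible/tmp/$tmpname" )
# GNU and BSD stat differ, so try both spellings
perms=$(stat -c %a "$tmpdir" 2>/dev/null || stat -f %Lp "$tmpdir")
echo "created $tmpdir with mode $perms"
rm -rf "$tmpdir"   # clean up the demo directory
```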

Solution: upgraded to Ansible 2.2.0.0 and I no longer have to use -c paramiko.

I had this error, with cron module, and upgrading to ansible 2.2.0.0 fixed for me too!

I can connect to my host as root but cannot run my Ansible command:

[root@workstation svc_deployer]# ansible puppet.home.io -m ping --become-user=root --ask-sudo-pass
SUDO password:
puppet.home.io | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
[root@workstation svc_deployer]# ssh puppet.home.io
root@puppet.home.io's password:
Last login: Sun Nov 13 18:45:00 2016 from 192.168.56.160
[root@puppet ~]#

I tried verbose:

[root@workstation svc_deployer]# sudo ansible puppet.home.io -m ping --become-user=root -c ssh -vvvv --become-method=sudo
Using /etc/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<puppet.home.io> ESTABLISH SSH CONNECTION FOR USER: None
<puppet.home.io> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r puppet.home.io '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479092049.76-54149073209683 `" && echo ansible-tmp-1479092049.76-54149073209683="` echo $HOME/.ansible/tmp/ansible-tmp-1479092049.76-54149073209683 `" ) && sleep 0'"'"''
puppet.home.io | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

Found the solution: I ran the following command on my host to fix the SSH key folder permissions (CentOS 6.6):

[root@puppet ~]# restorecon -R -v /root/.ssh
restorecon reset /root/.ssh context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:ssh_home_t:s0
restorecon reset /root/.ssh/authorized_keys context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:ssh_home_t:s0

and was able to run setup

[root@workstation ~]# ansible puppet -m setup
puppet.home.io | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.56.170",
            "192.168.1.89"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a00:27ff:fe6a:41b1",
            "2602:306:8b7f:37d0:a00:27ff:fea7:e797",
            "fe80::a00:27ff:fea7:e797"
        ],

Tried a couple of combinations of ansible_host names, and I've found it works with

foo.bar.com
XXX.XXX.XXX.XXX (ip addresses)

and that it doesn't work (without specifying paramiko) with

foo-with-dashes.bar.com
foo.with.periods.AND.more.than.one.section.before.bar.com

Limiting ssh key permissions to 600 fixed this issue.
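For anyone checking this on their own machine: OpenSSH ignores private keys that are readable by group or others, which then surfaces through Ansible as UNREACHABLE. A safe sketch using a scratch file as a stand-in for ~/.ssh/id_rsa:

```shell
# Private keys must be mode 600 (and ~/.ssh itself 700) or ssh refuses
# to use them. mktemp gives us a scratch file so nothing real is touched.
key=$(mktemp)
chmod 600 "$key"
# GNU and BSD stat differ, so try both spellings
perms=$(stat -c %a "$key" 2>/dev/null || stat -f %Lp "$key")
echo "key mode is now $perms"
rm -f "$key"
```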

Having the same issue:
ansible 2.1.2.0
Ubuntu 14.04.5 x64

Error:

failed: [shshprod](item=shsh-api) => {"item": "shsh-api", "msg": "Failed to connect to the host via ssh.", "unreachable": true}

when I run ansible-playbook -i inventory.ini shsh.yml --key-file ssh/deploy.
Is it due to SSH key permissions?

@jimi-c Why is this issue closed again?

The issue was auto-closed by our bot due to a lack of response. Based on several of the responses above, it appears to be (at least in some cases) related to permissions on SSH keys (which should always be 0600 for private keys).

Does not appear to be a permissions issue for me, currently on ansible 2.1.2.0
I retrieve the DNS name for the EC2 instance I'm trying to run Ansible against. It comes back in the form: ec2-#{ip_address}.ap-southeast-2.compute.amazonaws.com

That results in the error:

UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}

However, if I use the IP address for the server, it runs without issue... so this seems much closer to the issue that @marcstreeter encountered.


I have experienced this case, running ansible 2.0.2.0 with a target host running CentOS.

In my scenario, I have a hosts file containing details like ansible_host, ansible_user, and ansible_ssh_pass.
However, the parsing of characters doesn't seem to support "#" (i.e. 3 servers have a password containing a "#", and those 3 hosts flagged the error).

Running the playbook, it gives me an error:

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
<120.xxx.xxx.xxx> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 120.xxx.xxx.xxx
fatal: [sz-server]: UNREACHABLE! => {"changed": false, "msg": "Authentication failed.", "unreachable": true}

The strange part is that the error specifically states Authentication failed, but an SSH session works.

Checking the server's secure log, it shows something like:

<server> sshd[23642]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<sz-server> user=root
<server> sshd[23642]: pam_succeed_if(sshd:auth): requirement "uid >= 1000" not met by user "root"
 <server> sshd[23642]: Failed password for root from <IP> port 61912 ssh2

Changing my password to replace "#" with a different special character like "%" works.
Now the playbook successfully runs on all machines.

@upbeta01 that was an old bug, and I thought we had fixed it a while back. You may need to escape the # to prevent it from being considered the start of a comment
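For illustration (hypothetical host and password): one commonly suggested way to keep # from being treated as a comment start in an INI inventory is to quote the value; whether quoting or backslash escaping is needed depends on the Ansible version:

```ini
# Hypothetical entry; the address and password are made up.
[sz-servers]
203.0.113.10 ansible_user=root ansible_ssh_pass='secret#word'
```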

Thanks for the info @jimi-c. I might consider updating my Ansible to the latest stable version; I'm not sure whether 2.0.2.0 has had the patch for that bug applied.

I am using ansible 2.2.0.0 and the problem remains... luckily for me I found this open thread... maybe we can add a warning when using CentOS 7.

I'm using ansible 2.2.0.0.

This command works because it doesn't need sudo permissions:

ansible -m command -a 'df -h' ca.o.prv

But this one doesn't:

ansible -s -m command -a 'fdisk -l' ca.o.prv
cas.o.prv | FAILED | rc=0 >>
MODULE FAILURE

Solved:
ansible all -s --ask-sudo-pass -m raw -a "fdisk -l"

@tyronzerafa & myself are seeing the same issues as above. Our environment is:

OS:

$ cat /etc/redhat-release
CentOS release 6.8 (Final)

Ansible version:

$ ansible --version
ansible 2.2.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Failed ping:

$ ansible tcfabrics -m ping
server3 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ",
    "unreachable": true
}
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Successful ping:

$ ansible tcfabrics -m ping -c paramiko
server3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

The failing hosts are random: any playbook or command that we trigger works using paramiko, however using the default (smart?) connection it fails at random (though never with 100% success).

If this issue is different, feel free to let us know and we'll open a separate issue.

SSH into the UNREACHABLE ones to see what's going on.

If you have no key file set up for the server, use the -k flag:

ansible -i hosts servers -m ping -u root -k

I extracted the full ssh command executed by ansible-playbook by adding "-vvvv" to its command-line, and ran it manually. It printed this out at the end:

unix_listener: "/home/saurav/.ansible/cp/ansible-ssh-very-long-aws-ec2-hostname-deploy.XXYY" too long for Unix domain socket

Replacing the EC2 hostname with its IP in the hosts file fixed it for me.

It seems ansible is not respecting ~/.ssh/config:

Host raspberrypi.local
StrictHostKeyChecking no

So I could log in via the shell but Ansible failed. The following fixed it for me:

ssh-keygen -R raspberrypi.local

I finally got mine to work; this is with the root user, on first provision I suppose. Thanks for all the tips, even though this is a bit strange.

  • Host: Ubuntu 16.04.2 LTS
  • Server: Ubuntu 16.04.2 LTS
  • Ansible Version: Ansible 2.2.1.0
  • Paramiko with Python2.7 or Python3.5 works.

file: hosts

[test]
91.121.103.38 ansible_ssh_user=root

[test:vars]
ansible_password=MustBeTheRealPassword

command

ansible-playbook -vvvv preciousbook.yml -c paramiko -u root --ask-become-pass

I noticed that for the root user, I can run without the -u root flag and without setting root@ipaddr; rather, just leave it as the ipaddr in hosts.

ansible-playbook -vvvv php.yml -c paramiko --ask-become-pass. I haven't tried as a non-root user or with SSH keys, as I am not using DigitalOcean, which makes that convenient when you boot up a machine.

@JREAM @PGUTH @roolo @ohallors @ringe @mhfowler @midolo @upbeta01 (and other folks who can reproduce this).

Could you test with the 2.3 release candidate build (http://releases.ansible.com/ansible/ansible-2.3.0.0-0.3.rc3.tar.gz)?

And if it still fails, provide the ansible.cfg config, the -vvv output, and any shareable info about the SSH configs and versions involved?

@alikins - I tried testing it. I am getting an error upon running the ansible-playbook command. I tried to isolate the release by using a virtualenv. Did I miss something?

┌─(ansible-2.3)[User][Eldies-MacBook-Pro][~/Private/work/infra/2.3/ansible-2.3.0.0/bin]
└─▪ ./ansible-playbook -l hz-monitor -vvvv
Traceback (most recent call last):
  File "./ansible-playbook", line 43, in <module>
    import ansible.constants as C
ImportError: No module named ansible.constants

Here is how it looks in the file structure.

┌─(ansible-2.3)[User][Eldies-MacBook-Pro][~/Private/work/infra/2.3/ansible-2.3.0.0/bin]
└─▪ ls
ansible            ansible-console    ansible-galaxy     ansible-pull       ansible.cfg        hosts
ansible-connection ansible-doc        ansible-playbook   ansible-vault      check_uptime.yml   roles

@upbeta01 Cool, thanks for testing. Looks like the ansible python package is not on sys.path. (ie, ~/Private/work/infra/2.3/ansible-2.3.0.0/lib/ansible needs to be in sys.path). Not sure how the virtualenv was setup, but if it is basically the unpacked tar.gz try:

cd ~/Private/work/infra/2.3/ansible-2.3.0.0/
source hacking/env-setup

That will add ~/Private/work/infra/2.3/ansible-2.3.0.0/lib/ansible to PYTHONPATH and set ANSIBLE_HOME to ~/Private/work/infra/2.3/ansible-2.3.0.0 which should get it going.

I believe a pip install of the tarball into the virtualenv should work as well, but haven't verified that.

I still have this issue, willing to test if you need.

Ubuntu 16.04 and ansible 2.0.0.2

xxxx@xxxx:/etc/ansible# ansible-playbook provision.yml

PLAY ***************************************************************************

TASK [setup] *******************************************************************
fatal: [c999951727-cloudpro-689901068]: UNREACHABLE! => {"changed": false, "msg": "ERROR! Authentication failure.", "unreachable": true}

PLAY RECAP *********************************************************************
c999951727-cloudpro-689901068 : ok=0    changed=0    unreachable=1    failed=0

xxx@xxxxx:/etc/ansible# ansible New -m ping
c999951727-cloudpro-689901068 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
root@rundeck:/etc/ansible# ansible-playbook provision.yml

PLAY ***************************************************************************

TASK [setup] *******************************************************************
ok: [c999951727-cloudpro-689901068]

TASK [user : Create Ansible User] **********************************************
changed: [c999951727-cloudpro-689901068]

TASK [user : Add Ansible Authorized Key] ***************************************
changed: [c999951727-cloudpro-689901068]

TASK [user : Create Personal User] *********************************************
changed: [c999951727-cloudpro-689901068]

TASK [user : Add Personal Authorized Key] **************************************
changed: [c999951727-cloudpro-689901068]

PLAY RECAP *********************************************************************
c999951727-cloudpro-689901068 : ok=5    changed=4    unreachable=0    failed=0

I am having the same issue

Make sure you have python-simplejson as you can't even gather facts without it. Adding this to the beginning of my playbooks helps.

  gather_facts: no
  pre_tasks:
    - name: 'install python2'
      raw: apt-get update; apt-get -y install python-simplejson
    - setup:
        filter: "ansible_*"
      tags: always

@shadycuz: for me, this was resolved on Ubuntu 16.04 with this added to ansible.cfg:

[ssh_connection] 
# for running on Ubuntu
control_path=%(directory)s/%%h-%%r

On a related note, this resolved it when running on a Mac host:

[ssh_connection] 
# for running on OSX
control_path = %(directory)s/%%C

I also had that problem and figured out that my SFTP server was not started. I reconfigured the sshd server by changing /etc/ssh/sshd_config and restarted the SSH server; the problem is gone.

Running from a Mac host, @ttrahan's ansible.cfg workaround did not work for me. I have a playbook that's connecting to several boxes, the majority of which are running Ubuntu 14.04 or 12.04. All of the Ubuntu boxes work without issue. The two boxes that are running CentOS, however, are both failing to connect with the following error:

fatal: [<a-centos-host>]: UNREACHABLE! => {"changed": false, "msg": "Failed to open session: [Errno 54] Connection reset by peer", "unreachable": true}

@alikins, not sure if the env-setup you mention is a variable or actually refers to a file.

Here is what I see inside the directory hacking

┌─[User][Eldies-MacBook-Pro][~/Private/work/infra/2.3/ansible-2.3.0.0]
└─▪ source hacking/
dump_playbook_attributes.py  module_formatter.py          templates/   

Should I re-download the tar file, in case there are new files in it?

Hi, can someone help me? I am unable to ping the Cisco router from my VM (CentOS 7).

[root@centos7 ansible]# ansible ios -m ping 
<IP_address> | UNREACHABLE! => {
    "changed": false,
    "msg": "Authentication failed.",
    "unreachable": true
}

Here is the snippet of ansible.cfg file:

transport  = paramiko
host_key_checking = False
# SSH timeout
timeout = 20

hosts file:

[ios]
10.10.15.233 ansible_ssh_user=local ansible_ssh_pass=password

[root@centos7 ansible]# ansible --version
ansible 2.2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Hi,

I had the same 'unreachable error' with ansible 2.3.0.0 running on Ubuntu 16.04.
I appended -c paramiko.

Example: $ ansible-playbook -i inventory ./router/tasks/get-vlan.yml -c paramiko

But I then got a completely different error message: no authentication methods available.

That new error resolved itself when I added 'connection: local' to the YAML, and it continued to work even after dropping -c paramiko, thus resolving both issues.

  • reference to the fix is here: https://github.com/ansible/ansible/issues/16017
$ ansible-playbook -i inventory ./router/tasks/get-vlan.yml
---
- name: Check for VLAN 123 from Cisco IOS Routers
  hosts: routers
  connection: local

  vars_prompt:
    - name: "username"
      prompt: "Username"
      private: no
    - name: "password"
      prompt: "Password"

  tasks:
    - ios_command:
        username: "{{ username }}"
        password: "{{ password }}"
        commands: "show ip int bri | inc vlan 123"

@dave-morrow Thank you for your reply.
After reading a lot of documents, I figured out two days back that the fix is using 'connection=local'.
But I wonder why the IOS command line doesn't qualify as a shell to Ansible. Why do we need to run the commands on the Ansible host? Any thoughts, please?

@b2sweety It's a good question for which I don't know the answer.

Sorry, but Ansible is very new to me; in fact I've been playing with it for just over a day.
I do know Cisco (IOS) CLI is nothing like UNIX (POSIX) CLI, so perhaps that has something to do with it.

I'll leave it to others better qualified to respond to this question.

Hello Ansible Team,
I have configured Ansible on RHEL 7.x machines, taking two machines as an example: a control server and Node 1. I copied SSH keys and am able to log in from each machine to the other without a password.
While trying to check ping via Ansible, I get an error for the localhost / local control server IP address in the inventory file.

  • Ping succeeds on the remote node, but localhost is UNREACHABLE
[ansible@ip-172-31-27-41 .ssh]$ ansible test -m ping
172.31.27.41 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n", 
    "unreachable": true
}

That is the error. Please provide me with a solution. My mail: "nlkalyan.[email protected]"

What fixed it for me was disabling paramiko by setting ansible_connection=ssh.

Error:

UNREACHABLE! => {
    "changed": false,
    "msg": "('Bad authentication type', [u'publickey']) (allowed_types=[u'publickey'])",
    "unreachable": true
}

Fix:

# Hosts File
[host_name]
XX.XXX.XXX.XXX   ansible_user=username ansible_connection=ssh

run command:

ansible-playbook -i inventory_file playbook.yml

Host Environment:

  • ansible 2.2.1.0
  • OSX El Capitan 10.11.6

@Vcoleman, thanks for the help.
I did the step you gave; still the same error.
Ping command:

$ ansible test -m ping

The error shown on mine was somewhat different from yours:
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n"
I changed my hosts file as below.

My hosts file entries:

[test]
172.31.27.41 ansible_user=ansible  ansible_connection=ssh
#172.31.22.200
172.31.21.47

In the above, 172.31.27.41 is my local server (control server).
Again I had the same issue.

@pavaniandkalyan Are you sure the user who has the correct SSH key on RPi is user ansible?

I am sure. I have copied all the keys from the control server to all nodes and vice versa as the ansible user.
Is it required to install / update / ping on the control server? I mean, is control server interaction needed during automation with Ansible, or is pushing to the nodes enough?

Doesn't seem right that I need to add -c paramiko:

ansible -i inventory/ec2.py us-west-2 -u ec2-user -m ping -c paramiko
x.x.217.210 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}



ansible -i inventory/ec2.py us-west-2 -u ec2-user -m ping                       
x.x.217.210 | UNREACHABLE! => {
    "changed": false, 
    "msg": "SSH Error: data could not be sent to remote host \"x.x.217.210\". Make sure this host can be reached over ssh", 
    "unreachable": true
}

msg: SSH Error: data could not be sent to remote host "x.x.217.210". Make sure this host can be reached over ssh

I was getting the same problem:

$ ansible local -m ping
127.0.0.1 | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n", 
    "unreachable": true
}

Solved the issue by installing sshpass with the command:

sudo apt-get install sshpass

After installing sshpass, I executed this command:

ansible local -m ping --ask-pass
SSH password: 
127.0.0.1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Hope this helps!!!

use paramiko as a workaround.

ansible-playbook abc.yml -i development -c paramiko

or add to ansible config

[defaults]
transport = paramiko

I had the same issue:

My resolution:

ansible -c local -i all_servers all_servers -m ping

-c local worked for me: I set transport = local and then I did not need to give the -c option in the runs. I think -c local honors the .ssh/config file settings; if that is not given, you need to define all the settings in the ansible.cfg file. That's my guess.

I wrote the above quote yesterday, thinking that the issue had been resolved. THE ABOVE SETTINGS I MENTIONED ARE NOT THE RIGHT ANSWER.

Here's what solved the problem.

From what I see, the way Ansible works is quite right: it takes into consideration that you can ship your ansible.cfg file with your code. So the earlier versions actually worked with your .ssh/config files, but the new versions don't.

So the newer versions of ansible let you configure the ssh settings in the [ssh_connection] block, where you can put all the requirements for ssh. That means you no longer need a .ssh/config file on the control server, and your inventory file can contain IPs or DNS records. But those of us who are used to having ansible honor our .ssh/config hit this issue. For that, you have to explicitly tell ansible the location of the ssh config file, like below:

ssh_args = -F /Users/vinitk/.ssh/config -o ControlMaster=auto -o ControlPersist=30m
control_path = ~/.ssh/controlmasters/%%r@%%h:%%p

The ControlMaster, ControlPersist, and control_path values correspond to the same settings I have in my .ssh/config file. They can be something else if you want ansible to create the control sockets in a different location; for ease of use I used the same path so that Ansible and any other tool like parallel-ssh can share the same ControlMaster.

NOTE

the control_path = ~/.ssh/controlmasters/%%r@%%h:%%p setting has doubled %% signs, as opposed to the single % used in .ssh/config (ansible.cfg requires % to be escaped as %%).
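Putting the pieces above together, a complete [ssh_connection] section might look like this (a sketch, not a drop-in config; the ssh config path and control socket directory are user-specific):

```ini
[ssh_connection]
# Point ansible's ssh invocations at your own client config
ssh_args = -F ~/.ssh/config -o ControlMaster=auto -o ControlPersist=30m
# %% escapes % in ansible.cfg; ssh itself sees %r@%h:%p
control_path = ~/.ssh/controlmasters/%%r@%%h:%%p
```

The control socket directory must exist before ssh can create sockets in it.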

The next part is that Ansible's file transfer method defaults to smart; below is the relevant block.

# Control the mechanism for transferring files (new)
# If set, this will override the scp_if_ssh option
#   * sftp  = use sftp to transfer files
#   * scp   = use scp to transfer files
#   * piped = use 'dd' over SSH to transfer files
#   * smart = try sftp, scp, and piped, in that order [default]
 transfer_method = scp

You need the sftp Subsystem enabled in the sshd_config file on the servers. In my case a couple of servers did not have that setting, so sftp failed and the connection came back as UNREACHABLE. The smart setting says it will try sftp, then scp, then piped in that order, but that was not working for me; setting transfer_method = scp (or piped) works.
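For reference, the sshd side of this is a single Subsystem line; the binary path varies by distro (the path below is typical of RHEL/CentOS, while Debian/Ubuntu commonly use /usr/lib/openssh/sftp-server):

```
# /etc/ssh/sshd_config
Subsystem sftp /usr/libexec/openssh/sftp-server
```

Newer OpenSSH releases can also use the built-in `Subsystem sftp internal-sftp`, which avoids the external binary entirely.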

Hope that helps

Just an update for those who are interested.

Every solution here failed for me. The only thing that worked was downgrading to ansible v2.1.5.0-1, which is available in the repo. Then it all runs smoothly.

I fought this for 2 hours and then found "-c ssh" for the ansible-playbook command. It worked like a charm; it gets around an issue that old OpenSSH versions have with Ansible.

Hi all,
Whenever I try to bring up the OpenShift cluster, I get the following error. I ran the command below.

sudo ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
fatal: [g_all_hosts | default([])]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname g_all_hosts | default([]): Name or service not known\r\n", "unreachable": true}

```json
[rhnuser3@ip-172-31-10-250 ~]$ ansible -m ping all
ip-172-31-10-250.ca-central-1.compute.internal | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Host key verification failed.\r\n",
    "unreachable": true
}
```
We have verified following steps.

Master communicating to the node

[rhnuser3@ip-172-31-10-250 ~]$ ssh [email protected]
Last login: Wed Aug 16 08:05:16 2017 from master
[rhnuser3@node ~]$


Node Communicating to the master

[rhnuser3@ip-172-31-9-57 ~]$ ssh [email protected]
Last login: Wed Aug 16 07:56:06 2017 from node
[rhnuser3@master ~]$

We have changed the necessary configuration files on the master and node servers.

vi /etc/ansible/ansible.cfg


inventory = /etc/ansible/hosts
sudo_user = rhnuser3


We removed the comments from the following lines.
We have updated the /etc/hosts file; it looks like:

[rhnuser3@ip-172-31-10-250 ~]$ cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

172.31.10.250 master
172.31.9.57 node

sudo vi /etc/ssh/sshd_config

PasswordAuthentication yes

PermitEmptyPasswords no

PasswordAuthentication no
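As an aside, sshd uses the first value it encounters for each keyword, so with both PasswordAuthentication lines above the effective setting is yes. A deduplicated fragment avoids the ambiguity (assuming password logins are meant to be enabled for the key-copy step):

```
# /etc/ssh/sshd_config — sshd takes the first match per keyword
PasswordAuthentication yes
PermitEmptyPasswords no
```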

sudo cat /var/log/secure


```log
Aug 16 08:28:09 localhost sshd[22792]: Disconnecting: Too many authentication failures for root [preauth]
Aug 16 08:28:09 localhost sshd[22792]: PAM 5 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=blk-222-40-174.eastlink.ca user=root
Aug 16 08:28:09 localhost sshd[22792]: PAM service(sshd) ignoring max retries; 6 > 3
```

On the master server I performed the following commands:

ssh-keygen -t rsa
cat /home/rhnuser3/.ssh/id_rsa.pub
sudo vi /home/rhnuser3/.ssh/authorized_keys
sudo chmod 600 .ssh/authorized_keys
sudo chown rhnuser3:rhnuser3 .ssh/authorized_keys
cat /home/rhnuser3/.ssh/id_rsa

On the node server I performed the following commands:

cd /home/rhnuser3
mkdir .ssh
chmod 700 .ssh
chown rhnuser3:rhnuser3 .ssh
sudo vi /home/rhnuser3/.ssh/authorized_keys
sudo chmod 600 .ssh/authorized_keys
sudo chown rhnuser3:rhnuser3 .ssh/authorized_keys
sudo vi /home/rhnuser3/.ssh/id_rsa
sudo chmod 600 .ssh/id_rsa
sudo chown rhnuser3:rhnuser3 .ssh/id_rsa

Please suggest what we should try here.
@virtusademo


[rhnuser3@master ~]$ ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.5 (default, May 3 2017, 07:55:04) [GCC 4.8.5 20150623 (Red Hat 4.8.5-14)]
[rhnuser3@master ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.1 (Maipo)

I am using ansible 2.3.1.0 and added my hosts in /etc/ansible/hosts. If I run with --list-hosts, the error below appears:
ERROR! the field 'hosts' is required but was not set. The inventory path in ansible.cfg is the same. Can anyone help with this?

Hi everyone, I am a beginner with ansible and am going to use it for continuous delivery.
I want to know how inventory, playbooks, and modules interact internally.
If anybody knows this, kindly reach me at [email protected]

How do we solve this at the end of the day?

edit: it appears the SSH service on my remote machine may have crashed. I tried to start a new ssh session with PuTTY and it closes the connection before the login prompt.

edit2: The SSH service on the remote machine is indeed not working correctly anymore, although I haven't gotten a response as to exactly what the error is yet. Since it happened directly after this Ansible script was run, I'm leaving this here, as if an Ansible error caused sshd to crash it may still be related to this issue. Some details for anyone trying to recreate: both target and control machines are Intel Xeons running CentOS 7. The control version is centos-release-7-4.1708.el7.centos.x86_64, with the target having an Iris graphics setup.


I seem to be having this issue all of a sudden under ansible 2.4.1.0. Everything was working fine as I debugged a new role, and then this suddenly started happening.

These were the malformed tasks that failed just before it stopped connecting:

  - name: Get media SDK install folder contents
    command: "ls /opt/a-specific-directory/"
    register: directories

  - name: verify expected directories in install folder
    fail:
    when: not ({{directories}}|search({{item}}))
    vars:
      nested_list:
        - - dir1
          - dir2
          - dir3
          - dir4
          - dir5
          - dir6
          - dir7
          - dir8
          - dir9
    with_items: "{{ nested_list }}"

They threw this error:

TASK: verify expected directories in install folder
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: not
({{directories}}|search({{item}}))

fatal: [10.105.15.118]: FAILED! => {"failed": true, "msg": "The conditional check 'not ({{directories}}|search({{item}}))' failed. The error was: template error while templating string: expected token ':', got 'string'. String: {% if not ({'stderr_lines': [], u'changed': True, u'end': u'2017-11-28 12:12:31.502092', 'failed': False, u'stdout': u'dir2\\ndir3\\ndir4\\ndir5\\ndir6\\ndir7\\ndir8\\ndir9', u'cmd': [u'ls', u'/opt/a-specific-directory/'], u'rc': 0, u'start': u'2017-11-28 12:12:31.500354', u'stderr': u'', u'delta': u'0:00:00.001738', 'stdout_lines': [u'dir2', u'dir3', u'dir4', u'dir5', u'dir6', u'dir7', u'dir8', u'dir9']}|search(dir1)) %} True {% else %} False {% endif %}\n\nThe error appears to have been in '/home/my-playbook-location/playbook.yml': line 6, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - name: Verify expected directories in media SDK install folder\n    ^ here\n"}
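For what it's worth, the warning points at the fix: drop the {{ }} delimiters inside `when` and test the registered result directly. A hedged rewrite of the failing task (assuming the intent is to check each expected name against the ls output) might look like:

```yaml
- name: verify expected directories in install folder
  fail:
    msg: "{{ item }} missing from install folder"
  # when clauses are implicitly templated; no {{ }} needed
  when: item not in directories.stdout_lines
  with_items:
    - dir1
    - dir2
    - dir3
```

This sidesteps both the templating warning and the nested-list indirection, though it is unrelated to the UNREACHABLE error that followed.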

I copied the error msg from running ansible-playbook -vvvv myplaybook.yml afterwards and got:

Failed to connect to the host via ssh: OpenSSH_7.4p1, OpenSSL 1.0.2k-fips  26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/MY-USER/.ansible/cp/01607ca611" does not exist
debug2: resolving "[MY-REMOTE-IP]" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to MY-REMOTE-IP [MY-REMOTE-IP] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug3: timeout: 10000 ms remain after connect
debug1: identity file /home/MY-USER/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/MY-USER/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1
debug1: match: OpenSSH_6.6.1 pat OpenSSH_6.6.1* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to 10.105.15.118:22 as 'root'
debug3: hostkeys_foreach: reading file "/home/MY-USER/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /home/MY-USER/.ssh/known_hosts:1
debug3: load_hostkeys: loaded 1 keys from 10.105.15.118
debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
Connection reset by MY-REMOTE-IP port 22

Not sure what to make of that, since it'd been working up until now.
Both id_rsa files on the controller, and the authorized_keys on the remote machine are unchanged since I added the public key there a week ago and permissions remain 600.


Yeah, I even expect that problem whenever I start using Ansible on a new Mac. That's sad.

Just ran into this, only seems to occur on my mac though.

connection: local in the playbook fixed my problem

Same error facing:
fatal: [1.2.3.4]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '1.2.3.4' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}

SSH works but ansible throws unreachable error

@induraj it looks like the user you're running ansible as does not have access to the key.
Could you also post the output of ansible --version?
Also make sure you have installed ansible by only one method (pip or brew).

Thanks for quick response:
(ansble) ansible@sharma:~$ pip freeze |grep ansible
ansible==2.4.2.0

Are you running through the virtualenv? Does that have access to the key?

yes they have access:
ll ~/.ssh/raj_aws.pem
-rw------- 1 ansible ansible 1692 Jan 13 23:12 .ssh/raj_aws.pem.
Is there anything required?

@induraj I am not completely sure, but it could be that the right permissions aren't coming through the virtualenv.

OK, I am trying the same and still trying to figure it out.

let me share what I have till now.

  • created virtualenv
  • installed ansible with dependency.
  • able to create EC2 instance with ansible.
  • but facing a problem logging into the EC2 VM with ansible.

Could you please help me with accessing the EC2 VM via ansible so that I can move ahead with my testing?

Configured on the base machine without a virtualenv, it works. There might be some user authentication issue.

If your ssh-agent has multiple keys, use the ansible_ssh_private_key_file variable in your hosts entry to specify your private key; otherwise ssh-agent may pass the wrong key and be rejected.
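In inventory form, that looks something like the following (the group name, host name, user, and key path here are placeholders):

```ini
# inventory — placeholder values
[webservers]
myserver ansible_host=1.2.3.4 ansible_user=ec2-user ansible_ssh_private_key_file=~/.ssh/raj_aws.pem
```

The same variable can also be set per-group under a `[webservers:vars]` section if all hosts share one key.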

List Information

Hi!

Thanks very much for your interest in Ansible. It sincerely means a lot to us.

This appears to be a user question, and we'd like to direct these kinds of things to either the mailing list or the IRC channel.

If you can stop by there, we'd appreciate it. This allows us to keep the issue tracker for bugs, pull requests, RFEs and the like.

Thank you once again and we look forward to seeing you on the list or IRC. Thanks!

In my case the main issue was that the control socket file was not created, and as a result the proxy command did not have proper permissions.

Command revealed by adding -vvvvvvv option to ansible cmd:

ssh -vvv -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o Port=22 -o 'IdentityFile="secret.id_rsa"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=luser -o ConnectTimeout=10 -o 'ProxyCommand=ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q [email protected]' -o ControlPath=/Users/me/.ansible/cp/3b9a3c71ba 10.0.3.27 '/bin/sh -c '"'"'python && sleep 0'"'"''

If I manually created a route to 10.0.3.* and reran that command without the -o 'ProxyCommand=ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -q [email protected]' option, the control file was created and everything worked.

I mucked around a bit with ForwardAgent options on my host and on the proxy host to no avail. Eventually I just punted and ran that command for each of the failing hosts and let the hack unblock my work.

I was using a non-standard private key and it wasn't being found, even though it had 600 perms. ssh-add <path-to-private-key> fixed my issue.

Adding my key with ssh-copy-id to the remote server fixed the problem.

adding -o ControlMaster=auto -o ControlPersist=30m to ssh args fixed the issue for me.

  • ansible version: 2.4.1.0
  • os: macos sierra
  • remotes: ec2 instances (centos7, t2.micro)

More:

Getting "UNREACHABLE" error in the midst of role tasks. Would stop on the same task. But would run if I isolated only that task (via tags).

ansible -m ping myServer gave me UNREACHABLE! error.
ansible -c local -m ping myServer worked.

EDIT: To fix this in my playbook, I had to put:

- hosts: dev
  connection: local

I had the same problem with ansible 2.6.0.
My ssh_args in ansible.cfg:

ssh_args = -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=30m -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no

I was able to SSH into the hosts manually, but running ansible-playbook or ansible -m ping had problems. Downgrading ansible to 2.5.5 solved the problem for me.

I have the same problem: Ubuntu 14.04 with ansible 2.6.2 (upgraded from ansible 1.9).

ansible -m ping myServer gave me UNREACHABLE! error.
ansible -c local -m ping myServer worked.

Following @kararukeys, I downgraded to 2.5.5, but it's still the same.

2018-08-01 16:00:32 [mini@hq ansiblecontrol]$ ansible hqpc222.abc.com -i inventory/kw.production -m ping -vvv
/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a 2.7.x release that supports hmac.compare_digest as soon as possible.
  utils.DeprecatedIn23,
ansible 2.5.5
  config file = /home/mini/D/ansiblecontrol/ansible.cfg
  configured module search path = [u'/home/mini/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]
Using /home/mini/D/ansiblecontrol/ansible.cfg as config file
Parsed /home/mini/D/ansiblecontrol/inventory/kw.production inventory source with ini plugin
[pid 2014] 16:00:41.194783 D mitogen: mitogen.service.Pool(0x7f73f53147d0, size=16, th='MainThread'): initialized
[pid 2014] 16:00:41.195890 D ansible_mitogen.process: Service pool configured: size=16
META: ran handlers
[pid 2033] 16:00:41.233401 D mitogen: unix.connect(path='/tmp/mitogen_unix_uMVCQQ')
[pid 2033] 16:00:41.234083 D mitogen: unix.connect(): local ID is 1, remote is 0
[pid 2014] 16:00:41.235861 D mitogen: mitogen.ssh.Stream(u'default').connect()
[pid 2014] 16:00:41.304490 D mitogen: hybrid_tty_create_child() pid=2037 stdio=63, tty=17, cmd: ssh -o "LogLevel ERROR" -o "Compression yes" -o "ServerAliveInterval 15" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" -o "GlobalKnownHostsFile /dev/null" -C -o ControlMaster=auto -o ControlPersist=60s hqpc222.abc.com /usr/bin/python -c "'import codecs,os,sys;_=codecs.decode;exec(_(_(\"eNqFkc1OwzAQhM/NU+S2tmqlTuiFSJFAPSAOCClC9AAVyo9DLRLbOG5NeXq2KVKTcuC2n3bWMxrnbJ3pPjLSCEIDy/yIZBMiNNp+EJoGM5zrnUkIZzHn9Mw5G5PFbXziqtW9IPkY7BjWY/AIaNgf0L4tHLp2YZaFUBfWSwVhoephKb5EtXNF2Yphvdj1dlFKtTAHt9UKMOfsQjbPhsO9sL3U6iW92gy2Qu2lRYbb/O6Zwyabnp00iC2ZLtgU50A66fS7UGknFRrcbD/7JOGR6Arn0DOqdBc5nyY8XlKgAT7rrXSCxAwe7p8eOeevCjBOpWtsnQar7I0ce6+1EQrbBlsCjawoahInS35NGXxLgy81Jjvr1gx8CcevaMyvwWqYT/VeqP1/6r8p40nKH0t5sts=\".encode(),\"base64\"),\"zip\"))'"
[pid 2014] 16:00:41.305373 D mitogen: mitogen.ssh.Stream(u'local.2037').connect(): child process stdin/stdout=63
[pid 2014] 16:00:51.245756 D mitogen: mitogen.ssh.Stream(u'local.2037'): child process still alive, sending SIGTERM
[pid 2033] 16:00:51.246902 D mitogen: mitogen.core.Stream(u'unix_listener.2014').on_disconnect()
[pid 2033] 16:00:51.247108 D mitogen: Waker(Broker(0x7f73f4ac2dd0) rfd=14, wfd=15).on_disconnect()
[pid 2014] 16:00:51.247242 D mitogen: mitogen.core.Stream(u'unix_client.2033').on_disconnect()
hqpc222.abc.com | UNREACHABLE! => {
    "changed": false, 
    "msg": "Connection timed out.", 
    "unreachable": true
}
[pid 2014] 16:00:51.288028 I mitogen: mitogen.service.Pool(0x7f73f53147d0, size=16, th='mitogen.service.Pool.7f73f53147d0.worker-12'): channel or latch closed, exitting: None
[pid 2014] 16:00:51.288404 D mitogen: Waker(Broker(0x7f73f530af50) rfd=9, wfd=11).on_disconnect()
[pid 2014] 16:00:51.288691 D mitogen: <mitogen.unix.Listener object at 0x7f73f5314450>.on_disconnect()
2018-08-01 16:00:51 [mini@hq ansiblecontrol]$ 

Any suggestions, please?

ISSUE TYPE
  • Bug Report
ANSIBLE VERSION
ansible 2.0.0.2
  config file = 
  configured module search path = Default w/o overrides
CONFIGURATION

No changes

OS / ENVIRONMENT

OS X El Capitan Version 10.11.3

SUMMARY

I can connect to my Raspberry Pi over ssh through an ethernet cable via "ssh [email protected]", but running Ansible with this IP address as a host fails.

I have successfully configured this Raspberry Pi with ansible through wifi (using the wifi IP address), but now, trying to use ansible via the direct ethernet connection, I get the cryptic error message:

`TASK [setup] *******************************************************************
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}`

Because I _can_ successfully connect to this pi using that IP address through ssh from terminal, I am positing that this is a bug in Ansible.

STEPS TO REPRODUCE

I run this command to run the role:

ansible-playbook ansible-pi/playbook.yml -i ansible-pi/hosts --ask-pass --sudo -c paramiko -vvvv

I also tried

ansible-playbook ansible-pi/playbook.yml -i ansible-pi/hosts --ask-pass --sudo -vvvv

which led to the same error.

hosts file

[pis]
169.254.0.2

playbook


---

- name: Ansible Playbook for configuring brand new Raspberry Pi

  hosts: pis
  roles:
    - pi
  remote_user: pi
  sudo: yes

I assume that the role is actually unimportant because ansible is failing at the ssh connection step.

EXPECTED RESULTS

I expect ansible to connect to pi and run the role (I have successfully done this via connecting over an IP address through wifi)

ACTUAL RESULTS
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
No config file found; using defaults
  passwd = fallback_getpass(prompt, stream)
Warning: Password input may be echoed.
SSH password: raspberry

[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and 
make sure become_method is 'sudo' (default). This feature will be removed in a 
future release. Deprecation warnings can be disabled by setting 
deprecation_warnings=False in ansible.cfg.
Loaded callback default of type stdout, v2.0
1 plays in ansible-pi/playbook.yml

PLAY [Ansible Playbook for configuring brand new Raspberry Pi] *****************

TASK [setup] *******************************************************************
<169.254.0.2> ESTABLISH CONNECTION FOR USER: pi on PORT 22 TO 169.254.0.2
CONNECTION: pid 2118 waiting for lock on 10
CONNECTION: pid 2118 acquired lock on 10
fatal: [169.254.0.2]: UNREACHABLE! => {"changed": false, "msg": "ERROR! (25, 'Inappropriate ioctl for device')", "unreachable": true}

PLAY RECAP *********************************************************************
169.254.0.2                : ok=0    changed=0    unreachable=1    failed=0   

On OpenSSH_7.9p1, OpenSSL 1.1.1b, and ansible 2.7.8, I hit the UNREACHABLE error with a msg showing that authentication succeeded (Authenticated to) but mentioning a broken pipe (debug3: mux_client_read_packet: read header failed: Broken pipe). Setting -o ControlMaster=no in ssh_args worked for me without having to use paramiko or drop ControlPersist.
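A minimal sketch of that workaround as an ansible.cfg fragment (assuming a project-level config; this trades away connection multiplexing for reliability):

```ini
[ssh_connection]
# Disable ssh connection multiplexing entirely
ssh_args = -o ControlMaster=no
```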
