Question ID: 53862900
Error connecting: Error while fetching server API version: Ansible
I'm very new to Ansible. I ran the following Ansible playbook and got these errors:

```yaml
---
- hosts: webservers
  remote_user: linx
  become: yes
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py

    - name: Build Docker image from Dockerfile
      docker_image:
        name: web
        path: docker
        state: build

    - name: Running the container
      docker_container:
        image: web:latest
        path: docker
        state: running

    - name: Check if container is running
      shell: docker ps
```

Error message:

```
FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(2, 'No such file or directory'))"}
```

And here is my folder structure:

```
.
├── ansible.cfg
├── docker
│   └── Dockerfile
├── hosts
├── main.retry
├── main.yml
```

I'm confused: the docker folder is already on my local machine, so I don't know why I get this error message.
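The `('Connection aborted.', error(2, 'No such file or directory'))` part of that message usually means the Docker client library on the managed host cannot find the Docker daemon socket, i.e. Docker itself is not installed or not running on the remote machine (the `docker_image`/`docker_container` modules talk to the daemon on the target, not to your local folder). A minimal sketch of the usual prerequisite tasks, assuming a Debian/Ubuntu target where the package is named `docker.io` (an assumption, not confirmed by the post):

```yaml
- name: Ensure the Docker engine is installed (package name is an assumption)
  apt:
    name: docker.io
    state: present

- name: Ensure the Docker daemon is running, so /var/run/docker.sock exists
  service:
    name: docker
    state: started
    enabled: yes
```

These would go before the `docker_image` task in the play above.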
Tags: docker, ansible | Score: 14 | Views: 35,553 | Answers: 2
https://stackoverflow.com/questions/53862900/error-connecting-error-while-fetching-server-api-version-ansible
Question ID: 42817789
How to import a realm in Keycloak and exit
I have followed the Keycloak admin guide to export and import realms using standalone.sh. It does work, but it starts the server and does not exit. This is a problem for me because I want to automate this process by executing an Ansible playbook, and I can't because the task never ends. I found a workaround in Ansible using async and wait_for, but I was hoping for a better way that does not require using the Admin REST API.

```yaml
- name: Stop keycloak
  service:
    name: keycloak
    state: stopped

- name: Import realm into Keycloak
  shell: "{{keycloak_home}}/bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=master -Dkeycloak.migration.usersExportStrategy=SAME_FILE -Dkeycloak.migration.realmName=master"
  async: 30
  poll: 0

- name: Wait for Keycloak to be started and listen on port 8080
  wait_for:
    host: 0.0.0.0
    port: 8080
    delay: 10

- name: Restart keycloak
  service:
    name: keycloak
    state: restarted
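For the legacy WildFly-based distribution used above, the async/wait_for pattern is essentially the accepted workaround. Worth noting as a hedged alternative: the newer Quarkus-based Keycloak distributions (17+) ship a dedicated `kc.sh import` command that performs the import and then exits, which fits an Ansible task naturally. A sketch, with the import directory path assumed for illustration:

```yaml
- name: Import realm and exit (Quarkus-based Keycloak 17+ only; dir is an assumption)
  command: "{{ keycloak_home }}/bin/kc.sh import --dir {{ keycloak_home }}/data/import"
```

This only applies if upgrading off the standalone.sh distribution is an option.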
Tags: ansible, keycloak | Score: 14 | Views: 29,149 | Answers: 1
https://stackoverflow.com/questions/42817789/how-to-import-a-realm-in-keycloak-and-exit
Question ID: 64147785
AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute
I'm using Ansible and I'm getting this error after running my playbook. This is my playbook:

```yaml
---
- name: Set 192.168.122.4 Hostname
  hosts: 192.168.122.4
  gather_facts: false
  become: true
  tasks:
    - name: Name 4
      hostname:
        name: ansible1.example.com

- name: Set 192.168.122.5 Hostname
  hosts: 192.168.122.5
  gather_facts: false
  become: true
  tasks:
    - name: Name 5
      hostname:
        name: ansible2.example.com

- name: Manage Hosts File
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Deploy Hosts Template
      template:
        src: hosts.j2
        dest: /etc/hosts
```

This is my template:

```jinja
# Managed by Ansible
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
{% endfor %}
```

I get this error:

```
AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_default_ipv4'
```

I have tried using the debug module to see if I get back a response from ansible_default_ipv4, and I do get a list there including the address, which is what I want. So it's not the case that the remote servers don't have that set up; they do, the info is there, but how do I retrieve it so I can populate this /etc/hosts file? I've also found out that it's not about ansible_default_ipv4 per se, because it gives the same error for other objects. So I'm guessing it's something to do with these magic variables in the template. My reference is "RHCE 8 - Ansible RHCE - Using Jinja Templates to Populate Host Files" by theurbanpenguin on YouTube.
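A template like this fails as soon as `groups['all']` contains any host for which no facts are in memory, e.g. when the play runs with `--limit`, or when one host in the inventory was unreachable during fact gathering, because `hostvars[that_host]` then has no `ansible_default_ipv4` attribute. One hedged way to make the template survive that (a defensive sketch, not necessarily the root-cause fix for this inventory) is to guard the loop on the fact being defined:

```jinja
# Managed by Ansible
127.0.0.1 localhost
{% for host in groups['all'] if hostvars[host]['ansible_default_ipv4'] is defined %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
{% endfor %}
```

Hosts without gathered facts are then silently skipped instead of aborting the template.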
Tags: linux, ansible | Score: 14 | Views: 36,467 | Answers: 3
https://stackoverflow.com/questions/64147785/ansibleundefinedvariable-ansible-vars-hostvars-hostvarsvars-object-has-no-att
Question ID: 38638896
ssl: auth method ssl requires a password
While trying to connect to a Windows VM through Ansible I get this issue:

```
TASK [setup] *******************************************************************
<10.xx.xx.xx> ESTABLISH WINRM CONNECTION FOR USER: winad-admin on PORT 5986 TO 10.xx.xx.xx
fatal: [10.xx.xx.xx]: UNREACHABLE! => {"changed": false, "msg": "ssl: auth method ssl requires a password", "unreachable": true}
```

Inventory file (hosts):

```ini
[win_servers]
10.xx.xx.xx

[nonprod1_ad_servers:vars]
ansible_user=administrator
ansible_pass=Horse@1234
ansible_port=5986
ansible_connection=winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation=ignore
```

And the PowerShell script used to enable WinRM on the Windows machine is as follows:

```powershell
# Configure a Windows host for remote management with Ansible
# -----------------------------------------------------------
#
# This script checks the current WinRM/PSRemoting configuration and makes the
# necessary changes to allow Ansible to connect, authenticate and execute
# PowerShell commands.
#
# Set $VerbosePreference = "Continue" before running the script in order to
# see the output messages.
# Set $SkipNetworkProfileCheck to skip the network profile check. Without
# specifying this the script will only run if the device's interfaces are in
# DOMAIN or PRIVATE zones. Provide this switch if you want to enable winrm on
# a device with an interface in PUBLIC zone.
#
# Set $ForceNewSSLCert if the system has been syspreped and a new SSL Cert
# must be forced on the WinRM Listener when re-running this script. This
# is necessary when a new SID and CN name is created.
```
```powershell
Param (
    [string]$SubjectName = $env:COMPUTERNAME,
    [int]$CertValidityDays = 365,
    [switch]$SkipNetworkProfileCheck = $true,
    $CreateSelfSignedCert = $true,
    [switch]$ForceNewSSLCert = $true,
    $VerbosePreference = "Continue"
)

Function New-LegacySelfSignedCert
{
    Param (
        [string]$SubjectName,
        [int]$ValidDays = 365
    )

    $name = New-Object -COM "X509Enrollment.CX500DistinguishedName.1"
    $name.Encode("CN=$SubjectName", 0)

    $key = New-Object -COM "X509Enrollment.CX509PrivateKey.1"
    $key.ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
    $key.KeySpec = 1
    $key.Length = 1024
    $key.SecurityDescriptor = "D:PAI(A;;0xd01f01ff;;;SY)(A;;0xd01f01ff;;;BA)(A;;0x80120089;;;NS)"
    $key.MachineContext = 1
    $key.Create()

    $serverauthoid = New-Object -COM "X509Enrollment.CObjectId.1"
    $serverauthoid.InitializeFromValue("1.3.6.1.5.5.7.3.1")
    $ekuoids = New-Object -COM "X509Enrollment.CObjectIds.1"
    $ekuoids.Add($serverauthoid)
    $ekuext = New-Object -COM "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
    $ekuext.InitializeEncode($ekuoids)

    $cert = New-Object -COM "X509Enrollment.CX509CertificateRequestCertificate.1"
    $cert.InitializeFromPrivateKey(2, $key, "")
    $cert.Subject = $name
    $cert.Issuer = $cert.Subject
    $cert.NotBefore = (Get-Date).AddDays(-1)
    $cert.NotAfter = $cert.NotBefore.AddDays($ValidDays)
    $cert.X509Extensions.Add($ekuext)
    $cert.Encode()

    $enrollment = New-Object -COM "X509Enrollment.CX509Enrollment.1"
    $enrollment.InitializeFromRequest($cert)
    $certdata = $enrollment.CreateRequest(0)
    $enrollment.InstallResponse(2, $certdata, 0, "")

    # Return the thumbprint of the last installed certificate;
    # this is needed for the new HTTPS WinRM listener we're
    # going to create further down.
    Get-ChildItem "Cert:\LocalMachine\my" | Sort-Object NotBefore -Descending | Select -First 1 | Select -Expand Thumbprint
}

# Setup error handling.
Trap
{
    $_
    Exit 1
}
$ErrorActionPreference = "Stop"

# Detect PowerShell version.
```
```powershell
If ($PSVersionTable.PSVersion.Major -lt 3)
{
    Throw "PowerShell version 3 or higher is required."
}

# Find and start the WinRM service.
Write-Verbose "Verifying WinRM service."
If (!(Get-Service "WinRM"))
{
    Throw "Unable to find the WinRM service."
}
ElseIf ((Get-Service "WinRM").Status -ne "Running")
{
    Write-Verbose "Starting WinRM service."
    Start-Service -Name "WinRM" -ErrorAction Stop
}

# WinRM should be running; check that we have a PS session config.
If (!(Get-PSSessionConfiguration -Verbose:$false) -or (!(Get-ChildItem WSMan:\localhost\Listener)))
{
    if ($SkipNetworkProfileCheck)
    {
        Write-Verbose "Enabling PS Remoting without checking Network profile."
        Enable-PSRemoting -SkipNetworkProfileCheck -Force -ErrorAction Stop
    }
    else
    {
        Write-Verbose "Enabling PS Remoting"
        Enable-PSRemoting -Force -ErrorAction Stop
    }
}
Else
{
    Write-Verbose "PS Remoting is already enabled."
}

# Make sure there is a SSL listener.
$listeners = Get-ChildItem WSMan:\localhost\Listener
If (!($listeners | Where {$_.Keys -like "TRANSPORT=HTTPS"}))
{
    # HTTPS-based endpoint does not exist.
    If (Get-Command "New-SelfSignedCertificate" -ErrorAction SilentlyContinue)
    {
        $cert = New-SelfSignedCertificate -DnsName $SubjectName -CertStoreLocation "Cert:\LocalMachine\My"
        $thumbprint = $cert.Thumbprint
        Write-Host "Self-signed SSL certificate generated; thumbprint: $thumbprint"
    }
    Else
    {
        $thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName
        Write-Host "(Legacy) Self-signed SSL certificate generated; thumbprint: $thumbprint"
    }

    # Create the hashtables of settings to be used.
    $valueset = @{}
    $valueset.Add('Hostname', $SubjectName)
    $valueset.Add('CertificateThumbprint', $thumbprint)

    $selectorset = @{}
    $selectorset.Add('Transport', 'HTTPS')
    $selectorset.Add('Address', '*')

    Write-Verbose "Enabling SSL listener."
    New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
}
Else
{
    Write-Verbose "SSL listener is already active."
```
```powershell
    # Force a new SSL cert on Listener if the $ForceNewSSLCert
    if ($ForceNewSSLCert)
    {
        # Create the new cert.
        If (Get-Command "New-SelfSignedCertificate" -ErrorAction SilentlyContinue)
        {
            $cert = New-SelfSignedCertificate -DnsName $SubjectName -CertStoreLocation "Cert:\LocalMachine\My"
            $thumbprint = $cert.Thumbprint
            Write-Host "Self-signed SSL certificate generated; thumbprint: $thumbprint"
        }
        Else
        {
            $thumbprint = New-LegacySelfSignedCert -SubjectName $SubjectName
            Write-Host "(Legacy) Self-signed SSL certificate generated; thumbprint: $thumbprint"
        }

        $valueset = @{}
        $valueset.Add('Hostname', $SubjectName)
        $valueset.Add('CertificateThumbprint', $thumbprint)

        # Delete the listener for SSL
        $selectorset = @{}
        $selectorset.Add('Transport', 'HTTPS')
        $selectorset.Add('Address', '*')
        Remove-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset

        # Add new Listener with new SSL cert
        New-WSManInstance -ResourceURI 'winrm/config/Listener' -SelectorSet $selectorset -ValueSet $valueset
    }
}

# Check for basic authentication.
$basicAuthSetting = Get-ChildItem WSMan:\localhost\Service\Auth | Where {$_.Name -eq "Basic"}
If (($basicAuthSetting.Value) -eq $false)
{
    Write-Verbose "Enabling basic auth support."
    Set-Item -Path "WSMan:\localhost\Service\Auth\Basic" -Value $true
}
Else
{
    Write-Verbose "Basic auth is already enabled."
}

# Configure firewall to allow WinRM HTTPS connections.
$fwtest1 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS"
$fwtest2 = netsh advfirewall firewall show rule name="Allow WinRM HTTPS" profile=any
If ($fwtest1.count -lt 5)
{
    Write-Verbose "Adding firewall rule to allow WinRM HTTPS."
    netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow
}
ElseIf (($fwtest1.count -ge 5) -and ($fwtest2.count -lt 5))
{
    Write-Verbose "Updating firewall rule to allow WinRM HTTPS for any profile."
```
```powershell
    netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any
}
Else
{
    Write-Verbose "Firewall rule already exists to allow WinRM HTTPS."
}

# Test a remoting connection to localhost, which should work.
$httpResult = Invoke-Command -ComputerName "localhost" -ScriptBlock {$env:COMPUTERNAME} -ErrorVariable httpError -ErrorAction SilentlyContinue
$httpsOptions = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$httpsResult = New-PSSession -UseSSL -ComputerName "localhost" -SessionOption $httpsOptions -ErrorVariable httpsError -ErrorAction SilentlyContinue

If ($httpResult -and $httpsResult)
{
    Write-Verbose "HTTP: Enabled | HTTPS: Enabled"
}
ElseIf ($httpsResult -and !$httpResult)
{
    Write-Verbose "HTTP: Disabled | HTTPS: Enabled"
}
ElseIf ($httpResult -and !$httpsResult)
{
    Write-Verbose "HTTP: Enabled | HTTPS: Disabled"
}
Else
{
    Throw "Unable to establish an HTTP or HTTPS remoting session."
}
Write-Verbose "PS Remoting has been successfully configured for Ansible."
```

Has anyone faced this issue, and can you let me know how it can be resolved? I am able to connect to the port through telnet:

```
# telnet 10.xx.xx.xx 5986
Trying 10.xx.xx.xx...
Connected to 10.xx.xx.xx.
Escape character is '^]'.
```

I tried this on another Ansible server against another web server in a 172.xx.xx.xx network and it worked fine, which does not make sense to me.
I know this error is related to this line of code: [URL]

winrm config:

```
PS C:\Users\winserver> winrm get winrm/config
Config
    MaxEnvelopeSizekb = 500
    MaxTimeoutms = 60000
    MaxBatchItems = 32000
    MaxProviderRequests = 4294967295
    Client
        NetworkDelayms = 5000
        URLPrefix = wsman
        AllowUnencrypted = false
        Auth
            Basic = true
            Digest = true
            Kerberos = true
            Negotiate = true
            Certificate = true
            CredSSP = false
        DefaultPorts
            HTTP = 5985
            HTTPS = 5986
        TrustedHosts
    Service
        RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
        MaxConcurrentOperations = 4294967295
        MaxConcurrentOperationsPerUser = 1500
        EnumerationTimeoutms = 240000
        MaxConnections = 300
        MaxPacketRetrievalTimeSeconds = 120
        AllowUnencrypted = false
        Auth
            Basic = true
            Kerberos = true
            Negotiate = true
            Certificate = false
            CredSSP = false
            CbtHardeningLevel = Relaxed
        DefaultPorts
            HTTP = 5985
            HTTPS = 5986
        IPv4Filter = *
        IPv6Filter = *
        EnableCompatibilityHttpListener = false
        EnableCompatibilityHttpsListener = false
        CertificateThumbprint
        AllowRemoteAccess = true
    Winrs
        AllowRemoteShellAccess = true
        IdleTimeout = 7200000
        MaxConcurrentUsers = 10
        MaxShellRunTime = 2147483647
        MaxProcessesPerShell = 25
        MaxMemoryPerShellMB = 1024
        MaxShellsPerUser = 30
```

But how does this work on one network and not on the other with the same configuration and settings?
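Two details in the inventory above are worth checking (a hedged observation, not a confirmed diagnosis for this network difference): the connection variables are declared under `[nonprod1_ad_servers:vars]` while the host sits in `[win_servers]`, so they may never apply to it, and Ansible's WinRM connection reads the password from `ansible_password` (or the older `ansible_ssh_pass`), not `ansible_pass`, which would directly produce "auth method ssl requires a password". A sketch of the adjusted inventory:

```ini
[win_servers]
10.xx.xx.xx

[win_servers:vars]
ansible_user=administrator
ansible_password=Horse@1234
ansible_port=5986
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
```

If the other working setup happened to define its vars on a group the host actually belonged to, that could also explain the per-network discrepancy.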
Tags: ansible, winrm | Score: 14 | Views: 17,196 | Answers: 2
https://stackoverflow.com/questions/38638896/ssl-auth-method-ssl-requires-a-password
Question ID: 38660246
Ansible delegate_to how to set user that is used to connect to target?
I have an Ansible (2.1.1) inventory:

```ini
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
```

I'm using SSH without ControlMaster. I have a playbook that has a synchronize command:

```yaml
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
```

The command prompts for the password of the wrong user:

```
local-mac-user@my-staging-host's password:
```

So instead of using `ansible_user` defined in the inventory or `remote_user` defined in the task to connect to the target (the hosts specified in the play), it uses the user we connected to the delegate-to box as to connect to the target hosts. What am I doing wrong? How do I fix this?

EDIT: It works in 2.0.2, doesn't work in 2.1.x.
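Because `synchronize` ultimately shells out to rsync-over-ssh from the delegated machine, one hedged workaround (a sketch that sidesteps the variable-resolution change rather than fixing it) is to pin the remote user in the SSH client config on the machine rsync runs from, so the `user@host` Ansible constructs no longer matters for authentication:

```
# ~/.ssh/config on the machine that runs rsync (workaround sketch)
Host my.staging.host
    User stager
```

Combining this with key-based authentication also avoids the interactive password prompt, which blocks unattended playbook runs regardless of which user is chosen.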
Tags: ansible, ansible-2.x | Score: 14 | Views: 26,491 | Answers: 3
https://stackoverflow.com/questions/38660246/ansible-delegate-to-how-to-set-user-that-is-used-to-connect-to-target
Question ID: 43560657
Edit current user's shell with Ansible
I'm trying to push my dotfiles and some personal configuration files to a server (I'm not root or a sudoer). Ansible connects as my user in order to edit files in my home folder. I'd like to set my default shell to /usr/bin/fish. I am not allowed to edit /etc/passwd, so:

```yaml
user:
  name: shaka
  shell: /usr/bin/fish
```

won't run. I also checked the chsh command, but the executable prompts for my password. How can I change my shell on such machines? (Debian 8, Ubuntu 16, openSUSE)
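Since chsh only asks for the invoking user's own password, one hedged way to automate it without root is Ansible's `expect` module, which answers the prompt for you. A sketch: it assumes the `pexpect` Python library is available on the target and that the login password is supplied in a variable named `my_login_password` (a hypothetical name, ideally vaulted):

```yaml
- name: Change my own login shell via chsh (sketch; needs pexpect on the target)
  expect:
    command: chsh -s /usr/bin/fish
    responses:
      (?i)password: "{{ my_login_password }}"
  no_log: true
```

`no_log` keeps the password out of the task output; the case-insensitive regex covers "Password:" prompts that differ between distributions.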
Tags: linux, bash, shell, ansible, system-administration | Score: 13 | Views: 25,863 | Answers: 5
https://stackoverflow.com/questions/43560657/edit-current-users-shell-with-ansible
Question ID: 52815285
Ansible looping through files
Prior to Ansible 2.5, the syntax for loops used to be `with_x`. Starting at 2.5, `loop` is favored and `with_x` has basically disappeared from the docs. The docs do mention examples of how to replace `with_x` with `loop`, but I'm clueless as to how we're now supposed to loop through a directory of files. Let's say I need to upload all the files within a given dir; I used to use `with_fileglob`:

```yaml
- name: Install local checks
  copy:
    src: "{{ item }}"
    dest: /etc/sensu/plugins/
    owner: sensu
    group: sensu
    mode: 0744
  with_fileglob:
    - plugins/*
```

So what's the modern equivalent? Is it even possible? I know I can still use `with_fileglob`, but since I'm writing new roles, I'd rather make them future-proof.
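For reference, every `with_<lookup>` form has a `loop` equivalent built from the lookup plugin of the same name; for fileglob the documented pattern uses `query()` (available since 2.5, always returns a list):

```yaml
- name: Install local checks
  copy:
    src: "{{ item }}"
    dest: /etc/sensu/plugins/
    owner: sensu
    group: sensu
    mode: 0744
  loop: "{{ query('fileglob', 'plugins/*') }}"
```

`lookup('fileglob', 'plugins/*', wantlist=True)` is an equivalent spelling of the same loop source.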
Ansible looping through files Prior to Ansible 2.5, the syntax for loops used to be with_x . Starting at 2.5, loop is favored and with_x basically disappeared from the docs. Still, the docs mention exemples of how to replace with_x with loop . But I'm clueless as to how we're now supposed to loop through a directory of files. Let's say I need to upload all the files within a given dir, I used to use with_fileglob . - name: Install local checks copy: src: "{{ item }}" dest: /etc/sensu/plugins/ owner: sensu group: sensu mode: 0744 with_fileglob: - plugins/* So what's the modern equivalent? Is it even possible? I know I still can use with_fileglob but as I'm writing new roles, I'd better have them future-proof.
loops, plugins, ansible, ansible-2.x
13
37,204
2
https://stackoverflow.com/questions/52815285/ansible-looping-through-files
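The lookup plugins behind with_* remain available through loop; a sketch of the same fileglob task in the post-2.5 style:

```yaml
- name: Install local checks
  copy:
    src: "{{ item }}"
    dest: /etc/sensu/plugins/
    owner: sensu
    group: sensu
    mode: 0744
  # query() always returns a list, which is what loop expects
  loop: "{{ query('fileglob', 'plugins/*') }}"
```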
34,903,026
Update Ansible 1.9.4 to Ansible 2.0
I uninstalled ansible 1.9.4 and installed version 2.0.2 with sudo apt-get install ansible . But when I execute: ikerlan$ ansible --version ansible 1.9.4 I have uninstalled and reinstalled using the ansible ppa; during the install I can see this: Preparing to unpack .../ansible_2.0.0.2-1ppa~trusty_all.deb ... Unpacking ansible (2.0.0.2-1ppa~trusty) ... Processing triggers for man-db (2.6.7.1-1ubuntu1) ... Configurando ansible (2.0.0.2-1ppa~trusty) ... Processing triggers for python-support (1.0.15) ... But if I check the ansible version: ikerlan@ikerlan-docker:~$ ansible --version ansible 1.9.4 configured module search path = None If I run the following: ikerlan@ikerlan-docker:~$ sudo dpkg -l | grep ansible ii ansible 2.0.0.2-1ppa~trusty all A radically simple IT automation platform Any help? Thanks
Update Ansible 1.9.4 to Ansible 2.0 I uninstalled ansible 1.9.4 and installed version 2.0.2 with sudo apt-get install ansible . But when I execute: ikerlan$ ansible --version ansible 1.9.4 I have uninstalled and reinstalled using the ansible ppa; during the install I can see this: Preparing to unpack .../ansible_2.0.0.2-1ppa~trusty_all.deb ... Unpacking ansible (2.0.0.2-1ppa~trusty) ... Processing triggers for man-db (2.6.7.1-1ubuntu1) ... Configurando ansible (2.0.0.2-1ppa~trusty) ... Processing triggers for python-support (1.0.15) ... But if I check the ansible version: ikerlan@ikerlan-docker:~$ ansible --version ansible 1.9.4 configured module search path = None If I run the following: ikerlan@ikerlan-docker:~$ sudo dpkg -l | grep ansible ii ansible 2.0.0.2-1ppa~trusty all A radically simple IT automation platform Any help? Thanks
ansible, ansible-2.x
13
34,046
4
https://stackoverflow.com/questions/34903026/update-ansible-1-9-4-to-ansible-2-0
30,812,453
How to install ansible on amazon aws?
Having trouble running Ansible on the latest version of amazon linux. [root@ip-10-0-0-11 ec2-user]# yum install ansible --enablerepo=epel [root@ip-10-0-0-11 ec2-user]# ansible-playbook Traceback (most recent call last): File "/usr/bin/ansible-playbook", line 44, in <module> import ansible.playbook ImportError: No module named ansible.playbook Using AMI ID: ami-a10897d6. Any ideas?
How to install ansible on amazon aws? Having trouble running Ansible on the latest version of amazon linux. [root@ip-10-0-0-11 ec2-user]# yum install ansible --enablerepo=epel [root@ip-10-0-0-11 ec2-user]# ansible-playbook Traceback (most recent call last): File "/usr/bin/ansible-playbook", line 44, in <module> import ansible.playbook ImportError: No module named ansible.playbook Using AMI ID: ami-a10897d6. Any ideas?
ansible
13
34,479
8
https://stackoverflow.com/questions/30812453/how-to-install-ansible-on-amazon-aws
29,623,062
Is there a way to have both encrypted and nonencrypted host vars?
If I encrypt host_vars/* files with ansible-vault , I don't seem to have a chance to have nonencrypted host vars other than those residing in the inventory file. Am I missing something?
Is there a way to have both encrypted and nonencrypted host vars? If I encrypt host_vars/* files with ansible-vault , I don't seem to have a chance to have nonencrypted host vars other than those residing in the inventory file. Am I missing something?
ansible
13
8,124
4
https://stackoverflow.com/questions/29623062/is-there-a-way-to-have-both-encrypted-and-nonencrypted-host-vars
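Yes - host_vars/<host> may be a directory rather than a single file, so encrypted and plain variables can sit side by side. A sketch of the layout (the file names are arbitrary; Ansible loads every file in the directory and decrypts the vaulted one when a vault password is supplied):

```text
inventory/
└── host_vars/
    └── myhost/
        β”œβ”€β”€ vars.yml    # plain-text variables
        └── vault.yml   # ansible-vault encrypted variables
```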
29,013,749
How do I make Ansible ignore failed tarball extraction?
I have a command in an ansible playbook: - name: extract the tarball command: tar --ignore-command-error -xvkf release.tar It is expected that some files won't be extracted as they exist already ( -k flag). However, this results in ansible stopping the overall playbook as there is an error code from the tar extraction. How can I work around this? As you can see I have tried --ignore-command-error to no avail.
How do I make Ansible ignore failed tarball extraction? I have a command in an ansible playbook: - name: extract the tarball command: tar --ignore-command-error -xvkf release.tar It is expected that some files won't be extracted as they exist already ( -k flag). However, this results in ansible stopping the overall playbook as there is an error code from the tar extraction. How can I work around this? As you can see I have tried --ignore-command-error to no avail.
ansible
13
16,940
2
https://stackoverflow.com/questions/29013749/how-do-i-make-ansible-ignore-failed-tarball-extraction
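tar has no flag that suppresses its own exit code; the usual fix is to let Ansible decide what counts as failure. A sketch using failed_when - the 'File exists' match is an assumption about GNU tar's stderr wording and should be checked against the actual output:

```yaml
- name: extract the tarball
  command: tar -xvkf release.tar
  register: tar_result
  # -k makes tar exit non-zero when it skips existing files;
  # treat only other errors as real failures
  failed_when:
    - tar_result.rc != 0
    - "'File exists' not in tar_result.stderr"
```

A blunter alternative is simply adding ignore_errors: yes to the task, at the cost of masking genuine extraction failures.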
33,912,128
Update ansible (ubuntu server 14)
I installed ansible in Ubunru server 14 using this tutorial [URL] After I checked version of ansible: $ ansible --version ansible 1.5.5 But I need 1.9. How to update it?
Update ansible (ubuntu server 14) I installed ansible in Ubunru server 14 using this tutorial [URL] After I checked version of ansible: $ ansible --version ansible 1.5.5 But I need 1.9. How to update it?
ubuntu-14.04, ansible
13
24,188
2
https://stackoverflow.com/questions/33912128/update-ansible-ubuntu-server-14
31,432,367
Ansible: insert a single word on an existing line in a file
I have to use Ansible modules in order to edit the /etc/ssh/sshd_config file - every time I create a new user I want to append it to these two lines: AllowUsers root osadmin <new_user> AllowGroups root staff <new_group> At the moment I'm using the shell module to execute a sed command, but would like to use lineinfile if possible - shell: "sed -i '/^Allow/ s/$/ {{ user_name }}/' /etc/ssh/sshd_config" Any suggestions would be sincerely appreciated.
Ansible: insert a single word on an existing line in a file I have to use Ansible modules in order to edit the /etc/ssh/sshd_config file - every time I create a new user I want to append it to these two lines: AllowUsers root osadmin <new_user> AllowGroups root staff <new_group> At the moment I'm using the shell module to execute a sed command, but would like to use lineinfile if possible - shell: "sed -i '/^Allow/ s/$/ {{ user_name }}/' /etc/ssh/sshd_config" Any suggestions would be sincerely appreciated.
ansible, sshd
13
40,874
5
https://stackoverflow.com/questions/31432367/ansible-insert-a-single-word-on-an-existing-line-in-a-file
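lineinfile can do this with backrefs. A sketch for the AllowUsers line that stays idempotent via a negative lookahead (lineinfile uses Python regex syntax, which supports it); the restart handler name is a placeholder:

```yaml
- name: append user to the AllowUsers line
  lineinfile:
    path: /etc/ssh/sshd_config
    # match the line only when the user is NOT already on it,
    # then re-emit it with the user appended
    regexp: '^(AllowUsers(?!.*\b{{ user_name }}\b).*)$'
    line: '\1 {{ user_name }}'
    backrefs: yes
  notify: restart sshd   # assumed handler
```

With backrefs: yes the file is left untouched when the regexp does not match, i.e. when the user is already listed; a second task would cover AllowGroups the same way.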
32,048,021
yum + what is the message - No package ansible available
I am trying to install the ansible tool on my linux red-hat version - 5.7 yum install ansible Loaded plugins: security Setting up Install Process No package ansible available. Nothing to do ansible isn't installed on my linux machine - for sure! So why do I get - No package ansible available - and how do I resolve this? The view from yum.repos.d is: /etc/yum.repos.d]# ls rhel-debuginfo.repo rhel-source.repo service-cd-repo.repo stp-default- repo.repo I have verified name resolution as follows: ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=63.4 ms Update - trying to install the epel-release package: yum install epel-release Loaded plugins: security service-cd | 951 B 00:00 swp-default | 951 B 00:00 Setting up Install Process No package epel-release available. Nothing to do second update: wget --no-check-certificate [URL] release-latest-5.noarch.rpm --2015-08-17 14:54:20-- [URL] release-latest-5.noarch.rpm Resolving dl.fedoraproject.org... 209.132.181.26, 209.132.181.27, 209.132.181.25, ... Connecting to dl.fedoraproject.org|209.132.181.26|:443... connected. WARNING: cannot verify dl.fedoraproject.org's certificate, issued by /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 Hig: Unable to locally verify the issuer's authority. HTTP request sent, awaiting response... 200 OK Length: 12232 (12K) [application/x-rpm] Saving to: epel-release-latest-5.noarch.rpm' 100% [==========================================================================================>] 12,232 54.0K/s in 0.2s 2015-08-17 14:54:22 (54.0 KB/s) - `epel-release-latest-5.noarch.rpm.1' saved [12232/12232] rpm -ivh epel-release-latest-5.noarch.rpm warning: epel-release-latest-5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6 Preparing... ########################################### [100%] yum repolist Loaded plugins: security epel | 3.7 kB 00:00 service-cd | 951 B 00:00 swp-default | 951 B 00:00 repo id repo name status epel Extra Packages for Enterprise Linux 5 - i386 5,411 service-cd RHEL5 service-cd repository 155 swp-default RHEL5 yum repository 239 repolist: 5,805 yum install ansible Loaded plugins: security Setting up Install Process No package ansible available. Nothing to do
yum + what is the message - No package ansible available I am trying to install the ansible tool on my linux red-hat version - 5.7 yum install ansible Loaded plugins: security Setting up Install Process No package ansible available. Nothing to do ansible isn't installed on my linux machine - for sure! So why do I get - No package ansible available - and how do I resolve this? The view from yum.repos.d is: /etc/yum.repos.d]# ls rhel-debuginfo.repo rhel-source.repo service-cd-repo.repo stp-default- repo.repo I have verified name resolution as follows: ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=63.4 ms Update - trying to install the epel-release package: yum install epel-release Loaded plugins: security service-cd | 951 B 00:00 swp-default | 951 B 00:00 Setting up Install Process No package epel-release available. Nothing to do second update: wget --no-check-certificate [URL] release-latest-5.noarch.rpm --2015-08-17 14:54:20-- [URL] release-latest-5.noarch.rpm Resolving dl.fedoraproject.org... 209.132.181.26, 209.132.181.27, 209.132.181.25, ... Connecting to dl.fedoraproject.org|209.132.181.26|:443... connected. WARNING: cannot verify dl.fedoraproject.org's certificate, issued by /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 Hig: Unable to locally verify the issuer's authority. HTTP request sent, awaiting response... 200 OK Length: 12232 (12K) [application/x-rpm] Saving to: epel-release-latest-5.noarch.rpm' 100% [==========================================================================================>] 12,232 54.0K/s in 0.2s 2015-08-17 14:54:22 (54.0 KB/s) - `epel-release-latest-5.noarch.rpm.1' saved [12232/12232] rpm -ivh epel-release-latest-5.noarch.rpm warning: epel-release-latest-5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6 Preparing... ########################################### [100%] yum repolist Loaded plugins: security epel | 3.7 kB 00:00 service-cd | 951 B 00:00 swp-default | 951 B 00:00 repo id repo name status epel Extra Packages for Enterprise Linux 5 - i386 5,411 service-cd RHEL5 service-cd repository 155 swp-default RHEL5 yum repository 239 repolist: 5,805 yum install ansible Loaded plugins: security Setting up Install Process No package ansible available. Nothing to do
linux, ansible, yum
13
45,507
2
https://stackoverflow.com/questions/32048021/yum-what-is-the-message-no-package-ansible-available
28,188,508
Insert data into mysql tables using ansible
There should be some decent way to work with mysql databases using ansible like inserting data into tables or any command to run on mysql db. I know there are modules to create db, manage replications, user and variables: mysql_db - Add or remove MySQL databases from a remote host. mysql_replication (E) - Manage MySQL replication mysql_user - Adds or removes a user from a MySQL database. mysql_variables - Manage MySQL global variables My use case scenario is, I've installed mysql-server on ubuntu and created the database successfully and now I have to insert data into the tables and wondering if there is a way to achieve it via ansible.
Insert data into mysql tables using ansible There should be some decent way to work with mysql databases using ansible like inserting data into tables or any command to run on mysql db. I know there are modules to create db, manage replications, user and variables: mysql_db - Add or remove MySQL databases from a remote host. mysql_replication (E) - Manage MySQL replication mysql_user - Adds or removes a user from a MySQL database. mysql_variables - Manage MySQL global variables My use case scenario is, I've installed mysql-server on ubuntu and created the database successfully and now I have to insert data into the tables and wondering if there is a way to achieve it via ansible.
mysql, ubuntu, ansible
13
32,543
2
https://stackoverflow.com/questions/28188508/insert-data-into-mysql-tables-using-ansible
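For seed data the usual route is mysql_db with state: import against a dump file; a sketch in which the database name and dump path are placeholders, and the dump is assumed to have been copied to the host beforehand:

```yaml
- name: load seed data into the database
  mysql_db:
    name: mydb
    state: import
    target: /tmp/seed.sql   # assumed dump file on the target host
```

For one-off ad-hoc statements, shell with the mysql client (e.g. mysql mydb -e "INSERT ...") also works, at the cost of idempotence.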
53,941,356
Failed to import docker or docker-py - No module named docker
I've installed Docker and Ansible on my AWS EC2 Linux as follows: sudo yum update -y sudo yum install docker -v sudo service docker start sudo yum-config-manager --enable epel sudo yum repolist sudo yum install ansible I found the following error message when I tried to pull docker images to my AWS EC2 Linux with ansible. fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Failed to import docker or docker-py - No module named docker. Try pip install docker or pip install docker-py (Python 2.6)"} Docker version Client: Version: 18.06.1-ce API version: 1.38 Go version: go1.10.3 Git commit: e68fc7a215d7133c34aa18e3b72b4a21fd0c6136 Built: Fri Oct 26 23:38:19 2018 OS/Arch: linux/amd64 Experimental: false Ansible version is ansible 2.6.8 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.6/site-packages/ansible executable location = /usr/bin/ansible python version = 2.6.9 (unknown, Nov 2 2017, 19:21:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] Here is the relevant part of my ansible playbook - name: Pull a container image docker_container: name: mynodejs image: registry.gitlab.com/ppshein/test:latest pull: yes state: started published_ports: - 8080:80 Please let me know what I'm missing in the configuration of my AWS EC2 Linux instance.
Failed to import docker or docker-py - No module named docker I've installed Docker and Ansible on my AWS EC2 Linux as follows: sudo yum update -y sudo yum install docker -v sudo service docker start sudo yum-config-manager --enable epel sudo yum repolist sudo yum install ansible I found the following error message when I tried to pull docker images to my AWS EC2 Linux with ansible. fatal: [127.0.0.1]: FAILED! => {"changed": false, "msg": "Failed to import docker or docker-py - No module named docker. Try pip install docker or pip install docker-py (Python 2.6)"} Docker version Client: Version: 18.06.1-ce API version: 1.38 Go version: go1.10.3 Git commit: e68fc7a215d7133c34aa18e3b72b4a21fd0c6136 Built: Fri Oct 26 23:38:19 2018 OS/Arch: linux/amd64 Experimental: false Ansible version is ansible 2.6.8 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/ec2-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.6/site-packages/ansible executable location = /usr/bin/ansible python version = 2.6.9 (unknown, Nov 2 2017, 19:21:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] Here is the relevant part of my ansible playbook - name: Pull a container image docker_container: name: mynodejs image: registry.gitlab.com/ppshein/test:latest pull: yes state: started published_ports: - 8080:80 Please let me know what I'm missing in the configuration of my AWS EC2 Linux instance.
linux, docker, amazon-ec2, ansible
13
49,639
8
https://stackoverflow.com/questions/53941356/failed-to-import-docker-or-docker-py-no-module-named-docker
47,443,265
How to use Ansible wait_for to check a command status with multiple lines?
Ansible v2.4.0.0 I'm installing Gitlab-CE where I run the following in an Ansible task. As you can see, some of the processes are down, but they eventually come up. # gitlab-ctl status run: gitlab-workhorse: 0s, normally up run: logrotate: 1s, normally up down: nginx: 0s, normally up down: postgresql: 1s, normally up run: redis: 0s, normally up run: sidekiq: 0s, normally up run: unicorn: 0s, normally up How can I write an Ansible wait_for task to check when all the services are in the run state? IOW I only want to proceed to the next task when I see this # gitlab-ctl status run: gitlab-workhorse: 0s, normally up run: logrotate: 1s, normally up run: nginx: 0s, normally up run: postgresql: 1s, normally up run: redis: 0s, normally up run: sidekiq: 0s, normally up run: unicorn: 0s, normally up
How to use Ansible wait_for to check a command status with multiple lines? Ansible v2.4.0.0 I'm installing Gitlab-CE where I run the following in an Ansible task. As you can see, some of the processes are down, but they eventually come up. # gitlab-ctl status run: gitlab-workhorse: 0s, normally up run: logrotate: 1s, normally up down: nginx: 0s, normally up down: postgresql: 1s, normally up run: redis: 0s, normally up run: sidekiq: 0s, normally up run: unicorn: 0s, normally up How can I write an Ansible wait_for task to check when all the services are in the run state? IOW I only want to proceed to the next task when I see this # gitlab-ctl status run: gitlab-workhorse: 0s, normally up run: logrotate: 1s, normally up run: nginx: 0s, normally up run: postgresql: 1s, normally up run: redis: 0s, normally up run: sidekiq: 0s, normally up run: unicorn: 0s, normally up
ansible, ansible-2.x
13
25,978
1
https://stackoverflow.com/questions/47443265/how-to-use-ansible-wait-for-to-check-a-command-status-with-multiple-lines
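wait_for watches ports and files, not command output, so the idiomatic tool here is a retried task with until. A sketch that polls gitlab-ctl status until no service reports down (retry count and delay are arbitrary):

```yaml
- name: wait until no gitlab service reports "down:"
  command: gitlab-ctl status
  register: svc_status
  # succeed only once no line starts with "down:"
  until: "'down:' not in svc_status.stdout"
  retries: 30          # 30 x 10s = up to 5 minutes
  delay: 10
  changed_when: false  # a pure status check never changes the host
```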
44,004,727
how to run local command via ansible-playbook
I am trying to run a local command, iterating over the inventory file and taking each hostname as an argument to the local command. E.g. I want to run the command "knife node create {{ hostname }}" on my local machine (laptop). The playbook is: - name: Prep node hosts: 127.0.0.1 connection: local gather_facts: no tasks: - name: node create command: "knife node create {{ hostname | quote }}" and my inventory file looks like: [qa-hosts] 10.10.10.11 hostname=example-server-1 Of course, it won't work, as the inventory has 'qa-hosts' while the play targets '127.0.0.1', since I wanted the play to run from my local machine. Could anyone help me with an idea of how to get this done? Basically, I want to get the variable 'hostname' and pass it to the above play block.
how to run local command via ansible-playbook I am trying to run a local command, iterating over the inventory file and taking each hostname as an argument to the local command. E.g. I want to run the command "knife node create {{ hostname }}" on my local machine (laptop). The playbook is: - name: Prep node hosts: 127.0.0.1 connection: local gather_facts: no tasks: - name: node create command: "knife node create {{ hostname | quote }}" and my inventory file looks like: [qa-hosts] 10.10.10.11 hostname=example-server-1 Of course, it won't work, as the inventory has 'qa-hosts' while the play targets '127.0.0.1', since I wanted the play to run from my local machine. Could anyone help me with an idea of how to get this done? Basically, I want to get the variable 'hostname' and pass it to the above play block.
ansible
13
40,903
4
https://stackoverflow.com/questions/44004727/how-to-run-local-command-via-ansible-playbook
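One clean way around the host/variable mismatch: keep the play targeted at qa-hosts so {{ hostname }} resolves per host, and push only the command itself back to the control machine with delegate_to. A sketch:

```yaml
- name: Prep nodes
  hosts: qa-hosts
  gather_facts: no
  tasks:
    - name: node create
      command: "knife node create {{ hostname | quote }}"
      # runs on the laptop, once per inventory host,
      # with each host's own `hostname` variable in scope
      delegate_to: localhost
```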
35,297,181
Speed up AMI and ASG Creation
Using Ansible I create an AMI of an Ubuntu instance, then use this AMI to create a launch configuration and then update an auto scaling group. Are there any shortcuts I can take to speed up the ASG and AMI steps, which take 10mins+?
Speed up AMI and ASG Creation Using Ansible I create an AMI of an Ubuntu instance, then use this AMI to create a launch configuration and then update an auto scaling group. Are there any shortcuts I can take to speed up the ASG and AMI steps, which take 10mins+?
amazon-web-services, amazon-ec2, ansible, autoscaling, amazon-ami
13
8,053
2
https://stackoverflow.com/questions/35297181/speed-up-ami-and-asg-creation
71,649,227
Line too long: Ansible lint
This is my Ansible task - name: no need to import it. ansible.builtin.uri: url: > [URL] vertex_region }}-aiplatform.googleapis.com/v1/projects/{{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems method: GET headers: Content-Type: "application/json" Authorization: Bearer "{{ gcloud_auth }}" register: images When I check with ansible-lint, it reports: line too long (151 > 120 characters) (line-length) The error is for the url parameter of the task. I already used > to break up the url , but I'm not sure how I can reduce it further to fit the line-length constraint given by ansible-lint.
Line too long: Ansible lint This is my Ansible task - name: no need to import it. ansible.builtin.uri: url: > [URL] vertex_region }}-aiplatform.googleapis.com/v1/projects/{{ project }}/locations/{{ vertex_region }}/datasets/{{ dataset_id }}/dataItems method: GET headers: Content-Type: "application/json" Authorization: Bearer "{{ gcloud_auth }}" register: images When I check with ansible-lint, it reports: line too long (151 > 120 characters) (line-length) The error is for the url parameter of the task. I already used > to break up the url , but I'm not sure how I can reduce it further to fit the line-length constraint given by ansible-lint.
ansible, static-analysis, ansible-lint
13
24,925
2
https://stackoverflow.com/questions/71649227/line-too-long-ansible-lint
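The linter counts source characters, not the rendered URL, so the line can be split into short helper vars and recombined; a sketch (var names are arbitrary):

```yaml
- name: fetch data items
  vars:
    api_host: "{{ vertex_region }}-aiplatform.googleapis.com"
    api_path: >-
      v1/projects/{{ project }}/locations/{{ vertex_region
      }}/datasets/{{ dataset_id }}/dataItems
  ansible.builtin.uri:
    url: "https://{{ api_host }}/{{ api_path }}"
    method: GET
```

The folded scalar (>-) joins its lines with a space, but here that space lands inside a {{ … }} expression, where Jinja2 ignores it, so the rendered URL contains no gap.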
47,873,671
Becoming non root user in ansible fails
I am trying to become a user "oracle" in ansible using the following playbook: - hosts: "myhost" tasks: - name: install oracle client become: yes become_user: oracle become_method: su shell: | whoami args: chdir: /tmp/client environment: DISTRIB: /tmp/client I am receiving an error: "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of /tmp/ansible-tmp-1513617986.78-246171259298529/': Operation not permitted\nchown: changing ownership of /tmp/ansible-tmp-1513617986.78-246171259298529/command.py': Operation not permitted\n}). For information on working around this, see [URL] I have read the article " [URL] " and added the following to /etc/ansible/ansible.cfg without any effect. allow_world_readable_tmpfiles = True My Ansible Version: ansible 2.4.2.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] Question: Is there a way to configure my host to allow ansible to become the oracle user?
Becoming non root user in ansible fails I am trying to become a user "oracle" in ansible using the following playbook: - hosts: "myhost" tasks: - name: install oracle client become: yes become_user: oracle become_method: su shell: | whoami args: chdir: /tmp/client environment: DISTRIB: /tmp/client I am receiving an error: "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of /tmp/ansible-tmp-1513617986.78-246171259298529/': Operation not permitted\nchown: changing ownership of /tmp/ansible-tmp-1513617986.78-246171259298529/command.py': Operation not permitted\n}). For information on working around this, see [URL] I have read the article " [URL] " and added the following to /etc/ansible/ansible.cfg without any effect. allow_world_readable_tmpfiles = True My Ansible Version: ansible 2.4.2.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] Question: Is there a way to configure my host to allow ansible to become the oracle user?
ansible
13
23,550
3
https://stackoverflow.com/questions/47873671/becoming-non-root-user-in-ansible-fails
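Besides allow_world_readable_tmpfiles, the other workaround the docs describe is SSH pipelining, which streams the module over the connection instead of staging it in a temp directory that must be chown'ed to the unprivileged user. A sketch for ansible.cfg - whether it helps here depends on the become_method and on requiretty being disabled in the target's sudoers, so treat it as something to verify:

```ini
[ssh_connection]
; pipelining avoids the /tmp staging files entirely,
; sidestepping the "Failed to set permissions" chown error
pipelining = True
```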
26,981,907
Using ansible to manage disk space
Simple ask: I want to delete some files if partition utilization goes over a certain percentage. I have access to "size_total" and "size_available" via "ansible_mounts". i.e.: ansible myhost -m setup -a 'filter=ansible_mounts' myhost | success >> { "ansible_facts": { "ansible_mounts": [ { "device": "/dev/mapper/RootVolGroup00-lv_root", "fstype": "ext4", "mount": "/", "options": "rw", "size_available": 5033046016, "size_total": 8455118848 }, How do I access those values, and how would I perform actions conditionally based on them using Ansible?
Using ansible to manage disk space Simple ask: I want to delete some files if partition utilization goes over a certain percentage. I have access to "size_total" and "size_available" via "ansible_mounts". i.e.: ansible myhost -m setup -a 'filter=ansible_mounts' myhost | success >> { "ansible_facts": { "ansible_mounts": [ { "device": "/dev/mapper/RootVolGroup00-lv_root", "fstype": "ext4", "mount": "/", "options": "rw", "size_available": 5033046016, "size_total": 8455118848 }, How do I access those values, and how would I perform actions conditionally based on them using Ansible?
ansible
13
48,047
4
https://stackoverflow.com/questions/26981907/using-ansible-to-manage-disk-space
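The gathered facts can feed a when: condition directly; a sketch that removes an assumed directory when the root filesystem is more than 80% used (the path is a placeholder):

```yaml
- name: purge old logs when / is more than 80% full
  file:
    path: /var/log/old_logs   # placeholder path
    state: absent
  vars:
    # pick the entry of ansible_mounts whose mount point is /
    root_mount: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | first }}"
  when: (root_mount.size_available / root_mount.size_total) < 0.20
```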
58,389,314
Logrotate with ansible playbook
So I would like to create an ansible playbook that installs logrotate on all the servers in the company, and also configures them to back up logs weekly and then delete them a week after. So each week it makes a new log, backs up last week's log, and on the third week it deletes the first one and repeats. So far I have found this, but we do not use nginx, and it does not do exactly what I want. My knowledge of playbooks is quite limited, so if someone could help with it that would be awesome. I also need it to check whether the server has tomcat, apache or wildfly and then take those logs. logrotate_scripts: - name: nginx-options path: /var/log/nginx/options.log options: - daily - weekly - size 25M - rotate 7 - missingok - compress - delaycompress - copytruncate
Logrotate with ansible playbook So I would like to create an ansible playbook that installs logrotate on all the servers in the company, and also configures them to back up logs weekly and then delete them a week after. So each week it makes a new log, backs up last week's log, and on the third week it deletes the first one and repeats. So far I have found this, but we do not use nginx, and it does not do exactly what I want. My knowledge of playbooks is quite limited, so if someone could help with it that would be awesome. I also need it to check whether the server has tomcat, apache or wildfly and then take those logs. logrotate_scripts: - name: nginx-options path: /var/log/nginx/options.log options: - daily - weekly - size 25M - rotate 7 - missingok - compress - delaycompress - copytruncate
ansible
13
27,176
1
https://stackoverflow.com/questions/58389314/logrotate-with-ansible-playbook
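Assuming a role that renders each entry of logrotate_scripts into /etc/logrotate.d (as the snippet above implies), a sketch for a tomcat log rotated weekly and kept for a single rotation, i.e. removed on the rotation after next - the path is a guess:

```yaml
logrotate_scripts:
  - name: tomcat
    path: /var/log/tomcat/*.log   # assumed log location
    options:
      - weekly
      - rotate 1      # keep one old log; it is deleted on the following rotation
      - missingok
      - compress
```

Detecting whether tomcat, apache or wildfly is present could be done with a stat or service_facts task whose result guards each entry via when:, which is left out of this sketch.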
32,091,667
Ansible - Fetch first few characters of a register value
I would like to fetch the first few characters of a registered variable. Can somebody please suggest how to do that? - hosts: node1 gather_facts: False tasks: - name: Check Value mango shell: cat /home/vagrant/mango register: result - name: Display Result In Loop debug: msg="Version is {{ result.stdout[5] }}" The above code displays the fifth character rather than the first 5 characters of the registered string. PLAY [node1] ****************************************************************** TASK: [Check Value mango] ***************************************************** changed: [10.200.19.21] => {"changed": true, "cmd": "cat /home/vagrant/mango", "delta": "0:00:00.003000", "end": "2015-08-19 09:29:58.229244", "rc": 0, "start": "2015-08-19 09:29:58.226244", "stderr": "", "stdout": "d3aa6131ec1a2e73f69ee150816265b5617d7e69", "warnings": []} TASK: [Display Result In Loop] ************************************************ ok: [10.200.19.21] => { "msg": "Version is 1" } PLAY RECAP ******************************************************************** 10.200.19.21 : ok=2 changed=1 unreachable=0 failed=0
Ansible - Fetch first few characters of a register value I would like to fetch the first few characters of a registered variable. Can somebody please suggest how to do that? - hosts: node1 gather_facts: False tasks: - name: Check Value mango shell: cat /home/vagrant/mango register: result - name: Display Result In Loop debug: msg="Version is {{ result.stdout[5] }}" The above code displays the fifth character rather than the first 5 characters of the registered string. PLAY [node1] ****************************************************************** TASK: [Check Value mango] ***************************************************** changed: [10.200.19.21] => {"changed": true, "cmd": "cat /home/vagrant/mango", "delta": "0:00:00.003000", "end": "2015-08-19 09:29:58.229244", "rc": 0, "start": "2015-08-19 09:29:58.226244", "stderr": "", "stdout": "d3aa6131ec1a2e73f69ee150816265b5617d7e69", "warnings": []} TASK: [Display Result In Loop] ************************************************ ok: [10.200.19.21] => { "msg": "Version is 1" } PLAY RECAP ******************************************************************** 10.200.19.21 : ok=2 changed=1 unreachable=0 failed=0
ansible
13
30,933
1
https://stackoverflow.com/questions/32091667/ansible-fetch-first-few-character-of-a-register-value
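Jinja2 follows Python string semantics here: s[5] is the single character at index 5, while the slice s[:5] is the first five characters. A minimal demonstration with the SHA from the question's output:

```python
sha = "d3aa6131ec1a2e73f69ee150816265b5617d7e69"

print(sha[5])    # indexing: the single character at index 5 -> "1"
print(sha[:5])   # slicing: the first five characters -> "d3aa6"
```

So in the playbook, msg="Version is {{ result.stdout[:5] }}" prints the first five characters; Jinja's truncate filter would work too.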
25,410,656
Ansible IP address variable - host part
I have the following problem: I'm writing a playbook for setting an IP address on the command line in Ansible. Let's say 10.10.10.x. I need to get the last part of my public IP, let's say x.x.x.15, and assign it to the private one: 10.10.10.15. Is there a variable for this? Can I capture one? I've tried to use something like: shell: "ip addr show | grep inet ...." register: host_ip But it is not what I need. It works, but only for a limited number of servers. The whole thing should look like this: "shell: /dir/script --options 10.10.10.{{ var }}" and {{ var }} should be the host part of the public IP. Edit: Thank you! Here's my final solution: - name: Get the host part of the IP shell: host {{ ansible_fqdn }} | awk '{print $4}' register: host_ip And {{ host_ip.stdout.split('.')[3] }} for using it later in the playbook.
Ansible IP address variable - host part I have the following problem: I'm writing a playbook for setting an IP address on the command line in Ansible. Let's say 10.10.10.x. I need to get the last part of my public IP, let's say x.x.x.15, and assign it to the private one: 10.10.10.15. Is there a variable for this? Can I capture one? I've tried to use something like: shell: "ip addr show | grep inet ...." register: host_ip But it is not what I need. It works, but only for a limited number of servers. The whole thing should look like this: "shell: /dir/script --options 10.10.10.{{ var }}" and {{ var }} should be the host part of the public IP. Edit: Thank you! Here's my final solution: - name: Get the host part of the IP shell: host {{ ansible_fqdn }} | awk '{print $4}' register: host_ip And {{ host_ip.stdout.split('.')[3] }} for using it later in the playbook.
shell, variables, ip, ansible
13
46,146
3
https://stackoverflow.com/questions/25410656/ansible-ip-address-variable-host-part
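The split('.')[3] in the accepted solution is plain Python string handling, which Jinja2 exposes directly; a minimal demonstration (the address is a documentation-range placeholder):

```python
public_ip = "198.51.100.15"            # hypothetical public address
last_octet = public_ip.split(".")[3]   # fourth dot-separated field -> "15"
private_ip = "10.10.10." + last_octet

print(private_ip)                      # -> 10.10.10.15
```

When facts are gathered, {{ ansible_default_ipv4.address.split('.')[3] }} applies the same expression without a shell round-trip, assuming the default interface carries the address you want.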
57,804,071
What&#39;s the difference between ansible &#39;raw&#39;, &#39;shell&#39; and &#39;command&#39;?
What is the difference between raw, shell and command in an Ansible playbook? And when should you use which?
What&#39;s the difference between ansible &#39;raw&#39;, &#39;shell&#39; and &#39;command&#39;? What is the difference between raw, shell and command in an Ansible playbook? And when should you use which?
ansible
13
25,441
2
https://stackoverflow.com/questions/57804071/whats-the-difference-between-ansible-raw-shell-and-command
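In short: `command` runs the program directly (no shell, so pipes and redirects do not work), `shell` runs the command through `/bin/sh`, and `raw` sends the command straight over SSH without requiring Python on the target. A side-by-side sketch:

```yaml
- name: No shell involved, so | and > are passed as literal arguments
  command: ls /tmp

- name: Goes through /bin/sh, so pipes and redirects work
  shell: ls /tmp | wc -l

- name: No Python needed on the target (useful for bootstrapping)
  raw: ls /tmp
```

As a rule of thumb, prefer `command` unless you need shell features, and reserve `raw` for hosts where Python is not yet installed.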
39,013,796
Create user with option --disabled-password by Ansible
On Ubuntu 14.04 I create a user with a disabled password like this: sudo adduser --disabled-password myuser I need to do the same with the Ansible user module, but a --disabled-password option is missing from the Ansible documentation. Could somebody help me: how can I get the same result with the user module?
Create user with option --disabled-password by Ansible On Ubuntu 14.04 I create a user with a disabled password like this: sudo adduser --disabled-password myuser I need to do the same with the Ansible user module, but a --disabled-password option is missing from the Ansible documentation. Could somebody help me: how can I get the same result with the user module?
ubuntu, passwords, ansible
13
20,737
3
https://stackoverflow.com/questions/39013796/create-user-with-option-disabled-password-by-ansible
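One common workaround: `adduser --disabled-password` leaves the password field unusable, which the `user` module can approximate by setting the crypted password to `!` (newer Ansible versions also offer a `password_lock` option). A hedged sketch:

```yaml
- name: Create myuser with a disabled (unusable) password
  user:
    name: myuser
    password: '!'   # an invalid hash, so password login is impossible
    shell: /bin/bash
```

Key-based SSH login still works for such an account, which matches the behaviour of `--disabled-password`.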
62,272,678
Ansible: The loop variable &#39;item&#39; is already in use
I want to run a task in Ansible, something similar to the following. #Task in Playbook - name : Include tasks block: - name: call example.yml include_tasks: "example.yml" vars: my_var: item with_items: - [1, 2] # example.yml - name: Debug. debug: msg: - "my_var: {{ my_var }}" with_inventory_hostnames: - 'all' I expect the output to print my_var as 1 in the first iteration and 2 in the second iteration of the loop in the playbook. But instead, it prints the hostnames: # Output TASK [proxysql : Debug.] ************************************************************************************************ [WARNING]: The loop variable 'item' is already in use. You should set the loop_var value in the loop_control option for the task to something else to avoid variable collisions and unexpected behavior. ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.134.34.34" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.123.23.23" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.112.12.12" ] } TASK [proxysql : Debug.] ************************************************************************************************ [WARNING]: The loop variable 'item' is already in use. You should set the loop_var value in the loop_control option for the task to something else to avoid variable collisions and unexpected behavior. ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.134.34.34" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.123.23.23" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.112.12.12" ] } Thanks in advance
Ansible: The loop variable &#39;item&#39; is already in use I want to run a task in Ansible, something similar to the following. #Task in Playbook - name : Include tasks block: - name: call example.yml include_tasks: "example.yml" vars: my_var: item with_items: - [1, 2] # example.yml - name: Debug. debug: msg: - "my_var: {{ my_var }}" with_inventory_hostnames: - 'all' I expect the output to print my_var as 1 in the first iteration and 2 in the second iteration of the loop in the playbook. But instead, it prints the hostnames: # Output TASK [proxysql : Debug.] ************************************************************************************************ [WARNING]: The loop variable 'item' is already in use. You should set the loop_var value in the loop_control option for the task to something else to avoid variable collisions and unexpected behavior. ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.134.34.34" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.123.23.23" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.112.12.12" ] } TASK [proxysql : Debug.] ************************************************************************************************ [WARNING]: The loop variable 'item' is already in use. You should set the loop_var value in the loop_control option for the task to something else to avoid variable collisions and unexpected behavior. ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.134.34.34" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.123.23.23" ] } ok: [10.1xx.xx.xx] => (item=None) => { "msg": [ "my_var: 10.112.12.12" ] } Thanks in advance
ansible, ansible-2.x
13
18,481
1
https://stackoverflow.com/questions/62272678/ansible-the-loop-variable-item-is-already-in-use
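Two things go wrong in the playbook above: `my_var: item` assigns the literal string `item` (it needs `{{ }}`), and the inner `with_inventory_hostnames` loop overwrites `item` from the outer loop. `loop_control.loop_var` renames the outer loop variable so the two loops no longer collide. A sketch of the fix:

```yaml
- name: call example.yml
  include_tasks: example.yml
  vars:
    my_var: "{{ outer_item }}"
  with_items: [1, 2]
  loop_control:
    loop_var: outer_item
```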
30,501,711
Accessing nested variable variables in Ansible
Here is my group_vars/all file: app_env: staging staging: app_a: db_host: localhost app_b: db_host: localhost production: app_a: db_host: app_a-db.example.net app_b: db_host: app_b-db.example.com If app_env environment has to be production, I overwrite this via inventory variables. This way, all deployments are staging unless you make them production explicitly. So, when I want to print the variable in a playbook, I can do --- - debug: var={{app_env}}.app_a.db_host This works! But how can I access the variable in another module, i.e. lineinfile ? Some Examples that didn't work out: - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ app_env.app_a.db_host }}"' - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ app_env[app_a][db_host] }}"' - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ {{app_env}}.app_a.db_host }}"' Working solutions would be using the set_fact module (double lines of code, not really smart) or including different variable files, depending on app_env . But I really would like to know if there's a notation to access nested variable variables ;)
Accessing nested variable variables in Ansible Here is my group_vars/all file: app_env: staging staging: app_a: db_host: localhost app_b: db_host: localhost production: app_a: db_host: app_a-db.example.net app_b: db_host: app_b-db.example.com If app_env environment has to be production, I overwrite this via inventory variables. This way, all deployments are staging unless you make them production explicitly. So, when I want to print the variable in a playbook, I can do --- - debug: var={{app_env}}.app_a.db_host This works! But how can I access the variable in another module, i.e. lineinfile ? Some Examples that didn't work out: - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ app_env.app_a.db_host }}"' - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ app_env[app_a][db_host] }}"' - lineinfile: dest=/etc/profile line='export APP_A_DB_HOST="{{ {{app_env}}.app_a.db_host }}"' Working solutions would be using the set_fact module (double lines of code, not really smart) or including different variable files, depending on app_env . But I really would like to know if there's a notation to access nested variable variables ;)
ansible
13
25,575
1
https://stackoverflow.com/questions/30501711/accessing-nested-variable-variables-in-ansible
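The missing piece is an extra level of indirection: look the outer variable up by name first. The `vars` lookup (available since Ansible 2.5) does this inline, so no `set_fact` is needed. A sketch, keeping the variable names from the question:

```yaml
- lineinfile:
    dest: /etc/profile
    line: 'export APP_A_DB_HOST="{{ lookup("vars", app_env).app_a.db_host }}"'
```

With `app_env: staging`, the lookup returns the `staging` dictionary, and the attribute access then descends into `app_a.db_host` as usual.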
43,341,180
Ansible block vars
How can I set block vars in Ansible (visible only to the tasks inside the block)? I tried: --- - hosts: test tasks: - block: - name: task 1 shell: "echo {{item}}" with_items: - one - two but it seems that this is the wrong way.
Ansible block vars How can I set block vars in Ansible (visible only to the tasks inside the block)? I tried: --- - hosts: test tasks: - block: - name: task 1 shell: "echo {{item}}" with_items: - one - two but it seems that this is the wrong way.
variables, ansible
13
33,696
2
https://stackoverflow.com/questions/43341180/ansible-block-vars
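Blocks do accept a `vars:` keyword (what they do not accept is a loop such as `with_items` on the block itself). A sketch of block-scoped variables:

```yaml
- hosts: test
  tasks:
    - block:
        - name: task 1
          debug:
            msg: "{{ my_var }}"
      vars:
        my_var: visible only inside this block
```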
36,031,848
How to get the SHA of the checked out code with ansible git module?
I would like Ansible to store the SHA-1 hash of the currently checked out commit. I want to set_fact with this version for use in another role.
How to get the SHA of the checked out code with ansible git module? I would like Ansible to store the SHA-1 hash of the currently checked out commit. I want to set_fact with this version for use in another role.
git, ansible
13
7,843
1
https://stackoverflow.com/questions/36031848/how-to-get-the-sha-of-the-checked-out-code-with-ansible-git-module
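When registered, the `git` module reports the checked-out commit SHA in its `after` return value, which can then be promoted to a fact. A sketch (repository URL and paths are placeholders):

```yaml
- name: Check out the code
  git:
    repo: https://example.com/repo.git
    dest: /srv/app
  register: git_result

- name: Remember the checked-out SHA for later roles
  set_fact:
    app_git_sha: "{{ git_result.after }}"
```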
27,681,875
ansible-galaxy role fails with "do not have permission to modify /etc/ansible/roles/"
tl;dr = How do OS X users recommend working around this permissions error? I'm on OS X 10.10.1 and I recently installed Ansible running the following: sudo pip install ansible --quiet sudo pip install ansible --upgrade I want to start off with a galaxy role to install homebrew and went to run this one with the following error: $ ansible-galaxy install geerlingguy.homebrew - downloading role 'homebrew', owned by geerlingguy - downloading role from [URL] - extracting geerlingguy.homebrew to /etc/ansible/roles/geerlingguy.homebrew - error: you do not have permission to modify files in /etc/ansible/roles/geerlingguy.homebrew - geerlingguy.homebrew was NOT installed successfully. - you can use --ignore-errors to skip failed roles. While I see /etc is owned by root, I don't see any notes in documentation saying I should chmod anything. For reference: $ ansible --version ansible 1.8.2 configured module search path = None Is this expected or is my installation somehow wrong?
ansible-galaxy role fails with "do not have permission to modify /etc/ansible/roles/" tl;dr = How do OS X users recommend working around this permissions error? I'm on OS X 10.10.1 and I recently installed Ansible running the following: sudo pip install ansible --quiet sudo pip install ansible --upgrade I want to start off with a galaxy role to install homebrew and went to run this one with the following error: $ ansible-galaxy install geerlingguy.homebrew - downloading role 'homebrew', owned by geerlingguy - downloading role from [URL] - extracting geerlingguy.homebrew to /etc/ansible/roles/geerlingguy.homebrew - error: you do not have permission to modify files in /etc/ansible/roles/geerlingguy.homebrew - geerlingguy.homebrew was NOT installed successfully. - you can use --ignore-errors to skip failed roles. While I see /etc is owned by root, I don't see any notes in documentation saying I should chmod anything. For reference: $ ansible --version ansible 1.8.2 configured module search path = None Is this expected or is my installation somehow wrong?
ansible, osx-yosemite, ansible-galaxy
13
5,116
3
https://stackoverflow.com/questions/27681875/ansible-galaxy-role-fails-with-do-not-have-permission-to-modify-etc-ansible-ro
54,429,463
run ansible task only if tag is NOT specified
Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of these, only the last one will work as I expect in all situations: - hosts: all tasks: - debug: msg: 'not TAG (won't work if other tags specified)' tags: not TAG - debug: msg: 'always, but not if TAG specified (doesn't work; always runs)' tags: always,not TAG - debug: msg: 'ALWAYS, but not if TAG in ansible_run_tags' when: "'TAG' not in ansible_run_tags" tags: always Try it with different CLI options and you'll hopefully see why I find this a bit perplexing: ansible-playbook tags-test.yml -l HOST ansible-playbook tags-test.yml -l HOST -t TAG ansible-playbook tags-test.yml -l HOST -t OTHERTAG Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing? I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags . Amendment: It was suggested that I post my actual use case. I'm using ansible to drive system updates on Debian family systems. I'm trying to notify at the end if a reboot is required unless the tag reboot was supplied, in which case cause a reboot (and wait for system to come back up). Here is the relevant snippet: - name: check and perhaps reboot block: - name: Check if a reboot is required stat: path: /var/run/reboot-required get_md5: no register: reboot tags: always,reboot - name: Alert if a reboot is required fail: msg: "NOTE: a reboot required to finish uppdates." when: - ('reboot' not in ansible_run_tags) - reboot.stat.exists tags: always - name: Reboot the server reboot: msg: rebooting after Ansible applied system updates when: reboot.stat.exists or ('force-reboot' in ansible_run_tags) tags: never,reboot,force-reboot I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
run ansible task only if tag is NOT specified Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of these, only the last one will work as I expect in all situations: - hosts: all tasks: - debug: msg: 'not TAG (won't work if other tags specified)' tags: not TAG - debug: msg: 'always, but not if TAG specified (doesn't work; always runs)' tags: always,not TAG - debug: msg: 'ALWAYS, but not if TAG in ansible_run_tags' when: "'TAG' not in ansible_run_tags" tags: always Try it with different CLI options and you'll hopefully see why I find this a bit perplexing: ansible-playbook tags-test.yml -l HOST ansible-playbook tags-test.yml -l HOST -t TAG ansible-playbook tags-test.yml -l HOST -t OTHERTAG Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing? I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags . Amendment: It was suggested that I post my actual use case. I'm using ansible to drive system updates on Debian family systems. I'm trying to notify at the end if a reboot is required unless the tag reboot was supplied, in which case cause a reboot (and wait for system to come back up). Here is the relevant snippet: - name: check and perhaps reboot block: - name: Check if a reboot is required stat: path: /var/run/reboot-required get_md5: no register: reboot tags: always,reboot - name: Alert if a reboot is required fail: msg: "NOTE: a reboot required to finish uppdates." when: - ('reboot' not in ansible_run_tags) - reboot.stat.exists tags: always - name: Reboot the server reboot: msg: rebooting after Ansible applied system updates when: reboot.stat.exists or ('force-reboot' in ansible_run_tags) tags: never,reboot,force-reboot I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.
tags, ansible
13
16,528
6
https://stackoverflow.com/questions/54429463/run-ansible-task-only-if-tag-is-not-specified
32,766,442
Get encrypted password for PostgreSQL login role
Is there a way to fetch the encrypted password for a login role from a PostgreSQL server? To give some insight into my problem, I'm trying to manage the postgres user's password via Ansible. To do so, I would like to check the current value of the encrypted password (e.g. 'md5...') to see if it's current or not. If it is not, I would execute the appropriate ALTER ROLE command to update it. I know I can use pg_dumpall to see the password, e.g: $ pg_dumpall --roles-only <snip> CREATE ROLE postgres; ALTER ROLE postgres WITH ... PASSWORD 'md5...'; But this doesn't seem like a very reliable way of doing so.
Get encrypted password for PostgreSQL login role Is there a way to fetch the encrypted password for a login role from a PostgreSQL server? To give some insight into my problem, I'm trying to manage the postgres user's password via Ansible. To do so, I would like to check the current value of the encrypted password (e.g. 'md5...') to see if it's current or not. If it is not, I would execute the appropriate ALTER ROLE command to update it. I know I can use pg_dumpall to see the password, e.g: $ pg_dumpall --roles-only <snip> CREATE ROLE postgres; ALTER ROLE postgres WITH ... PASSWORD 'md5...'; But this doesn't seem like a very reliable way of doing so.
postgresql, ansible
13
32,013
1
https://stackoverflow.com/questions/32766442/get-encrypted-password-for-postgresql-login-role
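The hash itself lives in the `pg_authid` system catalog, which is more reliable than parsing `pg_dumpall` output. A sketch using the `postgresql_query` module (the module ships with newer Ansible releases, so verify availability for your version):

```yaml
- name: Fetch the stored password hash for the postgres role
  become: yes
  become_user: postgres
  postgresql_query:
    query: SELECT rolpassword FROM pg_authid WHERE rolname = 'postgres'
  register: pg_pw
```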
31,649,421
Ansible won't let me connect through SSH
I'm trying to connect from one server to another. In fact I'm trying to connect to my host OS (CoreOS) from within a docker container. I have set up an RSA key and it works like a charm when using the standard command line to connect to the remote host. It works as expected. When I'm trying to run ansible customercare -m ping --user=core --connection=ssh --private-key=/home/jenkins/.ssh/id_rsa I'm met with this error 10.45.1.107 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue And the verbose output looks like this: <10.45.1.107> ESTABLISH CONNECTION FOR USER: core <10.45.1.107> REMOTE_MODULE ping <10.45.1.107> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'IdentityFile=/home/jenkins/.ssh/id_rsa', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=core', '-o', 'ConnectTimeout=10', '10.45.1.107', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223 && echo $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223'"] 10.45.1.107 | FAILED => SSH encountered an unknown error. The output was: OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: auto-mux: Trying existing master debug1: Control socket "/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core" does not exist debug2: ssh_connect: needpriv 0 debug1: Connecting to 10.45.1.107 [10.45.1.107] port 22. debug2: fd 3 setting O_NONBLOCK debug1: fd 3 clearing O_NONBLOCK debug1: Connection established. 
debug3: timeout: 9985 ms remain after connect debug1: permanently_set_uid: 0/0 debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jenkins/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/jenkins/.ssh/id_rsa type 1 debug1: identity file /home/jenkins/.ssh/id_rsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7 debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /root/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group- exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert- v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh. 
com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20- poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20- poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256- etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac- md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96, hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256- etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac- md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96, hmac-md5-96 debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ssh-ed25519 debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac- sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac- sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: server->client aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: client->server aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ED25519 54:85:33:0a:6f:78:74:a7:13:7d:74:bd:03:f1:9c:ce debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /root/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '10.45.1.107' is known and matches the ED25519 host key. 
debug1: Found key in /root/.ssh/known_hosts:1 debug1: ssh_ed25519_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/jenkins/.ssh/id_rsa (0x7f2295d969e0), explicit debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey debug3: authmethod_lookup publickey debug3: remaining preferred: ,gssapi-keyex,hostbased,publickey debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/jenkins/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug3: sign_and_send_pubkey: RSA 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Enabling compression at level 6. debug1: Authentication succeeded (publickey). Authenticated to 10.45.1.107 ([10.45.1.107]:22). 
debug1: setting up multiplex master socket debug3: muxserver_listen: temporary control path /root/.ansible/cp/ansible-ssh-10.45.1.107-22-core.xNa4LxZkP4s02v2j debug2: fd 4 setting O_NONBLOCK debug3: fd 4 is O_NONBLOCK debug3: fd 4 is O_NONBLOCK debug1: channel 0: new [/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core] debug3: muxserver_listen: mux listener channel 0 fd 4 debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x08 debug1: control_persist_detach: backgrounding master process debug2: control_persist_detach: background process is 470 Control socket connect(/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core): Connection refused Failed to connect to new control master debug1: forking to background debug1: Entering interactive session. debug2: set_control_persist_exit_time: schedule exit in 60 seconds Any clue on what is going on? [UPDATE] Here's the log from a successful SSH logon: jenkins@9031c65c8952:~$ ssh core@10.45.1.107 -vvvv OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 10.45.1.107 [10.45.1.107] port 22. debug1: Connection established. 
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jenkins/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/jenkins/.ssh/id_rsa type 1 debug1: identity file /home/jenkins/.ssh/id_rsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_dsa type -1 debug1: identity file /home/jenkins/.ssh/id_dsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_ecdsa type -1 debug1: identity file /home/jenkins/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_ed25519 type -1 debug1: identity file /home/jenkins/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7 debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/home/jenkins/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /home/jenkins/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ssh-ed25519 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: 
Ansible won't let me connect through SSH

I'm trying to connect from one server to another. In fact I'm trying to connect to my host OS (CoreOS) from within a docker container. I have set up an RSA key and it works like a charm when using the standard command line to connect to the remote host. It works as expected. When I'm trying to run

ansible customercare -m ping --user=core --connection=ssh --private-key=/home/jenkins/.ssh/id_rsa

I'm met with this error:

10.45.1.107 | FAILED => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue

And the verbose output looks like this:

<10.45.1.107> ESTABLISH CONNECTION FOR USER: core
<10.45.1.107> REMOTE_MODULE ping
<10.45.1.107> EXEC ['ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'IdentityFile=/home/jenkins/.ssh/id_rsa', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=core', '-o', 'ConnectTimeout=10', '10.45.1.107', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223 && echo $HOME/.ansible/tmp/ansible-tmp-1437988628.37-213828375275223'"]
10.45.1.107 | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 10.45.1.107 [10.45.1.107] port 22.
debug2: fd 3 setting O_NONBLOCK debug1: fd 3 clearing O_NONBLOCK debug1: Connection established. debug3: timeout: 9985 ms remain after connect debug1: permanently_set_uid: 0/0 debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jenkins/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/jenkins/.ssh/id_rsa type 1 debug1: identity file /home/jenkins/.ssh/id_rsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7 debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /root/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group- exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert- v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh. 
com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20- poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20- poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256- etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac- md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96, hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256- etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac- md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96, hmac-md5-96 debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ssh-ed25519 debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac- sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac- sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: server->client aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: client->server aes128-ctr hmac-sha1-etm@openssh.com zlib@openssh.com debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ED25519 54:85:33:0a:6f:78:74:a7:13:7d:74:bd:03:f1:9c:ce debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/root/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /root/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '10.45.1.107' is known and matches the ED25519 host key. 
debug1: Found key in /root/.ssh/known_hosts:1 debug1: ssh_ed25519_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/jenkins/.ssh/id_rsa (0x7f2295d969e0), explicit debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey debug3: authmethod_lookup publickey debug3: remaining preferred: ,gssapi-keyex,hostbased,publickey debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/jenkins/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug3: sign_and_send_pubkey: RSA 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Enabling compression at level 6. debug1: Authentication succeeded (publickey). Authenticated to 10.45.1.107 ([10.45.1.107]:22). 
debug1: setting up multiplex master socket
debug3: muxserver_listen: temporary control path /root/.ansible/cp/ansible-ssh-10.45.1.107-22-core.xNa4LxZkP4s02v2j
debug2: fd 4 setting O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug1: channel 0: new [/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core]
debug3: muxserver_listen: mux listener channel 0 fd 4
debug2: fd 3 setting TCP_NODELAY
debug3: packet_set_tos: set IP_TOS 0x08
debug1: control_persist_detach: backgrounding master process
debug2: control_persist_detach: background process is 470
Control socket connect(/root/.ansible/cp/ansible-ssh-10.45.1.107-22-core): Connection refused
Failed to connect to new control master
debug1: forking to background
debug1: Entering interactive session.
debug2: set_control_persist_exit_time: schedule exit in 60 seconds

Any clue on what is going on?

[UPDATE] Here's the log from a successful SSH logon:

jenkins@9031c65c8952:~$ ssh core@10.45.1.107 -vvvv
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 10.45.1.107 [10.45.1.107] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier debug3: Could not load "/home/jenkins/.ssh/id_rsa" as a RSA1 public key debug1: identity file /home/jenkins/.ssh/id_rsa type 1 debug1: identity file /home/jenkins/.ssh/id_rsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_dsa type -1 debug1: identity file /home/jenkins/.ssh/id_dsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_ecdsa type -1 debug1: identity file /home/jenkins/.ssh/id_ecdsa-cert type -1 debug1: identity file /home/jenkins/.ssh/id_ed25519 type -1 debug1: identity file /home/jenkins/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7 debug1: match: OpenSSH_6.7 pat OpenSSH* compat 0x04000000 debug2: fd 3 setting O_NONBLOCK debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/home/jenkins/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /home/jenkins/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-ed25519-cert-v01@openssh.com,ssh-ed25519,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ssh-ed25519 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: none,zlib@openssh.com debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: server->client aes128-ctr hmac-sha1-etm@openssh.com none debug2: mac_setup: setup hmac-sha1-etm@openssh.com debug1: kex: client->server aes128-ctr hmac-sha1-etm@openssh.com none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ED25519 54:85:33:0a:6f:78:74:a7:13:7d:74:bd:03:f1:9c:ce debug3: load_hostkeys: loading entries for host "10.45.1.107" from file "/home/jenkins/.ssh/known_hosts" debug3: load_hostkeys: found key type ED25519 in file /home/jenkins/.ssh/known_hosts:1 debug3: load_hostkeys: loaded 1 keys debug1: Host '10.45.1.107' is known and matches the ED25519 host key. 
debug1: Found key in /home/jenkins/.ssh/known_hosts:1 debug1: ssh_ed25519_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/jenkins/.ssh/id_rsa (0x7fab14d1cab0), debug2: key: /home/jenkins/.ssh/id_dsa ((nil)), debug2: key: /home/jenkins/.ssh/id_ecdsa ((nil)), debug2: key: /home/jenkins/.ssh/id_ed25519 ((nil)), debug1: Authentications that can continue: publickey,password,keyboard-interactive debug3: start over, passed a different list publickey,password,keyboard-interactive debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/jenkins/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Server accepts key: pkalg ssh-rsa blen 279 debug2: input_userauth_pk_ok: fp 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug3: sign_and_send_pubkey: RSA 53:f8:88:06:5b:c2:a3:0a:05:9f:2c:ed:3b:51:74:47 debug1: key_parse_private2: missing begin marker debug1: read PEM private key done: type RSA debug1: Authentication succeeded (publickey). Authenticated to 10.45.1.107 ([10.45.1.107]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting no-more-sessions@openssh.com debug1: Entering interactive session. 
debug2: callback start debug2: fd 3 setting TCP_NODELAY debug3: packet_set_tos: set IP_TOS 0x10 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env LOGNAME debug3: Ignored env LESSOPEN debug3: Ignored env LESSCLOSE debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Last login: Mon Jul 27 09:49:44 2015 from 172.17.0.37 CoreOS stable (717.3.0) core@localhost ~ $
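Not part of the original question, but a comparison of the two logs: the failing Ansible run dies at the ControlMaster multiplexing step ("Control socket connect(...): Connection refused"), a step the successful plain ssh session never attempts. A hedged workaround, assuming the socket path under /root/.ansible/cp is the problem inside the container, is to relocate Ansible's control socket in ansible.cfg:

```ini
# Sketch of an ansible.cfg tweak, not a confirmed fix: move the
# ControlPersist socket (shown under /root/.ansible/cp in the failing
# log) to a short, writable path.
[ssh_connection]
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
```

If multiplexing itself turns out to be the culprit, adding -o ControlMaster=no to ssh_args while debugging would isolate it.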
docker, ansible, coreos
13
10,147
3
https://stackoverflow.com/questions/31649421/ansible-wont-let-me-connect-through-ssh
39,165,463
Add an item to a list dependent on a conditional in ansible
I would like to add an item to a list in ansible dependent on some condition being met. This doesn't work:

some_dictionary:
  app:
    - something
    - something else
    - something conditional   # only want this item when some_condition == True
  when: some_condition

I am not sure of the correct way to do this. Can I create a new task to add to the app value in the some_dictionary somehow?
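One possible approach (an addition, not from the question) is to build the list with an inline Jinja2 expression, so the extra element is appended only when the condition holds; some_condition is assumed to be a boolean variable:

```yaml
# Sketch: list concatenation with a Jinja2 inline "if" - the second
# list is empty unless some_condition is true.
some_dictionary:
  app: "{{ ['something', 'something else']
           + (['something conditional'] if some_condition else []) }}"
```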
ansible
13
18,354
4
https://stackoverflow.com/questions/39165463/add-an-item-to-a-list-dependent-on-a-conditional-in-ansible
36,897,095
What's the correct way to upgrade APT packages using Ansible?
When setting up a new Linux server, I typically run apt-get update and then apt-get upgrade. The first command updates the list of available packages and their versions, but it does not install or upgrade any packages. The second command actually installs newer versions of the packages I have. What is the correct way to do this in Ansible?

One way you could do it is like this:

- name: update and upgrade apt packages
  apt: >
    upgrade=yes
    update_cache=yes
    cache_valid_time=3600

Or you could do it in two separate steps:

- name: update apt packages
  apt: >
    update_cache=yes
    cache_valid_time=3600

- name: upgrade apt packages
  apt: upgrade=yes

If you do it the first way, is Ansible smart enough to know that it should run 'update' before 'upgrade'? The Ansible apt documentation doesn't address this finer point.
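As a point of comparison (an addition, not from the question), the combined variant can also be written in the apt module's YAML-dict syntax; within a single task the module reportedly refreshes the cache before upgrading, which is the ordering the question asks about:

```yaml
# Sketch of the single-task variant in YAML-dict form. "dist" mirrors
# apt-get dist-upgrade; "yes"/"safe" mirror plain apt-get upgrade.
- name: update cache and upgrade all packages
  apt:
    update_cache: yes
    cache_valid_time: 3600
    upgrade: dist
```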
ansible, apt
13
29,030
2
https://stackoverflow.com/questions/36897095/whats-correct-way-to-upgrade-apt-packages-using-ansible
31,681,144
Call ssh-copy-id in an Ansible playbook - How to handle password prompt?
I have two servers. I manage serverA with Ansible. serverB is not managed with Ansible. I want serverA to be able to access serverB by copying the ssh_pub_key of serverA to serverB. This can be done manually by calling ssh-copy-id user@serverB on serverA. I want to do this with Ansible on serverA automatically.

- name: Register ssh key at serverB
  command: ssh-copy-id -i /home/{{user}}/.ssh/id_rsa.pub -o StrictHostKeyChecking=no user@serverB

Calling ssh-copy-id requires me to enter my ssh password for user@serverB, so the key can be copied. How can I do this via ansible? I want it to ask for the user@serverB password interactively while executing the playbook. Storing the password in ansible vault is also an option; then I still do not know how to avoid the interactive password prompt of ssh-copy-id, though. I also added -o StrictHostKeyChecking=no to the call because this is another interaction that normally requires user interaction when calling ssh-copy-id.
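One hedged way to automate the prompt (not part of the original question) is Ansible's expect module, which requires pexpect on the machine running the command; serverb_password is an invented variable name that could be supplied from vault or a vars_prompt:

```yaml
# Sketch: answer ssh-copy-id's password prompt automatically.
# "serverb_password" is hypothetical - feed it from ansible-vault
# or a vars_prompt; no_log keeps it out of the task output.
- name: Register ssh key at serverB
  expect:
    command: ssh-copy-id -i /home/{{ user }}/.ssh/id_rsa.pub -o StrictHostKeyChecking=no user@serverB
    responses:
      (?i)password: "{{ serverb_password }}"
  no_log: true
```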
ssh, ansible
13
41,837
2
https://stackoverflow.com/questions/31681144/call-ssh-copy-id-in-an-ansible-playbook-how-to-handle-password-prompt
46,411,107
Iterating over two lists in ansible
I'm new at Ansible and YAML syntax and I'm facing a simple issue: how to iterate over two lists, with the same index? Something like this:

int[] listOne;
int[] listTwo;

--- Attribute some values to the lists ----

for(int i = 0; i < 10; i++){
    int result = listOne[i] + listTwo[i];
}

In my case, I'm trying to attribute some values to the route53 module, and they are in different lists. Is there any way to do it? I just found loops that iterate over a single list or nested lists.
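The loop above can be sketched in Ansible with positional pairing (an addition; list_one and list_two are invented variable names):

```yaml
# with_together pairs elements by index: item.0 comes from the first
# list, item.1 from the second.
- debug:
    msg: "{{ item.0 }} + {{ item.1 }}"
  with_together:
    - "{{ list_one }}"
    - "{{ list_two }}"

# On newer Ansible the zip filter achieves the same with loop.
- debug:
    msg: "{{ item.0 }} + {{ item.1 }}"
  loop: "{{ list_one | zip(list_two) | list }}"
```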
ansible
13
38,054
2
https://stackoverflow.com/questions/46411107/iterating-over-two-lists-in-ansible
39,580,797
How to escape backslash and double quote in Ansible (script module)
I'm very new to Ansible (2.x) and I am having trouble using the script module and passing parameters with double quotes and backslashes. Assuming we have a set variable {{foo}} which contains a string "foo", I have a task like this:

- set_fact:
    arg: \(-name "{{foo}}" \)

- name: call shell module
  script: path/somescript.sh "{{arg}}"

My script needs the following structure of the argument in order to work:

\(-name "foo" \)

I tried several things such as:

arg: \(-name \""{{foo}}"\" \)     result: \\(-name \"foo\" \\)
arg: '\(-name \""{{foo}}"\" \)'   result: \\(-name \"foo\" \\)
arg: \\(-name \""{{foo}}"\" \\)   result: \\(-name \"foo\" \\)

Is it possible to escape backslashes and double quotes in Ansible?
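A hedged observation to test against the attempts above: in a single-quoted YAML scalar both the backslashes and the inner double quotes are literal, so no escaping should be needed, and the doubled backslashes in the reported results may only be JSON escaping in Ansible's console output rather than the stored value:

```yaml
# Sketch: single quotes keep \ and " literal in YAML; the debug task
# shows what is actually stored in the variable.
- set_fact:
    arg: '\(-name "{{ foo }}" \)'

- debug:
    var: arg
```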
ansible
13
84,387
2
https://stackoverflow.com/questions/39580797/how-to-escape-backslash-and-double-quote-in-ansible-script-module
41,475,446
ansible-playbook extra vars with a space in a value
I've got a problem with providing a value via extra vars when I run my playbook using:

ansible-playbook gitolite-docker.yml -e "GITOLITE_SSH_KEY=$(cat roles/gitolite-docker/files/john_rsa.pub)" --ask-vault-pass

Here is the extract of gitolite-docker.yml:

- name: logging admin.pub
  shell: echo "{{GITOLITE_SSH_KEY}}" > /home/ansusersu/gitoliteadmin.pub

- name: create gitolite--docker container
  docker_container:
    name: gitolite
    image: alex2357/docker-gitolite
    state: started
    ports:
      - "8081:22"
    volumes:
      - "/docker/volumes/gitoliterepositories:/home/git/repositories"
    env:
      SSH_KEY: "{{GITOLITE_SSH_KEY}}"
      KEEP_USERS_KEYS: "dummytext"
  become: yes

The problem is that I get only the first few characters, "ssh-rsa", of the SSH key:

john@john-VirtualBox:~$ sudo cat /home/ansusersu/gitoliteadmin.pub
ssh-rsa
john@john-VirtualBox:~$

I get exactly the same value in both usages of {{GITOLITE_SSH_KEY}}. In the Docker container I have exactly the same value in the log files. For Docker, a similar line works fine:

docker run -d -p 8081:22 --name gitolite -e SSH_KEY="$(cat ~/.ssh/id_rsa.pub)" -v /docker/volumes/gitoliterepositories:/home/git/repositories alex2357/docker-gitolite

Seeing that, it seems I won't be able to achieve the same behavior with ansible-playbook as with Docker, since it treats the remaining stuff after the first space as another extra var. Is there a way to make it work?
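One hedged workaround sketch: ansible-playbook splits a key=value --extra-vars string on whitespace, but the JSON form of --extra-vars preserves the whole value, spaces included (an ssh public key is a single line, so embedding it in JSON is safe):

```shell
# Sketch using JSON-formatted extra vars; same playbook and key path
# as in the question.
ansible-playbook gitolite-docker.yml \
  -e "{\"GITOLITE_SSH_KEY\": \"$(cat roles/gitolite-docker/files/john_rsa.pub)\"}" \
  --ask-vault-pass
```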
ansible
13
16,496
1
https://stackoverflow.com/questions/41475446/ansible-playbook-extra-vars-with-a-space-in-a-value
42,651,026
Ansible dnf module enable Fedora Copr repository
I want to enable a Fedora Copr repository with Ansible. More specifically I want to convert this command:

dnf copr enable ganto/lxd

Using an Ansible command module I overcome this problem, but break the task's idempotence (if run again, the role should not make any changes; changed_when: false is not an option):

- name: Enable Fedora Copr for LXD
  command: "dnf copr enable -y ganto/lxd"

Also, I tried this:

- name: Install LXD
  dnf:
    name: "{{ item }}"
    state: latest
    enablerepo: "xxx"
  with_items:
    - lxd
    - lxd-client

where I tested many variations for the option enablerepo without any success. Is that possible using the dnf Ansible module (or something else)?
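One possible way (not from the question) to keep the command but restore idempotence is a creates guard, which skips the task once the repo file dropped by "dnf copr enable" exists; the exact file name under /etc/yum.repos.d/ varies by dnf version, so the path below is an assumption to verify:

```yaml
# Sketch: run the copr command only while its repo file is absent.
# The .repo file name is a guess - check what your dnf version creates.
- name: Enable Fedora Copr for LXD
  command: dnf copr enable -y ganto/lxd
  args:
    creates: /etc/yum.repos.d/_copr_ganto-lxd.repo
```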
ansible, fedora, lxd, dnf, copr
13
7,291
3
https://stackoverflow.com/questions/42651026/ansible-dnf-module-enable-fedora-copr-repository
42,515,087
Set Ansible variable to undefined through extra-vars or inventory variable
So I have an Ansible playbook that looks like:

---
- hosts: mygroup
  tasks:
    - debug:
        msg: "{{ foo | default(inventory_hostname) }}"

My inventory file looks like:

[mygroup]
127.0.0.1

Since foo is not defined anywhere, the debug prints 127.0.0.1 as expected. But suppose my inventory file looks like:

[mygroup]
127.0.0.1 foo=null

When I run the playbook, it prints out the string null. I also tried with foo=None and it prints an empty string. How can I set the variable to null through inventory or extra-vars? This may be useful when I want to unset a variable already defined in a playbook. I am using Ansible version 2.1.1.0.
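A hedged sketch of one behavior worth testing: key=value pairs in inventory or --extra-vars always arrive as strings, while JSON-formatted extra vars keep their types, so null can reach the play as a real None. Note that default() fires only for undefined variables; its second argument makes it also cover None and empty values:

```yaml
# Sketch - run with: ansible-playbook play.yml -e '{"foo": null}'
# default(x, true) treats None/empty values as undefined as well.
- debug:
    msg: "{{ foo | default(inventory_hostname, true) }}"
```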
ansible, ansible-2.x
13
31,455
1
https://stackoverflow.com/questions/42515087/set-ansible-variable-to-undefined-through-extra-vars-or-inventory-variable
19,552,420
In Ansible, how is the environment keyword used?
I have a playbook to install PythonBrew . In order to do this, I have to modify the shell environment. Because shell steps in Ansible are not persistent, I have to prepend export PYTHONBREW_ROOT=${pythonbrew.root}; source ${pythonbrew.root}/etc/bashrc; to the beginning of each of my PythonBrew-related commands: - name: Install python binary shell: export PYTHONBREW_ROOT=${pythonbrew.root}; source ${pythonbrew.root}/etc/bashrc; pythonbrew install ${python.version} executable=/bin/bash - name: Switch to python version shell: export PYTHONBREW_ROOT=${pythonbrew.root}; source ${pythonbrew.root}/etc/bashrc; pythonbrew switch ${python.version} executable=/bin/bash I'd like to eliminate that redundancy. On the Ansible discussion group , I was referred the environment keyword. I've looked at the examples in the documentation and it's not clicking for me. To me, the environment keyword doesn't look much different than any other variable. I've looked for other examples but have only been able to find this very simple example . Can someone demonstrate how the environment keyword functions in Ansible, preferably with the code sample I've provided above?
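The `environment` keyword sets environment variables for the duration of a task (or a whole play), so the `export` can be dropped. It does not replace `source`, though — that runs shell code rather than setting plain variables — so the bashrc still has to be sourced inside the command. A sketch, with the question's old `${pythonbrew.root}` style replaced by current `{{ }}` templating and flattened variable names (`pythonbrew_root`, `python_version` are assumed names):

```yaml
- name: Install python binary
  shell: source {{ pythonbrew_root }}/etc/bashrc; pythonbrew install {{ python_version }}
  args:
    executable: /bin/bash
  environment:
    # exported into the task's environment, so no leading `export ...;` needed
    PYTHONBREW_ROOT: "{{ pythonbrew_root }}"

- name: Switch to python version
  shell: source {{ pythonbrew_root }}/etc/bashrc; pythonbrew switch {{ python_version }}
  args:
    executable: /bin/bash
  environment:
    PYTHONBREW_ROOT: "{{ pythonbrew_root }}"
```

Moving `environment:` up to the play level removes the remaining per-task repetition.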
ansible
13
12,981
2
https://stackoverflow.com/questions/19552420/in-ansible-how-is-the-environment-keyword-used
57,433,963
Ansible + Kubernetes: how to wait for a Job completion
Thanks in advance for your time that you spent reading this. I'm playing with Kubernetes and use Ansible for any interactions with my cluster. Have some playbooks that successfully deploy applications. My main ansible component I use for deployment is k8s that allow me to apply my yaml configs. I can successfully wait until deployment completes using k8s: state: present definition: config.yaml wait: yes wait_timeout: 10 But, unfortunately, the same trick doesn't work by default with Kubernetes Jobs. The module simply exits immediately that is clearly described in ansible module, that's true: For resource kinds without an implementation, wait returns immediately unless wait_condition is set. To cover such a case, module spec suggests to specify wait_condition: reason: REASON type: TYPE status: STATUS The doc also says: The possible types for a condition are specific to each resource type in Kubernetes. See the API documentation of the status field for a given resource to see possible choices. I checked API specification and found the same as stated in the following answer : the only type values are β€œComplete” and β€œFailed”, and that they may have a ”True” or ”False” status So, my QUESTION is simple: is there anyone who know how to use this wait_condition properly? Did you try it already (as for now, it's relatively new feature)? Any ideas where to look are also appreciated. UPDATE: That's a kind of workaround I use now: - name: Run Job k8s: state: present definition: job_definition.yml - name: Wait Until Job Is Done k8s_facts: name: job_name kind: Job register: job_status until: job_status.resources[0].status.active != 1 retries: 10 delay: 10 ignore_errors: yes - name: Get Final Job Status k8s_facts: name: job_name kind: Job register: job_status - fail: msg: "Job Has Been Failed!" when: job_status.resources[0].status.failed == 1 But would be better to use the proper module feature directly.
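Given that the only condition types a Job exposes are Complete and Failed, the `wait_condition` the module asks for would look like the sketch below (parameter layout per the k8s module docs; `wait_timeout` value is arbitrary):

```yaml
- name: Run Job and wait for completion
  k8s:
    state: present
    src: job_definition.yml     # file path variant of `definition`
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: Complete            # Job condition type: Complete or Failed
      status: "True"
```

One caveat worth testing: if the Job fails, the Complete condition never turns True, so this times out after `wait_timeout` instead of failing fast — the k8s_facts workaround above still handles that case more gracefully.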
kubernetes, ansible
13
11,834
4
https://stackoverflow.com/questions/57433963/ansible-kubernetes-how-to-wait-for-a-job-completion
35,597,076
ansible sudo: sorry, you must have a tty to run sudo
I need to run playbooks on Vagrant boxes and on AWS when I set up the environment with CloudFormation. In the Vagrantfile I use ansible-local and everything works fine: name: Setup Unified Catalog Webserver hosts: 127.0.0.1 connection: local become: yes become_user: root roles: generic However, when I create an instance in AWS the ansible playbook fails with the error: sudo: sorry, you must have a tty to run sudo This happens because it is run as root and it doesn't have a tty. But I don't know how to fix it without changing /etc/sudoers to allow !requiretty. Are there any flags I can set in ansible.cfg or in my CloudFormation template? "#!/bin/bash\n", "\n", " echo 'Installing Git'\n"," yum --nogpgcheck -y install git ansible htop nano wget\n", "wget [URL] -O /root/.ssh/id_rsa\n", "chmod 600 /root/.ssh/id_rsa\n", "ssh-keyscan 172.31.7.235 >> /root/.ssh/known_hosts\n", "git clone git@172.31.7.235:something/repo.git /root/repo\n", "ansible-playbook /root/env/ansible/test.yml\n
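Since the CloudFormation user-data script already runs as root, one option is simply not to escalate at all, which sidesteps requiretty without editing /etc/sudoers. A sketch — the templated `become` expression is an assumption meant to keep the same play usable from a non-root Vagrant shell too, and should be tested in both environments:

```yaml
- name: Setup Unified Catalog Webserver
  hosts: 127.0.0.1
  connection: local
  # user-data runs as root already, so sudo (and its tty check) is
  # unnecessary there; only escalate when the connecting user isn't root
  become: "{{ ansible_user_id != 'root' }}"
  roles:
    - generic
```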
ansible, sudo, aws-cloudformation
13
26,163
3
https://stackoverflow.com/questions/35597076/ansible-sudo-sorry-you-must-have-a-tty-to-run-sudo
53,988,796
Ansible - is it possible to add tags to hosts inside inventory?
As the topic says, my question is if its possible to add tags to the hosts described inside the inventory? My goal is to be able to run the ansible-playbook on specific host/group of hosts which has that specific tag e.g only on servers with tag 'Env=test and Type=test' So for example when I run the playbook: ansible-playbook -i hosts test.yml --extra-vars "Env=${test} Type=${test}" I will pass the tags in the command and it will run only on the filtered hosts. Thanks a lot! Update: Alternatively maybe doing something like in dynamic inventory? [URL] [tag_Name_staging_foo] [tag_Name_staging_bar] [staging:children] tag_Name_staging_foo tag_Name_staging_bar
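Static INI inventories have no tag concept as such, but host variables can play that role when combined with the `constructed` inventory plugin, which builds groups out of variables so `--limit` can intersect them — essentially the same trick the EC2 dynamic inventory uses for its `tag_*` groups. A sketch (the variable names `Env`/`Type` mirror the question; file names are assumptions):

```yaml
# inventory/constructed.yml -- pass this after the static inventory
plugin: constructed
keyed_groups:
  # builds groups like env_test, env_prod from the host var Env
  - key: Env
    prefix: env
  # builds groups like type_test, type_api from the host var Type
  - key: Type
    prefix: type
```

With `Env=test Type=test` set per host in the static `hosts` file, a run limited to the intersection of both groups would look like: `ansible-playbook -i hosts -i inventory/constructed.yml test.yml --limit 'env_test:&type_test'`.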
ansible, ansible-2.x, ansible-inventory
13
22,890
5
https://stackoverflow.com/questions/53988796/ansible-is-it-possible-to-add-tags-to-hosts-inside-inventory
41,028,037
How to set Pycharm with Ansible plugin to do code completion for variables?
I'm using Pycharm free community version 2016.2.3 with YAML/Ansible plugin, but I don't manage to trigger the code completion for variables. (I do know for sure it's possible.) Is there some configuration I need to set prior to that?
ansible, pycharm, yaml, code-completion
13
12,237
3
https://stackoverflow.com/questions/41028037/how-to-set-pycharm-with-ansible-plugin-to-do-code-completion-for-variables
40,127,586
Is Ansible Turing Complete?
Ansible offers many filters and conditionals. As far as I can tell; it should be possible to implement an Ansible playbook that executes a set of tasks that achieve the same outcome as a Turing Complete language. So, is it Turing Complete?
ansible, turing-complete
13
1,727
2
https://stackoverflow.com/questions/40127586/is-ansible-turing-complete
36,646,683
How do I save an ansible variable into a temporary file that is automatically removed at the end of playbook execution?
In order to perform some operations locally (not on the remote machine), I need to put the content of an ansible variable inside a temporary file. Please note that I am looking for a solution that takes care of generating the temporary file to a location where it can be written (no hardcoded names) and also that takes care of the removal of the file as we do not want to leave things behind.
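A sketch using the `tempfile` module (available since Ansible 2.3) to let the OS pick a safe writable name, `delegate_to: localhost` since the work is local, and a `block`/`always` pair so the file is removed even when an intermediate task fails. `my_variable` and the `cat` step are placeholders:

```yaml
- block:
    - name: Create a local temporary file
      tempfile:
        state: file
        suffix: .var
      register: tmp
      delegate_to: localhost

    - name: Write the variable's content into it
      copy:
        content: "{{ my_variable }}"
        dest: "{{ tmp.path }}"
      delegate_to: localhost

    - name: Do the local work that needs the file
      command: cat {{ tmp.path }}   # placeholder for the real local operation
      delegate_to: localhost

  always:
    - name: Remove the temporary file, even on failure above
      file:
        path: "{{ tmp.path }}"
        state: absent
      delegate_to: localhost
      when: tmp.path is defined
```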
ansible
13
16,001
3
https://stackoverflow.com/questions/36646683/how-do-i-save-an-ansible-variable-into-a-temporary-file-that-is-automatically-re
46,556,214
debugging with ansible: How to get stderr and stdout from failing commands to be printed respecting newlines so as to be human readable?
If I run a simple ansible playbook, I often get difficult-to-read output from failing tasks, like that below. Big problems: the linebreaks within the stdout are printed as \n, not an actual linebreak. This makes things like python tracebacks very obnoxious to read. stdout, stderr, cmd... the json blob being output contains lots of useful things, but since they are all run together on the same line it is very difficult for a human to parse. How can I get ansible to print its output in a format that I can read easily, so I can debug? Here is the yucky output: $ ansible-playbook playbooks/backUpWebsite.yml PLAY [localhost] *************************************************************** TASK [command] ***************************************************************** fatal: [localhost]: FAILED! => {"changed": true, "cmd": "python -c 'ksjfasdlkjf'", "delta": "0:00:00.037459", "end": "2017-10-03 19:58:50.525257", "failed": true, "rc": 1, "start": "2017-10-03 19:58:50.487798", "stderr": "Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\nNameError: name 'ksjfasdlkjf' is not defined", "stdout": "", "stdout_lines": [], "warnings": []} to retry, use: --limit @<snip>playbooks/backUpWebsite.retry PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=1 Here is the script that generated it: --- - hosts: localhost gather_facts: False tasks: #wrong on purpose! - shell: "python -c 'ksjfasdlkjf'" register: unobtainable
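Since Ansible 2.4 the stdout callback plugin can be swapped in ansible.cfg, and both the `debug` and `yaml` callbacks print `stdout`/`stderr` with real newlines instead of one JSON blob. A sketch of the fragment (callback names can be confirmed with `ansible-doc -t callback -l` on your version):

```ini
# ansible.cfg
[defaults]
# 'debug' splits stdout/stderr of failed tasks onto their own readable
# lines; 'yaml' renders every result as YAML with real line breaks
stdout_callback = debug
```

For registered results there is also the `stdout_lines`/`stderr_lines` pair, which a `debug: var=unobtainable.stderr_lines` task prints as a readable list.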
ansible
13
15,458
1
https://stackoverflow.com/questions/46556214/debugging-with-ansible-how-to-get-stderr-and-stdout-from-failing-commands-to-be
28,572,400
Return Values of Ansible Commands
I am trying to find the return values of Ansible commands so I can better program in Ansible Playbooks. Using stat as an example, I don't see any of the return values listed in the documentation. [URL] I am however able to find them by doing adhoc commands. Is there a better way? Perhaps they are not documented because they are OS specific in each instance. For example: ansible 12.34.56.78 -m stat -a "path=/appserver" 12.34.56.78 | success >> { "changed": false, "stat": { "atime": 1424197918.2113113, "ctime": 1423779491.431509, "dev": 64768, "exists": true, "gid": 1000, "inode": 9742, "isblk": false, "ischr": false, "isdir": true, "isfifo": false, "isgid": false, "islnk": false, "isreg": false, "issock": false, "isuid": false, "mode": "0755", "mtime": 1423585087.2470782, "nlink": 4, "pw_name": "cloud", "rgrp": true, "roth": true, "rusr": true, "size": 4096, "uid": 1000, "wgrp": false, "woth": false, "wusr": true, "xgrp": true, "xoth": true, "xusr": true } }
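Return values did eventually get documented per module (a RETURN section rendered into the web docs and shown by `ansible-doc stat`), but registering the result and inspecting the fields from the ad-hoc output above remains the most reliable way to program against them. A sketch using the fields shown in the question:

```yaml
- name: Check the app directory
  stat:
    path: /appserver
  register: appdir

- name: Fail early when it is missing or not a directory
  fail:
    msg: "/appserver is absent or not a directory"
  when: not appdir.stat.exists or not appdir.stat.isdir
```

A one-off `debug: var=appdir` task dumps the full registered structure when the exact field names are in doubt.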
ansible
13
30,721
1
https://stackoverflow.com/questions/28572400/return-values-of-ansible-commands
41,375,578
Ansible - Install package pinned to major versions
Actual package name in the repo is package-2.6.12-3.el7.x86_64 . The goal is to install a package with Ansible, to: Ensure the point release is installed, such as package-2.6 Doesn't install major releases, such as package-3.0 Updates for minor releases, such as package-2.6.13-4 The repo can update packages from time to time, but I don't know when. My thought was to install a package like this; - name: Install package yum: name: package-2.6 state: present But the task fails, because package-2.6 is not in the repo. Whereas simply package works, but it is not future proof. Update: Seems wildcards * do work, eg name: "package-2.6*" . Ensure to quote the wildcard.
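As the update notes, a quoted shell-style wildcard in `name` is enough: the yum module resolves the glob against the repo, `state: latest` keeps pulling 2.6.x minor updates, and a 3.x release never matches the pattern. A sketch:

```yaml
- name: Install/refresh the 2.6.x series only
  yum:
    name: "package-2.6*"   # quote the glob so YAML doesn't mangle it
    state: latest
```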
ansible, package
13
19,108
2
https://stackoverflow.com/questions/41375578/ansible-install-package-pinned-to-major-versions
20,575,084
Best way to always run ansible inside a virtualenv on remote machines?
Is there a better way to run ansible inside a virtualenv on the remote machines? So far the way I can see is to modify the .bashrc file, manually or with ansible itself. For example: tasks: - name: "Enable virtualenv in .bashrc" lineinfile: dest=.bashrc line="source {{ PROJECT_HOME }}/venv/bin/activate" # # Put tasks that rely on this precondition here (?) # # Optionally, disable this later on - name: "Disable virtualenv in .bashrc" lineinfile: dest=.bashrc line="source {{ PROJECT_HOME }}/venv/bin/activate" state=absent TODO: Check if the ways it could be done using ssh authorized keys: [URL]
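An alternative that avoids editing .bashrc at all: activating a virtualenv is little more than putting its `bin/` first on PATH (plus setting VIRTUAL_ENV), and the `environment` keyword can do that for a whole play without leaving anything behind. A sketch reusing the question's `PROJECT_HOME` variable:

```yaml
- hosts: appservers
  environment:
    VIRTUAL_ENV: "{{ PROJECT_HOME }}/venv"
    # venv binaries shadow the system ones for every task in this play
    PATH: "{{ PROJECT_HOME }}/venv/bin:{{ ansible_env.PATH }}"
  tasks:
    - name: Runs with the venv's python/pip first on PATH
      command: pip --version
```

Since nothing persists, there is no "disable" step to forget.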
ansible
13
11,480
1
https://stackoverflow.com/questions/20575084/best-way-to-always-run-ansible-inside-a-virtualenv-on-remote-machines
34,314,328
Ansible variable override default in another role
I'm unsure how to override variables between roles in Ansible. To simplify the setup a little, I have two roles applied to the same host. The first role defines a variable in its default/main.yml : do_some_task: yes And looks for that variable in its tasks: - name: Some Task when: do_some_task The second role overrides that in its vars/main.yml , which is supposed to take precedence over the defaults: do_some_task: no However, the task is still being run, indicating that the variable wasn't overridden. It seems that the override is scoped to the tasks of the second role. I tested that by adding a debug task to both roles: - name: Test some task debug: "msg='do_some_task = {{ do_some_task }}'" This confirms that the first role sees a different value of the variable than the second. TASK: [role1 | Test some task] ok: [myhost] => { "msg": "do_some_task = True" } ... TASK: [role2 | Test some task] ok: [myhost] => { "msg": "do_some_task = False" } The common answer to this appears to be to set the variables in the inventory or the host vars. However this isn't particularly DRY: if you have many hosts in different inventories, you'd have to set the same variables in lots of places. So is there some way of overriding a variable from another role?
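Role `vars/` do beat defaults, but they are scoped to their own role, which is exactly what the two debug tasks show. What does cross the role boundary is a role parameter: passing the variable at the point the role is applied overrides that role's default, without duplicating anything into inventories. A sketch:

```yaml
- hosts: myhost
  roles:
    - role: role2
    - role: role1
      # role param: higher precedence than role1's defaults/main.yml
      do_some_task: no
```

If role2 must stay the source of truth, another common pattern is to give the toggle a different name per role and default one to the other, e.g. `do_some_task: "{{ role2_wants_task | default(yes) }}"` in role1's defaults (variable name here is made up).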
ansible
13
18,068
3
https://stackoverflow.com/questions/34314328/ansible-variable-override-default-in-another-role
41,165,454
Ansible "postgresql_user" module "priv" parameter syntax clarification
The documentation for the postgresql_user module on how privileges for a user should be defined conflicts with itself regarding the format. The format is described as such in the options table: priv | PostgreSQL privileges string in the format: table:priv1,priv2 However, the examples given below use another format priv: "CONNECT/products:ALL" priv: "ALL/products:ALL" # Example privileges string format INSERT,UPDATE/table:SELECT/anothertable:ALL The blog post Ansible Loves PostgreSQL mentions yet another format: priv: Privileges in β€œpriv1/priv2” or table privileges in β€œtable:priv1,priv2,…” format I'm having trouble creating users with read-only access, i.e. SELECT privilege on all tables. Could someone shed some light on the correct format to use, exemplified by giving a user read-only access on all tables?
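Reading the examples together, the grammar appears to be: grants separated by `/`, where each grant is either database-level privileges on their own (e.g. `CONNECT`) or `table:priv1,priv2`. There is no built-in all-tables wildcard in `priv`, so tables are listed individually; for a true blanket grant, `GRANT SELECT ON ALL TABLES` via the `postgresql_privs` module (which supports `objs: ALL_IN_SCHEMA`) is the usual escape hatch. A sketch — database and table names are made up:

```yaml
- postgresql_user:
    name: report_reader
    db: products
    # CONNECT on the database itself, then SELECT per table
    priv: "CONNECT/orders:SELECT/customers:SELECT"
    state: present
```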
postgresql, ansible, privileges
13
3,843
4
https://stackoverflow.com/questions/41165454/ansible-postgresql-user-module-priv-parameter-syntax-clearification
29,728,530
Using Ansible postgresql_user with psycopg2 from VirtualEnv
The Ansible postgresql_user module demands a working installation of psycopg2: [URL] If this is installed in a VirtualEnv on the server, how can the Ansible module find it? Other Ansible modules seem to have explicit VirtualEnv support, so is this simply a missing feature?
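Modules run under whatever Python interpreter Ansible is told to use, so pointing `ansible_python_interpreter` at the virtualenv's python makes that venv's site-packages (and its psycopg2) visible — no explicit virtualenv support needed in the module itself. It can be scoped to just the tasks that need it. A sketch; the venv path and password variable are assumptions:

```yaml
- name: Create app DB user with psycopg2 from the venv
  postgresql_user:
    name: appuser
    password: "{{ db_password }}"
  vars:
    # module executes with this interpreter, so the venv's psycopg2 is found
    ansible_python_interpreter: /srv/app/venv/bin/python
```

Set it at host/group level instead if every module on that host should use the venv interpreter.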
postgresql, virtualenv, ansible
13
8,856
1
https://stackoverflow.com/questions/29728530/using-ansible-postgresql-user-with-psycopg2-from-virtualenv
64,836,917
Ansible playbook which uses a role defined in a collection
This is an example of an Ansible playbook I am currently playing around with: --- - hosts: all collections: - mynamespace.my_collection roles: - mynamespace.my_role1 - mynamespace.my_role2 - geerlingguy.repo-remi The mynamespace.my_collection collection is a custom collection that contains a couple of roles, namely mynamespace.my_role1 and mynamespace.my_role2 . I have a requirements.yml file as follows: --- collections: - name: git@github.com:mynamespace/my_collection.git roles: - name: geerlingguy.repo-remi version: "2.0.1" And I install the collection and roles as follows: ansible-galaxy collection install -r /home/ansible/requirements.yml --force ansible-galaxy role install -r /home/ansible/requirements.yml --force Each time I attempt to run the playbook it fails with the following error: ERROR! the role 'mynamespace.my_role1' was not found in mynamespace.my_collection:ansible.legacy:/home/ansible/roles:/home/ansible_roles:/home/ansible The error appears to be in '/home/ansible/play.yml': line 42, column 7, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: roles: - mynamespace.my_role1 ^ here For the avoidance of doubt, I have tried multiple ways of defining the roles in the playbook including mynamespace.my_collection.my_role1 (the fully qualified name of the role within the collection). I suspect I've done something wrong or misunderstood how it should work but my understanding is a collection can contain multiple roles and once that collection is installed, I should be able to call upon one or more of the roles within the collection inside my playbook to use it but it doesn't seem to be working for me. The error seems to suggest it is looking for the role inside the collection but not finding it. The collection is installed to the path /home/ansible_collections/mynamespace/my_collection and within that directory is roles/my_role1 and roles/my_role2 . 
Maybe the structure of the roles inside the collection is wrong? I'm using Ansible 2.10 on CentOS 8. Thanks for any advice! EDIT: I just wanted to expand on something I alluded to earlier. I believe the docs say the fully qualified name should be used to reference the role in the collection within the playbook. Unfortunately, this errors too: ERROR! the role 'mynamespace.my_collection.my_role1' was not found in mynamespace.my_collection:ansible.legacy:/home/ansible/roles:/home/ansible_roles:/home/ansible The error appears to be in '/home/ansible/play.yml': line 42, column 7, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: roles: - mynamespace.my_collection.my_role1 ^ here
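One thing that stands out is the install location: the collection loader searches each configured path for an `ansible_collections/` directory *underneath* it, so `/home/ansible_collections/mynamespace/my_collection` is only found if `/home` itself is on the collections path. A sketch of the ansible.cfg fragment (setting name as of Ansible 2.10; worth confirming with `ansible-config dump | grep -i collection`):

```ini
# ansible.cfg
[defaults]
# the loader appends "ansible_collections/<namespace>/<name>" itself,
# so list the parent of ansible_collections/, not the directory itself
collections_paths = /home:~/.ansible/collections
```

With the path resolvable, the fully qualified form `mynamespace.my_collection.my_role1` in `roles:` should then be found.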
ansible, ansible-role
13
9,196
2
https://stackoverflow.com/questions/64836917/ansible-playbook-which-uses-a-role-defined-in-a-collection
38,155,108
How to increase limit for open processes and files using ansible
I am setting up a MySQL server, and I was told to increase the ulimit for the number of open processes. I ran - name: "increase limit for the number of open files" shell: "ulimit -n 64000" - name: "increase limit for the number of open processes" shell: "ulimit -u 64000" in the ansible-playbook , but not only does it throw the error "Illegal option -u", the open files limit (-n) doesn't seem to get modified either. (I ran ulimit -n on the server but it stays the same.) What is the recommended way of increasing these limits, and how should I do it in Ansible? I saw the pam_limits module. Should I use it to modify nproc and nofile ? If so, which domain? Thank you.
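Running ulimit through the shell module only raises the limit for that one short-lived shell session, which is why -n looks unchanged afterwards. A persistent change is usually made in /etc/security/limits.conf, which Ansible's pam_limits module manages. A sketch, assuming MySQL runs as the mysql user (adjust the domain to your setup):

```yaml
- name: Raise open file and process limits for the mysql user
  pam_limits:
    domain: mysql            # assumption: the account MySQL runs under
    limit_type: "-"          # "-" sets both the soft and the hard limit
    limit_item: "{{ item }}"
    value: "64000"
  with_items:
    - nofile
    - nproc
```

Note that limits.conf applies to new sessions only, so the MySQL service has to be restarted to pick the new values up.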
mysql, ansible
12
25,758
2
https://stackoverflow.com/questions/38155108/how-to-increase-limit-for-open-processes-and-files-using-ansible
18,385,925
Error when running ansible-playbook
I've installed Ansible 1.2.3 on Ubuntu Precise 64. Running ansible-playbook -i ansible_hosts playbook.yml give me this error: ERROR: problem running ansible_hosts --list ([Errno 8] Exec format error) Here's the content of ansible_hosts : [development] localhost ansible_connection=local and playbook.yml : --- - hosts: development sudo: yes tasks: - name: install curl apt: pkg=curl update_cache=yes How can I make this work?
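"Exec format error" here usually means Ansible tried to *execute* the inventory file: when an inventory file carries the executable bit, Ansible treats it as a dynamic inventory script and runs it with --list. A plain chmod -x ansible_hosts fixes it; the same thing expressed as a local play (assuming the inventory sits next to the playbook) would be:

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure the static inventory is not treated as a dynamic inventory script
      file:
        path: "{{ playbook_dir }}/ansible_hosts"
        mode: "0644"   # clears the executable bit
```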
ubuntu, ansible
12
20,703
7
https://stackoverflow.com/questions/18385925/error-when-running-ansible-playbook
66,185,645
Tower: What is causing "[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details"
I am just starting to work with Ansible Tower and made a project and then a job template under that project that uses a small initial test playbook (Test.yml): --- - hosts: east01.xxxxx.com become: yes become_method: sudo tasks: - name: test shell: echo 'line one line two line three' >> /tmp/abcdef.txt and, when I try to run that playbook by clicking the "rocketship" in Tower, it looks like it is working: Identity added: /tmp/awx_35003_yehjodc1/artifacts/35003/ssh_key_data (/tmp/awx_35003_yehjodc1 /artifacts/35003/ssh_key_data) [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details PLAY [east01.xxxxx.com] ********************************************** TASK [Gathering Facts] ********************************************************* ok: [east01.xxxxx.com] TASK [test] ******************************************************************** changed: [east01.xxxxx.com] PLAY RECAP ********************************************************************* east01.xxxxx.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 and it looks like that is actually working (I can see the file on the target machine being modified when I run the playbook), but can someone tell me what is causing that WARNING: [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details ?? Also the warning says to "use -vvvv" and I was wondering where/how do I do that (since I am running this playbook under Tower)? Thanks! Jim EDIT: I just did a test, where I ran the same yaml file using ansible-playbook (command line) and it ran without that warning, so I guess that the warning is something to do with some difference between ansible-playbook and Tower?
ansible, ansible-2.x, ansible-tower
12
40,620
2
https://stackoverflow.com/questions/66185645/tower-what-is-causing-warning-invalid-characters-were-found-in-group-names
24,616,107
Ansible with_subelements default value
I have a vars definition like this: sites: - site: mysite1.com exec_init: - "command1 to exec" - "command2 to exec" - site: mysite2.com Then I have a play with the following task: - name: Execute init scripts for all sites shell: "{{item.1}}" with_subelements: - sites - exec_init when: item.0.exec_init is defined The idea here is that I will have multiple "site" definitions with dozens of other properties in my vars, and I would like to execute multiple shell commands for those sites that have "exec_init" defined. Done this way, the task is always skipped; I've tried every combination I can imagine but just can't get it to work. Is this the proper way of doing it, or am I trying to achieve something that doesn't make sense? Thanks for your help.
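The subelements lookup itself trips over entries that lack the listed subkey, before the when clause is ever evaluated. The lookup accepts a skip_missing flag for exactly this case, so a sketch keeping the structure from the question would be:

```yaml
- name: Execute init scripts for all sites that define them
  shell: "{{ item.1 }}"
  with_subelements:
    - "{{ sites }}"
    - exec_init
    - skip_missing: true   # silently skip sites without exec_init
```

With skip_missing set, mysite2.com is simply passed over and the when guard is no longer needed.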
ansible
12
10,800
5
https://stackoverflow.com/questions/24616107/ansible-with-subelements-default-value
45,237,632
Ansible w/ Docker - Show current Container state
I'm working on a little Ansible project in which I'm using Docker containers. I'll keep my question short: I want to get the state of a running Docker container! What I mean by that is that I want to get the current state of the container that Docker shows you with the "docker ps" command. Examples would be: Up Exited Restarting I want to get one of those results for a specific container, but without using the command or the shell module! KR
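If shell/command are off the table, the docker_container_info module (shipped with newer Ansible; in current releases it lives in the community.docker collection) returns the docker inspect data, including the State.Status field that docker ps summarises. A sketch, assuming a container named registry:

```yaml
- name: Inspect the registry container
  docker_container_info:
    name: registry
  register: reg_info

- name: Show the container state (e.g. running, exited, restarting)
  debug:
    msg: "{{ reg_info.container.State.Status if reg_info.exists else 'no such container' }}"
```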
docker, ansible, docker-compose, ansible-template
12
29,051
7
https://stackoverflow.com/questions/45237632/ansible-w-docker-show-current-container-state
18,050,911
How to format a variable in Ansible value
Given that Ansible processes all variables through Jinja2, and doing something like this is possible: - name: Debug sequence item value debug: msg={{ 'Item\:\ %s'|format(item) }} with_sequence: count=5 format="%02d" Which correctly interpolates the string as: ok: [server.name] => (item=01) => {"item": "01", "msg": "Item: 01"} ok: [server.name] => (item=02) => {"item": "02", "msg": "Item: 02"} ok: [server.name] => (item=03) => {"item": "03", "msg": "Item: 03"} ok: [server.name] => (item=04) => {"item": "04", "msg": "Item: 04"} ok: [server.name] => (item=05) => {"item": "05", "msg": "Item: 05"} Why then doesn't this work: - name: Debug sequence item value debug: msg={{ 'Item\:\ %02d'|format(int(item)) }} with_sequence: count=5 This apparently causes some sort of parsing issue which results in our desired string being rendered verbose: ok: [server.name] => (item=01) => {"item": "01", "msg": "{{Item\\:\\ %02d|format(int(item))}}"} Noting that in the above example item is a string because the default format of with_sequence is %d , and format() doesn't cast the value of item to the format required by the string interpolation %02d , hence the need to cast with int() . Is this a bug or am I missing something?
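Not a bug: in Jinja2, int is exposed as a *filter*, not a global function, so int(item) is undefined and the whole expression falls through unrendered. Piping the item through the filter instead should interpolate as expected:

```yaml
- name: Debug sequence item value
  debug:
    msg: "{{ 'Item: %02d' | format(item | int) }}"
  with_sequence: count=5
```

Using the block form of debug also avoids the backslash-escaping of ':' and spaces that the key=value form required.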
jinja2, ansible
12
32,997
1
https://stackoverflow.com/questions/18050911/how-to-format-a-variable-in-ansible-value
28,231,875
ansible jinja2 concatenate IP addresses
I would like to concatenate a group of IPs into a string, for example ip1:2181,ip2:2181,ip3:2181,etc {% for host in groups['zookeeper'] %} {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }} {% endfor %} I have the above code, but can't quite figure out how to concatenate the results into a string. Searching for "Jinja2 concatenate" doesn't give me the info I need.
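One way to avoid the for loop entirely is to map each hostname through hostvars with the extract filter and then join. A sketch (extract needs a reasonably recent Ansible; the :2181 suffix is appended with regex_replace):

```yaml
- set_fact:
    zk_connect: >-
      {{ groups['zookeeper']
         | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address'])
         | map('regex_replace', '$', ':2181')
         | join(',') }}
```

zk_connect then holds a single string such as ip1:2181,ip2:2181,ip3:2181.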
jinja2, ansible
12
10,925
3
https://stackoverflow.com/questions/28231875/ansible-jinja2-concatenate-ip-addresses
54,030,680
How to create a 'null' default in Ansible
I want 'lucy' to follow the user module creators' default behaviour which is to create and use a group matching the user name 'lucy'. However for 'frank' I want the primary group to be an existing one; gid 1003. So my hash looks like this: lucy: comment: dog frank: comment: cat group: 1003 And my task looks like this: - name: Set up local unix user accounts user: name: "{{ item.key }}" comment: "{{ item.value.comment }}" group: "{{ item.value.group | default(undef) }}" loop: "{{ users|dict2items }}" This doesn't work, as undef is not recognised. Nor is anything else I can think of. 'null', 'None' etc. all fail. '' creates an empty string which is not right either. I can't find out how to do it. Any ideas?
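Ansible has a special variable for exactly this case: omit. A parameter that defaults to omit is dropped from the module invocation entirely, so for lucy the user module falls back to its own default group behaviour, while frank still gets gid 1003:

```yaml
- name: Set up local unix user accounts
  user:
    name: "{{ item.key }}"
    comment: "{{ item.value.comment }}"
    group: "{{ item.value.group | default(omit) }}"
  loop: "{{ users | dict2items }}"
```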
ansible, jinja2
12
29,469
1
https://stackoverflow.com/questions/54030680/how-to-create-a-null-default-in-ansible
45,269,225
Ansible playbook fails to lock apt
I took over a project that is running on Ansible for server provisioning and management. I'm fairly new to Ansible but thanks to the good documentation I'm getting my head around it. Still I'm having an error which has the following output: failed: [build] (item=[u'software-properties-common', u'python-pycurl', u'openssh-server', u'ufw', u'unattended-upgrades', u'vim', u'curl', u'git', u'ntp']) => {"failed": true, "item": ["software-properties-common", "python-pycurl", "openssh-server", "ufw", "unattended-upgrades", "vim", "curl", "git", "ntp"], "msg": "Failed to lock apt for exclusive operation"} The playbook is run with sudo: yes so I don't understand why I'm getting this error (which looks like a permission error). Any idea how to trace this down? - name: "Install very important packages" apt: pkg={{ item }} update_cache=yes state=present with_items: - software-properties-common # for apt repository management - python-pycurl # for apt repository management (Ansible support) - openssh-server - ufw - unattended-upgrades - vim - curl - git - ntp playbook: - hosts: build.url.com sudo: yes roles: - { role: postgresql, tags: postgresql } - { role: ruby, tags: ruby } - { role: build, tags: build }
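"Failed to lock apt" is usually not a permissions problem but lock contention: something else (often apt-daily or the unattended-upgrades this playbook itself installs) holds the dpkg/apt lock when the task runs. One hedged workaround is to retry until the lock is free (until ... is success needs a reasonably recent Ansible; on older releases the spelling was apt_status | success):

```yaml
- name: "Install very important packages"
  apt:
    pkg: "{{ item }}"
    update_cache: yes
    state: present
  with_items:
    - software-properties-common
    - python-pycurl
    - openssh-server
    - ufw
    - unattended-upgrades
    - vim
    - curl
    - git
    - ntp
  register: apt_status
  retries: 10
  delay: 15
  until: apt_status is success
```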
ansible, ansible-2.x
12
12,611
4
https://stackoverflow.com/questions/45269225/ansible-playbook-fails-to-lock-apt
40,115,323
Calculate set difference using jinja2 (in ansible)
I have two lists of strings in my ansible playbook, and I'm trying to find the elements in list A that aren't in list B - a set difference. However, I don't seem to be able to access the python set data structure. Here's what I was trying to do: - set_fact: difference: "{{ (set(listA) - set(listB)).pop() }}" But I get an error saying 'set' is undefined . Makes sense to me since it's not a variable but I don't know what else to do. How can I calculate the set difference of these two lists? Is it impossible with the stock jinja functionality in ansible?
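Ansible ships set-theory filters on top of stock Jinja2, so there is no need to reach for Python's set type. difference(listB) keeps the elements of listA that are not in listB, and first stands in for the .pop():

```yaml
- set_fact:
    difference: "{{ (listA | difference(listB)) | first }}"
```

There are companion filters (intersect, union, symmetric_difference) for the other set operations.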
python, ansible, jinja2
12
19,020
2
https://stackoverflow.com/questions/40115323/calculate-set-difference-using-jinja2-in-ansible
39,394,333
How can I tell which init system Ansible runs when I use the "service" module?
From the Ansible documentation , the service module: Controls services on remote hosts. Supported init systems include BSD init, OpenRC, SysV, Solaris SMF, systemd, upstart. For a given machine, how can I tell which init system Ansible is using? A system may have init and systemd on it, for example.
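The service module decides based on the ansible_service_mgr fact gathered from each host (e.g. systemd, upstart, sysvinit), so printing that fact shows which init system will be used:

```yaml
- name: Show which init system the service module will use
  debug:
    var: ansible_service_mgr
```

The same fact can be checked ad hoc with ansible HOST -m setup -a 'filter=ansible_service_mgr'.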
ansible, init
12
8,535
2
https://stackoverflow.com/questions/39394333/how-can-i-tell-which-init-system-ansible-runs-when-i-use-the-service-module
41,817,641
How to remove item from Ansible list?
I need to remove an item from a list: "my_list_one": [ "item1", "item2", "item3" ] } I need to remove the item which includes the string "2". As a result, I need the list to be "my_list_one": [ "item1", "item3" ] } How can I do that?
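A sketch using set_fact with Jinja2's reject filter and Ansible's search test, which drops every element matching the substring:

```yaml
- set_fact:
    my_list_one: "{{ my_list_one | reject('search', '2') | list }}"

# or, when the exact element is known, the set-theory filter:
- set_fact:
    my_list_one: "{{ my_list_one | difference(['item2']) }}"
```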
ansible, ansible-2.x
12
25,637
2
https://stackoverflow.com/questions/41817641/how-to-remove-item-from-ansible-list
78,990,297
Ansible yum throwing future feature annotations is not defined
I have a highly used playbook which has a simple first task: yum . Suddenly, ever since I upgraded macOS, the yum module stopped working. Example: - name: Install git become: yes yum: name : git state: present Gives: >>> print(a['module_stderr']) OpenSSH_9.7p1, LibreSSL 3.3.6 debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files debug1: /etc/ssh/ssh_config line 54: Applying options for * debug2: resolve_canonicalize: hostname x.x.x.x is address debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling debug1: auto-mux: Trying existing master at '/tmp/ansible-ssh-x.x.x.x-22-mysql' debug2: fd 3 setting O_NONBLOCK debug2: mux_client_hello_exchange: master version 4 debug1: mux_client_request_session: master session id: 2 Traceback (most recent call last): File "<stdin>", line 12, in <module> File "<frozen importlib._bootstrap>", line 971, in _find_and_load rr Traceback (most recent call last):ap>", line 951, in _find_and_load_unlocked File "<stdin>", line 1, in <module>", line 894, in _find_spec NameError: name 'false' is not defined. Did you mean: 'False'? find_spec >>> le "<frozen importlib._bootstrap_external>", line 1131, in _get_spec >>> le "<frozen importlib._bootstrap_external>", line 1112, in _legacy_get_spec >>> or File "<frozen Fimportlib._bootstrap>", line 441, in spec_from_loader rn >>> le "<frozen importlib._bootstrap_external>", line 544, in spec_from_file_location >>> le "/tmp/ansible_ansible.legacy.dnf_payload_c428fwbu/ansible_ansible.legacy.dnf_payload.zip/ansible/module_utils/basic.py", line 5 >>> SyntaxError: future feature annotations is not defined The error SyntaxError: future feature annotations is not defined is usually related to an old version of Python, but my remote server has Python 3.9. To verify that, I also set it in my inventory and printed the ansible_facts to make sure. 
This error occurs across all of my servers; they haven't changed, only my macOS version (Sonoma) has. I tried use_backend: yum/dnf/etc ... all the values I could. Anyone have a clue?
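from __future__ import annotations only exists on Python ≥ 3.7, and recent ansible-core releases (the 2.16+/2.17 line that a macOS/Homebrew upgrade will pull in) ship module payloads that rely on it. So the error typically means the modules are being executed by an interpreter older than 3.7 on the target, often a leftover /usr/bin/python that interpreter discovery picked rather than the Python 3.9 you verified. One hedged fix is to pin the interpreter explicitly in the inventory (the path below is an assumption; point it at your real 3.9 binary):

```yaml
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3.9   # assumption: adjust to the actual path
```

Downgrading ansible-core on the control node below 2.17 is the other common workaround when targets cannot provide Python ≥ 3.7.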
ansible, yum
12
11,417
2
https://stackoverflow.com/questions/78990297/ansible-yum-throwing-future-feature-annotations-is-not-defined
56,478,867
How do I check whether a given directory is empty with Ansible?
I am trying to implement a role that checks whether a given directory is empty before proceeding with the rest of the playbook. I have tried this code but I am getting an error and I am not sure about the correct implementation. - name: Check if d folder is empty before proceeding find: paths: c/d/ patterns: "*.*" register: filesFound - fail: msg: The d folder is not empty. when: filesFound.matched > 0 - debug: msg: "The d folder is empty. Proceeding." This is the error that I am getting: fatal FAILED! => {"changed": false, "module_stderr": "Exception calling \"Create\" with \"1\" argument(s): \"At line:4 char:21 def _ansiballz_main(): An expression was expected after '('. At line:12 char:27 except (AttributeError, OSError): Missing argument in parameter list. At line:14 char:7 if scriptdir is not None: Missing '(' after 'if' in if statement. At line:21 char:7 if sys.version_info < (3,): Missing '(' after 'if' in if statement. At line:21 char:30 if sys.version_info < (3,): Missing expression after ','. At line:21 char:25 if sys.version_info < (3,): The '<' operator is reserved for future use. At line:23 char:32 MOD_DESC = ('.py', 'U', imp.PY_SOURCE) Missing expression after ','. At line:23 char:33 MOD_DESC = ('.py', 'U', imp.PY_SOURCE) Unexpected token 'imp.PY_SOURCE' in expression or statement. At line:23 char:32 MOD_DESC = ('.py', 'U', imp.PY_SOURCE) Missing closing ')' in expression. At line:23 char:46 MOD_DESC = ('.py', 'U', imp.PY_SOURCE) Unexpected token ')' in expression or statement. Not all parse errors were reported. Correct the reported errors and try again.\" At line:6 char:1 $exec_wrapper = [ScriptBlock]::Create($split_parts[0]) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ CategoryInfo : NotSpecified: (:) [], MethodInvocationException\r\n + FullyQualifiedErrorId : ParseException\r\n \r\nThe expression after '&' in a pipeline element produced an object that was not valid. 
It must result in a command name, a script block, or a CommandInfo object. At line:7 char:2 &$exec_wrapper ~~~~~~~~~~~~~\ + CategoryInfo : InvalidOperation: (:) [], RuntimeException + FullyQualifiedErrorId : BadExpression ", "module_stdout": "", "msg": "MODULE FAILURE See stdout/stderr for the exact error", "rc": 1}
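The traceback is PowerShell choking on a Python module payload, which suggests the target is a Windows host; find is a Python/POSIX module, so its Windows counterpart win_find is needed. A sketch (the path C:\c\d is a guess at what c/d/ was meant to be; adjust it):

```yaml
- name: Check if the d folder is empty before proceeding
  win_find:
    paths: 'C:\c\d'
  register: filesFound

- fail:
    msg: The d folder is not empty.
  when: filesFound.matched > 0

- debug:
    msg: "The d folder is empty. Proceeding."
```

Note that dropping the patterns: "*.*" glob also catches extensionless files, which the original pattern would have missed.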
ansible, yaml
12
23,668
3
https://stackoverflow.com/questions/56478867/how-do-i-check-whether-a-given-directory-is-empty-with-ansible
38,181,433
Ansible cannot import docker-py even though it is installed
I checked this post and followed the fix in both answers and neither worked. I'm opening a new post partly because of that and partly because I'm getting a slightly different error even though the problem might be the same. Ansible host: $ ansible --version ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides Destination client myserver: $ pip list | egrep 'six|docker|websocket_client' docker-py (1.2.3) six (1.10.0) test.yml: --- - hosts: myserver remote_user: root tasks: - name: stop any running docker registries docker_container: name: registry state: stopped ... Ansible server (ansible-playbook aliased to ap): $ ap -vvvv test.yml The output: (probably extraneous output, snipped): fatal: [myserver]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_args": { "api_version": null, "blkio_weight": null, "cacert_path": null, "capabilities": null, "cert_path": null, "command": null, "cpu_period": null, "cpu_quota": null, "cpu_shares": null, "cpuset_cpus": null, "cpuset_mems": null, "debug": false, "detach": true, "devices": null, "dns_opts": null, "dns_search_domains": null, "dns_servers": null, "docker_host": null, "entrypoint": null, "env": null, "etc_hosts": null, "exposed_ports": null, "filter_logger": false, "force_kill": false, "groups": null, "hostname": null, "image": null, "interactive": false, "ipc_mode": null, "keep_volumes": true, "kernel_memory": null, "key_path": null, "kill_signal": null, "labels": null, "links": null, "log_driver": "json-file", "log_options": null, "mac_address": null, "memory": "0", "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "name": "registry", "network_mode": null, "networks": null, "oom_killer": null, "paused": false, "pid_mode": null, "privileged": false, "published_ports": null, "pull": false, "read_only": false, "recreate": false, "restart": false, "restart_policy": null, "restart_retries": 0, "security_opts": null, 
"shm_size": null, "ssl_version": null, "state": "stopped", "stop_signal": null, "stop_timeout": null, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null, "trust_image_content": false, "tty": false, "ulimits": null, "user": null, "uts": null, "volume_driver": null, "volumes": null, "volumes_from": null }, "module_name": "docker_container" }, "msg": (the pertinent error): "Failed to import docker-py - cannot import name NotFound. Try pip install docker-py"} I get the same error when I downgrade the docker-py module to 1.1.0 as per the first answer in the referenced post. I also tried to chmod the directories and it made no difference: (/usr/lib/python2.7/site-packages) myserver$ ls -lad docker* drwxr-xr-x. 6 root root 4096 Jul 4 10:57 docker/ drwxr-xr-x. 2 root root 4096 Jul 4 10:57 docker_py-1.2.3-py2.7.egg-info/ from chmod -R go+rx docker* . Has anyone seen this before? I have tried using the pip ansible module to install the modules and then after removing them manually, reinstalled them manually as in the referenced post. I'm also using 2.1.0.0. as you can see, which was supposed to fix this issue.
python, docker, pip, ansible
12
30,626
8
https://stackoverflow.com/questions/38181433/ansible-cannot-import-docker-py-even-though-it-is-installed
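The "cannot import name NotFound" error above typically means the installed docker-py predates the `NotFound` exception that the Ansible 2.1 `docker_container` module imports; docker-py 1.2.3 (and the 1.1.0 downgrade tried in the question) are both older than the module's documented minimum of 1.7.0. A hedged sketch of a pip task that upgrades it on the target — the version pin is an assumption, not taken from the question:

```yaml
- name: Upgrade docker-py so docker.errors.NotFound is importable
  pip:
    name: docker-py
    version: "1.7.0"   # assumed minimum for Ansible 2.1's docker_container module
    state: present
```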
63,680,554
Ansible - install python3-apt package
Using Ubuntu 18.04, Ansible 2.9, Python 3.6.9, with python3-apt installed. On a basic ansible command ansible -b all -m apt -a "name=apache2 state=latest" I get the error: FAILED! => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python" }, "changed": false, "msg": "Could not import python modules: apt, apt_pkg. Please install python3-apt package." } $ sudo apt-get install python3-apt $ ansible --version ansible 2.9.12 python version = 3.6.9 $ python --version Python 3.7.6
python-3.x, ansible, ubuntu-18.04
12
31,593
4
https://stackoverflow.com/questions/63680554/ansible-install-python3-apt-package
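In the error output above, Ansible discovered `/usr/bin/python` (Python 2) as the remote interpreter, while `python3-apt` installs bindings for Python 3 only — so the apt module cannot import `apt_pkg`. A hedged sketch that pins the interpreter (the path is an assumption for a stock Ubuntu 18.04 host; it can also be set per host or group in the inventory):

```yaml
- hosts: all
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python3   # match the Python that python3-apt targets
  tasks:
    - name: Install Apache via the Python 3 apt bindings
      apt:
        name: apache2
        state: latest
```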
71,578,834
Using Block in Handler - Ansible
I am writing a handler for an Ansible role to stop and start docker . The stop is written as follows in handlers/main.yml : - name: stop docker block: - name: stop docker (Debian based) block: - name: stop service docker on debian, if running systemd: name=docker state=stopped - name: stop service docker.socket on debian, if running systemd: name=docker.socket state=stopped when: ansible_pkg_mgr == "apt" - name: stop docker (CentOS based) block: - name: stop service docker on CentOS, if running service: name: docker state: stopped - name: stop service docker.socket on CentOS, if running service: name: docker state: stopped when: ansible_pkg_mgr == "yum" Then in my tasks/main file, I'm calling stop docker --- - name: test command: echo "Stopping docker" notify: - stop docker The error I'm receiving is ERROR! Unexpected Exception, this is probably a bug: 'Block' object has no attribute 'notified_hosts' If I run this as a task in a playbook it works. Is there a way to use block in an Ansible handler?
ansible
12
11,990
3
https://stackoverflow.com/questions/71578834/using-block-in-handler-ansible
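Handlers, at least in the versions where this exception appears, cannot be `block`s. One common workaround, sketched here under the assumption that Ansible is 2.2 or newer, is flat handlers that share a topic via `listen`; since the `service` module abstracts over both apt- and yum-based systems, the `when` split may not even be needed:

```yaml
# handlers/main.yml -- flat handlers, no block
- name: stop service docker
  service:
    name: docker
    state: stopped
  listen: stop docker

- name: stop service docker.socket
  service:
    name: docker.socket
    state: stopped
  listen: stop docker
```

Notifying `stop docker` from a task then triggers both handlers in order.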
43,903,134
Ansible: Accumulate output across multiple hosts on task run
I have the following playbook - hosts: all gather_facts: False tasks: - name: Check status of applications shell: somecommand register: result changed_when: False always_run: yes After this task, I want to run a mail task that will mail the accumulated output of all the commands for the above task registered in the variable result . As of right now, when I try and do this, I get mailed for every single host. Is there some way to accumulate the output across multiple hosts and register that to a variable?
ansible
12
12,131
2
https://stackoverflow.com/questions/43903134/ansible-accumulate-output-across-multiple-hosts-on-task-run
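Since every host registers its own copy of `result`, one pattern is to aggregate through `hostvars` in a follow-up task limited with `run_once`, so the mail goes out a single time. A hedged sketch — the recipient address is an assumption, and `ansible_play_hosts` is the newer name for what older releases call `play_hosts`:

```yaml
- name: Mail the accumulated output once for all hosts
  run_once: true
  mail:
    to: ops@example.com          # assumed recipient
    subject: Application status
    body: |
      {% for host in ansible_play_hosts %}
      {{ host }}: {{ hostvars[host].result.stdout | default('no output') }}
      {% endfor %}
```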
41,263,933
When to use from_json filter in Ansible?
When should I use the from_json filter in Ansible? I found out that using it sometimes has and sometimes have no effect. Please consider the following example which illustrates the inconsistency I am getting. Included in reverse order are: the questions - expected result - actual result - the playbook - the data. The data is taken from this question and the playbook is based on this answer . The question(s): Why storing the left part (before json_query ) of the following expression in a variable and then using json_query on the variable causes the expression to be evaluated differently? "{{ lookup('file','test.json') | json_query(query) }}" Why does adding from_json filter alter the results (but does not if processing a variable): "{{ lookup('file','test.json') | from_json | json_query(query) }}" Expected result: Last four tasks should give the same result. Alternatively, last two tasks should give the same result as previous two tasks. Actual result (last four tasks only): One task result differs. TASK [This query is run against lookup value with from_json stored in a variable] *** ok: [localhost] => { "msg": [ 678 ] } TASK [This query is run against lookup value without from_json stored in a variable] *** ok: [localhost] => { "msg": [ 678 ] } TASK [This query is run directly against lookup value with from_json] ********** ok: [localhost] => { "msg": [ 678 ] } TASK [This query is run directly against lookup value without from_json - the result is empty - why?] 
*** ok: [localhost] => { "msg": "" } The playbook: --- - hosts: localhost gather_facts: no connection: local tasks: - set_fact: from_lookup_with_from_json: "{{ lookup('file','test.json') | from_json }}" - set_fact: from_lookup_without_from_json: "{{ lookup('file','test.json') }}" - name: Save the lookup value stored in a variable in a file for comparison copy: content="{{ from_lookup_with_from_json }}" dest=./from_lookup_with_from_json.txt - name: Save the lookup value stored in a variable in a file for comparison (they are the same) copy: content="{{ from_lookup_without_from_json }}" dest=./from_lookup_without_from_json.txt - name: This query is run against lookup value with from_json stored in a variable debug: msg="{{ from_lookup_with_from_json | json_query(query) }}" vars: query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id" - name: This query is run against lookup value without from_json stored in a variable debug: msg="{{ from_lookup_without_from_json | json_query(query) }}" vars: query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id" - name: This query is run directly against lookup value with from_json debug: msg="{{ lookup('file','test.json') | from_json | json_query(query) }}" vars: query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id" - name: This query is run directly against lookup value without from_json - the result is empty - why? debug: msg="{{ lookup('file','test.json') | json_query(query) }}" vars: query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id" The data ( test.json ): { "Foods" : [ { "Id": 456 , "Tags": [ {"Key":"For", "Value":"Heffalump"} , {"Key":"Purpose", "Value":"Food"} ] } , { "Id": 678 , "Tags": [ {"Key":"For", "Value":"Tigger"} , {"Key":"Purpose", "Value":"Food"} ] } , { "Id": 911 , "Tags": [ {"Key":"For", "Value":"Roo"} , {"Key":"Purpose", "Value":"Food"} ] } ] }
ansible
12
30,988
1
https://stackoverflow.com/questions/41263933/when-to-use-from-json-filter-in-ansible
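The asymmetry in the question above comes from templating: `lookup('file', ...)` returns a plain string, and `json_query` against a string silently yields nothing. When the string is first stored with `set_fact`, referencing the variable goes through another templating pass that (in this Ansible version) auto-parses JSON-looking strings into a dict, which is why the variable works even without `from_json`. Making the parse explicit removes the ambiguity; a sketch reusing the question's own query:

```yaml
- name: Always parse before querying, regardless of where the string came from
  debug:
    msg: "{{ lookup('file', 'test.json') | from_json | json_query(query) }}"
  vars:
    query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
```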
33,438,617
Read Ansible variables of another group
Here is an example of inventory file: [web_servers] web_server-1 ansible_ssh_host=xxx ansible_ssh_user=yyy [ops_servers] ops_server-1 ansible_ssh_host=xxx ansible_ssh_user=zzz Furthermore, web_servers group has specific vars in group_vars/web_servers : tomcat_jmx_port: 123456 How can I access the tomcat_jmx_port var when dealing with ops_servers ? Some will probably say I need a common ancestor group (like all ) to put common vars in, but this is just an example; in real life there are many vars I want to access from ops_servers , and I want to keep things clear, so tomcat_jmx_port has to stay in the web_servers group_vars file. In fact, I need a kind of local lookup. Any idea? Thanks for your help.
ansible
12
15,161
2
https://stackoverflow.com/questions/33438617/read-ansible-variables-of-another-group
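Group vars are attached to the hosts in the group, so they can be reached from any play through `hostvars` of some member of that group. A hedged sketch, assuming `web_servers` contains at least one host:

```yaml
- name: Read a web_servers variable while running against ops_servers
  debug:
    msg: "{{ hostvars[groups['web_servers'][0]]['tomcat_jmx_port'] }}"
```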
31,969,872
why ansible always replaces double quotes with single quotes in templates?
I am trying to generate Dockerfiles with Ansible template - see the role source and the template in Ansible Galaxy and Github I need to genarate a standard Dockerfile line like: ... VOLUME ["/etc/postgresql/9.4"] ... However, when I put this in the input file: ... instruction: CMD value: "[\"/etc/postgresql/{{postgresql_version}}\"]" ... It ends up rendered like: ... VOLUME ['/etc/postgresql/9.4'] ... and I lose the " (which renders the Dockerfiles useless) Any help ? How can I convince Jinja to not substitute " with ' ? I tried \" , |safe filter, even {% raw %} - it just keeps doing it! Update: Here is how to reproduce the issue: Go get the peruncs.docker role from galaxy.ansible.com or Github (link is given above) Write up a simple playbook (say demo.yml ) with the below content and run: ansible-playbook -v demo.yml . The -v option will allow you to see the temp directory where the generated Dockerfile goes with the broken content, so you can examine it. Generating the docker image is not important to succeed, just try to get the Dockerfile right. - name: Build docker image hosts: localhost vars: - somevar: whatever - image_tag: "blabla/booboo" - docker_copy_files: [] - docker_file_content: - instruction: CMD value: '["/usr/bin/runit", "{{somevar}}"]' roles: - peruncs.docker Thanks in advance!
docker, jinja2, ansible
12
16,707
1
https://stackoverflow.com/questions/31969872/why-ansible-always-replaces-double-quotes-with-single-quotes-in-templates
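When Jinja2 renders a Python list, it uses the list's repr, which quotes strings with single quotes — that is why the escaped double quotes disappear. Serializing the value explicitly usually fixes it; a hedged sketch using the `to_json` filter (the `~` concatenation is used so a numeric `postgresql_version` also works):

```yaml
docker_file_content:
  - instruction: VOLUME
    value: "{{ ['/etc/postgresql/' ~ postgresql_version] | to_json }}"
    # renders as ["/etc/postgresql/9.4"], with double quotes preserved
```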
55,621,257
Ansible fileglob: unable to find ... in expected paths
I am trying to use ansible to delete all of the files within a directory while keeping the directory. To that end, I'm using the with_fileglob key on a task to get all of the files out of that directory as item variables. I have created a minimum example that shows my issue here: Vagrantfile: Vagrant.configure("2") do |config| config.vm.box = "centos/7" config.vm.provision :ansible do |ansible| ansible.limit = "all" ansible.playbook = "local.yml" end end local.yml: - name: Test hosts: all become: true tasks: - name: Test debug debug: msg: "{{ item }}" with_fileglob: - "/vagrant/*" I expect to get a debug message for each file in the /vagrant directory - since this is the directory synced with the VM via Vagrant, I should get a message for the Vagrantfile, and for local.yml. Instead, I get the following confusing warning: PLAY [Test] ******************************************************************** TASK [Gathering Facts] ********************************************************* ok: [default] TASK [Test debug] ************************************************************** [WARNING]: Unable to find '/vagrant' in expected paths (use -vvvvv to see paths) PLAY RECAP ********************************************************************* default : ok=1 changed=0 unreachable=0 failed=0 What expected paths are being referred to here? I have tried this with multiple fileglobs, and they all fail in this way, what am I missing?
ansible, vagrant
12
13,373
2
https://stackoverflow.com/questions/55621257/ansible-fileglob-unable-to-find-in-expected-paths
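`with_fileglob` — like all lookups — runs on the control machine, so `/vagrant` is searched on the host running `ansible-playbook`, not inside the VM; hence the "expected paths" warning. To enumerate files on the remote host, the `find` module is the usual tool. A hedged sketch:

```yaml
- name: List files inside the VM (lookups only see the controller)
  find:
    paths: /vagrant
  register: vagrant_files

- name: Delete each file but keep the directory
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ vagrant_files.files }}"
```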
43,283,100
How to loop in Ansible $var number of times?
I want to run a loop in Ansible the number of times which is defined in a variable. Is this possible somehow? Imagine a list of servers and we want to create some numbered files on each server. These values are defined in vars.yml: server_list: server1: name: server1 os: Linux num_files: 3 server2: name: server2 os: Linux num_files: 2 The output I desire is that the files /tmp/1 , /tmp/2 and /tmp/3 are created on server1, /tmp/1 and /tmp/2 are created on server2. I have tried to write a playbook using with_nested , with_dict and with_subelements but I can't seem to find any way to to this: - hosts: "{{ target }}" tasks: - name: Load vars include_vars: vars.yml - name: Create files command: touch /tmp/{{ loop_index? }} with_dict: {{ server_list[target] }} loop_control: loop_var: {{ item.value.num_files }} If I needed to create 50 files on each server I can see how I could do this if I were to have a list variable for each server with 50 items in it list which is simply the numbers 1 to 50, but that would be a self defeating use of Ansible.
loops, dictionary, ansible, nested-loops
12
26,091
2
https://stackoverflow.com/questions/43283100/how-to-loop-in-ansible-var-number-of-times
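A counter-style loop can be had with `with_sequence`, taking the end value from the per-host variable — no hand-written 1..50 lists needed. A hedged sketch, assuming the keys in `server_list` match `inventory_hostname` so each server picks up its own `num_files`:

```yaml
- name: Create /tmp/1 .. /tmp/num_files on each server
  file:
    path: "/tmp/{{ item }}"
    state: touch
  with_sequence: start=1 end={{ server_list[inventory_hostname].num_files }}
```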
34,862,768
Ansible with_items keeps overwriting last line of loop
This is my playbook. Pretty simple. The problem is with the "with_items". It iterates over all the items, but, it only writes the last item to the crontab file. I think it is overwriting it. Why is this happening? - name: Create cron jobs to send emails cron: name="Send emails" state=present special_time=daily job="/home/testuser/deployments/{{ item }}/artisan --env={{ item }} send:healthemail" with_items: - london - toronto - vancouver
cron, ansible
12
3,746
1
https://stackoverflow.com/questions/34862768/ansible-with-items-keeps-overwriting-last-line-of-loop
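The `cron` module uses `name` as the unique key for a crontab entry, so three iterations with the identical name "Send emails" each replace the previous entry — only the last item survives. Making the name unique per item keeps all three; a sketch:

```yaml
- name: Create one cron entry per deployment
  cron:
    name: "Send emails ({{ item }})"   # unique per item, so entries don't overwrite each other
    state: present
    special_time: daily
    job: "/home/testuser/deployments/{{ item }}/artisan --env={{ item }} send:healthemail"
  with_items:
    - london
    - toronto
    - vancouver
```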
30,350,881
Centos7 docker-py doesn't seem to be installed
I installed Centos7 minimal and then: ansible, docker, pip and using pip I installed docker-py. Versions: - Docker version 1.6.0, build 8aae715/1.6.0 - ansible 1.9.1 - docker_py-1.2.2 Trying to run a playbook, for example - name: redis container docker: name: myredis image: redis state: started i get msg: docker-py doesn't seem to be installed, but is required for the Ansible Docker module. I can't see the problem. Is it the CentOS, docker and ansible version? PS: I disabled the firewalld and SELinux Any ideas? Thanks
docker, ansible, centos7
12
14,374
4
https://stackoverflow.com/questions/30350881/centos7-docker-py-doesnt-seem-to-be-installed
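A frequent cause of this message is an interpreter mismatch: the Ansible docker module imports docker-py with whatever Python it runs under, which may not be the interpreter pip installed into. A hedged sketch for diagnosing and fixing that (none of it is taken from the question itself; an undefined `ansible_python_interpreter` just means the default `/usr/bin/python`):

```yaml
- name: Show which Python Ansible modules run with (undefined means the default)
  debug:
    var: ansible_python_interpreter

- name: Install docker-py for that same interpreter
  pip:
    name: docker-py
```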
42,653,655
Ansible - ignore_errors WHEN
Ansible 2.0.4.0 There are about three tasks which randomly fail. The output of the failure is: OSError: [Errno 32] Broken pipe fatal: [machine1]: FAILED! => {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""} Is it possible to ignore the error if Errno 32 is in the output of the error? - name: This task sometimes fails shell: fail_me! ignore_errors: "{{ when_errno32 }}" I'm aware this is a workaround. Solving the 'real' problem could take up way more time.
ansible
12
29,611
2
https://stackoverflow.com/questions/42653655/ansible-ignore-errors-when
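`ignore_errors` cannot inspect the failure, but a similar effect can be built from an unconditional `ignore_errors: true` plus a follow-up `fail` that re-raises everything except the known EPIPE crash. A hedged sketch in Ansible 2.0-era filter syntax — whether the exception text actually lands in `result.msg` for a module-level crash like this one may vary, so treat the condition as an assumption:

```yaml
- name: This task sometimes fails
  shell: fail_me!
  register: result
  ignore_errors: true

- name: Re-raise unless it was the known Errno 32 crash
  fail:
    msg: "Task failed for a reason other than a broken pipe"
  when: result | failed and 'Errno 32' not in (result.msg | default(''))
```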
38,352,641
How can I use jinja2 to join with quotes in Ansible?
I have an ansible list value: hosts = ["site1", "site2", "site3"] if I try this: hosts | join(", ") I get: site1, site2, site3 But I want to get: "site1", "site2", "site3"
ansible
12
11,930
4
https://stackoverflow.com/questions/38352641/how-can-i-use-jinja2-to-join-with-quotes-in-ansible
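Quoting each element before joining can be done by mapping the `to_json` filter over the list, since JSON strings are always double-quoted. A sketch:

```yaml
- debug:
    msg: "{{ hosts | map('to_json') | join(', ') }}"
    # yields: "site1", "site2", "site3"
```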
27,787,412
Ansible-vault errors with "Odd-length string"
I'm running Ansible 1.8.2 . I have a vaulted file created on another system. On that system it works without any problems. However, when I run it on my local system I get the following error: $Β» ansible-vault --debug view vars/vaulted_vars.yml Vault password: Traceback (most recent call last): File "/usr/bin/ansible-vault", line 225, in main fn(args, options, parser) File "/usr/bin/ansible-vault", line 172, in execute_view this_editor.view_file() File "/usr/lib/python2.7/site-packages/ansible/utils/vault.py", line 280, in view_file dec_data = this_vault.decrypt(tmpdata) File "/usr/lib/python2.7/site-packages/ansible/utils/vault.py", line 136, in decrypt data = this_cipher.decrypt(data, self.password) File "/usr/lib/python2.7/site-packages/ansible/utils/vault.py", line 545, in decrypt data = unhexlify(data) TypeError: Odd-length string ERROR: Odd-length string I tried to manually type in the password or copy-pasting it, but the error still happens. What is going on here and how to fix this error?
ansible
12
18,091
4
https://stackoverflow.com/questions/27787412/ansible-vault-errors-with-odd-length-string
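The traceback ends in unhexlify, which only accepts an even number of hex digits, so "Odd-length string" usually indicates the vault file itself was corrupted in transit (a lost character, truncation, or line-ending conversion) rather than a wrong password. A hedged way to check, assuming the file is still intact on the original system:

```shell
# Run on both systems; differing hashes mean the file changed in transit.
sha256sum vars/vaulted_vars.yml

# The payload after the $ANSIBLE_VAULT header line is hex text; an odd
# character count confirms the corruption.
tail -n +2 vars/vaulted_vars.yml | tr -d ' \r\n' | wc -c
```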
46,724,196
Please explain usage of "item" in Ansible
I found some AWS Ansible code using the words "{{ item.id }}" or {{ item.sg_name }}. I do not understand how "item" works.
Please explain usage of "item" in Ansible I found some AWS Ansible code using the words "{{ item.id }}" or {{ item.sg_name }}. I do not understand how "item" works.
ansible
12
18,966
1
https://stackoverflow.com/questions/46724196/please-explain-usage-of-item-in-ansible
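item is not a command; it is the loop variable that Ansible assigns on each pass of a with_items / loop construct. When the list elements are dictionaries, item.id and item.sg_name read keys of the current element. A minimal sketch with made-up data (the names and IDs are illustrative, not from the original AWS code):

```yaml
- hosts: localhost
  vars:
    security_groups:
      - { id: "sg-111111", sg_name: "web" }
      - { id: "sg-222222", sg_name: "db" }
  tasks:
    - name: print one message per list element
      debug:
        msg: "{{ item.sg_name }} uses {{ item.id }}"
      with_items: "{{ security_groups }}"
```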
39,794,072
splitting string by whitespace and then joining it again in ansible/jinja2
I'm trying to "clean up" whitespace in a variable in an Ansible (ansible-2.1.1.0-1.fc24.noarch) playbook and I thought I'd first split() it and then join(' ') again. For some reason that approach is giving me the error below :-/ --- - hosts: all remote_user: root vars: mytext: | hello there how are you? tasks: - debug: msg: "{{ mytext }}" - debug: msg: "{{ mytext.split() }}" - debug: msg: "{{ mytext.split().join(' ') }}" ... Gives me: TASK [debug] ******************************************************************* ok: [192.168.122.193] => { "msg": "hello\nthere how are\nyou?\n" } TASK [debug] ******************************************************************* ok: [192.168.122.193] => { "msg": [ "hello", "there", "how", "are", "you?" ] } TASK [debug] ******************************************************************* fatal: [192.168.122.193]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'join'\n\nThe error appears to have been in '.../tests.yaml': line 15, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ mytext.split() }}\"\n - debug:\n ^ here\n"} Any idea on what I'm doing wrong? It says the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'join' , but according to useful filters docs, it should work.
splitting string by whitespace and then joining it again in ansible/jinja2 I'm trying to "clean up" whitespace in a variable in an Ansible (ansible-2.1.1.0-1.fc24.noarch) playbook and I thought I'd first split() it and then join(' ') again. For some reason that approach is giving me the error below :-/ --- - hosts: all remote_user: root vars: mytext: | hello there how are you? tasks: - debug: msg: "{{ mytext }}" - debug: msg: "{{ mytext.split() }}" - debug: msg: "{{ mytext.split().join(' ') }}" ... Gives me: TASK [debug] ******************************************************************* ok: [192.168.122.193] => { "msg": "hello\nthere how are\nyou?\n" } TASK [debug] ******************************************************************* ok: [192.168.122.193] => { "msg": [ "hello", "there", "how", "are", "you?" ] } TASK [debug] ******************************************************************* fatal: [192.168.122.193]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'join'\n\nThe error appears to have been in '.../tests.yaml': line 15, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n msg: \"{{ mytext.split() }}\"\n - debug:\n ^ here\n"} Any idea on what I'm doing wrong? It says the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'list object' has no attribute 'join' , but according to useful filters docs, it should work.
ansible, jinja2
12
37,501
1
https://stackoverflow.com/questions/39794072/splitting-string-by-whitespace-and-then-joining-it-again-in-ansible-jinja2
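The reason split() works but .join() does not: Jinja2 passes string methods of the underlying Python object through, but join is exposed as a Jinja2 filter rather than a list method, and the filter is what the "useful filters" docs describe. Piping the list into the filter should produce the cleaned-up string:

```yaml
- debug:
    msg: "{{ mytext.split() | join(' ') }}"
# expected: "hello there how are you?"
```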
39,628,351
Ansible and Git Permission denied (publickey) at Git Clone
I have a playbook where I am trying to clone from a private repo (GIT) to a server. I have set up SSH forwarding and when I ssh into the server and try to manually clone from the same repo, it successfully works. However, when I use Ansible to clone the repo to the server, it fails with "Permission Denied Public Key". This is my playbook deploy.yml : --- - hosts: webservers remote_user: root tasks: - name: Setup Git repo git: repo={{ git_repo }} dest={{ app_dir }} accept_hostkey=yes This is how my ansible.cfg looks: [ssh_args] ssh_args = -o FowardAgent=yes I am also able to perform all the other tasks in my playbooks (os operations, installations). I have tried: Specifying sshAgentForwarding flag in ansible.cfg on the server (ansible.cfg in same dir as playbook) using: ssh_args = -o ForwardingAgent=yes used become: false to execute the git clone running ansible -i devops/hosts webservers -a "ssh -T git@bitbucket.org" returns: an_ip_address | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true } This is the command that I use to run the playbook: ansible-playbook devops/deploy.yml -i devops/hosts -vvvv This is the error message I get: fatal: [162.243.243.13]: FAILED! 
=> {"changed": false, "cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "invocation": {"module_args": {"accept_hostkey": true, "bare": false, "clone": true, "depth": null, "dest": "/var/www/aWebsite", "executable": null, "force": false, "key_file": null, "recursive": true, "reference": null, "refspec": null, "remote": "origin", "repo": "git@bitbucket.org:aUser/aRepo.git", "ssh_opts": null, "track_submodules": false, "update": true, "verify_commit": false, "version": "HEAD"}, "module_name": "git"}, "msg": "Permission denied (publickey).\r\nfatal: Could not r$ad from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.", "rc": 128, "stderr": "Permission denied (publickey).\r\nfatal: Could not read from remote r$pository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n", "stdout": "", "stdout_lines": []}
Ansible and Git Permission denied (publickey) at Git Clone I have a playbook where I am trying to clone from a private repo (GIT) to a server. I have set up SSH forwarding and when I ssh into the server and try to manually clone from the same repo, it successfully works. However, when I use Ansible to clone the repo to the server, it fails with "Permission Denied Public Key". This is my playbook deploy.yml : --- - hosts: webservers remote_user: root tasks: - name: Setup Git repo git: repo={{ git_repo }} dest={{ app_dir }} accept_hostkey=yes This is how my ansible.cfg looks: [ssh_args] ssh_args = -o FowardAgent=yes I am also able to perform all the other tasks in my playbooks (os operations, installations). I have tried: Specifying sshAgentForwarding flag in ansible.cfg on the server (ansible.cfg in same dir as playbook) using: ssh_args = -o ForwardingAgent=yes used become: false to execute the git clone running ansible -i devops/hosts webservers -a "ssh -T git@bitbucket.org" returns: an_ip_address | UNREACHABLE! => { "changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true } This is the command that I use to run the playbook: ansible-playbook devops/deploy.yml -i devops/hosts -vvvv This is the error message I get: fatal: [162.243.243.13]: FAILED! 
=> {"changed": false, "cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "invocation": {"module_args": {"accept_hostkey": true, "bare": false, "clone": true, "depth": null, "dest": "/var/www/aWebsite", "executable": null, "force": false, "key_file": null, "recursive": true, "reference": null, "refspec": null, "remote": "origin", "repo": "git@bitbucket.org:aUser/aRepo.git", "ssh_opts": null, "track_submodules": false, "update": true, "verify_commit": false, "version": "HEAD"}, "module_name": "git"}, "msg": "Permission denied (publickey).\r\nfatal: Could not r$ad from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.", "rc": 128, "stderr": "Permission denied (publickey).\r\nfatal: Could not read from remote r$pository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n", "stdout": "", "stdout_lines": []}
git, ansible
12
14,430
4
https://stackoverflow.com/questions/39628351/ansible-and-git-permission-denied-publickey-at-git-clone
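Two details of the quoted ansible.cfg are worth double-checking: Ansible reads ssh_args from the [ssh_connection] section (not [ssh_args]), and the OpenSSH option is spelled ForwardAgent. With either of those wrong, the agent is never forwarded, so git on the remote host has no key. A corrected sketch (note also that become/root sessions can lose the forwarded SSH_AUTH_SOCK, which is a separate thing to verify):

```ini
# ansible.cfg
[ssh_connection]
ssh_args = -o ForwardAgent=yes
```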
39,554,281
ansible lookup pipe: what does this pipe mean?
In Ansible, I can use something like: debug:var="{{lookup('pipe', 'date +%Y%m%d')}}" This works, but what does the 'pipe' mean? I cannot find any detailed explanation for this in the Ansible documentation, and I want to understand what happens when this statement runs. For example, does 'date' mean running the 'date' command from a shell, and then using a pipe-like way to format the date in the specified way?
ansible lookup pipe: what does this pipe mean? In Ansible, I can use something like: debug:var="{{lookup('pipe', 'date +%Y%m%d')}}" This works, but what does the 'pipe' mean? I cannot find any detailed explanation for this in the Ansible documentation, and I want to understand what happens when this statement runs. For example, does 'date' mean running the 'date' command from a shell, and then using a pipe-like way to format the date in the specified way?
ansible
12
14,187
1
https://stackoverflow.com/questions/39554281/ansible-lookup-pipe-what-does-this-pipe-means
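There is no shell pipe involved: pipe is the name of a lookup plugin that runs the given command in a subshell on the control machine and returns its stdout. The +%Y%m%d formatting is done by date itself, not by any piping. A minimal sketch:

```yaml
# Runs on the control machine, not the managed host; roughly equivalent
# to capturing the stdout of:  /bin/sh -c "date +%Y%m%d"
- debug:
    msg: "{{ lookup('pipe', 'date +%Y%m%d') }}"
```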
33,127,360
Append contents of a source file to a destination file
I need to scan the /etc/fstab file for an entry and, if it is not present, append the contents of another file into /etc/fstab . The Ansible modules that I've seen do not seem to allow appending a file to another file, only adding a specific "text" line.
Append contents of a source file to a destination file I need to scan the /etc/fstab file for an entry and, if it is not present, append the contents of another file into /etc/fstab . The Ansible modules that I've seen do not seem to allow appending a file to another file, only adding a specific "text" line.
file, ansible
12
42,071
2
https://stackoverflow.com/questions/33127360/append-contents-of-a-source-file-to-a-destination-file
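One approach, assuming the extra lines live in a file shipped alongside the playbook: read it with the file lookup and let blockinfile append it idempotently — the marker comments keep the block from being added twice on reruns. Paths here are illustrative:

```yaml
- name: append extra mounts to /etc/fstab if the block is not present
  blockinfile:
    path: /etc/fstab
    marker: "# {mark} ANSIBLE MANAGED EXTRA MOUNTS"
    block: "{{ lookup('file', 'files/extra_fstab') }}"
```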
29,856,738
How to define hash (dict) in ansible inventory file?
I am able to define a hash(dict) like below in group_vars/all: region_subnet_matrix: site1: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}a" site2: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}b" but for the life of me, I could not figure out how to define it in the hosts file: [all:vars] region_subnet_matrix="{ site1: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}a" site2: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}b" }" I know this is incorrect, but I don't know the right way. Can someone enlighten me, please?
How to define hash (dict) in ansible inventory file? I am able to define a hash(dict) like below in group_vars/all: region_subnet_matrix: site1: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}a" site2: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}b" but for the life of me, I could not figure out how to define it in the hosts file: [all:vars] region_subnet_matrix="{ site1: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}a" site2: region: "{{ aws_region }}" subnet: "subnet-xxxxxxx" zone: "{{aws_region}}b" }" I know this is incorrect, but I don't know the right way. Can someone enlighten me, please?
ansible, ansible-inventory
12
24,737
2
https://stackoverflow.com/questions/29856738/how-to-define-hash-dict-in-ansible-inventory-file
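In an INI-style inventory every variable must fit on one line, and values that look like Python literals are parsed as such. So the dict has to be flattened into a single-line literal; a hedged sketch (the region values are spelled out here, because relying on {{ aws_region }} expansion inside an INI inventory value is something to verify for your Ansible version):

```ini
[all:vars]
region_subnet_matrix={'site1': {'region': 'us-east-1', 'subnet': 'subnet-xxxxxxx', 'zone': 'us-east-1a'}, 'site2': {'region': 'us-east-1', 'subnet': 'subnet-yyyyyyy', 'zone': 'us-east-1b'}}
```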
24,250,418
Finding file name in files section of current Ansible role
I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I'm running the playbook, but it needs to be copied to the same name remotely, something like this: - name: copy file copy: src=*.txt dest=/path/to/fixedname.txt Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do: - name: find the filename connection: local shell: "ls -1 files/*.txt" register: myfile - name: copy file copy: src="files/{{ item }}" dest=/path/to/fixedname.txt with_items: - myfile.stdout_lines However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
Finding file name in files section of current Ansible role I'm fairly new to Ansible and I'm trying to create a role that copies a file to a remote server. The local file can have a different name every time I'm running the playbook, but it needs to be copied to the same name remotely, something like this: - name: copy file copy: src=*.txt dest=/path/to/fixedname.txt Ansible doesn't allow wildcards, so when I wrote a simple playbook with the tasks in the main playbook I could do: - name: find the filename connection: local shell: "ls -1 files/*.txt" register: myfile - name: copy file copy: src="files/{{ item }}" dest=/path/to/fixedname.txt with_items: - myfile.stdout_lines However, when I moved the tasks to a role, the first action didn't work anymore, because the relative path is relative to the role while the playbook executes in the root dir of the 'roles' directory. I could add the path to the role's files dir, but is there a more elegant way?
ansible
12
20,188
2
https://stackoverflow.com/questions/24250418/finding-file-name-in-files-section-of-current-ansible-role
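One more elegant option: with_fileglob, when used inside a role, searches the role's own files/ directory first, which removes both the ls task and the relative-path problem. A sketch:

```yaml
- name: copy the txt file shipped with this role under a fixed name
  copy:
    src: "{{ item }}"
    dest: /path/to/fixedname.txt
  with_fileglob:
    - "*.txt"
```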
72,368,575
How can I print out the actual values of all the variables used by an Ansible playbook?
An answer on StackOverflow suggests using - debug: var=vars or - debug: var=hostvars to print out all the variables used by an Ansible playbook. Using var=hostvars did not print out all of the variables. But I did get all of the variables printed out when I added the following lines to the top of the main.yml file of the role executed by my playbook: - name: print all variables debug: var=vars The problem is that the values of the variables printed out are not fully evaluated if they are dependent on the values of other variables. For example, here is a portion of what gets printed out: "env": "dev", "rpm_repo": "project-subproject-rpm-{{env}}", "index_prefix": "project{{ ('') if (env=='prod') else ('_' + env) }}", "our_server": "{{ ('0.0.0.0') if (env=='dev') else ('192.168.100.200:9997') }}", How can I get Ansible to print out the variables fully evaluated like this? "env": "dev", "rpm_repo": "project-subproject-rpm-dev", "index_prefix": "project_dev", "our_server": "0.0.0.0", EDIT: After incorporating the tasks section in the answer into my playbook file and removing the roles section, my playbook file looks like the following (where install-vars.yml contains some variable definitions): - hosts: all become: true vars_files: - install-vars.yml tasks: - debug: msg: |- {% for k in _my_vars %} {{ k }}: {{ lookup('vars', k) }} {% endfor %} vars: _special_vars: - ansible_dependent_role_names - ansible_play_batch - ansible_play_hosts - ansible_play_hosts_all - ansible_play_name - ansible_play_role_names - ansible_role_names - environment - hostvars - play_hosts - role_names _hostvars: "{{ hostvars[inventory_hostname].keys() }}" _my_vars: "{{ vars.keys()| difference(_hostvars)| difference(_special_vars)| reject('match', '^_.*$')| list| sort }}" When I try to run the playbook, I get this failure: shell> ansible-playbook playbook.yml SSH password: SUDO password[defaults to SSH password]: PLAY [all] ********************************************************************* TASK 
[setup] ******************************************************************* ok: [192.168.100.111] TASK [debug] ******************************************************************* fatal: [192.168.100.111]: FAILED! => {"failed": true, "msg": "lookup plugin (vars) not found"} to retry, use: --limit @/usr/local/project-directory/installer-1.0.0.0/playbook.retry PLAY RECAP ********************************************************************* 192.168.100.111 : ok=1 changed=0 unreachable=0 failed=1
How can I print out the actual values of all the variables used by an Ansible playbook? An answer on StackOverflow suggests using - debug: var=vars or - debug: var=hostvars to print out all the variables used by an Ansible playbook. Using var=hostvars did not print out all of the variables. But I did get all of the variables printed out when I added the following lines to the top of the main.yml file of the role executed by my playbook: - name: print all variables debug: var=vars The problem is that the values of the variables printed out are not fully evaluated if they are dependent on the values of other variables. For example, here is a portion of what gets printed out: "env": "dev", "rpm_repo": "project-subproject-rpm-{{env}}", "index_prefix": "project{{ ('') if (env=='prod') else ('_' + env) }}", "our_server": "{{ ('0.0.0.0') if (env=='dev') else ('192.168.100.200:9997') }}", How can I get Ansible to print out the variables fully evaluated like this? "env": "dev", "rpm_repo": "project-subproject-rpm-dev", "index_prefix": "project_dev", "our_server": "0.0.0.0", EDIT: After incorporating the tasks section in the answer into my playbook file and removing the roles section, my playbook file looks like the following (where install-vars.yml contains some variable definitions): - hosts: all become: true vars_files: - install-vars.yml tasks: - debug: msg: |- {% for k in _my_vars %} {{ k }}: {{ lookup('vars', k) }} {% endfor %} vars: _special_vars: - ansible_dependent_role_names - ansible_play_batch - ansible_play_hosts - ansible_play_hosts_all - ansible_play_name - ansible_play_role_names - ansible_role_names - environment - hostvars - play_hosts - role_names _hostvars: "{{ hostvars[inventory_hostname].keys() }}" _my_vars: "{{ vars.keys()| difference(_hostvars)| difference(_special_vars)| reject('match', '^_.*$')| list| sort }}" When I try to run the playbook, I get this failure: shell> ansible-playbook playbook.yml SSH password: SUDO password[defaults to SSH 
password]: PLAY [all] ********************************************************************* TASK [setup] ******************************************************************* ok: [192.168.100.111] TASK [debug] ******************************************************************* fatal: [192.168.100.111]: FAILED! => {"failed": true, "msg": "lookup plugin (vars) not found"} to retry, use: --limit @/usr/local/project-directory/installer-1.0.0.0/playbook.retry PLAY RECAP ********************************************************************* 192.168.100.111 : ok=1 changed=0 unreachable=0 failed=1
variables, ansible
12
89,243
4
https://stackoverflow.com/questions/72368575/how-can-i-print-out-the-actual-values-of-all-the-variables-used-by-an-ansible-pl
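The closing failure, "lookup plugin (vars) not found", points at the Ansible version: the vars lookup only exists in Ansible 2.5 and later, and the TASK [setup] heading in the output (renamed to "Gathering Facts" in 2.4) suggests an older release is in use. On 2.5+, the same idea can be written as a loop, which forces each value through the template engine so nested {{ env }} references come out fully evaluated:

```yaml
# Requires Ansible >= 2.5, where the 'vars' lookup plugin was introduced.
- debug:
    msg: "{{ item }}: {{ lookup('vars', item) }}"
  loop: "{{ vars.keys() | reject('match', '^_.*$') | list | sort }}"
```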
74,048,180
How to run ansible playbook from github actions - without using external action
I have written a workflow file that prepares the runner to connect to the desired server with ssh, so that I can run an Ansible playbook. ssh -t -v theUser@theHost shows me that the SSH connection works. The Ansible script, however, tells me that the sudo password is missing. If I leave the line ssh -t -v theUser@theHost out, Ansible throws a connection timeout and can't connect to the server. => fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost . The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works very well from my local machine without using the sudo password. I configured the server so that the user has enough rights in the desired folder recursively. It simply doesn't work from my GitHub Action. Can you please tell me what I am doing wrong? 
My workflow file looks like this: name: CI # Controls when the workflow will run on: # Triggers the workflow on push or pull request events but only for the "master" branch push: branches: [ "master" ] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: # A workflow run is made up of one or more jobs that can run sequentially or in parallel jobs: run-playbooks: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 with: submodules: true token: ${{secrets.REPO_TOKEN}} - name: Run Ansible Playbook run: | mkdir -p /home/runner/.ssh/ touch /home/runner/.ssh/config touch /home/runner/.ssh/id_rsa echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts cd myproject-infrastructure/ansible eval ssh-agent -s chmod 700 /home/runner/.ssh/id_rsa ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
How to run ansible playbook from github actions - without using external action I have written a workflow file that prepares the runner to connect to the desired server with ssh, so that I can run an Ansible playbook. ssh -t -v theUser@theHost shows me that the SSH connection works. The Ansible script, however, tells me that the sudo password is missing. If I leave the line ssh -t -v theUser@theHost out, Ansible throws a connection timeout and can't connect to the server. => fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost . The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works very well from my local machine without using the sudo password. I configured the server so that the user has enough rights in the desired folder recursively. It simply doesn't work from my GitHub Action. Can you please tell me what I am doing wrong? 
My workflow file looks like this: name: CI # Controls when the workflow will run on: # Triggers the workflow on push or pull request events but only for the "master" branch push: branches: [ "master" ] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: # A workflow run is made up of one or more jobs that can run sequentially or in parallel jobs: run-playbooks: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 with: submodules: true token: ${{secrets.REPO_TOKEN}} - name: Run Ansible Playbook run: | mkdir -p /home/runner/.ssh/ touch /home/runner/.ssh/config touch /home/runner/.ssh/id_rsa echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts cd myproject-infrastructure/ansible eval ssh-agent -s chmod 700 /home/runner/.ssh/id_rsa ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
linux, ssh, ansible, github-actions
12
17,932
2
https://stackoverflow.com/questions/74048180/how-to-run-ansible-playbook-from-github-actions-without-using-external-action
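One likely culprit in the step above: eval ssh-agent -s lacks command substitution, so the agent's environment variables are never exported into the shell and the key is never added to it. A hedged rewrite of the tail of the run: block, reusing the question's own secrets (the earlier key/config/known_hosts lines stay as they were):

```yaml
      - name: Run Ansible Playbook
        run: |
          # ... key, config and known_hosts setup as before ...
          chmod 600 /home/runner/.ssh/id_rsa
          eval "$(ssh-agent -s)"     # note the $(...) command substitution
          ssh-add /home/runner/.ssh/id_rsa
          cd myproject-infrastructure/ansible
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
```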
61,930,563
Extract file names without extension - Ansible
I have a variable file in Ansible like below check: - file1.tar.gz - file2.tar.gz while iterating it in tasks I am using {{item}} with_items: - "{{check}}" . Is there a way to extract the filenames without extension while iterating? I.e. I need file1 from file1.tar.gz and file2 from file2.tar.gz
Extract file names without extension - Ansible I have a variable file in Ansible like below check: - file1.tar.gz - file2.tar.gz while iterating it in tasks I am using {{item}} with_items: - "{{check}}" . Is there a way to extract the filenames without extension while iterating? I.e. I need file1 from file1.tar.gz and file2 from file2.tar.gz
ansible
12
22,793
3
https://stackoverflow.com/questions/61930563/extract-file-names-without-extension-ansible
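Ansible ships a splitext filter, but it strips only the last extension, so file1.tar.gz would become file1.tar. For double extensions a regex (or a plain split) is needed; two hedged variants:

```yaml
- debug:
    msg: "{{ item | splitext | first }}"   # file1.tar.gz -> file1.tar
  with_items: "{{ check }}"

- debug:
    msg: '{{ item | regex_replace("\.tar\.gz$", "") }}'   # file1.tar.gz -> file1
  with_items: "{{ check }}"
```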
54,275,015
can ansible-playbook read from stdin instead of a file?
Is it possible for ansible-playbook to read a playbook from the standard input? I thought maybe dash (-) would be a way to specify stdin , like it does in the cat command and I tried: $ ansible-playbook - But it fails with: ERROR! the playbook: - could not be found
can ansible-playbook read from stdin instead of a file? Is it possible for ansible-playbook to read a playbook from the standard input? I thought maybe dash (-) would be a way to specify stdin , like it does in the cat command and I tried: $ ansible-playbook - But it fails with: ERROR! the playbook: - could not be found
ansible
12
4,466
1
https://stackoverflow.com/questions/54275015/can-ansible-playbook-read-from-stdin-instead-of-a-file
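A bare - is not recognized, but on Linux /dev/stdin is an ordinary readable path, so one workaround is to hand that to ansible-playbook (assuming a local hosts inventory file):

```shell
cat playbook.yml | ansible-playbook -i hosts /dev/stdin

# or inline, via a heredoc:
ansible-playbook -i hosts /dev/stdin <<'EOF'
- hosts: all
  tasks:
    - debug: msg=hello
EOF
```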
40,845,709
Is a correct YAML file enough for a correct ansible playbook, syntax errors aside?
I have an ansible playbook which raises an error (with a dreadful message, as usual): ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'> The error appears to have been in '/root/myplaybook.yml': line 17, column 7, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: # configure rsyslog - name: configure rsyslog to expose events on port 42000 ^ here The line in question is typical of other lines I have in this and other playbooks: # prepare environment # configure rsyslog - name: configure rsyslog to expose events on port 42000 lineinfile: - create: yes - dest: /etc/rsyslog.d/expose-42000.conf - line: "*.* @127.0.0.1:42000" notify: - restart rsyslog The file is validated by three online checkers, so there are no YAML errors. Is this fact enough for the file to be a correct ansible playbook? What I am trying to understand is whether a correct YAML file leaves me only with ansible syntax errors (a module which does not exist for instance) or is a playbook an extension of YAML (in the sense that a line like - name: blah blah blah is OK from a YAML perspective, but will be rejected by ansible because (I am making up an example) it has more than two words. In other words I am checking if the following can be true: the YAML syntax is OK, the ansible keywords are OK but ansible does not conform to YAML syntax fully by having some limitations. EDIT : I had an error, spotted by Konstantin in his answer. I will leave this question in place since it helped me to understand that ansible does not put constraints on the YAML file itself, so when there is an error and the validation goes through I am really left with specific ansible syntax errors (or logical, like in my case).
Is a correct YAML file enough for a correct ansible playbook, syntax errors aside? I have an ansible playbook which raises an error (with a dreadful message, as usual): ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'> The error appears to have been in '/root/myplaybook.yml': line 17, column 7, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: # configure rsyslog - name: configure rsyslog to expose events on port 42000 ^ here The line in question is typical of other lines I have in this and other playbooks: # prepare environment # configure rsyslog - name: configure rsyslog to expose events on port 42000 lineinfile: - create: yes - dest: /etc/rsyslog.d/expose-42000.conf - line: "*.* @127.0.0.1:42000" notify: - restart rsyslog The file is validated by three online checkers, so there are no YAML errors. Is this fact enough for the file to be a correct ansible playbook? What I am trying to understand is whether a correct YAML file leaves me only with ansible syntax errors (a module which does not exist for instance) or is a playbook an extension of YAML (in the sense that a line like - name: blah blah blah is OK from a YAML perspective, but will be rejected by ansible because (I am making up an example) it has more than two words. In other words I am checking if the following can be true: the YAML syntax is OK, the ansible keywords are OK but ansible does not conform to YAML syntax fully by having some limitations. EDIT : I had an error, spotted by Konstantin in his answer. I will leave this question in place since it helped me to understand that ansible does not put constraints on the YAML file itself, so when there is an error and the validation goes through I am really left with specific ansible syntax errors (or logical, like in my case).
yaml, ansible
12
33,305
1
https://stackoverflow.com/questions/40845709/is-a-correct-yaml-file-enough-for-a-correct-ansible-playbook-syntax-errors-asid
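For the record, the YAML in the question is valid, but the lineinfile arguments are written as a sequence (each one starts with -), while Ansible requires a mapping there — exactly what the AnsibleSequence in the error message is complaining about. The fixed task, with the dashes dropped:

```yaml
- name: configure rsyslog to expose events on port 42000
  lineinfile:
    create: yes
    dest: /etc/rsyslog.d/expose-42000.conf
    line: "*.* @127.0.0.1:42000"
  notify:
    - restart rsyslog
```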
33,625,581
How to pull while deployment in ansible
I am using Ansible for configuration management and the following task to clone a Git repo: # Example git checkout from Ansible Playbooks - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout version=release-0.22 This clones the repo with the particular version. Does it do a git pull when run again if the repo already exists? Or does it simply clone the repo all the time? How do I do a git pull in Ansible if the repo already exists, and how can we run one specific command if the repo already existed and another if the repo was cloned for the first time?
How to pull while deployment in ansible I am using Ansible for configuration management and the following task to clone a Git repo: # Example git checkout from Ansible Playbooks - git: repo=git://foosball.example.org/path/to/repo.git dest=/srv/checkout version=release-0.22 This clones the repo with the particular version. Does it do a git pull when run again if the repo already exists? Or does it simply clone the repo all the time? How do I do a git pull in Ansible if the repo already exists, and how can we run one specific command if the repo already existed and another if the repo was cloned for the first time?
git, github, ansible
12
14,112
2
https://stackoverflow.com/questions/33625581/how-to-pull-while-deployment-in-ansible
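The git module is idempotent: when dest already holds the repo, it fetches and checks out the requested version (roughly a fetch + checkout rather than a literal git pull) instead of re-cloning. Registering the result lets you branch on what happened; in my understanding checkout.before is empty on a first clone and a commit SHA afterwards, which is worth verifying on your Ansible version. The script names below are hypothetical:

```yaml
- git:
    repo: git://foosball.example.org/path/to/repo.git
    dest: /srv/checkout
    version: release-0.22
  register: checkout

- name: run only on the initial clone
  command: ./first-time-setup.sh    # hypothetical command
  args:
    chdir: /srv/checkout
  when: checkout.before is none

- name: run only when an existing checkout was updated
  command: ./post-update.sh         # hypothetical command
  args:
    chdir: /srv/checkout
  when: checkout.before is not none and checkout.changed
```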
53,108,954
How to use command stdin in Ansible?
I've tried this : - name: Log into Docker registry command: docker login --username "{{ docker_registry_username }}" --password-stdin stdin: "{{ docker_registry_password }}" This results in a warning and a failing command: [WARNING]: Ignoring invalid attribute: stdin … Cannot perform an interactive login from a non TTY device I've also tried this : - name: Log into Docker registry command: docker login --username "{{ docker_registry_username }}" --password-stdin stdin: "{{ docker_registry_password }}" This results in a syntax error: ERROR! Syntax Error while loading YAML. Does command stdin actually work in Ansible 2.7? If so, how am I supposed to use it?
How to use command stdin in Ansible? I've tried this : - name: Log into Docker registry command: docker login --username "{{ docker_registry_username }}" --password-stdin stdin: "{{ docker_registry_password }}" This results in a warning and a failing command: [WARNING]: Ignoring invalid attribute: stdin … Cannot perform an interactive login from a non TTY device I've also tried this : - name: Log into Docker registry command: docker login --username "{{ docker_registry_username }}" --password-stdin stdin: "{{ docker_registry_password }}" This results in a syntax error: ERROR! Syntax Error while loading YAML. Does command stdin actually work in Ansible 2.7? If so, how am I supposed to use it?
ansible
12
21,437
3
https://stackoverflow.com/questions/53108954/how-to-use-command-stdin-in-ansible
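stdin is indeed a real option of the command module (added around Ansible 2.4, so available in 2.7), but it is a module argument, not a task attribute — which is why the first attempt produces "Ignoring invalid attribute", and indenting it under the free-form command string is not valid YAML, hence the second error. Putting it under args: should work:

```yaml
- name: Log into Docker registry
  command: docker login --username "{{ docker_registry_username }}" --password-stdin
  args:
    stdin: "{{ docker_registry_password }}"
```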