Dataset fields per record: question_id, title, body, tags, score, view_count, answer_count, link.
**54333423 — How to keep ansible role from running multiple times when listed as a dependency?**

We broke down our giant ansible workspace into individual, simple roles that can be run on their own. They all depend on our yum role that provisions repositories, etc., and all the roles (A, B, C) list it in their meta.yml:

```yaml
# ./roles_galaxy/A/meta/main.yml (B and C are identical)
dependencies:
  - name: yum-repo
    src: foo
```

However, this causes the yum-repo role to be executed multiple times when our deploy playbook is run. We don't want that, as it just takes up extra time:

Playbook:

```yaml
- name: Common Roles
  hosts: things
  roles:
    - A
    - B
    - C
```

Output:

```
PLAY [Role A] ... TASK [yum-repo ...]
PLAY [Role B] ... TASK [yum-repo ...]
PLAY [Role C] ... TASK [yum-repo ...]
```

I've tried allow_duplicates = false in our ansible.cfg, but I don't think that's the right solution, as it still executes multiple times. If more information is needed, I'm happy to provide a cleaned-up version. Running ansible-2.5.5 currently.

Tags: ansible, ansible-role | Score: 11 | Views: 5,853 | Answers: 4
Link: https://stackoverflow.com/questions/54333423/how-to-keep-ansible-role-from-running-multiple-times-when-listed-as-a-dependency
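For what it's worth, allow_duplicates is a per-role setting in the role's own meta/main.yml, not an ansible.cfg option, which would explain why the config entry had no effect. A minimal sketch (the path is assumed from the question's layout); note that de-duplication only applies within a single play, so a dependency pulled in by roles running in separate plays will still run once per play:

```yaml
# ./roles_galaxy/yum-repo/meta/main.yml (path assumed from the question)
# With allow_duplicates: false, Ansible runs this role at most once per play,
# even when several roles list it as a dependency.
allow_duplicates: false
```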
**21721942 — Is there an easy way to generate a graph of Ansible role dependencies?**

Since version 1.3, Ansible has supported role dependencies to encourage reuse of role definitions. To audit and maintain larger orchestrations, it would be nice to have some way to easily generate a dependency graph of which roles depend on which other roles. An example of dependency definitions might be roles/app_node/meta/main.yml:

```yaml
---
dependencies:
  - { role: common, some_parameter: 3 }
  - { role: apache, port: 80 }
  - { role: postgres_client, dbname: blarg, other_parameter: 12 }
```

where roles/postgres_client/meta/main.yml might include something like:

```yaml
---
dependencies:
  - { role: postgres_common }
  - role: stunnel
    client: yes
    local_port: 5432
    remote_host: db_host
    remote_port: 15432
```

Such nested dependencies can get messy to maintain as the number of roles in an orchestration grows. I therefore wonder if anyone has found an easy way to generate a graph of such dependencies, either graphically (dot or neato?) or just as an indented text graph. Such a tool could help reduce the maintenance complexity.

Tags: scalability, administration, ansible, orchestration | Score: 11 | Views: 5,260 | Answers: 3
Link: https://stackoverflow.com/questions/21721942/is-there-an-easy-way-to-generate-a-graph-of-ansible-role-dependencies
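As a starting point for the "indented text graph" variant, the walk itself is just a recursive traversal of the dependency lists. A minimal sketch: the deps mapping below is hard-coded from the question's example; real code would populate it by parsing each roles/&lt;name&gt;/meta/main.yml (e.g. with PyYAML):

```python
# Print an indented text graph of role dependencies.
# The mapping mirrors the question's example meta files; in practice you
# would build it by loading roles/<name>/meta/main.yml for every role.
deps = {
    "app_node": ["common", "apache", "postgres_client"],
    "postgres_client": ["postgres_common", "stunnel"],
}

def tree_lines(role, deps, indent=0):
    """Return the dependency tree rooted at `role` as indented lines."""
    lines = ["  " * indent + role]
    for dep in deps.get(role, []):
        lines.extend(tree_lines(dep, deps, indent + 1))
    return lines

print("\n".join(tree_lines("app_node", deps)))
```

The same edge list could be emitted in DOT syntax and fed to graphviz's dot or neato for the graphical version.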
**37701342 — Ansible using systemd instead of service module**

I'm just getting my feet wet with Ansible 2.2 and DebOps, and I've run into the following problem. I have a host test-host to which I deployed a MySQL server (using geerlingguy.mysql). The role uses the following handler to restart the service:

```yaml
---
- name: restart mysql
  service: "name={{ mysql_daemon }} state=restarted sleep=5"
```

which, I thought, uses Ansible's service module to restart the server. However, that fails:

```
unsupported parameter for module: sleep
```

Just to rule out any weirdness with that custom role, I've tried to execute the module directly like so:

```
ansible test-host -b -m service -a 'name=mysql sleep=5 state=restarted'
```

with the same result. Running Ansible with more verbose output shows (among other things):

```
Running systemd
Using module file /usr/local/lib/python2.7/site-packages/ansible-2.2.0-py2.7.egg/ansible/modules/core/system/systemd.py
```

So it appears that the systemd module is used instead of service (looking into the module shows that it is indeed aliased to service). And, lo and behold, systemd does not support the sleep parameter. How do I fix this?

Tags: ansible, devops | Score: 11 | Views: 6,013 | Answers: 1
Link: https://stackoverflow.com/questions/37701342/ansible-using-systemd-instead-of-service-module
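If the sleep behaviour is actually wanted, one workaround worth testing (an assumption based on the service module's use parameter, which exists in the 2.2-era module to pick the backend explicitly) is to force the generic service backend instead of letting Ansible auto-select systemd; otherwise simply dropping sleep=5 lets the systemd backend handle the restart:

```yaml
- name: restart mysql
  service:
    name: "{{ mysql_daemon }}"
    state: restarted
    sleep: 5
    use: service   # force the generic backend instead of the systemd alias
```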
**39187881 — Querying ansible global group variables via (python)**

I'm trying to query global group variables set in Ansible. I seem to be getting an empty dictionary and I'm not sure what else I can do. My code looks like this:

```python
def __init__(self, inventory_path=None):
    self.loader = DataLoader()
    self.variable_manager = VariableManager()
    self.inventory = Inventory(loader=self.loader,
                               variable_manager=self.variable_manager,
                               host_list=inventory_path)
    self.variable_manager.set_inventory(self.inventory)
```

When I then try to get group vars as below:

```python
inventory_asg_groups = filter(lambda g: 'asg' in g, self.inventory.groups)
for group in inventory_asg_groups:
    print(self.inventory.get_group_vars(self.inventory.get_group(group)))
```

I get an empty dictionary: `{}`. When I just do:

```python
print(self.inventory.localhost.vars)
```

I get this:

```
{'ansible_python_interpreter': '/usr/local/opt/python/bin/python2.7', 'ansible_connection': 'local'}
```

I know the inventory is being loaded, since I can list all the groups in the inventory. How do I get the variables listed in group_vars/all via the Python Ansible API?

Tags: python, ansible | Score: 11 | Views: 2,821 | Answers: 1
Link: https://stackoverflow.com/questions/39187881/querying-ansible-global-group-variables-via-python
**37434598 — Ansible: sudo without password**

I want to run ansible with user sa1 without a sudo password.

First time, OK:

```
[root@centos1 cp]# ansible cent2 -m shell -a "sudo yum -y install httpd"
cent2 | SUCCESS | rc=0 >>
```

Second time, FAILED:

```
[root@centos1 cp]# ansible cent2 -s -m yum -a "name=httpd state=absent"
cent2 | FAILED! => {
    "changed": false,
    "failed": true,
    "module_stderr": "",
    "module_stdout": "sudo: a password is required\r\n",
    "msg": "MODULE FAILURE",
    "parsed": false
}
```

Please help!

Tags: linux, ansible, sudo | Score: 10 | Views: 62,796 | Answers: 5
Link: https://stackoverflow.com/questions/37434598/ansible-sudo-without-password
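The usual fix is a sudoers drop-in on the managed host granting the connecting user passwordless sudo; the first command only succeeded because the shell task ran sudo interactively. A sketch, assuming the connecting user is sa1 (always edit with visudo -f so the syntax is validated):

```
# /etc/sudoers.d/sa1  (hypothetical drop-in file)
# Let sa1 run any command via sudo without a password prompt,
# so Ansible's -s/--become escalation works non-interactively.
sa1 ALL=(ALL) NOPASSWD: ALL
```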
**49040013 — How can I know what version of Jinja2 my ansible is using?**

I tried to use pip list and pip freeze without success. It might be something obvious, but I haven't been able to find it so far.

Tags: ansible, jinja2 | Score: 10 | Views: 21,868 | Answers: 2
Link: https://stackoverflow.com/questions/49040013/how-can-i-know-what-version-of-jinja2-my-ansible-is-using
**42510032 — How to change ansible verbosity level without changing the command line arguments?**

I want to control the verbosity of ansible playbooks using an environment variable or a global configuration item. This is because ansible is called from multiple places in multiple ways, and I want to change the logging level for all further executions from the same shell session. I observed that if I configure ANSIBLE_DEBUG=true, ansible runs in debug mode, but debug mode is extremely verbose and I am only looking for something similar to the -vvv option (DEBUG is more verbose than even -vvvv). I tried to look for a variable inside [URL] but I wasn't able to find one that fits the bill.

Tags: ansible, ansible-2.x | Score: 10 | Views: 30,127 | Answers: 4
Link: https://stackoverflow.com/questions/42510032/how-to-change-ansible-verbosity-level-without-changing-the-command-line-argument
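Newer Ansible releases read an ANSIBLE_VERBOSITY environment variable that maps directly onto the -v/-vv/-vvv levels (this is an assumption about the installed version; the variable was introduced well after the 2.x releases the question tags). Exporting it once then affects every subsequent run in the same shell session:

```shell
# Equivalent to passing -vvv to every later ansible/ansible-playbook call
# in this shell session (assumes a release that supports ANSIBLE_VERBOSITY).
export ANSIBLE_VERBOSITY=3
echo "$ANSIBLE_VERBOSITY"
```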
**48825583 — In Ansible, how do you change an existing dictionary/hash's values using a variable for the key?**

As the title suggests, I want to loop over an existing dictionary and change some values. Based on the answer to this question I came up with the code below, but it doesn't work: the values are unchanged in the second debug call. I'm thinking it is because in the other question they are creating a new dictionary from scratch, but I've also tried it without the outer curly brackets, which I would have thought would change the existing value.

```yaml
- set_fact:
    uber_dict:
      a_dict:
        some_key: "abc"
        another_key: "def"
      b_dict:
        some_key: "123"
        another_key: "456"

- debug: var="uber_dict"

- set_fact: "{ uber_dict['{{ item }}']['some_key'] : 'xyz' }"
  with_items: "{{ uber_dict }}"

- debug: var="uber_dict"
```

Tags: ansible, ansible-2.x | Score: 10 | Views: 40,583 | Answers: 1
Link: https://stackoverflow.com/questions/48825583/in-ansible-how-do-you-change-a-existing-dictionary-hash-values-using-a-variable
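set_fact cannot mutate a dictionary in place; the usual pattern is to rebuild the fact each iteration with the combine filter, where recursive=True merges nested keys instead of replacing the whole sub-dict. A sketch against the question's uber_dict (loop syntax assumes Ansible 2.5+; older releases would use with_items over the key list):

```yaml
- set_fact:
    # Re-assign uber_dict with item's some_key overridden; other keys survive
    # because recursive=True merges rather than replaces the nested dicts.
    uber_dict: "{{ uber_dict | combine({item: {'some_key': 'xyz'}}, recursive=True) }}"
  loop: "{{ uber_dict.keys() | list }}"
```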
**50026802 — Override Ansible playbook serial from command line**

We use serial in nearly all of our playbooks, but there are occasions where we need to make a quick change and it's unnecessary for Ansible to abide by the serial restriction. Is there a way to override serial from the command line with a flag as part of the ansible-playbook command? Code example:

```yaml
- hosts: database
  serial: 1
  become: yes
```

Many thanks in advance!

Tags: ansible | Score: 10 | Views: 15,695 | Answers: 2
Link: https://stackoverflow.com/questions/50026802/override-ansible-playbook-serial-from-command-line
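There is no dedicated CLI flag, but serial accepts a templated value, so a common workaround is to default it to the normal batch size and override it with -e when needed (the variable name serial_override is an invention for illustration):

```yaml
- hosts: database
  serial: "{{ serial_override | default(1) }}"
  become: yes
```

Then a one-off full-speed run would be e.g. `ansible-playbook site.yml -e serial_override=100%`.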
**33871906 — How to strip newline from shell command's standard output run via ansible**

I have a scenario where I am executing a shell command on a machine using ansible to get some information on standard output. I use register to log its result in a variable my_info, and when I print my_info using debug I see its result with \n appended (the same command run directly on Linux does not append \n). When I use my_info in a template for a config, it prints a new line in the config, messing up my config. Here is how the code and output go.

Ansible code:

```yaml
- name: calculate range address start
  raw: grep 'CONFIG_PARAMS' /path/to/the/file | head -n 1
  register: my_info
```

Output:

```
ok: [My_HOST] => {
    "msg": "CONFIG_PARAMS\n"
}
```

How can we strip the newline from this output, or possibly make a change in the template so that the new line doesn't get printed?

Tags: python, jinja2, ansible | Score: 10 | Views: 21,079 | Answers: 3
Link: https://stackoverflow.com/questions/33871906/how-to-strip-newline-from-shell-commands-standard-output-run-via-ansible
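The trailing \n is simply the command's own newline captured verbatim in stdout; Jinja's trim filter strips it at the point of use, so the fix can live entirely in the template:

```
{# In the .j2 template: strip leading/trailing whitespace, incl. the "\n" #}
{{ my_info.stdout | trim }}
```

Registered raw/shell results also expose stdout_lines, so `my_info.stdout_lines[0]` is an alternative when only the first line matters.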
**28380771 — Error: ansible requires a json module, none found**

Error while executing the ansible ping module:

```
~ ansible webservers -i inventory -m ping -k -u root -vvvv
SSH password:
<~> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO ~
<my-lnx> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO my-lnx
~ | FAILED => FAILED: [Errno 8] nodename nor servname provided, or not known
<my-lnx> REMOTE_MODULE ping
<my-lnx> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582 && echo $HOME/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582'
<my-lnx> PUT /var/folders/8n/fftvnbbs51q834y16vfvb1q00000gn/T/tmpP6zwZj TO /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping
<my-lnx> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ping; rm -rf /root/.ansible/tmp/ansible-tmp-1423302966.66-77716810353582/ >/dev/null 2>&1'
my-lnx | FAILED >> {
    "failed": true,
    "msg": "Error: ansible requires a json module, none found!",
    "parsed": false
}
```

This is my inventory file:

```
~ cat inventory
[webservers]
my-lnx ansible_ssh_host=my-lnx ansible_ssh_port=22
```

I have installed the simplejson module on the client as well as the remote machine:

```
~ pip list | grep json
simple-json (1.1)
simplejson (3.6.5)
```

Tags: json, ansible | Score: 10 | Views: 22,051 | Answers: 4
Link: https://stackoverflow.com/questions/28380771/error-ansible-requires-a-json-module-none-found
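This error typically means the Python that Ansible invokes on the remote (/usr/bin/python in the trace above) predates 2.6, so it has no json in the stdlib and cannot import simplejson either, e.g. because pip installed it into a different interpreter. Since ordinary modules won't run until that is fixed, the bootstrap usually goes through raw, which needs no Python at all. A sketch, assuming a yum-based remote:

```yaml
- name: bootstrap json support on the remote (assumes yum-based system)
  raw: yum -y install python-simplejson
```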
**53596477 — Ansible install node with nvm**

I'm looking for a way to install a given version of node via ansible and nvm. The installation of nvm is working as expected, because if I connect as the root user I can execute the command nvm install 8.11.3, but this same command doesn't work with Ansible, and I don't understand why.

```yaml
---
- name: Install nvm
  git: repo=[URL] dest=~/.nvm version=v0.33.11
  tags: nvm

- name: Source nvm in ~/.{{ item }}
  lineinfile: >
    dest=~/.{{ item }}
    line="source ~/.nvm/nvm.sh"
    create=yes
  tags: nvm
  with_items:
    - bashrc
    - profile

- name: Install node and set version
  become: yes
  become_user: root
  shell: nvm install 8.11.3
```

Error log:

```
TASK [node : Install node and set version] *************************************************************************************
fatal: [51.15.128.164]: FAILED! => {"changed": true, "cmd": "nvm install 8.11.3", "delta": "0:00:00.005883",
"end": "2018-12-03 15:05:10.394433", "msg": "non-zero return code", "rc": 127,
"start": "2018-12-03 15:05:10.388550", "stderr": "/bin/sh: 1: nvm: not found",
"stderr_lines": ["/bin/sh: 1: nvm: not found"], "stdout": "", "stdout_lines": []}
        to retry, use: --limit .../.../ansible/stater-debian/playbook.retry
```

Tags: node.js, ansible, nvm | Score: 10 | Views: 16,530 | Answers: 6
Link: https://stackoverflow.com/questions/53596477/ansible-install-node-with-nvm
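The `/bin/sh: 1: nvm: not found` in the log is the clue: nvm is a shell function loaded from ~/.nvm/nvm.sh, and the shell module runs a non-login, non-interactive /bin/sh that never sources .bashrc or .profile. One common pattern is to source nvm.sh in the same command and run it under bash:

```yaml
- name: Install node and set version
  become: yes
  become_user: root
  # Source nvm.sh first so the nvm shell function exists in this process
  shell: . ~/.nvm/nvm.sh && nvm install 8.11.3
  args:
    executable: /bin/bash
```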
**50479554 — How to convert the "_" (underscore) in a string to "-" (hyphen) inside an Ansible Jinja template?**

I am looking for a way to convert "example_test_password" to "test-password" inside Jinja2 templates. Any help would be appreciated.

Tags: ansible, jinja2 | Score: 10 | Views: 16,886 | Answers: 1
Link: https://stackoverflow.com/questions/50479554/how-to-convert-the-underscore-in-a-string-to-hyphen-inside-ansible-j
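Jinja's built-in replace filter handles the underscore-to-hyphen part, and since the desired output also drops the example_ prefix, Ansible's regex_replace filter can strip that first. A sketch (regex_replace is an Ansible-provided filter, not plain Jinja):

```
{# Strip the "example_" prefix, then turn remaining "_" into "-" #}
{{ "example_test_password" | regex_replace('^example_', '') | replace('_', '-') }}
{# renders as: test-password #}
```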
**39398455 — How to execute ansible playbook from crontab?**

Is it possible to execute an ansible playbook from crontab? We have a playbook that needs to run at a certain time every day, but I know that cron doesn't play nicely with ssh. Tower has a built-in scheduling engine, but we are not interested in using Tower. How are other people scheduling ansible playbooks?

Tags: ansible | Score: 10 | Views: 37,179 | Answers: 6
Link: https://stackoverflow.com/questions/39398455/how-to-execute-ansible-playbook-from-crontab
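Cron runs playbooks fine; the ssh wrinkle is just that the cron environment has no ssh-agent or TTY, so the job needs a passphrase-less key (or pre-loaded agent) and absolute paths. A sketch of a crontab entry (all paths here are hypothetical):

```
# Run the nightly playbook at 04:00, logging output; paths are examples only
0 4 * * * /usr/bin/ansible-playbook -i /etc/ansible/hosts /opt/playbooks/nightly.yml >> /var/log/ansible-nightly.log 2>&1
```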
**71748254 — ansible-playbook: "Failed to update cache: unknown reason"**

I am trying to deploy the KYPO cyber range and am following its official guide. While deploying the whole range using ansible-playbook, I am stuck on this error:

```
TASK [docker : install prerequisites] ******************************************************************
fatal: [192.168.211.208]: FAILED! => {"changed": false, "msg": "Failed to update apt cache: unknown reason"}
```

I manually checked apt-get update, which initially gave me this notice:

```
N: Skipping acquire of configured file 'stable/binary-i386/Packages' as repository '[URL] focal InRelease' doesn't support architecture 'i386'
```

I followed this to add [arch=amd64] to the repository line, which cleaned up the warning. Now apt-get update runs without any warnings or errors, but ansible-playbook keeps generating the same failure. I raised the verbosity level and got:

```
fatal: [192.168.211.208]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_unauthenticated": false,
            "autoclean": false,
            "autoremove": false,
            "cache_valid_time": 0,
            "deb": null,
            "default_release": null,
            "dpkg_options": "force-confdef,force-confold",
            "force": false,
            "force_apt_get": false,
            "install_recommends": null,
            "name": ["apt-transport-https", "ca-certificates"],
            "only_upgrade": false,
            "package": ["apt-transport-https", "ca-certificates"],
            "policy_rc_d": null,
            "purge": false,
            "state": "present",
            "update_cache": true,
            "update_cache_retries": 5,
            "update_cache_retry_max_delay": 12,
            "upgrade": null
        }
    },
    "msg": "Failed to update apt cache: unknown reason"
}
```

How can I fix this?

Tags: ansible, apt-get, package-management | Score: 10 | Views: 38,307 | Answers: 8
Link: https://stackoverflow.com/questions/71748254/ansible-playbook-failed-to-update-cache-unknown-reason
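For reference, the sources.list syntax that skips the i386 fetch is arch=amd64 inside square brackets; the repository URL below is a placeholder since the real one is elided in the question. Note also that the apt module may update the cache via python-apt rather than the apt-get binary you tested, so setting force_apt_get: true on the task (or re-running with -vvvv) often surfaces the real error hiding behind "unknown reason":

```
# /etc/apt/sources.list.d/<repo>.list  (URL is a placeholder)
deb [arch=amd64] <repository-url> focal stable
```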
**61734226 — Ansible Lint reports "Package installs should not use latest"**

I've finally started using Ansible Lint to ensure I'm up to date and not missing things, and I've found it reporting a curious error/notice. When I use dnf to install a package, I've been using state: latest, as it's for a system bootstrapping process that I may run multiple times on the same instance, particularly during development. I always want the latest package installed in this scenario, however Ansible Lint reports:

```
Package installs should not use latest
```

While I'm confident that I'm OK in my use case, is this simply because, in the interest of idempotency, one would normally not want this behaviour? Or is there another reason? If it's always going to be reported, then why even offer the latest state option?

Tags: ansible, ansible-lint | Score: 10 | Views: 8,461 | Answers: 2
Link: https://stackoverflow.com/questions/61734226/ansible-lint-reports-package-installs-should-not-use-latest
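The rule exists because state: latest makes a play non-idempotent (it can report changed and pull surprise upgrades on every run), but when latest is genuinely intended, ansible-lint supports silencing the rule per task rather than project-wide. A sketch (the noqa rule name varies across ansible-lint versions; older releases used the numeric id 403):

```yaml
- name: ensure newest package during bootstrap
  dnf:
    name: somepackage        # placeholder package name
    state: latest  # noqa package-latest
```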
**49045043 — Remove block of text from config file using ansible**

I am trying to remove the section below from the samba config file smb.conf:

```
[public]
path = /opt/samba/public
guest ok = yes
browsable = yes
writable = yes
read only = no
```

The blockinfile module won't work, as there are no markers. lineinfile will also have a problem, as some lines are common to other sections, e.g.:

```
browsable = yes
writable = yes
```

How do I remove these lines using ansible?

PS: Replacing the config file with a new one is not possible, as each server has a unique user mapped to it (not ideal when running batch jobs).

Tags: ansible | Score: 10 | Views: 14,939 | Answers: 3
Link: https://stackoverflow.com/questions/49045043/remove-block-of-text-from-config-file-using-ansible
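Since smb.conf is INI-style, one approach worth testing is the ini_file module, whose documented behaviour with state: absent and a section but no option is to remove the entire section (this assumes your smb.conf survives round-tripping through ini_file's formatting, which Samba is generally tolerant of):

```yaml
- name: remove the [public] share from smb.conf
  ini_file:
    path: /etc/samba/smb.conf
    section: public
    state: absent
```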
**46396732 — Ansible: Global template folder?**

Couldn't find anything by googling. There is group_vars/all/ for variables. Is there something similar for templates? I would like to use some templates across multiple roles.

Tags: ansible, ansible-template | Score: 10 | Views: 9,096 | Answers: 3
Link: https://stackoverflow.com/questions/46396732/ansible-global-template-folder
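There is no built-in templates/all/ equivalent, but the template module's src accepts absolute paths, so every role can point at one shared folder via the playbook_dir variable. A sketch (the shared_templates folder name is an invention for illustration):

```yaml
- name: render a template shared across roles
  template:
    src: "{{ playbook_dir }}/shared_templates/foo.conf.j2"
    dest: /etc/foo.conf
```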
46,045,192
|
How to automatically install Ansible Galaxy roles, using Vagrant?
|
Using one playbook only, then it's not possible to have Ansible automagically install the dependent roles. At least according to this SO thread . But, I have the added "advantage" of using Vagrant and Vagrant's Ansible local provisioner . Any tricks I may apply?
|
How to automatically install Ansible Galaxy roles, using Vagrant? When using only one playbook, it's not possible to have Ansible automagically install the dependent roles. At least according to this SO thread . But, I have the added "advantage" of using Vagrant and Vagrant's Ansible local provisioner . Any tricks I may apply?
|
vagrant, ansible, vagrantfile, ansible-role, ansible-galaxy
| 10
| 7,550
| 1
|
https://stackoverflow.com/questions/46045192/how-to-automatically-install-ansible-galaxy-roles-using-vagrant
|
39,856,009
|
Ansible synchronize mode permissions
|
I'm using an Ansible playbook to copy files between my host and a server. The thing is, I have to run the script repeatedly in order to upload some updates. At the beginning I was using the "copy" module of Ansible, but to improve the performance of synchronizing files and directories, I've now switched to the "synchronize" module. That way I can ensure Ansible uses rsync instead of sftp or scp. With the "copy" module, I was able to specify the file's mode on the destination host by adding the mode option (e.g. mode=644 ). I want to do that using synchronize, but it only has the perms option that accepts yes or no as values. Is there a way to specify the file's mode using "synchronize", without having to inherit it? Thx!
|
Ansible synchronize mode permissions I'm using an Ansible playbook to copy files between my host and a server. The thing is, I have to run the script repeatedly in order to upload some updates. At the beginning I was using the "copy" module of Ansible, but to improve the performance of synchronizing files and directories, I've now switched to the "synchronize" module. That way I can ensure Ansible uses rsync instead of sftp or scp. With the "copy" module, I was able to specify the file's mode on the destination host by adding the mode option (e.g. mode=644 ). I want to do that using synchronize, but it only has the perms option that accepts yes or no as values. Is there a way to specify the file's mode using "synchronize", without having to inherit it? Thx!
|
ansible, rsync
| 10
| 9,262
| 1
|
https://stackoverflow.com/questions/39856009/ansible-synchronize-mode-permissions
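Since `synchronize` exposes no per-file `mode` option, one common workaround (a sketch, not from the question; paths are assumptions) is to enforce the mode in a follow-up `file` task after the rsync:

```yaml
# Hedged sketch: synchronize copies the tree, then file fixes permissions.
- name: Sync the tree (rsync; no mode control available here)
  synchronize:
    src: files/app/        # src/dest paths assumed for illustration
    dest: /opt/app/
    archive: yes

- name: Enforce the desired mode afterwards
  file:
    path: /opt/app/
    mode: '0644'
    recurse: yes
    state: directory
```

The second task is cheap and idempotent, so running it on every play only reports changed when a mode actually drifted.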
|
28,789,912
|
Ansible - Default/Explicit Tags
|
I've got a playbook that includes and tags various roles: - name: base hosts: "{{ host | default('localhost') }}" roles: - { role: apt, tags: [ 'base', 'apt', 'ubuntu']} - { role: homebrew, tags: [ 'base', 'homebrew', 'osx' ]} - { role: base16, tags: [ 'base', 'base16', 'osx' ]} - { role: nodejs, tags: [ 'base', 'nodejs' ]} - { role: tmux, tags: [ 'base', 'tmux' ]} - { role: vim, tags: [ 'base', 'vim' ]} - { role: virtualenv, tags: [ 'base', 'virtualenv', 'python' ]} - { role: homebrew_cask, tags: [ 'desktop', 'homebrew_cask', 'osx' ]} - { role: gnome_terminator, tags: [ 'desktop', 'gnome_terminator', 'ubuntu' ]} Most of the tasks are using when clauses to determine which OS they should run on, for example: - name: install base packages when: ansible_distribution == 'MacOSX' sudo: no homebrew: name: "{{ item.name }}" state: latest install_options: "{{ item.install_options|default() }}" with_items: homebrew_packages If I run ansible-playbook base.yml without specifying any tags, all the tasks run. If I specify a tag, for example ansible-playbook base.yml --tags='base' , only the roles tagged with base run. By default (if no tags are specified), I'd only like to run the roles tagged with 'base' , and not the roles tagged with 'desktop' . It would also be really nice to set a default 'os' tag, based on the current operating system, to avoid including all the tasks for ubuntu when I'm running the playbook on OSX (and vice-versa). Any ideas if this is possible, and how I might do it?
|
Ansible - Default/Explicit Tags I've got a playbook that includes and tags various roles: - name: base hosts: "{{ host | default('localhost') }}" roles: - { role: apt, tags: [ 'base', 'apt', 'ubuntu']} - { role: homebrew, tags: [ 'base', 'homebrew', 'osx' ]} - { role: base16, tags: [ 'base', 'base16', 'osx' ]} - { role: nodejs, tags: [ 'base', 'nodejs' ]} - { role: tmux, tags: [ 'base', 'tmux' ]} - { role: vim, tags: [ 'base', 'vim' ]} - { role: virtualenv, tags: [ 'base', 'virtualenv', 'python' ]} - { role: homebrew_cask, tags: [ 'desktop', 'homebrew_cask', 'osx' ]} - { role: gnome_terminator, tags: [ 'desktop', 'gnome_terminator', 'ubuntu' ]} Most of the tasks are using when clauses to determine which OS they should run on, for example: - name: install base packages when: ansible_distribution == 'MacOSX' sudo: no homebrew: name: "{{ item.name }}" state: latest install_options: "{{ item.install_options|default() }}" with_items: homebrew_packages If I run ansible-playbook base.yml without specifying any tags, all the tasks run. If I specify a tag, for example ansible-playbook base.yml --tags='base' , only the roles tagged with base run. By default (if no tags are specified), I'd only like to run the roles tagged with 'base' , and not the roles tagged with 'desktop' . It would also be really nice to set a default 'os' tag, based on the current operating system, to avoid including all the tasks for ubuntu when I'm running the playbook on OSX (and vice-versa). Any ideas if this is possible, and how I might do it?
|
tags, ansible
| 10
| 13,099
| 6
|
https://stackoverflow.com/questions/28789912/ansible-default-explicit-tags
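In newer Ansible (2.5 and later — after this question was asked), the special `never` tag covers the "skip by default" half of this: a role tagged `never` only runs when one of its other tags is explicitly requested. A hedged sketch, reusing the role names from the question:

```yaml
# Hedged sketch: 'never' (Ansible 2.5+) keeps the desktop roles from
# running unless their tag is requested, e.g. --tags desktop
- name: base
  hosts: "{{ host | default('localhost') }}"
  roles:
    - { role: vim,           tags: ['base', 'vim'] }
    - { role: homebrew_cask, tags: ['desktop', 'homebrew_cask', 'never'] }
```

With this layout, a plain `ansible-playbook base.yml` runs only the base roles, while `--tags desktop` opts in to the desktop ones.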
|
33,593,516
|
How to add a host to the known_host file with ansible?
|
I want to add the ssh key for my private git server to the known_hosts file with ansible 1.9.3 but it doesn't work. I have the following entry in my playbook: - name: add SSH host key known_hosts: name='myhost.com' key="{{ lookup('file', 'host_key.pub') }}" I have copied /etc/ssh/ssh_host_rsa_key.pub to host_key.pub and the file looks like: ssh-rsa AAAAB3NzaC1... root@myhost.com If I run my playbook I always get the following error message: TASK: [add SSH host key] ****************************************************** failed: [default] => {"cmd": "/usr/bin/ssh-keygen -F myhost.com -f /tmp/tmpe5KNIW", "failed": true, "rc": 1} What I am doing wrong?
|
How to add a host to the known_host file with ansible? I want to add the ssh key for my private git server to the known_hosts file with ansible 1.9.3 but it doesn't work. I have the following entry in my playbook: - name: add SSH host key known_hosts: name='myhost.com' key="{{ lookup('file', 'host_key.pub') }}" I have copied /etc/ssh/ssh_host_rsa_key.pub to host_key.pub and the file looks like: ssh-rsa AAAAB3NzaC1... root@myhost.com If I run my playbook I always get the following error message: TASK: [add SSH host key] ****************************************************** failed: [default] => {"cmd": "/usr/bin/ssh-keygen -F myhost.com -f /tmp/tmpe5KNIW", "failed": true, "rc": 1} What I am doing wrong?
|
ansible
| 10
| 15,381
| 4
|
https://stackoverflow.com/questions/33593516/how-to-add-a-host-to-the-known-host-file-with-ansible
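A likely cause of the `ssh-keygen -F` failure here (a hedged reading, not confirmed by the question) is that the `known_hosts` module expects a full known_hosts-style line — hostname followed by key type and key — while `host_key.pub` contains only the bare public key. Prefixing the hostname is a minimal fix sketch:

```yaml
# Hedged sketch: known_hosts wants "<hostname> ssh-rsa AAAA...", so the
# hostname is prepended to the bare key read from the file.
- name: add SSH host key
  known_hosts:
    name: myhost.com
    key: "myhost.com {{ lookup('file', 'host_key.pub') }}"
```

The trailing `root@myhost.com` comment in the .pub file is harmless in known_hosts format.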
|
19,127,493
|
How do you prevent a dpkg installation task to notify a changed state when it runs for the second time?
|
There isn't a module for installing .deb packages directly. When you have to run dpkg as a command, it always marks the installation task as one that has changed. I had some trouble configuring it correctly, so I'm posting here as a public notebook. Here is the task to install with dpkg: - name: Install old python command: dpkg -i {{ temp_dir }}/{{ item }} with_items: - python2.4-minimal_2.4.6-6+precise1_i386.deb - python2.4_2.4.6-6+{{ ubuntu_release }}1_i386.deb - libpython2.4_2.4.6-6+{{ ubuntu_release }}1_i386.deb - python2.4-dev_2.4.6-6+{{ ubuntu_release }}1_i386.deb The files were uploaded to {{ temp_dir }} in another task.
|
How do you prevent a dpkg installation task to notify a changed state when it runs for the second time? There isn't a module for installing .deb packages directly. When you have to run dpkg as a command, it always marks the installation task as one that has changed. I had some trouble configuring it correctly, so I'm posting here as a public notebook. Here is the task to install with dpkg: - name: Install old python command: dpkg -i {{ temp_dir }}/{{ item }} with_items: - python2.4-minimal_2.4.6-6+precise1_i386.deb - python2.4_2.4.6-6+{{ ubuntu_release }}1_i386.deb - libpython2.4_2.4.6-6+{{ ubuntu_release }}1_i386.deb - python2.4-dev_2.4.6-6+{{ ubuntu_release }}1_i386.deb The files were uploaded to {{ temp_dir }} in another task.
|
deb, dpkg, ansible
| 10
| 6,014
| 3
|
https://stackoverflow.com/questions/19127493/how-do-you-prevent-a-dpkg-installation-task-to-notify-a-changed-state-when-it-ru
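One common pattern for making a raw `dpkg -i` idempotent (a sketch, not from the question; the variable name `pkg_check` is made up) is to probe with `dpkg-query` first and gate the install on its return code:

```yaml
# Hedged sketch: dpkg-query returns non-zero when the package is absent;
# failed_when/changed_when keep the probe itself green and unchanged.
- name: Check whether the package is already installed
  command: dpkg-query -W python2.4-minimal
  register: pkg_check
  failed_when: false
  changed_when: false

- name: Install old python
  command: dpkg -i {{ temp_dir }}/python2.4-minimal_2.4.6-6+precise1_i386.deb
  when: pkg_check.rc != 0
```

Note that since Ansible 1.6 the apt module also accepts a local file via its `deb:` option, which is idempotent on its own and avoids the probe entirely.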
|
41,617,285
|
Undefined variable when running Ansible play
|
I am running multiple ansible plays defined in YAML files. In the last play I get the following error: {"failed": true, "msg": "The conditional check 'ansible_os_family == \"RedHat\"' failed. The error was: error while evaluating conditional (ansible_os_family == \"RedHat\"): 'ansible_os_family' is undefined\n Do I need to change anything with the facts gathering or something in the ansible.cfg ?
|
Undefined variable when running Ansible play I am running multiple ansible plays defined in YAML files. In the last play I get the following error: {"failed": true, "msg": "The conditional check 'ansible_os_family == \"RedHat\"' failed. The error was: error while evaluating conditional (ansible_os_family == \"RedHat\"): 'ansible_os_family' is undefined\n Do I need to change anything with the facts gathering or something in the ansible.cfg ?
|
ansible, ansible-2.x, ansible-facts
| 10
| 12,928
| 2
|
https://stackoverflow.com/questions/41617285/undefined-variable-when-running-ansible-play
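The usual cause of `ansible_os_family is undefined` (a hedged reading) is that the play runs with fact gathering disabled, or the host never appeared in a play that gathered facts. A minimal sketch of the fix:

```yaml
# Hedged sketch: facts such as ansible_os_family only exist after the
# setup module has run for that host.
- hosts: all
  gather_facts: true
  tasks:
    - name: example conditional that needs facts
      debug:
        msg: "RedHat-family host"
      when: ansible_os_family == "RedHat"
```

If an earlier play must stay fact-free, running the `setup` module explicitly (or enabling fact caching) before the conditional play also works.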
|
39,565,561
|
How can I specify a different domain name in Ansible inventory
|
I am using a Linux VM to manage many Linux boxes (in a different domain). I find it annoying to use the FQDN for each individual server, because our internal domain name is very long. For example [web] serve1.part.one.of.very.long.internal.domain.name.com anotherserver.part.one.of.very.long.internal.domain.name.com Is there a way to specify a default domain for groups of servers in the inventory? I tried adding an ansible_domain variable in the inventory file, but it did not work.
|
How can I specify a different domain name in Ansible inventory I am using a Linux VM to manage many Linux boxes (in a different domain). I find it annoying to use the FQDN for each individual server, because our internal domain name is very long. For example [web] serve1.part.one.of.very.long.internal.domain.name.com anotherserver.part.one.of.very.long.internal.domain.name.com Is there a way to specify a default domain for groups of servers in the inventory? I tried adding an ansible_domain variable in the inventory file, but it did not work.
|
ansible
| 10
| 12,563
| 2
|
https://stackoverflow.com/questions/39565561/how-can-i-specify-a-different-domain-name-in-ansible-inventory
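One workaround (a sketch, not from the question) is to keep short names in the inventory and build the FQDN in a group variable via `ansible_host`, which Ansible evaluates per host:

```yaml
# inventory — short aliases only
# [web]
# serve1
# anotherserver

# group_vars/web.yml — hedged sketch: derive the connection address
# from the short inventory name
ansible_host: "{{ inventory_hostname }}.part.one.of.very.long.internal.domain.name.com"
```

`inventory_hostname` stays short for display and host patterns, while connections still go to the full FQDN.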
|
38,076,968
|
Jinja2 template variables to one line
|
Is it possible to create a jinja2 template that puts variables on one line? Something like this but instead of having two lines in the results have them comma separated. Template: {% for host in groups['tag_Function_logdb'] %} elasticsearch_discovery_zen_ping_unicast_hosts = {{ host }}:9300 {% endfor %} Results: elasticsearch_discovery_zen_ping_unicast_hosts = 1.1.1.1:9300 elasticsearch_discovery_zen_ping_unicast_hosts = 2.2.2.2:9300 Desired Results: elasticsearch_discovery_zen_ping_unicast_hosts = 1.1.1.1:9300,2.2.2.2:9300 Edit, this works for 2 items, better solution below: elasticsearch_discovery_zen_ping_unicast_hosts = {% for host in groups['tag_Function_logdb'] %} {{ host }}:9300 {%- if loop.first %},{% endif %} {% endfor %}
|
Jinja2 template variables to one line Is it possible to create a jinja2 template that puts variables on one line? Something like this but instead of having two lines in the results have them comma separated. Template: {% for host in groups['tag_Function_logdb'] %} elasticsearch_discovery_zen_ping_unicast_hosts = {{ host }}:9300 {% endfor %} Results: elasticsearch_discovery_zen_ping_unicast_hosts = 1.1.1.1:9300 elasticsearch_discovery_zen_ping_unicast_hosts = 2.2.2.2:9300 Desired Results: elasticsearch_discovery_zen_ping_unicast_hosts = 1.1.1.1:9300,2.2.2.2:9300 Edit, this works for 2 items, better solution below: elasticsearch_discovery_zen_ping_unicast_hosts = {% for host in groups['tag_Function_logdb'] %} {{ host }}:9300 {%- if loop.first %},{% endif %} {% endfor %}
|
ansible, jinja2
| 10
| 33,302
| 3
|
https://stackoverflow.com/questions/38076968/jinja2-template-variables-to-one-line
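A sketch of the general fix (not limited to two items, unlike the edit in the question): emit the separator only between items using `loop.last`, keeping everything on one template line:

```jinja2
{# Hedged sketch: loop.last suppresses the trailing comma, so the
   host:port pairs land on a single comma-separated line #}
elasticsearch_discovery_zen_ping_unicast_hosts = {% for host in groups['tag_Function_logdb'] %}{{ host }}:9300{% if not loop.last %},{% endif %}{% endfor %}
```

The whole `{% for %}…{% endfor %}` construct must stay on one physical line (or use whitespace-trimming `{%- -%}` tags) so Jinja2 does not emit newlines between iterations.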
|
28,777,306
|
Redirect command output to file (existing command)
|
To write the stdout of a command (in this case, echo hi ) to a file, you can do: echo hi > outfile I would like a command instead of a redirection or pipe so that I do not need to invoke a shell. This is eventually for use with Ansible, which calls python's subprocess.Popen . I am looking for: stdout-to-file outfile echo hi tee makes copying stdout to a file easy enough, but it accepts stdin, not a separate command. Is there a common, portable command that does this? It's easy enough to write one, of course, but that's not the question. Ultimately, in Ansible, I want to do: command: to-file /opt/binary_data base64 -d {{ base64_secret }} Instead of: shell: base64 -d {{ base64_secret }} > /opt/binary_data Edit: Looking for a command available on RHEL 7, Fedora 21
|
Redirect command output to file (existing command) To write the stdout of a command (in this case, echo hi ) to a file, you can do: echo hi > outfile I would like a command instead of a redirection or pipe so that I do not need to invoke a shell. This is eventually for use with Ansible, which calls python's subprocess.Popen . I am looking for: stdout-to-file outfile echo hi tee makes copying stdout to a file easy enough, but it accepts stdin, not a separate command. Is there a common, portable command that does this? It's easy enough to write one, of course, but that's not the question. Ultimately, in Ansible, I want to do: command: to-file /opt/binary_data base64 -d {{ base64_secret }} Instead of: shell: base64 -d {{ base64_secret }} > /opt/binary_data Edit: Looking for a command available on RHEL 7, Fedora 21
|
bash, ansible
| 10
| 42,780
| 2
|
https://stackoverflow.com/questions/28777306/redirect-command-output-to-file-existing-command
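For the specific base64 use case in the question, a shell (and an external command) can be avoided entirely with Ansible's `b64decode` filter — a hedged sketch, not the general-purpose redirect command asked for:

```yaml
# Hedged sketch: decode in-process and let the copy module write the
# file; no shell, no redirection, no external command.
- name: Write decoded data to a file
  copy:
    content: "{{ base64_secret | b64decode }}"
    dest: /opt/binary_data
```

Caveat: `content` goes through string handling, so for truly binary (non-UTF-8) payloads this may mangle bytes; in that case writing the raw base64 to a temp file and decoding it on the target is safer.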
|
54,077,230
|
Does Ansible manages all hosts in parallel or just five? (-f and :serial)
|
I read these two Ansible docs: ansible-playbook -f --> Statement 1 ansible-playbook :serial --> Statement 2 and I found these two statements: Statement 1 -f <FORKS>, --forks <FORKS> specify number of parallel processes to use (default=5) Statement 2 Rolling Update Batch Size . By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword: Question What's correct? Does Ansible use all hosts at once or just 5? Or is 5 maybe just the default value of the -f parameter? Thanks for clarifying that! Cheers
|
Does Ansible manage all hosts in parallel or just five? (-f and :serial) I read these two Ansible docs: ansible-playbook -f --> Statement 1 ansible-playbook :serial --> Statement 2 and I found these two statements: Statement 1 -f <FORKS>, --forks <FORKS> specify number of parallel processes to use (default=5) Statement 2 Rolling Update Batch Size . By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword: Question What's correct? Does Ansible use all hosts at once or just 5? Or is 5 maybe just the default value of the -f parameter? Thanks for clarifying that! Cheers
|
parallel-processing, ansible, fork
| 10
| 10,425
| 2
|
https://stackoverflow.com/questions/54077230/does-ansible-manages-all-hosts-in-parallel-or-just-five-f-and-serial
|
37,538,679
|
How to prevent Jinja2 substitution in Ansible playbook?
|
In my playbook, a JSON file is included using the include_vars module. The content of the JSON file is as given below: { "Component1": { "parameter1" : "value1", "parameter2" : "value2" }, "Component2": { "parameter1" : "{{ NET_SEG_VLAN }}", "parameter2": "value2" } } After the JSON file is included in the playbook, I am using the uri module to send an http request as given below: - name: Configure Component2 variables using REST API uri: url: "[URL] method: POST return_content: yes HEADER_x-auth-token: "{{ login_resp.json.token }}" HEADER_Content-Type: "application/json" body: "{{ Component2 }}" body_format: json As can be seen, the body of the http request is sent with the JSON data Component2 . However, Jinja2 tries to substitute the {{ NET_SEG_VLAN }} in the JSON file and throws an undefined variable error. The intention is not to substitute anything inside the JSON file using Jinja2 and to send the body as-is in the http request. How to prevent the Jinja2 substitution for the variables included from the JSON file?
|
How to prevent Jinja2 substitution in Ansible playbook? In my playbook, a JSON file is included using the include_vars module. The content of the JSON file is as given below: { "Component1": { "parameter1" : "value1", "parameter2" : "value2" }, "Component2": { "parameter1" : "{{ NET_SEG_VLAN }}", "parameter2": "value2" } } After the JSON file is included in the playbook, I am using the uri module to send an http request as given below: - name: Configure Component2 variables using REST API uri: url: "[URL] method: POST return_content: yes HEADER_x-auth-token: "{{ login_resp.json.token }}" HEADER_Content-Type: "application/json" body: "{{ Component2 }}" body_format: json As can be seen, the body of the http request is sent with the JSON data Component2 . However, Jinja2 tries to substitute the {{ NET_SEG_VLAN }} in the JSON file and throws an undefined variable error. The intention is not to substitute anything inside the JSON file using Jinja2 and to send the body as-is in the http request. How to prevent the Jinja2 substitution for the variables included from the JSON file?
|
jinja2, ansible, configuration-management
| 10
| 8,315
| 3
|
https://stackoverflow.com/questions/37538679/how-to-prevent-jinja2-substitution-in-ansible-playbook
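If the vars file can be YAML instead of JSON, Ansible's `!unsafe` tag marks a value as literal text that Jinja2 will never template — a hedged sketch of that variant (the original JSON file itself has no equivalent escape, short of doubling the braces):

```yaml
# Hedged sketch: !unsafe tells Ansible to treat the value as a plain
# string, so {{ NET_SEG_VLAN }} survives include_vars and the uri body
# untouched.
Component2:
  parameter1: !unsafe '{{ NET_SEG_VLAN }}'
  parameter2: value2
```

With this vars file, `body: "{{ Component2 }}"` serializes the moustache string verbatim into the request body.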
|
65,322,926
|
Access variable from one role in another role in an Ansible playbook with multiple hosts
|
I'm using the latest version of Ansible, and I am trying to use a default variable in role-one used on host one , in role-two , used on host two , but I can't get it to work. Nothing I have found in the documentation or on StackOverflow has really helped. I'm not sure what I am doing wrong. Ideally I want to set the value of the variable once, and be able to use it in another role for any host in my playbook. I've broken it down below. In my inventory I have a hosts group called [test] which has two hosts aliased as one and two . [test] one ansible_host=10.0.1.10 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key> two ansible_host=10.0.1.20 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key> I have a single playbook with a play for each of these hosts and I supply the hosts: value as "{{ host_group }}[0]" for host one and "{{ host_group }}[1]" for host two . The play for host one uses a role called role-one and the play for host two uses a role called role-two . - name: Test Sharing Role Variables hosts: "{{ host_group }}[0]" roles: - ../../ansible-roles/role-one - name: Test Sharing Role Variables hosts: "{{ host_group }}[1]" roles: - ../../ansible-roles/role-two In role-one I have set a variable variable_one . --- # defaults file for role-one variable_one: Role One Variable I want to use the value of variable_one in a template in role-two but I haven't had any luck. I'm using the below as a task in role-two to test and see if the variable is getting "picked-up". --- # tasks file for role-two - debug: msg: "{{ variable_one }}" When I run the playbook with ansible-playbook test.yml --extra-vars "host_group=test" I get the below failure. TASK [../../ansible-roles/role-two : debug] *********************************************************************************************************************************************************************************************** fatal: [two]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['test']\" is undefined\n\nThe error appears to be in 'ansible-roles/role-two/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for role-two\n- debug:\n ^ here\n"}
|
Access variable from one role in another role in an Ansible playbook with multiple hosts I'm using the latest version of Ansible, and I am trying to use a default variable in role-one used on host one , in role-two , used on host two , but I can't get it to work. Nothing I have found in the documentation or on StackOverflow has really helped. I'm not sure what I am doing wrong. Ideally I want to set the value of the variable once, and be able to use it in another role for any host in my playbook. I've broken it down below. In my inventory I have a hosts group called [test] which has two hosts aliased as one and two . [test] one ansible_host=10.0.1.10 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key> two ansible_host=10.0.1.20 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key> I have a single playbook with a play for each of these hosts and I supply the hosts: value as "{{ host_group }}[0]" for host one and "{{ host_group }}[1]" for host two . The play for host one uses a role called role-one and the play for host two uses a role called role-two . - name: Test Sharing Role Variables hosts: "{{ host_group }}[0]" roles: - ../../ansible-roles/role-one - name: Test Sharing Role Variables hosts: "{{ host_group }}[1]" roles: - ../../ansible-roles/role-two In role-one I have set a variable variable_one . --- # defaults file for role-one variable_one: Role One Variable I want to use the value of variable_one in a template in role-two but I haven't had any luck. I'm using the below as a task in role-two to test and see if the variable is getting "picked-up". --- # tasks file for role-two - debug: msg: "{{ variable_one }}" When I run the playbook with ansible-playbook test.yml --extra-vars "host_group=test" I get the below failure. TASK [../../ansible-roles/role-two : debug] *********************************************************************************************************************************************************************************************** fatal: [two]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['test']\" is undefined\n\nThe error appears to be in 'ansible-roles/role-two/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for role-two\n- debug:\n ^ here\n"}
|
ansible
| 10
| 22,648
| 2
|
https://stackoverflow.com/questions/65322926/access-variable-from-one-role-in-another-role-in-an-ansible-playbook-with-multip
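Role defaults only exist on hosts that ran the role, so host two never sees role-one's value. One common pattern (a sketch, not from the question; the fact name `variable_one_fact` is made up) is to publish the value as a host fact in play one and read it through `hostvars` in play two:

```yaml
# Hedged sketch: set_fact attaches the value to host "one"; any later
# play in the same run can read it via hostvars.
- name: Play for host one
  hosts: one
  roles:
    - role-one
  tasks:
    - name: Publish the role default as a host fact
      set_fact:
        variable_one_fact: "{{ variable_one }}"

- name: Play for host two
  hosts: two
  tasks:
    - debug:
        msg: "{{ hostvars['one']['variable_one_fact'] }}"
```

Note that the hostvars key must be a host name ('one'), not a group name ('test') — which matches the `hostvars['test'] is undefined` error in the question.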
|
55,570,502
|
including handlers from different file
|
The handlers I have are not being run by the playbook or tasks. I have the following directory structure: <project> - playbook.yml - <roles> -<handler> - main.yml -<meta> -<tasks> -main.yml The problem is the handler is never called. tasks/main.yml: - name: run task1 command: run_task notify: "test me now" handler/main.yml: - name: tested register: val1 listen: "test me now" The playbook just calls the task/main.yml and has host:all Do I need an include/import? I tried in the playbook but it didn't help
|
including handlers from different file The handlers I have are not being run by the playbook or tasks. I have the following directory structure: <project> - playbook.yml - <roles> -<handler> - main.yml -<meta> -<tasks> -main.yml The problem is the handler is never called. tasks/main.yml: - name: run task1 command: run_task notify: "test me now" handler/main.yml: - name: tested register: val1 listen: "test me now" The playbook just calls the task/main.yml and has host:all Do I need an include/import? I tried in the playbook but it didn't help
|
ansible, listener, handler
| 10
| 14,121
| 3
|
https://stackoverflow.com/questions/55570502/including-handlers-from-different-file
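Two likely culprits stand out in the question (a hedged reading): the role directory is named `handler` where Ansible only auto-loads `handlers/main.yml`, and the handler itself has no module to run. A minimal sketch of a working handler file:

```yaml
# roles/<role>/handlers/main.yml — note the directory must be
# "handlers" (plural) for Ansible to load it automatically.
# Hedged sketch: the handler needs an actual module (debug here);
# "register: listen:" alone is not a runnable task.
- name: tested
  debug:
    msg: "handler ran"
  listen: "test me now"
```

Handlers also only fire when the notifying task reports `changed`, so a task that comes back `ok` will never trigger one.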
|
54,606,581
|
Ignore all errors in vimrc at vim startup
|
I am trying to create an Ansible script to set up my Mac. One role is to set up vim. I first clone my dot-files into a local folder and symlink them to ~/. In my vimrc I use Vundle to install extensions. So I try to start vim to install all extensions like this: - name: vim | Install vundle plugins shell: vim +PluginInstall +qall But when I start this, I get the error: E185: Cannot find color scheme 'molokai' Is it possible to suppress this error message for the first startup?
|
Ignore all errors in vimrc at vim startup I am trying to create an Ansible script to set up my Mac. One role is to set up vim. I first clone my dot-files into a local folder and symlink them to ~/. In my vimrc I use Vundle to install extensions. So I try to start vim to install all extensions like this: - name: vim | Install vundle plugins shell: vim +PluginInstall +qall But when I start this, I get the error: E185: Cannot find color scheme 'molokai' Is it possible to suppress this error message for the first startup?
|
vim, ansible, vundle
| 10
| 3,838
| 5
|
https://stackoverflow.com/questions/54606581/ignore-all-errors-in-vimrc-at-vim-startup
|
48,661,016
|
Ansible playbook, what is the proper syntax to run a powershell script with a specific (domain) user, in an elevated mode?
|
running Ansible 2.4.2 in an offline environment, using kerberos to authenticate, Via an ansible playbook, what is the proper syntax to run a powershell script with a specific (domain) user: DOMAIN\someuser, in an elevated mode? By elevated mode I mean that in the Windows interface, I'd run the script by logging in as DOMAIN\someuser , then right-clicking a cmd or powershell prompt shortcut and choosing "run as administrator". This of course does not mean I can run the script with the local user: "administrator". What I want to run is: powershell.exe -executionpolicy bypass -noninteractive -nologo -file "myscript.ps1" What I tried in a become.yml: - name: sigh win_command: powershell.exe -executionpolicy bypass -noninteractive -nologo -file "myscript.ps1" become: yes become_user: DOMAIN\someuser become_password: someuserpassword become_method: runas The script runs, with errors that relate to it not running in elevation. Tried the same with win_shell and raw. Tried without the become_user and become_password (the yml runs with the someuser@DOMAIN.local user and password so I don't really know if it's required for become). I'm dragging through this and finding no reference to a solution via become: [URL] Any ideas?
|
Ansible playbook, what is the proper syntax to run a powershell script with a specific (domain) user, in an elevated mode? running Ansible 2.4.2 in an offline environment, using kerberos to authenticate, Via an ansible playbook, what is the proper syntax to run a powershell script with a specific (domain) user: DOMAIN\someuser, in an elevated mode? By elevated mode I mean that in the Windows interface, I'd run the script by logging in as DOMAIN\someuser , then right-clicking a cmd or powershell prompt shortcut and choosing "run as administrator". This of course does not mean I can run the script with the local user: "administrator". What I want to run is: powershell.exe -executionpolicy bypass -noninteractive -nologo -file "myscript.ps1" What I tried in a become.yml: - name: sigh win_command: powershell.exe -executionpolicy bypass -noninteractive -nologo -file "myscript.ps1" become: yes become_user: DOMAIN\someuser become_password: someuserpassword become_method: runas The script runs, with errors that relate to it not running in elevation. Tried the same with win_shell and raw. Tried without the become_user and become_password (the yml runs with the someuser@DOMAIN.local user and password so I don't really know if it's required for become). I'm dragging through this and finding no reference to a solution via become: [URL] Any ideas?
|
windows, powershell, ansible, ansible-2.x, kerberos-delegation
| 10
| 58,210
| 2
|
https://stackoverflow.com/questions/48661016/ansible-playbook-what-is-the-proper-syntax-to-run-a-powershell-script-with-a-sp
|
48,653,092
|
Ansible with items in range
|
I would like to achieve something like this with ansible - debug: msg: "{{ item }}" with_items: - "0" - "1" But to be generated from a range(2) instead of having hardcoded the iterations. How would you do that?
|
Ansible with items in range I would like to achieve something like this with ansible - debug: msg: "{{ item }}" with_items: - "0" - "1" But to be generated from a range(2) instead of having hardcoded the iterations. How would you do that?
|
loops, ansible
| 10
| 25,027
| 2
|
https://stackoverflow.com/questions/48653092/ansible-with-items-in-range
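Two equivalent sketches for generating the iterations instead of hardcoding them — `with_sequence` (available in older Ansible) and the `range()` function with `loop` (Ansible 2.5+):

```yaml
# Hedged sketch: with_sequence yields the strings "0" and "1"
- debug:
    msg: "{{ item }}"
  with_sequence: start=0 end=1

# Hedged sketch: range(2) with loop (Ansible 2.5+) yields integers 0 and 1
- debug:
    msg: "{{ item }}"
  loop: "{{ range(2) | list }}"
```

Note the small type difference: `with_sequence` produces strings while `range()` produces integers, which can matter in comparisons.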
|
40,535,028
|
How to set proxy only for a particular ansible task?
|
I want to set an environment proxy only for a particular Ansible task, like the get_url module, to download some application from the internet. All other tasks should run without any proxy. How do I achieve this?
|
How to set proxy only for a particular ansible task? I want to set an environment proxy only for a particular Ansible task, like the get_url module, to download some application from the internet. All other tasks should run without any proxy. How do I achieve this?
|
proxy, ansible, geturl
| 10
| 29,503
| 2
|
https://stackoverflow.com/questions/40535028/how-to-set-proxy-only-for-a-particular-ansible-task
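The `environment` keyword can be attached to a single task, so the proxy variables exist only while that task runs — a sketch with assumed URL and proxy address:

```yaml
# Hedged sketch: http_proxy/https_proxy are set only for this task;
# every other task in the play runs without them. URL and proxy host
# are placeholders.
- name: Download the application through the proxy
  get_url:
    url: https://example.com/app.tar.gz
    dest: /tmp/app.tar.gz
  environment:
    http_proxy: http://proxy.example.com:3128
    https_proxy: http://proxy.example.com:3128
```

The same `environment` block can also be set at play or block level when several tasks need the proxy.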
|
38,647,864
|
Ansible - grab a key from a dictionary (but not in a loop)
|
Another question regarding dictionaries in Ansible! For convenience, I have certain values for mysql databases held in dictionaries, which works fine to loop over using with_dict to create the DBs and DB users. mysql_dbs: db1: user: db1user pass: "jdhfksjdf" accessible_from: localhost db2: user: db2user pass: "npoaivrpon" accessible_from: localhost task: - name: Configure mysql users mysql_user: name={{ item.value.user }} password={{ item.value.pass }} host={{ item.value.accessible_from }} priv={{ item.key }}.*:ALL state=present with_dict: "{{ mysql_dbs }}" However, I would like to use the key from one of the dictionaries in another task, but I don't want to loop over the dictionaries, I would only like to use one at a time. How would I grab the key that describes the dictionary (sorry, not sure about terminology)? problem task: - name: Add the db1 schema shell: mysql {{ item }} < /path/to/db1.sql with_items: '{{ mysql_dbs[db1] }}' Error in ansible run: fatal: [myhost]: FAILED! => {"failed": true, "msg": "'item' is undefined"} I'm willing to believe with_items isn't the best strategy here, but does anyone have any ideas what is the right one? Thanks in advance, been stuck on this for a while now...
|
Ansible - grab a key from a dictionary (but not in a loop) Another question regarding dictionaries in Ansible! For convenience, I have certain values for mysql databases held in dictionaries, which works fine to loop over using with_dict to create the DBs and DB users. mysql_dbs: db1: user: db1user pass: "jdhfksjdf" accessible_from: localhost db2: user: db2user pass: "npoaivrpon" accessible_from: localhost task: - name: Configure mysql users mysql_user: name={{ item.value.user }} password={{ item.value.pass }} host={{ item.value.accessible_from }} priv={{ item.key }}.*:ALL state=present with_dict: "{{ mysql_dbs }}" However, I would like to use the key from one of the dictionaries in another task, but I don't want to loop over the dictionaries, I would only like to use one at a time. How would I grab the key that describes the dictionary (sorry, not sure about terminology)? problem task: - name: Add the db1 schema shell: mysql {{ item }} < /path/to/db1.sql with_items: '{{ mysql_dbs[db1] }}' Error in ansible run: fatal: [myhost]: FAILED! => {"failed": true, "msg": "'item' is undefined"} I'm willing to believe with_items isn't the best strategy here, but does anyone have any ideas what is the right one? Thanks in advance, been stuck on this for a while now...
|
python, yaml, ansible
| 10
| 56,974
| 1
|
https://stackoverflow.com/questions/38647864/ansible-grab-a-key-from-a-dictionary-but-not-in-a-loop
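A hedged sketch of one way to resolve the question above: the `'item' is undefined` error usually comes from `mysql_dbs[db1]` treating the unquoted `db1` as a variable name rather than a literal key. Quoting the key lets a single entry be read directly, with no loop; the helper variable `db1_user` below is illustrative, not from the original post.

```yaml
# Look up one entry of mysql_dbs by its literal key; no loop needed.
# The key must be quoted, otherwise Jinja2 resolves db1 as a variable name.
- name: Add the db1 schema
  shell: "mysql db1 < /path/to/db1.sql"
  vars:
    db1_user: "{{ mysql_dbs['db1'].user }}"  # example of a quoted-key lookup
```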
|
50,789,819
|
Using variable in default filter in Ansible Jinja2 template
|
Inside an Ansible Jinja2 template I'm trying to set a "default" value which also has a variable in it, but it's printing out the literal rather than interpolating it. For example: homedir = {{ hostvars[inventory_hostname]['instances'][app_instance]['homedir'] | default("/home/{{ app_instance }}/airflow") }} returns: airflow_home = /home/{{ app_instance }}/airflow How do I refer to the app_instance variable?
|
Using variable in default filter in Ansible Jinja2 template Inside an Ansible Jinja2 template I'm trying to set a "default" value which also has a variable in it, but it's printing out the literal rather than interpolating it. For example: homedir = {{ hostvars[inventory_hostname]['instances'][app_instance]['homedir'] | default("/home/{{ app_instance }}/airflow") }} returns: airflow_home = /home/{{ app_instance }}/airflow How do I refer to the app_instance variable?
|
ansible
| 10
| 7,550
| 1
|
https://stackoverflow.com/questions/50789819/using-variable-in-default-filter-in-ansible-jinja2-template
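A hedged sketch for the question above: nested `{{ }}` inside an expression is taken literally; within a Jinja2 expression the variable can be referenced directly, and the `~` operator stringifies and concatenates both sides. A debug task illustrating the corrected expression:

```yaml
# Inside {{ ... }} you are already in a Jinja2 expression, so refer to
# variables directly instead of nesting another {{ }} pair.
- name: Build homedir with a dynamic default
  debug:
    msg: "{{ hostvars[inventory_hostname]['instances'][app_instance]['homedir']
             | default('/home/' ~ app_instance ~ '/airflow') }}"
```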
|
46,700,211
|
Decryption failed (no vault secrets would found that could decrypt)
|
UPDATED: I have organized my configs into a role based directory structure. Some of those roles have default variable files that have encrypted text. Here's a simplified and tested task list that fails: --- - name: 'Include some additional variables' include_vars: dir: "{{playbook_dir}}/roles/foo/defaults/vars" tags: 'debug' - name: 'Debug: display the variables' debug: msg: "{{item}}" with_items: - "{{encrypted_text_from_yml_file}}" tags: 'debug' - name: 'Deploy Foo plugins' block: - name: 'Transfer the folder to the application directory' synchronize: src: 'some_src_folder' dest: "{{some_unencrypted_text_from_another_yml_file}}" archive: false recursive: true tags: 'debug' I'm seeing the following error, however, when executing my playbook: TASK [<some_app> : Transfer the <some_folder> folder to the application directory] ********************************************************************************** fatal: [<some_hostname>]: FAILED! => {"failed": true, "msg": "Decryption failed (no vault secrets would found t hat could decrypt)"} My credentials are being retrieved from a password file. I tossed a debug task right after the variable include and all my variables that were encrypted displayed. The weird thing is the block of tasks where the exception is occurring is using a synchronize module. No variables from the vault are even being used... Any idea how to troubleshoot this? I increased the verbosity up to -vvvv and didn't see anything obvious. Using: ansible 2.4.0.0
|
Decryption failed (no vault secrets would found that could decrypt) UPDATED: I have organized my configs into a role based directory structure. Some of those roles have default variable files that have encrypted text. Here's a simplified and tested task list that fails: --- - name: 'Include some additional variables' include_vars: dir: "{{playbook_dir}}/roles/foo/defaults/vars" tags: 'debug' - name: 'Debug: display the variables' debug: msg: "{{item}}" with_items: - "{{encrypted_text_from_yml_file}}" tags: 'debug' - name: 'Deploy Foo plugins' block: - name: 'Transfer the folder to the application directory' synchronize: src: 'some_src_folder' dest: "{{some_unencrypted_text_from_another_yml_file}}" archive: false recursive: true tags: 'debug' I'm seeing the following error, however, when executing my playbook: TASK [<some_app> : Transfer the <some_folder> folder to the application directory] ********************************************************************************** fatal: [<some_hostname>]: FAILED! => {"failed": true, "msg": "Decryption failed (no vault secrets would found t hat could decrypt)"} My credentials are being retrieved from a password file. I tossed a debug task right after the variable include and all my variables that were encrypted displayed. The weird thing is the block of tasks where the exception is occurring is using a synchronize module. No variables from the vault are even being used... Any idea how to troubleshoot this? I increased the verbosity up to -vvvv and didn't see anything obvious. Using: ansible 2.4.0.0
|
ansible
| 10
| 50,492
| 4
|
https://stackoverflow.com/questions/46700211/decryption-failed-no-vault-secrets-would-found-that-could-decrypt
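A hedged troubleshooting sketch for the vault question above (variable names are the post's): because templating is lazy, a vaulted value may only be decrypted at the task that happens to render it, so the failing task is not necessarily the one referencing the bad secret. Forcing evaluation of each vaulted variable early can reveal which blob no configured secret decrypts, for example one encrypted with a different password than the password file provides.

```yaml
# Force each vaulted variable to be rendered up front; the first one whose
# ciphertext does not match the configured vault password fails here,
# pointing at the real culprit instead of an unrelated later task.
- name: Confirm vaulted variables decrypt
  debug:
    msg: "decrypted {{ encrypted_text_from_yml_file | length }} characters"
  no_log: true
```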
|
32,698,529
|
ansible - unarchive - input file not found
|
I'm getting this error while Ansible (1.9.2) is trying to unpack the file. 19:06:38 TASK: [jmeter | unpack jmeter] ************************************************ 19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz 19:06:38 19:06:38 FATAL: all hosts have already failed -- aborting 19:06:38 I checked on the target server: the /tmp/apache-jmeter-2.13.tgz file exists and it has valid permissions (for testing I also gave it 777, even though that is not required, but I still got the above error message). I also checked the md5sum of this file (compared it with what's on the Apache JMeter site) -- it matches! # md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03 53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz When I use tar -xvzf on this file, tar is able to show/extract its contents. What could I be missing? At this point, I'm wondering whether the unarchive module in Ansible has a bug. My last resort (if I can't get unarchive in Ansible to work) would be to use command: "tar -xzvf /tmp/....." but I don't want to do that as my first preference.
|
ansible - unarchive - input file not found I'm getting this error while Ansible (1.9.2) is trying to unpack the file. 19:06:38 TASK: [jmeter | unpack jmeter] ************************************************ 19:06:38 fatal: [jmeter01.veryfast.server.jenkins] => input file not found at /tmp/apache-jmeter-2.13.tgz or /tmp/apache-jmeter-2.13.tgz 19:06:38 19:06:38 FATAL: all hosts have already failed -- aborting 19:06:38 I checked on the target server: the /tmp/apache-jmeter-2.13.tgz file exists and it has valid permissions (for testing I also gave it 777, even though that is not required, but I still got the above error message). I also checked the md5sum of this file (compared it with what's on the Apache JMeter site) -- it matches! # md5sum apache-jmeter-2.13.tgz|grep 53dc44a6379b7b4a57976936f3a65e03 53dc44a6379b7b4a57976936f3a65e03 apache-jmeter-2.13.tgz When I use tar -xvzf on this file, tar is able to show/extract its contents. What could I be missing? At this point, I'm wondering whether the unarchive module in Ansible has a bug. My last resort (if I can't get unarchive in Ansible to work) would be to use command: "tar -xzvf /tmp/....." but I don't want to do that as my first preference.
|
module, tar, ansible, unpack
| 10
| 6,769
| 1
|
https://stackoverflow.com/questions/32698529/ansible-unarchive-input-file-not-found
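A hedged sketch for the unarchive question above: by default `unarchive` treats `src` as a path on the control machine and copies it out first, so a file that only exists on the target shows up as "input file not found". On Ansible 1.9 the switch is `copy=no` (newer releases spell it `remote_src: yes`); the dest path below is illustrative.

```yaml
# The archive already lives on the managed host, so tell unarchive not to
# look for it on the control machine first.
- name: unpack jmeter
  unarchive: src=/tmp/apache-jmeter-2.13.tgz dest=/opt copy=no
```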
|
50,314,254
|
Ansible - Creating dictionary from a list
|
I am trying to follow this thread, but my output is not what was expected: every previous item is getting overwritten by the new item being added. My input is a list that I am loading into my accounts_list variable; it is as follows: account: - PR_user1 - PR_user2 There are no passwords in the input file. I need to create random passwords for each of the user accounts, use them in setting up various services, and then dump them into a text file for human use. The first task on which I am stuck: once I have read the accounts into a list, I want to iterate over them, create a password for each account, and then store it inside a dictionary as key-value pairs. I have tried both of the techniques mentioned to add an item to an existing dictionary, using combine as well as '+'. My input is a simple list called 'accounts'. - set_fact: # domain_accounts: "{{ domain_accounts|default({}) | combine({item|trim: lookup(...)} ) }}" domain_accounts: "{{ domain_accounts|default([]) + [{item|trim:lookup('...)}] }}" with_items: "{{account_list.accounts}}" My output is as follows: TASK [set account passwords] ****************************************************************** ok: [localhost] => (item=PR_user1) => {"ansible_facts": {"domain_accounts": [{"PR_user1": "u]oT,cU{"}]}, "changed": false, "item": "PR_user1"} ok: [localhost] => (item=PR_user2) => {"ansible_facts": {"domain_accounts": [{"PR_user2": "b>npKZdi"}]}, "changed": false, "item": "PR_user2"}
|
Ansible - Creating dictionary from a list I am trying to follow this thread, but my output is not what was expected: every previous item is getting overwritten by the new item being added. My input is a list that I am loading into my accounts_list variable; it is as follows: account: - PR_user1 - PR_user2 There are no passwords in the input file. I need to create random passwords for each of the user accounts, use them in setting up various services, and then dump them into a text file for human use. The first task on which I am stuck: once I have read the accounts into a list, I want to iterate over them, create a password for each account, and then store it inside a dictionary as key-value pairs. I have tried both of the techniques mentioned to add an item to an existing dictionary, using combine as well as '+'. My input is a simple list called 'accounts'. - set_fact: # domain_accounts: "{{ domain_accounts|default({}) | combine({item|trim: lookup(...)} ) }}" domain_accounts: "{{ domain_accounts|default([]) + [{item|trim:lookup('...)}] }}" with_items: "{{account_list.accounts}}" My output is as follows: TASK [set account passwords] ****************************************************************** ok: [localhost] => (item=PR_user1) => {"ansible_facts": {"domain_accounts": [{"PR_user1": "u]oT,cU{"}]}, "changed": false, "item": "PR_user1"} ok: [localhost] => (item=PR_user2) => {"ansible_facts": {"domain_accounts": [{"PR_user2": "b>npKZdi"}]}, "changed": false, "item": "PR_user2"}
|
ansible
| 10
| 39,976
| 3
|
https://stackoverflow.com/questions/50314254/ansible-creating-dictionary-from-a-list
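A hedged note on the set_fact question above: the commented-out `combine` form does accumulate across iterations; the per-item output is misleading because each loop iteration prints only the fact as set in that iteration, while later iterations start from the already-merged value. Something like the following, with a placeholder `lookup('password', ...)` standing in for the elided lookup, should leave `domain_accounts` holding every pair once the loop finishes:

```yaml
- name: set account passwords
  set_fact:
    domain_accounts: "{{ domain_accounts | default({})
                         | combine({item | trim: lookup('password', '/dev/null length=12')}) }}"
  with_items: "{{ account_list.accounts }}"

- name: show the accumulated dictionary
  debug:
    var: domain_accounts
```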
|
35,945,797
|
Warning while constructing a mapping in Ansible
|
Whenever I run my playbook the following warning comes up: [WARNING]: While constructing a mapping from /etc/ansible/roles/foo/tasks/main.yml, line 17, column 3, found a duplicate dict key (file). Using last defined value only. The relevant part of my main.yml in the tasks folder is like this: (line 17 is the task to clean the files which seems a bit off so I guess the problem is with the previous "script" line) - name: Run script to format output script: foo.py {{ taskname }} /tmp/fcpout.log - name: Clean temp files file: path=/tmp/fcpout.log state=absent And my vars file: --- my_dict: {SLM: "114", Regular: "255", Production: "1"} taskid: "{{my_dict[taskname]}}" To run my playbook I do: ansible-playbook playbooks/foo.yml --extra-vars "server=bar taskname=SLM" What I'm trying to do is to take the command line arguments, set the hosts: with the "server" parameter, get the taskname and from that find out to which id refers to. This id is used as the first input to my python script which runs remotely. The playbook works fine, but I don't understand why I get a warning. Could anyone explain what is wrong here?
|
Warning while constructing a mapping in Ansible Whenever I run my playbook the following warning comes up: [WARNING]: While constructing a mapping from /etc/ansible/roles/foo/tasks/main.yml, line 17, column 3, found a duplicate dict key (file). Using last defined value only. The relevant part of my main.yml in the tasks folder is like this: (line 17 is the task to clean the files which seems a bit off so I guess the problem is with the previous "script" line) - name: Run script to format output script: foo.py {{ taskname }} /tmp/fcpout.log - name: Clean temp files file: path=/tmp/fcpout.log state=absent And my vars file: --- my_dict: {SLM: "114", Regular: "255", Production: "1"} taskid: "{{my_dict[taskname]}}" To run my playbook I do: ansible-playbook playbooks/foo.yml --extra-vars "server=bar taskname=SLM" What I'm trying to do is to take the command line arguments, set the hosts: with the "server" parameter, get the taskname and from that find out to which id refers to. This id is used as the first input to my python script which runs remotely. The playbook works fine, but I don't understand why I get a warning. Could anyone explain what is wrong here?
|
command-line-arguments, jinja2, ansible, ansible-2.x
| 10
| 29,927
| 2
|
https://stackoverflow.com/questions/35945797/warning-while-constructing-a-mapping-in-ansible
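A hedged sketch for the duplicate-key warning above: the message means one task mapping contains the same key (`file`) twice, which typically happens when two tasks get merged by a missing `- name:` dash or an indentation slip, not from the two well-formed tasks as quoted. The broken and fixed shapes look roughly like:

```yaml
# Broken: the second task lost its leading dash, so both 'file' keys end up
# in one mapping and YAML keeps only the last one (hence the warning).
- name: Clean temp files
  file: path=/tmp/fcpout.log state=absent
  file: path=/tmp/other.log state=absent

# Fixed: one task, looping over the paths (or two separate list items).
- name: Clean temp files
  file: path={{ item }} state=absent
  with_items:
    - /tmp/fcpout.log
    - /tmp/other.log
```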
|
31,295,711
|
Alarm action definition in ec2_metric_alarm ansible module
|
I am trying to set up a CloudWatch alarm with the ansible ec2_metric_alarm module, and I do not know how to make it send an email on alarm. The code is: - name: add alarm ec2_metric_alarm: state: present region: eu-west-1 name: "LoadAverage" metric: "LoadAverage" statistic: Average comparison: ">" threshold: 3.0 evaluation_periods: 3 period: 60 unit: "None" description: "Load Average" dimensions: {'Role': {{ item[0] }}, 'Node': {{ item[1] }} } alarm_actions: ["action1","action2"] What is the syntax, or what do I put in alarm_actions to express that I want it to send emails?
|
Alarm action definition in ec2_metric_alarm ansible module I am trying to set up a CloudWatch alarm with the ansible ec2_metric_alarm module, and I do not know how to make it send an email on alarm. The code is: - name: add alarm ec2_metric_alarm: state: present region: eu-west-1 name: "LoadAverage" metric: "LoadAverage" statistic: Average comparison: ">" threshold: 3.0 evaluation_periods: 3 period: 60 unit: "None" description: "Load Average" dimensions: {'Role': {{ item[0] }}, 'Node': {{ item[1] }} } alarm_actions: ["action1","action2"] What is the syntax, or what do I put in alarm_actions to express that I want it to send emails?
|
amazon-web-services, ansible
| 10
| 3,220
| 2
|
https://stackoverflow.com/questions/31295711/alarm-action-definition-in-ec2-metric-alarm-ansible-module
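A hedged sketch for the CloudWatch question above: `alarm_actions` does not take email addresses directly; it takes ARNs of actions such as an SNS topic, and the email delivery comes from a confirmed email subscription on that topic. The account id and topic name below are placeholders.

```yaml
alarm_actions:
  # ARN of an SNS topic that has a confirmed email subscription attached
  - "arn:aws:sns:eu-west-1:123456789012:ops-email-alerts"
```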
|
30,509,058
|
Post Json to API via Ansible
|
I want to make a POST request to an API endpoint via Ansible, where some of the items inside the post data are dynamic; here is what I tried, which fails. My body_content.json: { apiKey: '{{ KEY_FROM_VARS }}', data1: 'foo', data2: 'bar' } And here is my Ansible task: # Create an item via API - uri: url="[URL] method=POST return_content=yes HEADER_Content-Type="application/json" body="{{ lookup('file','create_body.json') | to_json }}" Sadly this doesn't work: failed: [localhost] => {"failed": true} msg: this module requires key=value arguments .... FATAL: all hosts have already failed -- aborting My ansible version is 1.9.1
|
Post Json to API via Ansible I want to make a POST request to an API endpoint via Ansible, where some of the items inside the post data are dynamic; here is what I tried, which fails. My body_content.json: { apiKey: '{{ KEY_FROM_VARS }}', data1: 'foo', data2: 'bar' } And here is my Ansible task: # Create an item via API - uri: url="[URL] method=POST return_content=yes HEADER_Content-Type="application/json" body="{{ lookup('file','create_body.json') | to_json }}" Sadly this doesn't work: failed: [localhost] => {"failed": true} msg: this module requires key=value arguments .... FATAL: all hosts have already failed -- aborting My ansible version is 1.9.1
|
ansible
| 10
| 51,322
| 2
|
https://stackoverflow.com/questions/30509058/post-json-to-api-via-ansible
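A hedged sketch for the uri question above: the `msg: this module requires key=value arguments` failure points at the unbalanced quoting in `url="[URL]` spilling across the key=value line; the YAML dict form sidesteps inline quoting entirely. Note also that `lookup('file', ...)` returns the file verbatim, so the `{{ KEY_FROM_VARS }}` inside it is never rendered; a `template` lookup interpolates it first. The URL below is a placeholder, and for strict JSON the file's keys would need double quotes.

```yaml
- name: Create an item via API
  uri:
    url: "https://example.com/api/items"      # placeholder endpoint
    method: POST
    return_content: yes
    HEADER_Content-Type: "application/json"
    # template lookup renders {{ KEY_FROM_VARS }} before posting;
    # the file is already JSON, so no to_json (that would double-encode).
    body: "{{ lookup('template', 'create_body.json') }}"
```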
|
53,331,405
|
Django compress error: Invalid input of type: 'CacheKey'
|
We suddenly started getting this issue when compressing django static files on production servers. Ubuntu 16.04, Python 3.x, Django 1.11. I am using an ansible-playbook to deploy. The error is as follows: CommandError: An error occurred during rendering /chalktalk/app/chalktalk-react-40/chalktalk-react-40/chalktalk/apps/exams/templates/exams/section-edit.html: Invalid input of type: 'CacheKey'. Convert to a byte, string or number first. It doesn't seem to be an issue in one of the static files but a general issue. Every time we run it, we get a different file. I was looking for any clues on google and nothing shows up with the same error.
|
Django compress error: Invalid input of type: 'CacheKey' We suddenly started getting this issue when compressing django static files on production servers. Ubuntu 16.04, Python 3.x, Django 1.11. I am using an ansible-playbook to deploy. The error is as follows: CommandError: An error occurred during rendering /chalktalk/app/chalktalk-react-40/chalktalk-react-40/chalktalk/apps/exams/templates/exams/section-edit.html: Invalid input of type: 'CacheKey'. Convert to a byte, string or number first. It doesn't seem to be an issue in one of the static files but a general issue. Every time we run it, we get a different file. I was looking for any clues on google and nothing shows up with the same error.
|
python, django, ansible, django-compressor
| 10
| 3,665
| 2
|
https://stackoverflow.com/questions/53331405/django-compress-error-invalid-input-of-type-cachekey
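A hedged guess at the compress error above: `Invalid input of type: ... Convert to a byte, string or number first` is the wording of redis-py 3.x's `DataError`, so a likely trigger is an unpinned `redis` dependency jumping to 3.0 under a cache backend that still hands it wrapped `CacheKey` objects. If that matches the environment, pinning the client (shown here via Ansible's pip module; the version spec is an assumption) is one way to confirm:

```yaml
# Pin the redis client below 3.0; redis-py 3.x rejects non-primitive keys
# with exactly this "Invalid input of type" error.
- name: Pin redis-py below 3.0
  pip:
    name: "redis<3.0"
```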
|
37,320,746
|
How to perform Ansible task if given directory is not empty?
|
I am using Ansible to move logs to a backup directory (using the shell module and the mv command). The mv command fails if there are no files to move, and by default that causes the whole Ansible play to fail. I can proceed with the play even if the task fails ( ignore_errors: yes ), but I am not satisfied with this solution because it produces an error message: TASK [move files to backup directory] ****************************************** fatal: [xx.xx.xx.xx]: FAILED! =...?No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []} ...ignoring How do I check whether a directory is empty in Ansible, and if it is empty, just skip the task?
|
How to perform Ansible task if given directory is not empty? I am using Ansible to move logs to a backup directory (using the shell module and the mv command). The mv command fails if there are no files to move, and by default that causes the whole Ansible play to fail. I can proceed with the play even if the task fails ( ignore_errors: yes ), but I am not satisfied with this solution because it produces an error message: TASK [move files to backup directory] ****************************************** fatal: [xx.xx.xx.xx]: FAILED! =...?No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []} ...ignoring How do I check whether a directory is empty in Ansible, and if it is empty, just skip the task?
|
ansible
| 10
| 15,862
| 2
|
https://stackoverflow.com/questions/37320746/how-to-perform-ansible-task-if-given-directory-is-not-empty
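A hedged sketch for the empty-directory question above: registering a `find` result and gating the move on its `matched` count skips the task cleanly instead of letting `mv` fail. The paths are placeholders.

```yaml
- name: Look for log files to back up
  find:
    paths: /var/log/myapp          # placeholder source directory
    patterns: "*.log"
  register: found_logs

- name: Move files to backup directory
  shell: mv /var/log/myapp/*.log /backup/myapp/
  when: found_logs.matched > 0
```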
|
33,370,889
|
How to use --ask-become-pass with ansible 1.9.4
|
I am a new user to ansible. I am attempting to use the privilege escalation feature to append a line to a file owned by root. The following documentation tells me I can use --ask-become-pass with become_user to be prompted for the become_user password but I have no idea how to use it. [URL] My current code I am working with is as follows: - name: Add deploy to sudoers remote_user: me become: yes become_method: su ask_become_pass: true lineinfile: dest=/etc/somefile line=sometext regexp="^sometext" owner=root state=present insertafter=EOF create=True Which gives me the error: ERROR: ask_become_pass is not a legal parameter in an Ansible task or handler Can anyone give me an idea of what I might be doing wrong here? Thanks in advance.
|
How to use --ask-become-pass with ansible 1.9.4 I am a new user to ansible. I am attempting to use the privilege escalation feature to append a line to a file owned by root. The following documentation tells me I can use --ask-become-pass with become_user to be prompted for the become_user password but I have no idea how to use it. [URL] My current code I am working with is as follows: - name: Add deploy to sudoers remote_user: me become: yes become_method: su ask_become_pass: true lineinfile: dest=/etc/somefile line=sometext regexp="^sometext" owner=root state=present insertafter=EOF create=True Which gives me the error: ERROR: ask_become_pass is not a legal parameter in an Ansible task or handler Can anyone give me an idea of what I might be doing wrong here? Thanks in advance.
|
ansible
| 10
| 25,339
| 3
|
https://stackoverflow.com/questions/33370889/how-to-use-ask-become-pass-with-ansible-1-9-4
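A hedged sketch for the become question above: `--ask-become-pass` is a command-line flag, not a task keyword, which is why Ansible rejects `ask_become_pass` inside the task. The task keeps only the `become` settings, and the password prompt is requested at invocation time:

```yaml
# Invoke with:  ansible-playbook site.yml --ask-become-pass
- name: Add deploy to sudoers
  remote_user: me
  become: yes
  become_method: su
  lineinfile: dest=/etc/somefile line=sometext regexp="^sometext" owner=root state=present insertafter=EOF create=True
```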
|
31,949,254
|
How can I ignore an element from a list when looping through with Jinja?
|
I have an output as follows: [{ 'stderr': 'error: cannot open file', }, { 'stderr': '', }] Jinja snippet: {{ php_command_result.results | map(attribute='stderr') | sort | join('\r - ') }}" Returns a trailing - at the end because stderr is empty. How can I ignore empty values?
|
How can I ignore an element from a list when looping through with Jinja? I have an output as follows: [{ 'stderr': 'error: cannot open file', }, { 'stderr': '', }] Jinja snippet: {{ php_command_result.results | map(attribute='stderr') | sort | join('\r - ') }}" Returns a trailing - at the end because stderr is empty. How can I ignore empty values?
|
jinja2, ansible
| 10
| 9,789
| 1
|
https://stackoverflow.com/questions/31949254/how-can-i-ignore-an-element-from-a-list-when-looping-through-with-jinja
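A hedged sketch for the Jinja question above: filtering out empty strings with `reject('equalto', '')` before joining drops the trailing separator (the `equalto` test needs Jinja2 2.8+):

```yaml
- debug:
    msg: "{{ php_command_result.results | map(attribute='stderr')
             | reject('equalto', '') | sort | join('\r - ') }}"
```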
|
56,862,710
|
How to get just the version of the software when using ansible_facts.packages["zabbix-agent"]
|
I'm having an issue with using the package_facts module in Ansible. Basically, I just want to get the version of zabbix-agent installed as I need to do some stuff depending on which version is installed. Now I have this in a playbook task: - name: Gather Installed Packages Facts package_facts: manager: "auto" tags: - zabbix-check - name: "Zabbix Found test result" debug: var=ansible_facts.packages['zabbix-agent'] when: "'zabbix-agent' in ansible_facts.packages" tags: - zabbix-check - name: "Zabbix Not-found test result" debug: msg: "Zabbix NOT found" when: "'zabbix-agent' not in ansible_facts.packages" tags: - zabbix-check Which spits out something like this: ok: [vm3] => { "ansible_facts.packages['zabbix-agent']": [ { "arch": "x86_64", "epoch": null, "name": "zabbix-agent", "release": "1.el7", "source": "rpm", "version": "4.0.10" } ] } ok: [vm4] => { "ansible_facts.packages['zabbix-agent']": [ { "arch": "x86_64", "epoch": null, "name": "zabbix-agent", "release": "1.el7", "source": "rpm", "version": "3.2.11" } ] } I want to get the value of that "Version": "3.2.11" so that I can store that in a variable and use that later. I've seen that post using yum and doing some json query but that won't work for me.
|
How to get just the version of the software when using ansible_facts.packages["zabbix-agent"] I'm having an issue with using the package_facts module in Ansible. Basically, I just want to get the version of zabbix-agent installed as I need to do some stuff depending on which version is installed. Now I have this in a playbook task: - name: Gather Installed Packages Facts package_facts: manager: "auto" tags: - zabbix-check - name: "Zabbix Found test result" debug: var=ansible_facts.packages['zabbix-agent'] when: "'zabbix-agent' in ansible_facts.packages" tags: - zabbix-check - name: "Zabbix Not-found test result" debug: msg: "Zabbix NOT found" when: "'zabbix-agent' not in ansible_facts.packages" tags: - zabbix-check Which spits out something like this: ok: [vm3] => { "ansible_facts.packages['zabbix-agent']": [ { "arch": "x86_64", "epoch": null, "name": "zabbix-agent", "release": "1.el7", "source": "rpm", "version": "4.0.10" } ] } ok: [vm4] => { "ansible_facts.packages['zabbix-agent']": [ { "arch": "x86_64", "epoch": null, "name": "zabbix-agent", "release": "1.el7", "source": "rpm", "version": "3.2.11" } ] } I want to get the value of that "Version": "3.2.11" so that I can store that in a variable and use that later. I've seen that post using yum and doing some json query but that won't work for me.
|
ansible, ansible-facts
| 10
| 15,723
| 3
|
https://stackoverflow.com/questions/56862710/how-to-get-just-the-version-of-the-software-when-using-ansible-facts-packagesz
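A hedged sketch for the package_facts question above: each package name maps to a list of installed instances, so indexing the first entry and taking `.version` yields the bare version string, which `set_fact` keeps for later tasks:

```yaml
- name: Store the installed zabbix-agent version
  set_fact:
    zabbix_agent_version: "{{ ansible_facts.packages['zabbix-agent'][0].version }}"
  when: "'zabbix-agent' in ansible_facts.packages"

- debug:
    var: zabbix_agent_version
```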
|
41,294,214
|
Ansible timezone module fails (different reasons on different OSes)
|
I decided to refactor some playbooks and give a try to the new timezone module . The task I try is a verbatim copy of the example given in the manual page: - name: set timezone to Asia/Tokyo timezone: name: Asia/Tokyo It fails on each system I tried. Results for Vagrant machines: On Debian 8 ( debian/jessie64 ): TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [debian]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: The name org.freedesktop.PolicyKit1 was not provided by any .service files", "rc": 1, "stderr": "Failed to set time zone: The name org.freedesktop.PolicyKit1 was not provided by any .service files\n", "stdout": "", "stdout_lines": []} On CentOS 7 ( centos/7 ) - different from Debian: TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [centos]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: Interactive authentication required.", "rc": 1, "stderr": "Failed to set time zone: Interactive authentication required.\n", "stdout": "", "stdout_lines": []} On Ubuntu 16.04 ( ubuntu/xenial64 ) - same as CentOS, different from Debian: TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [ubuntu]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: Interactive authentication required.", "rc": 1, "stderr": "Failed to set time zone: Interactive authentication required.\n", "stdout": "", "stdout_lines": []} Am I missing something? Is there some dependency required?
|
Ansible timezone module fails (different reasons on different OSes) I decided to refactor some playbooks and give a try to the new timezone module . The task I try is a verbatim copy of the example given in the manual page: - name: set timezone to Asia/Tokyo timezone: name: Asia/Tokyo It fails on each system I tried. Results for Vagrant machines: On Debian 8 ( debian/jessie64 ): TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [debian]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: The name org.freedesktop.PolicyKit1 was not provided by any .service files", "rc": 1, "stderr": "Failed to set time zone: The name org.freedesktop.PolicyKit1 was not provided by any .service files\n", "stdout": "", "stdout_lines": []} On CentOS 7 ( centos/7 ) - different from Debian: TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [centos]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: Interactive authentication required.", "rc": 1, "stderr": "Failed to set time zone: Interactive authentication required.\n", "stdout": "", "stdout_lines": []} On Ubuntu 16.04 ( ubuntu/xenial64 ) - same as CentOS, different from Debian: TASK [set timezone to Asia/Tokyo] ********************************************** fatal: [ubuntu]: FAILED! => {"changed": false, "cmd": "/usr/bin/timedatectl set-timezone Asia/Tokyo", "failed": true, "msg": "Failed to set time zone: Interactive authentication required.", "rc": 1, "stderr": "Failed to set time zone: Interactive authentication required.\n", "stdout": "", "stdout_lines": []} Am I missing something? Is there some dependency required?
|
ansible, ansible-2.x
| 10
| 8,592
| 2
|
https://stackoverflow.com/questions/41294214/ansible-timezone-module-fails-different-reasons-on-different-oses
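A hedged sketch for the timezone question above: all three failures are `timedatectl` refusing an unprivileged caller (polkit on Debian, interactive authentication on CentOS and Ubuntu), so escalating the task is the usual fix:

```yaml
- name: set timezone to Asia/Tokyo
  timezone:
    name: Asia/Tokyo
  become: yes
```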
|
36,290,485
|
How to add user and group without a password using Ansible?
|
I need to add a group and a user without a password (a nologin user) using an Ansible script. I execute the following command: $ansible-playbook deploy_nagios_client.yml -i hosts -e hosts=qa1-jetty -v Below is main.yml: --- # Create Nagios User and Group - name: Add group "nagios" group: name=nagios become: true - name: Add user "nagios" user: name=nagios groups=nagios password="" shell=/bin/bash append=yes comment="Nagios nologin User" state=present become: true Result
|
How to add user and group without a password using Ansible? I need to add a group and a user without a password (a nologin user) using an Ansible script. I execute the following command: $ansible-playbook deploy_nagios_client.yml -i hosts -e hosts=qa1-jetty -v Below is main.yml: --- # Create Nagios User and Group - name: Add group "nagios" group: name=nagios become: true - name: Add user "nagios" user: name=nagios groups=nagios password="" shell=/bin/bash append=yes comment="Nagios nologin User" state=present become: true Result
|
ansible, ansible-2.x
| 10
| 24,759
| 2
|
https://stackoverflow.com/questions/36290485/how-to-add-user-and-group-without-a-password-using-ansible
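A hedged sketch for the user question above: `password=""` writes an empty hash rather than locking the account; a locked, non-login account is usually expressed with a `!` password field and a nologin shell:

```yaml
- name: Add group "nagios"
  group:
    name: nagios
  become: true

- name: Add user "nagios" with no usable password
  user:
    name: nagios
    groups: nagios
    append: yes
    shell: /usr/sbin/nologin   # path varies by distro (/sbin/nologin on RHEL)
    password: "!"              # locked password field, so no password login
    comment: "Nagios nologin User"
    state: present
  become: true
```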
|
32,470,801
|
Specifying a particular callback to be used in playbook
|
I have created different playbooks for different operations in ansible . And I have also created different Callback Scripts for different kinds of Playbooks (And packaged them with Ansible and installed). The playbooks will be called from many different scripts/cron jobs. Now, is it possible to specify a particular callback script to be called for a particular playbook ? (Using a command line argument probably?) What's happening right now is, all the Callback scripts are called for each playbook. I cannot put the callback script relative to the location/folder of the playbook because it's already packaged inside the ansible package. Also, all the playbooks are in the same location too. I am fine with modifying a bit of ansible source code to accommodate it too if needed.
|
Specifying a particular callback to be used in playbook I have created different playbooks for different operations in ansible . And I have also created different Callback Scripts for different kinds of Playbooks (And packaged them with Ansible and installed). The playbooks will be called from many different scripts/cron jobs. Now, is it possible to specify a particular callback script to be called for a particular playbook ? (Using a command line argument probably?) What's happening right now is, all the Callback scripts are called for each playbook. I cannot put the callback script relative to the location/folder of the playbook because it's already packaged inside the ansible package. Also, all the playbooks are in the same location too. I am fine with modifying a bit of ansible source code to accommodate it too if needed.
|
ansible
| 10
| 8,411
| 3
|
https://stackoverflow.com/questions/32470801/specifying-a-particular-callback-to-be-used-in-playbook
|
30,519,470
|
Ansible EC2 Dynamic inventory minimum IAM policies
|
Has someone figured out the minimum IAM policies required to run the EC2 dynamic inventory script ( ec2.py ) on ansible via an IAM role? So far, I haven't seen a concrete reference in this matter other than specifying credentials for boto library in the official documentation of ansible, however, on production environments, I rarely use key pairs for access to AWS services from EC2 instances, instead I have embraced the use of IAM roles for that case scenario. I have tried policies allowing ec2:Describe* actions but it doesn't seem to be enough for the script as it always exits with Unauthorized operation . Could you help me out?
|
Ansible EC2 Dynamic inventory minimum IAM policies Has someone figured out the minimum IAM policies required to run the EC2 dynamic inventory script ( ec2.py ) on ansible via an IAM role? So far, I haven't seen a concrete reference in this matter other than specifying credentials for boto library in the official documentation of ansible, however, on production environments, I rarely use key pairs for access to AWS services from EC2 instances, instead I have embraced the use of IAM roles for that case scenario. I have tried policies allowing ec2:Describe* actions but it doesn't seem to be enough for the script as it always exits with Unauthorized operation . Could you help me out?
|
amazon-ec2, ansible, amazon-iam, ansible-inventory
| 10
| 4,567
| 4
|
https://stackoverflow.com/questions/30519470/ansible-ec2-dynamic-inventory-minimum-iam-policies
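For the minimum-policy question above, a commonly cited starting point is read-only describe access on EC2; this is an assumption, not a verified minimum, and the exact set grows if `ec2.py` is configured to also query RDS or ElastiCache. A sketch of such a policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Ec2DynamicInventoryRead",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
```

If `Unauthorized operation` persists, decoding the encoded failure message (via the AWS STS decode-authorization-message API) usually reveals which specific action was denied.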
|
29,153,650
|
How to check out most recent git tag using Ansible?
|
Is there an easy way to have Ansible check out the most recent tag on a particular git branch, without having to specify or pass in the tag? That is, can Ansible detect or derive the most recent tag on a branch or is that something that needs to be done separately using the shell module or something?
|
How to check out most recent git tag using Ansible? Is there an easy way to have Ansible check out the most recent tag on a particular git branch, without having to specify or pass in the tag? That is, can Ansible detect or derive the most recent tag on a branch or is that something that needs to be done separately using the shell module or something?
|
git, deployment, tags, ansible
| 10
| 17,673
| 4
|
https://stackoverflow.com/questions/29153650/how-to-check-out-most-recent-git-tag-using-ansible
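For the latest-tag question above: the `git` module has no built-in "most recent tag" option, so a common pattern is to look the tag up with a `command` task and feed it into `version`. This sketch assumes the repository is already cloned (e.g. by an earlier `git` task); the path and URL are illustrative.

```yaml
- name: Find the most recent tag on the current branch
  command: git describe --tags --abbrev=0
  args:
    chdir: /srv/app                       # illustrative path to an existing clone
  register: latest_tag
  changed_when: false

- name: Check out that tag
  git:
    repo: https://example.com/repo.git    # illustrative repository URL
    dest: /srv/app
    version: "{{ latest_tag.stdout }}"
```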
|
75,724,464
|
Is it possible to debug multiple variables in one Ansible task without using a loop?
|
I want to print var1 and var2 in one Ansible task. I have this working YAML. - debug: var: "{{ item }}" with_items: - var1 - var2 I wonder whether it is possible to do this without using with_items or the msg parameter.
|
Is it possible to debug multiple variables in one Ansible task without using a loop? I want to print var1 and var2 in one Ansible task. I have this working YAML. - debug: var: "{{ item }}" with_items: - var1 - var2 I wonder whether it is possible to do this without using with_items or the msg parameter.
|
ansible
| 10
| 18,107
| 4
|
https://stackoverflow.com/questions/75724464/is-it-possible-to-debug-multiple-variables-in-one-ansible-task-without-using-a-l
|
58,840,430
|
How to decode a Base64 var to a binary file with Ansible module
|
I am reading a base64 file from HashiCorp's vault with the help of the hashi_vault module. Sample of code: - name: Vault get b64.pfx file set_fact: b64_pfx: "{{ lookup('hashi_vault', 'secret={{ path_pfx }} token={{ token }} url={{ url }} cacert={{ role_path}}/files/CA.pem')}}" Then as a next step I need to decode this base64 var to a binary format and store it in a file. I am currently using the shell module to do the work. Sample of code: - name: Decode Base64 file to binary shell: "echo {{ b64_pfx }} | base64 --decode > {{ pfxFile }}" delegate_to: localhost I was looking online for possible solutions e.g. ( Copy module with base64-encoded binary file adds extra character and How to upload encrypted file using ansible vault? ). But the only working solution that I found is using the shell module. Since this is an old problem, is there any workaround for this? Update: Do not use Python 2.7, as there seems to be a bug in the b64decode filter (sample below): <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p " echo /tmp/ansible-tmp-1573819503.84-50241917358990 " && echo ansible-tmp-1573819503.84-50241917358990=" echo /tmp/ansible-tmp-1573819503.84-50241917358990 " ) && sleep 0' Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py <localhost> PUT /tmp/ansible-local-18pweKi1/tmpjQGOz8 TO /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py <localhost> EXEC /bin/sh -c 'chmod u+x /tmp/ansible-tmp-1573819503.84-50241917358990/ /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /tmp/ansible-tmp-1573819503.84-50241917358990/ > /dev/null 2>&1 && sleep 0' changed: [hostname -> localhost] => { "changed": true, "cmd": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n",
"delta": "0:00:00.126279", "end": "2019-11-15 13:05:04.227933", "invocation": { "module_args": { "_raw_params": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "rc": 0, "start": "2019-11-15 13:05:04.101654", "stderr": "", "stderr_lines": [], "stdout": "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file\nead5cb632f3ee80ce129ef5fe02396686c2761e0 /tmp/binary_file.ansible", "stdout_lines": [ "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file", "ead5cb632f3ee80ce129ef5fe02396686c2761e0 /tmp/binary_file.ansible" ] } Solution: use Python 3 with b64decode filter (sample below): <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p " echo /tmp/ansible-tmp-1573819490.9511943-224511378311227 " && echo ansible-tmp-1573819490.9511943-224511378311227=" echo /tmp/ansible-tmp-1573819490.9511943-224511378311227 " ) && sleep 0' Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/commands/command.py <localhost> PUT /tmp/ansible-local-18epk_0jsv/tmp4t3gnm7u TO /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py <localhost> EXEC /bin/sh -c 'chmod u+x /tmp/ansible-tmp-1573819490.9511943-224511378311227/ /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /tmp/ansible-tmp-1573819490.9511943-224511378311227/ > /dev/null 2>&1 && sleep 0' changed: [hostname -> localhost] => { "changed": true, "cmd": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "delta": "0:00:00.135427", "end": "2019-11-15 13:04:51.239969", "invocation": { "module_args": { 
"_raw_params": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "rc": 0, "start": "2019-11-15 13:04:51.104542", "stderr": "", "stderr_lines": [], "stdout": "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file\n4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file.ansible", "stdout_lines": [ "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file", "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file.ansible" ] } Since Python 2 is reaching the end of life in ( January 1, 2020 ) there is no point of raising the bug.
|
How to decode a Base64 var to a binary file with Ansible module I am reading a base64 file from HashiCorpβs vault with the help of the hashi_vault module. Sample of code: - name: Vault get b64.pfx file set_fact: b64_pfx: "{{ lookup('hashi_vault', 'secret={{ path_pfx }} token={{ token }} url={{ url }} cacert={{ role_path}}/files/CA.pem')}}" Then as a next step I need to decode this base64 var to a binary format and store it in in a file. I am currently using shell module to do the work. Sample of code: - name: Decode Base64 file to binary shell: "echo {{ b64_pfx }} | base64 --decode > {{ pfxFile }}" delegate_to: localhost I was looking online for possible solutions e.g. ( Copy module with base64-encoded binary file adds extra character and How to upload encrypted file using ansible vault? ). But the only working solution that I can found is using the shell module. Since this is an old problem is there any workaround on this? Update: Do not use Python 2.7 as there seems to be a bug on the b64decode filter (sample below): <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p " echo /tmp/ansible-tmp-1573819503.84-50241917358990 " && echo ansible-tmp-1573819503.84-50241917358990=" echo /tmp/ansible-tmp-1573819503.84-50241917358990 " ) && sleep 0' Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py <localhost> PUT /tmp/ansible-local-18pweKi1/tmpjQGOz8 TO /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py <localhost> EXEC /bin/sh -c 'chmod u+x /tmp/ansible-tmp-1573819503.84-50241917358990/ /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /tmp/ansible-tmp-1573819503.84-50241917358990/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /tmp/ansible-tmp-1573819503.84-50241917358990/ > /dev/null 2>&1 && sleep 0' changed: [hostname -> localhost] => { "changed": true, "cmd": "shasum 
-a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "delta": "0:00:00.126279", "end": "2019-11-15 13:05:04.227933", "invocation": { "module_args": { "_raw_params": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "rc": 0, "start": "2019-11-15 13:05:04.101654", "stderr": "", "stderr_lines": [], "stdout": "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file\nead5cb632f3ee80ce129ef5fe02396686c2761e0 /tmp/binary_file.ansible", "stdout_lines": [ "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file", "ead5cb632f3ee80ce129ef5fe02396686c2761e0 /tmp/binary_file.ansible" ] } Solution: use Python 3 with b64decode filter (sample below): <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p " echo /tmp/ansible-tmp-1573819490.9511943-224511378311227 " && echo ansible-tmp-1573819490.9511943-224511378311227=" echo /tmp/ansible-tmp-1573819490.9511943-224511378311227 " ) && sleep 0' Using module file /usr/local/lib/python3.6/site-packages/ansible/modules/commands/command.py <localhost> PUT /tmp/ansible-local-18epk_0jsv/tmp4t3gnm7u TO /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py <localhost> EXEC /bin/sh -c 'chmod u+x /tmp/ansible-tmp-1573819490.9511943-224511378311227/ /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c '/usr/bin/python /tmp/ansible-tmp-1573819490.9511943-224511378311227/AnsiballZ_command.py && sleep 0' <localhost> EXEC /bin/sh -c 'rm -f -r /tmp/ansible-tmp-1573819490.9511943-224511378311227/ > /dev/null 2>&1 && sleep 0' changed: [hostname -> localhost] => { "changed": true, "cmd": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "delta": "0:00:00.135427", "end": 
"2019-11-15 13:04:51.239969", "invocation": { "module_args": { "_raw_params": "shasum -a 1 /tmp/binary_file\nshasum -a 1 /tmp/binary_file.ansible\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "rc": 0, "start": "2019-11-15 13:04:51.104542", "stderr": "", "stderr_lines": [], "stdout": "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file\n4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file.ansible", "stdout_lines": [ "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file", "4a71465d449a0337329e76106569e39d6aaa5ef0 /tmp/binary_file.ansible" ] } Since Python 2 is reaching the end of life in ( January 1, 2020 ) there is no point of raising the bug.
|
ansible, base64, binaryfiles
| 10
| 22,278
| 2
|
https://stackoverflow.com/questions/58840430/how-to-decode-a-base64-var-to-a-binary-file-with-ansible-module
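Given the update above (Python 3 on the controller makes `b64decode` behave correctly), the shell step can usually be replaced with `copy` plus the `b64decode` filter. This is a sketch under that assumption; on Python 2 controllers, binary content is known to get corrupted, and the shell fallback remains the safe option.

```yaml
- name: Decode Base64 var to a binary file (assumes a Python 3 controller)
  copy:
    content: "{{ b64_pfx | b64decode }}"
    dest: "{{ pfxFile }}"
    mode: "0600"
  delegate_to: localhost
```

Comparing checksums of the decoded file against a known-good copy, as done in the question, is a good way to validate the result.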
|
24,401,846
|
AWS EC2 instance create via Ansible IAM Roles instance_profile_name UnauthorizedOperation: Error
|
I am trying to create an EC2 instance via Ansible using IAM roles, but while launching the new instance I get this error: failed: [localhost] => (item= IAMRole-1) => {"failed": true, "item": " IAMRole-1"} msg: Instance creation failed => UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: Ckcjt2GD81D5dlF6XakTSDypnwrgeQb0k ouRMKh3Ol1jue553EZ7OXPt6fk1Q1-4HM-tLNPCkiX7ZgJWXYGSjHg2xP1A9LR7KBiXYeCtFKEQIC W9cot3KAKPVcNXkHLrhREMfiT5KYEtrsA2A-xFCdvqwM2hNTNf7Y6VGe0Z48EDIyO5p5DxdNFsaSChUcb iRUhSyRXIGWr_ZKkGM9GoyoVWCBk3Ni2Td7zkZ1EfAIeRJobiOnYXKE6Q whereas the IAM role has full EC2 access, with the following policy: { "Version": "2012-10-17", "Statement": [ { "Action": "ec2:*", "Effect": "Allow", "Resource": "*" }, { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" }, { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" }, { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" } ] } Any suggestions please.
|
AWS EC2 instance create via Ansible IAM Roles instance_profile_name UnauthorizedOperation: Error I am trying to create an EC2 instance via Ansible using IAM roles, but while launching the new instance I get this error: failed: [localhost] => (item= IAMRole-1) => {"failed": true, "item": " IAMRole-1"} msg: Instance creation failed => UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: Ckcjt2GD81D5dlF6XakTSDypnwrgeQb0k ouRMKh3Ol1jue553EZ7OXPt6fk1Q1-4HM-tLNPCkiX7ZgJWXYGSjHg2xP1A9LR7KBiXYeCtFKEQIC W9cot3KAKPVcNXkHLrhREMfiT5KYEtrsA2A-xFCdvqwM2hNTNf7Y6VGe0Z48EDIyO5p5DxdNFsaSChUcb iRUhSyRXIGWr_ZKkGM9GoyoVWCBk3Ni2Td7zkZ1EfAIeRJobiOnYXKE6Q whereas the IAM role has full EC2 access, with the following policy: { "Version": "2012-10-17", "Statement": [ { "Action": "ec2:*", "Effect": "Allow", "Resource": "*" }, { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" }, { "Effect": "Allow", "Action": "cloudwatch:*", "Resource": "*" }, { "Effect": "Allow", "Action": "autoscaling:*", "Resource": "*" } ] } Any suggestions please.
|
amazon-ec2, amazon-iam, ansible
| 10
| 6,513
| 1
|
https://stackoverflow.com/questions/24401846/aws-ec2-instance-create-via-ansible-iam-roles-instance-profile-name-unauthorized
|
58,345,011
|
Setup windows 10 workstation using Ansible installed on WSL
|
I have installed Ansible in WSL (Windows Subsystem for Linux) on my Windows 10 workstation. My goal is to configure both WSL and Windows 10 itself. I'm able to run playbooks against localhost, which connects to and configures WSL via SSH. However, I am not sure Ansible can run playbooks against the Windows host to be able to set up Windows itself (e.g. install packages using Chocolatey). Is that even possible? Or can Ansible only set up a Windows node when it is installed on a different Linux machine?
|
Setup windows 10 workstation using Ansible installed on WSL I have installed Ansible in WSL (Windows Subsystem for Linux) on my Windows 10 workstation. My goal is to configure both WSL and Windows 10 itself. I'm able to run playbooks against localhost, which connects to and configures WSL via SSH. However, I am not sure Ansible can run playbooks against the Windows host to be able to set up Windows itself (e.g. install packages using Chocolatey). Is that even possible? Or can Ansible only set up a Windows node when it is installed on a different Linux machine?
|
windows, ansible, ansible-2.x
| 10
| 7,419
| 2
|
https://stackoverflow.com/questions/58345011/setup-windows-10-workstation-using-ansible-installed-on-wsl
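For the WSL question above: Ansible running inside WSL can manage the Windows side the same way it manages any other Windows node, over WinRM rather than SSH. This sketch assumes WinRM is enabled on the Windows host and `pywinrm` is installed in WSL (`pip install pywinrm`); all credential values and the IP are illustrative.

```ini
# inventory.ini -- manage the Windows host from Ansible inside WSL (values illustrative)
[windows]
winbox ansible_host=192.168.1.10   ; the Windows host's LAN IP

[windows:vars]
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_user=myuser
ansible_password=mypassword
ansible_winrm_server_cert_validation=ignore
```

With that in place, Windows modules such as `win_chocolatey` should work from WSL just as they would from a separate Linux control machine.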
|
47,147,685
|
Ansible 2.4 hostfile warning
|
In Ansible 2.4, I'm getting this deprecation warning: [DEPRECATION WARNING]: [defaults]hostfile option, The key is misleading as it can also be a list of hosts, a directory or a list of paths . This feature will be removed in version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. For the life of me, I cannot figure out what this means. Anybody know?
|
Ansible 2.4 hostfile warning In Ansible 2.4, I'm getting this deprecation warning: [DEPRECATION WARNING]: [defaults]hostfile option, The key is misleading as it can also be a list of hosts, a directory or a list of paths . This feature will be removed in version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg. For the life of me, I cannot figure out what this means. Anybody know?
|
ansible
| 10
| 1,881
| 1
|
https://stackoverflow.com/questions/47147685/ansible-2-4-hostfile-warning
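The deprecation warning above means the `ansible.cfg` in use still sets the old key name `hostfile`; renaming the key to `inventory` (same value) silences it. A minimal sketch, path illustrative:

```ini
# ansible.cfg
[defaults]
; old (deprecated) key:
; hostfile = ./hosts
; new key, same meaning (can point at a file, directory, or list of paths):
inventory = ./hosts
```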
|
45,855,743
|
Double conditional - delete all folders older than 3 days but keep a minimum of 10
|
I have a bit of a problem I can't seem to overcome. I have a folder with a lot of folders that are generated. I want to delete all folders that are older than three days, but I want to keep a minimum of 10 folders. I came up with this half-working code and I'd like some suggestions on how to tackle this. --- - hosts: all tasks: # find all files that are older than three days - find: paths: "/Users/asteen/Downloads/sites/" age: "3d" file_type: directory register: dirsOlderThan3d # find all files that are in the directory - find: paths: "/Users/asteen/Downloads/sites/" file_type: directory register: allDirs # delete all files that are older than three days, but keep a minimum of 10 files - file: path: "{{ item.path }}" state: absent with_items: "{{ dirsOlderThan3d.files }}" when: allDirs.files > 10 and not item[0].exists ... item[9].exists
|
Double conditional - delete all folders older than 3 days but keep a minimum of 10 I have a bit of a problem I can't seem to overcome. I have a folder with a lot of folders that are generated. I want to delete all folders that are older than three days, but I want to keep a minimum of 10 folders. I came up with this half-working code and I'd like some suggestions on how to tackle this. --- - hosts: all tasks: # find all files that are older than three days - find: paths: "/Users/asteen/Downloads/sites/" age: "3d" file_type: directory register: dirsOlderThan3d # find all files that are in the directory - find: paths: "/Users/asteen/Downloads/sites/" file_type: directory register: allDirs # delete all files that are older than three days, but keep a minimum of 10 files - file: path: "{{ item.path }}" state: absent with_items: "{{ dirsOlderThan3d.files }}" when: allDirs.files > 10 and not item[0].exists ... item[9].exists
|
ansible
| 10
| 9,294
| 2
|
https://stackoverflow.com/questions/45855743/double-conditional-delete-all-folders-older-than-3-days-but-keep-a-minimum-of
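One way to combine the two conditions from the question above: sort all directories by mtime, drop the 10 newest, and delete only those remaining that are also older than three days. This sketch assumes facts are gathered (for `ansible_date_time`) and relies on `find` reporting `mtime` as epoch seconds:

```yaml
- name: Find all directories
  find:
    paths: /Users/asteen/Downloads/sites/
    file_type: directory
  register: all_dirs

- name: Delete dirs older than 3 days, but always keep the 10 newest
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ (all_dirs.files | sort(attribute='mtime'))[:-10] }}"   # everything except the 10 newest
  when: (ansible_date_time.epoch | int) - (item.mtime | int) > 259200   # 3 days in seconds
```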
|
40,139,757
|
Ansible-playbook: directly run handler
|
Is there a way to directly run a handler with ansible-playbook ? For example, I have a handler that restarts a service in my role, and sometimes I just want to trigger it directly without deploying the whole app.
|
Ansible-playbook: directly run handler Is there a way to directly run a handler with ansible-playbook ? For example, I have a handler that restarts a service in my role, and sometimes I just want to trigger it directly without deploying the whole app.
|
configuration, ansible
| 10
| 15,410
| 1
|
https://stackoverflow.com/questions/40139757/ansible-playbook-directly-run-handler
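Handlers can't be invoked from the command line, but a regular task tagged `never` (available from Ansible 2.5) behaves like an on-demand handler: it is skipped on normal runs and executes only when its tag is requested explicitly. Service name is illustrative.

```yaml
# roles/myapp/tasks/main.yml (excerpt) -- runs only with --tags restart
- name: restart myapp on demand
  service:
    name: myapp
    state: restarted
  tags: [never, restart]
```

Then `ansible-playbook site.yml --tags restart` triggers just the restart, while plain `ansible-playbook site.yml` skips it.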
|
32,588,927
|
Deployment with Ansible from Gitlab CI, dealing with passwords
|
I'm trying to achieve a "password-free" deployment workflow using GitLab CI and Ansible. Some steps do require a password (I'm already using SSH keys whenever I can), so I've stored those passwords inside an Ansible Vault . Next, I would just need to provide the Vault password when running the playbook. But how could I integrate this nicely with GitLab CI? May I register a gitlab-ci job (or are jobs suitable for builds only?) which just runs the playbook, providing the vault password somehow? Can this be achieved without a password lying around in plain text? Also, I would be really happy if someone can point me to some material that shows how we can deploy builds using Ansible. As you can notice, I've definitely found nothing about that.
|
Deployment with Ansible from Gitlab CI, dealing with passwords I'm trying to achieve a "password-free" deployment workflow using GitLab CI and Ansible. Some steps do require a password (I'm already using SSH keys whenever I can), so I've stored those passwords inside an Ansible Vault . Next, I would just need to provide the Vault password when running the playbook. But how could I integrate this nicely with GitLab CI? May I register a gitlab-ci job (or are jobs suitable for builds only?) which just runs the playbook, providing the vault password somehow? Can this be achieved without a password lying around in plain text? Also, I would be really happy if someone can point me to some material that shows how we can deploy builds using Ansible. As you can notice, I've definitely found nothing about that.
|
ansible, gitlab-ci
| 10
| 14,612
| 2
|
https://stackoverflow.com/questions/32588927/deployment-with-ansible-from-gitlab-ci-dealing-with-passwords
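A common pattern for the GitLab CI question above: store the vault password as a masked/protected CI/CD variable and write it to a throwaway file at job time, so it never lives in the repository. The variable name `ANSIBLE_VAULT_PASSWORD`, playbook name, and branch are all illustrative.

```yaml
# .gitlab-ci.yml (sketch)
deploy:
  stage: deploy
  script:
    # $ANSIBLE_VAULT_PASSWORD is a masked CI/CD variable set in the project settings
    - echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
    - ansible-playbook deploy.yml --vault-password-file .vault_pass
    - rm -f .vault_pass
  only:
    - master
```

The password still exists in the runner's memory and job environment, but nothing is committed in plain text.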
|
18,937,680
|
vagrant ansible The following settings don't exist: inventory_file
|
I've pulled down a git repo and ran vagrant up , but I'm getting this error message: The following settings don't exist: inventory_file I've installed VirtualBox, Vagrant and Ansible on OS X Mountain Lion, but I can't get anything to work. Also, when I run ansible all -m ping -vvvv I get <192.168.0.62> ESTABLISH CONNECTION FOR USER: Grant <192.168.0.62> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/Grant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '192.168.0.62', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && chmod a+rx $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && echo $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544'"] 192.168.0.62 | FAILED => SSH encountered an unknown error. The output was: OpenSSH_5.9p1, OpenSSL 0.9.8y 5 Feb 2013 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: auto-mux: Trying existing master debug1: Control socket "/Users/Grant/.ansible/cp/ansible-ssh-192.168.0.62-22-Grant" does not exist debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.62 [192.168.0.62] port 22. debug2: fd 3 setting O_NONBLOCK debug1: connect to address 192.168.0.62 port 22: Operation timed out ssh: connect to host 192.168.0.62 port 22: Operation timed out Any ideas on what is going on will be appreciated :)
|
vagrant ansible The following settings don't exist: inventory_file I've pulled down a git repo and ran vagrant up but I'm getting this error message The following settings don't exist: inventory_file I've installed virtual box and vagrant and ansible for osx mountain lion. But I can't get anything to work. also when I run ansible all -m ping -vvvv I get <192.168.0.62> ESTABLISH CONNECTION FOR USER: Grant <192.168.0.62> EXEC ['ssh', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/Users/Grant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', '192.168.0.62', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && chmod a+rx $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544 && echo $HOME/.ansible/tmp/ansible-1379790346.17-244145524385544'"] 192.168.0.62 | FAILED => SSH encountered an unknown error. The output was: OpenSSH_5.9p1, OpenSSL 0.9.8y 5 Feb 2013 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: auto-mux: Trying existing master debug1: Control socket "/Users/Grant/.ansible/cp/ansible-ssh-192.168.0.62-22-Grant" does not exist debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.62 [192.168.0.62] port 22. debug2: fd 3 setting O_NONBLOCK debug1: connect to address 192.168.0.62 port 22: Operation timed out ssh: connect to host 192.168.0.62 port 22: Operation timed out Any ideas on what is going on will be appreciated :)
|
osx-mountain-lion, vagrant, ansible
| 10
| 4,700
| 4
|
https://stackoverflow.com/questions/18937680/vagrant-ansible-the-following-settings-dont-exist-inventory-file
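The `The following settings don't exist: inventory_file` error above usually means the repo's Vagrantfile uses a setting name from an older Vagrant release: newer Vagrant renamed the Ansible provisioner option `inventory_file` to `inventory_path`. A sketch of the fix (playbook/inventory names illustrative):

```ruby
# Vagrantfile (excerpt)
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "site.yml"
  # old (pre-rename): ansible.inventory_file = "hosts"
  ansible.inventory_path = "hosts"
end
```

The separate `ping` timeout looks like an unrelated connectivity issue: the VM at 192.168.0.62 isn't reachable on port 22, which is expected if `vagrant up` never completed.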
|
70,608,966
|
Create link with specific owner/group
|
I'm trying to create some symbolic links with a specific owner/group, but it's always created with owner=root and group=root. Why? This is my code: - name: Get the directories to create symbolic links find: paths: /myPath/ register: result - name: Creation of symbolic links file: src: "{{ item.path }}" dest: /Path_Dest/{{ item.path | basename }} owner: 'owner1' group: 'group1' state: link with_items: "{{ result.files }}" Note : owner1 and group1 exist. No error in Ansible log
|
Create link with specific owner/group I'm trying to create some symbolic links with a specific owner/group, but it's always created with owner=root and group=root. Why? This is my code: - name: Get the directories to create symbolic links find: paths: /myPath/ register: result - name: Creation of symbolic links file: src: "{{ item.path }}" dest: /Path_Dest/{{ item.path | basename }} owner: 'owner1' group: 'group1' state: link with_items: "{{ result.files }}" Note : owner1 and group1 exist. No error in Ansible log
|
ansible
| 10
| 3,433
| 1
|
https://stackoverflow.com/questions/70608966/create-link-with-specific-owner-group
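For the symlink-ownership question above: by default the `file` module follows the symlink and applies `owner`/`group` to the link's target, so the link itself stays `root:root`. Setting `follow: false` should make the module change ownership of the link itself. A sketch based on the question's own task:

```yaml
- name: Creation of symbolic links (owned by owner1:group1)
  file:
    src: "{{ item.path }}"
    dest: /Path_Dest/{{ item.path | basename }}
    owner: owner1
    group: group1
    state: link
    follow: false     # apply owner/group to the link itself, not its target
  with_items: "{{ result.files }}"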
|
55,407,094
|
Ansible - What's the proper syntax for loop + zip when combining more than two lists?
|
I haven't been able to find the syntax for loop + zip when combining more than 2 lists. Since Ansible 2.5, as shown here , the following syntax replaces with_together with loop + zip: - name: with_together debug: msg: "{{ item.0 }} - {{ item.1 }}" with_together: - "{{ list_one }}" - "{{ list_two }}" - name: with_together -> loop debug: msg: "{{ item.0 }} - {{ item.1 }}" loop: "{{ list_one|zip(list_two)|list }}" My question is, whereas when using with_together, you could simply append lists, and reference them with iterating numbers, I haven't been able to find the method to use with loop + zip. I have tried: loop: "{{ list_one|zip(list_two)|zip(list_three)|zip(list_four)list }}" Without success.
|
Ansible - What's the proper syntax for loop + zip when combining more than two lists? I haven't been able to find the syntax for loop + zip when combining more than 2 lists. Since Ansible 2.5, as shown here , the following syntax replaces with_together with loop + zip: - name: with_together debug: msg: "{{ item.0 }} - {{ item.1 }}" with_together: - "{{ list_one }}" - "{{ list_two }}" - name: with_together -> loop debug: msg: "{{ item.0 }} - {{ item.1 }}" loop: "{{ list_one|zip(list_two)|list }}" My question is, whereas when using with_together, you could simply append lists, and reference them with iterating numbers, I haven't been able to find the method to use with loop + zip. I have tried: loop: "{{ list_one|zip(list_two)|zip(list_three)|zip(list_four)list }}" Without success.
|
loops, ansible
| 10
| 5,919
| 1
|
https://stackoverflow.com/questions/55407094/ansible-whats-the-proper-syntax-for-loop-zip-when-combining-more-than-two-l
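For the multi-list question above: the `zip` filter accepts several lists in a single call, so there is no need to chain it. A sketch with four lists:

```yaml
- name: with_together -> loop, four lists
  debug:
    msg: "{{ item.0 }} - {{ item.1 }} - {{ item.2 }} - {{ item.3 }}"
  loop: "{{ list_one | zip(list_two, list_three, list_four) | list }}"
```

As with `with_together`, `item.0` through `item.3` correspond to the lists in the order they were passed.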
|
52,901,272
|
Ansible pip3 install keeps failing on remote server (No setuptools found in remote host, please install it first)
|
I am trying to set up my remote servers and have Ansible install required packages. In my playbook.yml everything works fine except when it tries to install requirements.txt on one remote server. It gives me the following error: FAILED! => {"changed": false, "msg": "No setuptools found in remote host, please install it first."} And yes, I do have setuptools installed on the remote host. # pip3 show setuptools Name: setuptools Version: 40.4.3 Summary: Easily download, build, install, upgrade, and uninstall Python packages Home-page: [URL] Author: Python Packaging Authority Author-email: distutils-sig@python.org License: UNKNOWN Location: /usr/lib/python3.6/site-packages Requires: Required-by: pipenv Not sure why it even needs setuptools when I'm using pip3 to install. Here is my playbook snippet: - name: Install requirements pip: requirements: /.supv/bridge_modules/requirements.txt executable: pip3 It seems to work fine on the other remote hosts; just this one is having trouble. I've tried to uninstall setuptools and reinstall, still no luck. Any ideas?
|
Ansible pip3 install keeps failing on remote server (No setuptools found in remote host, please install it first) I am trying to set up my remote servers and have Ansible install required packages. In my playbook.yml everything works fine except when it tries to install requirements.txt on one remote server. It gives me the following error: FAILED! => {"changed": false, "msg": "No setuptools found in remote host, please install it first."} And yes, I do have setuptools installed on the remote host. # pip3 show setuptools Name: setuptools Version: 40.4.3 Summary: Easily download, build, install, upgrade, and uninstall Python packages Home-page: [URL] Author: Python Packaging Authority Author-email: distutils-sig@python.org License: UNKNOWN Location: /usr/lib/python3.6/site-packages Requires: Required-by: pipenv Not sure why it even needs setuptools when I'm using pip3 to install. Here is my playbook snippet: - name: Install requirements pip: requirements: /.supv/bridge_modules/requirements.txt executable: pip3 It seems to work fine on the other remote hosts; just this one is having trouble. I've tried to uninstall setuptools and reinstall, still no luck. Any ideas?
|
python-3.x, pip, ansible
| 10
| 14,255
| 4
|
https://stackoverflow.com/questions/52901272/anisible-pip3-install-keeps-failing-on-remote-service-no-setuptools-found-in-re
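A plausible explanation for the question above (an assumption, not confirmed by the error alone): the `pip` module checks for setuptools under the Python interpreter Ansible uses to run modules on the remote host, which may be Python 2 even when `executable: pip3` is set, so a host where Python 2's setuptools is missing fails. Pointing Ansible at Python 3 on that host is one likely fix; the path is illustrative.

```yaml
# host_vars/badhost.yml -- force Ansible modules to run under Python 3 on this host
ansible_python_interpreter: /usr/bin/python3
```

Alternatively, installing the Python 2 setuptools package on the failing host should satisfy the check without changing the interpreter.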
|
52,313,028
|
Ansible 2.6: Is there a way to reference the playbook's name in a role task?
|
Given a playbook like this: - name: "Tasks for service XYZ" hosts: apiservers roles: - { role: common } Is there a way to reference the playbook's name ("Tasks for service XYZ")? (i.e. a variable) EDIT: My intention is to be able to reference the playbook's name in a role task, i.e. sending a msg via slack like - name: "Send Slack notification indicating deploy has started" slack: channel: '#project-deploy' token: '{{ slack_token }}' msg: '*Deploy started* to _{{ inventory_hostname }}_ of {{ PLAYBOOK_NAME }} version *{{ service_version }}*' delegate_to: localhost tags: deploy
|
Ansible 2.6: Is there a way to reference the playbook's name in a role task? Given a playbook like this: - name: "Tasks for service XYZ" hosts: apiservers roles: - { role: common } Is there a way to reference the playbook's name ("Tasks for service XYZ")? (i.e. a variable) EDIT: My intention is to be able to reference the playbook's name in a role task, i.e. sending a msg via slack like - name: "Send Slack notification indicating deploy has started" slack: channel: '#project-deploy' token: '{{ slack_token }}' msg: '*Deploy started* to _{{ inventory_hostname }}_ of {{ PLAYBOOK_NAME }} version *{{ service_version }}*' delegate_to: localhost tags: deploy
|
ansible, ansible-2.x
| 10
| 9,012
| 3
|
https://stackoverflow.com/questions/52313028/ansible-2-6-is-there-a-way-to-reference-the-playbooks-name-in-a-role-task
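For the playbook-name question above: Ansible 2.8 introduced the magic variable `ansible_play_name`, which holds the current play's name and can be used directly in role tasks. On 2.6, a workable fallback is to duplicate the name into a play-level var that the role then references:

```yaml
- name: "Tasks for service XYZ"
  hosts: apiservers
  vars:
    # On Ansible >= 2.8, ansible_play_name makes this duplication unnecessary
    playbook_name: "Tasks for service XYZ"
  roles:
    - { role: common }
```

The Slack task can then use `{{ playbook_name }}` (or `{{ ansible_play_name }}` after upgrading) in its `msg`.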
|
46,541,438
|
extracting a variable from json output then debugging and registering the output with ansible
|
Hi, I have a problem getting one of the variables extracted from a JSON output (after doing a curl) parsed and registered back in Ansible. Playbook: - name: debug stdout debug: msg: "{{ result.stdout | from_json }}" register: dataresult - name: debug fact debug: msg: "{{ dataresult.data.start_time_string }}" output : TASK [backup_api : debug stdout] *********************************************** task path: /home/ansible/cm-dha/roles/backup_api/tasks/main.yml:36 ok: [127.0.0.1] => { "msg": { "data": [ { "backup_id": 40362, "certified": null, "instance_id": 148, "start_time": 1506985211, "start_time_string": "10/03/2017 03:00:11 am" } ], "timestamp": 1507022232 } } error: fatal: [127.0.0.1]: FAILED! => { "failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'data'\n\nThe error appears to have been in '/home/ansible/cm-dha/roles/backup_api/tasks/main.yml': line 48, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: debug fact\n ^ here\n" The error happens when trying to extract the value start_time_string, so how can I do it properly? I have tried many things, like using with_items, with_dict, simulating the data[] output to debug, and even doing a JSON query, but without success, so any help here?
|
json, ansible
| 10
| 27,550
| 2
|
https://stackoverflow.com/questions/46541438/extracting-a-variable-from-json-output-then-debug-and-register-the-outout-with-a
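Two things bite here: registering a `debug` task stores the debug result (the parsed JSON sits under `msg`, not at the top level), and `data` is a list, so it has to be indexed. A hedged sketch that parses once into a fact and indexes the first entry:

```yaml
# Sketch: parse the curl output once, then index into the data list.
- name: Parse curl output into a fact
  set_fact:
    backup_info: "{{ result.stdout | from_json }}"

- name: Show the start time of the first backup entry
  debug:
    msg: "{{ backup_info.data[0].start_time_string }}"
```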
|
36,999,933
|
Check stdout of async Ansible task
|
How can you failed_when based on the stdout of an async Ansible task? I've tried variations of: - name: Run command command: arbitrary_command async: 3600 poll: 10 register: result failed_when: "Finished 'command'" in result.stdout This results in: fatal: [localhost] => error while evaluating conditional: "Finished 'command'" in result.stdout
|
ansible
| 10
| 8,772
| 1
|
https://stackoverflow.com/questions/36999933/check-stdout-of-async-ansible-task
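The error is YAML/Jinja quoting rather than the async machinery: the whole conditional must be one quoted expression, with the string literal nested inside it. A sketch (assuming the command really prints that string):

```yaml
- name: Run command
  command: arbitrary_command
  async: 3600
  poll: 10
  register: result
  # Quote the entire expression: escaped double quotes inside,
  # plain double quotes around the whole conditional.
  failed_when: "\"Finished 'command'\" in result.stdout"
```

Note that with `poll: 0` (fire-and-forget) there is no `stdout` yet; the check would have to move to a later `async_status` task.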
|
36,477,176
|
How to get the access key with iam_module of Ansible?
|
I am using Ansible to create AWS users. One of the features of Ansible is creating a user with an access key. I am wondering how I could get the access key after the user has been successfully created. [URL] tasks: - name: Create two new IAM users with API keys iam: iam_type: user name: "{{ item }}" state: present password: "{{ temp_pass }}" access_key_state: create with_items: - user
|
amazon-web-services, ansible, amazon-iam
| 10
| 3,469
| 2
|
https://stackoverflow.com/questions/36477176/how-to-get-the-access-key-with-iam-module-of-ansible
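One hedged approach is to `register` the module result and inspect it: the old `iam` module returns the created key material in its result (the exact path varies by module version, so dump the whole structure first rather than guessing a field name):

```yaml
- name: Create IAM user with an API key
  iam:
    iam_type: user
    name: "{{ item }}"
    state: present
    password: "{{ temp_pass }}"
    access_key_state: create
  with_items:
    - user
  register: iam_result

- name: Inspect the returned structure to locate the access key fields
  # In production add no_log: true to the task above, since the
  # result contains the secret access key.
  debug:
    var: iam_result
```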
|
34,394,672
|
Getting the IP address/attributes of the AWS instance created using Ansible
|
I know how to create an AWS instance using Ansible. Now what I want to achieve is to configure that instance as web server by installing nginx using the same playbook which created the instance. The goal of the playbook will be: Create an AWS instance. Configure the instance as Web server by setting up the Nginx server. Is it possible with ansible?
|
amazon-web-services, ansible
| 10
| 13,353
| 3
|
https://stackoverflow.com/questions/34394672/getting-the-ip-address-attributes-of-the-aws-instance-created-using-ansible
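Yes, this is the classic two-play pattern: register the `ec2` result in a localhost play, feed the new address to `add_host`, then target that in-memory group in a second play. A sketch using the classic `ec2` module; the key name and AMI are placeholders to adjust:

```yaml
- name: Launch instance
  hosts: localhost
  gather_facts: no
  tasks:
    - ec2:
        key_name: mykey          # placeholder
        instance_type: t2.micro
        image: ami-123456        # placeholder
        wait: yes
      register: ec2

    # Registered result exposes each instance's addresses
    - add_host:
        name: "{{ item.public_ip }}"
        groups: launched
      with_items: "{{ ec2.instances }}"

    - wait_for:
        host: "{{ item.public_ip }}"
        port: 22
      with_items: "{{ ec2.instances }}"

- name: Configure the new instance as a web server
  hosts: launched
  become: yes
  tasks:
    - package:
        name: nginx
        state: present
```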
|
29,337,686
|
PgSQL - How to import database dump only when database completely empty?
|
The use-case is actually to automate this with ansible. I want to import a database dump only when the database is completely empty (no tables inside). Of course there's always the option to execute an SQL statement, but that is a last resort; I believe there should be a more elegant solution. The pg_restore manual doesn't provide such an option as far as I can see. Here's how I'm planning to do this with ansible: - name: db_restore | Receive latest DB backup shell: s3cmd --skip-existing get s3cmd ls s3://{{ aws_bucket }}/ | grep sentry | tail -1 | awk '{print $4}' sql.latest.tgz args: chdir: /root/ creates: sql.latest.tgz - name: db_restore | Check if file exists stat: path=/root/sql.latest.tgz register: sql_latest - name: db_restore | Restore latest DB backup if backup file found shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }} when: sql_latest.stat.exists ignore_errors: True Ideally this should check whether the DB is empty. No ansible module exists for this purpose, and Google is silent as well. The current solution also works: it gives an error when the import fails, and I can just ignore the error, but it's a bit painful to see a false alarm.
|
postgresql, ansible, database-backups
| 10
| 5,512
| 4
|
https://stackoverflow.com/questions/29337686/pgsql-how-to-import-database-dump-only-when-database-completely-empty
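A hedged sketch of the guard: count the user tables with a read-only `psql` call first, then make the restore conditional on that count instead of relying on `ignore_errors`:

```yaml
- name: Count non-system tables (read-only check)
  shell: >
    PGPASSWORD={{ dbpassword }} psql -U{{ dbuser }} -h{{ pgsql_server }} -tAc
    "SELECT count(*) FROM pg_catalog.pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')" {{ dbname }}
  register: table_count
  changed_when: false

- name: Restore latest DB backup only into an empty database
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists and (table_count.stdout | int) == 0
```

`-tAc` makes psql print just the bare number, so the `when:` comparison stays simple.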
|
28,606,876
|
Ansible: Check if service is listening on a specific port
|
How would you go about using Ansible to confirm whether a service is running on a specific port? For example: Is Apache running on port 80? Is MySQL listening on port 3912? Is Tomcat listening on port 8080? I understand that there are the service and wait_for commands, which individually check if a service is running and if a port is in use - but I've not found anything so far to check if a particular service is listening on a particular port. service and wait_for will indicate there's a service and a port, but there's no guarantee that the port is taken by that particular service - it could be taken by anything. wait_for , as I understand it, simply checks if it's being used. There is a regex_search parameter on wait_for which mentions searching in a socket connection for a particular string, but as I understand it this is simply reading any information that comes down that socket rather than having any access to what is sending that information. How can we go about this?
|
linux, ansible
| 10
| 31,781
| 4
|
https://stackoverflow.com/questions/28606876/ansible-check-if-service-is-listening-on-a-specific-port
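One hedged way is to ask the kernel which process owns the socket, for example with `ss`, and fail when the expected service name is missing from the output:

```yaml
- name: Check which process is listening on port 80
  # -t TCP, -l listening, -n numeric, -p owning process (needs root)
  shell: ss -tlnp 'sport = :80'
  register: port_check
  changed_when: false
  failed_when: "'nginx' not in port_check.stdout"
  become: yes
```

On newer setups the `community.general.listen_ports_facts` module is an alternative: it returns structured `tcp_listen` facts including the owning process name per port, which avoids parsing shell output.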
|
72,094,835
|
Disable Ansible gather facts from the command line
|
To speed up execution of Ansible playbooks, I occasionally want to disable gathering facts during the setup phase. This can be done in the playbook by adding: gather_facts: False but how can it be controlled in the command line? I execute my Ansible playbook like this: ansible-playbook playbook.yaml -i inventory.yaml
|
ansible, ansible-facts
| 10
| 15,922
| 2
|
https://stackoverflow.com/questions/72094835/disable-ansible-gather-facts-from-the-command-line
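There is no dedicated `ansible-playbook` flag for this, but the play keyword can be driven by a variable with a default, which an extra-var then overrides. A sketch (`gather` is an arbitrary variable name chosen here):

```yaml
# playbook.yaml
- hosts: all
  gather_facts: "{{ gather | default(true) }}"
  tasks:
    - debug:
        msg: "hello"
```

Then `ansible-playbook playbook.yaml -i inventory.yaml -e gather=false` skips the setup phase, while the plain invocation keeps gathering facts.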
|
40,992,585
|
Ansible: Use variable for defining playbook hosts
|
I have the following version installed: ansible 2.3.0 (devel 2131eaba0c) I want to specify my host variable as external variable and then use it in the playbook similar to this: hosts: "{{integration}}" In my group_vars/all file I have the following defined variable: integration: "int60" The host file looks like this: [int60] hostA [int61] hostB Unfortunately this does not work. I also tried to define the host var in the following way: [integration] 127.0.0.1 ansible_host="{{ integration_env }}" and have the integration_env specified in my group_vars/all file. In this case it seemed like it ran the tasks locally and not in the desired environment. Is it possible to do something like this? I'd be open to whole new ways of doing this. The main goal is simply to define the host variable in a var file.
|
ansible, ansible-2.x
| 10
| 18,386
| 1
|
https://stackoverflow.com/questions/40992585/ansible-use-variable-for-defining-playbook-hosts
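`group_vars` are resolved per host, i.e. only after the play's hosts have already been selected, so they cannot feed `hosts:` itself. Extra-vars can, optionally combined with a default. A sketch:

```yaml
# playbook.yml
- hosts: "{{ integration | default('int60') }}"
  tasks:
    - ping:
```

Running `ansible-playbook -i hosts playbook.yml -e integration=int61` then targets hostB, while the bare invocation falls back to int60.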
|
33,094,075
|
ansible lineinfile regex multiline
|
I'm trying to edit apache.conf using Ansible. Here's part of my conf: # Sets the default security model of the Apache2 HTTPD server. It does # not allow access to the root filesystem outside of /usr/share and /var/www. # The former is used by web applications packaged in Debian, # the latter may be used for local directories served by the web server. If # your system is serving content from a sub-directory in /srv you must allow # access here, or in any related virtual host. <Directory /> Options FollowSymLinks AllowOverride None Require all denied </Directory> <Directory /usr/share> AllowOverride None Require all granted </Directory> <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride None Require all granted </Directory> #<Directory /srv/> # Options Indexes FollowSymLinks AllowOverride All # Require all granted #</Directory> I want to change this block <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride None Require all granted </Directory> into <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride All Require all granted </Directory> i.e. set AllowOverride from None to All. I'm using this ansible task - name: change htaccess support lineinfile: dest: /etc/apache2/apache2.conf regexp: '\s<Directory /var/www/>\n\sOptions Indexes FollowSymLinks\n\sAllowOverride' line: "AllowOverride All" tags: - test However, AllowOverride All is always added to the end of the file. What's the correct regex pattern to do this job? I don't use an ansible template because I only change one line.
|
regex, apache, ansible
| 10
| 22,328
| 3
|
https://stackoverflow.com/questions/33094075/ansible-lineinfile-regex-multiline
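`lineinfile` matches one line at a time, so a `\n` in `regexp` can never match, and when nothing matches, the module appends `line` at the end of the file, which is exactly the observed behaviour. The `replace` module supports multiline patterns. A sketch:

```yaml
- name: Enable .htaccess overrides for /var/www only
  replace:
    path: /etc/apache2/apache2.conf
    # Capture from the Directory opening tag through "AllowOverride ",
    # then rewrite the value. [^<]* spans newlines but stops before
    # the next tag, so other Directory blocks are untouched.
    regexp: '(<Directory /var/www/>[^<]*AllowOverride )None'
    replace: '\1All'
```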
|
21,738,661
|
ansible: recursive loop detected in template string
|
In a playbook, I am using a role this way: - { role: project, project_name: "{{project_name}}" } And in the "project" role, I actually have a dependency that wants to use the project_name variable of the "project" role: --- dependencies: - { role: users, users: [ { name: "{{project_name}}", home: "/home/{{project_name}}", shell: "/bin/bash", group: "{{project_name}}", } ] } But I get an error: recursive loop detected in template string: {{project_name}} Is changing the name of the "project_name" variable the only solution? Thanks
|
ansible
| 10
| 18,947
| 1
|
https://stackoverflow.com/questions/21738661/ansible-recursive-loop-detected-in-template-string
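Yes: `project_name: "{{project_name}}"` asks Ansible to define a variable in terms of itself, hence the recursion. Either rename the outer variable or simply drop the pass-through (a role parameter that merely forwards a play variable of the same name is redundant, since the role sees play variables anyway). A sketch:

```yaml
# Option 1: rely on normal variable inheritance
- { role: project }

# Option 2: give the outer variable a different name
# (my_project_name is a placeholder defined elsewhere, e.g. in group_vars)
- { role: project, project_name: "{{ my_project_name }}" }
```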
|
53,542,389
|
Modify JSON in Ansible
|
I have a management system where we define maintenance data to control virtual environment, and one of the options is VM shutdown timeframe for different teams. Now when new VM is created user should select from the available list of timeframes when his/her VM can be shut down without interruption to work shift. I need to be able to sync this list of timeframes from my database to job template survey. I'm stuck in modifying the JSON survey. I've tried this post best way to modify json in ansible but getting an error: "exception": " File \"/tmp/ansible_1qa8eR/ansible_module_json_modify.py\", line 38, in main\n res = jsonpointer.resolve_pointer(data, pointer)\n File \"/usr/lib/python2.7/site-packages/jsonpointer.py\", line 126, in resolve_pointer\n return pointer.resolve(doc, default)\n File \"/usr/lib/python2.7/site-packages/jsonpointer.py\", line 204, in resolve\n doc = self.walk(doc, part)\n File \"/usr/lib/python2.7/site-packages/jsonpointer.py\", line 279, in walk\n raise JsonPointerException(\"member '%s' not found in %s\" % (part, doc))\n", "msg": "member 'spec' not found in {'stderr_lines': [], 'changed': True, 'end' Here is my JSON that I'm trying to modify: { "spec": [ { "question_description": "", "min": 0, "default": "Test text", "max": 4096, "required": true, "choices": "", "variable": "_t", "question_name": "Note", "type": "textarea" }, { "required": true, "min": null, "default": "", "max": null, "question_description": "appliance id", "choices": "Unconfigured\n600,qvmProcessor/applianceexemptions,all", "new_question": true, "variable": "appid", "question_name": "Appliance ID", "type": "multiplechoice" }, { "required": true, "min": null, "default": "", "max": null, "question_description": "Select version", "choices": "1.2.3\n1.2.4\n1.2.5", "new_question": true, "variable": "version", "question_name": "App Version", "type": "multiplechoice" }, { "required": true, "min": 0, "default": "", "max": 1024, "question_description": "", "choices": "", 
"new_question": true, "variable": "newVMIP", "question_name": "IP for new VM", "type": "text" }, { "required": true, "min": 0, "default": "", "max": 1024, "question_description": "", "choices": "", "new_question": true, "variable": "requesterEmail", "question_name": "Requester's email", "type": "text" }, { "required": true, "min": null, "default": "", "max": null, "question_description": "Select the timeframe for automatic VM shutdown. ***NOTE*** EST Time is in 24 hour format", "choices": "23:00-02:00\n02:00-04:00\n04:00-06:00\n00:00-02:00", "new_question": true, "variable": "powerOFF_TimeFrame", "question_name": "Power OFF window", "type": "multiplechoice" }, { "required": true, "min": 0, "default": 5, "max": 30, "question_description": "The VM will be deleted after # of days specified (default=5).", "choices": "", "new_question": true, "variable": "vmNumReservedDays", "question_name": "Keep VM for # of days", "type": "integer" } ], "description": "", "name": "" } I have to update timeframes (the one before last) choices: "choices": "23:00-02:00\n02:00-04:00\n04:00-06:00\n00:00-02:00", Here is my code. 
I could read directly to variable but for now I'm just saving JSON to the file: - name: Sync Power Schedules From Database to Survey Spec hosts: awxroot gather_facts: no vars: new_choices: {} tasks: - name: Set shared directory name set_fact: sharedDataPath: /var/tmp/survey - name: Set shared file path name set_fact: sharedDataPathFile: "{{sharedDataPath}}/s.json" - name: Create directory to share data file: path: "{{ sharedDataPath }}" state: directory - name: Load Survey Spec to file shell: 'tower-cli job_template survey 70 > "{{ sharedDataPathFile }}"' - name: Make sure the survey spec file exists stat: path: "{{ sharedDataPathFile }}" register: isFileExists - name: Fail if file is not there fail: msg: "Cannot find survey spec exported file" when: isFileExists == False - name: Read exception file to a variable command: cat "{{ sharedDataPathFile }}" register: surveySpec when: isFileExists.stat.exists == True - name: Setting key set_fact: choices_key: "choices" - name: Setting new values set_fact: choices_value: "23:00-02:00\n02:00-04:00\n04:00-06:00\n00:00-04:00" - name: Create dictionary set_fact: new_choices: "{{ new_choices | combine({choices_key: choices_value}) }}" - json_modify: data: "{{ surveySpec }}" pointer: "/spec/6/choices" action: update update: "{{new_choices}}" register: result - debug: var: result.result
|
python, json, ansible
| 10
| 24,201
| 2
|
https://stackoverflow.com/questions/53542389/modify-json-in-ansible
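The traceback already names the cause: `surveySpec` is the whole `command` result dict (with `changed`, `stderr_lines`, etc.), not the JSON document; the JSON lives in `surveySpec.stdout`. Also note the timeframe question is the 6th of 7 entries, so its zero-based index in `spec` is 5, not 6. A hedged sketch, keeping the custom `json_modify` module from the referenced answer:

```yaml
- name: Parse the exported survey into a dict
  set_fact:
    survey: "{{ surveySpec.stdout | from_json }}"

- name: Update the Power OFF window choices
  json_modify:                     # custom module from the linked answer
    data: "{{ survey }}"
    pointer: "/spec/5/choices"     # 0-based: the 6th question
    action: update
    update: "{{ choices_value }}"  # the choices string itself, not a dict
  register: result
```

Since the pointer already ends in `/choices`, the update value should be the plain string, so the `combine`-built dictionary is not needed.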
|
43,791,040
|
Suppress Ansible Ad Hoc Warning
|
I have a python script that takes advantage of an Ansible ad hoc command to get host information quickly. I'd like to suppress the warning when I'm attempting to gather information about a host that is in a different VPC, but shows in the following command used to find all instances: aws ec2 describe-instances Below is the python snippet I'm using to make and generate the ansible ad hoc command: command_string = "ansible -i /repo/ansible/inventory/"+env+"/hosts " + name + " -m shell -a 'df -h'" result = subprocess.Popen(command_string, shell=True, stdout=subprocess.PIPE).stdout.read() I understand that in a playbook setting for the shell module: warn=no will disable warnings, but I can't seem to figure out how to do so via adhoc, see below test: [root@box-1b 10.0.5.xxx:~] ansible -i /repo/ansible/inventory/nqa/hosts 10.19.1.17 -m shell -a 'warn=no' [WARNING]: No hosts matched, nothing to do [root@box-1b 10.0.5.xxx:~] ansible -i /repo/ansible/inventory/nqa/hosts 10.19.1.17 -m shell -a 'warn=false' [WARNING]: No hosts matched, nothing to do The output of my full script looks similar to the following: i-xxxxxx my-super-cool-box t2.small True 10.0.0.10 vol-xxxxxxx 100 i-xxxxxxx /dev/xvdf [WARNING]: No hosts matched, nothing to do [WARNING]: No hosts matched, nothing to do [WARNING]: No hosts matched, nothing to do The information printed about the specific instance is correct, and all I'm looking for is a way to suppress that warning without changing the global ansible configurations.
|
python, amazon-web-services, ansible
| 10
| 30,825
| 4
|
https://stackoverflow.com/questions/43791040/suppress-ansible-ad-hoc-warning
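That message is Ansible's "no hosts matched" notice, not a module warning, so `warn=no` on the shell module cannot touch it. Two hedged per-invocation options, neither of which changes the global config: an environment-variable override (the variable name corresponds to the `host_pattern_mismatch` inventory setting, available in newer Ansible releases than 2.x-era ones), or discarding stderr, where Ansible prints its warnings:

```shell
# Option 1 (newer Ansible): silence only the host-pattern-mismatch warning
ANSIBLE_HOST_PATTERN_MISMATCH=ignore ansible -i /repo/ansible/inventory/nqa/hosts 10.19.1.17 -m shell -a 'df -h'

# Option 2: warnings go to stderr, so drop stderr for this call
ansible -i /repo/ansible/inventory/nqa/hosts 10.19.1.17 -m shell -a 'df -h' 2>/dev/null
```

In the Python wrapper, option 2 translates to passing `stderr=subprocess.DEVNULL` to `Popen`, so only stdout reaches the script's output.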
|
61,856,958
|
Molecule - test roles from other directory
|
I want to test my roles, which I keep in another directory. Below is my project structure. When I try to use Molecule, it can't find the roles in the roles directory. ❯ sudo molecule converge --> Test matrix └── default ├── dependency ├── create ├── prepare └── converge --> Scenario: 'default' --> Action: 'dependency' Skipping, missing the requirements file. Skipping, missing the requirements file. --> Scenario: 'default' --> Action: 'create' Skipping, instances already created. --> Scenario: 'default' --> Action: 'prepare' Skipping, prepare playbook not configured. --> Scenario: 'default' --> Action: 'converge' --> Sanity checks: 'docker' ERROR! the role 'curl' was not found in /home/belluu/programming/Ansible-Posthog/molecule/default/roles:/root/.cache/molecule/Ansible-Posthog/default/roles:/home/belluu/programming:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/home/belluu/programming/Ansible-Posthog/molecule/default The error appears to be in '/home/belluu/programming/Ansible-Posthog/molecule/default/converge.yml': line 5, column 7, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: roles: - role: curl ^ here Molecule tries to find the right directory, but without success. Is it possible to give it the path to the directory with the roles?
|
ansible, molecule
| 10
| 6,977
| 1
|
https://stackoverflow.com/questions/61856958/molecule-test-roles-from-other-directory
|
53,914,975
|
'dict object' has no attribute 'stdout' in Ansible Playbook
|
My playbook: - name: JBoss KeyStore and Truststore passwords will be stored in the password vault #shell: less "{{ vault }}" shell: cat "{{ vault }}" register: vault_contents tags: - BW.6.1.1.10 with_items: - "{{ vault }}" - debug: msg: "JBoss config filedoes not contains the word vault" when: vault_contents.stdout.find('$VAULT') == -1 I'm trying to read multiple files through Ansible using a Jinja2 template, parse the output as stdout, search for a keyword, and report it. It fails with the below error: TASK [testing_roles : debug] **************************************************************************. ***************************************************************** fatal: [d84e4fe137f4]: FAILED! => {"failed": true, "msg": "The conditional check 'vault_contents.stdout.find('$VAULT') == -1' failed. The error was: error while evaluating conditional (vault_contents.stdout.find('$VAULT') == -1): 'dict object' has no attribute 'stdout'\n\nThe error appears to have been in '/Ansible/Ansible/Relearn/testing_roles/roles/testing_roles/tasks/main.yml': line 49, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n - \"{{ vault }}\"\n - debug:\n ^ here\n"} to retry, use: --limit @/Ansible/Ansible/Relearn/testing_roles/playbook.retry With a single file entry it works as expected, but with a series of files it fails. Is this the right approach to scan multiple files in Ansible, or should I be using some other module or method? Any help is greatly appreciated. The vars file has the below contents: vault: - /jboss-as-7.1.1.Final/standalone/configuration/standalone-full-ha.xml Thank you
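The likely cause is that registering a variable inside a `with_items` loop stores one result per item under a `results` list, so the registered variable has no flat `stdout` attribute. A hedged sketch of how the `debug` task could iterate over that list (variable names taken from the post; this is not the poster's original code):

```yaml
# vault_contents.results holds one shell result per looped file,
# each with its own stdout.
- debug:
    msg: "JBoss config file does not contain the word vault"
  with_items: "{{ vault_contents.results }}"
  when: item.stdout.find('$VAULT') == -1
```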
|
ansible, ansible-2.x, ansible-facts, ansible-template
| 10
| 41,458
| 1
|
https://stackoverflow.com/questions/53914975/dict-object-has-no-attribute-stdout-in-ansible-playbook
|
42,267,299
|
Ansible Install MySql 5.7 - Set Root User Password
|
I've recently upgraded my vagrant from ubuntu/trusty-64 to bento/ubuntu-16.04 . With that MySQL was updated to 5.7 . I've made several updates to my playbook, but I keep getting stuck when setting the root user's password. In the past (before 5.7) the following was sufficient: - name: MySQL | Set the root password. mysql_user: name=root host=localhost password={{ mysql_root_password }} become: true In my playbook this is tested by attempting to delete an anonymous user. - name: MySQL | Delete anonymous MySQL server user for {{ server_hostname }} mysql_user: name="" host="{{ server_hostname }}" state="absent" login_user=root login_password={{ mysql_root_password }} However, now my playbook fails at this step, returning: "Access denied for user 'root'@'localhost'" TASK [mysql : MySQL | Delete anonymous MySQL server user for vagrant] ********** task path: /Users/jonrobinson/vagrant/survey/playbooks/roles/mysql/tasks/mysql.yml:51 fatal: [vagrant]: FAILED! => {"changed": false, "failed": true, "msg": "unable to connect to database, check login_user and login_password are correct or /home/vagrant/.my.cnf has the credentials. Exception message: (1698, \"Access denied for user 'root'@'localhost'\")"} I've tried several things: Setting the password blank for root user mysql_root_password="" Attempting to delete the root user then recreate it with Ansible. I get same error probably because it's trying to act at the root user. Manually updating the root password in mysql. - This also doesn't appear to work (password isn't recognized) unless I delete the root user and recreate it with all the permissions. Just updating the root user password appears to have no change. 
My Full MySQL YAML: --- - name: MySQL | install mysql packages apt: pkg={{ item }} state=installed become: true with_items: - mysql-client - mysql-common - mysql-server - python-mysqldb - name: MySQL | create MySQL configuration file template: src=my.cnf.j2 dest=/etc/mysql/my.cnf backup=yes owner=root group=root mode=0644 become: true - name: MySQL | create MySQLD configuration file template: src=mysqld.cnf.j2 dest=/etc/mysql/conf.d/mysqld.cnf backup=yes owner=root group=root mode=0644 become: true - name: MySQL | restart mysql service: name=mysql state=restarted become: true - name: MySQL | Set the root password. mysql_user: name=root host=localhost password={{ mysql_root_password }} become: true - name: MySQL | Config for easy access as root user template: src=mysql_root.my.cnf.j2 dest=/root/.my.cnf become: true - name: MySQL | Config for easy access as root user template: src=mysql_root.my.cnf.j2 dest={{ home_dir }}/.my.cnf when: "'{{ user }}' != 'root'" - name: MySQL | Delete anonymous MySQL server user for {{ server_hostname }} mysql_user: name="" host="{{ server_hostname }}" state="absent" login_user=root login_password={{ mysql_root_password }} - name: MySQL | Delete anonymous MySQL server user for localhost mysql_user: name="" state="absent" host=localhost login_user=root login_password={{ mysql_root_password }} - name: MySQL | Secure the MySQL root user for IPV6 localhost (::1) mysql_user: name="root" password="{{ mysql_root_password }}" host="::1" login_user=root login_password={{ mysql_root_password }} - name: MySQL | Secure the MySQL root user for IPV4 localhost (127.0.0.1) mysql_user: name="root" password="{{ mysql_root_password }}" host="127.0.0.1" login_user=root login_password={{ mysql_root_password }} - name: MySQL | Secure the MySQL root user for localhost domain (localhost) mysql_user: name="root" password="{{ mysql_root_password }}" host="localhost" login_user=root login_password={{ mysql_root_password }} - name: MySQL | Secure the MySQL root 
user for {{ server_hostname }} domain mysql_user: name="root" password="{{ mysql_root_password }}" host="{{ server_hostname }}" login_user=root login_password={{ mysql_root_password }} - name: MySQL | Remove the MySQL test database mysql_db: db=test state=absent login_user=root login_password={{ mysql_root_password }} - name: MySQL | create application database user mysql_user: name={{ dbuser }} password={{ dbpass }} priv=*.*:ALL host='%' state=present login_password={{ mysql_root_password }} login_user=root - name: MySQL | restart mysql service: name=mysql state=restarted become: true
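On Ubuntu 16.04, MySQL 5.7's root account typically authenticates through the `auth_socket` plugin rather than a password, which explains the `Access denied for user 'root'@'localhost'` errors until the first successful password change. A sketch (an assumption, not from the original post) of the password-setting task connecting over the Unix socket via `mysql_user`'s `login_unix_socket` option; the socket path is the Ubuntu default and may differ:

```yaml
- name: MySQL | Set the root password (connect via socket as the system root user)
  mysql_user:
    name: root
    host: localhost
    password: "{{ mysql_root_password }}"
    # Bypasses password auth by using the auth_socket plugin as root.
    login_unix_socket: /var/run/mysqld/mysqld.sock
  become: true
```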
|
mysql, ansible, ubuntu-16.04, mysql-5.7
| 10
| 19,440
| 2
|
https://stackoverflow.com/questions/42267299/ansible-install-mysql-5-7-set-root-user-password
|
30,739,178
|
How to add apt key with --recv-keys instead of --recv?
|
I want to install facebook osquery with ansible. The instructions for ubuntu are as follows: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C9D8B80B ... Unfortunately setting the id to C9D8B80B doesn't work. In tasks: - name: Add repository key apt_key: keyserver=keyserver.ubuntu.com id=C9D8B80B state=present The command fails: TASK: [osquery | Add repository key] ****************************************** failed: [x.x.x.x] => {"cmd": "apt-key adv --keyserver keyserver.ubuntu.com --recv C9D8B80B", "failed": true, "rc": 2} The difference is --recv C9D8B80B vs --recv-keys C9D8B80B . Which ansible apt_key option corresponds to --recv-keys ?
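If `apt_key`'s generated `--recv` invocation keeps failing, one hedged fallback (not from the original post) is to run the upstream-documented command verbatim with the `command` module; the `changed_when` heuristic below is an assumption about gpg's output and may need adjusting:

```yaml
- name: Add repository key exactly as documented upstream
  command: apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C9D8B80B
  register: add_key
  # gpg reports newly fetched keys on stderr; treat re-runs as unchanged.
  changed_when: "'imported' in add_key.stderr"
  become: true
```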
|
ubuntu, ansible
| 10
| 8,640
| 1
|
https://stackoverflow.com/questions/30739178/how-to-add-apt-key-with-recv-keys-instead-of-recv
|
29,275,576
|
Ansible: How to check files is changed in shell command?
|
I have a file that is generated by a shell command - stat: path=/etc/swift/account.ring.gz get_md5=yes register: account_builder_stat - name: write account.ring.gz file shell: swift-ring-builder account.builder write_ring <--- rewrites account.ring.gz chdir=/etc/swift changed_when: ??? account_builder_stat.changed ??? <-- does not give the desired effect How can I check that the file has been changed?
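Since `changed_when` is evaluated against the registered result of the same task, a common pattern (a sketch, not from the post) is to stat the file before and after the command and compare checksums in a follow-up condition:

```yaml
- stat:
    path: /etc/swift/account.ring.gz
    get_checksum: yes
  register: ring_before

- name: write account.ring.gz file
  shell: swift-ring-builder account.builder write_ring
  args:
    chdir: /etc/swift

- stat:
    path: /etc/swift/account.ring.gz
    get_checksum: yes
  register: ring_after

# default('') guards the first run, when the file may not yet exist
- name: react only if the ring file actually changed
  debug:
    msg: "account.ring.gz was rewritten"
  when: ring_before.stat.checksum | default('') != ring_after.stat.checksum | default('')
```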
|
ansible
| 10
| 7,661
| 2
|
https://stackoverflow.com/questions/29275576/ansible-how-to-check-files-is-changed-in-shell-command
|
17,582,307
|
Ansible windows client or host with Ansible linux server? Possible?
|
I am using Ansible for an infrastructure management problem in my project. I got this working with a Linux client, for example copying a bin file from the Ansible server and installing it on the client machine. This involves tasks in my playbooks using normal Linux commands like ssh, scp, ./bin, etc. Now I want to achieve the same with a Windows client. I couldn't find any good documentation to try it out. If any of you have tried using Ansible with a Windows client, it would be great if you could share the procedure, a prototype, or any piece of information to start with and make progress on my problem.
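Ansible does support Windows targets, but over WinRM rather than SSH, using the `win_*` modules instead of shell commands. A minimal, hedged inventory-vars sketch (host group name, credentials, and paths are placeholders, not from the original post):

```yaml
# group_vars/windows.yml -- connection settings for Windows hosts
ansible_connection: winrm
ansible_user: Administrator
ansible_password: "{{ vault_win_admin_password }}"
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore   # lab use only

# Example task using a Windows-specific module in place of scp:
# - win_copy:
#     src: files/app.bin
#     dest: C:\temp\app.bin
```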
|
windows, client, infrastructure, ansible
| 10
| 4,286
| 4
|
https://stackoverflow.com/questions/17582307/ansible-windows-client-or-host-with-ansible-linux-server-possible
|
53,529,978
|
Ansible: Unexpected templating type error: expected string or buffer
|
I have a register with the following contents: ok: [hostname] => { "changed": false, "msg": { "changed": true, "cmd": "cd /tmp\n ./status.sh dev", "delta": "0:00:00.023660", "end": "2018-11-28 17:46:05.838934", "rc": 0, "start": "2018-11-28 17:46:05.815274", "stderr": "", "stderr_lines": [], "stdout": "application is not running. no pid file found", "stdout_lines": [ "application is not running. no pid file found" ] } } When I see the substring "not" in the register's stdout, I want to trigger another task: - name: Starting Application As Requested shell: /tmp/start.sh when: operation_status.stdout | search('not') However, I see this error in my triggered task fatal: [host]: FAILED! => { "failed": true, "msg": "The conditional check 'operation_status.stdout | search('not')' failed. The error was: Unexpected templating type error occurred on ({% if operation_status.stdout | search('not') %} True {% else %} False {% endif %}): expected string or buffer\n\nThe error appears to have been in '/path/to/ansible_playbook.yml': line 46, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Starting Application As Requested\n ^ here\n" I only see this error when adding the when condition. Without it, my playbook succeeds. What am I doing wrong here? Version details: ansible 2.3.0.0 python version = 2.6.6 (r266:84292, Aug 9 2016, 06:11:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-17)]
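The `search` filter was unreliable in older Ansible releases and was later deprecated in favor of the `is search` test. A more robust condition (a sketch, not from the post) is the plain `in` operator with a `default` guard so a missing `stdout` attribute doesn't raise:

```yaml
- name: Starting Application As Requested
  shell: /tmp/start.sh
  # default('') keeps the condition safe if stdout is absent on some hosts
  when: "'not' in (operation_status.stdout | default(''))"
```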
|
string, ansible, conditional-statements, buffer, templating
| 10
| 32,140
| 1
|
https://stackoverflow.com/questions/53529978/ansible-unexpected-templating-type-error-expected-string-or-buffer
|
25,979,839
|
How to set FQDN with ansible?
|
It seems the recommended method doesn't work for me: - name: Set hostname hostname: name=mx.mydomain.net After rebooting, you can see I have problems with the FQDN; there is nothing relevant in /etc/hosts. root@mx:~# cat /etc/hosts 127.0.0.1 localhost 127.0.1.1 mail mail # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters root@mx:~# cat /etc/hostname mx.mydomain.net root@mx:~# hostname mx.mydomain.net root@mx:~# hostname -f hostname: Name or service not known
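The `hostname` module only sets the system hostname; for `hostname -f` to work, the FQDN must also be resolvable, typically via an `/etc/hosts` entry. A hedged sketch of the usual companion task (not from the original post):

```yaml
- name: Set the short hostname
  hostname:
    name: mx

- name: Make the FQDN resolvable locally
  lineinfile:
    path: /etc/hosts
    regexp: '^127\.0\.1\.1'
    # FQDN first, then the short name, per hosts(5) convention
    line: '127.0.1.1 mx.mydomain.net mx'
```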
|
ubuntu, ansible
| 10
| 14,376
| 1
|
https://stackoverflow.com/questions/25979839/how-to-set-fqdn-with-ansible
|