Ansible is a tool to run commands remotely on batches of machines; it is analogous to Chef and Puppet. An advantage over them is that Ansible only needs to be installed on the machine that will "give the orders". The "commanded" machines just need ssh access.
Role: a collection of tasks and templates (among other things). A role should have one specific goal, e.g.: install and configure nginx.
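For reference, a role skeleton created with ansible-galaxy init follows a standard directory layout (the role name "acme" is just an example):

```
acme/
├── defaults/main.yml   # default variables (lowest precedence)
├── files/              # static files copied to hosts
├── handlers/main.yml   # handlers triggered via "notify"
├── meta/main.yml       # role metadata and dependencies
├── tasks/main.yml      # entry point: the role's task list
├── templates/          # Jinja2 templates
└── vars/main.yml       # role variables (higher precedence)
```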
Inventory: a list of hosts, optionally organized into groups, on which you will run playbooks. Inventories can also be dynamic (generated by a script).
Playbook: this is where you basically tell Ansible: please apply roles x, y and z to machines m1, m2 and m3.
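As a sketch, such a playbook could look like the following (the group and role names are illustrative):

```yaml
# site.yml -- maps roles onto host groups; names below are hypothetical
---
- hosts: webservers        # group defined in the inventory
  become: true             # escalate privileges where needed
  roles:
    - nginx                # install and configure nginx
    - app_backend          # deploy the application
```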
To debug a playbook interactively, set "strategy: debug" on the play, at the same level as "tasks",
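A minimal sketch of a play with the debugger enabled (host group and task are illustrative):

```yaml
---
- hosts: staging
  strategy: debug          # drops into an interactive debugger when a task fails
  tasks:
    - name: Task that may fail
      ping:
```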
in the yml file. More here: http://docs.ansible.com/ansible/latest/playbooks_debugger.html

Create a virtualenv and install ansible from pip:

$ pip install ansible
Create a hosts file containing your machine's IP (localhost for now, just to run some tests) and point the ANSIBLE_HOSTS environment variable at it:

$ echo "127.0.0.1" > ~/ansible_hosts
$ export ANSIBLE_HOSTS=~/ansible_hosts
Configure ssh-agent. If you don't have an ssh key, generate it first with ssh-keygen. Then:

$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
Copy your public key to each one of the hosts you want to remotely control:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub '-p 22 root@192.168.0.13'
You must give Ansible a list of the servers you want to command:

$ vim ~/ansible_hosts

[localhost]
127.0.0.1

[staging]
192.168.0.13:22 ansible_ssh_user=root

[production]
192.168.0.113:2200
192.168.0.127:22
192.168.0.135:22
192.168.0.27:22 ansible_ssh_user=devel

NOTE: for some of the servers, notice I have passed the remote ssh user (ansible_ssh_user=root) because my local user's key (tiago) gives me root credentials on the respective remote machines.
Test if ansible is able to connect to the remote machines:
$ ansible -i ~/ansible_hosts all -m ping
$ ansible -i ~/ansible_hosts all -a "echo 'hello'"
If you get an authentication error on any of the servers, you will have to manually ssh into that server to check that your key is OK and valid. If it is, you may need to set "ansible_ssh_user" as described above.
$ vim ~/deploy.playbook.yml
---
- hosts: staging
  remote_user: root
  vars:
    remote_deploy_dir: "/tmp/ansible/git/backend/"
  tasks:
    - name: Tests the connection with the host
      ping:
      remote_user: root
    - name: Creates dir {{ remote_deploy_dir }}, if it does not exist
      shell: mkdir -p {{ remote_deploy_dir }}
      ignore_errors: True
    - name: Creates file "/tmp/ansible/my_file.txt"
      copy: src=/tmp/ansible/my_file.txt dest=/tmp/ansible/my_file.txt owner=root group=root mode=0644
    - name: Copies dir "/tmp/ansible/recursivo"
      copy: src=/tmp/ansible/recursivo dest=/tmp/ansible/ owner=root group=root mode=0644
    - name: Gets current date/time
      shell: date '+%Y%m%d_%H%M%S_%N'
      register: mydate  # captures the stdout of this command in a var named "mydate"
    - name: Creates a directory with the current date/time
      shell: mkdir -p /tmp/ansible/{{ mydate.stdout }}
    - name: STOPS service apache2, if running
      service: name=apache2 state=stopped
    - name: STARTS service apache2
      service: name=apache2 state=started
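With the inventory created in the steps above, the playbook can then be run (the --check and -v flags are standard options of ansible-playbook):

```
$ ansible-playbook -i ~/ansible_hosts ~/deploy.playbook.yml

# useful variations:
$ ansible-playbook -i ~/ansible_hosts ~/deploy.playbook.yml --check   # dry run, no changes applied
$ ansible-playbook -i ~/ansible_hosts ~/deploy.playbook.yml -v        # verbose output
```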
(references: https://galaxy.ansible.com/intro https://www.digitalocean.com/community/tutorials/how-to-use-ansible-roles-to-abstract-your-infrastructure-environment)
You could have all your playbook tasks on a giant file, but that would hardly be maintainable.
So it is nice to modularize them into roles, which is always a good practice. This has the added benefit that you can upload your roles to, and reuse other roles from, Ansible Galaxy, a repository containing many ansible roles that you can include as requirements of your own roles.
Below is a cheatsheet on how to develop a playbook following these best practices.
$ ansible-galaxy init acme --force
(this creates a role skeleton named "acme" in the current directory; --force overwrites it if it already exists).
$ ansible-galaxy install --roles-path . username.rolename
(this installs the desired role to the current directory).
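Galaxy roles can also be declared in a requirements file and installed in one go. A sketch (the role names below are illustrative):

```yaml
# requirements.yml -- role names are illustrative
---
- src: geerlingguy.nginx      # a role from Ansible Galaxy
- src: username.rolename
  version: "1.0.0"            # optionally pin a specific version
```

Then:

$ ansible-galaxy install -r requirements.yml --roles-path .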