Terraform is an Infrastructure as Code tool used to create infrastructure, and it should never be used to configure operating systems or applications; Ansible is the de facto standard for application configuration as code.
The infrastructure created with Terraform is linked to Ansible using infrastructure tags.
How to use Ansible and Terraform together
AWS tags created with Terraform are used to identify the infrastructure elements and apply Ansible playbooks.
Ansible Setup and Configuration
Configure Ansible by copying the private key file created when deploying the infrastructure in AWS and setting up the dynamic inventory.
The Ansible dynamic inventory is created by a script that queries AWS and uses the AWS tags to classify the infrastructure.
Using the AWS Tags as Ansible Targets
The AWS tags present in the Ansible dynamic inventory are used by Ansible to select the playbook targets.
This demo shows how to use Ansible to configure resources created by Terraform.
The tutorial assumes that you have already completed the deployment of the infrastructure in AWS from the previous articles.
In order to link the Terraform infrastructure with Ansible, we will use the AWS tags created with Terraform to identify the elements and apply Ansible playbooks. The AWS tags are used to create an Ansible Dynamic Inventory.
The following screenshot from the AWS EC2 Console shows the tags applied to an EC2 instance.
Best Practice: Tagging all resources
All the AWS resources created with Terraform have tags that follow a company-wide standard. Tags are used for provisioning, monitoring, and cost control. Check AWS Resource Tagging best practices.
Set the environment variables needed to configure Ansible.
Ansible dynamic inventory is a feature that generates inventory (host and group information) dynamically rather than defining it in a static inventory file.
To configure it, set the environment variable ANSIBLE_INVENTORY pointing to the script that will generate the inventory.
export ANSIBLE_INVENTORY=$(pwd)/inventory/ec2.py
Ansible requires SSH access to the instances, and therefore authentication is performed using a private key.
Copy the private key file created when deploying the infrastructure to ${HOME}/keys/ditwl_kp_infradmin.pem
For simplicity, we are using the same private key for everything, but in a real scenario it is recommended to use a different key for each combination of environment and application. See Ansible Multiple Environments below for an explanation.
Private key file: set the correct access permissions for the private key (read and write allowed only to the owner).
chmod 600 $HOME/keys/ditwl_kp_infradmin.pem
AWS inventory access permissions: make the dynamic inventory script, which is part of the downloaded source code, executable:
cd ansible-aws-ec2-terraform-tags
chmod 755 inventory/ec2.py
Since Ansible needs to be configured every time a new shell session is started, for convenience write the configuration instructions in a script file.
Review the configuration from the file ditwl_pro.sh
# Source this file before using Ansible:
# source ditwl_pro.sh

export AWS_PROFILE='ditwl_infradmin'

ANSIBLE_PRIVATE_KEY_FILE=${HOME}/keys/ditwl_kp_infradmin.pem

# AWS Region
export EC2_REGION='us-east-1'

# Set the dynamic inventory script and the SSH private key
export ANSIBLE_INVENTORY=$(pwd)/inventory/ec2.py
export ANSIBLE_PRIVATE_KEY=$ANSIBLE_PRIVATE_KEY_FILE
The value of AWS_PROFILE is the name of the profile that holds the AWS credentials. The credentials are stored outside Ansible, in ~/.aws/credentials, and the same profile is used by Terraform.
The EC2_REGION is the AWS region where the Cloud services are located. Setting the region speeds up Ansible AWS inventory creation.
The ANSIBLE_INVENTORY is a path to a file containing an Inventory of Hosts or a script. We will be using a script to generate the inventory dynamically by querying the AWS API.
The ANSIBLE_PRIVATE_KEY points to a file that will be used for SSH authentication when connecting to the AWS EC2 Linux hosts.
Before running any Ansible commands, source the configuration file ditwl_pro.sh to set the appropriate environment variables.
ansible-aws-ec2-terraform-tags$ source ditwl_pro.sh
ANSIBLE_INVENTORY: /home/jruiz/.../ansible-aws-ec2-terraform-tags/inventory/ec2.py
ANSIBLE_PRIVATE_KEY: /home/jruiz/keys/ditwl_kp_infradmin.pem
In this example, Ansible configures a WordPress server that requires an existing database.
Manually create a MariaDB schema for WordPress and delegate the DNS zone to the Terraform-created DNS servers; these two activities are not automated in the example.
Ansible will run the dynamic inventory script and cache the results when a playbook is applied (the classic ec2.py script also accepts a --refresh-cache argument to force an update).
We can also run the dynamic inventory script to obtain a JSON representation of all the groups and properties of our AWS Infrastructure.
ansible-aws-ec2-terraform-tags$ inventory/ec2.py
{
  "_meta": {
    "hostvars": {
      "ditwl_ec2_pro_pub_wp01_1": {
        "ansible_host": "52.3.235.198",
        "ec2__in_monitoring_element": false,
        "ec2_account_id": "368675470651",
        "ec2_ami_launch_index": "0",
        "ec2_architecture": "x86_64",
        "ec2_block_devices": {
          "sda1": "vol-07c617da3623854c0"
        },
        "ec2_client_token": "",
        "ec2_dns_name": "ec2-52-3-235-198.compute-1.amazonaws.com",
        "ec2_ebs_optimized": false,
        "ec2_eventsSet": "",
        "ec2_group_name": "",
        "ec2_hypervisor": "xen",
        "ec2_id": "i-071566e036e992733",
        "ec2_image_id": "ami-43a15f3e",
        "ec2_instance_profile": "",
        "ec2_instance_type": "t2.micro",
        "ec2_ip_address": "52.3.235.198",
        "ec2_item": "",
        "ec2_kernel": "",
        "ec2_key_name": "ditwl_kp_infradmin",
        "ec2_launch_time": "2018-03-15T14:11:57.000Z",
        "ec2_monitored": false,
        "ec2_monitoring": "",
        "ec2_monitoring_state": "disabled",
        "ec2_persistent": false,
        "ec2_placement": "us-east-1a",
        "ec2_platform": "",
        "ec2_previous_state": "",
        "ec2_previous_state_code": 0,
        "ec2_private_dns_name": "ip-172-17-32-217.ec2.internal",
        "ec2_private_ip_address": "172.17.32.217",
        "ec2_public_dns_name": "ec2-52-3-235-198.compute-1.amazonaws.com",
        "ec2_ramdisk": "",
        "ec2_reason": "",
        "ec2_region": "us-east-1",
        "ec2_requester_id": "",
        "ec2_root_device_name": "/dev/sda1",
        "ec2_root_device_type": "ebs",
        "ec2_security_group_ids": "sg-0ec2a678,sg-1fd2b669",
        "ec2_security_group_names": "ditwl-sg-ec2-pro-pub-01,ditwl-sg-ec2-def",
        "ec2_sourceDestCheck": "true",
        "ec2_spot_instance_request_id": "",
        "ec2_state": "running",
        "ec2_state_code": 16,
        "ec2_state_reason": "",
        "ec2_subnet_id": "subnet-23b4be47",
        "ec2_tag_Name": "ditwl-ec2-pro-pub-wp01-1",
        "ec2_tag_app": "wp",
        "ec2_tag_app_id": "wp-01",
        "ec2_tag_cost_center": "ditwl-permanent",
        "ec2_tag_environment": "pro",
        "ec2_tag_os": "ubuntu",
        "ec2_tag_os_id": "ubuntu-16",
        "ec2_tag_private_name": "ditwl-ec2-pro-pub-wp-01",
        "ec2_tag_public_name": "www",
        "ec2_virtualization_type": "hvm",
        "ec2_vpc_id": "vpc-a970cbd2"
      }
    }
  },
  "ami_43a15f3e": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "ec2": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "i-071566e036e992733": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "key_ditwl_kp_infradmin": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "platform_undefined": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "security_group_ditwl_sg_ec2_def": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "security_group_ditwl_sg_ec2_pro_pub_01": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_Name_ditwl_ec2_pro_pub_wp01_1": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_app_id_wp_01": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_app_wp": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_cost_center_ditwl_permanent": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_environment_pro": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_os_id_ubuntu_16": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_os_ubuntu": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_private_name_ditwl_ec2_pro_pub_wp_01": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "tag_public_name_www": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "type_t2_micro": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "us-east-1": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "us-east-1a": [ "ditwl_ec2_pro_pub_wp01_1" ],
  "vpc_id_vpc_a970cbd2": [ "ditwl_ec2_pro_pub_wp01_1" ]
}
For each EC2 instance, a node is added to hostvars; the node contains properties that will be available when running a playbook.
Examples:
In the Ansible Dynamic Inventory, the EC2 instance “ditwl_ec2_pro_pub_wp01_1” has the following properties:
{ "_meta": { "hostvars": { "ditwl_ec2_pro_pub_wp01_1": { "ansible_host": "52.3.235.198", ... "ec2_id": "i-071566e036e992733", ... "ec2_private_ip_address": "172.17.32.217", ... "ec2_tag_Name": "ditwl-ec2-pro-pub-wp01-1", ... } } },
Those properties are used inside Ansible playbooks; in the following example, ec2_tag_Name is used to set the instance hostname:
- name: Set hostname
  hostname:
    name: "{{ ec2_tag_Name }}"
The value of ec2_tag_Name will be extracted from the corresponding Inventory host entry.
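Any of the other inventory properties can be used the same way. As a minimal, hypothetical sketch (this task is not part of the tutorial's playbooks), a debug task could print the instance ID and private IP taken from the dynamic inventory:

- name: Show EC2 properties from the dynamic inventory
  # ec2_id and ec2_private_ip_address are hostvars generated by ec2.py
  debug:
    msg: "Instance {{ ec2_id }} has private IP {{ ec2_private_ip_address }}"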
Instance tags are added to the hostvars properties, as shown in the previous entry, and a group is also created for each tag.
The Ansible Dynamic Inventory creates groups for each tag that is present in the AWS VPC and adds the names of the hosts inside the group.
This grouping is used to target our Ansible playbooks to selected operating systems, applications, and releases.
The group name starts with the prefix “tag”, followed by the tag name and the tag value.
tag_[tag_name]_[tag_value]: [ "host_a", "host_b", "host_c" ]
Example:
{ "_meta": { ... "tag_environment_pro": [ "ditwl_ec2_pro_pub_wp01_1" ], "tag_os_id_ubuntu_16": [ "ditwl_ec2_pro_pub_wp01_1" ], "tag_os_ubuntu": [ "ditwl_ec2_pro_pub_wp01_1" ], ... }
The group tag_environment_pro corresponds to all EC2 instances that have a tag with the name environment and value pro.
The group tag_os_ubuntu corresponds to all EC2 instances that have a tag with the name os and value ubuntu.
The group tag_os_id_ubuntu_16 corresponds to all EC2 instances that have a tag with the name os_id and value ubuntu_16.
From our Terraform configuration file, we see that the EC2 instance aws_ec2_pro_pub_wp_01 has those tags.
aws_ec2_pro_pub_wp_01 = {
  name             = "ditwl-ec2-pro-pub-wp01"
  ...
  tag_private_name = "ditwl-ec2-pro-pub-wp-01"
  tag_public_name  = "www"
  tag_app          = "wp"
  tag_app_id       = "wp-01"
  tag_os           = "ubuntu"
  tag_os_id        = "ubuntu-16"
  tags_environment = "pro"
  tag_cost_center  = "ditwl-permanent"
  tags_volume      = "ditwl-ec2-pro-pub-wp-01-root"
}
The dynamic inventory converts the hyphen “-” in tag values to an underscore “_”, as can be seen in the os_id tag (the value ubuntu-16 becomes the group tag_os_id_ubuntu_16).
If we had EC2 instances in a pre environment, the inventory would show another group, called tag_environment_pre, with those instances.
Example:
{ "_meta": { ... "tag_environment_pre": [ "ditwl_ec2_pre_pub_abc01_1", "ditwl_ec2_pre_pub_abc01_2", "ditwl_ec2_pre_pub_abc01_3" ], "tag_environment_pro": [ "ditwl_ec2_pro_pub_wp01_1" ] ... }
Ansible playbooks use the hosts filter to select the target hosts; those are the hosts that will have each role applied.
All resources created with Terraform have tags that will be used by Ansible.
Unique tags used for host configuration: a tag/value pair that identifies a single host (see the sketch after this list).
Group tags used for host configuration: multiple hosts share the same tag/value pair.
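For instance, a unique tag group can pin a play to a single host. A minimal sketch, reusing the host and role shown earlier (the play body is illustrative):

- hosts: tag_private_name_ditwl_ec2_pro_pub_wp_01
  become: yes
  roles:
    - { role: linux/host_name, tags: [ 'host_name'] }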
Having a single playbook file for all the environments allows the infrastructure to be configured with a single command (for example, ansible-playbook ditwl_pro.yml) and increases reusability and compliance.
The file ditwl_pro.yml defines the host selectors and the roles that should be applied to each one. We use :& to AND groups in the selectors.
- hosts: tag_os_ubuntu:&tag_environment_pro
  become: yes
  roles:
    - { role: linux/pam_limits, tags: [ 'pam_limits'] }
    - { role: linux/hosts_file, tags: [ 'hosts_file'] }
    - { role: linux/host_name, tags: [ 'host_name'] }
  tags:
    - common

- hosts: tag_os_id_ubuntu_16:&tag_environment_pro
  become: yes
  roles:
    - { role: linux/add_packages, tags: [ 'add_packages'] }
  tags:
    - ubuntu_16

- hosts: tag_app_wp:&tag_environment_pro
  become: yes
  roles:
    - { role: linux/wordpress, tags: [ 'wordpress'] }
  tags:
    - wordpress
Line 1 starts with a hosts selector that selects hosts that are members of the dynamic inventory group tag_os_ubuntu AND also members of the group tag_environment_pro.
The resulting hosts are the ones created by Terraform with the tags os = ubuntu and environment = pro.
Ansible will apply the roles linux/pam_limits, linux/hosts_file, and linux/host_name.
Line 10 selects the hosts that have a specific release (or ID) of Ubuntu and are in the production environment; in our case, the ones tagged by Terraform with os_id = ubuntu-16 and environment = pro.
Those hosts will have the role linux/add_packages applied.
This example shows how easy it is to have a specific set of roles that are only applied to a specific operating system release or ID.
Line 17 selects the hosts that are tagged as application wp and are in the production environment, and applies the role linux/wordpress.
In a more complex scenario, where multiple hosts share the same application tag but need to be differentiated, the app_id tag can be used to narrow the selection further, as shown in the sketch below.
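A hypothetical selector that narrows the wp application hosts down to the wp-01 group while staying in production could look like this (the groups come from the inventory shown earlier):

- hosts: tag_app_wp:&tag_app_id_wp_01:&tag_environment_pro
  become: yes
  roles:
    - { role: linux/wordpress, tags: [ 'wordpress'] }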
Open the URL https://www.demo.itwonderlab.com/ to check that the WordPress site is up and running.
[…] An example of a role defined for re-usability is the add_package role; it was designed to install default packages defined in the all group_vars, and it is also used as a dependency for the WordPress role used in the How to use Ansible and Terraform together tutorial. […]
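A minimal sketch of how such defaults might be declared (the variable name default_packages and the package list are assumptions, not the role's actual interface):

# group_vars/all.yml (hypothetical): defaults applied to every host
default_packages:
  - htop
  - unzip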