This is the continuation of a Terraform and Ansible demo that creates a VPC in AWS with an EC2 instance connected to a MariaDB database running in RDS, all from a single Terraform plan.
AWS with Terraform: The Essential Guide: Sections
Our new tutorial series delves into cloud infrastructure deployment using Terraform or OpenTofu on AWS. The series covers the fundamentals, guiding you through minimizing resource usage and simplifying the deployment complexities associated with cloud infrastructure. This tutorial series is a work in progress and will have these sections:
The Route 53 service provides DNS with advanced options; see Route 53 in AWS Basic VPC Elements.
Route 53 configuration is done in terraform.tfvars. The configuration shown is basic and does not include MX records for email or records for any other service.
```
#------------------------
# ROUTE53
#------------------------

#------------------------
# PUBLIC - PRO
#------------------------
aws_route53_delegation_set_reference_name = "ditwl-r53-del"

aws_route53_public = {
  name             = "demo.itwonderlab.com"
  comment          = "ditwl - Public DNS"
  tags_environment = "pro"
}
```
aws_route53.tf creates an AWS Route 53 delegation set and the DNS hosted zone.
All domain zones hosted at Route 53 get a random set of authoritative name servers. The registrar of the domain (AWS or another) has to be informed of the authoritative name servers that will be used to resolve addresses for the domain.
Best Practice: Don't change Authoritative Name Servers
Having a fixed set of authoritative name servers that don't change over time (as long as the set is not destroyed) is recommended.
Changing authoritative name servers can be problematic: it has to be planned ahead of time, and to avoid service interruption the new and old name servers have to overlap for some hours or even days. Be aware that some caching DNS servers don't honor the configured TTL.
As long as the delegation set created in the module aws_route53_delegation_set is not destroyed, we can recreate the DNS service from the module aws_route53_public.
```
#------------------------
# Public
#------------------------
module "aws_route53_delegation_set" {
  source         = "./modules/aws/route53/delegation_set"
  reference_name = var.aws_route53_delegation_set_reference_name
}

module "aws_route53_public" {
  source            = "./modules/aws/route53/public_zone"
  name              = var.aws_route53_public["name"]
  delegation_set_id = module.aws_route53_delegation_set.id
  comment           = var.aws_route53_public["comment"]

  #TAGS
  tags_environment = var.aws_route53_public["tags_environment"]
}
```
In this example a single DNS service is created; it will be used for both public and private lookup of services.

In a real production infrastructure, a private zone should be created to resolve names for the internal servers. For example, backends should not be registered in the public DNS, since their clients are inside the VPC, not outside.

EC2 instances receive the DNS server to use as a DHCP option on startup. If a VPC has both private and public zones, AWS will query the DNS in the private zone first.
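As a rough illustration of the private-zone approach, a private hosted zone can be associated with the VPC using the aws_route53_zone resource. This is a hypothetical sketch, not part of the demo: the zone name and the VPC module output are assumptions.

```
# Hypothetical sketch: a private hosted zone for internal servers,
# resolvable only from inside the associated VPC.
resource "aws_route53_zone" "private" {
  name    = "int.demo.itwonderlab.com" # assumed internal domain
  comment = "ditwl - Private DNS"

  vpc {
    vpc_id = module.aws_vpc.id # assumed VPC module output
  }
}
```

Records created in such a zone are only visible to resolvers inside the VPC, which keeps backend names out of the public DNS.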
For the creation and registration of the EC2 instance a module is used; the module also takes care of registering the new machine in the DNS server.
The AWS Route 53 domain zone created before is used to create an A record for all the EC2 instances created in the demo.
Best Practice: Do a Basic Registration of EC2 instances in Route 53
It is recommended to do a "basic" registration inside the EC2 module that creates the virtual machines. This way we make sure that each AWS EC2 instance is registered in Route 53 and that it follows a pattern.
It is possible to do the registration outside the EC2 instance module, but if we create many EC2 instances in different Terraform files and later decide to change the registration standard, then every file will need to be modified. Having a separate registration process is prone to errors.
Example of EC2 creation and registration in file aws_ec2_pro_wp.tf
```
# Create WP instance
module "aws_ec2_pro_wp" {
  source = "./modules/aws/ec2/instance/add"
  name   = var.aws_ec2_pro_wp["name"]
  ...
  register_dns_private    = true
  route53_private_zone_id = module.aws_route53_public.id
  register_dns_public     = true
  route53_public_zone_id  = module.aws_route53_public.id
  ...
}
```
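Inside such a module, the registration itself can be sketched with an aws_route53_record resource that creates an A record for the instance. This is a hypothetical sketch of what the module might contain; the variable names, resource name, and TTL are assumptions, not the module's actual code.

```
# Hypothetical sketch inside the EC2 module: register the instance
# in the public zone when register_dns_public is true.
resource "aws_route53_record" "public" {
  count   = var.register_dns_public ? 1 : 0 # assumed flag variable
  zone_id = var.route53_public_zone_id
  name    = var.name # instance name becomes the DNS name
  type    = "A"
  ttl     = 300
  records = [aws_instance.this.public_ip] # assumed instance resource name
}
```

Keeping this resource inside the module guarantees every instance follows the same naming pattern, which is the point of the best practice above.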
AWS assigns an ID to each AMI that is uploaded to AWS; the assigned AMI ID differs in each region.

The aws_ds_aws_ami.tf file uses a data source to find the AMI ID based on filters for an Ubuntu 16.04 HVM image using EBS SSD. Two filters are shown:

As we want to use the official image from Canonical, the owners property is set to 099720109477, the ID assigned by AWS to Canonical.
```
#--------------------------------------------------------
# Ubuntu AMI
#--------------------------------------------------------
# Use this data source to get the ID of the latest AMI for the selected OS:
# "${data.aws_ami.NAME.id}"
# Ubuntu 16.04 HVM using EBS SSD,
# registered AMI for use in other resources.

#----------------------
# Ubuntu AMI 16.04 LTS
#----------------------
data "aws_ami" "ubuntu1604" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  # filter {
  #   name   = "virtualization-type"
  #   values = ["hvm"]
  # }

  owners = ["099720109477"] # Canonical
}
```
The AMI ID can be used when creating an EC2 instance to pick the latest available AMI by requesting the ID from the data source: "${data.aws_ami.ubuntu1604.id}".
Since the AMI ID changes as Canonical uploads new releases of the "Ubuntu 16.04 HVM image using EBS SSD", the module accepts a list of properties in the ignore_changes variable; changes to the listed properties will not trigger an instance destroy/create. In our case, a change in the AMI will not recreate the instance.
```
# Create WP instance
module "aws_ec2_pro_wp" {
  source        = "./modules/aws/ec2/instance/add"
  name          = var.aws_ec2_pro_wp["name"]
  ami           = data.aws_ami.ubuntu1604.id
  instance_type = var.aws_ec2_pro_wp["instance_type"]
  ...
  ignore_changes = ["ami"]
}
```
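Inside the module, the ignore_changes behavior maps to a lifecycle block on the aws_instance resource. Note that Terraform requires ignore_changes to be a static list (it cannot reference a variable at apply time), so modules typically hard-code the ignored attributes. The sketch below is an assumption about the module's internals, not its actual code.

```
# Hypothetical sketch inside the EC2 module: ignore AMI changes
# so a new Canonical release does not destroy/recreate the instance.
resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type

  lifecycle {
    # Static list required by Terraform; "ami" is hard-coded here.
    ignore_changes = [ami]
  }
}
```

To actually move an instance to a newer AMI, the instance has to be tainted or replaced explicitly, which makes the upgrade a deliberate operation rather than a side effect of a plan.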
Continue the Terraform and Ansible demo, see:
IT Wonder Lab tutorials are based on the diverse experience of Javier Ruiz, who founded and bootstrapped a SaaS company in the energy sector. His company, later acquired by a NASDAQ traded company, managed over €2 billion per year of electricity for prominent energy producers across Europe and America. Javier has over 25 years of experience in building and managing IT companies, developing cloud infrastructure, leading cross-functional teams, and transitioning his own company from on-premises, consulting, and custom software development to a successful SaaS model that scaled globally.
Are you looking for cloud automation best practices tailored to your company?