Tuesday, April 7, 2020

Using Ansible to deploy a demo environment to AWS

I promised a few months ago that I would be back at it, and well....  Things got in the way.  Lots of things.

For a while I am going to focus on one method that I use with Ansible to set up a demo environment in AWS.

The first playbook I am going to talk about can be found here: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_environment_deploy_aws.yaml

The set-up


Before we begin, let me describe the first assumption:
I place the variable file and the playbook in the same folder and I import the variables when I execute the playbook.

The second is how I establish the VPC: I do not use the default VPC.

I use the CloudFormation template: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/demo-vpc.json
to establish a Virtual Private Cloud (aka network), 2 public and 2 private subnets, and the proper routing and gateways for communication between the subnets and to the outside world from each.
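
If you would rather have Ansible launch that CloudFormation stack for you, a small play along these lines does it.  This is just a minimal sketch and not something from the repo; the stack name and region below are placeholders:

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch the demo VPC stack from the template
      cloudformation:
        stack_name: demo-vpc                # placeholder name, call it whatever you like
        state: present
        region: us-west-2                   # whatever region you are working in
        template: demo-vpc.json             # local path to the template from the repo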

When I provision my machines for the demo, I will be referencing the same region, the vpc from above, and the subnets.  We will get to that in a bit.

Third: I run my Ansible host in the VPC, attached to one of the public subnets.  I lock down its public interface with a security group and I SSH to it to perform the tasks at hand.
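
That lock-down can be scripted as well.  A security group that only allows SSH from your own address looks roughly like this; the group name, region, and addresses are placeholders of mine, not something the repo provides:

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Allow SSH to the Ansible host from my workstation only
      ec2_group:
        name: ansible-host-ssh              # placeholder name
        description: SSH access to the Ansible control host
        vpc_id: vpc-0no9th755is5fake1       # your VPC ID
        region: us-west-2                   # your region
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 203.0.113.10/32        # your workstation's public IP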

Fourth: I have Ansible all set up to talk to the AWS API.
This was the first blog that really gave me success with getting that set up: https://tomoconnor.eu/blogish/part-3-ansible-and-amazon-web-services/#.XozLZHJ7l3j


The workflow


Having established the VPC, we will need to copy the vpc-id and the subnet-id(s) and paste them into the variable files.
This is the variable file for the machine provisioning: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_aws_vars.yaml

Let me describe that a bit:
---
region: "the AWS region your VPC is in from above"
vpc_id: "the ID of the VPC - something like vpc-0no9th755is5fake1"
vpc_cidr_ip: "the CIDR of the VPC, such as: 10.240.0.0/16"
loadbalancer_public_subnet: "the ID of one of the public subnets where loadbalancer machines will be placed - such as: subnet-0b03bad1228you93r"
webserver_private_subnet: "the ID of one of the private subnets for the webservers"
image_id: "the AMI image ID for the region - I use the Ubuntu 18.04 LTS minimal image myself"
key_name: "your ssh access key for AWS to embed in the image"
controller_public_subnet: "the other public subnet for the controller machine"
database_private_subnet: "the other private subnet for the PostgreSQL db server"
prepend_name: "a string to prepend to the machine names, such as 'mine'"

Now that the variables are all set up and the VPC is in place, let's build some machines.

You can run the playbook with the following:
ansible-playbook controller_environment_deploy_aws.yaml -e "@controller_aws_vars.yaml"  

If I read that command line back, it would say: run the Ansible playbook controller_environment_deploy_aws.yaml, including the variables from the file controller_aws_vars.yaml.
The '-e' means to pass 'extra variables' to the playbook execution.  The '@' at the beginning of the value means to read the variables from a file on the Ansible host.

What all does that build for me?

What you will now end up with is several things.  Part of what I will describe is there to make dealing with inventory a little easier.

If you read through the playbook you will see that security groups are created for each machine type.
The AMI that is referenced is used as the base image for all of the machines, but the machines are created in different sizes (instance types).
The network attachments happen against the subnets defined in the vars file.
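
To give a feel for what that looks like, here is roughly the shape of one of the machine-creation tasks.  This is a paraphrase using the ec2_instance module, not a copy from the playbook; the machine name, instance type, security group, and registered variable are mine:

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch a web server instance in the private subnet
      ec2_instance:
        name: "{{ prepend_name }}-webserver-1"      # my naming, the playbook has its own
        image_id: "{{ image_id }}"
        instance_type: t3.small                     # the sizes differ per machine type
        key_name: "{{ key_name }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ webserver_private_subnet }}"
        security_group: webservers-sg               # placeholder for the group created earlier
        network:
          assign_public_ip: false                   # private subnet, no public address
        wait: true
      register: webserver_result

The variables in the curly braces are the ones from the vars file we passed in with '-e'.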

One key thing that happens is that individual inventory files are written out in the same folder where the playbook is being run from.
You will have an inventory file for each machine type.  These are how Ansible will reference each machine type later on.
We will be referencing '-i loadbalancers' when installing nginx, and '-i webservers' when installing nginx and the web site files. 
The '-i' refers to a specific inventory file.
I did it this way because it gives me the flexibility to quickly stand something up with inventory (and reset it) without getting into all of the details of Ansible inventory and the many ways to handle it.
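
The inventory-writing step itself is nothing fancy.  Continuing the sketch above, in the same play, a task along these lines would do it; 'webserver_result' is assumed to be the variable registered from the instance-creation task, and the real playbook may name things differently:

    - name: Write a flat inventory file for the web servers
      copy:
        dest: "./webservers"
        content: |
          [webservers]
          {% for vm in webserver_result.instances %}
          {{ vm.private_ip_address }}
          {% endfor %}

After that, 'ansible-playbook -i webservers <some playbook>' runs against just those machines.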

Want to reset?  Delete the machines in AWS, delete the corresponding inventory files, and start over.
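
That reset can be scripted too, if you prefer.  The sketch below assumes the machine names carry your prepend_name prefix and that the inventory files are the ones described above; adjust for what the playbook actually wrote out:

- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Terminate the demo instances
      ec2_instance:
        state: absent
        region: "{{ region }}"
        filters:
          "tag:Name": "{{ prepend_name }}-*"   # assumes the Name tags start with prepend_name
    - name: Remove the local inventory files
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - loadbalancers
        - webservers
        # plus whichever other inventory files the playbook wrote out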
