Tuesday, April 14, 2020

Installing NGINX OSS loadbalancer with Ansible - part 1

In the previous post I provisioned a number of machines in AWS for a demo environment.

Background

Remember that, to keep SSH access simple between the Ansible server and the target machines, my Ansible host runs in the same AWS VPC as the demo environment machines.

If you inspect the inventory files, that is why I use the internal DNS names of the instances.

Starting in this post I will be using those machines to deploy a very simple topology: a load balancer in front of two webservers.
For this entire setup, we will be using NGINX.

If you recall from the previous post, the playbook created machines in AWS and wrote some inventory files to the file system in the folder the playbook was run from.

So now you should see four additional files without extensions: controller, dbserver, loadbalancers, and webservers.  We will be using the loadbalancers inventory file in this exercise.
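For reference, the loadbalancers file will look something like this (the host name here is made up; yours will contain whatever internal DNS name the provisioning playbook recorded):

[loadbalancers]
ip-10-0-1-15.us-west-2.compute.internal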

In this blog I am focused on this playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_oss.yaml

The playbook uses the NGINX role from Ansible Galaxy; you can find it here: https://galaxy.ansible.com/nginxinc/nginx
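If you don't already have that role on your Ansible host, install it from Galaxy first:

ansible-galaxy install nginxinc.nginx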

Installing NGINX OSS on the loadbalancer


To run the playbook, execute the command:
ansible-playbook nginx_lb_oss.yaml -i loadbalancers

This playbook uses both the loadbalancers and webservers files as it executes; let me explain.

First, the webservers inventory file is read into memory.

Then, on the loadbalancers machines, the variable file nginx_lb_vars.yaml is read from the playbook folder, and the variables 'webserver1' and 'webserver2' are replaced with the values from the webservers inventory file, so that any traffic being forwarded goes to real machines.

Finally, the role is invoked with all of the required 'extra_vars' in memory.
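To make that flow concrete, here is a minimal sketch of the shape of such a playbook. This is illustrative only, not the actual playbook (which is in the repository linked above), and it assumes the webservers file simply lists one host name per line:

---
# Illustrative sketch only - see the linked repository for the real playbook.
- hosts: loadbalancers
  become: true
  vars_files:
    - nginx_lb_vars.yaml          # holds the 'webserver1'/'webserver2' placeholders
  pre_tasks:
    - name: Read the webservers inventory file into memory
      set_fact:
        webserver_hosts: "{{ lookup('file', 'webservers').splitlines() }}"

    - name: Map the first two entries to the placeholder variables
      set_fact:
        webserver1: "{{ webserver_hosts[0] }}"
        webserver2: "{{ webserver_hosts[1] }}"
  roles:
    - nginxinc.nginx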

The Ansible role for NGINX installs NGINX OSS (the open source version) and adds a reverse proxy (that is, load balancer) configuration based on my YAML variables file.  This configuration points to the internal DNS names of the two web server instances.
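The rendered configuration on the load balancer ends up roughly equivalent to the snippet below. This is a simplified sketch with made-up host names; the exact file the role writes depends on its template and the values in nginx_lb_vars.yaml:

# Simplified sketch of the proxy configuration the role renders
upstream backend {
    server ip-10-0-2-10.us-west-2.compute.internal;   # webserver1 (made-up name)
    server ip-10-0-2-11.us-west-2.compute.internal;   # webserver2 (made-up name)
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}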

Testing the configuration

Now, you probably ran that and jumped straight to testing traffic by hitting the public endpoint of the loadbalancer machine.  And then you probably thought to yourself: 'What kind of hack is this guy? This doesn't work.'

Well, you are right.  You should be receiving the response 'an error occurred'.  This is good; it is right and proper for where we are in the deployment.
If the URL can't be reached at all, then something else went wrong.  But if your error is 'an error occurred', we are in a good place.

Can you guess why you get that error message?
Because neither of the two webservers is responding; they don't yet have NGINX installed or configured.
So everything is working as it should be.
It just isn't working, yet.
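You can verify this from your workstation with curl; substitute the public DNS name of your load balancer instance:

curl -i http://<loadbalancer-public-dns>/

The -i flag prints the response status along with the body, so you can tell the difference between an error page served by NGINX (good, for now) and a connection failure (something else is wrong).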

