Tuesday, April 28, 2020

Moving from NGINX OSS to NGINX Plus - part 3

In the previous blog posts I have been working with NGINX OSS (aka open source).
There is a lot you can do with the open source version, and using Ansible to drive it, you can automate most things that you want.

If you are coming in mid-stream, this demo makes some assumptions.  Go back to the provisioning-in-AWS blog if you want the full details.
The assumptions: inventory files on the local file system, an Ansible host deployed to the same VPC as the remote machines, variable files alongside the playbooks, and running the playbooks from the same folder as the inventory and variable files.

Starting here, I am going to focus on NGINX Plus.
It is the paid version of NGINX and includes extra features beyond what the open source version offers.
A little bit of searching will turn up feature comparisons such as: https://www.nginx.com/products/nginx/#compare-versions

The reason that I am going to start talking about NGINX Plus is dynamic modules.

Moving from OSS to Plus

Moving from OSS to Plus can be done a couple different ways.
One way is to simply spin up new machine instances with NGINX Plus installed, point the traffic over, and you are done.
In this example I am going to re-use the same loadbalancer machine that I set up in AWS and installed NGINX OSS onto.

I am not concerned with downtime in this demo (it is only a matter of minutes anyway), so I am going to uninstall OSS and re-install with the Plus binaries.

ansible-playbook nginx_remove.yaml -i loadbalancers

You can find this playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_remove.yaml
It literally stops and removes NGINX.  No frills, no fancy.  Nothing to see here, move along.
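
If you are curious what "stops and removes" amounts to, a removal playbook of this sort only needs a couple of tasks.  This is a minimal sketch of the idea, not the linked playbook itself:

---
# Minimal sketch of an NGINX removal play (illustrative; the linked playbook is the real thing)
- hosts: loadbalancers
  become: true
  tasks:
    - name: Stop the NGINX service
      service:
        name: nginx
        state: stopped
      ignore_errors: true    # the service may already be stopped or absent

    - name: Remove the NGINX package
      package:
        name: nginx
        state: absent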

Installing NGINX Plus

NGINX Plus is delivered from NGINX and you need a subscription to access it.

Using the Ansible Role for NGINX, installing Plus isn't very different from installing OSS.
The playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus.yaml
looks nearly identical to the playbook for installing NGINX OSS.  The key differences are the variables file and the workflow that happens under the hood.

At the top of the variables file: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_vars.yaml
there are additional settings (sketched just below the list):
nginx_type: plus - install the Plus version of NGINX
nginx_delete_license: true - delete the Plus repository license from the remote machine (good to do)
nginx_license: your license for accessing the NGINX Plus repository, stored in the playbook directory.
nginx_rest_api_*: these relate to enabling the NGINX Plus API, which you probably want.
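
Put together, the top of the Plus variables file looks roughly like this.  Treat it as a sketch: the exact key layout, particularly under nginx_license and the nginx_rest_api_* settings, comes from the role's documentation and the linked vars file, not from here.

# Sketch of the Plus-specific settings (key layout is approximate; the linked vars file is authoritative)
nginx_type: plus
nginx_delete_license: true
nginx_license:
  certificate: license/nginx-repo.crt    # assumed path, relative to the playbook directory
  key: license/nginx-repo.key            # assumed path, relative to the playbook directory
nginx_rest_api_enable: true              # assumed key name; enables the NGINX Plus API
nginx_rest_api_write: true               # assumed key name; allows write operations through the API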

After that, the template variables begin as they were for NGINX OSS.

What happens

Just like with the playbook for NGINX OSS, the nginx_lb_plus_vars.yaml variable file is read in, and the variables webserver1 and webserver2 are replaced with the values from the webservers inventory file.

Once done

You should be all set with a functioning load balancer again, this time with NGINX Plus instead of NGINX OSS.

The next step: let's add and configure a module.

Pre_ and Post_tasks

This is the first time I have used pre_tasks and post_tasks.
These are useful when your playbook invokes one Role.
The pre_tasks are executed before the Role(s) and the post_tasks are executed after.

While you can use these when multiple roles are listed with include_role, you do have to be careful that the pre and post tasks align with all the Roles being invoked by the playbook.
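
A minimal sketch of the shape (the role name is the Galaxy role used in these posts; the task contents are placeholders):

---
# Sketch: pre_tasks run before any roles in the play, post_tasks run after them
- hosts: loadbalancers
  become: true
  pre_tasks:
    - name: Something that must happen before the role
      debug:
        msg: "runs first"
  roles:
    - role: nginxinc.nginx
  post_tasks:
    - name: Something that must happen after the role
      debug:
        msg: "runs last"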




Tuesday, April 21, 2020

Installing NGINX OSS webserver with Ansible - Part 2

This is the third installment of my walk-through of a simple demo that I set up using Ansible.

This installment sets up the web server machines using NGINX OSS.

Please refer back to the first two installments to get an understanding of the assumptions: local file system inventory files, Ansible host deployed to the same VPC as the remote machines, variable files, run the playbooks from the same folder as the inventory and variable files.

The playbook

In this example I will focus on the playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_web_demo_oss.yaml

As in the previous blog, this uses the nginx Ansible Role.

This time, I have included the 'extra_vars' (extra variables specific to this playbook) within the playbook itself instead of using an external file.

The reason I did this was to follow a different pattern, and because this is a static playbook for me.  The only substitution that I am doing involves the individual webserver machines that this particular configuration is applied to.

The play uses the default.conf Jinja2 template for an HTTP server from the Ansible Role.
I am placing the configuration in the default location and setting an error page location.
I am instructing it to respond to traffic from any IP address, on port 80.
And lastly, I am setting the path and file for the page to demo_index.html.

(you will also find demo_index.html in the example repository alongside this file).
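
To give a feel for it, the inline variables express roughly the following.  The key names below are from my recollection of the role's http template schema around that time and may not match the linked playbook exactly; the playbook itself is the authoritative version.

# Sketch only: approximate shape of the role's http template variables
nginx_http_template_enable: true
nginx_http_template:
  default:
    template_file: http/default.conf.j2       # the role's default.conf Jinja2 template
    conf_file_name: default.conf
    conf_file_location: /etc/nginx/conf.d/    # the default configuration location
    servers:
      server1:
        listen:
          listen_localhost:
            ip: 0.0.0.0                       # respond to traffic from any IP address
            port: 80
        server_name: localhost
        error_page: /usr/share/nginx/html     # the error page location
        web_server:
          locations:
            default:
              location: /
              html_file_location: /usr/share/nginx/html
              html_file_name: demo_index.html # the demo page used in this post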

Running the playbook:
ansible-playbook nginx_web_demo_oss.yaml -i webservers

The result

Like my previous post, we aren't done yet, so there is another error message.
This time the error message that you should receive is no longer 'an error occurred' but rather '403 Forbidden'.

Why '403 Forbidden' you might ask?  Because we configured a file path for the HTML file.
The path we set is: /usr/share/nginx/html/demo_index.html
But there is no file at that path.  As far as the web server knows, you can't access whatever you are requesting.  So you get a 403 because it is a configured path, instead of a 404, which you would get against a path that does not exist.

Fixing the 403

I thought I would be nice and fix the 403 in this blog post, so here is the solution.

ansible-playbook update_demo_index_html.yaml -i webservers

This is another simple playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/update_demo_index_html.yaml

It simply copies the file demo_index.html to the correct path on each webserver.
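
In other words, something along these lines (a sketch; the linked playbook is the real thing):

---
# Sketch: copy the demo page to the path the web server configuration expects
- hosts: webservers
  become: true
  tasks:
    - name: Copy demo_index.html to the web root
      copy:
        src: demo_index.html
        dest: /usr/share/nginx/html/demo_index.html
        mode: '0644'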

Now, if you refresh the browser that you are using to test the loadbalancer, you should get an NGINX Demo page.

Why so granular

One reason for being so granular with the playbooks is to separate the tasks that are being performed.
The benefit is that the playbooks can be reused in other playbooks, like functions.  Another benefit is that they align with how Roles should be created, as reusable tasks.

If you get into using Ansible Tower, you can start linking together playbooks into a workflow - branching off for success and failure conditions.  Even adding pauses for approval workflows.

Tuesday, April 14, 2020

Installing NGINX OSS loadbalancer with Ansible - part 1

In the previous post I provisioned a number of machines for a demo environment into AWS.

Background

Remember, that to make ssh access easy between the Ansible server and the target machines, my Ansible host runs in the same VPC in AWS as the demo environment machines.

If you inspect the inventory files, that explains why I write the internal DNS names of the instances into them.

Starting in this post I will be using those machines and deploying out a very simple topology of a load balancer in front of two webservers.
For this entire setup, we will be using NGINX.

If you recall back to the previous blog, the playbook created machines in AWS and wrote some inventory files to the file system where the playbook was run from.

So now you should see 4 additional files without extensions for controller, dbserver, loadbalancers, and webservers.  We will be using the loadbalancers inventory file in this exercise.

In this blog I am focused on this playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_oss.yaml

The playbook uses the NGINX Role from Ansible Galaxy; you can find that here: https://galaxy.ansible.com/nginxinc/nginx

Installing NGINX OSS on the loadbalancer


To run the playbook, execute the command:
ansible-playbook nginx_lb_oss.yaml -i loadbalancers

This playbook uses both the loadbalancers and webservers files as it executes; let me explain.

The webservers file is read into memory.

Then, for the loadbalancers machines, the variable file nginx_lb_vars.yaml is read from the playbook folder and the variables 'webserver1' and 'webserver2' are replaced with the values from the webservers inventory file - so that traffic is forwarded to real machines.

Then the role is invoked with all of the required 'extra_vars' in memory.
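
Conceptually, the flow inside the playbook looks something like this.  This is a sketch of the pattern, not the playbook's actual tasks; in particular, the real playbook reads the webservers file itself rather than relying on it being loaded as inventory.

---
# Sketch of the pattern: resolve the webserver placeholders, then run the role
- hosts: loadbalancers
  become: true
  pre_tasks:
    - name: Replace webserver1/webserver2 with real hosts
      set_fact:
        webserver1: "{{ groups['webservers'][0] }}"   # assumes a webservers group is available in inventory
        webserver2: "{{ groups['webservers'][1] }}"
  roles:
    - role: nginxinc.nginx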

The Ansible Role for NGINX installs NGINX OSS (aka open source version) and adds a reverse proxy (aka load balancer) configuration based on my yaml variables file.  This points to the DNS names of the two web server instances.

Testing the configuration

Now, you probably ran that, and jumped straight to testing traffic by hitting the public endpoint of the loadbalancer machine.  And then you probably thought to yourself: 'what kind of hack is this guy, this doesn't work'

Well, you are right.  You should be receiving the response 'an error occurred'.  This is good; this is right and proper for where we are in the deployment.
If the URL can't be reached at all, then something else happened.  But if your error is 'an error occurred', we are in a good place.

And might you guess why you have that error message?
Because neither of the two webservers are responding.  They don't yet have NGINX installed nor configured.
So everything is working as it should be.
It just isn't working, yet.


Tuesday, April 7, 2020

Using Ansible to deploy a demo environment to AWS

I promised a few months ago that I would be back at it, and well....  Things got in the way.  Lots of things.

For a while I am going to focus on one method that I use with Ansible to set up a demo environment in AWS.

The first playbook I am going to talk about can be found here: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_environment_deploy_aws.yaml

The set-up


Before we begin, let me describe the first assumption:
I place the variable file and the playbook in the same folder and I import the variables when I execute the playbook.

The second is how I establish the VPC: I do not use the default VPC.

I use the CloudFormation template: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/demo-vpc.json
to establish a Virtual Private Cloud (aka network), 2 public and 2 private subnets, and the proper routing and gateways for communication between the subnets and to the outside world from each.
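
If you prefer to drive that step from Ansible as well, the cloudformation module can create the stack from the template.  A hedged sketch (the stack name is a placeholder, and this is not part of the linked playbook):

---
# Sketch: create the demo VPC from the CloudFormation template (stack name is a placeholder)
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create or update the demo VPC stack
      cloudformation:
        stack_name: demo-vpc
        state: present
        region: "{{ region }}"
        template: demo-vpc.json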

When I provision my machines for the demo, I will be referencing the same region, the vpc from above, and the subnets.  We will get to that in a bit.

Third; I run my Ansible host in the VPC, attached to one of the public subnets.  I lock down its public interface with a security group and I ssh to it to perform the tasks at hand.

Fourth: I have Ansible all set up with talking to the AWS API.
This was the first blog that really gave me success with getting that set up: https://tomoconnor.eu/blogish/part-3-ansible-and-amazon-web-services/#.XozLZHJ7l3j


The workflow


Having established the VPC, we will need to copy the vpc-id and the subnet-id(s) and paste them into the variable files.
This is the variable file for the machine provisioning: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_aws_vars.yaml

Let me describe that a bit:
---
region: "the AWS region your VPC is in from above"
vpc_id: "the ID of the VPC - something like vpc-0no9th755is5fake1"
vpc_cidr_ip: "the CIDR of the VPC, such as 10.240.0.0/16"
loadbalancer_public_subnet: "the ID of one of the public subnets where loadbalancer machines will be placed - such as: subnet-0b03bad1228you93r"
webserver_private_subnet: "the ID of one of the private subnets for the webservers"
image_id: "the AMI image ID for the region - I use the Ubuntu 18.04 LTS minimal image myself"
key_name: "your ssh access key for AWS to embed in the image"
controller_public_subnet: "the other public subnet for the controller machine"
database_private_subnet: "the other private subnet for the postgreSQL db server"
prepend_name: "a string to prepend to the machine names, such as 'mine'"

Now that the variables are all set up, and the VPC is in place, let's build some machines.

You can run the playbook with the following:
ansible-playbook controller_environment_deploy_aws.yaml -e "@controller_aws_vars.yaml"  

If I read that command line back, it would speak like this: run the Ansible playbook controller_environment_deploy_aws.yaml, including the variables from the file controller_aws_vars.yaml.
The '-e' means to include 'extra variables' in the execution of the playbook.  The '@' at the beginning of the variable value means to reference a file on the Ansible host.

What all does that build for me

What you will now end up with is multi-fold.  Part of what I will describe is to make dealing with inventory a little easier.

If you read through the playbook you can probably understand that some security groups are created for each machine type.
The AMI that is referenced is used as the base image for all of the machines.  But the machines are created as different sizes.
The network attachments happen against the subnets defined in the vars file.

One key thing that happens is that individual inventory files are written out in the same folder where the playbook is being run from.
You will have an inventory file for each machine type.  These are how Ansible will reference each machine type later on.
We will be referencing '-i loadbalancers' when installing nginx, and '-i webservers' when installing nginx and the web site files. 
The '-i' refers to a specific inventory file.
I did it this way because it gives me the flexibility to quickly stand something up with inventory (and reset) without getting into all of the details of Ansible inventory and the many ways to handle it.
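
To make that concrete, here is a sketch of how a playbook can write one of those per-type inventory files next to itself.  The register variable and its fields are hypothetical stand-ins for whatever the provisioning tasks return; the linked playbook has its own names and loops.

# Sketch: write a simple inventory file for one machine type next to the playbook
# 'lb_result' is a hypothetical register from the EC2 provisioning task
- name: Write the loadbalancers inventory file
  copy:
    dest: "{{ playbook_dir }}/loadbalancers"
    content: |
      [loadbalancers]
      {{ lb_result.instances[0].private_dns_name }}
  delegate_to: localhost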

Want to reset?  Delete the machines in AWS, delete the corresponding inventory files, and start over.