Tuesday, May 19, 2020

Ubuntu 20.04 on the Raspberry Pi with wireless, for Science!

For a few years now I have been running BOINC on four Raspberry Pis - kind of like a poor man's compute cluster.

Most of that time was spent chugging away for Seti@Home (in fact, I have been chugging away for Seti@Home for a really long time).

With the shuttering of Seti@Home I needed to discover a new project and stumbled on Science United ( http://scienceunited.org ).
This was great.  I could still support science and it was based on the BOINC compute platform.

This plan all went sideways when I added Science United to my Raspberry Pi that was running Raspbian.
Lots of searching led to a hack using a different repo - because, come to find out, Science United requires a minimum version of the BOINC client, one newer than the Raspbian repo supports by default.

So, the hack I found was fine, but I was not happy with it.  So I set about installing Ubuntu Server on my Raspberry Pis.
Heading over to https://ubuntu.com/download/raspberry-pi I checked the version of my Raspberry Pis and downloaded the correct image.  Uncompressed it using 7-Zip, then burned the image using win32diskimager.  Just like I had done 50 billion times before.

I then attached my monitor, small keyboard and booted.
And quickly discovered there was no way to set up the wifi.
Some searching led me to tons of articles describing using the ubuntu image generator and editing network-config.
(Guess how many posts were copies of the original ubuntu tutorial? - I hate that)

Well, that is fine and dandy.  But it does not totally work, as some other folks tried to point out.  Plus you have this very specific image generator installed that you will use how often?  Once per year.  ( waste )
Because, guess what?  You just installed a server operating system.  They don't have wireless support, on purpose.  You have to add it yourself.  Just like any server OS (Windows too).

Here is how I finally sorted it all out:

  1. Attach your pi to your network using a cable
  2. boot
  3. log on using the login 'ubuntu' and password 'ubuntu' ( if this is the first boot, be patient.  After the logon prompt, you have to wait a bit for the key generation messages to show up after cloud-init finishes.  After this you can log on and will be forced to change your password )
  4. change your password
  5. update the pi:
    1. sudo apt update
    2. sudo apt upgrade
  6. install the wireless tools:
    1. sudo apt install wireless-tools
  7. run iwconfig - notice you have a wlan network interface now, most likely named 'wlan0'
  8. copy a netplan sample wireless config to the netplan folder:
    1. sudo cp /usr/share/doc/netplan/examples/wireless.yaml /etc/netplan/wireless.yaml
  9. edit that sample config file:
    1. sudo nano /etc/netplan/wireless.yaml
  10. set the interface name to 'wlan0'  (the example interface of  'wlp2s0b1' won't get you anywhere)
  11. I am using dhcp on my network, so:
    1. set dhcp4 to yes
    2. remove the addresses, gateway4, and nameservers lines (ctrl + k deletes a line in nano)
    3. set the name of your access point by replacing network_ssid_name with the name of your wireless network
    4. Set the password for the access point
  12. save the file ( ctrl + x in nano )
  13. test your configuration changes with the command 'sudo netplan try'
  14. then I do 'sudo netplan generate' for safety
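After the edits in steps 10 and 11, my finished file looked roughly like this (the SSID and password here are placeholders - substitute your own):

```yaml
# /etc/netplan/wireless.yaml - DHCP wireless config for the Pi's built-in adapter
network:
  version: 2
  renderer: networkd
  wifis:
    wlan0:
      dhcp4: yes
      access-points:
        "my-network":              # replace with your wireless network name
          password: "my-password"  # replace with your access point password
```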

Now, I also want to rename my Raspberry Pi ( aka change the hostname ).
Using 'sudo hostnamectl set-hostname <new-name>' it is done.

Now, restart, detach the network cable and test the wireless settings.

Now, if you want to attach to Science United, here is the rest:

  1. install the BOINC client:
    1. sudo apt-get install boinc-client
  2. install the BOINC management utility for command line (I found that I needed this to properly attach to Science United)
    1. sudo apt-get install boinctui
  3. run the boinctui
    1. boinctui
    2. attach to the localhost ( just hit enter )
    3. F9
    4. Projects
    5. Connect to account manager
    6. Science United
    7. Enter your credentials for Science United

Watch the status to make sure you attached and begin receiving work.
If all is good, exit the boinctui with F9 -> File -> quit

There you go.  Your old Raspberry Pis being useful, supporting science without any wires except power.

Tuesday, May 12, 2020

NGINX Plus with ModSec OWASP by Ansible - part 5

Here is the last in this series of using Ansible with the NGINX Ansible Role.
This one was done as a challenge from one of my security peers.

First, the assumptions:
local file system inventory files, Ansible host deployed to the same VPC as the remote machines, variable files, run the playbooks from the same folder as the inventory and variable files.

No long introduction this time.  If you have been following along the scenarios have started to repeat, but become more useful with more complex configurations.

The playbook

This time the OWASP playbook will be used: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_modsec_OWASP_CRS.yaml
Along with that is the same nginx_lb_plus_modsec_vars.yaml variable file as the previous post.
One difference is that the framework file this time is: https://github.com/brianehlert/ansible-nginx-examples/blob/master/modsec_owasp.conf

If you compare it to the modsec_rules file from the previous post, it lacks the test rule from last time.
Why?  Because I am going to build the rules on the fly within the playbook.

ansible-playbook nginx_lb_plus_modsec_OWASP_CRS.yaml -i loadbalancers

I am going to skip the basics of inventory and variables file reading.
The meat begins in the post_tasks of this playbook.

The playbook:
  1. pulls the CRS from the SpiderLabs GitHub repository
  2. unzips the archive
  3. copies the example to a new file in the configuration directory
  4. selects out the rule names
  5. writes the names of the rules to implement into the framework modsec_rules.conf file
  6. builds the includes
  7. outputs the rule set for review
  8. enables blocking
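A rough sketch of what those post_tasks look like (the CRS version, paths, and task details here are illustrative guesses, not copied from the actual playbook - check the repository for the real thing):

```yaml
post_tasks:
  - name: Pull and unzip the OWASP Core Rule Set from GitHub
    ansible.builtin.unarchive:
      src: https://github.com/SpiderLabs/owasp-modsecurity-crs/archive/v3.2.0.tar.gz
      dest: /etc/nginx/modsec/
      remote_src: yes

  - name: Copy the example CRS setup to a live config file
    ansible.builtin.copy:
      src: /etc/nginx/modsec/owasp-modsecurity-crs-3.2.0/crs-setup.conf.example
      dest: /etc/nginx/modsec/crs-setup.conf
      remote_src: yes

  - name: Build Include lines for each rule file into the framework config
    ansible.builtin.shell: |
      ls /etc/nginx/modsec/owasp-modsecurity-crs-3.2.0/rules/*.conf \
        | sed 's/^/Include /' >> /etc/nginx/modsec/modsec_rules.conf
```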

After all this, the configuration is tested to make sure it will work.  Assuming that passes, NGINX is reloaded to apply the configuration.

Now, the full OWASP rule set is implemented and working.

I am sure that someone can make that into a nice demo.

Wednesday, May 6, 2020

Getting Started with the Ansible Collection for NGINX Controller

Today, I have a new post.  And a video demo.

I am not going to write it here.  You can go view it at the NGINX blog.

To warn you, the subject line says it all.
I have developed a set of Ansible Roles for NGINX Controller and have combined those into a Collection.

Working with a Collection is slightly different than working with Roles, just like using Ansible Tower is a bit different than using AWX (aka free Ansible).

In the post I cover both at a high level, Collections and Tower with NGINX Controller.

I also have sample repositories that I am maintaining:
And for the series that has been publishing so far: https://github.com/brianehlert/ansible-nginx-examples

I plan on breaking down the NGINX Controller examples as I have been with the NGINX data plane examples.

Tuesday, May 5, 2020

NGINX Plus with modsec - part 4

Last blog we moved from NGINX OSS to NGINX Plus for the load balancer.
This time I am going to add the modsec module and configure a very basic test rule (one more post to get to the complex rules).

Just a reminder for folks entering the series mid stream:
The assumptions: local file system inventory files, Ansible host deployed to the same VPC as the remote machines, variable files, run the playbooks from the same folder as the inventory and variable files.

For this article we will use the following playbook:
And this accompanying variables file:

And we are using the Ansible Role for NGINX.

Running the playbook

Running the playbook is no different than the pattern in the previous posts:
ansible-playbook nginx_lb_plus_modsec.yaml -i loadbalancers

Like the NGINX Plus post before this will:
Read in the webservers inventory file and the nginx_lb_plus_modsec_vars.yaml variables file.
The variables file defines the path to the Plus key and cert, tells the role to delete the license and clean up, and enables the NGINX Plus API.
The new variable option is: nginx_modules

In this case the waf module is added, which indicates mod security.

The remainder of the configuration is all the same as the two load balancer blogs prior. 
What does start to get unique to modsec is the post_tasks in the playbook.

Setting the waf module

In the post_tasks section of the playbook I am copying a framework config file for mod security rules.
After the framework file is copied into place the Rule is being enabled.
Then the NGINX config is tested (to make sure nothing went pear-shaped).
Assuming the configuration test passes the nginx process is restarted.

At this time some limited traffic should be blocked.
If you take a look at the modsec_rules.conf file, we are blocking any URL with 'test' in it, as well as logging and returning a 403.
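For reference, a ModSecurity rule that behaves as described looks something like this (the rule id and exact operator are my guess at the shape, not a copy of the repository file):

```
# Turn the rule engine on (blocking, not just detection)
SecRuleEngine On
# Deny, log, and return a 403 for any request whose URI contains 'test'
SecRule REQUEST_URI "@contains test" "id:10001,phase:1,deny,status:403,log"
```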

That is the basics of enabling a module with the Ansible Role on NGINX Plus.
In this case with additional settings of enabling a basic mod security rule.

Tuesday, April 28, 2020

Moving from NGINX OSS to NGINX Plus - part 3

In the previous blog posts I have been working with NGINX OSS (aka open source).
There is a lot you can do with the open source version, and using Ansible to drive it, you can automate most things that you want.

If you are coming in mid-stream, there are some assumptions in this demo.  Go back to the provisioning in AWS blog if you want the full detail.
The assumptions: local file system inventory files, Ansible host deployed to the same VPC as the remote machines, variable files, run the playbooks from the same folder as the inventory and variable files.

Starting here, I am going to focus on NGINX Plus.
It is the paid version of NGINX and includes extra features beyond what the open source version does.
A little bit of search will give you feature comparisons such as: https://www.nginx.com/products/nginx/#compare-versions

The reason that I am going to start talking about NGINX Plus is dynamic modules.

Moving from OSS to Plus

Moving from OSS to Plus can be done a couple different ways.
One way is to simply spin up new machine instances with NGINX Plus installed and point the traffic over and you are done.
In this example I am going to re-use the same loadbalancer machine that I set up in AWS with NGINX OSS installed on it.

I am not concerned with downtime in this demo (it is only minutes anyway), so I am going to uninstall OSS and re-install with the Plus binaries.

ansible-playbook nginx_remove.yaml -i loadbalancers

You can find this playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_remove.yaml
It literally stops and removes NGINX.  No frills, no fancy.  Nothing to see here, move along.

Installing NGINX Plus

NGINX Plus is delivered from NGINX and you need a subscription to access it.

Using the Ansible Role for NGINX, it isn't very different from NGINX OSS.
The playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus.yaml
looks nearly identical to the playbook for installing NGINX OSS.  The key differences are the variables file and the workflow that happens under the hood.

At the top of the variables file: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_vars.yaml
There are additional settings of:
nginx_type: plus - install the Plus version of NGINX
nginx_delete_license: true - delete the Plus repository license from the remote when done (good to do)
nginx_license: this is your license for accessing the NGINX Plus repository, stored in the playbook directory.
nginx_rest_api_*: these relate to enabling the NGINX Plus API, which you probably want.
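Assembled, the top of that variables file follows this shape (values and exact key names below are my recollection of the role's conventions - the linked file is authoritative):

```yaml
nginx_type: plus
nginx_delete_license: true
nginx_license:
  certificate: license/nginx-repo.crt   # path relative to the playbook directory
  key: license/nginx-repo.key
nginx_rest_api_enable: true             # turn on the NGINX Plus API
nginx_rest_api_write: true              # allow write operations via the API
```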

After that, the template variables begin as they were for NGINX OSS.
What happens

Just like with the playbook for NGINX OSS, the nginx_lb_plus_vars.yaml variables file is read in, and the variables webserver1 and webserver2 are replaced with the values from the webservers inventory file.

Once done

You should be all set with a functioning load balancer again.  This time with NGINX Plus instead of NGINX OSS.

The next step: let's add and configure a module.

Pre_ and Post_tasks

This is the first time I have used pre_tasks and post_tasks.
These are useful when your playbook invokes one Role.
The pre_tasks are executed before the Role(s) and the post_tasks are executed after.

While you can use these when multiple roles are listed with include_role, you do have to be careful that the pre and post tasks align with all the Roles being invoked by the playbook.
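The general shape, as a minimal sketch (task bodies here are just placeholders):

```yaml
- hosts: loadbalancers
  pre_tasks:
    - name: Runs before the role
      ansible.builtin.debug:
        msg: "prep work goes here"
  roles:
    - role: nginxinc.nginx
  post_tasks:
    - name: Runs after the role
      ansible.builtin.debug:
        msg: "follow-up configuration goes here"
```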

Tuesday, April 21, 2020

Installing NGINX OSS webserver with Ansible - Part 2

This is the third installment of me walking through a simple demo that I set up using Ansible.

This installment installs the web server machines using NGINX OSS.

Please refer back to the first two installments to get an understanding of the assumptions: local file system inventory files, Ansible host deployed to the same VPC as the remote machines, variable files, run the playbooks from the same folder as the inventory and variable files.

The playbook

In this example I will focus on the playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_web_demo_oss.yaml

As in the previous blog, this uses the NGINX Ansible Role.

This time, I have included the 'extra_vars' - extra variables specific to this playbook - within the playbook itself instead of using an external file.

The reason I did this was to follow a different pattern, and because this is a static playbook for me.  The only substitution that I am doing involves the individual webserver machines that this particular configuration is applied to.

The play uses the default.conf Jinja2 template for an http server from the Ansible Role.
I am placing the configuration in the default location, setting an error page location.
Instructing it to respond to traffic from any IP address, on port 80.
And lastly I am setting the path and file for the page to be demo_index.html

(you will also find demo_index.html in the example repository alongside this playbook).
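The configuration those settings produce is essentially this NGINX server block (a hand-written equivalent for illustration, not the role's literal template output):

```nginx
server {
    listen 0.0.0.0:80;            # respond to traffic from any IP address, on port 80
    error_page 404 /404.html;     # an error page location

    location / {
        root /usr/share/nginx/html;
        index demo_index.html;    # the page file set in the playbook variables
    }
}
```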

Running the playbook:
ansible-playbook nginx_web_demo_oss.yaml -i webservers

The result

Like my previous post, we aren't done yet.  So another error message.
This time the error message that you should receive is no longer 'an error occurred' but rather '403 forbidden'

Why '403 forbidden' you might ask?  Because we configured a file path for the HTML file.
The path we set is: /usr/share/nginx/html/demo_index.html
But there is not a file at that path.  As far as the web server knows, you can't access whatever you are requesting.  So the 403 because it is a configured path instead of a 404 which you would get against a path that does not exist.

Fixing the 403

I thought I would be nice and fix the 403 in this blog post, so here is the solution.

ansible-playbook update_demo_index_html.yaml -i webservers

This is another simple playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/update_demo_index_html.yaml

It simply copies the file demo_index.html to the correct path on each webserver.
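The whole playbook amounts to little more than one copy task, roughly (a minimal sketch, not the repository file verbatim):

```yaml
- hosts: webservers
  become: yes
  tasks:
    - name: Copy the demo page to the path the server configuration expects
      ansible.builtin.copy:
        src: demo_index.html                            # sits next to the playbook
        dest: /usr/share/nginx/html/demo_index.html
```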

Now, if you refresh your browser that you are using to test the loadbalancer, you should get an NGINX Demo page.

Why so granular

One reason for being so granular with the playbooks is to separate the tasks that are being performed.
The benefit is that the playbooks can be reused in other playbooks, like functions.  Another benefit is that they align with how Roles should be created, as reusable tasks.

If you get into using Ansible Tower, you can start linking together playbooks into a workflow - branching off for success and failure conditions.  Even adding pauses for approval workflows.

Tuesday, April 14, 2020

Installing NGINX OSS loadbalancer with Ansible - part 1

In the previous post I provisioned a number of machines for a demo environment into AWS.


Remember, that to make ssh access easy between the Ansible server and the target machines, my Ansible host runs in the same VPC in AWS as the demo environment machines.

If you inspect the inventory files, you will see why I write in the internal DNS names of the instances.

Starting in this post I will be using those machines and deploying out a very simple topology of a load balancer in front of two webservers.
For this entire setup, we will be using NGINX.

If you recall back to the previous blog, the playbook created machines in AWS and wrote some inventory files to the file system where the playbook was run from.

So now you should see 4 additional files without extensions for controller, dbserver, loadbalancers, and webservers.  We will be using the loadbalancers inventory file in this exercise.

In this blog I am focused on this playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_oss.yaml

The playbook uses the NGINX Role from Ansible Galaxy, you can find that here: https://galaxy.ansible.com/nginxinc/nginx

Installing NGINX OSS on the loadbalancer

To run the playbook execute the command:
ansible-playbook nginx_lb_oss.yaml -i loadbalancers

This playbook uses both the loadbalancers and webservers file as it executes, let me explain.

The webservers file is read into memory.

Then at the loadbalancers machines the variable file nginx_lb_vars.yaml is read from the playbook folder and the variables 'webserver1' and 'webserver2' are replaced with the values from the webservers inventory file - so that any traffic being forwarded is to real machines.

Then the Role is invoked with all of the required 'extra_vars' in memory.

The Ansible Role for NGINX installs NGINX OSS (aka open source version) and adds a reverse proxy (aka load balancer) configuration based on my yaml variables file.  This points to the DNS names of the two web server instances.
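The resulting reverse proxy configuration is, in essence, this (a hand-written equivalent; webserver1 and webserver2 stand in for the internal DNS names substituted from the inventory):

```nginx
upstream demo_backend {
    server webserver1;   # replaced with the internal DNS name of web server 1
    server webserver2;   # replaced with the internal DNS name of web server 2
}

server {
    listen 80;
    location / {
        proxy_pass http://demo_backend;   # round-robin across the two webservers
    }
}
```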

Testing the configuration

Now, you probably ran that, and jumped straight to testing traffic by hitting the public endpoint of the loadbalancer machine.  And then you probably thought to yourself; 'what kind of hack is this guy, this doesn't work'

Well, you are right.  You should be receiving the response 'an error occurred'.  This is good, this is right and proper for where we are in the deployment.
If the URL can't be reached, then something else happened.  But if your error is: 'an error occurred' we are in a good place.

And might you guess why you have that error message?
Because neither of the two webservers are responding.  They don't yet have NGINX installed nor configured.
So everything is working as it should be.
It just isn't working, yet.

Tuesday, April 7, 2020

Using Ansible to deploy a demo environment to AWS

I promised a few months ago that I would be back at it, and well....  Things got in the way.  Lots of things.

For a while I am going to focus on one method that I use with Ansible to set up a demo environment in AWS.

The first playbook I am going to talk about can be found here: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_environment_deploy_aws.yaml

The set-up

Before we begin, let me describe the first assumption:
I place the variable file and the playbook in the same folder and I import the variables when I execute the playbook.

The second is how I establish the VPC; I do not use the default VPC.

I use the Cloud Formation Template: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/demo-vpc.json
to establish a Virtual Private Cloud (aka network), 2 public and 2 private subnets, and the proper routing and gateways for communication between the subnets and to the outside world from each.

When I provision my machines for the demo, I will be referencing the same region, the vpc from above, and the subnets.  We will get to that in a bit.

Third; I run my Ansible host in the VPC, attached to one of the public subnets.  I lock down its public interface with a security group and I ssh to it to perform the tasks at hand.

Fourth: I have Ansible all set up with talking to the AWS API.
This was the first blog that really gave me success with getting that set up: https://tomoconnor.eu/blogish/part-3-ansible-and-amazon-web-services/#.XozLZHJ7l3j

The workflow

Having established the VPC, we will need to copy the vpc-id and the subnet-id(s) and paste them into the variable files.
This is the variable file for the machine provisioning: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_aws_vars.yaml

Let me describe that a bit:
region: "the AWS region your VPC is in from above"
vpc_id: "the ID of the VPC - something like vpc-0no9th755is5fake1"
vpc_cidr_ip: "the CIDR of the VPC"
loadbalancer_public_subnet: "the ID of one of the public subnets where loadbalancer machines will be placed - such as: subnet-0b03bad1228you93r"
webserver_private_subnet: "the ID of one of the private subnets for the webservers"
image_id: "the AMI image ID for the region - I use the Ubuntu 18.04 LTS minimal image myself"
key_name: "your ssh access key for AWS to embed in the image"
controller_public_subnet: "the other public subnet for the controller machine"
database_private_subnet: "the other private subnet for the postgreSQL db server"
prepend_name: "a string to prepend to the machine names, such as 'mine'"

Now that the variables are all set up and the VPC is in place, let's build some machines.

You can run the playbook with the following:
ansible-playbook controller_environment_deploy_aws.yaml -e "@controller_aws_vars.yaml"  

If I read that command line back, it would say: run the Ansible playbook controller_environment_deploy_aws.yaml, including the variables from the file controller_aws_vars.yaml.
The '-e' means to include 'extra variables' in the execution of the playbook.  The '@' at the beginning of the variable means to reference a file on the Ansible host.

What all does that build for me

What you will now end up with is multi-fold.  Part of what I will describe is to make dealing with inventory a little easier.

If you read through the playbook you can probably understand that some security groups are created for each machine type.
The AMI that is referenced is used as the base image for all of the machines.  But the machines are created as different sizes.
The network attachments happen against the subnets defined in the vars file.

One key thing that happens is that individual inventory files are written out in the same folder where the playbook is being run from.
You will have an inventory file for each machine type.  These are how Ansible will reference each machine type later on.
We will be referencing '-i loadbalancers' when installing nginx, and '-i webservers' when installing nginx and the web site files. 
The '-i' refers to a specific inventory file.
I did it this way because it gives me the flexibility to quickly stand something up with inventory (and reset) without getting into all of details of Ansible inventory and the many ways to handle it.
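Each of those generated files is a tiny static inventory.  The loadbalancers file, for example, ends up looking something like this (the hostname is invented for illustration; yours will be the instance's internal DNS name):

```ini
[loadbalancers]
ip-10-0-1-23.us-west-2.compute.internal
```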

You want to reset?  Delete the machines in AWS, delete the corresponding inventory files, and start over.