
Tuesday, May 19, 2020

Ubuntu 20 on RaspberryPi with wireless for Science!

For a few years now I have been running BOINC on four Raspberry Pis - kind of like a poor man's compute cluster.

Most of that time was spent chugging away for Seti@Home (in fact I have been chugging away for Seti@Home for a really long time).

With the shuttering of Seti@Home I needed to discover a new project and stumbled on Science United ( http://scienceunited.org ).
This was great.  I could still support science and it was based on the BOINC compute platform.

This plan went sideways when I added Science United to my Raspberry Pi that was running Raspbian.
Lots of searching led to a hack using a different repo, because, come to find out, Science United requires a minimum version of the BOINC client that is newer than the version the Raspbian repo provides by default.

So, the hack I found worked, but I was not happy.  So I went about setting up Ubuntu Server on my Raspberry Pis.
Heading over to https://ubuntu.com/download/raspberry-pi I checked the version of my Raspberry Pis and downloaded the correct image, uncompressed it using 7-zip, and then burned the image using win32diskimager.  Just like I had done 50 billion times before.

I then attached my monitor, small keyboard and booted.
And quickly discovered there was no way to set up the Wi-Fi.
Some searching led me to tons of articles describing using the Ubuntu image generator and editing network-config.
(Guess how many of those posts were copies of the original Ubuntu tutorial? - I hate that)

Well, that is fine and dandy.  But it does not totally work, as some other folks tried to point out.  Plus you have this very specific image generator installed that you will use how often?  Once per year.  ( waste )
Because, guess what?  You just installed a server operating system.  They don't have wireless, on purpose.  You have to add wireless support yourself.  Just like any server OS (Windows too).

Here is how I finally sorted it all out:


  1. Attach your pi to your network using a cable
  2. boot
  3. logon using the login 'ubuntu' and password 'ubuntu' ( if you have not done this before and this is the first boot, be patient.  After the logon screen you have to wait a bit for the key generation messages to show up once cloud-init finishes.  After this you can logon and will be forced to change your password )
  4. change your password
  5. update the pi:
    1. sudo apt update
    2. sudo apt upgrade
  6. install the wireless tools:
    1. sudo apt install wireless-tools
  7. run iwconfig - notice you have a wlan network interface now, most likely named 'wlan0'
  8. copy a netplan sample wireless config to the netplan folder:
    1. sudo cp /usr/share/doc/netplan/examples/wireless.yaml /etc/netplan/wireless.yaml
  9. edit that sample config file:
    1. sudo nano /etc/netplan/wireless.yaml
  10. set the interface name to 'wlan0'  (the example interface name 'wlp2s0b1' won't get you anywhere)
  11. I am using dhcp on my network, so:
    1. set dhcp4 to yes
    2. remove the addresses, gateway4, and nameservers lines, including the addresses line under nameservers (ctrl + k deletes a line in nano)
    3. set the name of your access point by replacing network_ssid_name with the name of your wireless network
    4. set the password for the access point (a sample of the finished file follows this list)
  12. save the file ( ctrl + x in nano )
  13. test your configuration changes with the command 'sudo netplan try'
  14. then I do 'sudo netplan generate' for safety
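
For reference, here is roughly what my finished /etc/netplan/wireless.yaml ended up looking like for a DHCP setup (the SSID and password here are placeholders, substitute your own):

network:
  version: 2
  renderer: networkd
  wifis:
    wlan0:
      dhcp4: yes
      access-points:
        "network_ssid_name":
          password: "your_wifi_password"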

Now, I also want to rename my Raspberry Pi ( aka change the hostname ).
Using 'sudo hostnamectl set-hostname <new-hostname>' it is done.

Now, restart, detach the network cable and test the wireless settings.

Now, if you want to attach to Science United, here is the rest:


  1. install the BOINC client:
    1. sudo apt-get install boinc-client
  2. install the BOINC management utility for command line (I found that I needed this to properly attach to Science United)
    1. sudo apt-get install boinctui
  3. run the boinctui
    1. boinctui
    2. attach to the localhost ( just hit enter )
    3. F9
    4. Projects
    5. Connect to account manager
    6. Science United
    7. Enter your credentials for Science United

Watch the status to make sure you attached and begin receiving work.
If all is good, exit the boinctui with F9 -> File -> quit
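
If you prefer checking on the client without the boinctui interface, the boinc-client package also includes the boinccmd command line tool.  A couple of quick checks (you may need to run these from /var/lib/boinc-client, or with sudo, so the tool can read the RPC password file):

boinccmd --get-project-status    # the projects the account manager attached you to
boinccmd --get-tasks             # the work units currently on the box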

There you go.  Your old Raspberry Pis being useful, supporting science without any wires except power.


Tuesday, June 2, 2015

Linux VMs not getting IP with Hyper-V wireless external switches

For the past two days I have been building Ubuntu VMs on my laptop which runs Hyper-V.

I would install an Ubuntu VM, then try to update and discover that the VM has an IPv6 address but no IPv4 address.
So, off into the land of tweaking Ubuntu.  No go.

Next, my kids report router problems.  So I assume there is a correlation and I screw around with the router.  No change.

I delete and re-create virtual switches.  No Change.

After a bit of frustration and some calming attention to detail, I realize that the IPv6 address that my VM is getting is actually self-generated; it was not getting it from my ISP (as I originally thought, since we do IPv6).

One pattern is that a virtual switch on the wired NIC always works with the VMs and the wireless one doesn't.
The other pattern is that Windows VMs are just fine.  It is only Linux.

Now.  Since this is an external virtual switch that includes a wireless NIC, a Network Bridge device is added.
 
I decided to poke around a bit.  I checked the properties of the Wi-Fi adapter (as it is common for power management to mess things up).  I discover that I cannot edit the driver properties of the Wi-Fi adapter due to the Network Bridge.

 
If I open the properties of the Network Bridge, I can then disable Power Management on the NIC. 
Come to find out that was a waste of my time.  But hey, I had to try it.
 
But, wait a minute.  The bridge is dependent on the NIC.  ...  Network bindings pops into my head.
 
It used to be really easy to get into the bindings and totally mess things up.  Needless to say, it is not so intuitive any longer.
 
At the Network Connections window press the ALT key; this reveals the file menu.  Select Advanced and then Advanced Settings.
 
What I notice is that the binding order is: Network Bridge, Wi-Fi, Ethernet

My thinking is: if the Network Bridge is dependent on Wi-Fi, shouldn't it be after Wi-Fi?
(if you cluster or have clustered or have been around Windows as a server admin for a while you have probably messed with this before)
 
So I decided to give it a shot and move the Network bridge after Wi-Fi.
 
I then reboot for the change to take effect.
 
I then attach a VM to the virtual switch on my wireless NIC, cross my fingers, and power it on.
The VM boots right up, no hang at networking.  I log on, type ifconfig, and voila, the VM has a proper network configuration.  I run 'sudo apt-get update' and all is glorious and good.
 
Just for fun, I built a Generation 1 VM and installed the pfSense router into it. 
That failed the auto-configure test, but after a reboot it came up just perfect (and it didn't prior).  And the latest version has the Integration Components built in and can use synthetic virtual NICs instead of Legacy - and even reports the IP address to the networking tab in Hyper-V Manager (I love that).
 
So much pain and consternation, for what now feels like a binding order bug.
I will update this if anything changes, but in the meantime: It works!
 
Now, why might Windows VMs work just fine? 
Because they keep trying to get an IP; they don't just try once at boot and then fail.  So that network stack can come alive at any time, in any order (and generally does, late in the boot sequence).

Monday, April 20, 2015

Ports and Docker containers

This is the feature of containers that the system administrator in me gets all excited about.

Ports.

This is more than security through obscurity, this is actually about isolated networking.
Docker has an internal network stack all to itself.

You can see it if you type ifconfig.  You see a Docker interface (not unlike a Hyper-V Internal network vNIC).
If you have a container running you can use sudo docker ps and one of the columns is 'ports'.

Let's begin with the intermediate step of exposing ports.

For my examples I have been using my meshblu image, opening a bash shell, and even mapping local file paths into it.  I am going to leave those out for the moment.

Now that I have a working model, I want to test it.  So I want to expose the listener port of the service running in my container.

sudo docker run -i -t -p 3000:3000 --name meshblu meshblu_appliance

The -p option allows the mapping of ports.
If I only defined -p 3000 I would be allowing port 3000 of the container to be mapped to some random port of the container host.  Instead I defined 3000:3000 - so as not to confuse myself.
What this does is map port 3000 of my container to port 3000 of my container host.

If I open any application on my container host or from a remote machine I can now access the application in my container on port 3000.  Just like opening a single port in a firewall.
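
A quick way to prove that from the container host (assuming the service in my meshblu image is listening on 3000, as described above):

curl http://localhost:3000

And 'sudo docker ps' should now show something like 0.0.0.0:3000->3000/tcp in the ports column.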

Now.  My container has a number of supporting services, such as Redis and MongoDB, and other ports that the application will be listening on.  I would like to expose these as well.  They are there in the container, running and responding, and entirely hidden at the moment.
This is one that I did not consider intuitive.

sudo docker run -i -t -p 3000:3000 -p 1883:1883 -p 5683:5683 --name meshblu meshblu_appliance

Now I have mapped two additional ports.  Initially I tried using a single long string or an array (it just made sense to me) but you need to use an individual -p option per port.
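
Docker will also report the mappings for a running container, which is a quick sanity check:

sudo docker port meshblu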

Just some other nifty stuff.  And I have not gotten beyond a single container yet.

Tuesday, April 14, 2015

Local file systems and Docker containers

Now I am going to move into the realm of areas that I consider fun.
What sort of trickery can I use containers for so that I can avoid installing 'stuff' on my development workstation?

Here is my scenario:  I am pulling some source from GitHub.  I might modify some scripts and need to test those.  I want to quickly test my changes within an installed and running instance of the application.  I don't want to 'install' the application on my workstation.

So, let's work through my real life example.  No modifications, just pulling the source and getting that into a container, without any need to build a custom container or image.

Docker allows you to expose paths of the container host and map those into specific paths of the container.  And this is where you can do some nifty things.

Say that you have some Node.js application.  And you want to run multiple instances of it, or you want to run an instance and have the logs write back to your development station. 
(This could be any number of combinations).

Lets run through an example:

Previously I ended with the example:
sudo docker run -i -t --name meshblu node:latest /bin/bash

If I extend that with this scenario we will see some new options.

sudo docker run -i -t -v "/home/brianeh/GitHub":"/home":ro --name meshblu node:latest /bin/bash

What I have added is the "-v" option.  This defines a path mapping.
"/home/brianeh/GitHub" is the GitHub folder of my user home path.  After the colon is the path in the container that this is mapped to.  "ro" means Read Only.  Or I could define that as "rw" - Read Write.

The neat thing is that once I run my container and enter its console I can type ls -l /home and I will see all of the files I have downloaded to the GitHub folder on my development machine.

This gives me a runspace within the container that is separate from my workstation where I can install applications but run the latest version of my code straight out of the development path.
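
As a rough sketch of what that looks like in practice (the folder and entry point names here are hypothetical, use whatever your project actually has):

cd /home/meshblu    # a project folder under the mapped /home path
npm install         # this needs the volume mounted rw, or install into a path local to the container
node server.js      # run the code; edits made on the workstation show up immediately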

(One reason why developers love containers)

Monday, March 9, 2015

Running a container from a Docker image

In the previous post I pulled three Docker images. 

If you are familiar with machine virtualization think of these as templates.  An image is used as the base of a container.

So, when a container is 'run' it is essentially a differencing file system that is linked to an image file system.  This allows something to happen within the container and for that to be written back to disk in a unique place.

This is really no different than the concept of using a single virtual disk and creating multiple virtual machines from it using differencing disks.  The differencing disk contains the unique character of each machine.  In this case the container contains any uniqueness.

Let me give a bit of background here.  If you were not aware, I am employed as a software tester.  I also do some development.  But, I like to keep my workstations clean.  And containers give me that in a tidy way.
Lately, I am working with a NodeJS application.  And I don't want to have to install Node and all of its dependencies on my workstation.  This is how I see many developers get themselves into the classic 'works on my machine' trap. 
At some point in time they installed some strange DLL or package ( or security settings or anything ) and then later took that as a dependency and never realized it.  Then they distribute the package, I put it on a different system, and wham, no workie.
So I am actually going to do a few tricks that containers easily enable.  And I will also use a real example that I am working with and you could as well.

Enough of that.  Let's take a moment to look at the syntax of the container run command
( the entire run reference is here:  http://docs.docker.com/reference/run/ )

I am going to begin with the Node image that I pulled down.  This image is based on Ubuntu and already has NodeJS and the NPM package manager installed.  This saves me a bit of time installing Node and its dependent libraries.

sudo docker run -i -t --name meshblu node:latest /bin/bash
Let's break this apart:
  1. sudo - this is Ubuntu and Docker runs as root (or local system) so in order to interact with it, you must elevate yourself.  Windows admins; think RunAs.
  2. docker - call the docker service then pass it a command
  3. run - self explanatory, but I want to run a container
  4. -i  - interactive.  This will put the container console into my current console.
  5. -t  - virtual tty.  This gives you a tty console session for STDIN.  Important if you want to interact with the container in a console way.
  6. --name  - this will be the name of the container, and simply helps you to keep them straight.  Without it a random (and silly) name is given.  And you will have to keep them straight.  The key here is that this is just a quick way to locate the container ID, which is the really important thing.
  7. node:latest - this is the name of the image I pulled / want to use.  It will check if you have this local, if not it will look to the configured hub and try to find it and pull it.
  8. /bin/bash - this is the command plus any arguments to run.  Everything after the image name will be executed within the container.  So you can have any command plus parameters at the end.  /bin/bash is simply a bash shell - pretty much where you are at any Linux console.
Go ahead, execute the command.
Notice that your prompt changed.  Because now your command window is connected to the container process and you are essentially "within" the container.  For me this is:  root@06b874873e86:/#
I am in the container as root, and the container ID just happens to be '06b874873e86'.

Now.  Exiting.
If you type 'exit' the container stops running and you drop back to your command prompt.
If you type 'ctrl+p' then 'ctrl+q' you drop back to your command prompt, but the container stays running.

To see any running containers type (at the Docker host):  sudo docker ps
To see all containers (running and stopped / paused): sudo docker ps -a
If you want back into a running container use:  sudo docker attach 06b87
(notice that I did not type the entire container id,  just enough to uniquely identify it.  A nifty usability feature)
Lastly, to start a container that has been stopped:  sudo docker start -i 06b87
(the -i connects you to it interactively)
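
Pulling that together, a typical session looks something like this (using the short container id from above):

sudo docker run -i -t --name meshblu node:latest /bin/bash
(ctrl+p then ctrl+q to detach and leave it running)
sudo docker ps
sudo docker attach 06b87
(exit to stop the container)
sudo docker start -i 06b87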

Tuesday, March 3, 2015

Pulling Docker images

We have some Docker basics, and ran a bash command inside of a container.  And you possibly poked around a bit more.

After downloading that Ubuntu image on demand, you may have noticed that it looks like you have multiple images locally.  Whereas you really have a single image, shown by the IMAGE ID, but with multiple tags.

If you wonder where these images magically come from, it is a place called the Docker Hub ( hub.docker.com ).  Go to the hub and look around.  Notice that there are 'official' images and community images.  I, personally, stick with the official images as I know who is behind that image creation - Canonical is the source of the 'official' Ubuntu image.
Accountability, I like that.

Now I want a few images.  I don't want to run them straight off; I want to download some official images and have them locally and then do some other things with them.  Also, this way I have them for offline use.

If you look at Ubuntu in the Docker Library ( https://registry.hub.docker.com/_/ubuntu/ ) you will notice the supported tags section.  In the previous post I referenced ubuntu:latest - looking at the tags you can see that this translates to trusty, and trusty ( I just happen to know ) is 14.04 LTS.

I could also pull Ubuntu 14.10 by defining ubuntu:utopic or get really experimental and use ubuntu:vivid

This is handy for many developers as they can define a version dependency, no different than a specific version of a DLL or a module.  Tests can stabilize on a specific OS release, and so on.
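
So pinning to a release is just a matter of the tag.  For example:

sudo docker pull ubuntu:14.04
sudo docker pull ubuntu:utopic

The first is the same image that trusty / latest pointed at when I wrote this, the second is 14.10.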

So, let's pull the Mongo, Redis, and Node images, since I need a base MongoDB server, a Redis server, and a place to run my NodeJS application.  This way I can work with these offline from the Docker Hub.

First, node:  sudo docker pull node:latest
Notice that multiple images were downloaded.  At the time I wrote this there were 11.
All of these together will form the resulting image.  Kind of like using differencing disks and making changes and linking them together - one builds upon the previous.

After the download is complete, take a look:  sudo docker images
And you see one image id.

If you want to know what is happening under the hood in Docker itself, I found an excellent explanation to save me a bunch of typing: http://blog.thoward37.me/articles/where-are-docker-images-stored/
Now, the file locations are relative, but no longer exact due to updates to Docker.

But, as you can see from that post, this is Linux, so everything is simply right there on the file system of the Ubuntu Docker host.  Like a folder of files.  Not contained within some virtual disk ( which could be secured with permissions or BitLocker ). 
This is why we consider the host that runs Docker to be a physical security boundary and the running containers more of a process / network boundary.

Virtual Machines in themselves are considered physical security boundaries.  And the hypervisor system is designed to support and enforce that.

I will get deeper into that in a post or two just to show what you can do with this.  Basically, play a few virtualization tricks.

I had mentioned also pulling MongoDB and Redis; so let's go ahead and do that:
sudo docker pull redis:latest
sudo docker pull mongo:latest

At this point in time we should have pulled all of the images.  And next time we will do something more interesting.

Thursday, February 26, 2015

Doing something simple with Docker

A couple posts back I walked you through setting up an Ubuntu VM, and installing the latest version of Docker.  Then, I left you hanging.

Docker is interesting.  These container things are a cross between a VM and a user process.  There is still a base OS there (of some type) to bootstrap the application.  What I thought was interesting when I first poked Docker is that each container is a network isolation zone.

Docker has a great tutorial: https://www.docker.com/tryit/

And that was fine.  I did something.  But I really didn't understand it until I tried to really use it.

What does it take to get an application into an image and run it?  And this Docker Hub that is chock full of images, and Dockerfile - what is that?

Let's begin with an easy example as we get the language down.

I want to run a Docker container.  I want the OS in this Docker container to be Ubuntu (yes, Ubuntu within Ubuntu).

Returning to my Ubuntu VM from before, I log on as my user and try a couple Docker commands:
sudo docker images  - this lists the Docker images that have been built / downloaded to this machine and these images are used to run containers.

Notice that language - a container is a running instance of an image.  The image is analogous to a virtual disk with something in it.  The image consumes space on the disk of my Ubuntu VM.

sudo docker ps - If you have been around Linux before you have run across ps - processes.  The ps command lists the containers, and containers being processes only exist when they run.

Enough of that, let's get confusing and run an instance of Ubuntu on Ubuntu in the same command window where I ran my Docker command (the key here, watch the bouncing command prompt).

sudo docker run -i -t ubuntu:latest /bin/bash

Like most commands, let's read this from right to left (not left to right).
Run a BASH shell, in the image 'ubuntu:latest', allocate a tty session, keep STDIN open (send input).
What this accomplishes is: the image is checked for locally; if it is not there it is pulled from the hub.  Then the tty is opened in the console session (where I ran the command) and bash is run.
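
If the single letter options feel cryptic, the long forms spell out the same thing:

sudo docker run --interactive --tty ubuntu:latest /bin/bash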

Notice when you do this that your prompt changed at your console.  That console window is now the container process.  What you do now is inside the container process and image.

If you really want to realize that you are somewhere else, type ifconfig at the prompt.  By default you should get a class B private address in the 172 range.  There will be more on this later, but right now that container can get out, but there are no incoming ports open to it.

When you are ready to get out of the image use exit.
This actually stops the container in this case, since it closes the tty.

Thursday, February 12, 2015

Docker on Ubuntu on Hyper-V 2012 R2

I recently read through an MSDN article that described running Docker in a VM on Hyper-V.

Frankly, I was less than impressed at the complexity of the solution.  Especially since the concept here is not a huge leap.

The basic steps are:
  1. Build a VM on Hyper-V
  2. Install Docker into that VM
  3. Run containers in that VM
This achieves a couple things.
  • Your Docker containers are isolated within a VM. 
This is actually an important thing.  Docker has its own networking stack, but it also allows exposing the underlying storage of the VM to the containers, to support things like databases and configurations, or even updating source easily. 
The model here is one VM per tenant.  Thus forming that boundary and still getting the flexibility of both containers and VMs.
  • You can run the OS of your choice.
In my experimentation I have been using Ubuntu.  Mainly because it has good support, and because they stay right up to date with the kernel.  This gives me the latest Hyper-V support within that VM.

So, you want to set up Docker in a VM.  There are a few steps as I am outlining this in gory detail.  Here it goes:

  1. Install Ubuntu in the VM (14.04 LTS Server) or 14.10
  2. Add OpenSSH Server
  3. Determine IP
  4. Connect over SSH
  5. Update
    1. sudo apt-get update
  6. Upgrade the components (aka patch the OS)
    1. sudo apt-get upgrade -y
  7. Add Docker gpg key (that is 'qO' not 'qZero')
    1. sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -" 
  8. Add the Docker repository to the apt sources
    1. sudo sh -c "echo deb http://get.docker.io/ubuntu docker main >> /etc/apt/sources.list.d/docker.list"
  9. Update the local apt repository after adding the docker reference
    1. sudo apt-get update
  10. Install (latest) Docker (on 12/15/14 this is 1.4.0)
    1. sudo apt-get install lxc-docker -y
Now you are ready to play with the magic of Containers.
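
A quick sanity check that the install worked (hello-world is a tiny test image on the Docker Hub):

sudo docker version
sudo docker run hello-world

The first should report both client and daemon versions, the second pulls the test image and prints a greeting.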
 

Monday, December 19, 2011

Ubuntu Desktop 11.10 on Hyper-V piix4_smbus

So, I needed a small virtual machine that had the Hyper-V Integration Components working in it (I need the clean shutdown action).

XP is just too large, but usually works well.  Linux, why not.  And the RedHat family just incorporated the latest of the Hyper-V Linux Integration Components.  Perfect.

Now, the flavor.  RedHat?  No, no license.  Fedora? Well, now that I think of it, I forgot this one before.  CentOS?  Great for a server.  Ubuntu.  Yep, I chose Ubuntu.

I created a VM and installed Ubuntu Server without a hitch.  Just be sure to use a Legacy Network Adapter during the install process.

I then created a VM and tried to install Ubuntu Desktop.  Big fail.
First of all, for the life of me I could not get the installer screen to popup.  I could not figure out what was going on.  After all, the server VM went straight to the language selection.

Well, at the bottom of the screen is this cryptic graphic of a keyboard, an equal sign, and what looks to be a person with arms and legs spread.  After a few reboots I interpreted this symbol to mean; “if you want to install, hit enter now”.
Hey, the language selection pops up!

Now, I select my language and hit enter.  Error message, drop to a prompt.  Fail two.

The error:  piix4_smbus 0000:07.3: SMBus base address uninitialized – upgrade BIOS or use force_addr=0xaddr

I only play a Linux guy on TV.  And a bit of searching showed me that this error is common on VirtualBox and the settings change suggested is not a safe thing to try.  Oh, and that I needed to re-compile my initrd.

This is a brand new build and freshly downloaded media.  Crazy talk!

Well, I looked around and figured out a workaround.
I managed to get the installer to load by selecting the “other options” at the installation selection menu (F6) and setting “acpi=off” (highlight it, enter or spacebar, ESC to close the options dialog).

Oh, and a nice feature, on the Windows 8 Developer Preview I actually had a mouse during the GUI installer of Ubuntu Desktop.

Now, enabling the Integration Components...

I did just a couple things.  Bring up a terminal using “Ctrl + Alt + t”
Then you need to do sudo -i to switch to root for the remainder of the session, or type sudo at the beginning of each command.

Using nano (I never got any handle on vi and have always fallen back to nano) edit /etc/initramfs-tools/modules  ( “nano /etc/initramfs-tools/modules” ) and add these 4 lines:
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc

Save that.  Then generate an updated initramfs with “update-initramfs -u” and reboot.
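
If you would rather not hand edit, the same thing can be done in one shot from that root shell (a sketch; it just appends the four module names and rebuilds the initramfs):

cat >> /etc/initramfs-tools/modules <<'EOF'
hv_vmbus
hv_storvsc
hv_blkvsc
hv_netvsc
EOF
update-initramfs -u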

If you had rebooted prior to this you may have noticed a storage layer inaccessibility error that is now gone.

I then ran “apt-get update” and “apt-get upgrade” – two hours go by…  Not necessary, but it does update all packages.

This entire time I have been using a Legacy Network Adapter with the Ubuntu Desktop VM.  After the update completed I shutdown the VM, removed the Legacy Network Adapter, added a (synthetic) Network Adapter and all was good.