tag:blogger.com,1999:blog-62300245592798119012024-03-13T03:58:21.469-07:00I.T. ProctologyWorking with the "back-end" of I.T. systems.<br>The things learned as an IT Pro'fessional turned software tester, researcher, and product manager.<p>Learn. Apply. Repeat.</p>BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.comBlogger323125tag:blogger.com,1999:blog-6230024559279811901.post-49705273311046507362020-05-19T14:23:00.000-07:002020-05-19T14:23:00.302-07:00Ubuntu 20 on RaspberryPi with wireless for Science!For a few years now I have been running BOINC on four Raspberry Pis - kind of like a poor man's compute cluster.<br />
<br />
Most of that time was spent chugging away for Seti@Home (in fact, I have been chugging away for Seti@Home for a really long time).<br />
<br />
With the shuttering of Seti@Home I needed to discover a new project and stumbled on Science United ( <a href="http://scienceunited.org/">http://scienceunited.org</a> ).<br />
This was great. I could still support science and it was based on the BOINC compute platform.<br />
<br />
This plan all went sideways when I added Science United to my Raspberry Pi that was running Raspbian.<br />
Lots of searching led to a hack to use a different repo - because, come to find out, Science United requires a minimum version of the BOINC client, one newer than the Raspbian repo provides by default.<br />
<br />
So, the hack I found worked, but I was not happy with it. So I went about setting up Ubuntu Server on my Raspberry Pis.<br />
Heading over to <a href="https://ubuntu.com/download/raspberry-pi">https://ubuntu.com/download/raspberry-pi</a> I checked the model of my Pis and downloaded the correct image, uncompressed it using 7-Zip, and then burned the image using Win32DiskImager. Just like I had done 50 billion times before.<br />
<br />
I then attached my monitor, small keyboard and booted.<br />
And quickly discovered there was no way to set up the Wi-Fi.<br />
Some searching led me to tons of articles describing using the Ubuntu image generator and editing network-config.<br />
(Guess how many posts were copies of the original Ubuntu tutorial? I hate that.)<br />
<br />
Well, that is fine and dandy. But it does not totally work, as some other folks tried to point out. Plus you have this very specific image generator installed that you will use how often? Once per year. (waste)<br />
Because, guess what? You just installed a server operating system. Server operating systems don't ship with wireless support, on purpose. You have to add it yourself, just like on any server OS (Windows too).<br />
<br />
Here is how I finally sorted it all out:<br />
<br />
<br />
<ol>
<li>Attach your pi to your network using a cable</li>
<li>boot</li>
<li>log on using the username '<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">ubuntu</span>' and password '<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">ubuntu</span>' (if this is the first boot, be patient. After the logon prompt appears, you have to wait a bit for the key-generation messages that show up once cloud-init finishes. After that you can log on and will be forced to change your password)</li>
<li>change your password</li>
<li>update the pi:</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo apt update</span></li>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo apt upgrade</span></li>
</ol>
<li>install the wireless tools:</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo apt install wireless-tools</span></li>
</ol>
<li>run <span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">iwconfig </span>- notice you have a wlan network interface now, most likely named 'wlan0'</li>
<li>copy a netplan sample wireless config to the netplan folder:</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo cp /usr/share/doc/netplan/examples/wireless.yaml /etc/netplan/wireless.yaml</span></li>
</ol>
<li>edit that sample config file:</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo nano /etc/netplan/wireless.yaml</span></li>
</ol>
<li>set the interface name to 'wlan0' (the example interface of 'wlp2s0b1' won't get you anywhere)</li>
<li>I am using dhcp on my network, so:</li>
<ol>
<li>set dhcp4 to yes</li>
<li>remove the addresses, gateway4, and nameservers lines (Ctrl+K deletes a line in nano)</li>
<li>set the name of your access point by replacing network_ssid_name with the name of your wireless network</li>
<li>Set the password for the access point</li>
</ol>
<li>save the file ( ctrl + x in nano )</li>
<li>test your configuration changes with the command '<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo netplan try</span>'</li>
<li>then I do '<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo netplan generate</span>' for safety</li>
</ol>
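For reference, after those edits my /etc/netplan/wireless.yaml ends up looking roughly like this (the SSID and password below are placeholders for your own network):

```yaml
# /etc/netplan/wireless.yaml - wireless interface with DHCP
# "my_network_ssid" and "my_wifi_password" are placeholders.
network:
  version: 2
  wifis:
    wlan0:
      dhcp4: yes
      access-points:
        "my_network_ssid":
          password: "my_wifi_password"
```

If 'sudo netplan try' accepts that without complaint, you are in good shape.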
<br />
Now, I also want to rename my Raspberry Pi (aka change the hostname).<br />
Using '<span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo hostnamectl set-hostname &lt;my-new-hostname&gt;</span>' it is done.<br />
<br />
Now, restart, detach the network cable and test the wireless settings.<br />
<br />
Now, if you want to attach to Science United, here is the rest:<br />
<br />
<br />
<ol>
<li>install the BOINC client:</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo apt-get install boinc-client</span></li>
</ol>
<li>install the BOINC management utility for command line (I found that I needed this to properly attach to Science United)</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">sudo apt-get install boinctui</span></li>
</ol>
<li>run the boinctui</li>
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace; font-size: x-small;">boinctui</span></li>
<li>attach to the localhost ( just hit enter )</li>
<li>F9</li>
<li>Projects</li>
<li>Connect to account manager</li>
<li>Science United</li>
<li>Enter your credential for Science United</li>
</ol>
</ol>
<br />
Watch the status to make sure you attached and begin receiving work.<br />
If all is good, exit the boinctui with F9 -> File -> quit<br />
<br />
There you go. Your old Raspberry Pis are being useful, supporting science without any wire except power.<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-51321084036843878782020-05-12T12:57:00.000-07:002020-05-12T12:57:11.589-07:00NGINX Plus with ModSec OWASP by Ansible - part 5Here is the last in this series of using Ansible with the NGINX Ansible Role.<br />
This one was done as a challenge from one of my security peers.<br />
<br />
First, the assumptions:<br />
local file system inventory files, an Ansible host deployed to the same VPC as the remote machines, variable files, and running the playbooks from the same folder as the inventory and variable files.<br />
<br />
No long introduction this time. If you have been following along the scenarios have started to repeat, but become more useful with more complex configurations.<br />
<h3>
The playbook</h3>
This time the OWASP playbook will be used: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_modsec_OWASP_CRS.yaml<br />
Along with that is the same <span class="pl-s">nginx_lb_plus_modsec_vars.yaml variable file as the previous post.</span><br />
<span class="pl-s">One difference is that the framework file this time is: https://github.com/brianehlert/ansible-nginx-examples/blob/master/modsec_owasp.conf</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">If you compare it to the modsec_rules file from the previous post it lacks the test rule from last time.<br />Why? Because I am going to build the rules on the fly within the playbook.</span><br />
<span class="pl-s"><br /></span>
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"><span class="pl-s">ansible-playbook nginx_lb_plus_modsec_OWASP_CRS.yaml -i loadbalancers</span></span></span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">I am going to skip the basics of inventory and variables file reading.</span><br />
<span class="pl-s">The meat begins in the post_tasks of this playbook.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">The playbook:</span><br />
<span class="pl-s">pulls the CRS from the SpiderLabs GitHub repository</span><br />
<span class="pl-s">unzips the archive</span><br />
<span class="pl-s">copies the example to a new file in the configuration directory</span><br />
<span class="pl-s">Selects out the rules names</span><br />
<span class="pl-s">Then writes out the names of the rules to implement into the framework modsec_rules.conf file.</span><br />
<span class="pl-s">Builds the includes</span><br />
<span class="pl-s">outputs the rule set just for review</span><br />
<span class="pl-s">and enables blocking.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">After all this the configuration is tested to make sure it will work. Assuming that passes nginx is reloaded to apply the configuration.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">Now, the full OWASP rule set is implemented and working.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">I am sure that someone can make that into a nice demo.</span>BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-29496419520895371832020-05-06T09:45:00.000-07:002020-05-06T09:45:05.333-07:00Getting Started with the Ansible Collection for NGINX ControllerToday, I have a new post. And a video demo.<div>
<br /></div>
<div>
I am not going to write it here. You can go view it at the NGINX blog.</div>
<div>
<br /></div>
<div>
To warn you, the subject line says it all.</div>
<div>
I have developed a set of Ansible Roles for NGINX Controller and have combined those into a Collection.</div>
<div>
<br /></div>
<div>
Working with a Collection is slightly different than working with Roles, just like using Ansible Tower is a bit different than using AWX (aka free Ansible).</div>
<div>
<br /></div>
<div>
In the post I cover both at a high level: Collections, and Tower with NGINX Controller.</div>
<div>
<br /></div>
<div>
<a href="https://www.nginx.com/blog/getting-started-ansible-collection-nginx-controller/">https://www.nginx.com/blog/getting-started-ansible-collection-nginx-controller/</a></div>
<div>
<br /></div>
<div>
I also have sample repositories that I am maintaining:</div>
<div>
For the Tower demo: <a href="https://github.com/brianehlert/ansible-tower-nginx-controller-examples">https://github.com/brianehlert/ansible-tower-nginx-controller-examples</a></div>
<div>
For NGINX Controller general use cases: <a href="https://github.com/brianehlert/ansible-nginx-controller-examples">https://github.com/brianehlert/ansible-nginx-controller-examples</a></div>
<div>
And for the series that has been publishing so far: <a href="https://github.com/brianehlert/ansible-nginx-examples">https://github.com/brianehlert/ansible-nginx-examples</a></div>
<div>
<br /></div>
<div>
I plan on breaking down the NGINX Controller examples as I have been with the NGINX data plane examples.</div>
<div>
<br /></div>
<div>
<br /></div>
BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-1542343370940516782020-05-05T12:42:00.000-07:002020-05-05T12:42:00.626-07:00NGINX Plus with modsec - part 4Last blog we moved from NGINX OSS to NGINX Plus for the load balancer.<br />
This time I am going to add the modsec module and configure a very basic test rule (one more post to get to the complex rules).<br />
<br />
Just a reminder for folks entering the series mid-stream:<br />
The assumptions: local file system inventory files, an Ansible host deployed to the same VPC as the remote machines, variable files, and running the playbooks from the same folder as the inventory and variable files.<br />
<br />
<br />
<br />
<br />
For this article we will use the following playbook:<br />https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_modsec.yaml<br />
And this accompanying variables file:<br />
https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_modsec_vars.yaml<br />
<br />
And we are using the Ansible Role for NGINX.<br />
<h3>
Running the playbook</h3>
Running the playbook is no different than the pattern in the previous posts:<br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">ansible-playbook nginx_lb_plus_modsec.yaml -i loadbalancers</span></span><br />
<br />
Like the NGINX Plus post before, this will read in the webservers inventory file and the nginx_lb_plus_modsec_vars.yaml variables file.<br />
The variable file defines the path to the Plus key and cert, tells the role to delete the license and clean up afterward, and enables the NGINX Plus API.<br />
The new variable option is: <span class="pl-ent">nginx_modules</span><br />
<br />
<span class="pl-ent">In this case the waf module is added, which indicates mod security.</span><br />
<br />
<span class="pl-ent">The remainder of the configuration is all the same as the two load balancer blogs prior. </span><br />
<span class="pl-ent">What does start to get unique to modsec is the post_tasks in the playbook.</span><br />
<h3>
<span class="pl-ent">Setting the waf module</span></h3>
<span class="pl-ent">In the post_tasks section of the playbook I am copying a framework config file for mod security rules.</span><br />
<span class="pl-ent">https://github.com/brianehlert/ansible-nginx-examples/blob/master/modsec_rules.conf</span><br />
<span class="pl-ent"> </span><br />
<span class="pl-ent">After the framework file is copied into place the Rule is being enabled.</span><br />
<span class="pl-ent">Then the NGINX config is tested (to make sure nothing went pear-shaped).</span><br />
<span class="pl-ent">Assuming the configuration test passes the nginx process is restarted.</span><br />
<span class="pl-ent"><br /></span>
<span class="pl-ent">At this time some limited traffic should be blocked.</span><br />
<span class="pl-ent">If you take a look at the modsec_rules.conf file, we are blocking a URL with 'test' in it. As well as logging and returning a 403.</span><br />
<span class="pl-ent"><br /></span>
<span class="pl-ent">That is the basics of enabling a module with the Ansible Role on NGINX Plus.<br />In this case with additional settings of enabling a basic mod security rule.</span><br />
<span class="pl-ent"><br /></span>BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-69407024019045440062020-04-28T13:17:00.000-07:002020-04-28T13:17:01.441-07:00Moving from NGINX OSS to NGINX Plus - part 3In the previous blog posts I have been working with NGINX OSS (aka open source).<br />
There is a lot you can do with the open source version, and using Ansible to drive it, you can automate most things that you want.<br />
<br />
If you are coming in mid-stream, there are some assumptions in this demo. Go back to the provisioning in AWS blog if you want the full detail.<br />
The assumptions: local file system inventory files, an Ansible host deployed to the same VPC as the remote machines, variable files, and running the playbooks from the same folder as the inventory and variable files.<br />
<br />
Starting here, I am going to focus on NGINX Plus.<br />
It is the paid version of NGINX and includes extra features beyond what the open source version does.<br />
A little bit of search will give you feature comparisons such as: https://www.nginx.com/products/nginx/#compare-versions<br />
<br />
The reason that I am going to start talking about NGINX Plus is dynamic modules.<br />
<h3>
Moving from OSS to Plus</h3>
Moving from OSS to Plus can be done a couple different ways.<br />
One way is to simply spin up new machine instances with NGINX Plus installed, point the traffic over, and you are done.<br />
In this example I am going to re-use the same loadbalancer machine that I setup in AWS and have NGINX OSS installed on to.<br />
<br />
I am not concerned with downtime in this demo (not that it would be more than minutes), so I am going to uninstall OSS and re-install with the Plus binaries.<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">ansible-playbook nginx-remove.yaml -i loadbalancers</span></span><br />
<br />
You can find this playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_remove.yaml<br />
It literally stops and removes NGINX. No frills, nothing fancy. Nothing to see here, move along.<br />
<h3>
Installing NGINX Plus</h3>
NGINX Plus is delivered from NGINX and you need a subscription to access it.<br />
<br />
Using the Ansible Role for NGINX, it isn't very different from NGINX OSS.<br />
The playbook here: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus.yaml<br />
looks nearly identical to the playbook for installing NGINX OSS. The key differences are the variables file and the workflow that happens under the hood.<br />
<br />
At the top of the variables file: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_plus_vars.yaml<br />
There are additional settings of:<br />
nginx_type: plus - install the plus version of NGINX<br />
nginx_delete_license: true - delete the Plus repository license from the remote (good to do)<br />
nginx_license: this is your license for accessing the NGINX Plus repository, stored in the playbook directory.<br />
nginx_rest_api_*: these relate to enabling the nginx plus api, which you probably want.<br />
<br />
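Pulled together, the top of that variables file looks something like this (the values here are illustrative placeholders; the linked nginx_lb_plus_vars.yaml has the authoritative names and values):

```yaml
# Plus-specific settings for the NGINX Ansible Role (sketch)
nginx_type: plus
nginx_delete_license: true
nginx_license:
  certificate: license/nginx-repo.crt
  key: license/nginx-repo.key
nginx_rest_api_enable: true   # one of several nginx_rest_api_* settings
```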
After that, the template variables begin as they were for NGINX OSS.<br />
<h3>
What happens</h3>
Just like with the playbook for NGINX OSS, the nginx_lb_plus_vars.yaml variable file is read in, and the variables webserver1 and webserver2 are replaced with the values from the webservers inventory file.<br />
<h3>
Once done</h3>
You should be all set with a functioning load balancer again. This time with NGINX Plus instead of NGINX OSS.<br />
<br />
The next step: let's add and configure a module. <br />
<h3>
Pre_ and Post_tasks</h3>
This is the first time I have used pre_tasks and post_tasks.<br />
These are useful when your playbook invokes one Role.<br />
The pre_tasks are executed before the Role(s) and the post_tasks are executed after.<br />
<br />
While you can use these when multiple roles are listed with include_role, you do have to be careful that the pre and post tasks align with all the Roles being invoked by the playbook.<br />
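The ordering is the important part. A minimal skeleton of the pattern (the role name is the one from Galaxy; the debug tasks are invented for illustration):

```yaml
- hosts: loadbalancers
  become: true
  pre_tasks:
    - name: Runs before the role
      debug:
        msg: "preparing the host"
  roles:
    - role: nginxinc.nginx
  post_tasks:
    - name: Runs after the role
      debug:
        msg: "cleaning up"
```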
<br />
<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-15101821381898055342020-04-21T13:07:00.000-07:002020-04-21T13:07:00.684-07:00Installing NGINX OSS webserver with Ansible - Part 2This is the third installment of me walking through the simple demo that I set up using Ansible.<br />
<br />
This installment installs the web server machines using NGINX OSS.<br />
<br />
Please refer back to the first two installments for an understanding of the assumptions: local file system inventory files, an Ansible host deployed to the same VPC as the remote machines, variable files, and running the playbooks from the same folder as the inventory and variable files.<br />
<br />
<h3>
The playbook</h3>
In this example I will focus on the playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_web_demo_oss.yaml<br />
<br />
As in the previous blog, this uses the NGINX Ansible Role.<br />
<br />
This time, I have included the 'extra_vars' - extra variables specific to this playbook, within the playbook itself instead of using an external file.<br />
<br />
The reason I did this was to follow a different pattern, and because this is a static playbook for me. The only substitution that I am doing involves the individual webserver machines that this particular configuration is applied to.<br />
<br />
The play uses the default.conf Jinja2 template for an http server from the Ansible Role.<br />
I am placing the configuration in the default location and setting an error page location.<br />
It instructs NGINX to respond to traffic from any IP address on port 80.<br />
And lastly I am setting the path and file for the page to be demo_index.html<br />
<br />
(you will also find demo_index.html in the example repository alongside this file).<br />
<br />
Running the playbook:<br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">ansible_playbook nginx_web_demo_oss.yaml -i webservers</span></span><br />
<br />
<h3>
The result</h3>
Like my previous post, we aren't done yet. So another error message.<br />
This time the error message that you should receive is no longer 'an error occurred' but rather '403 Forbidden'.<br />
<br />
Why '403 forbidden' you might ask? Because we configured a file path for the HTML file.<br />
The path we set is: <span class="pl-s">/usr/share/nginx/html/</span><span class="pl-s">demo_index.html</span><br />
<span class="pl-s">But there is not a file at that path. As far as the web server knows, you can't access whatever you are requesting. So the 403 because it is a configured path instead of a 404 which you would get against a path that does not exist.</span><br />
<h3>
<span class="pl-s">Fixing the 403</span></h3>
<span class="pl-s">I thought I would be nice and fix the 403 in this blog post, so here is the solution.</span><br />
<span class="pl-s"><br /></span>
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;"><span class="pl-s">ansible-playbook update_demo_index.html.yaml -i webservers</span></span></span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">This is another simple playbook; https://github.com/brianehlert/ansible-nginx-examples/blob/master/update_demo_index_html.yaml</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">It simply copies the file demo_index.html to the correct path on each webserver.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">Now, if you refresh your browser that you are using to test the loadbalancer, you should get an NGINX Demo page.</span><br />
<h3>
<span class="pl-s">Why so granular</span></h3>
<span class="pl-s">One reason for being so granular with the playbooks is to separate the tasks that are being performed.</span><br />
<span class="pl-s">The benefit is that the playbooks can be reused in other playbooks, like functions. Another benefit is that they align with how Roles should be created, as reusable tasks.</span><br />
<span class="pl-s"><br /></span>
<span class="pl-s">If you get into using Ansible Tower, you can start linking together playbooks into a workflow - branching off for success and failure conditions. Even adding pauses for approval workflows.</span><br />
<span class="pl-s"><br /></span>BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-50205864718214212222020-04-14T13:08:00.000-07:002020-04-14T13:08:02.783-07:00Installing NGINX OSS loadbalancer with Ansible - part 1In the previous post I provisioned a number of machines for a demo environment into AWS.<br />
<h3>
Background </h3>
Remember, that to make ssh access easy between the Ansible server and the target machines, my Ansible host runs in the same VPC in AWS as the demo environment machines.<br />
<br />
If you inspect the inventory files, that explains why I write in the internal DNS names of the instances.<br />
<br />
Starting in this post I will be using those machines and deploying out a very simple topology of a load balancer in front of two webservers.<br />
For this entire setup, we will be using NGINX.<br />
<br />
If you recall back to the previous blog, the playbook created machines in AWS and wrote some inventory files to the file system where the playbook was run from.<br />
<br />
So now you should see 4 additional files without extensions for controller, dbserver, loadbalancers, and webservers. We will be using the loadbalancers inventory file in this exercise.<br />
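Each generated inventory file is just a short text file of hosts. As a guess at its shape (the hostname is invented for illustration, and whether a group header is present depends on how the provisioning playbook writes the file), the loadbalancers file looks something like:

```ini
[loadbalancers]
ip-10-240-1-12.us-west-2.compute.internal
```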
<br />
In this blog I am focused on this playbook: https://github.com/brianehlert/ansible-nginx-examples/blob/master/nginx_lb_oss.yaml<br />
<br />
The playbook uses the NGINX Role from Ansible Galaxy, you can find that here: https://galaxy.ansible.com/nginxinc/nginx<br />
<h3>
Installing NGINX OSS on the loadbalancer</h3>
<br />
To run the playbook execute the command:<br />
<span style="font-size: x-small;"><span style="font-family: "courier new" , "courier" , monospace;">ansible_playbook nginx_lb_oss.yml -i loadbalancers </span></span><br />
<br />
This playbook uses both the loadbalancers and webservers files as it executes; let me explain.<br />
<br />
The webservers file is read into memory.<br />
<br />
Then at the loadbalancers machines the variable file nginx_lb_vars.yaml is read from the playbook folder and the variables 'webserver1' and 'webserver2' are replaced with the values from the webservers inventory file - so that any traffic being forwarded is to real machines.<br />
<br />
Then the role is invoked with all of the required 'extra_vars' in memory.<br />
<br />
The Ansible Role for NGINX installs NGINX OSS (aka open source version) and adds a reverse proxy (aka load balancer) configuration based on my yaml variables file. This points to the DNS names of the two web server instances.<br />
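The rendered load balancer configuration ends up along these lines (the upstream name and hostnames are placeholders; the actual config comes from the role's template and my vars file):

```nginx
upstream webservers {
    server ip-10-240-2-10.us-west-2.compute.internal:80;
    server ip-10-240-2-11.us-west-2.compute.internal:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}
```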
<h3>
Testing the configuration</h3>
Now, you probably ran that, and jumped straight to testing traffic by hitting the public endpoint of the loadbalancer machine. And then you probably thought to yourself: 'what kind of hack is this guy, this doesn't work'<br />
<br />
Well, you are right. You should be receiving the response 'an error occurred'. This is good, this is right and proper for where we are in the deployment.<br />
If the URL can't be reached, then something else happened. But if your error is: 'an error occurred' we are in a good place.<br />
<br />
And might you guess why you have that error message?<br />
Because neither of the two webservers are responding. They don't yet have NGINX installed nor configured.<br />
So everything is working as it should be.<br />
It just isn't working, yet.<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-20301482374237874392020-04-07T12:21:00.000-07:002020-04-07T12:21:16.675-07:00Using Ansible to deploy a demo environment to AWSI promised a few months ago that I would be back at it, and well.... Things got in the way. Lots of things.<br />
<br />
For a while I am going to focus on one method that I use with Ansible to set up a demo environment in AWS.<br />
<br />
The first playbook I am going to talk about can be found here: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_environment_deploy_aws.yaml<br />
<br />
<h3>
The set-up</h3>
<br />
Before we begin, let me describe the first assumption:<br />
I place the variable file and the playbook in the same folder and I import the variables when I execute the playbook.<br />
<br />
The second is how I establish the VPC: I do not use the default VPC.<br />
<br />
I use the Cloud Formation Template: <a href="https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/demo-vpc.json">https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/demo-vpc.json</a><br />
to establish a Virtual Private Cloud (aka network), 2 public and 2 private subnets, and the proper routing and gateways for communication between the subnets and to the outside world from each.<br />
<br />
When I provision my machines for the demo, I will be referencing the same region, the vpc from above, and the subnets. We will get to that in a bit.<br />
<br />
Third; I run my Ansible host in the VPC, attached to one of the public subnets. I lock down its public interface with a security group and I ssh to it to perform the tasks at hand.<br />
<br />
Fourth: I have Ansible all set up talking to the AWS API.<br />
This was the first blog that really gave me success with getting that set up; https://tomoconnor.eu/blogish/part-3-ansible-and-amazon-web-services/#.XozLZHJ7l3j<br />
<br />
<br />
<h3>
The workflow</h3>
<br />
Having established the VPC, we will need to copy the vpc-id and the subnet-id(s) and paste them into the variable files.<br />
This is the variable file for the machine provisioning: https://github.com/brianehlert/ansible-nginx-controller-examples/blob/master/controller_aws_vars.yaml<br />
<br />
Let me describe that a bit:<br />
<pre>---
region: "the AWS region your VPC is in from above"
vpc_id: "the ID of the VPC - something like vpc-0no9th755is5fake1"
vpc_cidr_ip: "the CIDR of the VPC, such as: 10.240.0.0/16"
loadbalancer_public_subnet: "the ID of one of the public subnets where loadbalancer machines will be placed - such as: subnet-0b03bad1228you93r"
webserver_private_subnet: "the ID of one of the private subnets for the webservers"
image_id: "the AMI image ID for the region - I use the Ubuntu 18.04 LTS minimal image myself"
key_name: "your ssh access key for AWS to embed in the image"
controller_public_subnet: "the other public subnet for the controller machine"
database_private_subnet: "the other private subnet for the postgreSQL db server"
prepend_name: "a string to prepend to the machine names, such as 'mine'"</pre>
<br />Now that the variables are all set up, and the vpc is in place, lets build some machines.<br />
<br />
You can run the playbook with the following:<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New", Courier, monospace;">ansible-playbook controller_environment_deploy_aws.yaml -e "@controller_aws_vars.yaml" </span></span><br />
<br />
If I read that command line back it would speak like this: Run the Ansible playbook controller_environment_deploy_aws.yaml including the variable from the file controller_aws_vars.yaml.<br />
The '-e' means to include 'extra variables' to the execution of the playbook. The '@' at the beginning of the variable means to reference a file on the Ansible host.<br />
<br />
<h3>
What all does that build for me</h3>
What you will now end up with is multi-fold. Part of what I will describe is to make dealing with inventory a little easier.<br />
<br />
If you read through the playbook you can probably understand that some security groups are created for each machine type.<br />
The AMI that is referenced is used as the base image for all of the machines. But the machines are created as different sizes.<br />
The network attachments happen against the subnets defined in the vars file.<br />
<br />
One key thing that happens is that individual inventory files are written out in the same folder where the playbook is being run from.<br />You will have an inventory file for each machine type. These are how Ansible will reference each machine type later on.<br />We will be referencing '-i loadbalancers' when installing nginx, and '-i webservers' when installing nginx and the web site files. <br />
The '-i' refers to a specific inventory file.<br />I did it this way because it gives me the flexibility to quickly stand something up with inventory (and reset) without getting into all of details of Ansible inventory and the many ways to handle it.<br />
<br />
Want to reset? Delete the machines in AWS, delete the corresponding inventory files, and start over.<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-11306846642439957272019-11-12T15:19:00.000-08:002019-11-12T15:19:09.214-08:00my take on infrastructure as codeLately I have been working through a number of automation tasks to create environments and deal with various customer scenarios.<br />
<br />
I will say right now, I am all about 'cattle, not sacred cows' as in my configurations are always separate from the machine. All that configuration state can always come from some place else.<br />
<br />
15 years ago, I was doing this without automation. We rebuilt our servers at every upgrade of the primary application running on them, and I also updated all the firmware, etc. So I have been in this school of thought for a long time. We just didn't use source control back then; it was documents and settings files and a vault.<br />
<br />
The primary concept behind 'infrastructure as code' is that you can run some set of automation, then bundle up all of the artifacts that drove that automation as a documented source of truth.<br />
<br />
Some folks think of the 'infrastructure as code' part as just the settings files. Just the code, and a set of variables. But I challenge that it is much larger than that.<br />
<br />
For example, with an Ansible playbook: you have the source playbook, and you have the environment variables passed in. And don't stop there.<br />
You might also have some Jinja2 templates that the playbook used as a transform; you might have had temporary variables in flight; maybe files needed to be purged to harden the machine in production; etc.<br />
<br />
All of that is part of the infrastructure as code. Not just what is fed in, but also the scripts and automation that drive the result. All of it.<br />
I should be able to take an archive, open it, play it, and get the same result.<br />
Which means that entire archive is your source of truth at that moment in time.<br />
<br />
It is the <i>moment in time</i> part that gets some type of source control involved.<br />
But in reality, your truth might not be in GitLab; it might be an archive in Artifactory, as it might include binaries and other things that don't source control well.<br />
<br />
So think about your pipelines, the artifacts that make them up and move through them, and the end results.<br />
Think about it from the view of: 'can you replay that and get the same result?' or 'can you replay that offline?'<br />
<br />
I know that way back, when we started to look at our rebuild process and combined that with a regular disaster recovery exercise, we really started to refine things and get a handle on the entire process and the dependencies across processes.<br />Details that are really easy to overlook in the daily grind of making it all just work.<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-35468440984306546432019-11-11T14:36:00.001-08:002019-11-11T14:36:47.989-08:00Sorry for the huge silenceSorry for the huge silence all.<br />
As some of you may recall, the entire Redmond location that I was at with Citrix was let go. RIF'd we call it.<br />
After that I landed at F5 in Seattle.<br />
<br />
The work at F5 was a pretty wild and constantly fast ride.<br />
With the acquisition of NGINX by F5, I became part of the NGINX business.<br />
<br />
That covers over 18 months about as lightly as I can.<br />
<br />
What you will find from me going forward is automation. Probably interesting to DevOps and SRE types more so than what I used to write about.<br />And probably a lot more Linux than Windows.<br />
<br />
I talk to customers a lot more than I used to. <br />
What I find interesting is that the problems in IT that I was dealing with 20 years ago are still present in the industry today.<br />
Yes, the tools have changed, the scope has changed, and the impacts have changed - but many of the problems still remain. <br />
It is just something that I find really interesting.<br />
<br />
Part of me finds it disturbing as well. Specifically, that the core problems remain; they shift and change ever so slightly, but they are still present.<br />
Is it that tools have come and gone?<br />
Are the problems solved, only for the tools to get re-written and the problems to surface again?<br />
Is it that IT changes and keeps bringing everything back around with each generation?<br />
<br />
Or is it just that folks shift from one infrastructure to another and all that baggage just comes along for the ride? To the new, not fully complete, platform...<br />
I speculate the latter more than not.<br />
<br />
Anyway. Back at it. Hopefully posting things that are useful to the community, and hoping to gather some insights as well.BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-52263223973536202632018-05-07T08:23:00.000-07:002018-05-07T08:23:03.460-07:00AWS exposes route tables so I can recover from their bugThe cloud. It is a wondrous thing. When it works.<br />
But I think one thing most all can agree on: it should 'just work'. All aspects of your cloud experience should 'just work'.<br />
<br />
In the past weeks I have shifted my attention to AWS. <br />
It has taken some time to get used to referring to a virtual network under the marketing term Virtual Private Cloud, or a virtual machine as an 'EC2 instance', or any number of other marketing-term-focused things.<br />
My preference here: call it what it is, not by the marketing name for the feature.<br />
Enough of that.<br />
<br />
Back to the title. One of the first things that I thought interesting with AWS is that within a VPC, the route table is exposed to me.<br />
Why would I want to / need to muck around with a route table in the cloud? I have port-level firewall rules (Security Groups), and I have to stand up an Internet Gateway to enable outgoing traffic. Why would I ever need to muck with something as low level as a route table?<br />
<br />
Well, I can tell you from being burned on this multiple times now, it is so I can fix what the AWS portal screws up for me.<br />
<br />
Back to my original statement - this is the cloud, it should 'just work'. Networking is a pretty fundamental thing here. It needs to be solid, resilient, and always functional.<br />
And yet, at least three times now in less than three months, I have lost days due to multiple route tables in a VPC that I created through the portal.<br />
<br />
And usually the kicker is that one of the route tables is correct and there is a second one that is empty: a route table, but with no defined routes.<br />
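When this bites you, one way to spot it quickly is to list the route tables in the VPC and flag any that carry no routes. The filter below is my own sketch; it expects the 'RouteTables' list shaped like boto3's ec2.describe_route_tables() response, and the boto3 call itself appears only as a comment:

```python
def find_empty_route_tables(route_tables):
    # Flag route tables that exist but carry no routes at all -
    # the symptom described above.
    return [rt["RouteTableId"] for rt in route_tables if not rt.get("Routes")]

# In real use the list would come from AWS, e.g.:
#   import boto3
#   resp = boto3.client("ec2").describe_route_tables(
#       Filters=[{"Name": "vpc-id", "Values": ["vpc-0abc..."]}])
#   suspects = find_empty_route_tables(resp["RouteTables"])

# Sample data mimicking the symptom: one good table, one empty one.
sample = [
    {"RouteTableId": "rtb-good",
     "Routes": [{"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"}]},
    {"RouteTableId": "rtb-empty", "Routes": []},
]
print(find_empty_route_tables(sample))  # ['rtb-empty']
```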
<br />
The really strange things happen when things work manually, say with one VM, and then you start automating and suddenly what you automate does not work.<br />
As if one deployment uses the correct route table and the other doesn't.<br />
<br />
It is one of those obscure things that, as a customer, I expect to work - all the time. I should not have to think about this. I should not have to remember (in the recovery from frustration) that I had been burned by this in the past and go looking, only to discover that I have this strangely named but empty route table.<br />
<br />
My call to AWS: fix it. Don't let it be mucked up in the first place. Give me success here. Don't frustrate me, don't waste my time.<br />
<br />
But then, you did get paid for extra days of running compute while I tried to figure it all out. So I guess that you might not have my interest at heart.<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-46259916078368984402018-01-04T13:05:00.000-08:002018-01-04T13:05:05.689-08:00Finding likes between two arraysIn a recent interview I was asked to write a function to solve the following problem:<br />
<br />
You have a SQL query and two arrays; find the like elements between the two arrays.<br />
<br />
(I am assuming that a few of you have your own opinions on how to solve this, and that is excellent - please enlighten me)<br />
<br />
The first thought that popped into my head was that this was not a coding interview, the second was that I really needed a laptop to work through this, and the third was - doesn't PowerShell have a cmdlet that will do this for me?<br />
<br />
Back at home, I thought I would investigate the PowerShell angle, since I had access to a laptop.<br />
I am assuming a small dataset.<br />
<br />
First, let's build a couple of arrays to validate with:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$array1 = "Elmer", "hunter", "Bugs", "rabbit", "Tweety", "bird", "Sylvester", "feline"</span><br />
<span style="font-family: Courier New, Courier, monospace;">$array2 = "Eddie", "investigator", "Roger", "rabbit", "Jessica", "rabbit", "Judge", "doom", "tweety", "bird"</span><br />
<br />
Now, I did add some variation for fun. Why might this be an important skill?<br />
It might be important because I might need to mine a bunch of data or logs to investigate a pattern.<br />
<br />
At this point I have arbitrarily added an additional requirement on myself: I want to see all of the matched data items, not just the matched values. Equivalents vs. equals, if you will.<br />
Why? Again, in an investigation I will probably want to use a fuzzy match instead of a literal match, and then I will want to work through the resulting set again (most likely both with eyes and with code).<br />
<br />
As for PowerShell doing this for me, I discovered this:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">$compared = Compare-Object -ReferenceObject $array1 -DifferenceObject $array2 -IncludeEqual</span><br />
<div>
<br /></div>
<div>
But when looking at the output, working with this is not very intuitive, and it hides my fuzzy matches and the detail output I want.</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">PS C:\Users\Brian> $compared</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">InputObject SideIndicator</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">----------- -------------</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit ==</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Tweety ==</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">bird ==</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Eddie =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">investigator =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Roger =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Jessica =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Judge =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">doom =></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Elmer <=</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">hunter <=</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Bugs <=</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Sylvester <=</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">feline <=</span></div>
</div>
<div>
<br /></div>
<div>
I actually ended up going back to my original idea:</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">$likes = @()</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"><br /></span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">foreach ($element1 in $array1){</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> foreach ($element2 in $array2){</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> if ($element1 -like $element2){</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> $likes += $element1;</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> $likes += $element2;</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> }</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;"> }</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">}</span></div>
</div>
<div>
<br /></div>
<div>
While not highly efficient, and I am sure not wonderful with large data sets, it does give me the results that I wanted for further analysis:</div>
<div>
<br /></div>
<div>
<div>
<span style="font-family: Courier New, Courier, monospace;">PS C:\Users\Brian> $likes</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">rabbit</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">Tweety</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">tweety</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">bird</span></div>
<div>
<span style="font-family: Courier New, Courier, monospace;">bird</span></div>
</div>
<div>
<br /></div>
<div>
It is easy to see from this output where I might want to go next; counts, deeper analysis, trends, etc. It all depends on the details in the records used.</div>
<div>
<br /></div>
<div>
If you have different ideas that meet my requirements, please share.</div>
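Since different ideas were requested, here is one in Python rather than PowerShell, so take it as a sketch, not a drop-in. It keeps the same requirements - case-insensitive matching, with both matched spellings kept for later analysis - but avoids the nested O(n*m) loop by grouping one array first:

```python
from collections import defaultdict

def find_likes(array1, array2):
    """Return every pair of case-insensitively equal items, keeping
    both original spellings (equivalents, not just equals)."""
    by_key = defaultdict(list)
    for item in array2:
        by_key[item.lower()].append(item)
    likes = []
    for item in array1:
        for match in by_key.get(item.lower(), []):
            likes.extend([item, match])
    return likes

array1 = ["Elmer", "hunter", "Bugs", "rabbit", "Tweety", "bird",
          "Sylvester", "feline"]
array2 = ["Eddie", "investigator", "Roger", "rabbit", "Jessica", "rabbit",
          "Judge", "doom", "tweety", "bird"]
print(find_likes(array1, array2))
```

This prints the same eight items as the PowerShell version above. Note that PowerShell's -like also understands wildcards; if that part matters, swap the lowercase-key lookup for fnmatch.fnmatch over the second array (which reintroduces the nested loop).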
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com4tag:blogger.com,1999:blog-6230024559279811901.post-10609240111687257242017-11-07T15:55:00.001-08:002017-11-07T15:55:26.588-08:00Service Principals and Azure ADThere are a few scenarios with Azure AD that folks commonly run into.<br />
<br />
I have not blogged about the relationships of an Azure Tenant and Azure AD. So let me briefly mention the relationship between these two.<br />
(I will be repeating this all in future posts, as this relationship is important to grok)<br />
<br />
An Azure Tenant is the Account that you or your company has in Azure. It is divided into Subscriptions. <br />
Each Subscription is where you 'consume' in Azure; and it also serves as an isolation boundary (as in, resources in different Subscriptions can only talk to each other through public entry points - they cannot directly touch each other).<br />
Below that you have Resource Groups, which are management containers (not isolation).<br />
And then you have the actual resources that you get billed for consuming.<br />
<br />
All of this together is an Azure Tenant - or phrased another way, you are a Tenant of Azure and this is your playground and thus billing entity.<br />
I will keep using the phrase Azure Tenant in this way.<br />
(I run across MSFT folks that use the word Subscription when referring to the Tenant, and it is just plain incorrect and thus confusing as to the ramifications)<br />
<br />
Azure AD is an entirely separate thing. It is this huge multi-tenant cloud based identity provider, with a number of cool features and touch points.<br />
An Azure Tenant must have an associated Azure AD, but an Azure AD has no dependency on an Azure Tenant (or an Office365 tenant - which is yet another entity).<br />
<br />
A single company can have multiple Azure ADs (which is highly likely), and they could also have multiple Azure Tenants (which is not very likely, but possible).<br />
<br />
A single Azure Tenant can only be associated with one Azure AD. Nuance here; the tenant has one Azure AD, but people from other Azure ADs can be granted access. But the invited accounts are foreign accounts.<br />
<div>
<br /></div>
<div>
I bring all of this up since it is this multi-pronged association that usually gets folks into a pickle.</div>
<div>
<br /></div>
<div>
The "Service Principal" is a term that has been within the IT world for a long time. It describes the user account that a particular application or service runs under. If that application needs to access resources on the network, the Service Principal user account for that particular service is used.</div>
<div>
<br /></div>
<div>
This has worked well for decades in the enterprise with Active Directory. And now we need to take this to the cloud. And it works a bit differently with Azure Active Directory.</div>
<div>
<br /></div>
<div>
Within Azure Active Directory the service principal takes the shape of an "App Registration" (registering the app is what creates the service principal). And just like with the enterprise, you would have a unique app registration for each application.</div>
<div>
This is different from a user account that you simply grant access to Azure resources. While it really appears to be the same thing, it actually isn't.</div>
<div>
<br /></div>
<div>
Now, as we put things together a number of questions arise. Which Azure AD should my app registration reside in? Is there any limitation on the user account? What resources does the app registration need to access?</div>
<div>
<br /></div>
<div>
Most likely you are using the app registration to interact with your Azure tenant resources. For example, the app registration is used to provision new machines, or to power machines on and off. Essentially to perform lifecycle events on your Azure resources.</div>
<div>
<br /></div>
<div>
This means that the app registration must be "native" to the Azure Tenant's Azure AD - regardless of which Azure AD your user accounts reside in. You cannot simply use a random user account.</div>
<div>
<br /></div>
<div>
If you manually create your app registration, you need to be a user administrator in your Azure AD. And then you can grant the permissions to the Subscription or Resource Groups and you are golden.</div>
<div>
<br /></div>
<div>
<a href="https://itproctology.blogspot.com/2017/04/manually-creating-service-principal-for.html">https://itproctology.blogspot.com/2017/04/manually-creating-service-principal-for.html</a></div>
<div>
<br /></div>
<div>
If you are using some other app to create the app registration programmatically, things immediately get unique. All of a sudden your user account matters.</div>
<div>
And to programmatically create an app registration, your user account needs to be native to the Azure AD.</div>
<div>
This is a special kind of native. The Azure AD account that you use must be an "onMicrosoft" account within that Azure AD. You cannot use a user account that is synchronized with your on-premises Active Directory. It does not even matter if your user is a domain admin.</div>
<div>
<br /></div>
<div>
Each Azure AD begins life as an "onMicrosoft" entity. If you look at the properties of the Azure AD, you will see the information about its original creation, such as brianeh.onmicrosoft.com. And this remains even if you add a vanity domain, such as ITProctology.com to the Azure AD.</div>
<div>
<br /></div>
<div>
I mention all of this because you might be in a position where you need to create a special Azure AD account, so that your app can create its own app registration (Citrix Cloud does this, RDMI does this).</div>
<div>
<br /></div>
<div>
In your Azure AD, create a new cloud user using name@domain.onmicrosoft.com. This will create a cloud user that is native to the Azure AD only, and this user will have the full permissions to the API that it needs to create an app registration.</div>
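As a trivial sketch of that 'native' rule (my own illustration - Azure does not ship this check), the account's UPN has to be on the directory's original *.onmicrosoft.com domain, which you could sanity-check before handing credentials to an app. The example UPNs are hypothetical:

```python
def is_onmicrosoft_upn(upn):
    # Cloud-native accounts use the directory's original
    # *.onmicrosoft.com domain, not a synced or vanity domain.
    domain = upn.lower().rsplit("@", 1)[-1]
    return domain.endswith(".onmicrosoft.com")

print(is_onmicrosoft_upn("svc@brianeh.onmicrosoft.com"))   # True
print(is_onmicrosoft_upn("your.admin@itproctology.com"))   # False
```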
<div>
<br /></div>
<div>
In your app, you then use the credentials of this new user account so that it can access the API and self-create its app registration.</div>
<div>
<br /></div>
<div>
A lot of explanation just to get you to the answer, but the why is often as important as the what.</div>
<div>
<br /></div>
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-30732367428824791032017-10-09T06:00:00.000-07:002017-11-07T15:09:59.107-08:00The gotchas of Azure AD Domain Services in ARMNot too awful long ago Azure Active Directory Domain Services moved over to the ARM portal from the Azure Classic portal.<br />
<br />
Yea! the world said. And there was much rejoicing.<br />
<br />
Now, the real world impacts of this.<br />
<br />
There are a few scenarios with Azure AD that folks commonly run into.<br />
<br />
I have not blogged about the relationships of an Azure Tenant and Azure AD. So let me briefly mention the relationship between these two.<br />
(I will be repeating this all in future posts, as this relationship is important to grok)<br />
<br />
An Azure Tenant is the Account that you or your company has in Azure. It is divided into Subscriptions. <br />
Each Subscription is where you 'consume' in Azure; and it also serves as an isolation boundary (as in, resources in different Subscriptions can only talk to each other through public entry points - they cannot directly touch each other).<br />
Below that you have Resource Groups, which are management containers (not isolation).<br />
And then you have the actual resources that you get billed for consuming.<br />
<br />
All of this together is an Azure Tenant - or phrased another way, you are a Tenant of Azure and this is your playground and thus billing entity.<br />
I will keep using the phrase Azure Tenant in this way.<br />
(I run across MSFT folks that use the word Subscription when referring to the Tenant, and it is just plain incorrect and thus confusing as to the ramifications)<br />
<br />
Azure AD is an entirely separate thing. It is this huge multi-tenant cloud based identity provider, with a number of cool features and touch points.<br />
An Azure Tenant must have an associated Azure AD, but an Azure AD has no dependency on an Azure Tenant (or an Office365 tenant - which is yet another entity).<br />
<br />
A single company can have multiple Azure ADs (which is highly likely), and they could also have multiple Azure Tenants (which is not very likely, but possible).<br />
<br />
A single Azure Tenant can only be associated with one Azure AD. Nuance here; the tenant has one Azure AD, but people from other Azure ADs can be granted access. But the invited accounts are foreign accounts.<br />
<br />
Now, some background on the processes that get folks into the strange places they end up in.<br />
<br />
When an Azure Tenant is created an Azure AD is created for it.<br />
So you end up with some Azure AD such as you@tenantName.onmicrosoft.com<br />
<br />
This is fine. It gets you up and running, and then you add your admins, which might be your.admin@yourcompany.com, and they get invited. Everything in the Azure Portal works. Now, let's get into the cases that won't work in this scenario.<br />
<br />
This actually puts you in a very common scenario, the scenario where the Azure AD associated with your Azure Tenant is not the same Azure AD where your corporate user accounts reside.<br />
<br />
Now, if you only want to use Azure AD RBAC to add your IT folks to the Azure Tenant for administrative purposes, this is fine. And thus, a more common scenario than most folks want to realize.<br />
<br />
Now, let's get to Azure AD Domain Services in ARM. There is a security boundary here: the Azure Tenant. Therefore, when you enable AAD DS in ARM you are restricted to the Azure AD that is associated with the Azure Tenant.<br />
<br />
Oh, your Azure Tenant Azure AD is not the one with your users? Oh my! How do we resolve this?<br />
(trust me, the sarcasm is real here, I can't tell you how many times I have spoken to folks about this and it takes a while for all the dots to connect before they realize the dilemma).<br />
<br />
There are three ways to resolve this:<br />
<ol>
<li>'re-parent' the Azure Tenant. What does this mean? It means that you make some other Azure AD the primary Azure AD for your Azure Tenant. There is an option in the Azure Portal "Move to another Directory". The impacts: If you had any RBAC set up, you will break it, and therefore need to set it up all over again.</li>
<li>Use a vNet. When logged on to the Azure AD where the users are, create a new Azure Tenant and subscription. Turn on AAD DS there. Then set up a gateway between the vNet in this subscription where your AAD DS and user accounts are, and a vNet in a subscription of the Azure Tenant where your workloads are that require the domain services provided by AAD DS.</li>
<li>Don't use AAD DS. Stand up a Windows Server Domain Service VM (more than one for a proper deployment) and use Azure AD Connect to sync the users with the AD domain.</li>
</ol>
<div>
In the end, this is about your user accounts and the reason why you wanted AAD DS in the first place (you need NTLM or Kerberos for some reason).</div>
<div>
Yes, AAD DS is convenient, but the security model that forced it into this particular assumption is not always in line with reality.</div>
<div>
<br /></div>
BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-38377670967476907142017-10-06T17:16:00.001-07:002017-10-06T17:46:56.747-07:00Day two as a free agent - looking sidewaysIf you missed the tale, I was RIF'd and I thought I would spend some time blogging about the experience and whatever I decide to do next.<br />
<br />
All of this went down on Wednesday, October 4, 2017 and pretty much wrapped up by noon. Out of the building. <br />
My user account was gone by 2pm. (The things a calendar invite can show you when you included your personal email account as an attendee - and the other attendee becomes an object that can't be resolved.)<br />
<br />
Talking last night, the wife was lamenting that she could not ping me on Skype any longer. I mentioned that I do still have a Skype for Business account thanks to the MVP program. And I proudly stated, the domain is blocked by China (ITProctology.com). She simply looked at me quizzically and said she 'will think about it'<br />
<br />
The day started like any other day. I got up at the 'usual' time, cleaned up, scooped the cat litter, got the kid out of bed.<br />
<br />
Kissed the lovely wife as she went off to work, got the kid to the bus stop, checked in to the morning HAM radio net from the car. Then drove all the way... back home.<br />
<br />
7:30am<br />
Well, now what? I have had a list of all kinds of things I have needed to get done.<br />
<br />
Cold morning, checked the air in the tires, a bit low. Grabbed the air compressor and resolved that.<br />
<br />
Kind of chilly in the workshop, fired up the stove.<br />
Looked at the wood scraps and thought of the cat tree the wife wants me to build. Not feeling it.<br />
<br />
7:45am<br />
Watched an MVP PGI from yesterday that I missed. <br />
Got distracted by:<br />
dirty dishes, cats fighting, HAM hobby antenna research, smelly garbage can, full recycling can, making list of items for another project from the hardware store, checking email (each time a notification popped up), checking LinkedIn, checking Twitter, checking Facebook.<br />
I listened to the recording and viewed the visuals for the part I really wanted to see.<br />
<br />
8:45am<br />
Could really use a latte, warm, frothy, 15 minute drive - um, no.<br />
Eventually feeling parched. Looking for something cold and carbonated. Hmm, no cold beverage fridge.<br />
Tea it is I guess.<br />
<br />
9:30am<br />
Made notes of other follow up items I needed to do with HR.<br />
Returned call to HR from yesterday (they called while I was in the middle of the only thing I had scheduled all day) - ring, ring, ring, disconnect. <br />
I think, that was curious - I will try again later, she may have been on the phone with someone.<br />
<br />
9:45 am<br />
Phone rings - it's Cabo (Mexico). They want to know when I can come visit. Um, no...<br />
<br />
10:00am<br />
Start typing this<br />
<br />
10:13am<br />
All caught up<br />
Checked email, read an article the wife sent, investigated LinkedIn Premium, looked around thinking "what next"<br />
<br />
10:30am<br />
Yea! spam to delete<br />
<br />
And a little after that, my entire attempt at writing a humorous blog post was ruined.<br />
(the following falls into a category that I won't name. The comments are not disparagement to my former employer.)<br />
<br />
I finally got hold of my HR representative, all was fine, my questions were answered.<br />
Then I was asked a few questions (that I had already answered).<br />
They wanted to know the accounts that I was using to access particular resources so they could be deactivated. No problem. I gave the account names.<br />
Then I was asked for the passwords to those accounts. Um. No. Just no.<br />
My GitHub account? They wanted the password to my GitHub account. No. I never had a 'corporate' GitHub account. They can just remove my account from the 'company' in GitHub.<br />
And besides, I have access to other repositories that have nothing to do with my former employer. How can I trust the people that I am giving access to _my_ account?<br />
<b><br /></b>
Right there, any groove I had at writing humor was ruined.<br />
<br />
So, I simply stepped away from everything for the remainder of the day. And focused on other projects around the house.<br />
Now, I still can't get this incident out of my head. It is 5pm and I have to finish this post.<br />
<br />
It is Friday. Monday will bring a new adventure. And a new post.<br />
That one will actually be technical, and very useful to many folks.<br />
<br />
<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-23541476129780804132017-10-05T08:53:00.001-07:002017-10-05T08:53:31.028-07:00Day one as a free agent - looking backtl;dr<br />
This is my therapy for working through the emotions of being RIF'd; this is not any commentary on my previous employer.<br />
As I open up about my experience, I hope to be helpful to others in at least letting you know you are not alone in your experience.<br />
This is me, pretty raw.<br />
<br />
Thursday, October 5 2017 6am<br />
<br />
I thought I was doing pretty well this morning.<br />
Then I saw a ping from a long time co-worker through LinkedIn. That was okay.<br />
It was when my phone reminded me that it was time to go to work... That really stirred up the emotions.<br />
<br />
It suddenly dawned on me that I have not been out of work for 30 years, 20 of those years in the IT industry. The changes I have seen and been part of in some way. It is crazy.<br />
But, it is this that makes the emotion - the abrupt loss of comrades. Folks I have worked on projects with, suffered with, celebrated with, tackled big ideas and problems with.<br />
The forced end of time in the office really touches this emotional well. That is where the feelings come from; very visceral and powerful.<br />
<br />
I have always been a person that was broad across a number of technologies. It gave me a valuable wide angle lens; the systems view of IT. The dependencies, connections, combinations and touch points. How this impacted that and so on.<br />
I have worked with a number of younger folks that lack this view - or, approached another way, lack the experience to have this view.<br />
<br />
I have long had two statements for every manager I have worked for:<br />
<ol>
<li>Keep me relevant</li>
<li>My job is to make you look good. Your job is to be my shit screen.</li>
</ol>
<div>
Keep me relevant - that has always been important. It is my way of expressing that I want to grow and I want to be involved in the company growing, in one simple statement.</div>
<div>
<br /></div>
<div>
My job is to make you look good - that is one that some folks have had a hard time with when I mention it. It is me appealing to something that my manager needs, he / she needs successful people and a strong team. That makes them look good, and keeps them relevant and valuable.</div>
<div>
<br /></div>
<div>
It is all a synergy of feedback loops. And quite honestly, these simple statements of relationship I think have been very powerful in my past success. Doors have been opened for me, and I have been allowed to organically take and make opportunities as a result.</div>
<br />
I could not be more grateful to my last manager (who ended up being my neighbor (that was strange for a while)). He saw something in me and harnessed it, supported it, opened doors, and allowed me to just go. It was great.<br />
I did not fall into the 'four years in and I am bored' trap. The work stayed interesting and challenging. And that is so incredibly important.<br />
<br />
I also found a mentor for a couple years in there. Not with my former employer though. He helped me realize many things and envision others. That is a relationship that I need to renew, without the encumbrance of the employer relationship.<br />
<br />
But in writing this one thing has occurred to me.<br />
While I left behind lots of valuable works, great ideas, and intellectual property - they can't keep what is in my head. I still have ideas, I still have knowledge, I still have worth and value. All of those experiences - those belong to me and not to my former employer.<br />
That is my worth as I look back to figure out how to look forward.<br />
<br />
I have carried with me a couple office artifacts for many years now. One a cover from an Internet magazine long gone (not an online magazine, a magazine about the business of the Internet), the second a Calvin and Hobbes cartoon.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-SX5FIyeTiG0/WdZURvu4iEI/AAAAAAAACqg/6Ta4B3E0B9INBBUiF5BhZGm8HzDlz6b6gCLcBGAs/s1600/EPSON004.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1237" height="320" src="https://4.bp.blogspot.com/-SX5FIyeTiG0/WdZURvu4iEI/AAAAAAAACqg/6Ta4B3E0B9INBBUiF5BhZGm8HzDlz6b6gCLcBGAs/s320/EPSON004.JPG" width="247" /></a></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-e42JAqiHZwA/WdZVLKxv1qI/AAAAAAAACqo/dHHb6VA_SOE8yS2GETljgK0q7KqS5faogCLcBGAs/s1600/Calvin%2B_amp_%2BHobbes%2B-%2BI_m%2BSignificant.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="216" data-original-width="700" height="98" src="https://4.bp.blogspot.com/-e42JAqiHZwA/WdZVLKxv1qI/AAAAAAAACqo/dHHb6VA_SOE8yS2GETljgK0q7KqS5faogCLcBGAs/s320/Calvin%2B_amp_%2BHobbes%2B-%2BI_m%2BSignificant.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">http://www.gocomics.com/calvinandhobbes/2013/10/17</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Right now I am listening to the <a href="https://en.wikipedia.org/wiki/Passacaglia_and_Fugue_in_C_minor,_BWV_582"><strong><span style="color: #001ba0;">Passacaglia and Fugue in C minor</span></strong></a>, which should only be played on a pipe organ; the best version I have ever heard was recorded by Virgil Fox at the Fillmore East. C minor is the umami of musical keys. It is earthy, rich, flavorful.<br />
I have listened to this piece for years, generally as loud as my speakers can tolerate without distortion. It is 15 minutes that always helps me clear my head and release emotional tension.<br />
<br />
Today, I am posting early. I have some resources to check out, and I am going to spend the afternoon with my tattoo artist, finishing the work he started a few months ago.<br />
Nothing more relaxing than some time under the needle.BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-69318454651704904582017-10-05T07:11:00.000-07:002017-10-05T07:11:03.100-07:00Being laid off suckstl;dr<br />
There was a substantial RIF yesterday<br />
Being RIF'ed sucks<br />
Yea, I am okay. And this blogging is therapeutic.<br />
No, this is not sour grapes, and I am not disparaging my previous employer in any way. Please don't take any comments in that way, that is not the intent.<br />
That is your warning, read on if you like.<br />
<br />
Wednesday, October 4, 2017 6am<br />
<br />
My mind has been buzzing in a thousand different directions lately. My team and I have been working under rumors of 'cost reductions', and our work site has appeared to be one of the targets.<br />
<br />
Quite frankly, the entire company has been on edge for two weeks now. Internal email volume has slowed to a trickle, and chatter on Slack is down to only the really critical questions or requests for help. It is really obvious that most everyone at this point knows that something is up.<br />
<br />
I am beginning this story in the morning of the 'big day'. My brain has been busy half the night working on this, and it just needs to get out of my head.<br />
<br />
I wonder how many on my team are going to wear a red shirt into work today, as I have...<br />
<br />
Needless to say, my emotions are mixed at this point. The one upside of rumors is that once the threads start to come together, it helps you move through the stages of grief. And the meeting invitation that many of us received I hope will be a relief, since the anticipation can stop and reality will be known.<br />
<br />
I can say this: no matter the face you put on it, it really is emotional. It is really easy to feel depressed and to feel unvalued. <br />
I honestly didn't think that writing this would be as difficult as it is seeming to be. But I am at the point of letting go, of what I am not clear. And I think that is the struggle.<br />
<br />
I have worked at my current company and office for 10+ years. I have made friends, worked with some incredibly smart people, worked on some incredibly cool and innovative projects. I have nothing to regret for my work, or the experience I have gained. <br />
<br />
So many things that I have been involved in, that I could not share, could not talk about to anyone other than my team.<br />
Until earlier this year I was in a research team. We were always forward looking, strategic in our projects, and very early in our efforts.<br />
Changes were made and that group was dissolved and we became a more traditional development team. Definitely different work.<br />
<br />
For me, I was finally able to work on one of my passions: customer success. That was great. What was not great was the internal struggle due to the way business processes, internal feedback, and internal silos reinforced thinking. This frustration of my position I will not miss.<br />
And I have to say, 'speaking' that frustration is relieving. But I don't want this to be about sour grapes. It really isn't.<br />
<br />
I wanted this to be about moving through and moving on. This is the first time I have been on this side of a layoff.<br />
I have been one of the lucky ones to remain behind multiple times, both in a leadership position and as an independent contributor. That is not simple, that is emotional and disruptive as well.<br />
<br />
I have to look at this as the kick in the butt to remake myself (again).<br />
This would not be the first professional shift in my life. I have remade myself many times, and risen to the occasion each time. Then it is always the question of "what's next?"<br />
<br />
This time it is different, the first question in my head is "now what?" and I have to consciously place that aside and ask "what's next?"<br />
That is what I need to focus on and simply think about what excites me, what challenges me, what can highly engage me for the next 10 years.<br />
<br />
Now, I am going to take a pause from writing, head into work, and do what a team does as we wait for the meeting that outlines our fate. Nothing anxious about that at all.... :-S<br />
<br />
Wednesday, October 4, 2017 12pm<br />
<br />
The message has been delivered.<br />
I have had a chance to talk to HR to clarify some questions about the severance.<br />
There is a strange feeling of relief. I am simply pretty numb to the whole thing.<br />
Strange.<br />
Standing around talking with my co-workers that have been tasked with escorting us out. What a sucky task. Being a survivor of these things in the past, not a great mental place to put the remaining folks in.<br />
<br />
And that's it.<br />
Move on, go away. Bye. <br />
That is it.<br />
That is the feeling.<br />
Have I said it is kind of surreal?<br />
<br />
A few of us retired for the afternoon to a local business to have lunch, a couple beers, and play Dungeons and Dragons for a few hours.<br />
That was a good distraction.<br />
<br />
That is all for now, more tomorrow. As I am sure there will be more tomorrow. And as I mentioned, this is therapeutic.<br />
<br />
<br />
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com3tag:blogger.com,1999:blog-6230024559279811901.post-26677295419992144682017-07-31T11:00:00.000-07:002017-07-31T11:00:10.245-07:00Isolating Citrix Cloud in your Azure TenantI have recently been studying issues that customers are having when trying to stand up a proof-of-concept environment for Citrix Cloud in Azure.<br />
<br />
Most of these customers are standing up the full XenApp and XenDesktop Service. However, our Citrix Cloud Services all have the same basic needs for any customer:<br />
<ol>
<li>Azure Subscription (for workers and infrastructure)</li>
<li>App Registration (this is an Azure Tenant service account for our cloud based control plane to perform worker lifecycle events within a subscription)</li>
<li>Virtual Network (the machines need IP addresses)</li>
<li>Active Directory (there is a much larger discussion here, but either a read / write Domain Controller VM or the Azure Active Directory Domain Service will work)</li>
<li>The DNS setting for the Virtual Network must be your Active Directory </li>
<li>Cloud Connector machines (the connection between the machines in the subscription and the control plane)</li>
<li>Some type of 'golden' image that is provisioned into the worker machines where your end users get their work done.</li>
</ol>
<h3>
Growing this conversation from the bottom up;</h3>
<div>
Each customer of Azure has at least one Azure Tenant.</div>
<div>
This is your account in Azure. It is the highest level of connection between Azure and you the customer.</div>
<div>
Within your Azure Tenant you have Subscriptions.</div>
<div>
Subscriptions are billing boundaries and service boundaries (services in different subscriptions cannot 'talk' to each other without extra work, as if they are in different buildings).</div>
<div>
<br /></div>
<h3>
Isolating Citrix Cloud in your tenant;</h3>
<div>
<br /></div>
<div>
Can you isolate Citrix Cloud to its own Subscription in your Azure Tenant? Yes! And that is actually the topology that I am going to describe here. How to isolate Citrix Cloud from your corporate infrastructure.</div>
<div>
<br /></div>
<div>
Common project slow-down points that I have heard are modifications to existing virtual networks and protecting Active Directory.</div>
<div>
<br /></div>
<div>
Focusing on the Virtual Network issue first;</div>
<div>
<br /></div>
<div>
You CAN create a virtual network dedicated to your Citrix Cloud deployment. </div>
<div>
The important things to remember are: </div>
<ul>
<li><u>You need a route to your Active Directory</u></li>
<li><u>You must update the DNS settings of the Citrix Cloud virtual network to be the AD</u></li>
</ul>
<div>
The DNS setting is the most common place where customers trip up. It must be set explicitly; the Azure default leaves the machines unable to resolve the Active Directory.</div>
<div>
<br /></div>
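If you want to sanity-check a virtual network before wiring Citrix Cloud to it, that DNS rule reduces to a single test. Here is a minimal Python sketch; it assumes the ARM resource shape for a Microsoft.Network/virtualNetworks resource (`properties.dhcpOptions.dnsServers`), and the helper name is mine, not part of any Azure SDK:

```python
def vnet_dns_problems(vnet):
    """Flag a virtual network whose machines cannot resolve Active Directory.

    `vnet` is the ARM resource JSON for a Microsoft.Network/virtualNetworks
    resource. An empty dhcpOptions.dnsServers list means the Azure-provided
    default DNS is in effect, which knows nothing about your AD domain.
    """
    problems = []
    dns_servers = (vnet.get("properties", {})
                       .get("dhcpOptions", {})
                       .get("dnsServers", []))
    if not dns_servers:
        problems.append("No custom DNS servers set; point the network at "
                        "your Active Directory / AADDS IP addresses.")
    return problems


# An out-of-the-box vnet fails the check; one with AD DNS set passes.
assert vnet_dns_problems({"properties": {}})
assert vnet_dns_problems(
    {"properties": {"dhcpOptions": {"dnsServers": ["10.0.0.4"]}}}) == []
```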
<h3>
The three models as pictures;</h3>
Pictures often tell a story faster and more easily, so I wanted to provide some to get you started thinking about your individual topology as well.<br />
<br />
<div>
If your Active Directory is on the same Virtual Network you are most likely golden.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-dIhgwZ7Mmts/WXocnqxryoI/AAAAAAAACo4/gQnA1g9kPzoTpenufrWwrMX66Hw8tbKVgCLcBGAs/s1600/vNetOne.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-dIhgwZ7Mmts/WXocnqxryoI/AAAAAAAACo4/gQnA1g9kPzoTpenufrWwrMX66Hw8tbKVgCLcBGAs/s320/vNetOne.jpg" width="320" /></a></div>
<div>
<br /></div>
<div>
If your Active Directory machine(s) is on a different Virtual Network in the same subscription, you can use peering between the two virtual networks.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-euPs4_uIAjk/WXocnrAvnJI/AAAAAAAACow/MRJ8lWksMM0zFALfHxvCIaHecG_ujSjpwCEwYBhgL/s1600/vNetPeer.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://4.bp.blogspot.com/-euPs4_uIAjk/WXocnrAvnJI/AAAAAAAACow/MRJ8lWksMM0zFALfHxvCIaHecG_ujSjpwCEwYBhgL/s320/vNetPeer.jpg" width="320" /></a></div>
<div>
<br /></div>
<div>
If your Active Directory machine(s) is on a different Virtual Network in a different subscription, you must use a gateway between the two virtual networks.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-H8TcZ_rU9SY/WXocng0kN2I/AAAAAAAACo0/obOk5ugJxQUhhcmL1YlpxJK1DvEU__XYwCEwYBhgL/s1600/vNetGateway.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-H8TcZ_rU9SY/WXocng0kN2I/AAAAAAAACo0/obOk5ugJxQUhhcmL1YlpxJK1DvEU__XYwCEwYBhgL/s320/vNetGateway.jpg" width="320" /></a></div>
<div>
<br /></div>
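The three pictures boil down to one decision. A tiny Python sketch of that decision table (the function is illustrative only; it just encodes the three models above):

```python
def vnet_connectivity(same_vnet, same_subscription):
    """How the Citrix Cloud virtual network reaches Active Directory."""
    if same_vnet:
        # AD is already on the Citrix Cloud virtual network
        return "nothing extra - same virtual network"
    if same_subscription:
        # Different vnet, same subscription: peer the two vnets
        return "virtual network peering"
    # Different vnet in a different subscription: a gateway is required
    return "virtual network gateway"
```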
BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-22146220376134894132017-07-28T11:00:00.000-07:002017-07-28T11:00:44.824-07:00Virtual Network permissions for Citrix CloudIn a previous post I covered how to manually create a Service Principal (App registration) for XenDesktop Essentials. (this also applies to the XenApp and XenDesktop Service)<br />
<br />
If you recall, this is the identity that Citrix Cloud will be using when it performs machine lifecycle actions in your Azure Subscription.<br />
<br />
Things with permissions can get a bit strange in Azure pretty quickly. One such area is Virtual Networks.<br />
<br />
First of all, a Virtual Network exists within a Subscription. It can belong to any Resource Group for management, but can be used by any machines or services within the subscription.<br />
<br />
Now, in the world of assumptions, this is all fine and easy if you grant the Service Principal account the Contributor role on the subscription AND the resource group that your virtual network belongs to is under that same subscription. You can take advantage of the inheritance.<br />
<br />
This is not always the case. In fact, it might not be the case for you at all. You might be putting very tight controls on that Virtual Network to ensure it never gets messed up.<br />
<br />
The minimum permission that the Service Principal needs on your Virtual Network is the VM Contributor role. This level of access is necessary for the automated provisioning and lifecycle of desktop or session workers.<br />
<br />
If you have a need to grant access to your Virtual Network or want to constrain access to your virtual network, here is how.<br />
<br />
Remove the inheritance at the Virtual Network Resource Group from the subscription if it is enabled.<br />
Explicitly grant the App Registration the VM Contributor role on the Virtual Network where worker machines will be attached when provisioned.<br />
<br />
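Those two steps map onto a single role-assignment call against the Azure REST API. Here is a sketch of the request it implies; the endpoint and body shape are the standard ARM `roleAssignments` ones, but you still have to look up the real Virtual Machine Contributor role definition ID in your own tenant (the value passed below is a placeholder):

```python
import uuid


def role_assignment_request(subscription_id, resource_group, vnet_name,
                            role_definition_id, principal_id):
    """Build the URL and body that grant a role scoped to one virtual network.

    Scoping the assignment to the vnet itself, instead of relying on
    inheritance from the subscription, is what keeps the access tight.
    """
    scope = (f"/subscriptions/{subscription_id}"
             f"/resourceGroups/{resource_group}"
             f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}")
    url = (f"https://management.azure.com{scope}"
           f"/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}"
           f"?api-version=2015-07-01")
    body = {"properties": {
        "roleDefinitionId": (f"/subscriptions/{subscription_id}"
                             f"/providers/Microsoft.Authorization"
                             f"/roleDefinitions/{role_definition_id}"),
        "principalId": principal_id,  # object ID of the App Registration
    }}
    return url, body
```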
You can find more about the permissions in this article that I authored: <a href="https://support.citrix.com/article/CTX224110" target="_blank">Manually granting Citrix Cloud Access to your Azure Subscription</a>BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-82302048928044018742017-07-27T08:20:00.000-07:002017-07-27T08:20:14.911-07:00Azure Resource Manager Templates for Citrix Cloud workloadsAt Citrix we recognize that different customers need different tools to accomplish their goals. In the end, it is all about selecting the right tools for your environment and business processes to get you moving forward in an efficient way.<br />
<br />
It has been brought to our attention that getting started in Azure with Citrix Cloud is not necessarily as straightforward as it needs to be, especially when customers go it alone (without the aid of a sales engineer or an integrator).<br />
<br />
You will be seeing different tools, recommendations, updated documentation, and product enhancements to help get you (the customer) moving forward with your demonstration project, that Proof-of-Concept project, and moving into full production.<br />
<br />
One of those tools was recently mentioned on this blog: <a href="https://www.citrix.com/blogs/2017/07/27/citrix-cloud-xendesktop-resource-location-creation-arm-template">Citrix Cloud XenDesktop Resource Location ARM Template</a><br />
<br />
Used without modification, this Azure Resource Manager template is focused on getting you up and going with that very first demo environment.<br />
It provides everything from an Active Directory Domain to NetScaler VPX. And the glue in between to make it all work.<br />
<br />
Additionally, there are other <a href="https://github.com/citrix/CitrixCloud-ARMTemplates">Azure Resource Manager templates </a>that are componentized to support you in building out the infrastructure in your own way or integrating with your current Azure environment for any of the Citrix Cloud offerings.<br />
<br />
These are being built to bring success to your Proof-of-Concept and production deployments. You can find the PoC and production template repository here: <a href="https://github.com/citrix/CitrixCloud-ARMTemplates">https://github.com/citrix/CitrixCloud-ARMTemplates</a><br />
<br />
This is a community repository and we would love to see your additions and suggestions.<br />
<br />
I would also like to hear your stories and questions about using Azure to deploy your Citrix Cloud service, whether it be XenApp Essentials, XenDesktop Essentials, or XenApp and XenDesktop Service. <br />
<br />
Let's make it better together.<br />
<br />
<br />
<br />
<div>
Earlier this month two Citrix Essentials products hit the Azure Marketplace; </div>
<div>
XenApp Essentials and XenDesktop Essentials. <a href="https://www.citrix.com/blogs/2017/04/03/xendesktop-essentials-xenapp-essentials-now-available-in-azure-marketplace/">https://www.citrix.com/blogs/2017/04/03/xendesktop-essentials-xenapp-essentials-now-available-in-azure-marketplace/</a><br /><br />In this short period of time, there have been <i>customers who have purchased the services or are kicking the tires.</i></div>
<div>
While I didn't give a number, I can say that it has been a pretty exciting first two weeks, and the interest from customers has been great. Really great.<br /><br />Both Essentials offerings run on Azure (exclusively) and are managed through Citrix Cloud.<br /><br />Since these are new services, the documentation is constantly coming on-line. Here are some references that should get you over the initial hurdles of understanding how to implement it all.<br /><br />The newly updated XenDesktop Essentials guide: <a href="http://docs.citrix.com/en-us/citrix-cloud/xenapp-and-xendesktop-service/xendesktop-essentials.html">http://docs.citrix.com/en-us/citrix-cloud/xenapp-and-xendesktop-service/xendesktop-essentials.html</a><br /><br />If you were wondering if you could take advantage of Azure Active Directory Domain Service? Yes, you can: <a href="https://www.citrix.com/blogs/2017/04/11/xenapp-xendesktop-services-support-azure-ad-domain-services/">https://www.citrix.com/blogs/2017/04/11/xenapp-xendesktop-services-support-azure-ad-domain-services/</a><br /><br />If you are in a hybrid cloud scenario (user workers in Azure with a VPN back to the datacenter where they need Kerberos or Windows pass-through authentication) you will need to setup up an Active Directory replica server in Azure: <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller">https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller</a><br /><br />Are you setting up Windows Desktops in Azure? <br /> Be aware that you must implement Azure Active Directory. More details on the way.<br /><br />And take advantage of the Hybrid Use Benefit Windows 10 image in the Azure Gallery if you create a new golden desktop image.</div>
<div>
<br /></div>
BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-50408847916478336182017-04-07T13:00:00.000-07:002017-04-07T13:00:17.690-07:00Active Directory with XenDesktop Essentials in AzureXenDesktop Essentials and XenApp Essentials have hit the Azure Marketplace, and they are catching on.<br />
<br />
For those of you that remember Azure RemoteApp, XenApp Essentials is the replacement for that. And for those of you that want to give Windows Client desktops to your user base, there is XenDesktop Essentials.<br />
And for those that want it all, there is the XenApp and XenDesktop Service.<br />
<br />
Now, the reason for my post. Active Directory and Azure Active Directory.<br />
All of these solutions require that the provisioned machines be joined to a domain. This is where I see many folks getting confused among the various Active Directory options.<br />
<br />
In reality, there are only two models that will work today (at the date of this post). Let me describe them in terms of what you need to accomplish.<br />
<br />
In both models, you have the user side running in Azure. Whether that be XenApp Servers (Terminal Servers for you really old folks) or Desktops (Windows Client or Windows Server desktops).<br />
<br />
Your answer to this next question defines the path that you need to head down.<br />
<br />
<b><span style="font-size: large;">Do your Azure based user sessions need to access resources in some other cloud / datacenter?</span></b><br />
A different way to ask this - do you need a VPN between your users in Azure and whatever other resources they need to access in some other cloud / datacenter.<br />
<br />
<b>If your answer was <span style="font-size: large;">no</span></b>; <br />
Then I am calling you 'cloud born' or 'Azure based'.<br />
Knowing this you can use Azure AD plus Azure Active Directory Domain Service.<br />
<br />
AD Sync is built in, and most likely Azure AD is your source for users. But you need the additional service to support domain join, group policy, and those traditional things that Active Directory provides.<br />
<br />
I personally love the following guide for getting AADDS all up and running: <a href="https://social.technet.microsoft.com/wiki/contents/articles/35324.azure-active-directory-domain-services-for-beginners.aspx" target="_blank">Azure Active Directory Domain Services for Beginners</a><br />
<br />
The trick here is that you need to use FQDNs for domain joins and domain references. If you customized your Azure AD domain, use that. If you didn't, it is YourDomain.onmicrosoft.com.<br />
<br />
When you need to add Group Policy to lock things down; <a href="https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-admin-guide-administer-group-policy" target="_blank">https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-admin-guide-administer-group-policy</a><br />
<br />
<b>If your answer was <span style="font-size: large;">yes</span></b>;<br />
Then you are more of a 'traditional' enterprise that is in some hybrid deployment model.<br />
Knowing this you need to use Azure AD plus Active Directory.<br />
<br />
You will need to enable AD Sync, you will need to establish a replica domain controller in Azure, and you (probably) already have a VPN between your datacenter and Azure virtual network.<br />
<br />
The replica domain controller in Azure: <a href="https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller" target="_blank">https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller</a><br />
Active Directory Sync / Connect to Azure AD: <a href="https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect" target="_blank">https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect</a><br />
(It does not matter where you install / run that, just that you do).<br />
<br />
In both cases: don't forget to update the DNS settings of your Virtual Network with these new machine IP addresses.<br />
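That one question drives everything else, so it is worth writing down as code. A hedged Python sketch of the decision (the function name and component labels are mine; they are shorthand for the services discussed above):

```python
def ad_components(needs_vpn_to_other_resources):
    """Map the one key question to the pieces you need to stand up."""
    # Both models share these
    common = ["Azure AD", "custom DNS on the virtual network"]
    if needs_vpn_to_other_resources:
        # 'Traditional' hybrid enterprise
        return common + ["replica domain controller in Azure",
                         "AD Connect sync", "site-to-site VPN"]
    # 'Cloud born' / Azure-based deployment
    return common + ["Azure Active Directory Domain Services"]
```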
<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-85248987438961635262017-04-05T13:47:00.000-07:002017-11-07T15:43:21.261-08:00Manually creating a Service Principal for XenDesktop EssentialsI have been looking at the customer experience around XenDesktop Essentials lately, and I have helped a few customers with issues around defining their Service Principal accounts.<br />
<br />
Backing up a bit. What is this 'Service Principal' account and what is it used for?<br />
<br />
The Service Principal is the username / secret that is used by Citrix Cloud to talk to the Azure API and perform machine lifecycle actions in your Azure Subscription.<br />
<br />
You could call it a delegated user, or an application user, or simply an application account. <br />
The Service Principal is not a new concept in the enterprise world. In my background we always created very restricted user accounts for use by applications, granting only those permissions that were necessary for the application to perform its functions.<br />
<br />
I know there is guidance on using various PowerShell scripts to do this. But quite honestly, it is so few clicks in the Azure Portal, you might as well do it there. Far less hassle than installing the Azure cmdlets.<br />
<br />
Plus - by doing it this way, you can quickly identify if you have the permissions necessary, and get it fixed or pass the responsibility to the person that can do it.<br />
<br />
First, login to the Azure Account that 'Citrix' will be deploying workstations to. <br />
Next make sure that you have a subscription container for the 'Citrix stuff' and a Virtual Network for the workstations to use all ready to go.<br />
<br />
<br />
<ol><a href="https://3.bp.blogspot.com/-IluHuBYm2rE/WOVN6IZfcfI/AAAAAAAACjw/lEKtz6u1A2AafFXSPVX5qXB7s66Qq3PFQCLcB/s1600/appregistration1.PNG" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="142" src="https://3.bp.blogspot.com/-IluHuBYm2rE/WOVN6IZfcfI/AAAAAAAACjw/lEKtz6u1A2AafFXSPVX5qXB7s66Qq3PFQCLcB/s200/appregistration1.PNG" width="200" /></a></ol>
Create the App Registration / Service Principal<br />
<ol>
<li>Select the Azure Active Directory blade in the Azure Account</li>
<li>Select 'App registrations'</li>
<li>Select 'Add +'</li>
<li>Enter a name, leave the application type as web app / API, and enter a Sign-on URL such as 'https://localhost/xde'</li>
<li>Select Create</li>
</ol>
<div>
Grant it permission to interact with the Azure API for your account</div>
<ol>
<li>Once the registration is created, select it to view its settings</li>
<li>Select 'Required permissions'</li>
<li>Select 'Windows Azure Active Directory'</li>
<li>Select 'Sign in and read user profile' and</li>
<li>Select 'Read all users' basic profiles'</li>
<li>Select 'Save'</li>
<li>Select Add, Select an API, Select 'Windows Azure Service Management API', Select 'Select'</li>
<li>Select 'Access Azure Service Management as organization users'</li>
<li>Select 'Select'</li>
<li>Select 'Done'</li>
</ol>
<div>
Add a Key (the secret)</div>
<ol>
<li>In the Settings, Select 'Keys'</li>
<li>Enter a Key description, select a duration</li>
<li>Select 'Save'</li>
<li>Copy the Value of the key (this value is necessary when this Service Principal is used with Citrix Cloud - and there are warnings that you can never see this key again)</li>
</ol>
<div>
Grant the Service Principal access to the Subscription for 'Citrix stuff'</div>
<ol>
<li>Select the Billing Blade</li>
<li>Select the Subscription that you would like Citrix Cloud to be using</li>
<li>Select 'Access control'</li>
<li>Select '+ Add'</li>
<li>Under 'Role' select 'Contributor'</li>
<li>Under Select, type in the name of the App Registration you created (mine was 'xendesktop')</li>
<li>Select the Azure AD user</li>
<li>Select 'Save'</li>
</ol>
<div>
At this point in time, the Service Principal information can be handed off to your Citrix Administrator for establishing the Host connection to Azure in the Citrix Cloud portal. </div>
<div>
When Adding the Connection select the 'Use existing' option.</div>
<div>
<br /></div>
<div>
They will need;</div>
<div>
<ul>
<li>the Subscription UUID</li>
<li>the Active Directory ID</li>
<li>the Application ID</li>
<li>the Application secret (that value that I mentioned you had to copy and save)</li>
</ul>
</div>
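If you are curious what those four values are for: the Directory ID, Application ID, and secret feed the standard Azure AD client-credentials grant, and the Subscription UUID then scopes the management calls that the resulting token authorizes. A sketch of that token request (illustrative only; this is the generic Azure AD v1 token endpoint, not a Citrix API):

```python
def token_request(directory_id, application_id, application_secret):
    """The OAuth2 client-credentials request implied by the Service Principal."""
    url = f"https://login.microsoftonline.com/{directory_id}/oauth2/token"
    payload = {
        "grant_type": "client_credentials",
        "client_id": application_id,          # the Application ID
        "client_secret": application_secret,  # the key value you copied
        "resource": "https://management.azure.com/",
    }
    return url, payload
```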
<div>
If you return to the Azure Active Directory blade and select Properties, you will find the Directory ID.</div>
<div>
Then select App registrations and select the one you created to find the Application ID.</div>
<div>
The Subscription ID is back under the Billing blade.</div>
<div>
<br /></div>
<div>
<br /></div>
BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-19925321214641901042017-02-07T10:00:00.000-08:002017-02-07T10:00:32.890-08:00A reason to use state with OctobluI have been posting an 8-part series going through some advanced use of Octoblu.<br />
<br />
Part 1: <a href="http://itproctology.blogspot.com/2017/01/use-configuration-events-in-octoblu.html" target="_blank">Use configuration events in Octoblu</a><br />
Part 2: <a href="http://itproctology.blogspot.com/2017/01/creating-custom-devices-in-octoblu.html" target="_blank">Creating custom devices in Octoblu</a><br />
Part 3: <a href="http://itproctology.blogspot.com/2017/01/setting-state-of-octoblu-device-from.html" target="_blank">Setting the state of an Octoblu device from a flow</a><br />
Part 4: <a href="http://itproctology.blogspot.com/2017/01/listening-to-and-acting-on-device-state.html" target="_blank">Listening to and acting on device state change in Octoblu</a><br />
Part 5: <a href="http://itproctology.blogspot.com/2017/02/breaking-value-into-new-keys-with.html" target="_blank">Breaking values into new keys with a function node</a><br />
Part 6: <a href="http://itproctology.blogspot.com/2017/02/reformatting-nested-json-with-javascript.html" target="_blank">Reformatting nested JSON with JavaScript</a><br />
Part 7: <a href="http://itproctology.blogspot.com/2017/02/logical-data-nesting-with-your-octoblu.html" target="_blank">Logical data nesting with your Octoblu state device</a><br />
Part 8: <br />
<br />
Back at the beginning I introduced the concept of a state device.<br />
<br />
Now, if you aren't yet understanding why I might introduce a state device, consider this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-CGKh63tFW6I/WJTVS1kJzHI/AAAAAAAACjE/eORYWE_L1lUTOeODR-JiJBZ6qcA3b_aGgCLcB/s1600/SetGetKey.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="318" src="https://2.bp.blogspot.com/-CGKh63tFW6I/WJTVS1kJzHI/AAAAAAAACjE/eORYWE_L1lUTOeODR-JiJBZ6qcA3b_aGgCLcB/s320/SetGetKey.PNG" width="320" /></a></div>
<div align="left" class="separator" style="clear: both; text-align: center;">
<br /></div>
Have you ever found yourself using SetKey and GetKey within flows to persist data, even if only for a little while?<br />
<br />
Have you ever run into complex timing issues where you would love to break something into multiple flows and instead end up with one huge complex one?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-LIurw1rj4oo/WJTUfXdk67I/AAAAAAAACjA/WA6vWBd2pt4eUuPvSnUntEAhZvveyPInwCEw/s1600/HPECubeFlow.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="230" src="https://1.bp.blogspot.com/-LIurw1rj4oo/WJTUfXdk67I/AAAAAAAACjA/WA6vWBd2pt4eUuPvSnUntEAhZvveyPInwCEw/s320/HPECubeFlow.PNG" width="320" /></a></div>
<div>
<br /></div>
<div>
This is where the state device is an easy fit. Persist your data in a common object that you can reference between flows.</div>
<div>
<br /></div>
<div>
Then, instead of relying on some message chugging through the system, you act upon a change to your state device. So you could dev null some message, perform no update and exit your logic stream.</div>
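The "act upon a change" idea can be sketched in plain JavaScript: compare the value the state device last held with the incoming value, and only continue when something actually changed. This is a minimal sketch (the function and sample objects here are hypothetical, not an Octoblu API):

```javascript
// A minimal sketch of "act on change": compare the value we last saw in the
// state device with the incoming value, and only act when it actually
// changed - otherwise dev null the message and exit the logic stream.
function hasChanged(previousState, incoming, key) {
  return previousState[key] !== incoming[key];
}

var previousState = { opened: false, battery: 1 }; // what the state device holds
var incoming = { opened: true, battery: 1 };       // what the new message says

console.log(hasChanged(previousState, incoming, "opened"));  // true  - act on it
console.log(hasChanged(previousState, incoming, "battery")); // false - dev null
```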
<div>
<br /></div>
<div>
In the example I have been laying out I have two primary scenarios: </div>
<div>
<br /></div>
<div>
Scenario 1: there are multiple incoming data sources</div>
<div>
I have multiple devices that are all similar and they are feeding in data that I need to evaluate in a common way. Each flow can update my state device independently, and then I simply have one evaluation flow to determine if I am going to send out my alert.</div>
<div>
<br /></div>
<div>
Scenario 2: there are multiple data listener paths</div>
<div>
Just the opposite. I have one primary input data source, it is big and complex.</div>
<div>
Then I have multiple flows, each of which evaluates a specific type of data or specific properties.</div>
<div>
<br /></div>
<div>
Either way, it allows me to compartmentalize my flow logic and reduce / remove redundancy across the system.</div>
<div>
<br /></div>
<div>
So I end up with something like this:</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-meVGDIRExaM/WJTX6ov6uyI/AAAAAAAACjQ/nR9WkLhjzdk6vqicBkSCUdliB-XwPT4QACEw/s1600/WinkObserve.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="85" src="https://1.bp.blogspot.com/-meVGDIRExaM/WJTX6ov6uyI/AAAAAAAACjQ/nR9WkLhjzdk6vqicBkSCUdliB-XwPT4QACEw/s320/WinkObserve.PNG" width="320" /></a></div>
<div>
Combined with this:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-yy1u93ST57c/WJTX6hWYIgI/AAAAAAAACjU/UjGfx7uvbZYwhU0haykPeseSt1DT6s6lgCEw/s1600/BatteryAlert.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="158" src="https://2.bp.blogspot.com/-yy1u93ST57c/WJTX6hWYIgI/AAAAAAAACjU/UjGfx7uvbZYwhU0haykPeseSt1DT6s6lgCEw/s320/BatteryAlert.PNG" width="320" /></a></div>
<div>
To do what I was doing in the first screenshot.</div>
<div>
<br /></div>
<div>
The big upside for me is that I removed all of the hardcoded name filtering that I started with in order to persist the data.</div>
<div>
The flows are now able to be more dynamic and handle the same sets of data, whether it comes from my devices or someone else's.</div>
<div>
<br /></div>
<div>
<br /></div>
<br />
<div>
<br /></div>
<div>
<br /></div>
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0tag:blogger.com,1999:blog-6230024559279811901.post-31874051296688215452017-02-06T10:00:00.000-08:002017-02-06T10:00:00.144-08:00Referencing nested array values in JavaScript from my Octoblu state devicePart 1: <a href="http://itproctology.blogspot.com/2017/01/use-configuration-events-in-octoblu.html" target="_blank">Use configuration events in Octoblu</a><br />
Part 2: <a href="http://itproctology.blogspot.com/2017/01/creating-custom-devices-in-octoblu.html" target="_blank">Creating custom devices in Octoblu</a><br />
Part 3: <a href="http://itproctology.blogspot.com/2017/01/setting-state-of-octoblu-device-from.html" target="_blank">Setting the state of an Octoblu device from a flow</a><br />
Part 4: <a href="http://itproctology.blogspot.com/2017/01/listening-to-and-acting-on-device-state.html" target="_blank">Listening to and acting on device state change in Octoblu</a><br />
Part 5: <a href="http://itproctology.blogspot.com/2017/02/breaking-value-into-new-keys-with.html" target="_blank">Breaking values into new keys with a function node</a><br />
Part 6: <a href="http://itproctology.blogspot.com/2017/02/reformatting-nested-json-with-javascript.html" target="_blank">Reformatting nested JSON with JavaScript</a><br />
Part 7: <a href="http://itproctology.blogspot.com/2017/02/logical-data-nesting-with-your-octoblu.html" target="_blank">Logical data nesting with your Octoblu state device</a><br />
<br />
Okay, here is the big post that I have spent an entire week working up to.<br />
<br />
I have to admit, I don't write code every day and am self-taught in JavaScript (along with Python, PowerShell, and batch), so this took me a while to work through.<br />
<br />
From my last post, my incoming message looks like this:<br />
<br />
<code>
{<br /> "msg": {<br /> "rooms": {<br /> "Redmond": {<br /> "lunch": {<br /> "motion": {<br /> "name": "Redmond_lunch_motion",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "motion"<br /> },<br /> "refrigerator": {<br /> "name": "Redmond_lunch_refrigerator",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "refrigerator"<br /> },<br /> "door": {<br /> "name": "Redmond_lunch_door",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "door"<br /> }<br /> }<br /> }<br /> },<br /> "fromUuid": "d5b77d9b-aaf3-f089a7096ee0"<br /> },<br /> "node": "b5149300-9cbd-1f1b56e5d7bb"<br />}
<br />
There can be a variable number of devices per room, and a variable number of rooms per map, and a variable number of maps. My nesting pattern above is <code>rooms.map.room.devices</code><br />
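Because every level of that pattern can vary, the structure has to be walked dynamically rather than with fixed property names. A small sketch of walking the <code>rooms.map.room.devices</code> nesting, using made-up sample data matching the shape above:

```javascript
// Walk the rooms.map.room.devices nesting without knowing any of the names
// in advance; for...in enumerates whatever keys happen to be present.
var rooms = {
  Redmond: {
    lunch: {
      motion: { device: "motion" },
      refrigerator: { device: "refrigerator" },
      door: { device: "door" }
    }
  }
};

var found = [];
for (var map in rooms) {
  for (var room in rooms[map]) {
    for (var sensor in rooms[map][room]) {
      found.push(map + "/" + room + "/" + rooms[map][room][sensor].device);
    }
  }
}
console.log(found); // e.g. ["Redmond/lunch/motion", "Redmond/lunch/refrigerator", "Redmond/lunch/door"]
```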
<br />
Now for the hard part.<br />
I want to evaluate the differences between values of different devices, per room.<br />
<i>This ends up being a lesson in how values are referenced in arrays in JavaScript.</i><br />
<br />
Before I move forward, I have abbreviated the above JSON to spare you scrolling. There are additional fields, and these additional fields contain Date objects that I am interested in. And these Dates are formatted as odd numbers which are actually <a href="https://en.wikipedia.org/wiki/Unix_time" target="_blank">Epoch Time</a>.<br />
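Epoch Time here is in seconds (with a fractional part), while JavaScript's Date counts milliseconds, so the conversion is just a multiply by 1000. A quick sketch, using one of the timestamps from the message below:

```javascript
// These sensor timestamps are epoch *seconds*; Date wants *milliseconds*.
var epochSeconds = 1485539337.480442; // a motion_updated_at value
var when = new Date(epochSeconds * 1000);
console.log(when.toISOString()); // "2017-01-27T17:48:57.480Z"
```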
<br />
So, to give you the full treatment, here is a real message:<br />
<br />
<code>
{<br /> "msg": {<br /> "rooms": {<br /> "Redmond": {<br /> "lunch": {<br /> "motion": {<br /> "name": "Redmond_lunch_motion",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "motion",<br /> "motion": false,<br /> "motion_updated_at": 1485539337.480442,<br /> "battery": 1,<br /> "battery_updated_at": 1485539337.480442,<br /> "tamper_detected": null,<br /> "tamper_detected_updated_at": null,<br /> "temperature": 21.666666666666668,<br /> "temperature_updated_at": 1485539337.480442,<br /> "motion_true": "N/A",<br /> "motion_true_updated_at": 1485539125.483463,<br /> "tamper_detected_true": null,<br /> "tamper_detected_true_updated_at": null,<br /> "connection": true,<br /> "connection_updated_at": 1485539337.480442,<br /> "agent_session_id": null,<br /> "agent_session_id_updated_at": null,<br /> "connection_changed_at": 1485175984.3230183,<br /> "motion_changed_at": 1485539337.480442,<br /> "motion_true_changed_at": 1485539125.483463,<br /> "temperature_changed_at": 1485529054.5705206<br /> },<br /> "refrigerator": {<br /> "name": "Redmond_lunch_refrigerator",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "refrigerator",<br /> "opened": false,<br /> "opened_updated_at": 1485539969.6240845,<br /> "tamper_detected": false,<br /> "tamper_detected_updated_at": 1476739884.682764,<br /> "battery": 1,<br /> "battery_updated_at": 1485539969.6240845,<br /> "tamper_detected_true": "N/A",<br /> "tamper_detected_true_updated_at": 1476739866.2962902,<br /> "connection": true,<br /> "connection_updated_at": 1485539969.6240845,<br /> "agent_session_id": null,<br /> "agent_session_id_updated_at": null,<br /> "opened_changed_at": 1485539969.6240845<br /> },<br /> "door": {<br /> "name": "Redmond_lunch_door",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "device": "door",<br /> "opened": false,<br /> "opened_updated_at": 1485538007.9089093,<br /> "tamper_detected": null,<br /> "tamper_detected_updated_at": null,<br /> "battery": 
1,<br /> "battery_updated_at": 1485538007.9089093,<br /> "tamper_detected_true": null,<br /> "tamper_detected_true_updated_at": null,<br /> "connection": true,<br /> "connection_updated_at": 1485538007.9089093,<br /> "agent_session_id": null,<br /> "agent_session_id_updated_at": null,<br /> "opened_changed_at": 1485538007.9089093<br /> }<br /> }<br /> }<br /> },<br /> "fromUuid": "d5b77d9b-aaf3-f089a7096ee0"<br /> },<br /> "node": "b5149300-9cbd-1f1b56e5d7bb"<br />}</code>
<br />
Now, the output I am looking for is to take some of these sensor Date values and evaluate them between each of the three devices, such as door-refrigerator, motion-door, motion-refrigerator, and so on.<br />
<br />
If these values were in the same part of the message, it would be really easy. I could simply dot reference the values and do the math.<br />
But they are not. Each sensor is in its own document, in an array.<br />
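The trick is that dot notation only works with names known when the code is written, while bracket notation accepts a variable — which is what lets a loop handle any map, room, or sensor name. A small illustration with hypothetical sample data:

```javascript
var rooms = {
  Redmond: { lunch: { door: { opened_changed_at: 1485538007.9089093 } } }
};

// Dot notation: the key names are baked into the code.
var fixedValue = rooms.Redmond.lunch.door.opened_changed_at;

// Bracket notation: the key names live in variables, so the same line of
// code works for any map/room/sensor combination.
var map = "Redmond", room = "lunch", sensor = "door";
var dynamicValue = rooms[map][room][sensor].opened_changed_at;

console.log(fixedValue === dynamicValue); // true
```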
<br />
Now, if you recall a few posts back, I have a naming convention and I am standardizing three of the names: "door", "refrigerator", and "motion". Those I am not allowing to change, but the room and the map can.<br />
<br />
Recall, I began this exercise with just an array of devices with values. I processed them to group by a logical naming pattern, saved that to an Octoblu state device, and now I am further processing that into actionable data, which I can easily handle with Octoblu filters for alerting or whatever else I want to do.<br />
<br />
So, to get you to read to the end and not just steal my code, here is the output that I am producing, per room.<br />
This gives me a nice single document per room as output - I can pass that to a demultiplex node to break the root array apart and evaluate each document. <br />
<br />
My output looks like this:<br />
<br />
<code>
{<br /> "msg": [<br /> {<br /> "motion": "motion",<br /> "motionAt": 1485544607.3195794,<br /> "motionAtHuman": "2017-01-27T19:16:47.319Z",<br /> "mapTitle": "Redmond",<br /> "room": "lunch",<br /> "refrigerator": "refrigerator",<br /> "fridgeOpenedAt": 1485539969.6240845,<br /> "fridgeOpenedAtHuman": "2017-01-27T17:59:29.624Z",<br /> "door": "door",<br /> "doorOpenedAt": 1485538007.9089093,<br /> "doorOpenedAtHuman": "2017-01-27T17:26:47.908Z",<br /> "diffDoorsOpenedMinutes": 32,<br /> "diffDoorMotionMinutes": 109,<br /> "diffRefrigeratorMotionMinutes": 77,<br /> "sinceDoorOpenMinutes": 115,<br /> "sinceRefrigeratorOpenMinutes": 82,<br /> "sinceMotionMinutes": 5<br /> }<br /> ],<br /> "node": "98cb8680-a264-1b8483214e06"<br />}</code><br />
Now, to end this long, long story, the JavaScript is below.<br />
What I tried to do was have an intuitive way to read the code and reference each level of the document arrays, so you could understand where you were in the hierarchy.<br />
<br />
<code>
// array to output<br />var output = [];<br />
for ( var map in (msg.rooms) ){<br />
for ( var room in msg.rooms[map] ){<br />
var doorOpenedAt;<br /> var fridgeOpenedAt;<br /> var motionAt;<br />
var roomOutput = {};<br />
for ( var sensor in msg.rooms[map][room]){<br />
switch ( msg.rooms[map][room][sensor].device ) {<br /> case "door":<br /> doorOpenedAt = moment.unix(msg.rooms[map][room][sensor].opened_changed_at);<br /> roomOutput.door = msg.rooms[map][room][sensor].device;<br /> roomOutput.doorOpenedAt = msg.rooms[map][room][sensor].opened_changed_at;<br /> roomOutput.doorOpenedAtHuman = doorOpenedAt;<br /> break;<br /> case "refrigerator":<br /> fridgeOpenedAt = moment.unix(msg.rooms[map][room][sensor].opened_changed_at);<br /> roomOutput.refrigerator = msg.rooms[map][room][sensor].device;<br /> roomOutput.fridgeOpenedAt = msg.rooms[map][room][sensor].opened_changed_at;<br /> roomOutput.fridgeOpenedAtHuman = fridgeOpenedAt;<br /> break;<br /> case "motion":<br /> motionAt = moment.unix(msg.rooms[map][room][sensor].motion_true_changed_at);<br /> roomOutput.motion = msg.rooms[map][room][sensor].device;<br /> roomOutput.motionAt = msg.rooms[map][room][sensor].motion_true_changed_at;<br /> roomOutput.motionAtHuman = motionAt;<br /> break;<br /> } // close of switch<br />
roomOutput.mapTitle = msg.rooms[map][room][sensor].mapTitle;<br /> roomOutput.room = msg.rooms[map][room][sensor].room;<br />
} // close of sensor<br />
roomOutput.diffDoorsOpenedMinutes = Math.abs(doorOpenedAt.diff(fridgeOpenedAt, 'minutes')); // removing Math.abs keeps the sign: if the refrigerator opens and the door does not, the value is negative<br /> roomOutput.diffDoorMotionMinutes = Math.abs(doorOpenedAt.diff(motionAt, 'minutes'));<br /> roomOutput.diffRefrigeratorMotionMinutes = Math.abs(fridgeOpenedAt.diff(motionAt, 'minutes'));<br /> roomOutput.sinceDoorOpenMinutes = moment().diff(doorOpenedAt, 'minutes');<br /> roomOutput.sinceRefrigeratorOpenMinutes = moment().diff(fridgeOpenedAt, 'minutes');<br /> roomOutput.sinceMotionMinutes = moment().diff(motionAt, 'minutes'); <br /> output.push(roomOutput);<br />
} //close of room<br />
} // close of map<br />
return output;</code>
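For anyone without moment available, the minute-difference math above can be reproduced with plain numbers. moment's diff() truncates toward zero, so flooring the absolute gap gives the same result — a sketch using the fridge and door timestamps from the sample message:

```javascript
// Minute difference between two epoch-second timestamps, mimicking
// Math.abs(a.diff(b, 'minutes')) from moment: whole minutes, truncated.
function diffMinutes(aSeconds, bSeconds) {
  return Math.floor(Math.abs(aSeconds - bSeconds) / 60);
}

// fridge opened_changed_at vs door opened_changed_at from the message above
console.log(diffMinutes(1485539969.6240845, 1485538007.9089093)); // 32
```

That matches the diffDoorsOpenedMinutes value of 32 in the output document above.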
<br />
<br />
Lots of leading up to this post. But I like to expand folks' understanding along the way.<br />
And I know we don't all tolerate long articles.<br />
<br />
I can thank <a href="https://twitter.com/tkreidl" target="_blank">Tobias Kreidl</a> for even getting me started on this series of posts.<br />
He asked a simple question, and I had a final answer, but I wanted to tell the journey so that he understood how I got to where I did.<br />
That leaves it up to you to take what you need. That's just how I write and respond to questions.<br />
<br />BrianEhhttp://www.blogger.com/profile/09946552115562772058noreply@blogger.com0