Lately I have been working through a number of automation tasks to create environments and deal with various customer scenarios.
I will say right now, I am all about 'cattle, not sacred cows': my configurations are always kept separate from the machine, so all of that configuration state can always come from someplace else.
15 years ago, I was doing this without automation. We rebuilt our servers at every upgrade of the primary application running on them, and I also updated all the firmware and so on. So I have been in this school of thought for a long time. We just didn't use source control back then; it was documents, settings files, and a vault.
The primary concept behind 'infrastructure as code' is that you can run some set of automation, then bundle up all of the artifacts that drove that automation as a documented source of truth.
Some folks think of the 'infrastructure as code' part as just the settings files: just the code and a set of variables. But I challenge that it is much larger than that.
For example, with an Ansible playbook: you have the source playbook, and you have the environment variables passed in. But don't stop there.
You might also have Jinja2 templates that the playbook used as a transform, temporary variables in flight, files that needed to be purged to harden the machine for production, and so on.
All of that is part of the infrastructure as code. Not just what is fed in, but also the scripts and automation that drive the result. All of it.
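To make that concrete, here is a minimal sketch of what the Ansible bundle described above might look like. Every name here is hypothetical (the play, the `app.conf.j2` template, the bootstrap file); the point is that the playbook, the variables, the Jinja2 transform, and even the hardening steps all belong in the archive together:

```yaml
# site.yml -- a hypothetical playbook; everything it references goes in the bundle
- hosts: web
  vars_files:
    - vars.yml                       # the variables fed in; archive this file too
  tasks:
    - name: Render app config from a Jinja2 template
      template:
        src: templates/app.conf.j2   # the transform itself is part of the source of truth
        dest: /etc/app/app.conf

    - name: Purge bootstrap files to harden the machine for production
      file:
        path: /opt/app/bootstrap.sh
        state: absent
```

If the archive holds `site.yml`, `vars.yml`, the inventory, and `templates/app.conf.j2`, then replaying `ansible-playbook -i inventory site.yml` from that bundle should produce the same machine state every time.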
I should be able to take an archive, open it, play it, and get the same result.
Which means that entire archive is your source of truth at that moment in time.
It is that 'moment in time' part that gets some type of source control involved.
But in reality, your truth might not be in GitLab; it might be an archive in Artifactory, since it might include binaries and other things that don't source-control well.
So think about your pipelines, the artifacts that make them up and move through them, and the end results.
Think about it from the view of: 'can you replay that and get the same result?' or 'can you replay that offline?'
I know that way back when, once we started to look at our rebuild process and combined that with a regular disaster recovery exercise, we really started to refine things and get a handle on the entire process and the dependencies across processes.
Details that are really easy to overlook in the daily grind of making it all just work.
Working with the "back-end" of I.T. systems. Learn. Apply. Repeat.
The things learned as an IT Professional turned software tester, researcher, and product manager.
Tuesday, November 12, 2019
Monday, November 11, 2019
Sorry for the huge silence
Sorry for the huge silence all.
As some of you may recall, the entire Redmond location that I was at with Citrix was let go; RIF'd, as we call it.
After that I landed at F5 in Seattle.
The work at F5 was a pretty wild and constantly fast ride.
With the acquisition of NGINX by F5, I became part of the NGINX business.
That covers over 18 months about as lightly as I can.
What you will find from me going forward is automation. Probably interesting to DevOps and SRE types more so than what I used to write about.
And probably a lot more Linux than Windows.
I talk to customers a lot more than I used to.
What I find interesting is that the problems in IT that I was dealing with 20 years ago are still present in the industry today.
Yes, the tools have changed, the scope has changed, and the impacts have changed - but many of the problems still remain.
It is just something that I find really interesting.
Part of me finds it disturbing as well: the core problems remain, shifting and changing ever so slightly, but always present.
Is it that tools have come and gone?
Are the problems solved, only for the tools to get rewritten and the problems to surface again?
Is it that IT changes and keeps bringing everything back around with each generation?
Or is it just that folks shift from one infrastructure to another and all that baggage comes along for the ride, onto the new, not-yet-complete platform?
I suspect it is the latter more often than not.
Anyway. Back at it. Hopefully posting things that are useful to the community, and hoping to gather some insights as well.