Exploring the Cloud

What I learned from the cloud resume challenge

My IT Background

I’m a Windows System Administrator. Many years ago I started as a helpdesk analyst in the IT department of a cable manufacturing company. I was the first point of contact for just about everything: password resets, hardware, software, you name it, I did it.

A couple of years later I moved on to a service desk role at a managed service provider, delivering outsourced IT support to a variety of companies. Sometimes I was just a voice on the end of the phone, and other times I had extended onsite placements with a customer.

I eventually graduated into a Junior System Administrator role, which became more senior over time, and that is the progression I have continued ever since. Technologies I have worked with in my SysAdmin career include Windows Server, Linux, virtualization, storage, SQL, networking, Microsoft Exchange, Azure and Microsoft 365, to name a few.

Cloud Resume Challenge

In more recent times I have become interested in parlaying my IT experience into a more cloud/DevOps-focused role. That shouldn’t be too hard, right?

After all, I had worked with Azure and Microsoft 365 for a few years and knew the portal pretty well.

It turns out there were a few gaps to fill in my experience before I had a real chance at a true DevOps job.

I had been working on improving my skills for a couple of years already, but I lacked any practical examples in the form of projects or a portfolio, so I went looking for something to build.

I found the “Cloud Resume Challenge” and decided this would be a good chance to practically apply a lot of the concepts I had been reading about and build something.

In a nutshell, the challenge is to deploy your resume to a static website hosted on a public cloud service (I chose AWS) and create a backend to log the number of visitors to the site. The idea then is to use serverless applications and infrastructure as code to deploy the entire solution from a git repository using CI/CD pipelines.

All of this felt to me like a great opportunity to build my cloud and DevOps muscles, so I got to work.

After a few weeks of work, I had something resembling a working solution. I had touched several AWS services and learned a lot more about each of them than just reading study guides and documentation could cover.

Services used included S3, CloudFront, AWS Certificate Manager, Lambda, API Gateway and DynamoDB.

In addition to these services, I used Terraform to deploy the infrastructure as code; HTML, CSS and JavaScript to build the website; Python in the Lambda function to retrieve and store the site visitor count; GitHub repositories for source control; and GitHub Actions to automate the deployment of code and run tests.
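To give a feel for the counter piece, here is a minimal Python sketch of the Lambda logic. The real function would call DynamoDB (typically an atomic ADD via update_item); the dict-backed store, the "visitors" key and the response shape below are my own illustrative assumptions, not the exact code from the project.

```python
import json


def increment_count(store, key="visitors"):
    """Increment and return the visitor count held in `store`.

    `store` is anything dict-like. In the real Lambda this line would be
    a DynamoDB update_item call using an atomic ADD expression, so
    concurrent visits don't lose updates.
    """
    store[key] = store.get(key, 0) + 1
    return store[key]


def handler(event, context, store):
    # Lambda-style entry point (the extra `store` argument is only here
    # so the sketch runs without AWS): bump the count, return it as JSON
    # with a CORS header so the static site's JavaScript can read it.
    count = increment_count(store)
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }


# In-memory stand-in for the DynamoDB table
table = {}
print(handler({}, None, table))  # count is 1 on the first visit
print(handler({}, None, table))  # count is 2 on the second
```

The site’s JavaScript then only has to fetch this endpoint via API Gateway and drop the returned count into the page.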

All of this was great exposure to Cloud/DevOps concepts I hadn’t yet encountered professionally.

See my resume here: https://resume.shellflow.com

Big Picture

Coming from a traditional IT background, the area of Cloud/DevOps that resonates with me most is documentation.

Documentation has always been an important part of every IT role I have had; however, in a traditional setting it only seems to remain accurate until shortly after the end users are given access.

Traditional IT documentation usually involves lots of spreadsheets recording various details of server configurations, Word documents full of screenshots that may or may not contain all of the steps actually taken to build a system, and whatever other useful information about the system’s configuration someone thought to record.

Multiple administrators have admin access to the systems and changes get made.

Only some of these changes are documented, and over time the system that is running starts to diverge from the one that was originally built.

Infrastructure as code resonates with me because it becomes living documentation of the system as it is configured right now.

Source control tools like Git keep track of every change made to the system, making it much easier and faster to identify breaking changes; in a CI/CD pipeline, problems can even be resolved or rolled back automatically before end users notice.

In addition to the documentation advantages, I think infrastructure as code improves two other areas that are often on the mind of IT management: backup and security.

Backups are vital to any IT system, including in the cloud, to ensure business continuity if the disaster recovery plan ever needs to be invoked.

Traditionally, testing backups and DR was difficult because of the time and resources required to properly execute a test and prove the process would work as expected.

In the world of cloud and DevOps, with a decoupled architecture and all resources defined as code, the key thing to back up is the data; there is much less focus on backing up VMs and their operating systems. A DR test can be automated and reported on regularly, allowing faster identification and remediation of issues.
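The kind of automated check this describes can be sketched in a few lines of Python. The `fetch` callable, the marker string and the check names here are hypothetical; in a real pipeline, `fetch` would wrap an HTTP request to the restored site, and the check would run on a schedule from CI with its result reported to the team.

```python
def smoke_test(fetch, expected_marker="Resume"):
    """Run a minimal post-deployment / DR check against the site.

    `fetch` returns (status_code, body) for the site's URL. Keeping the
    HTTP call behind a callable makes the check itself trivially
    testable without a live environment.
    """
    status, body = fetch()
    checks = {
        "site_responds": status == 200,
        "content_present": expected_marker in body,
    }
    return all(checks.values()), checks


# Stand-in fetchers simulating a healthy and a broken recovery
healthy = lambda: (200, "<html>My Resume</html>")
broken = lambda: (503, "Service Unavailable")

print(smoke_test(healthy))  # passes both checks
print(smoke_test(broken))   # fails, so the pipeline can alert or roll back
```

Because the whole environment is defined in code, the same pipeline that runs this check can first stand up a throwaway copy of the infrastructure, making a full DR rehearsal cheap enough to run routinely.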

Security also has the potential to be greatly enhanced by becoming part of the design of a system rather than being tacked on at the end. Once again, infrastructure as code allows for easy auditing and surfacing of potential issues so that they can be addressed before they are exploited.

Visibility is the key advantage in my mind when comparing traditional IT to Cloud/DevOps. Issues that previously went unnoticed until they urgently needed to be addressed can now be quickly surfaced and remediated long before they are ever major issues.

In my opinion, Cloud/DevOps done well removes a lot of the firefighting that used to be “business as usual” and opens up a whole lot of extra value for the end users of the system.