AWS: The Cloud Plug

Recently I completed the Cloud Resume Challenge. This project, created by Forrest Brazeal, tasks participants with using HTML, JavaScript, and Python to create a full-stack resume application that tracks resume views, and with hosting the application via specific cloud services offered by AWS. Finally, the challenge requires the AWS-hosted resume to be decoupled from the AWS console, mandating CI/CD pipelines for both the frontend and the backend.

As a developer, I found the programming side of the resume to be the easy part. Instead of plain HTML, I decided to use React, as the vast majority of the resume would be stateless functional components returning JSX. By the time I finished this chunk of the challenge, I was starting to question why cloud developers seem to get offered the bag of all bags (it can't be this easy). Then I met the plug.

The Plug and the Process

It is not my place to explain the inner workings of a top-tier salesman of bulk street pharmaceuticals; however, I will indulge in a comparison of AWS, the cloud plug, with such salesmen. Let me narrate the process I went through to connect with the cloud plug.

It's no secret that the plug deals in weight, and weight needs storage. AWS S3 buckets are the storage facilities that let you upload your code (amongst other things) and serve it directly from the bucket. So I uploaded my React frontend to S3, set a bucket policy permitting reads and updates of the files, made the content public, and enabled static website hosting. In comparison, however, this is like moving product from the trunk of a car: it's enough to get the job done, but supreme clientele will be unimpressed.
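Console clicks aside, the same setup can be scripted. Here's a minimal boto3 sketch of the idea; the bucket name and file path are hypothetical, not the ones from my project:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-resume-site"  # hypothetical bucket name

# Upload a file from the React build output, with the right content type
# so browsers render it instead of downloading it.
s3.upload_file(
    "build/index.html", bucket, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)

# Public-read bucket policy so objects can be served straight from the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Enable static website hosting with index.html as the index document.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)
```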

Every plug has corner boys, usually out front of a storefront, moving product. AWS calls its corner CloudFront. After setting my S3 bucket as the origin of my CloudFront distribution, and adjusting more permissions so that the distribution could access the contents of the bucket, I was able to serve my frontend through it. Between SSL, a cloudfront.net domain name, and options to fine-tune the distribution's behavior, it was easy to see the benefits of being affiliated with the plug through CloudFront. Since AWS already had my product in storage, and serving that product straight from storage is not a long-term solution, I disabled static hosting on the S3 bucket. I also set index.html as the distribution's default root object so that requests to the cloudfront.net domain would pull the right file for my resume. In short, the corner boys get the product directly from storage, and users are served from CloudFront.
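The permission swap can look roughly like this in code. This is another boto3 sketch, assuming the classic origin access identity approach; the bucket name and OAI id are placeholders:

```python
import json
import boto3

bucket = "my-resume-site"   # hypothetical bucket name
oai_id = "E2EXAMPLEOAI"     # hypothetical CloudFront origin access identity

# Grant read access to the CloudFront OAI only, replacing the public policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Static website hosting is no longer needed; CloudFront serves the files now.
s3.delete_bucket_website(Bucket=bucket)
```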

This process might seem complete; however, just like the S3 bucket doesn't lead to supreme clientele, a CloudFront distribution forever ties the product to the plug. Such a relationship would never allow the connect to know my name! This is where Route 53 comes into play.

To complete serving the frontend to users, I registered a domain with Route 53 and set out to point its DNS at my CloudFront distribution. First, I gave CloudFront my domain as an alternate domain name for the distribution. At this step an SSL certificate for the custom domain must be requested or imported, so I requested one through AWS Certificate Manager. Then I created a new record in Route 53 with the CloudFront distribution as its alias target. Once the two were linked and secured, no more AWS orange tops: I was now certified to move under my own name for my site. The thing about a plug, however, is that they tend to have their hands in every facet of the product's lifecycle.
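For the curious, that alias record can be created with a boto3 call along these lines. The hosted zone id, domain, and distribution domain below are placeholders, though Z2FDTNDATAQYW2 really is the fixed hosted zone id AWS uses for all CloudFront alias targets:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone id
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "resume.example.com",  # hypothetical custom domain
                "Type": "A",
                "AliasTarget": {
                    # Fixed zone id AWS uses for every CloudFront distribution.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```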

Serving the backend was not as complex as the frontend, but the process requires a visit to the plug's trap house. DynamoDB is AWS's database service, where you can create tables and store data. Think of it as the trap house: this is where the work is cooked up and then handed to the corner boys for distribution. I created a table called Views with an id property and a no_of_views property. An AWS Lambda function, or the trap kitchen, is where I wrote a method to get the table item by id and bump the view count by one. Having built my frontend in React, I only needed a simple stateful functional component that calls the Lambda URL, my serverless server, and applies the returned value to the component's state. Done! My resume application was now fully functional on AWS cloud services; fully tapped in with the plug.
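The trap kitchen itself is only a few lines. Here's a sketch of what such a Lambda handler can look like, assuming the Views table described above; the item id and the CORS header are my own illustrative choices:

```python
import json
import boto3

# The Views table from the post, keyed by "id" with a "no_of_views" number.
table = boto3.resource("dynamodb").Table("Views")

def lambda_handler(event, context):
    # Atomically add 1 to the counter and get the updated value back.
    response = table.update_item(
        Key={"id": "resume"},  # hypothetical item id
        UpdateExpression="ADD no_of_views :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    views = int(response["Attributes"]["no_of_views"])
    return {
        "statusCode": 200,
        # Allow the React frontend on another domain to call this URL.
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"no_of_views": views}),
    }
```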

The challenge could not be complete without building CI/CD pipelines, so that when I needed to make changes, or wanted to pull the whole thing down, I could automate the process without visiting the AWS console. I mean, how many times would you want to see the plug for every mistake or change you make? A GitHub Actions YAML file took care of the frontend workflow, and after several tutorials on Terraform I was able to build the IaC for the backend.
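For a flavor of the frontend workflow, here's a stripped-down GitHub Actions sketch; the bucket name, distribution id, region, and secret names are placeholders, not the ones from my project:

```yaml
name: deploy-frontend
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build the React app.
      - run: npm ci && npm run build

      # Authenticate to AWS using repository secrets.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      # Sync the build to S3 and flush the CloudFront cache.
      - run: aws s3 sync build/ s3://my-resume-site --delete
      - run: aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths "/*"
```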

Conclusion

The Cloud Resume Challenge was a really good introduction to AWS. My greatest takeaway as a developer was the process of implementing infrastructure as code (IaC). Having built projects that use CircleCI, my scope had been limited to testing, updating, and deploying code. Working with Terraform to automate the permissions, roles, and processes offered by Amazon showed me automation opportunities I had missed in previous projects. For that reason, I plan to tackle another cloud project. With a little luck, I may be able to run off on the plug and meet the connect!