Beyond Netlify: Hosting a Static Site with AWS & Terraform
Mon Feb 24 2025
Context
These days, if you’re looking to host a static website, there are countless options that offer generous free tiers and easy, hands-off deployment. Many platforms allow you to connect your GitHub repository and have a deployed website within minutes.
If you're unfamiliar with the hosting-as-a-service options available, here are some of the more popular ones:
- Netlify
- Vercel
- Render
- Surge
- GitHub Pages
- Cloudflare Pages
- Google Firebase
- AWS Amplify
Having used several of these services, I can say they all provide a smooth experience. However, these platforms abstract away the underlying infrastructure, making it difficult to see what happens behind the scenes. While convenient, this abstraction may not suit every use case, and it limits, by design, how much you learn about cloud infrastructure.
For a previous project, I used Docker on a DigitalOcean droplet to manage a reverse proxy, frontend, backend, and database. While this setup worked well for a full-stack application, it was overkill for a simple static website. This time, I wanted a more hands-on approach with AWS, using Terraform to manage infrastructure as code. This allowed me to understand the underlying infrastructure better while gaining experience with new tools.
Security and QoL Considerations
When working with AWS, I always recommend implementing basic security best practices. For this project, I followed these steps:
- Created a non-root AWS admin user
- Set up Organizational Units (OUs) for development and testing
- Configured AWS SSO for user authentication (this is a great guide, though some of its terminology is outdated; AWS SSO is now called IAM Identity Center)
- Set up billing alerts! (a minimal Terraform sketch follows below)
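If you'd rather manage the billing alert as code, a minimal Terraform sketch looks something like the following; the dollar limit and email address are placeholders, not values from my setup:

```hcl
# Hypothetical sketch: a monthly cost budget that emails you once
# actual spend passes 80% of a $10 limit. Values are placeholders.
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-cost-budget"
  budget_type  = "COST"
  limit_amount = "10.0"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["you@example.com"]
  }
}
```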
Manual Setup
With security in place, I proceeded to set up the static site hosting manually. Here are the steps for hosting with a custom domain (steps 6-8 are sketched in Terraform after the list):
1. Create an AWS account (this is your root user, and just like the root account on your computer, it isn't meant for everyday use)
2. Create an AWS admin user
3. Set up Organizational Units (OUs) for development and testing
4. Configure SSO for development and testing
5. Develop the frontend - I used Astro.js, but any static site generator (or plain old HTML + CSS) works for this
6. Upload the built files to an S3 bucket
7. Set up a CloudFront distribution for content delivery
8. Create an SSL certificate (Guide for linking to Cloudflare domains)
9. Configure the DNS settings to point your domain to CloudFront
10. Update CloudFront to use the SSL certificate
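For reference, steps 6 through 8 map onto a handful of Terraform resources, which is how I ultimately ended up managing them (more on that below). This is an illustrative sketch rather than my exact configuration: the bucket and domain names are placeholders, and it assumes an `aws.us_east_1` provider alias, since CloudFront only accepts ACM certificates issued in us-east-1.

```hcl
# Illustrative sketch of steps 6-8; names and domains are placeholders.
resource "aws_s3_bucket" "site" {
  bucket = "example-static-site"
}

# Lets CloudFront read from the bucket without making it public.
# (The matching S3 bucket policy is omitted for brevity.)
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "site-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"
  aliases             = ["www.example.com"]

  origin {
    domain_name              = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id                = "s3-site"
    origin_access_control_id = aws_cloudfront_origin_access_control.site.id
  }

  default_cache_behavior {
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate.site.arn
    ssl_support_method  = "sni-only"
  }
}

# Step 8: CloudFront requires the certificate to live in us-east-1;
# the DNS validation records go in at your DNS host (Cloudflare here).
resource "aws_acm_certificate" "site" {
  provider          = aws.us_east_1
  domain_name       = "www.example.com"
  validation_method = "DNS"
}
```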
Automating Deployment with GitHub Actions
Instead of manually building and uploading the website to S3, I set up GitHub Actions to automate the process. The workflow is multi-stage, with separate build and deploy steps:
1. Building the website: When a commit is pushed to the main branch, GitHub Actions triggers a workflow to build the Astro.js site.
2. Syncing files to S3: The built files are automatically synced to the S3 bucket using the AWS CLI.
3. Invalidating the CloudFront cache: After deployment, a CloudFront cache invalidation is triggered to ensure the latest version of the site is served.
This automation ensures that every change pushed to the repository is immediately reflected on the live site without manual intervention.
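The credentials the workflow uses don't need broad access; they only have to sync the bucket and create invalidations. Here's a hedged Terraform sketch of such a least-privilege deploy policy; the bucket name, account ID, and distribution ID are all placeholders:

```hcl
# Minimal permissions for the CI deploy credentials: sync files to the
# bucket and invalidate the distribution. All ARNs are placeholders.
data "aws_iam_policy_document" "deploy" {
  statement {
    sid = "SyncSite"
    actions = [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
    ]
    resources = [
      "arn:aws:s3:::example-static-site",
      "arn:aws:s3:::example-static-site/*",
    ]
  }

  statement {
    sid       = "InvalidateCache"
    actions   = ["cloudfront:CreateInvalidation"]
    resources = ["arn:aws:cloudfront::123456789012:distribution/EXAMPLEID"]
  }
}

resource "aws_iam_policy" "deploy" {
  name   = "static-site-deploy"
  policy = data.aws_iam_policy_document.deploy.json
}
```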
AWS Services Used
While this is a fairly simple project overall, it uses multiple AWS services and is a good way to get to know the core functionality of the platform (beyond your standard EC2 instance).
- AWS IAM (Identity and Access Management)
- AWS Organizations (account and OU management)
- AWS Billing and Cost Management (billing alerts)
- AWS S3 (Static file storage)
- AWS ACM (SSL certificate management)
- AWS CloudFront (Content delivery network)
- AWS Route 53 (Domain name management; I already had the domain through Cloudflare, so I didn't use this)
- AWS DynamoDB (Visitor tracking; optional, sketched below)
- AWS Lambda (Serverless API function; optional)
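On the visitor tracking: the idea is a small Lambda function that increments a counter in DynamoDB on each page load. A minimal sketch of what the table could look like, with hypothetical names (the function and its wiring are omitted):

```hcl
# Hypothetical table for the optional visitor counter; the Lambda
# function would increment a numeric attribute on one item per page.
resource "aws_dynamodb_table" "visitors" {
  name         = "site-visitor-counts"
  billing_mode = "PAY_PER_REQUEST" # no capacity planning for tiny traffic
  hash_key     = "page"

  attribute {
    name = "page"
    type = "S"
  }
}
```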
Why Terraform?
Terraform plays a crucial role in this project by defining and managing the AWS infrastructure in a repeatable and automated way. With Terraform, I provision the S3 bucket for static file storage, configure CloudFront for content delivery, and establish IAM policies for security. This approach ensures that infrastructure changes are version-controlled and can be applied consistently across environments. By using Terraform Cloud, I also gain the benefit of remote state management, which helps coordinate deployments without manual intervention. The combination of Terraform and GitHub Actions provides a fully automated deployment pipeline that simplifies hosting and scaling the resume site.
While it’s possible to manage these AWS services manually, Terraform makes it easier to replicate infrastructure, maintain consistency, and apply changes with version control. By defining infrastructure as code, I can automate deployment and ensure that future iterations of the project remain maintainable and scalable.
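The remote state setup is only a few lines of configuration. A minimal sketch using Terraform Cloud's `cloud` block; the organization and workspace names are placeholders:

```hcl
# Terraform Cloud stores the state remotely and locks it during runs;
# the organization and workspace names below are placeholders.
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      name = "static-site"
    }
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

With state held remotely and locked during runs, local applies and CI-triggered applies can't trample each other.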
Final Thoughts
This project was a great learning experience, allowing me to go beyond managed hosting platforms and gain hands-on knowledge of AWS. With the way this project is configured, I can tear down and redeploy it with almost as little effort as using a third-party hosting service. While services like Netlify and Vercel offer ease of use, setting up AWS yourself provides a deeper understanding of cloud infrastructure and the flexibility to tailor deployments to specific needs. Moving forward, I plan to refine this setup further by integrating additional automation and monitoring tools.
Curious about the project?
You can find the final deployed project here.
You can find the repo, with GitHub Actions, and Terraform here.