Introductory post: How I built my website


Created: Sep 09 2025   -   38 views
Category: technology
Tags: how-to cloud-computing database about-me

Welcome to my website! This is my first true attempt at a fully-fledged personal site where I own the backend and custom-build it to my specifications. I wanted my first-ever blog post to explain how I built the site from scratch.

I initially built this to replace my old personal portfolio for recruitment purposes. As someone who is soon graduating from UCSD with a master’s in Computer Science, I wanted to make sure my public image was polished and that I had a neat space to showcase my experience and my skills, especially in such a competitive field.

Basics

Domain Name

Before starting any back-end development, I made sure to purchase the right domain name for my site. Instead of registrars such as GoDaddy or Squarespace, which tend to have higher prices, I used Porkbun to buy my domain name.

Server Hosting

The backend is hosted on an AWS t2.micro EC2 instance, provisioned through AWS’ free tier. Before mapping my domain name to the instance’s public IP, I set up Nginx as a reverse proxy: it terminates the HTTPS handshake for secure connections, then routes incoming traffic to the internal HTTP web server. Nginx can also handle load balancing as traffic grows, something I can configure later on when I get more popular B-).
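If that day comes, load balancing in Nginx mostly amounts to declaring an upstream block with multiple backend processes and pointing proxy_pass at it. A hypothetical sketch, not part of my current config:

# Hypothetical future load balancing: round-robin across two backend processes
upstream backend {
    server localhost:8080;
    server localhost:8081;
}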

The backend runs locally on the instance over a designated port, while Nginx manages the routing of HTTPS requests to the server’s HTTP endpoints.

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name timkraemer.me www.timkraemer.me;
    return 301 https://$host$request_uri;
}

# HTTPS server
server {
    listen 443 ssl;
    server_name timkraemer.me www.timkraemer.me;

    ssl_certificate /etc/letsencrypt/live/timkraemer.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/timkraemer.me/privkey.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Any connection made to the HTTP endpoint is redirected to the HTTPS endpoint so every session is encrypted. Internally, Nginx forwards the traffic coming in on port 443 to the internal port my web server is listening on, in this case localhost:8080, a common convention for internal web servers.

The SSL certificate is generated by Let’s Encrypt and handled by Nginx. You can read more about how to configure SSL certificates with Nginx here.
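If you’re following along, the usual way to get that certificate is certbot with its Nginx plugin, which obtains the certificate and updates your server blocks for you. A typical invocation looks like this (swap in your own domains):

sudo certbot --nginx -d timkraemer.me -d www.timkraemer.me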

I don’t expect traffic to increase substantially in the near future, so AWS’ free tier is perfect for a small project like this. In the future, if the need is there (or if I need more default storage space), I might consider moving up to something like the t2.small.

Important: If you want to achieve the same, make sure to allow inbound traffic on ports 80 and 443 in your EC2 security group.

Golang Backend

My website uses a 100% Golang backend to handle routing, page handlers, and blog logic. I chose Golang over alternatives such as Python and Node.js for its speed, type safety, and efficient request handling, as well as its rich ecosystem of open-source libraries. The core web app uses Golang’s net/http package to register page and API routes, where you can simply assign URLs to page handlers:

import "net/http"
http.HandleFunc("/", HomeHandler)

func HomeHandler(w http.ResponseWriter, r *http.Request) {
    tmpl := template.Must(template.ParseFiles("views/home.html"))
    tmpl.Execute(w, nil)
}

I handled page serving for all components of my website in a similar fashion, except for my blog, which I will explain later. Each page’s handler lives in a custom package handlers, and it’s actually quite simple to track each route configuration. I keep them in a file routes.go:

package handlers

import "net/http"

func RegisterRoutes() {
    // HTML render routes
    http.HandleFunc("/", HomeHandler)
    http.HandleFunc("/education", EducationHandler)
    http.HandleFunc("/experience", ExperienceHandler)
    ...

From there, that’s pretty much it (well, not really, but for very simple website serving, this setup works). Each page has an associated .html view that is served through its handler. Any additional functionality a page might have (such as analytics tracking, posting, etc.) can be implemented in the respective handler, as in the sketch below.
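Here is a hypothetical version of one such handler that records a visit before rendering its view (logging.LogEntry is the same helper used by the caching code later in this post; the page_views.log path is just an assumption for illustration):

func ExperienceHandler(w http.ResponseWriter, r *http.Request) {
    // Record the visit before serving the page (illustrative analytics)
    logEntry := fmt.Sprintf("Visit to /experience from %s\n", r.RemoteAddr)
    logging.LogEntry("./logs/page_views.log", logEntry)

    tmpl := template.Must(template.ParseFiles("views/experience.html"))
    tmpl.Execute(w, nil)
}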

Blog Page

The most intricate portion of my website is the custom blog page I built from scratch. This was an interesting project that let me practice secure API integration and design a caching strategy to minimize the costs associated with AWS.

The blog is composed of a landing page and a post template page. Posts are stored in markdown (.md) format and rendered to HTML with the blackfriday/v2 package found at github.com/russross/blackfriday/v2:

htmlContent := blackfriday.Run([]byte(post.MdContent))

SQLite3 Database

Posts are locally stored on the server using the sqlite3 driver (github.com/mattn/go-sqlite3) for low overhead and fast fetch times. Each post is stored as a combination of the following fields (a sketch of the corresponding schema follows the list):

  • Post ID
  • Title
  • Markdown Content
  • Creation Date
  • Tags
  • Category
  • View Count
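For reference, a schema matching those fields might look like the sketch below (my actual column types may differ; the preview column comes from the insert code shown next):

const createPostsTable = `
CREATE TABLE IF NOT EXISTS posts (
    id        TEXT PRIMARY KEY,  -- Post ID (UUID string)
    title     TEXT NOT NULL,
    preview   TEXT,              -- short teaser for the landing page
    content   TEXT NOT NULL,     -- markdown content
    created   TEXT NOT NULL,     -- creation date
    category  TEXT,
    tags      TEXT,              -- comma-separated tags
    viewcount INTEGER DEFAULT 0  -- view count
);`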

Golang’s database/sql interface with sqlite3 has many really cool perks, such as binding the fields of a Go struct directly as query parameters (and scanning rows back into one). As an example, this is a simple way of inserting a post into the database:

func InsertPostIntoDB(post BlogPost) error {
    // Assign a fresh UUID as the post's primary key
    id := uuid.New()
    post.ID = id.String()
    _, err := blogDB.db.Exec(`INSERT INTO posts (
            id,
            title,
            preview,
            content,
            created,
            category,
            tags,
            viewcount)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
        id.String(), post.Title, post.Preview, post.MdContent, post.Created, post.Category, post.Tags, 0)
    return err
}

Additionally, I defined a list of common API functions for the blog that any CRUD app would have, including fetching posts by specific criteria (such as by tag or category), deleting posts, and counting the number of posts per tag/category. It is best practice to only create what you need, and to avoid writing functions that have no use yet.
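As one example of those fetch helpers, here is a sketch of what fetching by category could look like; the name GetPostsByCategory and the ViewCount field are assumptions modeled on the insert code above:

func GetPostsByCategory(category string) ([]BlogPost, error) {
    rows, err := blogDB.db.Query(
        `SELECT id, title, preview, content, created, category, tags, viewcount
         FROM posts WHERE category = ?`, category)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var posts []BlogPost
    for rows.Next() {
        var p BlogPost
        // Scan each column into the corresponding struct field by pointer
        if err := rows.Scan(&p.ID, &p.Title, &p.Preview, &p.MdContent,
            &p.Created, &p.Category, &p.Tags, &p.ViewCount); err != nil {
            return nil, err
        }
        posts = append(posts, p)
    }
    return posts, rows.Err()
}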

Post Template

For robustness and scalability, I created a general blog post template that is pre-filled with blog content, without having to create and store a new view for each post. This allows for faster load times and simpler logic. To achieve this in Golang, we fetch the blog post from the database into a blog struct, and then we pass each field as data into the function that serves the view to the user.

// Convert markdown to HTML
htmlContent := blackfriday.Run([]byte(post.MdContent))
tags := strings.Split(post.Tags, ",")

// Find the blog post template
tmpl := template.Must(template.ParseFiles("templates/blog-post.html"))

// Parse the blog content into the template
tmpl.Execute(w, map[string]interface{}{
    "Post": post,
    "Tags": tags,
    "HTMLContent": template.HTML(htmlContent),
})

The parser will pre-fill the content in the template at the designated {{.Post}}, {{.Tags}}, and {{.HTMLContent}} placeholders.
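To make that concrete, a stripped-down templates/blog-post.html could look like this sketch (my real template has far more markup and styling):

<article>
    <h1>{{.Post.Title}}</h1>
    <p>{{.Post.Created}}</p>
    {{range .Tags}}<span class="tag">{{.}}</span>{{end}}
    <div class="content">{{.HTMLContent}}</div>
</article>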

Caching

Images in a blog post are cool; however, images are large, take up space, and waste resources if they are continuously fetched and served. AWS’ S3 free tier provides up to 20,000 GET requests per month, which is a lot, but as a best practice, and to avoid unnecessary costs from request surges, I implemented a caching policy locally on the server so popular images aren’t continuously re-fetched. The cache takes up at most 1 GB of space on the server and follows a least-recently-used policy: if the cache is full, the image that was accessed longest ago gets evicted. This means that if I reload a post with an image, the image will be fetched from S3 at most once, and any subsequent reloads serve the image from the cache.

// Serve from the local cache if the image exists there
if _, err := os.Stat(localPath); err == nil {
    http.ServeFile(w, r, localPath)
    logEntry := fmt.Sprintf("Serving image %s from cache\n", imgPath)
    logging.LogEntry("./logs/image_request.log", logEntry)
    return
}

If the image is not in the local cache, we fetch it from S3 and store it in the cache, evicting the least recently accessed entry if needed:

// Evict if needed to stay under the 1 GB cap
const maxSize int64 = 1024 * 1024 * 1024
if err := EvictCacheIfNeeded("./imagecache/", maxSize); err != nil {
    log.Printf("Eviction error: %v\n", err)
}

if err := os.MkdirAll(filepath.Dir(localPath), 0755); err != nil {
    log.Printf("mkdir error: %v\n", err)
}

// Save to cache
f, err := os.Create(localPath)
if err == nil {
    io.Copy(f, resp.Body)
    f.Close()
} else {
    log.Printf("File save error: %v\n", err)
}
http.ServeFile(w, r, localPath)
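A simplified sketch of the eviction logic could look like the following: walk the cache directory, sort files oldest-first, and delete until the total size fits under the cap. Modification time stands in for last access here, since access times are often unreliable:

import (
    "os"
    "path/filepath"
    "sort"
    "time"
)

func EvictCacheIfNeeded(dir string, maxSize int64) error {
    type cacheFile struct {
        path    string
        size    int64
        modTime time.Time
    }
    var files []cacheFile
    var total int64

    // Collect every cached file and the running total size
    err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() {
            return err
        }
        files = append(files, cacheFile{path, info.Size(), info.ModTime()})
        total += info.Size()
        return nil
    })
    if err != nil {
        return err
    }

    // Delete the oldest files first until we are back under the cap
    sort.Slice(files, func(i, j int) bool {
        return files[i].modTime.Before(files[j].modTime)
    })
    for _, f := range files {
        if total <= maxSize {
            break
        }
        if err := os.Remove(f.path); err != nil {
            return err
        }
        total -= f.size
    }
    return nil
}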

Deployment Workflow

My CI/CD pipeline is in no way complete, and is the bare minimum needed for such a website, but I plan on adding basic unit testing, linting, and other verification steps before deploying to production. Currently, any changes are first merged into a staging branch, develop, where they can be thoroughly tested. Once all changes are verified, a PR to main triggers the changes to be pulled onto my web server and the Golang backend to be rebuilt and restarted. Here is a simplified version of my deployment .yml file:

name: Deploy to EC2

on:
  push:
    branches:
      - main   # Trigger deploys when code is pushed to main

jobs:
  deploy:
    name: Deploy to EC2 instance
    runs-on: ubuntu-latest

    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd ~/personal-website
            git pull origin main
            go build -o personal-website ./cmd/web
            sudo systemctl restart personal-website
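The systemctl restart at the end assumes the backend runs as a systemd service named personal-website. A minimal unit file for that could look something like this sketch (the user and paths are assumptions, not my exact config):

[Unit]
Description=Personal website Go backend
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/personal-website
ExecStart=/home/ubuntu/personal-website/personal-website
Restart=on-failure

[Install]
WantedBy=multi-user.target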

The End?

Overall, the website is never truly finished. I started this project in early July, and it has gone through countless redesigns and tweaks. Whenever I want to add new features, such as theme presets that persist, or more advanced analytics, tracking, and logging, I seem to sink hours into it, not because it’s difficult, but simply because it’s fun to work on! Expect this website to go through changes in the future, as I continuously experiment with what looks good and what doesn’t. Feel free to revisit in the future to see what it will look like!