Server-side Rendering for (Almost) Free

Sunday 02 November 2025


This website and nitrojunkie.uk are both server-side rendered using Python and Flask. I'd initially deployed them on a VPS at DigitalOcean, as that appeared to be the cheapest option at around £5 a month. However, so far this year, the two sites combined have cost me a total of 1p to run.

Architecture

Flask

The sites are pretty simple Flask apps consisting of various routes in Python that fetch the required data, do any formatting that's needed, then render an HTML template with the right data fed in. For example, the code that served you this page was this route:

from flask import Response, render_template
from markdown import markdown

@app.route('/projects/<article_id>')
def article(article_id: str) -> Response | str:
    ''' Load a single article '''
    # get_by_meta_key and md_directory come from the site's own helper code
    articles = get_by_meta_key(md_directory, 'id', article_id)

    if len(articles) == 0:
        return Response(status=404)
    if len(articles) > 1:
        return Response(status=500)

    the_article = articles[0]
    return render_template('article.html',
        post=markdown(the_article.content),
        metadata=the_article.metadata,
        page_title=f'{the_article.metadata["title"]} - ')

That pulls out the project with the right ID (free-websites, in the case of this page) and gives back article.html, which is even simpler:

{% include 'header.html' %}
<main>
    <section id="article">
        <h1>{{ metadata.title }}</h1>
        <p>{{ metadata.date | human_date }}</p>
        <hr />
        {{ post | safe }}
    </section>
</main>
{% include 'footer.html' %}

There are of course calls off to other code, but that's beside the point.
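(For the curious, here's a rough sketch of what a helper like get_by_meta_key could look like, assuming the posts are markdown files with frontmatter. This is an illustration only, not the site's actual code.)

# Hypothetical sketch of a get_by_meta_key helper, using the
# python-frontmatter package; the site's real helper may differ.
from pathlib import Path
import frontmatter

def get_by_meta_key(directory: str, key: str, value: str) -> list:
    ''' Return all posts whose frontmatter has key == value '''
    posts = (frontmatter.load(p) for p in Path(directory).glob('*.md'))
    return [p for p in posts if p.metadata.get(key) == value]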

The point is that for this site to function, there needs to be a server running that first piece of Python to perform the initial "server-side" render. That rules out most of the 100% free (and even some of the paid) hosting options, as they're often geared up towards serving simple static files that don't require any compute on the server.

Docker

One of the things that helped me out here is that I'd already dockerised the sites. After a couple of unfortunate mistakes took down all of them at once, I decided I wanted to isolate the sites from one another as much as possible.

Docker provides a way to package up the code and the environment itself into an image that we can run anywhere compatible. This site runs with the following Dockerfile:

FROM python:3.14-bookworm
# Install the Apache web server, plus the headers needed to build Apache modules
RUN apt-get update && apt-get -y install apache2 apache2-dev
COPY src/requirements.txt /var/www/jc/requirements.txt
RUN /usr/local/bin/pip3 install --upgrade pip
RUN /usr/local/bin/pip3 install -r /var/www/jc/requirements.txt
COPY --chown=www-data:www-data config/httpd.conf /etc/apache2/apache2.conf
COPY --chown=www-data:www-data src/ /var/www/jc
# Fail the build early if the Apache config is invalid
RUN apache2 -t
EXPOSE 80
ENTRYPOINT ["apache2", "-D", "FOREGROUND"]

That file tells Docker to grab a copy of Python at version 3.14, running on Debian Bookworm. It then installs the Apache web server along with some dependencies, installs my Python requirements, and copies my code and configuration into place in the newly created environment.

Deployment

So now we have an easily distributable image that contains everything needed to run the site. In fact, with a single command I can have a copy running completely locally on my laptop to develop against. We could of course run this on a Linux server (the same as I was doing on DigitalOcean to begin with), but that's far from free. So enter Google Cloud.
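For example, once the image is built, a single docker run gets a local copy going (the jc-site tag here is just a placeholder name):

docker build -t jc-site .
docker run --rm -p 8080:80 jc-site

The site is then available at http://localhost:8080, with requests forwarded to Apache listening on port 80 inside the container.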

Google Cloud Run

Cloud Run is Google's service for running Docker containers natively on their cloud infrastructure. We simply point Cloud Run at the image and it starts up. Google give you a DNS hostname that automatically routes to your app.

To keep things cheap we need to configure the app. By default, Google configures auto scaling to allow up to 5 copies of the service to be running simultaneously. We need to turn this way down. I have mine set at a minimum of 0 (so nothing is running when it's not needed), and a maximum of 1. The effect of this is that the service completely shuts down when nothing is accessing it, and starts up as needed. That adds some latency, but we'll deal with that in a second.
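Both limits can be set at deploy time. With the gcloud CLI it looks something like this (the service and image names are placeholders):

gcloud run deploy jc-site \
  --image gcr.io/my-project/jc-site \
  --min-instances 0 \
  --max-instances 1 \
  --allow-unauthenticated

Zero is already the default minimum, but it's worth being explicit: it's what lets the service scale to nothing, and cost nothing, when idle.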

Keeping requests away from our service

Google will bill us for every second the service runs, so we want to keep as many requests off the service as possible. For this we'll configure CloudFlare's free tier to sit in front of our service and act as a cache.

Despite building a server-side site to make things simple to update, I don't actually make that many changes. So I have CloudFlare configured to cache the whole site and hold on to that cache for a month. That means that as long as someone has accessed my site through your local CloudFlare node within the last month, your request probably never hit my Google Cloud account.
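I do this with CloudFlare's own caching rules, but you can get a similar effect from the application side by having Flask send long-lived Cache-Control headers. A minimal sketch, with the one-month lifetime expressed in seconds:

@app.after_request
def add_cache_headers(response):
    # Mark every response as cacheable for 30 days, so CloudFlare
    # (and browsers) can serve it without touching the server
    response.headers['Cache-Control'] = 'public, max-age=2592000'
    return response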

The only downside to this caching is that when you do want to update the site, you need to clear the cache. In my case, this is done by the Jenkins pipeline that builds and deploys the site.
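That purge step boils down to a single call to CloudFlare's cache-purge API. A minimal Python version looks something like this (the zone ID and API token are placeholders you'd pull from your CloudFlare dashboard):

import os
import requests

zone_id = os.environ['CLOUDFLARE_ZONE_ID']      # the site's zone ID
api_token = os.environ['CLOUDFLARE_API_TOKEN']  # a token with cache purge permission

resp = requests.post(
    f'https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache',
    headers={'Authorization': f'Bearer {api_token}'},
    json={'purge_everything': True},
)
resp.raise_for_status()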