This is the second part of the guide to building one's own weblog. In this part we discuss hosting on a personal server, basic security handling, and accessibility through the internet.
In this section I give a general overview of the system requirements and lay out a possible system architecture. Section 2 covers the steps necessary to prepare a self-hosted server on a Raspberry Pi. In section 3 we make the web server accessible on the World Wide Web for users to explore. Finally, the project is wrapped up in section 4.
This section gives a short overview of the components needed to run a web blog and provides an architecture schematic of the implemented solution.
First we need to define the minimum technical requirements to run a modern web blog for private use.
| Category | Component | Minimum Requirement | Alternative | Purpose |
|---|---|---|---|---|
| Hardware | CPU | 1 core | Cloud hosting | Handles server processes and web traffic requests. |
| | RAM | 1 GB | | Required for the web server processes. |
| | Storage | 10 GB | | Space for OS, web files, database. |
| | Network | 5 Mbps upstream / downstream | | Ensures fast and stable access for visitors. |
| Operating System | OS | Linux / Windows | Linux Ubuntu/Debian | Stable, secure, and widely supported environment. |
| Access & Security | Remote Access | SSH with key authentication | | Secure remote administration. |
| | Firewall | UFW / Windows Defender or iptables | pfSense or similar for additional security features | Protects against unauthorized access. |
| | Encryption | TLS via Certbot & Let's Encrypt | Automatic Certificate Management Environment (ACME) | Enables HTTPS for secure connections. |
| Application Layer | HTTP Server | Gunicorn | Apache, MS IIS, … | Handles HTTP requests and serves web content. |
| | CMS (HTML, CSS, JavaScript management) / Framework | Django, Flask, CherryPy, Pyramid, Bottle, Falcon | WordPress, Squarespace, Wix, … | Manages blog content and presentation. |
| Database | Database Engine | SQLite / MySQL | MariaDB, PostgreSQL | Stores blog posts, metadata, and user data. |
| Caching & Proxy | Reverse Proxy | – | Traefik, NGINX | Improves security and performance, allows scaling. |
| Domain & DNS | Domain Name | Registered TLD | Custom domain (e.g., www.bennys-blog.dev) | Identifies the blog online. |
| | DNS Provider | Any publicly available DNS provider | Cloudflare | Manages domain routing and provides performance/security benefits. |
| WAN Access | Internet infrastructure provided by ISP | Directly through the home router | VPN via a Virtual Private Server (VPS), Cloudflare | Ensures accessibility of the web blog on the WWW. |
| Maintenance | Backup | Manual | Automated | Protects data integrity and recovery. |
| | Monitoring | Optional | Reverse proxy built-in tools, UptimeRobot, OpenStatus | Tracks server health and warns about threats. |
| | Version Control | Optional | Git | Tracks content and configuration changes. |
| Additional Enhancements | Containerization | Optional | Docker or Podman | Simplifies deployment and scalability. |
| | CDN Integration | Optional | Cloudflare, Fastly | Speeds up content delivery globally. |
The requirements for such a setup are not excessively demanding, especially for an individual blog with modest traffic. Nonetheless, I want to give an overview of all components we need to set up or configure to implement the required technologies. For all system components we will use battle-proven, off-the-shelf products.
Based on the requirements from the previous section I will now lay out the solution that fits my needs. You might choose other system components depending on your preferences and situation. Some setup steps will naturally differ, but the overall implementation process will still be very similar.
System components for my personal website:

- Web Server / HW
  - Self-hosting on low-performance hardware
  - Production-ready HTTP server hosting the web blog
- Reverse Proxy
- Virtual Private Server (VPS)
- Local Area Network (LAN)
- World Wide Web
- Additional Enhancements
The connections between all system components of the implemented solution are shown in the figure below:
In this chapter I explain how to prepare a Django project for self-hosting.
In part 1 of this guide we created our first web blog, and we did so in debug mode. In the terminal, run
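```
python manage.py check --deploy
```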
This will display a list of warnings, some of which must be addressed before deployment.
```
System check identified some issues:

WARNINGS:
?: (security.W004) You have not set a value for the SECURE_HSTS_SECONDS setting. If your entire site is served only over SSL, you may want to consider setting a value and enabling HTTP Strict Transport Security. Be sure to read the documentation first; enabling HSTS carelessly can cause serious, irreversible problems.
?: (security.W008) Your SECURE_SSL_REDIRECT setting is not set to True. Unless your site should be available over both SSL and non-SSL connections, you may want to either set this setting True or configure a load balancer or reverse-proxy server to redirect all connections to HTTPS.
?: (security.W009) Your SECRET_KEY has less than 50 characters, less than 5 unique characters, or it's prefixed with 'django-insecure-' indicating that it was generated automatically by Django. Please generate a long and random value, otherwise many of Django's security-critical features will be vulnerable to attack.
?: (security.W012) SESSION_COOKIE_SECURE is not set to True. Using a secure-only session cookie makes it more difficult for network traffic sniffers to hijack user sessions.
?: (security.W016) You have 'django.middleware.csrf.CsrfViewMiddleware' in your MIDDLEWARE, but you have not set CSRF_COOKIE_SECURE to True. Using a secure-only CSRF cookie makes it more difficult for network traffic sniffers to steal the CSRF token.
?: (security.W018) You should not have DEBUG set to True in deployment.
?: (security.W020) ALLOWED_HOSTS must not be empty in deployment.
```
Create a new file in the project folder root and call it .env. This will be our environment file. Write the following:
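A minimal sketch, using the variable names that the settings.py snippet below reads; the key is only a placeholder until we generate a real one in a moment:

```
# .env -- environment variables loaded by python-dotenv
DJANGO_SECRET_KEY=your-generated-secret-key
DJANGO_DEBUG=False
```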
In the mysite/settings.py file replace the beginning of the file with the following:
```python
import os
from dotenv import load_dotenv
from pathlib import Path

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/5.2/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
env_path = load_dotenv(os.path.join(BASE_DIR, '.env'))
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', 'django-insecure-your-insecure-key')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DJANGO_DEBUG', '') != 'False'

ALLOWED_HOSTS = ["*"]
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
```

The secret key is stored outside of Django and then loaded at runtime, a method called dynamic importing. This reduces the risk of leaking the key by accident, for example by pushing the settings.py file to a public repository, and it allows for simplified key rotation.
⚠️ Caution
Make sure you add the environment file .env to .gitignore.
The secret key can be generated by a Django utility itself. To do so, we open a shell wrapped around Django by typing
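```
python manage.py shell
```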
Here we type the following
```
>>> from django.core.management.utils import get_random_secret_key
>>> print(get_random_secret_key())
[your-secret-key]
```

Copy your secret key to the corresponding place in the environment file. If you run `python manage.py check --deploy` again, you will see that the overall number of warnings has been reduced to only two.
```
System check identified some issues:

WARNINGS:
?: (security.W004) You have not set a value for the SECURE_HSTS_SECONDS setting. If your entire site is served only over SSL, you may want to consider setting a value and enabling HTTP Strict Transport Security. Be sure to read the documentation first; enabling HSTS carelessly can cause serious, irreversible problems.
?: (security.W008) Your SECURE_SSL_REDIRECT setting is not set to True. Unless your site should be available over both SSL and non-SSL connections, you may want to either set this setting True or configure a load balancer or reverse-proxy server to redirect all connections to HTTPS.
```
Warning security.W008 can be ignored, because we configure a reverse proxy later on. HSTS is recommended to prevent man-in-the-middle attacks on initial HTTP requests and cookie hijacking, as it instructs browsers to always use HTTPS. However, I will skip this setup for now to avoid introducing a misconfiguration that could lead to long-lasting and hard-to-debug issues. HSTS can be safely enabled later in the project.
For now, let's test the deploy settings by collecting the static files first and then running the default Django HTTP server:
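```
python manage.py collectstatic
python manage.py runserver
```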
Now, if we open the browser at `http://127.0.0.1:8000/`, the Django web page from part 1 of this guide looks like it is broken.
The static files are not served, meaning the style and blog content are not loaded, despite the fact that we collected all the static files beforehand. This is because in deployment the static files are meant to be served by the web server, i.e. the reverse proxy. However, I prefer a different approach that makes the Django application self-contained, using a dedicated library called WhiteNoise. WhiteNoise is a Django middleware for explicitly serving static files from their configured location.
WhiteNoise can be installed as a Python package:

```
pip install whitenoise
```
Then, in mysite/settings.py, add the following to the middleware setting:
```python
MIDDLEWARE = [
    # ...
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ...
]
```

The website should now appear and work as usual.
ℹ️ Note
Serving static files through a reverse proxy is very useful when it comes to scaling. The static files can be outsourced to one or multiple different servers. This allows for further separation between function and content, enabling global performance and response-time optimization.
Now that we can serve the static files, we need to take care of the HTTP server: as we know, the built-in Django server is not recommended in production. Eventually, we want a robust server capable of handling hundreds of requests. Therefore, we use Gunicorn (short for Green Unicorn), a popular WSGI (Web Server Gateway Interface) HTTP server for running Python web applications, including Django projects. WSGI is a standard interface between web servers and Python web applications or frameworks (like Django, Flask, etc.). Before WSGI, every Python web framework had its own way of talking to web servers, which made deployment messy and inconsistent (more information can be found in PEP 3333).
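To make the interface concrete, here is a minimal WSGI application sketch (not part of our project): a WSGI app is just a callable that takes the request environment and a start_response callback. Django's mysite/wsgi.py exposes exactly such a callable named application, which is why we point Gunicorn at mysite.wsgi.

```python
# hello.py -- a minimal WSGI application (see PEP 3333).
# Gunicorn could serve it with: gunicorn hello:application
def application(environ, start_response):
    # environ is a dict describing the request (path, headers, ...)
    body = b"Hello, WSGI!"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]  # an iterable of byte strings
```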
Gunicorn works according to a pre-fork model, creating individual worker processes to handle multiple client requests concurrently.
When a web server uses the pre-fork model, it:

- Starts a single master process.
- Creates ("forks") multiple worker processes before any requests arrive.
- Runs each worker as a separate, independent copy of the application.
- Lets each worker handle one request at a time (in the classic synchronous model).
```
                ┌───────────────────┐
                │  Master Process   │
                │-------------------│
                │ Spawns N workers  │
                │ Monitors workers  │
                └─────────┬─────────┘
                          │
         ┌────────────────┼────────────────┐
         ▼                ▼                ▼
   ┌──────────┐     ┌──────────┐     ┌──────────┐
   │ Worker 1 │     │ Worker 2 │ ... │ Worker N │
   │ Handles  │     │ Handles  │     │ Handles  │
   │ Requests │     │ Requests │     │ Requests │
   └──────────┘     └──────────┘     └──────────┘
```
The number of workers N and the worker class can be chosen as specified in the Gunicorn documentation. To just get the system running, the default sync worker class is sufficient, handling a single request at a time.
To integrate Gunicorn and use it as our HTTP server, we first install the Gunicorn Python module:
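```
pip install gunicorn
```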
In the terminal, test Gunicorn with the following command
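```
gunicorn mysite.wsgi --workers=3 --bind 0.0.0.0:8000
```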
mysite is the folder in which the wsgi.py file is located; usually the settings.py file will be there as well. You should see output similar to the one below:
```
[2025-10-24 18:10:24 +0200] [11090] [INFO] Starting gunicorn 23.0.0
[2025-10-24 18:10:24 +0200] [11090] [INFO] Listening at: http://0.0.0.0:8000 (11090)
[2025-10-24 18:10:24 +0200] [11090] [INFO] Using worker: sync
[2025-10-24 18:10:24 +0200] [11091] [INFO] Booting worker with pid: 11091
[2025-10-24 18:10:24 +0200] [11092] [INFO] Booting worker with pid: 11092
[2025-10-24 18:10:24 +0200] [11093] [INFO] Booting worker with pid: 11093
```
Without further adjustments we have a small but fine web application, and it is a good time to think about how to ship the web server from our development environment to a different, dedicated system. My target system is my Raspberry Pi 4 Model B, which I happened to have available for this project. You may also choose to deploy the web server in the cloud, for example on Amazon's AWS or Microsoft's Azure.
In any case we need to choose a method of shipment; here are some options:
Manual copy
The arguably simplest method, but it comes with many caveats: it will be a hassle to apply any changes to the server and maintain system integrity.
Package based
Package-based shipping means you bundle (package) your web application or server, along with all its files, dependencies, and configuration into a single installable package. In other words, instead of manually copying code to servers, you create a versioned software package — like a .deb, .rpm, .whl, or .tar.gz — and ship that to your servers. Once on the server, you install the package using the system’s package manager (like apt, yum, or pip).
Containerized
This is the method of my choice for this project. Containerized shipping means packaging your web server or web application along with all of its dependencies, configuration, runtime, and libraries into a container image so it can run identically on any system that supports containers (like Docker or Kubernetes).
CI/CD
CI/CD shipping means delivering and deploying your web application or web server through an automated Continuous Integration / Continuous Delivery (or Deployment) pipeline. Instead of manually building, packaging, and deploying your app, CI/CD shipping automates the whole process from code commit to production deployment. CI/CD pipelines are a common appearance in multinational companies to support their own ecosystems.
VM-based
VM-based shipment means delivering and running your web server or application on a virtual machine (VM), a full, isolated operating system instance running on physical hardware (host machine or cloud). Instead of shipping just your code or a container, you ship a pre-configured virtual machine or configure a VM to run your application.
Docker is a platform that lets developers build, package, and run applications in containers. Have you ever heard the phrase: "Well, I don't know what your problem is; it works on my machine"? Docker is an effective way to prevent this exact situation. To do that we need to create an image of our application, basically a read-only template built from a Dockerfile. A container is then a running instance of an image in its own environment. Contrary to virtual machines, containers share the host machine's OS kernel and do not run their own.
As a prerequisite, any machine that wants to run our container needs Docker installed.
You may or may not install Docker Desktop (a Docker GUI) as well. Using just the CLI is sufficient for this project.
After installation we can create a new Dockerfile at the root of our project folder, with the simple name Dockerfile.
The Dockerfile contains a list of instructions:
```dockerfile
FROM python:3.12

# Create the app directory
RUN mkdir /app

# Set the working directory inside the container
WORKDIR /app

# Prevents Python from writing pyc files to disk
ENV PYTHONDONTWRITEBYTECODE=1

# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED=1

# Upgrade pip
RUN pip install --upgrade pip

# Copy the requirements file and install all dependencies
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project to the container
COPY . /app/

# Expose the Django port
EXPOSE 8000

# Run the Gunicorn HTTP server
CMD ["gunicorn", "mysite.wsgi", "--workers=3", "--bind", "0.0.0.0:8000"]
```

In the Dockerfile we create a working directory for our container, into which we copy the requirements.txt file.
Let’s create this file next
```
asgiref==3.9.1
Django==5.2.6
gunicorn==23.0.0
packaging==25.0
pillow==11.3.0
python-dotenv==1.1.1
sqlparse==0.5.3
whitenoise==6.11.0
```
If you are working in a virtual environment (.venv) the following command comes in handy:
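```
pip freeze > requirements.txt
```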
If you do not work in a virtual environment, you might want to select the versions and modules to be listed manually, so as not to include unnecessary ones.
The image of the webapp can be built with
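```
docker build -t my_image .
```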
Docker then starts building the image according to the Dockerfile instructions and automatically installs all dependencies. The new image can be listed with
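```
docker images
```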
Output:

```
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
my_image     latest   ab885b71e05b   2 minutes ago   1.24GB
```
To finally run the image as a container, we make use of the Docker Compose tool.
To use this tool we will need another file at the root of our project folder called docker-compose.yml. Here we define how we want to run one or multiple container applications.
For example:
```yaml
services:
  webblog:
    image: my_image
    container_name: testblog
    command: gunicorn mysite.wsgi --workers=3 --bind 0.0.0.0:8000
    volumes:
      - ./staticfiles:/app/staticfiles
      - ./media:/app/media
    ports:
      - "8000:8000"
```
With the command
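```
docker-compose up
```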
we finally bring the webapp back into action, in its containerized version. The app can be shut down at any time with the command
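```
docker-compose down
```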
A reverse proxy is a server that sits between client devices and backend servers, acting as an intermediary that forwards client requests to those servers and returns the responses back to the clients.
It is very useful and best practice to implement one, because it can take over tasks such as TLS termination, request routing, load balancing, and caching, while shielding the backend servers from direct exposure.
Because we used Docker in the previous chapter, it is fairly easy to set up a reverse proxy: we are not limited to the images we build ourselves, but can also use ready-made images of reverse proxies.
I will use an image of the Traefik reverse proxy, but feel free to use any proxy you prefer.
To integrate Traefik into our project we just need to adjust the docker-compose.yml file from earlier.
Like this:
```yaml
services:
  webblog:
    image: my_image
    container_name: testblog
    command: gunicorn mysite.wsgi --workers=3 --bind 0.0.0.0:8000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.testblog-localhost.rule=Host(`127.0.0.1`)"
      - "traefik.http.services.testblog.loadbalancer.server.port=8000"
    volumes:
      - ./staticfiles:/app/staticfiles
      - ./media:/app/media
    networks:
      - web

  traefik:
    image: traefik:v3.5
    container_name: testtraefik
    command:
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:8050"
      - "--entrypoints.traefik.address=:8081"
    ports:
      - "8050:8050"
      - "8081:8081"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Required for Traefik to access Docker
    networks:
      - web
    restart: always

networks:
  web:
    external: false
```
Now we have two services that start at once when we run the command docker-compose up. A manual download of traefik:v3.5 is not required. As you might have noticed, the port configuration has moved from the webblog service to the traefik service; in exchange we introduce a new network called web. In general, Docker containers are connected to each other by networks that act like a virtual LAN. These networks are not accessible from the outside.
When using Docker, Traefik is most easily configured through labels and command properties. This is Traefik-specific and not true for all reverse proxies. To still be able to reach our webblog, we point Traefik's load balancer at port 8000. We also define a route through which we can access the webblog with a Traefik router; in this case only the loopback address 127.0.0.1 is valid. The webblog is now accessible at port 8050, and any request has to pass through the reverse proxy first.
ℹ️ Please Note
The ports 8081 and 8050 defined here are arbitrary. The default entrypoint for web traffic is port 80; I already had a service running on port 80, so I simply chose a different port for illustration.
Traefik comes with a built-in dashboard that gives more insight into the configured routes.
Open it at http://127.0.0.1:8081 and you will see the following:
In the HTTP Routers tab, more information can be found on how the entrypoint is connected to the service and which middlewares are configured in between.
For now the webblog service is only available on the loopback address. To change that we can configure an additional router.
- "traefik.http.routers.testblog-localnetwork.rule=Host(`Your hosts computers LAN IP Address`)"
The name testblog-localnetwork is arbitrary and can be chosen to your liking. After restarting the application (docker-compose down and then docker-compose up), a new router with this name will appear in the HTTP Routers tab of the dashboard.
⚠️ Caution
Make sure your firewall is configured to allow access from your LAN. On Linux, check sudo ufw status (if UFW is installed) and add new rules with sudo ufw allow 8050/tcp; on Windows, check the Defender Firewall inbound rules.
In the first part of this guide we used the admin page to upload content to the blog. In deployment we definitely do not want to expose this endpoint on the web. Therefore, we block this path with a Traefik middleware and a new router.
```yaml
services:
  webblog:
    image: my_image
    container_name: testblog
    command: gunicorn mysite.wsgi --workers=3 --bind 0.0.0.0:8000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.testblog-localhost.rule=Host(`127.0.0.1`)"
      - "traefik.http.routers.testblog-localnetwork.rule=Host(`Your LAN IP`)"
      - "traefik.http.routers.testblog-localnetworkAdmin.rule=Host(`Your LAN IP`) && PathPrefix(`/admin`)"
      - "traefik.http.routers.testblog-localnetworkAdmin.middlewares=block-all@docker"
      - "traefik.http.services.testblog.loadbalancer.server.port=8000"
    volumes:
      - ./staticfiles:/app/staticfiles
      - ./media:/app/media
    networks:
      - web

  traefik:
    image: traefik:v3.5
    container_name: testtraefik
    command:
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:8050"
      - "--entrypoints.traefik.address=:8081"
    labels:
      - "traefik.http.routers.traefik-ext.entrypoints=traefik"
      - "traefik.http.routers.traefik-ext.rule=Host(`Your LAN IP`) && (PathPrefix(`/`) || PathPrefix(`/api`))"
      - "traefik.http.middlewares.block-all.ipwhitelist.sourcerange=0.0.0.0/32"
      - "traefik.http.routers.traefik-ext.middlewares=block-all@docker"
    ports:
      - "8050:8050"
      - "8081:8081"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Required for Traefik to access Docker
    networks:
      - web
    restart: always

networks:
  web:
    external: false
```
Now, by navigating to http://Your-LAN-IP:8050 the webblog is still accessible, but the endpoint http://Your-LAN-IP:8050/admin is not. However, access through http://127.0.0.1:8050/admin is still possible.
Instead of just displaying "Forbidden", you might be interested in setting up a so-called honeypot to track the IPs that try to access the forbidden page. For Django there is the [django-admin-honeypot](https://pypi.org/project/django-admin-honeypot/) package available, if you are interested.
TLS (Transport Layer Security) is a cryptographic protocol that ensures secure communication over a network.
First, the website needs a certificate that proves it is who it claims to be. A certificate in the context of TLS/HTTPS is a digital document that proves the identity of a website or server and enables encrypted communication. There are so-called CAs (Certificate Authorities) like Let's Encrypt, trusted servers that issue these certificates. The certificate can be viewed in any browser by clicking on the lock icon next to the domain name.
The certificate also holds the public key that can be used to encrypt messages between the website and the user's browser. TLS uses a combination of symmetric and asymmetric encryption. More information can be found here.
```
┌──────────── Symmetric Encryption ─────────────┐   ┌──────────── Asymmetric Encryption ────────────┐
│                                               │   │                                               │
│  ┌────────────┐                               │   │  ┌────────────┐                               │
│  │  Sender    │                               │   │  │  Sender    │                               │
│  │------------│                               │   │  │------------│                               │
│  │ Plaintext  │                               │   │  │ Plaintext  │                               │
│  │ + Secret   │                               │   │  │ + Receiver │                               │
│  │   Key      │                               │   │  │ Public Key │                               │
│  └─────┬──────┘                               │   │  └─────┬──────┘                               │
│        │ Encrypt & decrypt with same key      │   │        │ Encrypt with receiver's public key   │
│        ▼                                      │   │        ▼                                      │
│  ┌────────────┐                               │   │  ┌────────────┐                               │
│  │ Ciphertext │                               │   │  │ Ciphertext │                               │
│  └─────┬──────┘                               │   │  └─────┬──────┘                               │
│        │ Send over network                    │   │        │ Send over network                    │
│        ▼                                      │   │        ▼                                      │
│  ┌────────────┐                               │   │  ┌────────────┐                               │
│  │  Receiver  │                               │   │  │  Receiver  │                               │
│  │------------│                               │   │  │------------│                               │
│  │ Ciphertext │                               │   │  │ Ciphertext │                               │
│  │ + Secret   │                               │   │  │ + Private  │                               │
│  │   Key      │                               │   │  │   Key      │                               │
│  └─────┬──────┘                               │   │  └─────┬──────┘                               │
│        │ Decrypt with same key                │   │        │ Decrypt with private key             │
│        ▼                                      │   │        ▼                                      │
│  ┌────────────┐                               │   │  ┌────────────┐                               │
│  │ Plaintext  │                               │   │  │ Plaintext  │                               │
│  └────────────┘                               │   │  └────────────┘                               │
│                                               │   │                                               │
└───────────────────────────────────────────────┘   └───────────────────────────────────────────────┘
```
Acquiring the CA certificate and handling the TLS handshake usually requires several steps. Luckily for us, nowadays many applications and reverse proxies come with their own ACME (Automated Certificate Management Environment) systems.
Below is the final version of the docker-compose.yml file, including the ACME configuration and a router for your personal domain, which we will acquire in section 3.4.
```yaml
services:
  webblog:
    image: my_image
    container_name: testblog
    command: gunicorn mysite.wsgi --workers=3 --bind 0.0.0.0:8000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.testblog-localhost.rule=Host(`127.0.0.1`)"
      - "traefik.http.routers.testblog-localnetwork.rule=Host(`YOUR LAN IP`)"
      - "traefik.http.routers.testblog.rule=Host(`www.your-domain.com`)"
      - "traefik.http.routers.testblog-admin.rule=Host(`www.your-domain.com`) && PathPrefix(`/admin`)"
      - "traefik.http.routers.testblog.tls=true"
      - "traefik.http.routers.testblog-admin.tls=true"
      - "traefik.http.routers.testblog.tls.certresolver=myresolver"        # use the ACME resolver defined below
      - "traefik.http.routers.testblog-admin.tls.certresolver=myresolver"
      - "traefik.http.routers.testblog-admin.entrypoints=websecure"
      - "traefik.http.routers.testblog.entrypoints=websecure"
      - "traefik.http.routers.testblog-admin.middlewares=block-all@docker"
      - "traefik.http.services.testblog.loadbalancer.server.port=8000"
    volumes:
      - ./staticfiles:/app/staticfiles
      - ./media:/app/media
      - ./data:/app/data
    networks:
      - web

  traefik:
    image: traefik:v3.5
    container_name: testtraefik
    command:
      - "--api.dashboard=true"
      - "--api.insecure=false"
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.traefik.address=:8080"
      - "--certificatesresolvers.myresolver.acme.httpchallenge=true"  # Enable Let's Encrypt HTTP challenge
      - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.myresolver.acme.email=your-email@gmail.com"
      - "--certificatesresolvers.myresolver.acme.storage=/etc/traefik/acme.json"
    labels:
      - "traefik.http.middlewares.block-all.ipwhitelist.sourcerange=0.0.0.0/32"
      - "traefik.http.routers.traefik-ext.entrypoints=traefik"
      - "traefik.http.routers.traefik-ext.rule=Host(`YOUR LAN IP`) && (PathPrefix(`/`) || PathPrefix(`/api`))"
      - "traefik.http.routers.traefik-ext.middlewares=block-all@docker"
      - "traefik.http.routers.traefik-dashboard.rule=PathPrefix(`/`) || PathPrefix(`/api`)"
      - "traefik.http.routers.traefik-dashboard.entrypoints=traefik"
      - "traefik.http.routers.traefik-dashboard.service=api@internal"
    ports:
      - "80:80"    # HTTP (used for the ACME challenge)
      - "443:443"  # HTTPS
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Required for Traefik to access Docker
      - ./traefik/certificates/acme.json:/etc/traefik/acme.json  # For ACME storage
    networks:
      - web
    restart: always

volumes:
  acme.json: {}  # Required to store certificates

networks:
  web:
    external: false
```
SSH stands for Secure Shell and uses default port 22. It is a network protocol used to securely connect to another computer over an unsecured network (like the internet). Most computers come with SSH support that allows us to access and control remote servers, and so does my target, the Raspberry Pi.
SSH is deactivated by default, so first it is necessary to activate SSH on the Raspberry Pi. Second, the Raspberry Pi needs to be connected to the same LAN as the development PC. Third, you need to know the Raspberry Pi's local IP address.
Then we can connect to the Raspberry Pi with ssh pi@your-server-ip, either on Linux (in Bash) or on Windows (in PowerShell).
```
ssh pi@192.168.1.101
pi@192.168.1.101's password:
Linux raspberrypi 5.15.32-v7l+ #1538 SMP Thu Mar 31 19:39:41 BST 2022 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
```
Create a new folder for the Webblog
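For example, assuming a folder named webblog in the pi user's home directory:

```
mkdir ~/webblog
```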
First, on your development PC, we build a new image for the designated target.
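The Raspberry Pi has an ARM CPU, so the image must be built for that architecture. A sketch using Docker's buildx, assuming a 64-bit Raspberry Pi OS (for a 32-bit OS the platform would be linux/arm/v7):

```
docker buildx build --platform linux/arm64 -t my_image .
```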
Next, you need to export and compress the Docker image so it can be sent to the remote server:
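```
# export the image to a gzipped tar archive (the filename is our choice)
docker save my_image | gzip > my_image.tar.gz
```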
Then the image can be copied to the newly created folder.
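Assuming the Pi's address from the SSH example above and the folder created earlier:

```
scp my_image.tar.gz pi@192.168.1.101:~/webblog/
```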
On the remote server, after transferring, the image should be visible in the folder with the command ls.
Load the new image with
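```
docker load -i my_image.tar.gz
```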
The console might seem unresponsive at first; this is normal. If you have a container running with an old version of the image, the container will keep using the old version until it is restarted.
Also copy the docker-compose.yml file to the remote server.
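Again via scp, assuming the same target folder:

```
scp docker-compose.yml pi@192.168.1.101:~/webblog/
```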
In this section I explain how to make a self-hosted webblog accessible from the World Wide Web.
This chapter has the prerequisite that a self-hosted server in your LAN is set up and ready for action.
The technologies that serve as the internet backbone are sometimes described as an hourglass with the IP (Internet Protocol) at its narrowest point. Nowadays we have a mixture of devices that support IPv6, IPv4, or both. IPv6 supports \(2^{128}\) unique addresses, compared to the low number of \(2^{32}\) unique addresses of IPv4 (about 4 billion). So Internet Service Providers (ISPs) use a solution to extend the number of addressable devices by placing them behind a CGNAT (Carrier-Grade NAT), separating them from the rest of the internet, somewhat similar to how a router creates a LAN separating local devices from globally accessible devices in the WAN. A CGNAT allows customer NATs (Network Address Translation) to share a single public IP address. This is problematic: if we want a user to be able to navigate to our website, they have to enter a unique domain name that is translated to a unique public IP, and a NAT behind a CGNAT does not have a single unique public IPv4 address. The problem is solved with IPv6, but only if the ISP actually provides us with an IPv6-ready NAT.
Illustration found at: www.f5.com 2025 F5, Inc.
Unfortunately, we do not know whether CGNAT is just a trend during the transition period from IPv4 to IPv6 or a technology that is here to stay. In the following sections I therefore show how to address this problem with a VPS (Virtual Private Server) and a VPN (Virtual Private Network).
A Virtual Private Server is a virtual machine that provides many of the same capabilities as a dedicated physical server. It’s created by dividing a physical server into multiple virtual servers using a technology called virtualization.
There are several online providers of VPS. I personally use a VPS from IONOS with the cheapest option.
The server can be accessed with SSH using its public IP and a random initial password. Make sure to open ports 22, 80, and 443 as well as port 51820 in the VPS firewall settings first. We will need port 51820 to send packets between the VPS and the home server through a VPN later on, in section 3.3.
Before we continue, we will configure SSH to use a more secure private/public key pair for logging in:
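```
ssh-keygen -t rsa
```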
Output:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
```
Just hit Enter on all requests. Go to the SSH folder (cd .ssh) to view the newly generated keys using the cat command.
Copy your private key into a file at a safe location on your development PC. Then register the new key with SSH by appending the public key to the authorized_keys file.
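On the VPS, still inside the .ssh folder, for example:

```
cat id_rsa.pub >> authorized_keys
```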
To access the remote VPS server from the development PC, type
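Assuming you saved the private key on your development PC as ~/.ssh/vps_key (a name chosen here for illustration):

```
ssh -i ~/.ssh/vps_key root@YOUR-VPS-PUBLIC-IP
```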
Additionally, we can forbid password login, so that only the private key can be used to access the VPS, by setting PasswordAuthentication no in /etc/ssh/sshd_config.
Finally, execute

```
sudo service ssh restart
```
If you do not want to acquire a VPS, there is the alternative of using Cloudflare's tunnel service to traverse CGNAT. I did not explore this solution, but I found this video that outlines the approach. Basically, using Cloudflare as a middleman means one has to trust its TLS encryption service, i.e. the company behind it, to handle all encryption correctly.
WireGuard creates a secure, encrypted tunnel between two or more devices over the internet, so that they can communicate as if they were on the same private network.
To use a VPN tunnel, WireGuard needs to be installed on both the VPS and the web server.
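On Debian-based systems (such as Raspberry Pi OS and most VPS default images), for example:

```
sudo apt install wireguard
```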
WireGuard connections are secured by asymmetric encryption, so we have to create a private/public key pair on both the VPS and the home server, as we did in section 3.2.1. Then the public keys must be exchanged between the servers. You may use the built-in WireGuard key generator:
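```
wg genkey | tee privatekey | wg pubkey > publickey
```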
To configure WireGuard, you have to edit /etc/wireguard/wg0.conf.
On the VPS, write the following to the config file:
```ini
[Interface]
PrivateKey = YOUR-VPS-PRIVATE-KEY
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = YOUR-HOMESERVER-PUBLIC-KEY
AllowedIPs = 10.0.0.2/32
```
On the homeserver, write the following to the config file:
```ini
[Interface]
PrivateKey = YOUR-HOMESERVER-PRIVATE-KEY
Address = 10.0.0.2/24

[Peer]
PublicKey = YOUR-VPS-PUBLIC-KEY
Endpoint = YOUR-VPS-PUBLIC-IP:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25
```
We configure the VPN to communicate in its own private network 10.0.0.x, but we effectively allow only one IP address to reach each server, hence the /32. Port 51820 is the default UDP port that WireGuard listens on and uses to send encrypted VPN traffic between peers.
Bring the interface up with

```
sudo wg-quick up wg0
```

and make it start at boot with

```
sudo systemctl enable wg-quick@wg0
```

The status of the connection can be checked with

```
sudo wg
```
Traffic from the internet will arrive at port 80 or port 443 of the VPS. The VPN is in a different network than the public VPS interface, so we need to redirect incoming and outgoing traffic from the public VPS IP address to the VPN destination address. The tool iptables can be used to do so.
Using DNAT (Destination Network Address Translation), incoming HTTP and HTTPS traffic can be routed to the home server's WireGuard (WG) IP. Below, IF stands for interface name; in my case eth0 is the public interface and wg0 the WireGuard interface.
```
sudo iptables -t nat -A PREROUTING -i YOUR_PUBLIC_IF -p tcp --dport 80 -j DNAT --to-destination HOME_WG_IP:80
sudo iptables -t nat -A PREROUTING -i YOUR_PUBLIC_IF -p tcp --dport 443 -j DNAT --to-destination HOME_WG_IP:443
```

Ensure forwarded packets are allowed through the gateway. Allow NEW and ESTABLISHED traffic from public -> wg (targeting the home server IP):
```
sudo iptables -A FORWARD -i YOUR_PUBLIC_IF -o WG_IF -p tcp -d HOME_WG_IP --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i YOUR_PUBLIC_IF -o WG_IF -p tcp -d HOME_WG_IP --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
```

Allow return traffic from wg -> public.
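A minimal sketch of such a rule, assuming the same placeholder interface names (note that kernel IP forwarding must also be enabled, e.g. net.ipv4.ip_forward=1 in /etc/sysctl.conf):

```
# accept replies flowing back from the wg interface to the public interface
sudo iptables -A FORWARD -i WG_IF -o YOUR_PUBLIC_IF -m state --state ESTABLISHED,RELATED -j ACCEPT
```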
Masquerade outgoing packets on the wg interface so home server replies go back via wg (this rewrites the source to the gateway's wg IP so replies are routed through the tunnel):

```
sudo iptables -t nat -A POSTROUTING -o WG_IF -j MASQUERADE
```
The last puzzle piece is to find a good name for your website. The domain name has to be acquired from a DNS (Domain Name System) hosting service like Namecheap or similar. Every DNS hosting service lets you configure records that link your domain and subdomains to a public IP.
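A hypothetical record set along the lines described below (the exact interface differs per provider):

```
Type          Host   Value
A             www    YOUR-VPS-PUBLIC-IP
URL Redirect  @      https://www.bennys-blog.dev
```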
In the example above, in addition to linking my public VPS IP address to www.bennys-blog.dev, I created a URL redirect entry. In this case, if bennys-blog.dev is entered in the web browser, the HTTPS protocol is used automatically.
At this point, bring your web server up and type your domain into the web browser to enjoy your website, and tell everyone they should do so too.
If you have access to your router, it might be a good idea to look at its settings. IP addresses are assigned by the responsible router, i.e. its DHCP server, but these addresses are usually volatile and might change. It is convenient if the web server always has the same local IP, so we do not have to look it up every now and then.
To open your router settings, you have to find its IP first. Commonly, the device providing internet access is called the default gateway. Most often its IP address is at the bottom of a local address range; these ranges are typically 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or 192.168.0.0 to 192.168.255.255.
On Windows you can find your default gateway with the ipconfig command in the terminal. On Linux with netstat -rn.
In my case, my router is located at 192.168.1.1.
In the DHCP settings I assigned the Raspberry Pi and the development PC fixed IP addresses.
In this project we are using a VPN tunnel with a VPS to enable access to the webblog. If the router has a global IP address, it is also possible to configure port forwarding to directly forward incoming traffic to the web server's default web port. For the configuration, it is best to consult the router's manual to learn more about the different forwarding options available.
To check if port forwarding is possible, look at the WAN information in the router settings.
The WAN IP 192.168.0.135 (Dynamic IP) indicates yet another local address range, meaning there is an additional router installed by the ISP that assigns local IP addresses to home routers. This is typical if multiple apartments share a common ISP. Port forwarding is not possible in this case, because the ISP router settings, which probably sit at IP 192.168.0.1, are not accessible, and that router will block any traffic that does not go through port 80 or 443.
The situation is different if the WAN IP address is actually from a global (public) address range, i.e. outside the private ranges listed above. Even then, the ISP is in charge of assigning the global IP to your router and will most likely do so dynamically. In this case there are two options: use a Dynamic DNS (DDNS) service that automatically updates your DNS records whenever the address changes, or ask your ISP for a static IP address.
In the end, the web blog is finally online and accessible to everyone. Hosting a website requires a combination of domain management, server setup, DNS configuration, and security measures. By carefully following these steps, you can ensure your site is accessible, secure, and reliable for users worldwide. Modern tools like Docker, cloud hosting, and automated SSL certificates make deployment faster and easier, but configuring all system components still requires significant effort and attention.
Many paths lead to Rome, and for each off-the-shelf product, there is corresponding documentation to guide you. There is no single go-to solution, and self-hosting is a relatively niche pursuit. Most people would rather avoid the complexity and simply purchase a subscription from an established website builder. However, going through the process of building a self-hosted website provides a sense of accomplishment and creativity, allowing you to truly craft something unique that you control and maintain entirely on your own.
Thank you for reading.
Getting something done is more important than being perfect.