A peek into my own infrastructure.
On: 2026-01-06
Nowadays there are many options for VPS hosting, and providers will handle a lot of things for you. But regardless of how much they do, you need to weigh carefully what you will do yourself when it comes to server configuration. Whether you’re using an AWS EC2 instance, a DigitalOcean droplet, a Hetzner VPS, or even a Raspberry Pi sitting in the back of your closet, the principles and commands in this guide remain the same - you just need a machine running Debian/Ubuntu (or similar) that you have root/sudo access to.
This guide walks you through setting up a production-ready self-hosted server from the ground up. You’ll learn how to:
- Lock down access with a firewall (ufw) and a hardened SSH configuration
- Block brute-force attempts with Fail2ban
- Install Docker and organize your services on disk
- Configure DNS for both public and local setups
- Run Traefik as a reverse proxy with automatic HTTPS
By the end, you’ll have a flexible infrastructure where deploying a new application is as simple as writing a Docker Compose file and letting Traefik handle the rest. Whether you’re hosting personal projects, development environments, or production services, this foundation will serve you well.
The first step is, obviously, securing access to your server. Securing access means getting assurances about who is connecting, which is why you’ll typically hear and read about SSH configuration and the ufw firewall.
First, let’s configure the firewall. A quick recap:
A firewall is software that allows or blocks incoming and outgoing connections/traffic.
We want to be able to call out anywhere we want from the machine, so we set ufw default allow outgoing.
We also want to block any incoming connections, while making exceptions, so we set ufw default deny incoming.
To make an exception we can run ufw allow ${PORT_NUMBER}. For example:
# To allow "standard" SSH
ufw allow 22
# To allow HTTP
ufw allow 80
# To allow HTTPS
ufw allow 443
It’s worth mentioning that we need to allow the port we are using for the current connection. If we are using ssh without any modifications, port 22 is fine; otherwise we need to open the custom port up, and potentially block 22. Also, if you are not careful you can lock yourself out of the machine: ufw needs to be activated, and if you don’t allow the port for your current connection (or some other means of getting a shell) you will be locked out and will depend on physical access or your VPS provider’s console.
To enable the firewall run ufw enable; it is active immediately. To test it, you can attempt to probe a port with telnet ${IP_ADDRESS} ${PORT_NUMBER}.
Now that you’ve secured your ports, some extra reading on how ufw works, and on how to allow a port only from a specific source, would be beneficial. (Suppose you want to set up a Docker Swarm cluster: you might want to allow traffic only from the other nodes.)
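For instance, restricting a port to a trusted source subnet looks like this; the 192.168.1.0/24 subnet and Docker Swarm’s management port 2377 are illustrative, adjust them to your network:

```shell
# Allow Swarm management traffic only from the nodes' subnet
sudo ufw allow from 192.168.1.0/24 to any port 2377 proto tcp
# Review the numbered rule list (handy for deleting rules later)
sudo ufw status numbered
```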
Now we need to address the way you would access the server, for this we’ll need to check the ssh configuration, usually located at /etc/ssh/sshd_config.
On a high level, what we want to do here is:
- Disable root login (use a non-root user with sudo execution)
- Disable password authentication (use ssh keys)
- Only allow users in the ssh group to access
So this is a two-step process: (1) make sure the ssh group exists and that the wheel or sudo group exists, and (2) modify the sshd configuration.
First, let’s create the necessary groups if they don’t exist:
# Create ssh group if it doesn't exist
sudo groupadd ssh
# Most systems already have wheel or sudo group
# but you can verify with:
getent group wheel
# or
getent group sudo
Now, create a non-root user and add them to the appropriate groups:
# Create a new user
sudo useradd -m -s /bin/bash ${USERNAME}
# Add user to ssh group
sudo usermod -aG ssh ${USERNAME}
# Add user to sudoers group
sudo usermod -aG sudo ${USERNAME}
# Set a temporary password
sudo passwd ${USERNAME}
Before we lock down SSH authentication, make sure you have your SSH key copied to the server:
# From your local machine, copy your public key
ssh-copy-id ${USERNAME}@${SERVER_IP}
# Verify you can login with the key
ssh ${USERNAME}@${SERVER_IP}
Now let’s modify the SSH daemon configuration, usually at /etc/ssh/sshd_config. Find and modify (or add) the following lines:
# Disable root login
PermitRootLogin no
# Disable password authentication
PasswordAuthentication no
ChallengeResponseAuthentication no
# Note: on some distros disabling PAM can interfere with session setup;
# if you hit issues, keep UsePAM yes (password logins stay disabled via
# PasswordAuthentication no above)
UsePAM no
# Only allow users in the ssh group
AllowGroups ssh
# Optional: Change the default SSH port (security through obscurity)
# Port 2222
# Use protocol 2 only (this is the default; the Protocol directive is
# deprecated and ignored by modern OpenSSH)
Protocol 2
# Disable empty passwords
PermitEmptyPasswords no
# Enable public key authentication
PubkeyAuthentication yes
Important: Before applying these changes, make absolutely sure you can log in with your SSH key as your non-root user. Test this in a separate terminal session while keeping your current session open.
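You can also ask sshd itself to validate the file before restarting; a syntax error that slipped in would otherwise prevent the daemon from starting:

```shell
# Test mode: parses sshd_config and exits non-zero on errors
sudo sshd -t && echo "sshd config OK"
```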
Once you’ve verified everything works, restart the SSH service:
sudo systemctl restart sshd
# On newer Debian/Ubuntu releases the unit is named "ssh":
# sudo systemctl restart ssh
Test your configuration from a new terminal:
# This should work
ssh ${USERNAME}@${SERVER_IP}
# This should fail
ssh root@${SERVER_IP}
If you changed the SSH port, remember to update your firewall rules and specify the port when connecting:
ufw allow ${NEW_SSH_PORT}
ufw delete allow 22
ssh -p ${NEW_SSH_PORT} ${USERNAME}@${SERVER_IP}
With SSH properly configured, the next step is to protect against brute-force attacks. Fail2ban monitors log files and automatically bans IP addresses that show malicious signs, such as too many password failures.
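On your local machine, a ~/.ssh/config entry saves retyping the custom port and user every time. The alias myserver, the IP, and port 2222 below are placeholders:

```shell
# Append a client-side host alias to your SSH config
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName 203.0.113.10
    User youruser
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
EOF
```

After this, ssh myserver is all you need to type.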
To install Fail2ban:
# On Debian/Ubuntu
sudo apt update
sudo apt install fail2ban
Configure Fail2ban by creating a local configuration file:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
Open /etc/fail2ban/jail.local and find and modify the [sshd] section:
[sshd]
enabled = true
port = ssh
# Or if you changed the SSH port:
# port = 2222
logpath = %(sshd_log)s
maxretry = 3
bantime = 3600
findtime = 600
This configuration will ban an IP for 1 hour (bantime = 3600) if it fails 3 login attempts (maxretry = 3) within 10 minutes (findtime = 600).
Enable and start Fail2ban:
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Check the status
sudo fail2ban-client status sshd
Tip: You can view banned IPs with sudo fail2ban-client status sshd and manually unban an IP with sudo fail2ban-client set sshd unbanip ${IP_ADDRESS}.
Docker is essential for self-hosting, allowing you to run applications in isolated containers. Here’s how to install it properly:
First, remove any old versions:
# On Debian/Ubuntu
sudo apt remove docker docker-engine docker.io containerd runc
Install Docker using the official repository:
sudo apt update
sudo apt install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Enable and start Docker:
sudo systemctl enable docker
sudo systemctl start docker
Add your user to the docker group to run Docker commands without sudo:
sudo usermod -aG docker ${USERNAME}
# Log out and back in for this to take effect
# Or run:
newgrp docker
Verify the installation:
docker --version
docker compose version
docker run hello-world
The Docker daemon runs as root, and users in the docker group have root-equivalent privileges. Only add trusted users to this group. For enhanced security, consider Docker rootless mode, or look into Podman, which takes a different approach to container execution.
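One housekeeping item before moving on: with the default json-file log driver, container logs grow without bound and can quietly fill your disk. A minimal /etc/docker/daemon.json caps them; the 10m/3 values are illustrative:

```shell
# Cap container log size and rotation for the json-file driver
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
# Restart the daemon to apply (affects newly created containers)
sudo systemctl restart docker
```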
With these tools in place you have a solid foundation for self-hosting applications.
Before we jump into setting up Traefik, let’s talk about organizing your Docker services. Having a clean directory structure makes it easier to manage multiple services, backups, and configurations.
I recommend creating a dedicated directory for all your services, typically in your home directory:
mkdir -p ~/services
cd ~/services
Here’s an example structure that works well:
~/services/
├── traefik/
│ ├── docker-compose.yml
│ ├── conf.d/
│ │ ├── middlewares.yml
│ │ ├── cups.yml
│ │ └── other-services.yml
│ ├── letsencrypt/
│ │ └── acme.json
│ ├── certs/
│ │ ├── wildcard.crt
│ │ └── wildcard.key
│ └── users
├── logs/
│ └── traefik/
├── data/
│ ├── app1/
│ ├── app2/
│ └── ...
├── secrets/
│ └── (api keys, tokens, etc.)
└── your-app/
└── docker-compose.yml
Note that logs/ and data/ are accessible to multiple services.
cd ~/services
# Create the base directories
mkdir -p traefik/{conf.d,letsencrypt,certs}
mkdir -p logs/traefik
mkdir -p data
mkdir -p secrets
# Set proper permissions for sensitive files
chmod 700 secrets
This structure will be the foundation for all the services we deploy. When you add a new service, you’ll create a new directory under ~/services/ with its own docker-compose.yml, and it will connect to the shared traefik-network.
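Since everything lives under ~/services, a backup is one tar invocation away. A minimal sketch; the ~/backups destination is an assumption, and in practice you’d also ship the archive off the machine:

```shell
# Create a dated tarball of the whole services tree
SRC="$HOME/services"
DEST="$HOME/backups"
mkdir -p "$SRC" "$DEST"   # SRC normally exists already at this point
tar czf "$DEST/services-$(date +%Y%m%d).tar.gz" -C "$HOME" services
ls -lh "$DEST"
```

Pair it with a cron entry or a systemd timer once you have data worth keeping.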
Before setting up your reverse proxy, you need to configure DNS to point your domain names to your server. The approach differs depending on whether you’re running a public server or a local network setup.
DNS (Domain Name System) is what translates human-readable domain names like example.com into IP addresses that computers can use. For self-hosting, you’ll need to decide whether your services will be reachable from the public internet, only from your local network, or both.
For public-facing services, you need to configure DNS records with your domain registrar or DNS provider (like Cloudflare, Route53, Google Domains, etc.).
At minimum, you’ll need A records pointing to your server’s IP address:
Type  Name                 Value           TTL
A     @                    your.server.ip  3600
A     *.example.com        your.server.ip  3600
A     traefik.example.com  your.server.ip  3600
A     whoami.example.com   your.server.ip  3600
The wildcard record (*.example.com) allows any subdomain to resolve to your server, which is useful when you’re adding new services frequently.
Important: DNS changes can take anywhere from a few minutes to 48 hours to propagate globally, though typically it’s much faster with larger DNS providers. Use a low TTL (like 300-3600 seconds) during initial setup so changes propagate quickly.
Before proceeding with Traefik setup, verify your DNS records have propagated:
# Check A record
dig example.com +short
# Should return your server IP
# Check wildcard record
dig test.example.com +short
# Should also return your server IP
# Check from multiple locations
# Using Google's DNS
dig @8.8.8.8 example.com +short
# Using Cloudflare's DNS
dig @1.1.1.1 example.com +short
For local networks not exposed to the internet, you have several options for DNS resolution:
Edit /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows) on each client device:
192.168.1.100 example.local
192.168.1.100 traefik.example.local
192.168.1.100 app.example.local
192.168.1.100 whoami.example.local
Pros: Simple, no additional infrastructure needed.
Cons: Must be configured on every device, no wildcard support.
Most home routers allow you to configure local DNS entries. The exact steps vary by router, but typically you:
- Open the router’s admin interface (usually at 192.168.1.1 or 192.168.0.1)
- Find the local DNS or DHCP settings
- Add an entry mapping your domain, ideally a wildcard like *.example.local, to your server’s IP (e.g. 192.168.1.100)
Where these settings live differs between firmwares (stock vendor UIs, OpenWrt, pfSense, and so on).
Pros: Applies to all devices on the network automatically.
Cons: Limited by router capabilities, may not support wildcards.
For more control, run your own DNS server using Pi-hole, dnsmasq, or BIND:
Using dnsmasq (lightweight option):
# Install dnsmasq
sudo apt install dnsmasq
# Configure dnsmasq
sudo tee -a /etc/dnsmasq.conf > /dev/null <<EOF
# Local domain configuration
address=/example.local/192.168.1.100
address=/.example.local/192.168.1.100
EOF
# Restart dnsmasq
sudo systemctl restart dnsmasq
Using Pi-hole:
Pi-hole is a popular option that provides DNS services with built-in ad-blocking. You can install it and configure local DNS records through its web interface. Pi-hole also offers DHCP services and detailed DNS query logging.
- Install it with curl -sSL https://install.pi-hole.net | bash
- Configure local DNS records through the web interface at http://pi.hole/admin
Then configure your router to use the DNS server: point the router’s DHCP DNS setting at the machine running Pi-hole so every client on the network picks it up automatically.
Pros: Full control, supports wildcards, can provide additional features (ad-blocking, logging).
Cons: Requires a dedicated server or container, single point of failure.
After configuring DNS (local or public), verify resolution from a client machine:
# Test basic resolution
nslookup example.com
# Test with specific DNS server
nslookup example.com 8.8.8.8
# More detailed query
dig example.com
# Test HTTPS connectivity (after Traefik is running)
curl -I https://example.com
For servers accessible both internally and externally, consider split DNS:
- Internal clients resolve your domain to the local IP (e.g. 192.168.1.100) via router/local DNS
- External clients resolve it to your public IP via public DNS
This approach keeps internal traffic on the local network (avoiding hairpin NAT round-trips through your router) while still allowing external access.
Configuration example:
- Public DNS: example.com → public.ip.address
- Local DNS: example.com → 192.168.1.100
Note: Make sure your firewall rules match your DNS configuration. Public DNS records without proper firewall rules won’t allow external access, and local DNS records won’t help if the service isn’t listening on the local network.
With DNS configured, we can now set up Traefik to handle incoming traffic and automatically manage SSL certificates.
A reverse proxy sits in front of your services and routes incoming requests to the appropriate backend service. Think of it as a smart traffic director for your server. Benefits include a single entry point (only ports 80/443 exposed), centralized TLS termination and certificate management, name-based routing to any number of backends, and shared middlewares such as authentication and rate limiting.
Now that we have Docker running and our directory structure ready, we need a way to route traffic to our containers and manage SSL certificates automatically. Traefik is a dynamic reverse proxy that integrates seamlessly with Docker and handles Let’s Encrypt certificates automatically.
First, navigate to your services directory and create the Traefik structure:
cd ~/services/traefik
The directory structure should already exist from our earlier setup, but let’s verify:
ls -la
# Should show: conf.d/ letsencrypt/ certs/
Create the external network that Traefik will use to communicate with other containers:
docker network create traefik-network
Now create the docker-compose.yml file in ~/services/traefik/:
networks:
  traefik-network:
    external: true

services:
  traefik:
    image: traefik:v3.0
    restart: unless-stopped
    command:
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.web.http.redirections.entryPoint.to=websecure
      - --entrypoints.web.http.redirections.entryPoint.scheme=https
      - --entrypoints.web.http.redirections.entrypoint.permanent=true
      - --entrypoints.websecure.address=:443
      - --providers.file.directory=/etc/traefik/conf.d
      - --providers.file.watch=true
      # Let's Encrypt resolver for public domains
      - --certificatesresolvers.letsencrypt.acme.email=your-email@example.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
      - --entrypoints.websecure.http.tls.certresolver=letsencrypt
      # Note: a wildcard SAN normally requires a DNS-01 challenge; with the
      # HTTP challenge the wildcard request will fail and Traefik will obtain
      # per-host certificates for routed domains instead
      - --entrypoints.websecure.http.tls.domains[0].main=example.com
      - --entrypoints.websecure.http.tls.domains[0].sans=*.example.com
      - --accesslog=true
      - --accesslog.filepath=/var/log/traefik/access.log
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik-network
    volumes:
      - ${HOME}/services/logs/traefik:/var/log/traefik
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${HOME}/services/traefik/letsencrypt:/letsencrypt
      - ${HOME}/services/traefik/certs:/certs:ro
      - ${HOME}/services/traefik/conf.d:/etc/traefik/conf.d:ro
      - ${HOME}/services/traefik/users:/etc/traefik/users:ro
    extra_hosts:
      - "host.docker.internal:host-gateway"
    labels:
      - "traefik.enable=true"
      # Label-based redirect (redundant with the entrypoint-level redirection
      # above; note HostRegexp takes a Go regexp in Traefik v3)
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.permanent=true"
      - "traefik.http.routers.redirect-https.rule=HostRegexp(`.*`)"
      - "traefik.http.routers.redirect-https.entrypoints=web"
      - "traefik.http.routers.redirect-https.middlewares=redirect-to-https"
      - "traefik.http.routers.traefik-dashboard.rule=Host(`traefik.example.com`)"
      - "traefik.http.routers.traefik-dashboard.entrypoints=websecure"
      - "traefik.http.routers.traefik-dashboard.tls=true"
      - "traefik.http.routers.traefik-dashboard.service=api@internal"
      - "traefik.http.routers.traefik-dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.traefik-dashboard.middlewares=basic-auth@file"

  whoami-external:
    image: "traefik/whoami"
    restart: unless-stopped
    networks:
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami-external.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami-external.entrypoints=websecure"
      - "traefik.http.routers.whoami-external.tls=true"
      - "traefik.http.routers.whoami-external.tls.certresolver=letsencrypt"
      - "traefik.http.services.whoami-external.loadbalancer.server.port=80"

  whoami-internal:
    image: "traefik/whoami"
    restart: unless-stopped
    networks:
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami-internal.rule=Host(`whoami-internal.example.com`)"
      - "traefik.http.routers.whoami-internal.entrypoints=websecure"
      - "traefik.http.routers.whoami-internal.tls=true"
      - "traefik.http.services.whoami-internal.loadbalancer.server.port=80"
Notice how the volumes are mounted using absolute paths with the ${HOME} environment variable:
- ${HOME}/services/logs/traefik:/var/log/traefik - Logs go to the shared logs directory
- ${HOME}/services/traefik/letsencrypt:/letsencrypt - Let’s Encrypt certificates stay in Traefik’s directory
- ${HOME}/services/traefik/certs:/certs:ro - Custom certificates are read-only
- ${HOME}/services/traefik/conf.d:/etc/traefik/conf.d:ro - Configuration files are read-only
- ${HOME}/services/traefik/users:/etc/traefik/users:ro - User authentication files are read-only
Using ${HOME} instead of ~ or relative paths ensures the mounts work correctly regardless of how you run Docker Compose.
This organization makes it easy to back up important data (just backup ~/services/) and keeps your system clean.
When you add new services, create them in their own directories:
cd ~/services
mkdir my-app
cd my-app
Create a docker-compose.yml for your app:
networks:
  traefik-network:
    external: true

services:
  my-app:
    image: my-app:latest
    restart: unless-stopped
    networks:
      - traefik-network
    volumes:
      - ${HOME}/services/data/my-app:/data
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.my-app.rule=Host(`myapp.example.com`)"
      - "traefik.http.routers.my-app.entrypoints=websecure"
      - "traefik.http.routers.my-app.tls=true"
      - "traefik.http.routers.my-app.tls.certresolver=letsencrypt"
      - "traefik.http.services.my-app.loadbalancer.server.port=8080"
Notice how the data volume (${HOME}/services/data/my-app:/data) uses an absolute path to reference the shared data directory. This keeps all persistent data in one place for easier backups.
This configuration includes one automatic certificate resolver (Let’s Encrypt) and support for manually provided certificates.
Before configuring certificates, consider your use case:
For Local Networks (not exposed to the internet): Let’s Encrypt’s HTTP challenge can’t reach your server, so use a self-signed certificate or your own internal CA (covered below), or switch the resolver to a DNS challenge if your domain’s DNS is publicly manageable.
For Public Servers (exposed to the internet): the Let’s Encrypt resolver can obtain and renew trusted certificates automatically.
I’ll cover both scenarios so you can choose what fits your needs.
For services that require custom certificates (internal CA, purchased certificates, or self-signed), you need to configure them via file provider. Create a certificates configuration file at traefik/conf.d/certificates.yml:
tls:
  certificates:
    - certFile: /certs/wildcard.crt
      keyFile: /certs/wildcard.key
If you need to generate your own self-signed wildcard certificate for internal use or development, you can use OpenSSL:
# Create a directory for certificates
mkdir -p traefik/certs
cd traefik/certs
# Generate a private key
openssl genrsa -out wildcard.key 4096
# Create a configuration file for the certificate
cat > wildcard.cnf <<EOF
[req]
default_bits = 4096
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = v3_req
[dn]
C=US
ST=State
L=City
O=Organization
OU=IT Department
CN=*.example.com
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.example.com
DNS.2 = example.com
DNS.3 = *.internal.example.com
EOF
# Generate a self-signed certificate directly (no separate CSR step needed)
openssl req -new -x509 -days 3650 -key wildcard.key -out wildcard.crt -config wildcard.cnf -extensions v3_req
# Set appropriate permissions
chmod 600 wildcard.key
chmod 644 wildcard.crt
# Verify the certificate
openssl x509 -in wildcard.crt -text -noout | grep -A 1 "Subject Alternative Name"
This creates a self-signed certificate valid for 10 years (-days 3650) that covers:
- *.example.com (first-level subdomains)
- example.com (the root domain)
- *.internal.example.com (subdomains under internal.example.com)
Update your traefik/conf.d/certificates.yml to use the wildcard certificate:
tls:
  certificates:
    - certFile: /certs/wildcard.crt
      keyFile: /certs/wildcard.key
Important: Self-signed certificates will trigger browser warnings. For production use, either:
- Use Let’s Encrypt (free and trusted by browsers)
- Purchase a certificate from a trusted CA
- Set up your own internal CA and distribute the root certificate to client devices
- Accept the browser warnings for internal-only services
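Whichever option you choose, it’s worth confirming that a certificate and key actually belong together before handing them to Traefik. A sketch on a throwaway pair; substitute your real wildcard.crt and wildcard.key:

```shell
# Generate a throwaway pair just for the demo
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=*.example.com" \
  -keyout "$tmp/wildcard.key" -out "$tmp/wildcard.crt" 2>/dev/null
# A cert and key match when they embed the same public key
crt_pub=$(openssl x509 -noout -pubkey -in "$tmp/wildcard.crt")
key_pub=$(openssl pkey -pubout -in "$tmp/wildcard.key" 2>/dev/null)
[ "$crt_pub" = "$key_pub" ] && echo "certificate and key match"
```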
To avoid browser warnings when accessing your services, you need to install the self-signed certificate on each device that will access your server.
On macOS:
# Add to system keychain
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain wildcard.crt
# Or add to user keychain (no sudo required)
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain wildcard.crt
# Restart browsers for changes to take effect

On Linux (Debian/Ubuntu):
# Copy certificate to system certificates directory
sudo cp wildcard.crt /usr/local/share/ca-certificates/my-server.crt
# Update certificate store
sudo update-ca-certificates
# For Firefox specifically, you may need to import via browser settings
# Settings > Privacy & Security > Certificates > View Certificates > Import

On Windows:
# PowerShell as Administrator
Import-Certificate -FilePath wildcard.crt -CertStoreLocation Cert:\LocalMachine\Root
# Or use the Certificate Manager GUI:
# Win+R, type 'certmgr.msc'
# Trusted Root Certification Authorities > Certificates > Right-click > All Tasks > Import

On Android, the process varies slightly by version and manufacturer, but generally you install it via Settings > Security > Encryption & credentials > Install a certificate > CA certificate.
Alternative method for newer Android versions:
# If you have ADB access
adb push wildcard.crt /sdcard/Download/
# Then follow the GUI steps above
After installing the certificate, test by visiting your service:
https://service.example.com should load without warnings.
If you’re managing many devices, consider running your own internal CA (so you only distribute one root certificate) or distributing the certificate through device-management (MDM) tooling.
Before starting Traefik, you need to create a basic authentication file for the dashboard. Generate a password hash:
# Install apache2-utils for htpasswd
sudo apt install apache2-utils
# Create the users file with htpasswd
mkdir -p traefik/conf.d
htpasswd -c traefik/conf.d/users admin
# You'll be prompted to enter the password twice
Create the middleware configuration file at traefik/conf.d/middlewares.yml:
http:
  middlewares:
    basic-auth:
      basicAuth:
        usersFile: "/etc/traefik/conf.d/users"
Place your certificate files in the traefik/certs directory:
mkdir -p traefik/certs
# Copy your certificate and key files
cp /path/to/your/cert.crt traefik/certs/internal.example.com.crt
cp /path/to/your/key.key traefik/certs/internal.example.com.key
# Set appropriate permissions
chmod 600 traefik/certs/*.key
Now start Traefik:
docker compose up -d
This configuration does several important things:
- Redirects all HTTP traffic to HTTPS
- Requests Let’s Encrypt certificates automatically for routed domains
- Exposes the Traefik dashboard at traefik.example.com with basic authentication
- Watches conf.d/ for file-based (non-Docker) service configurations
The included whoami-external service is a simple test application that you can use to verify Traefik is working correctly with Let’s Encrypt certificates. Once deployed, you should be able to access it at https://whoami.example.com. The whoami-internal service demonstrates using custom provided certificates for internal services at https://whoami-internal.example.com.
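A small aside on the dashboard credentials from earlier: if you’d rather not install apache2-utils, openssl (already present on most servers) can write a compatible users entry. admin and changeme below are placeholders:

```shell
# Write an htpasswd-style line using openssl's APR1 (MD5) scheme
mkdir -p traefik/conf.d
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > traefik/conf.d/users
cat traefik/conf.d/users
```

Where available, htpasswd -B (bcrypt) is a stronger choice than the APR1 scheme.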
Now that Traefik is running, you can add services in two ways: as Docker containers or as host-based services.
For containerized applications, add them to the traefik-network and configure labels:
services:
  your-app:
    image: your-app:latest
    networks:
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.your-app.rule=Host(`app.example.com`)"
      - "traefik.http.routers.your-app.entrypoints=websecure"
      - "traefik.http.routers.your-app.tls=true"
      - "traefik.http.routers.your-app.tls.certresolver=letsencrypt"
      - "traefik.http.services.your-app.loadbalancer.server.port=8080"

networks:
  traefik-network:
    external: true
Sometimes you need to proxy services running directly on your host machine (CUPS, systemd services, local applications).
Traefik accesses the host through host.docker.internal, configured in the extra_hosts section:
extra_hosts:
  - "host.docker.internal:host-gateway"
Create configuration files in traefik/conf.d/ for each host service.
Example: Proxying CUPS Print Server at localhost:631
Create traefik/conf.d/cups.yml:
http:
  routers:
    cups:
      rule: "Host(`cups.example.com`)"
      entryPoints:
        - websecure
      service: cups
      tls:
        certResolver: letsencrypt
  services:
    cups:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:631"
Example: Generic host service on port 8080
Create traefik/conf.d/myservice.yml:
http:
  routers:
    myservice:
      rule: "Host(`myservice.example.com`)"
      entryPoints:
        - websecure
      service: myservice
      tls:
        certResolver: letsencrypt
  services:
    myservice:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:8080"
For services using self-signed certificates, omit the certResolver:
http:
  routers:
    internal-service:
      rule: "Host(`internal.example.com`)"
      entryPoints:
        - websecure
      service: internal-service
      tls: {}
  services:
    internal-service:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:8080"
Important considerations for host services:
- The host service must listen on an address reachable from containers: bind to 0.0.0.0 (or the Docker bridge address), since a service bound only to 127.0.0.1 may not be reachable via host.docker.internal
- Traefik watches the conf.d directory, so new configurations are picked up automatically
Note: Make sure your domain’s DNS A records point to your server’s IP address. Let’s Encrypt needs to verify domain ownership through the HTTP challenge.
We now have a server with:
- A firewall allowing only the ports we need
- Hardened, key-only SSH access, with Fail2ban watching for brute force
- Docker with a clean directory layout for services
- Traefik routing traffic and managing certificates automatically
At this point it might not look like much, because we’re not explicitly running an HTTP server serving the typical static site like the Apache demo page. But we did something far more powerful: we set up a Docker environment where you can pull containers (or compose files, or stacks) and run projects you find online as if you were running them locally, while exposing them directly on a production-like server.
The workflow for adding a new service is:
- Create a new directory under ~/services/ with its own docker-compose.yml
- Attach the traefik-network to the service and add the Traefik labels
- Run docker compose up -d
Voila, a new service is deployed with automatic HTTPS…
This might look like a lot, but the payoff of deploying in a clean and simple way is worth it. Each service is isolated, SSL is automatic, and adding new services is just a matter of configuration rather than complex server setup.
As next steps, you can look deeper into systemd services and the Linux permission system (and how they affect your Docker deployments), further custom sshd configuration, or more Docker topics such as volumes and Swarm.
Thank you for reading! If you liked it let me know
I very much appreciate your support!