there is a point where we all start getting comfy in the cloud. we get used to the simplicity and the intuitiveness of spinning up nodes with just a few lines of YAML, delegated to a data center somewhere else - probably on the same continent as us.
but when working in our local environment, things get comparatively more difficult. we can start a project, go through all the docs to configure our environment, work on it for a bit, and then get pulled away from it for a while. coming back to that project is then much harder, since it's very likely we've forgotten exactly what we did the last time we were focused on it.
compare that to the cloud, where our projects, for the most part, are self-contained with their CI/CD pipelines. we can always make new commits to the repo, and as long as the pipeline runs we know we are good.
so I thought to myself: what if I could bring the CI/CD methodology to my local environment? this would simplify the process of starting new projects locally, and would help me maintain them somewhere that's not my desktop.
my goal is to be able to:
1. start a new project,
2. define a pipeline for it that would compile the project and export a package to a self-hosted repository,
3. and then simply pull the package and run it on my current workstation without needing to reconfigure any settings or environment values.
roughly speaking, this is how it would look:
1. new project
2. add docker and compose
3. add a pipeline
```sh
# mkcd: a small shell helper that does mkdir -p followed by cd
> mkcd ~/dev/cool_project && go mod init example.com/cool_project
> cat << EOF > ~/dev/cool_project/main.go
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}
EOF
```
at the heart of this setup sits a version control service, one that would ideally also offer the ability to run pipelines and host package repositories. on the cloud we have GitHub, Azure DevOps and the likes. we could theoretically run self-hosted agents for them on our local network, but then we would need to expose our local network to the public, which brings numerous security and networking considerations such as a firewall and DDNS. that's a huge undertaking, and the goal here is to make life easier, so cloud repositories are a pass.
there are a number of open source implementations of Git; honourable mentions are gitness and gogs, each bringing a lot to the table, although ultimately I went with gitea. what you choose depends on your preference, and of course this setup, for the most part, will work with any of the three.
gitea.foo.lan is the domain I am serving my instance of gitea at. this is a CNAME pointing to foo.lan, which is an A record. both are set in my dns server.
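a minimal sketch of the gitea compose service is below; the router names, the internal port, and the traefik_public network are my assumptions, matched to the rest of this setup rather than copied from my exact file:

```yaml
version: "3"

services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    environment:
      # gitea maps GITEA__section__KEY env vars onto its app.ini
      - GITEA__server__ROOT_URL=https://gitea.foo.lan
    volumes:
      - ./data:/data
    networks:
      - traefik_public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`gitea.foo.lan`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls=true"
      # gitea's web ui listens on 3000 inside the container
      - "traefik.http.services.gitea.loadbalancer.server.port=3000"

networks:
  traefik_public:
    external: true
```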
ROOT_URL and the traefik labels indicate that we're using https for our connection. tls is terminated at traefik, as indicated by the labels. we will later create this tls certificate here, and provide those certs to traefik here.
I have a few servers on my local network, each of them running a docker engine with a number of applications that I self-host. if I host gitea on any of these hosts I would have to serve it over a specific port that's not :80/:443, and then I would have to remember which port I used for which application. that doesn't sound like making life easier.
with a reverse proxy I can configure hostname routing instead of port routing. ergo, instead of http://foo.lan:80 leading me to gitea, I could configure http://gitea.foo.lan to do the same. there are a good number of options for a reverse proxy, with the gold standard being nginx. but in the spirit of making life easier, I decided to go with traefik for this.
while nginx needs verbose configuration for our routing, traefik lets us delegate a large portion of this responsibility to traefik itself via container labels.
```sh
COMPOSE_PROJECT_NAME=traefik
HOST_DNS=foo.lan            # the dns name I've assigned to the host traefik will be running on
DYNAMIC_CONFIG=./config.yml # this is where we declare our tls certs for traefik to use
CERTIFICATES_DIR=./certs/   # mounting the certificates (generated in the next section)
TRAEFIK_VERSION=2.10
```
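and a sketch of the compose file these variables feed into; the flags are trimmed to the essentials, so treat it as a starting point rather than my exact setup:

```yaml
version: "3"

services:
  traefik:
    image: traefik:${TRAEFIK_VERSION}
    container_name: traefik
    command:
      # discover services via docker labels, but only the ones that opt in
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      # dynamic configuration (our tls certs) comes from the mounted file
      - "--providers.file.filename=/etc/traefik/config.yml"
      # the two endpoints discussed below
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ${DYNAMIC_CONFIG}:/etc/traefik/config.yml:ro
      - ${CERTIFICATES_DIR}:/certs/:ro
    networks:
      - traefik_public

networks:
  # the external network that gitea (and friends) attach to
  traefik_public:
    name: traefik_public
```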
we are telling traefik to expose both the web endpoint and the websecure endpoint. you could set up default https redirection if you choose to, although I'd recommend against it, as we'd run into a lot of issues once we are setting up the runner.
you may have noticed in the last section that we exposed gitea on an external network named traefik_public. that network is created here, and it's how traefik knows about other services on this particular docker socket.
continuing with the goal of making life easier, once I have my local git repository I would not want to refer to it by an IP, for a few reasons, namely:
1. most local devices lease their IP from the DHCP server and this IP could change. we could solve this with static IPs, however,
2. 192.168.0.29/user/repo.git does not feel right, or clean, or methodical, or easy.
so we need a dns server on our local network. for this I went with pi-hole, although there are other options such as blocky. one could also just edit the hosts file and assign a qualifier to an IP, although I wouldn't recommend it.
pick a server on your local network that is online as long as there is electricity to your home, and set up your dns server there. there is lots of great documentation on pihole, so I won't go into details and will just focus on running it in docker.
Tip
there are two ways to set up networking for pihole, or any other dns service you go with.
in bridge mode, pihole attaches itself to port :53 of the host, so we need to ensure no other services on the host are using this port. for instance: I set up pihole on my synology, which had the synology dns service running, and I had to shut that down.
in this mode we use the host's ip address as our dns server. this means that other containers running in the docker engine on this host will not be able to resolve dns names via the host's ip address.
more here.
macvlan requests a dedicated ip address for our network from the router, allowing us to use :53 of that ip instead of the host's. this is a good option if we do need :53 to remain free on the host.
the problem with macvlan is that the host where the docker engine is running would not be able to reach it, meaning our host cannot use our internal dns server.
there are ways to resolve this, but they add unnecessary maintenance to our setup.
there is good documentation on installing pihole with docker. we could use the following for a simple installation:
version:"3"# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/services:pihole:container_name:piholeimage:pihole/pihole:latest# For DHCP it is recommended to remove these ports and instead add: network_mode: "host"ports:- "53:53/tcp"- "53:53/udp"- "67:67/udp"# Only required if you are using Pi-hole as your DHCP server- "1080:80"# I'm reserving port 80 for traefikenvironment:TZ:'America/Toronto'# WEBPASSWORD: 'set a secure password here or it will be random'# Volumes store your data between container upgradesvolumes:- '/my/network/share/pihole/pihole:/etc/pihole'- '/my/network/share/pihole/dnsmasq.d:/etc/dnsmasq.d'- '/my/network/share/pihole/resolv.conf:/etc/resolv.conf'# https://github.com/pi-hole/docker-pi-hole#note-on-capabilitiescap_add:- NET_ADMIN# Required if you are using Pi-hole as your DHCP server, else not neededrestart:unless-stopped
you may have noticed that I'm mounting the /etc/resolv.conf file. this file tells the container which nameservers to use to resolve dns. since this container sits behind my router, where I set its own ip as the dns server, it would attempt to resolve hosts using its own ip and, unsurprisingly, fail. to avoid this, create a resolv.conf file and point it at an upstream resolver of your choice, for example:
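```sh
# /my/network/share/pihole/resolv.conf
# use any upstream resolver that is not pihole itself; cloudflare shown here as an example
nameserver 1.1.1.1
```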
after pihole is up and running, don't forget to log in and add a DNS record for the host you intend to run gitea on, foo.lan as an example. next, you'll need to set up a CNAME for gitea: gitea.foo.lan. what will happen here is:
1. a user requests gitea.foo.lan,
2. pihole resolves gitea.foo.lan to foo.lan's address, while gitea.foo.lan remains the host header value,
3. the request arrives at foo.lan on port :80, where traefik captures it and routes it to the destination application via its hostname, gitea.foo.lan. we defined this using the traefik labels in the gitea docker-compose.
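to sanity-check the records, we could query pihole directly. a quick sketch, where both IPs are examples; substitute your pihole's and foo.lan's actual addresses:

```sh
# ask pihole to resolve the CNAME; expect the target followed by its A record
$ dig +short gitea.foo.lan @192.168.0.2
# foo.lan.
# 192.168.0.29
```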
one final config we need is adding SSL to our gitea endpoint. this is because, to use the docker repository hosted on gitea, we need to run docker login, and that requires a secure connection.
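for reference, this is the command that eventually needs to succeed; until the root certificate is trusted it will fail with a certificate error:

```sh
# log in against the gitea container registry with your gitea credentials;
# fails with "x509: certificate signed by unknown authority" until our
# root certificate is installed on this machine
$ docker login gitea.foo.lan
```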
there are several ways to generate these certificates, ranging from the basic openssl method, to using custom libraries such as mkcert and minica. I decided to use the latter, but any of the above works.
to create a root certificate, and subsequently a certificate for our local dns endpoints, we could do:
Tip
whichever method you choose to use, you’d still need to install the root certificate on every machine that’s involved in your workflow. this includes your desktop/laptop as well as docker hosts and containers.
```sh
# - Run below.
# - Double click on .pfx to install. choose local machine, and choose Trusted Root.. Store.
# - Go to Cert Manager, find minica. Double Click -> Details Tab -> Copy to File. Anywhere you want.
# - Go to generated .cer file, double click, install for local machine in Trusted Root... Store.
# - Restart browsers.
openssl pkcs12 -export -out rootCA.pfx -inkey rootCA.key -in rootCA.pem
```
```sh
## install
go install github.com/jsha/minica@latest

# Generate a root key and cert in minica-key.pem, and minica.pem, then
# generate and sign an end-entity key and cert, storing them in ./foo.lan/
$ minica --domains foo.lan

# Wildcard
$ minica --domains '*.foo.lan'
```
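with the certs generated, we can fill in the dynamic config file we pointed traefik at earlier (DYNAMIC_CONFIG=./config.yml). a minimal sketch, assuming minica's output directory was copied into the ./certs/ folder that's mounted at /certs/:

```yaml
# config.yml - traefik dynamic configuration, loaded via the file provider
tls:
  certificates:
    - certFile: /certs/foo.lan/cert.pem
      keyFile: /certs/foo.lan/key.pem
```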
```sh
## from: https://github.com/FiloSottile/mkcert
$ mkcert -install
# Created a new local CA 💥
# The local CA is now installed in the system trust store! ⚡️
# The local CA is now installed in the Firefox trust store (requires browser restart)! 🦊

$ mkcert "foo.lan" "*.foo.lan"
# Created a new certificate valid for the following names 📜
#  - "foo.lan"
#  - "*.foo.lan"
#
# The certificate is at "./foo.lan+2.pem" and the key at "./foo.lan+2-key.pem" ✅
```
while for the most part we could leave the runner's generated config.yaml as is, we need to make two changes to allow our locally signed root certificate on the runner:
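a sketch of what those two changes could look like, assuming the gitea act_runner (whose config comes from act_runner generate-config) and assuming the root certificate has been added to the host's CA bundle under /etc/ssl/certs:

```yaml
# config.yaml (act_runner) - only the relevant section shown; paths are assumptions
container:
  # 1. mount the host's CA bundle (which now contains our root cert)
  #    into every job container
  options: -v /etc/ssl/certs:/etc/ssl/certs:ro
  # 2. whitelist the path; act_runner only permits volumes listed here
  valid_volumes:
    - /etc/ssl/certs
```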
next, add a Dockerfile to the root of the project:

```dockerfile
# build stage
FROM golang:alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o main .

# runtime stage
FROM alpine:latest
WORKDIR /root/
COPY --from=builder /app/main .
EXPOSE 3000
CMD ["./main"]
```
and finally, add our action to .gitea/workflows/ci.yaml:
```yaml
name: ci
on:
  push:
    branches:
      - 'main'
jobs:
  build:
    runs-on: cth-ubuntu-latest
    steps:
      - name: Extract Gitea Server URL
        id: extract-url
        run: >-
          echo "::set-output name=url::$(
            echo ${{ gitea.server_url }} |
            sed 's~^https://~~'
          )"
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        # https://docs.docker.com/build/buildkit/toml-configuration/
        with:
          buildkitd-flags: --debug
          config-inline: |
            [registry."${{ steps.extract-url.outputs.url }}"]
              insecure=true
              http=true
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          registry: ${{ steps.extract-url.outputs.url }}
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          # https://docs.gitea.com/next/usage/packages/container#push-an-image
          tags: "${{ steps.extract-url.outputs.url }}/${{ gitea.repository }}:${{ gitea.sha }}, \
            ${{ steps.extract-url.outputs.url }}/${{ gitea.repository }}:latest"
```
and voila! commit the changes and push them to your gitea instance; you should see the pipeline kick off shortly after the push. once the pipeline completes, you should find the docker image at https://gitea.foo.lan/user/repository/packages.
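this also closes the loop on the goals at the top: on any workstation that trusts the root certificate, pulling and running the package is all that's left (user/cool_project stands in for your own repository path):

```sh
# pull the image the pipeline just published, and run it
$ docker pull gitea.foo.lan/user/cool_project:latest
$ docker run --rm gitea.foo.lan/user/cool_project:latest
# Hello, World!
```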
it was quite a bit of work to get to this point, but I look at it as an investment for the future. since gitea uses github actions' syntax and approach, expanding on the current setup is relatively simple: we can customize our local CI/CD infrastructure by creating custom actions and chaining them together.