<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Rhys' Blog</title>
    <link>https://blog.rhysperry.com</link>
    <description>Write-ups of things I've been doing and what I've been thinking about. Infrastructure/Network/GitOps/Cyber.</description>
    <item>
      <title>Homelab Upgrade Pt.5 - GitOps Tooling Deployment</title>
      <link>https://blog.rhysperry.com/homelab-gitops/</link>
      <guid>https://blog.rhysperry.com/homelab-gitops/</guid>
      <pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;As a certified DevNetSecAIOps™ engineer, I'd be lost without some damn good tooling. &lt;/p&gt;
&lt;h2&gt;What the heck even is GitOps?&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Realistic GitOps/DevOps/DevSecOps diagram" src="/static/homelab-gitops/devops.png" /&gt;&lt;/p&gt;
&lt;p&gt;For many people in the tech industry, the question of what *Ops actually means can be quite difficult to answer - corporate wants people to do thing X, management wants people to do thing Y, and the engineers themselves thought they were meant to be doing Z. For some people who have been in the industry a bit longer, it might even be a painful term given that they were once hired as a simple "X engineer"... that then slowly morphed into DevOps... that is now DevSecOps or even DevSecAIOps... and for some reason their job role is 5x the scope it used to be with no clear definition.&lt;/p&gt;
&lt;p&gt;In my mind, GitOps is just doing DevOps within Git tooling and CI/CD workflows (i.e. proper releases, proper PR approval workflows, automated pipelines), with DevOps being a fairly vague marriage of application and infrastructure engineering, where engineers understand both sides and use Infrastructure as Code to manage everything. I'll likely do a blog post in the future discussing my thoughts on the can of worms that is *Ops, but I think that covers what I feel well enough to make this post make sense, and I'd encourage others to share their thoughts and opinions too :)&lt;/p&gt;
&lt;h2&gt;Deploying the tooling&lt;/h2&gt;
&lt;p&gt;As described in my &lt;a href="/homelab-kubernetes/"&gt;previous blog post&lt;/a&gt;, I now have a Kubernetes cluster ready to go in my homelab, so everything I deploy next will be on top of that. To keep things nicely managed, I'll be deploying all of the tooling as Helm releases managed by Terraform.&lt;/p&gt;
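&lt;p&gt;For anyone following along, the Terraform side of this is pretty minimal - a sketch of the provider setup is below. The kubeconfig path is just an assumption (point it at wherever your cluster credentials live), and the exact block syntax varies a little between helm provider versions:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;terraform {
  required_providers {
    helm = {
      source = &amp;quot;hashicorp/helm&amp;quot;
    }
  }
}

# Tell the helm provider how to reach the cluster
provider &amp;quot;helm&amp;quot; {
  kubernetes {
    config_path = &amp;quot;~/.kube/config&amp;quot; # assumption - use your own kubeconfig
  }
}
&lt;/code&gt;&lt;/pre&gt;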
&lt;h3&gt;GitLab&lt;/h3&gt;
&lt;p&gt;The first tool I needed to deploy was GitLab (y'know... for the Git in GitOps), which after digging around its documentation a bit certainly has an interesting relationship with its Helm deployment. The official stance is seemingly that they don't recommend deploying it using Helm - preferring a simple Linux deployment instead - but also that it's the best way to deploy the software for high availability and scalability.&lt;/p&gt;
&lt;p&gt;&lt;img alt="GitLab pods running" src="/static/homelab-gitops/gitlab-pods.png" /&gt;&lt;/p&gt;
&lt;p&gt;I decided to continue with the Helm method anyway, because I've heard anecdotally that a lot of big players are using it in production... so it should be fine for my homelab. GitLab's chart needed a lot of values changed to integrate properly with my existing ingress controller and cert-manager, and finding which of several similarly-named flags were the true canonical ones was a small challenge, but after working everything out the chart applied nicely and all of the pods were happy.&lt;/p&gt;
&lt;p&gt;&lt;img alt="GitLab jobs taking forever" src="/static/homelab-gitops/gitlab-jobs.png" /&gt;&lt;/p&gt;
&lt;p&gt;Well... that was after it took 40 minutes for the initial MinIO bucket and database migration jobs to run... on a fresh installation? Anyway, it ran eventually, I just needed to have some patience.&lt;/p&gt;
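&lt;p&gt;For illustration, the handful of values that actually mattered for the ingress and cert-manager integration ended up looking roughly like the sketch below. I'm recalling the flag names from the chart docs rather than copying my exact config, so double-check them against the chart's values reference:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;helm_release&amp;quot; &amp;quot;gitlab&amp;quot; {
  name             = &amp;quot;gitlab&amp;quot;
  namespace        = &amp;quot;gitlab&amp;quot;
  repository       = &amp;quot;https://charts.gitlab.io&amp;quot;
  chart            = &amp;quot;gitlab&amp;quot;
  create_namespace = true

  values = [
    yamlencode({
      global = {
        hosts = {
          domain = &amp;quot;k8s.rhysperry.com&amp;quot;
        }
        ingress = {
          class                = &amp;quot;cilium&amp;quot;
          configureCertmanager = false # use the existing cluster issuer instead
          annotations = {
            &amp;quot;cert-manager.io/cluster-issuer&amp;quot; = &amp;quot;letsencrypt-prod&amp;quot;
          }
        }
      }
      # skip the bundled copies of things the cluster already has
      certmanager     = { install = false }
      &amp;quot;nginx-ingress&amp;quot; = { enabled = false }
    })
  ]
}
&lt;/code&gt;&lt;/pre&gt;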
&lt;p&gt;&lt;img alt="GitLab runner running" src="/static/homelab-gitops/gitlab-runner.png" /&gt;&lt;/p&gt;
&lt;p&gt;I could then go to the web interface and log into the &lt;code&gt;root&lt;/code&gt; account using the automatically generated password (which is stored in a kubernetes secret), and everything looked happy - the default runner even started properly.&lt;/p&gt;
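&lt;p&gt;As a bonus, if you want that password available to the rest of your Terraform without kubectl-ing around, a small data source does the trick (assuming the kubernetes provider is configured). Note the secret name follows the &lt;code&gt;&amp;lt;release&amp;gt;-gitlab-initial-root-password&lt;/code&gt; pattern, so adjust it to match your release name:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;data &amp;quot;kubernetes_secret&amp;quot; &amp;quot;gitlab_root&amp;quot; {
  metadata {
    name      = &amp;quot;gitlab-gitlab-initial-root-password&amp;quot; # &amp;lt;release&amp;gt;-gitlab-initial-root-password
    namespace = &amp;quot;gitlab&amp;quot;
  }
}

output &amp;quot;gitlab_root_password&amp;quot; {
  value     = data.kubernetes_secret.gitlab_root.data[&amp;quot;password&amp;quot;]
  sensitive = true
}
&lt;/code&gt;&lt;/pre&gt;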
&lt;h3&gt;ArgoCD&lt;/h3&gt;
&lt;p&gt;The next tool I wanted to deploy was ArgoCD. ArgoCD is, unsurprisingly, a continuous deployment tool. It is built specifically for deploying applications from controlled sources such as Git repositories and Helm charts into kubernetes clusters, and it provides a lot of very useful capabilities such as staged update/configuration rollouts with blue/green or canary methods, as well as automated health checking of various application components.&lt;/p&gt;
&lt;p&gt;&lt;img alt="ArgoCD pods running" src="/static/homelab-gitops/argocd-pods.png" /&gt;&lt;/p&gt;
&lt;p&gt;Getting ArgoCD deployed was as simple as adding another Helm chart to my Terraform, and after applying it everything looked happy. I did need to tweak some of the chart values to make sure ArgoCD knew to use the Cilium ingress, as well as my custom domain and the cluster TLS issuer, however it was so much easier to figure out what needed doing compared to the huuuuuge GitLab chart.&lt;/p&gt;
&lt;p&gt;Similar to GitLab, the default credentials were in a kubernetes secret, and heading to the web interface showed everything as happy.&lt;/p&gt;
&lt;h2&gt;Creating a simple test app&lt;/h2&gt;
&lt;p&gt;To test whether all the tooling was working as expected, and to give a basic example of what GitLab+ArgoCD can do, I thought it'd be a good idea to put together a basic deployment, along with some CI in GitLab. A personal favourite of mine when working with kubernetes is a "fruits test"... I'm not really sure where I got the idea from, but it literally just involves using a &lt;code&gt;hashicorp/http-echo&lt;/code&gt; container to respond with the name of a fruit. I like it because it's good enough to test that all of the networking through components like ingress is working happily.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
    - name: banana-app
      image: hashicorp/http-echo
      args:
        - &amp;quot;-text=banana&amp;quot;

---

kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
    - port: 5678

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fruits-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod 
spec:
  ingressClassName: cilium
  rules:
  - host: fruits.k8s.rhysperry.com
    http:
      paths:
        - path: /banana
          pathType: Prefix
          backend:
            service:
              name: banana-service
              port:
                number: 5678
  tls:
    - hosts:
      - fruits.k8s.rhysperry.com
      secretName: fruits-tls
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I'd recommend splashing out and deploying two or three fruits (all behind the same ingress), but to save space on my blog, and to leave an exercise for the smartest of readers, the example above only has one. I put this file inside of a GitLab repository under &lt;code&gt;k8s/app.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;stages:
  - validate

validate-manifests:
  stage: validate
  image:
    name: ghcr.io/yannh/kubeconform:latest-alpine
    entrypoint: [&amp;quot;&amp;quot;]
  script:
    - /kubeconform -summary -output json k8s/app.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To then test if CI was working, I created the GitLab CI definition above to validate the schema of &lt;code&gt;app.yaml&lt;/code&gt;. This uses a handy tool called &lt;code&gt;kubeconform&lt;/code&gt; (because &lt;a href="https://noyaml.com/"&gt;not all yaml is good enough for kubernetes&lt;/a&gt; :D).&lt;/p&gt;
&lt;p&gt;&lt;img alt="GitLab CI job running" src="/static/homelab-gitops/gitlab-ci.png" /&gt;&lt;/p&gt;
&lt;p&gt;Once the GitLab CI definition was pushed, I could see that it had run without errors from the GitLab web interface, so that at least confirmed that GitLab and its runner were working as expected. Nice to see that the app YAML also passed validation successfully.&lt;/p&gt;
&lt;p&gt;&lt;img alt="GitLab CI job running" src="/static/homelab-gitops/argocd-app.png" /&gt;&lt;/p&gt;
&lt;p&gt;I could then add the repo and application in the ArgoCD web interface by pointing it at my GitLab repo and the &lt;code&gt;k8s&lt;/code&gt; path to test deploying it to the cluster (realistically this would be done using IaC and an Argo manifest, but this is just a test). It took a few seconds to sync, however eventually all of the resources were created successfully and deployed to the cluster. I could finally &lt;code&gt;curl&lt;/code&gt; some fresh fruits from the ingress :)&lt;/p&gt;
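&lt;p&gt;For the curious, the declarative version would look something like the sketch below, using a &lt;code&gt;kubernetes_manifest&lt;/code&gt; resource to keep the Application definition in Terraform. The repo URL is made up, and the auto-sync policy is just one option:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;kubernetes_manifest&amp;quot; &amp;quot;fruits&amp;quot; {
  manifest = {
    apiVersion = &amp;quot;argoproj.io/v1alpha1&amp;quot;
    kind       = &amp;quot;Application&amp;quot;
    metadata = {
      name      = &amp;quot;fruits&amp;quot;
      namespace = &amp;quot;argocd&amp;quot;
    }
    spec = {
      project = &amp;quot;default&amp;quot;
      source = {
        repoURL        = &amp;quot;https://gitlab.k8s.rhysperry.com/root/fruits.git&amp;quot; # hypothetical repo URL
        targetRevision = &amp;quot;main&amp;quot;
        path           = &amp;quot;k8s&amp;quot;
      }
      destination = {
        server    = &amp;quot;https://kubernetes.default.svc&amp;quot;
        namespace = &amp;quot;default&amp;quot;
      }
      syncPolicy = {
        automated = {
          prune = true # remove resources that disappear from Git
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;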
&lt;p&gt;I also tested making a change to the app definition in GitLab, and sure enough ArgoCD picked up on it and deployed it to the cluster. It should be noted that it wasn't a &lt;em&gt;graceful&lt;/em&gt; deployment though - ArgoCD rollouts will be a later blog post.&lt;/p&gt;
&lt;p&gt;All of the source code for deploying the tooling can be found in &lt;a href="https://github.com/rhysperry111/helios-kubernetes/tree/main/05-terraform-deploy-gitops"&gt;step 5&lt;/a&gt; of my &lt;a href="https://github.com/rhysperry111/helios-kubernetes"&gt;helios-kubernetes IaC repo&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Homelab Upgrade Pt.4 - Automagic Kubernetes?</title>
      <link>https://blog.rhysperry.com/homelab-kubernetes/</link>
      <guid>https://blog.rhysperry.com/homelab-kubernetes/</guid>
      <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;Is deploying kubeadm Kubernetes on a rolling release distro with no versioning probably a bad idea in the long run? Yes. Watch me do it anyway :D&lt;/p&gt;
&lt;h2&gt;A fairly good starting point&lt;/h2&gt;
&lt;p&gt;A few years ago when I first started messing around with Kubernetes at home, I got some Ansible playbooks made up to create a basic cluster and install storage and network interfaces. Those playbooks were a little janky though, so there's a few things that I'll need to change before I'm happy to run them on top of the VMs I made in &lt;a href="/homelab-terraform/"&gt;my previous post&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Multiple controllers.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HA kube-api access.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Installation of cluster interfaces.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Making the playbooks work for multiple controllers&lt;/h2&gt;
&lt;p&gt;My old ansible playbooks were built fairly quickly, and while they technically accepted controllers as a group of hosts to act upon... there really wasn't any thought put into how that would work, and the same steps were just run on all controllers. This was fine when I was making VMs manually just for testing (and so really didn't feel like spinning up more controllers anyway), but now that VM creation is automatic and I'm going to be moving a lot of resources into Kubernetes, it's actually fairly important.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;    - name: Bootstrap | Export join commands as facts (available to other plays)
      set_fact:
        worker_join_command: &amp;quot;{{ worker_join_cmd.stdout }}&amp;quot;
        controller_join_command: &amp;gt;-
          {{ worker_join_cmd.stdout }}
          --control-plane
          --certificate-key {{ cert_key_cmd.stdout | trim }}

- name: Controllers | Join secondary controllers to control plane
  hosts: controllers
  become: true
  serial: 1  # Join one at a time to avoid etcd race conditions
  tasks:

    - name: Secondary controllers | Join control plane
      command: &amp;gt;
        {{ hostvars[groups['controllers'][0]].controller_join_command }}
      args:
        creates: /etc/kubernetes/kubelet.conf
      when: inventory_hostname != groups['controllers'][0]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Luckily, getting multiple controllers working in a kubeadm cluster really isn't that hard. You simply initialize your main controller first with &lt;code&gt;kubeadm init&lt;/code&gt;, and then run your normal &lt;code&gt;kubeadm join&lt;/code&gt; command with the &lt;code&gt;--control-plane&lt;/code&gt; flag (plus the certificate key) added for the additional controllers.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Controllers showing properly in k9s" src="/static/homelab-kubernetes/controllers.png" /&gt;&lt;/p&gt;
&lt;p&gt;A quick run of the playbook and test with &lt;code&gt;k9s&lt;/code&gt; showed all the nodes as happy and with the correct roles.&lt;/p&gt;
&lt;h2&gt;Using kube-vip to make kube-api access HA&lt;/h2&gt;
&lt;p&gt;There isn't much point in having multiple controllers if you have no way to ensure that kube-api traffic actually gets sent to a controller that isn't down. Luckily, kube-vip is a project that provides a quick and simple way to create a floating IP to share between multiple machines (either using ARP or BGP), and it works nicely as a static pod so kubelet will handle running it.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;    - name: kube-vip | Generate static pod manifest
      shell: &amp;gt;
        ctr --namespace k8s.io run --rm --net-host
        ghcr.io/kube-vip/kube-vip:latest
        kube-vip-gen-{{ ansible_date_time.epoch }}
        /kube-vip manifest pod
        --interface {{ k8s_vip_interface }}
        --address {{ k8s_vip }}
        --controlplane
        --arp
        --leaderElection
        &amp;gt; /etc/kubernetes/manifests/kube-vip.yaml
      args:
        creates: /etc/kubernetes/manifests/kube-vip.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The standard kube-vip container very helpfully provides an inbuilt tool to generate a static pod manifest based on the arguments provided to it, and so it only took one very simple step to get kube-vip working on the controllers. Everything should just work now right? Magic HA done.&lt;/p&gt;
&lt;p&gt;For whatever reason I wasn't able to get any response back from the API server behind the VIP at all, and after a little bit of debugging with low-level CRI commands, it seems that I was running into the issue described on GitHub &lt;a href="https://github.com/kube-vip/kube-vip/issues/684"&gt;here&lt;/a&gt;. This seems to be an issue caused by a kubeadm behaviour change in 1.29+, which (provisionally) can be fixed by using &lt;code&gt;super-admin.conf&lt;/code&gt; instead of &lt;code&gt;admin.conf&lt;/code&gt; on the primary controller before &lt;code&gt;kubeadm init&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;    - name: kube-vip | Patch manifest to use super-admin.conf (required k8s &amp;gt;= 1.29)
      replace:
        path: /etc/kubernetes/manifests/kube-vip.yaml
        regexp: '(path: /etc/kubernetes/)admin\.conf'
        replace: '\1super-admin.conf'
      when: inventory_hostname == groups['controllers'][0]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Some simple Ansible allows this to be done, and after that everything seems to work. In the future this should probably be changed to a proper permissions role rather than handing kube-vip the admin config files, but there is ongoing discussion about how best to achieve that.&lt;/p&gt;
&lt;h2&gt;Moving the cluster interface installation from Ansible to Terraform&lt;/h2&gt;
&lt;p&gt;My old playbooks integrated installation of the CNI and CSI as core components within the Ansible run. That was because naive younger me hadn't discovered the beauty of terraform and a properly tracked state. Since the CNI and CSI (Cilium and Longhorn in my case) can both be installed perfectly fine as cluster resources with Helm charts, they would actually be served a lot better by being managed by terraform anyway.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;helm_release&amp;quot; &amp;quot;cilium&amp;quot; {
  name             = &amp;quot;cilium&amp;quot;
  namespace        = &amp;quot;kube-system&amp;quot;
  repository       = &amp;quot;https://helm.cilium.io/&amp;quot;
  chart            = &amp;quot;cilium&amp;quot;
  create_namespace = true
  wait             = false

  values = [
    yamlencode({
      ipam = {
        mode = &amp;quot;kubernetes&amp;quot;
      }
      kubeProxyReplacement = true
      externalIPs = {
        enabled = true
      }
      ingressController = {
        enabled = true
      }
      k8sServiceHost       = var.k8s_vip
      k8sServicePort       = 6443
      bpf = {
        masquerade = true
      }
      bgpControlPlane = {
        enabled = true
      }
    })
  ]
}

resource &amp;quot;helm_release&amp;quot; &amp;quot;longhorn&amp;quot; {
  name             = &amp;quot;longhorn&amp;quot;
  namespace        = &amp;quot;longhorn-system&amp;quot;
  repository       = &amp;quot;https://charts.longhorn.io&amp;quot;
  chart            = &amp;quot;longhorn&amp;quot;
  create_namespace = true
  wait             = false
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simply getting rid of the steps in the Ansible to install those, and creating them as Terraform resources instead worked first try. That can't bode well for the next step...&lt;/p&gt;
&lt;h2&gt;Some basic testing&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Happy pods showing in k9s" src="/static/homelab-kubernetes/pods.png" /&gt;&lt;/p&gt;
&lt;p&gt;Cluster made... everything will just work now right? Well there isn't any red in k9s for the CNI and CSI pods, so at least that's a good start. Maybe I'll try spinning up some pods and doing some basic tests before calling it a day though.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Happy pods showing in k9s" src="/static/homelab-kubernetes/networknt.png" /&gt;&lt;/p&gt;
&lt;p&gt;Pinging the internet works - good. I can reach some internal services by their API - also good. Connectivity between pods works - very good. But wait what... I can't reach &lt;code&gt;google.com&lt;/code&gt;??? Surely this is a DNS issue then, but seemingly not because &lt;code&gt;nslookup&lt;/code&gt; gives me the correct IP. Hmmm. Maybe MTU issues?!?!&lt;/p&gt;
&lt;p&gt;Wait a damn second. Look at that curl debug output. It's reaching out to an IP that wasn't any of the DNS responses. Also wait an even more damned second... that's &lt;strong&gt;my public IP&lt;/strong&gt;. Huh?&lt;/p&gt;
&lt;p&gt;&lt;img alt="DNS resolution priority" src="/static/homelab-kubernetes/dns.png" /&gt;&lt;/p&gt;
&lt;p&gt;After an annoying amount of digging, &lt;code&gt;getent hosts&lt;/code&gt; showed the cause of the problem. For whatever reason, whenever I asked the OS to resolve a name like &lt;code&gt;google.com&lt;/code&gt;, it would try appending the DNS search domain first, making it &lt;code&gt;google.com.rhysperry.com&lt;/code&gt; in my case... and since I have wildcard DNS set up for my domain, that returned my public IP.&lt;/p&gt;
&lt;p&gt;But why was this behaviour happening? Well the pod gets its &lt;code&gt;resolv.conf&lt;/code&gt; from the host node, but I don't remember setting that search domain in my cloud-init settings. Well, as it turns out, that's the exact problem - if you don't set a search domain in cloud-init, Proxmox will automatically use &lt;em&gt;its own&lt;/em&gt; search domain as the default. This is a little funny - the whole reason I can't reach google inside of a container is because of a setting 3 levels lower down in the stack.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;   searchdomain = &amp;quot;.&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Simply adding an extra parameter to my VM creation terraform to unset the search domain fixed the problem, and I could finally sleep well knowing that I was a true kubestronaut. The obvious next steps are to get some cool stuff running in kubernetes... but that feels like something I should leave for my next post :)&lt;/p&gt;
&lt;p&gt;All of the source code for this can be found in &lt;a href="https://github.com/rhysperry111/helios-kubernetes/tree/main/02-ansible-install-kubernetes"&gt;step 2&lt;/a&gt; and &lt;a href="https://github.com/rhysperry111/helios-kubernetes/tree/main/03-terraform-deploy-interfaces"&gt;step 3&lt;/a&gt; of my &lt;a href="https://github.com/rhysperry111/helios-kubernetes"&gt;helios-kubernetes IaC repo&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Homelab Upgrade Pt.3 - Proxmox x Terraform = ...</title>
      <link>https://blog.rhysperry.com/homelab-terraform/</link>
      <guid>https://blog.rhysperry.com/homelab-terraform/</guid>
      <pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;I love terraform... like reeeeeally love terraform... how nicely does it play with Proxmox though?&lt;/p&gt;
&lt;h2&gt;A nice problem to have&lt;/h2&gt;
&lt;p&gt;&lt;img alt="Resource usage on Proxmox" src="/static/homelab-terraform/usage.png" /&gt;&lt;/p&gt;
&lt;p&gt;As mentioned in my &lt;a href="/homelab-cluster/"&gt;Homelab Upgrade Pt.2 blog post&lt;/a&gt;, I now have a slightly-beefier-than-needed Proxmox cluster in my homelab. So far, it's only really been running old VMs that I migrated from my old server, but I'd like to start doing things properly the same way I would at work - having everything automated with Infrastructure as Code, with the source in Git for a nice audit log (although change-approval might be out of scope given I'm a one-man-band), and CI/CD to deploy changes and ensure that there is no unmanaged drift.&lt;/p&gt;
&lt;p&gt;The first step in getting this sorted out is working out how to reliably terraform VMs in Proxmox, and so this blog post will detail that journey.&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;A little surprisingly, given how popular Proxmox is, it doesn't have an official terraform provider (&lt;a href="https://bugzilla.proxmox.com/show_bug.cgi?id=3497"&gt;although there is an issue open...&lt;/a&gt;). It seems that the community has converged on &lt;a href="https://github.com/Telmate/terraform-provider-proxmox"&gt;one maintained by Telmate&lt;/a&gt;, so I guess I'll be using that one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;As a sidenote, even though I'm mentioning "terraform" throughout this post, I'm only doing that to refer to the technology. Software wise everything is actually OpenTofu which is a fork of Terraform that came about after HashiCorp did some license fuckery. You can read more about it &lt;a href="https://www.opencoreventures.com/blog/hashicorp-switching-to-bsl-shows-a-need-for-open-charter-companies"&gt;here&lt;/a&gt;, &lt;a href="https://opentofu.org/blog/our-response-to-hashicorps-cease-and-desist/"&gt;here&lt;/a&gt; and &lt;a href="https://www.runtime.news/hashicorps-threats-to-a-terraform-fork-fell-flat-and-might-have-made-it-stronger/"&gt;here&lt;/a&gt;, but if you care about FOSS it's worth considering switching.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Anyway, this should be simple - reading the docs, the provider just seems to need an API token.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;terraform {
  required_providers {
    proxmox = {
      source  = &amp;quot;telmate/proxmox&amp;quot;
      version = &amp;quot;3.0.1&amp;quot;
    }
  }
}

provider &amp;quot;proxmox&amp;quot; {
  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_token_id
  pm_api_token_secret = var.proxmox_token_secret
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I mean... that seems to work. I made a basic &lt;code&gt;proxmox_vm_qemu&lt;/code&gt; resource to clone my Arch cloud-init VM, &lt;code&gt;tofu apply&lt;/code&gt;ed, and yeah a VM appeared in Proxmox. Great :D&lt;/p&gt;
&lt;p&gt;This should be a cakewalk.&lt;/p&gt;
&lt;h2&gt;The 3.x.x problem&lt;/h2&gt;
&lt;p&gt;The Telmate provider and the beautiful people that maintain it have put a lot of effort into a major rewrite for major version 3 (and thank you for that... genuinely &amp;lt;3). Turns out though... some of the initial stable releases in the 3.x.x line are a little bit wonky, and so ramping up the complexity and count of resources deployed sometimes spits out some interesting errors - some disk configurations error on apply, and sometimes it just seems like the provider is making an API request that Proxmox doesn't support...&lt;/p&gt;
&lt;p&gt;After a fair bit of digging around on GitHub issues (where it seems I wasn't the only person running into problems), I found that most of them could be solved simply by moving to an RC release.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;version = &amp;quot;3.0.2-rc07&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Yeah... it's not great to have your IaC pinning release candidates explicitly, but hey - a dirty solution is better than no solution any day of the week. And you can't really expect me to go around making VMs from the Proxmox UI can you? I'd rather debug funny IaC providers than perform ClickOps like an animal.&lt;/p&gt;
&lt;h2&gt;HA ghosts&lt;/h2&gt;
&lt;p&gt;I'd already made a simple test VM... and I'd now got the version issues out of the way. Probably a good next test is to try and spin up a VM with all the bells and whistles I'll be using eventually.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;proxmox_vm_qemu&amp;quot; &amp;quot;test&amp;quot; {
  name        = &amp;quot;test-vm&amp;quot;
  target_node = &amp;quot;prox&amp;quot;
  clone       = &amp;quot;arch-cloud-template&amp;quot;
  full_clone  = true
  os_type     = &amp;quot;cloud-init&amp;quot;
  vm_state    = &amp;quot;running&amp;quot;
  bios        = &amp;quot;ovmf&amp;quot;
  scsihw      = &amp;quot;virtio-scsi-single&amp;quot;
  boot        = &amp;quot;order=virtio0&amp;quot;
  agent       = 1

  cpu {
    type  = &amp;quot;host&amp;quot;
    cores = 4
  }

  memory = 8192

  disks {
    virtio {
      virtio0 {
        disk {
          size    = &amp;quot;30G&amp;quot;
          storage = &amp;quot;vault&amp;quot;
        }
      }
    }
    ide {
      ide0 {
        cloudinit {
          storage = &amp;quot;vault&amp;quot;
        }
      }
    }
  }

  network {
    id       = 0
    model    = &amp;quot;virtio&amp;quot;
    bridge   = &amp;quot;vmbr0&amp;quot;
    firewall = true
  }

  ipconfig0  = &amp;quot;ip=192.168.0.221/24,gw=192.168.0.1&amp;quot;
  nameserver = &amp;quot;1.1.1.1&amp;quot;
  sshkeys    = var.ssh_public_key
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The VM made itself successfully... and after a bit of waiting around I could even SSH into it, so it seems cloud-init was working perfectly as well. Yippee!&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Plan: 0 to add, 1 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I then ran a &lt;code&gt;tofu plan&lt;/code&gt; to see how happy the state consistency was and wait... what? I've changed nothing, and for whatever reason the provider thinks there's a change that needs to be applied already.&lt;/p&gt;
&lt;p&gt;After skimming the diff, it seems that for whatever reason the &lt;code&gt;hastate&lt;/code&gt; variable was planned to change, despite me never having defined it initially. As it turns out, if you have HA enabled in Proxmox, Proxmox will automatically add a &lt;code&gt;hastate&lt;/code&gt; to every VM you make upon creation, and due to some slight inconsistencies in the terraform provider, these aren't properly brought into the statefile. The provider will then see this discrepancy during its plan, and try to revert it. Every single damn time.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;hastate = &amp;quot;started&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The fix was fairly simple though, just be explicit about the &lt;code&gt;hastate&lt;/code&gt; you want the VM to have in the terraform definition, and then the provider would know to properly create and track it in Proxmox.&lt;/p&gt;
&lt;h2&gt;Scaling things up&lt;/h2&gt;
&lt;p&gt;Since the single-VM test went so swimmingly, it was a good time to try and increase the VM count and see what broke next.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;proxmox_vm_qemu&amp;quot; &amp;quot;controllers&amp;quot; {
  count       = var.controller_count
  name        = format(&amp;quot;controller-%02d&amp;quot;, count.index + 1)
  target_node = element(var.proxmox_nodes, count.index % length(var.proxmox_nodes))
  # ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I used a simple &lt;code&gt;count&lt;/code&gt; in terraform to create multiple instances of the resource, and a slight hack to spread the VMs out across my &lt;code&gt;target_node&lt;/code&gt;s as Proxmox doesn't fully support machines just being owned by the cluster and moved around with something like &lt;a href="https://knowledge.broadcom.com/external/article/391137/vmware-drs-overview-optimizing-resource.html"&gt;DRS&lt;/a&gt; yet.&lt;/p&gt;
&lt;p&gt;I also added &lt;code&gt;pm_parallel = 4&lt;/code&gt; to the provider definition, because ain't nobody got time to wait for each VM to come up sequentially.&lt;/p&gt;
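&lt;p&gt;That's just one extra line on the provider block from earlier:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;provider &amp;quot;proxmox&amp;quot; {
  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_token_id
  pm_api_token_secret = var.proxmox_token_secret
  pm_parallel         = 4 # create up to 4 resources concurrently
}
&lt;/code&gt;&lt;/pre&gt;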
&lt;pre&gt;&lt;code&gt;Error: 500 unable to create VM 108: config file already exists
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Well that's not good. Upon apply it seems that all 3 counts of my VM tried to create with the same ID? Well it turns out there's a race condition in the provider: each resource queries the next available VM ID from the Proxmox API at the same time, gets the same answer, and then tries to create a VM with that ID. In reality this is caused by bad API design - for a static ID like that it'd be a lot better to not need to include it in the request, and just return what was used in the response, however that's what you get without a first-party provider :)&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;proxmox_vm_qemu&amp;quot; &amp;quot;controllers&amp;quot; {
  count = var.controller_count
  vmid  = 800 + count.index
  # ...
}

resource &amp;quot;proxmox_vm_qemu&amp;quot; &amp;quot;workers&amp;quot; {
  count = var.worker_count
  vmid  = 850 + count.index
  # ...
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The unsatisfying fix to this was really just to hardcode the VM IDs in terraform. A simple base+offset method is perfectly fine. I can hear you screaming from the sidelines already "but what if you have more than 50 controllers?"... and well... I guess I'll cross that bridge when I get to it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;There is one more issue that I'd like to talk about, but I haven't got to the bottom of yet so I'll leave it out for now. For some reason, sometimes destroying a VM from terraform will leave its disks in the datastore, and then recreating a VM with the same ID causes havoc. If anybody has any hints... please help... and if I find the cause I'll add it into this post.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Making it proper&lt;/h2&gt;
&lt;p&gt;Now that I was fairly confident I'd worked out all the issues with the provider, I could actually do what makes terraform so magic - splitting out the actually useful information into well-documented tfvars, and using terraform as a programming language to turn that described state into the needed resources.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;locals {
  vm_defaults = {
    clone              = var.template_name
    full_clone         = true
    os_type            = &amp;quot;cloud-init&amp;quot;
    start_at_node_boot = true
    vm_state           = &amp;quot;running&amp;quot;
    hotplug            = &amp;quot;network,disk,usb&amp;quot;
    bios               = &amp;quot;ovmf&amp;quot;
    scsihw             = &amp;quot;virtio-scsi-single&amp;quot;
    boot               = &amp;quot;order=virtio0&amp;quot;
    agent              = 1
    hastate            = &amp;quot;started&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;First it was worth breaking out a lot of the boilerplate VM options (that didn't need to be user-tuned) into locals to avoid duplication.&lt;/p&gt;
&lt;p&gt;Then I wrote up a nice tfvars definition with sensible defaults, and changed main.tf to use those.&lt;/p&gt;
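&lt;p&gt;The variable definitions themselves are nothing fancy - here's a sketch of a couple of them (the defaults are just what seemed sensible to me, tune to taste):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;variable &amp;quot;controller_count&amp;quot; {
  description = &amp;quot;Number of Kubernetes controller VMs to create&amp;quot;
  type        = number
  default     = 3
}

variable &amp;quot;worker_count&amp;quot; {
  description = &amp;quot;Number of Kubernetes worker VMs to create&amp;quot;
  type        = number
  default     = 3
}
&lt;/code&gt;&lt;/pre&gt;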
&lt;pre&gt;&lt;code&gt;# Generated by Terraform. Do not edit manually.

all:
  vars:
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
    ansible_python_interpreter: /usr/bin/python3
    ansible_user: arch
  children:
    controllers:
      hosts:
%{ for i, c in controllers ~}
        ${c.name}:
          ansible_host: ${network_prefix}.${controller_ip_start + i}
%{ endfor ~}
    workers:
      hosts:
%{ for i, w in workers ~}
        ${w.name}:
          ansible_host: ${network_prefix}.${worker_ip_start + i}
%{ endfor ~}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As an extra nice step, I created a template that takes the IPs of the VMs generated by terraform and renders an ansible inventory that can be used later.&lt;/p&gt;
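&lt;p&gt;The Terraform side of that is just a &lt;code&gt;templatefile&lt;/code&gt; call rendered into a &lt;code&gt;local_file&lt;/code&gt; resource - roughly like the sketch below (the file paths are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;quot;local_file&amp;quot; &amp;quot;ansible_inventory&amp;quot; {
  filename = &amp;quot;${path.module}/inventory.yaml&amp;quot; # illustrative output path
  content = templatefile(&amp;quot;${path.module}/templates/inventory.yaml.tftpl&amp;quot;, {
    controllers         = proxmox_vm_qemu.controllers
    workers             = proxmox_vm_qemu.workers
    network_prefix      = var.network_prefix
    controller_ip_start = var.controller_ip_start
    worker_ip_start     = var.worker_ip_start
  })
}
&lt;/code&gt;&lt;/pre&gt;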
&lt;p&gt;&lt;img alt="VMs in Proxmox" src="/static/homelab-terraform/vms.png" /&gt;&lt;/p&gt;
&lt;p&gt;After all of that mess... everything just kinda works as expected. I now have a nice automated way of building VMs that I can build on top of :)&lt;/p&gt;
&lt;p&gt;All of the source code for this can be found in &lt;a href="https://github.com/rhysperry111/helios-kubernetes/tree/main/01-terraform-proxmox-vms"&gt;step 1&lt;/a&gt; of my &lt;a href="https://github.com/rhysperry111/helios-kubernetes"&gt;helios-kubernetes IaC repo&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Homelab Upgrade Pt.2 - Proxmox and CEPH</title>
      <link>https://blog.rhysperry.com/homelab-cluster/</link>
      <guid>https://blog.rhysperry.com/homelab-cluster/</guid>
      <pubDate>Thu, 23 Oct 2025 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;Installing the hypervisor is the easy bit right? Nothing complicated could happen at all...&lt;/p&gt;
&lt;h2&gt;The plan&lt;/h2&gt;
&lt;p&gt;As I talked about in my &lt;a href="/homelab-hardware/"&gt;previous blog post&lt;/a&gt;... I am now the proud owner of more compute than is reasonable. That hardware now needs a purpose, and given that my current infrastructure runs on Proxmox it makes sense to use that. Proxmox has good support for clustering both for storage and VM management, so it should be a fun experiment.&lt;/p&gt;
&lt;p&gt;To get a fully-running cluster, I roughly need to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install boot drives in each new node&lt;/li&gt;
&lt;li&gt;Network the servers&lt;/li&gt;
&lt;li&gt;Install Proxmox on each host&lt;/li&gt;
&lt;li&gt;Form a cluster with my old server&lt;/li&gt;
&lt;li&gt;Migrate VMs from the old server to the new servers&lt;/li&gt;
&lt;li&gt;Move the drives from the old server to the new nodes&lt;/li&gt;
&lt;li&gt;Form a CEPH storage pool on the new drives&lt;/li&gt;
&lt;li&gt;Migrate the VMs to the CEPH storage&lt;/li&gt;
&lt;li&gt;Retire old server&lt;/li&gt;
&lt;li&gt;Profit?&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Preparing the hardware&lt;/h2&gt;
&lt;p&gt;As bought, the server contained no drives. I am eventually going to move all 8 of the SSDs from my old server to the new nodes to make a CEPH pool, but I can't do that until Proxmox is up and running so I can migrate VMs first. Since the server has spare PCIe slots, and I have spare NVMes lying around, using those is probably the best path forward.&lt;/p&gt;
&lt;p&gt;&lt;img alt="The M.2 adapters I bought" src="/static/homelab-cluster/m2-adapter.png" /&gt;&lt;/p&gt;
&lt;p&gt;I got some snazzy M.2 to standard PCIe adapters on Amazon, put in the NVMe SSDs I already had, and that should be good enough to boot from.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Networking at the back of the server" src="/static/homelab-cluster/blinkenlights.png" /&gt;&lt;/p&gt;
&lt;p&gt;Each node has a NIC with 2xSFP+ ports on the back, so I also got the networking wired up to a switch (all on one VLAN for now), and hopefully once Proxmox is installed we can get everything doing LACP nicely.&lt;/p&gt;
&lt;h2&gt;Installing Proxmox&lt;/h2&gt;
&lt;p&gt;Running through the Proxmox install was easy as anything - just boot the server from the installation ISO, tell the server to install to the SSD, give it a hostname and IP, then set a password, and bang. Done. Reboot and we're all fine and dandy right?&lt;/p&gt;
&lt;p&gt;Wrong. Reboot and no bootable device found... huh? Well after a bit of digging around it turns out that even though the server is modern enough to support NVMe drives... it doesn't support them as boot drives. Ruh roh.&lt;/p&gt;
&lt;p&gt;&lt;img alt="PCIe riser with MicroSD card slot" src="/static/homelab-cluster/microsd.png" /&gt;&lt;/p&gt;
&lt;p&gt;Then I noticed an awfully convenient feature of the PCIe risers that came with the servers... is that a MicroSD card slot? Yes. Yes it is. And as it turns out it exists for this exact reason.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;timeout -1
textonly true

default_selection &amp;quot;Proxmox&amp;quot;

scanfor manual

menuentry &amp;quot;Proxmox&amp;quot; {
    volume 3c427ff6-4540-40d5-975e-3163901bff51
    loader /EFI/proxmox/shimx64.efi
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It was a little cursed, and I still don't know how I feel about it, but the server &lt;em&gt;does&lt;/em&gt; support booting from the MicroSD, and so with a small &lt;a href="https://www.rodsbooks.com/refind/"&gt;rEFInd&lt;/a&gt; shim (where rEFInd has its own NVMe drivers), it was possible to chainload Proxmox.&lt;/p&gt;
&lt;p&gt;So Proxmox was now booted and I could get to the web UI - great :D&lt;/p&gt;
&lt;p&gt;I then made some quick tweaks to the Proxmox networking to make sure that each node was treating its two uplinks as a proper LACP bond, and then the nodes were good to go.&lt;/p&gt;
&lt;h2&gt;Migrating old VMs&lt;/h2&gt;
&lt;p&gt;The plan to migrate the VMs from my old server (so I could then salvage it from RAM and SSDs) was fairly simple. Form one giant Proxmox cluster with the 4 new servers and the one old one... then just migrate the VMs to the local datastores on the new servers.&lt;/p&gt;
&lt;p&gt;There was one slight issue though... since the old server was a single unit with 16TB of aggregated storage, I'd made a few VMs that had more than 1TB of disk storage on their own, and they were way too large to migrate to any individual one of the new nodes. After a bit of tedious work clearing caches on some old VMs, cleaning up logs, and thinking "hmmm... do I really need this old test ISO from 4 years ago that I kinda forgot existed", I was ready to start moving VMs.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Migrating VMs in the Proxmox UI" src="/static/homelab-cluster/migrate.png" /&gt;&lt;/p&gt;
&lt;p&gt;Actually migrating the VMs was almost &lt;em&gt;too&lt;/em&gt; simple. Just right click the VM, click "Migrate", pick a destination server and Proxmox gets going. I won't pretend it was a quick process migrating 4TB+ of data between the servers, but eventually after a few hours of waiting, everything was moved across.&lt;/p&gt;
&lt;p&gt;Finally, once all the VMs had been moved across to the new servers, I was ready to shut down all the servers, and gut the hardware from the old server to be reused in the new servers. I was able to double the amount of RAM in each node (yippee), giving me 256GB across the cluster, and I then moved the 8 drives from the old server to be spread out amongst the nodes with 2 in each.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I should also note that there was quite a complicated step in between here and the next step... reflashing the inbuilt RAID cards to use generic HBA firmware to ensure that the OS and CEPH got full control of the drives without any middleware fuckery. I've realised that I didn't actually document any of that process at the time of doing it, but I may at some point in the future write a short blog post about it if I feel it might be useful.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Playing around with CEPH&lt;/h2&gt;
&lt;p&gt;With the servers now booted with more RAM, and 2x2TB drives each, it was time to work out a nice way to get CEPH working. For those who don't know, CEPH is a distributed filesystem - i.e. each server has its own storage, and they work together to pool that storage into a single filesystem. This is unlike the confusingly-named clustered filesystem types, which are built around single block devices that have shared access from many devices (like you may see in SAN architectures).&lt;/p&gt;
&lt;p&gt;CEPH is cool in that it was built with resiliency in mind, with the ability to define as many different levels of failure zones as you want. You can specify which drives are connected to which backplanes, which backplanes are in which servers, which servers are in which racks (and so on), and it's CEPH's job to work out how to best protect your data given the number of replicas you let it make. The only downside to all of this is that it generally needs &lt;em&gt;very&lt;/em&gt; fast networking to work nicely, and ideally networking that is entirely separate from your normal data networking. Sadly each of my own servers only has 2 links, and since I want LACP redundancy, CEPH is going to have to learn how to share :)&lt;/p&gt;
&lt;p&gt;&lt;img alt="CEPH services on each server" src="/static/homelab-cluster/ceph-services.png" /&gt;&lt;/p&gt;
&lt;p&gt;Proxmox makes CEPH very easy to get started with, so all I really had to do was enable CEPH on each node, and pick which servers would take which roles in the cluster. It is generally recommended to have a proper quorum of monitors (hence I made 3), and at the very least highly available managers and metadata servers (hence making a primary/secondary of each).&lt;/p&gt;
&lt;p&gt;&lt;img alt="CEPH services on each server" src="/static/homelab-cluster/ceph-osds.png" /&gt;&lt;/p&gt;
&lt;p&gt;I was then able to format the drives in my servers as CEPH OSDs from the UI, and just like magic I had a huge CEPH pool.&lt;/p&gt;
&lt;p&gt;I also needed to set up a CephFS filesystem. By default CEPH is a simple block store, which is just fine for VMs, but Proxmox needs a proper filesystem for ISO images. Luckily, this was also just one click away and I was able to make a CephFS filesystem easily.&lt;/p&gt;
&lt;p&gt;As a last step, I needed to actually migrate all of the VMs from the local host datastores to the CEPH cluster. This was easy like before... but definitely not quick, as it was effectively a hugely resource-intensive self-inflicted DDoS, with every server copying terabytes of data to all of the other servers, but after a few days all was good and all my VMs could be started again.&lt;/p&gt;
    </item>
    <item>
      <title>Homelab Upgrade Pt.1 - New hardware!</title>
      <link>https://blog.rhysperry.com/homelab-hardware/</link>
      <guid>https://blog.rhysperry.com/homelab-hardware/</guid>
      <pubDate>Wed, 15 Oct 2025 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;Did somebody say server? Wait server cluster? Multi-node server chassis?!?!? Count me in.&lt;/p&gt;
&lt;h2&gt;The eBay find&lt;/h2&gt;
&lt;p&gt;Every once in a while I like to peruse eBay for interesting deals, and a few weeks ago I got interested in a specific type of server...&lt;/p&gt;
&lt;p&gt;&lt;img alt="Example blade server" src="/static/homelab-hardware/blade-server.png" /&gt;&lt;/p&gt;
&lt;p&gt;I'm sure we've all seen huge multi-blade chassis servers, and thought "damn those would be impractical for a homelab". They're usually at a minimum 6-8U, the loudest machines in existence, and endlessly thirsty for power. Well, they actually have a slightly more manageable little sibling: node servers.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Example node server" src="/static/homelab-hardware/node-server.png" /&gt;&lt;/p&gt;
&lt;p&gt;After looking around at the various options, anything from the &lt;a href="https://www.dell.com/support/product-details/en-us/product/poweredge-c6300/overview"&gt;Dell PowerEdge C6300&lt;/a&gt; series seemed like a decent fit. They were modern enough to have DDR4 RAM, but old enough to be fairly cheap on the used market. They were small enough to fit under a sofa (don't ask), but large enough to house 4 nodes with a number of drives each, which I thought was a reasonable node count for a clustered environment (3 being the minimum for most clustered applications).&lt;/p&gt;
&lt;p&gt;&lt;img alt="Ebay listing" src="/static/homelab-hardware/ebay.png" /&gt;&lt;/p&gt;
&lt;p&gt;After a few weeks of searching I found an impossibly good price for one at £175 (update Mar 2026: yeah... the same device is ~£900 now) - it seemingly had everything I needed as well so I pulled the trigger and decided to work out the details once it arrived.&lt;/p&gt;
&lt;h2&gt;The hardware&lt;/h2&gt;
&lt;p&gt;Once the hardware arrived (and I worked out how to haul 2 giant boxes up the stairs) I gave it a very quick check over.&lt;/p&gt;
&lt;p&gt;&lt;img alt="First look at node after unboxing" src="/static/homelab-hardware/unbox.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The spec as delivered was (with 4x for the chassis-total):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2x Intel(R) Xeon(R) CPU E5-2630 v3 -&amp;gt; 32c64t @2.4GHz&lt;/li&gt;
&lt;li&gt;32GB DDR4 -&amp;gt; 128GB DDR4&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I also had 128GB of RAM and a bunch of drives in my old server, but until I had migrated the workload off of that it had to stay intact for now, and that's a story for my next blog post :)&lt;/p&gt;
&lt;h2&gt;Fans go brrr... too much brrr&lt;/h2&gt;
&lt;p&gt;Another thing I noticed: my belief that node servers would be the more manageable, quieter cousin of blade servers might have just been me finding an excuse to purchase some cool hardware. In reality, when the servers were booted they would run their fans at near 100% all the time (and these are some of the beefier server fans I've had the misfortune of dealing with), with absolutely no way to spin them down even through IPMI, as fan control was handled by the interface-less chassis and not the nodes.&lt;/p&gt;
&lt;p&gt;I initially thought it'd be fine, but after a few weeks of being unable to solve the problem in software, and a very quickly dropping partner-approval-factor, I decided I'd have to remediate the issue in hardware.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Old fans" src="/static/homelab-hardware/old-fan.png" /&gt;&lt;/p&gt;
&lt;p&gt;The fans that came in the server were an odd size - 60mm square and 40mm deep, and they also came with a weird proprietary fan connector that in reality was just a more annoying way to wire a standard 4-pin fan. I decided I'd try and replace them with some &lt;a href="https://www.noctua.at/en/products/nf-a6x25-pwm"&gt;Noctua NF-A6x25&lt;/a&gt;s, as they seemed to be the closest match that had a chance of being quiet. I had some concerns that they might not have enough airflow or pressure, or that I wouldn't be able to get the connector situation sorted out... but given the rapidly declining partner-approval-factor decided to purchase the fans and work it out once I had them.&lt;/p&gt;
&lt;p&gt;&lt;img alt="New fans installed" src="/static/homelab-hardware/new-fans.png" /&gt;&lt;/p&gt;
&lt;p&gt;Installing the fans was... interesting. I was able to take the foam off of the old fans and stick it back to the Noctua ones to allow them to slot into the same place on the chassis, but the power connector situation proved a bit tricky. With a little dupont connector fuckery (literally shaving them down on one corner to fit them into the weird PCIE-power-ish connector) I got things working though.&lt;/p&gt;
&lt;p&gt;Once the fans were installed I booted the server back up, and to my surprise... everything seemed to work? The chassis was even able to read the fan speeds correctly, and while it became pretty apparent that the BMC would never be turning the fans below 100% speed, this was still a lot quieter than the old fans, and barely audible once the server was closed.&lt;/p&gt;
&lt;p&gt;And well... that's that. I may be the only person with a node server under their sofa with Noctua fans in existence, and only time will tell whether it throttles under load, but at least it's quiet and I can play around with clustered technologies at home.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Two birds, one stone</title>
      <link>https://blog.rhysperry.com/two-birds-one-stone/</link>
      <guid>https://blog.rhysperry.com/two-birds-one-stone/</guid>
      <pubDate>Tue, 20 Sep 2022 00:00:00 GMT</pubDate>
      <description>&lt;p&gt;Ahh... guess I've finally got to get started and write a blog. I've been looking at things I should probably get done before I apply for a job and apparently a blog is something people like, so I've decided to kill two birds with one stone and write a blog about making a digital CV (fancy, I know).&lt;/p&gt;
&lt;h2&gt;The idea.&lt;/h2&gt;
&lt;p&gt;I've been thinking about a few different ways I could make my digital CV, but one that I really like the thought of is having one based off of a file structure. Each heading/subheading in my CV would be a folder in that structure, and then each bullet point or paragraph (to be decided) will be a file, or maybe even a fancy hyperlink thingy.&lt;/p&gt;
&lt;p&gt;That's fine, but how the hell is somebody going to navigate this? &lt;em&gt;Well...&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Windows 3.1 Program Manager" src="/static/two-birds-one-stone/windows-program.png" /&gt;&lt;/p&gt;
&lt;p&gt;I really like the look of the Windows 3.1 program manager, so I think I'll use something that looks like that for navigation. Everybody loves retro too, so it'll just have to be a big hit... right?&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;The first thing I'll get down to doing is defining the directory structure and any data in it. I want this all to be in its own separate file so that if I, or anybody else (since I will probably decide to open-source it), wants to use this code for something else in the future they can just swap out the file for something they might want.&lt;/p&gt;
&lt;p&gt;Since I'm going to be working in javascript I'll just chuck all the data I need into a nice big JSON file. Everybody loves JSON anyway.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-json"&gt;{
    &amp;quot;title&amp;quot;: &amp;quot;CV&amp;quot;,
    &amp;quot;structure&amp;quot;: [
        {
            &amp;quot;name&amp;quot;: &amp;quot;contact&amp;quot;,
            &amp;quot;data&amp;quot;: &amp;quot;Name: Rhys Perry\nEmail: rhysperry111@gmail.com\nWebsite: https://rhysperry.com&amp;quot;
        },
        {
            &amp;quot;name&amp;quot;: &amp;quot;education&amp;quot;,
            &amp;quot;structure&amp;quot;: [
                {
                    &amp;quot;name&amp;quot;: &amp;quot;qe&amp;quot;,
                    &amp;quot;structure&amp;quot;: [
                        {
                            &amp;quot;name&amp;quot;: &amp;quot;alevels&amp;quot;,
                            &amp;quot;data&amp;quot;: &amp;quot;Computer Science\nMaths\nPhysics&amp;quot;
                        },
                        {
                            &amp;quot;name&amp;quot;: &amp;quot;aslevels&amp;quot;,
                            &amp;quot;data&amp;quot;: &amp;quot;Economics&amp;quot;
                        },
                        {
                            &amp;quot;name&amp;quot;: &amp;quot;gcses&amp;quot;,
                            &amp;quot;data&amp;quot;: &amp;quot;Computer Science\nMaths\nFurther Maths\nPhysics\nChemistry\nBiology\nEnglish\nGerman\nHistory&amp;quot;
                        }
                    ]
                }
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I've only added half of the CV so far, as all I need is some test data to develop the program around. I also didn't want to add too much info since this will be a publicly available version, so I can't include things like my address and phone number. I feel like I've covered most of the things I'll need to test, as I've got multiple levels of directories, some multi-line and some single-line files, as well as a directory with more than one thing in it.&lt;/p&gt;
&lt;p&gt;As you can see, the structure I've gone with for the file is a base object with a title (which I'll probably use for the program title) and an array called structure that contains all the information about the directory structure. In that there can be any assortment of objects that either have a name and data (which are files), or a name and a structure array of their own (which are directories).&lt;/p&gt;
&lt;h2&gt;Elements&lt;/h2&gt;
&lt;p&gt;Now's the time to design a few template elements that I'll use for creating the UI. I'll need five main things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The "desktop" will be the element that every other window lives inside. It will need:&lt;ul&gt;
&lt;li&gt;To dynamically fill the whole screen&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A "window" element will be used to show a window on the screen with some contents on it. It will need:&lt;ul&gt;
&lt;li&gt;A way of layering it with other windows (&lt;code&gt;position: absolute&lt;/code&gt; and &lt;code&gt;z-index&lt;/code&gt;?)&lt;/li&gt;
&lt;li&gt;A title bar with window management buttons&lt;/li&gt;
&lt;li&gt;To allow the contents to be scrolled&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A "directory view" element will be used to show a list of items. It will need:&lt;ul&gt;
&lt;li&gt;To automatically use the right amount of columns for the window size (&lt;a href="https://developer.mozilla.org/en-US/docs/Web/CSS/repeat"&gt;auto fill&lt;/a&gt; looks like it will do)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A "directory item" element to be added to the directory view. It will need:&lt;ul&gt;
&lt;li&gt;A nice way to handle different length names&lt;/li&gt;
&lt;li&gt;To be able to display a given icon&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A "text view" element will be used for viewing text files. It will need:&lt;ul&gt;
&lt;li&gt;To... show some text&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is how I think that would all translate into HTML:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-html"&gt;&amp;lt;div class=&amp;quot;desktop-container&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;

&amp;lt;div class=&amp;quot;window-container&amp;quot;&amp;gt;
    &amp;lt;div class=&amp;quot;window-titlebar&amp;quot;&amp;gt;
        &amp;lt;div class=&amp;quot;window-close&amp;quot;&amp;gt;
            &amp;lt;p class=&amp;quot;window-close-text&amp;quot;&amp;gt;-&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div class=&amp;quot;window-title&amp;quot;&amp;gt;
            &amp;lt;p class=&amp;quot;window-title-text&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div class=&amp;quot;window-minimise&amp;quot;&amp;gt;
            &amp;lt;p class=&amp;quot;window-minimise-text&amp;quot;&amp;gt;⌄&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div class=&amp;quot;window-fullscreen&amp;quot;&amp;gt;
            &amp;lt;p class=&amp;quot;window-fullscreen-text&amp;quot;&amp;gt;⌃&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class=&amp;quot;window-content&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;

&amp;lt;div class=&amp;quot;dirview-container&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;

&amp;lt;div class=&amp;quot;diritem-container&amp;quot;&amp;gt;
    &amp;lt;div class=&amp;quot;diritem-icon&amp;quot;&amp;gt;
        &amp;lt;img class=&amp;quot;diritem-icon-image&amp;quot;&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;div class=&amp;quot;diritem-name&amp;quot;&amp;gt;
        &amp;lt;p class=&amp;quot;diritem-name-text&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;

&amp;lt;p class=&amp;quot;textview-text&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;CSSucks&lt;/h2&gt;
&lt;p&gt;Now that I have some elements to play with, I need to write up all the CSS to position things nicely and make everything look fairly nice. I'll just hardcode a basic UI using the components above and tweak things until it looks like an actual desktop. Without the CSS it looks like it's from 2002, but unlike most developers who want to add CSS to make it look newer, I want to add CSS to make it look older... 10 years older.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-css"&gt;* {
    /* Don't know how people live without this */
    margin: 0px;
    padding: 0px;
}

.desktop-container {
    position: relative;
    width: 100%;
    height: 100%;
    background-color: grey;
}

.window-container {
    position: absolute;
    background-color: white;
    border: 2px solid black;
}

.window-titlebar {
    width: 100%;
    height: 25px;
    background-color: blue;
    color: white;
    font-weight: bold;
    display: grid;
    gap: 0px;
    grid-template-columns: 25px auto 25px 25px; /* close | title | minimise | fullscreen */
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;img alt="Stuff looking already close to the Windows 3.1 style" src="/static/two-birds-one-stone/first-css.png" /&gt;&lt;/p&gt;
&lt;p&gt;With just a little bit of CSS it is already getting really close... which is odd given my past experiences with CSS. I might even go as far as saying I &lt;em&gt;like&lt;/em&gt; working with CSS now, but something inside me says that's going too far.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Looking almost identical to the 3.1 style" src="/static/two-birds-one-stone/next-css.png" /&gt;&lt;/p&gt;
&lt;p&gt;I think that's all the CSS I need. I did end up tweaking a few things, such as changing the title bar buttons to be images rather than text, jiggling the HTML a bit to make it compliant without "quirks mode", and adding the 3.1 tiled backgrounds, but overall it was a pretty painless experience. The tiled background in particular is barely any CSS at all; a sketch of the idea (the image path here is hypothetical):&lt;/p&gt;
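&lt;pre&gt;&lt;code class="language-css"&gt;.desktop-container {
    /* Tile a small Windows 3.1 style pattern across the whole desktop */
    background-image: url('patterns/tile.png'); /* hypothetical path */
    background-repeat: repeat;
}
&lt;/code&gt;&lt;/pre&gt;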
&lt;h2&gt;Logic&lt;/h2&gt;
&lt;p&gt;The next thing I need to work on is the JavaScript side of the code. The basic outline should look something like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Load the structure metadata file&lt;/li&gt;
&lt;li&gt;Create the root window&lt;/li&gt;
&lt;li&gt;Listen for clicks&lt;ul&gt;
&lt;li&gt;On window buttons for closing/minimising/fullscreening&lt;/li&gt;
&lt;li&gt;On items in the directory view for opening another directory/opening a text viewer&lt;/li&gt;
&lt;li&gt;On the titlebar to handle dragging (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
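&lt;p&gt;The window buttons and titlebar dragging won't get their own walkthrough later, so here's roughly the shape I expect those handlers to take, written as if they live inside the window class defined further down. This is only a sketch: closing is just removing the element, and dragging is a &lt;code&gt;mousedown&lt;/code&gt;/&lt;code&gt;mousemove&lt;/code&gt;/&lt;code&gt;mouseup&lt;/code&gt; dance where I remember the grab offset.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;// Sketch only: hypothetical handlers, not the final code
let titlebar = this.element.getElementsByClassName('window-titlebar')[0];
let closeButton = this.element.getElementsByClassName('window-close')[0];

// Closing a window just removes its element from the desktop
closeButton.addEventListener('click', () =&amp;gt; this.element.remove());

// Dragging: remember where the titlebar was grabbed, move the window
// with the cursor, and stop listening once the button is released
// (a real version would ignore mousedowns on the buttons)
titlebar.addEventListener('mousedown', (down) =&amp;gt; {
    let offsetX = down.clientX - this.element.offsetLeft;
    let offsetY = down.clientY - this.element.offsetTop;
    let move = (event) =&amp;gt; {
        this.element.style.left = (event.clientX - offsetX) + 'px';
        this.element.style.top = (event.clientY - offsetY) + 'px';
    };
    let up = () =&amp;gt; {
        document.removeEventListener('mousemove', move);
        document.removeEventListener('mouseup', up);
    };
    document.addEventListener('mousemove', move);
    document.addEventListener('mouseup', up);
});
&lt;/code&gt;&lt;/pre&gt;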
&lt;p&gt;First I'll write some code to load the JSON file.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;fetch('data.json')
    .then(response =&amp;gt; response.json())
    .then(data =&amp;gt; {
        // Can now read the data from 'data'
    });
&lt;/code&gt;&lt;/pre&gt;
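&lt;p&gt;One caveat worth remembering: &lt;code&gt;fetch&lt;/code&gt; only rejects on network failures, so a 404 would sail straight through to &lt;code&gt;response.json()&lt;/code&gt;. If I wanted to be a little more defensive, a guard on &lt;code&gt;response.ok&lt;/code&gt; would do it:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;fetch('data.json')
    .then(response =&amp;gt; {
        // fetch resolves even for HTTP errors, so check the status
        if (!response.ok) {
            throw new Error('Failed to load data.json: ' + response.status);
        }
        return response.json();
    })
    .then(data =&amp;gt; {
        // Can now read the data from 'data'
    })
    .catch(error =&amp;gt; console.error(error));
&lt;/code&gt;&lt;/pre&gt;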
&lt;p&gt;Now that I've done that, I'll need a way to create elements in JavaScript. I think the simplest way to do this would be to have a few classes: one for interacting with a desktop element, one base class for interacting with window elements, directory view and text view classes that will extend the window class, and finally a directory item class to be added to the directory views.&lt;/p&gt;
&lt;p&gt;My general rule for creating classes is that anything within the class that will be accessed by something else should be a direct child of the class (e.g. &lt;code&gt;myobject.name&lt;/code&gt;) rather than needing to be accessed through another child (e.g. &lt;code&gt;myobject.otherchild.otherchild.name&lt;/code&gt;). This does sometimes mean that I need to define setters and getters, but it keeps the code outside the class a lot cleaner and relegates all handling of class-specific things to the class definition.&lt;/p&gt;
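&lt;p&gt;As a contrived illustration of that rule (the &lt;code&gt;Folder&lt;/code&gt; class here is hypothetical, nothing to do with the real components): callers read &lt;code&gt;folder.name&lt;/code&gt;, and only the class itself knows the name actually lives on a nested element.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;class Folder {
    constructor(name) {
        this.element = document.createElement('div');
        this.label = document.createElement('p');
        this.label.innerText = name;
        this.element.appendChild(this.label);
    }

    // Outside code uses folder.name, never folder.label.innerText
    get name() {
        return this.label.innerText;
    }

    set name(value) {
        this.label.innerText = value;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With that rule in mind, here are the actual classes:&lt;/p&gt;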
&lt;pre&gt;&lt;code class="language-js"&gt;class Desktop {
    constructor(parent, width, height) {
        // Setup element
        this.element = document.createElement('div');
        this.element.className = 'desktop-container';
        this.element.style.width = width;
        this.element.style.height = height;

        // Add to parent
        parent.appendChild(this.element);
    }
}

class Window {
    constructor(desktop, width, height, title, x, y, z) {
        // Setup element
        this.element = document.createElement('div');
        this.element.className = 'window-container';
        this.element.style.width = width;
        this.element.style.height = height;
        this.element.style.left = x;
        this.element.style.top = y;
        this.element.style.zIndex = z;
        this.element.innerHTML = `
            &amp;lt;div class=&amp;quot;window-titlebar&amp;quot;&amp;gt;
                &amp;lt;div class=&amp;quot;window-close&amp;quot;&amp;gt;
                    &amp;lt;img class=&amp;quot;window-close-img&amp;quot; src=&amp;quot;icons/close.ico&amp;quot;&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;div class=&amp;quot;window-title&amp;quot;&amp;gt;
                    &amp;lt;p class=&amp;quot;window-title-text&amp;quot;&amp;gt;${title}&amp;lt;/p&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;div class=&amp;quot;window-minimise&amp;quot;&amp;gt;
                    &amp;lt;img class=&amp;quot;window-minimise-img&amp;quot; src=&amp;quot;icons/minimise.ico&amp;quot;&amp;gt;
                &amp;lt;/div&amp;gt;
                &amp;lt;div class=&amp;quot;window-fullscreen&amp;quot;&amp;gt;
                    &amp;lt;img class=&amp;quot;window-fullscreen-img&amp;quot; src=&amp;quot;icons/fullscreen.ico&amp;quot;&amp;gt;
                &amp;lt;/div&amp;gt;
            &amp;lt;/div&amp;gt;
            &amp;lt;div class=&amp;quot;window-content&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;
        `;

        // Add to desktop
        desktop.element.appendChild(this.element);

        // Make content area accessible
        this.content = this.element.getElementsByClassName('window-content')[0];
    }

    get z() {
        return this.element.style.zIndex;
    }

    get width() {
        return this.element.style.width;
    }

    get height() {
        return this.element.style.height;
    }

    get x() {
        return this.element.style.left;
    }

    get y() {
        return this.element.style.top;
    }
}

class DirectoryItem {
    constructor(directoryWindow, data) {
        // Create element
        this.element = document.createElement('div');
        this.element.className = 'diritem-container';

        // Figure out item type
        let type;
        if (typeof data.data != 'undefined') {
            type = 'file';
        } else if (typeof data.structure != 'undefined') {
            type = 'folder';
        } else {
            // If unknown type, do nothing
            return;
        }

        // Set icon location based on type
        let icon;
        if (type == 'file') {
            icon = 'icons/file.ico';
        } else if (type == 'folder') {
            icon = 'icons/folder.ico';
        }

        // Add rest of HTML to element
        this.element.innerHTML = `
        &amp;lt;div class=&amp;quot;diritem-icon&amp;quot;&amp;gt;
            &amp;lt;img class=&amp;quot;diritem-icon-image&amp;quot; src=&amp;quot;${icon}&amp;quot;&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;div class=&amp;quot;diritem-name&amp;quot;&amp;gt;
            &amp;lt;p class=&amp;quot;diritem-name-text&amp;quot;&amp;gt;${data.name}&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        `;

        // Add element to directoryview
        directoryWindow.view.appendChild(this.element);
    }
}

class DirectoryWindow extends Window {
    constructor(desktop, width, height, title, x, y, z, structure) {
        // Create window
        super(desktop, width, height, title, x, y, z);

        // Create directoryview element
        this.view = document.createElement('div');
        this.view.className = 'dirview-container';

        // Add directoryview element to window
        this.content.appendChild(this.view);

        // Create items from structure
        for (let item of structure) {
            new DirectoryItem(this, item);
        }
    }
}

class TextWindow extends Window {
    constructor(desktop, width, height, title, x, y, z, data) {
        // Create window
        super(desktop, width, height, title, x, y, z);

        // Create textview element
        this.view = document.createElement('div');
        this.view.className = 'textview-container';

        // Add textview element to window
        this.content.appendChild(this.view);

        // Add text to textview
        let text = document.createElement('p');
        text.className = 'textview-text';
        text.innerText = data;
        this.view.appendChild(text);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;All I need to do now is add some event listeners on the directory items to spawn windows:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-js"&gt;// Set click callback based on type
let callback;
if (type == 'file') {
    callback = () =&amp;gt; {
        let x = desktop.mouseX + 'px';
        let y = desktop.mouseY + 'px';
        let z = parseFloat(directoryWindow.z) + 1;
        let width = '750px';
        let height = '500px';
        new TextWindow(desktop, width, height, data.name, x, y, z, data.data);
    };
} else if (type == 'folder') {
    callback = () =&amp;gt; {
        let x = desktop.mouseX + 'px';
        let y = desktop.mouseY + 'px';
        let z = parseFloat(directoryWindow.z) + 1;
        let width = '600px';
        let height = '400px';
        new DirectoryWindow(desktop, width, height, data.name, x, y, z, data.structure);
    };
}

// Add callback to element
this.element.addEventListener('click', callback);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That was super easy, but I did need to change a few things in other places, such as adding code to track the mouse position on the desktop, which then meant I had to pass the desktop through to the directory item. The tracking itself boils down to a &lt;code&gt;mousemove&lt;/code&gt; listener on the desktop element; something like this sketch (assume it lives in the &lt;code&gt;Desktop&lt;/code&gt; constructor; the listener is an approximation, but &lt;code&gt;mouseX&lt;/code&gt; and &lt;code&gt;mouseY&lt;/code&gt; are the names the callbacks above rely on):&lt;/p&gt;
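&lt;pre&gt;&lt;code class="language-js"&gt;// Sketch: remember the last cursor position on the desktop so that
// clicks can spawn new windows at the cursor
this.mouseX = 0;
this.mouseY = 0;
this.element.addEventListener('mousemove', (event) =&amp;gt; {
    this.mouseX = event.clientX;
    this.mouseY = event.clientY;
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With that in place, I'll just quickly create the main desktop and first window and see if things work.&lt;/p&gt;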
&lt;pre&gt;&lt;code class="language-js"&gt;fetch('data.json')
    .then(response =&amp;gt; response.json())
    .then(data =&amp;gt; {
        let desktop = new Desktop(document.body, '100%', '100%');
        new DirectoryWindow(desktop, '600px', '400px', data.title, '100px', '50px', 1, data.structure);
    });
&lt;/code&gt;&lt;/pre&gt;</description>
    </item>
  </channel>
</rss>