
Learning to Debug Again in an AI-First World
Last Tuesday, I attended Tech Talent Tuesday’s first event of 2026, and it was great to see the Calgary tech community come together to discuss industry issues heading into the new year. The presentation centered on a roundtable discussion with seven representatives from some of Calgary’s biggest tech companies, focusing on the state of the industry today and how Calgary can improve. That discussion essentially set the theme for many of the panels Tech Talent Tuesday will host throughout 2026.
Although there were a lot of interesting takeaways from the roundtable, one point really stood out to me, since I’ve fallen victim to it myself: foundational problem-solving skills are weakening across the industry, particularly among junior developers, and an overreliance on AI is eroding those core skills. As someone who recently graduated as a software engineer, I found this extremely relatable and have been feeling it firsthand. Overreliance on AI is becoming a massive problem in our industry, but when your programming habits are formed with a “co-pilot” from day one, it’s hard to reverse the damage that’s been done. However, using AI doesn’t mean you have to throw away the thinking side of the process.
I wanted to take some time today to discuss a new debugging method I’ve been using to solve issues on my homelab, one that was actually inspired by AI. It all started when I was using GitHub Copilot while working on features for a client project. I noticed a small feature that showed how the model was reasoning before providing a response. That idea became the inspiration for the method I now use whenever I encounter an issue.
A few days ago, the RSS feeds on my Jellyfin arr stack stopped working. I had absolutely no idea why, other than the fact that the containers in the stack no longer seemed to be talking to each other. I opened a new note in Obsidian (an amazing note-taking tool that I may write a blog post about in the future) and started by writing down the Main Issue. In this section, I documented what was currently wrong with the app, any log messages I thought might be relevant, and my working hypothesis for what could be causing the issue. This forced me to clearly define the problem and articulate a potential cause. As a bonus, it also created a solid prompt that I could paste directly into ChatGPT to assist with the debugging process.
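As a rough sketch of where the raw material for that section comes from (the container names like sonarr, qbittorrent, and gluetun are assumptions about my compose file and will differ on other setups), it’s usually just a couple of quick commands:

```bash
# Gather raw material for the "Main Issue" section of the note.
# Container names are assumptions based on my stack; adjust to your compose file.
docker compose ps                  # which containers are up, and their current state
docker logs --tail 50 sonarr       # recent Sonarr logs around the failing RSS syncs
docker logs --tail 50 qbittorrent  # recent qBittorrent logs
docker logs --tail 50 gluetun      # the VPN container the download clients route through
```

Pasting this output under the Main Issue heading is what makes the note double as a decent prompt if you do end up asking an AI for help.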
The next section of the note is the Investigation. The idea behind this step is to keep short, structured notes for every action you take while debugging, and each entry typically follows this flow: action → reason → result → hypothesis for the next step.
This creates a clear footprint of the steps you took to diagnose the issue, including the exact commands used, why you ran them (demonstrating your understanding), and links to any resources that helped along the way. It’s important to note that the Investigation section is not about fixing the issue; it’s about confirming your original hypothesis and identifying the root cause. Once you truly understand the problem, building a fix becomes significantly simpler. The last step, the hypothesis, is what sets up the action at the start of the next debugging entry, because the first thing you try likely won’t give you the confirmation you’re looking for. Sticking with my arr stack scenario, an example of an investigation entry could be the following (with a rough sketch of the commands involved after the list):
- I first check to make sure the docker network for the jellyfin-addons stack still exists
- qBittorrent and FlareSolverr are both missing from the network
- both services are using `network_mode: service:gluetun`, so this is anticipated behaviour
- Because everything was working before, the network and compose file are configured properly, and all containers are running correctly, I rule out that this is a docker issue
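For reference, here is a minimal sketch of the commands behind those entries. The network name (`jellyfin-addons_default`) and container names are assumptions; Compose usually prefixes the network with the project name, so check `docker network ls` for the real one.

```bash
# Sketch of the investigation commands; names are assumptions about my stack.
docker network ls                                    # does the stack's network still exist?
docker network inspect jellyfin-addons_default \
  --format '{{range .Containers}}{{.Name}} {{end}}'  # which containers are attached to it?
docker inspect qbittorrent \
  --format '{{.HostConfig.NetworkMode}}'             # expect something like "container:<gluetun id>"
                                                     # because of network_mode: service:gluetun
```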
Once you have identified the issue, it’s time to move on to the fix. I always start by writing down an overview of what caused the problem and why it was happening in the first place. This helps narrow down what to focus your research on when looking for a solution. In the case of my arr stack, I wrote that “the problem was stemming from the UFW firewall on Linux that I enabled when configuring my Nginx reverse proxy. If UFW is active and blocking all incoming connections, the containers’ web UIs are still reachable from your LAN, but Sonarr/Radarr can’t talk to qBittorrent internally. This happens because Docker hooks in its own iptables chains and bypasses UFW except on forwarded/internal traffic, which is exactly what docker container traffic is. That’s why, despite the network being configured correctly, traffic still isn’t able to travel between containers.” After writing down the root cause, I followed it up with a defined goal of treating docker traffic the same as local traffic and began researching a fix.
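To make that goal concrete, here is a minimal sketch of one common way to get there, assuming Docker’s default 172.16.0.0/12 address pool. It isn’t necessarily the exact change I ended up applying, and the subnets should be verified with `docker network inspect` before copying anything.

```bash
# A sketch of one common approach (not necessarily the exact fix I used):
# UFW's route rules are what apply to forwarded traffic, which is how
# inter-container traffic reaches UFW, so allow container-to-container
# forwarding for Docker's address pool. Verify your real subnets first.
sudo ufw route allow from 172.16.0.0/12 to 172.16.0.0/12
sudo ufw reload
sudo ufw status verbose   # confirm the new rule is active
```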
After some time, I landed on a solution and carefully documented the implementation. It’s important to link your sources, explain your thought process, and save the exact commands, so that if you ever encounter a similar issue again, you won’t just know what to do; you’ll also remember why the fix works. More importantly, this process has helped me rebuild a habit of deliberate problem-solving. AI is still part of my workflow, but it’s no longer the driver. By forcing myself to articulate hypotheses, validate assumptions, and document my reasoning, I’ve found that I understand my systems far better than I did before.
If foundational problem-solving really is eroding across the industry, I don’t think the solution is to stop using AI altogether. Instead, we need better ways to use it with methods that reinforce thinking rather than replace it. This debugging approach has done exactly that for me, and I hope it helps others do the same.


