Observations on AI coding

published Mar 04, 2026

The electric cat is out of the bag.

The robot whisperer

I have come to the (perhaps obvious) conclusion that AI coding is going to change software engineering enormously.

Over the last few days, I've been experimenting with an AI workbench composed of OpenCode and a local Ollama LLM (qwen3-coder-next), running on a recently-purchased RTX 6000 Pro.

The project

After a brief "do a hello world" session, I decided to assign it a real project. I unleashed OpenCode on my ongoing Python-to-Rust port of a few libraries (blindecdh, shortauthstrings, pskca and cakes) and a desktop Linux program called hassmpris_agent.

The goal of this port is to make it easier to add features safely to that desktop program, which I (and maybe five others worldwide) use heavily every day. This is code I wrote years ago, and it has gotten unwieldy because Python doesn't scale well (or perhaps my mind doesn't); stability-wise, it's hard to get the combination of desktop GUI and networking code right in Python. It's an interesting project because most of the code is cryptographic, with a lot of tiny details and corner-case pitfalls that must be handled correctly in order to stay 100% interoperable with the corresponding Python libraries and programs. Interoperability is a must: the client for this program has to remain Python, because it is an integration for Home Assistant.

My findings

The port is still happening as I write this; I'm maybe 60% done. Here's what I learned from my AI sessions so far.

It's not perfect 

Curb your enthusiasm. You can't really ask the AI to write the whole project for you. It is not able to do that yet.

It can be annoying when it gets stuck in a loop doing something it shouldn't be doing. And I can still write better code, faster, than the AI can, at least for now.

The big question now is how much code I will still want to write myself; as AI tech improves, I assume the answer will be less and less.

It's helpful and faster than me at some tasks 

My coding agent has been quite helpful! It impresses me how quickly it goes through documentation while grokking a problem it's trying to solve. I'm a bit sad about this: I like that learning part of the exploration when I'm coding, but the AI does it so much faster that I'm obviously going to invite it to do this work more often than I'd like to.

It's great at improving testing

I was unprepared for how helpful the agent has been specifically for writing tests and code documentation. When the code is working as intended, asking it to write tests for corner cases gets me results much faster than I could manage by myself. I dread writing tests, so this is welcome; a fresh pair of "eyes" on the testing side also helps my code get better.

Here's one anecdote of the AI shining at testing. The mini CA library initializer (currently part of the CAKES port to Rust) takes a private key and a CA certificate. These are supposed to be a matching set: the certificate should have been signed with the corresponding key. But, in principle, the programmer using the CA may not know that, or may forget it, and as a result accidentally pass a CA certificate and a key that don't correspond to each other. If that pitfall is hit, what happens when the CA issues a certificate is undefined, and the programmer in question might waste hours trying to figure out why the CA doesn't work.

So I asked OpenCode to write code that checks for this pitfall, and also to write tests verifying the check does what it should. It immediately wrote code that, at least to me, looked correct. It thought of corner cases that I would not have thought of. And it very quickly whipped up a bunch of test data using the OpenSSL command-line utility, embedded that data in the code, and used it in tests that actually exercised those corner cases. This would have taken me more than an hour; the AI got it done in about five minutes.
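For the curious, the essence of that consistency check can be sketched in a few lines. This is my own illustrative Python sketch using the `cryptography` package, not the actual CAKES/pskca API (the Rust port would use equivalent crates, and the function names here are hypothetical): the idea is simply to compare the public key embedded in the certificate against the public half of the supplied private key.

```python
# Sketch: does this CA certificate actually belong to this private key?
# Assumes the third-party `cryptography` package; names are illustrative.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID


def key_matches_certificate(key: ec.EllipticCurvePrivateKey,
                            cert: x509.Certificate) -> bool:
    """True if `cert` embeds the public half of `key`."""
    def spki(pub):
        # Serialize to DER SubjectPublicKeyInfo so we can compare bytes.
        return pub.public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
    return spki(key.public_key()) == spki(cert.public_key())


def self_signed_ca(key: ec.EllipticCurvePrivateKey) -> x509.Certificate:
    """Build a throwaway self-signed CA certificate for the demo."""
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Test CA")])
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .sign(key, hashes.SHA256())
    )


ca_key = ec.generate_private_key(ec.SECP256R1())
ca_cert = self_signed_ca(ca_key)
wrong_key = ec.generate_private_key(ec.SECP256R1())

assert key_matches_certificate(ca_key, ca_cert)        # matching set: OK
assert not key_matches_certificate(wrong_key, ca_cert)  # the pitfall: caught
```

A constructor that runs a check like this and fails loudly turns hours of confused debugging into an immediate, explicit error.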

Great at documentation too

Another positive surprise is the documentation it writes — it is concise, surprisingly easy to understand, very often correct, and it usually requires very few edits on my part. I am personally quite diligent about writing source code documentation, and I recognize the importance of the exercise as a way to discover fundamental problems or improvements in the code — but I dislike it anyway. With AI doing the work, I miss out on one of those benefits, but it's so tempting to just let the AI do it.

What does having this tool mean for us?

Most of the time, AI-assisted coding feels like having someone fairly competent at your side, who codes, tests and documents for you.

It's hard to see myself doing the boring parts of programming when I have an assistant on tap that will do them all for me. This is undoubtedly good.

That said, there's an aspect to the future that I can't help but feel anxious about.  Let's call it the "coding is over" future.

Is coding over?

That is: if you work as a coder for a living, can you afford not to use this tech? I contend that not using it will become effectively impossible, to the point that there might be little to no room left for humans doing artisan coding.

Think about it: if you aren't using an assistant, but your putative colleagues are willing to, why would anyone hire you? It's like applying for a delivery job without a vehicle. Or, perhaps a better example, like trying to do everyday tasks in this century without a smartphone.

If we iterate on this future as AI progresses, eventually only a tiny number of programmers — all equipped with superhuman AIs — survive in what becomes an extremely niche "profession".

That's the pessimistic view.  The more optimistic view is the "bicycle for the mind" future.

Electric bicycle for the mind?

We all adapt, much as compilers replaced 99.9% of assembly language writing; everybody gets used to working with their own AI, and it turns out to be no big deal. The adoption of smartphones was like that, too.

Perhaps the demand for programmers goes down somewhat — with junior and CRUD coders most affected — but the industry still needs coders, who in all likelihood will be augmenting their output with AI.

How dependent do we want to be, and on whom?

Since open coding models exist that don't require me to be subscribed to a Big Tech corp (which will almost certainly spy on me), a future of normalized AI coding is no big deal for me. But everyone will have their own threshold. I'm sure a lot of professionals won't give it a second thought before putting their employer's code through OpenAI or Anthropic. It is, after all, the century of "you will own nothing and you will be happy."

About 120 years ago, this post could have been titled "observations on the motor car". Today, only the Amish use horses and buggies for transportation. What will the Amish equivalent of today's software engineers look like in 120 years? We'll see.

Interesting times.