AI is front-and-centre in the media right now, and much of the discussion around it is polarised - AI is either the deus ex machina or the downfall of modern civilisation. My experience has been that there is a middle path, and after having what feels like the same conversation with different people, I want to write down some of my thoughts.
As a preface, I would categorise myself as a late adopter and AI skeptic. I absolutely believe there are impending negative effects of AI: the amplification of false information; the loss of human creativity; the wasteful nature of AI “slop”; and the massive power consumption and resource-hoarding involved.
However, I also believe that AI, as a tool, has beneficial uses, and this is how I’d like to see it used more. It’s become a trope, but in my opinion AI will make the world a better place by doing our cleaning so that we can spend more time making art - not by replacing artists so they have more time to clean. Crucially, I see the improvements being much less dramatic than claimed, as others have already covered.
My interest is in AI as a programming and productivity tool. I’ve used a range of AI coding tools over the past couple of years: Gemini, Windsurf, Cursor, Copilot, and now Claude. Most of these I picked up to try, then shortly put back down when they required too much babysitting for little benefit. After the past few months of using Claude, I finally feel like the tool is saving me time and letting me focus on more of what I enjoy (developing products and systems) and less of what I don’t (dealing with syntax errors and remembering how libraries work).
I treat Claude like having robot engineers working for me. They aren’t smart, but they have great recall and breadth of knowledge. Given strict instruction they can output my ideas with a great deal of fidelity, and even handle vagueness in my recall when it comes to some of the details. They can be taught through memories and configuration, although they still require correction from time to time.
The advantage is that the price of these robots’ time is a fraction of that of a similar human - for tens of pounds a day I’ve got as many hands at my disposal as I desire. There’s a practical limit to the leverage they give me; experience has shown I need to be able to review everything they produce. However, I am still able to produce more code with the time I have available to me, and crucially (at least from my perspective) I can output better features than I feel I would in the robots’ absence.
One of the things I enjoy most is that the time cost of experimenting is drastically reduced - with a tool that can easily do refactors across a service, and prototype and test different approaches, the cost of abandoning an experiment is far smaller. I can have a robot whip me up something in five minutes, give it a test and review to see how it feels, and then either keep going in the same direction or unwind the whole thing and start again.
I also enjoy the conversational aspect of developing with the robots - I can have an idea and fire off an initial prompt, then think about the next steps whilst that executes. I can interrupt the flow if I see something going in a direction that I don’t like and course-correct. I can even set another robot off to do some research in parallel to inform future steps or check my understanding.
This development style still requires a reasonable amount of my attention. For me, that’s a good thing - I don’t want my tools doing things that I’m not aware of or that aren’t under my control. This, I feel, is the flaw in the “vibe-coding” approach to AI development and the conversations I see from non-technical folks using AI to produce code.
Here is where we find the sharp edge. By all means, use AI tools to vibe-code prototypes, but I would not currently trust the robot to ship anything to production without human review. Although much less prone to wild hallucinations than its predecessors, the robot will make guesses and lie to you with a confidence that requires human scrutiny to refute; it will stray from your instruction and require admonishment to follow the correct conventions and best practices; and it often struggles to see the big picture, even with tools like planning mode - the human brain has a far larger context window than the robot’s.
No tool is perfect, and even with the sharp edges I’m going to keep using and experimenting with AI. I’m thankful to be working for a company that supports the use of these tools and is enabling people to use them across the business, not only as coding assistants. However, I’m keeping one hand on my gun, just in case…