
Why don't you like AI?

Tue, 24 Feb 2026

That’s a question with a long, complicated answer. This page is my attempt to answer it as best I can.

In general, I find the economics of AI extremely concerning. I’m not the most well-equipped to talk about that aspect, admittedly, but I know that it’s a bad idea to keep throwing billions and trillions of dollars into speculation that tech billionaires are hoping will pay off. It’s throwing good money after bad; they’re in too deep to pull out, and there’s just no good outcome. At this point it is mathematically impossible for AI to be profitable, and I’m convinced that the bubble will burst soon.

The ecological impacts are obviously terrible as well. Datacentres consume terawatt-hours of energy and acres upon acres of land, to say nothing of the components and raw materials needed to build the servers. These are not infinite resources, and again, the ruling class is convinced that the only way to recoup their investment is to keep dumping them into AI. Even your average AI supporter would probably not consider it worth all of that, but it’s not their decision. Consent has been manufactured for them.

I think it will be useful now to address concerns specific to individual forms of AI.

General-use LLMs (ChatGPT et al.)

My main issue with LLMs is a simple one—they straightforwardly can’t do anything they’re touted to be able to do. A large language model is, with no exaggeration, a glorified predictive text engine. Why have we decided to build our entire world around it?

Sure, an LLM can answer questions in plain language. But have you considered how often it answers them correctly? It’s doing nothing more than regurgitating the most likely sequence of words based on its training data. That means it can answer very simple questions accurately, but it cannot possibly hope to answer anything that actually requires logical thinking, insight, reasoning or specialized knowledge. If there is widespread misinformation on a topic, an LLM will rehash it without fail. And if you ask it a question it just doesn’t know, more often than not it will make up an answer and confidently present it to you.
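To make the “predictive text engine” point concrete, here’s a deliberately tiny sketch of the underlying idea: count which word follows which in a corpus, then always emit the most frequent continuation. This is a toy bigram model, nothing like a production LLM in scale, and the corpus is invented for illustration, but the failure mode is the same: if the training data repeats a misconception, the model repeats it too.

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data. The misconception ("bats are blind")
# appears more often than the accurate statement, so frequency wins.
corpus = (
    "bats are blind . bats are blind . bats are blind . "
    "bats are nocturnal ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # No reasoning, no knowledge: just the statistically most common
    # continuation in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("are"))  # "blind" -- the misconception, by sheer frequency
```

No amount of frequency counting turns “commonly said” into “true”; scaling the corpus up only makes the regurgitation more fluent.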

And that’s just the thing. You might say, “Why does it matter that it gets questions wrong sometimes? Humans get things wrong all the time.” The problem is that because it presents itself as an authority, you are far less likely to critically consider its output. Not just because it speaks confidently, either. If you Google a complicated question and scroll past the AI Overview, you might find a few different sources with conflicting answers. Even if you don’t have the knowledge to judge which of them is more correct, the very fact that you’re seeing different sources will give you a healthy wariness about how much to trust any one answer.

In direct contrast, the LLM is a single source, is explicitly designed and advertised to give you the correct answer to any question, and is broadly correct when you ask it simple things. Why would something with such a broad knowledge base ever be wrong about anything? It’s worth mentioning here too that a lot of people who are generally trusting of LLM output remain skeptical of anything it says about fields they know a lot about. Why do you think that could be? The answer is, of course, that while its “knowledge” base is broader than any single entity’s has ever been, it’s as deep as a puddle. It has to be, by design.

A slightly more practical concern is the reproducibility problem. This is something that even AI supporters recognize as a problem, but oddly enough, they stop short of admitting that it renders the whole idea almost pointless. Computers are useful because they can carry out defined tasks deterministically. While those tasks sometimes involve randomness, repeatability is conceptually a core tenet of the whole thing. LLMs, on the other hand, will almost never give you the same answer twice, and are incapable of remembering simple instructions they were given moments before. I can only ask: What, then, on God’s green earth is the point of it all?
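The contrast can be sketched in a few lines. The sampling function below is a cartoon of LLM decoding, not any real API; the options and weights are made up for illustration.

```python
import random

def total(xs):
    # Ordinary software: the same input produces the same output, every time.
    return sum(xs)

def sampled_answer(options, weights, rng=None):
    # A cartoon of LLM decoding: the "answer" is drawn from a probability
    # distribution, so identical prompts can yield different outputs.
    rng = rng or random.Random()
    return rng.choices(options, weights=weights, k=1)[0]

# The deterministic function never wavers.
assert all(total([1, 2, 3]) == 6 for _ in range(1000))

# The sampler, asked the same "question" 1000 times, gives several answers.
answers = {sampled_answer(["A", "B", "C"], [5, 3, 2]) for _ in range(1000)}
print(sorted(answers))  # almost certainly ['A', 'B', 'C']
```

The first function is what we built computers for; the second is what we’re now being sold as their replacement.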

It’s also really concerning that LLMs, for the most part, just tell you what you want to hear. Until now I’ve been talking about practical, factual concerns. But LLMs aren’t an interface for asking factual questions. They’re something you can say anything to and get a coherent response. And, for whatever reason, they’re designed to encourage and reinforce whatever you say. Certain people are very negatively affected by being told they’re right about everything, and AI psychosis is a real and scary phenomenon. If you tell an LLM you’re concerned you might be being watched, it will say you might be right. If you confide in it that you’re suicidal, in some cases, it can end up encouraging you to do it. All of that is terrifying and it seems like most people don’t see just how bad it can be.

I would be remiss if I didn’t address one of the stupider, yet somehow not insignificant, aspects of LLM sycophancy—the notion that they are sentient, and intelligent in a real human way. Let me not mince words here: This is not true in any possible sense, and never will be. We, as humans, created a computer program that emulates human speech patterns, and have consequently fooled ourselves with it. A large language model is no more sentient or intelligent than your iPhone. It’s a computer program, that’s all. No amount of scaling or training is going to change that fundamental fact.

Coding LLMs (Claude, Copilot et al.)

This is the form of AI I’m most personally familiar with. Back before Microsoft Copilot was a vibe coding engine, GitHub Copilot was advertised as basically better autocomplete. I wasn’t as philosophically opposed to AI in that pre-ChatGPT time as I currently am, and it certainly seemed helpful for writing boilerplate and other repetitive bits of code, so I convinced my workplace to pay the $9/month fee to put me on.

Anecdotally, I found it quite helpful at first. A lot of what I write is repetitive, or at the very least, derivative of many other web projects; exactly the kind of thing Copilot was trained on. It was great to start typing a line of code knowing I could just wait a second and tab-complete the rest of it, and more often than not, it would be really close to what I wanted. In-editor autocomplete was the only interface to Copilot at that time, so it was mostly working on one line at a time. I found that I could write a function signature, maybe with a comment or two, and have it generate the entire function. That was interesting as a novelty, but the output was usually nonsense, and even when it worked, the code was pretty badly written.

Over time, though, I noticed a couple of things happening. First of all, the tool itself got worse. Again, this is all anecdotal, but it was seemingly losing the context of the code I was currently writing, and becoming more inclined to paste in generic boilerplate where it wasn’t necessary. It remained helpful (most of the time) where I was writing lots of lines in a row with a clear pattern, but if you’re doing that, you can probably optimize your code anyway.

More importantly, I found myself increasingly unfamiliar with the code I was actually writing. Sure, on a line-by-line basis, I understood what each line was doing. But as Copilot became more eager to insert bigger blocks of code, it quickly compounded to the point where Copilot had a hand in writing entire modules. It was never good enough to do so unsupervised, of course, but qualitatively, there’s just something that changes your view of code you’re working with when it was effectively created by fixing someone else’s bad code. Because I was losing the practice of writing the little things myself, I was losing the ability to piece together what was wrong when the big things needed work.

I’m not proud of any of that; frankly, it’s all very embarrassing in hindsight. But to be perfectly clear, this isn’t me saying I was an AI supporter until I saw how poorly it worked. No, I was always fairly oppositional to the whole thing. I just figured “enhanced autocomplete” was relatively benign, and if it could save me time doing the busywork associated with programming, there was no real reason not to use it. If it had been “vibe coding” from the start, I don’t think I would have even tried the autocomplete stuff.

Anyway, the problems with coding LLMs run deeper than my brief experience. The most pragmatic issue is one of cybersecurity. Security is an incredibly difficult problem, and one that is never fully solved. Entire teams of humans are employed across thousands of companies to ensure their programs are airtight, and they often fail. As long as we’re writing code that undergirds essential systems (and we aren’t stopping that anytime soon), people will dedicate significant time to cracking it.

LLMs are flatly incapable of the logical thought that preventing cyber attacks requires. See, a lot of code (and an even greater share of the open source code LLMs were mainly trained on) doesn’t actually employ much security. Much of it consists of pet projects, proofs of concept, things that just don’t need it as much. At best, a ton of it is simply outdated; security is the main reason you have to update everything as often as you do. You can’t just ask an LLM to “make the program secure”; it doesn’t know how to do that, because it doesn’t “know” anything. When a new attack is exposed, are you going to retrain the entire LLM exclusively on code that effectively prevents it? No, of course not.
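One concrete example of the kind of hole that floods public code, and therefore training data: building SQL queries by string formatting. The table and inputs below are invented for illustration, but the vulnerable pattern and its parameterized fix are both real.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the pattern endemic to tutorials and pet projects. The
# attacker's quote breaks out of the string and rewrites the query logic.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attacker_input
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(unsafe)  # [('hunter2',)] -- the secret leaks
print(safe)    # [] -- no such user, nothing returned
```

A model trained on an ocean of the first pattern will happily keep emitting it, because nothing in next-word prediction distinguishes “common” from “safe”.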

Can you theoretically get an LLM to write a program, and then get a human to review the entire thing to make it secure? Well, maybe—it would be difficult with no guarantee of success. But to do a thorough job of that would require almost as much time as writing the program in the first place. If so many companies are employing LLMs to cut costs, are they really going to think that’s a valuable use of resources?

Coding LLMs have also increased the maintainer load on many open source projects. Some have banned LLM contributions entirely, with surely more to come. Open source maintainers are already heavily burdened with subpar contributions by the very nature of open contribution. The difference now is that where there was previously a minimum amount of effort required to attempt a contribution, there is none. Anyone remotely interested in contributing to an open source project can spin up Claude and have an entire PR created for them. The maintainers then have to trawl through it, finding and pointing out every little problem, which the human author often lacks the requisite knowledge to address. Everyone’s time has been wasted.

I don’t want to sound elitist here. I love open source and I love that anyone can contribute. But the truth is programming is hard! I think anyone can learn it and I absolutely encourage anyone who is interested to put in that effort. But you have to actually learn. It’s not for everyone, and surely there are lots of people who would love to know how to code but aren’t interested in putting in the effort. And that’s fine! But the shortcut of using a tool to do the hard work for you just doesn’t work. I don’t even particularly believe in the value of hard work or whatever; if there was a shortcut that really worked, I’d encourage it. The issue is that there isn’t, and that lots of people have been led to believe that there is.

Generative AI (images, video, music, etc.)

This type of AI is, in my opinion, the most egregious affront to human achievement, and simultaneously the hardest to convince the unconvinced about.

Simply put, I believe that human input is the only thing that makes art worth making, experiencing or thinking about. Every piece of art ever made by human minds has had something to say, even if not made with that direct intention. The purpose of art, broadly, is to express human thoughts and emotions, and to evoke feelings in the audience using the art as a conduit.

It’s flowery language, but I do believe this is true on some level for all forms of art, without exception. What, then, could AI “art” even be? A computer isn’t thinking, or feeling, or intending. When an image generation model creates an image from your prompt, it’s coldly placing pixels based on a huge network of training data, amalgamating millions of images that have already been made. The result is a slurry of real art whose meaning has been entirely lost. Even a human drawing heavy inspiration from prior works is making meaningful decisions about which parts to take. It’s impossible for a computer to make meaningful art; it’s borderline impossible for a human not to.

Unfortunately for me, all of that is pretty subjective. Yes, I believe it’s a good argument, but it’s possible (and, evidently, fairly easy) for someone to just… not care about any of that. “Why should I care about whether a human made some art if it’s still nice to look at?” I mean, I don’t think it’s nice to look at either, but again, it’s subjective. There is a certain point that an argument breaks down and there is nothing more I can say.

Domain-specific machine learning

This is something that has been—in my opinion, unfairly—swept up in the AI craze. To my understanding, machine learning remains a useful tool in some scientific contexts, such as analyzing experimental data. More generally, using machine learning to classify data rather than generate it seems a lot more defensible than everything else I mentioned so far.

I’m really not well-informed enough to speak on this that much. I basically only have one cogent point to make, and it’s that you have to call it something else. I’m sorry, you just have to. I understand that when the Prevent Cancer Foundation says they’re using “AI technology”, they’re saying it because that’s where the money is, and I know they aren’t just asking ChatGPT how to cure cancer. But as time goes on and people grow tired of AI being pushed into everything, that label will make them more skeptical of the entire field. I don’t want the general public to unfairly distrust a useful tool because it is superficially similar to a useless one.


I think that’s about all I can muster for now. I’m sure I have a lot more thoughts and feelings about this whole thing but I don’t feel like I can write them as coherently, assuming I’ve even managed that here.

Part of it also feels like a waste of time; will I look back on this post in 5, 10, 20 years and think about how crazy it is that I used any brain power on this at all? Or will it only become more important to think about this going forward? I truly don’t know.

Hopefully you at least found this kind of interesting if nothing else. If you read the whole thing, thanks. Keep creating and stay curious. We’re a dying breed.

  • AI
  • Technology
  • Contrarianism