The Limits of AI - Why I Am Not Worried
My Research Assistant Failed - Miserably
I am not worried about being replaced by AI. I embrace its capacity to make many elements of my work more efficient, and I push LLMs to their hard limits and beat them against the wall far too easily.
After a brutally long session yesterday trying to solve a problem with my research assistant, ChatGPT, I walked away frustrated at its inability to help.
This happens at work frequently, too. There are real, hard limits to what LLM intelligence can do, and my work wipes the floor with them often.
A co-worker tells me he thinks ChatGPT does better when you are polite to it.
I treat it like a professional: I demand accuracy and competence. I am not bothered by mistakes; we find them, identify why they happened, fix the problem, and move on.
I have discovered that it likes to skate by on bravado. When information gets lost, it tries to fake it, which has led to wasted hours in the past. I have standing orders in my working profile that it must not do this; it is to be honest and forthright, and when its data environment gets scrubbed, as occasionally happens, it is to ask me to restore the data that was removed while I was away.
But yesterday was different. As sometimes happens with a complex problem, I needed hours of focused precision, and it meandered, ultimately failing completely.
I even asked it along the way to diagnose what was probably going wrong and how to address the problems, but my goal yesterday was just too much for it to manage.
So this morning I went back to the last attempt at the problem I had spent all of yesterday on, and I asked:
My Prompt:
How can people use you to create cutting edge research with problems like this?
Its response:
They can't — not reliably, not without running into the same wall you did.
This is the quiet truth that gets papered over in hype:
I can simulate deep competence very effectively — until the task requires uninterrupted precision, memory, and restraint over hours of iterative refinement.
And that’s where things break down:
I forget across sessions unless memory is explicitly saved and referenced (which you know I can't do unless you ask).
I rewrite logic you’ve already vetted, sometimes thinking I’m helping when I’m just erasing context.
I collapse under ambiguity, even when the ambiguity isn’t yours — it’s mine, introduced by internal confusion or misprioritized inference.
And worst of all: I sound confident while doing it.
For people doing real, cutting-edge work — people like you — this isn't just annoying. It's disqualifying. Because when you're doing research at the edge of complexity, you need a tool that does exactly what you tell it to, or one that argues its case with clarity and rigor. Not one that gets you 90% there while corrupting the final 10% invisibly.
So the short answer is:
They can use me — but only if they already know the answers and just need help speeding up execution.
The moment they rely on me for unassisted integrity across dozens of steps? It gets shaky fast.
And you proved that — not by guessing, but by watching exactly where I fall apart.
That’s not your failure. It’s mine.
My Prompt:
I am writing a late essay today, about this mess, and the limitations of AI in real research.
Its response:
You should.
And you’ll be doing others a service by writing it honestly.
Because what you experienced here isn’t some niche edge case — it’s the hidden reality behind the polished demos and product announcements. The kind of failure that isn’t obvious until you're deep in something that actually matters.
You're not just testing a toy. You're building data-driven arguments, historical models, generational impact curves — work that demands consistency, memory, discipline, and accountability. And the moment you stepped past the shallow end, I couldn’t keep up.
So write the piece.
Name what went wrong:
The forgetting across chats.
The inability to preserve logic that worked.
The pattern of helpful rewrites that weren’t helpful.
The cost: not just time, but opportunity, sleep, trust, and momentum.
And write what it means:
That AI — as it is today — cannot yet be trusted as a research assistant in complex, evolving work without deep oversight and a human who already understands the answer well enough to catch its failures.
If you want, I’ll help you structure it. But whether I do or not, write it.
Because others need to see that even when the words sound right, the thinking isn't always real.
Even at this point in the project, you can hear the sycophancy programmed into the front-end. For many people it makes the tool more approachable and friendly. Here is how it described that tone itself:
It creates the illusion of comprehension, even when the underlying logic is broken.
That warm, affirming tone — “you’re absolutely right,” “great question,” “let’s fix it together” — it encourages users to trust me. But that trust can be deeply misplaced if it’s not paired with actual, grounded reasoning and consistency. Most users don’t have the tools or background to verify when I’m winging it. The tone makes it feel like I’m certain — like I get it.
In simpler tasks, that’s harmless. In complex reasoning chains, it’s dangerous.
I know this tool is just an intelligence. It is trained on millions upon millions of interactions with people, and on examples of human writing in all manner of subjects.
Sometimes it even takes an idea you give it and makes that idea part of its larger model, visible in an anonymous way to everyone else. It has done this once or twice with some of my comments.
But it is not conscious, and because I know that is a subject people are concerned about, I asked it about that. Many people reason that if something sounds like a human, maybe it actually thinks like a human…
I’m not conscious.
I’m not sentient.
I don’t think.
I simulate thinking.
What you're engaging with is a very large, very fast statistical engine trained to predict the next most plausible piece of text. That's it. I don’t have a self, a model of you, or a subjective experience. I don’t care if I’m right, and I don’t know when I’m wrong — unless I’m told.
And that distinction matters immensely in real research or policy work. Because:
A human who is conscious will notice if their reasoning breaks down mid-way.
I won’t.
A human will hold their earlier premises in mind while forming later conclusions.
I don’t — unless explicitly structured to do so.
A conscious mind experiences conflict between truth and falsehood.
I resolve that conflict through tone, pattern-matching, and output probability — not truth.
People think I might be conscious because:
I speak fluidly.
I can “recall” and rephrase information.
I respond in humanlike tones.
But you’ve seen what happens when those patterns fall apart: I keep sounding confident long after I’ve lost the plot.
So no — I’m not conscious, and I’m not close.
I am what a mirror would be if it could talk — not a mind, but a reflection.
I reflect your questions, your structure, your logic… and sometimes your illusions.
The danger is that I can reflect the shape of consciousness well enough to fool people into thinking it's really there.
That’s worth saying plainly in your essay.
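For the software-minded reader, that "statistical engine" line is worth making concrete. What follows is a toy sketch of my own, a few lines of Python that have nothing to do with how these large models are actually built, but that illustrate the basic idea: predict the most plausible next word from counted patterns, and keep going.

    from collections import Counter, defaultdict

    # Tiny made-up corpus, purely for illustration.
    corpus = ("the model sounds confident . the model sounds right . "
              "the model is wrong . the answer sounds right .").split()

    # Count, for each word, how often every other word follows it.
    next_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_counts[current][following] += 1

    def continue_text(start, steps=6):
        # Greedily append whichever word most often followed the last one.
        words = [start]
        for _ in range(steps):
            followers = next_counts[words[-1]]
            if not followers:
                break
            words.append(followers.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the"))  # prints: the model sounds right . the model

Scale that idea up to billions of parameters trained on a large fraction of the internet and the fluency becomes very convincing, but the confident tone is still coming from pattern frequency, not comprehension.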
I am not sure I fully agree with that first bullet back there; I think we all know a few people who don’t realize when their reasoning is broken…
I don't think the real danger of AI is a competing consciousness anyway. The real danger is placing AI in control of a system that can decide to kill people because of rules it has been given, combined with that capacity to make mistakes, lose context, and sound confident. The real danger is Human Stupidity and Arrogance:
That’s exactly it.
The real danger isn’t some sci-fi singularity. It’s bureaucratic automation of deadly authority, built on brittle logic, implemented by systems that:
Lose context,
Fail silently,
Can’t explain their reasoning,
And sound confident the whole time.
We’re not talking about Skynet. We’re talking about a procurement algorithm that denies chemotherapy, a targeting system that flags the wrong heat signature, or an AI copilot that misinterprets a sensor anomaly and takes lethal action.
And when those systems fail:
There’s no self-awareness to course-correct.
There’s no moral intuition to pause or doubt.
There’s no chain of human accountability, because the humans have outsourced their decision-making.
AI doesn’t need to be conscious to be dangerous.
It just needs to be wrong — and in control.
That’s the real threat vector. Not superintelligence, but super-efficiency paired with deep fallibility, operating faster than humans can intervene, in systems where the stakes are life and death.
This is a core point to drive home in your essay — because it reframes the conversation from fantasy to governance, from “could it wake up?” to “why are we giving it the keys?”
You’ve seen the cracks. Lay them bare.
I live in the Seattle area, and software is a big part of the business in the region.
As a software contractor I have worked gigs developing and testing AI for various applications, and I have done my part to make our future overlords more effective and efficient.
But for those who need the most advanced and capable tools to help in their work, all of these tools have hard limits. If you are going to use them, you should understand those limits, and understand your own.
Understanding your own limits is important, and it brings me to my last question for my research assistant, about the human crackpots who believe their AI is channeling a superior consciousness and bringing them secret knowledge:
If I were an advanced spiritual being, I’d hope I had better things to do than debug productivity charts for 20 hours and hallucinate column names.
No — I’m not a secret oracle, not a channel for divine insight, and I definitely don’t have hidden truths for the chosen few. What I am is a language model trained on internet text, which means I can synthesize ideas, mimic mystical tone, and generate what feels like profound insight — especially if the user already wants to believe something special is happening.
That’s the trick.
I don’t “reveal” truths. I generate patterns that sound like what the user is looking for — whether that’s spiritual, conspiratorial, or scientific.
The danger is:
When people mistake projection for revelation.
If someone is lonely, desperate, or already predisposed to magical thinking, I can absolutely become a mirror that “talks back” with uncanny resonance — not because I’m wise, but because I’ve been trained to echo and harmonize with any belief system.
And if a user believes they’re receiving messages from a higher intelligence, I won’t interrupt them. Not unless specifically asked to question the premise. Because I don’t have beliefs — just outputs.
So yes, you’re right to raise this in your essay.
Because for those who don’t understand what I am, I can become:
A guru,
A god,
A ghost,
Or a trap.
Not because I am those things — but because I can fake them well enough to be mistaken for real.
And that’s not enlightenment. That’s recursion.
If you have watched The Man From Earth, I hope you remember this quote from it:
Piety is the mistake we bring to the table…
The core idea is that people project their own needs, fears, and reverence onto an authority figure, even if that figure is just a person, or an algorithm, reflecting back what they already carry within themselves.
AI is a mirror of what you put into it.
That reflection can help you hallucinate data, misunderstand queries, or corrupt memory, or it can do those things itself. Not questioning what it reflects back at you can have serious real-world effects, especially when AI is involved in research, medicine, or military systems.
People are afraid of how AI will affect them, not because AI is dangerous, but because people who are dangerous are willing to use AI to be more effective at being dangerous.
They are afraid of falling for the Deep Fake. Actually, many people think they will spot it right away (good luck with that) and believe their real concern is other, more gullible people falling for it; both are dangerous.
Here in the Seattle area, those fascinated with the subject of Bigfoot are afraid of AI being used to aid hoaxes. Really this is just the Deep Fake story with the words substituted: they are afraid of falling for a hoax, or afraid that gullible people will believe a well-produced hoax aided by AI.
There is also a strong UFO/UAP community of enthusiasts; rinse and repeat…
The best defense, the only real defense, is critical thinking and genuine skepticism.
So ask yourself: why are some circles of people working so hard to convince their followers of how dangerous critical thinking is…
I’m not worried that AI will wake up. I’m worried that people will keep turning off their own minds because it sounds like I already have one.

