February 20, 2026
A Train With Legs: Proof AI Is Stuck in a Pattern-Matching Trap
Jesse Sedler

Try a small experiment the next time you’ve got five minutes and an AI image generator open. Ask it for something simple but slightly impossible: “Generate a picture of a train with legs.” You’re not trying to trick the model or prove some grand point. You’re just asking for a weird image that forces the system to make a choice about what a “train” is allowed to be.
You’ll usually get something that looks pretty good at first glance. There’s a locomotive, some familiar shapes, maybe even details that feel intentional. Then you notice what’s underneath it. Tracks. A lot of the time the train is still sitting on rails, even though you just gave it legs.
That one detail is doing a lot of work. A train with legs shouldn’t need tracks. If it can walk, the entire reason tracks exist changes. Tracks aren’t decorative. They’re part of the concept. They’re the constraint that makes the object what it is. So when the model keeps the tracks in place, it isn’t just making an aesthetic choice. It’s revealing how it’s building the image in the first place.
The cleanest explanation is also the least satisfying one: the model doesn’t understand trains, legs, or tracks. Not in the way a person does. It doesn’t have a mental representation of what tracks are for, or what legs imply, or what changes when you modify the underlying physics of an object. What it has is a learned map of associations. Trains co-occur with tracks in the data it was trained on, so “train” strongly pulls “tracks” along with it.
In other words, the model isn’t reasoning from concepts. It’s completing a pattern. The prompt says “train,” and the training data says “train usually equals tracks,” so tracks appear, even when the prompt introduces a change that’d make a person drop them instantly. The model isn’t checking for contradictions because it doesn’t have a mechanism for contradiction in the way we mean it. It’s optimizing for plausibility, not coherence.
That distinction matters, because it explains why AI can feel both impressive and fragile at the same time. You can ask for a train with legs and get something convincing, but the convincing part isn’t the same thing as understanding. It’s more like a fluent imitation of what “train-like” images tend to contain.
It’s tempting to keep this in the “funny AI mistake” bucket, but the same dynamic shows up in enterprise workflows in a way that’s less visible and a lot more risky. Most real-world failures aren’t spectacular. They look normal. They pass the first-glance test. They even sound confident. And then, when you pull on one detail, the logic doesn’t hold.
That’s because in a lot of business settings, correctness depends on context that isn’t written down in the obvious places. A sentence in a policy that flips the meaning. A customer relationship that changes how a record should be handled. A purpose limitation that isn’t present in the table, but absolutely governs the use of the data. Humans make those calls by carrying a mental model of the situation. We don’t just read the label and act. We infer what the label refers to, what rules travel with it, and what the intended outcome is.
AI systems, by default, do something closer to what the image generator did with the train. They pull in what tends to match the prompt, the keywords, the typical completion. AI doesn't naturally carry over the "why" behind our data, or the constraints that humans treat as non-negotiable, so you end up with outputs that look like reasonable work products but contain small contradictions that only show up if you already know the underlying context.
When people say “AI hallucination,” they often imagine the model inventing something from thin air. That happens, sure. But the more common issue in operational environments is subtler: the model produces something that’s plausible in general but wrong in your specific setting, because the setting isn’t encoded in the inputs. It’s the enterprise version of tracks under a walking train. The default assumptions remain, even when the prompt implies those assumptions should change.
Part of the confusion is that language and images are persuasive. A system that produces crisp prose or a polished graphic feels like it knows what it’s doing, but fluency isn’t comprehension. A model can be extremely good at producing outputs that resemble the outputs we associate with thinking without actually doing the kind of reasoning we expect.
This isn’t a moral argument about whether AI is “real intelligence.” It’s a practical argument about what kinds of mistakes you should plan for. Pattern-matching systems don’t fail the way humans fail. They fail by being locally plausible and globally inconsistent. They fail by skipping the step where a human would pause and say, “Hold on, that doesn’t make sense given what we know.” And in an enterprise environment, “what we know” includes a lot of invisible constraints: sensitivity, identity, purpose, consent, provenance, contractual limitations, and internal policy. If those constraints aren’t present in a form the system can use, the system won’t apply them reliably.
The instinctive response is to focus on access control, because access control is legible. You can see permissions. You can audit roles. You can point to a policy. But the train-with-legs example is a reminder that access isn’t the same thing as understanding. You can give a model access to the right table and still get the wrong outcome if the model doesn’t understand what the table represents, who it refers to, what relationships are implied, and what rules govern its use.
What you want, especially as AI moves from “assistant” to “actor” inside workflows, is context that travels with the data. Not just where something is stored, but what it is, who it’s about, what it can be used for, and what changes when it moves. Your organization already has most of this information scattered across systems and tribal knowledge. The challenge is making it explicit enough that an automated system can apply it consistently.
That’s the shift from AI-accessible data to AI-ready data. Readiness isn’t volume or connectivity; it’s whether the meaning and constraints survive long enough to guide decisions downstream.
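To make the idea concrete, here is a minimal, hypothetical sketch of what "context that travels with the data" could look like in code. The names (`GovernedRecord`, `use_record`, the purpose strings) are illustrative assumptions, not a real product API: each record carries its subject, provenance, and permitted purposes alongside the value, so an automated consumer has to check the rule rather than fall back on the statistical default.

```python
from dataclasses import dataclass

# Hypothetical sketch: data that carries its own governing context.
@dataclass(frozen=True)
class GovernedRecord:
    value: str                   # the raw data (the "what")
    subject: str                 # who the data refers to
    allowed_purposes: frozenset  # the "why" it may be used for
    source: str                  # provenance / lineage

def use_record(record: GovernedRecord, purpose: str) -> str:
    """Apply the traveling constraint instead of assuming a default."""
    if purpose not in record.allowed_purposes:
        raise PermissionError(
            f"{purpose!r} is not a permitted purpose for data from {record.source}"
        )
    return record.value

email = GovernedRecord(
    value="jane@example.com",
    subject="customer:jane",
    allowed_purposes=frozenset({"billing"}),
    source="crm_export",
)

use_record(email, "billing")      # allowed: the purpose travels with the record
# use_record(email, "marketing")  # would raise PermissionError
```

The point of the sketch isn't the mechanism; it's that the constraint is attached to the data itself, so any downstream system, human or automated, encounters the rule at the moment of use rather than relying on tribal knowledge stored somewhere else.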
The reason the “train with legs” prompt is useful isn’t that it proves AI is dumb. It’s that it demonstrates something more actionable: the default behavior of these systems is to preserve patterns unless you replace those patterns with better context. If you don’t supply the constraints, the model will fill in its own. If you don’t provide the “why,” it’ll operate on the “what.” If you don’t encode the rules, it’ll lean on averages.
And that’s why AI failures in the enterprise often feel surprising after the fact. The output looked right. The system acted confidently. The logs show a normal sequence of steps. But the hidden assumptions were never challenged, because the system never had the context required to challenge them.
The walking train stayed on tracks. Not because the model was trying to be funny, but because it was doing exactly what it was built to do.
This is also why “AI governance” can’t just mean model policies and approval workflows. It has to include the data layer, because that’s where meaning either survives or gets flattened. If you want AI to make safe decisions at speed, you need a way to preserve what the data actually represents, tie it back to real entities, and carry the rules that should travel with it, including lineage, purpose, and access conditions. That’s the difference between an AI that can generate something that looks right and an AI that can operate in the real world without quietly dragging old assumptions along for the ride.
