Learning from Charles Simonyi's Intentional Programming
By Ben Houston, 2025-05-01
Charles Simonyi's 1995 vision for Intentional Programming (IP) was nothing short of radical. At a time when object-oriented programming was still cresting in popularity, Simonyi argued that the real problem wasn't the lack of better languages -- it was the existence of programming languages themselves. He wanted to replace them with something deeper: a persistent, editable graph of “intentions” from which code could be derived, transformed, and re-imaged.
Today, as we grapple with how to make use of LLMs in software development -- and in particular, how to move from agentic coding to declarative, intent-driven toolkits -- Simonyi's ideas deserve renewed attention. There are striking parallels between Intentional Programming and modern intent-based programming. But there are also crucial differences, shaped by the emergence of AI, improvements in user experience expectations, and the practical realities of today's developer tooling.
A Shared Diagnosis: Languages Are Leaky Abstractions
Simonyi's critique began with an inventory of what he considered “terrible things” that languages normalized: namespace collisions, intermingled implementation and intent, and the brittleness of syntax and semantics. These issues, he argued, were not merely annoying -- they were infrastructural flaws that discouraged abstraction, inhibited reuse, and made meta-programming nearly impossible for everyday developers.
Intent-based programming arrives at a similar conclusion, but from a different historical path. We observe that LLMs perform best when they are given clear specifications, not when they're asked to reason over legacy codebases filled with implicit assumptions. In this view, the problem isn't just languages -- it's the lack of persistent, machine-readable intent.
Simonyi wanted to eliminate the tyranny of fixed languages. We want to eliminate the tyranny of implicit code. The conclusions converge: the unit of software expression should be intent, not syntax.
Intentions Then and Now
Simonyi's “intentions” were closer to modular AST nodes with pluggable syntax and semantic reduction enzymes -- something between a programmable language kernel and a versioned DSL hub. These intentions could define their own editors, notations, transformations, and debugging views.
In modern intent-based programming, intent is instead specified declaratively (often in YAML or JSON), with the system inferring or generating code as a derived artifact. The coupling between specification and implementation is looser: Simonyi's model was tightly integrated, almost Lisp-like in its homoiconicity, whereas today's systems lean on LLMs as lossy but powerful inference machines that bridge high-level descriptions and working implementations.
That looser coupling makes modern intent-based systems far easier to prototype and deploy. Simonyi envisioned custom GUI editors for intention trees; we use text editors and YAML, and rely on generative AI to fill in the gaps.
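To make that concrete, here is a hypothetical intent spec, sketched as a typed TypeScript object. In practice the same content usually lives in a YAML or JSON file; every field name below is an illustrative assumption, not any particular toolkit's schema:

```typescript
// A hypothetical shape for a declarative intent spec. Real systems often
// express the same content in YAML or JSON; this schema is an assumption.
interface IntentSpec {
  kind: string;           // which generator should handle this intent
  name: string;           // stable name for the derived artifact
  description: string;    // the natural-language intent an LLM elaborates
  constraints?: string[]; // hard requirements the generated code must satisfy
}

const contactForm: IntentSpec = {
  kind: "ui-form",
  name: "ContactForm",
  description: "A form for creating a CRM contact, with client-side validation.",
  constraints: ["validate the email field", "submit via POST /api/contacts"],
};
```

The spec says what the artifact should do; an LLM-backed generator decides how.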
The Compiler as a Platform
One of Simonyi's central insights was that programming languages, like word processors, would give way to platforms. Just as Microsoft Word displaced specialized typesetting tools by becoming a universal host for document features, Simonyi predicted that intentional programming platforms would host a growing set of composable intentions contributed by a wide developer ecosystem.
This idea maps cleanly to intent-based programming in the AI era. The new toolkits are platforms: they support generators (akin to intentions) that encapsulate domain knowledge and generation strategies. Instead of being restricted to one syntax or compiler pass, developers can mix UI generators, data-layer generators, workflow generators, and so on.
Where IP envisioned this through custom imaging/reduction enzymes, today we see it in pluggable LLM prompts, reusable spec types, and generation engines. The AI model becomes the compiler -- and the compiler is now stochastic, data-driven, and contextual.
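As a sketch of what "pluggable" looks like in practice -- assuming a hypothetical toolkit where each generator pairs a spec kind with a prompt strategy and a post-processing step:

```typescript
// A minimal sketch of a pluggable generator. All names here are
// hypothetical, not any real toolkit's API.
type FormSpec = { name: string; description: string; constraints?: string[] };

interface IntentGenerator<Spec> {
  kind: string;                           // the spec kind this generator claims
  buildPrompt(spec: Spec): string;        // domain knowledge, encoded as a prompt
  postProcess(llmOutput: string): string; // validate or normalize the model's code
}

const uiFormGenerator: IntentGenerator<FormSpec> = {
  kind: "ui-form",
  buildPrompt: (spec) =>
    [
      `Generate a React form component named ${spec.name}.`,
      `Intent: ${spec.description}`,
      ...(spec.constraints ?? []).map((c) => `Constraint: ${c}`),
    ].join("\n"),
  postProcess: (llmOutput) => llmOutput.trim(),
};
```

A registry of such generators plays the role Simonyi assigned to his ecosystem of intentions: each entry extends the platform without anyone touching a compiler.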
Divergence: Recursion, AI, and the Lossy Frontier
Perhaps the biggest difference is the role of recursion. In Simonyi's world, intentions reduced deterministically, with compiler-like precision. In today's systems, recursive application of intent-based generation -- where a toolkit generates new intent specs as intermediate outputs -- creates powerful bootstrapping workflows, but also trades determinism for flexibility, because it relies on LLMs to fill in the gaps.
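The bootstrapping loop is simple to state, assuming (hypothetically) that a generator can emit child specs alongside code:

```typescript
// A sketch of recursive generation, under the assumption that a generator
// may emit child intent specs alongside code. Types and names are hypothetical.
type Spec = { kind: string; description: string };
type GenOutput = { code: string; childSpecs: Spec[] };

function expand(spec: Spec, generate: (s: Spec) => GenOutput): string[] {
  const { code, childSpecs } = generate(spec); // one "reduction" step
  // Recurse into the specs this step produced, collecting all derived code.
  return [code, ...childSpecs.flatMap((child) => expand(child, generate))];
}
```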
We accept this loss of compiler-like precision because the productivity gains are immense. We no longer expect canonical reductions -- we expect useful ones. And with proper inference caching and version control, we can stabilize those reductions.
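One way to picture that stabilization is a minimal inference cache, assuming generation depends only on the spec text and the model version (`infer` stands in for the actual LLM call):

```typescript
import { createHash } from "node:crypto";

// A sketch of inference caching: identical spec + model version reuses the
// cached output, so a reduction stays stable until the intent itself changes.
const cache = new Map<string, string>();

async function generateStable(
  specText: string,
  modelVersion: string,
  infer: (prompt: string) => Promise<string>,
): Promise<string> {
  const key = createHash("sha256")
    .update(modelVersion)
    .update("\0")
    .update(specText)
    .digest("hex");
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // same spec, same model, same code
  const output = await infer(specText);
  cache.set(key, output);
  return output;
}
```

Checking the cached outputs (or the generated files themselves) into version control turns a stochastic compiler back into a reviewable one.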
Another divergence is Simonyi's aversion to text as a programming medium. His vision leaned heavily on structural editing and GUI-driven manipulation of program trees. Today, we've embraced natural language as the interface layer, using LLMs to parse and generate structured outputs. Rather than avoiding text, we primarily consume and generate it.
Legacy Code and the Realpolitik of Adoption
Simonyi emphasized the ability of IP to absorb legacy code and allow for gradual reengineering. That's a vital concern today as well. Any viable intent-based system must offer a gradient of adoption, enabling developers to embed intent islands inside traditional codebases.
Modern toolkits achieve this by working with file-based inputs and standard text formats. YAML intent specs live alongside .tsx files. Inference engines regenerate code, but never without developer review. This makes them less pure than IP, but more usable in the wild.
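A minimal sketch of that review gate, with hypothetical file paths and naming conventions, might look like this:

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// A sketch of regeneration with a review gate, using hypothetical paths:
// the spec (e.g. ContactForm.intent.yaml) lives beside the .tsx file derived
// from it, and a changed regeneration is written as a proposal, never as a
// silent overwrite of reviewed code.
function regenerate(specPath: string, infer: (spec: string) => string): void {
  const spec = readFileSync(specPath, "utf8");
  const target = specPath.replace(/\.intent\.yaml$/, ".tsx");
  const proposed = infer(spec);
  if (existsSync(target) && readFileSync(target, "utf8") !== proposed) {
    // The developer diffs and accepts (or rejects) the proposal explicitly.
    writeFileSync(`${target}.proposed`, proposed);
    return;
  }
  writeFileSync(target, proposed);
}
```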
The toolkit model -- where intent can be added incrementally and does not dominate the entire system -- may be the pragmatic successor to Simonyi's purer but more top-heavy approach.
Lessons from Simonyi's Vision
Simonyi was right that we needed to move away from languages as immutable artifacts. He was right that programmer intent deserved to be a first-class, structured, persistent construct. He was even right that the best way to scale abstraction wasn't to predefine every feature -- but to delegate definition to an open ecosystem of users.
But in the age of AI, we do not build everything by hand. We describe, we infer, and we re-describe. Our intentions are not just declarative -- they are generative, recursive, and minimalist, allowing for LLM-driven, common-sense elaboration behind the scenes. What Simonyi saw as compiler extensions, we now treat as AI inference strategies. What he treated as editing environments, we treat as natural language UIs.
Yet the goal remains the same: make software reflect human thought more directly, more flexibly, and more transparently. In that sense, Intentional Programming is not dead -- it is reborn, fragment by fragment, inside every intent-aware, AI-powered system we build.
The future is not a language. It's a conversation -- structured, recursive, and resolvable into code.