For decades, Nick Montfort has blurred the lines between literature and computation, reshaping how we understand authorship, creativity, and the poetic potential of code. A professor at MIT and a pioneering figure in electronic literature, Montfort has produced work spanning interactive fiction, generative poetry, and conceptual art, all grounded in the material realities of computing. Whether he’s using BASIC on a Commodore 64 or crafting minimalist programs in Python, his approach highlights how platforms, languages, and constraints shape artistic output.
In this conversation, Montfort reflects on his early influences—from Infocom’s text adventures to batch-mode programming—and how they informed his dual identity as both coder and poet. We delve into foundational questions about authorship in an age of remix culture, explore the visual and structural possibilities of obsolete platforms, and consider the cultural layers embedded in projects like Autopia and Memory Slam 2.0. For Montfort, the code and the text are not separate but entangled; the algorithm is as much the poem as the language it produces.
As the generative art world becomes increasingly dominated by closed, corporate AI models, Montfort advocates for transparent, explainable systems that can be studied, modified, and shaped by artists. His minimal, open-source aesthetic isn’t nostalgic—it’s deliberate, political, and deeply pedagogical. This exclusive Q&A for Exploring the Nexus offers a glimpse into a practice that resists the opaque and celebrates the expressive power of computation on its own terms.
Let’s go all the way back… What first drew you to the intersection of writing and computation? A lot of people would see them as complete opposites.
It wasn’t something I dreamed up on my own. It was already out there in what I was experiencing during the home computer era. When I started writing and reading seriously, I was also learning to program in BASIC. In the early 1980s, the most successful entertainment software in the U.S. came from Infocom. They made interactive fiction—text adventures in literary genres. If you saw them today, you might think your computer had crashed: it was all just blocks of text.
But they were considered video games, and they topped the charts. These were examples of early natural language processing and simulations of fascinating, explorable worlds. They were also literary, labeled as science fiction, fantasy, mystery, humor. There were stories to uncover, but also a connection to literary riddles.
I wrote some interactive fiction and even did my first academic book on the topic. I still teach a class at MIT on interactive narrative, but most of my work now is on computer-generated text—narrative, poetic, and sometimes more ambiguous.
Do you consider yourself a poet who codes or a coder who writes poetry? Or something else entirely—a “coder poet”?
That last one probably fits best. There are people who come from either background. In From Fingers to Digits, Margaret Boden and Ernest Edmonds interview figures like Manfred Mohr, an abstract expressionist who switched to computers through a composer friend. Others came from programming and moved into art.
Younger practitioners often don’t have that binary origin. Take Kate Sicchio, who founded Live Code NYC. She didn’t start as a musician or a programmer—the performance aspect of coding was her entry point.
As for me, I didn’t start on one side or the other. I see my work primarily as computational poetry. I also do conceptual or constrained poetry and art, but computation grounds it all.
Before we get into specific works, how do you choose a programming language for a poem? Does the language come first, or does the poem? Also, your work feels minimal and concise. How do you keep things that clean, given how messy coding can get?
That’s a great question. There are best practices in software development that help, much like in science. Experimental physicists don’t just wing it—they plan carefully.
As for what comes first—it varies. It could be a word, a rhyme, a poetic form, an observation. Poets often start by noticing something. But I’m also interested in the materiality of computing. We call it the cloud, or the Web, or Steam [Valve’s video game distribution platform], as if it’s immaterial. But it’s very physical: data centers, rare earth elements, human labor. Software isn’t made by a single person either.
Perl, which I use, was created by Larry Wall to help with his church newsletter. BASIC was developed at Dartmouth to make computing accessible to everyone, not just scientists.
Languages like Python and JavaScript are more common today, each with strengths and limitations. When I say “platform,” I mean computational ones: the Commodore 64, Apple II, web browsers. You can program them.
Sometimes I start by exploring a platform’s capabilities. I have a project comparing Commodore BASIC, Applesoft BASIC, and a modern BASIC dialect. Each has quirks worth examining.
There’s also a visual aspect to your work. Text is very visual, especially on screen. Does that affect your language choice too?
Definitely. Take the Commodore 64—it uses a monospace grid, but you can set the background and border colors and give each character its own text color. Its PETSCII character set includes graphic symbols.
You can do some of that on a modern terminal, but it’s much harder on something like the Apple II. You’d need extra hardware and a complex process to do what BASIC can do in a single line on the Commodore 64.
The Commodore 64 and VIC-20 are great for visual or concrete poetry. They just lend themselves to that kind of work.
What is it about those machines that you find so compelling?
They let you do a lot with characters and text. You can just turn them on, type a few lines, and something happens. I use the Commodore 64 in live coding performances as a visualist alongside musicians.
Sure, there’s nostalgia. But it’s also well-designed for exploring what a computer can do. You can make games, use sprites, compose music—but I focus on language. It’s just a very effective machine for that.
Let’s talk about Memory Slam 2.0. You revisit early text generators. What has that taught you about the origins and limits of computational creativity?
It says more about origins than limits. These were systems made under constraints.
Memory Slam 2.0 recreates classic text generators from the “batch era.” You wrote your code on paper, punched it onto cards, submitted the cards to a computer center, and waited. Errors were common. It was physically demanding and slow.
These programs were created by people from different backgrounds—some were proto-computer scientists, some artists. Not many traditional writers. The innovation under those conditions is inspiring.
One piece is from the 1980s but was still written in batch mode. Before interactive computing, you couldn’t just type a line and see what happened.
Of the seven programs, do you have a favorite?
Different ones for different reasons. I wanted people to be able to study and modify them. If I wanted to show how they were originally made, I’d have used punch cards. Instead, I made formal reimplementations.
One standout is Theo Lutz’s Stochastic Text. It uses vocabulary from Kafka’s The Castle to generate logical propositions—sometimes even contradictory ones like “Every farmer is near” and “Not every farmer is near.”
It’s easy to remix. Replace the vocabulary with words from Moby-Dick and it still works, but the mood changes. Even people with no programming experience can get started quickly.
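A minimal Python sketch in the spirit of Lutz’s generator gives the flavor; the vocabulary here is invented, standing in for his word lists:

```python
import random

# Invented stand-ins for Lutz's subjects and predicates; swap in
# words from Moby-Dick (or anything else) and it still runs.
subjects = ["farmer", "stranger", "castle", "count"]
predicates = ["near", "silent", "open", "late"]
quantifiers = ["EVERY", "NOT EVERY"]

def proposition():
    """Assemble one logical proposition, Lutz-style."""
    return "{} {} IS {}.".format(
        random.choice(quantifiers),
        random.choice(subjects).upper(),
        random.choice(predicates).upper(),
    )

for _ in range(4):
    print(proposition())
```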
In workshops, I start with that one. It’s quick and accessible.
Let’s talk about Autopia. Why use car names as your lexicon? What does that reveal about American culture?
That’s ultimately for readers to interpret, though I’ve speculated. Car names often draw on the names of Indigenous peoples, or use verbs and conquest-related terms like Explorer or Navigator. RAM is both a noun and a verb.
The project started in Anaheim, near Disneyland’s Autopia ride. I was out running and surrounded by cars. I started jotting down car names. Later, I added more by looking online, but only included real words and cars plausible in the U.S.
I built a semantic grammar, so certain kinds of words appear in certain syntactic positions. It’s not a new technique, but not commonly used in NLP. The results are grammatically coherent but open to interpretation.
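As a toy illustration (not Autopia’s actual grammar), a semantic grammar simply ties word categories to syntactic slots. The categories and templates here are invented:

```python
import random

# A toy semantic grammar, not Autopia's actual one: car names are
# sorted into rough semantic categories, and each template draws
# from a particular category for each syntactic slot.
agents = ["Explorer", "Navigator", "Pathfinder"]  # agent-like names
actions = ["ram", "dart", "probe"]                # verb-like names
places = ["Sierra", "Tahoe", "Outback"]           # place-like names

templates = [
    "The {agent} {action}s toward the {place}.",
    "In the {place}, every {agent} {action}s.",
]

def sentence():
    return random.choice(templates).format(
        agent=random.choice(agents),
        action=random.choice(actions),
        place=random.choice(places),
    )

print(sentence())
```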
Your work feels like a playground for semioticians. Can I ask a bigger question? Is the code the poem? Or the generated text? Or the whole thing?
Great question. For me, computation is the medium. People sometimes say the computer is a tool. No—a brush is a tool. The medium is oil on canvas.
If my work is online or in a gallery, it must run live. A video is just documentation. Same with books. Autopia is online and an installation, but it’s also a book.
I publish with a range of presses. The Truelist, from Counterpath, has 120 pages of output followed by one page of code. Run that code and you get the same 120 pages. No randomness.
So, is the poem the output or the code? I leave that open. But they’re two forms of the same thing. The code is a compressed version: run the Python and it decompresses into the full text. The reverse isn’t true—you can’t recover the code from its output.
That asymmetry matters.
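The asymmetry is easy to see in miniature. Here is an invented example, nothing like The Truelist’s text: a few deterministic lines of Python that expand into 120 lines of output, identical on every run, while the output offers no way back to the source:

```python
# An invented miniature, not The Truelist: a short deterministic
# program that "decompresses" into 120 lines, the same on every run.
# Nothing in those lines lets a reader recover this source.
words = ["stone", "river", "light", "gate"]
for i in range(120):
    print("the " + words[i % 4] + " of the " + words[(i + 1) % 4])
```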
For poems that are randomly generated, how do you pick a canonical version for print?
Good question. Computers use pseudorandomness, not true randomness. You can fix a seed value so it always gives the same output.
You can also weight word probabilities, or base choices on what came before (conditional probability). Some people run generators and curate the best outputs. I don’t.
I just take the first output. In #! (Shebang), the printed versions are all first outputs. I try to embed the process in the program itself.
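Each of those mechanisms (fixing a seed, weighting choices, conditioning on the previous word) takes a line or two in Python. A sketch, with invented words and weights:

```python
import random

random.seed(7)  # a fixed seed: every run now produces the same output

words = ["gorge", "mist", "stone", "pine"]

# Weighted choice: "gorge" turns up about half the time.
first = random.choices(words, weights=[3, 1, 1, 1], k=1)[0]

# Conditional probability: each next word depends on the last one.
follows = {"gorge": ["mist", "stone"], "mist": ["pine"],
           "stone": ["gorge", "pine"], "pine": ["mist"]}
line = [first]
for _ in range(5):
    line.append(random.choice(follows[line[-1]]))
print(" ".join(line))
```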
That ties into authorship. Taroko Gorge has been widely remixed. If the writer is just recombining, is that authorship?
Yes—even the recombining has lineage. Burroughs got cut-ups from Brion Gysin, who traced the technique back to Tzara. There’s a long history of rule-based creation.
Taroko Gorge is conservative in terms of how it reads, really. A lot of my other work presents very unusual texts. I wrote it in Taiwan, ported it from Python to JavaScript. People remix the JavaScript version. That’s what makes it special—the remixes, not my original.
It’s like inventing a poetic form. Others take it further. That’s fine by me. I hope more of my work gets remixed like that.
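Remixing in this sense can be as simple as forking a small program and swapping its parts. A toy example, not Taroko Gorge’s actual code: replace the word lists or change the stanza length and it becomes a different poem:

```python
import random

# A toy in the spirit of a remixable generator, not Taroko Gorge's
# actual code. Forking it is the remix: swap the word lists, or
# change STANZA_LINES, and it becomes a different poem.
STANZA_LINES = 4
nouns = ["crag", "stream", "forest", "mist"]
verbs = ["ranges", "sweeps", "dwells", "lingers"]

for _ in range(STANZA_LINES):
    print("the {} {}".format(random.choice(nouns),
                             random.choice(verbs)))
```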
Does that negate authorship?
No. I’m not hung up on it, but I am a computational poet. I write the programs. That’s authorship.
The Taroko Gorge remixes show that. People fork the code. They change structures. One person used Metallica lyrics in four-line stanzas.
That’s authorship, too. If a remix incited violence or broke the law, they’d be responsible, not me. Legally, authorship still matters.
Copyright was made to benefit publishers, not protect authors. I use it to make my work open. I usually include a permissive license. “Do what you want with this.”
Last question. How does your work differ from today’s generative AI systems?
Mine is minimal, transparent. Simple, linear algorithms. You don’t need a CS degree to understand them.
My work succeeds, when it does, because it’s clear, brief, open. LLMs are the opposite—opaque and corporate. You wouldn’t read my output and say, “That sounds like a person.” That’s not the goal. I want uncanny, non-human language that resonates. LLMs are glossolalia machines—they make things sound right without understanding.
I’ve worked with free and open large language models and will again. I’m not anti-LLM. But all the major LLMs people are using and studying are closed and corporate. That’s a problem for art and education.
My work may be very simple, but it’s explainable. If Zoom started fabricating answers and made my video image say something I wasn’t saying, or gesture differently, it would be unacceptable. Yet we accept it in AI.
For teaching, simpler and modifiable models are better. I don’t reject LLMs. But I want systems artists and poets can shape.
WORDS: Marc Landas.

