Speculative Version Control
The world of design needs to rewrite histories in order to save humanity from its own past.
I read something the other day from the author Daniel Pink: "The era of 'left brain' dominance, with its emphasis on logical, linear, analytical, and computer-like thinking, is on the way out. We're entering a new world in which right-brain qualities—inventiveness, empathy, meaning—will dominate." Apparently this was written in 2005!
But it got me thinking.
About AI.
Again. Sigh.
What's inspiring about the current state of AI is that it has instantly commoditized all of our most robotic tendencies and aspirations. We've spent a few hundred years perfecting and completing a long, arduous scientific and industrial revolution, then a post-war optimization process—which is to say the hunt for productivity and efficiency via increasingly formulaic specialization. We've been trying to act like robots since forever.
Logic, reason, living our lives as a series of if/then statements that could probably be coded somewhere: that was sort of the fashionable, rational thing to do. It was the thing that took the place of blind religious faith. We've always wanted explanations for things, which was probably the initial value prop of the church, and then became the core value prop, too, of the Age of Enlightenment.
During the Enlightenment, we had alternative explanations for things, for phenomena—things that had already been explained by religion, faith, lore, oral history—and now those same exact phenomena were being re-explained and given different causal thrusts. We got hooked. We fell hard for that stuff. The people who could turn complex phenomena into clean formulas the fastest were the people who were most rewarded in society.
Just look at academia, an entire profession devoted to turning all world phenomena into a repeatable formula. That's what we call knowledge, intellect, sophistication. The hunt for the algo. When the creation of knowledge took a long time, it was considered noble, a worthy pursuit, one you might not ever finish, like the Sagrada Familia or [pick any] field of academic study. But now it takes like ten hours.
We're all a little peeved that our sexy sophistication got taken away from us. It all just got flattened because who needs reason, logic, observation, and extrapolation, when you've got straight-up probability on your side, and with next to no margin of error? It's just the age of applied statistics, as Ted Chiang would say. Who needs to observe, catalog, and explain when there's a big red button that will just tell you what the next thing is (or could plausibly, believably be)?
Now the obvious big, huge, giant problem is that AI is a full accounting of humanity's past—which is often ugly, hideous. There is not much value in modeling futures based on pasts that never should have happened in the first place. I think the biggest creative opportunity in a world of AI is going to be the giant group project of rewriting our past.
Maybe we can fight the shitty consequences of applied statistics with deeply creative, non-model-generated applied ethics. We must. We have to. We have to give the robots training data that we don't feel awful about. It's kinda like how I try not to cuss in front of my kids, and the cigarettes are at the way, way, way deep dark bottom of my purse. I want to emit training data that I feel good about.
We must, as creative humans, do the one thing AI can't—the one thing remaining!—which is to have a moral opinion. To take a position on right and wrong. To have a point of view. That is so design. That's our role. We have points of view. We are the P.O.V. people. We believe things in our hearts and bones, not just in our heads, and those are areas of the human body that AI cannot access. We can generate incremental new data from our hearts and bones. We have to look inside there and find a general ethics. We are uniquely qualified as humans and among humans to have an ethics-rooted point of view, and to use that moral opinion—that creationary stance, that human posture, those ethics—to rewrite the past.
We have to embark on a gigantic cross-disciplinary, creative project of retelling our species' stories the way they should have been told, so that our robot progeny, raised on a strict diet of applied statistics, can grow up not to be psycho-killers.
We're raising some very weird children here—that's a little bit what this Large Language Model thing is, amirite?—and we're doing it as a collective society. We're all married to each other now, despite barely knowing each other; we have this scary-ass baby together, and we have to link arms and get to work on the alternative training data A.S.A.P. And let me be clear—the real history should be preserved, tagged, coded, and loaded right next to the giant fan-fiction fairytale that we all need to write together.
It's important to preserve the comps, in line, next to each other, forever wedded, so that there is never any shadow of a doubt which one is the real cussing mother who smokes cigarettes, and which one is the person just trying to raise a well-adjusted, not-terrible kid. History matters, history is what makes us who we are, history gives us the moral compass that we have—and it has to be fact-checked, documented, and memorialized right next to the new curriculum.
So yeah, let's all be curriculum designers, starting now. Parenting is a little bit like unscaled, perpetual curriculum design on the fly, right? Take your little corner of the universe that you know something about—is it graphic cover art from jazz albums in the 1960s? Cool. If that's your thing, and you know those stories, including the mean ugly bad bits, of which there are many, then write your true-fact version, duplicate the file, and write your Good Parent version right next to it.
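If it helps to picture the "duplicate the file" move, here is a minimal sketch in Python of what one paired record could look like: the fact-checked history and the rewritten curriculum stored side by side, each carrying a provenance tag so the two can never be confused. Everything in it (the PairedRecord name, the fields, the tag values) is a hypothetical illustration of the idea, not a schema anyone has actually published.

```python
# Hypothetical sketch: one "comp" pair, the real history and the rewritten
# curriculum kept forever wedded in a single record, each tagged for provenance.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PairedRecord:
    topic: str       # e.g. "graphic cover art from 1960s jazz albums"
    factual: str     # the fact-checked, documented history, ugly bits included
    curriculum: str  # the "Good Parent" retelling written right next to it
    tags: dict = field(default_factory=lambda: {
        "factual": "real-history",
        "curriculum": "speculative-rewrite",
    })

record = PairedRecord(
    topic="graphic cover art from 1960s jazz albums",
    factual="The story as it actually happened, mean ugly bad bits and all.",
    curriculum="The story retold the way it should have been.",
)
print(record.tags["factual"], "|", record.tags["curriculum"])
```

The only design choice worth noting is the essay's own: both versions live in one record, so there is never any shadow of a doubt which is which.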
This is about monkey see, monkey do. Our AI overlords aren't overlords, they're just monkeys. They seek to mimic patterns, so we have to feed them good ones, desirable ones, and unfortunately we don't have any! Or we have very few, because we are flawed. Humans are flawed, and so is our past, forevermore. It's time we started thinking of the LLMs more like parrots than people, which is a horrible insult to parrots, many of whom I'm told are quite sentient, moral, cunning, and capable of sadness and humor. The point is that parrots mimic. AI models are in the business of biomimicry, and the 'bio' is us.
Counterintuitively, the big silver lining in all of this is that computing and rote reasoning aren’t really ‘special’ anymore. The world of applied statistics has in many ways released us from that kind of robot work (not to be confused with craft work!). Instead, we can be creative. We can pursue beauty. We can wallow in complex thought. We can develop craft skills, read books, write plays, and build on the ideas of others. We can understand context and constraints, then respond with empathy and wisdom. We can reverse climate change. We can liberate the oppressed. We can be human again—that's the silver lining.
We now have permission to stop being mechanical. We can be human. We can be unpredictable, ambiguous, ethical, sentient, emotive, innovative—which, by the way, literally means making something new—we can be radical, surprising, non-probabilistic.
We can be human now. It's time.