Against Prediction:
Designing Uncertain Tools

An interview with Patrick Hebron, the machine intelligence design manager at Adobe.

Elvia Wilk is a writer and editor living in New York. Her first novel, “Oval,” was published by Soft Skull Press in 2019. She is the recipient of a 2019 Andy Warhol Arts Writers grant and is a 2020 Transformations of the Human fellow at the Berggruen Institute.

In 2018, Patrick Hebron was asked to assemble a team for a new Machine Intelligence Design initiative at Adobe. Operating like a research and development department, the group has expanded the realm of potential interfaces and products and reconceived the typical workflow by which software evolves, forging a paradigm shift in the industry — and, on the front end, a shift in the way creative people use digital tools.

A designer, software developer, teacher and author, Hebron brings a perspective on artificial intelligence informed by philosophy and the arts. While he allows that machine learning can create tools that maximize productivity by mimicking aspects of the human mind, he also believes that such tools can offer a new “counterpart” to human intelligence — a unique creative partner.

In our conversation, he speaks about the ethics of AI, his unique initiative at Adobe and how to balance research with development without closing down possibilities. As he puts it, “A box of Legos should have a bunch of bricks, not an instruction set. There should be doubt as to what the tool is for. That’s where it becomes creative.”


Elvia Wilk: Your team has published articles about the language and aesthetics of interface tools, pointing out how they often limit rather than expand creative thinking. On one level, your role at Adobe is to implement machine learning to improve the toolset that’s available, but on another level, it’s about rethinking the human-tool relationship.

Patrick Hebron: Especially over the last couple of years, in my world and also outside the tech industry, people are thinking about the role of machines in creative work. I think we have to be less literal when talking about creativity. Some of the earliest machine learning projects at Adobe and elsewhere envisioned something like an assistant that helps you with the drudge work. I think that framing puts too fine a point on the idea that creativity is separate from its implementation.

People operate in a much more intuitive way than that. They think about what they’re doing, but they don’t have that much distance from it. It’s more muddied — what constitutes creative work versus grunt work is not a clear line. Does the machine do the legwork while the person plays a kind of curatorial role? To some extent, perhaps. But that’s too hard-edged. The main purpose of my team is to ask: If we think about machine learning technologies from the ground up, in terms of what new possibilities they offer, how do we build new interaction paradigms around them?

“A box of Legos should have a bunch of bricks, not an instruction set. There should be doubt as to what the tool is for.”

Wilk: I struggle with the continued, humanist hang-up on creativity, not because I don’t think there’s an exceptional aspect to the human mind, but because the mind has never existed in a vacuum, distinct from other intelligences and tools. The world affects the thought process, and there is a continual reciprocal development of tools to suit changing needs. The tools we make then affect our behaviors. From reading your work, I gather that machine learning is special because it could change and speed up that feedback loop, allowing the tool to adapt to the user in real time, right?

Hebron: Yes, that changing relationship is why I always have a hard time wrapping my mind around some of the classic user questions: What is this thing for? Is it for novices or professionals? I do my best to avoid these questions, because the best thing you can possibly accomplish as the maker of a tool is to build something that gets used in ways you didn’t anticipate. If you’re building a tool that gets used in exactly the ways that you wrote out on paper, you aimed very low. You did something literal and obvious.

I often wonder: If you came up with the first programming language, how would you pitch that to a venture capitalist? The more universal the tool, the harder it is to try to make a pitch for it. Any targeted pitch would miss the point. You say: “This is a thing that can execute any logically valid idea.” The venture capitalist says: “Okay, can you give me an example of that?” “Well, hmm, you could use it to build a calculator.” “Oh, okay, so you’re building a system that can build calculators. I can see what the market for that would be.” But that’s missing the point. Sure, you could build a calculator, but you could also build anything else! The pitch for a programming language is impossible, yet it’s a fundamentally important thing to make.

In a different realm than programming languages, look at what people have done with the video game Minecraft — people go and build 8-bit computers and all these crazy things. This is the best thing about Minecraft, in my opinion. These projects are completely divorced from Minecraft’s originally conceived use-case. A box of Legos should have a bunch of bricks, not an instruction set. There should be doubt as to what the tool is for. That’s where it becomes creative. But that’s a pretty tough pitch when it comes to design tools.

“The best thing you can possibly accomplish as the maker of a tool is to build something that gets used in ways you didn’t anticipate.”

Wilk: What’s an example of a digital design tool that doesn’t come with an instruction manual?

Hebron: One thing machine learning can do is look at a sequence of actions performed by the user and try to predict the next step. But the possibility space is pretty vast, so it’s going to have a really hard time predicting what the user is trying to make. Even if a prediction is successful, because the tool is the one that got there, the user might reject the solution if they feel like the idea didn’t come from them.

So, it’s actually fairly fruitless to try to predict what the user wants to do. You can’t do it well, and if you do, you negate the fact that the user wanted to do it in the first place. I think a better goal is to try to predict what the tool itself should become in order to enable the person to go in the direction that they’re trying to go in. Rather than trying to predict what the user’s trying to make, try to predict what tool they need to make it. At a simple level, maybe they’re going to need a color picker next, so you offer the color picker — you don’t try to predict the color they’ll pick.

Wilk: Is the goal to strike a balance between offering too many and too few possibilities?  

Hebron: The set of possibilities for what you might want to create with a design tool is beyond immense. The tool needs to have some way of being general but not get in the way of anything specific that you want to achieve.

Alan Perlis has this quote: “Beware the Turing tar pit, in which everything is possible but nothing of interest is easy.” Open-endedness is great, but achieving something within it is not a given.

“If you came up with the first programming language, how would you pitch that to a venture capitalist?”

Wilk: Right, sometimes the more limits you have, the more creative you can be within a given frame. But it seems like you’re saying machine-learning tools could have the benefits of both: the limited framework in the moment and the vast potential overall. And they could do this by anticipating how general or how limited a user might want their choices to be at a given moment?

Hebron: There are cases where someone has a fully baked idea, sits down to execute it and almost nothing changes in the process. There is, of course, an executional difference between having it in your head and having done it. But if the project involves fairly well-trodden terrain — like making a clothing ad — that difference doesn’t matter a whole lot. When you’re trying to make something original, though, executional steps are very relevant to the outcome.

There is always some uncertainty in the user’s mind. Why not admit that and bring that forward? Tools are currently aimed at productivity and getting things done, but one of the great benefits of machine learning is that it is no more concrete than the human mind. There’s potentially a pairing there.

Wilk: There’s definitely a pairing. But does that mean equivalence? In machine learning, there is often an obsession with modeling the human mind. Besides the question of whether they can, why do we think our most advanced technologies even should model the mind? 

Hebron: We’ve got what, eight billion humans on this planet? I see very little reason to try to use machine learning to create more human minds. We have a very straightforward approach to producing people. Why would we try to invent new mechanisms for that?

First of all, we’ll fail at it. I do not think that the primary reason we would fail has to do with architectural challenges; I think the main challenge would be simulation. Maybe genetics are a prerequisite for being a human, but I think the main requirement is being acculturated within human society. When people propose imitating the human mind, they ignore the extent to which our embodiment and acculturation shape our intellects. Even a minor shift in those details would lead to a dramatic movement away from human behavior. I don’t think we can simulate a human’s experience of the world sufficiently.

“The fact that the tool got there and not the user may mean that the user will reject the solution because they feel like the idea didn’t come from them.”

But, again, what would be the point? What do we need more humans for? Machine intelligence can provide a counterpoint to human intelligence. We should see this as something akin to the search for extraterrestrial life or the effort to decode dolphin language. We can better understand our own intelligence by contrasting it with other meanings that intelligence might have.

Currently, we have such a poor understanding of what intelligence means outside of ourselves that, given the capability to design a new intelligence, the only thing we can think to create is something like ourselves. This is a limitation in our thinking about tools.

I think the creation of AI will unfold like any other design process. You thought you were going here, but you ended up a bit further over there. You zig, and you zag. The properties of AI are going to come out different than what we thought. That will be far more illuminating about who we are than landing on what we thought we wanted.

Wilk: We’re both affiliated with the Transformations of the Human program at the Berggruen Institute, which provides a framework for philosophical and artistic thinking about these questions. A main focus of the program is to question the entrenched disciplinary division between the humanities and the sciences. For instance, the implications of AI for the humanities, and vice versa, are huge. After all, the question of how a machine can mimic or serve a human user is a small step from the question: What’s a human? How often do those discussions come up in your team?

Hebron: The ethical implications of AI and the possibility that automation could replace designers are big topics for us. I’m sure this is a question across many industries. AI’s impact on the economy is likely to come with extremely difficult growing pains that will affect a large number of people in a tangible way. Over time, it will probably resolve into a new state where people use their time differently than they do now.

People have made the argument that the Renaissance was a product of increased free time. I don’t want to push that comparison too hard, but it’s difficult to know what the long-term effects of these changes will be. Certainly in the near-term, there will be very real growing pains. That’s a terrible thing. But when one door is closed, another is opened.

Automation may replace some creative jobs. At the same time, I think the real opportunity for machine learning in design is actually a very humanistic one. Machine learning enables the tool to recognize not just the pixels in an image but also the content that those pixels represent. This means that users won’t need to translate their ideas into the language of pixels. They can keep the ideas in the language of the content. That’s what we’re trying to build with the projects that I lead. The hope is that we can enable people to express themselves more fully. And my involvement in the Transformations of the Human program at the Berggruen Institute has been really instrumental to my thinking about this subject.

“‘Beware the Turing tar pit, in which everything is possible but nothing of interest is easy.’”
— Alan Perlis

Wilk: Could you say more about the goal of enabling people to express themselves more fully?

Hebron: The path to expertise in a tool like Photoshop should have much more to do with the design domain than with the tool itself. The tool should come to the user much more than it does now. If you like to express yourself more verbally than visually, for instance, the tool should allow that. We should use machine learning to serve your individual creative process better. We want to enable you to do more and think bigger. Rather than spending your time learning to use the tool, you can spend it thinking about what ideas or emotions you want to communicate.

The way I start some pitches at Adobe is to talk about what changed in architecture with the advent of computer-aided design, or in graphic design with the advent of Photoshop. The tools changed how people worked. They did lead to some shifts in the economy, but ultimately, these tools did not remove the need for architects or designers. They enabled people to operate at a larger conceptual scale. The longer trajectory of AI in all manner of work is likely that we will be able to scaffold our society more effectively. There’s plenty to worry about, but I think the trajectory bends in the right direction.