In his fascinating post, "Computational Law, Symbolic Discourse and the AI Constitution," Stephen Wolfram points out that technology tends to make things more complex even as it makes them easier:
Back 50 years ago, pretty much the only way to define a procedure for anything was to write it down, and have humans implement it. But then along came computers, and programming. And very soon it started to be possible to define vastly more complex procedures—to be implemented not by humans, but instead by computers.
You know what else is a set of written instructions for humans to implement? Contracts. And of course contracts will eventually be reduced to code.
But that doesn’t at all mean contracts will be reduced to "push a button, get a contract."
Some things are computationally irreducible. In this case, that means you can’t iron all the bugs out of the code, because some legal relationships are simply too complex.1 So someone will have to code the contract, maintain the code, improve the code, calibrate the machine learning on which the contract’s conditions are based, calibrate the value systems used by the AI contract drafters, and so on. In fact, Wolfram doesn’t think contracts will become simpler.
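To make the idea of "coding a contract" a bit more concrete, here is a toy sketch, entirely hypothetical since no real symbolic discourse language exists yet, of a contract expressed as conditions that fire obligations when evaluated against the observed state of the world:

```python
from dataclasses import dataclass
from typing import Callable

# A toy "computational contract": each clause pairs a condition on the
# state of the world with the obligation it triggers.
@dataclass
class Clause:
    description: str
    condition: Callable[[dict], bool]
    action: str  # what the contract obliges when the condition is met

def due_obligations(clauses: list[Clause], world_state: dict) -> list[str]:
    """Return the obligations owed under the current state of the world."""
    return [c.action for c in clauses if c.condition(world_state)]

# A hypothetical two-clause sales contract.
contract = [
    Clause(
        description="Late delivery penalty",
        condition=lambda w: w["days_late"] > 0,
        action="seller pays buyer $100 per day late",
    ),
    Clause(
        description="Payment on delivery",
        condition=lambda w: w["delivered"],
        action="buyer pays seller the purchase price",
    ),
]

print(due_obligations(contract, {"days_late": 3, "delivered": True}))
```

Even this trivial version hints at the maintenance burden: someone has to decide what counts as "delivered," feed the contract accurate world-state data, and update the clauses as the law and the deal evolve.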
Today a fairly complex contract might involve a hundred pages of legalese. But once there’s computational law—and particularly contracts constructed automatically from goals—the lengths are likely to increase rapidly.
What will be in all those extra pages? Well, we can’t really know because we don’t have the language yet to talk about computational contracts.
When I didn’t have a way to express something, it didn’t enter my thinking. But once I had a way to express it, I could think in terms of it.
And so it will be, I believe, for legal thinking. When there’s a precise symbolic discourse language, it’ll become possible to think more clearly about all sorts of things.
In other words, once we reduce contracts to code—symbolic discourse language—we will be able to think about legal relationships at a much higher level while the basic stuff takes care of itself. And there will be lawyers, or some kind of legal knowledge professional, helping out at every stage of the process, enabling much more complex legal relationships than we can describe effectively today.
Once, accessing knowledge through published information was labor-intensive and available only to those with money, education, and a roomful of monks to make the copies. The entire publishing market was probably made up of a few tens of thousands of human beings.
Today there are millions of people engaged in publishing. Publishing is so easy that anyone can do it, and anyone can read the greatest encyclopedia in the history of the world for free. And yet publishing is still a huge industry, with companies from the New York Times on down to Snapchat celebrities and solo bloggers.
The printing press, the typewriter, printers, and the internet—none of these ruined publishing. Each drastically increased the number of people engaged in it, opening publishing up to many more people and businesses than before.
Maybe law will be similar. Maybe the changes coming to law will change the industry into something bigger, better, and more democratic than it is now. If so, there is no reason to think there will be fewer roles for lawyers to play.
Wouldn’t it be great if the robots we create wind up creating more work for lawyers, solving legal problems we haven’t even been able to think of yet?
I realize I’m doing little justice to the concept of computational irreducibility, but I hope I’ve got the gist of it right for the purposes of this short post. ↩