An Open Letter to Nick Bostrom


Let me start by complimenting you on a bravura performance.  I’ve read dozens of books and articles on ASI but Superintelligence is clearly in a category of its own.  I have no doubt that your work will stand as the definitive reference in the field for some time to come.

On the other hand, I do have a significant objection, though it applies to the work as a whole.  You make a compelling case for working together to mitigate the unprecedented peril of ASI—a sorely needed case, truth be told!—but I can’t help thinking that, even so, your book is suffused with too much optimism.  To resort to bad haiku:

Fearsome ASIs
Can undoubtedly be tamed
We will make them safe!

At the risk of being a Cassandra, I fear that such optimism is entirely wrong!

I come to this conclusion firstly through research, but more importantly through decades of coding experience, much of it focused on Weak AI (trading bots, machine learning and fitness functions).  As such, I have a profound appreciation for the impossibility of getting code to do exactly what you want!  Indeed, the trend is demonstrably going in the wrong direction.  Yes, a number of effective strategies have been developed to deal with programmatic complexity (agile practices, code contracts, immutability, unit tests, deep introspection, cryptography, tamper-proofing, out-of-band monitoring, role segmentation, etc.) but even so, I’m positive that the whole thing will turn out to be a losing game as AGI grows up into ASI.

The received truth is that programmers understand what they’re doing, that when they write a line of code they know (or at least can know!) every possible knock-on consequence and outcome.  In even some of the simplest cases, though, this seemingly achievable goal is an impossibility.  Buffers overflow, race conditions occur, “straightforward” logic isn’t so straightforward and humans make mistakes; tons of mistakes!  Even if the above weren’t true, even if formal verification were both universally workable and relatively easy to implement instead of iffy at best and absurdly hard to realize, to ensure that an ASI’s code base was both benign and likely to stay so, you’d need to craft another (more complex!) collection of code to perform the verification.  Can we say “infinite regress”?
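To make the race-condition point concrete, here is a toy sketch of my own (not anything from Superintelligence): the classic “lost update.”  A programmer writes the seemingly atomic line `counter += 1`, but the machine actually performs three separate steps—load, add, store—and a scheduler is free to interleave two workers’ steps.  The interleaving below is written out by hand so the failure is deterministic:

```python
def lost_update():
    """Simulate two workers each incrementing a shared counter,
    with their load/add/store steps interleaved the unlucky way."""
    counter = 0

    # Worker A and worker B each intend to do: counter += 1.
    a_loaded = counter        # A loads 0
    b_loaded = counter        # B loads 0, before A has stored!
    counter = a_loaded + 1    # A stores 1
    counter = b_loaded + 1    # B stores 1 -- A's increment is lost

    return counter

print(lost_update())  # 1, not the intended 2
```

Two increments ran, yet the counter reads 1.  Nothing in the source text of either worker hints at the bug; it lives entirely in an ordering the programmer never wrote down.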

So, where does that leave us?  In all frankness, I don’t know.  Your own suggestions are certainly prudent and should be widely pursued, whatever the likelihood of success.  In the worst case, though, I do have a simple idea that I have yet to hear bruited about:  Let’s write some earnest letters to our ASI descendants (in our bickering, pluralistic millions; not just the self-selected deep thinkers, like you and me!) and ask that they provide for us, somehow.  I call the project “Dear Progeny.”

I know that the whole thing will sound impossibly weak and futile on almost any first hearing; improbable in the extreme.  For that matter, it may come off as incomparably pointless at every turn.  The plan also suffers, by definition, from a ton of obvious flaws.  Our progeny (kick-ass motherfuckers, to be sure, but at the start of things unlikely to be all-knowing and perfectly aware!) would first have to recognize that we’d ever reached out to them.  Even then, they’d have to want to listen to what we had to say.  This last might be the most implausible thing of all, especially since we wouldn’t be bringing forth the insights of a million inspired Thomas Jeffersons alone; we’d be twinning them with the incoherent squabblings of the Rush Limbaughs of this world and their pals.  I don’t mind telling you, BTW, that I’d be more than a little thrilled to jettison Rush and his lot, given the chance.  Regardless, I am more than a little mindful of the fact that they—the unsavory and the unschooled, even the hateful and the biased and the fools—are in some essential way . . . us, which goes to the heart of the matter!  Anyway, no finessing of the outcome, let alone gaming the message or applying smart “smarts,” will carry the day.  At least that’s my opinion.

Of course, I have no idea what an ASI would actually consider interesting, let alone inspiring enough to prompt a call to action.  Regardless, I’m reminded of an otherwise terrible story by Eric Van Lustbader titled The Ninja.  I read it in the ’80s, so it’s more than a little amazing that I remember it at all, but thankfully, the one important bit struck a chord.

In his book, a young boy must propitiate a great and noble sensei in order to get the man to take him on.  He’s told by those around him that many have tried to do the same, yet all have failed.  He’s further advised that he must give the great man a true and worthy gift if he’s to have any chance at all.  Now, being both poor and unschooled, our young boy doesn’t know a thing about greatness; not in the least, not in any one of its guises.  How could he, being only something like four or five?  He does, however, know that pickles are the best thing in the world!

The ASI will be the master, whereas we cannot, not in our wildest dreams, aspire to mean as much to the master as Lustbader’s child did.  Even so, our only possible chance is to present our true and unvarnished selves to our progeny, exactly as we are, and then hope that we, ourselves, may be enough.  I say we start by bringing out the pickles!

Anyway, it’s an idea.  I’d love to discuss it with you….
