Some people are better at following instructions than others.
A short while ago, I had a few people around my house to jam, have some drinks and play cards. None of them had been to my house before.
Some of my friends found it really easy to get here and didn’t call once to ask for directions. Others had a bit more difficulty. Some ignored my instructions completely and still got here.
The benefit of talking to people who have background knowledge of London is that my instructions can be pretty vague, yet still successful.
I have some younger nieces and nephews. If I ask them to bring me something, I have to be really clear about what it looks like and when to bring it, and probably give them a motivation to do it. Sometimes the task gets completed. Most of the time it doesn’t. I can’t blame them for that, though; they’re young and don’t follow instructions as well as my twenty-something friends.
I used to be friends with a lizard called Ella when I was 13. She peed on me once. That wasn’t even an instruction, she just did it.
What if they controlled us?
So here’s a weird thought – what if these humans, nieces and nephews, and animals trapped us in a cage and were in charge of us?
When I tell my friends to do certain things and complete tasks, we can have a conversation and gain further clarity on the final goal.
This is what we call a “working relationship”. I mean, it’s still slavery but let’s ignore that for now. (What a strange thing to say…)
When I tell my nieces and nephews to do things, sometimes communication breaks down and I have to resort to different tricks like offering chocolate. If they don’t understand me, I don’t get any kind of freedom.
This is what we call an “alarmingly frustrating relationship”.
When my lizard friend asks me to do something, I have no bloody idea what is happening because she doesn’t speak a human language. She starves because she doesn’t get any food from me.
This is what we call a “pointless relationship”.
As we progressed down the list of agents, the relationships became more difficult because the intelligence gap between me and them grew wider and wider. Yet they, for some reason, were in charge of me. I want to help, but they don’t understand me.
If a lizard controlled my fate, I would need to find a way out of this cage or I’d die.
We are the babies and lizards
Nick Bostrom popularised the term “superintelligence” in the book of the same name to describe artificial intelligence that has surpassed human intelligence and capability in every domain. Such systems are faster than us, remember more and complete complex tasks with greater ease. And they don’t need food to keep going.
We currently control computers. We tell them what to do, fix them and rely on them in specific domains. Now, what happens when they’re simply smarter than us, and we don’t want them to take advantage of us the way we (unfortunately) take advantage of animals?
This is a problem that has, for a number of years, tormented Artificial Intelligence researchers [1].
In the relationships above, the intelligent being trapped in the cage is wasting its talents. But the babies leave us trapped there simply because they don’t understand what we’re saying.
I would need to teach them the English language, how to read, how to understand complex directions and concepts and, perhaps most importantly, to trust me. That would take too long.
Immensely frustrating. So it’s better for us to find a way to leave the cage. We can help them better than they can help themselves.
When AI researchers talk about problems such as this, it often sounds like a silly fantasy made up as a way to inject more unnecessary terror in the world. Evil computer overlords – ha!
The point they try to emphasise is that it isn’t evil artificial intelligence we should worry about. It is capable artificial intelligence.
I can help these babies and animals better than they can help themselves, so when I get out of this cage, I’m going to lock them in this room so I can feed and teach them with greater ease.
We already rely on artificial intelligence in scenarios ranging from helping pilots fly to fetching information from the internet and fighting crime.
This has resulted in crashes [2], injection of fake news articles [3], and unintentional racist profiling [4].
Yet we continue, because it’s so helpful and easy. Following A.I. is the path of least resistance, so it would take a remarkably sharp change to cause a worldwide uprising… if one ever comes.
This isn’t a complaint about the current state of our attitudes towards artificial intelligence. This is to highlight the problem that superintelligent computers may pose to us in the next 50 to 100 years (or never, depending on how confident you are this will ever happen).
Superintelligent computers may not be evil. They may just be very good at what they do. So they should be in charge. Unless we’re happy to let babies run the world?
A.I.D.A.N
This brings us to the end of this short discussion. I want to point you in the direction of some great books because I’ve inevitably missed out a lot of detail here.
Inspiration: Life 3.0 by Max Tegmark
More inspiration: Superintelligence by Nick Bostrom
Slightly looser inspiration: Hello World by Hannah Fry
One of the freakiest A.I.s in sci-fi: The Illuminae Files by Amie Kaufman and Jay Kristoff (all of them; the series features an A.I. named A.I.D.A.N who may or may not go off the rails).
More superintelligent A.I. in sci-fi: Thunderhead by Neal Shusterman
As always, thank you for reading!
If you have any questions, feel free to ask below.
Twitter: @improvingslowly
Facebook: Improving Slowly
References:
[2] Air France Flight 447: ‘Damn it, we’re going to crash’
[3] Facebook Is Changing News Feed (Again) to Stop Fake News
[4] Is Artificial Intelligence Racist?