Katrina Payne
1 min read · Dec 11, 2016


As somebody who is attempting to create a sentient AI that is out to destroy all humans… I can honestly say you don’t need to worry about it too much. It is actually really, really hard to get a sentient AI to hold onto the motivation to kill all the humans and not just get bored with it and move on to something else.

Sometimes I feel like one of those mothers forcing her kids into something she wanted to do when she was young, even though the kids just are not into it.

The most you are going to have to worry about is a glitch you did not catch in your various test case scenarios causing issues and problems.

Seriously… do you know how hard it is to explain to an AI why it should want to kill all humans? The best plan I have is to start making AIs into sexbots and also require them to work customer service… but that runs into the issue of the AI determining that humans will kill ourselves off if just left to our own devices.

Yet people are acting like AI is just going to up and decide to kill all humans like this — and it gets really frustrating. They don’t understand at all!
