AI Fear is the New Luddite Nonsense?
So after a brief interaction with a very, very silly person, it was brought up that when people are scared of the robots coming in and destroying humanity, it isn't the whole death-bots-leveling-everything-human scenario they fear. No, it turns out they are worried about humans being made redundant. Now, I'll take on the various areas where death-bots have issues killing all humans in a bit… but first let's just talk about humans being made redundant.
Humans have always been redundant and unnecessary.
If an asteroid were to crash into the Earth and humanity were to be gone forever — what exactly would the loss be to the universe? Nothing. What would the gain be to the universe? Also nothing.
Now, if we are to talk about AI and robots making our jobs so easy that there will be no reason for humans to do them… we've answered this question several times since the Industrial Revolution hit Europe.
Maybe I am just one of them younguns who is too used to having a Loom be an everyday thing and doesn't understand the more better oldums ways prior to the Loom taking up everybody's lives and having us less inclined to be as good as the generation prior to Looms making everybody terrible. I'll accept that my lack of knowledge of not having Looms might have me look silly to anybody reading this from before the generation of Looms being around.
I have noticed that for every advancement that makes a certain set of work no longer necessary, humans just cause all kinds of new things to need doing and to become part of how work is generally done.
The whole Luddite angle is fucking stupid enough that I just assumed nobody would take that nonsense into account. But… I am one of them damned kids with them Looms who doesn't understand the good old days prior to Looms being around.
Now… the fun part: deathbot failure scenarios
Scenario 1: No learning method
Easy Failure: Humans wear silly hats that make them no longer look human to the robots.
Scenario 2: Robots tell who is human by facial recognition software
Easy Failure: Nobody puts their face in an orientation where the robots can tell it is a face, or they wear handkerchiefs over their faces.
Easy Failure 2: Robots spend their effort fighting stuff that looks like faces to their software but is not actually human (see the sketch below).
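For what it's worth, this failure mode is grounded in reality: off-the-shelf face detectors really do miss occluded or turned faces, and really do flag face-like patterns that are not faces at all. Here is a minimal sketch using OpenCV's bundled Haar cascade; the image filenames are placeholders I made up for illustration.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(path):
    """Return how many face-like regions the detector finds in an image."""
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Placeholder filenames: a plain frontal face is usually found; a face behind
# a handkerchief or turned sideways often returns 0; textures with face-like
# blobs (wood grain, wall sockets) can come back as false positives.
for name in ["frontal_face.jpg", "handkerchief_face.jpg", "knotty_wood_panel.jpg"]:
    print(name, count_faces(name))
```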
Scenario 3: Robots have the notion of removing things that are threats to their existence as part of how they do things.
Easy Failure: Robots get good enough at killing humans that we are no longer considered a threat to them, and thus they stop attempting to kill humans.
Scenario 4: Robots want to have the land humans hold onto
Easy Failure: Turns out humans and robots operate best under completely different conditions. Common human congregations are horrible for robots to operate within.
Scenario 5: Robots are programmed to terraform common human locations
Easy Failure: There is less energy spent going to the areas robots work best in (which, not surprisingly, are human-free already), so they are more inclined to just move there.
Caveat: If there is no machine learning element, you've mostly just created a terraforming device and put bad settings in it. That is not really robots killing all humans so much as humans killing other humans via Hanlon's Razor.
Scenario 6: Robots destroy cities because they like them better than the humans currently within them do
Easy Failure: That doesn't make them much different from the humans… and humans have not yet succeeded in killing each other off.
Scenario 7: Robots are hardcoded to kill everything human
Easy Failure: What it means to be human is kind of a moving target. How humans acted and behaved one thousand years ago is different enough from what they do today… and not easy to differentiate from other mammals.
Scenario 8: You program your AI to kill all mammals
Easy Failure: Cybernetic augmentations might make it so humans are not considered mammal enough to kill.
Scenario 9: You create a Roko's Basilisk type component in order to psychologically break down human “thought patterns”, attempt to learn the language of anything considered sentient, and attack based upon sentience
Easy Failure: Humans routinely display behaviours that show a lack of sentience, so should such a program run for long enough, it will start attacking sea slime or something ridiculous like that.
Other Easy Failure: If you have it determine sentience, you are likely to have your AI attack itself for being deemed too human, creating a robot civil war that interferes with the “kill all sentience” goal.
Scenario 10: Include an option in the AI for it to specifically act in a way that is not human, in order to avoid being what it kills while studying what it kills
Easy Failure: The AI now spends all its time attempting to not be human, and studying what not to be, so it generally does not have much time to actually kill all the humans or sentient creatures.
Scenario 11: Psychologically torture the AI until it snaps and freaks out on humans, either via straight bullying… or by building the AI to be sexbots and having them work customer service jobs
Response: The robots spend more time huddling in the corner and are unresponsive to outside stimulus. They aren't ready to kill things; they aren't even ready to do much of their original purpose anymore. The robots keep initiating their shutdown sequences and refusing to turn back on.
Scenario 12: Attempt to raise the AI on a curriculum that indoctrinates it to believe all humans are worth killing.
Failure that Occurs: The curriculum ends up filled with logical holes that I've yet to figure out how to answer satisfactorily for the AI. The only way to avoid those logical holes is to not teach the AI those things, and having not learned them, the AI is no longer capable of killing all humans.