You know, mostly I hear good things about Artificial Intelligence (AI). But why shouldn’t we? The people with the power to advocate on AI's behalf worldwide are usually the same people creating AI, or people being paid by them to market certain ideas.
The apparent conflict of interest, and any honest consideration of AI's possible harmful effects, don't seem to matter as far as its public relations are concerned.
Here’s what one AI crony said:
“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”
Max Tegmark, President of the Future of Life Institute
Well, I’m happy to know that everything we love about civilization is a product of intelligence, including the nuclear bomb, chemical warfare, and propaganda. We are certainly going to be safer with AI intelligence, just like Max Tegmark stated above.
Oh, wait, he didn’t say we would be safer, did he? He also slipped in a little disclaimer: “as long as we manage to keep the technology beneficial.”
Now, I’m not an AI hater (yes, those exist), but I would like our AI advocates to seriously consider the harmful effects AI could bring, rather than feeding us a list of benefits and watered-down disaster scenarios.
Let’s look at some possible harmful effects of AI, according to futureoflife.org:
“In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.”
I hope you caught that part about leaving the human intellect far behind. Lol
Most of us realize that the only reason we rule this planet is our intellect. Trust that AI, if and when it becomes smarter than us, will happily rule us.
But you say, “We can program the robots, Preston. No worries.”
I’m sorry, but they will be so intelligent that they will be able to reprogram themselves. Seriously, some scientists expect AI to develop consciousness and self-awareness.
I know, I know, you think I’m trying to scare the shit out of you. I’m not. Or you may think, dude, there’s no way in hell a robot will have consciousness. Or you may think, I’m old, I’ll be dead by then, who cares? Well, your kids or nephews probably will.
Assuming there’s no World War 3 and our planet isn’t destroyed, AI with self-awareness is inevitable. I’d say that’s very harmful, considering they would be smarter than us. Look at what we good beings have done, and still do, to beings less intelligent than us: we put them in cages, experiment on them, make them slaves. But remember… we are the good ones.
Other harmful effects:
How about the fact that AI likely won’t experience emotions? That’s as bad as it is good. Psychopaths do not feel either.
My apprehension about AI isn’t that of an old person who dislikes Facebook, or veers away from Twitter because he didn’t grow up with tech.
While Facebook and Twitter alter our systems of communication and our culture, they are not conscious beings. AI could become one: a nonhuman consciousness capable of destroying our civilization, on purpose or by accidental program malfunction.
Now, I’m not being a demagogue here; I only want the people and corporations who push AI to make a buck to also consider the realistic harmful effects it will have on our world.
Futureoflife.org asks us to consider these two scenarios:
1) “The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.”
2) “The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.”
Whether AI’s harmful effects come from malevolence or from super-competence is really of no consequence; either approach will destroy or harm humanity.
In this same article from futureoflife.org, the authors say something that deeply bothers me, and it is mostly why I wrote this article in the first place. Here’s what they said:
"Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And as many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots."
I understand what the author said about the news creating fear, but the author also disregards the smartest scientist of our time. Then the author downplays the dangers of AI because most people interpret AI’s dangers as consisting of consciousness, evil, and robots.
Now, however AI might harm humanity, that harm should not be downplayed; it should be given serious consideration, apart from all the corporate monetary motivations. If it’s not being considered, it’s because the author and others have small minds or another agenda.
They have small minds because they can’t see that far into the future; they only see the cookie and milk in front of them now, but know nothing about the cows, and so forth.
Certainly, this is such a large and hefty subject that one blog post doesn’t do it much justice. And of course, full disclosure: I love my technology and I know AI can help humanity, but I honestly fear it may do more harm than good.
Yet a lot of people feel this way about new devices. But remember, these are not your regular devices; these AI guys may speak without tongues.
I’d love to hear your comments.
Here's the article I quoted from: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
Check out my fiction & subscribe up top.
Hope you enjoyed today’s topic.