That’s Why Elon Musk Is Scared of Artificial Intelligence

The idea that the quest for strong AI (artificial intelligence) would ultimately succeed was long dismissed as science fiction, its success centuries away or more. And maybe that is why the risks have been overlooked.

At some point, the public may have stopped paying attention because it all sounds like science fiction, a cheap movie trope rather than a real prospect. Could that complacency catch us off guard?

Thanks to recent breakthroughs, many AI milestones have now been reached, leading many scientists to take seriously the prospect of superintelligence in our lifetimes. While some experts still maintain that human-level AI is centuries away, many believe it will arrive within our lifetime, and Elon Musk is one of them.

The billionaire cautions that the existential threat posed by artificial intelligence is much closer than previously predicted, and he has warned about it consistently in recent years.

Despite this, he still feels the issue is not properly understood. He has also laid out several scenarios we should try to avoid as AI rises, if we can avoid them at all.

Before we delve into the solutions Musk proposes, let’s talk about why he considers AI dangerous.

Why AI Is Dangerous

We are in the early stages of what may be one of the greatest technological revolutions ever; some call it our last invention. If we can solve intelligence, we can apply it to many other problems, and, in the end, AI will make better decisions than humans do.

While science fiction often portrays AI as a robot with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson. But what’s wrong with taking technology that has all kinds of beneficial uses and making it better?

Consider the reverse. Go back to September 11, 1933: Ernest Rutherford, the famous nuclear physicist, declared that “anyone who looks for a source of power in the transformation of the atoms is talking moonshine!”

The next morning, Leo Szilard, a much younger physicist, read the remark and was so annoyed that within months he had worked out the concept of a nuclear chain reaction. The point is that nuclear power was dismissed as impossible, and so the risk was ignored.

Perhaps it was this lack of attention that later led to disasters like Chernobyl. Similarly, ignoring the risks of AI outright will only harm its technological progress. If we build AI smarter than ourselves, we have to be open to the possibility that we may actually lose control over it.

As I.J. Good pointed out, designing intelligent AI systems is itself a cognitive task. Such a system could therefore undergo recursive self-improvement, triggering an intelligence explosion that leaves human intelligence far behind.
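
As a toy illustration of why that feedback loop matters, here is a short Python sketch (all quantities are invented for illustration): a system improved by a fixed outside effort grows linearly, while one whose gains scale with its own current capability grows explosively.

```python
# Toy sketch (hypothetical numbers): fixed external improvement vs.
# recursive self-improvement, where each gain feeds into the next step.

STEPS = 10

linear = 1.0     # capability raised by a constant amount per step
recursive = 1.0  # capability raised in proportion to itself

for step in range(1, STEPS + 1):
    linear += 0.5     # fixed gain: +0.5 capability per step
    recursive *= 1.5  # proportional gain: +50% of current capability
    print(f"step {step:2d}: linear={linear:5.1f}  recursive={recursive:8.1f}")

# After 10 steps, linear is 6.0 while recursive is about 57.7,
# and the gap keeps widening without bound.
```

The recursion is not a model of real AI progress; it only shows why proportional self-improvement outpaces any fixed rate of outside improvement.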

In 2016, Elon Musk said that humans risk being treated like house pets by artificial intelligence unless technology that can connect brains to computers is developed.

Shortly after that comment, he announced Neuralink, a new brain-computer interface startup attempting to implant a chip in the human brain. According to Musk, Neuralink will allow humans to compete with AI, as well as cure brain diseases, regulate mood, and even let people “listen to music straight from our chips.” We will come back to this later.

Most researchers agree that a superintelligent AI is unlikely to display human emotions such as love or hate, and that there is no reason to expect an AI to become intentionally benevolent or malevolent.

Instead, when considering how AI might become a risk, experts think two scenarios are most likely. The first is programming the AI to do something destructive: autonomous weapons, for example. In the wrong hands, these weapons could easily cause mass casualties.

The most serious problem is that, to avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation.

The second is programming the AI to do something beneficial, only for it to develop a destructive method of achieving its goal. This can happen whenever we fail to align the AI’s goals perfectly with our own, which is surprisingly difficult.

If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit: not what you wanted, but literally what you asked for.
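
To make the point concrete, here is a minimal Python sketch of a misspecified objective (the routes, costs, and function names are all hypothetical): an optimizer told only “as fast as possible” picks a route no passenger would want.

```python
# Toy sketch of objective misspecification with a hypothetical route planner.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float          # travel time
    comfort_penalty: float  # implicit cost the designer forgot to encode

ROUTES = [
    Route("highway, legal speed", minutes=35.0, comfort_penalty=0.0),
    Route("reckless shortcut", minutes=12.0, comfort_penalty=50.0),
]

def misspecified_objective(route: Route) -> float:
    # "As fast as possible", taken literally: only travel time counts.
    return -route.minutes

def intended_objective(route: Route) -> float:
    # What we actually wanted: fast, but not at any cost.
    return -(route.minutes + route.comfort_penalty)

print("Literal optimizer picks: ", max(ROUTES, key=misspecified_objective).name)
print("Intended optimizer picks:", max(ROUTES, key=intended_objective).name)
# -> "reckless shortcut" vs. "highway, legal speed"
```

The problem is not that the optimizer malfunctions; it optimizes exactly what it was given, and the unstated preferences were never part of the objective.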

Many widely respected scientists such as Stephen Hawking, Steve Wozniak, and Bill Gates have already expressed concern that a superintelligent AI could escape our control and turn against us.

In 2016, Microsoft released an AI chatbot called Tay on Twitter. Tay was programmed to tweet and to learn from what other Twitter users sent it. The more you chatted with it, the smarter it was supposed to become, learning to engage people through “informal and playful conversation.”

Pretty soon, people began tweeting all kinds of inflammatory comments at the bot, and Tay started repeating those sentiments back to users. Microsoft had to pull Tay off the Internet just 16 hours after its launch.

The Singularity and Self-Designed Evolution

Elon Musk has warned that humans run the risk of being overtaken by artificial intelligence by 2025.

This prediction marks a significant revision of earlier estimates of the so-called technological singularity, the point at which machine intelligence surpasses human intelligence and begins accelerating at an incomprehensible rate.

The singularity is a hypothesized future epoch in which our intelligence becomes increasingly non-biological and trillions of times more powerful than it is today, giving rise to a new civilization that lets us transcend our biological limits and amplify our creativity.

Ray Kurzweil has predicted that by the year 2045 we will experience the technological singularity: an upheaval capable of overturning the institutions and pillars of society.

Kurzweil believes we will reach the singularity by creating superhuman AI. Whether or not we believe the singularity will happen, the idea raises serious concerns, security risks, and uncertainties for the future of humanity.

It forces us to initiate conversations, with ourselves and with others, about what we as a species want.

On May 11, 1997, the computer program Deep Blue defeated chess grandmaster Garry Kasparov.

But after his defeat, Kasparov helped create a new kind of chess competition, one in which human and computer players collaborate.

In this kind of collaboration, the computer rapidly calculates the possible moves while the human chooses among them. Together the two form a centaur, a mythical creature combining the best qualities of two different species. That is essentially what Elon Musk is trying to accomplish with his neurotechnology startup Neuralink.

The company aims to implant a wireless brain-computer interface, a brain chip that connects the human brain directly to a computer.

For Musk, brain-computer interfaces are the only way mankind can keep up with AI’s relentless advance: electrodes threaded among living neurons would give electronics a direct window into the brain’s activity.

Does this mean we are turning into cyborgs?

A lot of the concern about AI actually starts with the people developing the technology. Musk recently said his “biggest concern” is DeepMind, the secretive London AI lab owned by Google.

It is not only Elon Musk; many scientists are concerned about how AI development is currently practiced. The Future of Life Institute believes that research done today can help us prepare for and prevent such potentially negative consequences, so that we enjoy the benefits of AI while avoiding its pitfalls.

The institute published an open letter that has been signed by more than 8,000 people from the scientific community, including many prominent figures in the field.

The letter conveyed a clear sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

————————————–

Information Source: YouTube – Tech Flake
