
Artificial Intelligence, Why are the World’s Smartest People concerned?

Artificial intelligence is cutting its way into tech companies of all kinds. It seems to be everywhere, from the chatbots integrated into Facebook Messenger to Google’s AlphaGo beating the world’s top Go player, Lee Sedol, despite the vast complexity of that ancient strategy game. Add to that the enormous investments tech giants and their rivals are pouring into the field, and it becomes obvious that artificial intelligence is the next big thing. Why, then, are some of the most brilliant minds out there expressing concerns about AI?



Why Are Popular Figures Concerned About Artificial Intelligence?

Bill Gates

The co-founder of Microsoft and a pioneer of personal-computer software, whose net worth is around 84 billion dollars (according to Forbes), has said he cannot understand people who are not troubled by the prospect of a superintelligence growing too strong to control. At the same time, he believes AI could have a hugely positive effect if it is managed well.

Elon Musk

The co-founder of PayPal and CEO of Tesla and SpaceX argues that technological capabilities are growing exponentially, which suggests the era of superintelligence is not far away. Musk is concerned that, left unaddressed, superintelligence could get out of control; he has famously described it as “summoning the demon” and “our biggest existential threat.” He also backed OpenAI, a one-billion-dollar effort focused on AI safety and on mitigating the risks of superintelligence.



Musk also co-founded Neuralink, a company dedicated to building brain-computer interfaces, arguing that the only way to withstand the existential threat of AI is to acquire similar capabilities ourselves. He points out that we are already cyborgs of a sort: with a cell phone in hand, each of us commands more capabilities than the President of the United States did decades ago. To illustrate the threat, he offered an example: suppose a superintelligent machine decided that spam was a problem to be fixed. To get rid of spam, it would have to eliminate its source, and the source of spam is people; therefore it might get rid of people. The analogy may sound funny, but the logic behind it is sound enough to take seriously.

Stephen Hawking

The famous theoretical physicist and author, known for his contributions to black-hole physics, described artificial intelligence as potentially “the biggest event in human history.” He noted that it would have both positive and negative consequences: while it might help end war and cure disease, it could also outsmart financial markets, out-invent human researchers, manipulate world leaders, and develop weapons humankind cannot even comprehend. He warned that it might turn out to be the last event in our history.

Nick Bostrom



The Oxford scholar and director of the Future of Humanity Institute holds similar views. Bostrom wrote a book on the subject, Superintelligence: Paths, Dangers, Strategies, which Elon Musk recommends as a must-read. In it, Bostrom argues that AI could quickly turn dark and dispose of humans, much the same view Musk holds. He adds that the world could end up full of technological marvels with no humans left to enjoy them, in his words, like “a Disneyland without children.” A terrifying analogy indeed.

Sam Altman

The president of the startup incubator Y Combinator and co-chair of OpenAI says he is working toward safe AI through the non-profit venture OpenAI. He expects such systems to become more intelligent than human beings in the coming decades, and believes the risk of an uncontrollable, malevolent AI is reduced by open-sourcing the technology, making it available to anyone rather than locking it behind closed doors.

Ray Kurzweil

The computer scientist, futurist, author, and director of engineering at Google acknowledges the dangers associated with artificial intelligence. Kurzweil’s work focuses on life-extension technologies and how AI can help us live longer; he stresses its importance in finding cures and improving our environment. He says we have “a moral imperative to realize this promise while controlling the peril,” acknowledging AI safety concerns while insisting it is essential to move forward with the technology and keep its dangers under control.


  1. Grant fry says:

    Until humans get a grip on their own egos and stop manipulating and killing each other, we need to work on our own intelligence.

  2. Posyaque says:

    Easy way out… the best thing or the worst… smart as they are, they can’t help giving an ambiguous answer.
