Thursday, January 15, 2015

Open Letter For More Research On AI, Signed By Elon Musk And Stephen Hawking

Artificial intelligence might be humanity's next great achievement, but according to many tech enthusiasts and scientists, it may also bring humanity's doom if efforts are not controlled, restricted and researched.
This, according to The Future of Life Institute, is what researchers need to figure out: the good and bad of AI, and how companies and organizations can control the new systems while reaping the benefits of AI.
Currently, AI is most visible in virtual assistant services like Siri, Google Now and Cortana, alongside smart home appliances that learn from user habits, but in the near future AI might be used for much more.
Google recently acquired DeepMind, a startup working on an artificial brain, for $500 million, alongside working on its own AI programs. Facebook is also investing heavily in AI for its social network. Nuance is another company working on replicating the human brain with an artificial design.
Even though these tech companies' goals are to recreate the human brain and allow computers to think for themselves, the implementation of such a system could bring global change to economies around the world.
Having an AI control a household or a manufacturing line may remove the need for any human workers at all, if the main intelligence unit communicates with electronics and robots in stores and factories.
All of this seems fine, but the deployment of AI on a battlefield, or its use by a rogue nation, could bring disastrous consequences. Several technology companies are already researching standards and safeguards to make sure AI can be shut down and cannot be used for destructive ends.
Tesla Motors CEO Elon Musk said, “If I had to guess at what our biggest existential threat is, it’s probably [AI]. So we need to be very careful.” Stephen Hawking also called for regulation of AI, alongside members of DeepMind and Google.
