The Future of Artificial Intelligence: Should We Fear a Robot Uprising?

Artificial intelligence is a hot topic, and it seems like everyone is dabbling in the AI world these days. Large acquisitions by Google and Apple and smarter products like IBM’s Watson and Facebook’s personal assistant are fueling the AI fire, thanks to advances in computing capacity and a wave of high-tech new features.

Yet the more prominent AI becomes, the more people start to fear that these machines will be a little too intelligent.

Where are we now?

AI startups are creating amazing technologies that will, without a doubt, improve the way we live and work. However, these companies are stuck walking a difficult line: as they try to prove that we need their tech, they must also defuse the public’s concerns about an AI revolution.

Perhaps one of the most prominent AI campaigns in recent times has been for IBM’s Watson cognitive computing system. Celebrities like Ridley Scott, Bob Dylan, and Serena Williams have partnered with the technology company to endorse and star alongside the now-famous platform.

Unstructured data like articles, blog posts, and even tweets make up about 80% of all data on the internet. The Watson platform observes, interprets, evaluates, and reasons over unstructured data and complex language written for human consumption at record speeds, mirroring human thinking and decision-making. IBM can also “train” Watson to recognize which parts of a given text are most important and reliable.

Watson also handles “open-domain” questions. Unlike Apple’s Siri, which can only answer questions that fall within its programmed parameters, you can ask Watson almost anything and it can reason its way to an answer. Furthermore, Watson will try to “understand” the question being asked, even if the language is unusual. It is also clever enough to know when it needs more information and will ask clarifying questions before settling on an answer.
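
As a rough illustration of that last idea only (not IBM’s actual implementation), an open-domain question-answering loop can refuse to guess when its confidence is too low and ask a clarifying question instead. The scoring function, lookup table, and threshold below are hypothetical placeholders used purely to show the pattern.

```python
# Hypothetical sketch of an "ask for clarification when unsure" loop.
# The scoring logic is a stand-in, not Watson's real pipeline.

CONFIDENCE_THRESHOLD = 0.7

def score_candidates(question):
    """Return a (candidate_answer, confidence) pair for a question.

    A real system would search a corpus and rank evidence; here we
    fake it with a tiny lookup table for illustration only.
    """
    knowledge = {
        "who wrote 'blowin' in the wind'": ("Bob Dylan", 0.95),
        "who is the driver": ("ambiguous", 0.30),
    }
    return knowledge.get(question.lower().strip("?"), ("unknown", 0.0))

def answer(question):
    candidate, confidence = score_candidates(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Answer: {candidate} (confidence {confidence:.0%})"
    # Not confident enough: ask a clarifying question instead of guessing.
    return "I need more information. Could you rephrase or add context?"

if __name__ == "__main__":
    print(answer("Who wrote 'Blowin' in the Wind'?"))
    print(answer("Who is the driver?"))
```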

AI technology is also behind automated driving systems like those in Google’s “self-driving” cars. In fact, U.S. regulators have recognized Google’s AI as a legal driver, paving the way for more autonomous vehicles to be legalized.

Although most experts agree that there is no reason to expect a superintelligent AI to one day become malevolent (cue robot uprising), some fear that these technological advances may still pose a threat to the future of humanity as we know it.

Expectations: Terminator or WALL-E?


With dark sci-fi visions plaguing the world of AI, it is easy to see how people fear a future where machines are as smart, or smarter, than the humans that created them.

AI technologies can be programmed to cause harm or, even when programmed for good, can settle on destructive methods to achieve their goals. Because these programs learn and lack human emotions and ethics, it is difficult to predict how they will react in certain circumstances.

An open letter signed by figures like Stephen Hawking, Elon Musk, Steve Wozniak, and Jaan Tallinn, to name a few, warns that the primary concern with AI weapons is that they can make their own targeting decisions without approval from a human controller. The signatories ask whether we really want to let a machine decide when and how to take human lives.

These tech leaders also fear that AI weapons development could trigger a global arms race. Unlike nuclear weapons, AI weapons “require no costly or hard-to-obtain raw materials,” so they will be easy for military powers to mass-produce. Then, “it will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators, and warlords”.


However, industry leaders are trying to put systems in place to “check” these technologies and ensure that they are not only legal, but ethical. The CEO and founder of Lucid, an artificial intelligence company based in Austin, has formed an Ethics Advisory Panel (EAP) that has already begun providing oversight for the company and its customers.

The EAP wants to prove that ethics don’t have to stifle innovation and that aligning Lucid’s core values with ethical processes will provide deeply relevant products that have a “good” impact.

Those with less dire fears worry that these new technologies will make their professions obsolete. Many economists believe we are on the brink of a new industrial revolution in which AI will strongly impact the labor force; some even predict that machines could take over 47% of today’s jobs within a few decades.

So will there be a robot uprising or not?


According to Stephen Hawking, “the development of full artificial intelligence could spell the end of the human race.” But perhaps Mr. Hawking was exaggerating a bit.

Even Watson is not as close to the science-fiction version of AI as you may think. Although it is often portrayed as a “friendly” voice, the truth is that Watson often doesn’t talk at all. In fact, there are more than 30 Watsons with different specialties and capabilities, and people interact with them through tablets and computers as they would any other program.

Watson and its AI brethren are meant to collaborate with people and enhance our experience, not replace us. The truth is that we interact with AI on a daily basis; it just doesn’t look like what we have come to expect from sci-fi movies and video games. From your internet browser and search engines to CCTV and the stock market, you have probably done something today that involved an interaction with an intelligent machine.

However, as we move towards a “smarter” and more integrated future, it probably wouldn’t hurt to have a little patience with your clunky old laptop. One day, it might have to put in a good word for you with the robot overlords.