It’s not like the movies: understanding the future of artificial intelligence

The future of artificial intelligence is more practical than science fiction

By Kennedy Lamb

“Smilebot, make me smile.”

The machine processes the command and decides that the most efficient way to make you smile is to tell a joke. You smile at first, but soon enough you are no longer amused. Smilebot then decides to tickle you incessantly to elicit a smile but, for obvious reasons, the stimulation becomes annoying and you stop smiling. Let’s say this process continues in the same fashion for an extensive length of time with the same result: in every scenario you eventually stop smiling. Smilebot only has one command: to make you smile. He will stop at nothing to accomplish his goal. Eventually, the only way Smilebot can make you smile is to surgically alter your mouth into an everlasting, Joker-like grin.

This scenario resembles the popular notion that artificial intelligence (AI) will always progress to chaotic, maleficent, or even world-dominating behavior. Try as it might, AI can’t seem to shake the bad rap created by science fiction writers and Hollywood producers. In reality, the scientists working on AI hope to solve societal woes, not create them.

In a small lab hidden deep within the third floor of the Information Technology building at the University of Maryland, Baltimore County, a group of six students from the Multi-Agent Planning and Learning (MAPLE) lab huddles around a table for its weekly meeting. Doctoral student John Winder sits at the head of the table. He joined the MAPLE lab as an undergraduate in 2009 and has been there ever since.

Winder explains that the MAPLE lab studies one type of AI: agent-based planning, or the programming of algorithms to perform certain tasks. More specifically, the lab is interested in programming one robot, or “agent,” to perform multiple tasks (referred to as general AI) as opposed to creating many different agents that each perform a single task (narrow AI).

“It’s expensive to train AI, so we prefer to study general AI because you don’t have to train an agent from scratch every time you want to complete a new task,” he said.  

The current paradigm for such research is reinforcement learning, in which a robot is trained to complete tasks through positive and negative rewards. A human demonstrates a set of actions to the robot, which receives a positive reward for correctly replicating them and a punishment when it fails. This approach can be seen in action with MAPLE’s collaborators at Brown University, where roboticists are training a robot named Baxter to follow a recipe and prepare a meal. They move Baxter’s arms through the sequence of steps in the hope that the robot will be able to replicate the actions without human aid the next time.
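For readers curious what reward-driven learning looks like in practice, here is a toy sketch of the idea, not the MAPLE lab’s or Brown’s actual code. It uses Q-learning, a standard reinforcement learning algorithm, on a made-up five-cell corridor: the agent earns a positive reward for reaching the goal and a small punishment for every step that fails to get there, and over many episodes it learns which moves pay off.

```python
# Illustrative sketch only -- a toy Q-learning agent on a 5-cell corridor.
import random

N_STATES = 5          # cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: expected future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise pick the action with the highest Q-value
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        # Positive reward at the goal, a small punishment everywhere else
        reward = 1.0 if next_state == N_STATES - 1 else -0.01

        # Standard Q-learning update toward the observed reward
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps right toward the goal from every cell
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```

The same reward-and-punishment logic scales up, in far more elaborate form, to a robot like Baxter learning the steps of a recipe.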

Winder says that general AI, like Baxter, will weave itself into the world virtually unnoticed at first, through mediums such as customer service.

“Anything that can be automated will be automated,” he said. “Customer service is a great example of an agent-based mechanism because it involves interactions with humans and machines over an online interface. The AI at a help desk, for example, will mimic humans fairly well.”

Virtual assistants such as Siri, Cortana, and Alexa will become more efficient as well. They will likely become intelligent planners and begin to read through users’ emails to automatically schedule appointments and meetings in open times. Systems from Google’s DeepMind can already read (and carry out other feats, such as beating human champions at the ancient game of Go), and in the near future they will be able to proactively schedule their users’ time.

A second type of AI, machine learning, is already being used to solve problems. Tim Oates, a professor at the University of Maryland, Baltimore County, recently submitted a proposal to the Defense Advanced Research Projects Agency (DARPA) to fund a new AI program that would help predict food shortages by monitoring crop yields on a yearly basis.

If the project is funded, DARPA will construct computer programs able to gather the vast amount of data needed to predict annual crop yields. The AI will process hundreds of thousands of texts, pictures from social media, and previous years’ statistics, feeding them into an algorithm that predicts whether crop yields will be high or low.
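To make the idea concrete, here is a hypothetical sketch of the final prediction step, not the proposed DARPA system itself. The feature names (rainfall, last year’s yield, counts of shortage-related social media posts) and the numbers are invented for illustration; the point is simply that many scattered signals get combined into a single high-or-low yield prediction.

```python
# Hypothetical sketch: a minimal high/low crop-yield classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [rainfall_mm, prior_yield_tons, shortage_posts]
X_train = np.array([
    [620, 3.1,  12],
    [210, 1.4, 340],
    [550, 2.8,  40],
    [180, 1.1, 510],
    [700, 3.5,   8],
    [250, 1.6, 280],
])
# Labels: 1 = high yield, 0 = low yield
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# Predict for a new region-year with moderate rain and many shortage-related posts
new_region = np.array([[300, 1.8, 200]])
print("high yield" if model.predict(new_region)[0] == 1 else "low yield")
```

In the real project, the hard part is the step this sketch skips: automatically collecting and cleaning those scattered signals in the first place.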

“There aren’t enough people in the world that understand the domain well enough to gather all of this information,” Oates said. “The information is scattered all over the place, so AI is the most practical way to collect the data.”

Studies have shown that drought and subsequent food shortages coincide with social and political unrest in developing nations. If AI can predict when crop yields will be low, proper government aid can be directed to the countries with hungry and dissatisfied citizens.

“It’s a bad cycle, but AI may be able to help,” Oates said.

Despite the potential of AI to help humans build a better world, the researchers working on these projects admit that the new technology has the potential to create new problems, or replicate old ones.

The MAPLE lab also researches the idea of predictive policing. Algorithms are trained on data about previous crimes, police behavior, and neighborhood socioeconomic statistics, and their output would be used by police to decide how to patrol a given area. However, any time a machine is trained, it can absorb the biases in its data just as humans do. Features that are predictive of behavior could encourage and reinforce stereotypes. Handing this much power to a machine may further increase tensions between communities and their respective police departments.
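A small, hypothetical illustration of that feedback loop, not any real predictive-policing system: if recorded crime counts partly reflect how heavily a neighborhood was patrolled, a model fit to those records will rank the same neighborhoods as highest risk, which in turn justifies patrolling them even more. The neighborhoods, patrol hours, and incident counts below are invented.

```python
# Hypothetical illustration of bias reinforcement in predictive policing.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy history: each row is [patrol_hours, median_income_thousands] for one neighborhood
X = np.array([[80, 30], [75, 35], [20, 70], [15, 80]])
# Recorded incidents are higher where patrols were heavier, because more patrols
# mean more incidents get observed and logged in the first place
recorded_incidents = np.array([40, 36, 6, 4])

model = LinearRegression().fit(X, recorded_incidents)

# The model now ranks the already heavily patrolled, lower-income neighborhoods
# as the "highest risk", which would direct even more patrols their way
predicted = model.predict(X)
print(np.argsort(predicted)[::-1])  # neighborhoods ordered from most to least "risky"
```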

“AI isn’t open to inspection or interpretation to humans, and it won’t regard humans the same way we regard ourselves,” Winder says.

A debate rages in the artificial intelligence field on the topic of strong versus weak AI. Those who believe in strong AI hope that one day, artificial intelligence will perfectly replicate human cognitive functions. Weak AI believers think that AI will make leaps and bounds, but there are some areas of society that AI should never be allowed to touch.

Khalil Anderson, an undergraduate at the MAPLE lab, supports strong AI but thinks that AI should not be used in warfare.

“AI will want to take the most efficient route to end the war and that may be by killing all of the civilians. One of the most effective ways to end a war is to kill the people that support it,” he explained. “AI will look for the most optimal solution, and that won’t always be the most humane solution.”

Still, Winder is optimistic and hopes that AI will continue to be used productively.

“In 20 years I hope that AI is used to automate any process in our society that is inefficient – that’s what AI is all about,” he said. “I would like to see all societal issues that arise from inefficient processes resolved using machines.”
