

AI: It May Not Take Over the World, But Definitely Our Day-To-Day Lives

02.21.17

Guest blog by: Alexa Perlov

Larry Perlov’s arms rest limp on the leather driver’s seat. Making no effort to reach the accelerator or brake pedals, his legs form a ninety-degree angle with the floor. And yet, there we are: whizzing down I-95 in a Tesla, pushing seventy miles per hour. As the car in front of us slows down, so do we. And as the road bends, we bend with it while staying equidistant from both white lines.

Larry explains to me how it all works. The Tesla is equipped with eight cameras that offer 360-degree visibility out to 250 meters, twelve ultrasonic sensors for detecting hard or soft objects, and a forward-facing radar capable of seeing through pouring rain, fog, and dust to collect data about the surroundings. An onboard computer takes in all of this data and passes it through algorithms that tell the car how fast to drive and how to remain in the correct lane on curvy roads -- basically, how to emulate a flawless human driver.

“And the technology gets even cooler. Next time we drive down this exact spot, the car will swing out the same amount as what I just did. The car changes its behavior based on stimuli and it learns from experiences like this.”

If we drive around enough potholes, for example, eventually the Tesla will be able to identify any given pothole and avoid it, regardless of whether the car has ever encountered that particular spot. This ability to learn without being explicitly told, “Hey, there’s a pothole, drive around it,” is the essence of machine learning, a subcategory of artificial intelligence (AI).
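To make the distinction concrete, here’s a toy sketch in Python of the difference between hand-coding a rule and letting a model learn one from labeled examples. The feature names and numbers are invented purely for illustration -- this is a sketch of the idea, not of Tesla’s actual software.

    # Hand-coded approach: a human writes the rule explicitly.
    def is_pothole_rule(depth_cm, width_cm):
        return depth_cm > 3 and width_cm > 20

    # Machine learning approach: the car collects labeled examples as it
    # drives (sensor measurements plus a label) and a model infers the
    # rule from the data instead of being told it.
    from sklearn.tree import DecisionTreeClassifier

    road_patches = [
        [5.0, 40.0],   # deep, wide depression -> pothole
        [4.2, 25.0],   # moderately deep, narrow -> pothole
        [0.5, 60.0],   # shallow, wide puddle -> not a pothole
        [1.0, 10.0],   # shallow crack -> not a pothole
    ]
    labels = [1, 1, 0, 0]  # 1 = pothole, 0 = not a pothole

    model = DecisionTreeClassifier().fit(road_patches, labels)

    # A patch of road the car has never seen before:
    print(model.predict([[4.5, 30.0]]))  # -> [1], flagged as a pothole

The real system works on camera, radar, and ultrasonic data with far richer features, but the principle is the same: the model generalizes from past experience to situations it was never explicitly told about.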

So no, contrary to the classic science fiction movies, AI is not geared towards manufacturing killer robots. Instead, at least in an automated car’s case, AI could potentially save an estimated 30,000 lives per year. Within America alone, thousands of accidents occur on a daily basis, 94% of which result from human error. Although today’s self-driving cars aren’t quite advanced enough to navigate roundabouts or four-way stops, by 2020 full-fledged automated cars are expected to dominate the roads.

As we roll to a stop beside a parked car, a pop-up on the control screen (where the radio and AC controls usually sit in a car) asks if we want to parallel park. Larry taps “Yes,” and the steering wheel begins spinning on its own as the Tesla maneuvers its way into a tight spot wedged between two cars. I wonder: as the AI behind self-driving cars continues to improve, why would we humans even need to learn how to drive in the first place? Regardless of how plausible those prevalent sci-fi plots are, one thing’s for sure: AI is rapidly advancing and having major implications for how we live our lives.

According to the Law of Accelerating Returns, we will experience 20,000 years’ worth of AI improvement within the tight span of just the 21st century. So, by 2045, intelligent machines are predicted to be smarter than humans, a benchmark known as “superintelligence.” For many people, the idea of superintelligence symbolizes the death of humanity, a belief that science fiction movies have spread. Well-known films like The Matrix and The Terminator instill in us the image of armies of bulky robots with piercing red eyes marching over the rubble of our society. Across AI-themed movies, the consequences of overly naive humans building ultra-smart robots lead to one particular fate: a robot takeover and the extinction of humankind.

Although Hollywood film writers have concocted this seductively horrific plot, the one definite fact is that AI is growing, so superintelligence most likely will occur. After all, every two years our computing power doubles.

AI’s history shows how we got here. Even before the Middle Ages, the desire to craft artificial beings with thinking capabilities was part of our psyche. AI transitioned from Greek mythology to an actual possibility when Alan Turing published a paper in 1950 posing the question, “Can machines think?” He went on to establish a test for labeling a machine as intelligent: a machine and a human each converse with an outside judge online, and if the judge cannot tell which is which, then the machine must have intelligence. To pass this test, machines don’t need to reply with the “correct” answer per se, but rather to accurately mimic the response of a human. So, in the broadest sense, AI is simply intelligence exhibited by machines. Machine learning, a form of AI, emerged as a method for building intelligent machines in which the computer uses pattern recognition and algorithms that learn from data. Two years after Turing sparked the boom of AI growth, Arthur Samuel coded the first computer learning program, for the game of checkers. Similar to how a self-driving car eventually learns to avoid all potholes, the computer learned the winning strategies of checkers by playing many matches -- each time improving based on its experience in the previous rounds. While the fundamental idea of adapting from past experience is common ground between learning to play checkers and learning to drive, the difference in complexity between the two is massive, and each year new forms of AI continue to widen that gap.

In the empty Milton Academy computer lab, I recline in my swivel chair as I begin parsing a massive spreadsheet loaded with data about every Celtics game played in 2008. Getting into a computer science mindset for an interview with Erin Solovey, an AI researcher and professor at Drexel University, I chug along on my machine learning project, where the goal is to predict whether or not the Celtics will win a given game based on its first-quarter statistics. After only a minute or two of progress, Erin calls me on Skype.
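For a sense of what that kind of project looks like in code, here’s a stripped-down sketch of a model that predicts a win from first-quarter numbers. The file name and column names are placeholders, not the actual dataset:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    games = pd.read_csv("celtics_2008.csv")            # one row per game
    features = games[["q1_points", "q1_opp_points",    # first-quarter stats only
                      "q1_rebounds", "q1_turnovers"]]
    outcome = games["won"]                              # 1 if the Celtics won the game

    # Hold out a quarter of the games so the model is graded on games it never saw.
    X_train, X_test, y_train, y_test = train_test_split(
        features, outcome, test_size=0.25, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print("Accuracy on unseen games:", model.score(X_test, y_test))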

Erin currently uses brain sensors to find insightful patterns in her subjects’ brain activity while they use computers, so that computer systems can learn to adapt as users move through different cognitive states. After conducting research in the Humans and Automation Lab at MIT, the Human-Computer Interaction Research Group at Tufts University, and the Computational User Experiences group at Microsoft Research, she knows the AI field through and through. A former colleague even praised her as “the machine learning expert in any of her various research groups.”

To give me a better understanding of how machine learning is being used beyond the simple predictive analytics in my Celtics project, Erin details a hefty list of its applications. She rattles off an array of implementations: search engines, fraud detection, adaptive websites, bioinformatics, speech and handwriting recognition, brain-machine interfaces (her area of expertise), DNA sequence classification, computer vision, natural language processing and understanding, stock market analysis, user behavior analytics (another strong suit of hers), marketing and advertising, sentiment analysis, and video games. When Erin finally pauses to take a proper deep breath, she shrugs and says, “At least that’s all I can think of.”

When I ask Erin why we are pursuing advancement in AI when the rumors suggest devastating ramifications, she explains, “I think AI is going to do a lot of good. There are a lot of ways that this is going to make things better and safer and have a positive impact. For me, the biggest positive with machine learning is that it can accomplish its tasks so much faster and more accurately than we can, which will ultimately improve our quality of life. We already see it improving medicine, robot control, law enforcement, remote sensing, scientific research, among other industries.”

Considering we live in a highly progressive society, I think the efficiency of AI, combined with its ability to advance so many diverse fields, really encapsulates why we should bother investing in something as supposedly dangerous (at least according to the movies) as AI. And agencies including the Defense Advanced Research Projects Agency, NASA, and the National Institutes of Health seem to agree. With roughly $5.4 billion invested in AI research by these agencies, many top-notch computer scientists, like Erin, are researching and tinkering with applications of AI that promise to continue its current trend of enhancing functionality and productivity.

Waiting to meet another AI expert in a Starbucks, I immediately spot two forms of machine learning. The woman juggling her iPhone, a piping hot mocha latte, and some form of sugary midday treat resorts to using Siri to text her friend that she is “just leaving Starbs now.” Voice recognition systems like Siri or Google Now can understand what we say and react accordingly because of AI, and people are really taking advantage of these technologies: a recent study revealed that 98% of iPhone owners have used Siri and 96% have used Google Now. So even if we don’t realize it, AI is already integrated into our day-to-day lives, not confined to research labs.

I peek over at another customer, who is scrolling through her Facebook news feed, cluttered with ads. It’s no coincidence that the ads popping up on sites like Facebook cater to users’ interests -- it’s all machine learning algorithms. The computer collects tons and tons of data to selectively display ads that fit your preferences: which posts you like, how long you look at a particular post, which links you click on, and so on.
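Conceptually, each of those signals becomes a number in a profile of the user, and every candidate ad gets scored against that profile. Here’s a toy sketch of the idea (the features, numbers, and ads are invented; real ad systems learn their weights from the behavior of millions of users):

    # Signals gathered from one user's behavior, scaled 0 to 1.
    user = {"likes_sports_posts": 0.9,
            "clicks_travel_links": 0.1,
            "watches_cooking_videos": 0.4}

    # Each candidate ad is described with the same features.
    ads = {
        "running shoes":    {"likes_sports_posts": 1.0, "clicks_travel_links": 0.0, "watches_cooking_videos": 0.0},
        "beach resort":     {"likes_sports_posts": 0.0, "clicks_travel_links": 1.0, "watches_cooking_videos": 0.0},
        "chef's knife set": {"likes_sports_posts": 0.0, "clicks_travel_links": 0.0, "watches_cooking_videos": 1.0},
    }

    def score(ad_features, user_features):
        # Higher score means a closer match between the ad and this user's behavior.
        return sum(ad_features[f] * user_features[f] for f in user_features)

    # Rank the ads for this user; "running shoes" comes out on top.
    for name, feats in sorted(ads.items(), key=lambda ad: -score(ad[1], user)):
        print(f"{name}: {score(feats, user):.2f}")

In reality the weights aren’t hand-written like this -- they’re learned from enormous amounts of behavioral data, which is exactly why the ads feel so personal.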

AI is truly everywhere, and as its presence becomes more obvious and ubiquitous, the question won’t be whether we choose to take advantage of it or avoid it. Rather, AI will become an inherent piece of our day-to-day lives. By 2018, roughly six billion devices (ranging from mobile apps to connected appliances) will incorporate some form of AI. So we should all recognize that the AI upsurge is not only happening but also gaining more and more momentum. Instead of questioning how to dodge AI, we should be wondering how to advance it in a way that doesn’t lead to a Terminator 2.0 situation (in the extreme case) and that does enhance how we live our lives. This task raises major ethical questions moving forward: How do we build a machine’s moral compass? How do we prevent humans from being replaced? How do we build AI without losing control of it?

I then meet with a computer science major, Gideon, who offers his point of view on the challenges of AI. For him, the most relevant and concerning issue is building biased machines. “You know, when you build a machine learning algorithm, all it really is is a representation of a dataset. So the fact that software is being racist in such a systematic way is really worrisome,” he explains. When machines make decisions based on data that is biased or that leaves out certain pieces of information, their algorithms can yield inappropriate results. This issue is especially dangerous because we trust these algorithms to produce the correct output. Gideon predicts that the more AI advances, the more dependent we will become on this technology, and so the more imperative it is that we can trust it.

As he describes discriminatory AI fiascos caused by biased data, I wonder how we can teach a machine to be moral. If we can shape a machine’s values and ingrain in it the importance of its human “allies,” surely we can prevent a robot takeover. However, how we go about laying out these morals -- and even which morals we include, since different people uphold different values -- will be a challenge. It might seem like the answer is straightforward: simply write code that states, “Don’t kill/harm/(fill in the blank for any bad, violent action) humans.” Yet verbs like “harm” are vague. What if an AI robot given the responsibility of baking a cake suddenly runs out of the necessary ingredients and turns to humans as the next best option? How is the AI supposed to understand that baking humans in order to somehow make cake is “harming” us?

Another challenge associated with creating an AI’s conscience is teaching it how to handle moral dilemmas. I imagine I’m in the autonomous Tesla again: if I were inevitably going to get into a car accident, does the Tesla’s AI prioritize my life or the lives of the people in the other car?

Initially, it feels right for the Tesla to try to save my life at any cost, but what if there’s a family of five in the other car and just me in the Tesla -- suddenly the decision feels less clear. And should the AI have a preference for children over adults? Passengers over bystanders? Really, what factors need to be taken into account in such major conflicts? As AI takes on more responsibility in our lives beyond the benign scope of strategically displaying ads, ethical challenges arise in which life-or-death situations rest on the decisions of the AI. So, when we give AI massive responsibilities, we need to ensure it’s appropriately equipped with a morally just “mindset” to handle these circumstances. If we can achieve this goal, then the idea of superintelligence will start to feel significantly less intimidating.

When I ask Gideon about the buzz around the superintelligence, he laughs and whips out his phone to show me a tweet from a Harvard professor, Ryan Adams: “The current ‘AI scare’ going on feels a bit like kids playing with Legos and worrying about accidentally creating a nuclear bomb.” His curls bob as he shakes his head and explains that the craze around the idea of AI taking over the world and massacring humans is just that: ill-informed craze.

“The people worried about it are the people detached from the problem. There are huge research organizations that are investing a lot into AI, and the press latches on thinking that we’re going to build this AI takeover, but that is so far from anything that anyone’s working on.” And he’s right: the people I’ve encountered with hands-on experience in the AI field dismiss the idea of a robot takeover, while those who aren’t familiar with how AI really works admit to fearing its potential impact.

Gideon then mentions that Elon Musk, the CEO of Tesla and SpaceX and a co-founder of PayPal, has put almost a billion dollars into hiring expert AI researchers to try to build machines in an open way that will benefit society. “I actually know the guy who was the first hired for this team. The press writes about it as if they’re building killer robots, but he’s working on building tools for making conversation systems marginally better.”

As we continue to talk about killer robots and superintelligence, what really resonates with me is when Gideon states, “We’re just not there yet,” which he justifies with the fact that our current forms of AI simply aren’t smart enough to seek world domination. His “yet” is unsatisfying but simultaneously empowering. Although it suggests that there may come a time when robots have the intelligence and, therefore, the capability to take over our world, it also implies that we still have time to stop this scenario from occurring. Twenty-eight years, to be exact (if the predicted year of superintelligence, 2045, still holds). With nearly three decades, we have plenty of time to evaluate our approaches and shift our practices so that when 2045 arrives, we can have machines that are smarter than we are but still safe -- as long as these superintelligent machines carry human-contrived morals.

Even if we can avoid a robot takeover, AI can still have detrimental effects on our society if we don’t draw the line between AI that will help us live better and safer lives (like autonomous cars) and AI that might seem cool but is better left under human control (like autonomous weapons).

As I leave the coffeehouse with this final thought bouncing around in my mind, I remember a very distinct piece of advice from Erin: “We need to be prepared to make tradeoffs when it doesn’t make sense to insert AI into specific fields.” After all, if we can exercise some self-control over what we decide to make with AI, then we should be able to maintain control over how AI impacts our society. As I reflect on my chats with both of these AI geniuses, I feel a sense of clarity about the best methods for shepherding AI’s maturation. With every advancement in the AI field, we need to be asking, “What’s the point? Will this really help society?” We shouldn’t make AI just for the sake of making “cool” technology; instead, we should deliberately code it with beneficial intentions.

In the atrium of the MIT Media Lab, Boston’s hub of technological innovation, four floors bordered with glass walls overhang the entrance, and I see your stereotypical computer scientists: clusters of people banging their hands against keyboards. Except there’s a twist -- when you think of STEM research, you imagine a tense environment. Here, though, the atmosphere pumps with energy. Science-y jokes and doodles, alongside drawings of past inventions and prototypes, decorate the glass walls.

Zig-zagging across the open atrium, bright red steps contrast with the crisp white tiles and walls. Hanging near the glass elevators, a black plaque adds another layer of complexity to the stretch of white. It details the different research groups housed in the Media Lab:

  • Synthetic Neurobiology: to better understand the human condition and repair brain disorders.
  • Personal Robots: to create interactive robots that help humans live healthier lives, connect with others, and learn better.
  • Civic Media: to make technology for social change.
  • Object-Based Media: to alter storytelling and communication via sensing, understanding, and new interface technologies.
  • Responsive Environments: to augment human experience, interaction, and perception with sensor networks.
  • Human Dynamics: to investigate social networks’ impact on our lives in business and health.
  • Biomechatronics: a blend of biology, mechanics, and technology.
  • Social Computing: socio-technological systems.
  • Affective Computing: technology that understands emotions.

When I see “Affective Computing,” I smile and think of Rana El Kaliouby, whom I met only a few days prior. Rana made Entrepreneur Magazine’s list of the 7 Most Powerful Women to Watch in 2014 and the MIT Technology Review’s “Top 35 Innovators Under 35” for her work in affective computing. In 2006, Rana planted her affective computing roots here in the Media Lab, where she homed in on one research area: autism.

When I meet with Rana, she unfolds her reading glasses as a model of her first affective computing project. Pointing to the arm of her glasses, she explains that this is where a little camera would be installed, connected to a device that would process all of my facial expressions and give her real-time feedback.

“You know, like ‘Alexa’s interested’ or, ‘She’s confused, so stop and ask a question,’” Rana describes. From there, brands like Coca-Cola wanted to use her emotion recognition technology to see how effective their video ads were. Eventually, the high demand from commercial companies inspired Rana to found her startup, Affectiva. She adds, “I started with the image of being the emotion AI or the emotion brain of apps and devices.”

When I ask Rana why intelligent machines should have this emotional component, she describes how we humans have different kinds of intelligence, namely cognitive intelligence and, most importantly in her opinion, emotional intelligence. People with stronger emotional intelligence are more likable, more persuasive, and more successful in their professional and personal lives. “So,” Rana pauses, “our thesis is that it’s essentially going to be the same for technology.”

Rana’s hypothesis really makes sense -- as AI continues to advance and meld its way into every aspect of our lives, its ability to be “smart” and unravel intricate logic puzzles won’t be enough.

In order for these artificial machines to mesh well with our humanized environment, they need to understand the human qualities that go beyond IQ. The Tesla, a car with autonomous capabilities that still requires a human driver at times, is a perfect demonstration of this idea. By incorporating emotional AI, the car could understand when the driver is drowsy or distracted and adapt accordingly. Autonomous cars are currently learning just the rules of the road, but humans are still involved, so emotions need to be taken into account.

Although adding this humanizing touch to AI might seem as if we’re breeding these artificial beings to replicate or even replace humans, we’re not. Rather, the machine is recognizing the emotions and responding accordingly, so we’re making the machines more intelligent -- not more human.

As a result, this side of AI will have a wide range of positive impacts on our society, but Rana’s main focus right now rests on tailoring online learning to each viewer and on diagnosing mental health disorders.

“When you go to a doctor today, they don’t ask you, ‘What’s your blood pressure?’ They just put the device on you and measure it,” Rana explains while mimicking the action of strapping on the device. She continues, “And yet we still ask people, ‘On a scale from 1-10, how are you feeling? Are you depressed? Are you suicidal?’ which is a very unreliable way of getting information. We’ve studied how depressive patients tend to have really dampened facial emotions, you could read it in their voice, in their head movements. So I would love to see our AI used in a way that can flag early signs of depression.”


Up until this point in our interview, Rana has painted a picture of AI’s positive impacts, so I ask for her opinion on the downsides. Rana exclaims, “I actually just attended the World Economic Forum in Dubai, where I spent a great deal of time analyzing some of the dangers.” At the conference, Rana, along with sixteen other AI leaders, crafted recommendations on how to prepare for the effects of AI on our society, covering AI in robotics, education, health, energy, cities, mobility, manufacturing, and social inclusion.

The biggest topic Rana highlights is AI’s impact on the economy. The unavoidable truth is that jobs are being, and will continue to be, replaced by automated machines. In fact, 45% of all paid jobs could be automated. That said, the jobs under the greatest threat of automation are those performed in a controlled environment and involving a lot of repetition, so the manufacturing, food service and accommodation, and retail industries face the greatest risk. According to a recent study, 78% of jobs in these three categories could -- and probably will -- be automated. We will soon start to see the results of these estimates: within just this upcoming decade, AI threatens to replace 16% of all U.S. jobs.

While these percentages alone are disheartening, the aftermath of integrating technology into workplaces stretches beyond statistical predictions. When we automate controlled tasks like operating machinery within a factory, the rest of the factory can run more efficiently, and more workers become available to perform other, non-automatable jobs. In general, by allowing workers to redirect their time to other tasks, automation will augment their productivity rather than fully replace them.

This efficiency will ultimately have a positive impact on the economy. After studying twelve economies that together account for over 50% of the world’s economic output, the consulting company Accenture predicted that by 2035, the rise of AI will double these countries’ annual economic growth rates. Within the same time frame, by making workers more efficient, AI will push labor productivity up by 40%.

Unfortunately, this positive spin on automation doesn’t hold true for every job AI will replace. Take the self-driving car: this form of AI has the potential to displace a whole economy of truck drivers, delivery services, and taxis -- Uber has already deployed self-driving cars in Pittsburgh and San Francisco. When I ask Rana how we address these occupations, she describes an appealing solution.

She assures me that her team has compiled a preliminary list of ways to retrain those whose jobs will be completely replaced. Even though she isn’t allowed to disclose their specific plans yet, I feel relieved to hear that the top AI workers have begun drafting ideas regarding how to retrain workers and apply them in other areas.

Rana also spitballs ethical challenges ranging from privacy to decision making. Privacy problems arise when there are no limits on the data the AI can use. For machine learning, however, the more data the machine takes in, the more accurate and reliable its results. So, as AI inherits tasks with greater weight in our lives, we want it to execute its job to the best of its ability, and to do that we need to feed the machine as much relevant data as possible. And here we hit the tension between building strong AIs and not giving away too much of our personal information.

For example, machine learning algorithms have the potential to accurately diagnose diseases well in advance through pattern recognition. The machine would have all of the data from previous patients with the same disease (such as their symptoms), and if your AI were tracking things like how many times you ate in a given day, how long you slept, how many times you went to the bathroom, and so on, it could begin to intrude on personal parts of your life that theoretically should be left unanalyzed. So we need to regulate which AIs should have full access to any form of data (I vote for any healthcare-related AI) and which should face privacy restrictions when it comes to personal data.

The other major challenge Rana brings up, decision making, really ties into how much power we should give our AIs. If an AI detects that you are depressed, does it have the right to inform your doctor or your loved ones? For avatar nurses, which have already been deployed in some hospitals around the world, should they be given the ability to decide when to unplug a patient? Or should the robotic nurse only be allowed to recommend what it would do, leaving the final say to a human? This issue is intertwined with the morality challenges -- if we don’t have a moral structure for the AI, then I think it’s safer for humans to have the final say. But of course, in some instances we have to give the machine full autonomy, as with self-driving cars: the car isn’t going to stop and ask the human whether it has permission to switch lanes or to stop at a red light. So really, with any of these ethical challenges, there’s no clear-cut solution.

As I approach the Media Lab’s exit, a quote printed on the white wall from Jerome Wiesner, to whom the building is dedicated, catches my attention: “All learning must be linked with a broad concern for the complex effects of technology on our evolving culture.” Perhaps this quote embodies how we should address the “complex effects” of AI on our society: by continuing to wrestle with these ethical questions. If we approach AI with a heightened sense of ethical duty, killer robots might remain in the category of science fiction. And maybe, instead of replacing humans, AI will help us be better or, in the worst case, simply force us to evolve and advance.

After all, the human experience is something so unique to our species, something so special that robots may be able to mimic but not truly experience, so how could they possibly replace us or erase our importance? The thing is, AI is founded on the idea of learning rules. In basic forms of AI, these rules are explicitly stated in the code, but in more sophisticated AI applications like machine learning algorithms, machines learn the rules as they go. Regardless of how advanced the form of AI is, its intelligence still stems from rules.

So we can teach machines how to drive, how to make predictions based on data, even how to recognize and react to our facial expressions, but how can we teach them to genuinely feel joy or jealousy? When I see a small smile creep across Rana’s face as she describes all the ways in which she’s helped those with Autism, I realize that there’s no way to teach somebody how to have empathy -- you just have it. And when Rana’s eyes widen and her smile stretches deeper as she articulates how she hopes to diagnose mental disorders, I recognize that there’s no rule for how to feel inspired -- you just feel it.

And, really, that’s what makes us human: our innate ability to think and feel beyond algorithms and rules, a quality AI cannot take away from us.

The above article was a guest post from a Milton Academy senior who interviewed our CEO, reposted and edited with permission.

About Alexa Perlov:

Alexa wrote this article for a three-month-long English project and chose the topic based on her interest in computer science. Alexa plans on majoring in computer science at Columbia University’s School of Engineering and Applied Sciences. From there, she hopes to work at a tech startup and one day found her own. Alexa enjoys playing the piano and guitar, running varsity track, and writing for her school’s science publication.


Topics: Deep Learning, Artificial Intelligence, AI