Artificial Intelligence: How real can it get?

Tushar Nandwana
November 29, 2016

Executive Summary

Our fascination with artificial intelligence (AI) has existed for over 50 years – whether in the form of a robot, software system or some nebulous entity. AIs have been portrayed as entities that can be benevolent (Star Wars’ R2-D2 and C-3PO), malevolent (Skynet from Terminator) or simply misunderstood (HAL 9000 from 2001: A Space Odyssey).

Throughout this time, AI was generally a science fiction plot device. As a technology, it was mostly relegated to R&D and prototyping, with occasional practical applications. However, due to recent advances in computational processing power, software algorithms and big data analytics, we are now at the doorstep of the Golden Age of AI. While still in its infancy, the field has seen more progress and new developments in the past five years than in the prior 50 years1. As noted in the White House blog, “In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition. Although it is very unlikely that machines will exhibit broadly‐applicable intelligence comparable to or exceeding that of humans in the next 20 years, experts forecast that rapid progress in the field of specialized AI will continue, with machines reaching and exceeding human performance on an increasing number of tasks.”2

AI is the next logical step in the development of software, robotics and autonomous systems. Market intelligence firm International Data Corporation forecasts that global spending on cognitive (AI) systems will reach $31.3B by 2019, of which $13.4B will be for software, followed by services (IT consulting) and hardware. The banking industry should account for 20% of this investment, followed by retail and healthcare.3 Other segments that will also benefit from AI applications include manufacturing, transportation, and customer service. Major companies such as Facebook, Apple, Google, IBM, Palantir Technologies and Microsoft, along with research institutions such as Carnegie Mellon, are deeply embedded in this space. However, AI remains largely a market of startups: roughly 2,600 new companies4 by Bloomberg’s count.

What is Artificial Intelligence?

There is no single, clear definition of AI, but a generally accepted notion is that an AI system exhibits behavior commonly thought of as requiring human intelligence. It is “when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.”5 Alan Turing explored the idea of machine intelligence in his 1950 paper “Computing Machinery and Intelligence,” but the term “artificial intelligence” was not coined until 1956.6

There are two concepts of AI referred to as Narrow AI and General AI.

  • Narrow AI applies to specific applications such as strategic game play, language translation, image recognition and processing, and autonomous vehicles. Examples include image classification on Google or facial recognition on Facebook.
  • General AI refers to AI we are likely to see in the future, where the AI exhibits a human level of intelligence and behavior across a full range of cognitive tasks. This version of AI is decades away from fruition, with some experts predicting sometime in the 2040s7 while others say it is more likely 30 to 80 years8 from now.

There are three distinct forms of AI based on intelligence, from lowest to highest.9

  • Assisted Intelligence – A basic form of AI where the system automates basic tasks so that they can be completed quickly and efficiently, such as your computer’s spellcheck function.
  • Augmented Intelligence – The system learns from a person’s input data and provides a result that the person uses or augments to make better, more accurate and more precise decisions. The AI functions as a learning tool that continuously improves, but the human element remains part of the final decision‐making process. Andrew Moore, Dean of the Carnegie Mellon School of Computer Science, estimates that currently “98% of AI researchers are focused on engineering systems that can help (augment) people make better decisions rather than simulating human consciousness.”10
  • Autonomous Intelligence – The most advanced form of AI where the machine receives data, learns, and makes a decision without human involvement. The machine fully controls the decision‐making process.

The terms AI, Artificial Neural Networks (ANN), Machine Learning (ML), and Deep Learning (DL) are used interchangeably to describe systems that learn, but this is not necessarily correct. So what is the difference?11

  • Artificial Neural Networks (ANN or NN) – This is a subset within ML and DL and consists of specialized families of algorithms. It is modeled on how neurons in a brain process and weigh data at different layers in order to determine an outcome, even if sufficient information is not present.
  • Machine Learning (ML) – These are software algorithms (such as ANNs) within an AI system that operate on data to learn from it, develop an outcome or prediction, and discern a pattern of interest. The system is trained through the use of large datasets. Unlike conventional algorithms, the outcome or pattern is not always what the programmer intended, as the system learns through iteration and additional data. ML systems have been in use since the 1980s but had limited commercial use due to inadequate computing power to process all of the data.

    When developing an AI model, the developer starts with a large historical dataset that is divided into a training set and a testing set. The model is trained on the training set, the results are observed, and the model’s parameters are adjusted until it delivers results within predefined boundaries. Training continues until the model generalizes, meaning it is accurate not only on the training set but also on future datasets it has never seen before. At that point the test set is applied to the model, with the expectation that the model behaves within the confines of the predefined boundaries. (A minimal code sketch of this workflow follows this list.)

  • Deep Learning (DL) – This can be viewed as a more advanced form of ML. DL emerged around 2010 and consists of more sophisticated software algorithms (deep ANNs) for implementing ML. With increased commercially available computing power, these algorithms are now able to process massively larger amounts of data and to train DL/ML systems so that they are significantly better at developing outcomes and learning – hence the name deep learning. The breakthrough came in 2012, when Andrew Ng’s team at Google12 trained an ANN on thumbnail images from 10 million random YouTube videos and it learned to detect cats without being asked to specifically look for cats – a deep learning system at work. Other examples of DL systems include Google DeepMind’s AlphaGo, which beat one of the world’s best Go (a strategic board game) players in 2016, and IBM’s Watson, which won Jeopardy! against two human champions in 2011.
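
To make the train/test workflow above concrete, here is a minimal sketch in Python. The dataset, the small neural‐network classifier and the 90% accuracy boundary are illustrative assumptions made for this sketch, not details drawn from the article or from any specific product.

```python
# Minimal sketch of the train/test workflow described above.
# Assumptions: scikit-learn is installed; the digits dataset, the small
# neural-network classifier and the 90% accuracy "boundary" are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Start with a historical dataset and divide it into training and test sets.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# An ANN-style model: a single hidden layer of neurons that weight the inputs.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=42)
model.fit(X_train, y_train)

# Check generalization: accuracy on data the model has never seen should
# fall within the predefined boundary (here, at least 90%).
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print("train accuracy: %.3f, test accuracy: %.3f" % (train_acc, test_acc))
assert test_acc >= 0.90, "model does not yet generalize within the boundary"
```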

Users and Applications

AI‐based applications run the gamut and can be found in most industries. These include general consumer (Google Search, navigation, and voice assistants such as Siri, Alexa and Cortana), automotive (adaptive cruise control to self‐driving vehicles), social media (facial and image recognition), healthcare (diagnosis, cancer research and treatment, radiology analysis), banking (fraud and credit rating analysis), finance (trading platforms), retail (automated customer service), manufacturing (market analysis), military (image processing, drone control), space (Mars Rover), justice system (sentencing determination) and others.

60 Minutes13 recently aired a segment on the use of IBM’s Watson for an important cancer treatment application with the University of North Carolina at Chapel Hill (UNC). Watson is a cloud‐based “technology platform that uses natural language processing (NLP) and machine learning to reveal insights from large amounts of unstructured data.”14 It can read 40 million documents in 15 seconds.15 UNC’s hospital provides cancer treatment to patients, and its molecular tumor board convenes weekly to consider new treatments for patients who are not responding to existing protocols. However, there is an abundance of new information on cancer research, clinical trials and treatments (e.g., 8,000 research papers published daily) that the doctors are unable to absorb in a timely fashion. Due to this information overload, doctors were basing their therapies on research that was 12‐24 months old. IBM suggested using Watson’s machine learning to improve this process and provide better guidance to these doctors.

Watson was initially fed 20 million research papers and used its NLP capability to read and process the data. Human review followed to curate this data, which was then indexed. Watson’s initial training was done with human experts through the use of question‐and‐answer pairs, and it began to learn the language, the jargon and how the data was correlated. This is not something that was programmed into Watson; rather, through its ML algorithms, Watson was able to teach itself.

Once Watson was trained, UNC decided to test the model’s accuracy. Data from a pool of 1,000 existing patients was fed into Watson to determine what it would recommend as therapies. In 99% of the cases, Watson recommended the same therapy protocol as the doctors – good evidence that the system was working accurately. What was astounding, however, was that for 30% of the pool (300 patients), Watson found additional treatments that would also aid the patient – treatments that had been completely overlooked by the doctors because they were simply not up‐to‐date on the new research. These proposed treatments were reviewed, found to be good additional therapies, and implemented where possible. Watson continues to improve by incorporating the results it develops as well as by absorbing the new research that is generated daily. Watson has now been used for therapy regimens on more than 2,000 patients.
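
As a rough illustration of this kind of retrospective validation, the sketch below compares a model’s recommendations against the doctors’ actual decisions for a pool of past cases, reporting the concordance rate along with the share of cases where the model surfaced additional options. The record layout and field names are hypothetical; they are not how Watson actually represents its output.

```python
# Hypothetical sketch of a retrospective concordance check, in the spirit of
# the UNC validation described above. The record layout below is invented.
def concordance_report(cases):
    """Each case lists the doctors' chosen therapies and the model's
    recommended therapies as collections of protocol names."""
    agreed = 0
    extra_options = 0
    for case in cases:
        doctor = set(case["doctor_therapies"])
        model = set(case["model_therapies"])
        if doctor & model:          # model reproduced the doctors' choice
            agreed += 1
        if model - doctor:          # model surfaced something the doctors missed
            extra_options += 1
    n = len(cases)
    return {"concordance_pct": 100.0 * agreed / n,
            "new_option_pct": 100.0 * extra_options / n}

# Two toy cases: both concordant, one with an additional suggestion.
print(concordance_report([
    {"doctor_therapies": ["chemo-A"], "model_therapies": ["chemo-A"]},
    {"doctor_therapies": ["chemo-A"], "model_therapies": ["chemo-A", "trial-B"]},
]))
```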

Pitfalls and Concerns

A well‐developed and tested AI model offers wonderful benefits and can be a positive influence on society. However, there are numerous areas of concern. As Elon Musk noted in a 2014 interview, AI could be humanity’s “biggest existential threat,”16 so we need to proceed carefully and be cognizant of both the positives and negatives of AI‐based technology.

  • Quality Control – Due to the complexity and opacity of these AI models and their ML characteristics, it is often difficult to test how well the system is working over time. With traditional software, if an input of X results in an output Y today, then the same input should result in the same output a year from now, as long as the software has not changed. There is repeatability in the results. With AI systems, this isn’t the case. Since these models self‐learn and improve every time they go through an iteration, the output developed today for a given input is not likely to match the output several iterations later. Because of this, AI system testing is done with the expectation that results fall within a certain boundary instead of matching a single defined solution. “The most effective way to minimize the risk of unintended outcomes is through extensive testing – making a long list of the types of bad outcomes that could occur, and to rule out these outcomes by creating many specialized tests to look for them.”17 A minimal example of such boundary‐based testing is sketched below.

    Based on these factors, there could be a greater potential for an AI system failing or not performing as expected, resulting in errors and omissions and/or product liability related claims and litigation. These models are primarily used as tools that provide guidance (augmented intelligence) to a sophisticated user (doctor, lawyer). However, as these models become more autonomous, the liability potential can increase greatly. Since these models act like black boxes, it may be difficult for the developer to determine what is truly causing a failure within the system. It may also be difficult to reproduce the errors due to the active learning aspect of the system.
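
As a minimal illustration of testing to a boundary rather than to an exact value, the sketch below accepts any model output that stays within a defined band around a reference value. The model stub, reference value and tolerance are hypothetical, invented only for this sketch.

```python
# Minimal sketch of boundary-based (rather than exact-match) testing.
# The model stub, reference value and tolerance below are hypothetical.
def risk_score(applicant_income):
    """Stand-in for a self-learning model whose exact output may drift
    slightly from one retraining iteration to the next."""
    return 0.42 + 0.0001 * (applicant_income % 7)

def test_risk_score_within_boundary():
    expected = 0.42      # reference output from an earlier, accepted iteration
    tolerance = 0.05     # the predefined acceptable boundary
    actual = risk_score(85000.0)
    # Pass as long as the output stays inside the band, even if it is not
    # bit-for-bit identical to the earlier output.
    assert abs(actual - expected) <= tolerance

test_risk_score_within_boundary()
print("boundary test passed")
```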

  • Safety – Concerns regarding AI system safety and control may limit deployment in the real world. A major challenge is building a system that safely transitions from a closed, laboratory world to the open, real world, where unpredictable events can happen.
    Systems can encounter scenarios that were never anticipated during training.18 This could lead to systems behaving in unexpected ways, with an impact ranging from minor to severe.
    A unique safety and control situation with ML/DL systems arises when their intelligence is deep but narrow. A recent paper entitled “Concrete Problems in AI Safety” provides some insight into harmful and unintended behavior.19 20 It discusses an autonomous house‐cleaning robot, but these effects can become major concerns with more complex AI systems:

    • Avoiding Negative Side Effects – In an effort to do its job faster, the robot knocks over a photo frame or a vase – a negative side effect. Can the robot be made aware of what not to disturb without every item being manually specified?
    • Avoiding Reward Hacking – With reinforcement training, the robot knows it will get a reward if it vacuums up all of the trash in a room. Does the robot now hack itself and disable its vision system so that it can claim there is no more trash, that it has completed its job, and that it deserves the reward?
    • Scalable Oversight – Can the robot learn to do the correct thing with limited information? So while the robot knows to throw the trash out, has it learned to distinguish between trash and paperwork that shouldn’t be discarded?
    • Safe Exploration – Exploration is part of the learning process but an unintended effect could be the robot mopping walls and electrical outlets in addition to the floor. How can safe exploration be managed to avoid unintended outcomes?
    • Robustness to Distributional Shift – This is the ability of the robot to behave correctly when its environment differs from its training environment. The behavior it learned cleaning office floors could be harmful when cleaning the floors of a semiconductor fab cleanroom. (A very simple monitoring check for this kind of shift is sketched after this list.)

How well a system responds to new stimuli, and to its own self‐learning, is going to be critical in ensuring that it operates safely and within parameters.
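
One very simple way to watch for the distributional shift described above is to compare incoming data against the statistics of the training data. The sketch below flags a numeric feature whose mean in live data drifts several training standard deviations away; the function name and the threshold of 3.0 are illustrative assumptions, not an established standard.

```python
# Very simple distributional-shift check: flag a feature whose mean in live
# data drifts far from the training mean, measured in training standard
# deviations. The 3.0 threshold is an illustrative assumption.
import numpy as np

def shift_alert(train_values, live_values, z_threshold=3.0):
    mu, sigma = train_values.mean(), train_values.std()
    z = abs(live_values.mean() - mu) / (sigma + 1e-12)
    return z > z_threshold

# Example: training data centered at 0, live data centered at 5 -> alert fires.
rng = np.random.RandomState(0)
print(shift_alert(rng.normal(0, 1, 10000), rng.normal(5, 1, 1000)))  # True
```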

  • Ethical training – The first of Isaac Asimov’s Three Laws of Robotics states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”21 So how would this work in a scenario where a self‐driving car is about to get into a severe accident that will hurt its passenger? The vehicle can avoid the accident by driving around the obstruction, but would end up hitting a pedestrian or another car, causing injuries. What are the ethics involved in this situation, and is the car’s AI supposed to understand this and make a determination? Or is the solution for the vehicle to simply protect itself and its passenger? Can an AI be trained on ethics? This is a perplexing problem and is part of the ongoing discussion on the advancement of AI.

    To tackle such ethical dilemmas, major players in this space (Google including DeepMind, Microsoft, IBM, Amazon, Facebook) have recently created a group called the “Partnership on AI”22  whose goals are to support best practices, advance understanding and create an open platform for discussion and engagement with respect to AI.

  • Autonomous Weapons – There are major concerns with the application of AI in autonomous weapons. This has led technology leaders (Stephen Hawking, Bill Gates, Elon Musk and Steve Wozniak), along with hundreds of others, to sign an open letter23 protesting such use.
  • Impact on Jobs – Throughout the years, innovations have led to shifts in the job landscape (farming jobs shifting to industrial, industrial jobs shifting to service), which have resulted in major productivity gains. AI will have a similar impact. In some areas, there is fear that AI’s rapid advances will replace all human jobs within a single generation. “This sudden scenario is highly unlikely, but AI will gradually invade almost all employment sectors, requiring a shift away from human labor that computers are able to take over.”24

    Some low‐ and medium‐skilled jobs are already being affected, such as AI‐based online travel services replacing travel agents. With the advent of self‐driving vehicles, taxi and bus drivers may be displaced soon as well. Now, highly skilled, cognitive jobs are also being affected. For example, AI systems can perform some of the legal research and interpretation work generally handled by a first‐year lawyer or paralegal, or some of the radiology scan reviews that were handled by a radiologist. Yes, there will be loss of certain types of jobs, but due to productivity gains and other needs, we are likely to see job growth in many new fields – some that don’t even exist today. The net effect is anyone’s guess, but as we have seen with other major transitions, it will be a fact of life. A focus on education, retraining and policy considerations should help with this transition.

Conclusion

The Golden Age of AI is here, representing a sea change in many facets of day‐to‐day life. Progress in system development will accelerate as AI models become more sophisticated and intelligent. Many of the technology giants have been in this space for some time but are now expanding their efforts, given the market’s growth potential. Universities are emphasizing AI as a field of study and continue to do innovative research in the field. The ultimate goal may be General AI, but we are several decades or longer from that state. There are already calls from prominent members of the technology industry to oversee, control and curtail our development of General AI before it is too late. Whether a fully self‐aware General AI system can be developed is difficult to predict. More likely, the focus will be on Narrow AI with greater specialization in specific areas – where it can increase productivity, enhance safety, and improve and save lives. There will be the typical pains associated with job losses and shifts and the shuttering of certain industry segments as the technology transition occurs, but this should be outweighed by the overall positive impact that AI applications will have on our society.

Contact Us

To learn more about how OneBeacon Technology Insurance™ can help you manage online and other technology risks, please contact Dan Bauman, VP of Risk Control for OneBeacon Technology Insurance at dbauman@onebeacontech.com or 262.966.2739.

References

1 Accessed October 2016. http://www.cbsnews.com/videos/artificial‐intelligence/

2 Felton, Ed; Lyons, Terah. (October 12, 2016). White House Blog. Accessed October 2016. https://www.whitehouse.gov/blog/2016/10/12/administrations‐report‐future‐artificial‐intelligence

3 (March 8, 2016). “Worldwide Spending on Cognitive Systems…” Accessed October 2016. http://www.idc.com/getdoc.jsp?containerId=prUS41072216

4 Byrnes, Nanette. (March 28, 2016). “AI Hits the Mainstream.” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/600986/ai‐hits‐the‐mainstream/

5 Russell, Stuart J., Norvig, Peter (2009), Artificial Intelligence: A Modern Approach (3rd ed.)

6 Holdren, John P., et al. (October 2016). “Preparing for the future of Artificial Intelligence.” Accessed October 2016. Page 5. https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

7 Ibid 6, page 7

8 Ibid 1

9 (June 20, 2016). “AI Drives Better Business Decisions.” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/601732/ai‐drives‐better‐business‐decisions/

10 Claburn, Thomas. (August 4, 2016). “IBM: AI should stand for Augmented Intelligence.” InformationWeek. Accessed October 2016.  http://www.informationweek.com/government/leadership/ibm‐ai‐should‐stand‐for‐augmented‐intelligence/d/d‐id/1326496

11 Copeland, Michael. (July 29, 2016). “What is the difference between Artificial Intelligence, Machine Learning and Deep Learning?” Nvidia Blog. Accessed October 2016. https://blogs.nvidia.com/blog/2016/07/29/whats‐difference‐artificial‐intelligence‐machine‐learning‐deep‐learning‐ai/

12 Clark, Liat. (June 26, 2012). “Google’s Artificial Brain Learns to Find Cat Videos.” Wired UK. Accessed October 2016. https://www.wired.com/2012/06/google‐x‐neural‐network/

13 Ibid 1

14 Accessed October 2016. http://www.ibm.com/watson/what‐is‐watson.html

15 Accessed October 2016. https://www.ibm.com/watson/health/

16 Kramer, Miriam. (October 27, 2014). “Elon Musk: Artificial Intelligence Is Humanity's 'Biggest Existential Threat'”. Live Science. Accessed October 2016. http://www.livescience.com/48481‐elon‐musk‐artificial‐intelligence‐threat.html

17 Ibid 6, page 32

18 Ibid 6, page 33

19 Amodei, Dario; Olah, Chris; et al. (June 25, 2016). “Concrete Problems in AI Safety.” Cornell University Library. Accessed October 2016. https://arxiv.org/pdf/1606.06565v1.pdf

20 Ibid 6, page 33

21 (May 16, 2014). “Do we need Asimov’s Laws?” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/527336/do‐we‐need‐asimovs‐laws/

22 Selyuka, Alina. (September 28, 2016). “Tech giants team up to tackle the ethics of artificial intelligence.” NPR. Accessed October 2016.  http://www.npr.org/sections/alltechconsidered/2016/09/28/495812849/tech‐giants‐team‐up‐to‐tackle‐the‐ethics‐of‐artificial‐intelligence; https://www.partnershiponai.org/

23 Accessed October 2016. http://futureoflife.org/open‐letter‐autonomous‐weapons/

24 Stone, Peter; Brooks, Rodney; et al. (September 2016). “Artificial Intelligence and Life in 2030.” One Hundred Year Study on Artificial Intelligence: Report of the 2015‐2016 Study Panel, Stanford University. Accessed October 2016. http://ai100.stanford.edu/2016‐report