Our fascination with artificial intelligence (AI) has existed for over 50 years – whether in the form of a robot, software system or some nebulous entity. AIs have been portrayed as benevolent (Star Wars' R2-D2 and C-3PO), malevolent (Skynet from Terminator) or simply misunderstood (HAL 9000 from 2001: A Space Odyssey).
Throughout this time, AI was generally a science fiction plot device. As a technology, it was mostly relegated to R&D and prototyping, with occasional practical applications. However, due to recent advances in computational processing power, software algorithms and big data analytics, we are now at the doorstep of the Golden Age of AI. While still in its infancy, the field has seen more progress and new developments in the past five years than in the prior 50 years.1 As noted in the White House blog, “In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition. Although it is very unlikely that machines will exhibit broadly‐applicable intelligence comparable to or exceeding that of humans in the next 20 years, experts forecast that rapid progress in the field of specialized AI will continue, with machines reaching and exceeding human performance on an increasing number of tasks.”2
AI is the next logical level in the development of software, robotics and autonomous systems. Market intelligence firm International Data Corporation forecasts that global spending on cognitive (AI) systems will reach $31.3B by 2019, of which $13.4B will be for software, followed by services (IT consulting) and hardware. The banking industry should account for 20% of this investment, followed by retail and healthcare.3 Other segments that will also benefit from AI applications include manufacturing, transportation and customer service. Major companies such as Facebook, Apple, Google, IBM, Palantir Technologies and Microsoft, along with research institutions such as Carnegie Mellon, are deeply embedded in this space. However, per Bloomberg’s count, AI remains a market of startups, with 2,600 new companies4 at last count.
There is no single, clear definition of AI, but a generally accepted notion is that the system exhibits behavior commonly thought of as requiring human intelligence. It is “when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.”5 Alan Turing developed the idea of AI in his 1950 paper “Computing Machinery and Intelligence,” but the term itself was not coined until 1956.6
There are two concepts of AI, referred to as Narrow AI and General AI.
There are three distinct forms of AI based on intelligence, from lowest to highest.9
The terms AI, Artificial Neural Networks (ANN), Machine Learning (ML) and Deep Learning (DL) are often used interchangeably to describe systems that learn, but this is not necessarily correct. So what is the difference?11
When developing an AI model, the developer starts with a large historical dataset that is divided into a training set and a testing set. The model is trained on the training data, the results are observed, and the parameters of the model are adjusted until the model delivers results within predefined boundaries. Training continues until the model generalizes, meaning it is accurate not only on the training set but also on future datasets it has not seen before. At that point the test data is applied to the model, with the expectation that the model behaves within the confines of the predefined boundaries.
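The train/test workflow described above can be sketched in a few lines of plain Python. Everything here is an illustrative assumption, not part of the original text: the toy dataset, the 80/20 split and the one-parameter "model" (a simple decision threshold) stand in for a real AI model and its training process.

```python
import random

# Illustrative only: a toy dataset and a one-parameter "model"
# (a decision threshold) standing in for a real AI model.
def make_data(n=200, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(0, 100) for _ in range(n)]
    return [(x, 1 if x >= 50 else 0) for x in xs]  # label: is x >= 50?

def split(data, test_frac=0.2, seed=1):
    # Divide the historical dataset into a training set and a testing set.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def train_threshold(train):
    # "Training": try each observed value as the threshold and keep
    # the one with the best accuracy on the training set.
    best_t, best_acc = 0.0, 0.0
    for t, _ in train:
        acc = sum((x >= t) == bool(y) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

data = make_data()
train, test = split(data)
t = train_threshold(train)
print("train accuracy:", accuracy(t, train))
print("test accuracy:", accuracy(t, test))  # held-out check for generalization
```

The key idea is that the test set is never shown to the model during training, so test accuracy is the honest measure of whether the model has generalized rather than merely memorized its training data.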
Deep Learning (DL) – This can be viewed as a more advanced form of ML. DL emerged around 2010 and consists of more sophisticated software algorithms (multi-layered ANNs) for implementing ML. With the increase in commercially available computing power, these algorithms can now process massively larger amounts of data and train DL/ML systems so that they are significantly better at developing outcomes and learning – they are able to do deep learning. The breakthrough came in 2012 when Andrew Ng’s team at Google12 trained an ANN on thumbnail images from 10 million random YouTube videos and found it could detect cats without being asked to specifically look for cats – a deep learning system at work. Other examples of DL systems include Google DeepMind’s AlphaGo, which defeated a top professional Go (a strategic board game) player in 2016, and IBM’s Watson, which won Jeopardy! against two human champions in 2011.
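A sense of what an artificial neural network actually computes can be conveyed with a minimal sketch in plain Python. The weights below are hand-set purely for illustration (real DL systems learn millions of weights from data); the two-layer network computes XOR, a simple function that no single neuron can represent on its own – which is why layering matters.

```python
# Illustrative only: a two-layer "neural network" with hand-set weights.
# Real deep learning systems learn their weights from data; this sketch
# just shows what the layered computation looks like.
def step(z):
    return 1 if z > 0 else 0  # a crude activation function

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the activation.
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)     # hidden neuron: acts like OR
    h2 = neuron([x1, x2], [1, 1], -1.5)     # hidden neuron: acts like AND
    return neuron([h1, h2], [1, -1], -0.5)  # output: OR and not AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

Stacking more such layers, with weights learned rather than hand-set, is essentially what makes a network "deep."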
AI‐based applications run the gamut and can be found in most industries. These include general consumer (Google Search, navigation and voice recognition such as Siri, Alexa and Cortana), automotive (adaptive cruise control to self‐driving vehicles), social media (facial and image recognition), healthcare (diagnosis, cancer research and treatment, radiology analysis), banking (fraud and credit rating analysis), finance (trading platforms), retail (automated customer service), manufacturing (market analysis), military (image processing, drone control), space (Mars Rover), justice system (sentencing determination) and others.
60 Minutes13 recently aired a segment on the use of IBM’s Watson for an important cancer treatment application with the University of North Carolina at Chapel Hill (UNC). Watson is a cloud‐based “technology platform that uses natural language processing (NLP) and machine learning to reveal insights from large amounts of unstructured data.”14 It can read 40 million documents in 15 seconds.15 UNC’s medical hospital provides cancer treatment to patients, and its molecular tumor board convenes weekly to consider new treatments for patients who are not responding to existing protocols. However, there is such an abundance of new information on cancer research, clinical trials and treatments (e.g., 8,000 research papers published daily) that the doctors are unable to absorb it in a timely fashion. Due to this information overload, doctors were basing their therapies on research that was 12‐24 months old. IBM suggested using Watson’s machine learning to improve this process and provide better guidance to these doctors.
Watson was initially fed 20 million research papers and used its NLP to read and process the data. Human review followed to curate this data, which was then indexed. Watson did its initial training with human experts through question‐and‐answer pairs, and began to learn the language, the jargon and how the data was correlated. This is not something that was programmed into Watson; rather, through its ML algorithms, Watson was able to teach itself.
Once Watson was trained, UNC decided to test the Watson model for accuracy. Data from a pool of 1,000 existing patients was fed into Watson to determine what therapies it would recommend. In 99% of the cases, Watson recommended the same therapy protocol as the doctors – good evidence that the system was working accurately. What was astounding, however, was that in 30% of the pool (300 patients), Watson found additional treatments that would also aid the patient – treatments that were completely overlooked by the doctors because they were simply not up‐to‐date on the new research. These proposed treatments were reviewed, found to be good additional therapies and implemented where possible. Watson continues to improve by incorporating the results it develops as well as absorbing the new research being generated daily. Watson has now been used for therapy regimens on more than 2,000 patients.
A well‐developed and tested AI model offers wonderful benefits and can be a positive influence on society. However, there are numerous areas of concern. As Elon Musk noted in a 2014 interview, AI is humanity’s “biggest existential threat,”16 so we need to be very careful with AI, cognizant of both the positives and negatives of AI‐based technology.
Based on these factors, there could be a greater potential for an AI system to fail or not perform as expected, resulting in errors and omissions and/or product liability related claims and litigation. These models are primarily used as tools that provide guidance (augmented intelligence) to a sophisticated user (doctor, lawyer). However, as these models become more autonomous, the liability potential can increase greatly. Since these models act like black boxes, it may be difficult for the developer to determine what is truly causing a failure within the system. It may also be difficult to reproduce errors due to the active learning aspect of the system.
Safety – Concerns regarding the safety and control of AI systems may limit their deployment in the real world. A major challenge is building a system that safely transitions from the closed/laboratory world to the open/real world, where unpredictable events can happen.
Systems can encounter scenarios that were never anticipated during training.18 This could lead to systems behaving in unexpected ways, with an impact ranging from minor to severe.
A unique safety and control situation arises when an ML/DL system’s intelligence is deep but narrow. A recent paper entitled “Concrete Problems in AI Safety” provides some insight into harmful and unintended behavior.19,20 It discusses an autonomous house‐cleaning robot, but these effects can become major concerns in more complex AI systems:
How well the system responds to new stimuli or to its self‐learning is going to be critical in ensuring that the system operates safely and within parameters.
To tackle such ethical dilemmas, major players in this space (Google including DeepMind, Microsoft, IBM, Amazon, Facebook) have recently created a group called the “Partnership on AI”22 whose goals are to support best practices, advance understanding and create an open platform for discussion and engagement with respect to AI.
Some low‐ and medium‐skilled jobs are already being affected; for example, AI‐based online travel services are replacing travel agents. With the advent of self‐driving vehicles, taxi and bus drivers may soon be replaced as well. Now, highly skilled, cognitive jobs are also being affected. For example, AI systems can perform some of the legal research and interpretation work generally handled by a first‐year lawyer or paralegal, or some of the radiology scan reviews that were handled by a radiologist. Yes, certain types of jobs will be lost, but due to productivity gains and other needs, we are likely to see job growth in many new fields – some that don’t even exist today. The net effect is anyone’s guess, but as we have seen with other major transitions, it will be a fact of life. A focus on education, retraining and policy considerations should help with this transition.
The Golden Age of AI is here, representing a sea change in many facets of day‐to‐day life. Progress in system development will accelerate as AI models become more sophisticated and intelligent. Many of the technology giants have been in the space for some time but are now expanding their efforts, given the market’s growth potential. Universities are emphasizing AI as a field of study and continue to do innovative research in this field. The goal may be General AI, but we are several decades or longer from that state. There are already calls from prominent members of the technology industry to oversee, control and curtail our development of General AI before it is too late. Whether a fully self‐aware General AI system can be developed is difficult to predict. More likely, the focus will be on Narrow AI with greater specialization in specific areas – where it can increase productivity, enhance safety, and improve and save lives. There will be the typical pains associated with job losses/shifts and the shuttering of certain industry segments as the technology transition occurs, but these should be outweighed by the overall positive impact that AI applications will have on our society.
To learn more about how OneBeacon Technology Insurance™ can help you manage online and other technology risks, please contact Dan Bauman, VP of Risk Control for OneBeacon Technology Insurance at firstname.lastname@example.org or 262.966.2739.
1 Accessed October 2016. http://www.cbsnews.com/videos/artificial‐intelligence/
2 Felten, Ed; Lyons, Terah. (October 12, 2016). White House Blog. Accessed October 2016. https://www.whitehouse.gov/blog/2016/10/12/administrations‐report‐future‐artificial‐intelligence
3 (March 8, 2016). “Worldwide Spending on Cognitive Systems…” Accessed October 2016. http://www.idc.com/getdoc.jsp?containerId=prUS41072216
4 Byrnes, Nanette. (March 28, 2016). “AI Hits the Mainstream.” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/600986/ai‐hits‐the‐mainstream/
5 Russell, Stuart J., Norvig, Peter (2009), Artificial Intelligence: A Modern Approach (3rd ed.)
6 Holdren, John P., et al. (October 2016). “Preparing for the Future of Artificial Intelligence.” Accessed October 2016. Page 5. https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
7 Ibid 6, page 7
8 Ibid 1
9 (June 20, 2016). “AI Drives Better Business Decisions.” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/601732/ai‐drives‐better‐business‐decisions/
10 Claburn, Thomas. (August 4, 2016). “IBM: AI should stand for Augmented Intelligence.” InformationWeek. Accessed October 2016. http://www.informationweek.com/government/leadership/ibm‐ai‐should‐stand‐for‐augmented‐intelligence/d/d‐id/1326496
11 Copeland, Michael. (July 29, 2016). “What is the difference between Artificial Intelligence, Machine Learning and Deep Learning?” Nvidia Blog. Accessed October 2016. https://blogs.nvidia.com/blog/2016/07/29/whats‐difference‐artificial‐intelligence‐machine‐learning‐deep‐learning‐ai/
12 Clark, Liat. (June 26, 2012). “Google’s Artificial Brain Learns to Find Cat Videos.” Wired UK. Accessed October 2016. https://www.wired.com/2012/06/google‐x‐neural‐network/
13 Ibid 1
14 Accessed October 2016. http://www.ibm.com/watson/what‐is‐watson.html
15 Accessed October 2016. https://www.ibm.com/watson/health/
16 Kramer, Miriam. (October 27, 2014). “Elon Musk: Artificial Intelligence Is Humanity's 'Biggest Existential Threat'”. Live Science. Accessed October 2016. http://www.livescience.com/48481‐elon‐musk‐artificial‐intelligence‐threat.html
17 Ibid 6, page 32
18 Ibid 6, page 33
19 Amodei, Dario; Olah, Chris; et al. (June 25, 2016). “Concrete Problems in AI Safety.” Cornell University Library. Accessed October 2016. https://arxiv.org/pdf/1606.06565v1.pdf
20 Ibid 6, page 33
21 (May 16, 2014). “Do we need Asimov’s Laws?” MIT Technology Review. Accessed October 2016. https://www.technologyreview.com/s/527336/do‐we‐need‐asimovs‐laws/
22 Selyukh, Alina. (September 28, 2016). “Tech Giants Team Up to Tackle the Ethics of Artificial Intelligence.” NPR. Accessed October 2016. http://www.npr.org/sections/alltechconsidered/2016/09/28/495812849/tech‐giants‐team‐up‐to‐tackle‐the‐ethics‐of‐artificial‐intelligence; https://www.partnershiponai.org/
23 Accessed October 2016. http://futureoflife.org/open‐letter‐autonomous‐weapons/
24 Stone, Peter; Brooks, Rodney; et al. (September 2016). “Artificial Intelligence and Life in 2030.” One Hundred Year Study on Artificial Intelligence: Report of the 2015‐2016 Study Panel, Stanford University. Accessed October 2016. http://ai100.stanford.edu/2016‐report