Artificial intelligence is pegged as the magic solution to nearly every business challenge, and as the destroyer of jobs. Is that accurate? Like any technology, artificial intelligence isn’t magic. It is simply another useful tool, suited to solving specific problems rather than all of them. The challenge is understanding what kinds of problems it can solve, how to make the most of it, and what opportunities will be created as a result.
The foundational concepts behind artificial intelligence have been around for a long time. AI had its first commercial application in the 1960s, when it was used to eliminate echoes in phone calls. It has been taught as part of computer science and engineering curricula for many years, and cannot be said to be particularly new. The big change that has driven so much hype in recent years is the increase in computing power and the availability of tools that have made the technology more accessible and applicable.
Rob High, chief technology officer of IBM Watson and keynote speaker at the recent GR Innovation & Insurtech Conference, gave a great definition of artificial intelligence. He said: “AI is the application of machine capabilities to the human experience.” He explained that it is a misconception that AI will, at this stage, replace human intelligence; rather, it will augment it. This is a critical point that is often misunderstood.
Too many people overestimate the capability of artificial intelligence and expect that we are close to achieving human-level machine intelligence, often referred to as “artificial general intelligence”. This is actually exceptionally difficult and likely many years away. Martin Ford interviewed many of the top minds in AI for his recently published book, Architects of Intelligence. The average guess among his interviewees was that general intelligence will not be achieved until 2099, some 81 years from now!
So, if we’re many years away from general intelligence, what is AI good for? At its present maturity, AI is most capable as a tool for pattern recognition. You can apply it to recognise and categorise patterns in language, in vision, and in data sets small or large. To put that into context, Andrew Ng, co-founder of Google Brain, suggested: “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
AI can be an incredibly efficient tool if applied correctly, performing far better than humans on specific tasks. The challenge is that, to apply it correctly, it must be taught to recognise patterns from well-classified data sets that cover as many scenarios as possible.
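To make that idea concrete, here is a minimal sketch, in Python with invented toy data, of what “teaching” pattern recognition from well-classified examples looks like: a nearest-centroid classifier that can only recognise the kinds of patterns present in its training set. The feature values and labels are illustrative assumptions, not real data or any particular vendor’s method.

```python
# Minimal sketch of pattern recognition from labelled examples.
# Each training example is a pair of made-up numeric features plus a label.
labelled_data = {
    "cat": [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)],
    "dog": [(0.2, 0.9), (0.1, 0.8), (0.25, 0.85)],
}

def centroid(points):
    """Average of a list of (x, y) feature pairs."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# "Training": summarise each labelled class by its centroid.
centroids = {label: centroid(pts) for label, pts in labelled_data.items()}

def classify(features):
    """Assign the label whose centroid lies closest to the input."""
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

print(classify((0.85, 0.12)))  # resembles the "cat" examples -> cat
print(classify((0.15, 0.88)))  # resembles the "dog" examples -> dog
```

Note the limitation: an input unlike anything in the training set is still forced into one of the known labels, which is exactly the weakness the driving examples below describe.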
Consider the auto manufacturer Tesla, a leader in the application of artificial intelligence to achieve autonomous driving. One reason Tesla is such a leader is that it relies on “fleet learning”, having built sensors into its cars from very early on. For many years these sensors have transmitted data on driving conditions and scenarios back to Tesla from every one of its cars on the road. This has given the company an incredible advantage: access to a wealth of data far beyond tightly controlled scenarios.
However, when it comes to training AI, the amount of data is not nearly as important as its breadth and diversity. Accidents involving autonomous driving have illustrated this. Tesla itself has had a couple of fatal accidents, as have other players in the autonomous driving space.
It can be impossible to predict the reaction of an AI that has never been trained to recognise the white side of a tractor-trailer crossing its path against a brightly lit sky, or a damaged highway barrier unlike anything it has seen before. Humans may be imperfect and inefficient, but we can draw on a wealth of contextual information that may never have been available for the training of an AI.
We can recognise a truck as different from an overhead highway sign, or spot a highway barrier that has been damaged and marked with cones. Unless it has been trained with this contextual information, an AI has no ability to recognise these things. That is why it has been relatively easy to augment human driving with AI-powered driving aids, but incredibly difficult to replace humans entirely.
The fear that human jobs will simply be replaced by AI is misguided. As High suggested, we are more likely to see AI used to develop tools that augment our jobs than to eliminate them wholesale. Take, for example, the job of an underwriter. To replace an underwriter entirely with AI, it would first be necessary to train the AI to do the job. To train it adequately, you would need clear processes and instrumentation that track every part of each decision and the reasons behind it.
Data on why contracts were rejected is as important as data on why they were accepted. Collecting data only on good contracts builds selection bias and gaps into the data set. Every portion of an underwriter’s job needs to be tracked in order to capture the right breadth of data. A failure to do so could lead to catastrophic errors, where something completely obvious in context to a person is invisible to an AI trained to do the job.
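A toy illustration of this bias, again in Python with entirely made-up contract records: a model trained only on accepted contracts can do no better than accept everything, because “reject” never appears in its training data. The records and the simple majority-decision “model” are hypothetical, chosen only to show the gap.

```python
# Toy illustration: training data drawn only from accepted contracts.
# Each record is (risk_score, decision). All decisions here are "accept",
# because the rejected contracts were never logged.
accepted_only = [(0.1, "accept"), (0.2, "accept"), (0.3, "accept")]

def train_majority(examples):
    """Learn the single most common decision in the training data.

    Note that the risk score is ignored entirely: with only one
    outcome ever observed, no feature can change the answer.
    """
    decisions = [decision for _risk, decision in examples]
    return max(set(decisions), key=decisions.count)

model_decision = train_majority(accepted_only)

# Even an obviously bad contract gets the same answer, because "reject"
# was never present in the data the model was trained on.
terrible_contract_risk = 0.99
print(model_decision)  # the only decision the model has ever seen
```

However crude, this mirrors the underwriting point: without records of rejections and the reasons for them, the breadth needed to learn the whole job simply is not there.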
Until a role such as underwriting is fully augmented with tools that capture every detail across a wide-enough variety of scenarios, it simply will not be possible to replace it with AI. Far more likely is that AI will serve as a tool to make the job of an underwriter more efficient and accurate. There are things AI can do far better than a human. Only a small subset of people truly excel at data analysis, and once the data becomes large, it is impossible for a person to make sense of it all. This is where AI shines, because it can be trained specifically to handle these sorts of scenarios.
This opens opportunities to leverage AI to analyse the data sets an underwriter relies on to make decisions, augmenting their intelligence rather than replacing it. As an example, several of the insurtech start-ups pitching at the Innovation & Insurtech Conference were applying AI analysis to satellite imagery cross-referenced against smartphone-captured imagery. They used this to provide more granular classifications, such as the flood risk of individual properties. This offered a glimpse of the potential to estimate more precisely the risk of the individual properties that make up a larger regional whole.
Another example is the impact of small devices becoming powerful enough to run AI analysis on site, as part of the “Internet of Things”. It will be increasingly possible to tap into live data sources, such as the diagnostics port of a car, and apply AI to monitor in real time how it is performing and whether any issues are developing, long before they become apparent.
Similarly, with health diagnostics it will be possible to monitor a sick individual’s vital signs in real time and provide deeper insight into developing issues before they become serious. This opens the potential to be proactive rather than reactive in managing risks.
Like any technology, AI is not a magic solution to everything; it is just another useful tool for solving specific challenges. We are likely a long way from the human-level general intelligence that would readily destroy jobs. Instead, we can look forward to AI being used as a tool to augment human intelligence. This will open new opportunities for more capable analysis of both big and small data sets, making our jobs less tedious and far more efficient.
AI certainly will change the world in the coming years; just perhaps not in the way many think it will.
• Denis Pitcher is a software and technology solutions consultant with an interest in exploring the potential of blockchain and distributed-ledger technologies. He is also tech co-founder and chief architect of resQwest.com, a global tourism technology solutions provider. He can be reached at email@example.com