The trouble with AI

October 12, 2018 by Nick Ferguson

Artificial intelligence is increasingly being deployed within risk departments as a tool for both security and risk management. While such products can save organisations time and money, they may also introduce a new set of risks.

One of the latest offerings on the market, announced by Oracle this week as part of its enterprise resource planning cloud solution, allows organisations to continuously monitor for segregation of duties, financial compliance, privacy risks, proprietary information and payment risks.

The new controls embed self-learning AI techniques to constantly examine all users, roles and privileges against a library of active security rules. The offering includes more than 100 configurable rules across general ledger, payables, receivables and fixed assets.

“As the pace of business accelerates, organisations can no longer rely on time-consuming manual processes, which leave them vulnerable to fraud and human error,” said Laeeq Ahmed, managing director at KPMG, who added that the adaptive capabilities and AI technology of such products “can help organisations manage access controls and monitor activity at scale to protect valuable data and reduce exposure to risk”.

The potential of such systems is significant, but deploying AI across an organisation is not for the faint of heart. It requires skilled professionals to operate and maintain the technology, as well as active engagement by senior management, though organisations that take risk management seriously should already have both in place.

In practice, particularly in Asia, not many do. There is a risk, according to some, that AI has become a buzzword that senior decision-makers may be keen to buy into without fully appreciating the technology.

And AI certainly has the potential to cause serious problems for an organisation that is complacent. Indeed, when you have people such as Stephen Hawking, Bill Gates and Elon Musk raising concerns, it would be foolish not to take them seriously.

“Does the board understand the potential impact of AI on the organisation’s business model, culture, strategy and sector?” asks Jeanne Boillet, EY global assurance innovation leader, in a recent article.

Some critics worry that the power of AI is being overestimated amid the same rose-tinted optimism that led people in the 1950s to imagine that a future of flying cars and robot butlers was just around the corner. Instead, we have vacuum cleaners that get tangled under the sofa and bots that suggest movies and music we have no interest in.

The yawning gap between reality and expectation is not merely a flippant observation. “Executives investing massively in AI may turn out to be disappointed, especially given the poor state of the art in natural language understanding,” according to Gary Marcus, a neural science specialist at New York University and long-standing AI sceptic. “It is probably fair to say that chatbots in general have not lived up to the hype they received a couple years ago.”

Marcus worries that overhyping the ability of AI could lead to a spectacular failure that ends up undermining the whole field.

“If, for example, driverless cars should also disappoint relative to their early hype, by proving unsafe when rolled out at scale, or simply not achieving full autonomy after many promises, the whole field of AI could be in for a sharp downturn, both in popularity and funding.”

One of Marcus’s central criticisms of AI is its reliance on supervised learning. To teach a computer to recognise turtles, we show it lots of pictures of turtles. With AI now being deployed in security software, there is a significant risk that this training data could be a weak point — the security of the system is only as good as the data it is trained on. And if such data is leaked, hackers could learn what constitutes a red flag and evade detection.
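Marcus's criticism is easier to see with a concrete example. Below is a minimal sketch of supervised learning applied to a "red flag" classifier, trained on synthetic data with scikit-learn; the feature names, labelling rule and library choice are illustrative assumptions, not a description of Oracle's or any other vendor's product.

# Minimal sketch of supervised learning for a "red flag" classifier.
# All features, labels and thresholds are synthetic stand-ins; a real
# system would train on labelled transaction or access-log records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is a feature vector (say, transaction amount, hour of day,
# privilege level); each label marks whether the record was flagged.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)  # assumed labelling rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The model only knows what the labels taught it. If the training data
# leaks, an attacker can see which feature patterns are flagged and
# craft activity that stays just inside the learned boundary.

The sketch makes the weak point concrete: the classifier's notion of a red flag is entirely determined by its training labels, so leaked or poisoned training data tells an attacker exactly where the decision boundary sits.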

While the term artificial intelligence conjures images of fantastically smart robots, the reality is that deep-learning systems are still extremely limited and nowhere near achieving the kind of general intelligence that allows children to understand sentences or use common sense. There are few indications that computer scientists are close to breaking through this barrier and, as long as that remains the case, it will be important to focus on the role of humans in sound risk management.
