ChatGPT, a new generative AI application, has taken the world by storm. Launched in November 2022, it had more than 100 million users worldwide by January 2023, making it the most rapidly growing consumer application in the relatively short history of the modern computer era.
Members of AXA XL’s communications team recently spoke with Zhenghong Pan, underwriting manager for AXA XL Financial Lines in Singapore, about generative AI and how it could affect cyber risk.
We then edited and condensed his remarks for clarity and length. Thus, while the following copy is about generative AI, real people wrote and edited this article.
With over 10 years of experience underwriting technology and cyber risks in Asia, Pan is responsible for driving the growth of the financial lines portfolio in Singapore.
What is generative AI, and how do these new versions differ from previous AI models?
Generative AI is a new form of artificial intelligence that uses algorithms to create content, including code, text, audio and images. The most famous of these currently is ChatGPT; GPT stands for generative pre-trained transformer.
Whereas traditional AI systems were designed to recognise patterns and make predictions, ChatGPT is a neural network that “learns” the context and structure of languages, including both natural languages and computer programming languages.
I put “learns” in quotes because while the model develops an understanding of how different words or symbols are used together to create meaning, that doesn’t mean it knows what it is saying. Instead, generative AI has been likened to an internet parrot in that it repeats words or phrases that probability shows are most likely to occur next to each other in natural speech. That enables it to create new content in seconds that otherwise would take a person hours or days to produce.
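To make the “internet parrot” analogy concrete, here is a deliberately tiny Python sketch. Real systems such as ChatGPT use transformer networks trained on vast corpora, not word-pair counts over a single sentence, but the underlying idea – predict what is most likely to come next, then repeat – is the same. The corpus and words below are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the "parrot" idea: track which word tends to follow
# which, then repeatedly sample a likely next word. (Invented mini-corpus.)
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    choices, weights = zip(*followers[word].items())
    return random.choices(choices, weights=weights)[0]

# Generate a short continuation: fluent-looking, but with no understanding.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output reads like plausible English purely because each word was statistically likely to follow the one before it – no meaning is involved at any step.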
What are some of the ways different industries could use generative AI?
It has been hard to miss the flurry of media reporting in recent months about the disruptions that generative AI could unleash across society and virtually all industries. Some commentators predict that these technologies will lead to profound societal changes, especially when the bots achieve a level of “intelligence” surpassing human capabilities.
For now, however, the best answer to the question of how generative AI will affect our lives is “nobody really knows”.
That said, I can envision several applications for these tools within the commercial P&C industry. The most obvious include drafting emails and further automating the process of collecting and analysing data used to underwrite different risks. Generative AI should also help insurers process claims more quickly and efficiently and identify claims patterns to facilitate more accurate risk assessments.
The AXA Group and AXA XL have already developed guidelines for using generative AI tools such as ChatGPT for these and other purposes, and I’m sure those will be updated and expanded as new uses are identified.
However, ChatGPT and similar systems are still far from perfect. Initial tests suggest that the best models produce accurate information around 50-70% of the time. There have also been many anecdotes of ChatGPT going off on nonsensical tangents or simply making things up – what we humans would call hallucinating.
Nonetheless, these glitches and imperfections will likely become less common as the models are further trained and refined, although the shape of that reliability curve over time is also hard to predict.
How will ChatGPT impact cyber risk?
The caveats noted above notwithstanding, the potential for ChatGPT or other generative AI models to exacerbate cyber risk can’t be ignored. In particular, hackers could ask a chatbot to write the software needed to launch and execute cyberattacks. That could change the threat landscape significantly.
One example: cybercriminals offering ransomware-as-a-service (RaaS). These gangs don’t attack organisations directly but instead offer others the names of exposed entities – private companies, governments, schools and universities, etc. – that they can attack. They also provide software for executing ransomware attacks and a user manual. RaaS “vendors” either charge a licence fee or take a cut of whatever their “clients” take in.
Previously, developing a productive and profitable RaaS operation required fairly sophisticated software engineering capabilities. However, with generative AI, asking a chatbot to create the software needed to launch a phishing, malware or denial of service attack could be vastly simpler and produce better results than developing that code on one’s own.
Similarly, generative AI could make hacking more accessible to amateurs who previously lacked the skills to pierce organisations’ increasingly sophisticated security barriers.
Thus, we could see more and more “script kiddies” – the common, albeit silly, term for unskilled individuals who use scripts or programs developed by others for malicious purposes. In the past, they would have sourced these scripts on the dark web. Now, they could simply ask a chatbot to write the software for them.
Aren’t there restrictions on using generative AI for illegal purposes?
While ChatGPT and other systems have such limits, hackers have figured out how to get around them. Moreover, when a prominent tech publication prompted ChatGPT about using it to create malware, the program responded that “threat actors may use artificial intelligence and machine learning to carry out their malicious activities”, but the developer “is not responsible for any abuse of its technology by third parties”.
Also, although government regulators and others are closely examining the need to put guardrails around generative AI, one of the hard lessons we have learned over the past few years is that hackers will eventually find gaps or workarounds.
How effective are existing cybersecurity systems in detecting and preventing attacks created by chatbots?
Cybersecurity experts report that code drafted by ChatGPT could, as one put it, “easily evade security products” because of the chatbot’s apparent ability to create what are known as polymorphic or metamorphic viruses.
These are kinds of malware programmed to mutate their appearance – their signature – by generating new encryption and decryption routines with each new copy. That, in turn, means that many current antivirus or antimalware solutions that rely on signature-based detection won’t be able to recognise and block the attack.
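The following benign Python sketch illustrates why signature matching struggles with this. The “payload” is a harmless placeholder string, and the XOR wrapper is a hypothetical stand-in for the re-encryption step a polymorphic packer might perform: two copies of identical content end up with completely different byte-level fingerprints.

```python
import hashlib
import os

# Benign illustration: the same underlying content, wrapped with a fresh
# random key each time, produces entirely different bytes on disk.
payload = b"HARMLESS-PLACEHOLDER-PAYLOAD"  # a stand-in string, not malware

def repack(data: bytes) -> bytes:
    """XOR the data with a fresh random key, prepending the key."""
    key = os.urandom(len(data))
    return key + bytes(a ^ b for a, b in zip(data, key))

copy_a = repack(payload)
copy_b = repack(payload)

# A scanner comparing static signatures (here, file hashes) sees two
# unrelated files, even though both decode to identical content.
print(hashlib.sha256(copy_a).hexdigest())
print(hashlib.sha256(copy_b).hexdigest())
```

This is why defenders increasingly rely on behavioural detection rather than static signatures alone, as discussed below.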
At the same time, cybersecurity providers aren’t sitting back. They, too, are taking advantage of these new tools.
For instance, many companies use endpoint detection and response (EDR) tools. That software sits on an endpoint – for example, an employee’s workstation – and logs activity. Everything done on that computer is sent to a security operations centre, which monitors the records and uses automated software to look for anomalies or suspicious behaviour.
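As a rough illustration of the kind of automated check a security operations centre might run on those logs, the hypothetical Python sketch below compares new parent/child process launches against a baseline of previously observed behaviour and flags anything unseen. Real EDR analytics are far more sophisticated; the event names and threshold logic here are invented for illustration.

```python
# Minimal, hypothetical sketch of an EDR-style anomaly check: compare new
# parent -> child process launches against a baseline of past behaviour.

# Historical telemetry establishes what "normal" launches look like.
baseline = {
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "outlook.exe"),
    ("outlook.exe", "winword.exe"),
}

# New events streamed from an endpoint are checked against that baseline.
new_events = [
    ("explorer.exe", "chrome.exe"),     # seen before: treated as normal
    ("winword.exe", "powershell.exe"),  # never seen: Word spawning a shell
]

for parent, child in new_events:
    if (parent, child) not in baseline:
        print(f"Anomaly: {parent} launched {child} - flag for SOC review")
```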
Now, a new wave of software called XDR, or extended detection and response, is coming out. This is the next level of endpoint detection and response, leveraging AI to automatically spot anomalous behaviour or dangerous-looking signals.
As a result, it seems likely that generative AI will escalate the ongoing arms race between hackers intent on infiltrating organisations’ IT systems and cybersecurity professionals intent on keeping them out. In other words, the battles between the good and bad guys could become even more intense and complex.
What suggestions do you have for companies grappling with these new threats?
Although there are still many uncertainties, I don’t think the emergence of ChatGPT calls for different responses. Instead, it reinforces the importance of the three-pronged approach recommended by cybersecurity professionals, namely prevention, response and mitigation.
In terms of prevention, companies should ensure that their cybersecurity systems and protocols include the most recent and robust tools and processes for blocking cyberattacks. After all, cybercriminals typically seek out less well-defended targets where the level of effort needed to succeed isn’t so great.
Companies should also have mechanisms to limit the scope of an attack and swiftly restore data and affected systems.
AXA XL partners with leading breach response providers and maintains a 24/7 hotline to help organisations navigate these sensitive situations. Our cyber coverages also include access to firms specialising in cyber incidents, including computer forensics, legal issues, public relations, and credit and ID monitoring.
Finally, AXA XL supports clients in diverse industry segments to structure insurance policies designed to mitigate financial losses from cyberattacks.
Zhenghong Pan
Underwriting Manager, Financial Lines
Email: [email protected]