
Cyber Risk Modelling: Can We Really Apply Nat Cat Modelling Principles? And Other Key Considerations For Evaluating This Dynamic Risk

8/5/2019

The International Society of Catastrophe Managers (ISCM) was established in March 2006. One of its aims is to promote and develop catastrophe and exposure management within the insurance industry. The London arm of the ISCM gathered on Friday, 28 June 2019 in the Lloyd’s Old Library to discuss the challenges presented to our community with regard to managing and modelling cyber underwriting risk.
 
The afternoon was organised in two parts: first we heard an underwriter's perspective on the challenges of underwriting cyber from Helen Gemmell of Brit Insurance, followed by a panel of distinguished market experts moderated by David Clouston from Pelagius.
 
 
Part I - The Underwriter's Perspective
Helen Gemmell (Senior Underwriter, Cyber, Brit Insurance)
 
Having worked previously as a catastrophe exposure manager, Helen was well placed to address the audience and offered comments on cyber coverages and market trends, as well as her views on cyber models. The learning points from this session can be grouped into three categories: market trends, data issues and exposure management challenges.
 
Market trends
 
Coverage is offered for both first party and third party in the event of malicious and non-malicious attacks; the latter are accidental losses or system errors which can be costly and difficult to identify.
 
Typically, insurance would pay for all the legal and regulatory requirements (including fines), the extortion costs and any business interruption. Should the need arise, it would also pay for IT forensics and legal defences.
 
The ever-evolving nature of the risk and the current competitive market, however, are leading to broader coverage. For example, damage to hardware (including rendering single devices inoperable, a practice called 'bricking'), which used to be excluded, has become commonly offered since NotPetya.
 
Another example is the current pressure on a standard infrastructure exclusion, which applies to all losses arising from electrical failure, water or gas leakage. Removing, or even just narrowing, this type of exclusion poses significant challenges for the industry, including pricing and aggregations across lines of business.
 
Further undermining the industry’s ability to build robust models is the inconsistency of contract wordings. Multiple syndicates and broking firms use their own wordings, each with different endorsements, which makes it very difficult to capture the many variations in the terms and conditions offered.
 
Data issues
 
Data is largely incomplete. Some proposal forms are missing basic information like data volumes, patching policies or disaster recovery plans. And even when we do have data, we need to question it. A client might have a business continuity plan, but this plan might itself rely on data stored on a specific device (like a laptop) which may not be accessible during a cyber event. Quite often these flaws, which seem obvious, are not picked up until a breach actually happens (e.g. zero-day vulnerabilities), making it easier, ironically, to rate a company with a history of cyber incidents than one without.
 
As industry practitioners, we need to define the minimum amount of information required to model cyber. This threshold might vary according to the size of the company. Data requirements should also vary depending on the aspect of the risk that we are modelling. Lately ransomware attacks have been the largest driver of claims, while previously it was data breaches. Ransomware attacks exploit software vulnerabilities, so understanding software aggregations becomes particularly important: for example, understanding how much a company relies on Microsoft relative to Linux or other platforms is critical data that is not commonly available.
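To make the aggregation point concrete, here is a minimal Python sketch (not from the talk) of how an exposure manager might sum the limit exposed to each software dependency across a portfolio, assuming a simple policy record with hypothetical fields such as "limit" and "software_stack":

from collections import defaultdict

# Hypothetical portfolio records; "software_stack" lists the key vendors each insured depends on.
portfolio = [
    {"insured": "Acme Retail",    "limit": 5_000_000,  "software_stack": ["Microsoft", "AWS"]},
    {"insured": "Beta Logistics", "limit": 10_000_000, "software_stack": ["Linux", "AWS"]},
    {"insured": "Gamma Health",   "limit": 2_500_000,  "software_stack": ["Microsoft"]},
]

def limit_by_dependency(policies):
    """Sum the insured limit exposed to each software dependency across the portfolio."""
    totals = defaultdict(float)
    for policy in policies:
        for vendor in policy["software_stack"]:
            totals[vendor] += policy["limit"]
    return dict(totals)

print(limit_by_dependency(portfolio))
# e.g. {'Microsoft': 7500000.0, 'AWS': 15000000.0, 'Linux': 10000000.0}

Even a crude view like this highlights where a single vendor outage could touch a large share of the book.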
 
Exposure management challenges
 
In summary: the risk is evolving, coverage is complex, wordings are inconsistent, and clients are unpredictable. Against this backdrop building a robust modelling framework looks unrealistic, but models provide guidance for underwriting and can be effective at portfolio accumulation management. We need to do more to understand and communicate uncertainty, and underwriters have a responsibility to educate exposure managers on wordings, coverages and similar.
 
In many cases, deterministic modelling does not need to be very technical, as a simple scenario can give rise to an appropriate stress test. When looking at tail risk, that is, low-frequency and high-severity events, probabilistic models can be complemented by scenarios and every other tool available. Clear risk management processes that are regularly reported are essential to assessing the risk as an underwriter.
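As an illustration of how simple such a deterministic stress test can be, the Python sketch below (an invented example, not Brit's approach) applies an assumed damage ratio for a sustained cloud-outage scenario to a hypothetical two-policy portfolio and caps each loss by deductible and limit:

# Assumed scenario: a sustained cloud outage expressed as a fraction of each
# policy's business interruption sum insured. All figures are illustrative.
SCENARIO_DAMAGE_RATIO = 0.30

portfolio = [
    {"insured": "Acme Retail",    "bi_sum_insured": 8_000_000,  "deductible": 250_000, "limit": 5_000_000},
    {"insured": "Beta Logistics", "bi_sum_insured": 12_000_000, "deductible": 500_000, "limit": 10_000_000},
]

def scenario_loss(policy, damage_ratio):
    """Ground-up loss under the scenario, net of deductible and capped at the limit."""
    ground_up = policy["bi_sum_insured"] * damage_ratio
    return min(max(ground_up - policy["deductible"], 0.0), policy["limit"])

total = sum(scenario_loss(p, SCENARIO_DAMAGE_RATIO) for p in portfolio)
print(f"Portfolio loss under the stress scenario: {total:,.0f}")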
 
Conclusion
 
The market is still growing significantly, both in terms of new clients seeking new policies and in terms of existing clients buying significantly more limit. However, there is a time lag. Given the complexities of the risk, it might take a year to complete the proposal form and another year to underwrite; adding negotiations and sign-off, it can take up to three years to set up a new policy.
 
Rates are still largely driven by market forces, but the race to the bottom has recently shown signs of stabilisation, and loss ratios are still attractive despite growing steadily. Despite all that, some insurance companies are bound to be caught unprepared by a systemic event such as a cloud service provider failing for a sustained period of time. These events hold the potential for significant market aggregation.
 
 
Part II - An Expert Industry Panel
 
Moderator: David Clouston (Pelagius)
Mark Christensen (Head of Catastrophe Management @ Chubb Overseas General)
David Singh (Head of Exposure and Portfolio Management @ MS Amlin)
Domenico del Re (Director @ PWC)
Justyna Pikinska (Cyber Pricing Actuary @ Capsicum Re)
Jamie Pocock (Cyber Analytics @ Guy Carpenter)
 
The panel discussion opened with the question of human agency, and whether this specific feature of the risk allows the development of catastrophe models in the same way as for natural perils. There was some consensus on the need to learn to walk before we can run, in that cyber risk is evolving very quickly and is still emerging (we have not seen the worst of it – will we ever?). Against this backdrop both actuarial techniques (which learn from the past) and catastrophe modelling techniques (which rely on accepted studies of the risk) have material limitations.
 
There are a few things we can learn from past experience in cyber: recent events result from human actions or operational technology failure, regulation is tightening and fines are becoming more severe. At the same time, we need to be conscious of the ever-evolving nature of cyber risk: change affects behaviours and the nature of claims. Although a few years ago the main cause of claims was loss of data, more recently it has become ransomware. Following the attack on the city of Baltimore, where the ransom was not paid, the governments of other cities under attack chose to pay, creating the conditions for hackers to follow with more attacks. This has clearly increased the frequency of events, which in turn can lead to a spike in annual severity.
 
But dealing with human agency is not new for our community; we see this when modelling flood, for instance, and dealing with an ever-evolving peril is also not uncharted territory. For example, our hurricane models are getting better at capturing the storm surge element, which in turn is becoming more severe because of climate change. Exposure managers need to think about the assets first, whether tangible or intangible, and develop ways to assess possible damage; in other words, going back to basics before trying to push the envelope with probabilistic models. And that is because the potential for very severe losses is real, even more so considering insurance penetration is growing. In 2018 the global economic losses for cyber were three times the size of natural catastrophe losses.
 
On the topic of human agency and anthropogenic risk we heard from Stephen Burr of Pool Re, in the audience. He described the process for issuing the first ILS product backed by an in-house developed model. The main feature of the Pool Re model is that it is credible, and such reliability has allowed them to have a conversation with ILS partners. The frequency part was the most challenging to look at, requiring a partnership with academic institutions.
 
From terrorism the conversation veered towards State-sponsored attacks. There was a general question about whether the models differentiate between State-sponsored or “lone wolf” attacks, but more generally the panel agreed there is uncertainty around attribution and considerations would have to be made around targeted attacks versus collateral damage.
 
What about non-malicious losses? A coding error or other omission can cause serious damage, and policies should cover these events as well. These situations are not financially motivated and might as such be more difficult to capture within a catastrophe modelling framework. But even admitting that we are not able to capture all the nuances of the peril, the panel agreed that models have been and remain a very useful “measuring stick” to help understand the relative risk.
 
The question of data came up next, with the observation that there is no consistency in the underwriting community about rating factors. This poses a challenge in that there is no consensus about which aspects of the peril are more or less risky. It is clear that we need data standards, and maybe some level of education, to develop standard increased limit factor (ILF) curves or other rating tools. The panel went as far as suggesting there could be a market-wide initiative, an open modelling framework to agree on a common language around cyber.
 
There is, however, an immediate need to meet regulatory requirements and quantify cyber risk accurately for solvency calculation purposes, and at the moment each regulated entity is defining their own way of doing this. There are several approaches in use, from deterministic scenarios to frequency / severity modelling.
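For readers less familiar with the frequency / severity approach, the following Python sketch (parameters invented purely for illustration, not any firm's calibration) simulates annual cyber losses with a Poisson event count and lognormal severities, then reads off the mean and a 1-in-200-year figure of the kind used in solvency discussions:

import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 100_000               # number of simulated years
FREQ_LAMBDA = 0.8               # assumed mean number of cyber events per year
SEV_MU, SEV_SIGMA = 14.0, 1.5   # assumed lognormal parameters for loss per event

annual_losses = np.zeros(N_YEARS)
event_counts = rng.poisson(FREQ_LAMBDA, size=N_YEARS)
for year, n in enumerate(event_counts):
    if n:
        annual_losses[year] = rng.lognormal(SEV_MU, SEV_SIGMA, size=n).sum()

print(f"Mean annual loss:         {annual_losses.mean():,.0f}")
print(f"1-in-200-year loss (VaR): {np.quantile(annual_losses, 1 - 1 / 200):,.0f}")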
 
Such variety is evidence that the market is making an effort to understand and quantify the risk, and the panel expressed optimism that these efforts were converging towards a common understanding.
 
The discussion concluded with an analogy with the evolution of catastrophe models. Considering that catastrophe modelling as a discipline was born in the mid-1980s, each panellist was asked to determine where cyber models are today. The consensus was that we are around the mid-1990s, but it was also suggested that the evolution of cyber models is going to be a lot quicker than that of natural hazards models, with one year in the cyber space equivalent to maybe seven years in the natural hazards space.
 
At this pace, it may be only a few years before cyber catastrophe models are as developed as current natural hazards ones – and rightly so.
