
Our fourth think tank of senior underwriting and risk engineering professionals took place recently. Co-hosted by Risk Solved and GWTInsight, it built on subjects raised at previous forums.
What’s happening in Risk Engineering and Underwriting?
There has been a rise in compliance-related litigation in the US. These claims often follow subsequent surveys that uncover gaps which may be unrelated to the original report. For example, if the plan says the structure is steel when in fact it is timber, the question of where liability sits can arise; in one case this led to challenges over responsibility for a 24-hour fire watch. While many of these claims are believed to be spurious, more are expected, and not only in the US.
From an underwriting perspective, risks relating to civil commotion and political unrest are on the rise. Elections and the movement of people mean that tensions communicated across borders can escalate quickly.
What is happening with AI as a Risk Engineering and Underwriting tool?
Artificial intelligence (AI) has emerged as a transformative force across many industries. A major challenge is how AI can be used properly. While AI promises unparalleled efficiency and innovation, it also poses significant risks that demand our attention. Members explored the current working definition of AI, relevant legislation and appropriate AI risk management.
AI is replicating what a human can do in areas such as problem solving, planning, reasoning, perception and recognition. Firms need to be aware of the AI risks and liabilities they are opening themselves up to, including deepfakes, phishing, bias and subsequent litigation. One example already leading to claims is the use of online data without permission.
Managing the risk requires a combination of mitigation tools. Training staff to recognise misinformation, and creating a "human sandwich" in which people monitor systems and sense-check both inputs and outputs, are essential. Other considerations include embedding consent, ethics and bias checks within AI processes.
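The "human sandwich" idea, where people sit either side of an automated system to sense-check what goes in and what comes out, can be sketched in a few lines. This is a minimal illustration, not any firm's actual workflow; the flagged terms, reviewer name and example text are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    reviewer: str
    notes: str = ""

def sense_check(text: str, reviewer: str, flagged_terms: list[str]) -> Review:
    """Hold back AI output containing terms a human must verify before release."""
    hits = [t for t in flagged_terms if t.lower() in text.lower()]
    if hits:
        return Review(approved=False, reviewer=reviewer,
                      notes="Manual verification required for: " + ", ".join(hits))
    return Review(approved=True, reviewer=reviewer)

# Example: an AI-drafted survey summary is routed past a human before it is sent on.
draft = "The structure is steel framed and fully sprinklered."
result = sense_check(draft, reviewer="j.smith",
                     flagged_terms=["steel", "timber", "sprinkler"])
print(result.approved)  # False: a human must confirm the construction details
```

The point of the sketch is that nothing leaves the system marked approved until a named reviewer has confirmed the material facts the model asserted.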
The feeling is that, in terms of risk surveying, there are time savings to be made through AI, but human interaction will remain: the process can be improved but not yet replaced. Finally, some members expressed concerns about ChatGPT and have developed in-house alternatives that draw only on internal information that can be controlled.
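The in-house approach members described amounts to restricting an assistant to a vetted pool of internal documents rather than the open web. A minimal sketch of that restriction is below; the document names, their content and the keyword-matching retrieval are illustrative assumptions, not any member's actual system.

```python
# Controlled corpus: answers may only be drawn from these vetted internal
# documents (names and content are invented for illustration).
internal_docs = {
    "survey_guidelines.txt": "All steel-framed structures require a fire watch plan.",
    "claims_policy.txt": "Attritional losses under 10k are handled by the regional team.",
}

def answer_from_internal(query: str) -> str:
    """Return only passages from the controlled documents that match the query."""
    terms = query.lower().split()
    hits = [text for text in internal_docs.values()
            if any(t in text.lower() for t in terms)]
    # Refusing rather than guessing is the point of the internal-only design.
    return " ".join(hits) if hits else "No internal source found; escalate to a human."

print(answer_from_internal("fire watch"))
```

Because the assistant can only quote the controlled corpus, a query with no internal match is escalated rather than answered from uncontrolled sources.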
Developments in technology to manage airborne risks
The fallout from asbestosis is still being experienced by victims and insurers. The latest airborne risks, from dust on construction sites and in data centres to mould infestation, are being monitored and mitigated using IoT technology.
Risk Engineers are exploring both casualty and property risks: these range from maintaining safe working environments for contractors on building sites to keeping specialist facilities, such as data centres, within defined temperature and humidity ranges to prevent damage.
The ramifications of asbestos are still prevalent, despite the risks being decades old, and this has been one of the drivers for air quality testing. More recently, the legalisation of cannabis and its commercial production has been another driver.
Construction sites present their own specific risk factors that can be challenging to monitor. Using a single meter to check air quality is problematic, as results depend on where and when the testing takes place. More accuracy can be achieved with sensors worn by individuals and multiple monitoring points measuring particle changes in real time.
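The multi-point, real-time approach can be sketched as follows: each sensor reports a series of readings, and an alert is raised when a point either breaches a limit or shows a sharp rise against its own baseline. The sensor names, readings and thresholds here are purely illustrative, not regulatory values.

```python
from statistics import mean

# Hypothetical respirable-dust readings (in micrograms per cubic metre) from
# several monitoring points on one site; values are invented for illustration.
readings = {
    "scaffold_north": [38.0, 41.5, 44.0],
    "cutting_area":   [95.0, 120.0, 140.0],
    "site_office":    [12.0, 11.5, 13.0],
}

LIMIT = 60.0        # assumed exposure limit for this sketch only
RISE_FACTOR = 1.25  # flag a sensor whose latest reading jumps 25%+ above baseline

def flag_sensors(readings, limit=LIMIT, rise=RISE_FACTOR):
    """Return monitoring points breaching the limit or rising sharply."""
    alerts = []
    for name, series in readings.items():
        latest, baseline = series[-1], mean(series[:-1])
        if latest > limit or latest > baseline * rise:
            alerts.append(name)
    return alerts

print(flag_sensors(readings))  # ['cutting_area']
```

Comparing each point against its own recent baseline, rather than taking a single spot reading, is what makes the network approach more accurate than a lone meter.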
Businesses and organisations are recognising the distinction between long-term risks and short-term liabilities in terms of staff wellbeing and getting people back to work.
Data, data everywhere – how can it be harnessed?
The group discussed the flood of information now available, data bias and practical solutions as to how it can be analysed.
The feeling is that AI cannot do a Risk Engineer's job as it is done today, but could approach it in a different way. It is expected that, as AI develops, roles will become more consultative: AI could assess the risk while people provide the softer skills needed to interpret the findings and liaise with clients.
Where AI could really deliver is in reducing attritional losses rather than large losses; the latter are easier to identify and mitigate through human intervention. The challenge, however, is that the granular claims data required is hard to gather and therefore to link to causation.