The Rising Risks of AI: Insurers Reassess Coverage Amidst Growing Concerns
What happens when the software that everyone’s racing to adopt becomes too risky for anyone to insure? According to
reporting from the Financial Times, we’re about to find out.
Insurers Seek Exemptions for AI-Related Liabilities
Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from their corporate policies. As one underwriter told the FT, the outputs of AI models are "too much of a black box."
Recent High-Profile AI Mishaps Raise Alarm
The industry has good reason to worry. In March, Google's AI Overview falsely implicated a solar company in legal trouble, prompting a
$110 million lawsuit. Last year, Air Canada was forced to honor a discount its
chatbot offered without authorization. And in another incident, fraudsters used a digitally cloned executive to extract
$25 million from Arup during a convincing video call.
The Fear of Systemic Risk in the Insurance Sector
Insurers are not merely worried about one massive payout; their deeper fear is systemic risk. The prospect of countless simultaneous claims stemming from a single malfunctioning, widely used AI model is daunting. As one Aon executive put it, insurers can absorb a $400 million loss at a single company, but they cannot easily absorb an AI incident that triggers 10,000 losses at once.
FAQs: Is AI Too Risky to Insure? Perspectives from Insurance Professionals
FAQ 1: Why do some insurers believe AI poses too much risk?
Answer: Insurers are cautious about AI because of its complexity, its potential for unpredictable behavior, and the rapid pace of technological change. Traditional underwriting models are not equipped to assess the unique risks of AI systems, creating uncertainty around both coverage and liability.
FAQ 2: What specific risks associated with AI are challenging to insure?
Answer: Key risks include operational failures, unintended biases in decision-making, data privacy issues, and security vulnerabilities. The potential for significant financial loss in the event of a major failure can be difficult to quantify, making it challenging for insurers to offer comprehensive policies.
FAQ 3: How can companies mitigate risks to make their AI systems more insurable?
Answer: Companies can improve insurability by implementing robust risk management practices, such as regular audits, transparency in AI decision-making processes, comprehensive data protection measures, and ongoing monitoring of AI systems to identify and mitigate potential risks proactively.
FAQ 4: Are there any existing insurance products for AI-related risks?
Answer: The market is still developing, but some insurers now offer specialized products such as cyber liability insurance and technology errors and omissions (E&O) coverage. These offerings have limitations, however, and many companies struggle to find policies that fully address their AI-specific exposures.
FAQ 5: What can AI developers and users expect in terms of insurance in the future?
Answer: As understanding of AI risks evolves, the insurance industry is likely to develop more tailored products and risk assessment frameworks. Companies should stay informed about developments in this area and be prepared for more comprehensive options as best practices in AI management become standardized in the industry.