Almost 60% do not treat AI ethical risks separately: Airmic survey

Nearly 60% of organisations are not treating ethical risks related to artificial intelligence (AI) separately from other ethical concerns, according to a recent survey conducted by Airmic, a UK association for risk and insurance professionals.

A separate poll, asking whether respondents believed AI’s ethical risks should be treated separately, found them almost evenly divided.

As organisations rapidly integrate AI applications into their operations, the associated ethical risks remain largely uncharted territory. Some respondents therefore deemed it sensible to give these risks extra visibility and attention within their risk management frameworks and processes.

This trend coincides with increasing calls for organisations to establish AI ethics committees and develop separate AI risk assessment frameworks to navigate contentious ethical situations.

Julia Graham, CEO of Airmic, emphasises, “The ethical risks of AI are not yet well understood and additional attention could be spent understanding them, although in time, our members expect these risks to be considered alongside other ethical risks.”

Hoe-Yeong Loke, Head of Research at Airmic, explains, “There is a sense among our members that ‘either you are ethical or you are not’ – that it may not always be practical or desirable to separate AI ethical risks from all other risks faced by the organisation.”

He adds, “What this calls for is more debate on how AI ethical risks are managed. Regardless, organisations should carefully consider the implications of potentially overlapping risk management and governance structures.”
