Far from being a silver bullet for every issue in modern banking, AI, like other developments in the sector, is an imperfect solution. What it might provide in improved efficiency and enhanced intelligence is tempered by the potential amplification of existing risks. In a joint study focused on compliance, the Association for Financial Markets in Europe (AFME) and PricewaterhouseCoopers (PwC) gathered insights from 17 AFME member firms and four European and UK regulatory bodies to provide an overview of current AI-related challenges and opportunities.
One of the benefits of AI implementation, the study notes, is the driving of “an improved compliance culture through ‘real-time’ monitoring, the ability to access and analyse greater sets of data, and the use of preventative, rather than detective, controls”. In fact, compliance monitoring and surveillance in the first line of defence (1LoD), which includes trade surveillance, communications surveillance, and behavioural analytics, was rated by 47 percent of participants as the top area to benefit from AI. Joining it in the top three are anti-money laundering (AML)/know your customer (KYC) functions, and client engagement.
In terms of risks, 41 percent of respondents identified explainability and transparency as their top worry. According to AFME, explainability “focuses on why an AI system has reached a specific output and is therefore related to the operation of the model and the data that is used to train it”. This differs slightly from transparency, which “involves being open about data handling, model limitations, potential biases and the context of its uses to ensure that all stakeholders clearly understand the workings of the AI system”. Making up the top three risks are breach of data confidentiality and data issues.
Despite the identified risks, 65 percent of firms surveyed revealed that they do not currently have frameworks and policies in place for AI. However, 82 percent of them have plans to set up frameworks in the future.
Data governance, data quality, and data privacy regulations were cited as the main barriers to adopting AI in compliance functions. Should implementation eventually happen, the firms surveyed expressed the belief that compliance will remain “human-led”, as subjective assessment and critical thinking would still be required. Some 65 percent of respondents revealed that they are not currently recruiting for AI skills within the compliance function, although there is a recognition that “there will be a need for a gradual change in skills where employees in the compliance function become more AI literate”.
The study concluded with a number of suggestions for firms seeking to enhance their AI capabilities, including gaining an understanding of the organisation’s risk of AI exposure, building AI into the compliance function’s transformation agenda, and prioritising governance and accountability.