Artificial intelligence tools are spreading fast and generating plenty of interesting conclusions, in clearinghouses and elsewhere. Sometimes perhaps too interesting, as they can build on skewed input data, or even “hallucinate”. At the clearing specialist conference WFEClear, in Seoul, a Thursday panel discussed how to approach the buzzing area in practice.

Our coverage of WFEClear 2025 is gathered here.

Many of us have gasped over the analytical and creative capabilities of artificial intelligence tools recently. 


“It is always difficult to speak about artificial intelligence, because by the time you get on stage, your knowledge will be outdated,” noted panellist Boon Gin Tan, Chief Executive Officer of Singapore Exchange Regulation. “Are we talking about early AI or 2025?” 

Today, it is large language models that are at the centre of discussion (with ChatGPT being the best-known example).

Boon Gin Tan was accompanied on stage by 
Burak Akan, CCP Director of Turkey’s Takasbank,
Tao Chen, Group Head of Quantitative Risk Management with the Hong Kong Exchanges & Clearing, and 
Alicia Greenwood, CEO of clearinghouse division JSE Clear and Director of Post Trade Services with the Johannesburg Stock Exchange,
in discussion led by Charlie Ryder, Regulatory Affairs Manager of the World Federation of Exchanges (WFE).

In practical life, the gasp will soon be replaced by puzzlement over stark dilemmas. Internally built AI models are expensive to produce (and the skilled people who can help you are hard to find, as all industries compete for them) – yet using publicly available engines is often inappropriate, given the necessity of keeping clients’ data confidential.

Could improve the stress tests

Evident use cases of AI at central counterparty clearinghouses (CCPs) include market surveillance, where the ability of AI to rapidly spot patterns across large data volumes fits perfectly. Another attractive value for the CCPs could lie in fine-tuning the so-called stress tests that CCPs must run regularly to ensure that they keep big enough buffers against market hardships and member defaults. 
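For readers unfamiliar with the mechanics, the buffer logic behind such stress tests can be illustrated with a toy calculation. The sketch below assumes a “Cover 2”-style check – a standard under which a CCP verifies that its prefunded resources would survive the simultaneous default of the two clearing members with the largest stressed losses. All figures and names are invented for illustration.

```python
# Toy sketch of a "Cover 2"-style stress-test check: would the CCP's
# prefunded resources cover the simultaneous default of the two clearing
# members with the largest stressed losses? All figures are hypothetical.

def cover_two_shortfall(stressed_losses, prefunded_resources):
    """Return the shortfall (0.0 if none) if the two members with the
    largest stressed losses were to default together."""
    worst_two = sorted(stressed_losses.values(), reverse=True)[:2]
    required = sum(worst_two)
    return max(0.0, required - prefunded_resources)

# Hypothetical stressed loss per clearing member, in millions
losses = {"member_a": 120.0, "member_b": 95.0, "member_c": 40.0}

print(cover_two_shortfall(losses, prefunded_resources=250.0))  # 0.0  -> buffers suffice
print(cover_two_shortfall(losses, prefunded_resources=180.0))  # 35.0 -> shortfall
```

The fine-tuning the panel alludes to would happen in how the stressed losses themselves are generated – AI could help construct richer scenarios than the handful of historical and hypothetical shocks CCPs typically rely on.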

“I think most of us in the industry will tend to go straight to the risk modelling capabilities,” said Alicia Greenwood when asked about her dream applications. 

Takasbank’s Burak Akan started by emphasising risks – for example, that existing biases in the market can feed into distortions when AI models are trained on historical data. Also, where CCPs depend on third-party providers, transparency and accountability could become more difficult.

“But will it stop us? No,” he countered. With control especially over governance and the quality of the data used, clearinghouses can address many of these problems, he suggested.

Keep humans in there

Alicia Greenwood pointed to the importance, beyond the AI tools themselves, of embedding the technology in a suitable operating model where the staff is able to make sense of the outcomes: “Which results do you act on and which ones do you just park? Once you have all the tools, you risk suffering from information overload.”

HKEX’s Tao Chen was on the same track. “You probably actually need a human evaluation layer.” 

The balance of in-house versus external capabilities was discussed from several angles: quality, cost, integrity … Several speakers seemed to favour the idea of leaning on a trusted technology partner. This would address both the access to competent staff and the appropriate handling of confidential data.

In terms of data quality, SGX’s Boon Gin Tan pointed out how recent tech progress has lowered the threshold to combining internal and external data for deeper insights: “You will not be limited by your own data, but can use the data across the internet and the ability of AI to make connections.” 

“By inputting your own data, you are grounding the answer you get from the LLM. This goes some way towards solving the problem that AI models can ‘hallucinate’,” he concluded.

Tao Chen foresees that the cost of building AI capabilities will come down with time. “Developing your own AI models will become cheaper and cheaper. … But you need to go through some barriers in terms of becoming able to leverage open-source resources.”