IEyeNews


AI and machine learning – views from Hong Kong and the UK

By Douglas Thomson From GBRR

Credit: Zapp2Photo on Shutterstock

Artificial intelligence and machine learning have rapidly grown in importance during the covid-19 pandemic, say speakers at a recent Bank of England webinar – but a report issued the same day by a group of Hong Kong regulators said new types of risk arising from the use of the technologies may require new types of macroprudential regulation.

In a webinar hosted by the Bank of England’s fintech director Tom Mutton, speakers described the rapidly changing attitudes to AI among banks and other businesses over the course of the covid-19 pandemic.

The BoE published the webinar on its site on 21 August, the same day the Hong Kong Monetary Authority published its own report on AI and machine learning in the banking sector.

Chandini Jain, CEO of data science company Auquan, told the webinar, which originally took place on 10 August, that the initial use of AI and machine learning to predict how infections would spread at the beginning of the pandemic had convinced “on the fence” banks of its utility.

World Bank senior technology advisor Lesly Goh described a “catalytic effect” on digitalisation coming out of the pandemic. Goh noted that the increased amount of online transactions during the pandemic’s quarantine restrictions had generated “a ton of data” and that the take-up of AI would be a “key differentiator between winners and losers”.

Google Cloud general manager for AI and industry solutions Andrew Moore said the pandemic had shown that “emergency AI is a real thing”, but had also stretched the “digital divide” between banks that had extensively adopted the technology and those that had not. “Those behind are now more behind, and those ahead are now further ahead.” He said successful use of “emergency AI” depended upon having a platform and processes already in place.

Bank of England senior fintech specialist Mohammed Gharbawi said that although most banks’ machine learning models had adapted well to the circumstances of the pandemic, those tasked with predicting human behaviour – including those related to anti-money laundering and financial crime – had not performed so well.

“That’s to be expected – these are not the conditions the models have been trained on, because we’re obviously in very strange circumstances,” he explained. He said the pandemic showed “lessons to be learned” in terms of how models were stress tested. “We need to understand and monitor when models go wrong and have appropriate remediation steps.”
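One common way to “monitor when models go wrong” in the sense Gharbawi describes is to check whether the data a model now sees has drifted away from the conditions it was trained on. The sketch below is illustrative and not drawn from the article or any bank’s actual practice: it computes the Population Stability Index (PSI), a widely used drift statistic, where values above roughly 0.25 are conventionally treated as a signal that a model may need review or retraining.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-era sample and a
    current sample of one feature. Higher values mean more drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch current values below the training range
    edges[-1] = float("inf")   # ...and above it

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # floor each fraction at a tiny value to avoid log(0)
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Toy data: training-era transaction amounts vs a shifted pandemic-era
# distribution, standing in for the "very strange circumstances" above.
baseline = [100 + i % 50 for i in range(500)]
current = [300 + i % 50 for i in range(500)]  # distribution has shifted
print(f"PSI: {psi(baseline, current):.3f}")   # large value -> drift alarm
```

In practice a bank would run a check like this per feature on a schedule, with the remediation steps Gharbawi mentions (investigation, recalibration, fallback rules) triggered when the statistic crosses an agreed threshold.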

Jain also said senior management needed to familiarise themselves with the models their banks used. “It’s non-negotiable that any model you’re using is transparent, always accompanied by an explanation of how it uses the data,” she said. “It’s necessary that whenever management adopts a model in any of its processes it is well-documented how the model actually works.”

Goh said the increased amount of data being accumulated by financial AI processes meant there needed to be a greater focus on data hygiene. But she also urged regulators to “walk alongside and not ahead” of developments, and make it easier to safely share data between jurisdictions and between entities – something Mutton said had recently become a focus for the US Treasury department.

Moore also cautioned banks not to become “too solipsistic” and to be open to lessons from other industries. “I often see one industry making mistakes another solved years earlier,” he observed.

Mutton agreed, pointing out that the “tendency of us in finance to assume finance is in the lead” was not always borne out by the facts.

He noted that during the preparation of the Bank of England’s 2019 Future of Finance report, “we challenged ourselves to think of the last time the financial sector had successfully incubated a technology. We couldn’t think of one in recent history”.

“New thinking” needed as regulators face up to AI

Nearly one-third of banks using AI do not have regular reviews to identify AI-related risks, according to the 66-page report released by the Hong Kong Academy of Finance – a joint project between local financial regulators including the HKMA and Securities and Futures Commission (SFC) – on 21 August.

The report disclosed the findings of a survey of 168 banks – 27 retail and 141 non-retail – the latter mainly comprising mainland Chinese banks and the Hong Kong branches of foreign banks.

The report found that 80% of the banks surveyed planned to increase their investment in artificial intelligence (AI). But it also said only 68% of the banks using AI had regular reviews to identify AI-related risks, and only 70% had clear procedures to address defects in their machine learning models.

It also flagged banks’ concerns over a shortage of talent to develop complex AI and the potential for increased AI usage to expose them to cyber attacks.

The report added that the “generally less stringent” regulation of fintech companies might create new risks, and that banks might adopt more risk-taking behaviour to keep up with their fintech competitors.

It also suggested that broader AI use might aggravate the financial system’s “too big to fail” problem, as the economies of scale associated with AI adoption might encourage concentration in the industry. “Such new forms of systemic risk may require new types of macroprudential regulation”, it suggested.

This would include bank regulators strengthening their co-operation with their counterparts responsible for non-financial companies engaged in fintech, including competition, consumer protection and data privacy regulators. “One difficulty is that countries weigh these three considerations quite differently,” it remarked.

The report advised bank supervisors to establish mechanisms to exchange data and intelligence with other regulators, citing the HKMA’s own May 2019 circular on the use of personal data in fintech development as an example.

It said regulators may also need to “develop new thinking” when assessing AI-related threats to financial stability, including skilling up in areas such as data science and programming.

The academy recommended a framework for “robust governance” of banks’ AI risks, including data risk management, privacy and security, and data warehousing.

For more on this story go to: GBRR
