Is artificial intelligence the right technology for risk management?

Risk professionals are becoming increasingly dependent on AI to minimise risks and maximise the benefits of risk management. AI is already widely employed to spot behaviours and patterns that could indicate money laundering or fraud and, in recent years, to recognise faces to verify that customers are real; its use in managing institutional risk, however, has been relatively limited. The advent of AI chatbots such as ChatGPT, which use natural language processing to respond to user requests and generate text, could revolutionise the way banks and financial services firms manage risk.

Many experts anticipate that, over the next decade, AI will be used throughout the sector's risk management: evaluating new types of risk, identifying strategies to mitigate them, and improving the efficiency and automation of risk management work. "The genie is out of the bottle," says Andrew Schwartz, an analyst at Celent, a research and advisory company specialising in technology used by financial institutions. By his estimate, more than half of the biggest banks are already using AI to manage risk.

Growth market

Chatty, so-called "generative" AI technologies, such as OpenAI's ChatGPT and Google's Bard, can analyse the immense amounts of information found in corporate documents, regulatory filings, market news, price quotes and social media. They could help improve existing methods of assessing credit risk, or support more complicated and realistic "stress testing" exercises that simulate how a bank would handle market or financial volatility, according to Schwartz. "You just have more information and, with more information, there could be a deeper and theoretically better understanding of risk." Sudhir Pai, chief technology and innovation officer for financial services at Capgemini, says a few banks are in the early stages of using generative AI as a risk management assistant. These assistants collect market and investment data and advise on ways to minimise risk. "[An] AI assistant for a risk manager would allow them to gain new insights on risk in a fraction of the time," he says.

Banks rarely discuss their direct use of generative AI for risk management, but Schwartz believes they will need to consider the challenge of verifying the accuracy of the information fed into an AI system, as well as weeding out incorrect information. Initially, larger firms may focus on testing generative AI in areas of risk management where conventional AI is already widely used, such as security and crime detection, according to Maria Teresa Tejada, a specialist in regulation, risk and finance at the consultancy Bain & Co. Generative AI could be a "game changer" for financial institutions, Tejada says, because of its capacity to analyse massive quantities of structured data, such as spreadsheets, and unstructured data, such as legal contracts and call transcripts. "Now, banks can better manage risks in real time," she says.

SteelEye, which makes compliance software for financial institutions, has tested ChatGPT with five of its clients. It designed nine "prompts" asking ChatGPT to analyse clients' text communications for compliance purposes. SteelEye copied the text of clients' emails, WhatsApp messages and Bloomberg chats into ChatGPT to see whether it would identify suspicious messages that could then be reported to the authorities for further investigation. The tool was directed to look for possible evidence of insider trading. Matt Smith, SteelEye's chief executive, says ChatGPT was effective at analysing communications and flagging suspicious ones for further scrutiny by compliance and risk management experts. "Something that could take compliance professionals hours to sift through could take [ChatGPT] minutes or seconds," he says. On accuracy and bias, some have expressed concern that ChatGPT draws its data from a wide variety of sources, including Twitter and Reddit, which can supply incorrect information and even breach security. Smith's answer to this concern is that ChatGPT only assists compliance officers, who make the final decision on what action to take based on the information.
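To make the workflow concrete, here is a minimal sketch of this kind of prompt-based screening. The prompt wording, message data, and the `ask_model` callable are all illustrative assumptions, not SteelEye's actual prompts or architecture; in practice the callable would wrap a real chat-model API, and its output would only ever be advisory input for a human compliance officer.

```python
# Hypothetical sketch of prompt-based compliance screening.
# The prompt text and helper names are assumptions for illustration only.

INSIDER_TRADING_PROMPT = (
    "You are a compliance analyst. Review the following messages and say "
    "whether any could indicate insider trading. For each flagged message, "
    "quote it and explain the red flag in one sentence.\n\nMessages:\n{messages}"
)

def build_screening_prompt(messages):
    """Format raw communications (email, WhatsApp, Bloomberg chat) into a
    single numbered prompt for a chat model."""
    numbered = "\n".join(
        f"{i + 1}. [{source}] {text}" for i, (source, text) in enumerate(messages)
    )
    return INSIDER_TRADING_PROMPT.format(messages=numbered)

def screen_messages(messages, ask_model):
    """ask_model is any callable that sends a prompt string to an LLM and
    returns its reply; the reply is advisory only, and a human compliance
    officer makes the final call."""
    return ask_model(build_screening_prompt(messages))

# Example with a stubbed model call (no API key needed):
msgs = [
    ("email", "Board approves the merger Friday, keep quiet until then"),
    ("whatsapp", "Lunch at 1?"),
]
print(screen_messages(msgs, ask_model=lambda p: "Flagged message 1: non-public merger news"))
```

Swapping the stub for a real API client changes nothing else in the sketch, which is the design point: the screening logic stays a thin layer around prompt construction, so the human-review step Smith describes remains the decision-making stage.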


But a central concern is whether generative AI is the right option for banks' tightly monitored and highly sensitive risk management departments, where the details, as well as complicated mathematical models, need to be thoroughly examined. "ChatGPT is not the answer for risk management," says Moutusi Sau, an analyst covering financial services at Gartner, a research and consulting firm. Another concern, raised by the European Risk Management Council, is that the intricacy of ChatGPT and other AI techniques could make it harder for financial services firms to understand the decision-making of their own systems.

Systems whose results cannot be explained are commonly referred to as "black boxes" in AI parlance. Both those who develop AI for risk management and those who use it must understand the assumptions, weak points and limits of its results, according to the council's advice. A further issue is that the approach to regulating AI differs from country to country. In the US, White House officials recently met technology executives to discuss how AI might be used, with a view to developing guidelines. The EU and China have already drawn up regulations for AI applications. In the UK, the competition watchdog has begun a study of the AI market.

Discussion of AI regulation has so far focused on individual privacy rights and protection against discrimination. A different approach may be needed to regulate AI in risk management, to ensure that basic principles are translated into specific guidance for risk specialists. "My sense is that regulators will work with what they've got," says Zayed Al-Jamil, a member of the technology group at the law firm Clifford Chance. "They won't say AI is prohibited for risk management or have a prescriptive nature . . . I believe that they'll amend existing regulations in order to incorporate AI," he adds. Despite these regulatory worries and fears about AI's capacity to control risk in financial markets, many experts expect its use to become more common, helping to control risk through automated data analysis. Schwartz of Celent is "bullish" about the possibilities of AI in the finance sector. "In the medium term, I think we will see a huge amount of growth in what [AI tools] are able to do," he says.

Source:- https://www.ft.com/content/ca4e6538-00fe-4c75-b664-90b4b4079863


