Newsletter
Taiwan's Financial Supervisory Commission announced the "Guidelines for the Use of Artificial Intelligence (AI) in the Financial Industry"
Following the release of the "Core Principles and Related Promotion Policies for the Use of Artificial Intelligence (AI) by the Financial Industry" in October 2023, the Financial Supervisory Commission ("FSC") issued the "Guidelines for the Use of Artificial Intelligence (AI) in the Financial Industry" ("AI Guidelines") on June 20, 2024. The AI Guidelines serve as a reference for financial institutions in the introduction, use, and management of AI and, according to the FSC, are administrative guidance in nature. If the relevant industry associations in the financial industry have established their own self-regulations for the use of AI, such associations may incorporate the key points and measures of the AI Guidelines into those self-regulations; if no such self-regulations are established, financial institutions are advised to follow the AI Guidelines in the introduction, use, and management of AI systems. Below is a summary of certain key points of the AI Guidelines:
General Provisions; Common Matters:
These cover the definitions of AI-related terms, a description of the AI system life cycle, and the risk assessment factors, and also provide the following:
(a) When using AI systems, financial institutions should implement the core principles in a risk-based manner and assess the risk level of each AI system after considering the various risk assessment factors; and
(b) Supervision of third-party vendors: financial institutions should have supervisory measures in place when engaging third parties to introduce AI systems, including establishing appropriate data or system migration mechanisms in the event of termination of the engagement.
Chapter 1: Establish Governance and Accountability Mechanisms:
Financial institutions should have a clear structure and risk management policies for managing AI systems, and should understand each system's purpose, the business or operations to which it applies, and the personnel responsible for it. Internally, financial institutions should be able to clearly explain a system's operational logic; externally, they should be able to communicate their overall policy and the information about individual AI systems that consumers may need to know. In addition, there should be comprehensive procedures for handling errors or unexpected events. For high-risk AI systems, after assessing the system's risks and their internal resources and expertise, financial institutions may, as needed, establish a mechanism for review and evaluation by independent third-party AI experts.
Chapter 2: Emphasize Fairness and Human-Centric Values:
When using AI systems, financial institutions should assess fairness, avoid bias, and prevent discrimination. For instance, if a financial institution uses generative AI developed by a third party and cannot control the training process or ensure the fairness of the data or results, it should still have its personnel objectively and professionally manage and control the risks associated with the output.
Chapter 3: Protect Privacy and Customer Rights:
Financial institutions should protect customer privacy, avoid the risk of data leakage, and follow the principle of data minimization to avoid collecting excessive or unnecessary sensitive information. They should also respect customers' right to choose whether to use AI services, and should determine whether to offer alternative solutions after considering the risks to both customers and the institution as well as the feasibility and cost of such alternatives.
Chapter 4: Ensure System Robustness and Security:
Financial institutions should ensure the robustness and security of AI systems by establishing and implementing cybersecurity measures. When AI systems developed or operated by third parties are used to provide financial services, appropriate risk management and supervision of those third-party vendors are necessary.
Chapter 5: Implement Transparency and Explainability:
Financial institutions should ensure the "transparency" of AI system operations. For example, if needed, financial institutions may proactively disclose information through reports, technical documents, or website postings to inform stakeholders about their AI system practices, thereby enhancing trust in the financial institution's AI systems. As to "explainability", financial institutions should be able to clearly explain how the AI systems—
whether developed in-house or outsourced— operate and the logic behind their predictions or decisions.
Chapter 6: Promote Sustainable Development:
When using AI systems, financial institutions should align their development strategies and implementation with sustainable development principles, such as reducing redundant hardware setups and saving energy. They should also provide appropriate education and training for employees to help them adapt to new work environments and reduce unemployment risk during the digital transformation process.
Lee and Li has been closely monitoring the impact of the development of emerging technologies such as AI on financial institutions and other industries. If you have any questions about the aforementioned content or topics, please feel free to contact us.