Recently, especially since the advent of ChatGPT, artificial intelligence (AI) has become ubiquitous. It is no longer just a buzzword within the tech industry, as we see its application across various regions and industries, including the accounting profession. According to a Thomson Reuters survey conducted earlier this year, 73 percent of the surveyed tax and accounting firms believed that generative AI could be applied to their work. Accounting professionals have started integrating AI to automate daily operations such as summarizing and analyzing data and identifying irregular transactions, thereby reducing costs and optimizing efficiency. The trend of integrating AI into organizations’ workflows seems to be unstoppable.

AI – Friend or foe?


However “magical” AI may appear to be, the bitter truth is that it is a double-edged sword. Organizations must remain vigilant to the risks posed by this technology, particularly privacy risks, to prevent AI from becoming our foe.

Privacy considerations are crucial because data is the lifeblood of AI. Massive amounts of data are typically involved throughout the life cycle of AI, from development, customization and implementation to termination. For example, when accounting firms customize AI models purchased from AI developers or vendors using sensitive data, such as clients’ financial information and personal data, any leakage could be catastrophic. If such data falls into the wrong hands, any subsequent fraud or identity theft will severely tarnish the firms’ reputations and undermine the confidence of their customers. Therefore, having robust data security measures in place and ensuring compliance with the Personal Data (Privacy) Ordinance (PDPO) are vital.

Navigating the regulatory regime in Hong Kong


In Hong Kong, the PDPO, which is a principle-based and technology-neutral piece of legislation, applies to the collection, holding, processing and use of personal data, whether through the use of AI or not.

To help organizations establish AI governance and comply with the requirements of the PDPO, as early as 2021, my office, the Office of the Privacy Commissioner for Personal Data (PCPD), published the Guidance on the Ethical Development and Use of Artificial Intelligence. Organizations are recommended to adopt the internationally well-recognized data stewardship values (i.e. being respectful, beneficial and fair to stakeholders) and ethical principles for AI (i.e. accountability; human oversight; transparency and interpretability; data privacy; fairness; beneficial AI; and reliability, robustness and security) when developing and using AI.

More recently, in June this year, to support the “Global AI Governance Initiative” promulgated by the Mainland and to help ensure AI security, the PCPD published the Artificial Intelligence: Model Personal Data Protection Framework (Model Framework). Targeting organizations that procure, implement and use any type of AI system, including generative AI, and premised on general business processes, the Model Framework covers recommendations and best practices in the areas below.

AI strategy and governance

Active participation by top management is a crucial driving force for the successful implementation of AI strategy and governance. Organizations should first formulate an AI governance strategy that includes a roadmap setting out the directions and purposes for which AI systems may be adopted.

Beyond strategic alignment, when procuring AI systems from third parties, organizations are recommended to consider various governance issues in their supplier management processes, such as key privacy and security obligations or ethical requirements to be conveyed to potential AI suppliers.

Risk assessment and human oversight

Underpinning the Model Framework is a risk-based approach. It is recommended that organizations conduct risk assessments to systematically identify, analyse and evaluate the risks, including privacy risks, involved in the AI life cycle so that corresponding risk mitigation measures, including an appropriate level of human oversight, can be deployed. In use cases that may incur higher risks, organizations should adopt a “human-in-the-loop” approach wherein human actors retain control of the decision-making process.

Customization of AI models and implementation and management of AI systems

When customizing and implementing AI solutions, organizations should minimize the amount of personal data involved. Additionally, organizations should ensure that AI models have undergone rigorous testing and validation before use.
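By way of illustration only, data minimization before customization might look like the following Python sketch, which keeps only the fields an AI task actually needs and redacts direct identifiers from free-text values. The field names and identifier patterns here are hypothetical assumptions for the example, not requirements drawn from the Model Framework.

```python
import re

# Illustrative patterns only: a simplified Hong Kong ID format, e.g. A123456(7),
# and a basic email address pattern.
HKID_PATTERN = re.compile(r"\b[A-Z]{1,2}\d{6}\([0-9A]\)")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def minimize_record(record: dict, needed_fields: set) -> dict:
    """Keep only the fields required for the AI task, redacting
    identifiers that may appear in free-text values."""
    minimized = {}
    for key, value in record.items():
        if key not in needed_fields:
            continue  # drop fields the model does not need
        if isinstance(value, str):
            value = HKID_PATTERN.sub("[REDACTED-ID]", value)
            value = EMAIL_PATTERN.sub("[REDACTED-EMAIL]", value)
        minimized[key] = value
    return minimized

# Example client record (entirely fictitious).
record = {
    "client_name": "Chan Tai Man",
    "notes": "Client A123456(7) queried invoice; contact tm.chan@example.com",
    "revenue": 1200000,
}
print(minimize_record(record, {"notes", "revenue"}))
```

In practice, the appropriate minimization technique (field filtering, redaction, pseudonymization or anonymization) depends on the use case and should follow the organization's own risk assessment.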

The Model Framework also recommends that organizations establish an AI incident response plan to monitor for, and respond to, incidents that may occur.

Communication and engagement with stakeholders

Regular communication with stakeholders is key to building trust. Organizations should encourage feedback from all parties and strive to make their AI-generated output as transparent as possible.

Be the trusted advisors


Accountants have long been the trusted advisors to individuals and organizations. As technology evolves rapidly, it is more important than ever that accountants keep abreast of the latest trends and measures in AI security in order to maintain the trust placed in them.
