As industries across the board embrace the transformative power of generative artificial intelligence (gen AI) in their operations, the accounting profession is no exception. In July, a world-leading accounting firm committed at least US$2 billion to the development of its cloud and gen AI tools. In October, another leading firm launched an AI chatbot claimed to have capabilities on par with those of an industry veteran.
Although gen AI promises to reshape the way we work and unlock new growth potential, is it completely risk-free?
First, what is gen AI? According to ChatGPT, one of the better-known applications, it is “a subset of artificial intelligence that learns from data and uses it to create new content, such as images, text, or music, that is similar but not identical to the data it was trained on.” Gen AI has wide-ranging uses. For example, it can “read” documents, turning invoices into expense entries; it can analyse voluminous documents such as annual reports and financial statements, spotting patterns overlooked by humans, such as irregularities in an expense account; and it can aid analytics, enhancing cash flow management by predicting payment behaviour. It can also produce context-specific reports, including tax agreements and responses to customer enquiries. These capabilities promise an array of potential benefits for the accounting profession: higher efficiency, improved accuracy, lower costs, better decision-making, higher customer satisfaction and bolstered compliance.
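To make the invoice-to-expense example concrete, the sketch below shows, in Python, how an accounting workflow might prompt a gen AI model to turn raw invoice text into a structured expense record. The `call_gen_ai` function is a hypothetical placeholder for whichever gen AI service an organization uses; the prompt wording, field names and canned response are illustrative assumptions rather than any particular vendor’s API.

```python
import json

def call_gen_ai(prompt: str) -> str:
    """Hypothetical placeholder for a call to a gen AI service.

    A real implementation would send the prompt to the organization's
    chosen model; here a canned JSON reply is returned so the sketch runs.
    """
    return json.dumps({
        "vendor": "ABC Stationery Ltd",
        "date": "2023-11-01",
        "amount": 1250.00,
        "currency": "HKD",
        "expense_category": "Office supplies",
    })

def invoice_to_expense(invoice_text: str) -> dict:
    """Ask the model to 'read' an invoice and return a structured expense entry."""
    prompt = (
        "Extract the vendor, date, amount, currency and a suggested expense "
        "category from the invoice below. Reply in JSON only.\n\n" + invoice_text
    )
    return json.loads(call_gen_ai(prompt))

if __name__ == "__main__":
    sample_invoice = "ABC Stationery Ltd - Invoice dated 1 Nov 2023 - Total: HKD 1,250.00"
    print(invoice_to_expense(sample_invoice))
```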
Although gen AI is fast revolutionizing accounting practices, it is worthwhile to address its privacy and ethical challenges. To analyse the privacy risks involved, we may refer to the Data Protection Principles (DPPs) in the Personal Data (Privacy) Ordinance that cover the entire lifecycle of the handling of personal data.
The first risk relates to data collection. Training a gen AI model requires a vast amount of data. If the original purpose of collecting such data does not include AI model training, the principles governing collection and transparency (i.e. DPPs 1 and 5), which require the collection of personal data in a fair manner and on an informed basis, may have been circumvented.
Interactions with gen AI tools pose another risk. User inputs, such as client enquiries, may contain sensitive information like identity card numbers. If this data is repurposed for gen AI training, the use limitation principle (DPP 3) might be violated.
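One common safeguard against this risk is to screen and mask sensitive identifiers before user inputs ever reach a gen AI tool. The Python sketch below illustrates the idea with a deliberately simplified pattern for Hong Kong identity card numbers; the regular expression and function name are illustrative assumptions, and production screening would need to cover far more categories of personal data.

```python
import re

# Simplified pattern for Hong Kong identity card numbers, e.g. "A123456(7)":
# one or two letters, six digits, and a check digit (0-9 or A) in parentheses.
# This is an illustrative assumption; real screening would be broader.
HKID_PATTERN = re.compile(r"\b[A-Za-z]{1,2}\d{6}\(?[0-9Aa]\)?")

def redact_hkid(user_input: str) -> str:
    """Mask identity card numbers before the text is logged or sent to a gen AI tool."""
    return HKID_PATTERN.sub("[REDACTED HKID]", user_input)

if __name__ == "__main__":
    enquiry = "Client A123456(7) asked about the tax treatment of her bonus."
    print(redact_hkid(enquiry))
    # -> "Client [REDACTED HKID] asked about the tax treatment of her bonus."
```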
The third privacy risk concerns whether users can, in practice, access and correct their personal data (DPP 6) and whether the retention of data can feasibly be controlled (DPP 2), given the sheer volume of training data.
The fourth risk relates to data security (DPP 4). As gen AI systems store vast amounts of user dialogue and are susceptible to “jailbreaking,” the consequences of leaking client information cannot be overstated.
Lastly, gen AI poses ethical risks. It is challenging to interpret gen AI’s decisions, as the breadth of training data makes tracing outputs back to specific inputs difficult. Furthermore, gen AI models may generate inaccurate statements, especially when the training dataset is biased or contains discriminatory information.
To foster an environment where AI can evolve and operate while respecting privacy and data protection, regulations and laws have been proposed or implemented. The former includes the European Union’s proposed Artificial Intelligence Act, while the latter includes the Mainland’s Interim Measures for the Management of Generative Artificial Intelligence Services, which is the world’s first regulation tailor-made for AI-generated content.
Apart from regulations, governments and regulators have issued guidance and recommendations on AI deployment and development. At the time of writing, the Mainland had just introduced the Global AI Governance Initiative, which outlines the country’s proposals on AI governance. Locally, in August 2021, the Office of the Privacy Commissioner for Personal Data (PCPD) published the Guidance on the Ethical Development and Use of Artificial Intelligence (the guidance).
By outlining a framework for deploying AI while respecting privacy and minimizing ethical risks, the guidance aims to facilitate the safe, healthy and ethical development and use of AI. To achieve this, it proposes three sets of recommendations, comprising (a) three data stewardship values (being respectful, beneficial, and fair); (b) seven ethical principles (accountability; human oversight; transparency and interpretability; data privacy; fairness; beneficial AI; and reliability, robustness and security); and (c) a four-step practice guide covering the establishment of an internal governance structure, comprehensive risk assessments, execution of AI model development and system management, and robust communication with stakeholders.
In addition, we recommend that AI developers or users adopt a personal data privacy management programme, which is in essence a management framework for the responsible collection, holding, processing, and use of personal data. The programme, which highlights organizational commitment, proper programme controls, and ongoing assessment and revision, would help to enhance data security for organizations and allow them to manage AI-related privacy risks.
Accountants, whether they are professional accountants in business or in practice, will have to understand, and cope with, the revolutionary changes that gen AI brings to the profession. Very often, accountants are the ones liaising with the information technology team, or supervising or auditing it, to ensure the operation and security of their organizations’ information systems. It is important, therefore, for us to engage in a dialogue to rise to the challenges, and to build a stronger profession that is also technology-savvy.