How worried should we be about artificial intelligence?

Author
Herbert Yung, Jason Yau and Jason Ho

Experts chime in on the latest topics in accountancy and business



Herbert Yung, Director, Member Engagement and Sustainability Lead at the Hong Kong Institute of CPAs and an Institute member

Accountants should not necessarily be worried about artificial intelligence (AI), but they should be aware of how it is changing the industry and the skills that will be required to succeed in the future.

AI is gradually changing the way business is done, and consequently the accounting and auditing process. Many tasks that used to consume a lot of manpower, such as data entry, sampling, reconciliation and report generation, can now be done by AI in an instant. While some may worry that their jobs will be replaced by machines, on the bright side, the adoption of AI could enhance the efficiency and accuracy of processes, allowing accountants to focus on more value-adding activities and tasks that require greater professional judgement.

Accounting work can thus become even more interesting than before. Some in the younger generations may consider the work of accountants repetitive and demanding, and the use of technology is a good opportunity to show that the profession can also be innovative and strategic. Going forward, accountants will not only need to know their way around numbers, but also how to use technology to generate insights and identify trends that help companies make better decisions. With these changes, accountants will be better placed to play the role of business advisor rather than record keeper or auditor.

“On the bright side, the adoption of AI could enhance the efficiency and accuracy of processes, allowing accountants to focus on more value-adding activities and tasks that require greater professional judgement.”

Of course, new technology brings new risks that both companies and accountants need to prepare for and manage carefully. For example, exposure to data privacy and cybersecurity risks will increase given the extensive exchange of information. Information integrity could also be a concern, as the emergence of AI has made information harder to validate due to its massive scale and dynamic nature. Therefore, besides leveraging technology to enhance the overall accounting and auditing processes, appropriate safeguards should be put in place to mitigate these negative impacts.

To stay relevant in the face of these opportunities and challenges, accountants should focus on developing skills that are less likely to be automated, such as strategic thinking, problem-solving, and communication. They should also stay up to date with the latest developments in AI and other technologies, so that they can better understand how they can be used to improve accounting and auditing processes and services.

Overall, accountants should view AI as a tool that can help them to be more effective in their work, rather than as a threat to their profession.

Jason Yau, Regional Leader, Asia Pacific at RSM International, and Partner, Technology and Management Consulting at RSM Hong Kong and an Institute member

When I was invited to share my perspective on this topic, the first thing that came to my mind was perhaps I could ask ChatGPT this question and see what it says. This thought would not have been possible just a short time ago.

Indeed, the emergence of generative AI has brought significant changes to our world in just six months, allowing ordinary people like myself to harness the power of AI to gather information and generate ideas. If this exponential growth continues over the next five to 10 years, its impact on all aspects of our society will be even more profound.

By harnessing the power of AI and putting it to good use, we could potentially resolve many of the issues we face today. In our industry, AI has the potential to empower our workforce, making them more efficient and enabling them to deliver higher quality outcomes, ultimately boosting productivity.

During our firm’s recent regional conference, a renowned professor specializing in AI ethics from a prestigious university shared her perspectives on the latest developments of AI. She emphasized the importance of reskilling our professionals to learn to leverage the power of AI, while warning that some jobs may get eliminated during this phase of transformation. Surprisingly, most of the audience maintained an open mind towards AI, and few expressed extreme concern about its power.

“Rather than fear it, we must learn to embrace, adopt, and coexist with this technology. By doing this, we can ensure that its potential is applied morally.”

Our understanding of AI, however, changes drastically when we consider situations outside the corporate context, such as a military environment. AI lacks core elements of human decision-making, such as empathy, emotion and relationships. While AI can offer solutions that appear unbiased, making judgements based exclusively on data and algorithms could have grave implications.

We should take our worries about AI seriously, as no one presently possesses complete expertise in this field. To ensure responsible development, proper governance and oversight must be established over how AI is applied and utilized in different situations. Governments should also act quickly to enact the necessary laws and regulations before the intense AI race among the world’s largest technology companies spirals out of control.

It is important to recognize that AI is already here. And rather than fear it, we must learn to embrace, adopt, and coexist with this technology. By doing this, we can ensure that its potential is applied morally and towards advancing society.

Jason Ho, Financial Services Technology Risk Consulting Partner at EY and an Institute member

In recent years, AI has made tremendous advances. However, the rise of AI-powered attacks also poses growing risks to business, encompassing different kinds of data breaches, malware and ransomware attacks, fraudulent activities and social engineering exploitations. Understanding the diverse nature of these risks is crucial in developing comprehensive strategies that leverage AI-based defence mechanisms and ensure resilience in an evolving threat landscape.

One emerging concern arising from the development of AI is the undermining of identity verification, which is used to prevent fraud and to comply with rules such as eKYC (electronic know your customer) requirements. By using generative adversarial networks, attackers can superimpose existing images, sounds and videos onto source material, producing a high-quality animated face that can pass liveness tests based on facial landmark movements such as eye blinks, mouth movement and head orientation, or even voice.

Furthermore, one of the most alarming risks is hackers’ ability to automate their adversarial operations and conduct large-scale attacks more easily. Hackers can also use AI to swiftly uncover security loopholes and write code for malware capable of infecting different systems. All of this can impair the confidentiality, integrity and availability of sensitive data, intellectual property and financial information.

“AI is very beneficial for automating repetitive operations but we need to clearly understand and evaluate the risks, such as algorithmic bias.”

Financial institutions, therefore, must develop a comprehensive strategy for protecting systems and data from an ever-changing AI threat landscape. For example, an identity verification system should go beyond simple username-and-password authentication or basic facial and voice recognition. Advanced AI systems may be required to detect and prevent deepfake attacks in real time, and this entails putting in place strict access controls, comprehensive testing and frequent system updates.

Furthermore, education and training can help to raise awareness about the risks of AI while also empowering staff members and customers to prevent, detect and report suspicious behaviours.

AI is very beneficial for automating repetitive operations, but we need to clearly understand and evaluate the risks, such as algorithmic bias and data poisoning. Therefore, the ethical development of AI is vital from a regulatory perspective. Various authorities have started to cultivate a healthy environment for AI-related activities. It is advisable to keep an eye on existing and upcoming AI-related laws, such as the European Union’s Artificial Intelligence Act and Mainland China’s 2023 AI regulations.
