By Catherine Ngo, content writer, presenter and podcaster

Amid the growing interest in artificial intelligence (AI) and its applications, the Australian government's Digital Transformation Agency (DTA) has taken a significant step by issuing guidance on AI training and responsible AI use policies for government workers. This framework can be valuable for private-sector employers seeking to establish a structured approach within their organisations.

Public servants will receive comprehensive training on the fundamentals of AI and its applications. This initiative aims to ensure that government agencies are equipped to leverage AI technologies effectively and ethically.

A statement from the DTA strongly recommends that all staff, regardless of their roles, undergo AI fundamentals training. Given the uncertainty and hype surrounding AI, this move is essential.

Many individuals perceive AI as a mysterious and complex technology, leading to misconceptions about its capabilities and potential risks. The risk profile associated with AI can vary significantly depending on the organisation or industry.

This initiative by the DTA demonstrates the government's commitment to responsible AI adoption and its recognition that public servants need a sound understanding of AI to use these technologies effectively while mitigating the associated risks.

Exploring the potential of generative AI

Generative AI is essentially an advanced statistical engine that produces outputs based on probability. In other words, a large language model predicts the next word in a sentence based on the likelihood that it is the most suitable word to add.

Due to this probabilistic nature, large language models like ChatGPT can generate different outputs for the same input. While this variability is advantageous for creative work and idea generation, it limits reliability in areas such as financial advice or medical diagnosis, where accuracy is paramount.
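To make this concrete, the following is a minimal, illustrative Python sketch of next-word sampling. The word list and probabilities are invented for demonstration and are not drawn from any real model; a genuine language model estimates these probabilities from vast amounts of training text.

```python
import random

# Toy next-word probabilities for the prompt "The quarterly report was".
# Values are invented for illustration only.
next_word_probs = {
    "strong": 0.40,
    "delayed": 0.25,
    "inconclusive": 0.20,
    "surprising": 0.15,
}

def sample_next_word(probs):
    """Pick the next word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The quarterly report was"
for _ in range(3):
    # The same prompt can yield a different continuation on each run,
    # which is why identical questions to a chatbot rarely give identical answers.
    print(prompt, sample_next_word(next_word_probs))
```

Because the next word is sampled rather than chosen deterministically, running this three times can print three different sentences, which is exactly the behaviour described above.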

Leaders need to take responsibility for implementing AI training programs and for developing a thorough understanding of how AI works, whether or not they initiated its use.

Privacy and confidentiality with AI

In AI training, businesses must prioritise privacy protection and risk mitigation to safeguard employee and customer information. It is crucial to keep data secure and accessible only to authorised personnel.

Leaders should acknowledge that while AI offers advantages, there is a potential risk of generating inaccurate or misleading information. 

Accuracy verification is essential to prevent the spreading of false information, which could lead to legal consequences and reputational damage.

AI usage in HR

Within internal HR functions, AI-powered tools are becoming increasingly prevalent, particularly in performance management, hiring, promotion, and termination decisions. These applications offer significant advantages, especially when screening large numbers of resumes, saving time and resources.

However, despite their intended purpose of eliminating bias, AI tools can produce biased outcomes because it is difficult to account for all the correlations within large datasets. Moreover, AI often reinforces established patterns, creating self-fulfilling prophecies. For example, selecting candidates based on their similarity to current successful employees may limit diversity and stifle innovation in the workforce: the algorithm is trained to identify traits resembling those of current staff, so it favours candidates with similar characteristics, as the sketch below illustrates.
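The following is a simplified, hypothetical sketch of that mechanism. If a screening tool scores candidates by how closely they resemble the existing workforce, candidates who differ from that profile will consistently rank lower, even when the differences are irrelevant to the job. The attributes and data are invented purely for illustration.

```python
# Hypothetical example: ranking candidates by similarity to current staff.
# Attributes and values are invented for illustration only.
current_staff = [
    {"degree": "commerce", "university_group": "A", "hobby": "golf"},
    {"degree": "commerce", "university_group": "A", "hobby": "golf"},
    {"degree": "finance", "university_group": "A", "hobby": "tennis"},
]

candidates = [
    {"name": "Candidate 1", "degree": "commerce", "university_group": "A", "hobby": "golf"},
    {"name": "Candidate 2", "degree": "engineering", "university_group": "B", "hobby": "chess"},
]

def similarity_to_staff(candidate):
    """Average number of attributes shared with existing staff members."""
    shared = [
        sum(candidate[key] == member[key] for key in member)
        for member in current_staff
    ]
    return sum(shared) / len(current_staff)

# Candidates most like the current workforce float to the top,
# reinforcing the existing profile rather than broadening it.
for c in sorted(candidates, key=similarity_to_staff, reverse=True):
    print(c["name"], round(similarity_to_staff(c), 2))
```

In this toy example the candidate who mirrors the existing staff profile always outranks the one who does not, regardless of actual capability, which is how a pattern-matching tool can quietly narrow a workforce over time.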

The need to test and learn

Some organisations have successfully tested ways of prompting AI tools to produce reliable outputs, leading to time savings and improved report quality for clients. However, this evidence is limited to individual studies, and caution should be exercised when interpreting the results. Comprehensive, long-term evidence on the optimal collaboration between humans and AI is still lacking.

The performance of AI also depends on the task at hand. In banking and insurance, AI has been used for fraud detection for several decades with positive results. Generative AI differs, however, because it creates new content rather than simply providing static or descriptive recommendations.