By
Gaby Grammeno
Contributor
Not so long ago, before mobile phones and flat-screen TVs, before laptops, before the word ‘email’ was even thought of, there were hefty electric typewriters. There were typing pools because managers didn’t know how to type.
There were typesetters assembling metal type into words and lines, and librarians hand-writing book-borrowers’ details on paper records. Almost every job you can think of was done differently, and most without the aid of the tools and equipment available today.
As technology has developed, jobs that failed to change have mostly disappeared. It appears that AI – in conjunction with quantum computing and other factors – has the potential to shift the kaleidoscope around again and generate a whole new configuration of activities and processes. This makes it necessary for business owners, managers, and professionals at all levels to re-think what they are doing and how they’re doing it.
Job insecurity is inevitable, as the old ways become untenable or too expensive. Whether it’s scriptwriters who find their livelihoods under threat or university lecturers wondering how to distinguish students’ own work from AI input in assignments, the psychosocial hazards of rapid change, re-learning, low control, and lack of clarity about role and direction are going to loom large.
New risk management challenges
Change management on this scale calls for anticipation of various possible scenarios, and creative thinking in how to deal with them. The challenge will be to realise the benefits while avoiding the potential adverse effects.
Many benefits to work health and safety have been envisaged, as AI is put to use solving problems and carrying out tasks previously done by people. Safety optimisation systems may improve if AI’s ability to collect and process real-time exposure data yields more detailed, accurate exposure estimates. This could enable better prediction of adverse events. However, managers will still need the time and the will to implement more effective risk control measures.
New threats are also likely. The COVID pandemic gave us recent experience in dealing with new threats, forcing workplaces to adopt new ways of operating and, in some cases, to re-think roles and come up with innovative solutions.
While AI will lead to job losses, new jobs are bound to emerge. Some of the new occupations are hard to imagine as yet – like trying to explain cryptocurrencies, pre-natal gene modification, and biometric recognition systems to soldiers in the First World War. Not to mention trollbots, botslayers, and crowdsourced verification tools.
The challenges and pressures this will put on employers trying to keep up – to be resilient and swim with the tide, not sink – are bound to be a source of anxiety for many business owners and managers, and for their employees, who fear losing their jobs. It follows that mental health issues at work are likely to intensify and to need more active management.
Misinformation and disinformation are already running wild and distorting people’s view of reality, but AI will predictably supercharge the avalanche of deepfakes boosting the persuasiveness of conspiracy theories, scams, hoaxes and cyber threats of all sorts. Misinformation and disinformation were rated the top risk facing the globe in 2024, according to a report prepared for the World Economic Forum earlier this month.
With malicious cyber actors seeking targets, almost a quarter of Australian businesses experienced a cyber security attack in the last year, commonly in the form of email compromise, ransomware and online banking fraud. AI is likely to intensify the onslaught, giving rise to the need for ramped-up cyber hygiene for protection against illegal access, data theft, corruption or other damage. This too could be a source of stress for employers.
The pressures on business personnel – including, for example, competition for fewer jobs as organisations downsize – may well generate in-house friction in some instances, and possibly a rise in aggression, bullying, and harassment.
What it means for employers
Dealing with the impact of AI at work is going to require risk management efforts on several fronts, not least regarding the health, safety and well-being of workers.
Mental health and psychosocial risks can reasonably be expected to become more prevalent, as the pace of technological change accelerates and uncertainty erodes psychological comfort. WHS laws require people conducting a business or undertaking to take steps to actively prevent or minimise psychosocial risks to their workforce.
Assessing the risks and deciding on practical ways of minimising their impact should be done in discussion with staff affected by the issues, as employers have a duty to consult with workers about changes likely to affect people’s health or safety. In the first instance, therefore, talk with staff and maintain good communication about the risks, challenges, and probable health and safety effects.
Guidance in managing psychosocial hazards and mental health issues is available from many sources.
The Code of Practice: Managing psychosocial hazards at work provides practical advice, as do organisations such as Beyond Blue, Lifeline, Head to Health, and many others.