SJ Price, a technology law expert presenting at the Australian Institute of Health and Safety's 2024 National Health and Safety Conference, warned about the risks associated with AI in safety programs but suggested that OHS professionals are well equipped to handle them.
Machine learning AI models are increasingly used in safety applications, such as detecting fatigue and monitoring infrastructure integrity. However, Price highlighted concerns including inaccuracy, intrusive monitoring and surveillance, over-reliance, and a lack of explainability in AI systems.
She emphasised the importance of OHS professionals managing AI risks, as they possess skills in hazard identification, risk assessment, and implementing control mechanisms.
Price underscored the need for a positive AI culture within organisations and encouraged training to empower individuals to ask critical questions about AI risks.
Importantly, organisations deploying AI are accountable for its actions, making systematic controls and monitoring crucial. In our view, it is not just about overseeing AI but about ensuring that its implementation is not inherently hazardous.
See our Intrusive Workplace Surveillance and Algorithmic Management resources for more information.
Source: OHS Alert, 24 May