What if everything you thought about going digital was all a lie – or not the complete truth?
Insights from Forbrukerrådet (the Norwegian Consumer Council) suggest that may be the case when it comes to digital technology. Its whitepaper provides a gloomy yet frank view of the impact of generative AI (Gen AI) on users. As AI and related technologies continue to evolve, so will their ease of use.
We previously discussed keeping your AI enemies close. Commentary in the digital space indicates that adopters need to be cautious when using Gen AI, especially in areas like the public sector, which handles vast amounts of sensitive information.
Where’s the boundary when it comes to using Gen AI tools in everyday work?
Gen AI’s harmful impact on consumers
‘There is a lack of interest in the tech industry in calculating carbon emissions generated by generative AI.’
For so long, we’ve been told that going digital is the more sustainable option for long-winded or paper-based tasks, such as admin and other repetitive work. However, some processes using Gen AI can be just as unsustainable: the whitepaper argues that the environmental impact of Gen AI makes it ‘more of a problem than a solution to issues such as climate change, water shortages, and high energy consumption.’ Additionally, AI uses more energy than other forms of computing.
Because Gen AI learns from its training data and past interactions, it develops an almost conversational way of communicating with users. This creates a risk of false or inaccurate information: the technology cannot reliably tell the difference between factual and incorrect content, yet it has the ability to make either sound convincing.
The way forward?
The concern over where data is stored when using tools like ChatGPT applies to Gen AI more broadly: ‘Consumers must have a “right to be forgotten” to have personal data deleted from generative AI models.’
Companies need to regulate how they use AI, setting boundaries for ‘how the technology is trained, developed, deployed, and used.’ As the pace of innovation picks up, it will only be a matter of time before governments intervene to ensure its use is safe, human-centric and non-intrusive, especially where sensitive data is handled.