Uncontrolled AI use risks data leaks

Nearly half of Swedish workers use unauthorised AI tools, risking data leaks. Companies should replace them with approved platforms, clear policies, and adjusted performance targets.

By Anders Leander, AI Advisor, Columbus

Nearly half of Swedish office workers are using unauthorised AI tools at work. The phenomenon, known as “Shadow AI,” creates a paradox for businesses: individual employees get more done, while their employers carry the risk and capture little of the gain.

To reverse this trend, companies need approved and accessible tools, a living policy, proper training, shared ways of working, and incentives that reinvest the saved time.

Real-world consequences

“Shadow AI” refers to employees using their own AI tools at work without approval, oversight, or common working practices. And it has consequences. A well-known example is Samsung, which banned ChatGPT in 2023 after engineers repeatedly leaked source code and confidential data through the tool.

While employees gain immediate benefits such as faster drafts, clearer analysis, and better presentations, companies do not always capture those gains, and they face growing security risks on top.

When employees use their own AI tools without guidance, Swedish companies risk data leaks and quality issues. The time saved for the individual isn’t captured in improved processes, metrics, or results for the company.

A Swedish tech report reveals that 46 per cent of office workers use AI solutions their companies don’t know about.

Across Europe, the picture is similar: 39 per cent of EMEA employees use free AI tools at work, while only 23 per cent use employer-provided options.

Why the secrecy? Employees cite fears of automation replacing jobs, concerns about increased workloads, or simply wanting a competitive edge over colleagues.

Time to reverse the trend

Company leaders have every reason to act now to get “Shadow AI” under control. The problem grows as AI tools multiply and become more capable.

Even approved AI tools can create quality issues. But with clear policies, employee training, and shared ways of working, organisations can better identify, prevent, and manage these challenges.

The EU’s AI Act, effective from August 2025, adds regulatory urgency. High-risk AI now requires data governance and human oversight, requirements that are impossible to meet when data flows into unauthorised services.

However, 60 per cent of organisations can’t even identify where “Shadow AI” is happening within their walls.

The solution isn’t surveillance. It’s replacement. Companies must offer approved AI platforms that match or exceed what employees find elsewhere.

Policies must be revisited quarterly to keep pace with AI development. And critically, performance targets must be adjusted for AI-supported work, or the efficiency gains vanish into “longer lunches and personal errands.”

Time is running short. The problem grows with every new AI tool launched.

