By Anders Leander, AI Advisor, Columbus
Nearly half of Swedish office workers are using unauthorised AI tools at work. The phenomenon, called “Shadow AI,” is creating a paradox for businesses.
To reverse this trend, companies need approved and accessible tools, a living policy, proper training, shared ways of working, and incentives that reinvest the saved time.
Real-world consequences
“Shadow AI” refers to employees using their own AI tools at work without approval, oversight, or common working practices. And it has consequences. Samsung, for example, was forced to ban ChatGPT in 2023 after engineers repeatedly leaked source code and confidential data.
While employees gain immediate benefits such as faster drafts, clearer analysis, and better presentations, companies are not always reaping those benefits and are also facing growing security risks.
When employees use their own AI tools without guidance, Swedish companies risk data leaks and quality issues. The time saved for the individual isn’t captured in improved processes, metrics, or results for the company.
A Swedish tech report reveals that 46 per cent of office workers use AI solutions their companies don’t know about.
Across Europe, the picture is similar. 39 per cent of EMEA employees use free AI tools at work, while only 23 per cent use employer-provided options.
Why the secrecy? Employees cite fears of automation replacing jobs, concerns about increased workloads, or simply wanting a competitive edge over colleagues.
Time to reverse the trend
Company leaders have every reason to act now to get “Shadow AI” under control. The problem is growing with the number of available AI solutions and their increasing power.
Even approved AI tools can create quality issues. But with clear policies, employee training, and shared ways of working, organisations can better identify, prevent, and manage these challenges.
The EU’s AI Act, effective from August 2025, adds regulatory urgency. High-risk AI now requires data governance and human oversight, requirements that are impossible to meet when data flows into unauthorised services.
However, 60 per cent of organisations can’t even identify where “Shadow AI” is happening within their walls.
The solution isn’t surveillance. It’s replacement. Companies must offer approved AI platforms that match or exceed what employees find elsewhere.
Policies must evolve quarterly to keep pace with AI development. And critically, performance targets must adjust for AI-supported work, or the efficiency gains vanish into “longer lunches and personal errands.”
Time is running short. The problem grows with every new AI tool launched.
How to address the challenges
To address the challenges of Shadow AI, leaders shouldn’t focus on tracking or blocking every unauthorised tool. The technology changes too fast. Instead, think long-term. Provide secure, approved alternatives that employees want to use.
I recommend the following set of measures:
1. Approved, reliable, and accessible AI tools are essential to ensure data is handled securely without leaks. Use a leading platform; otherwise, employees will turn to their own AI tools again.
2. Keep a clear, regularly updated policy on how and for what AI should be used. AI evolves fast. Review the policy often, as what was impossible last quarter may be possible now.
3. Establish shared ways of working with AI. Verify sources, use review checklists, and watch for common errors or hallucinations.
4. Train employees to use AI correctly and understand its limits and potential. Combined with shared tools and routines, this reduces risks of data leaks, misuse, and “workslop.”
5. Measure and reward reinvestment of saved time. Adjust quality and performance expectations for tasks supported by AI. Otherwise, the benefits are lost. Review progress regularly and stay transparent about efficiency gains.
Key takeaways
- 46% of Swedish office workers use unauthorised AI tools at work.
- “Shadow AI” boosts individual efficiency but increases data leak and compliance risks.
- The EU AI Act adds urgency for governance and oversight.
- Solutions include approved tools, updated policies, training, and shared AI practices.
- Businesses should reinvest saved time through incentives and transparent performance targets.