Security leaders and CISOs are discovering that a growing swarm of shadow AI apps has been compromising their networks, in some cases for over a year.
They're not the tradecraft of typical attackers. They are the work of otherwise trustworthy employees creating AI apps without IT and security department oversight or approval, apps designed to do everything from automating reports that were manually created in the past to using generative AI (genAI) to streamline marketing automation, visualization and advanced data analysis. Powered by the company's proprietary data, shadow AI apps are training public domain models with private data.
What's shadow AI, and why is it growing?
The wide assortment of AI apps and tools created in this way rarely, if ever, have guardrails in place. Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage.
It's the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. "I see this every week," Vineet Arora, CTO at WinWire, recently told VentureBeat. "Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore."
"We see 50 new AI apps a day, and we've already cataloged over 12,000," said Itamar Golan, CEO and cofounder of Prompt Security, during a recent interview with VentureBeat. "Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models."
The majority of employees creating shadow AI apps aren't acting maliciously or trying to harm a company. They're grappling with growing amounts of increasingly complex work, chronic time shortages, and tighter deadlines.
As Golan puts it, "It's like doping in the Tour de France. People want an edge without realizing the long-term consequences."
A virtual tsunami no one saw coming
"You can't stop a tsunami, but you can build a boat," Golan told VentureBeat. "Pretending AI doesn't exist doesn't protect you; it leaves you blindsided." For example, Golan says, one security head of a New York financial firm believed fewer than 10 AI tools were in use. A 10-day audit uncovered 65 unauthorized solutions, most with no formal licensing.
Arora agreed, saying, "The data confirms that once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth. That reduces both risk and friction." Arora and Golan emphasized to VentureBeat how quickly the number of shadow AI apps they are discovering in their customers' companies is increasing.
Further supporting their claims are the results of a recent Software AG survey, which found that 75% of knowledge workers already use AI tools and 46% say they won't give them up even if prohibited by their employer. The majority of shadow AI apps rely on OpenAI's ChatGPT and Google Gemini.
Since 2023, ChatGPT has allowed users to create customized bots in minutes. VentureBeat learned that a typical manager responsible for sales, market, and pricing forecasting has, on average, 22 different customized bots in ChatGPT today.
It's understandable how shadow AI is proliferating when 73.8% of ChatGPT accounts are non-corporate ones that lack the security and privacy controls of more secure implementations. The percentage is even higher for Gemini (94.4%). In a Salesforce survey, more than half (55%) of global employees surveyed admitted to using unapproved AI tools at work.
"It's not a single leap you can patch," Golan explains. "It's an ever-growing wave of features launched outside IT's oversight." The thousands of embedded AI features across mainstream SaaS products are being modified to train on, store and leak corporate data without anyone in IT or security knowing.
Shadow AI is slowly dismantling businesses' security perimeters, and many organizations don't notice because they're blind to the groundswell of shadow AI use inside them.
Why shadow AI is so dangerous
"If you paste source code or financial data, it effectively lives inside that model," Golan warned. Arora and Golan find that employees default to shadow AI apps for a wide variety of complex tasks, training public models on proprietary data in the process.
Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It's especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which "could dwarf even the GDPR in fines," and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools.
Thereโs also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms arenโt designed to detect and stop.
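One reason traditional DLP falls short is that prompt injection hides in ordinary-looking text rather than in file signatures or network patterns. The sketch below shows the shape of the problem with a naive keyword heuristic; the patterns and function name are illustrative assumptions, and real detection requires far more than regular expressions.

```python
import re

# Naive phrase patterns often associated with prompt-injection attempts.
# Illustrative only: real detectors use classifiers and context, not regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrase."""
    return any(p.search(prompt) for p in INJECTION_PATTERNS)
```

Even this toy version illustrates why endpoint tools miss these attacks: the "payload" is plain natural language, so defenses have to inspect prompt content itself, not just files or destinations.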
Illuminating shadow AI: Aroraโs blueprint for holistic oversight and secure innovation
Arora is discovering entire business units that are using AI-driven SaaS tools under the radar. With independent budget authority for multiple line-of-business teams, business units are deploying AI quickly and often without security sign-off.
"Suddenly, you have dozens of little-known AI apps processing corporate data without a single compliance or risk review," Arora told VentureBeat.
Key insights from Aroraโs blueprint include the following:
- Shadow AI thrives because existing IT and security frameworks aren't designed to detect it. Traditional IT frameworks let shadow AI thrive because they lack the visibility into compliance and governance needed to keep a business secure. "Most of the traditional IT management tools and processes lack comprehensive visibility and control over AI apps," Arora observes.
- The goal: enabling innovation without losing control. Arora is quick to point out that employees aren't intentionally malicious. They're just facing chronic time shortages, growing workloads and tighter deadlines. AI is proving to be an exceptional catalyst for innovation and shouldn't be banned outright. "It's crucial for organizations to define strategies with robust security while enabling employees to use AI technologies effectively," Arora explains. "Total bans often drive AI use underground, which only magnifies the risks."
- Making the case for centralized AI governance. "Centralized AI governance, like other IT governance practices, is key to managing the sprawl of shadow AI apps," he recommends. He's seen business units adopt AI-driven SaaS tools "without a single compliance or risk review." Unifying oversight helps prevent unknown apps from quietly leaking sensitive data.
- Continuously fine-tune detecting, monitoring and managing shadow AI. The biggest challenge is uncovering hidden apps. Arora adds that detecting them involves network traffic monitoring, data flow analysis, software asset management, requisitions, and even manual audits.
- Balancing flexibility and security continually. No one wants to stifle innovation. "Providing safe AI options ensures people aren't tempted to sneak around. You can't kill AI adoption, but you can channel it securely," Arora notes.
Start pursuing a seven-part strategy for shadow AI governance
Arora and Golan advise their customers who discover shadow AI apps proliferating across their networks and workforces to follow these seven guidelines for shadow AI governance:
Conduct a formal shadow AI audit. Establish a beginning baseline thatโs based on a comprehensive AI audit. Use proxy analysis, network monitoring, and inventories to root out unauthorized AI usage.
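The proxy-analysis part of such an audit can be as simple as counting requests to known AI service endpoints. Below is a minimal sketch; the log format (`<user> <url>` per line) and the domain list are assumptions, and a real audit would draw on a maintained catalog of AI services rather than a hard-coded set.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical set of AI-service hostnames to flag. A production audit would
# use a vendor-maintained, regularly updated catalog instead.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com"}

def audit_proxy_log(lines):
    """Count requests to known AI services, keyed by hostname.

    Each line is assumed to be '<user> <url>'; adapt to your proxy's format.
    """
    hits = Counter()
    for line in lines:
        _user, url = line.split(maxsplit=1)
        host = urlparse(url).hostname
        if host in AI_DOMAINS:
            hits[host] += 1
    return hits
```

Running a scan like this over even a few days of proxy logs gives the baseline the audit step calls for, and repeated runs show whether governance measures are actually reducing unsanctioned use.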
Create an Office of Responsible AI. Centralize policy-making, vendor reviews and risk assessments across IT, security, legal and compliance. Arora has seen this approach work with his customers. He notes that creating this office also needs to include strong AI governance frameworks and training of employees on potential data leaks. A pre-approved AI catalog and strong data governance will ensure employees work with secure, sanctioned solutions.
Deploy AI-aware security controls. Traditional tools miss text-based exploits. Adopt AI-focused DLP, real-time monitoring, and automation that flags suspicious prompts.
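An AI-focused DLP check differs from classic DLP mainly in where it sits: it inspects outbound prompt text before it reaches a model. The sketch below flags a few secret formats in a prompt; the pattern names are illustrative assumptions, and real engines combine such patterns with classifiers and context.

```python
import re

# Example patterns for data that should never leave the network in a prompt.
# Illustrative subset only; real DLP engines cover many more data types.
SECRET_PATTERNS = {
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of secret types detected in an outbound prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]
```

A gateway that runs a check like this on every prompt can block or redact the request before proprietary data ever reaches a public model, which is precisely the control most endpoint tools lack.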
Set up centralized AI inventory and catalog. A vetted list of approved AI tools reduces the lure of ad-hoc services, and when IT and security take the initiative to update the list frequently, the motivation to create shadow AI apps is lessened. The key to this approach is staying alert and being responsive to usersโ needs for secure advanced AI tools.
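A centralized catalog can also be enforced programmatically, gating each tool by the data classification it is approved to handle. The schema below (entry fields, tool IDs, classification labels) is an assumed example, not a standard; it only sketches how a catalog check might work.

```python
from dataclasses import dataclass

# Illustrative catalog entry; field names and classification labels are
# assumptions for this sketch, not an established schema.
@dataclass(frozen=True)
class CatalogEntry:
    name: str
    vendor: str
    approved_for: frozenset  # data classifications the tool may handle

CATALOG = {
    "chatgpt-enterprise": CatalogEntry(
        "ChatGPT Enterprise", "OpenAI", frozenset({"public", "internal"})),
    "m365-copilot": CatalogEntry(
        "Microsoft 365 Copilot", "Microsoft",
        frozenset({"public", "internal", "confidential"})),
}

def is_permitted(tool_id: str, data_class: str) -> bool:
    """A tool is permitted only if cataloged and approved for the data class."""
    entry = CATALOG.get(tool_id)
    return entry is not None and data_class in entry.approved_for
```

The key design point is the default-deny: any tool not in the catalog fails the check, which is what makes keeping the list current and responsive to users' needs so important.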
Mandate employee training that provides examples of why shadow AI is harmful to any business. "Policy is worthless if employees don't understand it," Arora says. Educate staff on safe AI use and potential data mishandling risks.
Integrate with governance, risk and compliance (GRC) and risk management. Arora and Golan emphasize that AI oversight must link to GRC processes, which is crucial for regulated sectors.
Realize that blanket bans fail, and find new ways to deliver legitimate AI apps fast. Golan is quick to point out that blanket bans never work and ironically lead to even greater shadow AI app creation and use. Arora advises his customers to provide enterprise-safe AI options (e.g. Microsoft 365 Copilot, ChatGPT Enterprise) with clear guidelines for responsible use.
Unlocking AIโs benefits securely
By combining a centralized AI governance strategy, user training and proactive monitoring, organizations can harness genAI's potential without sacrificing compliance or security. Arora's final takeaway is this: "A single central management solution, backed by consistent policies, is crucial. You'll empower innovation while safeguarding corporate data, and that's the best of both worlds." Shadow AI is here to stay. Rather than block it outright, forward-thinking leaders focus on enabling secure productivity so employees can leverage AI's transformative power on their terms.
source: https://venturebeat.com/security/shadow-ai-unapproved-ai-apps-compromising-security-what-you-can-do-about-it/


