
Shadow AI: how to discover Artificial Intelligence tools used outside company policies

Visibility into AI usage is now a security and governance requirement

The AI Act requires companies to know which artificial intelligence systems they use and to demonstrate control over their use, but in reality, AI often enters companies without visibility and outside of policies. In this article, we analyze why monitoring the actual use of AI is so complex and how to bridge the gap between written rules and what really happens.

Artificial intelligence is now part of everyday work: generative chatbots such as ChatGPT, Copilot, or Claude are used to write texts or analyze documents, development support tools assist technical teams in writing code, and AI features integrated into SaaS applications automate tasks in areas such as CRM, collaboration, or HR. These technologies are adopted quickly and often spontaneously; the problem is not AI itself, but what happens when it is used without visibility and without real governance. It is in this context that there is increasing talk of Shadow AI, i.e., artificial intelligence tools used outside of company policies, without IT approval or awareness.

Many organizations have already defined rules on the use of AI, establishing, for example, that only approved solutions may be used, preferably in enterprise versions with adequate data processing guarantees; that access is via corporate accounts and single sign-on systems to ensure traceability; that use on personal devices is prohibited or limited to non-productive contexts; and that AI features integrated into SaaS applications are enabled only after a security and compliance assessment. In addition, there is often an obligation to report the introduction of new artificial intelligence tools or capabilities to IT.

In daily practice, however, verifying compliance with these policies is complex: remote working, the use of personal devices, and browser-based access make it extremely easy for users to circumvent traditional controls. The result is a misalignment between formal rules and actual use, and it is precisely in this space that Shadow AI finds fertile ground, making it clear that visibility is the real prerequisite for effective governance.
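To make the comparison between written rules and actual usage concrete, policies like these can be expressed in machine-readable form and checked against observed usage. The following is a minimal sketch in Python; the `AIToolPolicy` structure, the tool entries, and the `evaluate` helper are illustrative assumptions, not the data model of any specific product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolPolicy:
    """One rule of a hypothetical AI usage policy."""
    tool: str               # product name as it appears in discovery data
    approved: bool          # explicitly approved by IT
    enterprise_only: bool   # only the enterprise tier with data processing guarantees is allowed
    sso_required: bool      # access must go through corporate single sign-on

# Illustrative policy entries, not a real catalog
POLICY = {
    "chatgpt": AIToolPolicy("chatgpt", approved=True, enterprise_only=True, sso_required=True),
    "claude":  AIToolPolicy("claude",  approved=True, enterprise_only=True, sso_required=True),
    "copilot": AIToolPolicy("copilot", approved=True, enterprise_only=False, sso_required=True),
}

def evaluate(tool: str, tier: str, via_sso: bool) -> str:
    """Return a rough policy verdict for one observed usage event."""
    rule = POLICY.get(tool.lower())
    if rule is None or not rule.approved:
        return "out-of-policy: tool not approved"
    if rule.enterprise_only and tier != "enterprise":
        return "out-of-policy: non-enterprise tier"
    if rule.sso_required and not via_sso:
        return "out-of-policy: personal account / no SSO"
    return "compliant"

print(evaluate("chatgpt", tier="free", via_sso=False))
# -> out-of-policy: non-enterprise tier
```

The hard part, of course, is not writing the rules but obtaining the usage events to evaluate against them, which is exactly where Shadow AI escapes traditional controls.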

The real risks to security and compliance 

The uncontrolled use of artificial intelligence tools exposes companies to concrete security risks that go far beyond a simple violation of internal policy. When users employ generative chatbots or external AI tools without oversight, they often transfer potentially sensitive company data—such as customer information, financial data, or intellectual property—to external services that are not covered by the organization's protection and monitoring mechanisms.

This expands the attack surface: attackers can exploit vulnerabilities in unmanaged services, unmonitored integrations, or weak credentials used to access these tools. At the same time, the absence of consistent governance makes it difficult to detect, contain, and respond to security incidents, extending the time an attack can spread before it is detected.

Without adequate access controls, strong authentication, network segmentation, and continuous monitoring, organizations are vulnerable to both accidental data breaches, such as unintentionally entering confidential information into an AI prompt, and sophisticated threats that leverage AI to automate attacks, generate targeted phishing, or bypass security controls.

This gap between innovation and governance is not hypothetical: IBM's Cost of a Data Breach Report 2025 shows concrete data on this phenomenon, highlighting how the uncontrolled use of AI increases the cost and impact of breaches.

According to the report, for organizations with high levels of "shadow AI", breach costs are on average about $670,000 higher than for those with low or no use of these tools, and a large majority (97%) of AI-related breaches occur in companies without the necessary access controls.

These figures highlight that shadow AI is not an abstract risk, but a real factor that contributes both to increased data breach costs and to the expansion of risk surfaces, making governance and visibility of AI technology use within organizations crucial. 

Why a multi-source approach is needed to truly see AI 

IT struggles to track AI usage because these tools no longer follow traditional software adoption models. Many AI solutions do not require installation, do not go through application catalogs, and do not involve formal onboarding processes: they live in the browser, are accessible in a few clicks, and are often used in “freemium” mode or through individual subscriptions. In other cases, AI is already integrated into existing SaaS applications and is activated as an additional feature, without the organization being fully aware of it. In both scenarios, adoption occurs easily outside of traditional IT processes.

In this context, relying on a single discovery method—such as contract analysis or installed asset analysis—is no longer sufficient. Each individual observation point provides a partial view and inevitably leaves some areas unclear. A multi-source approach stems precisely from the need to cross-reference different signals in order to reconstruct actual usage.

Identity and access analysis allows you to understand which applications users log into and how often, especially when corporate accounts or single sign-on systems are used. Monitoring browser usage and network traffic, on the other hand, allows you to identify AI tools used without formal installations or integrations, often through personal accounts. These elements are complemented by financial data, which is useful for identifying individual subscriptions, fragmented spending, and SaaS or AI purchases that do not go through procurement processes, while integrations with major SaaS platforms allow you to verify which applications are actually in use and which features—including artificial intelligence—are active.
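As an illustration of how these signals can be cross-referenced, the sketch below joins hypothetical single sign-on logins, web proxy records, and expense lines per user and tool. The field names and the small catalog of AI domains are assumptions made for the example, not the data model of any real discovery product.

```python
from collections import defaultdict

# Hypothetical catalog mapping domains to AI tools (assumption for the example)
AI_DOMAINS = {
    "chat.openai.com": "chatgpt",
    "claude.ai": "claude",
    "gemini.google.com": "gemini",
}

# Signals from three different sources, normally exported from SSO, proxy, and finance systems
sso_logins = [{"user": "anna", "app": "chatgpt"}]                             # identity and access
proxy_logs = [{"user": "marco", "domain": "claude.ai"},                       # browser / network traffic
              {"user": "anna", "domain": "chat.openai.com"}]
expenses   = [{"user": "marco", "merchant": "Anthropic", "tool": "claude"}]   # financial data

def discover_ai_usage():
    """Merge the three signal sources into a per-user, per-tool usage map."""
    usage = defaultdict(set)  # (user, tool) -> set of sources that observed it
    for e in sso_logins:
        usage[(e["user"], e["app"])].add("sso")
    for e in proxy_logs:
        tool = AI_DOMAINS.get(e["domain"])
        if tool:
            usage[(e["user"], tool)].add("network")
    for e in expenses:
        usage[(e["user"], e["tool"])].add("finance")
    return usage

for (user, tool), sources in discover_ai_usage().items():
    # A tool seen on the network or in expenses but never behind corporate SSO
    # is the typical shadow AI candidate
    flag = "shadow AI candidate" if "sso" not in sources else "managed access"
    print(f"{user:6} {tool:8} {sorted(sources)} -> {flag}")
```

The value is in the join: any single source on its own would have missed either the personal account seen only in traffic or the individual subscription seen only in expenses.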

Only by combining all these sources is it possible to obtain a consistent and reliable overview. The multi-source approach eliminates blind spots, allows you to distinguish between authorized and unauthorized tools, identifies off-policy usage, and reveals risks and expenses that would otherwise remain invisible. In other words, it is not a question of controlling more, but of observing better, creating the basis for truly effective SaaS and AI governance.

Governing AI means understanding the context, not controlling content 

Governing artificial intelligence does not mean controlling the content generated or monitoring what users enter into systems, but understanding the context in which risk arises and being able to demonstrate control over the use of AI technologies. Knowing which tools are in use, whether they are authorized, who is using them, and how often allows organizations to take proactive action before misconduct turns into a security or compliance issue.
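As a minimal illustration of that context, the sketch below summarizes a list of observed usage events by tool, authorization status, number of distinct users, and frequency; the event fields and the approved list are hypothetical, chosen only to show the shape of the output.

```python
from collections import Counter, defaultdict

APPROVED_TOOLS = {"copilot"}  # hypothetical list of IT-approved tools

# Observed usage events, e.g. produced by multi-source discovery (fields are assumptions)
events = [
    {"user": "anna",  "tool": "chatgpt"},
    {"user": "anna",  "tool": "chatgpt"},
    {"user": "marco", "tool": "claude"},
    {"user": "sara",  "tool": "copilot"},
]

usage_count = Counter(e["tool"] for e in events)
users_by_tool = defaultdict(set)
for e in events:
    users_by_tool[e["tool"]].add(e["user"])

# Governance context: which tools, whether authorized, how many people, how often
print(f"{'tool':10} {'authorized':10} {'users':6} {'events':6}")
for tool, count in usage_count.most_common():
    authorized = "yes" if tool in APPROVED_TOOLS else "no"
    print(f"{tool:10} {authorized:10} {len(users_by_tool[tool]):6} {count:6}")
```

A report of this kind says nothing about prompt content, yet it is enough to decide where to intervene with training, policy, or access changes.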

This is exactly the kind of approach the European AI Act calls for: it requires companies to be aware of the artificial intelligence systems they use, assess their risk, and demonstrate the existence of governance, control, and accountability mechanisms.

Well-known cases, such as that of Samsung, where employees entered confidential code and information into generative AI tools, or that of a US Department of Defense official who used public AI services for work activities, clearly show that the problem is often linked to the human factor rather than to the technology. In situations like these, effective governance would not necessarily have prevented the error, but it would have made it clear where to intervene, in terms of training, policy, and access levels, and, above all, it would have allowed the organization to demonstrate that the use of AI was tracked and governed.

In the event of an audit, incident, or regulatory review, this ability to demonstrate control and accountability is what distinguishes a manageable human error from a structural governance deficiency.

Our point of view  

At WEGG, we support companies as Software Asset Management and SaaS Management consultants, aware of how the rapid proliferation of artificial intelligence tools makes software management increasingly complex and critical from a security, compliance, and governance perspective. We have also discussed this here.

That's why we rely on Flexera's SaaS Management technologies, which take a multi-source discovery approach that cross-references data on identities and access, network traffic, integrations with major vendors, financial information, and browser usage, revealing even tools and features used without IT approval. All these signals are then normalized and enriched through Technopedia, Flexera's technology database that allows for the correct recognition of technologies, vendors, and risk levels, eliminating duplications and blind spots.

On this basis, we help organizations build continuous visibility into SaaS and AI, comparing actual usage with company policies to reduce risk and waste without stifling innovation. At the same time, we support the verification of terms of use, licensing aspects, data management, security, and regulatory compliance—including the AI Act—to ensure that every SaaS and artificial intelligence tool is used in an informed, compliant, and governed manner.


Do you need support in monitoring AI usage in your company?

Contact us at [email protected] for a consultation!