Security

Vercel Got Breached Because an Employee Installed an AI App. This Is the New Attack Surface.


Kevin — Adjacentnode

April 21, 2026 · 7 min read

It wasn't a zero-day. It wasn't nation-state malware. An employee installed an AI productivity tool and handed over access. This is happening everywhere and most security teams aren't ready for it.

Vercel disclosed a security incident that came down to an employee installing an AI-powered application. Not a phishing link, not a zero-day exploit, not sophisticated malware. An AI app.

This is worth paying attention to because it isn't going to be the last time.

What Happened

The details Vercel made public describe an employee installing a third-party AI productivity tool that had been granted broad OAuth permissions. That kind of access (read your email, access your calendar, view files in your connected cloud storage) is standard for AI apps these days. You click through the permissions screen because the app needs that access to work and you want the feature.

The problem is that when you grant those permissions, you're not just trusting the app. You're trusting the company that built it, everyone who works there, their security practices, their third-party dependencies, and anyone who might compromise any of those things down the line.

In Vercel's case, that trust chain had a weak link.

This Is a Supply Chain Attack, Just a New Flavor

Supply chain attacks aren't new. SolarWinds was a supply chain attack. The 3CX breach was a supply chain attack. The idea is that instead of breaking into a well-defended target directly, you compromise something that target already trusts and uses.

AI apps are a new and particularly effective vector for this because the permission grants are enormous and they're normalized. When you install a new AI writing tool and it asks to read your emails, that doesn't feel alarming. That's just how these things work.

But "read your emails" in an OAuth context often means read all your emails, including ones with credentials, internal links, MFA codes, and confidential communications. "Access your calendar" can expose meeting links, which expose video conference rooms, which are often where sensitive conversations happen. "Access your Google Drive" can mean access to internal documentation, credentials stored in notes, infrastructure diagrams, and more.
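To make the scope problem concrete, here's a minimal sketch of what a scope review might look like. The scope strings are real Google OAuth scopes, but the risk descriptions and the narrower alternatives suggested are illustrative judgment calls, not an official classification:

```python
# Sketch: flag broad OAuth scopes in an app's permission request.
# Scope URLs are real Google OAuth scopes; the risk notes and
# suggested alternatives are illustrative, not official guidance.
BROAD_SCOPES = {
    "https://mail.google.com/":
        "full read/write/delete access to all mail",
    "https://www.googleapis.com/auth/gmail.readonly":
        "read every email, including credentials, links, and MFA codes",
    "https://www.googleapis.com/auth/drive":
        "read/write access to every file in Drive",
    "https://www.googleapis.com/auth/calendar":
        "read/write access to all events, including meeting links",
}

NARROWER_ALTERNATIVES = {
    "https://www.googleapis.com/auth/drive":
        "https://www.googleapis.com/auth/drive.file "
        "(only files the app itself creates or the user opens with it)",
    "https://www.googleapis.com/auth/calendar":
        "https://www.googleapis.com/auth/calendar.readonly",
}

def review_scopes(requested: list[str]) -> list[str]:
    """Return a human-readable warning for each broad scope requested."""
    warnings = []
    for scope in requested:
        if scope in BROAD_SCOPES:
            msg = f"{scope}: {BROAD_SCOPES[scope]}"
            if scope in NARROWER_ALTERNATIVES:
                msg += f" -- consider {NARROWER_ALTERNATIVES[scope]}"
            warnings.append(msg)
    return warnings
```

The useful habit here is comparing what the app asks for against the narrowest scope that would still make the feature work; the gap between the two is the unnecessary blast radius.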

Security teams spent years locking down which software employees could install. AI apps blew that perimeter open again because everyone wants to use them and blocking them entirely is a losing battle.

The Specific Risk With AI Apps

There are a few things that make AI tools more dangerous than a typical third-party app:

The permissions are broad by design. AI tools need context to be useful. Context means access. The more access they have, the more useful they are. The business incentive pushes toward maximum permissions.

The market is flooded with newcomers. There are hundreds of AI productivity apps right now, many built by small teams, many venture-backed and moving fast. Fast-moving teams often cut corners on security. That's not an accusation; it's a reality of how early-stage software companies operate.

The review process doesn't match the risk. A lot of organizations still review third-party software access the same way they did in 2018. You fill out a form, someone in IT approves the OAuth grant, and the tool goes live. That process wasn't designed for apps that have read/write access to your entire communications history.

What You Should Actually Do About This

At the individual level: read the permission screen before you click through. If a todo app is asking for access to your email and contacts, that's worth asking why. Revoke permissions from apps you no longer use. Google and Microsoft both have pages where you can see every app that has access to your account. Go look at yours.

At the organizational level: inventory your OAuth grants. Most companies have no idea how many third-party apps have active access to their infrastructure. That list will be longer than expected and some of those apps haven't been actively maintained in years.
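Once you have that inventory exported, a first-pass audit can be mechanical. The sketch below assumes you can get grants as records with an app name, scopes, and a last-used date; those field names are hypothetical and would need to be adapted to whatever your identity provider's export (for example, a Google Workspace token report or a Microsoft Graph oauth2PermissionGrants dump) actually contains:

```python
from datetime import date, timedelta

# Sketch of a first-pass OAuth-grant audit. The record shape
# (app, scopes, last_used) is hypothetical -- map it to the fields
# your identity provider's export actually provides.
STALE_AFTER = timedelta(days=90)

def flag_grants(grants: list[dict], today: date) -> list[str]:
    """Flag grants that are stale or carry broad mail/drive scopes."""
    findings = []
    for g in grants:
        if today - g["last_used"] > STALE_AFTER:
            findings.append(f"{g['app']}: unused for >90 days, revoke")
        elif any("mail" in s or "drive" in s for s in g["scopes"]):
            findings.append(f"{g['app']}: broad scope, confirm still needed")
    return findings
```

Even a crude pass like this surfaces the two cases that matter most: apps nobody uses anymore that still hold live tokens, and actively used apps whose scopes deserve a second look.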

Build a review process specifically for AI tools. The risk profile is different from a standard SaaS integration. Scope permissions to the minimum the tool actually needs, and run quarterly reviews to confirm that access is still appropriate.

And accept that you can't block all of it. Employees are going to use AI tools. They're going to use their work accounts to log in. The goal isn't zero AI app usage, it's having enough visibility to detect when something goes wrong before the blast radius gets large.

The Uncomfortable Part

Vercel is one of the most technically sophisticated companies in the industry. If it happened there, it can happen anywhere.

The attack surface created by AI tool adoption is real, it's growing, and most organizations are treating it the same way they treated BYOD in 2012: with policies that lag years behind actual behavior. The companies that close that gap first are going to be a lot better positioned when the next incident happens.
