Microsoft yanks some cloud and AI access from Israel’s defense ministry over civilian-spying concerns

Microsoft is pulling the plug on parts of its cloud and AI services for Israel’s Ministry of Defense, saying it found the tech was being used in ways that violate company rules against mass surveillance of civilians.
The move follows an August investigation by the Guardian, working with +972 Magazine and Local Call, that described a sweeping phone-dragnet run by Israel’s Unit 8200. According to former personnel quoted in that reporting, recordings of millions of calls from Palestinians in Gaza and the West Bank were hoovered up and parked on Microsoft’s Azure cloud in Europe, where intelligence officers could replay conversations and feed the results into targeting decisions for raids and airstrikes. Sources said the data trove — measured in thousands of terabytes — was sitting in a Netherlands data center and, after the story published, appeared to be shifted out of the country.
On Thursday, Microsoft president Brad Smith said an internal review “found evidence that supports elements of the Guardian’s reporting,” explicitly referencing Israel’s consumption of Azure storage in the Netherlands and use of Microsoft AI services. In plain English: what the company could see from its own telemetry lined up with parts of the media account. Because Microsoft’s terms of service forbid using its tech for mass surveillance of civilians, Smith said the company has “ceased and disabled” specified subscriptions and services for the ministry, including certain cloud storage and AI tools.
The company is trying to thread a needle here. Smith stressed that Microsoft doesn’t rummage through customer content during investigations — so it can’t just open the hood and play captured calls — while also thanking the Guardian for surfacing details Microsoft couldn’t get because of those privacy commitments. He added that the clampdown won’t interrupt Microsoft’s broader cybersecurity work that protects Israel and other Middle Eastern partners, including those tied to the Abraham Accords. So this isn’t a blanket boycott; it’s a targeted shutdown meant to enforce a specific rule.
It’s also a turnabout in tone. Back in May, responding to earlier questions about the military’s use of Azure during the Gaza war, Microsoft said it had found “no evidence” its software had been used to harm people or that Israel’s defense ministry had breached Microsoft’s policies or AI code of conduct. Since then, the company brought in outside help — Washington law firm Covington & Burling and a technical consultant — to review communications and financial records tied to its work in Israel. Smith says that wider review is still underway, but Thursday’s action suggests Microsoft saw enough to act now rather than wait for a final report.
None of this will end the broader surveillance debate. Intelligence sources told the Guardian that Unit 8200 planned to migrate data from Microsoft to Amazon Web Services; neither AWS nor the Israel Defense Forces answered requests for comment, and CBS says it didn’t hear back either. An Israeli official, speaking anonymously to AP, downplayed Microsoft’s decision, saying it would do “no damage” to operational capabilities. Inside Microsoft, the fight over these contracts has been raw: the company fired several employees after protests over its ties to the Israeli military, while others resigned. Activists who’ve organized under the banner “No Azure for Apartheid” called Thursday’s step an “unprecedented win” but argued it touches only a sliver of the overall relationship.
To understand why the cloud matters here, think about what Azure provides. Beyond raw storage, there are off-the-shelf AI services — speech-to-text, translation, entity extraction — that make it easy to turn a messy ocean of intercepted audio into searchable, analyzable data. The Guardian’s sources described analysts scanning not just a target’s calls but the conversations of people nearby to map patterns in dense urban areas before a strike. Microsoft’s rules are designed to head that off: the company’s acceptable-use policy prohibits using its services for mass civilian surveillance, regardless of who is doing the surveilling.
The controversy also plugs into a longer, uglier story about how Palestinians are monitored. Rights groups have documented Israel’s use of facial recognition systems such as Red Wolf at checkpoints in Hebron and East Jerusalem and the “Wolf Pack” database that ties together addresses, family links and watch-list flags. Israeli spyware firms — most infamously NSO Group, maker of Pegasus — have sold surveillance tools around the world. And while those technologies are distinct from Azure and its AI add-ons, they form the backdrop that makes the cloud angle so combustible: a years-long expansion of digital policing, now supercharged by artificial intelligence and hyperscale compute.
There’s also a European subplot that won’t go away. Privacy advocates have been leaning on Brussels to rethink data flows to Israel, arguing that surveillance practices are incompatible with EU privacy law. The European Commission renewed Israel’s “adequacy” decision last year, basically certifying that Israeli safeguards are on par with GDPR. Microsoft’s statement — pointing to Azure storage in the Netherlands and an AI layer on top — gives those campaigners fresh ammunition to demand another look at how and where sensitive datasets move.
So what actually changes on the ground? That’s the hard part. Microsoft says it has cut specific subscriptions and disabled certain services for the defense ministry. It hasn’t named them, detailed scopes, or said whether other agencies or contractors could still reach similar capabilities through different accounts. If Israel shifts workloads to another cloud provider — or to sovereign infrastructure — much of the underlying activity could continue. If it doesn’t, the loss of turnkey speech and language processing, plus convenient European storage, would force workarounds that slow and complicate surveillance pipelines. Either way, a clear line just got drawn by one of the world’s most powerful tech companies: use our platforms to spy on civilians at scale, and we’ll shut you off.
The geopolitical calendar keeps spinning in the background. The Gaza war has ground on for nearly two years, with international scrutiny intensifying over civilian harm and the use of AI in targeting. In Washington and European capitals, patience for opaque “trust us” answers is thin. And inside Big Tech, workers are pushing harder for bright-line rules about how their tools are used in war.
Microsoft’s message lands somewhere between moral stance and legal housekeeping: we won’t break customer privacy to investigate, but if credible evidence shows our tech is being used in ways our contract forbids, we’ll act — and we just did. The next chapter hinges on two questions no blog post can resolve: where the data goes next, and whether policymakers finally redraw the boundaries for surveillance in a world where the cloud is the battlefield too.
With input from Politico, CBS News, and Al Jazeera.