Seattle startup Glacis brings longtime Microsoft leader aboard to target AI’s biggest blind spot
Glacis, a Seattle startup building tamper-proof records of AI behavior, has hired longtime Microsoft Azure exec Rohit Tatachar as co-founder and CTO and is launching new open-source tools for monitoring and controlling AI agents. Read More

As a veteran engineer and product leader inside Microsoft Azure, Rohit Tatachar saw that many companies were building AI systems they couldn’t fully monitor or control in production.
In his new role at a Seattle startup, he’s doing something about it.
Tatachar is now co-founder and CTO of Glacis, which builds tamper-proof records of AI behavior — what CEO Joe Braidwood has called a “flight recorder for enterprise AI.” His arrival comes as Glacis launches new open-source tools for monitoring and controlling AI agents.
Glacis, first covered by GeekWire in November 2025, was started by Braidwood and Dr. Jennifer Shannon, a psychiatrist and adjunct professor at the University of Washington.
The company grew out of a difficult lesson: Braidwood’s previous startup, Yara, an AI-powered mental health tool, had to be shut down after he realized the models drifted from their intended behavior during extended conversations with vulnerable users.
After he wrote about the shutdown on LinkedIn, regulators, clinicians, engineers and insurance executives reached out with the same observation: when AI systems make decisions, nobody can independently verify whether the safety controls actually worked.
That was the spark for Glacis.
How it works: The startup’s core product, called Arbiter, sits in the path of every AI inference call and creates a signed record of the input, the safety checks that ran and the final output.
The record can’t be altered after the fact. At scale, a system that Glacis calls the Witness Network notarizes those records into an auditable trail.
Customers can choose to run the system in “shadow mode,” observing without intervening, or in enforcement mode, where it actively constrains the AI’s behavior.

Shannon, Glacis’ chief medical officer, said the stakes are especially high in healthcare. As a practicing child psychiatrist, she has seen AI-powered ambient scribes hallucinate content in her clinical notes, including fabricating medication prescriptions she never made.
“I would like to be able to go back and see every step of how that AI model made that decision,” she said. “If there’s no infrastructure for that, who is liable? Nobody’s going to sue AI. It’s me.”
The underlying challenge: Tatachar worked at Microsoft across two stints spanning nearly 19 years, most recently as a principal product manager on the Microsoft Foundry team, its platform for building and deploying enterprise AI applications and agents.
He said he saw companies building tools and running proofs of concept but struggling to move AI into production because they couldn’t explain or verify what their systems were doing.
There are three dimensions to the problem, he said: the baseline state of a customer’s infrastructure, model behavior, and what’s known as “intent drift,” where a system behaves differently than what a customer intended, even if the underlying model is functioning normally.
Glacis monitors deployments across all three. “It’s only when you converge these three that a customer has a real view of what actually happened,” Tatachar said.
New releases: Glacis is releasing auto-redteam, an open-source tool that automatically attacks AI systems across a range of vulnerability categories, then generates fixes and verifies their effectiveness.
The company is also publishing OVERT 1.0, a standard for what it calls “observable verification evidence for runtime trust,” intended to give organizations a framework for building provable AI safety into their operations.
The launches come at a volatile moment for AI agent security. OpenClaw, an open-source AI agent framework, has attracted hundreds of thousands of developers since its debut in late 2025, but its rapid adoption has outpaced its security architecture.
Major cybersecurity firms including CrowdStrike and Cisco have published analyses warning of security vulnerabilities in the framework. Braidwood said this shows the need for infrastructure that can enforce safety controls at runtime, not just test them before deployment.
Target market: The company is focusing on customers in healthcare, fintech and insurance.
It signed two pilot deals out of the JP Morgan healthcare conference earlier this year, with three more in the pipeline. Braidwood said the company sees healthcare as its entry point, but considers the problem ultimately universal to any deployment of AI.
A new development this week: Glacis is also opening a waitlist for a $49-per-month starter plan covering red teaming, enforcement and cryptographic attestation for up to 10,000 AI events per month. A $499 pro tier covers up to 100,000 events.
Braidwood said the move is a deliberate shift toward making the technology accessible beyond the regulated enterprises and design partners the company has worked with so far.
Broader landscape: AI observability and security is a booming market, with well-funded startups and big companies offering runtime monitoring and guardrails for enterprise AI.
Braidwood said Glacis differentiates itself through its focus on cryptographic provability — not just detecting problems but producing tamper-proof evidence that safety controls ran, which he said could help companies negotiate insurance coverage and satisfy regulators.
Funding: Glacis has raised $575,000 from a group of investors that includes Geoff Ralston’s Safe Artificial Intelligence Fund, Mighty Capital, Sourdough Ventures and the AI2 Incubator.
It is also part of Cloudflare’s Launchpad program and Plug and Play’s third Seattle accelerator cohort. Braidwood said the company hopes to close a seed round later this year.
Team: Glacis has five employees, including the three co-founders and two engineers.
Tatachar said the company’s sixth “employee” will be an AI agent tasked with handling SOC 2 compliance work through Vanta. The team writes its core cryptographic code in Rust and uses Claude, Codex, and ChatGPT across its workflow.
“We’ve got a 100-person company,” Braidwood joked. “Five of them are real, and the rest are in the cloud or on the desk.”
Share
What's Your Reaction?
Like
0
Dislike
0
Love
0
Funny
0
Angry
0
Sad
0
Wow
0
