Why Anthropic put a pharma CEO on its safety board

By Oscar Espinoza, Alvento — 21 April 2026 — 6 min read

A pharmaceutical capsule resting on a circuit board with a single green glow, illustrating the governance crossover between pharma and AI.

Anthropic appointed the former CEO of Novartis to its Long-Term Benefit Trust last week. Most of the coverage read this as pharma customer acquisition. That is the shallow read. The interesting one sits underneath, and it has implications for any organisation operating under regulatory scrutiny.

The move

Vas Narasimhan spent nearly a decade running Novartis, one of the largest pharmaceutical companies in the world, and before that led its global drug development. His career has been built around the FDA, the EMA, the PMDA, and every other drug regulator of consequence. Anthropic is a major AI developer that has positioned itself, repeatedly, as the most safety-conscious of the large players.

The Long-Term Benefit Trust is not a commercial board seat. It is the body that governs Anthropic's safety mission. That distinction is the key to reading the appointment correctly.

Three signals

Regulated industries are where AI is heading

Two decades of working with the FDA and EMA does not transfer to commercial strategy. It transfers to operating under intense regulatory scrutiny. The EU AI Act is already in force, with its obligations for high-risk systems phasing in. Individual US states are layering their own AI laws on top. Longer term, a drug-approval-style pathway for advanced AI systems is no longer a fringe idea. Anthropic is staffing up for that world before it arrives, not after.

Life sciences is a flagship vertical, not a customer segment

Anthropic has been investing heavily in biomedical work: protein design, drug discovery, clinical reasoning. This appointment plants a flag. Regulated healthcare is the place the company wants its technology taken most seriously. That framing shapes investment priorities, capability choices, and the sectors that will see genuine engineering attention rather than generic enterprise sales.

Tech boards need counterweights

Pharmaceutical governance is built around clinical safety, adverse-event reporting, post-market surveillance, and decades of accumulated institutional trust. Silicon Valley defaults are the opposite — speed, iteration, and shipping before the regulatory framework catches up. Importing pharma-style governance at board level is a deliberate cultural hedge, and a credibility signal to policymakers, hospitals, and scientific bodies making procurement decisions right now.

What regulated-sector leaders should actually do about this

If you run a registry, a professional body, a medtech organisation, or any institution whose reputation is staked on rigour, the useful question is not what this means for Anthropic. It is what it means for how you should approach AI over the next twelve months.

The bigger picture

The next phase of AI competition will not be won on raw capability. It will be won on whether the technology can genuinely be trusted inside regulated, high-stakes industries. Anthropic has made the first serious governance move in that direction. Others will follow. Regulated sectors that engage early will be the ones that set the terms.

Building AI capability in a regulated sector? Alvento works with registries, professional bodies, and medtech organisations that want to build AI capability the right way, without compromising the standards their reputation depends on. Book a diagnostic at alvento.ltd or email hello@alvento.ltd — first conversation is free.