The Data Moat That Isn't: Why the Wearable-to-Medical-AI Thesis Has Three Compounding Problems
Oura raised $900M at an $11B valuation. The investor narrative is that consumer wearable data is the foundation for medical-grade AI. There are three problems with that thesis — and they compound.
By Paal Selnaes, Norseman Projects Ltd
Oura just raised $900M at an $11 billion valuation. They are well-run, growing fast, and their CEO is openly talking about predicting hypertension and cardiac risk from ring data. The FDA's 2026 guidance didn't kill that ambition — but it quietly made it significantly more complicated. Here's why the market hasn't fully priced the distinction.
The Investor Narrative
“Hundreds of millions of user-days of continuous biometric data are the foundation for the next generation of medical-grade AI. The data moat is the valuation argument.”
This narrative is consistent across the consumer wearable space — Oura, WHOOP, Garmin, Apple — and it has been for years. It is not without merit. The question is not whether the opportunity exists. It is whether the path from wellness data to clinical AI is as direct as the valuation implies.
There are three problems with the assumption. They compound.
Consumer wearables stay outside FDA device regulation by maintaining wellness-only intended use. No diagnostic claims, no clinical language. That classification enables rapid product iteration and commercial scale. The problem is what happens next.
Intended use follows the data. Routing wellness-classified data into a pipeline that trains a model predicting clinical outcomes raises a question the market has not fully priced: was the device collecting that data always functioning as part of a medical system — regardless of how it was labelled?
Regulatory Precedent — Not Theoretical
The FDA's “inherent use” doctrine — applied in a July 2025 warning letter to WHOOP over its Blood Pressure Insights feature — establishes that the agency will look past intended use labels when the actual application is clinical. This is an enforcement position already on record.
FDA guidance on AI-based medical software requires training data from validated, representative sources. Consumer wearable datasets have a known structural problem: they skew younger, healthier, more affluent, and more health-engaged than the patient populations where clinical predictions would actually be applied.
The data was not collected under clinical protocols. There is no ground-truth validation at scale. A hundred million wellness data points is not a hundred million validated clinical measurements — and the distinction matters enormously when a regulatory submission requires you to demonstrate representativeness and clinical validity.
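To make the representativeness gap concrete, here is a minimal sketch of how one might quantify demographic skew between a wearable user base and a target patient population, using total variation distance over age brackets. The distributions below are entirely illustrative placeholders, not real figures from any vendor.

```python
# Illustrative sketch: quantifying demographic skew between a wearable
# user base and a target patient population. All figures are made up.

# Hypothetical age-bracket shares (each distribution sums to 1.0)
wearable_users = {"18-34": 0.45, "35-49": 0.30, "50-64": 0.18, "65+": 0.07}
patient_population = {"18-34": 0.10, "35-49": 0.20, "50-64": 0.35, "65+": 0.35}

def total_variation(p: dict, q: dict) -> float:
    """Half the L1 distance between two discrete distributions.
    Ranges from 0 (identical) to 1 (fully disjoint)."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

tvd = total_variation(wearable_users, patient_population)
print(f"Total variation distance: {tvd:.2f}")  # prints 0.45
```

A divergence this large would be an immediate red flag in any representativeness analysis: predictions trained on the first distribution are being asked to generalise to a population the training data barely covers.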
Market Signal — The Bridge Is Necessary
The Oura–Dexcom and Samsung–Dexcom partnerships are telling. Both pair consumer wearable data with FDA-cleared medical devices. The bridge to regulated hardware is necessary precisely because the wearable data alone is insufficient for clinical validation purposes. The market is signalling what the regulatory framework requires — but the valuation narrative has not fully absorbed it.
Several companies in this space carry valuations partly premised on a data-to-medical-AI pathway that, on close inspection, requires a separate clinical data collection programme, multiple regulatory submissions, and multi-year timelines to realise. Whether that represents a material fact not fully reflected in asset descriptions is a question acquirers and institutional investors should be asking more directly than they currently are.
⚡ EU AI Act — August 2026
Medical AI is explicitly classified as high-risk under the EU AI Act, effective August 2026. Conformity assessment, transparency documentation, and human oversight requirements apply — and wellness-collected data architectures are not designed to satisfy them. Companies with US-centric data strategies planning EU clinical launches will face this head-on.
How the Three Problems Compound
Classification Trap
Data collected under wellness intended use may not legally support clinical AI training pipelines without triggering device regulation — regardless of volume.
Fitness-for-Purpose Gap
Even where classification risk is managed, the data's demographic skew and absence of clinical-protocol collection means it cannot directly support FDA submissions without substantial supplementary work.
Disclosure Gap
The investment case embeds the clinical AI upside without fully surfacing the regulatory and clinical validation cost — which is substantial, multi-year, and capital-intensive.
The Result
Assets priced as if they are on the clinical pathway, when the actual path requires a separate programme, regulated submissions, and a timeline the current narrative does not account for.
The Actual Path — and Who Has Walked It
The companies furthest along the clinical AI road got there through regulated submissions, not by repurposing wellness data. The pattern is consistent across every major player that has actually achieved clinical-grade status.
| Company / Product | Narrative Assumption | Regulatory Reality |
|---|---|---|
| WHOOP ECG | Wellness data → clinical insight | FDA 510(k) cleared; separate regulated submission required |
| Oura + Dexcom | Ring data sufficient for glucose insights | Partnership with FDA-cleared CGM device required to bridge validation gap |
| Samsung + Dexcom | Galaxy Watch data enables clinical features | Regulated hardware integration is the mechanism — not wearable data alone |
| Apple Watch AFib | Consumer device → cardiac monitoring | FDA De Novo authorization; dedicated clinical study with 400,000+ participants |
Questions Investors and Acquirers Should Be Asking
1. Does the company's data-to-clinical-AI pathway require a separate clinical data collection programme, and is that programme funded, staffed, and on a disclosed timeline?
2. Has the intended use classification of data collected to date been reviewed against the FDA's inherent use doctrine, in light of the July 2025 WHOOP warning letter precedent?
3. What is the demographic profile of the existing dataset relative to the patient populations where clinical predictions will be applied? Has a representativeness analysis been conducted?
4. For EU clinical ambitions: has the product and data architecture been assessed against EU AI Act high-risk classification requirements effective August 2026?
5. Are the clinical AI valuation assumptions reflected in disclosure documents, or embedded in narrative without corresponding risk disclosure?
The 2026 regulatory landscape (the FDA's inherent use doctrine, the post-market surveillance emphasis, and the EU AI Act) is sharpening the distinction between assets genuinely on the clinical pathway and assets merely priced as if they were. For investors, acquirers, and companies building in this space, drawing that distinction is becoming a core part of the work.
Need a tailored market-entry or regulatory strategy?
We help European technology companies translate complex U.S. and Canadian regulatory pathways into executable plans — from FCC and PTCRB through FDA and ITAR/EAR.
