FDA has announced that its Total Product Lifecycle Advisory Program, or TAP Program, is scaling from 15 devices at launch to 325 by the end of FY2027. That’s more than a 20x expansion over the life of the program. When a regulatory agency scales a pilot program that aggressively, it’s worth asking what they’re actually building toward.
We spend our days in the software, connectivity, and system architecture layers of connected medical device development. We’re not regulatory consultants, and this isn’t regulatory advice. But we’ve been watching this expansion alongside a few other recent FDA moves, and the pattern has direct implications for how connected devices get built. That’s the lens we’re bringing here.
TAP is designed to reduce friction in the submission process and break down silos between manufacturers, providers, and payers, but the mechanism for doing that is structured, ongoing engagement with FDA across the product lifecycle, not just at submission. A few other recent moves point in the same direction and help clarify what that model looks like in practice:
- Cybersecurity in Medical Devices guidance (2025) — reinforces security as a lifecycle requirement: secure update mechanisms, vulnerability management, software bill of materials. Not a box you check at submission. We broke down what this means for software teams here.
- Predetermined Change Control Plan (PCCP) guidance (2024) — enables FDA to authorize a framework for future software and AI updates under a controlled plan, supporting iterative change without treating the device as frozen after clearance.
- TEMPO pilot for Digital Health Devices (2025) — pilots a pathway centered on real-world outcomes and performance, signaling increased reliance on ongoing evidence generation after deployment.
TAP participation is voluntary, and most teams building connected devices won’t enroll. But that’s beside the point. What these moves suggest, taken together, is a shift from “prove it once at submission” toward “demonstrate it throughout the lifecycle.”
If that’s where oversight is headed, it has direct implications for the architecture decisions you’re making right now, whether you’re deep in regulatory planning or nowhere near that conversation yet. We can’t read FDA’s mind, and we’re not claiming this is their stated direction. But the pattern is clear enough that it’s worth thinking through what it means for the systems you’re building today.
The Design Question Shifts from “Can We Do It?” to “Can We Prove It?”
If this shift is real, it changes what you’re fundamentally designing for.
The old design question was binary: can our system do X? Does the firmware update? Does the device collect the right data? Does the connection stay stable? Those are still important questions. But they’re increasingly insufficient on their own.
The new question is: can our system prove it’s doing X correctly both continuously and retrospectively? That’s a different question, and it requires a different architecture.
Three Questions Every Architecture Decision Should Answer
A useful way to think about this: every architecture decision for a critical component now needs to answer three questions.
- Continuous Demonstration: Can this component prove it’s operating as validated, ongoing, not just at the moment you checked?
- Retrospective Reconstruction: Can we reconstruct this component’s exact state and behavior at any historical point in time?
- Controlled Evolution: Can we update this component with full traceability and rollback capability?
If your architecture can’t answer yes to all three for critical components, you’re likely designing for yesterday’s regulatory model. And importantly, this isn’t about adding features. It’s about building different foundational capabilities from the start.
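To make the three questions concrete, here’s a minimal sketch of how a team might score its own components against them. The class, field names, and the example component are our own illustration, not any FDA terminology or required artifact:

```python
from dataclasses import dataclass

@dataclass
class ComponentReview:
    """One critical component, scored against the three questions.

    Field names are our shorthand for the questions above; the
    scoring itself is a judgment call your team makes in review.
    """
    name: str
    continuous_demonstration: bool      # proves it's operating as validated, ongoing
    retrospective_reconstruction: bool  # can rebuild exact state at any past point
    controlled_evolution: bool          # updates with full traceability and rollback

    def ready_for_continuous_oversight(self) -> bool:
        # All three must hold; two out of three is still a gap.
        return (self.continuous_demonstration
                and self.retrospective_reconstruction
                and self.controlled_evolution)

# Hypothetical example: an OTA update service that logs deployments
# and supports rollback, but keeps no historical state snapshots.
ota = ComponentReview("ota-update-service",
                      continuous_demonstration=True,
                      retrospective_reconstruction=False,
                      controlled_evolution=True)
print(ota.ready_for_continuous_oversight())  # False
```

The value isn’t the code, it’s the discipline: forcing every critical component through the same three-question review surfaces gaps before they’re locked into foundational infrastructure.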
Three Architecture Areas Where Requirements May Change
The clearest way to see what continuous oversight demands in practice is to look at where current architecture typically falls short. Three areas tend to surface the gap most sharply.
Software Update Architecture
Most teams design their update architecture around one question: can we reliably, securely, and practically push firmware and software updates to devices in the field? The focus is functional. Does the OTA mechanism work? Do updates deploy successfully? Does the device come back up correctly?
Continuous oversight asks a different question: can you prove that every update was authenticated, authorized, and deployed as intended? Not just “did the update happen,” but “can you demonstrate the full history of how every device in your fleet got to its current software state?”
Think about what happens when a device has an adverse event. FDA wants to know what firmware version was running, how the device got that version, what the deployment path was, and whether you could have rolled it back if you’d caught the problem in time. Your update mechanism works, but can you prove the deployment history?
That’s where the architecture has to do more than most teams design for. Version control isn’t just a development tool, it’s regulatory infrastructure. Deployment logging isn’t optional, it’s evidence generation. Rollback isn’t a nice-to-have, it’s a safety requirement. These need to be built into the core update architecture from the start, not bolted on later. If you’re working through the implementation side of that (OTA mechanics, deployment flow, and rollback design), this breakdown is a useful next read.
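One way to make deployment history tamper-evident is an append-only log where each entry is hash-chained to the previous one. This is a sketch under our own assumptions (field names and the class are illustrative); a production system would also cryptographically sign entries and replicate the log off-device:

```python
import hashlib
import json
import time

class DeploymentLog:
    """Append-only deployment history for one device.

    Each entry embeds the SHA-256 hash of the previous entry, so
    rewriting or deleting history breaks the chain detectably.
    """
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.entries = []

    def record(self, from_version: str, to_version: str, authorized_by: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "device_id": self.device_id,
            "from": from_version,
            "to": to_version,
            "authorized_by": authorized_by,
            "ts": time.time(),
            "prev": prev_hash,
        }
        # Hash the entry body (before the hash field exists) so the
        # entry's own hash covers every field above.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def current_version(self) -> str:
        return self.entries[-1]["to"] if self.entries else "factory"

    def verify_chain(self) -> bool:
        """Check that every entry links to its predecessor's hash."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = entry["hash"]
        return True
```

The point of the sketch: the answer to “how did this device get to its current software state?” falls out of the data model itself, rather than being reassembled from scattered operational logs after the fact.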
Device State & Performance Monitoring
Most teams design their telemetry around the data their device needs to function: sensor readings, battery state, connectivity status, algorithm inputs. The focus is operational: does our device have what it needs to do its job?
Continuous oversight shifts the question to: what data proves your device is operating within validated parameters? That’s a meaningfully different frame. Monitoring for functionality is not the same as monitoring for validated performance.
Imagine FDA reviewing your device data and asking how you know the device was operating as validated. Your logs show it was powered on and connected. But can you prove it was calibrated correctly, processing data within spec, operating within validated environmental bounds? A CGM doesn’t just need glucose readings in the data record; it needs calibration state, sensor age, and the environmental conditions that affect validation claims.
That gap is what the telemetry strategy has to close. You need to capture not just what’s happening, but whether what’s happening falls within validated parameters. Your monitoring data has to be designed to answer regulatory questions, not just operational ones.
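Using the CGM example above, here’s a sketch of what “validation-aware” telemetry might look like. The bounds, field names, and numbers are invented for illustration; real validated ranges come from your own design validation:

```python
from dataclasses import dataclass, asdict

# Hypothetical validated operating bounds for an illustrative CGM-like
# sensor. These numbers are made up; yours come from validation work.
VALIDATED_BOUNDS = {
    "temperature_c": (5.0, 40.0),        # validated environmental range
    "sensor_age_hours": (0, 240),        # validated for 10 days of wear
    "hours_since_calibration": (0, 12),  # calibration interval claim
}

@dataclass
class TelemetryRecord:
    """A functional reading plus the state that backs the validation claim."""
    glucose_mg_dl: float          # the operational datum
    temperature_c: float          # validation-relevant context
    sensor_age_hours: float
    hours_since_calibration: float

    def out_of_validation(self) -> list:
        """Names of validated parameters this record violates, if any."""
        values = asdict(self)
        return [name for name, (lo, hi) in VALIDATED_BOUNDS.items()
                if not lo <= values[name] <= hi]

reading = TelemetryRecord(glucose_mg_dl=112.0, temperature_c=43.5,
                          sensor_age_hours=96.0, hours_since_calibration=3.0)
print(reading.out_of_validation())  # ['temperature_c']
```

Note what the record does: the glucose reading alone answers the operational question, while the surrounding fields answer the regulatory one. A record like this can show that a reading was taken outside validated conditions even when the reading itself looks perfectly normal.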
System Traceability & Reconstruction Capability
Most teams build logging for debugging and troubleshooting. The focus is reactive. Essentially, capture enough information to understand and fix problems when they occur.
Continuous oversight requires something more demanding: the ability to reconstruct complete system state at any historical point. Not just “what’s happening now” but “what was the exact configuration, firmware version, data collection parameters, and user settings on this device six months ago?”
Say an adverse event occurred six months ago and FDA needs to understand the root cause. Can you reconstruct the full picture at that exact moment: firmware version, configuration state, update history, and environmental conditions? Most systems can tell you the current state with confidence, but very few can tell you the historical state with the same confidence.
That reconstruction capability has to be designed across your entire stack: firmware, cloud backend, data pipeline, and user interface. Versioned, immutable state records, not just logs. Configuration history tied to device identity, not just current configuration. A complete audit trail of every significant action, not just errors. This isn’t one system’s responsibility; it cuts across every layer. The cloud backend layer in particular carries a lot of this burden; here’s how we approach that review process for regulated systems.
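One common pattern for this is event sourcing: store every state change as an immutable event, and derive “state as of time T” by replaying events up to that instant. This is a deliberately minimal sketch with invented field names, not a full event-sourcing implementation:

```python
class DeviceStateHistory:
    """Reconstruct one device's configuration at any past instant.

    Every state change is stored as an immutable (timestamp, key, value)
    event; nothing is ever overwritten, so historical state is a query,
    not an archaeology project.
    """
    def __init__(self):
        self._events = []  # append-only list of (ts, key, value)

    def record(self, ts: float, key: str, value) -> None:
        self._events.append((ts, key, value))
        # Keep events ordered by time, even if some arrive late
        # (e.g., a device syncing after an offline period).
        self._events.sort(key=lambda event: event[0])

    def as_of(self, ts: float) -> dict:
        """State as it was at time ts: last value of each key up to ts."""
        state = {}
        for event_ts, key, value in self._events:
            if event_ts > ts:
                break
            state[key] = value
        return state

# Hypothetical timeline: firmware updated at t=300, after the window
# we later need to investigate.
history = DeviceStateHistory()
history.record(100.0, "firmware", "2.1.0")
history.record(200.0, "sampling_hz", 50)
history.record(300.0, "firmware", "2.2.0")
print(history.as_of(250.0))  # {'firmware': '2.1.0', 'sampling_hz': 50}
```

The design choice worth noting is that current state becomes just a special case of historical state (`as_of(now)`), which means the reconstruction question is answered by the same code path you exercise every day, not by a rarely-tested recovery procedure.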
Architecture Stage Determines Your Optionality
The reason to think about this now isn’t FDA pressure. It’s that architecture decisions have a natural window defined by where you are in development, not by any regulatory deadline.
Early in development, the groundwork capabilities we’ve described cost relatively little to design in. They’re not features you add, they’re decisions about how your system works. Structuring your update architecture to generate deployment logs by default. Designing your telemetry to capture validation-relevant state alongside functional data. Building immutable records into your backend data model from the start. At this stage, these are architectural choices, not major engineering investments.
Later, they’re retrofits to foundational infrastructure, which are expensive, disruptive, and often incomplete. It’s the same argument we make about firmware planning specifically: the decisions that cost least are the ones made earliest.
The Non-Regulatory Case for Building This Way
If you’re reading this while you’re still making core architecture decisions, you have optionality. If you’re 18 months into development with these systems already built, you have less of it. But that’s just the nature of building software systems, not a scare tactic.
And honestly, the case for building these capabilities isn’t purely regulatory. Traceable, auditable, reconstructable systems are faster to debug, easier to root-cause, and more defensible to any stakeholder – investor, customer, or regulator alike. Whether or not FDA’s oversight posture shifts exactly as we’re reading it, teams that build these capabilities tend to end up with more robust systems regardless.
The Question Your Regulatory Consultants Aren’t Hired to Answer
Regulatory consultants do essential work. They understand the submission process, the documentation requirements, and the evidence FDA needs to clear your device. They essentially work backwards from submission: what does FDA need to see, and how do you produce it?
That’s a necessary question. But it’s not the same question as: does your architecture actually have the capability to generate that evidence — continuously, across the product lifecycle, in a way that holds up under review? That’s a system design question, and it’s typically not what regulatory consultants are hired to answer.
The System Design Question
If this framing is useful, it’s because it gives you a way to evaluate your own architecture choices from the system forward, rather than from submission backwards, before those choices are locked in.
TAP’s expansion, and the pattern of FDA moves around it, tells you something about where connected device oversight appears to be headed. The question isn’t just whether you’re compliant at submission. The question is whether your system was designed to prove what it’s doing over time, in the field, in a way that holds up. That’s a technical question. And it’s worth asking now.




