A founder posted a story this week that landed because everyone in regulated SaaS has now seen some version of it.
They paid roughly $8,000 to have an AI-assisted developer build a healthcare MVP. Six weeks later there was something that looked like a product — login, database, dashboard, a clean UI, fully demo-ready. They lined up their first pilot. A regional clinic sent over a vendor questionnaire. The questionnaire asked about encryption at rest, audit logging, role-based access controls, Business Associate Agreement coverage, and whether any PHI touched third-party infrastructure that had never been reviewed.
None of that was in the product. Not because the developer was careless. Because the prompts never asked.
This is now a pattern. And the people most exposed to it are the ones least likely to see it coming: solo and small-team founders shipping fast with AI coding tools, validating clinical hypotheses with real users, and discovering — at the moment of first revenue — that they built the wrong thing.
The Structural Problem Isn’t AI
The AI coding tools are not the failure here. Plenty of compliant healthcare software gets built with the same tools, by the same kinds of operators. What fails is the assumption that compliance is a layer that gets added later, somewhere between “working prototype” and “first paying customer.”
In regulated SaaS, compliance isn’t a layer. It’s a constraint on the schema, the authentication model, the logging strategy, and the list of third-party services you’re even allowed to call. That’s the part the prompts never surface. You can ask Cursor or Claude Code to build you a patient intake form. Neither tool will tell you that the form needs field-level access permissions, immutable audit columns from the initial schema design, and a documented data flow map showing every place PHI touches third-party infrastructure — because the prompt didn’t ask, and the tool doesn’t volunteer.
Aptible, which has been doing HIPAA infrastructure for digital health startups since 2013, puts it directly: the decisions hardest to retrofit — data model, access control architecture, audit logging, encryption scheme — are the ones easiest to get right before you write the first line of production code. Every one of those decisions is invisible from the prompt level.
What the Procurement Questionnaire Actually Asks
The clinic in the original story sent a vendor questionnaire. That document is not arbitrary. It is the operational expression of the HIPAA Security Rule, mapped to specific technical controls. If you have not seen one, the categories are predictable, and they map cleanly onto things AI-built MVPs almost universally skip.
Encryption
AES-256 for PHI at rest, TLS 1.2+ in transit, FIPS-validated protocols, documented key management. Default cloud database setup does not satisfy this — you have to be able to point at the configuration, the key rotation policy, and the BAA from your cloud provider that covers the specific services you used. “We use Postgres on AWS” is not an answer. “We use RDS with encryption at rest enabled, KMS-managed keys with annual rotation, under an executed AWS BAA covering RDS specifically” is.
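What the "good" answer looks like can be sketched as a check over the database configuration. The dict below follows the shape of an AWS RDS DescribeDBInstances response: `StorageEncrypted` and `KmsKeyId` are real RDS response fields, while `ForceSSL` is a hypothetical summary flag standing in for the `rds.force_ssl` parameter-group setting, which lives in a different part of the API. The sample values are illustrative, not from a real account.

```python
def encryption_findings(db_instance: dict) -> list[str]:
    """Return questionnaire-relevant gaps for one database instance."""
    findings = []
    if not db_instance.get("StorageEncrypted"):
        findings.append("PHI at rest is not encrypted")
    if not db_instance.get("KmsKeyId"):
        findings.append("no documented KMS key to hang a rotation policy on")
    if not db_instance.get("ForceSSL"):  # hypothetical flag summarizing rds.force_ssl
        findings.append("TLS not enforced for connections in transit")
    return findings

sample = {
    "DBInstanceIdentifier": "intake-db",
    "StorageEncrypted": True,
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example",
    "ForceSSL": False,
}
print(encryption_findings(sample))  # ['TLS not enforced for connections in transit']
```

The point of framing it as findings rather than a pass/fail boolean is that each finding maps to a specific line on the questionnaire you will have to answer.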
Audit Logging
HIPAA requires hardware, software, and procedural mechanisms to record and examine activity in systems containing PHI. At the application level that means immutable logs of every login attempt (successful and failed), every PHI view/create/update/delete, with user ID, timestamp, event type, and affected PHI identifiers. Logs must be retained six years minimum, encrypted at rest and in transit, and protected against modification by administrators. Most AI-built MVPs log nothing. Some log to plaintext files. A few log to encrypted storage but to the same database the application uses, with no immutability guarantees.
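To make the logging requirement concrete, here is a minimal sketch of an application-level audit trail with tamper evidence: each record carries who, what, and when, and hashes the previous record so any silent edit breaks the chain. The field names and the in-memory list are illustrative; a real deployment needs append-only storage under separate credentials and an enforced six-year retention policy.

```python
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self._records = []          # stand-in for an append-only store
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, user_id, event, phi_ref=None):
        entry = {
            "user_id": user_id,
            "event": event,      # e.g. "login_failed", "phi_view"
            "phi_ref": phi_ref,  # affected PHI identifier, if any
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._records.append(entry)
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != r["hash"]:
                return False
            prev = r["hash"]
        return True

log = AuditLog()
log.record("u-17", "login_success")
log.record("u-17", "phi_view", phi_ref="patient/204")
assert log.verify()
log._records[0]["user_id"] = "u-99"  # simulated tampering by an admin
assert not log.verify()
```

The hash chain is what turns "logs protected against modification by administrators" from a policy statement into something you can demonstrate to an auditor.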
Role-Based Access Controls
“Doctor sees their patients, admin sees billing” is not RBAC. You need defined roles, documented permissions per role, minimum-necessary access enforcement, periodic access reviews, and the ability to produce evidence of who had access to which records and when. Most prototypes have one role: logged-in user.
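A minimal sketch of what API-layer enforcement means, as opposed to hiding buttons in the UI: a documented role-to-permission map and a deny-by-default check in front of every handler. The role and permission names here are illustrative.

```python
from functools import wraps

# The documented role -> permission map. This is the artifact a
# procurement team asks to see; illustrative roles only.
ROLE_PERMISSIONS = {
    "provider": {"patient:read", "patient:write"},
    "billing_admin": {"billing:read"},
    "patient": {"own_record:read"},
}

class Forbidden(Exception):
    pass

def requires(permission: str):
    """Enforce a permission at the API layer, denying by default."""
    def decorator(handler):
        @wraps(handler)
        def wrapped(user, *args, **kwargs):
            granted = ROLE_PERMISSIONS.get(user["role"], set())
            if permission not in granted:  # minimum-necessary: deny unless granted
                raise Forbidden(f"{user['role']} lacks {permission}")
            return handler(user, *args, **kwargs)
        return wrapped
    return decorator

@requires("patient:read")
def get_patient(user, patient_id):
    return {"id": patient_id}  # stand-in for the real lookup

get_patient({"role": "provider"}, "p-1")  # allowed
try:
    get_patient({"role": "billing_admin"}, "p-1")
except Forbidden as e:
    print(e)  # billing_admin lacks patient:read
```

Note that the permission map doubles as documentation: the same structure answers the questionnaire's "which roles can access which records" question and drives the runtime check.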
BAA Coverage and the Subprocessor Chain
This is where most AI-built products fail hardest. If your app calls OpenAI to summarize clinical notes, and you do not have a BAA with OpenAI (which requires their Enterprise tier or API with specific configuration), every one of those API calls is a HIPAA violation. Same for Anthropic, Azure OpenAI, AWS Bedrock, Google Vertex.
The BAA is not implied by the vendor’s website saying “we’re HIPAA compliant.” Aptible’s team has documented a specific failure mode: one company had a BAA with Anthropic but not OpenAI, and a developer tried OpenAI for a task where it performed better — discovered in standup, disclosed to compliance, documented as an incident. It was fixable. It was also avoidable.
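The subprocessor failure mode above can be turned into a hard gate in code: no outbound call carrying PHI unless the vendor is marked as having an executed BAA on file. The vendor names and the inventory dict are illustrative placeholders, not a real compliance record.

```python
BAA_INVENTORY = {
    # vendor: True only once a signed BAA covering this use is on file
    "anthropic": True,
    "openai": False,
    "sendgrid": False,
}

class MissingBAA(Exception):
    pass

def send_phi(vendor: str, payload: str) -> str:
    """Gate every PHI-carrying call on the BAA inventory."""
    if not BAA_INVENTORY.get(vendor, False):
        raise MissingBAA(
            f"no executed BAA on file for {vendor!r}; "
            "this call would be a HIPAA violation"
        )
    # ... the actual vendor API call would go here ...
    return f"sent to {vendor}"

send_phi("anthropic", "clinical note ...")  # allowed
try:
    send_phi("openai", "clinical note ...")
except MissingBAA as e:
    print(e)
```

With a gate like this, the scenario in the Aptible anecdote surfaces as a raised exception in development rather than a disclosed incident after the calls were made.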
Risk Analysis
HIPAA requires an accurate, thorough, written risk analysis of where ePHI exists in your system, what threats it faces, and what controls are in place. Not a mental checklist. An actual document. Absence of one is itself a compliance violation, and it is the first thing a serious auditor asks for.
The Retrofit Math
The original Reddit post claimed that retrofitting compliance into one of these MVPs cost roughly 3x the original build. The published industry numbers are worse.
Multiple healthcare development analyses from the last six months converge on the same range. Webkorps’ 2026 analysis estimates that retrofitting compliance after development typically costs 60–100% of the original development budget, versus 20–35% added cost when built correctly from day one. Specode’s 12-month total cost of ownership analysis describes the “Month-8 rebuild” as the standard outcome when teams ship on general-purpose low-code platforms without healthcare scaffolding — by month three to twelve the bill shifts from shipping screens to running regulated software, and it’s the rework that kills the budget.
The reason retrofitting compounds is that the early architectural choices propagate. If your database schema doesn’t have tenant isolation and audit columns, every table touching PHI has to be migrated. If your authentication doesn’t distinguish provider context from patient context, the auth model needs to be rebuilt — and every endpoint that consumed the old auth has to be revisited. If your third-party integrations don’t have BAAs, you have to either replace them or get them under contract, and replacing them means rewriting whatever code depends on their specific APIs. Each retrofit step touches code that was written assuming the old constraints, which means the rework isn’t additive — it cascades.
A six-week, $8K build that has to be rebuilt isn’t a $24K problem. It’s the original $8K plus whatever you have to pay someone who understands healthcare architecture to do the work right, plus the opportunity cost of the pilot you paused, plus whatever trust damage you took with the pilot customer when you told them the architecture needs to get fixed.
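Applying the cited ranges to the $8K example makes the gap concrete. These are the article's quoted industry estimates, not independent figures, and they exclude the opportunity cost and trust damage described above.

```python
build = 8_000  # the example build cost

# Built correctly from day one: 20-35% added cost (the Webkorps estimate).
day_one_low, day_one_high = build * 1.20, build * 1.35

# Retrofit: the original build plus 60-100% of the budget in rework.
retrofit_low, retrofit_high = build * 1.60, build * 2.00

print(f"day one:  ${day_one_low:,.0f} to ${day_one_high:,.0f}")
print(f"retrofit: ${retrofit_low:,.0f} to ${retrofit_high:,.0f}")
```

Even at the optimistic end of the retrofit range, building it right the first time is cheaper, before any of the intangibles are counted.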
What “Compliance by Design” Actually Looks Like in Week One
Before the first feature commit, in this order:
1. Map the PHI flow. Even informally. A diagram showing where PHI enters the system, where it’s stored, where it transits, what touches it. This is the single most commonly skipped step, and it’s the cheapest one. Aptible’s guidance for early-stage teams is explicit that this doesn’t need to be a formal document — but it needs to exist, because every downstream decision (which database, which auth provider, which AI vendor, which logging stack) depends on it.
2. Pick the cloud and lock the BAA. AWS, Azure, and GCP all sign BAAs. The BAA covers infrastructure they manage — not your code, not your data pipelines, not your AI vendor choices. Confirm which specific services are HIPAA-eligible under your BAA (not all of them are), and constrain your architecture to those.
3. Schema with audit columns and tenant isolation from line one. Every table that touches PHI gets created/updated timestamps, created_by/updated_by user IDs, and a tenant/organization scope. Retrofitting this later means a migration touching every PHI table and every query.
4. Authentication that distinguishes role contexts. Provider vs. patient vs. admin vs. system. Even if you only have one role today, the auth model should accommodate the others without restructuring. Role-based access enforcement at the API layer, not just the UI.
5. Application-level audit logging from the first endpoint. Every PHI access, every authentication event, every administrative action. Immutable storage (append-only, separate credentials from the application, retention policy enforced). This is the control most often missed in early builds and the single hardest one to retrofit, because the log you don’t have for the first six months is gone forever.
6. The BAA inventory for every third party that touches PHI. OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Google Vertex, your email provider, your analytics provider, your error tracking provider, your customer support tool. If they receive, transmit, or maintain PHI, you need a BAA. If your error tracker logs request bodies and your request bodies contain PHI, that tool needs a BAA. If your analytics provider sees URL parameters and your URLs contain patient identifiers, that tool needs a BAA.
7. A written risk analysis. Doesn’t need to be 80 pages. Needs to exist, be dated, be specific to your system, and identify the threats and controls. It is the first document a procurement team or auditor will ask for, and its absence is treated as a substantive finding, not a paperwork gap.
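The schema step above can be sketched in a few lines. SQLite stands in for the production database, and the column names are conventions rather than a prescribed standard; the point is that the audit columns and the tenant scope exist from the first migration, and that no query path ignores them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patient_intake (
        id              INTEGER PRIMARY KEY,
        tenant_id       TEXT NOT NULL,  -- organization scope, present from line one
        chief_complaint TEXT,           -- PHI field
        created_at      TEXT NOT NULL DEFAULT (datetime('now')),
        updated_at      TEXT NOT NULL DEFAULT (datetime('now')),
        created_by      TEXT NOT NULL,  -- user ID, not free text
        updated_by      TEXT NOT NULL
    )
""")

def intakes_for_tenant(tenant_id: str):
    # Every query is tenant-scoped; there is no unscoped code path.
    return conn.execute(
        "SELECT id, chief_complaint FROM patient_intake WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

conn.execute(
    "INSERT INTO patient_intake (tenant_id, chief_complaint, created_by, updated_by) "
    "VALUES (?, ?, ?, ?)",
    ("clinic-a", "follow-up", "u-17", "u-17"),
)
print(intakes_for_tenant("clinic-a"))  # [(1, 'follow-up')]
print(intakes_for_tenant("clinic-b"))  # []
```

Adding these columns to an empty schema costs minutes. Adding them to a live system means a migration across every PHI table and a rewrite of every query, which is the cascade described in the retrofit section.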
None of this requires slowing the build to a crawl. It requires that the prompt include the constraint. “Build me a patient intake form” produces one thing. “Build me a patient intake form for a HIPAA-regulated environment, with tenant isolation, audit columns, role-based access enforcement, application-level audit logging to an immutable store, and no third-party API calls without an executed BAA” produces a different thing. The AI tool will happily do the second version. It just won’t propose the second version on its own.
The Honest Version of the Advice
The founder in the original post said the thing most people in healthcare procurement know but rarely say: a lot of the healthcare founders who reach out need a compliance attorney before they need a developer. The ones who come back after that conversation tend to ship something that survives contact with a real procurement team.
That doesn’t mean every solo founder needs to spend $15K on legal before writing a line of code. It means that before the first commit, someone on the project needs to have read the HIPAA Security Rule, understood what a BAA actually obligates, and built a checklist of compliance requirements that becomes a constraint on the prompts. If that person isn’t on the team, it’s the developer’s job to ask whether they exist before quoting the work. If they don’t exist, the right answer is to say so.
The question to ask any developer before they write a line of healthcare code is what their compliance requirements checklist looks like. If they don’t have one, you have your answer about whether they should be building this.
A Practical Checklist for Founders Evaluating an AI-Built Healthcare MVP
If you already have a build and you’re trying to figure out whether it can survive procurement, these are the questions to ask in order. Each one is a binary — “we’ll add that later” is a no.
- Is there a written data flow map showing every place PHI is stored, transmitted, or processed?
- Is there an executed BAA with the cloud provider, naming the specific services in use?
- Is there a BAA with every third-party service that receives, transmits, or processes PHI — including any LLM API, error tracker, analytics tool, or email provider?
- Is PHI encrypted at rest using AES-256 or equivalent, with documented key management?
- Is PHI encrypted in transit using TLS 1.2+ end to end, including between internal services?
- Does every PHI-touching table have audit columns (created/updated timestamps, user IDs, tenant scope)?
- Is there application-level audit logging of every PHI access, write, and authentication event, retained immutably for at least six years?
- Is access enforced by role at the API layer, with documented role definitions and minimum-necessary access?
- Is there a written risk analysis specific to this system?
- Is there a documented incident response and breach notification procedure that meets the 60-day notification window?
If you’re at three or fewer yeses, you don’t have an MVP. You have a prototype that needs to be rebuilt before it touches a real patient record. That’s not a disaster — it’s a much smaller disaster if you find out now than if you find out in a procurement questionnaire from a clinic that was about to be your first customer.
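The checklist and its threshold can be written down as a trivial scorer, which is sometimes useful when evaluating several builds at once. The question labels and the example answers below are illustrative, not a real assessment.

```python
CHECKLIST = [
    "written PHI data flow map",
    "executed cloud BAA naming the services in use",
    "BAA with every PHI-touching third party",
    "AES-256 at rest with documented key management",
    "TLS 1.2+ in transit, including internal services",
    "audit columns on every PHI-touching table",
    "immutable audit logging with six-year retention",
    "role-based access enforced at the API layer",
    "written system-specific risk analysis",
    "incident response meeting the 60-day window",
]

def verdict(answers: dict) -> str:
    """Apply the three-or-fewer-yeses rule; unanswered items count as no."""
    yeses = sum(bool(answers.get(item)) for item in CHECKLIST)
    if yeses <= 3:
        return "prototype: rebuild before this touches a real patient record"
    return "above the floor: keep working the remaining items"

example = {item: False for item in CHECKLIST}
example["executed cloud BAA naming the services in use"] = True
print(verdict(example))  # prototype: rebuild before this touches a real patient record
```

Treating unanswered items as "no" mirrors how a procurement reviewer scores the questionnaire: a blank is a gap, not a maybe.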
If you’re building in healthcare, fintech, defense, or any other space where the first real customer’s procurement team will ask for evidence of controls, the architectural decisions are the compliance decisions. They get made in week one whether you’re paying attention or not. The only choice is whether you make them deliberately.
This article is provided for informational purposes only and does not constitute legal advice. Healthcare technology founders with specific questions about HIPAA compliance program design, BAA requirements, or security architecture should consult qualified legal and technical counsel.