HIPAA-Compliant Alternatives to ChatGPT
No, ChatGPT is not HIPAA compliant. The free, Plus, Pro, and Team versions of ChatGPT cannot be used with protected health information (PHI) under any circumstances. OpenAI does offer a Business Associate Agreement (BAA) for ChatGPT Enterprise, ChatGPT for Healthcare, and its API platform - but signing a BAA is only the starting point, not the finish line. Here is what healthcare organizations actually need to know.
What OpenAI's BAA Actually Covers (and What It Doesn't)
In January 2026, OpenAI launched ChatGPT for Healthcare, a purpose-built tier that includes a BAA, audit logs, customer-managed encryption keys, and data residency options. This joined ChatGPT Enterprise and the API platform as the only OpenAI products eligible for HIPAA-covered use.
Here is what the BAA does cover:
- Data isolation: Inputs and outputs from Enterprise, Healthcare, and API tiers are not used for model training by default.
- Encryption: Data is encrypted in transit (TLS 1.2+) and at rest (AES-256).
- Access controls: SSO, SCIM provisioning, and admin-level workspace management.
- Audit logging: Activity logs available for compliance monitoring.
But there are significant limitations you need to understand:
- Data still leaves your network. Every prompt containing PHI travels to OpenAI's servers for processing. You are trusting a third party with your patient data.
- Court-ordered data retention. A U.S. court order now requires OpenAI to retain conversation data across ChatGPT and API platforms indefinitely - even for enterprise and opt-out users. This creates real tension with HIPAA's minimum necessary standard and data retention policies.
- No BAA for ChatGPT Business. Despite the name, OpenAI explicitly states that ChatGPT Business is not covered by a BAA. Do not confuse it with Enterprise.
- Shared responsibility. OpenAI's BAA covers their obligations as a business associate. Your organization is still responsible for access controls, workforce training, risk assessments, and proper PHI handling on your end.
"HIPAA Eligible" vs. Actually HIPAA Compliant
This distinction trips up more healthcare organizations than any other. No software vendor can make you HIPAA compliant by itself. HIPAA compliance is a shared responsibility between the covered entity (your organization) and the business associate (the vendor).
When a vendor says their product is "HIPAA eligible," they mean:
- They will sign a BAA with you.
- They have implemented technical safeguards on their end (encryption, access controls, audit trails).
- They will not use your data for unauthorized purposes.
What they do not mean:
- That using the product automatically makes you compliant.
- That you can skip your own risk assessment.
- That every feature of their platform is covered under the BAA.
- That your staff can use it however they want without policies in place.
A signed BAA with zero internal safeguards is not compliance - it is a false sense of security with a paper trail.
Comparison: HIPAA-Compliant AI Platforms for Healthcare
| Vendor | BAA Available | Data Leaves Your Network | PHI Training Risk | Compliance Certifications | Best For |
|---|---|---|---|---|---|
| ChatGPT Enterprise / Healthcare | Yes | Yes - processed on OpenAI servers | Low - training opt-out by default, but court-ordered retention applies | SOC 2 Type II | Organizations already invested in OpenAI's ecosystem |
| Microsoft 365 Copilot / Dragon Copilot | Yes | Yes - processed in Microsoft Azure | Low - data not used for foundation model training | SOC 2, HITRUST, ISO 27001, FedRAMP | Health systems already on Microsoft 365 and Azure |
| Google Workspace with Gemini | Yes | Yes - processed in Google Cloud | Low - Workspace data excluded from training with BAA | SOC 2, ISO 27001, FedRAMP, HITRUST | Clinics and practices already on Google Workspace |
| AWS (Bedrock / HealthLake) | Yes | Yes - processed in AWS infrastructure | Low - customer data not used for training; 166+ HIPAA-eligible services | SOC 2, HITRUST, ISO 27001, FedRAMP High | Engineering teams building custom healthcare AI applications |
| Compass AI (Self-Hosted) | Not needed - you own the infrastructure | No - all processing stays on-premise | None - models run locally, PHI never transmitted | Inherits your org's certifications (SOC 2, HITRUST, etc.) | Organizations that need zero data exposure and full control |
When a Cloud BAA Might Be Enough
Not every healthcare AI use case requires on-premise deployment. For some workflows, using a cloud vendor with a signed BAA and a properly configured workspace is a reasonable approach:
- General administrative tasks: Summarizing non-PHI meeting notes, drafting policy documents, creating training materials.
- De-identified data analysis: Working with properly de-identified datasets that no longer qualify as PHI under HIPAA's Safe Harbor or Expert Determination methods.
- Patient-facing communication drafts: Generating template language for appointment reminders or educational content (reviewed by staff before sending).
- Research literature review: Summarizing published medical literature or clinical guidelines.
In these cases, the key question is whether PHI is actually entering the AI system. If you can keep PHI out of the prompts entirely, compliance risk drops significantly.
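As an illustration of that screening step, a simple pre-submission filter can flag obvious identifiers before a prompt ever reaches a cloud AI. This is a minimal sketch under stated assumptions: the patterns below (SSN, phone, date, and MRN-style formats) are examples of what clinical text might contain, and regex matching is nowhere near sufficient for Safe Harbor de-identification of all 18 identifier categories. Treat it as a tripwire, not a guarantee.

```python
import re

# Illustrative patterns for a few obvious identifier formats.
# Real de-identification under HIPAA's Safe Harbor method covers 18
# identifier categories; treat any match here as "block and review",
# and treat no match as "nothing obvious found", NOT "safe to send".
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn_label": re.compile(r"\bMRN[:#]?\s*\d+", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the identifier types detected in a prompt."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(prompt)]

def is_blocked(prompt: str) -> bool:
    """Block the prompt if any screening pattern matches."""
    return bool(screen_prompt(prompt))
```

A workflow built on this would refuse to forward any prompt where `is_blocked` returns True and route it to human review instead. The important design choice is failing closed: the filter decides what may leave, not what must be stopped.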
When You Absolutely Need On-Premise AI
Some use cases involve PHI so deeply that sending it to any third party - even one with a BAA - introduces unacceptable risk:
Clinical Documentation and Notes
Generating or summarizing clinical notes from patient encounters means feeding detailed PHI - diagnoses, medications, lab results, personal identifiers - directly into the AI. Every prompt is a potential data exposure event. With a self-hosted solution like Compass AI, the model runs entirely within your infrastructure. Patient data never crosses a network boundary.
Prior Authorization Processing
Prior auth workflows involve insurance details, diagnosis codes, treatment plans, and patient demographics. Automating these with AI means processing dense concentrations of PHI. On-premise deployment eliminates the third-party risk entirely and keeps processing within your existing compliance perimeter.
Medical Coding and Billing
AI-assisted coding pulls from clinical documentation to suggest ICD-10, CPT, and HCPCS codes. The input data is inherently PHI-rich. Self-hosted AI lets coding teams leverage automation without adding another business associate to your compliance chain.
Patient Communication at Scale
Drafting personalized patient messages - follow-up instructions, care plan summaries, portal responses - requires the AI to process individual patient records. When this happens at scale across thousands of patients, the attack surface of a cloud-based solution multiplies. On-premise processing contains that risk.
Multi-System Integration
When AI needs to pull from your EHR, billing system, and scheduling platform simultaneously, every integration point is a potential PHI exposure vector. Self-hosted AI operating within your existing network can access these systems without PHI ever leaving your security perimeter.
The Real Risk of "Good Enough" Compliance
The most dangerous scenario in healthcare AI is not a data breach - it is the slow, invisible drift of PHI into systems that were never designed to protect it. A doctor copies a patient note into ChatGPT Plus to get a quick summary. A billing coordinator pastes claim details into the free tier to format an appeal letter. A practice manager uses a consumer AI tool to draft a patient communication.
None of these actions trigger alarms. All of them are HIPAA violations.
This is why the choice of AI platform matters. It is not just about which vendor will sign a BAA. It is about whether your infrastructure makes the right thing easy and the wrong thing hard. Self-hosted AI platforms like Compass AI solve this by removing the possibility of PHI leaving your network entirely. There is no accidental data exposure because there is no external data transmission.
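One way to make "the wrong thing hard" concrete in software is a routing policy: requests flagged as containing PHI may only go to an on-premise endpoint, and the system fails closed if no local endpoint is available. This is a hedged sketch, not any vendor's implementation; the endpoint URLs and the `contains_phi` flag are hypothetical placeholders (the flag might come from a screening filter like the one discussed earlier).

```python
# Illustrative routing policy: PHI-flagged requests may only be sent
# to a local, self-hosted model server. Both URLs are hypothetical.
LOCAL_ENDPOINT = "http://ai.internal.example:8000/v1"   # on-premise model server
CLOUD_ENDPOINT = "https://api.example-vendor.com/v1"    # BAA-covered cloud tier

def choose_endpoint(contains_phi: bool, local_available: bool = True) -> str:
    """Pick an inference endpoint under a fail-closed PHI policy."""
    if contains_phi:
        if not local_available:
            # Fail closed: never fall back to the cloud for PHI,
            # even temporarily. An error here is the correct outcome.
            raise RuntimeError("PHI request blocked: no on-premise endpoint available")
        return LOCAL_ENDPOINT
    return CLOUD_ENDPOINT
```

The point of the sketch is architectural: when the routing layer itself refuses to send PHI off-network, accidental exposure requires deliberately bypassing the system rather than a single careless paste.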
How to Evaluate a HIPAA-Compliant AI Solution
Before committing to any AI platform for healthcare use, work through this checklist:
- BAA availability and scope: Does the vendor offer a BAA? Which specific products and features does it cover? (Many vendors have BAAs that only apply to certain tiers or services.)
- Data processing location: Where is PHI processed and stored? Can you specify the region? Can you verify it?
- Training data policies: Is your data explicitly excluded from model training? Is this contractually guaranteed or just a default setting that can change?
- Data retention: How long does the vendor retain your data? Can you enforce your own retention and deletion policies?
- Audit capabilities: Can you get detailed logs of who accessed what data and when?
- Subprocessor transparency: Does the vendor use subprocessors? Are those subprocessors also bound by HIPAA requirements?
- Breach notification: What is the vendor's breach notification timeline and process?
- Exit strategy: Can you export and delete all data if you leave the platform?
Frequently Asked Questions
Can I use ChatGPT Enterprise for HIPAA-covered work?
Yes, but only after signing a BAA with OpenAI and configuring your workspace correctly. ChatGPT Enterprise excludes your data from model training by default and provides encryption, SSO, and audit logs. However, your data is still processed on OpenAI's servers, and a court order currently requires OpenAI to retain conversation logs indefinitely - a detail your compliance team should evaluate carefully.
Does OpenAI sign BAAs?
Yes. OpenAI offers BAAs for ChatGPT Enterprise, ChatGPT for Healthcare, and its API platform. They do not offer BAAs for ChatGPT Free, Plus, Pro, Team, or Business plans. You must be on an eligible tier and execute the BAA separately - it is not automatic with your subscription.
Is a BAA enough to make an AI tool HIPAA compliant?
No. A BAA is a legal agreement that defines the vendor's responsibilities for protecting PHI. HIPAA compliance requires a complete program including risk assessments, workforce training, access controls, encryption, incident response plans, and ongoing monitoring. The BAA covers the vendor's side. Everything else is on you.
What is the safest way to use AI with patient data?
The safest approach is a self-hosted AI platform where models run entirely within your own infrastructure. No PHI is transmitted to third parties, no BAA is needed for the AI vendor, and your existing security and compliance controls apply directly. Solutions like Compass AI are built specifically for this use case in regulated industries.
Can healthcare workers use the free version of ChatGPT at work?
Not with any information that could qualify as PHI. The free version of ChatGPT has no BAA, no data isolation, and OpenAI may use inputs to improve its models. Even seemingly anonymous clinical questions can constitute PHI if combined with other identifiers. Healthcare organizations should have clear policies prohibiting the use of consumer AI tools for any work involving patient information.