How AI Is Changing Healthcare IT — and What HIPAA Says About It

Norvet MSP Team · April 2026 · 7 min read

Healthcare providers are under real pressure to do more with less. AI promises to help — handling prior authorizations, transcribing patient notes, flagging potential diagnoses, and clearing scheduling backlogs that would take a human admin hours to process.

The problem is that healthcare AI HIPAA compliance is genuinely complicated. Patient data is sensitive by definition, the legal exposure from a breach is severe, and the AI tools being marketed to healthcare practices in 2026 are not all built to meet the standard.

This post explains where AI is actually useful in healthcare settings, where HIPAA draws the line, and what your IT setup needs to look like before you bring any AI tool near protected health information.

Where AI Is Being Used in Healthcare

Clinical Decision Support

AI tools that analyze patient history, lab values, imaging, and clinical notes to surface relevant diagnostic considerations or flag drug interactions are now in use at practices ranging from independent clinics to large health systems. These tools don't replace physician judgment — they give the physician more to work with faster.

Patient Scheduling and Communication

AI-powered scheduling assistants handle appointment booking, rescheduling, insurance verification, and appointment reminders without requiring a human on the other end. Some systems handle the back-and-forth with patients via text or chat to fill cancellations in real time.

Medical Coding and Claims Processing

AI coding tools analyze clinical documentation and suggest ICD-10, CPT, and HCPCS codes for billing. For practices with high claim volume, this reduces coding errors, speeds up the revenue cycle, and cuts down on denials from miscoded claims.

Transcription and Clinical Documentation

AI transcription tools like Nuance DAX and Suki listen during patient encounters and generate structured clinical notes that the provider reviews and signs. This alone saves many providers 1–2 hours per day of documentation time.

Where HIPAA Gets Complicated

HIPAA's rules on AI come down to one core question: where does the patient data go, and who controls it?

Public AI Tools Are Off-Limits for PHI

ChatGPT, standard Google Gemini, and most publicly available AI tools are not HIPAA-compliant by default. If you paste patient information — a name, a date of birth, a diagnosis, a medication list — into a public AI tool, you have created an unauthorized disclosure of protected health information. That's a HIPAA violation whether or not anyone else ever sees it.

This happens more often than compliance officers realize. A front-desk employee uses ChatGPT to draft a patient communication and includes identifying details. A coder pastes a clinical note into an AI tool to help with code selection. Both are violations.
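This class of mistake is partly a tooling problem, which is why data loss prevention rules matter. As an illustration only, not a substitute for a real DLP product, a minimal screen for obvious PHI patterns might look like the sketch below. The patterns and function name are our own for this example, and real deployments need far broader coverage (names, addresses, free-text identifiers):

```python
import re

# Illustrative patterns only -- real PHI detection needs much broader coverage.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-style number
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),            # date such as a DOB
    re.compile(r"\bMRN\s*#?\s*\d+", re.IGNORECASE),  # medical record number
]

def contains_phi(text: str) -> bool:
    """Return True if the text matches any obvious PHI pattern."""
    return any(p.search(text) for p in PHI_PATTERNS)

# A rule like this would block the paste before it leaves the practice:
print(contains_phi("Pt DOB: 04/12/1987, start metformin"))       # True
print(contains_phi("Please draft a generic reminder template"))  # False
```

Checks like this catch the obvious cases; the harder part is free-text identifiers, which is where commercial DLP tooling and staff training carry the load.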

The Business Associate Agreement Requirement

Any vendor that creates, receives, maintains, or transmits PHI on your behalf is a business associate under HIPAA, and that relationship requires a signed Business Associate Agreement (BAA). If an AI vendor doesn't offer a BAA, you cannot use their tool with real patient data. Full stop.

Major enterprise AI platforms like Microsoft Azure OpenAI Service and AWS HealthLake do offer BAAs and are designed for HIPAA-eligible workloads. Most consumer-facing AI tools do not.

Model Training and Data Retention

Some AI platforms use the data you submit to improve their models. Under HIPAA, allowing a vendor to train their AI on PHI without specific patient authorization is a problem — both legally and ethically. Before deploying any AI tool that touches patient data, your IT team or MSP needs to review the data processing agreement for language about model training, data retention timelines, and deletion rights.

Security Risk Assessment Documentation

HIPAA requires covered entities to maintain a documented security risk assessment. As you bring AI tools into your environment, each one needs to be evaluated and documented as part of that assessment — what data does it access, what are the access controls, what happens in a breach, who is the business associate? Deploying AI without updating your risk assessment puts you out of compliance even if the tool itself is legitimate.

The Questions to Ask Before Deploying Any Healthcare AI Tool

Before you turn on any AI tool that will touch patient data, get clear answers on these:

  • Does this vendor sign a BAA? If no, the conversation ends there.
  • Where is patient data processed and stored? Domestic, cloud-based, encrypted at rest and in transit?
  • Does the vendor use submitted data to train their models? If yes, can you opt out?
  • What are the data retention and deletion policies?
  • What are the access controls — who inside the vendor organization can see your data?
  • Is this tool included in your current security risk assessment?

If you can't get satisfactory answers in writing from the vendor, that's your answer.
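One way to keep those answers actionable is to record them as a structured vetting check, so the BAA gate is enforced before anything else is even weighed. A minimal sketch, with field names of our own invention rather than from any HIPAA tooling:

```python
from dataclasses import dataclass

@dataclass
class VendorAnswers:
    """Written answers collected from an AI vendor during vetting."""
    signs_baa: bool
    encrypted_in_transit_and_at_rest: bool
    trains_on_customer_data: bool
    training_opt_out_available: bool
    retention_policy_documented: bool
    in_current_risk_assessment: bool

def vet_ai_vendor(a: VendorAnswers) -> list[str]:
    """Return a list of blocking issues; an empty list means no blockers found."""
    if not a.signs_baa:
        # No BAA means the tool cannot touch PHI -- nothing else matters.
        return ["No BAA: do not use with patient data."]
    issues = []
    if not a.encrypted_in_transit_and_at_rest:
        issues.append("Data not encrypted at rest and in transit.")
    if a.trains_on_customer_data and not a.training_opt_out_available:
        issues.append("Vendor trains on submitted data with no opt-out.")
    if not a.retention_policy_documented:
        issues.append("No documented retention and deletion policy.")
    if not a.in_current_risk_assessment:
        issues.append("Tool not yet added to the security risk assessment.")
    return issues
```

The point of the structure is the early return: a vendor that won't sign a BAA never reaches the rest of the evaluation, which mirrors how the conversation should go in practice.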

What HIPAA-Compliant AI Looks Like in Practice

Healthcare practices that are deploying AI successfully in 2026 are doing it on purpose. They're working with IT providers who understand HIPAA, vetting each tool before it touches production data, and documenting every decision.

That looks like: using AI transcription tools built specifically for clinical environments with signed BAAs, running them on HIPAA-eligible cloud infrastructure, and training staff on what they can and cannot put into any AI system. It looks like updating the risk assessment annually to reflect the current AI toolset, and having an incident response plan that addresses AI-specific scenarios.

What it doesn't look like is a front-desk employee using consumer AI tools on a shared computer to help with scheduling because it saves time. The time savings are real — but so is the liability.

The Role of Your IT Provider in Healthcare AI

Your IT provider needs to be a compliance partner, not just a helpdesk. For healthcare organizations deploying AI, that means reviewing vendor agreements for HIPAA compliance, configuring data loss prevention tools that block PHI from reaching unauthorized AI platforms, conducting employee training on appropriate AI use, and maintaining documentation that would hold up to an OCR audit.

This is exactly what Norvet does for healthcare clients. We manage healthcare IT with HIPAA compliance built into every layer — from endpoint security to email filtering to AI tool vetting.

Get AI Right the First Time

Healthcare AI HIPAA compliance isn't a reason to avoid AI. It's a reason to deploy it with a plan. The practices that get this right will see real productivity gains and a stronger compliance posture. The ones that don't will eventually explain a breach to their patients and the Office for Civil Rights.

Norvet MSP supports healthcare providers in the Atlanta metro and Clayton County area. If you want a clear-eyed review of your current AI exposure and a roadmap for compliant deployment, call us at (678) 995-5080 or visit norvetmsp.com.

Source Attribution

Article content used with permission from The Technology Press and adapted for Norvet MSP publishing.

