
Colorado AI Act, EU AI Act, HIPAA: Your 2026 Compliance Checklist

Three major regulations affect AI document processing in 2026. Here's a practical checklist covering the Colorado AI Act, EU AI Act, and HIPAA -- what applies, what to do, and when.


The regulatory convergence

2026 is the year AI regulation gets real. Three major frameworks converge:

  • EU AI Act: Full enforcement of high-risk AI obligations begins August 2026
  • Colorado AI Act (SB 24-205): The first comprehensive US state AI law, effective February 2026
  • HIPAA: Existing healthcare data rules now intersect with AI processing in ways the HHS Office for Civil Rights (OCR) is actively enforcing

If you use AI to process documents in a professional setting, at least one of these frameworks likely applies to you. If you operate across industries or jurisdictions, multiple may apply simultaneously.

This checklist covers each framework's requirements and provides actionable steps for compliance.

Part 1: Colorado AI Act

What it covers

The Colorado AI Act regulates "high-risk AI systems" -- AI that makes or substantially influences "consequential decisions" affecting consumers in areas including:

  • Employment and hiring
  • Education and enrollment
  • Financial services and lending
  • Healthcare services
  • Insurance
  • Housing
  • Government services
  • Legal services

Who it applies to

Developers: Organizations that create AI systems deployed in Colorado.

Deployers: Organizations that use AI systems to make consequential decisions affecting Colorado consumers.

If you use AI to process documents that inform decisions in any of the categories above, and those decisions affect Colorado residents, you're a deployer.

Checklist for deployers

  • [ ] Risk management policy: Implement a documented risk management policy governing your use of high-risk AI systems. The policy must be reasonable and updated annually.

  • [ ] Impact assessment: Complete an impact assessment for each high-risk AI system before deployment and annually thereafter. The assessment must include:

    • The purpose and intended use
    • The data processed (types, sources)
    • Known limitations and potential harms
    • The metrics used to evaluate performance
    • Safeguards and mitigation measures
  • [ ] Consumer notification: Inform consumers when a consequential decision is made using a high-risk AI system. Include:

    • That AI was used in the decision
    • A description of the AI system
    • Contact information for questions
    • The right to appeal the decision (if applicable)
  • [ ] Discrimination testing: Test for algorithmic discrimination based on protected characteristics. Document results and any remediation.

  • [ ] Data governance: Ensure training and input data is appropriate, relevant, and managed according to applicable privacy laws.

  • [ ] Human oversight: Maintain human oversight of consequential decisions. The AI should inform, not replace, human judgment.

  • [ ] Attorney General reporting: If you discover that your AI system has caused algorithmic discrimination, report to the Colorado Attorney General within 90 days.
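
The impact-assessment fields above translate naturally into a structured record you can refresh annually. A minimal sketch in Python; the class name, field names, and example values are illustrative, not a format prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ImpactAssessment:
    """One record per high-risk AI system, refreshed at least annually."""
    system_name: str
    purpose: str                      # purpose and intended use
    data_processed: list[str]         # types and sources of data
    known_limitations: list[str]      # limitations and potential harms
    performance_metrics: list[str]    # metrics used to evaluate performance
    safeguards: list[str]             # safeguards and mitigation measures
    completed_on: date = field(default_factory=date.today)

assessment = ImpactAssessment(
    system_name="resume-screening-assistant",
    purpose="Summarize resumes to inform (not replace) human hiring decisions",
    data_processed=["resume text, parsed locally from applicant uploads"],
    known_limitations=["may mis-parse non-standard resume layouts"],
    performance_metrics=["agreement rate with human reviewers"],
    safeguards=["files stay on company devices", "human review of every decision"],
)
record = asdict(assessment)  # ready to serialize for the annual compliance file
```

Keeping the assessment as data rather than a prose document makes the annual refresh a diff, not a rewrite.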

How local AI helps with Colorado compliance

The impact assessment is significantly simpler with local processing:

  • Data processed: "Documents are processed locally on company devices. Only extracted text content is sent to the language model for analysis."
  • Safeguards: "Files never leave company-controlled devices. OS-level sandboxing restricts code execution. Regional API routing keeps text processing within designated regions."
  • Known limitations: Easier to document when you control the processing pipeline end to end.

Part 2: EU AI Act

What it covers

The EU AI Act classifies AI systems by risk level and imposes obligations proportional to risk. For document processing, the key categories are:

  • High-risk (Annex III): AI used in employment, credit assessment, insurance, law enforcement, migration, administration of justice
  • Limited risk: AI systems that interact directly with users (transparency obligations)
  • Minimal risk: Most general-purpose document processing

Who it applies to

Any organization that:

  • Places AI systems on the EU market (providers)
  • Uses AI systems within the EU (deployers)
  • Is established outside the EU but has the AI system's output used in the EU

This gives the Act extraterritorial reach. A US company whose AI-generated document analyses are used in the EU is within scope.

Checklist for high-risk AI deployers

  • [ ] Confirm risk classification: Map each AI use case to the risk categories. Document why each classification was chosen. General document summarization is typically minimal risk. Document processing that informs employment, credit, insurance, or legal decisions is likely high-risk.

  • [ ] Technical documentation review: Obtain and review the provider's technical documentation. Verify it covers the AI system's intended purpose, capabilities, and limitations.

  • [ ] Human oversight: Establish procedures for human oversight of AI outputs. Designate responsible individuals with appropriate training and authority to override AI decisions.

  • [ ] Input data quality: Ensure documents processed by the AI are appropriate for the use case. Monitor for input data that could cause biased or inaccurate outputs.

  • [ ] Record-keeping: Maintain logs of:

    • AI system use (what documents processed, when, by whom)
    • Decisions informed by AI outputs
    • Any anomalies or errors detected
    • Human oversight actions taken
  • [ ] Risk management: Implement ongoing risk identification, analysis, and mitigation. Review at least annually.

  • [ ] Transparency to affected persons: Inform individuals when decisions affecting them are influenced by high-risk AI systems.

  • [ ] DPIA integration: If the AI processes personal data, integrate the AI Act requirements with your GDPR Data Protection Impact Assessment.

  • [ ] Incident reporting: Report serious incidents (AI system causing death, serious health damage, property damage, environmental damage, or fundamental rights violations) to the relevant national authority.
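
The classification step in the first item above can be made explicit and auditable with a small helper. A sketch under the simplifying assumption that each use case maps to one decision area; the area names are illustrative shorthand for the Annex III categories, and the output is a provisional tier to document, not a legal determination:

```python
# Illustrative shorthand for Annex III decision areas; not an exhaustive list.
HIGH_RISK_AREAS = {
    "employment", "education", "credit", "insurance",
    "law_enforcement", "migration", "justice",
}

def classify_use_case(decision_area: str, user_facing: bool = False) -> str:
    """Provisional EU AI Act risk tier for a document-processing use case."""
    if decision_area in HIGH_RISK_AREAS:
        return "high-risk"
    if user_facing:
        return "limited-risk"  # transparency obligations apply
    return "minimal-risk"

# Record the rationale alongside the result, as the checklist requires.
classification = {
    "use_case": "contract summarization for internal review",
    "decision_area": "general",
    "tier": classify_use_case("general"),
    "rationale": "Summaries do not inform Annex III decisions.",
}
```

Running every use case through the same function, and storing the rationale with the result, gives you the "document why each classification was chosen" artifact for free.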

Checklist for all AI users (including minimal risk)

  • [ ] Transparency for AI interaction: If users interact with an AI system (e.g., chatbot), disclose that they're interacting with AI. Not required if obvious from context.

  • [ ] Synthetic content marking: If AI generates text, images, or other content that could be mistaken for human-created, mark it as AI-generated.

  • [ ] General-purpose AI documentation: If using a general-purpose AI model (like foundation models), verify the model provider complies with Article 53 obligations (technical documentation, copyright compliance, training data summary).

How local AI helps with EU AI Act compliance

  • Record-keeping: Local audit logs capture every agent action, file access, and model call. Exportable for compliance reporting.
  • Risk management: Simpler architecture means fewer risks to manage. No third-party storage, no complex data flows, no sub-processor chains.
  • Input data quality: You see exactly what documents enter the system (they're on your device) and what text reaches the model.
  • Transparency: The processing pipeline is inspectable. Files are parsed locally, text is sent to the model, results return. No black-box processing.

Part 3: HIPAA

What it covers

HIPAA protects Protected Health Information (PHI) -- any individually identifiable health information held or transmitted by a covered entity or business associate. This includes information in medical records, billing records, and any document that contains health-related data linked to an individual.

Who it applies to

  • Covered entities: Healthcare providers, health plans, healthcare clearinghouses
  • Business associates: Organizations that handle PHI on behalf of covered entities -- including technology vendors that process documents containing PHI

If you're a covered entity using AI to process documents containing PHI, HIPAA applies directly. If you're a business associate, HIPAA applies through your BAA.

Checklist for HIPAA-compliant AI document processing

  • [ ] Business Associate Agreement (BAA): If using a cloud AI service that processes PHI, ensure a BAA is in place. The BAA must cover:

    • Permitted uses and disclosures of PHI
    • Safeguards to prevent unauthorized use
    • Breach notification obligations
    • Return or destruction of PHI upon termination
  • [ ] Minimum necessary standard: Process only the minimum PHI necessary for the task. If the AI only needs diagnosis codes, don't send the entire medical record.

  • [ ] Access controls: Implement user authentication and role-based access. Only authorized personnel should use AI tools that process PHI.

  • [ ] Audit controls: Maintain logs of who accessed what PHI through the AI system, when, and for what purpose.

  • [ ] Transmission security: PHI transmitted to any external service must be encrypted in transit (TLS 1.2 or higher).

  • [ ] De-identification consideration: Where possible, de-identify PHI before AI processing. HIPAA's Safe Harbor method requires removing 18 categories of identifiers.

  • [ ] Risk analysis: Conduct a security risk analysis that includes AI document processing. Identify threats, vulnerabilities, and the likelihood and impact of PHI exposure.

  • [ ] Breach notification preparedness: Have a plan for notifying affected individuals, HHS, and (for breaches affecting 500 or more individuals) prominent media outlets within 60 days of discovering a breach.

  • [ ] Workforce training: Train staff on HIPAA requirements as they apply to AI document processing, emphasizing that PHI must never be uploaded to unauthorized services.
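
The de-identification item above can be partially automated as a redaction pass between local parsing and model inference. A sketch only: regex patterns catch a few of the 18 Safe Harbor identifier categories, and the patterns below are illustrative. A production pipeline must cover all 18 categories (free-text names, addresses, dates, and indirect identifiers will not fall to regexes alone) and be validated:

```python
import re

# Illustrative patterns for a few of the 18 Safe Harbor identifier categories.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with category tags before model inference."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

clean = redact("Patient MRN: 4471, call 303-555-0142 or j.doe@example.com")
```

Because parsing happens locally, the redaction runs before any text leaves the device, so the model only ever sees the tagged output.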

How local AI helps with HIPAA compliance

No BAA needed for the file processing layer. Since docrew processes files locally and sends only text (not files) to the model API, the file processing itself doesn't involve a business associate. The model API provider may need a BAA for the text content they process (if it contains PHI), but the relationship is simpler than a cloud AI service that stores and processes your files.

Minimum necessary is structural. Local parsing extracts text from documents. The extracted text can be further filtered before model inference. The raw files -- with their full content -- never leave the device.
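
That filtering step can be enforced in code between local parsing and model inference. A sketch assuming the locally extracted text is available as labeled sections; the section names and helper are hypothetical:

```python
# Hypothetical: a locally parsed medical document, split into labeled sections.
extracted = {
    "diagnosis_codes": "E11.9; I10",
    "patient_name": "Jane Doe",            # PHI not needed for this task
    "full_history": "20 pages of notes",   # PHI not needed for this task
}

def minimum_necessary(sections: dict[str, str], needed: set[str]) -> str:
    """Forward only the sections the AI task actually requires."""
    return "\n".join(f"{k}: {v}" for k, v in sections.items() if k in needed)

# Only diagnosis codes reach the model; the raw file never leaves the device.
prompt_text = minimum_necessary(extracted, needed={"diagnosis_codes"})
```

An allowlist of needed sections, rather than a blocklist of sensitive ones, keeps the default safe: anything not explicitly required stays local.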

Audit controls are built in. Local agent logs capture every file read and model call. These logs satisfy HIPAA's audit control requirements for AI processing activities.

Risk analysis is simpler. Fewer systems touching PHI means fewer threats and vulnerabilities to analyze. The risk assessment focuses on the local device and the model API call -- not on a cloud provider's entire infrastructure.

Cross-framework checklist

Some obligations are common across all three frameworks. Implement these once and satisfy multiple requirements:

  • [ ] Data flow documentation: Map how documents move through your AI processing pipeline. Required by all three frameworks.

  • [ ] Human oversight: Ensure human review of AI-influenced decisions. Required by Colorado AI Act and EU AI Act; best practice for HIPAA.

  • [ ] Access controls: Authenticate and authorize users. Required by HIPAA; expected under the Colorado and EU AI Acts for high-risk systems.

  • [ ] Audit logging: Maintain records of AI processing activities. Required by all three.

  • [ ] Risk assessment: Identify and mitigate risks from AI processing. Required by all three.

  • [ ] Incident response: Plan for and respond to AI-related incidents. Required by all three (Colorado: AG reporting; EU: serious incident reporting; HIPAA: breach notification).

  • [ ] Vendor due diligence: Evaluate AI tool providers' security and compliance. Required by all three for third-party tools.

  • [ ] Training: Ensure staff understand their obligations. Required by HIPAA; best practice for all.

Timeline for action

Now (March 2026):

  • Colorado AI Act is in effect. If you make consequential decisions using AI and affect Colorado residents, you should already be compliant.
  • HIPAA applies to all PHI processing, including AI tools. If you're processing PHI with AI tools that lack BAAs, you're out of compliance.
  • EU AI Act prohibitions are in effect. General-purpose AI model transparency obligations are in effect.

By August 2026:

  • Full EU AI Act high-risk obligations take effect. Complete your conformity assessments, implement risk management systems, and establish human oversight procedures.

Ongoing:

  • Annual impact assessments (Colorado)
  • Annual risk management reviews (EU AI Act, HIPAA)
  • Continuous monitoring and incident response (all three)

The architecture advantage

Compliance isn't just about policies and procedures. It's about architecture.

An architecture that minimizes data exposure, keeps files local, logs every action, and reduces third-party dependencies makes every compliance requirement easier to meet. Not by circumventing the requirements, but by making the answers simpler.

When the auditor asks "where does PHI go during AI processing?" -- "It stays on our device" is a better answer than a flowchart of cloud services.

When the assessment asks "what safeguards prevent unauthorized disclosure?" -- "The files never leave our infrastructure" is stronger than a list of contractual protections at third-party providers.

The regulations are getting more demanding. The architecture that makes compliance simplest is the one that keeps data closest to home.
