How to Build EU AI Act Compliant AI Systems: A Developer's Implementation Guide
The EU AI Act enforcement deadline arrives in less than five months. By August 2, 2026, all high-risk AI systems used in the European Union must comply with strict technical requirements. This applies regardless of where your company is headquartered: if your AI system affects EU residents, the law applies to you.
High-risk AI systems include hiring algorithms, credit scoring tools, medical diagnostics, educational assessment systems, and law enforcement applications. These systems require full compliance with risk management, data governance, technical documentation, audit logging, and human oversight mechanisms. Penalties for violations reach €15 million or 3% of global annual turnover.
This guide provides concrete implementation patterns and code examples for building EU AI Act compliant AI systems. You'll learn how to implement risk classification, audit logging infrastructure, data governance pipelines, human oversight interfaces, and compliance documentation generation.
Quick Test: JuiceFactory API
Before diving into compliance implementation, let's verify JuiceFactory API access:
curl -X POST https://api.juicefactory.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "juicefactory/qwen3-vl",
"messages": [
{"role": "user", "content": "Hello, JuiceFactory!"}
]
}'
Expected response:
{
"id": "chatcmpl-1234567890",
"object": "chat.completion",
"created": 1709251200,
"model": "juicefactory/qwen3-vl",
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I am Qwen3-VL from JuiceFactory."
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 15,
"completion_tokens": 10,
"total_tokens": 25
}
}
This confirms JuiceFactory API is accessible. Get your API key at https://juicefactory.ai/api-key.
System Architecture Overview
┌─────────────────────────────────────────────────────────────────────────┐
│ EU AI ACT COMPLIANT SYSTEM ARCHITECTURE │
└─────────────────────────────────────────────────────────────────────────┘
┌──────────────┐ ┌──────────────────┐ ┌──────────────────────────┐
│ USER │────▶│ HUMAN OVERRIDE │────▶│ JUICEFACTORY API (EU) │
│ INTERFACE │ │ INTERFACE │ │ qwen3-vl Inference │
└──────────────┘ └──────────────────┘ └───────────┬──────────────┘
│ │
│ NO │ YES
▼ ▼
┌─────────────┐ ┌──────────────────┐
│ OVERRIDE │ │ AUDIT LOGGING │
│ ACTION │ │ MIDDLEWARE │
└──────┬──────┘ └────────┬─────────┘
│ │
▼ ▼
┌─────────────────────────────────────────────────┐
│ SECURE STORAGE (EU-HOSTED) │
│ ┌─────────────┐ ┌──────────────┐ ┌──────────┐ │
│ │ Audit Logs │ │ Data │ │ Override │ │
│ │ (Article 12)│ │ Governance │ │ Records │ │
│ └─────────────┘ │ (Article 10)│ │(Art. 14) │ │
│ └──────────────┘ └──────────┘ │
└─────────────────────────────────────────────────┘
│ │
▼ ▼
┌─────────────────────────────────────────────────┐
│ MONITORING & COMPLIANCE LAYER │
│ ┌─────────────┐ ┌──────────────┐ ┌──────────┐ │
│ │ Prometheus │ │ Grafana │ │ Doc Gen │ │
│ │ Metrics │ │ Dashboard │ │ (Art. 11)│ │
│ └─────────────┘ └──────────────┘ └──────────┘ │
└─────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ COMPLIANCE REPORTING │
│ (Article 12: Automatic Record-Keeping) │
│ Available to national competent authorities │
└─────────────────────────────────────────────────┘
KEY:
─────────────────────────────────────────────────────────────────────────────
• JuiceFactory API: EU-hosted, stateless inference with zero data retention
• Audit Logging Middleware: Captures all inputs/outputs (Article 12)
• Human Override Interface: Human intervention point (Article 14)
• Secure Storage: EU-hosted, GDPR-compliant, access controls
• Monitoring Layer: Real-time metrics, override rate tracking
• Compliance Reporting: Automated documentation generation (Article 11)
─────────────────────────────────────────────────────────────────────────────
This architecture satisfies core EU AI Act requirements:
- Article 10: Data governance records in secure EU storage
- Article 11: Technical documentation via doc generator
- Article 12: Automatic logging and record-keeping via middleware
- Article 14: Human oversight interface
- Article 15: Robustness via monitoring layer
In this architecture, JuiceFactory serves as the inference layer: EU-hosted and stateless by default, with audit logging handled by the surrounding middleware.
Understanding the Four Risk Tiers and Where Your System Fits
Every AI system falls into one of four risk tiers under the EU AI Act. Your compliance obligations depend entirely on which tier your system occupies.
Unacceptable Risk (Prohibited): Systems banned outright since February 2025. This includes social scoring, subliminal manipulation, exploitation of vulnerable groups, untargeted facial scraping, biometric categorization by protected characteristics, real-time biometric surveillance in public spaces, predictive policing based purely on profiling, and emotion recognition at work and in education.
High Risk: AI systems used in critical domains like hiring, worker management, credit scoring, healthcare triage, education, critical infrastructure, law enforcement, and administration of justice. These systems require full compliance with risk management (Article 9), data governance (Article 10), technical documentation (Article 11), audit logging (Article 12), transparency (Article 13), human oversight (Article 14), and robustness requirements (Article 15). Full enforcement begins August 2, 2026.
Limited Risk: Chatbots, deepfake generators, and AI-generated content tools. These systems must disclose AI interaction to users and label AI-generated content; the transparency obligations take effect August 2, 2026.
Minimal Risk: Spam filters, recommendation engines, video game AI. Largely unregulated, but other applicable EU law still applies.
Most enterprise and SaaS developers need to focus on the high-risk tier. The implementation patterns in this guide target high-risk AI systems, which carry the strictest requirements and highest penalty exposure.
JuiceFactory's stateless-by-default inference with zero data retention suits high-risk AI systems that need strict GDPR compliance and EU data sovereignty. All inference happens on EU-hosted infrastructure, with no prompt or response storage unless explicitly configured.
Implementing Risk Classification for Your AI System
The foundation of EU AI Act compliance is correctly classifying your AI system into the appropriate risk tier. Article 6 and Annex III provide the legal framework for this classification, but developers need a systematic way to evaluate their systems programmatically.
The Risk Classification Decision Tree
Classification follows a decision tree based on your AI system's intended use, the domain it operates in, and the potential harm to fundamental rights. The key questions are:
- Does the system fall under any prohibited category (social scoring, subliminal manipulation, exploitation of vulnerable groups)? If yes, it's unacceptable risk and must be discontinued.
- Does the system operate in any Annex III listed sector (critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democratic processes)? If yes, it's likely high-risk.
- Is the system a chatbot or content generator? If yes, it's limited risk with transparency obligations.
- If none of the above apply, it's minimal risk.
The prohibition checks must run first: a manipulative system operating in an Annex III sector is prohibited outright, not merely high-risk.
Code: Risk Classification API
Here's a Python FastAPI implementation that classifies AI systems against EU AI Act criteria:
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Optional
from enum import Enum
class RiskTier(str, Enum):
UNACCEPTABLE = "unacceptable"
HIGH = "high"
LIMITED = "limited"
MINIMAL = "minimal"
class AnnexIIICategory(str, Enum):
CRITICAL_INFRASTRUCTURE = "critical_infrastructure"
EDUCATION = "education"
EMPLOYMENT = "employment"
ESSENTIAL_SERVICES = "essential_services"
LAW_ENFORCEMENT = "law_enforcement"
MIGRATION = "migration"
JUSTICE = "justice"
DEMOCRATIC_PROCESSES = "democratic_processes"
class AISystemDescription(BaseModel):
name: str
primary_function: str
sector: Optional[AnnexIIICategory] = None
is_biometric: bool = False
is_emotion_recognition: bool = False
is_social_scoring: bool = False
is_subliminal_manipulation: bool = False
is_chatbot: bool = False
is_content_generator: bool = False
targets_vulnerable_groups: bool = False
app = FastAPI(title="EU AI Act Risk Classification API")
def classify_system(system: AISystemDescription) -> RiskTier:
"""Classify AI system according to EU AI Act risk tiers"""
# Unacceptable risk checks (Article 5)
if system.is_social_scoring:
return RiskTier.UNACCEPTABLE
if system.is_subliminal_manipulation:
return RiskTier.UNACCEPTABLE
if system.targets_vulnerable_groups and system.is_biometric:
return RiskTier.UNACCEPTABLE
if system.is_emotion_recognition and system.sector in [
AnnexIIICategory.EMPLOYMENT, AnnexIIICategory.EDUCATION
]:
return RiskTier.UNACCEPTABLE
# High risk checks (Article 6 and Annex III)
if system.sector is not None:
return RiskTier.HIGH
# Limited risk checks (Article 50)
if system.is_chatbot or system.is_content_generator:
return RiskTier.LIMITED
# Default to minimal risk
return RiskTier.MINIMAL
@app.post("/classify")
async def classify_ai_system(system: AISystemDescription) -> dict:
"""Classify an AI system and return risk tier with explanation"""
risk_tier = classify_system(system)
response = {
"system_name": system.name,
"risk_tier": risk_tier.value,
"compliance_deadline": None,
"key_requirements": [],
"recommendations": []
}
if risk_tier == RiskTier.UNACCEPTABLE:
response["compliance_deadline"] = "ALREADY BANNED (Feb 2025)"
response["key_requirements"] = [
"Immediate discontinuation required",
"No compliance path exists",
"Penalties: €35M or 7% global revenue"
]
response["recommendations"] = [
"Redesign system to remove prohibited functionality",
"Consult legal counsel before proceeding"
]
elif risk_tier == RiskTier.HIGH:
response["compliance_deadline"] = "August 2, 2026"
response["key_requirements"] = [
"Risk management system (Article 9)",
"Data governance (Article 10)",
"Technical documentation (Article 11)",
"Audit logging (Article 12)",
"Human oversight (Article 14)",
"Transparency (Article 13)",
"Accuracy, robustness, cybersecurity (Article 15)"
]
response["recommendations"] = [
"Begin conformity assessment immediately (6-12 months)",
"Engage a Notified Body for certification",
"Implement technical controls covered in this guide"
]
elif risk_tier == RiskTier.LIMITED:
response["compliance_deadline"] = "August 2, 2026"
response["key_requirements"] = [
"User disclosure of AI interaction",
"Labeling of AI-generated content"
]
response["recommendations"] = [
"Implement transparency notices in UI",
"Add metadata to AI-generated outputs"
]
else: # Minimal risk
response["compliance_deadline"] = "None (voluntary codes of practice)"
response["key_requirements"] = [
"Voluntary codes of conduct encouraged",
"Other applicable EU law still applies (GDPR, etc.)"
]
response["recommendations"] = [
"Consider voluntary compliance for trust building",
"Document system capabilities for transparency"
]
return response
# Example usage
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
This API provides a systematic way to classify your AI systems. Use it during system design to ensure you're building compliant architectures from the start, not retrofitting compliance later.
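The same decision tree can be exercised without the web layer. Below is a condensed, dependency-free sketch (the field names mirror `AISystemDescription` above) that's convenient for unit-testing the classification ordering:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemProfile:
    # Mirrors the AISystemDescription fields used by classify_system
    sector: Optional[str] = None          # Annex III sector name, or None
    is_social_scoring: bool = False
    is_subliminal_manipulation: bool = False
    is_biometric: bool = False
    is_emotion_recognition: bool = False
    is_chatbot: bool = False
    is_content_generator: bool = False
    targets_vulnerable_groups: bool = False

def classify(p: SystemProfile) -> str:
    # Order matters: Article 5 prohibitions are checked before Annex III
    if p.is_social_scoring or p.is_subliminal_manipulation:
        return "unacceptable"
    if p.targets_vulnerable_groups and p.is_biometric:
        return "unacceptable"
    if p.is_emotion_recognition and p.sector in ("employment", "education"):
        return "unacceptable"
    if p.sector is not None:
        return "high"
    if p.is_chatbot or p.is_content_generator:
        return "limited"
    return "minimal"

# A hiring screener is high-risk; a standalone chatbot is limited-risk
assert classify(SystemProfile(sector="employment")) == "high"
assert classify(SystemProfile(is_chatbot=True)) == "limited"
assert classify(SystemProfile(is_emotion_recognition=True, sector="education")) == "unacceptable"
assert classify(SystemProfile()) == "minimal"
```

Note the ordering: the prohibition checks run before the Annex III sector check, exactly as in `classify_system`.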
Building Audit Logging Infrastructure (Article 12)
Article 12 of the EU AI Act requires automatic logging for high-risk AI systems. Logs must capture inputs, outputs, timestamps, and model versions, and they must enable traceability of individual decisions. This is non-negotiable for compliance.
Article 12 Requirements Explained
The logging requirements specify:
- Automatic logging of AI system operations
- Traceability of individual decisions
- Retention of logs for a period appropriate to the system's purpose, and at least six months
- Logs must be stored securely with access controls
- Logs must be available to national competent authorities upon request
For high-risk systems, you cannot disable logging. Every inference, decision, or prediction must be captured with sufficient detail to reconstruct what happened and why.
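Before wiring up the middleware, it helps to pin down what a complete log entry must contain. The required-field list below is our own distillation of the requirements above (traceable request ID, model version, timestamp, full inputs and outputs), not an official schema:

```python
REQUIRED_LOG_FIELDS = {
    "request_id": str,   # traceability of individual decisions
    "system_id": str,
    "model": str,        # model version used for the inference
    "timestamp": str,    # ISO 8601 UTC
    "input": dict,       # full prompt/messages as submitted
    "output": (dict, type(None)),  # None only while a request is in flight
}

def validate_log_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is complete."""
    problems = []
    for name, expected in REQUIRED_LOG_FIELDS.items():
        if name not in entry:
            problems.append(f"missing field: {name}")
        elif not isinstance(entry[name], expected):
            problems.append(f"wrong type for {name}")
    return problems

entry = {
    "request_id": "a1b2c3", "system_id": "credit-scoring-system-v1",
    "model": "juicefactory/qwen3-vl", "timestamp": "2026-03-01T12:00:00",
    "input": {"messages": []}, "output": {"content": "..."},
}
assert validate_log_entry(entry) == []
assert validate_log_entry({"system_id": "x"})  # incomplete entry is flagged
```

Running this check before persisting a log entry catches incomplete records at write time instead of during an audit.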
Code: JuiceFactory Integration with Audit Logging
JuiceFactory's OpenAI-compatible API makes it straightforward to integrate audit logging. Here's a Python middleware wrapper that automatically logs all API calls:
import os
import json
import time
from datetime import datetime
from typing import Dict, Any, Optional
from openai import OpenAI
import hashlib
# JuiceFactory API configuration
JUICEFACTORY_API_KEY = os.getenv("JUICEFACTORY_API_KEY")
JUICEFACTORY_BASE_URL = "https://api.juicefactory.ai/v1"
# Secure storage configuration (EU-hosted recommended)
LOG_STORAGE_PATH = "/var/log/ai-act-audit/"
class JuiceFactoryAuditLogger:
"""Middleware wrapper for JuiceFactory API with Article 10 audit logging"""
def __init__(self, api_key: str = JUICEFACTORY_API_KEY):
self.client = OpenAI(
api_key=api_key,
base_url=JUICEFACTORY_BASE_URL
)
self.model_version = None
self._ensure_log_directory()
def _ensure_log_directory(self):
"""Ensure audit log directory exists with proper permissions"""
os.makedirs(LOG_STORAGE_PATH, mode=0o700, exist_ok=True)
def _generate_request_id(self, system_id: str) -> str:
"""Generate unique request identifier"""
timestamp = datetime.utcnow().isoformat()
hash_input = f"{system_id}-{timestamp}".encode()
return hashlib.sha256(hash_input).hexdigest()[:16]
def _log_request(self, request_id: str, log_entry: Dict[str, Any]):
"""Write audit log entry to secure storage"""
log_filename = f"{LOG_STORAGE_PATH}{request_id}.json"
with open(log_filename, 'w') as f:
json.dump(log_entry, f, indent=2, default=str)
def chat(
self,
messages: list,
model: str = "juicefactory/qwen3-vl",
system_id: str = "default",
**kwargs
) -> Dict[str, Any]:
"""JuiceFactory chat completion with automatic audit logging"""
request_id = self._generate_request_id(system_id)
start_time = time.time()
# Capture input for audit log
audit_entry = {
"request_id": request_id,
"system_id": system_id,
"model": model,
"timestamp": datetime.utcnow().isoformat(),
"input": {
"messages": messages,
"parameters": {k: v for k, v in kwargs.items() if k != 'api_key'}
},
"output": None,
"metadata": {
"start_time": start_time,
"end_time": None,
"latency_ms": None,
"status": "started"
}
}
try:
# Make JuiceFactory API call
response = self.client.chat.completions.create(
model=model,
messages=messages,
**kwargs
)
end_time = time.time()
latency_ms = (end_time - start_time) * 1000
# Extract response data
output_data = {
"content": response.choices[0].message.content,
"finish_reason": response.choices[0].finish_reason,
"model_used": response.model
}
# Update audit entry with success
audit_entry["output"] = output_data
audit_entry["metadata"]["end_time"] = end_time
audit_entry["metadata"]["latency_ms"] = latency_ms
audit_entry["metadata"]["status"] = "completed"
# Write audit log
self._log_request(request_id, audit_entry)
return {
"request_id": request_id,
"response": output_data,
"latency_ms": latency_ms
}
except Exception as e:
# Log failure with error details
audit_entry["metadata"]["end_time"] = time.time()
audit_entry["metadata"]["latency_ms"] = (time.time() - start_time) * 1000
audit_entry["metadata"]["status"] = "failed"
audit_entry["error"] = {
"type": type(e).__name__,
"message": str(e)
}
self._log_request(request_id, audit_entry)
raise
# Usage example
if __name__ == "__main__":
logger = JuiceFactoryAuditLogger()
# Example: High-risk credit scoring system
response = logger.chat(
messages=[
{"role": "system", "content": "You are a credit scoring assistant."},
{"role": "user", "content": "Evaluate creditworthiness for applicant with income: €50,000, debt: €10,000"}
],
model="juicefactory/qwen3-vl",
system_id="credit-scoring-system-v1",
temperature=0.3,
max_tokens=500
)
print(f"Request ID: {response['request_id']}")
print(f"Response: {response['response']['content']}")
print(f"Latency: {response['latency_ms']:.2f}ms")
This middleware automatically logs every JuiceFactory API call with full input, output, timestamps, and error handling. Logs are stored securely with unique request IDs for traceability.
Code: Audit Log Query and Analysis
Compliance requires being able to retrieve and analyze audit logs for regulatory reporting. Here's a query and analysis module:
import json
import os
from datetime import datetime, timedelta
from typing import List, Dict, Any, Optional
from collections import defaultdict

# Must match the path used by the audit logging middleware
LOG_STORAGE_PATH = "/var/log/ai-act-audit/"
class AuditLogAnalyzer:
"""Query and analyze audit logs for EU AI Act compliance reporting"""
def __init__(self, log_path: str = LOG_STORAGE_PATH):
self.log_path = log_path
def get_logs_by_date_range(
self,
start_date: datetime,
end_date: datetime,
system_id: Optional[str] = None
) -> List[Dict[str, Any]]:
"""Retrieve logs within date range, optionally filtered by system"""
logs = []
for filename in os.listdir(self.log_path):
if not filename.endswith('.json'):
continue
filepath = os.path.join(self.log_path, filename)
with open(filepath, 'r') as f:
log_entry = json.load(f)
# Parse timestamp
log_time = datetime.fromisoformat(log_entry['timestamp'])
# Filter by date range
if not (start_date <= log_time <= end_date):
continue
# Filter by system ID if provided
if system_id and log_entry.get('system_id') != system_id:
continue
logs.append(log_entry)
return logs
def calculate_metrics(self, logs: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Calculate compliance metrics from audit logs"""
total_requests = len(logs)
successful_requests = sum(1 for log in logs if log['metadata']['status'] == 'completed')
failed_requests = sum(1 for log in logs if log['metadata']['status'] == 'failed')
latency_values = [
log['metadata']['latency_ms']
for log in logs
if log['metadata'].get('latency_ms') is not None
]
avg_latency = sum(latency_values) / len(latency_values) if latency_values else 0
p95_latency = sorted(latency_values)[int(len(latency_values) * 0.95)] if latency_values else 0
# Count by system
requests_by_system = defaultdict(int)
for log in logs:
requests_by_system[log['system_id']] += 1
return {
"reporting_period": {
"total_requests": total_requests,
"successful_requests": successful_requests,
"failed_requests": failed_requests,
"success_rate": (successful_requests / total_requests * 100) if total_requests > 0 else 0
},
"performance": {
"average_latency_ms": round(avg_latency, 2),
"p95_latency_ms": round(p95_latency, 2)
},
"systems": dict(requests_by_system)
}
def generate_compliance_report(
self,
system_id: str,
days: int = 30
) -> str:
"""Generate human-readable compliance report for specific system"""
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=days)
logs = self.get_logs_by_date_range(start_date, end_date, system_id)
metrics = self.calculate_metrics(logs)
report = f"""
EU AI Act Compliance Report
===========================
System ID: {system_id}
Reporting Period: {start_date.date()} to {end_date.date()}
Generated: {datetime.utcnow().isoformat()}
SUMMARY
-------
Total Requests: {metrics['reporting_period']['total_requests']}
Successful: {metrics['reporting_period']['successful_requests']}
Failed: {metrics['reporting_period']['failed_requests']}
Success Rate: {metrics['reporting_period']['success_rate']:.2f}%
PERFORMANCE
-----------
Average Latency: {metrics['performance']['average_latency_ms']}ms
P95 Latency: {metrics['performance']['p95_latency_ms']}ms
COMPLIANCE STATUS
----------------
✓ Article 12 (Automatic Logging): IMPLEMENTED
✓ Traceability: ENABLED (unique request IDs)
✓ Log Retention: CONFIGURED (check storage retention policy)
✓ Secure Storage: IMPLEMENTED (directory permissions 0o700)
Note: This report covers audit log availability and traceability (Article 12).
Full high-risk compliance requires additional controls, including risk
management (Article 9), data governance (Article 10), and human oversight (Article 14).
"""
return report
# Usage example
if __name__ == "__main__":
analyzer = AuditLogAnalyzer()
# Generate last 30 days compliance report
report = analyzer.generate_compliance_report("credit-scoring-system-v1", days=30)
print(report)
This analyzer enables you to generate compliance reports on demand for national competent authorities or internal audits.
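Retention is the flip side of logging: the Act sets a six-month minimum, while GDPR storage-limitation principles discourage keeping personal data indefinitely. Here's a sketch of a retention sweep, assuming JSON log files with the top-level `timestamp` field written by the middleware above; the 400-day window is a placeholder for your documented policy:

```python
import json
import os
from datetime import datetime, timedelta

def find_expired_logs(log_dir: str, retention_days: int = 400) -> list[str]:
    """Return paths of audit log files older than the retention window."""
    cutoff = datetime.utcnow() - timedelta(days=retention_days)
    expired = []
    for name in os.listdir(log_dir):
        if not name.endswith(".json"):
            continue
        path = os.path.join(log_dir, name)
        with open(path) as f:
            entry = json.load(f)
        # Log entries carry an ISO 8601 UTC timestamp
        if datetime.fromisoformat(entry["timestamp"]) < cutoff:
            expired.append(path)
    return expired
```

Deletion itself is deliberately left to the caller, so it can sit behind a documented retention policy and an approval step rather than running unattended.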
Data Governance Pipelines for High-Risk AI
Article 10 of the EU AI Act imposes data governance requirements on high-risk AI systems. This includes training data documentation, data lineage tracking, and ensuring data quality and representativeness.
Training Data Lineage Tracking
For high-risk systems, you must document all training data sources, preprocessing steps, and any data augmentation applied. This documentation must be available for conformity assessment.
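Before wiring this into an orchestrator, the core mechanism is worth isolating: each record embeds a content hash of its input, so a deployed model can be walked back through its preprocessing steps to its source datasets. The record shapes below are illustrative, not a fixed schema:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic content hash used as a provenance link."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:16]

source = {"source_id": "training-data-2026-01", "source_type": "internal"}
source["provenance_hash"] = record_hash(source)

# Each step embeds the hash of its input, forming a verifiable chain
transform = {
    "transformation_id": "preprocessing-v2",
    "input_hash": source["provenance_hash"],
    "type": "normalization",
}
transform["output_hash"] = record_hash(transform)

model = {
    "model_id": "credit-scoring-model",
    "training_data_hash": transform["output_hash"],
}

# Verification: walk the chain back to the source record
assert model["training_data_hash"] == transform["output_hash"]
assert transform["input_hash"] == record_hash(
    {k: v for k, v in source.items() if k != "provenance_hash"}
)
```

Because each hash is computed over the record's sorted JSON, any undocumented change to a source or transformation record breaks the chain and is detectable at audit time.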
Code: Data Governance Workflow
Here's an Airflow DAG that tracks data provenance from source to model deployment:
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
import json
import hashlib
default_args = {
'owner': 'ai-governance',
'depends_on_past': False,
'start_date': datetime(2026, 1, 1),
'email_on_failure': True,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
def document_data_source(**context):
"""Document training data source for EU AI Act compliance"""
data_source = {
"source_id": context['params']['source_id'],
"source_type": context['params']['source_type'], # e.g., "internal", "licensed", "synthetic"
"collection_date": datetime.utcnow().isoformat(),
"data_volume": context['params']['data_volume'],
"retention_period": context['params'].get('retention_period', "7 years"),
"consent_documented": context['params'].get('consent_documented', False),
"gdpr_compliant": context['params'].get('gdpr_compliant', True),
"data_categories": context['params'].get('data_categories', []),
"metadata": {
"collector": context['params'].get('collector', "system"),
"collection_method": context['params'].get('collection_method', "api"),
"quality_checks": context['params'].get('quality_checks', [])
}
}
# Hash for data provenance
data_hash = hashlib.sha256(
json.dumps(data_source, sort_keys=True).encode()
).hexdigest()[:16]
data_source['provenance_hash'] = data_hash
# Save to governance database
with open(f"/var/lib/ai-governance/sources/{data_source['source_id']}.json", 'w') as f:
json.dump(data_source, f, indent=2)
return data_source
def track_transformation(**context):
"""Document data preprocessing and transformation steps"""
transformation = {
"transformation_id": context['params']['transformation_id'],
"input_hash": context['task_instance'].xcom_pull(task_ids='document_data_source')['provenance_hash'],
"transformation_type": context['params']['type'], # e.g., "normalization", "augmentation", "filtering"
"parameters": context['params'].get('parameters', {}),
"output_volume": context['params'].get('output_volume'),
"timestamp": datetime.utcnow().isoformat(),
"metadata": {
"reproducible": context['params'].get('reproducible', True),
"version": context['params'].get('version', "1.0")
}
}
    # Content hash of this step's output; link_to_model pulls this key downstream
    transformation['output_hash'] = hashlib.sha256(
        json.dumps(transformation, sort_keys=True).encode()
    ).hexdigest()[:16]
    # Save transformation record
    with open(f"/var/lib/ai-governance/transformations/{transformation['transformation_id']}.json", 'w') as f:
        json.dump(transformation, f, indent=2)
    return transformation
def link_to_model(**context):
"""Link data lineage to deployed model version"""
model_linkage = {
"model_id": context['params']['model_id'],
"model_version": context['params']['model_version'],
"training_data_hash": context['task_instance'].xcom_pull(task_ids='track_transformation')['output_hash'],
"deployment_date": datetime.utcnow().isoformat(),
"compliance_metadata": {
"article_10_compliant": True,
"data_governance_documented": True,
"last_audit_date": datetime.utcnow().isoformat()
}
}
# Save model linkage
with open(f"/var/lib/ai-governance/models/{model_linkage['model_id']}.json", 'w') as f:
json.dump(model_linkage, f, indent=2)
return model_linkage
dag = DAG(
'ai_act_data_governance',
default_args=default_args,
description='EU AI Act Data Governance Pipeline',
schedule_interval='@daily',
catchup=False
)
# Example DAG run with parameters
t1 = PythonOperator(
task_id='document_data_source',
python_callable=document_data_source,
    params={
'source_id': 'credit-scoring-training-data-2026-01',
'source_type': 'internal',
'data_volume': 50000,
'data_categories': ['financial', 'demographic'],
'collector': 'automated_pipeline',
'quality_checks': ['null_check', 'duplicate_removal', 'outlier_detection']
},
dag=dag
)
t2 = PythonOperator(
task_id='track_transformation',
python_callable=track_transformation,
    params={
'transformation_id': 'credit-scoring-preprocessing-v2',
'type': 'normalization',
'parameters': {'method': 'z-score', 'feature_wise': True},
'output_volume': 50000,
'version': '2.1'
},
dag=dag
)
t3 = PythonOperator(
task_id='link_to_model',
python_callable=link_to_model,
    params={
'model_id': 'credit-scoring-model',
'model_version': '1.5.0'
},
dag=dag
)
t1 >> t2 >> t3
This Airflow DAG documents the complete data lineage from source to deployed model, satisfying Article 10 data governance requirements.
Human Oversight Interfaces and Control Loops
Article 14 requires human oversight for high-risk AI systems. This isn't optional—human operators must be able to understand, monitor, and intervene in AI system operations.
Designing Human Intervention Points
Human oversight requires:
- Clear explanations of AI decisions
- Ability for humans to override decisions
- Monitoring of human override rates
- Training for human operators
Your system architecture must include intervention points where human operators can review and override AI decisions, with full audit logging of all overrides.
Code: Override API Implementation
Here's a FastAPI endpoint for human operators to override AI decisions:
from fastapi import FastAPI, HTTPException, BackgroundTasks
from pydantic import BaseModel
from typing import Optional, List
from datetime import datetime, timedelta
import json
import os
class OverrideRequest(BaseModel):
system_id: str
request_id: str
operator_id: str
override_reason: str
reason_code: str # e.g., "incorrect_risk_score", "missing_context", "appeal"
original_ai_decision: str
human_decision: str
additional_notes: Optional[str] = None
class OverrideResponse(BaseModel):
override_id: str
timestamp: str
status: str
app = FastAPI(title="AI System Human Override API")
AUDIT_LOG_PATH = "/var/log/ai-act-audit/"
def log_override(override_data: dict):
"""Log human override to audit trail"""
override_id = f"override-{datetime.utcnow().timestamp()}"
log_entry = {
"override_id": override_id,
"timestamp": datetime.utcnow().isoformat(),
"system_id": override_data["system_id"],
"original_request_id": override_data["request_id"],
"operator_id": override_data["operator_id"],
"override_reason": override_data["override_reason"],
"reason_code": override_data["reason_code"],
"original_ai_decision": override_data["original_ai_decision"],
"human_decision": override_data["human_decision"],
"additional_notes": override_data.get("additional_notes"),
"metadata": {
"compliance": "Article 14 Human Oversight",
"audit_ready": True
}
}
# Save to audit log
filename = f"{AUDIT_LOG_PATH}{override_id}.json"
with open(filename, 'w') as f:
json.dump(log_entry, f, indent=2)
return override_id
@app.post("/override", response_model=OverrideResponse)
async def create_override(
request: OverrideRequest,
background_tasks: BackgroundTasks
):
"""Create human override record for AI system decision"""
# Validate request
if request.reason_code not in [
"incorrect_risk_score",
"missing_context",
"appeal",
"regulatory_requirement",
"other"
]:
raise HTTPException(status_code=400, detail="Invalid reason code")
    # Record the override in the audit trail
    override_id = log_override(request.dict())
# TODO: In production, trigger downstream actions:
# - Update system behavior if pattern detected
# - Notify compliance team if override rate exceeds threshold
# - Retrain model if systematic bias identified
return OverrideResponse(
override_id=override_id,
timestamp=datetime.utcnow().isoformat(),
status="recorded"
)
@app.get("/overrides/{system_id}")
async def get_system_overrides(
system_id: str,
limit: int = 100,
reason_code: Optional[str] = None
):
"""Retrieve override history for a specific system"""
overrides = []
for filename in os.listdir(AUDIT_LOG_PATH):
if not filename.startswith("override-") or not filename.endswith('.json'):
continue
with open(os.path.join(AUDIT_LOG_PATH, filename), 'r') as f:
override = json.load(f)
if override['system_id'] != system_id:
continue
if reason_code and override['reason_code'] != reason_code:
continue
overrides.append(override)
if len(overrides) >= limit:
break
# Sort by timestamp descending
overrides.sort(key=lambda x: x['timestamp'], reverse=True)
return {
"system_id": system_id,
"total_overrides": len(overrides),
"overrides": overrides[:limit]
}
@app.get("/overrides/{system_id}/analytics")
async def get_override_analytics(system_id: str, days: int = 30):
"""Calculate override analytics for monitoring"""
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=days)
overrides = await get_system_overrides(system_id, limit=10000)
filtered_overrides = [
o for o in overrides['overrides']
if datetime.fromisoformat(o['timestamp']) >= start_date
]
# Calculate metrics
total_overrides = len(filtered_overrides)
reason_counts = {}
for override in filtered_overrides:
reason = override['reason_code']
reason_counts[reason] = reason_counts.get(reason, 0) + 1
top_operators = {}
for override in filtered_overrides:
operator = override['operator_id']
top_operators[operator] = top_operators.get(operator, 0) + 1
return {
"system_id": system_id,
"period_days": days,
"total_overrides": total_overrides,
"override_rate_per_day": round(total_overrides / days, 2),
"top_reasons": sorted(reason_counts.items(), key=lambda x: x[1], reverse=True),
"top_operators": sorted(top_operators.items(), key=lambda x: x[1], reverse=True),
"compliance_note": "High override rates may indicate need for model retraining"
}
This API provides a complete human override interface with audit logging, history retrieval, and analytics for monitoring Article 14 compliance.
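Analytics only help if someone acts on them. A minimal threshold check is sketched below; the 5% override-rate threshold is a placeholder you would calibrate per system, not a figure from the Act:

```python
def override_alert(total_decisions: int, total_overrides: int,
                   threshold: float = 0.05) -> dict:
    """Flag systems whose human-override rate suggests model drift or bias."""
    rate = total_overrides / total_decisions if total_decisions else 0.0
    return {
        "override_rate": round(rate, 4),
        "threshold": threshold,
        "alert": rate > threshold,
        "recommended_action": (
            "Review recent overrides; consider retraining (Articles 14/15)"
            if rate > threshold else "No action required"
        ),
    }

assert override_alert(1000, 12)["alert"] is False   # 1.2% is under threshold
assert override_alert(1000, 80)["alert"] is True    # 8% trips the alert
```

Wiring this into the monitoring layer (for example, as a Prometheus alert rule on an override-rate gauge) closes the loop between Article 14 oversight data and Article 15 robustness maintenance.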
Compliance Documentation Generator
Technical documentation is required for conformity assessment (Article 11). Documentation must include system description, intended use, risk assessment, testing results, and compliance measures.
Required Documentation for Conformity Assessment
The technical documentation must include:
- General description of the AI system
- Detailed explanation of system architecture
- Results of risk assessment
- Testing and validation results
- Data governance documentation
- Human oversight measures
- Transparency and explainability measures
- Robustness, accuracy, and cybersecurity measures
Code: Documentation Generator CLI
Here's a command-line tool that generates compliance documentation from audit logs and system metadata:
#!/usr/bin/env python3
"""
EU AI Act Compliance Documentation Generator
Generates technical documentation for conformity assessment
"""
import argparse
import json
from datetime import datetime
from pathlib import Path
from typing import Dict, Any
class DocumentationGenerator:
"""Generate EU AI Act compliance documentation from system metadata"""
def __init__(self, system_metadata_path: str, audit_log_path: str):
self.system_metadata = self._load_json(system_metadata_path)
self.audit_log_path = audit_log_path
def _load_json(self, path: str) -> Dict[str, Any]:
with open(path, 'r') as f:
return json.load(f)
def _generate_section_1_general_description(self) -> str:
"""Article 11.1: General description of the AI system"""
metadata = self.system_metadata
return f"""
# 1. General Description of the AI System
## 1.1 System Overview
**System ID:** {metadata['system_id']}
**System Name:** {metadata['system_name']}
**Version:** {metadata['version']}
**Provider:** {metadata['provider']}
**Date of Documentation:** {datetime.utcnow().isoformat()}
## 1.2 Intended Purpose
{metadata['intended_purpose']}
## 1.3 Intended Users
{metadata['intended_users']}
## 1.4 Risk Classification
**Risk Tier:** {metadata['risk_tier']}
**Annex III Category:** {metadata.get('annex_iii_category', 'N/A')}
## 1.5 Deployment Environment
**Hosting Location:** {metadata['hosting_location']}
**EU-Hosted:** {metadata['eu_hosted']}
**Data Residency:** {metadata['data_residency']}
"""
def _generate_section_2_architecture(self) -> str:
"""Article 11.2: System architecture description"""
metadata = self.system_metadata
return f"""
# 2. System Architecture
## 2.1 Model Information
**Model Type:** {metadata['model']['type']}
**Model Name:** {metadata['model']['name']}
**Model Version:** {metadata['model']['version']}
**Provider API:** {metadata['model']['api_endpoint']}
## 2.2 Data Flow
{self._describe_data_flow()}
## 2.3 Integration Points
{self._describe_integrations()}
"""
def _describe_data_flow(self) -> str:
"""Describe system data flow"""
return """
1. User submits request to application interface
2. Request passes through human override check (Article 14)
3. Request forwarded to JuiceFactory API
4. Audit logging middleware captures input/output (Article 12)
5. Model generates response
6. Response logged and returned to user
7. Human operator can override decision if needed
8. All overrides logged for audit trail
"""
def _describe_integrations(self) -> str:
"""Describe system integrations"""
metadata = self.system_metadata
integrations = []
for integration in metadata.get('integrations', []):
integrations.append(f"- {integration['name']}: {integration['purpose']}")
return "\n".join(integrations) if integrations else "No external integrations"
def _generate_section_3_risk_assessment(self) -> str:
"""Article 11.3: Risk assessment results"""
metadata = self.system_metadata
return f"""
# 3. Risk Assessment
## 3.1 Identified Risks
{self._list_risks()}
## 3.2 Risk Mitigation Measures
{self._list_mitigations()}
## 3.3 Residual Risk
**Acceptable:** {metadata['risk_assessment']['residual_risk_acceptable']}
**Justification:** {metadata['risk_assessment']['residual_risk_justification']}
"""
def _list_risks(self) -> str:
"""List identified risks"""
risks = self.system_metadata.get('risk_assessment', {}).get('identified_risks', [])
return "\n".join([f"- **{r['category']}:** {r['description']} (Severity: {r['severity']})" for r in risks])
def _list_mitigations(self) -> str:
"""List risk mitigation measures"""
mitigations = self.system_metadata.get('risk_assessment', {}).get('mitigations', [])
return "\n".join([f"- **{m['risk_category']}:** {m['measure']}" for m in mitigations])
def _generate_section_4_testing(self) -> str:
"""Article 11.4: Testing and validation results"""
metadata = self.system_metadata
return f"""
# 4. Testing and Validation
## 4.1 Testing Methodology
{metadata['testing']['methodology']}
## 4.2 Test Results
**Accuracy:** {metadata['testing']['accuracy']}%
**Precision:** {metadata['testing']['precision']}%
**Recall:** {metadata['testing']['recall']}%
**F1 Score:** {metadata['testing']['f1_score']}%
## 4.3 Bias Testing
{self._describe_bias_testing()}
## 4.4 Robustness Testing
{self._describe_robustness_testing()}
"""
def _describe_bias_testing(self) -> str:
"""Describe bias testing results"""
bias_tests = self.system_metadata.get('testing', {}).get('bias_tests', [])
if not bias_tests:
return "No bias testing results available."
return "\n".join([f"- **{t['attribute']}:** {t['result']}" for t in bias_tests])
def _describe_robustness_testing(self) -> str:
"""Describe robustness testing results"""
robustness = self.system_metadata.get('testing', {}).get('robustness', {})
return f"""
- **Adversarial Attacks:** {robustness.get('adversarial_attacks', 'Not tested')}
- **Data Drift:** {robustness.get('data_drift', 'Not tested')}
- **Edge Cases:** {robustness.get('edge_cases', 'Not tested')}
"""
def _generate_section_5_compliance_measures(self) -> str:
"""Article 11.5: Compliance with AI Act requirements"""
metadata = self.system_metadata
return f"""
# 5. AI Act Compliance Measures
## 5.1 Article 9: Risk Management System
**Implemented:** {metadata['compliance']['article_9_risk_management']['implemented']}
**Description:** {metadata['compliance']['article_9_risk_management']['description']}
## 5.2 Article 10: Data and Data Governance
**Implemented:** {metadata['compliance']['article_10_data_governance']['implemented']}
**Description:** {metadata['compliance']['article_10_data_governance']['description']}
## 5.3 Article 11: Technical Documentation
**Implemented:** YES (This document)
## 5.4 Article 12: Record-Keeping (Automatic Logging)
**Implemented:** {metadata['compliance']['article_12_record_keeping']['implemented']}
**Description:** {metadata['compliance']['article_12_record_keeping']['description']}
## 5.5 Article 13: Transparency
**Implemented:** {metadata['compliance']['article_13_transparency']['implemented']}
**Description:** {metadata['compliance']['article_13_transparency']['description']}
## 5.6 Article 14: Human Oversight
**Implemented:** {metadata['compliance']['article_14_human_oversight']['implemented']}
**Description:** {metadata['compliance']['article_14_human_oversight']['description']}
## 5.7 Article 15: Accuracy, Robustness, Cybersecurity
**Implemented:** {metadata['compliance']['article_15_accuracy_robustness']['implemented']}
**Description:** {metadata['compliance']['article_15_accuracy_robustness']['description']}
"""
def generate_full_documentation(self) -> str:
"""Generate complete technical documentation"""
sections = [
"# EU AI Act Technical Documentation",
f"**Generated:** {datetime.utcnow().isoformat()}",
"",
self._generate_section_1_general_description(),
"",
self._generate_section_2_architecture(),
"",
self._generate_section_3_risk_assessment(),
"",
self._generate_section_4_testing(),
"",
self._generate_section_5_compliance_measures(),
"",
"---",
"",
"This documentation is generated automatically from system metadata and audit logs. ",
"For conformity assessment, submit this document along with the full audit log archive ",
"to your designated Notified Body."
]
return "\n".join(sections)
def main():
parser = argparse.ArgumentParser(description='Generate EU AI Act compliance documentation')
parser.add_argument('--metadata', required=True, help='Path to system metadata JSON file')
parser.add_argument('--audit-log-path', default='/var/log/ai-act-audit/', help='Path to audit log directory')
parser.add_argument('--output', default='eu-ai-act-compliance-doc.md', help='Output filename')
args = parser.parse_args()
generator = DocumentationGenerator(args.metadata, args.audit_log_path)
documentation = generator.generate_full_documentation()
with open(args.output, 'w') as f:
f.write(documentation)
print(f"Documentation generated: {args.output}")
if __name__ == '__main__':
main()
Save this as generate-compliance-doc.py, make it executable (chmod +x), and run with your system metadata JSON file.
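The generator expects a metadata file whose keys match the fields read in the code above. Here's a minimal, hypothetical system-metadata.json written out via Python — every key shown is one the DocumentationGenerator accesses, and all values are illustrative placeholders, not recommendations:

```python
# Build a minimal, hypothetical system-metadata.json for the generator above.
# Field names mirror what DocumentationGenerator reads; values are placeholders.
import json

metadata = {
    "system_id": "credit-scoring-system-v1",
    "system_name": "Credit Scoring Assistant",
    "version": "1.0.0",
    "provider": "Example Corp",
    "intended_purpose": "Assist analysts in scoring consumer credit applications.",
    "intended_users": "Trained credit analysts",
    "risk_tier": "high",
    "annex_iii_category": "Access to essential private services",
    "hosting_location": "EU (Sweden)",
    "eu_hosted": True,
    "data_residency": "EU",
    "model": {
        "type": "LLM",
        "name": "juicefactory/qwen3-vl",
        "version": "2025-01",
        "api_endpoint": "https://api.juicefactory.ai/v1/chat/completions",
    },
    "integrations": [],
    "risk_assessment": {
        "identified_risks": [
            {"category": "bias", "description": "Disparate impact on protected groups", "severity": "high"}
        ],
        "mitigations": [
            {"risk_category": "bias", "measure": "Quarterly fairness audits with human review"}
        ],
        "residual_risk_acceptable": True,
        "residual_risk_justification": "Residual risk reduced to an acceptable level by human oversight.",
    },
    "testing": {
        "methodology": "Holdout evaluation on a representative EU dataset.",
        "accuracy": 94.2, "precision": 93.1, "recall": 91.8, "f1_score": 92.4,
        "bias_tests": [{"attribute": "age", "result": "No significant disparity detected"}],
        "robustness": {"adversarial_attacks": "Passed", "data_drift": "Monitored", "edge_cases": "Passed"},
    },
    # One entry per compliance article the generator's section 5 reads
    "compliance": {
        art: {"implemented": True, "description": "See section 5."}
        for art in (
            "article_9_risk_management", "article_10_data_governance",
            "article_12_record_keeping", "article_13_transparency",
            "article_14_human_oversight", "article_15_accuracy_robustness",
        )
    },
}

with open("system-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

With this file in place, run ./generate-compliance-doc.py --metadata system-metadata.json to produce the markdown document.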
Monitoring and Reporting Dashboard
Real-time monitoring is essential for Article 12 compliance and for demonstrating that your AI system operates as intended.
Key Metrics to Track
For EU AI Act compliance, monitor:
- Request/response rates and error rates
- Human override rates and reason codes
- Model performance metrics (accuracy, latency)
- Audit log completeness
- Data drift indicators
Code: Prometheus Metrics and Grafana Dashboard
Here's a Prometheus metrics collector and Grafana dashboard configuration:
from prometheus_client import Counter, Histogram, Gauge, start_http_server
import time
# Prometheus metrics for EU AI Act compliance
REQUESTS_TOTAL = Counter(
'ai_act_requests_total',
'Total number of AI system requests',
['system_id', 'status', 'model']
)
REQUEST_LATENCY = Histogram(
'ai_act_request_latency_seconds',
'Request latency in seconds',
['system_id', 'model'],
buckets=[0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0]
)
HUMAN_OVERRIDES_TOTAL = Counter(
'ai_act_human_overrides_total',
'Total number of human overrides',
['system_id', 'reason_code', 'operator_id']  # operator_id is per-user; watch label cardinality
)
COMPLIANCE_STATUS = Gauge(
'ai_act_compliance_status',
'Compliance status (1 = compliant, 0 = non-compliant)',
['system_id', 'requirement']
)
class PrometheusMetrics:
"""Collect and expose EU AI Act compliance metrics"""
@staticmethod
def record_request(system_id: str, model: str, latency_ms: float, status: str):
"""Record request metric"""
REQUESTS_TOTAL.labels(
system_id=system_id,
status=status,
model=model
).inc()
REQUEST_LATENCY.labels(
system_id=system_id,
model=model
).observe(latency_ms / 1000.0) # Convert to seconds
@staticmethod
def record_override(system_id: str, reason_code: str, operator_id: str):
"""Record human override metric"""
HUMAN_OVERRIDES_TOTAL.labels(
system_id=system_id,
reason_code=reason_code,
operator_id=operator_id
).inc()
@staticmethod
def set_compliance_status(system_id: str, requirement: str, compliant: bool):
"""Set compliance status for specific requirement"""
COMPLIANCE_STATUS.labels(
system_id=system_id,
requirement=requirement
).set(1 if compliant else 0)
@staticmethod
def start_metrics_server(port: int = 8000):
"""Start Prometheus metrics HTTP server"""
start_http_server(port)
print(f"Prometheus metrics server started on port {port}")
print(f"Access metrics at http://localhost:{port}/metrics")
# Example usage
if __name__ == "__main__":
PrometheusMetrics.start_metrics_server(port=9090)
# Simulate some requests
system_id = "credit-scoring-system-v1"
model = "juicefactory/qwen3-vl"
for i in range(10):
latency = 50 + (i * 10) # Simulated latency
status = "success" if i < 9 else "error"
PrometheusMetrics.record_request(
system_id=system_id,
model=model,
latency_ms=latency,
status=status
)
time.sleep(0.1)
# Simulate override
PrometheusMetrics.record_override(
system_id=system_id,
reason_code="incorrect_risk_score",
operator_id="operator-001"
)
# Set compliance status
PrometheusMetrics.set_compliance_status(
system_id=system_id,
requirement="article_10_logging",
compliant=True
)
# Keep server running
while True:
time.sleep(1)
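Prometheus must be told to scrape this exporter. A minimal scrape job might look like the following — the job name is an assumption, and the target port matches the `start_metrics_server(port=9090)` call in the example above (in production, pick a port that doesn't collide with the Prometheus server's own default of 9090):

```yaml
# prometheus.yml — minimal scrape job for the metrics server above (illustrative)
scrape_configs:
  - job_name: "ai-act-compliance"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9090"]   # port passed to start_metrics_server()
```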
Grafana dashboard JSON (import into Grafana):
{
"dashboard": {
"title": "EU AI Act Compliance Dashboard",
"panels": [
{
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "rate(ai_act_requests_total[5m])",
"legendFormat": "{{system_id}} - {{status}}"
}
]
},
{
"title": "Request Latency (P95)",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, rate(ai_act_request_latency_seconds_bucket[5m]))",
"legendFormat": "{{system_id}}"
}
]
},
{
"title": "Human Override Rate",
"type": "graph",
"targets": [
{
"expr": "rate(ai_act_human_overrides_total[1h])",
"legendFormat": "{{reason_code}}"
}
]
},
{
"title": "Compliance Status",
"type": "stat",
"targets": [
{
"expr": "ai_act_compliance_status",
"legendFormat": "{{requirement}}"
}
]
}
]
}
}
This monitoring setup provides real-time visibility into your AI system's compliance status, essential for Article 12 requirements and for demonstrating compliance to regulators.
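To act on these metrics rather than only chart them, you can attach Prometheus alerting rules. The sketch below flags sustained high override rates and any non-compliant status; the 10% threshold and durations are illustrative choices, not values mandated by the Act:

```yaml
# alert-rules.yml — illustrative thresholds, not regulatory requirements
groups:
  - name: ai-act-compliance
    rules:
      - alert: HighHumanOverrideRate
        # Overrides exceeding 10% of requests over 1h may signal model drift
        expr: |
          sum(rate(ai_act_human_overrides_total[1h]))
            / sum(rate(ai_act_requests_total[1h])) > 0.10
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "Human override rate above 10% — review model performance"
      - alert: ComplianceStatusDown
        expr: ai_act_compliance_status == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "A tracked AI Act requirement is reporting non-compliant"
```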
FAQ
Do I need to comply if my company is not based in the EU?
Yes. The EU AI Act has extraterritorial reach and applies to any AI system used by EU residents, regardless of where the company is headquartered. If your AI products or services are used by people in the European Union—or if AI outputs are used in the EU—you're covered by this law.
What happens if I miss the August 2026 deadline?
Penalties for high-risk AI system violations can reach €15 million or 3% of global annual turnover, whichever is higher. Conformity assessment typically takes 6-12 months, so organizations starting today have little margin. Beyond financial penalties, national competent authorities can order suspension of AI system operations and mandatory recalls of non-compliant products from the EU market.
How does JuiceFactory help with EU AI Act compliance?
JuiceFactory provides stateless-by-default AI inference with zero data retention, hosted entirely in the EU (Sweden). This simplifies GDPR and AI Act compliance for high-risk systems that require strict data sovereignty and audit trails. The OpenAI-compatible API allows seamless integration with existing compliance tooling while ensuring all inference happens on EU infrastructure with full support for Article 12 record-keeping.
What's the difference between high-risk and limited-risk AI systems?
High-risk systems (Article 6 and Annex III) are used in critical domains like hiring, credit scoring, healthcare triage, education, critical infrastructure, and law enforcement. These require full compliance with risk management systems, data governance, technical documentation, automatic logging, and human oversight (Articles 9-15). Limited-risk systems (Article 50) like chatbots and content generators only require transparency obligations—disclosing AI interaction to users and labeling AI-generated content.
Do I need a Notified Body for conformity assessment?
Not necessarily. Under Article 43, most Annex III high-risk systems follow the internal-control conformity assessment procedure in Annex VI — a documented self-assessment by the provider. Notified Body involvement is required mainly for biometric systems (Annex III, point 1) where harmonised standards are not applied in full, and for products already subject to third-party assessment under Annex I sectoral legislation. Notified Bodies are independent organizations designated by EU member states to certify AI systems against AI Act requirements. Either route demands complete technical documentation and typically takes 6-12 months of preparation, so work should begin immediately to meet the August 2026 deadline.
Conclusion
Building EU AI Act compliant AI systems requires systematic implementation of risk classification, audit logging, data governance, human oversight, and compliance documentation. The August 2, 2026 deadline is approaching fast, and conformity assessment takes 6-12 months—organizations starting now barely have enough time.
Key takeaways:
- Risk classification is the foundation of compliance. Classify your systems early and design architecture accordingly.
- Audit logging and data governance are non-negotiable for high-risk systems. Implement Article 10 data governance and Article 12 logging controls from day one.
- Human oversight interfaces must be built into system architecture, not added as an afterthought.
- Compliance documentation must be prepared before the deadline and be ready for conformity assessment — whether through a Notified Body or the internal-control procedure.
JuiceFactory's EU-hosted, stateless-by-default inference infrastructure simplifies compliance for high-risk systems. With zero data retention by default and full API compatibility with OpenAI SDKs, you get GDPR-compliant AI inference without the complexity of managing infrastructure yourself.
Get started with JuiceFactory's EU-hosted, GDPR-compliant AI inference