Built for EU AI Act Compliance
AI you can trust. Compliance you can prove.
Semantikmatch is built from the ground up to meet the strictest regulatory requirements — EU AI Act, GDPR, and institutional governance. Every decision is explainable. Every piece of data is protected.
EU AI Act
Compliant from day one.
The European AI Act classifies AI systems used in education as high-risk (Annex III, point 3(a)). Semantikmatch is built to meet those requirements.
Aug 2, 2026
Date of enforcement
€15M or 3%
Maximum fine (of global annual turnover, whichever is higher)
18-24 months
Ahead of the market on regulatory compliance.
6
AI Act compliance pillars covered.
Of EU/UK RFPs now require AI Act compliance
Risk Management System — integrated, documented, audited.
Data Governance & Bias Testing — continuous statistical monitoring across demographics, nationalities, and document types.
Technical Documentation — available to supervisory authorities.
Record-Keeping & Audit Trails — complete log of every AI decision: timestamp, model version, inputs, outcome.
Human Oversight — every AI decision requires human validation. The human always decides last.
Accuracy & Robustness Testing — finalisation in progress. Generic AI tools (ChatGPT, Claude) are black boxes and cannot comply by default; Semantikmatch provides the compliance layer.
GDPR
Your data, safeguarded by design
At Semantikmatch, data privacy isn't an afterthought — it's a foundational principle.
Full audit trails for all data interactions.
End-to-end encryption — TLS 1.2+ in transit, AES-256 at rest.
Support for Article 22 compliance, including human review pathways and explicit consent mechanisms.
European data hosting in ISO 27001-certified centres (France & Germany).
Granular access controls with multi-factor authentication — IAM-restricted server access.
Daily backups with GDPR-compliant retention.
Quarterly third-party security audits and penetration tests.
Data Processing Agreement (DPA) available upon request.
Trust & Governance
Well-governed AI isn't just a legal requirement — it's what makes AI actually useful.
Semantikmatch is built on transparency, accountability, and human control.
LAYER 1
Trust Infrastructure
SHAP (explainability), Audit Trails, Human-in-the-Loop, Bias Testing, AI Act Engine. The foundation of the entire system.
LAYER 2
Specialised AI Agents
Fraud detection (>98% precision, £5-10 per candidate vs £50-200 for competitors), English-level assessment (IELTS correlation >0.85, 50+ accents, 12 languages), deepfake detection. Each agent is auditable and explainable.
LAYER 3
Workflow Orchestration
Multi-agent coordination, configurable rules, SIS/CRM/LMS integration. Institutions define their own workflows and escalation paths.
Fairness — deterministic scoring designed to reduce bias, with continuous monitoring.
Explainability — SHAP technology on every ML model, no black boxes.
Human Control — AI assists, humans decide.
Accountability — clear roles and responsibilities on both the Semantikmatch and institution side.
Continuous Improvement — ongoing model monitoring and network effects: each client improves accuracy for all.
EXCLUSIVE PARTNERSHIPS