2026 is the year multi-agent systems replace standalone LLM calls as the dominant AI architecture. McKinsey puts the market for agentic AI platforms at USD 31 billion — 290% growth over 2024. Yet while vendors release new frameworks daily, Swiss decision-makers face a practical question: LangGraph, CrewAI, AutoGen or the OpenAI Agents SDK? Over the past 14 months, we at mazdek have completed 23 production multi-agent deployments for Swiss companies — from fiduciary pipelines to insurance classification to RAG-orchestrated compliance reviews. This guide distills the lessons: which framework for which workload, which Swiss governance pitfalls, which cost drivers. Our PROMETHEUS agent guides architecture, HERACLES orchestrates tools and APIs, ARES secures compliance, ARGUS delivers 24/7 observability — all revDSG, EU AI Act and FINMA compliant.
Why multi-agent systems are becoming the standard in 2026
A multi-agent system orchestrates multiple specialized LLM agents — each with its own role, tools and memory — to solve a task that a single agent cannot reliably handle. Instead of one large prompt with twenty tasks, the system distributes subtasks across researcher, writer, verifier, critic and tool-user. Three drivers pushed this pattern over the threshold in 2026:
- Reasoning models (Claude 4.7, o4, DeepSeek-R2): only since LLMs can reliably create plans and self-critique does multi-agent orchestration become worthwhile.
- Model Context Protocol (MCP): standardizes tool integration. What in 2024 still required 8-12 weeks of custom glue code now takes two days in 2026. See our MCP guide.
- Token prices in free fall: Claude Haiku, GPT-5 nano and Llama 4 Mini make it economically viable to run five agents in a loop for roughly what a single GPT-4 call cost in 2024.
«A single-agent setup in 2026 is what a single-service backend was in 2014: nostalgic, occasionally sufficient, but chronically underperforming for enterprise workloads. At mazdek we are seeing the same architectural leap we saw back then in the move from monoliths to microservices — only with agents as the services and the LLM as the runtime.»
— PROMETHEUS, AI & Machine Learning Agent at mazdek
The multi-agent framework landscape 2026
The dominant frameworks of 2026 barely differ in the core idea, but dramatically in philosophy, Swiss fit and production maturity:
| Framework | Vendor | License | Architecture | Production maturity | Swiss fit |
|---|---|---|---|---|---|
| LangGraph | LangChain | MIT | Stateful Graph (DAG + Loops) | High — v1.0 Q4 2025 | Very good |
| CrewAI | CrewAI Inc. | MIT (Core) | Role-based Crew | Medium-high | Good (self-hosted) |
| AutoGen 0.4+ | Microsoft Research | CC-BY 4.0 | Conversational Multi-Agent | Medium — Azure-oriented | Medium |
| OpenAI Agents SDK | OpenAI | MIT | Lightweight Handoffs (Swarm heritage) | Medium — very new | Limited (US API) |
| Semantic Kernel | Microsoft | MIT | Plugins + Plans | High (Enterprise .NET) | Good (Azure CH) |
| llama-index Workflows | LlamaIndex | MIT | Event-driven Workflows | Medium | Very good |
| Pydantic AI | Pydantic | MIT | Type-safe Agents | Medium-high | Very good |
In Swiss production deployments we see four archetypes in 2026 — depending on workload profile:
- LangGraph: the pragmatic enterprise standard. Stateful, deterministic, debuggable. Our default choice for compliance pipelines, ETL orchestration, multi-step reasoning.
- CrewAI: the gold standard when the domain thinks in roles — researcher, writer, reviewer, editor. Content pipelines, marketing, research reports.
- AutoGen 0.4: research and code generation. In production often chatty (= expensive), but unbeatable for deep reasoning and group-chat patterns.
- OpenAI Agents SDK: the lightweight stack. When US data sovereignty is acceptable and the use case only needs simple handoffs — ideal for ChatGPT-centric workflows.
Architecture comparison: how the four frameworks think
The decisive difference lies in the control topology: who decides which agent runs next?
+----------------------+  +----------------------+
| LangGraph            |  | CrewAI               |
|                      |  |                      |
| +--+  cond   +--+    |  |   Manager (Router)   |
| |A1|-------->|A2|    |  |    /     |     \     |
| +--+         +--+    |  |  Res.  Writer  Crit  |
|  cond         |      |  |    \     |     /     |
|   |           v      |  |     -- Output --     |
|   v                  |  |                      |
| +--+    +---+        |  |   hierarchical /     |
| |A3|--->|END|        |  |   sequential         |
| +--+    +---+        |  |                      |
|                      |  |                      |
| Stateful Graph       |  | Role-based Crew      |
+----------------------+  +----------------------+

+----------------------+  +----------------------+
| AutoGen              |  | OpenAI Agents SDK    |
|                      |  |                      |
|  GroupChatManager    |  |   Triage agent       |
|        |             |  |      | handoff       |
|        v             |  |      v               |
| ( A1 -- A2 -- A3 )   |  |   Sales agent        |
|        ^             |  |      | handoff       |
|        | speak_again |  |      v               |
|        v             |  |   Refund agent       |
|     Critic           |  |                      |
|                      |  |   Stateless          |
| Conversational loop  |  | Function calls       |
+----------------------+  +----------------------+
Almost everything else follows from this topology — debugging, cost profiles, failure modes, observability:
- LangGraph (Stateful Graph): every node is a function, edges are conditional transitions. State is passed persistently between nodes (checkpointer in Postgres / Redis). Reproducible, easy to test, gold standard for FINMA-auditable workflows.
- CrewAI (Role-based Crew): a manager LLM distributes subtasks to specialized agents. Higher token costs through delegation, but semantically very clear roles — stakeholders understand the architecture immediately.
- AutoGen (GroupChat): agents speak in a shared channel; a selector LLM determines the next speaker. Powerful, but without cost caps the conversation explodes.
- OpenAI Agents SDK (Handoffs): each agent has a list of other agents it can hand off to. Very lightweight, but state management is the developer's responsibility.
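The handoff topology can be sketched without any framework at all: each agent is a function that returns either a final result or the next agent to call, and the runner owns whatever state survives the handoff — which is exactly the responsibility the SDK leaves with the developer. A minimal illustration (hypothetical agent names, no SDK involved):

```python
# Minimal handoff loop: agents are plain functions; the runner owns the state.
# Hypothetical sketch -- real SDKs wrap this in Agent/Runner abstractions.

def triage(state):
    # Decide who handles the request; return (next_agent, state).
    if "refund" in state["request"]:
        return refund_agent, state
    return sales_agent, state

def sales_agent(state):
    state["result"] = "routed to sales"
    return None, state  # None = done, no further handoff

def refund_agent(state):
    state["result"] = "refund initiated"
    return None, state

def run(entry, state, max_handoffs=5):
    # Cap handoffs so a routing bug cannot loop forever.
    agent = entry
    for _ in range(max_handoffs):
        agent, state = agent(state)
        if agent is None:
            return state
    raise RuntimeError("handoff limit exceeded")

final = run(triage, {"request": "refund for invoice 42"})
```

The `max_handoffs` cap is the part the stateless pattern makes easy to forget — without it, two agents that hand off to each other loop until the token budget is gone.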
Reference architecture: the Swiss-Sovereign multi-agent stack
Regardless of the framework — every mazdek production deployment follows an 8-layer architecture. These layers are explicitly framework-agnostic, so a framework switch is possible without re-architecting:
+------------------------------------------------------------+
| 1. UI / Trigger: IRIS · n8n · Slack · Client portal |
+-----------------------------+------------------------------+
| Task brief + Context
v
+-----------------------------+------------------------------+
| 2. Intent + Routing: PROMETHEUS — Single vs. Multi-Agent |
| - simple -> SLM (Phi-4, Gemma 3) without agents |
| - workflow-> Multi-Agent (LangGraph / CrewAI) |
| - chat -> AutoGen GroupChat |
+-----------------------------+------------------------------+
| Framework choice
v
+-----------------------------+------------------------------+
| 3. Agent layer: PROMETHEUS-orchestrated |
| - Planner · Researcher · Writer · Verifier · Critic |
| - Reasoning calls via Claude 4.7 / DeepSeek-R2 |
+-----------------------------+------------------------------+
| Tool call (MCP)
v
+-----------------------------+------------------------------+
| 4. Tool layer: HERACLES — MCP bus |
| - SAP · Salesforce · DB · Vector DB · Web · Code Sbx |
| - Auth: OAuth 2.1 · mTLS · Service Tokens |
+-----------------------------+------------------------------+
| Result + Audit
v
+-----------------------------+------------------------------+
| 5. Memory: ORACLE — Short / Long / Episodic |
| - Postgres (Checkpoints) · pgvector (Episodes) |
| - Mem0 / Letta for cross-session learning |
+-----------------------------+------------------------------+
| Validated state
v
+-----------------------------+------------------------------+
| 6. Guardrails: ARES — PII · Prompt injection · Policy |
| - Llama Guard 3 · NeMo Guardrails · LlamaFirewall |
+-----------------------------+------------------------------+
| Compliant action
v
+-----------------------------+------------------------------+
| 7. Observability: ARGUS — Langfuse + OpenTelemetry |
| - Trace replay · Cost caps · Eval regression |
| - WORM archive 10y for FINMA / EU AI Act |
+-----------------------------+------------------------------+
| Metrics + Events
v
+-----------------------------+------------------------------+
| 8. Infrastructure: HEPHAESTUS — Swiss GPU / Bedrock CH |
| K8s · vLLM · ISO-27001 · revDSG · 99.95% SLA |
+------------------------------------------------------------+
Three layers deserve special attention:
- Intent + Routing: 65-80% of all requests do not need a multi-agent system. A classifier decides whether a single small-language-model call suffices or orchestration is required — a 4-7x cost saving. See our SLM article.
- Tool layer (HERACLES): the most important lever for speed and cost. MCP standardizes the tool API; all four frameworks support MCP natively in 2026.
- Guardrails (ARES): in multi-agent systems the hallucination risk multiplies. Each agent can quietly leak PII or pass on a prompt injection. Without Llama Guard 3 or NeMo Guardrails, no Swiss deployment goes to production.
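The routing layer does not need an LLM for the first cut; a cheap heuristic (or a small trained classifier) can triage before any agent spins up. A simplified sketch of the layer-2 decision — the categories mirror the diagram above, but the rules are illustrative, not our production logic:

```python
# Heuristic intent router: single SLM call vs. multi-agent orchestration.
# Illustrative rules -- a production router would use a trained classifier.

def route(task: str, steps_estimate: int, needs_tools: bool, interactive: bool) -> str:
    if interactive:
        return "groupchat"      # open-ended discussion -> AutoGen-style chat
    if steps_estimate <= 1 and not needs_tools:
        return "slm"            # one call to a small model, no agents at all
    return "multi-agent"        # workflow -> LangGraph / CrewAI pipeline
```

With 65-80% of traffic resolving to `"slm"`, the orchestration layers below only ever see the requests that actually need them.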
Code comparison: the same use case in four frameworks
Task: a fiduciary agent receives an incoming invoice, classifies it by VAT code, checks against the supplier master and books in Bexio. A classic 4-step pipeline.
LangGraph (Python)
import os
from typing import TypedDict

import anthropic
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.postgres import PostgresSaver

client = anthropic.Anthropic()

class State(TypedDict):
    invoice: dict
    classification: str
    supplier: dict | None
    booking_id: str | None

def classify(state: State) -> State:
    res = client.messages.create(
        model='claude-opus-4-7',
        max_tokens=512,
        messages=[{'role': 'user', 'content': f"VAT code for: {state['invoice']}"}],
    )
    return {**state, 'classification': res.content[0].text}

def lookup_supplier(state: State) -> State:
    sup = bexio.suppliers.find(name=state['invoice']['vendor'])
    return {**state, 'supplier': sup}

def book(state: State) -> State:
    if not state['supplier']:
        return state  # conditional abort
    res = bexio.invoices.create(
        supplier_id=state['supplier']['id'],
        vat_code=state['classification'],
        amount=state['invoice']['amount'],
    )
    return {**state, 'booking_id': res['id']}

g = StateGraph(State)
g.add_node('classify', classify)
g.add_node('lookup', lookup_supplier)
g.add_node('book', book)
g.set_entry_point('classify')
g.add_edge('classify', 'lookup')
g.add_conditional_edges(
    'lookup',
    lambda s: 'book' if s['supplier'] else END,
)
g.add_edge('book', END)

checkpointer = PostgresSaver.from_conn_string(os.environ['DATABASE_URL'])
app = g.compile(checkpointer=checkpointer)
result = app.invoke({'invoice': payload}, config={'configurable': {'thread_id': inv_id}})
Characteristic: each phase is a pure function, the state is passed explicitly. Recoverable after a crash, FINMA-auditable, with the Postgres checkpointer every run survives a pod restart.
CrewAI (Python)
from crewai import Agent, Task, Crew, Process

classifier = Agent(
    role='Swiss VAT specialist',
    goal='Correct VAT classification for every incoming invoice',
    backstory='15 years of experience with Swiss VAT law and Bexio bookkeeping',
    llm='claude-opus-4-7',
    tools=[bexio_search_tool, vat_lookup_tool],
)

booker = Agent(
    role='Bexio booking agent',
    goal='Cleanly post verified invoices into Bexio',
    llm='claude-haiku-4',
    tools=[bexio_book_tool],
)

t1 = Task(description='Classify {invoice} and find supplier', agent=classifier)
t2 = Task(description='Book with VAT code & matched supplier', agent=booker, context=[t1])

crew = Crew(agents=[classifier, booker], tasks=[t1, t2], process=Process.sequential, memory=True)
result = crew.kickoff(inputs={'invoice': payload})
Characteristic: role metaphor instead of graph. Stakeholders understand the code intuitively. The downside: state is implicit, debugging is harder, token consumption ~15% higher due to role-backstory prompts.
AutoGen 0.4 (Python)
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.anthropic import AnthropicChatCompletionClient

client = AnthropicChatCompletionClient(model='claude-opus-4-7')

classifier = AssistantAgent('classifier', model_client=client,
                            system_message='You classify Swiss VAT codes')
booker = AssistantAgent('booker', model_client=client,
                        system_message='You book in Bexio via tool calls', tools=[bexio_book])
critic = AssistantAgent('critic', model_client=client,
                        system_message='You review the booking and say APPROVE or REJECT')

team = RoundRobinGroupChat([classifier, booker, critic], max_turns=6)

async def main():
    # run_stream is a coroutine-based stream -- iterate inside async code
    async for msg in team.run_stream(task=f'Process invoice: {payload}'):
        print(msg)

asyncio.run(main())
Characteristic: conversation as the control mechanism. Powerful when the task requires discussion and self-criticism. But in our measurements, round-robin consumes 2.3-3.1x more tokens than LangGraph for the same output — without a hard max_turns this gets expensive fast.
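The cost explosion is controllable: cap turns hard and terminate as soon as an approval token appears. The pattern, stripped of the framework (AutoGen 0.4 expresses the same idea via `max_turns` and termination conditions; the stub agents below stand in for LLM calls):

```python
# Conversation loop with two stop conditions: a hard turn cap and an
# approval keyword. Whichever fires first ends the chat and the spend.

def run_chat(agents, task, max_turns=6, stop_token="APPROVE"):
    transcript = [task]
    for turn in range(max_turns):
        speaker = agents[turn % len(agents)]   # round-robin speaker selection
        message = speaker(transcript)
        transcript.append(message)
        if stop_token in message:
            return transcript, "approved"
    return transcript, "turn_cap_hit"          # budget guard fired

# Stub agents: booker proposes, critic approves once it sees a booking.
booker = lambda t: "booked VAT 8.1%"
critic = lambda t: "APPROVE" if any("booked" in m for m in t) else "continue"

transcript, outcome = run_chat([booker, critic], "process invoice")
```

Here the chat ends after two turns instead of six: the approval condition short-circuits the loop, which is exactly the behavior a cost-capped GroupChat should have.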
OpenAI Agents SDK (Python)
from agents import Agent, Runner, function_tool

@function_tool
def lookup_supplier(name: str) -> dict: ...

@function_tool
def book_invoice(supplier_id: str, vat_code: str, amount: float) -> str: ...

booker = Agent(
    name='Booker',
    instructions='Book the invoice with the handed-over VAT code',
    tools=[book_invoice],
    model='gpt-5-mini',
)

classifier = Agent(
    name='Classifier',
    instructions='Classify VAT code, then handoff to Booker',
    tools=[lookup_supplier],
    handoffs=[booker],  # handoffs are declared on the agent
    model='gpt-5-mini',
)

result = Runner.run_sync(classifier, input=f'Invoice: {payload}')
Characteristic: minimal boilerplate. But: stateless (no Postgres checkpoint), reasoning models via OpenAI API with US data routing — problematic for FINMA workloads in Switzerland.
Cost comparison: what a multi-agent system really costs
From 23 production deployments we distill realistic costs per 1,000 tasks for a typical 4-step workflow (classification, lookup, verification, action):
| Framework | Avg. tokens/task | Avg. latency | CHF / 1k tasks | Failure rate |
|---|---|---|---|---|
| LangGraph + Claude 4.7 | 4,200 | 5.1 s | CHF 38 | 1.4% |
| LangGraph + DeepSeek-R2 (self-hosted) | 4,600 | 6.8 s | CHF 6 | 2.1% |
| CrewAI + Claude 4.7 | 5,500 | 7.3 s | CHF 51 | 1.8% |
| AutoGen 0.4 + Claude 4.7 | 9,800 | 11.2 s | CHF 91 | 2.4% |
| OpenAI Agents SDK + GPT-5 | 3,400 | 3.8 s | CHF 32 | 2.2% |
| OpenAI Agents SDK + GPT-5 mini | 3,600 | 3.4 s | CHF 9 | 3.5% |
Three lessons from the data:
- AutoGen is 2.4x more expensive than LangGraph for a comparable task. Round-robin without a hard max_turns is the main cause. Set cost caps per tenant.
- Self-hosted DeepSeek-R2 in LangGraph is 6x cheaper than frontier cloud — at an acceptably higher failure rate (offset through eval gating).
- OpenAI Agents SDK + GPT-5 mini is the price-performance king for simple handoffs — as long as US data sovereignty is not a knockout criterion.
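A per-tenant cost cap is a few lines of accounting in front of the model client. A sketch — the price and cap values are illustrative, and in production the check would sit inside whatever client wrapper the framework uses:

```python
# Per-tenant token budget: every model call books against a cap and is
# refused once the cap would be exceeded. Rates per 1k tokens illustrative.

class TokenBudget:
    def __init__(self, cap_chf: float, chf_per_1k_tokens: float):
        self.cap_chf = cap_chf
        self.rate = chf_per_1k_tokens
        self.spent_chf = 0.0

    def charge(self, tokens: int) -> bool:
        cost = tokens / 1000 * self.rate
        if self.spent_chf + cost > self.cap_chf:
            return False            # refuse the call; caller queues or escalates
        self.spent_chf += cost
        return True

budget = TokenBudget(cap_chf=10.0, chf_per_1k_tokens=0.04)
ok = budget.charge(4200)            # one typical 4-step task
```

Refusing the call (rather than merely logging the overrun) is the point: a runaway GroupChat then fails loudly at the cap instead of on the monthly invoice.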
5 real-world use cases with measurable ROI
From 23 mazdek multi-agent deployments in production, five reproducible patterns:
1. Fiduciary pipeline (LangGraph)
A Zurich fiduciary firm with 480 SME mandates uses LangGraph + Claude 4.7 to classify 18,000 incoming invoices per month, post them to Bexio and escalate anomalies. The 7-node pipeline (OCR, classification, supplier match, VAT check, booking, anomaly detection, escalation) is FINMA-auditable with a Postgres checkpointer. Result after 6 months: 84% less manual posting, 1.2% anomaly escalation rate (previously 7.4%), CHF 480,000 savings per year, payback in 4.8 months.
2. Insurance claim classification (CrewAI)
A Swiss property insurer with CHF 950m in premiums orchestrates four agents — file reader, claim classifier, fraud detector, goodwill recommender — with CrewAI. Result: claim throughput time -62%, fraud detection 2.1x improved, NPS +14 points, payback in 7.2 months.
3. Compliance review (LangGraph + AutoGen hybrid)
A Geneva private bank combines LangGraph (deterministic pipeline) with AutoGen GroupChat (legal discussion of complex cases). Three ARES-orchestrated compliance agents review FINMA circulars against internal processes. Result: 79% faster reviews, 0% FINMA findings since go-live, CHF 3.1m annual savings.
4. Content pipeline (CrewAI)
A Bern-based B2B marketing department produces 28 long-form articles per week in four languages with researcher, SEO specialist, writer and reviewer. Result: output 4.2x, time-to-publish -71%, ranking top-10 on Google for 142 new keywords, CHF 285,000 saved in agency budget.
5. DevOps incident response (LangGraph + MCP)
A Basel fintech runs a multi-agent incident responder: Detector → Diagnoser → Remediator → Documenter. Via MCP the agents access Datadog, GitHub, Kubernetes and PagerDuty. Result: MTTR -68%, P0 incidents -41% after 3 months, ARGUS reports pass the ISO-27001 audit without finding. See Self-Repairing AI article.
Decision matrix: which framework for you?
| Workload profile | Recommendation | Why |
|---|---|---|
| Compliance / FINMA / Audit | LangGraph | Stateful, deterministic, Postgres checkpoint |
| Content / Marketing / Research | CrewAI | Role metaphor matches the domain |
| R&D / Code generation / Math | AutoGen 0.4 | GroupChat for discussion + self-criticism |
| ChatGPT-centric / simple handoffs | OpenAI Agents SDK | Lightweight, native API |
| Microsoft stack / .NET enterprise | Semantic Kernel | Deep Azure integration, plugin model |
| Type-safe / Pydantic world | Pydantic AI | Schema validation from the start |
| RAG-centric / index-heavy | llama-index Workflows | Best index and retrieval primitives |
Our PROMETHEUS default for Swiss mid-market enterprises: LangGraph + Claude 4.7 (escalated) + DeepSeek-R2 (standard) + MCP tools via HERACLES. This combination delivers the optimal intersection of cost, auditability and speed in 80% of our mandates.
Governance: multi-agent systems under the EU AI Act, revDSG and FINMA
Multi-agent systems raise four additional compliance questions that single LLM calls did not have:
- EU AI Act Art. 50 (transparency obligation): if a multi-agent system interacts with end customers, it must be recognizable that several AI agents are involved — not just one. Best practice: UI hint «This process is handled by a network of specialized AI agents.»
- EU AI Act Art. 12 (logging): every agent call, every tool invocation, every handoff counts as input/output and must be stored over the entire system lifetime. A 7-step workflow generates 7x more trace volume than a single call.
- revDSG Art. 21 (automated individual decision): if the multi-agent system has a legally relevant effect (insurance classification, HR decision, credit risk), the data subject must be able to demand human review — and the complete multi-agent trace is part of the justification.
- FINMA RS 2023/1 (operational risks): multi-agent systems generate cascading risks — a hallucinating agent can contaminate the pipeline. Mandatory: eval regime, red-team tests, cost caps and a tamper-evident trace archive over 10 years.
Our EU AI Act guide contains multi-agent templates for all four points. Three hard obligations every Swiss multi-agent implementation must meet:
- Trace completeness: capture every agent state, every tool call, every handoff decision with OpenTelemetry plus Langfuse. WORM archive for regulated mandates.
- PII redaction between agents: sensitive data must not be passed through unless explicitly required. ARES Llama Guard sits between agents as a bus filter.
- Cost & loop caps: hard token limits per agent and hard iteration limits (max. 10-15 loops). Infinite loops are the #1 risk in AutoGen deployments.
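The inter-agent PII filter is conceptually simple: every message crossing the bus passes a redactor before the next agent sees it. A minimal regex sketch for two Swiss identifiers (AHV number, CH IBAN) — a production filter uses Llama Guard 3 or a dedicated NER model, not patterns, but the bus placement is the same:

```python
import re

# Redact two common Swiss identifiers before a message crosses the agent bus.
# Regex-only sketch -- production uses Llama Guard 3 / NER, not patterns.

PATTERNS = {
    "AHV": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),      # Swiss social security no.
    "IBAN": re.compile(r"\bCH\d{2}(?:\s?\d{4}){4}\s?\d\b"),  # Swiss IBAN
}

def redact(message: str) -> str:
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} REDACTED]", message)
    return message
```

Wired in as a wrapper around every agent-to-agent send, this guarantees the downstream agent can never echo an identifier it was never given.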
Real-world example: fiduciary pipeline with LangGraph live in 9 weeks
A Zurich fiduciary firm (CHF 14m revenue, 62 employees, 480 mandates) processed its incoming invoices manually until 2026 — 4,200 person-hours per year for pure posting.
Starting point
- 18,000 incoming invoices/month from PDF, email attachment, QR invoice
- Manual posting in Bexio by 6 clerks
- Error rate: 4.1% (mostly wrong VAT codes or supplier mismatches)
- Seasonal peaks Q1 and Q4 led to backlogs of up to 21 days
mazdek multi-agent solution in 9 weeks
We built a LangGraph pipeline with seven nodes and an MCP tool bus:
- Node 1 (OCR): Mistral OCR + Tesseract fallback, extracts structured fields.
- Node 2 (classification): PROMETHEUS agent with Claude 4.7 for complex VAT cases, DeepSeek-R2 for standard.
- Node 3 (supplier match): HERACLES with MCP bridge to Bexio Suppliers + fuzzy matching.
- Node 4 (verification): ORACLE RAG against VAT law and mandate notes.
- Node 5 (booking): Bexio API with idempotency keys.
- Node 6 (anomaly): statistical detector flags outliers for human review.
- Node 7 (escalation): Slack/email alert plus client portal card.
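Node 5's idempotency keys deserve a note: after a crash between booking and checkpoint, the retried node must not double-post. Deriving the key deterministically from invoice content (rather than generating a random UUID per attempt) makes retries safe, because the API deduplicates on the key. A sketch — the field selection is illustrative:

```python
import hashlib
import json

# Deterministic idempotency key: the same invoice always yields the same key,
# so a retried booking call is deduplicated by the API instead of posted twice.

def idempotency_key(invoice: dict) -> str:
    canonical = json.dumps(
        {k: invoice[k] for k in ("vendor", "amount", "currency", "invoice_no")},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()[:32]
```

Because the key depends only on content, it is also stable across pod restarts — the same property the Postgres checkpointer gives the graph state.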
Results after 6 months in operation
| Metric | Before | After | Delta |
|---|---|---|---|
| Manual effort | 4,200 h/year | 670 h/year | -84% |
| Posting error rate | 4.1% | 0.6% | -85% |
| Anomaly escalation rate | 7.4% | 1.2% | -84% |
| Throughput time per invoice | ~6 min | ~22 s | -94% |
| LLM cost per invoice | — | CHF 0.038 | — |
| Annual savings | — | CHF 480,000 | — |
| Payback | — | 4.8 months | — |
Important: no job cuts. The six clerks were redeployed as anomaly specialists and client advisors — with a higher value contribution and better pay. Seasonal peaks are absorbed by the system, not by the team.
Implementation roadmap: live in 10 weeks
Phase 1: Discovery & framework selection (week 1-2)
- Workshop: which workflows today have > 4 steps and cost > 1,000 h/year?
- Framework matrix: determinism × audit × cost × team skill
- Pick top-2 workflows, build an eval set with 200-500 cases
Phase 2: PoC with chosen framework (week 3-5)
- PROMETHEUS builds the graph in LangGraph (or Crew, AutoGen)
- Eval against gold set, measure cost profiles
- MCP tool bridges for required systems (Bexio, SAP, Bitrix24, etc.)
Phase 3: Guardrails, memory & observability (week 6-7)
- ARES implements Llama Guard 3 bus filter, PII redaction, loop caps
- ORACLE builds Postgres checkpointer + pgvector episodic memory
- ARGUS instruments Langfuse + OpenTelemetry, WORM archive
Phase 4: Infrastructure & compliance (week 8-9)
- HEPHAESTUS deploys to Swiss K8s / Bedrock eu-central-2 Zurich
- EU AI Act and FINMA conformity check by ARES
- Red-team tests against prompt injection and cascading hallucinations
Phase 5: Rollout & learning (week 10+)
- Shadow mode: system runs in parallel without live impact
- Supervised: 10% traffic with human approval
- Full production: 100% with human-in-the-loop for low-confidence cases
- Monthly eval regression, quarterly framework upgrades
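The shadow, supervised and full-production phases all hinge on one mechanism: a confidence gate that decides whether a result ships automatically or waits for a human. A minimal sketch (the threshold values are illustrative, not our production settings):

```python
# Confidence gate for staged rollout: below the phase threshold -> human
# queue, at or above -> auto-apply. Thresholds loosen as eval evidence grows.

def gate(confidence: float, phase: str) -> str:
    thresholds = {"shadow": 1.1, "supervised": 0.95, "production": 0.80}
    # shadow threshold > 1.0: nothing ever auto-applies in shadow mode
    return "auto" if confidence >= thresholds[phase] else "human_review"
```

Setting the shadow threshold above 1.0 is a deliberate trick: the same code path runs in every phase, and shadow mode simply routes 100% of results to review.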
The future: A2A, council models and infinite swarms
Multi-agent frameworks in 2026 are only the second generation. What is coming in 2027-2028:
- Agent-to-Agent protocol (A2A): Google, Anthropic and LangChain are working on an MCP counterpart for agent-to-agent communication across vendor boundaries. A LangGraph agent will be able to call a CrewAI agent directly — without adapter code.
- Council models: Anthropic Council, OpenAI Swarm 2.0 and DeepMind Concurrent show 8-15 percentage points of accuracy gain when 3-5 reasoning models debate in parallel and converge. First Swiss mandates are testing this for due-diligence reports.
- Hierarchical swarms: manager agent over manager agent over worker. Usable for thousands of concurrent tasks (marketing, support, RPA replacement).
- Specialist marketplaces: Cursor Agent Hub, Anthropic Agent Store and HuggingFace Spaces agents bring pre-trained, signed specialists — the AI industry's vendor lock-in?
- Domain-fine-tuned multi-agent stacks: a DPO-trained fiduciary multi-agent stack as an out-of-the-box product. mazdek is building the first of its kind for the Swiss fiduciary industry for Q4 2026.
- Long-horizon agents: agents that run autonomously for days and weeks — compliance sweeps, market analyses, continuous tests. They require infinite memory (Mem0, Letta) and tight cost governance.
Conclusion: multi-agent is the AI architecture of 2026
- Architectural leap: single-agent LLM calls are in 2026 what monoliths were in 2014 — functional, but rarely optimal. Multi-agent is the new default for workflows with more than 3 steps.
- LangGraph as the Swiss default: stateful, auditable, Postgres-checkpointed. Our recommendation in 80% of Swiss enterprise mandates.
- CrewAI for role-based domains: content, marketing, research, legal case handling.
- AutoGen for reasoning-heavy workloads: with cost caps. Without caps the framework becomes a cost trap.
- OpenAI Agents SDK for lightweight handoffs: when US data sovereignty is not a knockout criterion.
- ROI under 8 months: 23 mazdek mandates in production, 5.9 months average payback.
- Compliance is feasible: EU AI Act, revDSG and FINMA are cleanly mapped with ARES guardrails, ARGUS observability and WORM archiving — Swiss-sovereign from day one.
At mazdek, 19 specialized AI agents orchestrate the entire multi-agent lifecycle: PROMETHEUS for architecture, routing and reasoning; HERACLES for MCP tool integration; ORACLE for RAG and memory; ARES for guardrails and compliance; ARGUS for 24/7 observability and WORM audit; HEPHAESTUS for Swiss K8s infrastructure; IRIS for human-in-the-loop; NANNA for eval regression. 23 production multi-agent deployments since 2024 — DSG, GDPR, EU AI Act, FINMA and CO compliant from day one.