AI Compliance

Our approach to EU AI Act compliance and responsible AI development

Last updated: December 11, 2025

Our Platform Role
Understanding our responsibilities under the EU AI Act

Platform Provider (Not AI Service Provider)

Creatures Digital GmbH operates solely as a platform provider. We supply the infrastructure, tools, and marketplace that help users create and deploy AI agents. Final agents and their outputs remain the responsibility of the users who configure and operate them.

EU AI Act Classification

Under the EU AI Act we are classified as a platform provider. Optional platform features—such as our Custom RAG stack and the zAI customer-service agent—can be disabled. All other AI systems on the platform are user-created and user-governed.

AI-Assisted Creation Tools

Creation helpers (prompt builders, templates, validation hints) support users during setup but do not automatically produce finished agents. Users review and approve every change before deployment.

User Education & Support

We provide documentation, training paths, and compliance templates so teams can understand their obligations and implement responsible AI practices.

AI-Assisted Creation Tools
How we support responsible agent creation

Creation Assistance Only

Suggestions from our tooling are optional and must be reviewed by the user. They document context, recommended safeguards, and testing guidance, but human owners approve the final configuration.

User Review Required

Each change can be inspected, edited, or discarded. We highlight potential compliance gaps but do not override user decisions or deploy changes automatically.

Compliance Responsibility

Users remain accountable for verifying that their agents satisfy the EU AI Act and other applicable regulations. Our tooling provides guardrails and documentation exports to streamline this effort.

Build Script Transparency

Every deployment includes machine-readable build scripts that detail model settings, prompts, memory configuration, and system parameters. This supports auditability and change tracking.
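The platform does not publish a schema for these build scripts, so as a purely hypothetical illustration, a machine-readable build record of the kind described might look like the following sketch. All field names and values here are assumptions for illustration, not the platform's actual format:

```python
import json

# Hypothetical build-script payload: every field name below is
# illustrative only, not the platform's actual schema.
build_script = {
    "agent": "support-bot",
    "version": "v1",
    "model": {"name": "gpt-4.1-mini", "region": "germanywest", "temperature": 0.2},
    "prompts": {"system": "You are a helpful support agent."},
    "memory": {"backend": "redis", "session_ttl_minutes": 60, "max_messages": 50},
    "deployed_at": "2025-10-01T00:00:00Z",
}

# Serialising to JSON with sorted keys keeps the record stable and
# machine-readable for audit tooling and change tracking (diffs).
record = json.dumps(build_script, indent=2, sort_keys=True)
print(record)
```

Keeping such records sorted and versioned makes it straightforward to diff two deployments and see exactly which model settings or prompts changed.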

Tool Limitations

Automated guidance may not cover domain-specific obligations. Users should apply domain knowledge and legal review where needed.

Compliance Framework
How we support compliance across the platform

Platform-Level Compliance

  • Transparency and user education
  • Content filtering and safety tooling
  • Data protection and access controls
  • Security monitoring and audit logs
  • Incident response processes

User Support

  • Compliance checklists and documentation templates
  • Risk assessment guidance
  • Legal disclaimer tooling
  • Agent trust indicators
  • Escalation workflows

Risk Classification
How we evaluate AI system risk

Limited Risk (Platform AI Features)

Our optional platform features—Custom RAG processing and the zAI customer-service agent—are documented as Limited Risk systems and include mandatory transparency measures.

User-Created AI Systems

Users determine the risk level for their own agents and must implement safeguards that align with their classification. We provide tooling and guidance to support that analysis.

High-Risk Systems

High-risk deployments require comprehensive risk management, monitoring, and documentation. We provide templates and data exports, but ultimate compliance lies with the deploying organization.

User Responsibilities
Expectations for teams deploying AI agents

Risk Assessment

Users must classify their systems and implement controls that align with the assessed risk level.

Testing & Validation

Thorough testing is required before production deployment to ensure safe and accurate behaviour.

Documentation

Users should maintain up-to-date documentation covering purpose, functionality, safeguards, and evaluation results.

Ongoing Monitoring

Continuous monitoring for performance, safety, and compliance is essential throughout the agent lifecycle.

Compliance Support
How we help resolve compliance questions

Compliance Resources

Documentation, templates, and guidance help users understand and implement the EU AI Act and related regulations.

Support & Consultation

For compliance questions or concerns, contact our team at zaun@creatures.digital or reach out via the admin portal’s compliance channel.

Regular Updates

We keep our guidance and platform safeguards aligned with evolving best practices and regulatory updates.

Platform AI Features Disclosure
Optional Platform Features | Classification: Limited Risk AI Systems

Custom RAG Document Processing Stack

Classification: Limited Risk AI System (optional feature that workspace administrators can enable or disable).

Purpose: Document ingestion, intelligent parsing, and retrieval-augmented responses for enterprise knowledge bases.

Architecture:
  • Document Processing: Azure Document Intelligence API for OCR, layout detection, and structured document extraction.
  • Embeddings: Azure OpenAI Embeddings API (text-embedding-3 models) deployed in Germany West region.
  • Vector Storage: Milvus with encrypted persistence and tenant isolation.
  • Response Generation: zAI Memory Manager generates answers and reasoning traces using the retrieved context.

Processing: Document intelligence and embedding requests are processed via Azure APIs in the Germany West region.

User Control: Feature toggles are available globally and per agent.

Data Handling: Documents are stored in Milvus on encrypted volumes located in the EU.
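The production stack relies on live Azure and Milvus services, but the retrieval-augmented pattern it implements can be sketched with a toy in-memory index. The bag-of-words "embedding" and the two-document corpus below are placeholders for illustration, not the platform's actual components:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real stack would call an
    # embedding API (e.g. a text-embedding-3 model) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k;
    # a vector database like Milvus performs this search at scale.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

corpus = [
    "Billing invoices are issued monthly per workspace.",
    "Agents can be disabled from the admin portal.",
]
top = retrieve("can agents be disabled", corpus)
# → ["Agents can be disabled from the admin portal."]
```

The retrieved passages would then be passed as context to the response-generation step, which is what makes the answers "retrieval-augmented".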

Customer Service Agent Disclosure
Platform's Customer Service AI | Classification: Limited Risk AI System

AI Agent Information

Classification: Limited Risk AI System

Provider: Zaun Platform (User: Zaun)
Model Stack: GPT-4.1 Mini (Azure Germany West) orchestrated by zAI Memory Manager
Version: main · v1 (deployed October 1, 2025)
Type: Customer Service & Support Agent

Purpose and Capabilities

zAI provides first-line support for the Zaun platform: answering account questions, guiding workflow setup, filing Jira tickets with audit trails, and escalating sensitive or high-risk matters to human support in line with transparency obligations.

  • Guides onboarding, billing, and workflow configuration.
  • Surfaces knowledge base articles and Custom RAG excerpts when enabled.
  • Creates Jira tickets with user attribution, session identifiers, and timestamps.
  • Maintains conversation continuity and recognises returning users.
  • Escalates when confidence dips, sentiment signals frustration, or policies require human review.
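An escalation rule of the kind listed above can be sketched as a simple predicate. The thresholds and signal names here are assumptions for illustration; the actual triggers and their values are not published:

```python
def should_escalate(confidence: float,
                    frustration: float,
                    policy_flagged: bool,
                    min_confidence: float = 0.6,
                    max_frustration: float = 0.7) -> bool:
    # Hand the conversation to a human when model confidence drops,
    # sentiment signals frustration, or a policy rule requires review.
    return (confidence < min_confidence
            or frustration > max_frustration
            or policy_flagged)

# Low confidence alone is enough to trigger a handover.
should_escalate(confidence=0.4, frustration=0.1, policy_flagged=False)  # → True
```

Keeping the rule a pure function of observable signals makes it easy to audit and tune the thresholds without touching the rest of the agent.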

Technical Stack & Safety Controls

Primary LLM: GPT-4.1 Mini delivered from Azure Germany West to maintain EU data residency while supporting streaming responses.

Reasoning Layer: zAI Memory Manager orchestrates memory integration, tool calls, and response synthesis.

Memory: Redis-backed agent memory, shared service memory, and smart memory (Qwen2.5 1.5B instruct) enforced with per-user isolation.

Embeddings & Retrieval: Local zAI embedding service with selectable models and Milvus collections stored on encrypted volumes.

Session Governance: Session isolation with configurable message counts and expiration windows to balance continuity and privacy.
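A session window with a configurable message cap and inactivity expiry, as described above, can be sketched as follows. The class and parameter names are illustrative, not the platform's API:

```python
import time
from collections import deque

class SessionWindow:
    """Keeps at most `max_messages` recent messages and expires the
    session after `ttl_seconds` of inactivity. Illustrative only."""

    def __init__(self, max_messages: int = 50, ttl_seconds: float = 3600.0):
        self.ttl_seconds = ttl_seconds
        # deque with maxlen silently evicts the oldest entry when full.
        self.messages: deque[str] = deque(maxlen=max_messages)
        self.last_activity = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_activity > self.ttl_seconds

    def add(self, message: str) -> None:
        if self.expired():
            self.messages.clear()  # privacy: drop stale context on expiry
        self.messages.append(message)
        self.last_activity = time.monotonic()

window = SessionWindow(max_messages=3, ttl_seconds=60.0)
for msg in ["hi", "billing question", "thanks", "one more thing"]:
    window.add(msg)
# Only the three most recent messages are retained.
```

The two knobs pull in opposite directions: a larger window improves conversational continuity, while a shorter expiry limits how long personal context is retained.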

Observability: Confidence scores, latency metrics, and satisfaction analytics feed weekly quality reviews.

Data and Training Information

Training & Tuning Data

  • Curated platform documentation and onboarding workflows.
  • Sanitised historic support transcripts without personal data.
  • Approved Jira templates, escalation guides, and compliance playbooks.
  • Internal privacy, security, and escalation policies.

Performance and Limitations

What We Measure

  • Per-response confidence, certainty, and suggested next steps.
  • Escalation rates, latency, and tool invocation health.
  • User satisfaction feedback and follow-up survey results.
  • Retrieval coverage to flag stale or missing documentation.

Known Limitations

  • No direct access to live production dashboards or tenant billing data.
  • Complex engineering, legal, or policy requests escalate to human staff.
  • Accuracy depends on maintaining current knowledge base entries.
  • Automated Jira workflows fall back to manual handling during outages.

Risk Mitigation and Safety

Human Oversight

  • Escalation triggers covering unresolved issues, frustration signals, and complex technical cases.
  • Weekly supervisor audits of conversations and Jira artifacts.
  • Manual takeover controls that allow staff to join or assume a session.

Bias & Safety Controls

  • Diverse training examples across industries and personas.
  • Continuous monitoring for demographic skew and language drift.
  • Model-level safeguards with safe completion fallbacks.

User Rights Under EU AI Act

  • Receive clear disclosure when interacting with zAI.
  • View confidence indicators and recommended follow-up actions.
  • Request human support at any point.
  • Report suspected bias, safety issues, or inaccuracies.
  • Opt out of analytics improvements and request data deletion.

Contact for AI Agent Inquiries

For questions about our support agent or to raise a compliance concern, email zaun@creatures.digital or open a ticket via the admin portal.

This disclosure satisfies EU AI Act transparency requirements and helps users make informed decisions when engaging with zAI.

Important: This compliance framework supports users in understanding their obligations under the EU AI Act. Ultimate responsibility for compliant deployment lies with the teams operating their AI systems. Consider obtaining legal advice for specific regulatory questions.
