Our approach to EU AI Act compliance and responsible AI development
Creatures Digital GmbH operates solely as a platform provider. We supply the infrastructure, tools, and marketplace that help users create and deploy AI agents. Final agents and their outputs remain the responsibility of the users who configure and operate them.
Under the EU AI Act, this positions us as a platform provider rather than the provider of user-created agents. Optional platform features—such as our Custom RAG stack and the zAI customer-service agent—can be disabled. All other AI systems on the platform are user-created and user-governed.
Creation helpers (prompt builders, templates, validation hints) support users during setup but do not automatically produce finished agents. Users review and approve every change before deployment.
We provide documentation, training paths, and compliance templates so teams can understand their obligations and implement responsible AI practices.
Suggestions from our tooling are optional and must be reviewed by the user. Each suggestion documents its context, recommended safeguards, and testing guidance; human owners approve the final configuration.
Each change can be inspected, edited, or discarded. We highlight potential compliance gaps but do not override user decisions or deploy changes automatically.
Users remain accountable for verifying that their agents satisfy the EU AI Act and other applicable regulations. Our tooling provides guardrails and documentation exports to streamline this effort.
Every deployment includes machine-readable build scripts that detail model settings, prompts, memory configuration, and system parameters. This supports auditability and change tracking.
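To make this concrete, here is a minimal sketch of what a machine-readable build record might look like. The field names and values below are illustrative assumptions, not the platform's actual export schema:

```python
import json

# Illustrative build manifest for one agent deployment. Every field name
# here is hypothetical -- the real export schema may differ.
build_manifest = {
    "agent_id": "example-agent",
    "model": {"name": "gpt-4.1-mini", "temperature": 0.2},
    "system_prompt": "You are a support assistant for the Zaun platform.",
    "memory": {"backend": "redis", "per_user_isolation": True},
    "created_at": "2025-01-01T00:00:00Z",
}

# Serialising the manifest with stable key ordering makes every setting
# diffable across deployments, which supports auditing and change tracking.
print(json.dumps(build_manifest, indent=2, sort_keys=True))
```

Committing such a manifest alongside each deployment gives auditors a line-by-line diff of what changed between versions.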
Automated guidance may not cover domain-specific obligations. Users should apply domain knowledge and legal review where needed.
Our optional platform features—Custom RAG processing and the zAI customer-service agent—are documented as Limited Risk systems and include mandatory transparency measures.
Users determine the risk level for their own agents and must implement safeguards that align with their classification. We provide tooling and guidance to support that analysis.
High-risk deployments require comprehensive risk management, monitoring, and documentation. We provide templates and data exports, but ultimate compliance lies with the deploying organization.
Risk classification: Users must classify their systems and implement controls that align with the assessed risk level.
Testing: Thorough testing is required before production deployment to ensure safe and accurate behaviour.
Documentation: Users should maintain up-to-date documentation covering purpose, functionality, safeguards, and evaluation results.
Monitoring: Continuous monitoring for performance, safety, and compliance is essential throughout the agent lifecycle.
Documentation, templates, and guidance help users understand and implement the EU AI Act and related regulations.
For compliance questions or concerns, contact our team at zaun@creatures.digital or reach out via the admin portal’s compliance channel.
We keep our guidance and platform safeguards aligned with evolving best practices and regulatory updates.
Classification: Limited Risk AI System (optional feature that workspace administrators can enable or disable).
Purpose: Document ingestion, intelligent parsing, and retrieval-augmented responses for enterprise knowledge bases.
Processing: Document intelligence and embeddings are processed via Azure APIs in the Germany West region.
User Control: Feature toggles are available globally and per agent.
Data Handling: Documents are stored in Milvus on encrypted volumes located in the EU.
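As a sketch of how global and per-agent toggles could interact, the snippet below assumes a per-agent setting overrides the workspace-wide default. The function and setting names are hypothetical, not the platform's actual API:

```python
# Hypothetical toggle resolution: a per-agent setting, when present,
# overrides the workspace-wide default. Names are illustrative only.
def rag_enabled(workspace_toggles: dict, agent_toggles: dict, agent_id: str) -> bool:
    default = workspace_toggles.get("custom_rag", False)
    return agent_toggles.get(agent_id, {}).get("custom_rag", default)

workspace = {"custom_rag": True}                      # enabled workspace-wide
agents = {"support-bot": {"custom_rag": False}}       # disabled for one agent

print(rag_enabled(workspace, agents, "support-bot"))  # per-agent override applies
print(rag_enabled(workspace, agents, "sales-bot"))    # falls back to workspace default
```

Resolving the per-agent value first keeps the workspace default as a fallback rather than a mandate, which matches the "enable or disable" control described above.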
Classification: Limited Risk AI System
zAI provides first-line support for the Zaun platform: answering account questions, guiding workflow setup, filing Jira tickets with audit trails, and escalating sensitive or high-risk matters to human support in line with transparency obligations.
Primary LLM: GPT-4.1 Mini delivered from Azure Germany West to maintain EU data residency while supporting streaming responses.
Reasoning Layer: zAI Memory Manager orchestrates memory integration, tool calls, and response synthesis.
Memory: Redis-backed agent memory, shared service memory, and smart memory (Qwen2.5 1.5B Instruct), all enforced with per-user isolation.
Embeddings & Retrieval: Local zAI embedding service with selectable models and Milvus collections stored on encrypted volumes.
Session Governance: Session isolation with configurable message counts and expiration windows to balance continuity and privacy.
Observability: Confidence scores, latency metrics, and satisfaction analytics feed weekly quality reviews.
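The memory-isolation and session-limit properties above can be sketched as follows. This is a simplified in-memory model under assumed names and limits, not zAI's actual implementation; in production the buffer would live in Redis with an expiration window so sessions lapse automatically:

```python
from collections import deque

MAX_MESSAGES = 20  # assumed configurable per-session message cap

def memory_key(user_id: str, session_id: str) -> str:
    # Namespacing keys by user prevents one user's memory from
    # surfacing in another user's retrieval context.
    return f"zai:memory:{user_id}:{session_id}"

class SessionMemory:
    """In-memory stand-in for a Redis-backed session store."""

    def __init__(self, max_messages: int = MAX_MESSAGES):
        self._store = {}
        self.max_messages = max_messages

    def append(self, user_id: str, session_id: str, message: str) -> None:
        key = memory_key(user_id, session_id)
        # deque(maxlen=...) drops the oldest message once the cap is hit,
        # enforcing the configurable message count per session.
        buf = self._store.setdefault(key, deque(maxlen=self.max_messages))
        buf.append(message)

    def history(self, user_id: str, session_id: str) -> list:
        return list(self._store.get(memory_key(user_id, session_id), []))
```

Because every read and write goes through `memory_key`, a request for one user's history can never return another user's messages, which is the isolation property the disclosure describes.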
For questions about our support agent or to raise a compliance concern, email zaun@creatures.digital or open a ticket via the admin portal.
This disclosure satisfies EU AI Act transparency requirements and helps users make informed decisions when engaging with zAI.
Important: This compliance framework supports users in understanding their obligations under the EU AI Act. Ultimate responsibility for compliant deployment lies with the teams operating their AI systems. Consider obtaining legal advice for specific regulatory questions.