Patent Filed · United Kingdom

Safeguarding
infrastructure for
AI-mediated
communications

We are building independent, auditable safeguarding architecture for the infrastructure layer. Designed to be model-agnostic and regulatory-compliant from day one.

Patent-Protected
UK IPO Application Filed
Model-Agnostic
Designed for Any LLM
Multi-Jurisdiction
UK · EU · US · AU Compliance Targets

AI chatbots are deployed to millions of children with no independent safety layer

Built-in model safety is opaque, non-auditable, and routinely bypassed. Regulators cannot verify compliance. The infrastructure layer is missing.

Built-In Safety Fails

Model-level safety measures such as RLHF are embedded in training. They cannot be externally audited, do not produce compliance records, and are regularly circumvented through jailbreaking.

No Independent Layer

No existing product provides real-time, auditable child safeguarding that operates independently of the AI model provider. Current content moderation tools were not designed for AI conversations.

Regulatory Mandates Arriving

The UK Online Safety Act, the EU AI Act, the proposed US Kids Online Safety Act (KOSA), and Australian eSafety frameworks are introducing enforceable child safety obligations for AI platforms. Compliance tooling does not yet exist.

Designing safeguarding infrastructure for any AI platform

OLLOO® is developing an independent safety layer that sits between users and AI models, designed to produce full audit trails for regulatory compliance across multiple jurisdictions.

User (child or vulnerable user) → OLLOO (safeguarding infrastructure) → AI Model (any LLM provider)
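The flow above, a safety layer sitting between the user and any model provider, can be sketched as a simple proxy. This is purely illustrative: all names, checks, and return values below are hypothetical, not OLLOO's actual implementation.

```python
# Illustrative sketch of a model-agnostic safeguarding proxy.
# Messages are screened before they reach the model, and model
# replies are screened before they reach the user. All names here
# are hypothetical; a real system would use trained classifiers,
# not a keyword list.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def keyword_screen(text: str) -> Verdict:
    # Placeholder check standing in for real harm classification.
    blocked = {"example-harmful-term"}
    for word in blocked:
        if word in text.lower():
            return Verdict(False, f"matched '{word}'")
    return Verdict(True)

def safeguarded_chat(user_message: str,
                     model_call: Callable[[str], str],
                     screen: Callable[[str], Verdict] = keyword_screen) -> str:
    # Screen the inbound message before it reaches the model.
    inbound = screen(user_message)
    if not inbound.allowed:
        return "[blocked before reaching the model]"
    reply = model_call(user_message)
    # Screen the model's reply before it reaches the user.
    outbound = screen(reply)
    if not outbound.allowed:
        return "[model reply withheld]"
    return reply
```

Because the proxy only sees text in and text out, `model_call` can wrap any LLM provider: swapping providers does not change the safeguarding layer.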

Model-Agnostic Architecture

Designed to work with any large language model without requiring modification to the model itself. Platforms could change providers without losing protection.

Independent of Model Training

Safeguarding is designed to operate at the infrastructure layer, not inside the model. Because it sits outside the model, jailbreaks and prompt injections that target the model itself are not designed to be able to disable it.

Auditable Compliance Records

The system is being built to produce structured records for every safeguarding action, designed for regulatory audit and legal compliance across multiple jurisdictions.
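One common way to make such records audit-ready is to chain them cryptographically so that tampering is detectable. The sketch below is a hypothetical illustration of that general pattern; the field names and chaining scheme are assumptions, not OLLOO's actual record format.

```python
# Illustrative: a structured safeguarding audit record, hash-chained
# to the previous record for tamper evidence. Field names are
# hypothetical examples only.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(session_id: str, action: str, category: str,
                 jurisdiction: str, prev_hash: str = "") -> dict:
    record = {
        "session_id": session_id,
        "action": action,            # e.g. "blocked", "flagged", "allowed"
        "category": category,        # harm classification label
        "jurisdiction": jurisdiction,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,      # links records into a chain
    }
    # Hash the canonical JSON form; altering any field later would
    # invalidate this hash and every record chained after it.
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record
```

Each new record carries the previous record's hash, so a regulator can verify the whole chain end to end.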

Real-Time Processing

Engineered for production latency budgets. Safeguarding is designed to happen synchronously, not as a post-hoc review.

Designing a purpose-built safeguarding engine, not a filter

Existing content moderation tools were designed for social media, not AI conversations. OLLOO is being developed as an integrated architectural system, built from the ground up for this new category.

Harmful Content Detection

Designed for comprehensive safeguarding coverage across content categories relevant to child safety in AI-mediated communications, with structured classification.

Behavioural Pattern Analysis

Designed to go beyond single-message content filtering. The architecture will identify concerning behavioural patterns across sessions that no individual message would trigger on its own.
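The core idea, that risk accumulates across messages even when no single message is alarming, can be shown with a minimal rolling-score monitor. The window size, scores, and threshold below are invented for illustration and carry no relation to OLLOO's actual detection logic.

```python
# Illustrative: cross-message pattern scoring. No individual message
# exceeds the threshold, but the accumulated session score can.
from collections import deque

class SessionMonitor:
    def __init__(self, window: int = 10, threshold: float = 1.0):
        # Keep only the most recent `window` per-message scores.
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, message_score: float) -> bool:
        """Record a per-message risk score; return True when the
        rolling session total crosses the threshold."""
        self.scores.append(message_score)
        return sum(self.scores) >= self.threshold
```

Four messages scoring 0.3 each would individually pass a per-message filter, yet together push the session over a 1.0 threshold, which is exactly the behaviour single-message moderation tools miss.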

Grooming Detection

A core development focus: purpose-built detection of grooming patterns in AI-mediated conversations, designed to produce structured audit entries for safeguarding teams and regulators.

Guardian Alerts

A planned configurable notification system to alert parents, guardians, and safeguarding teams when concerning activity is detected, with appropriate escalation pathways.

Optimised for Speed

Being engineered for real-time processing within production latency budgets. Safeguarding should not degrade the user experience.

Audit-Ready

Designed so that every safeguarding decision will produce structured compliance records. Built for the regulatory scrutiny that AI platforms will face.

New laws are creating enforceable obligations

AI platforms serving children will need to demonstrate compliance with child safety regulations. OLLOO is building the infrastructure to enable this: not just policy alignment, but the operational mechanism through which compliance is demonstrated.

United Kingdom

Online Safety Act 2023. Ofcom codes of practice now in force.

European Union

Digital Services Act now in force. AI Act obligations phasing in through 2026.

United States

COPPA & Kids Online Safety Act (KOSA). Federal and state frameworks.

Australia

Online Safety Act 2021. eSafety Commissioner enforcement.

Patent-protected technology

Our approach is protected by a UK patent application covering the core safeguarding system and method, with an international filing window under the Patent Cooperation Treaty.

UK Patent Application Filed

Comprehensive patent claims covering the safeguarding system and method for AI-mediated communications. Filed with the UK Intellectual Property Office with a 12-month priority window for PCT international filing across major AI markets.

Building infrastructure that enables compliance, not just a compliant product

OLLOO is being developed as neutral, independent safeguarding infrastructure—designed to bridge regulatory policy and technical architecture for AI platforms worldwide.

Regulatory Infrastructure

Being developed as the infrastructure layer for AI safeguarding compliance. Designed to formalise compliance pathways and translate evolving regulatory frameworks into technical controls.

Model-Agnostic Neutrality

Architected to work across frontier AI systems without model-specific dependencies. Designed to operate at the platform level and remain policy-relevant regardless of which provider a platform uses.

Category Definition

AI safeguarding infrastructure is a new category. OLLOO aims to define how regulation is implemented technically—not just to align with it.