Responsible AI Policy

Last updated: 18 February 2026 | Version 2.0


This Responsible AI Policy ("Policy") is part of our legal terms and should be read alongside our Free Account Terms and, where applicable, our Subscription Agreement. When you use the AI-powered services provided by Toby Sinclair Coaching Limited ("AI Products"), you warrant that you will comply with this Policy and all applicable laws and regulations governing AI.

Your use of our AI Products signifies your agreement to engage with our platform in a lawful, ethical, and responsible manner that respects the rights and dignity of all individuals. If you do not agree with this Policy, please refrain from using our Services.


Who We Are

We are Toby Sinclair Coaching Limited ("Company", "we", "us", "our"), a company registered in England and Wales at 13 Highfield Road, Croston, Lancashire, PR26 9HH, trading as Real Talk Studio. We operate the website realtalkstudio.com, as well as any other related products and services that refer or link to this Policy (collectively, the "Services").


Use of the Services

When you use the Services, you warrant that you will comply with this Policy and with all applicable laws. You agree that you will not:

  • Systematically retrieve data or other content from the Services to create or compile, directly or indirectly, a collection, compilation, database, or directory without written permission from us
  • Make any unauthorised use of the Services, including collecting usernames and/or email addresses of users by electronic or other means for the purpose of sending unsolicited email, or creating user accounts by automated means or under false pretences
  • Circumvent, disable, or otherwise interfere with security-related features of the Services
  • Engage in unauthorised framing of or linking to the Services
  • Trick, defraud, or mislead us and other users, especially in any attempt to learn sensitive account information such as user passwords
  • Engage in any automated use of the Services, such as using scripts to send comments or messages, or using any data mining, robots, or similar data gathering and extraction tools
  • Interfere with, disrupt, or create an undue burden on the Services or the networks connected to the Services
  • Attempt to impersonate another user or person
  • Use any information obtained from the Services in order to harass, abuse, or harm another person
  • Use the Services as part of any effort to compete with us or use the Services for any revenue-generating endeavour without our prior written consent
  • Decipher, decompile, disassemble, or reverse engineer any of the software comprising the Services, except as expressly permitted by applicable law
  • Upload or transmit viruses, Trojan horses, or other harmful material that interferes with any party's use and enjoyment of the Services
  • Use the Services in a manner inconsistent with any applicable laws or regulations

AI Products

When you use the AI Products provided by Real Talk Studio, you warrant that you will not:

  • Deploy AI techniques that utilise subliminal, manipulative, or deceptive methods designed to distort behaviour and impair informed decision-making, particularly when such actions cause significant harm to individuals
  • Exploit vulnerabilities related to age, disability, or socio-economic circumstances through AI in a way that distorts behaviour or decision-making, especially if this results in significant harm to the individual
  • Use AI systems for biometric categorisation that infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except in limited cases permitted by law
  • Implement AI-based social scoring systems that evaluate or classify individuals or groups based on their social behaviour or personal traits in a manner that causes harm, discrimination, or unfair treatment
  • Assess the risk of an individual committing criminal offences based solely on profiling, personality traits, or other non-behavioural factors
  • Compile facial recognition databases through untargeted scraping of facial images from the internet, social media, or CCTV footage
  • Use AI to infer emotions in sensitive environments such as workplaces or educational institutions where such analysis could lead to discrimination, unfair treatment, or privacy violations
  • Engage in real-time remote biometric identification in public places for law enforcement purposes, except where there are strong legal justifications and oversight mechanisms

Our Commitment to Responsible AI

We recognise the significant impact AI can have on our users and society, and we are dedicated to ensuring that our AI Products are designed and operated in accordance with comprehensive ethical standards. This Policy governs the development, deployment, and use of AI technologies across our Services to protect users' rights and maintain transparency in all AI operations.

No AI model training on your data

Real Talk Studio does not use data input by users — whether free account users or enterprise customers — to train AI models. This applies to conversation transcripts, Roleplay content, and any other data processed via the Platform. We contractually require all third-party AI Providers integrated into the Platform to have model training disabled in respect of any data processed on our behalf. This commitment applies equally to free and paid accounts.

Transparency

Users of the Platform are explicitly informed that they are interacting with an AI-powered simulation. AI-generated responses are clearly distinguishable from human responses, and users maintain full control over their own decision-making. AI outputs from the Platform do not constitute professional advice and should not be treated as such.

Fairness and accountability

We are committed to identifying and addressing bias in AI outputs. Where AI-generated content is found to produce unfair or discriminatory outcomes, we will take appropriate steps to investigate and remediate. We regularly review our AI Provider relationships to ensure alignment with our ethical standards.

Ongoing review

As technology evolves and regulatory environments shift, we will regularly review and update this Policy to reflect technological advancements and legal changes in local, national, and international regulations related to AI.


EU AI Act Risk Classification

Real Talk Studio is classified as a Limited-risk AI system, based on a self-assessment conducted in accordance with the EU AI Act framework.

Please note: This classification reflects Real Talk Studio's assessment of the platform itself. Customers and users remain solely responsible for their own compliance obligations under the EU AI Act and any other applicable regulation. Real Talk Studio cannot warrant that your use of the Platform will ensure compliance with any regulatory requirement, and this classification should not be relied upon as legal advice.

Overview of the AI system

Real Talk Studio is an AI-powered platform designed to help individuals and organisations practice high-stakes workplace conversations through simulated Roleplay interactions. The system does not make autonomous decisions that impact individuals' rights, employment status, or legal standing.

Assessment under the EU AI Act

The EU AI Act categorises AI systems into four risk levels:

  • Prohibited AI
  • High-risk AI
  • Limited-risk AI
  • Minimal-risk AI

Real Talk Studio falls under the Limited-risk AI category for the following reasons:

  • The system is used for training and coaching purposes only and does not make legally binding or high-stakes decisions
  • Users are explicitly informed they are interacting with an AI-powered simulation, meeting transparency requirements
  • The tool is not used in hiring, credit scoring, or law enforcement — sectors classified as high-risk under the Act
  • Our Subscription Agreement expressly prohibits Customers from using Output Data solely to make employment-related decisions about End Users

Compliance with transparency obligations

As a Limited-risk AI system, Real Talk Studio ensures:

  • Users are notified that they are engaging with an AI-powered coaching tool
  • AI-generated responses are clearly distinguishable from human responses
  • Users maintain full control over their decision-making process, and AI outputs do not have direct real-world consequences

We are open to conducting additional assessments or engaging with relevant authorities if further regulatory guidance is required.


Enforcement

Any misuse of our AI Products or failure to adhere to this Policy will result in appropriate action to protect the integrity of our platform and our users. Depending on the nature and severity of the violation, Real Talk Studio may take one or more of the following actions:

  • Warning
  • Temporary suspension
  • Termination of access
  • Legal action

Changes to This Policy

We reserve the right to update this Policy from time to time to reflect changes in law, technology, or our services. Where we make material changes, we will notify users by email or via an in-platform notification. The current version of this Policy is always available at realtalkstudio.com/policies/responsibleai.


Contact Us

If you have any questions about this Policy, please contact:

Toby Sinclair Coaching Limited (t/a Real Talk Studio)
13 Highfield Road, Croston, Lancashire, PR26 9HH
Email: privacy@realtalkstudio.com
Website: realtalkstudio.com