Safeguarding People in the Metaverse and AI Era
MetaEthics helps young people, educators and developers practise digital safeguarding, ethical AI use and online safety in immersive and AI-driven environments.
What is MetaEthics?
- Understand risks in immersive spaces
- Practise responsible and ethical AI use
- Promote wellbeing and safety by design
What is Digital Safeguarding?
In the era of the Metaverse and Artificial Intelligence (AI), digital safeguarding refers to protecting ourselves and others from harm, abuse and exploitation in virtual and AI-driven environments. This covers recognising risks such as identity theft, harassment, misinformation and biased automated systems – and knowing how to respond appropriately.
Key points
- The metaverse is more than VR games: it's an interconnected virtual space where users interact via avatars, share experiences, transact and socialise.
- AI systems in these environments can monitor behaviours, personalise experiences, moderate content and influence what users see and do. This creates new risks around privacy, bias, automated decisions and transparency.
- Digital safeguarding aims to give users – whether young people, parents, educators or professionals – the tools, awareness and confidence to recognise these risks and respond safely.
Signs to Watch For
Here are some of the common warning signs in virtual and AI-driven spaces. If you notice any of these, take action.
Suspicious or unexpected friend requests or invites
In immersive spaces, profiling is easier and avatar identities can be manipulated.
Inappropriate content or behaviour
Harassment, bullying, sexualised interactions or grooming may occur in virtual spaces where moderation is weaker.
Identity theft or avatar takeover
Virtual identities and assets may be compromised, impersonated or stolen.
Deepfake or manipulated interactions
AI-driven avatars or media may be used to deceive, impersonate or manipulate users.
Privacy intrusions or data collection
Immersive systems may gather more data than expected (biometrics, motion tracking, social behaviour) and reuse it without clear consent.
Physical or psychological discomfort
Immersive VR/AR experiences can cause nausea, disorientation, confusion or blur the line between virtual and real.
Action Steps
If you suspect a safeguarding concern in a metaverse or AI environment, follow the steps below:
- Pause the interaction. If you feel unsafe, uncomfortable or uncertain, exit the space or avatar interaction.
- Note any details: username/avatar, time, platform, what happened.
- Use the platform's tools to block, mute or report the user/avatar. Most immersive/social platforms provide user controls.
- If you're a minor, stop further engagement and seek support from a trusted adult.
- Report the behaviour to the platform's moderation team or support channel. Include screenshots or logs if available.
- If the incident involves harassment, sexual content, grooming or hacking – consider reporting to relevant authorities (e.g. CEOP in the UK).
- Use online safety organisations for advice: for example, Internet Matters (UK) offers guidance on metaverse safety.
- Change passwords and enable multi-factor authentication (MFA) on your accounts.
- Review connected devices and permissions (VR headsets, apps, accessories).
- Be cautious about sharing personal information or virtual assets.
- If you are under 18 or feel emotionally affected by the incident, speak with a parent, teacher or trusted adult.
- Consider professional help if you experience distress, anxiety or feel unsafe repeatedly.
Contact and Support
If you encounter safeguarding issues in metaverse or AI-driven environments, you can reach out to these organisations and resources:
- Your metaverse platform's help centre
Notes for Developers and Guardians
Ensure your platform or tool provides moderation, avatar boundary settings and reporting tools.
Promote digital literacy: users (especially young people) must understand that virtual interactions carry real-world implications.
For educators: set clear guidelines for use of VR/metaverse in schools or youth settings, ensure supervision and safe environment setup.
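For developers, the report-and-block workflow described above (note the avatar, time, platform and what happened; block the user; escalate to moderation) can be sketched as a minimal data model. This is an illustrative sketch only, not a real platform API: the names `IncidentReport` and `ModerationQueue`, and all fields shown, are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Captures the details users are advised to note when reporting."""
    reporter_id: str        # the user filing the report
    reported_avatar: str    # username/avatar involved
    platform: str           # which space or app the incident occurred in
    description: str        # what happened, in the reporter's words
    evidence: list = field(default_factory=list)  # screenshots or logs, if available
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ModerationQueue:
    """Minimal in-memory triage queue and per-user block list (hypothetical)."""
    def __init__(self):
        self.reports = []
        self.blocked = set()  # (user_id, avatar) pairs

    def submit(self, report: IncidentReport) -> int:
        # Queue the report for human review; return a simple ticket number.
        self.reports.append(report)
        return len(self.reports)

    def block(self, user_id: str, avatar: str) -> None:
        # Blocking takes effect immediately for the blocking user;
        # moderation review of the report happens separately.
        self.blocked.add((user_id, avatar))

    def is_blocked(self, user_id: str, avatar: str) -> bool:
        return (user_id, avatar) in self.blocked
```

The key design point mirrors the guidance above: blocking is an immediate, user-controlled safety action, while reports carry enough detail (who, when, where, what, evidence) for a moderation team to act later.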