D2.3 Personal data and AI in the Metaverse

Research output: Other contribution


Abstract

With the market size associated with the Metaverse forecast to grow to between $8 trillion and $13 trillion by 2030, the Metaverse promises immense economic benefits. Personalised experiences play an important role in realising these benefits; they increase user engagement with the Metaverse platform, thereby generating revenue for both the platform operator and the many service providers within it. Personalisation is provided by leveraging personal data – generated through interactions with other users and digital services within the Metaverse – through technologies such as AI. To effectively reap the benefits that the Metaverse will provide, it is important to identify the risks that may arise from the collection and processing of large volumes of personal data, and to have effective safeguards in place to prevent harm and provide remedies. A safe and secure Metaverse will benefit all involved parties – commercial service providers, end users, and nation states. Enhanced levels of trust will in turn stimulate the uptake and adoption of Metaverse technologies. This report focusses on the concerns associated with gathering personal data and processing this data through AI-based technologies.
Concerns regarding the use of AI are not new. However, when placed within the context of the Metaverse, AI processing of personal data can introduce new concerns or aggravate existing ones. This is attributable to the intrinsic characteristics of the Metaverse, which include personalised services; heavy use of advanced sensors, leading to easy and continuous data collection (which complicates user consent mechanisms); interoperability and interconnectedness; and the massive scale of the Metaverse, which will host potentially unlimited numbers of users. These concerns fall into two main categories, according to whether the AI is exploited by a malicious actor, or the AI itself is malicious or flawed. We consider the first category – that of vulnerabilities in AI being exploited – a security concern. In this case, the AI is developed and tested to be trustworthy but is susceptible to attacks across the full AI lifecycle, i.e., design, development, deployment, and maintenance. We provide a comprehensive list of these AI vulnerabilities in the Metaverse, including poisoning, backdoor, evasion, and privacy attacks. The second category is a safety concern; it relates to the AI behaving unethically or unlawfully towards individuals, society, or the state. Safety concerns include bias and unfairness in the AI, reliance on large datasets, lack of transparency and accountability, and poor baselines and evaluation methods.
The concerns identified in this report provide opportunities to address the vulnerabilities that arise due to AI processing of personal data within the Metaverse. The mitigations suggested in the report include the development of safeguards and the adoption of quality and safety-oriented processes applied across the full lifecycle of AI systems. Our contributions will help make the Metaverse a safer, more secure, and profitable place for people and organisations to conduct business.
Original language: English
Type: Policy Report
Media of output: Report Document
Number of pages: 36
Publication status: Published - 31 Oct 2024

Keywords

  • Metaverse
  • Personal Data Collection
  • AI
  • Policy
