We are entering a world of synthetic realness, where AI-generated data convincingly reflects the physical world. In this world of synthetic data, images and chatbots, spoofing and fakes, we face hard questions: What’s real, what’s not, and, perhaps more importantly, when do we care?
Synthetic realness can push AI to new heights in healthcare. Synthetic data can stand in for real patient datasets in research, training and other applications. Synthetic content, such as AI-generated text, video and audio, could counter malicious deepfakes and misinformation in healthcare by amplifying accurate information from trusted sources, offsetting the damage done by bad actors.
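To make the synthetic-data idea concrete, here is a minimal sketch of generating a synthetic patient cohort. Every column name, distribution parameter and prevalence figure below is an invented assumption for illustration, not drawn from any real dataset or from a specific synthesis method:

```python
# Hypothetical sketch: generate a synthetic patient cohort whose summary
# statistics loosely mimic a real one. All column names and distribution
# parameters here are invented assumptions, not real data.
import numpy as np

rng = np.random.default_rng(seed=42)
n_patients = 1_000

synthetic_cohort = {
    # Ages drawn from a normal distribution, clipped to a plausible range.
    "age": np.clip(rng.normal(loc=58, scale=14, size=n_patients), 18, 95),
    # Systolic blood pressure; a real generator would also preserve
    # the cross-column correlations this toy version ignores.
    "systolic_bp": rng.normal(loc=128, scale=18, size=n_patients),
    # Binary diagnosis flag with an assumed 12% prevalence.
    "has_condition": rng.random(n_patients) < 0.12,
}

# No row corresponds to a real person, so the data can be shared for
# research or model training with far lower re-identification risk.
print(f"mean age: {synthetic_cohort['age'].mean():.1f}")
print(f"prevalence: {synthetic_cohort['has_condition'].mean():.1%}")
```

Production-grade synthetic data generators go much further, learning joint distributions from real records while enforcing privacy guarantees, but the payoff is the same: data that behaves like the real cohort without exposing any actual patient.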
Fakes indistinguishable from the real thing already exist, so as synthetic realness progresses, our focus must shift to authenticity. We’ll begin to evaluate “Is this authentic?” based on four primary tenets (captured in the sketch after this list):
- Provenance – what is its history?
- Policy – what are its restrictions?
- People – who is responsible?
- Purpose – what is it trying to do?
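These four tenets lend themselves to a structured record attached to a piece of content. The sketch below is a hypothetical data model, not a standard schema (though industry efforts such as C2PA pursue similar provenance goals); the field names and the check are illustrative assumptions:

```python
# Hypothetical sketch: the four tenets expressed as a content-authenticity
# record. Field names and the evaluation rule are illustrative assumptions,
# not an established standard.
from dataclasses import dataclass

@dataclass
class AuthenticityRecord:
    provenance: list[str]   # history: e.g. ["synthesized", "clinician-reviewed"]
    policy: list[str]       # restrictions: e.g. ["research-only"]
    people: str             # who is responsible for the content
    purpose: str            # declared intent, e.g. "patient education"

def is_evaluable(record: AuthenticityRecord) -> bool:
    """Content can only be judged authentic if all four tenets are answered."""
    return bool(record.provenance and record.policy
                and record.people and record.purpose)

record = AuthenticityRecord(
    provenance=["synthesized-by-model", "reviewed-by-clinician"],
    policy=["research-only"],
    people="Example Health AI team",
    purpose="augment a training dataset",
)
print(is_evaluable(record))  # True: every tenet has an answer
```

The design point is that authenticity is not a single boolean stamped on content; it is a judgment made against all four tenets, and content missing any one of them cannot be evaluated at all.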
That said, these technologies push healthcare into controversial terrain. They raise tough questions about how to use generative AI authentically when bad actors are wielding the same tools to create deepfakes and disinformation that undermine trust.