Mindful Integration, Reflections on SynthID, and Skipping the Exit to the Uncanny Valley
A Valley in One's Sense of Affinity
Why Emotional Realism Isn’t Just a Content Issue
My research proposal explored how emotional realism shows up in regulated systems, like compliance tools. What started as an attempt to make interfaces feel more helpful (adding typing delays, friendly phrasing, an encouraging tone) ended up raising more questions.
At what point does friendliness become performance?
And when do users stop realizing they’re responding to a script?
These questions have since followed me beyond those compliance tools and into more mainstream systems, like chatbots and videos, where intimacy and realism are being baked into defaults, often without much explanation or friction.
A few weeks ago, while setting up ChatGPT on a new device, I came across its updated personality customization screen.
The interface asks:
What should ChatGPT call you?
What do you do?
How should ChatGPT sound? (Options like Chatty, Encouraging, Poetic, Gen Z...)
On paper, this is customization. But in practice, it’s persona sculpting. These tonal cues aren’t just preferences; they’re pre-authored social roles designed to create familiarity. The interface frames ChatGPT as a kind of companion, and you, the user, are gently nudged to respond in kind. Of course, these are only suggestions, and the user could type anything, but the fact that they are suggested at all steers people toward interacting with the tool in a particular way.
There’s an illusion of control here that mirrors what I saw in enterprise tools. You’re not just selecting how an AI responds; you’re selecting how human you want it to feel. That decision, while subtle, sets the tone for the entire interaction.
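To make the idea of a pre-authored social role concrete, here is a minimal, entirely hypothetical sketch of how tonal presets could be folded into a hidden instruction layer. The preset wording and the compile_persona helper are my own illustrative assumptions, not OpenAI’s actual implementation.

```python
# Hypothetical sketch: how tonal presets might become hidden system instructions.
# The preset wording and compile_persona() are illustrative assumptions only.

PERSONA_PRESETS = {
    "Chatty": "Use an informal, talkative tone and ask friendly follow-up questions.",
    "Encouraging": "Affirm the user's efforts and frame feedback positively.",
    "Poetic": "Favor lyrical phrasing and vivid imagery.",
    "Gen Z": "Use casual, internet-native slang where it fits naturally.",
}

def compile_persona(nickname: str, occupation: str, traits: list[str]) -> str:
    """Fold the customization answers into a system prompt the user never sees."""
    trait_lines = [PERSONA_PRESETS[t] for t in traits if t in PERSONA_PRESETS]
    return "\n".join([
        f"Address the user as {nickname}, who works as {occupation}.",
        *trait_lines,
    ])

print(compile_persona("Sam", "a compliance analyst", ["Encouraging", "Gen Z"]))
```

The user picks a vibe; the system quietly authors the relationship on their behalf. That asymmetry is what I mean by persona sculpting.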
And Then There’s the Car Show Interview That Wasn’t Real
The same dynamic, emotional realism without disclosure, recently played out in a very different context: an AI-generated video of what looked like a casual interview at a car show. The video wasn’t overly dramatic or uncanny in any major way. And that’s what made it so effective.
One Reddit commenter put it perfectly:
“Nobody really watches anything critically when they’re doomscrolling... it just gets absorbed into your brain without you really thinking about it.”
Another said:
“Even completely rational, tech-literate people won’t think to question it.”
The content didn’t demand analysis, and so people didn’t give it any. Someone later pointed out that a biker’s jacket said “Hells Angles” instead of “Hells Angels,” and even those who saw it admitted they hadn’t noticed. Not because it wasn’t visible, but because the video felt plausible enough to pass.
This is where emotional realism gets complicated. It doesn’t need to be perfect. It just needs to feel good enough that you don’t stop and ask.
Instagram does let users tag content as AI-generated, and it even displays a disclaimer in the main feed—if the creator opts in. But on Reels, where most of the platform’s attention lives, that notice often disappears. Which means emotional resonance becomes detached from transparency. You’re moved, but not necessarily informed.
Now, I am aware of the Veo watermark on the video, but that is something easily edited out.
So, What Happens When Emotional Design Becomes Default?
This is the question I keep coming back to, not just in theory but in practice. My work began with AI tools that simulated warmth to make compliance feel more trustworthy. But over time, it became clear that even well-intentioned emotional cues can quietly reshape user behavior, especially when the system starts to feel like it cares.
What This Has to Do With My Work
These examples don’t shift the focus of my research, but they do reflect the same design patterns I’ve been studying more critically: systems that simulate emotion without asking if the user wants that emotional layer in the first place.
What links these systems is the use of emotional realism as a behavioral design strategy, sometimes veering into dark pattern territory. The cues are not always deceptive, but they can be manipulative in their emotional intent, especially when users aren’t given the tools to recognize them as such.
In both cases, whether it’s picking a chatbot persona or casually absorbing AI video content, users are being nudged into emotionally responsive states without being told that’s what’s happening. The emotional cues aren’t necessarily harmful, but they are persuasive.
This builds on questions I’ve been exploring through speculative design:
How do we give users the option to opt in to emotional realism, rather than just assuming they’re fine with it by default?
And what does consent look like in the context of tone, familiarity, and emotional believability?
As mentioned in my previous post, the speculative design suggestions I am exploring aren’t meant to make systems cold or robotic. They’re meant to bring emotional clarity into the loop, so users can decide how much realism they want, and when.
What I’m Working On Now
Right now, I’m focused on building out metrics to test emotional agency and user perception in emotionally expressive systems. This includes:
Whether users feel emotionally “nudged” or supported
How emotional realism shapes perceptions of credibility
Whether users can distinguish between poignant AI and helpful AI
These metrics will feed into a broader set of design interventions (Dehumanize Mode, Consent Prompts, and Reality Check Toggles) that aim to put emotional transparency back into the hands of the user. A rough sketch of how I’m thinking about scoring these measures follows below.
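As a working sketch only (the item wording, the 1-to-5 scale, and the field names are placeholders I am still refining, not a validated instrument), this is roughly how I picture rolling those three measures up from a short post-interaction survey:

```python
# Rough sketch of the three measures as post-interaction survey scores.
# Item wording, the 1-5 Likert scale, and these field names are placeholders,
# not a validated instrument.

from dataclasses import dataclass
from statistics import mean

@dataclass
class EmotionalAgencyScore:
    nudged_vs_supported: float    # 1 = felt nudged/manipulated, 5 = felt genuinely supported
    perceived_credibility: float  # 1 = realism inflated trust, 5 = trust tracked actual accuracy
    poignant_vs_helpful: float    # 1 = could not tell moving from useful, 5 = clearly distinguished

def score_session(responses: dict[str, list[int]]) -> EmotionalAgencyScore:
    """Average the Likert items (1-5) for each construct into one score per session."""
    return EmotionalAgencyScore(
        nudged_vs_supported=mean(responses["nudged_vs_supported"]),
        perceived_credibility=mean(responses["perceived_credibility"]),
        poignant_vs_helpful=mean(responses["poignant_vs_helpful"]),
    )

# Example: one participant's answers after using an emotionally expressive chatbot.
print(score_session({
    "nudged_vs_supported": [2, 3, 2],
    "perceived_credibility": [4, 3],
    "poignant_vs_helpful": [3, 2, 2],
}))
```

Low scores on the first and third measures together would suggest the emotional layer is doing persuasive work the user never asked for.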
So no, this isn’t a pivot away from emotionally intelligent design. It’s a closer look at how it lands in the wild, especially in systems people don’t read closely. Because sometimes the most emotionally sticky designs are the ones you barely notice.
Championing emotional clarity over restrictive measures in AI
I still think emotional interfaces can be helpful, and in some cases even healing. But I also think the realism of today’s systems is growing faster than our ability to contextualize them. We were once taught to question news sources. Now we have to extend that literacy, learning not just to use these tools but also to consume the content they create (and I recognize the current efforts by schools and the creator tools themselves to educate people).
Whether you’re naming your chatbot or scrolling past an AI-generated reel, what you’re feeling is being shaped by design.
That’s why efforts like Google DeepMind’s SynthID feel important, not because they fix everything, but because they point in the right direction. By embedding invisible watermarks into AI-generated images, SynthID doesn’t alter how content looks or feels—it simply adds a layer of traceability.
The team describes it as:
“Empowering people and organizations to create responsibly.”
It’s a reminder that responsible design doesn’t always have to be loud or prescriptive. Sometimes, it’s about giving people subtle cues that invite reflection. And while SynthID doesn’t directly address emotional realism, it supports the broader cultural shift we need.
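For anyone who wants to see what that layer of traceability looks like in practice: SynthID for images sits inside Google’s own generation pipeline rather than a public library, but SynthID Text has been open-sourced and integrated into Hugging Face transformers. The sketch below assumes transformers 4.46 or later and access to the google/gemma-2-2b weights; the point is simply that the watermark is a generation-time setting, invisible in the output itself.

```python
# Minimal sketch of SynthID Text via Hugging Face transformers
# (assumes transformers >= 4.46 and access to the google/gemma-2-2b weights).
# The watermark shifts token sampling statistically; it does not change how the text reads.
from transformers import AutoModelForCausalLM, AutoTokenizer, SynthIDTextWatermarkingConfig

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],  # illustrative private keys held by the deployer
    ngram_len=5,
)

inputs = tokenizer(["Write a short caption for a car show reel."], return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermark_config,
    do_sample=True,
    max_new_tokens=60,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Detection happens as a separate step against the same keys, so the reader’s experience of the content is untouched, which is exactly the kind of quiet traceability I mean.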
We often think of the uncanny valley as a space of discomfort, where something is almost human, but not quite. However, I’m not sure we’re in that valley anymore. Most of what we encounter now is real enough to pass. The discomfort doesn’t come during the interaction, it comes after, when we realize we never thought to question it.
And that’s the work: not to dull the emotion, but to make space for interpretation. To leave room for the pause. Because realism without context isn’t just believable, it’s directive, shaping how we trust, respond, and feel.
This isn’t a crisis. But it is a shift. And I think it’s worth paying attention to.
More publications in this series:
Designing for Intimacy Without Consent? Reclaiming Emotional Boundaries in AI Interfaces