Thinking out loud as I look back upon foundational texts.
Working on definitions and understanding my own work?
Lately, I've been buried in new papers on agentic AI systems, parasocial interaction, anthropomorphic cues, and emotional trust. There's so much exciting work emerging that I almost forgot what it felt like to sit with an old idea. I've been chasing novelty, but I realized I needed to look back to the frameworks that first taught me how to think.
This post is my way of revisiting the foundational readings I encountered during my undergraduate education in cognitive science and design, texts that still echo in the questions I ask now about intimacy, ethics, and the emotional texture of digital systems. Now, as I wrestle with the seductive realism of emotional AI, and as I often find myself stuck in the design of things (in flows, in frameworks, in toolkits), I want to take this time to think aloud while reading these texts, note how they inform my work, and consider how I can further refine what I thought was just a design problem. Since I aim to make claims about ethics, I also wanted to indulge in some definitions in the hope of working toward a methodology.
Emotion as Constructed, Not Pre-Programmed
Lisa Feldman Barrett’s theory of constructed emotion was my first encounter with the idea that what feels biological may be cultural. She argues that emotions are not universal signals, but interpretive experiences grounded in social context. Reading this work now, against a flood of new papers on emotionally resonant AI and anthropomorphic interfaces, I’m realizing just how radical that claim is.
Barrett's view challenges how we think about "emotional realism" in AI. If emotions are things we project, then AI systems don't actually feel real; we just perceive them that way. Despite this, designers still try to simulate emotions rather than setting appropriate boundaries. We need to shift power away from designers and give users more control in defining emotional interactions, which leads me to the conclusion that this isn't just about negotiating better terms: it's about giving up control entirely. (Maybe that take is a bit radical, but the current problems with over-reliance make me think the growing conversation around regulation does echo my sentiment.)
Both AI-simulated emotions and user emotional projections can reinforce mainstream cultural standards, particularly Western individualistic views. Barrett's theory doesn't protect against this problem on its own: without examining cultural biases, we might still end up treating emotions as simple, readable components.
When we project emotions, we're influenced by larger systems and biases. While machines don't actually have emotions, their design guides which feelings users are encouraged to project onto them.
Researchers Sengers and Birhane take this critique further. Sengers points out how emotional design reinforces Western ideas about self-expression, while Birhane highlights how building narrow emotional norms into "universal" AI systems causes harm. Their concerns align with recent research showing that human-like robots increase trust while raising ethical concerns.
Cognitive Bias and the Ethics of Frictionless Design
Daniel Kahneman’s Thinking, Fast and Slow helped me name something I had long felt in interface design but couldn’t quite articulate: that ease isn’t always ethical. System 1, our fast and intuitive mode of thinking, is the same system that designers exploit for conversions, attention, and compliance.
Friedman and Nissenbaum’s work on bias in systems gave that insight moral weight. They showed how "neutral" systems reproduce social inequalities by design.
Reading this alongside Norman's work on emotional design, especially his 2002 piece arguing that "attractive things work better," I saw the contradiction more clearly: the very principles that make interfaces usable and pleasing can also be used to bypass deliberation.
This is echoed in Sun & Wang (2025), who show how LLM sycophancy induces misplaced trust. Zamfirescu-Pereira et al. (2023) similarly reveal how natural language interfaces lull users into treating AI as more human than it is, leading to false confidence and fragile mental models.
Agency Is Not Autonomy Alone
Reading Andy Clark’s Being There during undergrad changed how I understood minds: as situated, embodied, extended. His argument that cognition happens through tools, not just in the brain, probably planted the seed for my fascination with human-system interaction.
This fascination led me to take a Minds, Brains, and Computers course, where I encountered Shannon Vallor's virtue ethics, which reframes ethics not as a checklist but as a practice of care, honesty, and relational capacity. The cognitive and ethical aspects of the problem merged into the idea that agency is co-constructed: it doesn't arise from autonomy alone, but from context, constraint, and connection.
This insight reshaped how I saw my design practice. I wasn’t just building tools; I was building scaffolds for trust, boundaries, and relational negotiation. In designing emotionally responsive systems, the question isn't “how do I make the machine seem caring?” but “how do I give the user agency to decide how much emotional involvement they want?”
This aligns with Shang et al. (2024), who distinguish between cognitive and emotional trust in AI, offering much-needed granularity to how we think about interface relationships. Their trust scale might not be the final answer, but it’s a step toward building tools that clarify rather than blur the user’s stance toward machine agency.
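To keep myself honest about what that granularity would mean in practice, here is a minimal sketch of how separate cognitive and emotional trust subscale scores might be computed from Likert responses. The item names and groupings are my own hypothetical placeholders, not the actual instrument from Shang et al. (2024).

```python
# Minimal sketch: scoring hypothetical cognitive vs. emotional trust subscales
# from 1-5 Likert responses. Item names and groupings are placeholders, NOT the
# published instrument from Shang et al. (2024).
from statistics import mean

COGNITIVE_ITEMS = ["reliable", "competent", "accurate"]               # hypothetical items
EMOTIONAL_ITEMS = ["cares_about_me", "understands_me", "comforting"]  # hypothetical items

def subscale_scores(responses: dict) -> dict:
    """Average each subscale separately so cognitive and emotional trust
    stay distinct instead of collapsing into a single 'trust' number."""
    return {
        "cognitive_trust": mean(responses[item] for item in COGNITIVE_ITEMS),
        "emotional_trust": mean(responses[item] for item in EMOTIONAL_ITEMS),
    }

# Example participant: high confidence in competence, low emotional attachment.
participant = {
    "reliable": 5, "competent": 4, "accurate": 5,
    "cares_about_me": 2, "understands_me": 2, "comforting": 1,
}
print(subscale_scores(participant))
# e.g. {'cognitive_trust': 4.67, 'emotional_trust': 1.67} (values rounded)
```

Keeping the two scores separate makes it visible when a system earns competence-based trust without any emotional bond, or the reverse, which is exactly the distinction that tends to get blurred when we talk about "trust in AI" as one thing.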
Moral Crumple Zones in Human-AI Interaction
Recently re-reading Madeleine Elish's concept of the moral crumple zone made me think about the hidden architecture of blame in human-AI systems: how the human operator becomes the scapegoat when things go wrong, even if the failure was designed in from the start.
The idea sticks with me as I navigate the ethical murk of emotionally intelligent systems, because when users emotionally invest in AI interfaces thanks to polished UX, pleasant voices, or affirming text, intimacy becomes engineered, not earned.
Hofmann et al. (2025) call this "illusions of intimacy"; their work shows how people develop one-sided emotional attachments to AI systems, often unaware that there is no mutuality involved. Similarly, Cohn et al. (2024) reveal how even subtle anthropomorphic cues increase trust and perceived intelligence, misleading users about system capabilities.
Research Methodology (Refined)
This study uses a critical-speculative mixed-methods approach to explore how emotionally expressive AI systems shape user trust, emotional boundaries, and parasocial attachment. Rather than simply evaluating system effectiveness, the goal is to uncover how emotional cues are received, rejected, or negotiated, and when users wish they could opt out.
I am not moving too far away from what I initially proposed. I do want to build in mechanisms to further educate myself on authority and agency, while also developing frameworks that hold both designers and users accountable without excluding them from the process. It's not just about enforcing toolkits; it's about education, and including ethics conversations in curricula.
The rest of the methodology remains as detailed in the original plan, with speculative UI probes, comparative user studies, qualitative interviews, sentiment analysis, and cross-method triangulation guiding the research. The goal is not to find perfect emotional UX patterns, but to understand when users want emotional realism, how they want to control it, and what ethical design can look like beyond simulation.
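As a small illustration of the sentiment-analysis strand, here is a minimal sketch of how transcript excerpts from interviews or probe sessions could be scored with an off-the-shelf lexicon-based tool (NLTK's VADER). The excerpts and the labeling thresholds are illustrative assumptions, not the study's finalized pipeline.

```python
# Minimal sketch: scoring transcript excerpts with NLTK's VADER sentiment
# analyzer. Scores are meant to be triangulated against qualitative coding,
# not used as a standalone measure. Thresholds are illustrative assumptions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

excerpts = [
    "I liked that it remembered my day, but it felt a bit too familiar.",
    "Honestly, I wanted a way to turn the emotional stuff off entirely.",
]

for text in excerpts:
    scores = sia.polarity_scores(text)  # dict with neg / neu / pos / compound
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "mixed/neutral")
    print(f"{label:>13}  compound={scores['compound']:+.2f}  {text}")
```

Scores like these only flag candidate moments; deciding whether a participant is embracing, negotiating, or rejecting an emotional cue still belongs to the qualitative analysis, which is where the triangulation happens.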
More publications in this series:
Designing for Intimacy Without Consent? Reclaiming Emotional Boundaries in AI Interfaces
Mindful Integration, Reflections on SynthID, and Skipping the exit to Uncanny valley