Middle Ground: An Exploration of How Users with Opposing Stances Engage Online 

Topics Exploration 2022, Advised by Chen Ying-Yu

Fostering constructive cross-perspective dialogue.

Social media platforms often amplify familiar ideas and like-minded connections, creating echo chambers that deepen divides rather than fostering dialogue. While spaces for differing opinions exist, they are frequently marred by incivility, trolling, and anonymity, discouraging thoughtful conversations.

To counter these challenges, I designed Middle Ground, a mobile prototype that enables constructive and meaningful discussions between users with opposing views on controversial topics. Through its design, the platform aims to transform online interactions into opportunities for expanding horizons and working collaboratively on complex issues.

The challenge

Affordances like Facebook’s friends list and the feed-ranking algorithms that decide what users see create one echo chamber after another, and interfaces once meant to draw people closer together are instead widening the divide between them. Users tend to seek information that supports their existing beliefs rather than information that challenges them [1], and even when they look beyond those beliefs, spaces that allow different ideas to coexist are often marred by incivility, trolling, and other disruptive behavior that keeps thoughtful conversations about important issues from taking place [2].

To address these challenges, I decided to study how users seek information and engage in conversations about controversial topics, exploring their perspectives on existing designs and their personal experiences to understand what is needed to create safe, constructive spaces for meaningful dialogue.

As a starting point, I asked:

  • How do people with opposing stances on a topic discuss their opinions online?

  • What features or affordances are beneficial to these exchanges?

Gathering data and initial analysis

To gain a nuanced understanding of user needs, behaviors, and challenges, I employed methods grounded in user experience research, including in-depth interviews, affinity diagramming, persona creation, and user journey mapping.

Interviews

I conducted online interviews via Google Meet and Zoom with five college students selected through judgment sampling. Participants met two criteria: (C1) they demonstrated interest in and understanding of a controversial topic (e.g., critical race theory or artificial intelligence) and actively followed current events, and (C2) they were willing to voice their opinions on controversial topics. Each interview lasted 30–60 minutes and prompted participants to reflect on online and offline conversations, focusing on motivations, challenges, and experiences with controversial discussions. Specific questions encouraged comparisons between the two mediums to identify aspects of offline interaction missing online.

Affinity Diagram

Next, I transcribed and reviewed interviews, extracting key points onto color-coded sticky notes representing each participant. These notes were grouped into categories, including "mediums used," "helpful features," and "sources of emotional response," revealing common patterns in user behavior and pain points.

Personas

From the affinity diagram insights, I created two personas. Ian, an introverted user, struggles to initiate genuine online conversations, while Elliot, an extroverted user, actively seeks diverse perspectives but often finds interactions emotionally charged or unreliable. Each persona highlights distinct motivations, frustrations, and preferred platforms, grounded in direct interview quotes.

User Journey Maps

Using personas, I mapped the steps users take when engaging in conversations with opposing viewpoints. Ian, motivated by personal interests, navigates cautiously through familiar platforms before seeking insights from trusted friends. In contrast, Elliot confidently reaches out to broader audiences but pivots to personal connections when public responses fail to meet her expectations. These maps integrate interview data, illustrating emotional and behavioral shifts during the process.

What I discovered

From this analysis, I identified four preliminary insights to guide the design of the prototype and ensure it addresses key user needs and challenges.

(1) Users may seek opposing views for topics of interest or utility.

Social media platforms have varying affordances that shape user interactions. While selective exposure theory suggests people prefer information that supports their beliefs, they may seek opposing views when the topic interests them or offers perceived utility.

(2) Thoughtful environments and supportive design features matter.

Users’ tone tends to mirror the tone of prior comments, so negative interactions can escalate into trolling [2]. To foster thoughtful exchanges on opposing ideas, platforms should include features like private channel switching, content deletion, and humanizing interfaces (e.g., displaying names or moods).

(3) Opposing-view conversations need to be engaging, respectful, and productive.

Positive experiences depend on participants being engaged, respectful of differing opinions, and feeling that the conversation is meaningful and making progress.

(4) Anonymity fosters openness but undermines credibility.

Anonymity allows users to speak freely without fear of judgment but reduces credibility and trust. Interviewees highlighted that knowing others' identities helps ensure authenticity and reduces concerns about trolls, bots, or insincerity, leading to more meaningful exchanges.

From discovery to design

I began the design process by sketching initial ideas, focusing on two main solutions: enhancing features in existing social media platforms or creating a dedicated app for safe, constructive dialogue between opposing stances. Discussions with peers and my professor led to a hybrid approach—developing a platform with familiar social media structures but tailored for respectful and engaging conversations on controversial topics. Initial sketches, including a pairing page and chat room, were created using paper and Procreate.

With a clearer vision, I moved to wireframing to define key elements, their hierarchy, and page connections. This process, done in Figma, evolved into low-fidelity prototypes, where I refined layouts, word choices, and iconography, finalizing the structure of the platform.

Introducing Middle Ground

The high-fidelity prototype refined the platform into an intuitive visual design; its key features, demonstrated through simulated user interactions, are presented below:

Match by Topic and Stance

Users select a topic, their stance, and the stance of their conversation partner on the pairing page before entering a one-on-one chat. Default settings encourage dialogue between opposing views. The chat room, inspired by Messenger, features a reminder for mindful communication and a panel summarizing the topic.
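
To illustrate how such pairing could work, here is a minimal TypeScript sketch; the type names, the queue-based matching, and the `findPartner` function are illustrative assumptions rather than the prototype’s actual implementation.

```typescript
// Hypothetical pairing logic for Middle Ground's pairing page.
type Stance = "for" | "against" | "neutral";

interface PairingRequest {
  userId: string;
  topicId: string;
  ownStance: Stance;
  partnerStance: Stance; // the UI would default this to the opposing stance
}

// Users waiting to be paired, keyed by topic.
const waiting = new Map<string, PairingRequest[]>();

// Returns a compatible partner, or null if the requester should wait.
function findPartner(req: PairingRequest): PairingRequest | null {
  const queue = waiting.get(req.topicId) ?? [];
  const idx = queue.findIndex(
    (other) =>
      other.ownStance === req.partnerStance &&
      other.partnerStance === req.ownStance
  );
  if (idx === -1) {
    queue.push(req); // no match yet; join the queue for this topic
    waiting.set(req.topicId, queue);
    return null;
  }
  return queue.splice(idx, 1)[0]; // remove and return the matched partner
}
```

Defaulting `partnerStance` to the opposite of the user’s own stance would mirror the prototype’s default of encouraging dialogue between opposing views.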

Cite Sources for Credibility

To address concerns about ethos, the platform enables users to cite sources in conversations, fostering fact-based dialogue and reducing emotional escalation.
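
As a rough sketch of how a citation might travel with a message, consider the following TypeScript shapes; all field names here are assumptions for illustration, not the prototype’s data model.

```typescript
// A source attached to a chat message to support a claim.
interface Citation {
  title: string;
  url: string;
}

// A chat message with an optional citation rendered alongside it.
interface ChatMessage {
  senderId: string;
  text: string;
  sentAt: Date;
  citation?: Citation; // present when the user cites a source
}
```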

Guiding Questions for Stuck Points

Guiding questions help users overcome conversational roadblocks, ensuring continuous engagement and introducing new ideas while modeling constructive dialogue.
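
One possible way to trigger this feature is to surface a guiding question after the conversation stalls; in the sketch below, the two-minute idle threshold and the sample questions are purely my assumptions.

```typescript
// Hypothetical trigger for surfacing a guiding question after a lull.
const GUIDING_QUESTIONS = [
  "What experience first shaped your view on this topic?",
  "Is there a part of the other side's argument you find reasonable?",
  "What evidence would make you reconsider your stance?",
];

// Returns a question to suggest, or null if the conversation is still flowing.
function suggestQuestion(lastMessageAt: Date, now: Date): string | null {
  const idleMs = now.getTime() - lastMessageAt.getTime();
  if (idleMs < 2 * 60 * 1000) return null; // less than two minutes idle
  return GUIDING_QUESTIONS[Math.floor(Math.random() * GUIDING_QUESTIONS.length)];
}
```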

Exit with Closure

An "end conversation" button directs users to a peer-review page, notifying the other participant of the decision while preserving past exchanges. This design avoids ghosting, offering a respectful exit process without blocking.

Peer Review for Accountability

The peer-review system evaluates users on ethos, language, emotion control, topic engagement, and activeness via a Likert scale. Reviews generate an averaged public score while remaining anonymous to the reviewed user, balancing transparency with discretion.
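
The scoring described above can be made concrete with a small sketch; the five dimensions follow the text, while the 1-5 Likert encoding and the rounding are my assumptions.

```typescript
// One anonymous peer review, with each dimension rated 1 (poor) to 5 (excellent).
interface PeerReview {
  ethos: number;
  language: number;
  emotionControl: number;
  topicEngagement: number;
  activeness: number;
}

// Averages all reviews into a single public score (one decimal place);
// individual reviews remain hidden from the reviewed user.
function publicScore(reviews: PeerReview[]): number | null {
  if (reviews.length === 0) return null;
  const perReview = reviews.map(
    (r) =>
      (r.ethos + r.language + r.emotionControl + r.topicEngagement + r.activeness) / 5
  );
  const total = perReview.reduce((sum, score) => sum + score, 0);
  return Math.round((total / reviews.length) * 10) / 10;
}
```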

My reflections

This research takes a closer look at how users with opposing stances interact online, building on past work that discusses the importance of these conversations, the affordances of popular social media platforms that shape them, and prior attempts to improve them. The research also briefly touched on many subtopics that hold potential for deeper future study, such as the meaning controversial conversations hold for individual users, how users distinguish trolls and malicious actors from those they can engage with in good faith, and how users decide which platform to use when seeking out conversations.

As I designed the prototype for Middle Ground, my role was to influence user behavior through thoughtful affordances rather than enforce control. Features like citation and guiding questions nudge users toward respectful, engaging dialogue with those holding opposing views. While challenges such as biased peer reviews may arise, iterative testing and refinement can minimize these gaps, aligning user actions more closely with the platform’s goals. As a whole, I found the problem space to be engaging and of great importance in today’s social media era, where users around the world are offered more opportunity to connect but are also more divided than ever. This project allowed me to learn more about the intricacies of both the social and technical aspects of conversation, especially those where the participants have opposing stances, and I hope the results of this project will benefit future research and users.


Bibliography

[1] Munson, Sean, et al. “Encouraging Reading of Diverse Political Viewpoints with a Browser Widget.” Proceedings of the International AAAI Conference on Web and Social Media, 2013, https://ojs.aaai.org/index.php/ICWSM/article/view/14429.

[2] Cheng, Justin, et al. “Anyone Can Become a Troll.” Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017, https://doi.org/10.1145/2998181.2998213.
