MidwestUX 2019: Bias, Ethics, & Creativity in Design
MidwestUX (often stylized MWUX) is an annual design conference hosted on a rotating basis in a few Midwestern cities. The purpose of MWUX is to gather professionals working in user experience design (plus a few related fields) to learn from a variety of speakers. Topics include UX for social change, leveraging user feedback, and avoiding bias in user persona development. This year, the three-day event was hosted in our very own Grand Rapids, Michigan. Attendees came from all over the world and ranged from Ford designers, to mid-sized agency designers like DVS, to one-person organizations.
Each day was jam-packed with insight from leaders in the design industry. Speakers included designers from Fortune 500 companies, college professors and intellectuals, and single-person design studios. The theme for MWUX 2019 was Reframe Reality. To me, this meant taking a critical look at your life, surroundings, and work with an eye for improvement and ethical creativity. My pages of notes will never make it onto any public forum for reading (no one else could decipher them anyway), but I'd like to highlight three speakers whose wealth of knowledge I feel could have the most impact if it were shared with a wider audience.
Bias & User Interfaces
People often assume that computers can’t be biased. On one hand this makes sense. You put in a number and another comes out. A collection of ones and zeros does not have the learned behavior of cultural and societal bias. In reality, programming and data are influenced by humans. The resulting system may seem neutral, but it can still fall victim to bias.
The example Charles Hannon gave was a series of questions about Eric Garner posed to Amazon’s AI voice assistant, Alexa.
When asked “who is Eric Garner?” Alexa gave the following answer:
Eric Garner was a male celebrity who was born on September 15th, 1970, in New York City, New York, USA.
This answer is problematic and lacks some crucial details. Alexa isn't necessarily biased, but she relies on data from Google and Wikipedia. She was able to determine that Eric Garner was dead (indicated by her choice of "was") and that he was a "celebrity" because his name appeared in news headlines the way a celebrity's would. Alexa completely missed the fact that Eric Garner was killed on the street by a police officer using an illegal chokehold. Because Alexa skims data and looks for surface-level details, she loses out on crucial information and will sometimes give biased answers.
Solutions for these issues are complex and require the work of large data companies like Google and Amazon, as well as data scientists, UX designers, developers, and more. These changes will not happen overnight, and these issues will only become more important as technology and AI improve.
For a lot of us, solving this issue of bias starts with being aware of and self-critical about how we use information and data. This also applies to areas not as consequential and complex as race in America. Maybe the numbers you use to show growth at work are cherry-picked a little too much. Maybe you are not doing enough research or listening to your customers. Everyone can (and should) take time to examine their actions and determine whether bias has any influence.
Crafting a Creative Culture
Could have been summed up in an email.
Everyone knows the pain of an hour-long meeting with a lot of talk and only a few actionable outcomes. Jeff Veen gave an insightful talk about fostering a creative culture by modeling meeting structure on a traditional creative review, an approach that yields actionable outcomes and stronger finished work in any industry. The main components of these meetings are:
- Optional attendance, mandatory participation
- This is not a forum for opinions
- Convergent vs divergent problem solving
Optional Attendance, Mandatory Participation
This seems pretty straightforward, and that’s because it is.
Jeff explained that his team does a "product" review instead of a "design" review. This may seem like splitting hairs, but the distinction is important. Everything from site plans to sales decks to analytics and data sets should be reviewed by a group of peers, and the reviewers do not need to work in the same fields as the presenter. Devices are put away and people are required to move about, use whiteboards, and interact with each other. After all, these are working sessions, not a time for speeches.
This is Not a Forum for Opinions
Jeff’s team went through training exercises to learn how to give feedback in a way that is respectful and builds trust. The goal is always to understand why decisions were made, not to decide if they are inherently good or bad. The example below illustrates this principle.
Bad: “I don’t like that blue.”
When coming from a team member, this essentially means “I don’t like the decision you made” and can be interpreted as “I don’t like you” or “I am smarter than you”. This immediately puts the parties in a position of conflict.
Better: “Why is that blue?”
This open-ended, non-confrontational question leads to discussion and fosters space for creativity and growth.
Best: “Is color important here?”
This is the broadest question and gives us the greatest access to discovering how a decision was made. It invites everyone to walk through the decision-making process together and decide if it really is the best option.
Convergent vs Divergent Problem Solving
At the beginning of the meeting, the presenter establishes whether the purpose is convergent or divergent. Divergence lends itself to problem solving early in the process. Think of this as blue sky exercises and big-thinking brainstorming. To borrow a term from improv, this is a “yes, and…” session. Alternatively, there is the convergence focus. This usually means walking team members through the creative process, evaluating feasibility, acknowledging constraints, and driving towards consensus. Defining these goals from the start keeps a meeting on course, reduces confusion and frustration, and leads to final products that can withstand critical thought.
Consent & Ethics in Experience Design
Lauren Liss gave a very insightful talk about the importance of informed consent and the ramifications if consent is not treated honestly and with care. Lauren began by introducing two concepts that I do not normally see in the same intellectual spheres. The first is consent, which I usually encounter in discussions of intimate relationships or sexual violence. The second was the uncanny valley, the creepy or unnerving feeling produced when robotics and animation mimic living things almost, but not quite, convincingly.
Lauren’s talk centered around the negative experience of a user realizing their information is not being used as intended. This can trigger feelings of vulnerability and the uncanny valley. As people who work in data and information (especially in the marketing and advertising fields), it is up to us to make sure the people we collect data from know how it is being used and can give informed consent.
What I mean by "informed consent" is that people must understand how their data, or access to their data, will be used. We have all breezed past those boilerplate privacy policies and hoped we did not sign away important information or our favorite kidney. The average person would need to spend days reading all of the digital privacy policies they agree to in the span of a year.
We all know this will never happen.
The issue is that people tend not to trust organizations when they do not understand how their information will be used. When they find their data being used in unexpected ways, like an ad following someone around the web, the uncanny valley feeling creeps in. People probably won’t stop handing over their data for convenience any time soon, but there are steps we can take to gain the trust of users.
Sometimes you need information from users. This could be a name and email address to reach out to them, or location data for real-time weather alerts. There is nothing inherently wrong with asking for this info; just be clear and let users know why you need it.
Users are people
Take some time to read your policies out loud, preferably to other people. Do they sound overly sterile and needlessly complex? Being genuine and easily understood goes a long way in building trust and transparency.
Spend some time creating a single-page, plain-English summary of your policies. Make this something your users will actually read. A site called Terms of Service; Didn't Read does this for major tech companies and can provide a good roadmap for how to create a TL;DR version.
Collect only what is necessary
Not only is this an ethical way of doing business, but it also improves UX. For example, fewer people will fill out your contact form if it asks for a dozen different fields, even if most of those fields are not required.
Taking all of this into consideration is hard. It's very easy to vacuum up data with little regard for the user, especially if you know they probably will not read what they are opting in to. But the practice is not only unethical; it also builds distrust. When time and trust are the most common forms of currency on the web, it is important to use them carefully and with respect.
Design is not just colors and typefaces and seemingly arbitrary aesthetic choices. It is about the infinite ways our decisions affect others and finding ways to improve those interpersonal experiences. Part of our mission at DVS is to excel in strategic design and innovation. One example of that commitment is expanding our knowledge through events like Midwest UX. The event challenged and broadened our team's way of thinking about responsible and ethical creativity and helped us discover new ways to push the trade forward. We were able to soak up insightful commentary and disruptive ideas and connect with other professionals in our industry. Design will only become more relevant and important as time goes on, but a commitment to driving it forward with renewed optimism and passion is what makes it fulfilling and worthwhile.