{"id":228,"date":"2014-08-26T22:30:00","date_gmt":"2014-08-26T22:30:00","guid":{"rendered":"http:\/\/measuringu.com\/ethical-treatment\/"},"modified":"2023-01-11T10:49:52","modified_gmt":"2023-01-11T17:49:52","slug":"ethical-treatment","status":"publish","type":"post","link":"https:\/\/measuringu.com\/ethical-treatment\/","title":{"rendered":"The Experiment Requires That You Continue: On The Ethical Treatment of Users"},"content":{"rendered":"

In 1963, Yale psychologist Stanley Milgram paid volunteers $4 to “teach” another volunteer, called the “learner,” new vocabulary words.

If the learner got the words wrong, he or she received an electric shock!

Or so the teacher/volunteer was led to believe. In fact, no shock was ever given; instead, a person working with Milgram pretended, with great gusto, to be shocked.

So while no one in the study ever received electrical current, the findings were shocking: some 65% of participants administered what they thought was 450 volts, a lethal dose of electricity!

The intent of the study was to understand how ordinary people could have carried out such atrocious crimes during the Nazi period. After the results were published, attention shifted from the disconcerting findings to the ethical treatment of the study’s participants. Many participants claimed to have suffered significant psychological damage both during and after the study.

Informed Consent

While the services and findings Milgram provided the behavioral science community continue to be debated, one clear outcome emerged from the experiments: a reform in the ethical treatment of test subjects was needed. Researchers conducting university-sponsored research now have to go through an Institutional Review Board (IRB), which vets the methods and approves or rejects the research.

What’s more, in the years since the Milgram studies it has become standard in academia to provide participants with what’s called informed consent: a document intended to tell participants the general topic of the study and that they can stop at any time, regardless of what the experiment demands.

And while user researchers aren’t Yale-trained psychologists performing controlled laboratory experiments, they do collect data from volunteers and subject them to questioning, analysis, and observation. Although not as explicitly required in industrial settings, many companies also present study participants with a type of informed consent document. This is what we did when I worked at Oracle and Intuit. Whether participants read or understood those documents is another question.

While we don’t always provide a formal document for participants to read and sign, as part of our in-person studies we also explain to participants the same key points from the informed consent in plain language and ask if they have questions.

But user research happens well beyond the confines of a usability lab. And users willingly and explicitly volunteer to do something, even if it isn’t exactly clear what they’re volunteering for. New technologies and methods mean a blurring of ethical lines in the name of better products and commercial gain.

There is a strong demand to better meet customer needs by understanding customer behavior. It seems easy in hindsight to identify the Milgrams or the Zimbardos, but at what point does measuring user behavior become more sinister than sanguine?

Facebook

How does seeing friends post pictures of beaches, parties, and smiling faces all over social media affect us? Does it lead to resentment or even depression? It’s an interesting and important psychological question that’s been debated for years, and Facebook helped academia find out. Earlier this year, Facebook made headlines for conducting a large-scale experiment on a small fraction of its billion users (700,000 of them!).

These users unknowingly had their news feeds manipulated for one week to show more positive or more negative posts in their timelines. The results showed that exposure to more positive posts led users to produce more positive posts themselves; the same was true of negative posts. In other words, emotional sentiment was contagious. Good news didn’t lead people to feel glum; it actually led them to feel better.
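
As a purely illustrative sketch (this is not Facebook’s actual code; the toy classifier, function names, and omission rate below are all invented), the mechanics of such a study amount to filtering a treatment group’s feed by sentiment and then measuring the sentiment of what those users go on to post:

```python
import random

# Hypothetical sketch of a sentiment-manipulation experiment.
# score_sentiment() stands in for whatever text classifier the
# real study used; it returns roughly -1 (negative) to +1 (positive).

POSITIVE = {"great", "happy", "love", "fun", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "gloomy", "terrible"}

def score_sentiment(post):
    words = post.lower().split()
    raw = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, float(raw)))

def filtered_feed(posts, condition, omit_rate=0.5):
    """Probabilistically withhold positive or negative posts for a treatment group."""
    kept = []
    for post in posts:
        s = score_sentiment(post)
        if condition == "reduce_positive" and s > 0 and random.random() < omit_rate:
            continue  # withhold a positive post from this user's feed
        if condition == "reduce_negative" and s < 0 and random.random() < omit_rate:
            continue  # withhold a negative post
        kept.append(post)
    return kept

def mean_sentiment(posts):
    """Outcome measure: average sentiment of what a group later posts."""
    return sum(score_sentiment(p) for p in posts) / max(len(posts), 1)
```

The ethical question, of course, isn’t in code like this; it’s in running it on people who never knowingly opted in.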

But like the Milgram experiments, the results were quickly overshadowed: here was another example of Facebook using our information to, in some sense, manipulate us. Was this ethical? Did Facebook go too far in collecting information to improve its product?

Facebook discloses the latitude it can take in its privacy policy and terms of use, which have themselves been the subject of controversy. But analyses of data reveal that few people read, much less understand, the implications of terms and conditions and privacy policies. So what is Facebook’s ethical obligation?

OKCupid

Have you ever wondered whether online dating websites are more effective at finding matches than, say, random pairings or the serendipity of real-life meetings?

In response to the outrage over Facebook, the dating website OKCupid admitted that, among other things, it had paired up people who were poor matches according to its algorithm. The results of the experiment suggested that telling someone they were a good match was about as important as their actually being a good match. Users were notified that they had been involved in a study and were shown the correct compatibility percentages after it concluded.

This sort of manipulation was likely covered under the terms and conditions users agreed to when using the website. These experiments were done to improve the product and its matching for all of us. But was it unethical for OKCupid to manipulate data and people in this way?

Amazon and Orbitz

Amazon has been a pioneer of many things on the web. In 2000, it was revealed that Amazon was adjusting the prices of some of its products based on past browsing behavior: depending on who you were, the price you paid could differ. More recently, Orbitz came under fire when it revealed that the way it prioritizes hotels is based on data showing Mac users were 40% more likely to book a 4- or 5-star hotel; Mac users would be shown more expensive alternatives when searching. Is it ethical to charge different prices or change your inventory lineup based on who someone is, what they own, or what they’ve done?
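
As a hypothetical sketch of those mechanics (not Amazon’s or Orbitz’s actual logic; the multiplier, boost, and field names are invented for illustration), segment-based pricing and ranking can be just a few lines:

```python
# Hypothetical illustration of segment-based pricing and result ranking.
# The segments, multipliers, and weights are invented for this sketch.

BASE_PRICE = 100.00

def personalized_price(base, has_browsed_before):
    """Adjust price from browsing history -- the practice Amazon was criticized for."""
    return round(base * (1.10 if has_browsed_before else 1.00), 2)

def rank_hotels(hotels, platform):
    """Boost pricier hotels for Mac users, echoing the Orbitz finding."""
    def score(h):
        boost = 0.4 if platform == "mac" and h["stars"] >= 4 else 0.0
        return h["relevance"] + boost
    return sorted(hotels, key=score, reverse=True)

hotels = [
    {"name": "Budget Inn", "stars": 2, "relevance": 0.9},
    {"name": "Grand Plaza", "stars": 5, "relevance": 0.7},
]
print(personalized_price(BASE_PRICE, has_browsed_before=True))  # 110.0
print([h["name"] for h in rank_hotels(hotels, "mac")])          # pricier hotel ranks first
```

Nothing in the code is exotic; the ethical weight comes entirely from which user attributes feed it.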

Mint.com

Recently, the financial-planning website Mint.com invited some users to try a new beta feature that separated business and personal accounts. After a year of collecting data from users who had meticulously entered financial information, Mint turned off the feature without notice; previously entered data and reports were no longer accessible. These actions, too, were permissible under the terms of use. While software users have become accustomed to ubiquitous beta periods, is it ethical for Mint to remove access to information produced from customers’ labor and private financial data?

Should the Experiment Continue?

At this point, it should be clear to every internet and software user that their actions and data are being monitored and used for commercial purposes. As in public places, our expectations of privacy are reduced. Of course, the time spent and the data collected on the Internet dwarf the typical concerns about time spent in public spaces. While this broad topic won’t be settled or resolved here, below are some thoughts to consider when measuring the user experience.