#32 - An Imperfect Match

Thursday, July 31, 2014 - 04:38 PM

Transcript

This week, dating site OkCupid put up a blog post describing experiments it conducted on its users. In one experiment, the site told users who were bad matches for one another that they were actually good matches, and vice versa. Alex and PJ talk to OkCupid president and co-founder Christian Rudder about the ubiquity of online user experimentation and his defense of potentially sending OkCupid's users on bad dates.

Thanks for listening. If you like our show, please subscribe to us on iTunes. Or you can follow PJ and Alex and TLDR on Twitter. 

Comments [33]

I was reminded of the book Predictably Irrational, where the author talks a lot about how people are wired to not deal properly with 'virtual' concepts.

I agree with the guy who wrote that if this were done by an old-school 'brick and mortar' dating service, there would be lawsuits.

It messes with my mind that something so obviously morally wrong in the real world is not just value neutral but considered -good- in the virtual world.

The double-standard is always scary.

Oct. 05 2014 10:32 PM
RL from nyc

The issue being taken with the "experiment" is completely off base.

Saying that A/B testing is somehow unethical is unreasonable. As Christian says, the only way to make the algorithm better is to test it. In this case, you have algorithm "A", the normal recommendation, and algorithm "B", the test algorithm. Here, algorithm B is that all results = 90%. This is no different than if algorithm B did something more "ethical" sounding, like weighting the color of one's shirt higher than usual. The company has NO knowledge of what makes a good algorithm until they make it and test it. When it comes down to it, what's the difference between changing an algorithm blindly vs. lying about what the algorithm says?

I completely agree with Christian that the only way the company can believe in their algorithm with any integrity is to do these kinds of experiments. Returning random numbers might be better than whatever it is they were doing. Testing the algorithm is just one way to ensure they are serving the interests of their customers.
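For readers unfamiliar with the mechanics, here is a minimal sketch of the kind of A/B split described above. It is purely illustrative, written in Python with hypothetical names; nothing here is OkCupid's actual code.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant A or B (hypothetical 50/50 split)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def displayed_match(user_id: str, computed_score: float) -> float:
    """Variant A shows the algorithm's real score; variant B shows a flat 90%,
    mirroring the 'all results = 90%' test condition described in the comment."""
    if assign_variant(user_id) == "A":
        return computed_score
    return 0.90
```

Comparing message and reply rates between the two buckets is what would tell the company whether the displayed score, rather than the underlying compatibility, is driving behavior.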

Sep. 10 2014 06:16 PM
SBL

This was an incredibly frustrating segment, not for any of the opinions or viewpoints expressed, but because it ALMOST got to the core issue, but stopped just inches away.

OKCupid, and many other online social platforms, seem to have this viewpoint that the way in which people act and perform on their websites and social networks is innately "natural", that they are getting a perfect data set of "real people" doing "real things" in "real time". The massive blind spot these corporations, and their critics, miss is that all of these structures, being for-profit as they are, encourage a very particular and nuanced type of behavior depending on which platform is being engaged with. People speak a certain way, engage with others in a certain way, crop their profile pictures in a certain way, hold back on information in a certain way, and share content in a certain way depending on the individual platform's design and various reward systems (you may get more messages on OKCupid or Tinder if you have a dog in your profile picture, so you borrow your friend's pooch specifically for the photo shoot; actually owning the dog in real life will matter little once you're on your date, but you might as well gamify the system to your benefit when fishing for matches).

How is OKCupid, et al., getting a truly "better" data set by not telling people they are watching them, when those people are already performing in a way that results from being encouraged to act according to underlying UX and ad revenue systems? Every environment has its own set of contexts. The problem with Rudder's viewpoint is that he essentially says "people should be more aware and just understand that when they are using these systems, they are being tracked and experimented on, nudged and rewarded for behavior the platforms are drawing out of them; this is just the reality of the internet", then goes on to say "if we make them aware that we are doing certain things and purposefully feeding them certain information, then we won't get as good results". It's this inherent contradiction of espousing that "people should already know we're manipulating them" and then saying "if we actively tell them we are doing so, we won't get the results we are looking for" that is the logical fallacy in Rudder's argument.

Aug. 26 2014 02:39 PM
BrianL from Brooklyn

I'm not sure about the degree to which it matters, but the first time I made an OKCupid profile, I didn't need to answer "thousands" of questions before I got to the one about whether there are circumstances in which a person is "obligated" to have sex. I only answered 300 or so questions, and that question was one of them. About a year later, after I'd deleted my old OKCupid account, I started a new profile and, as I was answering the questions that help determine matches, I received quite a different set of questions than the first time. Does every user answer the same set of questions when he or she sets up a profile? Part of Rudder's argument suggests the questions and the order they're asked in are some kind of control in and of themselves, and it kind of seems they're not.

Aug. 17 2014 12:38 PM
Francisco from Newcastle upon Tyne, UK

Michael Rattner from San Francisco,

You seem to be missing the point. It's not that people have an attachment to the old formula, it's that they deliberately tried formulations that they KNEW would not work. In short, they did a psychology/social science research project without obtaining consent!

Aug. 10 2014 11:21 PM
Michael Rattner from San Francisco

Most of the arguments against what OKC did miss one major point: there never was a reason to believe the original algorithm was the best one, and yet many people are clinging to it as if it were some kind of absolute, and deviation from it is betrayal. Someone came up with an algorithm and then they iterated. This isn't just how the internet or social media works, it's quite literally how everything works, including finance, branding, the layout of stores like Safeway and Target, and even military strategy.

Companies like FB, OKC, Google and even traditional media companies like the New York Times, CBS News, and OTM need to decide what, out of a torrent of data, to show a user without driving them insane. In some cases it's an editor with a list of guidelines and in other cases it's an algorithm.

It seems like we are better off not knowing how the sausage is made.

Aug. 10 2014 07:56 AM

This was really a disappointing interview. The way you two got on high horses about being potentially manipulated by a for-profit company that offers a free service using voluntarily given personal information was embarrassing. It also betrayed naivete about exactly what your show was designed to talk about: how internet culture works. Your inability to offer any insight into an ethical way to do this kind of research revealed your total (and, at times, seemingly willful) misunderstanding of how internet business works, all for the sake of feeling superior to those jerks in the private sector. Please learn from this embarrassment.

Aug. 08 2014 03:54 PM
Francisco from Newcastle upon Tyne, UK

So I take it from Rudder's replies that it's OK (without consent) for:

China to "experiment" with different ways of telling the news if it's in the name of "improving" the service?
Or that it's OK for doctors to randomly give patients wrong diagnoses if it was to rule out psychological factors?
Or for the police to randomly arrest people to see how they cope?
Or that the Tuskegee syphilis experiment was totally fine?

If he says "no" to any of these, what is the difference between that and what his organisation did?

As for the "it's in the T&Cs" argument:

I take it that if he's arrested and has his human rights abused, he'll be fine with the argument that he accepted the other country's ways of doing things just by visiting it?

Aug. 08 2014 03:43 PM
reporter from round lake, il

The question: why did OTM wait so long to write about the issues at OkCupid and others like it? In 2013, BBC Panorama did a pretty good exposé on the industry (see below). What takes place nowadays is a complete remake of society by finance, and this is only one of the symptoms.
https://www.youtube.com/watch?v=W_OI7f3j41k

Aug. 08 2014 02:35 PM
Steve Grundman from Washington DC

I too found this episode odd, because of the implicit premise of this conversation that OkCupid is some kind of public service or owes some obligation to the public trust. OkCupid is a commercial firm, and you guys who may use it have a contractual relationship with them, nothing more. This has nothing whatsoever to do with morals or science, it's about a commercial relationship. If you don't like the service you're getting from OkCupid because of what it did or did not do or how it made use of the information about yourself that you gave it, then complain to Rudder about what a bad business he is running and how you're going to abandon it for a competitor that provides what you would regard as a better service. But, please, there's really no place in this conversation for silly sanctimony over whether what this company did was "moral" or "ethical". Caveat emptor, man. If you want a social community that can be held to the kinds of standards you think are appropriate in this situation, you're going to have to revert to the public square, not social media.

Aug. 07 2014 04:41 PM
Sam from MA

This was an odd episode. The issue at hand was informed consent, but PJ and Alex let Mr. Rudder change the subject such that Mr. Rudder stood on the side of the scientific method. Nobody was debating the scientific method.

Mr. Rudder claims letting people know an experiment is happening changes their behavior. Sure. You can measure this -- tell folks you're running an experiment, then don't actually change anything. You can then demand that any effect you do measure be larger than this disclosure effect. Crudely, this is what people mean when they say they "controlled" for something in an experiment. It is not new science.
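As a rough illustration of that control, with entirely made-up numbers and a hypothetical outcome measure, the disclosure effect can be measured on its own and used as a threshold for any treatment effect:

```python
from statistics import mean

# Hypothetical per-user outcomes (e.g., messages sent per week) in three groups.
baseline         = [4, 5, 3, 6, 5]   # not told anything, algorithm unchanged
told_no_change   = [5, 5, 4, 6, 6]   # told "you're in an experiment", algorithm unchanged
told_with_change = [8, 7, 9, 6, 8]   # told, and the algorithm actually changed

disclosure_effect = mean(told_no_change) - mean(baseline)
treatment_effect  = mean(told_with_change) - mean(baseline)

# The criterion described above: only trust effects larger than the disclosure effect.
print(f"disclosure effect: {disclosure_effect:.2f}")
print(f"treatment effect:  {treatment_effect:.2f}")
print("meaningful:", treatment_effect > disclosure_effect)
```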

Mr. Rudder may argue that this is too great a sacrifice in accuracy. But this would be disingenuous. He already admits he doesn't know many details about his users. So obviously there is a noise level in his data he cannot control, and I would bet it is larger than this informed/not-informed effect. In fact, I'm sure of it. If Mr. Rudder had actually run this experiment ("how much does informing someone change their behavior?") he would have mentioned the result in this discussion. Also, Mr. Rudder is well known for writing blog posts that reveal ways to game OKCupid -- by changing my profile picture, by looking for matches to specific oddball questions, etc. "Corrupting its data set" is what made OKCupid famous. Why wasn't that too great a sacrifice? Because it was fun?

I would also note that it is odd that Mr. Rudder claims lab experiments are "unnatural". What is natural about answering long questionnaires? Psychologists and social scientists and advertisers have been doing that for a long time, at least as long as lab experiments, but that doesn't make it "natural". Mr. Rudder has a business incentive to make it seem natural for a larger user base to freely provide a wealth of intimate and trivial personal information for a chance at a prize (a relationship), but it is a relatively recent phenomenon. (Far more recent than lab testing.)

OKCupid is not on the side of the scientific method, it is on the side of making money. If it refined its algorithm so every match was perfect it would go out of business -- people would only need to use it once. OKCupid's goal is to amass customers, so its incentive is to make an algorithm good enough that you don't blame OKCupid for a bad match, but bad enough that any relationship doesn't last very long. If all your first dates are bad you would give up on OKCupid. But if every third date you ended up in a six-to-twelve-month relationship you would blame yourself, not OKCupid. In fact, I would argue Mr. Rudder was also being disingenuous about what the purpose of the A/B experiment was. The purpose of any experiment must be to determine what keeps people coming back to the site. If their business unit discovered that sign-ups increased when the algorithm was changed, OKCupid would keep that change, regardless of whether it made better matches.

Aug. 06 2014 10:05 PM
Sam from MA

There's nothing wrong with this business model per se. But once you recognize we're talking about a business model, not the scientific method, then you can ask whether the business model is acceptable. This is a debate worth having. For example, in the US we find child labor unacceptable, but Bolivia just legalized it. The US used to trust banks not to take on too much risky debt, but now we demand certain capital ratios.

Let's put it in game-theoretic terms. Dating sites know their algorithms are imperfect. They have an incentive to distribute that imperfection over a large enough user base so that no one leaves the site. I, as a dating site user, also know the algorithm is imperfect. But my incentive is to try to distribute that imperfection onto other users as much as possible. (This model also applies to doctors performing tests: the doctor has an incentive to distribute the risk of a false result among all her patients equally, whereas I as a patient want to maximize the accuracy of my result, even at the cost of other patients.)

The problem is that the information in this case is asymmetric. The dating sites know everything about their algorithm and I know nothing. This is great for the dating site and terrible for me. I don't see anything wrong in discussing ways to change that balance of information. Again, Mr. Rudder doesn't address this issue -- he just says "everyone does it, so suck it up". (I paraphrase.) That's not a defense. At the end of the interview he says "Well, at some hypothetical point in the future we'll all be savvier consumers". But how exactly does he expect that to happen? Osmosis? We become savvier because we pry the relevant information out of the hands of interested parties and make it available to critical parties. Information like studies on the correlation between cancer and smoking, or the level of remuneration certain doctors receive to endorse products. (Rule of thumb: people don't become savvier consumers by not asking questions of the producers.)

So let OKCupid (and all the other dating sites, and all the other social media sites) conduct its experiments. But how about this: every time it conducts an experiment, it has to acknowledge that fact and provide a change metric for it? E.g., "Updated algorithm, first 100 user matches changed by 1%". (This is just a layman's version of giving me version numbers and regression test results every time a new algorithm goes up, something I would demand of any algorithmic product I actually pay for.) How about letting me, up front, opt out of being part of any experiment? If too many people opt out and OKCupid fails, so be it. Another company will pop up in its place and will try something new. "A/B testing" is just trial and error -- it can be conducted within OKCupid on its in-house algorithm or it can be conducted across multiple firms with competing algorithms. The latter version is preferable for consumers. We usually call it the free market.
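A layman's version of that change metric might look something like the following sketch, with hypothetical data and function names rather than anything OkCupid actually publishes:

```python
def average_match_change(old_scores: dict, new_scores: dict) -> float:
    """Average absolute change in displayed match score between two algorithm
    versions, over users present in both score tables (hypothetical helper)."""
    shared = old_scores.keys() & new_scores.keys()
    if not shared:
        return 0.0
    return sum(abs(new_scores[u] - old_scores[u]) for u in shared) / len(shared)

# Made-up example: "Updated algorithm, these users' matches changed by about 1%."
old = {"user1": 0.72, "user2": 0.55, "user3": 0.90}
new = {"user1": 0.73, "user2": 0.54, "user3": 0.91}
print(f"average change: {average_match_change(old, new):.1%}")
```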

Aug. 06 2014 10:04 PM
Sam from MA

To conclude, I do not want to stop any firm from testing their algorithms. I simply want to have a discussion about the business model under which this testing takes place. Mr. Rudder wants to push me to accept filling out questionnaires as "natural", and he would be happy if I started providing even more personal information to him (so he can improve his algorithm = maximize his profit). The only reason he doesn't demand more information is that he knows it would put me off his business -- because from a scientific perspective, surely more information would make his algorithm better (he admits as much). I would like to push back and suggest that greater transparency might lead to greater competition, which might lead to improvements in my algorithm = maximizing my ability to find a mate for life. More generally, I would like to better understand just how much information these algorithms really need. If all the algorithms basically get me 80% of the way to a perfect mate with a small amount of information but then require gobs of personal details to get to 83%, then maybe I can decide that trade is unacceptable. (For those who think this is unrealistic, look up the Pareto principle.)

I hope it is clear that this argument extends to far more than dating sites. But it seemed best to stick to the example at hand, even if I am happily married. (We met the old fashioned way: on a BBS.)

Aug. 06 2014 10:02 PM
Bob from Urbana, Illinois

There are many disturbing elements in this so-called "experiment", but as a behavioral scientist, let me just say that this is not science. It is not at all clear what the creators of this website are trying to learn, but they are not conducting or contributing to science.

Anyone who has taken an undergraduate course in experimental methods would be happy to explain how to conduct an ethical experiment involving deception, and any Institutional Review Board at any university could explain the criteria for ethical ways to conduct these studies. Not everybody does this, and thank goodness they don't.

No one should conduct business with any organization that is so blatantly deceptive and clueless about even reasonable ethical behavior.

Aug. 06 2014 02:14 PM
bf

Pretty disappointed in Alex and PJ's blatant bias in this episode. Toward the end, I felt like I was listening to conservative talk radio; they were ganging up on Rudder, in that patronizing, exasperated way, despite having no realistic alternative (to Rudder's testing proposal), nor any concrete ethical stance on the matter. Apparently, it just doesn't feel good to them. When I heard that interruption "from the control booth," I had to take a break. Better research, questions, and host composure next time, please!

Aug. 06 2014 12:38 PM
Matt Mire

Rudder has gone down the slippery slope. Two major differences between ordinary A/B testing and his fun experiment:

First, the difference between manipulation of OkCupid content and manipulation of how users believe their content is being displayed to others.

Second, the difference between changes made *in good faith* to do what is advertised (matching) and an experiment done in bad faith to be cute and get interviews.

He is disingenuous when saying "maybe random is better."

If they were actually interested in evaluation of the algorithm, they could make a variety of moderate changes or randomization (e.g., ignoring food preference in matching or giving 60% match instead of 70%). Then he would have his data without fundamentally misrepresenting users and misrepresenting the service provided.

Aug. 06 2014 01:48 AM
Alan from Illinois

If this were a bricks-and-mortar dating service, I suspect people would be more uniformly upset over what Rudder and company are doing at OkCupid. The main hole I see in the argument that what OkCupid is doing is acceptable is when it reaches the "everyone does it" stage, followed by "it's a Web site offering a service." Taking that logic, then, it's okay if you order book A on Amazon.com and Amazon sends you the wrong book--as long as it's done in the name of "improving" the user experience? I don't buy it. When companies make changes to their sites that affect the user experience in significant ways (and I would think changing how you're matched with potential partners is a significant change to the OkCupid user experience), it isn't unreasonable to expect the companies to give you a heads-up that things are changing or have changed. Perhaps if Christian Rudder hadn't been so blase about his company "experimenting" (the company's word choice, remember) on its users, I could sympathize with the idea that it's information overload to let users know of each little tweak to the system. As it stands, however, the whole thing just flat-out strikes me as unethical, and I'm glad I've never used OkCupid (and most likely never will).

Aug. 05 2014 02:01 PM
Dougal from Pittsburgh

@listener: "I am not surprised at the arrogance, lack of ethics and creation of ridiculous arguments to try to make a case. I am surprised to hear the president of a tech company say that they don't know if their algorithm works better than random chance."

The *only way* to know if something is better than random chance is to do an experiment. Which is exactly what they did, and now they *do* know that it's better than random chance. Saying that they know it's better than random chance without having done any experimentation is just lying, and saying it based on laboratory experiments is pretty dubious, given how different dating is from whatever lab analog they would be able to come up with.
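As a toy version of that comparison, with made-up pairs, made-up outcomes, and a made-up success measure, the check is simply whether the algorithm's suggested pairs outperform randomly drawn pairs:

```python
def success_rate(pairs, outcomes):
    """Fraction of suggested pairs that ended up exchanging messages (hypothetical measure)."""
    return sum(outcomes.get(pair, 0) for pair in pairs) / len(pairs)

# Made-up data: pairs suggested by the matching algorithm vs. pairs drawn at random,
# and whether each pair exchanged messages (1) or not (0).
algorithm_pairs = [("a", "b"), ("c", "d"), ("e", "f")]
random_pairs    = [("a", "f"), ("c", "b"), ("e", "d")]
outcomes = {("a", "b"): 1, ("c", "d"): 1, ("e", "f"): 0,
            ("a", "f"): 0, ("c", "b"): 1, ("e", "d"): 0}

print("algorithm match success rate:", success_rate(algorithm_pairs, outcomes))
print("random match success rate:   ", success_rate(random_pairs, outcomes))
```

Without running something like this on real behavior, any claim that the algorithm beats chance is, as the comment says, just an assertion.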

I totally agree with Mr. Rudder that PJ's proposal of "hey, we might put you in an experiment but we won't tell you what, that cool?" doesn't help anyone, and would add that it's likely to make the results less informative by biasing the sample towards people willing to participate in some nebulous experiment.

Also, the "is anyone ever obligated to have sex" argument is a complete strawman. Most users probably haven't answered the question, and even among those that have, I'd imagine that if you marked that question as mandatory or whatever it'd still show up in the "questions I care about" section of their profile, which if you're going on dates without looking at, you really don't care about the question that much anyway.

Aug. 05 2014 11:03 AM
listener

I am not surprised at the arrogance, lack of ethics and creation of ridiculous arguments to try to make a case. I am surprised to hear the president of a tech company say that they don't know if their algorithm works better than random chance.

I hope those using their services get the president of OkCupid's message out far and wide: they can't be sure that what you pay them for works any better switched on than switched off.

Aug. 04 2014 08:31 PM
Arun Venkataraman from Campbell

To start off, I've never used OKCupid and met my wife the old fashioned way (set up by people we know). Just finished listening to Mr Rudder, and my gut reaction is he came off as "I know better than you" (this is probably a sought after trait - young, brash and overconfident). The answer to every question is not: "Do you have a better idea?". I didn't get $100m to come up with a better idea :)

Having said all this, the reason FB's experiments creep me out is because they did this on my connections: people I know and am familiar with. It's like Google deciding to hide some of your e-mail because it's not useful to them from an advertising point of view. Very Big Brother-ish.

For OK Cupid, I don't feel as creeped out because they connect someone to strangers. So what if they screwed around with the formula? At the end of the day, you meet up with basically strangers. People know this and expect this. This is someone you met on the Internet, for crying out loud.

Aug. 04 2014 02:08 PM
BC

I agree with OKCupid's stance. I think people are having a knee-jerk reaction to the word "experiment". I think the crucial distinction here is whether or not users are potentially harmed by the experiment. I don't see much in the way of potential harm to the user in either OKCupid's case or Facebook's case. If, on the other hand, restaurants were giving us food poisoning (as TLDR seemed to think was somehow analogous) or pharmaceutical companies were giving us untested drugs, those could have very bad, potentially life-threatening effects. Yet the word "experiment" covers everything from simple marketing experiments to life-threatening experiments. There's a huge difference between the two, even though we use one word, "experiment", to describe both. People seem to be reacting as if they're thinking "well, if I allow OKCupid to run experiments without my express consent, then I have to allow all experiments to be done without my consent." Charged language like "they're treating me like a guinea pig" isn't helping. (And I can't help but think of the retort: "Treating you like a guinea pig? Do you mean they euthanized you after the experiment was done?")

I'd also point out that AB testing is done all the time by companies - and I'm not just talking about internet companies, I'm talking about companies in general. They run one version of an ad in one area and then a different ad in a different market and they see how the results vary. For example, a while back, Wikipedia ran different "please donate" ads on the Wikipedia site to see what ad performed best at getting people to donate money. The most successful ad was vastly better at getting donations than other versions they ran. That's AB testing and that's an experiment on their users. Now, that was another example of an internet company, but every company (including small companies) does that for marketing purposes.

I also think that, as human beings, we are "experimenting" on the people around us. We don't do it quite so deliberately, but we might notice that when we approach people one way versus another way, we get better results. (For example, we might get better at getting people to take our concerns into account if we approach them with soft language instead of attacking them with anger. Or maybe you discover that your kids or coworkers or employees behave better when you incentivize them instead of punishing them.) We are all "running experiments" on each other and modifying our behavior accordingly.

Aug. 04 2014 01:42 PM
JMS from PDX

Gosh, Mr. Rudder has the exact attitude that makes me distrust all the young new techies. (yes, I work in tech as well).

I remember watching a young marketing punk from Car2Go in a neighborhood meeting where we were trying to address their cute little cars parking all over our parking-permit-zoned street (we had a perpetual problem with our neighborhood being used as a park-and-ride). Their cars had immunity from parking laws granted to them by the city and could park without regard to the 2-hour visitor limit. I've even seen the cars in clearly marked "no parking" zones. The guy couldn't have been less interested, and was clearly unwilling to be there. He projected the attitude that we were all just pathetic morons, and that this issue was so beneath him.

Verbally Mr Rudder projected the same attitude. Yeah, we're experimenting on you, but it's for your own good. If you can't see that, you're a moron.

Aug. 04 2014 12:14 PM

Can we agree that it is possible for OkCupid to do unethical things to its users? Yes. And is it possible to say "it was just an experiment"? Yes. And since companies have a very well documented history of pervasively doing unethical things before they become illegal, why the hell should we be paying attention to your stance that OkCupid can do no wrong?

Aug. 03 2014 12:05 PM

I found it hilarious that the OkCupid guy used an analogy or metaphor to justify his company, then, like 10 seconds later, said it wasn't OK to use the same device to explain why what his company did was wrong. Also, metaphors are baked into our language and brains; saying we can't use them is ridiculous: http://www.slate.com/articles/podcasts/lexicon_valley/2012/12/lexicon_valley_fiscal_cliff_is_a_flawed_but_brilliant_metaphor.html

Aug. 03 2014 12:00 PM

Not exactly on topic, but it's not a good user experience to comment, not see it show up, and have no idea why. I'd hope it's being held for moderation, but I have no idea. Tell people what's going on, or else it is really frustrating, and I probably won't comment here again. I just created an account to see if that makes any difference. Again, you should tell people these things.

Aug. 03 2014 11:51 AM
Anon tldr faithful

"Everyone does it"
I'd like to see this site a/b test whether showing you a warning affects how you travel.
http://travel.state.gov/content/passports/english/alertswarnings.html
And the OkCupid guy says they informed people and they just messaged each other. Wait, they did not say that in their blog post, and how do they know people just messaged each other? Did they read people's private messages? I'm sure some of them exchanged phone numbers, and OkCupid doesn't have any access to that. So really, they could have 100% matched someone who answered yes to "Would you like to try acting out a rape fantasy" and yes to "Is sex sometimes required", and then that person goes on a date and ends up in a dangerous situation partially because of that 100% match. I was amazed you guys did not push back more.

I get the impression that part of the reason the OkCupid guy was so blase is because behind the scenes they are doing unethical things for all sorts of reasons: to get their own dates, for revenge, etc. If we have NSA spies, who had to take oaths and pass extensive background checks, committing 3,000 occasions of spying on people for personal interests, http://blogs.wsj.com/washwire/2013/08/23/nsa-officers-sometimes-spy-on-love-interests/, then you can pretty safely assume things go on at OkCupid. And this guy's attitude shows that they probably wouldn't care except for the publicity.

With pretty much every technology comes new opportunities to do unethical things, and those things are then done and widespread until laws are passed and society develops new norms. The internet is no different. In the late 1890s, society advanced enough for monopolies to become the new thing that every company was doing, being affected by, or trying to negotiate around, and we outlawed that. The way this guy talked, it sounded like he would be fine with OkCupid selling the ability to spy on your romantic interests, because "people can use a different site". He probably passes around private conversations and photos and laughs about the idiots trusting his company.

Aug. 03 2014 11:42 AM
Nathanael Bassett

This episode is a great contrast to how academics have the institutional review board (IRB) to approve their experiments, and why that makes their research ethical and successful, and not totally freaking dubious and grotesque like this.

Aug. 03 2014 11:35 AM
factor from Oakland, CA

Listen closely to what paint had to say; it really makes a difference how they assign potential dates. Some experimentation with an algorithm is OK, but it makes a lot of sense to be completely transparent about how the % match is calculated, because that has a big influence on whether a woman feels she can trust the system to assign her to safe dates.

Aug. 02 2014 07:37 PM
Nate from Philadelphia

Around the 11 minute mark, Alex proposes a sort of vague version of informed consent where users agree to be in an experiment without knowing any of the details. I know not everyone looks at OKCupid this way, but for me at least that agreement had already been made.

I was a fan of the okcupid blog back when it was a more regular thing. As a data nerd myself, I've always thought of okcupid as an ongoing experiment and it seemed pretty clear that they were tinkering with the way matches are calculated. It doesn't bug me that they intentionally made bad matches look good to see what would happen.

It's interesting that they didn't mention the Love is Blind part of the post. Even though the user (sort of) knows what's up in that experiment, it seems like it would cause as much or more emotional distress -- you get into a good conversation only to have it go completely silent once the person sees you? Ouch.

Aug. 02 2014 05:47 PM
paint

As a female user of OkCupid who deleted her account, I have to say: THANK YOU for asking the real question. THANK YOU for calling out his BS.

I personally geared all my responses to filter out those who might be entitled sexist fucks. I look at my match % not as a formula for love, but as a formula for safety.

There's a bad date over stale coffee, and then there are dates where you get raped. I have a feeling that the people saying this isn't a big deal can't see past the stale coffee.

I was sexually assaulted by OkCupid matches. That's not OkCupid's fault, but if they are going to give me intentionally misleading information about my matches, I am no longer safe using their website, and I am, understandably, quite upset about the whole thing.

Aug. 02 2014 12:11 PM

I feel like you guys come across as completely over-reacting in this podcast. The okcupid guy's answers are very reasonable.

I think shrill accusations about unethical experiments are ridiculous. This is an online dating/social media website, not a laboratory where people are being told to shock other people to death.

Also, the objection about 'informed consent' is nonsensical. How does being told that they are part of an experiment but not what experiment help anyone?? What 'ethics' does that satisfy?

I'm GLAD they did the experiment where they tampered with match percentages. It gives me some hope that okcupid is interested in actually refining their matchmaking algorithms rather than just using them as marketing flim-flam, which I think other similar sites do. I think it is far less ethical to advertise an untested matchmaking algorithm with baseless claims about its effectiveness than to, you know, test it!

I would like to see the experiment go further. What if OkCupid actually removed percentages from their site for a day? Would that in any way affect how people found each other and struck up conversations?

The 'Facebook experiment' controversy has felt very much like a tempest in a teacup to me. This one, even more so.

Aug. 02 2014 11:26 AM
Cobarde Anónimo from Austin

Full disclosure: I'm the happiest possible OkCupid customer since they introduced me to my wife. My bias acknowledged, I'm completely on their side on this.

First: Yes, every pixel of every website is an experiment, whether it's a controlled and thoughtful one or an unacknowledged and uncontrolled crapshoot. As Rudder points out, it's the sites that know when they're experimenting which are being more responsible. Despite Rudder's repeated challenges, you guys never offered an alternative to the scientific method.

Second: The context and consequences of the experiment make all the difference. OkCupid is a dating site, not a dispenser of chemotherapy. The results it delivers are inevitably riddled with false positives and false negatives. Every user should behave accordingly, viewing potential matches with a proper mix of skepticism and frivolity. The full responsibility for any bad dates or broken hearts lies with people who entrust their love lives to silly algorithms. OkCupid adding a little noise into the mix doesn't change that a bit.

Aug. 01 2014 03:13 PM
Alexander Trefz from Berlin

I think PJ's 2 cents make absolutely no sense at all. There is no difference between changing the algorithm a little bit and changing the algorithm a whole lot. If every company with a SaaS (Software as a Service) product asked for permission or notified people every time they changed that product, all users would go mad. You have to look at this issue more abstractly: OkCupid changed their product (whether that change applies only to some users, or is only temporary, is completely irrelevant) and they have the right to do that freely and without notification. That really is the end of the story right there.

Notifying all affected users (who are not always actually predictable) about every product change is not only absolutely unfeasible and unreasonable, it also does not help the users in the slightest. The only thing it would do is annoy them with constant spam (some websites change multiple times _daily_). This is an A/B test like any other; they happen anywhere, anytime, on almost any website, and on almost any other product in this world. There is absolutely nothing wrong with it. In fact, it is a very good thing that they ran this particular test, because it actually is simple quality assurance (which is not the case with most A/B tests, but is with this one).

Aug. 01 2014 07:08 AM


TLDR is a short podcast and blog about the internet by PJ Vogt and Alex Goldman. You can subscribe to our podcast here. You can follow our blog here. We’re also on Twitter, and we play Team Fortress 2 more or less constantly, so find us there if you like to communicate via computer games from six years ago.
