from the seems-moderately-hypocritical dept
This post was inspired by a Benedict Evans tweet.
One of the many phrases that has become popular to an annoying degree over the last few years is the concept of “dark patterns.” These are, we’re told, sneaky, ethically dubious ways in which companies — usually “big tech” — trick users into doing what the companies want. I’m not saying that companies don’t do sketchy stuff to try to make money. Lots of companies do. Indeed, we’ve spent decades calling out some pretty sketchy behavior by companies trying to get your money. But the phrase “dark patterns” carries such a sinister connotation, and it is now used in cases that, um, don’t even seem that bad (and, yes, I’ve used the term myself, once, but that was in describing specific behavior that was pretty clearly fraudulent).
Of course, the NY Times is among the media orgs that have really popularized the phrase. It wrote one of the earliest popular articles about the concept, and has called it out multiple times when talking about tech companies. That last one is particularly notable because it was written by a member of the NY Times editorial board, Greg Bensinger. He really, really doesn’t like “dark patterns” that manipulate users into… say, “signing up for things.”
These are examples of “dark patterns,” the techniques that companies use online to get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data. They come in countless variations: giant blinking sign-up buttons, hidden unsubscribe links, red X’s that actually open new pages, countdown timers and pre-checked options for marketing spam. Think of them as the digital equivalent of trying to cancel a gym membership.
He’s pretty sure we need legislation to take down dark patterns.
Companies can’t be expected to reform themselves; they use dark patterns because they work. And while no laws will be able to anticipate or prevent every type of dark pattern, lawmakers can begin to chip away at the imbalance between consumers and corporations by cracking down on these clearly deceptive practices.
Companies can’t be expected to reform themselves.
Anyway. That’s called foreshadowing.
Now, let’s talk about a new piece written by a data scientist at the NY Times, about “how the NY Times uses machine learning to make its paywall smarter.”
The company’s paywall strategy revolves around the concept of the subscription funnel (Figure 1). At the top of the funnel are unregistered users who do not yet have an account with The Times. Once they hit the meter limit for their unregistered status, they are shown a registration wall that blocks access and asks them to make an account with us, or to log in if they already have an account. Doing this gives them access to more free content and, since their activity is now linked to their registration ID, it allows us to better understand their current appetite for Times content. This user information is valuable for any machine learning application and powers the Dynamic Meter as well. Once registered users hit their meter limit, they are served a paywall with a subscription offer. It is this moment that the Dynamic Meter model controls. The model learns from the first-party engagement data of registered users and determines the appropriate meter limit in order to optimize for one or more business K.P.I.s (Key Performance Indicators).
Cool. (And, yes, it is legitimately cool to see how the company handles the meter in a more dynamic way; more companies should take a similarly dynamic approach.)
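Just to make the mechanics concrete, here is a minimal sketch of what a “dynamic meter” could look like in the abstract: a simple epsilon-greedy bandit that picks a meter limit per coarse engagement bucket and updates its conversion-rate estimates from observed outcomes. To be clear, this is not the Times’ actual model; the candidate limits, the articles_read_last_30d feature, the bucket thresholds, and the simulated conversion behavior are all invented for illustration.

    import random
    from collections import defaultdict

    # Candidate meter limits the system could assign to a registered user.
    # These values are made up; the Times does not publish its actual limits.
    METER_LIMITS = [1, 3, 5, 10]

    class DynamicMeter:
        """Epsilon-greedy bandit choosing a meter limit per engagement bucket.

        Reward is 1 if the user subscribed after hitting the paywall, else 0,
        so each (bucket, limit) arm's value approximates an observed conversion
        rate, standing in here for a single business KPI.
        """

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            self.counts = defaultdict(int)    # (bucket, limit) -> times served
            self.values = defaultdict(float)  # (bucket, limit) -> avg reward

        @staticmethod
        def bucket(articles_read_last_30d):
            # Coarse engagement bucket derived from first-party registration data.
            if articles_read_last_30d >= 20:
                return "high"
            if articles_read_last_30d >= 5:
                return "medium"
            return "low"

        def choose_limit(self, articles_read_last_30d):
            # Pick the meter limit to serve this registered user.
            b = self.bucket(articles_read_last_30d)
            if random.random() < self.epsilon:
                return b, random.choice(METER_LIMITS)   # explore
            best = max(METER_LIMITS, key=lambda lim: self.values[(b, lim)])
            return b, best                              # exploit

        def record_outcome(self, bucket, limit, subscribed):
            # Update the running conversion-rate estimate for this arm.
            arm = (bucket, limit)
            self.counts[arm] += 1
            reward = 1.0 if subscribed else 0.0
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    if __name__ == "__main__":
        meter = DynamicMeter()
        # Simulated traffic with made-up conversion behavior, just to exercise the loop.
        for _ in range(20_000):
            reads = random.randint(0, 40)
            bucket, limit = meter.choose_limit(reads)
            if reads >= 20:                    # heavy readers convert at tight limits
                p = 0.25 if limit <= 3 else 0.15
            elif reads >= 5:                   # casual readers need more free articles
                p = 0.05 + 0.01 * limit
            else:
                p = 0.002 * limit
            meter.record_outcome(bucket, limit, random.random() < p)
        for b in ("low", "medium", "high"):
            best = max(METER_LIMITS, key=lambda lim: meter.values[(b, lim)])
            print(b, "-> best meter limit so far:", best)

The real system presumably uses far richer engagement features and optimizes more than one KPI, but the basic loop is the same: pick a meter limit, observe what the registered user does, and update.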
But, um, also, isn’t that… a dark pattern? I mean, it’s not clear to the end user. It’s designed to — and I’ll quote here — “get consumers to sign up for things, keep subscriptions they might otherwise cancel or turn over more personal data.”
The writeup by the data scientist is pretty clear about what they’re trying to do here:
Thus, the model must take an action that will affect a user’s behavior and influence the outcome, such as their subscription propensity and engagement with Times content.
I mean, this is all kind of interesting, but… it sure sounds like what the NY Times editorial side is complaining about as a dark pattern.
And that’s where some of the problem with the term comes into play. There’s a spectrum of behavior — some of which is just smart business and tech practices, and some of which is more nefarious. But using the term “dark patterns” to broadly describe anything we don’t understand, or can’t see, that is designed to get you to do something… becomes problematic pretty quickly. I don’t have a problem with the way the NY Times runs its paywall. It’s trying to make it work in a reasonable way that converts more users into subscribers.
Clearly, this is not as nefarious as services that make it impossible to, say, cancel your subscription without first having to talk to a human (oh wait, the NY Times does that too?). Um, okay, it’s not as nefarious as literally tricking people into signing up for recurring payments rather than a one-time thing. That’s even worse.
But it is still a spectrum. And when we refer to any optimization effort as a “dark pattern,” the genuinely problematic “dark patterns” lose their meaning. It becomes way too easy to smear perfectly reasonable efforts as somehow nefarious. It’s fine to talk about sketchy things that companies do, but we should be specific about what they are and why they’re sketchy, rather than just assuming anything that is designed to drive conversions is inherently problematic.
The fact that the NY Times’ tech/business side uses exactly what the editorial side condemns, in order to pay the editorial side’s salaries, should at least inform the framing of some of this discussion.
Filed Under: conversion rates, dark patterns, optimization, paywalls
Companies: ny times