from the last-minute-homework dept
During the 2020 campaign, there were a few times when candidate Joe Biden insisted he wanted to get rid of Section 230 entirely, though he made it clear he had no idea what Section 230 actually did. When I wrote articles highlighting all of this, I had some Biden supporters (even folks who worked on his campaign) reach out to me to say not to worry about it, that Biden wasn’t fully briefed on 230, and that if he became President, more knowledgeable people would be tasked with the issue, and the 230 rhetoric wouldn’t be a problem. I didn’t believe it at the time, and it turns out I was correct.
The White House has released a truly bizarre set of “Principles for Enhancing Competition and Tech Platform Accountability” that are so poorly thought out that I’m confused as to how anyone in the White House thought these were good ideas. First of all, they’re mostly silly, simplistic platitudes that don’t take into account the complexities of each of these items. They’re perhaps red meat for the “big tech bad!” crowd, but not even in a coherent way.
Some of them are simply incoherent. Some buy into disinformation (which is depressingly ironic, given that the White House argues part of this effort is about fighting disinformation).
It’s a really weird list in that it just… isn’t that sophisticated or well thought out at all. It looks kinda like no one seriously worked on this issue or spoke to many experts about it, and instead scrambled something together at the last minute to make sure there was something to roll out before the midterms as a “we’re taking on big tech” platform.
Let’s go through them, though out of order, starting with the most egregious nonsense: removing Section 230.
Remove special legal protections for large tech platforms. Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230.
I mean… how the hell can the White House say this?
Section 230 is NOT SPECIAL PROTECTIONS FOR LARGE PROVIDERS. That’s a lie, one mostly made up by disgruntled Republicans who want websites to be forced, via must-carry provisions, to host their propaganda and disinformation.
Section 230 is not “special legal protections.” It’s a codification of common law liability principles. And it’s not “for large tech platforms.” It does far more to protect users’ speech and smaller companies than it does to protect large companies.
On top of that, the line that it “broadly shields them from liability even when they host or disseminate illegal, violent conduct or materials” is oddly worded and basically nonsense. First of all, you’d think the White House, of all places, would be aware that Section 230 includes subsection (e)(1), which makes clear that it has no effect on federal criminal law. So, um, if it’s “illegal,” Section 230 does not help. Second, Section 230 protects companies from being held liable for someone else’s speech. I’m not sure what “violent conduct or materials” has to do with any of that.
More to the point: if there is “illegal, violent conduct or materials,” then, um, isn’t it law enforcement’s job to go after those actually breaking the law? In the end, all 230 is really doing is saying “don’t blame the tool, blame the person actually violating the law.”
Also, as we’ll get to, much of this document talks about enabling more competition. Removing Section 230 does the exact opposite. The big tech companies literally have buildings full of lawyers and massive content moderation teams. They’re better positioned than others to handle the burden that removing Section 230 would create.
Startups? Mid-sized companies? Removing Section 230 would kill them.
It’s such a nonsensical position for the White House to take. There certainly must be some people in the White House who understand Section 230. Why weren’t they invited to weigh in on this?
Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services. Children, adolescents, and teens are especially vulnerable to harm. Platforms and other interactive digital service providers should be required to prioritize the safety and wellbeing of young people above profit and revenue in their product design, including by restricting excessive data collection and targeted advertising to young people.
Ah, yes, the ever-present “but think of the children” issue. Again, this is vague and unclear, but it sounds an awful lot like the “age appropriate design code” that California just passed, which has all sorts of problems (including constitutional ones). As we recently explained, back in 1996 Congress tried to pass sweeping “but think of the kids online” legislation, and the Supreme Court rightly threw it in the trash. We don’t need to go through that again, no matter how politically popular it seems to whichever political consultants insisted it be included.
Either way, the devil’s in the details here, and this vague statement has none. The fact that the language sounds so similar to the California Kids Code means it’s likely exactly that. And that’s a problem — and as we’ve seen with the muted opposition to the California bill, it’s one that is politically popular because no one wants to be branded as being “against protecting the kids,” even if these bills don’t do anything to actually protect kids, but do help rich people with savior complexes think they’re helping.
Increase transparency about platform’s algorithms and content moderation decisions. Despite their central role in American life, tech platforms are notoriously opaque. Their decisions about what content to display to a given user and when and how to remove content from their sites affect Americans’ lives and American society in profound ways. However, platforms are failing to provide sufficient transparency to allow the public and researchers to understand how and why such decisions are made, their potential effects on users, and the very real dangers these decisions may pose.
Here’s another one that sounds good as a platitude, but the reality is much different. As we’ve said over and over again, transparency is good, but mandated transparency creates all sorts of problems. Again, we can look to the terrible, terrible problems with California’s transparency bill: demanding this kind of transparency often serves only to help bad actors learn how to game your systems.
People pushing for these kinds of transparency mandates have clearly never actually run a website that hosts user content. It’s a constant struggle, and a dynamic one, where bad actors are always, always, always trying to game your system. And if you’re forced to publish clear rules on how you moderate, it does two terribly dangerous things: it gives those bad actors a road map for how to game your system, and it limits your ability to adapt on the fly to deal with the changing nature of the attacks.
And let’s not even get into how this same policy is being pushed by Republicans as a tool to block websites from moderating disinformation. Already, Texas and Florida have tried to pass content moderation bills that have (so far…) been found to be unconstitutional. And parts of those bills were pitched using this exact same language: that they were really about “transparency” regarding moderation, and that the states just wanted the companies to be “less opaque” about how they made their decisions. Except that those laws also came with the stick of liability.
It’s so weird to see GOP nonsense talking points that have already been deemed unconstitutional showing up in an official White House policy document coming out of the Biden administration.
Stop discriminatory algorithmic decision-making. We need strong protections to ensure algorithms do not discriminate against protected groups, such as by failing to share key opportunities equally, by discriminatorily exposing vulnerable communities to risky products, or through persistent surveillance.
Again, this is one of those things that sounds good, but tends to be problematic in practice. Last year I wrote about a bill that attempted to do this, where I noted that it seemed entirely mistargeted, and (see a pattern here?) seemed based on a near-total lack of understanding of how things work. The issue, again, is that the people most vocally claiming “algorithmic discrimination” are actually… disinformation peddlers, insisting that they’re being discriminated against not for peddling disinformation, but because they’re Christian white male conservatives.
So, uh, yeah, be careful what you wish for.
There are, of course, legitimate concerns about algorithms that use historically biased data to perpetuate bias against marginalized communities, but there are ways to deal with that without broadly outlawing “discrimination” via algorithms. Because that is the kind of thing that will be weaponized.
Also, as we noted in that post, it’s often quite difficult to separate out “discriminatory algorithmic decision-making” from more traditional discriminatory human decision-making, and there’s a real risk here that a bill of this nature starts holding tech companies responsible for bigotry by humans making decisions, rather than actual problems in the algorithm.
Provide robust federal protections for Americans’ privacy. There should be clear limits on the ability to collect, use, transfer, and maintain our personal data, including limits on targeted advertising. These limits should put the burden on platforms to minimize how much information they collect, rather than burdening Americans with reading fine print. We especially need strong protections for particularly sensitive data such as geolocation and health information, including information related to reproductive health. We are encouraged to see bipartisan interest in Congress in passing legislation to protect privacy.
So, yeah, sure. We need a federal privacy law. But the details here matter quite a lot, and the details in this vague paragraph suggest that whoever put this together… hasn’t actually thought through any of them or the associated tradeoffs. For example, the final item we’ll go over in these principles is about competition, but how do privacy laws and competition interact? The fact is that many of the proposed privacy bills would only help the largest companies, since they’ll be able to put in place the necessary compliance regimes, while smaller competitors will be overwhelmed by the compliance burden.
Also, it’s slightly weird to limit targeted advertising. I get that people hate advertising, but… I’d also kinda rather have advertising be better targeted so that it’s actually more useful to me than not? As I keep saying over and over again, privacy is about a set of trade-offs: how much am I willing to give up to get what kind of benefit? The problem tends to come in not when I’m simply handing off information, but when there’s a mismatch (or lack of clarity) between how much information I’m giving up and what benefit I’m getting in return. But if I had more visibility and control over that (for example, the ability to better target useful ads to myself by seeing what advertisers see about me, and having some control over what info is included in whatever “profile” they have on me), then that’s not a privacy violation to me anymore. That allows me to customize things in a way where I’m comfortable, and I even get relevant and useful ads.
But, again, so many in the privacy realm refuse to even consider that world a possibility, and simply want to cut off my ability to enable it. Instead, they want to stop targeted ads entirely. Even for people who want them. And that seems… not all that helpful?
Promote competition in the technology sector. The American information technology sector has long been an engine of innovation and growth, and the U.S. has led the world in the development of the Internet economy. Today, however, a small number of dominant Internet platforms use their power to exclude market entrants, to engage in rent-seeking, and to gather intimate personal information that they can use for their own advantage. We need clear rules of the road to ensure small and mid-size businesses and entrepreneurs can compete on a level playing field, which will promote innovation for American consumers and ensure continued U.S. leadership in global technology. We are encouraged to see bipartisan interest in Congress in passing legislation to address the power of tech platforms through antitrust legislation.
This is the first one on the list, and it’s probably the one I have the fewest complaints about. Except that, again, the devil is in the details. So far, the bill that has gotten the farthest on this front, AICOA, is so poorly drafted that it would basically function as a content moderation bill in disguise, where disinformation peddlers would be able to use provisions in the law to claim, disingenuously, that moderation of, say, disinformation was actually being done in an anti-competitive manner.
Indeed, as noted above, many of the other provisions in this platform are, themselves, anti-competitive, in that they would create massive compliance costs that the biggest providers could shoulder, but everyone else would be left out in the cold.
It is increasingly difficult for me to take any policymakers seriously when they refuse to look at how competition, privacy, content moderation, and much, much more are interconnected, and how moves you make on one front impact the others.
All of these proposals (and the bills they likely refer to) are half-baked, performative ideas that make for great headlines, but show a real lack of understanding of how the world actually works and how these changes would flow through the internet ecosystem.
That this is the best the Biden White House can put out after 20 months in office is kind of a condemnation of the administration’s tech policy chops. They seem to have very few actual experts on board who could better inform these discussions. And thus… we get this.
It’s performative. It creates headlines that maybe sound good. But it doesn’t solve any of the real problems.
Filed Under: antitrust, children, content moderation, for the children, joe biden, section 230, tech policy, whitehouse