Tuesday, October 19, 2021

The Disinformation Funnel Part II: Solutions

[Image: Michael Jackson, leaning in]

I work in fundraising, and each year we put on a Giving Day where we challenge our donors to make as many gifts as possible in 24 hours. In a prep meeting, we were looking at donations by the hour and noticed some patterns: lots of gifts at 7 am, noon, and 5 pm. We were trying to decide when to promote incentives during the day, and the data led to two conclusions that split the department in half.

One half thought we should push out our incentives when giving was low, to shore up the moments when we were weak. The other half thought we should lean into what we do well and put the incentives out when giving was high.

In my last post, I described what I see as the problem of disinformation and Jonathan Rauch's "funnel." I didn't make any great suggestions other than to say I put my trust in certain people to interpret for me what comes out of the funnel. I think most people do some version of this. When people say something like "I'm not one of those sheep who follow Fauci. I do my own research," I think they mean they look for people they trust to interpret what comes out of the funnel.

This is the familiar pattern of the 21st century: we lose trust in institutions and place it in people. And I totally agree with Yuval Levin that this is bad, but maybe the solution is to just lean into it.

PETA spent decades trying to shame people into giving up meat, doing things like throwing blood on ladies in mink coats. It didn't work because they couldn't solve the big problem, which is that hamburgers are delicious. Along comes Impossible Foods and decides to lean into the problem by creating a meatless burger that is also delicious. 

That is the challenge. How can we lean into the individuals that people already trust and sort out the grifters from the rogue geniuses? 

We need to create a funnel for public intellectuals.

Moral Accountability

So Twitter has a blue checkmark that verifies people are who they say they are. How about a green checkmark for someone in the journalism/publishing industry who has taken a journalistic oath? Green-check holders can police one another and decide to rescind checkmarks. You're still allowed to tweet; you just lose that credential.

I'm looking to appropriate the Hippocratic Oath, but for media. The oath should be something agreed upon by heterodox journalists, who can then hold one another accountable. People will still be able to choose whom they follow and whom they trust, but the green checkmark will lend more credibility.

This doesn't involve occupational licensing, manipulative algorithms, or government oversight. Just the will of the industry to do better.

Forecasting on Record

In Scott Alexander's plan to improve the Republican Party, he suggests repealing all bans on prediction markets and giving tax breaks for participating in them.

"You'll get a decentralized, populist, credentialism-free, market-based alternative to expertise."

He then warns that if we continue not to trust experts, Republicans are at risk of ignoring the people who are actually right.

"Prediction markets - an un-biasable, decentralized form of aggregating expert and non-expert opinion correctly - are the only solution I can imagine working."

From there, we're looking at an expansion of the Good Judgment Project. Basically, influencers make their predictions public, and their followers can see for themselves how often they are right and how often they are full of shit. And anyone who refuses to participate should be considered unreliable.
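To make "see for themselves" concrete, a public track record can be boiled down to a single number with something like a Brier score. Here's a minimal sketch, assuming each prediction gets logged as a stated probability plus a resolved outcome; the pundit and claims are made up for illustration, not drawn from the Good Judgment Project itself.

# A minimal sketch of scoring a public forecasting track record.
# Assumes predictions are stored as a probability plus what actually happened;
# the example record below is hypothetical.

from dataclasses import dataclass


@dataclass
class Prediction:
    claim: str
    probability: float  # forecaster's stated chance the claim comes true (0-1)
    came_true: bool     # how the claim actually resolved


def brier_score(predictions: list[Prediction]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    0.0 is perfect foresight; 0.25 is what coin-flip guesses earn;
    confidently wrong calls push the score toward 1.0.
    """
    return sum(
        (p.probability - (1.0 if p.came_true else 0.0)) ** 2
        for p in predictions
    ) / len(predictions)


# Hypothetical public record for one pundit.
record = [
    Prediction("Candidate X wins the election", 0.90, False),
    Prediction("Inflation tops 5% this year", 0.70, True),
    Prediction("The bill passes before recess", 0.60, True),
]

print(f"Brier score: {brier_score(record):.3f}")  # lower = more trustworthy

The point isn't the particular formula; it's that once predictions are public and resolvable, anyone can compute a number like this and compare pundits on it.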

Arnold Kling has a similar idea involving a draft of public intellectuals. My only critique is that he judges people based on things he finds admirable, which will exclude thinkers outside his ingroup. Better to open the funnel and give everyone a chance to prove their trustworthiness.

The Spectrum of Approval

Scott has another idea, but this one isn't about discerning trustworthy individuals; it's a possible fix to the scientific funnel problem. He suggests improving the FDA by replacing the binary approved/not-approved stamp (and the occasional "emergency authorization") with a rating system for drugs.

"Grant drugs one-star, two-star, etc approvals. Maybe one-star would mean it seems grossly safe, the rats we gave it to didn’t die, but we can’t say anything more than that. Two-star means it’s almost certainly safe, but we have no idea about effectiveness. Three-star means some weak and controversial evidence for effectiveness, this is probably where aducanumab is right now. Four-star means that scientific consensus is lined up behind the idea that this is effective, this is probably where the COVID vaccines are right now. Five star is the really extreme one where you’re boasting that Zeus himself could not challenge the effectiveness of this drug - the level of certainty around MMR vaccine not causing autism or something like that."

I remember going into restaurants in North Carolina that displayed a placard with a grade. The department of health would grade each restaurant, which I liked. My libertarian side hoped they wouldn't shut places down for failing but would instead make them post the F, so people could decide for themselves whether it was worth eating there.

The reason people like Scott like prediction markets is that there are penalties for being wrong: you literally lose money. There needs to be more of a penalty for misinformation. So instead of doing the FDA thing of declaring that this speech is or isn't allowed, you give it a rating system, and people are penalized through their rating when they're wrong.
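For what that penalty might look like mechanically, here's a hedged sketch using a logarithmic scoring rule, where a confident claim that turns out false drags a rating down much harder than a cautious one. The rule and the numbers are my own illustration, not anything proposed in Scott's post.

# A minimal sketch of the "penalized via their rating" idea. Assumes a
# source starts at a neutral rating and gains or loses points as its
# claims resolve; the scoring rule here is my own illustration.

import math


def update_rating(rating: float, stated_confidence: float, was_right: bool) -> float:
    """Logarithmic scoring: being confidently wrong costs far more than
    being cautiously wrong, and hedged claims barely move the rating."""
    p = stated_confidence if was_right else 1.0 - stated_confidence
    return rating + math.log(p / 0.5)  # a 50/50 shrug changes nothing


rating = 0.0
rating = update_rating(rating, 0.95, was_right=False)  # big hit: about -2.3
rating = update_rating(rating, 0.60, was_right=True)   # small gain: about +0.18
print(f"current rating: {rating:.2f}")

The asymmetry is the whole point: a rating system only disciplines misinformation if confident wrongness costs noticeably more than honest hedging.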

So whatever we end up doing, it will only have a chance of success if we lean into where the trust already resides.
