What responsibility do hosting companies have for sites like 8chan?

Note: This is something I originally wrote for the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer.

Over the August 4th weekend, another mass shooting took place in which the shooter posted material related to his attack — including written “manifestos,” as well as images and in some cases live, streaming video — to the controversial online community 8chan. The gunman in the latest case, who killed 20 people in a Walmart in El Paso, Texas, posted his alleged justification for the rampage on 8chan’s message boards, as did the killer in the Christchurch mosque shootings in New Zealand in March and the shooter who opened fire on a synagogue near San Diego, Calif. in April. Commenters on the 8chan threads for these acts referred to each of the shooters as “our guy,” and in some cases talked about the killings as a “high score,” the way someone playing a video game would.

Until late Sunday night, 8chan used the services of a company called Cloudflare, which runs a network of powerful internet “proxy” servers that can balance the traffic going to such sites when there is a sudden onslaught of visitors — either because a piece of content has become popular, or because malicious users are directing a “denial of service” attack at the site by hitting it with an automated deluge of traffic. When 8chan’s role in the latest mass shooting came to light, reporters asked Cloudflare whether the company planned to continue providing these services to the site, and Cloudflare said yes, arguing that it isn’t up to the company to decide what kinds of content are appropriate. But by late Sunday, Cloudflare CEO Matthew Prince had changed his mind, and said 8chan would be blocked from using the service.
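
To make the traffic-balancing point concrete, here is a minimal, hypothetical sketch in Python of the kind of per-client rate limiting a proxy layer can apply to absorb an automated deluge of requests. It illustrates the general technique only; it is not a description of Cloudflare’s systems, and the class name and numbers are invented for the example.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each client IP gets `rate`
    requests per second, with short bursts allowed up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.buckets = {}         # client_ip -> (tokens, last_refill_time)

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        tokens, last = self.buckets.get(client_ip, (self.capacity, now))
        # Refill tokens according to the time elapsed, capped at capacity
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[client_ip] = (tokens - 1.0, now)
            return True   # forward the request to the origin site
        self.buckets[client_ip] = (tokens, now)
        return False      # drop or challenge the request

limiter = TokenBucket(rate=5, capacity=20)   # 5 requests per second, bursts of 20
if limiter.allow("203.0.113.7"):
    pass  # proxy the request through to the site
```

A surge of legitimate visitors spread across many addresses passes through largely untouched, while a single automated source hammering the site runs out of tokens almost immediately; a service like Cloudflare manages that trade-off at vastly larger scale, alongside caching and filtering.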

This isn’t the first time this issue has come up for Cloudflare. In 2017, the company went through a similar debate before cutting off neo-Nazi website The Daily Stormer, which routinely promotes racism and white supremacist ideology. Prince finally decided to block the site from Cloudflare’s service, but wrote a long and thoughtful blog post about how he didn’t think his company and others like it — those that provide hosting services and other utilities — should have the power to effectively remove certain websites from the public internet. “Due Process requires that decisions be public and not arbitrary,” Prince said. “Law enforcement, legislators, and courts have the political legitimacy and predictability to make decisions on what content should be restricted. Companies should not.” Prince said something very similar in a blog post about 8chan, as well as in interviews, as did legal experts such as Kate Klonick of Yale Law School, an expert in online speech and content moderation.

A provider like Cloudflare can’t block a site from the internet completely, but removing its services means 8chan could be crippled fairly easily by a denial-of-service attack or some other exploit. In effect, it makes the site much less stable, which in turn makes it less likely to have as much reach. And Cloudflare isn’t the only one that has taken action: Google removed 8chan from its search index in 2015, which means that anyone searching for it gets links to Wikipedia entries and news stories about it rather than a link to the site itself. Of course, the content often leaks out even when the sites themselves are taken down: the conservative news site The Drudge Report, for example, posted a version of the El Paso killer’s manifesto even though most other sites refused to even link to it. And Gizmodo notes that while Cloudflare may have dropped 8chan, the proxy service and other hosting providers continue to support a wide range of other objectionable and hate-filled sites.

As was the case with The Daily Stormer, the removal of service by companies like Cloudflare usually results in a scramble to come up with alternative hosting and denial-of-service protection. Much like the neo-Nazi site, 8chan fairly quickly signed up with a Cloudflare-like provider called BitMitigate — a subsidiary of Epik, a company whose founder bragged about helping to host The Daily Stormer after it was taken offline. But even an internet utility has to rely on other utilities for its livelihood, which in turn makes its content and services vulnerable. In the case of BitMitigate, a company called Voxility owns the internet infrastructure that allows the caching or proxy service to function, and after its role was pointed out on Twitter (by Alex Stamos, former chief security officer at Facebook, among others), the company said it had removed BitMitigate from its service.

In some ways, the responsibility that social networks like Facebook and YouTube have for offensive content is more obvious than it is for a service provider like Cloudflare. Facebook and Twitter and Google not only help to distribute such content, but their content-promoting algorithms make sure plenty of people see it, which is an editorial function like the one newspapers used to fulfill. Cloudflare and similar hosting services are more like the power company, which operates the grid that keeps the lights on, or the phone network that connects users and allows them to call each other. Should the power company be deciding which companies or homes to supply electricity to? Should the phone company be cutting off users who choose to talk about offensive subjects using their network?

None of these analogies is totally accurate, but they help show why providers like Cloudflare have a difficult time removing services even from obvious online cesspools like 8chan, and why questions are often raised when payment processors like PayPal or Visa make it impossible to donate to certain entities (as they did with WikiLeaks). Do we want a utility provider to be making those kinds of decisions? And if not, then who does? And based on what criteria? These are the kinds of questions that 8chan — and the role it has played in mass shootings — has forced us to begin to grapple with.

Facebook’s third-party fact-checking program falls short

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.

In December of 2016, in the wake of a firestorm of criticism about online disinformation and Facebook’s role in spreading it during the 2016 election, the social network reached out to a number of independent fact-checking organizations and created the Facebook Third Party Fact-Checking project. When these outside agencies debunked a news story or report, Facebook promised to make this ruling obvious to users, and to down-rank the story or post in its all-powerful News Feed algorithm so fewer people would see it. But even though the project has grown to the point where there are now 50 partner organizations fact-checking around the world, it’s still very much an open question how useful or effective the program actually is at stopping the spread of misinformation.

One of those raising questions is a relatively new Facebook fact-checking partner in the UK, known as Full Fact, a non-profit entity that recently published an in-depth report on the first six months of its involvement in the program. The group says its overall conclusion is that the third-party fact-checking project is worthwhile, but it has a number of criticisms to make about the way the program works. For example, Full Fact says the way Facebook rates misinformation needs to change, because the terminology and categories it applies aren’t granular enough to encompass the many different kinds of misleading content. It also says that while the program has expanded to cover fact-checking in 42 languages, Facebook has so far failed to scale up the speed with which it flags and responds to fact checks. According to the group, it fact-checked just 96 claims in six months (and was paid $171,800 under the terms of its partnership contract).

One of the group’s other concerns is more fundamental: namely, that Facebook simply doesn’t provide enough transparency or clarity on the impact of the fact-checking that groups like Full Fact do. How many users did the debunks or fact-checks reach? How many clicked on the related links from the info pane? Did this slow or even halt the spread of that misinformation? Facebook doesn’t divulge enough data to even begin to answer those questions. Its only response to the Full Fact report and its 11 recommendations was to tell the group that it is “encouraged that many of the recommendations in the report are being actively pursued by our teams as part of continued dialogue with our partners, and we know there’s always room to improve.” There was no response to the criticism about a lack of data.

The complaint is not a new one. Earlier this year, a number of the social network’s fact-checking partners told the BBC they were concerned that there was no real way to see whether their work was having an effect, and that this suggested Facebook didn’t actually care about the efficacy of the program. “Are we changing minds?” wondered a fact-checker based in Latin America. “Is it having an impact? Is our work being read? I don’t think it is hard to keep track of this. But it’s not a priority for Facebook.” The program has been the subject of these and other criticisms almost since its inception. Last year, a number of partners seemed deeply cynical about it. “They’re not taking anything seriously. They are more interested in making themselves look good and passing the buck,” said Brooke Binkowski, former managing editor of fact-checking site Snopes.com, who now works for a similar site called Truth or Fiction.

The idea that a highly touted project might exist primarily for PR purposes is a common theme with Facebook. Some believe the $300 million in funding it has committed to media ventures through the Facebook Journalism Project is mostly window-dressing, a way of buying the loyalties of those who receive the funding, in order to generate press releases that make the company look better in the eyes of legislators and regulators (many of whom are pushing an antitrust agenda). If Facebook wanted to give the impression that it actually cares about fact-checking, one obvious way to do so would be to open up its vast database and share more information about whether the project is actually working or not.

Here’s more on Facebook and fact-checking:

Checking in with the checkers: Last year, Mike Ananny wrote for CJR about a report he helped write for Columbia University’s Tow Center for Digital Journalism, which looked at the Facebook program and a number of criticisms participants had about how the project was structured and the criteria used, including why some posts and news stories were chosen for debunking but others were not.

What about Instagram? Among the recommendations in the report from Full Fact is that Facebook extend its fact-checking program to Instagram, the photo-sharing network it owns, which is much more popular with younger users than Facebook itself. “The potential to prevent harm is high [on Instagram] and there are known risks of health misinformation on the platform,” the group said.

A booming business: Fact-checking groups in Uruguay, Bolivia, Argentina, and Brazil have joined forces to create a regional coalition in order to fight misinformation being spread both on Facebook and through WhatsApp, the encrypted messaging network the company owns. The groups are also working with organizations like First Draft, a fact-checking and training network based in the UK that is also affiliated with the City University of New York.

Fact-checking Boris: The British TV network Channel 4 has done some fact-checking of government statements in the past, but in the wake of Boris Johnson’s recent ascension to the office of Prime Minister of the UK, the network says it is now committed to fact-checking every public statement Johnson makes during his tenure as PM, and has asked its viewers to help it do so.

Other notable stories:

The New York Times profiled a dying local newspaper from rural Minnesota, The Warroad Pioneer, which is shutting down after 121 years of publishing. According to the Times story, the paper and its three remaining employees ended their run “with Bloody Marys, bold type and gloom about the void it would leave behind.”

The Times also published a feature called A Future Without the Front Page, in which it asks: “What happens when the presses stop rolling? Who will tell the stories of touchdowns scored, heroes honored and neighbors lost?” The paper put those questions to the founders of the Report For America project, the executive editor of Chalkbeat, a service focused on education reporting, and the founder of Outlier Media.

Conveying the sheer magnitude and gravity of the climate change crisis is often a challenge, but sometimes an image sums up more than words can express. A video clip shared on Twitter by a former fellow at the Council on Foreign Relations did that on Thursday, showing the swollen river created by melting glacier ice in Greenland. According to Danish officials, more than 12 billion tons of glacier ice melted in a 24-hour period.

Less than a year after agents working for the Saudi Arabian royal family reportedly killed and dismembered Washington Post columnist Jamal Khashoggi, the country says it will hold a media forum and award ceremony aimed at repairing its reputation. According to the Post, the number of journalists in prison in Saudi Arabia has tripled in the two years since Mohammed bin Salman took power.

Facebook said it removed 259 Facebook accounts, 102 pages, five groups, and 17 Instagram accounts that were engaged in what the social network calls “coordinated inauthentic behavior.” The company said the accounts originated in the United Arab Emirates and Egypt, and their behavior was focused on a number of countries in the Middle East and Africa, including Libya, Sudan, and Qatar.

Brenna Wynn Greer writes for CJR about the sale of the photo archives from Jet and Ebony, two pioneering magazines aimed at African-American readers. The archive, which included historic photos of events such as the lynching of Emmett Till, was acquired by four philanthropic foundations—the J. Paul Getty Trust, the Ford Foundation, the MacArthur Foundation, and the Andrew W. Mellon Foundation—for just under $30 million.

YouTube trolls advertised a new shelter in Los Angeles called the Ice Poseidon Homeless Shelter that turned out to be a private mansion belonging to a YouTube personality whose nickname is Ice Poseidon. The YouTube creator, whose real name is Paul Denino, told the LA Times that he has been the victim of trolling behavior before, including “swatting” attacks, in which trolls call 911 in an attempt to get SWAT teams to descend on a user’s home or workplace.

The Wall Street Journal reported that social media bots pushed divisive content and misinformation related to race during the Democratic debates, focusing specifically on Kamala Harris. But disinformation researcher Josh Russell said on Twitter that virtually every hashtag or search term will show signs of bot-like activity. “What matters are networks of bots,” said Russell, who only found signs of two relatively small spam networks trolling Harris.

A joint team from the Centre for Democracy and Development and the University of Birmingham in the UK spent months researching the impact of WhatsApp on the 2019 Nigerian elections held earlier this year. Their report finds that the platform was used to mislead voters in some sophisticated ways, but also that usage of WhatsApp actually strengthened democracy in other areas.

Facebook’s funding of local journalism is problematic

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.

On Wednesday, Facebook announced the first round of grant recipients for what the social-media giant is calling its Facebook Journalism Project Community Network. The 23 media outlets that will receive the money—between $5,000 and $25,000 per newsroom—were chosen by Facebook’s partner, the Lenfest Institute, a non-profit entity set up by former cable magnate Gerry Lenfest in part to finance the continued operation of the Philadelphia Inquirer and Philadelphia Daily News. Facebook said in a news release about the grant program that the winners “include a fresh approach to business sustainability through community-funded journalism, and expansion of successful storytelling events shown to increase reader revenue.”

Being a small, community-focused media outlet has never been easy, but it has gotten increasingly difficult of late, as the print advertising business has plummeted and digital advertising has been squeezed. So it’s not surprising that startups and hyperlocal players like the ones chosen to receive Facebook’s largesse would celebrate their victory, since the company’s funding will presumably allow them to do things they otherwise couldn’t—including, perhaps, keep the lights on. But there is an elephant in the room: namely, the fact that Facebook is one of the main reasons the media industry is in such desperate straits in the first place, since it controls a significant share of the ad market, and the attention of billions of daily users.

The almost two dozen media entities that are getting the Facebook grants include a number of prominent players in community-based journalism: Spaceship Media, which organizes events aimed at bringing disparate groups together to discuss difficult topics; the education-focused outlet Chalkbeat; and the Tyler Loop from Texas, which got money to expand its live storytelling events. There’s Block Club Chicago, a member of the blockchain-powered journalism platform Civil, and a project called 100 Days in Appalachia. But somewhat surprisingly, the recipients also include a number of much larger, more traditional media companies, including the Los Angeles Times—which is getting money to fund community forums—as well as Newsday, owned by the family of Cablevision founder Charles Dolan, and The Salt Lake Tribune.

The Community Network is expected to get about $1.5 million in total funding this year, according to Facebook, or less than 1 percent of the revenue the giant social network makes in a single day. Even the broader Journalism Project, which Facebook has said it is going to fund to the tune of about $300 million over the next several years, is only going to cost the company about two days’ worth of revenue. And what does Facebook get in return? The company says it cares about local journalism because local media is all about community, and so is Facebook. But the main benefit seems to be that the company gets to issue press releases with grateful comments from all of the ventures it is helping to support, and that makes it look good at a time when it is under fire from Congress both for its market power and its role in spreading disinformation.

Obviously, media outlets both large and small are struggling to make ends meet, as are many other journalism-related entities, which is why making friends with Facebook and Google is so appealing. They have money, and they are willing to spend it! And, best of all, the money appears to have no strings attached. The reason there are no obvious strings attached, however, is likely that these giant platforms don’t actually care what happens to the money, so long as they get to issue their press releases and make themselves look good in the eyes of regulators. It may feel like a win-win, but it isn’t. It’s a giant, thorny conflict of interest with a check attached.

Here’s more on Facebook and the funding of journalism:

The patronage system: I looked at the problematic nature of journalism funding from Facebook and Google in a feature for CJR last year called The Platform Patrons. Google has also committed to spend $300 million on journalism training and other funding over the next several years, as part of what it calls the Google News Initiative.

Dangerously codependent: British journalist James Ball wrote for CJR about the idea of a levy on tech companies to fund journalism. He said “tying the future of journalism to a tech or social media levy shackles the two even closer together, making an already dangerously codependent relationship even less healthy—and potentially compromising journalism in the eyes of readers.”

Facebook and news deserts: At an “accelerator summit” in Denver earlier this year, Facebook announced its research into the news desert problem, and held workshops with local media outlets aimed at helping them figure out how to improve their business models. But some of those who attended said they felt they were mostly pawns in a giant public-relations exercise.

Other notable stories:

Tensions are running high at First Look Media, according to a report from New York magazine. The owner of the investigative journalism site The Intercept has come under fire for closing two well-regarded sites, The Nib and Topic magazine. A letter sent to management by First Look staffers says there are reports the company has used the cost savings from the shutdowns to acquire Passionflix, a romance-focused streaming video service run by Elon Musk’s sister.

Substack, a platform for publishing email newsletters, said Wednesday that it has closed a $15.3-million Series A funding round from a group of investors including noted Silicon Valley firm Andreessen Horowitz. Substack says that the dozens of newsletters published on its platform, including Bill Bishop’s Sinocism, currently have a total subscriber base of about 50,000 people.

Emily Tamkin, one of a group of public editors that CJR has appointed for several leading media outlets, writes about CNN’s questionable decision to put white supremacist Richard Spencer on its news program to talk about the response to Donald Trump’s recent racist comments about four Democratic members of Congress.

Some staffers at Gizmodo Media say the company’s new owners are taking sites like Jalopnik and Kotaku in the wrong direction, according to a report from The Daily Beast. New CEO Jim Spanfeller has reportedly suggested that sites should be more friendly toward advertisers, and has asked that reporters accompany ad sales representatives on visits to potential advertisers.

Ford Motor Co. issued a statement contesting a recent Detroit Free Press investigation that found the automaker knowingly launched the Ford Focus and Ford Fiesta with defective transmissions, “and continued selling them despite thousands of complaints and an avalanche of repairs.” In response, the Free Press printed the rebuttal from Ford in full but annotated it.

A recent goodbye party for retiring Philadelphia Inquirer and Daily News columnist Stu Bykofsky took a dark turn, according to Philadelphia magazine, when architecture critic Inga Saffron took the microphone and began taking jabs at the columnist’s body of work, his alleged ethical lapses, and a contentious column he wrote about the virtues of young sex workers in Thailand.

Twitter released a video-editing tool on Wednesday called LiveCut, which it says will allow media companies and publishers to easily create and share video clips from media streams. The product is similar to a service called SnappyTV, which Twitter acquired in 2014 and is in the process of shutting down.

Brian Merchant writes for CJR about the scourge of “on background” briefings given to media outlets by tech company executives. Merchant says these briefings are a “toxic arrangement” that shields tech companies from accountability, and allows giant corporations the opportunity to “transmit their preferred message, free of risk, in the voice of a given publication.”

The Athletic, a US-based subscription sports news service, just hired away some of the top sports writers in the United Kingdom, according to a report by BuzzFeed. The new hires include an award-winning Guardian football writer and a BBC reporter with a huge following among London football fans. The hiring spree has been described as “setting off a bomb” in the industry.

The New York Times published a special interactive feature looking at the fire that almost destroyed the Notre Dame cathedral in Paris in April. The feature, which includes hand-drawn sketches of the fire-fighting process done by a Paris firefighter who is also a trained sketch artist, concludes that an incredibly complicated system of fire alerts and an employee who had only been on the job for three days were partly to blame for the devastating blaze.

Journalists have to walk a fine line, says disinformation expert Whitney Phillips

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.

One of the most challenging problems of the digital information age is how to report on disinformation without pouring gasoline on the fire in the process. While working with the New York-based group Data & Society, media analyst Whitney Phillips (now an assistant professor of communications at Syracuse University) wrote a comprehensive report on this challenge, entitled The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators. We thought this topic was worth exploring in more depth, so we asked Prof. Phillips to join us on our Galley discussion platform for an interview, which took place over multiple days.

The idea that journalists can exacerbate problems merely by doing their jobs is somewhat more widely accepted now, thanks in part to the work of Prof. Phillips and Joan Donovan, who runs the Technology and Social Change Research Project at Harvard’s Shorenstein Center (and also did an interview with CJR on Galley recently). After the recent mass shooting incident in New Zealand, a number of media outlets chose not to focus on the shooter, and didn’t publish or link to his “manifesto.” In some cases, news outlets didn’t even use his name in the stories they wrote, which is a big change from even a few years ago. But Phillips says there is more to be done.

“I’ve been considering these questions for the better part of a decade and I still find them vexing,” she says. There are some basic guidelines that are comparatively clear, including efforts to avoid publicizing anything that hasn’t yet met the tipping point—which is reached when a topic moves beyond a discrete online community and becomes a subject of broader discussion. Obviously, a mass shooting will cross that point immediately, but that doesn’t mean reporters should report everything about the incident. Of particular concern, says Phillips, are ways of framing the story that “aggrandize the shooter/antagonist, or otherwise incentivize future shooters/antagonists.”

Some news outlets have argued that they need to report on the personal details and background of extremists like the Christchurch shooter because we need to understand how they were radicalized. But while this kind of understanding might help in some cases, Phillips says it is going to fail in others, because “radicalization is a choice, and changing people’s minds about the things they actively choose is a long-term, up-close-and-personal, complicated ground game, not something you can solve by waving a newspaper article at someone.” Writing in detail about how they were radicalized might be seen by like-minded extremists as a reward rather than punishment.

It’s true that in some cases “sunlight disinfects,” meaning that exposing wrongdoers can cause them to lose their power. But Phillips notes that in other cases it can function as a hydroponic grow light, “and it’s simply not possible to know what the long term effect of reporting will be. By then, it might be too late to intervene, because what ended up growing turned out to be poison.” Currently, journalists and even academics tend to focus almost exclusively on white supremacists and violent manipulators. But why? “At what point did we internalize the idea that attackers and liars and racists are the most interesting and important parts of a story?” she asks.

The point, Phillips says, is that if the goal is to undermine a violent ideology like white supremacy, you don’t do that by only talking about white supremacists. “That keeps them right where they want to be, which is central to the narrative.” What we should be doing is showing the effects of white supremacy. Many people only know about racism as an abstraction, says Phillips. “But it’s not an abstraction. It’s bleeding bodies. It’s screaming babies. It’s synagogues and mosques on lockdown. Those stories need telling.” Better to spend more time reporting those kinds of details, rather than another profile that amplifies the messaging “of some violent asshole whose actions tell us everything we need to know.”

Part of the problem with fighting misinformation is that we all believe things that turn out to be wrong, whether about our own habits or our personal relationships. What this shows, she says, “is that well intentioned interventions, outfitted with true and important facts, often go unheeded, and can actually compel a person to double down and feel even more convinced that they’re right and everybody else is wrong.” That’s why well-intentioned fact-checking efforts can have a boomerang effect and actually entrench a false belief in some cases. And on top of that, studies have shown that repeating a message, even while debunking it, can reinforce the message and paradoxically make it seem more believable.

“Efforts to fact check hoaxes and other polluted information operate under the assumption that objective truth is a magic bullet [which] goes right into readers’ brains, without any filter, without any resistance, and fills in the holes that bad information leaves behind,” says Phillips. According to this theory, the problem of disinformation can be solved by handing out facts. But that’s not how human nature works. “When something ugly emerges from the depths, you simply cannot throw facts at it and expect anything transformative to happen—most basically because there is, across and between groups, no agreement about what the facts even are.”

There are even more complicating factors, Phillips says. According to one study of “fake news,” almost 15 percent of users shared false or misleading stories even though they knew they were untrue. And in many cases people do this because they want to send a message about who they are or what they believe, in order to show that they are part of a specific group. “Media literacy discussions within journalism and academia tend to presume good faith in these kinds of cases, and proceed from there,” she says. “But people don’t always operate under good faith. In my line of work in particular, bad faith arguments and actions are everywhere.”

Getting to the bottom of the Seth Rich conspiracy theory

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.

It was one of the first prominent “fake news” conspiracy theories to metastasize from Internet rumor all the way to the White House: In the summer of 2016, stories began to circulate in various online forums that Seth Rich, a fairly low-level Democratic National Committee staffer who died in July, wasn’t the victim of a botched robbery at all, but had actually been assassinated by a contract killer working for Hillary Clinton. Rich, the theory went, was actually the secret source who had leaked DNC emails to WikiLeaks—a theory that WikiLeaks founder Julian Assange appeared to lend credence to when he offered a $20,000 reward for information leading to the identity of Rich’s killer or killers. “Our sources take risks,” he said.

As these theories were being spread by Reddit users, denizens of 4chan forums and even Fox News hosts like Sean Hannity, suspicion arose that there were shadowy forces trying to promote the loony-sounding conspiracy. But it wasn’t clear who exactly these forces were, or what their intentions were. On Tuesday, Yahoo News investigative reporter Michael Isikoff announced that he had tracked down the original source of the theory: a fake report concocted by the Russian intelligence agency SVR (short for Sluzhba vneshney razvedki Rossiyskoy Federatsii), a successor to the foreign-intelligence arm of the KGB. The phony “bulletin,” designed to look like an authentic intelligence report, was released just three days after Rich’s death, Isikoff writes.

The idea that the Rich conspiracy theory was distributed by agents acting on behalf of the Russian government is not a new one. When information started to come out about the activities of the so-called Internet Research Agency during the 2016 election—which engaged in a sustained campaign of disinformation and outright propaganda on Facebook and other platforms—the Seth Rich assassination theory turned out to be one of the many pieces of fakery the IRA distributed as a way of destabilizing the campaign. But the agency was a privately run, arm’s-length entity (albeit one run by a close associate of Russian president Vladimir Putin). Until Isikoff’s report, it was not clear that this conspiracy theory originated from Russian intelligence itself.

According to the Yahoo News scoop—which also forms the basis of a podcast series the site has launched about the story, called Conspiracyland—the Russian intelligence report’s existence was confirmed by the former federal US prosecutor who was in charge of the Rich case. “To me, having a foreign intelligence agency set up one of my decedents with lies and planting false stories, to me that’s pretty outrageous,” said Deborah Sines, who was an assistant US attorney in charge of the Rich case until her retirement last year, and who had not previously spoken publicly about the case or Russia’s involvement.

The Rich theory was promoted heavily not just by the Internet Research Agency but also by the Russian state-owned media outlets RT and Sputnik, Isikoff says, and from there it spread to right-wing media players such as Alex Jones of Infowars, and to a number of conservatives with ties to the White House, including Roger Stone. But one bombshell exposed by the Yahoo News report is that the theory was also directly promoted by a senior advisor to Trump: namely, Steve Bannon, at one time a key player in the administration. According to Isikoff, who saw some of Bannon’s text messages, the former Breitbart News executive chairman texted a CBS “60 Minutes” producer on March 17, 2017, saying: “Huge story … he was a Bernie guy … it was a contract kill, obviously.”

Here’s more on the Seth Rich conspiracy theory:

The Russians are coming! Philip Bump at The Washington Post argues that Isikoff’s story overplays the Russian intelligence angle, and says it is more likely the theory got traction because it played into right-wing tropes about the Clintons’ alleged involvement in multiple deaths, tropes that existed long before Trump. Blaming the Russians makes for a better story, Bump says, but the truth is “it was politically useful for a number of people to hype the allegations.”

Worse than you remember: At the height of the Seth Rich conspiracy frenzy, Fox News ran a report claiming it had confirmed that Rich was the WikiLeaks source who provided the hacked DNC emails. The company later retracted the story and said it would look into how it happened, but no investigation has ever been released. So Media Matters ran its own series of articles looking into the bogus story, a series it released on the second anniversary of the Fox report. Isikoff says a person close to Fox told him the network came to doubt that the anonymous source in its story ever existed.

Multi-channel strategy: Disinformation expert Kate Starbird says the Yahoo News report is a great example of how conspiracy theories flow from alternative media sites through to mainstream television. “This is why when we focus on social-media effects of Russian disinfo, we completely miss the point,” Starbird said on Twitter on Tuesday. “This is a multi-dimensional, multi-channel strategy, which uses different tools in complementary ways, and through which they have shaped U.S. political discourse.”

Other notable stories:

Vicky Ward writes for The Daily Beast about Jeffrey Epstein, a New York financier who was recently arrested on charges of sex trafficking, after reports that he helped cultivate a network of underage girls and coerced them into sex acts at his 21,000-square-foot New York apartment and his Palm Beach mansion. Ward says details about Epstein’s abuse of two young sisters were removed from a profile she wrote for Vanity Fair magazine in 2003.

A federal appeals court ruled Tuesday that Donald Trump cannot legally block people from following him on Twitter because doing so is a breach of their First Amendment rights. The unanimous ruling from a three-judge panel on the United States Court of Appeals for the Second Circuit said since Trump uses the account to conduct government business, he is not allowed to block users from what amounts to a public governmental forum. The lawsuit was filed in 2017 by the Knight First Amendment Institute.

Ten journalists from news organizations across the country are receiving support from the American Press Institute for year-long projects aimed at helping their newsrooms incorporate community engagement into their journalism. The 2019 Community Listening Fellows will receive training and support on how to employ listening strategies to inform their journalism and to help them represent and serve their communities better.

A spokesman for Facebook said the social network has not been invited to attend a social-media summit the president is holding later this week at the White House. The Trump administration has said the summit is designed to “bring together digital leaders for a robust conversation on the opportunities and challenges of today’s online environment.” So far, several conservative groups have said they will be attending, but no official list has been released. Twitter and Google have not been invited, according to a number of reports.

NBC Nightly News anchor Lester Holt is the 2019 recipient of the Walter Cronkite Award for Excellence in Journalism, Arizona State University officials announced Tuesday. Holt will receive the 36th annual Cronkite Award from the university’s Cronkite School of Journalism and Mass Communication in Phoenix on Nov. 4, which is Cronkite’s birthday. The late CBS News anchor would have been 103 this year. Holt has anchored the flagship NBC broadcast since 2015, following eight years as anchor of the newscast’s weekend edition and 12 years as co-anchor of “Weekend TODAY.”

For CJR, Matt Haber talked with author Lisa Taddeo about her new book “Three Women,” an exploration of desire that she has been working on for almost a decade. Taddeo says she wanted to write about how desire and love change people’s lives, and she drove across the country six times looking for the right stories and the right communities to write about. She tells Haber that she became such a part of the lives of the three women she writes about that it was like she had moved into their homes for the duration of the book.

YouTube has been under fire recently for advertising to children who watch its videos, and in the process harvesting data on them. The Federal Trade Commission has said it is investigating the practice, but experts who have been interviewed by the commission tell Vice that the regulator is likely to put the onus on video creators to turn off advertising for children, rather than requiring YouTube itself to take action, such as forcing all videos with child-focused content into its existing dedicated kids portal, YouTube Kids.

Alex Stamos, former Chief Security Officer at Facebook and now the director of the Stanford Internet Observatory, tells fact-checking service First Draft News that the media often rushes to find a compelling reason for the spread of disinformation campaigns, but the sexiest story is usually the least accurate. “A big thing I always tell all journalists is: look, it’s probably not Russia,” he says. “The vast majority of the time, it is not a foreign influence campaign.”

Twitter released new rules Tuesday that it says are designed to prevent hate speech directed at religious groups. In a blog post, the company said that the new standards were developed after “months of conversations and feedback from the public, external experts and our own teams.” In the future, it said, tweets that use phrases such as “We need to exterminate the rats. The [Religious Group] are disgusting,” or “[Religious Group] should be punished. We are not doing enough to rid us of those filthy animals” will be removed and could lead to account suspensions.

According to an investigative report from Vice News, online retailing giant Amazon worked with police in Arizona to coordinate a sting operation aimed at thieves who were stealing Amazon packages from people’s houses. The company reportedly provided police with a “heat map” of various neighborhoods, showing where packages had gone missing, and provided dummy packages with GPS trackers in them, as well as video from the Ring video doorbells at certain homes (Amazon acquired Ring last year for $1 billion).

Facebook and the private group problem

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer.

Anyone who has been paying attention over the past year is probably well aware that Facebook has a problem with misinformation, but a number of recent events have highlighted an issue that could be even more problematic for the company and its users: namely, harassment and various forms of abusive conduct in private groups. In the latest incident, ProPublica reported on Monday that members of a Facebook group frequented by current and former Customs and Border Protection agents joked about the deaths of migrants, talked about throwing burritos at members of Congress who were visiting a detention facility in Texas, and posted a drawing of Rep. Alexandria Ocasio-Cortez engaged in oral sex with a migrant. According to ProPublica, the group, which has 9,500 members, is called “I’m 10-15,” after the code that CBP agents use when they have migrants in custody.

It’s not clear whether the administrator of the Facebook group made any effort to restrict membership to current or former Customs and Border Protection agents, so it’s probably not fair to conclude that the views expressed in the group are indicative of how a majority of CBP agents feel about immigrants. But that didn’t stop many commentators on Twitter and elsewhere—including a number of members of Congress—from expressing their concerns about whether attitudes at the agency need to be investigated. “It’s clear there is a pervasive culture of dehumanization of immigrants at CBP,” Democratic presidential candidate Kamala Harris said on Twitter, while Rep. Ocasio-Cortez said “This isn’t about ‘a few bad eggs.’ This is a violent culture,” and described CBP as “a rogue agency.” If agents are threatening violence against members of Congress, Ocasio-Cortez said, “how do you think they’re treating caged children+families?”

In response to the ProPublica story, a spokesman for CBP said that the agency has initiated an investigation into the “disturbing social media activity,” and Border Patrol Chief Carla Provost said the posts are “completely inappropriate and contrary to the honor and integrity I see—and expect—from our agents.” But the Customs and Border Protection group is only the latest in a series of such examples that have come to light recently. A recent investigation by Reveal, the digital publishing arm of the Center for Investigative Reporting, found that hundreds of current and retired law enforcement officers from across the US are members of extremist groups on Facebook, including groups that espouse racist, misogynistic, and anti-government views.

Some might argue that abusive comments made in a private Facebook group aren’t that different from hateful remarks made in an email thread. But email providers don’t host such discussions in the same way that Facebook hosts private groups, and the social network’s algorithm also recommends such groups to users based on their past behavior and interests. These private groups are just part of a broader—and growing—problem for Facebook, which offers private, encrypted discussions via its WhatsApp service that have also been implicated in the genocide in Myanmar, in which hundreds of thousands of Rohingya Muslims were driven from their homes, tortured and killed. Facebook CEO Mark Zuckerberg has said private communication is the future of the social network, which means these kinds of problems could escalate and multiply.

Facebook has been working on creating what it calls a Supreme Court-style social media council, or series of councils, that will be made up of third-party experts who could make decisions about what to do with problematic content. Will these councils be able to see and/or regulate the kind of hate speech or abusive content that occurs in private and secret groups, or in encrypted conversations on WhatsApp? And if so, how will they determine what is appropriate and what isn’t—will those decisions be based on Facebook’s standards, local laws, or universal human rights principles? That’s unclear. But the company is going to have to find an answer to some of these questions soon, as more and more attention is focused on the potential downsides of its private, encrypted future.

Here’s more on Facebook and its content moderation problems:

  • Front-line workers: Whether it’s groups or just regular Facebook content, one of the main weapons the company has against the problem is the thousands of moderators it employs to review flagged content, most of whom work for third-party contractors. In a new book, researcher Sarah Roberts writes about how artificial intelligence is not a solution to the content moderation problem.
  • The privacy dilemma: I wrote for CJR’s print edition about the challenges that Facebook faces as it tries to come up with a process for deciding what kinds of content it should host and what is unacceptable. Misinformation researcher Renee DiResta told me that hoaxes, hate speech and propaganda could be even more difficult to track and remove as more discussion moves into private groups.
  • Crackdown backfiring: Facebook has been trying to take action against groups that contain problematic content by removing and even banning them, but some of those efforts are backfiring as trolls figure out how to game the process. Some groups have been forced to go private after malicious actors posted hate speech or offensive content and then reported the groups to Facebook moderators in an attempt to get them removed for misconduct.
  • A fine for failure: Germany’s Federal Office of Justice has handed down a fine of $2.3 million against Facebook for under-reporting complaints about hateful and illegal content on its platform. Under the country’s internet transparency law (known as NetzDG), companies like Facebook have to remove posts that contain hate speech or incite violence within 24 hours or face fines of as much as $35 million. They are also required to file reports on their progress every six months.

Other notable stories:

  • A report from the Atlantic Council’s Digital Forensic Research Lab says that some of the accounts and pages that were recently removed from Facebook for what the social network calls “inauthentic behavior” were operated on behalf of a private Israeli public-relations firm called The Archimedes Group, and appeared to be trying to stir up dissent in Honduras, as well as Panama and Mexico.
  • Soraya Roberts, a culture columnist with Longreads, writes about the controversy that was sparked on media Twitter when New York Times writer Taffy Brodesser-Akner confessed in an interview that she made $4 a word for her celebrity profiles. Roberts says the point of the uproar was that one journalist makes several times what the majority do, despite the industry complaining that it “has nothing left to give.”
  • CJR is publishing articles from the latest print version of the magazine on our website this week, including a piece on what remains of the free press in Turkey, a special report on Benjamin Netanyahu’s relationship with the media in Israel, and a note from CJR’s editor Kyle Pope about how in the current environment, all news is global.
  • A report from a group of newspapers including The Guardian and The New York Times says that China’s border authorities routinely install an app on smartphones belonging to travelers who enter the Xinjiang region, including journalists. The app gathers personal data from phones, including text messages and contacts, and checks whether devices contain pictures, videos, documents, and audio files that match a list of more than 73,000 items stored within the app, including any mention of the Dalai Lama.
  • Vice News says that Google’s Jigsaw unit (formerly known as Google Ideas), which was originally designed to help promote a free and independent internet, has turned into a “toxic mess.” Among other things, anonymous sources who spoke with Vice said that founder Jared Cohen “has a white savior complex,” and that the mission of the Google unit is “to save the day for the poor brown people.”
  • In a recent interview, British politician Boris Johnson, a leading candidate to take over for Prime Minister Theresa May, talked about his hobby of making model buses out of old wine crates, a comment many observers found a bit bizarre. Glyn Moody of Techdirt thinks part of the reason Johnson confessed to such a strange pursuit was that it was a way of tricking the Google search algorithm into smothering other unflattering news items from Johnson’s past.
  • Virginia has become one of the first states to impose criminal penalties on the distribution of non-consensual “deepfake” images and video. The state already defined the distribution of nudes or sexual imagery without the subject’s consent (sometimes called revenge porn) as a Class 1 misdemeanor; the new law updates that statute by adding the category of “falsely created videographic or still image” to the text.
  • Media Matters writes about a rumor that spread during recent protests in Portland, Oregon, claiming that anti-fascist demonstrators were throwing milkshakes made of quick-drying cement at right-wing groups. The rumor was picked up by alt-right commentators including Jack Posobiec, the site says, and eventually was cited in headlines by a number of news outlets, including Fox News, despite the fact that there was no evidence it was true.
  • A programming note from CJR: The daily newsletter will be taking a hiatus for the July 4th holiday, but will return next week.

Deepfakes aren’t the real problem

Note: This is something I originally wrote for the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer.

When it comes to disinformation, the latest buzzword on everyone’s lips is “deepfake,” a term used to refer to videos that have been manipulated using computer imaging (the word is a combination of “deep learning” and “fake”). Using relatively inexpensive software, almost anyone can create a video that makes a person appear to be saying or doing something they never said or did. In one of the most recent examples, a Slovakian video artist who goes by the name Ctrl Shift Face modified a clip of comedian Bill Hader imitating Robert De Niro, so that Hader’s face morphs into that of the actor while he is doing the imitation. Another pair of artists created a deepfake of Facebook co-founder and CEO Mark Zuckerberg making sinister comments about his plans for the social network.

Technologists have been warning about the potential dangers of deepfakes for some time now. Nick Diakopoulos, an assistant professor at Northwestern University, wrote a report called Reporting in a Machine Reality last year about the phenomenon, and as the US inches closer to the 2020 election campaign, concerns have continued to grow. The recent release of a doctored video of House Speaker Nancy Pelosi—slowed down to make her appear drunk—also fueled those concerns, although the Pelosi video was what some people have called a “cheapfake” or “shallowfake,” since it was made with simple editing tricks rather than machine learning. At a conference in Aspen this week, Mark Zuckerberg defended the social network’s decision not to remove the Pelosi video, although he admitted it should not have taken so long to add a disclaimer and “down rank” the video.
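
To underline how little technology a “shallowfake” requires, here is a purely illustrative Python snippet that slows a clip to three-quarters speed by calling ffmpeg. The file names are placeholders, and this is not a reconstruction of how the Pelosi video was actually made.

```python
import subprocess

# Slow a clip to 75 percent speed: a trivial edit, no machine learning involved.
# Assumes ffmpeg is installed; "speech.mp4" is a placeholder input file.
subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter:v", "setpts=PTS/0.75",   # stretch video timestamps (slower playback)
    "-filter:a", "atempo=0.75",       # slow the audio to match
    "speech_slowed.mp4",
], check=True)
```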

Riding a wave of concern about this phenomenon, US legislators say they want to stop deepfakes at the source. So they have introduced something called the DEEPFAKES Accountability Act (in a classic Congressional move, the word “deepfakes” is capitalized because it is an acronym—the full name of the act is the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act). The bill would make it a crime for anyone to create and distribute a piece of media that makes it look as though someone said or did something they didn’t say or do without including a digital watermark and text description stating that it has been modified. The act also gives victims of “synthetic media” the right to sue the creators and “vindicate their reputations.”
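
To give a sense of what the bill’s “digital watermark and text description” requirement could look like in practice, here is a minimal, hypothetical sketch using the Pillow imaging library in Python: it stamps a visible notice onto an image and stores the same text in the file’s metadata. The legislation does not prescribe any particular mechanism, so the file names, metadata key, and wording below are invented for illustration.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, notice: str) -> None:
    """Stamp a visible disclosure onto an image and embed the same
    notice in the PNG metadata so it travels with the file."""
    img = Image.open(src_path).convert("RGB")

    # Visible text description in a corner of the frame
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), notice, fill="white")

    # Machine-readable disclosure stored as a PNG text chunk
    meta = PngInfo()
    meta.add_text("synthetic-media-disclosure", notice)

    img.save(dst_path, format="PNG", pnginfo=meta)

label_synthetic_image("face_swap.png", "face_swap_labeled.png",
                      "This image has been digitally altered.")
```

The catch, of course, is that a label like this only appears if the creator chooses to add it, and it is trivial to strip, which is exactly the enforcement problem critics point to.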

Mutale Nkonde, a fellow with the Berkman Klein Center at Harvard and an expert in artificial intelligence policy, advised Congress on the Deepfakes Accountability Act and wrote in a post on Medium that the technology “could usher in a time where the most private parts of our lives could be outed through the release of manipulated online content — or even worse, as was the case with Speaker Pelosi, could be invented [out of] whole cloth.” In describing how the law came to be, Nkonde says that since repealing Section 230 of the Communications Decency Act (which protects the platforms from liability for third-party content) would be difficult, legislators chose instead to amend the law related to preventing identity theft, “putting the distribution of deepfake content alongside misappropriation of information such as names, addresses, or social security numbers.”

Not everyone is enamored of this idea. While the artists who created the Zuckerberg video and the Hader video might be willing to add digital watermarks and textual descriptions to their creations identifying them as fakes, the really bad actors who are trying to manipulate public opinion and swing elections aren’t likely to volunteer to do so. And it’s not clear how this new law would force them to do this, or make it easier to find them so they could be prosecuted. The Zuckerberg and Hader videos were also clearly created for entertainment purposes. Should every form of entertainment that takes liberties with the truth (in other words, all of them) also carry a watermark and impose a potential criminal penalty on creators? According to the Electronic Frontier Foundation, the bill has some potential First Amendment problems.

Some believe this type of law attacks a symptom rather than a cause, in the sense that the overall disinformation environment on Facebook and other platforms is the problem. “While I understand everyone’s desire to protect themselves and one another from deepfakes, it seems to me that writing legislation on these videos without touching the larger issues of disinformation, propaganda, and the social media algorithms that spread them misses the forest for the trees,” said Brooke Binkowski, former managing editor of fact-checking site Snopes.com, who now works for a similar site called Truth or Fiction. What’s needed, she says, is legislation aimed at all elements of the disinformation ecosystem. “Without that, the tech will continue to grow and evolve and it will be a never-ending game of legislative catch-up.”

A number of experts, including disinformation researcher Joan Donovan of Harvard’s Shorenstein Center (who did a recent interview on CJR’s Galley discussion platform), have pointed out that you don’t need sophisticated technology to fool large numbers of people into believing things that aren’t true. The conspiracy theorists who peddle the rampant idiocy known as QAnon on Reddit and 4chan, or who create hoaxes such as the Pizzagate conspiracy theory, haven’t needed any kind of specialized technology whatsoever. Neither did those who promoted the idea that Barack Obama was born in Kenya. Even the Russian troll armies who spread disinformation to hundreds of millions of Facebook users during the 2016 election only needed a few fake images and plausible-sounding names.

There are those, including Nieman Lab director Joshua Benton, who don’t believe deepfakes are even that big a problem. “Media is wildly overreacting to deepfakes, which will have almost no impact on the 2020 election,” Benton said on Twitter after the Pelosi video sparked concern about deepfakes swamping voters with disinformation. Others, including the EFF, argue that existing laws are more than enough to handle deepfakes. In any case, rushing forward with legislation aimed at correcting a problem before it becomes obvious what the scope of the problem is—especially when that legislation has some obvious First Amendment issues—doesn’t seem wise.