About Mathew Ingram

I'm a writer with the Columbia Journalism Review, which is based in New York at Columbia University, but I live in Toronto. I write about the intersection between media and technology, and anything else that interests me!

YouTube takedowns are making it hard to document war crimes

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Like every other large social platform, YouTube has come under fire for not doing enough to remove videos that contain hate speech and disinformation, and the Google-owned company has said repeatedly that it is trying to get better at doing so. But in some cases, removing videos because they contain graphic imagery of violence can be a bad thing, at least when it comes to documenting war crimes in a country like Syria. That’s the case that Syrian human-rights activist and video archivist Hadi Al Khatib makes in a video that the New York Times published on Wednesday in its Opinion section. Khatib co-produced the clip with Dia Kayyali, who works for Witness, an organization that helps people use digital tools to document human rights violations. In the video, Khatib notes that videos of bombings the Syrian government has carried out on its own people—including attacks with barrel bombs, which Human Rights Watch and other groups consider to be a war crime—are important evidence, but that YouTube has removed more than 200,000 such videos.

“I’m pleading for YouTube and other companies to stop this censorship,” Khatib says in the piece. “All these takedowns amount to erasing history.” There are similar policies at Facebook and Twitter, both of which have also removed videos because they were flagged as being violent or propaganda, when those videos included evidence of government attacks in Syria and elsewhere. The problem, Kayyali says, is that most of the large social platforms use artificial intelligence to detect and remove content, but an automated filter can’t tell the difference between ISIS propaganda and a video documenting government atrocities. Many of the platforms have been placing even more emphasis on using automated filters because they are under increasing pressure from governments in the US and elsewhere to act more quickly when removing content. Facebook CEO Mark Zuckerberg bragged to Congress last year that the company’s automated systems take down more than 90 percent of the terrorism-related content posted to the service before it is ever flagged by a human being.

Khatib runs a project called The Syrian Archive, which has been tracking and preserving as many videos of war crimes in that country as it can. But YouTube’s policies are not making it easy, he says. And user-generated content is a crucial part of the documentation of what is happening in Syria, Khatib notes, because getting access to parts of the country where such attacks are taking place is extremely dangerous, even for experienced aid agencies, journalists, and human rights organizations. YouTube hasn’t just been removing videos either: Since 2017, it has taken down a number of accounts that were trying to document the Syrian conflict, including pages run by groups such as the Syrian Observatory for Human Rights, the Violation Documentation Center, and the Aleppo Media Center. Khatib says YouTube reinstated some of the videos it took down after he complained earlier this year, but that hundreds of thousands still remain unavailable.

The Syrian activist isn’t the only one to raise a warning flag about this problem. Eliot Higgins, the investigative journalist formerly known as Brown Moses, who now runs a crowdsourced journalism project called Bellingcat, started raising this issue in 2014, when he said Facebook was taking down pages and accounts that were documenting Syrian government attacks using sarin, a banned chemical weapon. In many cases, both YouTube and Facebook have been targeted by pro-government forces who flag and report videos incorrectly, hoping to have them taken down. It’s not just Syrian content that is being taken down—Khatib says activists in Sudan, Yemen, and Burma have also had similar problems with important content being removed. And the major web platforms now have a shared database of videos to use for their automated removals, a partnership called the Global Internet Forum to Counter Terrorism. But the exact criteria used to define what constitutes a terrorist video are unknown.

In his Times Opinion piece, Khatib recommends that Google, Facebook, and Twitter hire content moderators in the countries where they are removing such videos, so that they can understand the context behind what is being removed. The platforms, he says, could also work with researchers and archivists to assess these takedowns and reverse them where necessary. A report Khatib co-authored with the Electronic Frontier Foundation and Witness warns: “The temptation to look to simple solutions to the complex problem of extremism online is strong, but governments and companies alike must not be hasty in rushing to solutions that compromise freedom of expression, the right to assembly, and the right to access information.”

Here’s more on the platforms and takedowns:

  • Simple solutions: A report that Khatib co-authored with the Electronic Frontier Foundation and Witness talks about takedowns affecting Syria as well as groups in Chechnya and Turkey, and warns that: “The temptation to look to simple solutions to the complex problem of extremism online is strong, but governments and companies alike must not be hasty in rushing to solutions that compromise freedom of expression, the right to assembly, and the right to access information.”
  • One hour: The European Union is considering a new content-takedown law that would require platforms like Facebook and Google to remove terrorist content and hate speech within one hour of it being flagged. The legislation would also force them to use filters to ensure removed content isn’t re-uploaded, and governments could fine companies that fail to do either of these things up to 4 percent of their global annual revenue. For a company like Facebook, that could mean fines of as much as $680 million.
  • Santa Clara principles: New America’s Open Technology Institute released a report earlier this year that looked at how well the major platforms had been sticking to the Santa Clara Principles, recommendations that were made by the group and other organizations last year, aimed at getting Google, Facebook, and Twitter to be more transparent about why they removed content. All three companies report takedowns, and in some cases say who asked for the removal (if it was a government) but don’t say much about why.

Other notable stories:

  • Facebook plans to launch its News tab on Friday, according to a report in the Washington Post. The paper says the new feature will offer stories from hundreds of news organizations, some of which will be paid fees for supplying content to the service, including the Post itself, the Wall Street Journal, and BuzzFeed. The New York Times is likely to contribute to the feature, but is still negotiating with Facebook over the terms of its participation, according to the Post report.
  • Facebook isn’t alone in building a news aggregator: The Information is reporting that CNN plans to launch a news aggregation service featuring content from a range of outlets, some of whom may be paid for their articles. The project, codenamed NewsCo., comes just a few months after Rupert Murdoch’s News Corp. announced it was working on a news aggregation service called Knewz. The Information report says the CNN service would likely be a mix of subscription-based and advertising-based content.
  • Sarah Lacy, a former TechCrunch journalist who started her own subscription-based news site called Pando Daily in 2011, says she is selling the site and getting out of journalism to run a digital community for mothers called Chairman Mom. Lacy said she has sold the company to an advertising firm called BuySellAds, which also acquired the website Digg last year for an undisclosed sum.
  • Corey Hutchins writes for CJR about the first year of the Colorado Sun, a digital publication created in the wake of mass layoffs at the Denver Post, which led to a dramatic editorial rebellion against the paper’s owner, Alden Global Capital. Eventually, 10 journalists defected from the newspaper to launch the Sun, thanks in part to startup funding provided by Civil, the blockchain-powered platform for journalism.
  • Medium, the content-hosting company founded by former Twitter CEO Evan Williams, says it is changing the way it compensates writers. In 2017, the company launched its Medium Partner Program, which paid writers based on the number of “claps” or likes their content received from readers. Medium says that system paid out more than $6 million to over 30,000 writers, but it is switching to a new model that will reward writers based on reading time, which it says is “a closer measure of quality.”
  • The White House has said it won’t be renewing subscriptions to the New York Times and the Washington Post, after Donald Trump called the two “fake news” during an interview on Fox News’ Hannity program. The president described the Times as “a fake newspaper” and said “we don’t even want it in the White House anymore,” adding “we’re going to probably terminate that and The Washington Post.”
  • Around 800 journalists, filmmakers, and media CEOs signed an open letter published in newspapers across Europe on Wednesday, urging governments to ensure that Google and other tech firms comply with a new EU rule that requires them to pay publishers a fee if they use even short excerpts of their stories. Google said recently that it will not pay fees, and instead will remove excerpts and images from its search. “The law risks being stripped of all meaning before it even comes into force,” the letter said, calling Google’s move “a fresh insult to national and European sovereignty.”
  • Storyful, the social-media verification company that is owned by News Corp., has launched an investigative unit that is designed to help news organizations comb through social media networks to find stories or shore up existing projects. The unit has already worked on stories published in the Wall Street Journal and the Times of London, and broadcast on Sky News, according to a report by the Nieman Journalism Lab.

Zuckerberg wants to eat his free-speech cake and have it too

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Facebook’s relationship to speech is complicated. The giant social network routinely takes down hate speech provided it meets certain criteria (although critics say it misses a lot more), along with gratuitous nudity and other content that breaches its “community standards.” And it hides or “down-ranks” misinformation, although only in certain categories, including anti-vaccination campaigns. But it refuses to do anything about obvious disinformation in political content, including political ads, saying it doesn’t want to be an arbiter of truth. One of the most interesting things about Mark Zuckerberg’s speech Thursday at Georgetown University was listening to the Facebook CEO try to justify these conflicting decisions. The speech, which was livestreamed on Facebook and YouTube and published in the Wall Street Journal, was at times a passionate defense of unfettered free speech and of the crucial role it played in social movements like the anti-war protests of the Vietnam era and the civil-rights movement.

If nothing else, Zuckerberg’s emotional investment in this idea came through, despite some awkward phrasing (he wrote the speech himself, and wouldn’t let anyone see or edit it because he wanted to “maximize for sincerity,” according to a Facebook source). Zuckerberg warned about a number of countries that are moving to restrict speech, and even trying to censor speech that occurs elsewhere on the internet, and his voice became almost strident as he talked about the repressive regime in China (a market Facebook has repeatedly tried to enter) and the fact that most of the top internet services used to be American, but now six of the top 10 are Chinese. “While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok mentions of these protests are censored, even in the US,” Zuckerberg said. “Is that the internet we want?”

But the Facebook CEO also defended the network’s decision not to fact-check political ads, despite the fact that the Trump campaign has already used its ad campaigns to circulate lies about Joe Biden and his alleged involvement in corruption in Ukraine. “We don’t fact-check political ads, because we think people should be able to see for themselves what politicians are saying,” Zuckerberg said. “I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy.” The Facebook founder also noted that similar ads appear on other services and run on analog TV networks. “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true,” Zuckerberg said, despite having just described how the social network routinely takes down or “down-ranks” misinformation of various kinds.

In many ways, as New York Times writer Mike Isaac put it, the Facebook CEO’s speech seemed like “an optimist’s defense of the internet — or the internet as defined by Facebook.” During questioning after the event (where questions were carefully moderated in advance) Zuckerberg admitted the company made a mistake by not acting more quickly in Myanmar, where Facebook was weaponized by anti-Muslim forces as part of a campaign of vicious attacks on the Rohingya community. But he maintained that “people having the power to express themselves at scale is a new kind of force in the world — a fifth estate alongside the other power structures of society,” and that while he understands the concerns about tech platforms like his and their power, “the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands.”

Jillian York, international director of freedom of expression for the Electronic Frontier Foundation, called the Facebook CEO’s speech “23 minutes of contradictions, unsubstantiated postulations, and a Cliff Notes version of free speech history.” At one point, Zuckerberg drew a direct line between the freedom fighting of Frederick Douglass and Martin Luther King and the kind of free expression he says he’s committed to at Facebook. It’s an analogy that likely came as a shock to many of the marginalized groups that have either been censored by the social network or harassed and victimized by it. But as far as Zuckerberg is concerned, we’re all on the same side. “We can continue to stand for free expression, understanding its messiness, believing that the long journey towards greater progress requires confronting ideas that challenge us, or we can decide the cost is simply too great,” he said. As always when it comes to Facebook, the question is the cost for whom?

Here’s more on Facebook and free speech:

Destroying democracy: In an op-ed in the New York Times, Matt Stoller, a fellow at the Open Markets Institute and the author of Goliath: The Hundred Year War Between Monopoly Power and Democracy, says that tech companies like Facebook are destroying democracy and the free press because “advertising revenue that used to support journalism is now captured by Google and Facebook, and some of that money supports and spreads fake news.”

Lots of pain, little gain: Kurt Wagner notes in a piece for Bloomberg that political ads seem to be more trouble than they are worth for Facebook, since they account for such a tiny portion of the company’s revenue, but spark controversy when they don’t get fact-checked. Alex Stamos, former head of security for Facebook, said on Twitter that not running political ads at all might be a smart decision, “except that the politicians who are loudest about FB’s ad policies have also benefited immensely from the platform and would flip out.”

Oversight not enough: In his speech, Zuckerberg talked about the “oversight board” the social network is planning to create, in which outside advisors would be able to overrule the company’s decisions on content. But in an op-ed for the Harvard Business Review, disinformation researcher and former Facebook staffer Dipayan Ghosh said that the board isn’t an effective solution to the company’s moderation problems because the underlying problem is “the business model behind the company’s platforms itself.”

Calling the shots: Judd Legum, an investigative reporter who publishes a progressive newsletter called Popular Information, says one of the problems with Facebook is the fact that the network bends over backwards to please right-wing groups. The main reason it does this, Legum argues, is that several senior executives at the company are former high-level Republican operatives, including Joel Kaplan, director of global public policy and a former deputy White House chief of staff under president George W. Bush.

Other notable stories:

Despite a memo sent to all New York Times staffers earlier this year by standards editor Phil Corbett, articles in the newspaper routinely fail to link to competitors who have written or broken news stories about similar topics, a Vice report notes. “I think that a big problem is that there are still editors who like…do not get the online etiquette of linking,” one Times employee told Vice. “I wish you great luck in shaming people out of this policy.”

According to a report in the Wall Street Journal, a review of nearly 170,000 tweets, plus analysis from expert information warfare researchers, shows that Houston Rockets general manager Daryl Morey was the target of what appears to be a coordinated harassment campaign after a tweet on Oct. 4 (since deleted) that set off an international furor about the anti-government protests in Hong Kong.

Fake news stories about Canadian prime minister Justin Trudeau that appear to be designed to weaken his political support continue to circulate on Facebook as the country approaches a national election, according to iPolitics. The stories are posted by a site called The Buffalo Chronicle that pretends to be a newspaper. A spokesman for Facebook said “misinformation does not violate our community standards. We don’t have a rule that says everything you post needs to be true.”

The Miami Herald is partnering with the Miami Foundation to launch an Investigative Journalism Fund that it says will nearly double the size of the paper’s investigative team. The company says its goal is to raise $1.5 million for reporting efforts spread over three years, adding two full-time reporters, a data visualization specialist, a videographer, and an editor to its existing team. The Herald says it plans to launch the Investigative Journalism Fund as soon as it reaches $500,000 in donations.

Facebook co-founder Chris Hughes is launching an “anti-monopoly fund” with a donation of $10 million, according to a report in the Washington Post. Hughes and the organization he co-chairs, the Economic Security Project, said the fund will be backed by a series of high-profile philanthropies, including the George Soros-financed Open Society Foundations and the Omidyar Network, created by the founder of eBay. The fund will go towards researchers and grassroots groups fighting against monopolies that have too much market power.

New York Times writer Thomas Edsall says Donald Trump is winning the political marketing war because “the technical superiority and sophistication of the president’s digital campaign is a hidden advantage of incumbency.” According to the report, the Trump re-election machine has spent $15.9 million on Facebook and Google advertising this year, more than was spent by the top three Democratic candidates combined.

Marc Benioff, the owner of Time magazine and CEO of Salesforce, writes in an op-ed for Time that “the very technologies and social media platforms that were supposed to bring us together have been used to sow division and undermine our democracy,” and that he bought the news magazine from its previous owners because “we need journalism to elevate humanity.”

There have been high hopes that artificial intelligence might be able to flag disinformation, but two new research reports show that current machine-learning models aren’t yet up to the task of distinguishing false news reports, according to a report from Axios. “If you create for yourself an easy target, you can win at that target,” said MIT professor Regina Barzilay. “But it still doesn’t bring you any closer to separating fake news from real news.”

When ABC News reporter Jonathan Karl asked Donald Trump a question about his Syria strategy at a recent news conference, the president took the opportunity to criticize Karl and his network for running video footage that ABC said showed violence at the border with Turkey, but which turned out to have been filmed in Kentucky. “You shouldn’t be showing buildings blowing up in Kentucky and saying it’s Syria, because that really is fake news,” Trump said.

On Facebook, disinformation, and existential threats

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

There has been a steady stream of Facebook-related news over the past couple of weeks: First, The Verge published transcripts of two hours of leaked audio from a town hall with CEO Mark Zuckerberg. His comments included a reference to Elizabeth Warren and her plans to break up the company, which Zuckerberg called “an existential threat.” For some, these remarks brought up the specter of potential political interference. Would Facebook try to put its thumb on the scale by using the all-powerful news feed algorithm? And while that question was still swirling, the company continued to get blowback on its recent decision to no longer fact-check political ads, triggered in part by a Trump advertising campaign running on Facebook that repeats unsubstantiated claims about Joe Biden.

In an attempt to grapple with these and other issues, CJR convened a series of interviews on our Galley discussion platform with journalists and others who follow the company. First was Casey Newton of The Verge, who got the town-hall audio scoop. Although Zuckerberg’s comments about Warren got a lot of attention, Newton said one of the most interesting things about the town hall was what the questions said about the company’s employees—that they are concerned about a breakup, but also about how they and Zuckerberg are perceived. One of our next interviewees, veteran Recode media writer Peter Kafka, said that for him, one of the most interesting things about the leak is that it happened at all—Facebook has been doing town halls for over a decade, and this is the first time an insider has leaked one. Does that mean employees are growing restless? Perhaps!

I also spoke with Dina Srinivasan, a former advertising industry executive and antitrust expert who wrote an academic paper entitled “The Antitrust Case Against Facebook,” which has been cited by several members of Congress who want to break the company up. Her argument is that antitrust law doesn’t have to focus solely on the effect a monopoly has on consumer prices (a difficult case to make for Facebook, since the service is free). Facebook could also be accused of using its monopoly to degrade the quality of its service, she says, by removing privacy protections it promised would never be weakened, and by using customer data without permission.

April Glaser of Slate talked with me about Facebook’s decision to stop fact-checking political ads, which she said amounted to a dereliction of duty. “I think that Facebook has a responsibility to serve the information needs of its users and not be an active force in making our elections awful,” Glaser said. Alex Hern of The Guardian, another of our interviewees, said that Facebook’s argument is that it’s not the company’s job to determine what is true or false, and that it doesn’t believe it should have that kind of power. “But it already has that power,” says Hern, “and refusing to reject false political adverts is just as much of a political action as refusing to accept them.” Judd Legum, who runs a progressive newsletter called Popular Information, told me Facebook may believe it is acting on principle, but their decision “is also allowing them to accept millions from the Trump campaign to spread content that is demonstrably false.”

Alex Heath, a writer with The Information who I spoke with as part of the series, said that he didn’t think Zuckerberg would stoop to fiddling with the Facebook algorithm to try to influence the election. Heath said he recalled how some employees wanted the company to block Donald Trump’s profile in 2016 because they believed that he was engaging in hate speech, but Zuckerberg fought the idea. “I think he’s smarter than trying to tip the scales for or against a particular candidate,” said Heath. Charlie Warzel of the New York Times told me that Zuckerberg doesn’t need to intervene “because Facebook, the platform, will do so instinctively.” The social network, he says, has “redefined what it means to be a good candidate—and provided a distinct natural advantage to those who distort the truth.”

There was a lot more to discuss in each of the interviews, so I encourage you to check them out. And to finish the series, we will be having a roundtable discussion on Galley today with all of our interviewees, as well as selected readers and contributors. What does the future hold for Facebook? Is antitrust the only way to solve the social problems it continues to cause? And given the company’s role in the spread of disinformation and control of the ad industry, should media companies and journalists boycott Facebook and refuse to use its services? Please join us and share your thoughts!

Here’s more on Facebook and some of the issues confronting it:

The new News tab: As part of our Galley series, Tom McGeveran of Old Town Media talked with Lukas Alpert of the Wall Street Journal about Facebook’s attempt to get media companies on board with its News tab feature, which the company is expected to roll out soon. Only a handful of the outlets whose news is featured will be paid, Alpert said, and Facebook is still working on signing up publishers.

The First Amendment: In a section of the leaked Facebook town-hall audio that hasn’t been previously published, Mark Zuckerberg talks about whether the social network should take the same approach to speech that the First Amendment does, according to a report from Casey Newton in his Interface newsletter. The Facebook CEO said most people want the company to go further than just allowing any and all speech to exist on the platform.

Amplifying harm: The Electronic Frontier Foundation has criticized Facebook’s decision to exempt political content from fact-checking, saying this effectively excuses parties and politicians from the rules that everyday citizens have to abide by, and “amplifies the harm” that political lies can do. “What is particularly troubling is for a platform to apply one set of rules to most people, and a more permissive set of rules to another group that holds more political power,” the group said.

Other notable stories:

In an article entitled “Moving Beyond ‘Zuck Sucks’,” media researchers Anthony Nadler, Hamsini Sridharan, and Doron Taussig write for CJR about the idea that journalists should take a cue from “solutions journalism” and spend at least as much time talking about potential solutions to the problems caused by social technology, instead of just focusing on reporting the problems themselves.

The documents Trump adviser Rudy Giuliani waved around during a recent TV interview as he talked about a Democrat plot in Ukraine were not affidavits proving his case, but printouts from an obscure right-wing conspiracy site called Hopelessly Partisan, according to a BuzzFeed report. This was just the latest example of Giuliani’s fondness for internet conspiracy theories, says BuzzFeed.

The head of military intelligence services in Colombia resigned earlier this month after fact-checkers pointed to his use of misleading photos in a report that alleged Venezuelan involvement in terrorist attacks. The report, which was presented to the United Nations as evidence, included two pictures that allegedly showed recent Venezuelan activity, but both were old photos of unrelated events.

A researcher at Freedom of the Press Foundation who used to work at Google is warning journalists that the company’s ubiquitous office tools, such as Gmail, Google Docs, and Google Sheets, are not end-to-end encrypted, and that they should therefore be cautious about using them for sensitive projects. As the researcher put it: “Google has everything they need to read your data. This insight into user data means that U.S. agencies have the ability to compel Google to hand over relevant user data to aid in investigations.”

Farhad Manjoo, writing in the New York Times, argues that recent incidents in which speech about China has been censored—including the way the NBA responded to a tweet from a coach expressing support for protesters in Hong Kong—shows that “China’s economic miracle hasn’t just failed to liberate Chinese people. It is also now routinely corrupting the rest of us outside of China.”

Washington Post media critic Erik Wemple says the Ronan Farrow affair—in which the journalist offered a story about Harvey Weinstein’s repeated sexual abuse of young women to NBC executives and was rejected, and then took it to the New Yorker, where it became a blockbuster—is evidence of “the rot at NBC News,” and “among the most cowardly media episodes of modern times.”

The Correspondent, the crowdfunded journalism site that is an English-language spinoff of the Dutch site De Correspondent, took down an article by its climate writer Eric Holthaus that included a first-person interview with child activist Greta Thunberg. In a note, Holthaus said Thunberg’s family didn’t see the article prior to publication and that, “after it was published, they raised a number of concerns around sensitivities within the piece with me.” Readers who saw the piece said it was “revealing of some of their personal aspects” and that “some details could be misused in the wrong hands.”

As part of ProPublica’s local reporting network, NPR Illinois has been researching and writing about sexual harassment at colleges and universities. But according to a statement from ProPublica, the public radio affiliate has been told by the University of Illinois—which holds its operating license—that its staff are considered university employees, and therefore if anyone tells them about sexual harassment or abuse, they are required to identify that person to the university, regardless of any confidentiality promises they have made as journalists.

Some lessons from the MIT Media Lab controversy

Note: This is something I originally published on the New Gatekeepers blog at the Columbia Journalism Review, where I’m the chief digital writer

When the news first broke that the MIT Media Lab had a close relationship with deceased billionaire and convicted pedophile Jeffrey Epstein, some saw it as a momentary lapse in judgment, and there was widespread support for Media Lab director Joi Ito. But then New Yorker writer Ronan Farrow reported that the Epstein relationship was much deeper than it first appeared — including the fact that Ito got a significant amount of money from Epstein for his own personal investments. Much of the earlier support evaporated, and Ito agreed to resign. And there were other spinoff effects as well: Richard Stallman, a free-software pioneer and veteran MIT professor, also resigned, after being criticized for comments he made on an internal email list that downplayed the impact of Epstein’s sexual abuse.

To explore these and other issues, CJR had a series of one-on-one and roundtable interviews — using its Galley discussion platform — with a number of journalists and other interested observers, including WBUR reporter Max Larkin, Slate writer Justin Peters, Gizmodo editor Adam Clark Estes and Stanford researcher Becca Lewis. We talked about why places like the Media Lab often get a free pass from reporters, and why there’s so much technology writing that focuses on the “hero/genius” trope, in which the all-knowing founder gets credit for inventing something amazing, even if the thing they invented doesn’t work (Theranos), or the inventors themselves are terrible people in a variety of ways (Steve Jobs, Elon Musk, etc.).

Larkin said some inside MIT were frustrated that the Epstein donations got so much attention, when the institution also recently accepted money and a visit from Saudi Arabian leader Mohammad bin Salman, who has been implicated in the vicious killing of Washington Post journalist Jamal Khashoggi. “One Media Lab alum told me she was, on balance, more appalled by MIT’s ties to the late David Koch than by the ties to Epstein,” said Larkin, since the Kochs had done so much to undermine the Institute’s core values with their support of climate change-denying groups. And Larkin also noted that some defenders of the Epstein donations — including Media Lab founder and chairman Nicholas Negroponte — believed in what might be called the “transmutation” argument, namely that taking money from bad people and turning it into funding for creative academic pursuits was a positive thing.

Justin Peters talked in his interview about the piece he wrote for Slate called “The Moral Rot at MIT Media Lab,” in which he looked at the history of the institution and came to the conclusion that a relationship with someone like Epstein wasn’t really out of character for MIT at all — in fact, it was just part of a larger pattern. Peters, the author of a 2016 book called “The Idealist: Aaron Swartz and the Rise of Free Culture on the Internet,” said he was initially a big fan of the Media Lab when he was doing a Master’s degree in journalism in 2007. “It was intellectually lively and adventurous in ways that I had never before experienced in academia. They were building things there, as opposed to just constructing theories,” he said. But later, Peters says he came to realize that “I bought into the story that the Lab was selling about itself without stopping to consider the extent to which it *was* a story, and the extent to which the Lab benefited from journalists telling its story in a whiz-bang the-future-is-now manner.”

There’s nothing wrong with an institution like MIT being run as a business or trying to find funding, Peters said, but “there are always, always, always strings attached to that money, and it feels to me like the Media Lab’s leaders chose to disregard those strings, or pretend that they didn’t exist, which made it easier for them to take money from the likes of Jeffrey Epstein.” And Peters talked about how the often fawning coverage of the Media Lab was symptomatic of a larger problem with tech journalism. “It’s my sense that the people who cover tech get into the field as enthusiasts, because they like science and technology and gadgets,” he said. And because they are enthusiasts at heart, “they are susceptible to angles that confirm and bolster their enthusiasm. There are exceptions, of course, but I think it’s fair to say that most tech journalism doesn’t serve as a check on tech’s power. It’s a signal booster.”

Gizmodo editor Adam Clark Estes admitted that when he was at Harvard, he liked to hang out at the Media Lab, because Harvard was severely lacking in the kind of discussion about the future of media in the early 2000s, while “down the road, there was this very cool building full of fascinating people who were doing forward-thinking things.” Estes said the MIT brand is part of the problem when it comes to reporting on the institution: because it has been involved with so many hugely creative projects, such as E Ink, technology reporters maybe aren’t as critical as they should be when it introduces something new. “So media outlets like Gizmodo stumble onto a new MIT Media Lab concept or promise, and we’re sometimes dazzled simply by the fact that it comes from the MIT Media Lab, that place we believe is cool without actually investigating what’s making it all possible,” he said. “You could say that tech journalists do the same thing when they write about the latest Instagram feature.”

Stanford researcher Becca Lewis, meanwhile, talked about the tendency to see startup founders and CEOs — mostly men — as geniuses or creative visionaries, and how this often blinds us to their flaws, including in some cases the fact that their creations are not even close to being functional. “While there are undoubtedly people with exceptional intelligence and inspiration, the mythology of genius goes beyond that and seems to suggest that people with exceptional intelligence also have some sort of divine quality, as if they are in touch with the supernatural,” said Lewis. “Because of this, it means we also often give geniuses exceptional treatment, and that means genius becomes a form of authority and power.”

Entrepreneurs like Mark Zuckerberg, Bill Gates, Jeff Bezos, and Elon Musk are celebrated for their vision and genius, but “over time, it has become clear that, despite each of their intelligence in certain realms, they each have been assisted by luck and privilege in various forms,” Lewis said. And each of them also has questionable and/or disturbing traits as well, whether in their personal lives, their treatment of their employees, or the impact of their companies on the world. “For years, and to a certain extent still, these aspects were ignored or pushed out of sight in media coverage in favor of glowing profiles, often which explicitly labeled each of these men as geniuses,” she said. And the media plays up these aspects for a number of reasons, Lewis says: One is that favorable coverage helps when it comes to getting access to stories, and another is that “it’s simpler to tell the story of a successful individual, rather than focusing on the nuts and bolts of bureaucracy, the slow-moving structural forces that help shape technologies, and the people doing the “boring” work behind-the-scenes.”

What happens when Facebook confronts an existential threat?

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

Facebook CEO Mark Zuckerberg doesn’t do a lot of off-the-cuff speaking. His public appearances–whether before Congress or at a launch event–tend to be carefully scripted and rehearsed to the point where a cardboard cutout would seem animated by comparison. All of which helps explain some of the excitement surrounding a Verge report this week, consisting of two hours’ worth of unedited audio and transcripts of Zuckerberg addressing a town hall at Facebook, including questions from the staff. Although the scoop was heavily promoted, the transcripts didn’t contain any real bombshells–in fact, Zuckerberg himself promoted the story in a post on his personal Facebook page, which pretty much guarantees there was nothing earth-shattering in the text.

That said, however, a number of observers highlighted one comment they found troubling: when the Facebook CEO was asked whether he was concerned about the company being broken up by government regulators, he responded that he could see federal authorities–and here he mentioned Elizabeth Warren specifically–trying such a gambit, and that if necessary he would oppose it. And then Zuckerberg said: “At the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and you fight.” Based on the context of the quote, it seems clear that the Facebook CEO meant he would fight the government’s attempt in the courts. In the full transcript, he prefaces his comment by saying one of the things he loves and appreciates about the US is “that we have a really solid rule of law,” and that he doesn’t think such a case would survive a court challenge (and he is probably right).

On Twitter and elsewhere, however, the reference to Warren and her desire to break up the company was boiled down to the point where it appeared that Zuckerberg sees Warren herself–and her presidential candidacy–as being an existential threat. The Facebook CEO’s comment brought up what some saw as a disturbing scenario. What if you almost single-handedly controlled the world’s largest information distributor, one that hundreds of millions of people rely on for their news, and one that has been implicated in the past in spreading misinformation and propaganda during an election–how might you respond to something that you perceive as an existential threat to your company?

In a piece he wrote in 2014 for the New Republic, long before a Russian troll factory tried to hijack Facebook during the 2016 election, Harvard law professor Jonathan Zittrain talked about the potential that Facebook has to sway voters, and the risk this ability poses when it comes to an election. In the article, entitled “Facebook Could Decide an Election Without Anyone Ever Finding Out,” Zittrain mentions a social experiment Facebook conducted in 2010, in which the company placed a small graphic in the newsfeed of selected users. The graphic included a list of polling places, photos of friends from your address book who had already voted, and a button to click that would let your followers know that you had voted. According to Facebook’s research, this experiment–which no one was informed of either before or during the test–resulted in an increase in voting behavior, with about 340,000 more votes being cast than were cast in the previous election in the regions corresponding to where the graphic was used.

So what would happen if Facebook decided it wanted to try to influence an election, Zittrain asked. All it would really have to do is make sure that supporters of its preferred candidate received the “I Voted” package or something similar, and ensure that users supporting other candidates did not. While Facebook might argue that it would never use its powers in that way, the reality is we would never really know. We each get our own customized news feed, and so we have no way of knowing what we are seeing that others aren’t, or what we are missing that others are seeing. And that gives Facebook and its boy-king Mark Zuckerberg the ability to influence how we see the world in ways that we are completely unaware of, because they are hidden by its black-box algorithm. And all we have to comfort us are the company’s assurances that it means well.

Here’s more on Facebook and its various challenges:

That would suck: Elizabeth Warren got wind of Mark Zuckerberg’s comments about her being an existential threat, and how her plan to break the company up would “suck,” as the Facebook CEO put it. Warren responded on Twitter: “What would really suck is if we don’t fix a corrupt system that lets giant companies like Facebook engage in illegal anti-competitive practices, stomp on consumer privacy rights, and repeatedly fumble their responsibility to protect our democracy.”

Thanks for the pitch: As news of Zuckerberg’s remarks spread through social media, a number of users said that they found the Facebook CEO’s comments to be a pretty good endorsement of Elizabeth Warren. “I already like her ok, you don’t have to sell her this hard,” said a popular pseudonymous account called The Volatile Mermaid, while another said that calling Warren an existential threat to Facebook “is maybe her biggest selling point yet,” in a tweet that got 1,400 likes.

Profiting from Trump: The Democratic National Committee slammed Facebook on Tuesday, telling CNN that the company is allowing Donald Trump “to mislead the American people on their platform unimpeded.” The comments were made after Facebook said last week that it will not fact-check any posts or advertisements that come from politicians. According to CNN, the Trump campaign has so far spent almost $20 million on Facebook ads since May 2018.

Profiting from hate: Facebook has said many times that it wants to crack down on hate speech and other forms of harassment on the platform–in part because a number of countries including Germany have laws against hate speech, and require the company to remove it within a certain time frame. But according to a report from investigative news site Sludge, the company has made millions in advertising revenue from more than 35 recognized hate groups that have used the platform to spread their message.

Other notable stories:

Journalists with the Miami Herald and el Nuevo Herald said on Wednesday they intend to form a union and asked the company to voluntarily recognize the One Herald Guild without a formal vote of newsroom staff. A statement from union organizers said a majority of the journalists in the two newsrooms of the South Florida publications supports the effort, but the executive editor and publisher of both papers told organizers that the decision to unionize should be put to a vote.

Donald Trump’s use of the word “coup” to describe what is happening around the impeachment process and the Ukraine investigation is another example of how misinformation spreads from right-wing social media to the president’s Twitter feed in a self-reinforcing circular process, according to the New York Times. Trump escalates accusations born in right-wing media, “portraying himself as the victim of an unsubstantiated scheme. His followers often jump in and amplify the messages online, which are then picked right back up on conservative shows and news outlets.”

With disinformation and propaganda ramping up as we head towards the 2020 elections, the media needs to become even more vigilant, writes Washington Post media columnist Margaret Sullivan. “That public opinion be based on facts — not weaponized falsehoods — is about the most crucial work the media can do,” she says. Sullivan notes that journalists need to be quick on their feet, networks need to stop booking Trump surrogates for interviews, and the media must “end its addiction to both-sides journalism, which gives falsehood the same opportunities as truth.”

Starbucks said Tuesday that it plans to offer customers at its coffee shops free access to the websites of a number of newspapers for a limited time. Customers using the free WiFi at the chain’s 8,500 or so stores will get free access to the digital versions of the Chicago Tribune, the Wall Street Journal, USA Today, the Seattle Times, the Baltimore Sun, the Orlando Sentinel and the New York Daily News. The company stopped selling print newspapers in its stores earlier this year.

Ethan Zuckerman, director of the Center for Civic Media at MIT, has a new study looking at how news and information about social movements such as Black Lives Matter and MeToo is distributed over time, and how the public attention that is paid to these movements often comes in waves. Zuckerman said he hopes that the research offers “both an opportunity to understand how media attention can move in waves, and how social movements might harness and benefit from those waves.”

Live TV interviews are a relatively modern invention that frequently adds little to the understanding of key issues, writes Michael Socolow, a professor of communication and journalism at the University of Maine. “Live TV helps those who lie and want to hide,” Socolow says. Rather than enlightening viewers, Socolow argues that many of these mainstream network television interviews are a journalistic failure, providing lots of sensational programming without really providing any facts that would be useful to those who are trying to understand a news event.

The investigative news site Sludge says that if it can’t raise “significant funds” in the next few weeks, it will be forced to shut down. The site was one of the first startups to join the blockchain-powered journalism platform Civil, and was initially funded by the company as part of what it called its “first fleet” of newsrooms. But Civil’s funding grants were only designed to last for a limited time, so Sludge and others have been trying to raise enough money to continue through crowdfunding.

Mari Cohen writes for CJR about the state of journalism in Chicago, where the city’s landmark newspaper, the Tribune, moved out of its iconic downtown building in 2018, something many took as a sign of a decline in the market. But Cohen says there are enough positive things happening both at the Tribune and elsewhere that “Chicago looks less like another sad journalism story and more like an example of what can happen when things appear to be working.”

Michelle Amazeen and Erik Bucy write for CJR about a study they did which looked at whether an understanding of how the media functions can change how people perceive misinformation. According to their research, in which they surveyed more than 1,800 people about their knowledge of the media, the more people know about the media, “the better they are able to identify and resist online disinformation efforts, including fabricated headlines and covert advertising attempts.”

Google plays hardball with European news publishers

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

While the US obsessed on Wednesday over what technically constitutes impeachment for a sitting American president, some European news publishers may have been focused on something quite different: namely, a decision by Google to play hardball with French media companies when it comes to linking to their content in its search results. As of Wednesday, unless a French publisher specifically says that it wants Google to do so, the search giant will no longer include short excerpts from news stories in its results. Instead, there will just be a headline. It’s not exactly clear how this will look in practice—in an earlier mockup of results with text from news publishers excluded, there was just a big white space where the excerpt and image were supposed to go.

Why is Google doing this? Because the French government recently passed a law that requires the search company to pay publishers if it uses even short excerpts of their content on its search pages. The French law is a local variation of a recently adopted European Union copyright directive known as Article 11, which says that publishers are entitled to compensation for the use of even small chunks of text, a payment some refer to as a “link tax.” This in turn was inspired by similar attempts in other EU countries to get Google and others to pay for excerpts. Germany tried with its Leistungsschutzrecht für Presseverleger law in 2013, and Spain tried with a similar law in 2014. In Germany, a number of publishers had their results removed from Google News when Google refused to pay them, but the publishers later relented after their traffic collapsed by as much as 40 percent. In Spain, Google responded by shutting down Google News in the country entirely.

Google maintains that its news excerpts send publishers a huge amount of traffic—as the company’s head of news, Richard Gingras, pointed out in a blog post on Wednesday—and that this in turn generates revenue via advertising. Publishers, however, note that ad revenue is falling, in part because Google and Facebook control the lion’s share of the market—which is why Google also likes to highlight the Google News Initiative, through which the company funds research and development (and even the creation of entirely new local news outlets, as it is doing through partnerships in both the UK and US). The News Initiative got its start in 2006, when Belgium was the first country to sue Google for using content from local publishers without their consent. The two sides eventually settled, and Google agreed to fund research and development for the industry, and then offered similar deals to France and other countries.

As welcome as this kind of funding might be for struggling newsrooms and media outlets, it also serves to make media companies even more dependent on and integrated into the Google universe, as a number of industry observers noted in a CJR feature on the patronage of Google and Facebook. The unfortunate reality for most digital publishers is that they rely on Google’s traffic whether they like it or not, and the company’s flexing of its muscles in France is only the latest evidence. And the company’s decision shouldn’t have come as any surprise to anyone who has been following the issue: when the European Union was debating whether to implement what became Article 11 of the Directive on Copyright in the Digital Single Market, Gingras said that the company might pull Google News out of Europe altogether, just as it did in Spain.

Why France thought it might succeed where both Spain and Germany failed is difficult to say. Hope seems to spring eternal that someone will finally be able to force Google to pay publishers for linking to their news, and that this in turn will solve their financial woes. Paying publishers directly is something Facebook recently said it plans to start doing with selected outlets, but there is no sign that Google intends to back down on its position any time soon, legislation or no legislation.

Here’s more on Google, news and paying publishers:

Unacceptable: The French minister of culture, Frank Riester, said in a statement that Google’s response to the new law was unacceptable and “contrary to the spirit and the text” of the legislation. The head of the European Publishers’ Council accused Google of “abusing its market power and putting itself above the law, while at the same time pretending they are acting in line with the law,” and said the company’s decision would “endanger professional journalism.”

Scraping and mining: Jason Kint, the CEO of the US publishers’ lobby group Digital Content Next (previously known as the Online Publishers Association), said on Twitter that Gingras should change his title to head of PR, and that “Sending traffic has zero to do with the new copyright law and proper payment for how Google scrapes, mines, and monetizes press publishers’ content while abusing its dominant position in a myriad of ways.”

No more Google: A study published last year by researchers at Stanford and the University of Michigan looked at what happened to news consumption in Spain after Google News shut down there, and found that the loss of the service mostly affected traffic to smaller news publishers, while some of the larger ones remained unaffected. Research in other countries, however, has shown double-digit traffic drops when publishers are removed from the index.

Other notable stories:

Impeachment talk continued to swirl Wednesday, following House Speaker Nancy Pelosi’s announcement of an investigation into Trump’s behavior, and the delivery of an internal whistleblower’s report to Congress. The White House released a document (not a transcript, as some news outlets mistakenly called it, but notes compiled during the call) that seemed to show Donald Trump asking Ukraine’s president to look into allegations of corruption involving Joe Biden, and even offering the help of Attorney-General William Barr for such an investigation. Trump says the document doesn’t show a “quid pro quo,” but observers said even to ask for such interference in a political campaign likely qualifies as an impeachable offense, and that Trump said on the call that he needed “a favor,” right after the Ukrainian president said how grateful he was for American military support.

In other impeachment news, the Washington Post reported that the acting Director of National Intelligence, Joseph Maguire—who was forced to take over the job after the previous director, Daniel Coats, stepped down last month—threatened to resign unless the White House allowed him to testify openly about the whistleblower’s allegations. Within minutes of the Post report, it had been denied by Maguire in an official statement, and by White House press secretary Stephanie Grisham, who said on Twitter that “This is actually not true. And we would have gone on the record to say that if [the Post] had given us more than 6 minutes to respond.”

Mark Thompson, CEO of the New York Times and former head of the BBC, warned of a “crisis threatening to engulf British journalism,” and said the largely print-centric press in that country was in danger of experiencing “something close to a wipeout.” Thompson made the comments as part of the third annual Steve Hewlett memorial lecture in London, held in memory of a BBC media-show host who died of cancer in 2017. Thompson also said that blaming Google and Facebook for the industry’s woes was a convenient myth.

Nieman Lab director Joshua Benton writes that the proposed merger of Vox Media and New York magazine “isn’t a marriage, but it’s a deal that makes sense.” Benton argues that the combination involves “two companies with similar editorial values and brands that mostly complement instead of overlap,” and that this is “the kind of smart merger we should see more of” in the media industry.

The Facebook Supreme Court will see you now

Note: This is something I originally wrote for the daily newsletter at the Columbia Journalism Review, where I’m the chief digital writer

A year and a half ago, Mark Zuckerberg floated what seemed like a crazy idea. In an interview with Ezra Klein of Vox Media, the Facebook CEO said he was thinking about creating a kind of Supreme Court for the social network—an independent body that would adjudicate some of the hard decisions about what kinds of content should or shouldn’t be allowed on the site’s pages, decisions Facebook routinely gets criticized for. Imagine, Zuckerberg said, “some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech.” It wasn’t just a pipedream or an offhand comment: for better or worse, Facebook has been hard at work over the past year creating just such an animal, which it is now calling an “Oversight Board.” This week, it took another in a series of steps towards that goal, by publishing the charter that will govern the board’s actions, as well as a document that shows how it incorporated feedback from experts and interested parties around the world, and a description of how the governance process will work for this third-party entity.

As part of the roadshow for this effort, Facebook held an invitation-only conference call with journalists, in which the head of the governance team took pains to describe just how much work the company did to gather as much feedback as possible on the idea. Facebook held six in-depth workshops and 22 roundtables, featuring journalists, privacy experts, digital-rights activists, and constitutional scholars from 88 different countries, along with 1,200 written submissions—all of which sounds very impressive until you remember that Facebook has more than 35,000 employees and revenues of more than $56 billion. And what did the company come up with? The charter describes an independent body that will start with 11 members and eventually number as many as 40, who will hear cases in groups of five. Some cases will be referred by Facebook, while others will come from appeals launched by users whose content has been removed for a variety of reasons.

In an attempt to keep the board as independent as possible, Facebook says it will appoint two co-chairs for the board, who will then be free to select whomever they wish to fill out the rest of the board membership. And Facebook will not compensate board members directly, for fear of the perception of a conflict of interest—compensation will come from a trust that the company will set up (and fund), which will also be run independently. The charter specifically states that members can’t be removed because of specific decisions they make, but can only be disqualified if they breach the code of conduct set out in the charter. But most important of all, Facebook says, decisions made by the board are binding, which means they can’t be overruled by the company unless the changes that would be required to comply actually violate the law, or unless the board recommends something that is technically impossible.

In a blog post he published to coincide with the release of the charter, Zuckerberg said that while Facebook makes decisions every day about what kind of speech it will and won’t tolerate, “I don’t believe private companies like ours should be making so many important decisions about speech on our own.” Hence, the Oversight Board, and the promise of an appeal process that is at least notionally independent from Facebook (which, it should be noted, is controlled almost single-handedly by Zuckerberg, thanks to his ownership of multi-voting shares). Although skepticism abounds—not surprisingly, given some of the company’s past commitments that have failed to come to fruition—there is also some grudging admiration for what the company is trying to do. In a Twitter thread, law professor and free-speech expert Kate Klonick said that while there’s a chance all these good intentions could turn out to be vaporware, “at the very least, so far, it’s a bigger & more rigorous commitment of time, money, & platform power than anything that’s come before.”

Will the decisions made by the Oversight Board actually change the way Facebook operates in ways that matter? Or will it be just a kind of fig leaf that the company holds up so that it can avoid the threat of imminent regulation? There are forces within Congress that would very much like to remove the protection that Facebook (and other platforms) have under Section 230 of the Communications Decency Act, which keeps them from being sued for content they host or moderation decisions they make. And what are the larger implications of a company like Facebook making decisions about what limits should be placed on free speech, even if those choices are rubber-stamped by a theoretically independent body? We are all about to find out the answers to those questions, whether we like it or not.

Here’s more on Facebook, free speech and the Oversight Board:

Jellyfish skeleton: I spoke with Kate Klonick in an in-depth interview on CJR’s Galley platform recently, and we discussed the proposed Facebook “Supreme Court” idea. Klonick said that she is cautiously optimistic, and that she likes to describe the idea as “trying to retro-fit a skeletal system for a jellyfish. A private transnational company voluntarily creating an independent body and process to oversee a fundamental human right [is] really a very daunting idea.”

Sheer complexity: When Facebook released a draft version of its charter for the Oversight Board earlier this year, Issie Lapowsky of Wired wrote that comparing it to the Supreme Court actually “minimizes the sheer complexity of what Facebook is setting out to accomplish.” The Supreme Court just hands down rulings for the US, but Facebook’s version would be choosing from several million cases every week, and its decisions would affect 2.3 billion Facebook users, a population that’s roughly seven times the size of the US.

Unanswered questions: I spoke with Jillian York, the international director for freedom of expression at the Electronic Frontier Foundation, in a recent Galley interview, and we talked about the Oversight Board. York said she has been calling on the platforms to do something similar, “but of course, the devil is in the details.” Having an external body that can assess content decisions is clearly good, she said, but there are still many unanswered questions.

Other notable stories:

CNN was widely criticized by journalists and others for booking former Trump campaign manager Corey Lewandowski, who had just admitted in testimony before the House Judiciary Committee that he had no compunction about lying to the media. “Corey Lewandowski confessed to gaslighting the press. CNN booked him hours later anyway,” said a Vox headline.

The Washington Post has launched an advertising network for publishers called Zeus Prime that the paper says will allow it and other media outlets to sell automated ads in real time, in much the same way that large players like Google do. The company is pitching the network as a way for publishers to keep more of the advertising revenue they generate, and promises higher CPM (cost per thousand) rates than they can currently get.

Facebook and Google’s parent company, Alphabet, are cozying up to publishers and media companies by offering them features they have long requested, according to a report in The Wall Street Journal—moves that many see as an attempt by the two tech giants to avoid potential government regulation.

Medium, the publishing platform run by former Twitter CEO Evan Williams, has launched a “Save to Medium” feature that mimics tools like Instapaper and Pocket, allowing users to click a browser button and save an article to their Medium account. Such tools are seen as controversial by some publishers because they strip the advertising from pages that are saved.

The Observatory on Social Media at Indiana University has released a free tool to allow journalists and others to detect potential disinformation spreading on Twitter. The tool, called BotSlayer, can be configured to follow certain searches or keywords and uses an “anomaly detection” algorithm to flag suspected bot activity. The Observatory also has several other tools aimed at tracking disinformation, including Hoaxy.

Wudan Yan writes for CJR about how some journalists covering climate change focus on lifestyle changes such as flying less, even though the single biggest action someone can take to reduce their carbon footprint is to have fewer children. According to some recent estimates, a single child produces about 58 tons of carbon dioxide a year, or about 20 times as much as a single transatlantic flight generates.

The New York Times looked at how the Chinese government and its agents unleashed a storm of Twitter trolls in an attempt to discredit the protesters in Hong Kong. Some of the accounts, which the paper says numbered more than 200,000 at one point, started by posting innocuous articles about Chinese topics, but then gradually shifted to posting propaganda aimed at painting the protesters as dangerous terrorists. Others were apparently fakes acquired on the black market.

Jill Geisler of Loyola University in Chicago talks with CJR editor Kyle Pope about why some journalistic outlets are reluctant to take a side in reporting about climate change, and how they and others often shy away from collaborating with projects like CJR and The Nation’s Covering Climate Now for a number of reasons, including “Not Invented Here Syndrome.”