About Mathew Ingram

I'm a writer with the Columbia Journalism Review, which is based at Columbia University in New York, but I live in Toronto. I write about the intersection of media and technology, and anything else that interests me!

Leaked files from alt-right host Epik raise some hard questions


Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

In a data leak first reported last week by independent journalist Steven Monacelli on Twitter, a group of unnamed hackers claiming to be associated with the hacker collective known as Anonymous released more than 180 gigabytes of data from Epik, a web-hosting company that has become notorious for having a number of alt-right groups and services as clients, including right-wing Twitter alternatives Gab and Parler, as well as pro-gun and pro-Trump sites. “This dataset is all that’s needed to trace actual ownership and management of the fascist side of the internet,” the group said in its news release. “Time to find out who in your family secretly ran an Ivermectin horse porn fetish site, disinfo publishing outfit or yet another QAnon hellhole.” The data dump is said to contain account information for all of Epik’s clients, including the registered owner’s email address, mailing address, and other information (although some right-wing sites use anonymization services to conceal this data).

The importance of the information in the Epik hack, if it proves to be accurate, seems obvious, especially for researchers trying to track QAnon groups and other disinformation sources, as well as purveyors of hate speech and domestic terrorists. “The company played such a major role in keeping far-right terrorist cesspools alive,” Rita Katz, executive director of SITE Intelligence Group, which studies online extremism, told the Washington Post. “Without Epik, many extremist communities—from QAnon and white nationalists to accelerationist neo-Nazis—would have had far less oxygen to spread harm, whether that be building toward the January 6 Capitol riots or sowing the misinformation and conspiracy theories chipping away at democracy.”

Emma Best, co-founder of Distributed Denial of Secrets, a journalism non-profit that specializes in leaked data, told the Post that some researchers have called the Epik hack “the Panama Papers of hate groups,” a comparison to the leak of more than 11 million documents that exposed the offshore finance industry. Megan Squire, a professor at Elon University who studies right-wing extremism, told the Post: “It’s massive. It may be the biggest domain-style leak I’ve seen and, as an extremism researcher, it’s certainly the most interesting. It’s an embarrassment of riches.” As with the Panama Papers, extracting information from the huge database and making sense of it is time-consuming, which could explain why it took several days for mainstream outlets like CNN and the Post to report on the Epik hack.
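To give a concrete sense of what that sifting involves, here is a minimal sketch, in Python, of how a researcher might take a first pass at a registration dump of this kind. The file name, CSV layout, and column names are all assumptions made for illustration; they are not the actual structure of the Epik data.

```python
# Hypothetical first pass over a domain-registration dump. The file
# name and column names ("registrant_email", "domain") are invented;
# the real Epik data is not necessarily a single CSV.
import csv
from collections import defaultdict

def group_domains_by_registrant(path):
    """Group domains by registrant email so that clusters of sites
    run by the same person stand out."""
    clusters = defaultdict(list)
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            email = (row.get("registrant_email") or "").strip().lower()
            domain = (row.get("domain") or "").strip().lower()
            if email and domain:
                clusters[email].append(domain)
    return clusters

if __name__ == "__main__":
    clusters = group_domains_by_registrant("registrations.csv")
    # Surface the registrants who control the most domains first.
    for email, domains in sorted(clusters.items(), key=lambda kv: -len(kv[1]))[:20]:
        print(f"{email}: {len(domains)} domains")
```

A grouping pass like this is only a starting point: it makes a 180-gigabyte dump navigable, but the slow part, as with the Panama Papers, is verifying what the clusters actually show.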


Facebook goes on the offensive against critical reporting


Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

In the aftermath of the 2016 presidential election, and widespread criticism that Facebook had helped to destabilize the process by enabling Russian trolls and spreading disinformation, the company mostly struck an apologetic tone. Mark Zuckerberg, Facebook’s co-founder and chief executive, occasionally seemed defensive in his subsequent testimony before Congress, but the general sense was that he and the company were sorry for playing a role in those events, and were trying to do better. More recently, however, Facebook appears to be taking a much more aggressive approach to criticism, if the company’s response to recent reporting from the Wall Street Journal and New York Times is any indication. The social network also seems to be trying to shift public opinion by inserting positive stories about itself into users’ news feeds, while Zuckerberg does his best to stay out of the fray.

After a series of Journal articles detailing how Facebook runs a special program that allows celebrities to get around the platform’s rules of conduct, and how it has ignored the advice of its own researchers in its drive for growth at both Facebook and Instagram, the company responded with a lengthy blog post written by Nicholas Clegg, vice-president of global affairs and a former deputy prime minister of the UK. In it, Clegg said the stories “contained deliberate mischaracterizations of what we are trying to do,” and that the reporting from the Journal “conferred egregiously false motives to Facebook’s leadership and employees.” The central allegation in the series, he said — that the company conducts research, and then systematically and willfully ignores it if the findings are inconvenient — is “just plain false.”

In the past, given such accusations, Zuckerberg might have penned his own blog post explaining the company’s behavior, as he did when Facebook said it was moving discussions on the platform toward private groups and encrypted messaging, when he described his commitment to free speech, and when he discussed the decision to permanently block Donald Trump from the platform. In this case, Facebook chose to expand on Clegg’s argument in a separate, unsigned post. In it, the company tried to highlight some of the positive work it has done on disinformation and abuse, including the fact that it has 40,000 people working on safety and security, and has invested more than $13 billion to protect users (a figure a former Facebook executive pointed out is only about four percent of the company’s revenue over the same period).
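As a rough sanity check on that four percent figure: the $13 billion comes from Facebook’s own post, while the revenue base below is my assumption, using Facebook’s reported annual revenues from 2016 through 2020, which sum to roughly $280 billion.

```python
# Back-of-the-envelope check of the "about four percent" claim.
safety_investment = 13e9    # dollars, per Facebook's post
cumulative_revenue = 280e9  # dollars; assumed 2016-2020 revenue base
print(f"{safety_investment / cumulative_revenue:.1%}")  # -> 4.6%
```

On that assumed base the ratio comes out to about 4.6 percent, so “about four percent” is in the right neighborhood; the exact figure depends on which years the $13 billion is measured against.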


Internal memos show Facebook knew about flaws and did nothing

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.

In 2018, Mark Zuckerberg, co-founder and chief executive of Facebook, said the company was rolling out a significant change to the algorithm that governs its News Feed, in an attempt to encourage users to interact more with content posted by their friends and family, rather than with content from professional sources such as news publishers and other brands. One of the reasons for the change, Zuckerberg said, was a growing body of research showing that consuming mostly content from brands and publishers was not good for users’ well-being. However, according to a report from the Wall Street Journal published on Wednesday, the algorithm change didn’t improve users’ well-being; in fact, it had the opposite effect. Internal memos describe how the company’s own researchers found the changes were making the News Feed “an angrier place” by encouraging outrage and sensationalism. And when they suggested fixes, Zuckerberg turned them down because they would decrease engagement, the Journal says.

The Facebook researchers “discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism,” the Journal report says, because this generated higher levels of comments and reactions, which the platform uses as indications that a post is successful and should be amplified. “Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” data scientists at the company said in memos that the newspaper was able to read. “This is an increasing liability. Misinformation, toxicity, and violent content are inordinately prevalent among reshares.” These researchers worked on a number of potential changes that they hoped might ameliorate the algorithm’s tendency to reward outrage, the Journal says, but memos show Zuckerberg resisted many of these solutions because he was worried they might lead to people spending less time interacting with content on the platform.
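The dynamic the researchers describe is a familiar property of engagement-based ranking, and a small sketch makes the mechanism concrete. The signals and weights below are invented for illustration; they are not Facebook’s actual News Feed formula, which has never been published in full.

```python
# Minimal illustration of engagement-weighted ranking. The weights are
# invented: the structural point is that if comments and reshares
# count far more than likes, posts that provoke them (including
# outrage) float to the top of the feed.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int

def engagement_score(post, w_like=1.0, w_comment=15.0, w_reshare=30.0):
    # Comments and reshares are treated as stronger success signals.
    return w_like * post.likes + w_comment * post.comments + w_reshare * post.reshares

posts = [
    Post("calm family update", likes=500, comments=10, reshares=5),
    Post("outrage bait", likes=120, comments=80, reshares=40),
]
for p in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):>6.0f}  {p.text}")
```

Here the provocative post scores 2,520 against the quieter post’s 800, despite getting far fewer likes, which is exactly the incentive publishers and political parties were responding to.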

The emails and memos the newspaper quotes from are part of what it calls “an extensive array of internal company communications” that it gained access to (although it’s not clear how), and which have so far produced three investigative pieces on the company’s practices, of which the News Feed story is the third. The first, from reporter Jeff Horwitz, described a little-known system within the company that allowed VIPs to avoid any repercussions for breaching the platform’s terms of service. The program, known as XCheck (pronounced “cross check”), allowed celebrities, politicians, athletes, and other “influencers” to post whatever they wanted, with no consequences. Although an internal Facebook report seen by the Journal referred to “a select few members” as having this ability, the story says that as of last year, close to six million people were covered by the XCheck program.
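Structurally, what the Journal describes is an ordinary enforcement pipeline with an exemption list routed around it. Here is a toy sketch of that shape; every name, rule, and account in it is invented for illustration, and it is not Facebook’s implementation.

```python
# Toy sketch of a moderation path with a whitelist-style bypass.
EXEMPT_ACCOUNTS = {"celebrity_123", "politician_456"}  # invented IDs

def violates_rules(post_text: str) -> bool:
    # Stand-in for a real policy classifier.
    return "banned phrase" in post_text.lower()

def moderate(account_id: str, post_text: str) -> str:
    if account_id in EXEMPT_ACCOUNTS:
        # Exempt accounts skip enforcement, or are deferred to a
        # separate review queue that may never act.
        return "published (enforcement bypassed)"
    if violates_rules(post_text):
        return "removed"
    return "published"

print(moderate("ordinary_user", "this contains a banned phrase"))  # removed
print(moderate("celebrity_123", "this contains a banned phrase"))  # published (enforcement bypassed)
```

The reporting suggests the real system worked less like a hard-coded set and more like a deferred-review queue, but the effect described, rules that bind ordinary users and not VIPs, is the same.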
