Opinion

Commentary: The way misinformation targets Latino voters

Social media companies are doing little to moderate Spanish-language posts.

WhatsApp is often used by Latino New Yorkers, whether born in the United States or immigrants to it, to communicate locally and abroad. SOPA Images / Contributor – Getty

Odds are every Latino in New York, whether born in the United States or an immigrant to it, has a phone home screen featuring a small green box with a stylized cartoon telephone and a little red bubble constantly counting up. That’s WhatsApp, the messaging platform with more than 2 billion users.

Being able to receive messages from a neighboring borough or thousands of miles south has become an invaluable information web that connects far-flung communities with job updates, birth announcements and funny stories. Yet there’s something ominous interspersed between photos of grandma cooking or a nephew graduating from school: gobs of largely unmoderated misinformation and disinformation, often with almost untraceable origins, spread wildly by conspiracy-minded or simply uninformed users.

Attempts to spread false information aren’t new. Those of us from Latin America – convulsed as it has been by political movements and countermovements, many of them involving armed and insurgent groups, many fomented and shaped from abroad (including from Washington, D.C.) – know that information warfare has a rich history. What’s new is the speed and ease with which disinformation can spread. No longer is it necessary to print and distribute pamphlets or take control of TV stations; misinformation ranging from the general to the extraordinarily detailed can spread from some obscure source all around the world and deep into U.S. communities, including New York, in a matter of hours, largely unseen by our non-Latino counterparts.

This reality came into stark, widespread view in the run-up to and immediate aftermath of the 2020 presidential election. Researchers and political groups warned about the reach and volume of misinformation among Latinos, which predictably eluded much of the mainstream press and the tech moderators focused on English-language disinformation proliferating on Facebook and Twitter. Despite a renewed public focus, there appears to have been little action since then to prevent its proliferation. In many cases, the spread doesn’t so much instill incorrect beliefs as create uncertainty about what’s real and what isn’t.

According to an in-depth survey and report by Equis Research, which polled 2,400 Latino adults around the country, when it comes to conspiracies like the idea that Antifa was responsible for the Jan. 6 insurrection, “the ‘uncertain’ are the majority group, and in some cases the supermajority.” The researchers wrote that “this widespread uncertainty represents both a threat and an opportunity for those who want to see an informed populace,” as the uncertainty undermines trust but also “signals a healthy skepticism and an openness to persuasion.”

New York is fortunate to have a well-developed community organization and nonprofit sector that can help combat the worst of it, but trying to parry individual false claims directly with Latino voters is a bit like plugging holes in a leaking dam. The only feasible solutions are scalable tech and social media policy fixes, whether undertaken actively and voluntarily by the companies most at fault (after years of limited change, we shouldn’t hold our breath) or compelled by consumer behavior and local, state and federal policymaking.

For example, most research on the subject has shown that organizations like Meta, which in addition to Facebook and WhatsApp also owns Instagram, are falling short in moderating misinformation in general, and bring even less emphasis and urgency to cracking down on it in non-English-language spaces. Various researchers have pointed out that even as Facebook was rolling out efforts to take down election and health misinformation, the tools seemed to miss Spanish-language posts entirely.

A separate problem is presented by platforms like WhatsApp, which is primarily used outside the U.S. and by those who have friends or family living abroad. Whereas public posts on sites like Facebook are visible to moderators, fact-checkers, community organizations and other such groups, private WhatsApp chains are visible only to those members and can thus spread not only unchallenged but often practically unnoticed.

As I explored in an examination of Latino political power earlier this year, one of the phenomena that surprised outside observers – though Latino political analysts considerably less so – was that a growing share of the Latino vote is being captured by the Republican Party, a trend that has continued in the intervening months. There are many explanations for the shift, including, as I posited, a religiously tinged conservative social culture, but the impact of misinformation cannot be overlooked. As with its English-language parallel, the bulk of this misinformation seemed geared to fan right-wing paranoias, perhaps most notably COVID-19 vaccine skepticism. This was no mere blip: An avalanche of posts promoting, for example, the use of ivermectin, a dewormer for animals, instead of COVID-19 vaccines flooded social media and then jumped to Spanish-language AM radio.

On the purely political side of things, more conservative-leaning Latinos in the U.S. have a host of pressure points onto which misinformation can latch. While the right wing has long campaigned on fears of socialism, Latinos have much more specific bogeymen to point to. It can be claimed that President Joe Biden is, for example, in league with Venezuelan President Nicolás Maduro, or that he’s angling to turn the United States into a totalitarian socialist state, an obviously false charge that nonetheless packs an emotional punch for some folks who’ve been steeped in such fears. As Equis Research’s findings indicated, the objective isn’t only to instill certain beliefs in the recipients of disinformation, but to create enough general confusion about what’s real that some subset of voters will simply give up on participating in the political system altogether.

Regulating misinformation as a general notion sounds good, but the mechanics of doing so quickly run up against the bounds of free speech. It is not, and never has been, unlawful to spread lies or conspiracies through any medium. Private tech companies, unbound as they are by the strictures of the First Amendment, are free to moderate content as they see fit, and could, in theory, develop tools that would automatically cull misinformation or at least label it as such, as most already do in some limited fashion.

There are, generally speaking, four overarching reasons why they don’t. The first two are technical in nature: For end-to-end encrypted services like WhatsApp (and Telegram’s optional secret chats), the company literally cannot see the content of the messages, meaning that active moderation is impossible. Allowing them to analyze the content for misinformation would present its own significant issues around privacy and surveillance. With end-to-end encryption, the content of a message is encrypted by the sender’s device and decrypted only on the receiver’s; it cannot be deciphered while in transit, even by the company relaying it. Tampering with that would partly defeat the purpose of the app.
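For readers who want to see the mechanics, here is a minimal sketch of the idea using the PyNaCl library. It illustrates the general technique only; WhatsApp’s real system is built on the more elaborate Signal protocol, and everything below (the keys, the message) is a toy example.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# An illustration of the concept, not WhatsApp's actual protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; the private key never leaves it.
sender_key = PrivateKey.generate()
receiver_key = PrivateKey.generate()

# The sender encrypts with their private key and the receiver's public key.
sender_box = Box(sender_key, receiver_key.public_key)
ciphertext = sender_box.encrypt(b"Hola, nos vemos el sabado.")

# The relay server sees only this opaque blob. Holding neither private key,
# it cannot read the message, so it cannot moderate it either.
print(ciphertext.hex())

# Only the receiver, with the matching private key, can decrypt.
receiver_box = Box(receiver_key, sender_key.public_key)
print(receiver_box.decrypt(ciphertext).decode())
```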

For public-facing posts on sites like Facebook and TikTok, these global companies, with billions of users speaking many different languages, have to hire small armies of moderators to make determinations on the millions of posts created every day, many of which only come to moderators’ attention after a user flags them. Otherwise, the companies can try to moderate via artificial intelligence, which can be trained on specific types of misinformation and automatically move to delete or obscure it.

These companies almost certainly can’t hire enough moderators to filter all content that might run afoul of the rules, while machines lack the ability to factor in context, intent or meaning, which can result in false positives. In the most absurd cases, reporters and researchers who are themselves attempting to correct or counter spreading misinformation can be suspended or have posts marked as suspect because they quote the same language used in malicious posts.
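To make that tradeoff concrete, here is a rough sketch of how automated scoring and human review might be combined, with the classifier handling clear-cut cases and routing the ambiguous middle band to people. The thresholds, the misinfo_score stub and the queue are hypothetical placeholders, not any platform’s actual system.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated
# classifier triages posts, and only the ambiguous middle band reaches
# human reviewers. All thresholds and the scoring model are placeholders.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # near-certain cases: act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases: queue for a human

@dataclass
class Post:
    post_id: int
    text: str
    language: str  # a real classifier needs training data per language

def misinfo_score(post: Post) -> float:
    """Placeholder for a trained classifier. The gap this article describes
    sits exactly here: models trained mostly on English text score
    Spanish-language posts poorly or not at all."""
    return 0.0  # stub

def triage(post: Post, review_queue: list[Post]) -> str:
    score = misinfo_score(post)
    if score >= AUTO_ACTION_THRESHOLD:
        return "label_or_remove"   # cheap and fast, but risks false positives
    if score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.append(post)  # context-aware, but slow and expensive
        return "queued_for_human"
    return "no_action"
```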

Often, this means companies use both techniques – an algorithm flags posts that moderators review – but that is expensive and introduces friction for users, which brings us to a third, nontechnical reason: Tech companies might get significant flak for hosting and spreading misinformation, but at the end of the day their business models are built around maintaining the attention of as many people as possible for as long as possible, so they can both serve them ads and harvest and resell their data.

Studies have shown that nothing drives engagement like strong negative emotions and the feeling of belonging to an in-group, both of which conspiracy content reliably delivers. Tamping down this sort of content is at odds with these behemoths’ designs for maximum attention and maximum scale. They also tend to be concerned – similarly to media companies, it’s worth noting – with being perceived as partisan, and if the misinformation is largely weighted toward right-wing ideology, the fear is that taking a strong stance against it will draw the scrutiny of conservative public officials always itching to throw down against the supposed biases of Big Tech.

For these and other reasons, we shouldn’t expect tech companies to go out of their way to tackle this problem of their own volition, which means they must be compelled to do so. The most ironclad way of doing that is through policy, but there we run into the First Amendment. Speech published online, regardless of its content, is protected from government intervention, with extremely narrow exceptions for things like direct, specific and imminent threats against another person. On top of that, social media companies benefit from Section 230 of the Communications Decency Act, which generally exempts them from liability for merely hosting content that they aren’t producing.

State Sen. Brad Hoylman of Manhattan, who chairs the Judiciary Committee, proposed a somewhat novel way around this limitation in a bill introduced last year, which would create a cause of action for both private citizens and entities like the state attorney general’s office to try to hold social media companies liable not for hosting but for ostensibly spreading misinformation. “It’s our contention that social media websites aren’t just a simple host for users’ content or viewpoints. These companies actually employ complex algorithms designed to put the most controversial and provocative content in front of as many users as possible,” Hoylman told City & State.

The distinction between hosting and spreading content is hazy enough that a number of legal experts have questioned whether it is meaningful enough to hold up in court. There would have to be some sort of test that a platform would trigger if it were deemed to be pushing content, and Hoylman acknowledged that this key plank remains undefined: “I would leave that to a judge and jury to determine.” Still, he said some action must be taken to combat dangerous information: “Social media is the tail wagging the dog and the extremes of opinion are, unfortunately, the ones that often shape important policy decisions.”

One way for local governments in particular to combat misinformation, without stepping outright into the minefield of trying to control its spread, is to take active steps to counteract it: robust voter education programs can disseminate accurate, nonpartisan information about voting and party platforms, and governments can support community organizations doing the same. In a recent blog post, Mekela Panditharatne, counsel for the Brennan Center for Justice’s Democracy Program, pointed to California’s efforts during last year’s gubernatorial recall election, when jurisdictions like Los Angeles County maintained a tip line and provided phone support for voters seeking up-to-date information.

Panditharatne also noted that “large numbers of election officials have left or plan to depart their posts, draining offices of experience (and) dozens of candidates for offices with power over elections have embraced false election claims.” In New York, we’ve mostly avoided this semicollapse of electoral institutions (the state Legislature’s attempted congressional gerrymander notwithstanding) and are well-positioned to be a national model for how to respond with a mixture of government resources, public pressure and community intervention. We’ve already had plenty of experience, having been the epicenter of the COVID-19 pandemic. Ultimately, New York City reached about 80% vaccination coverage for the first two doses, outpacing the national average by about 10 percentage points. According to city statistics, Hispanic New Yorkers actually overperformed, hitting about 74%, a higher rate than white and Black New Yorkers. The messaging eventually worked.