Despite all of the hearings on Capitol Hill about how Facebook, Google and Twitter handle everything from privacy to fake news, U.S. lawmakers haven’t done much yet to regulate tech companies. However, according to the results of a new survey, the American people might be ready for Congress to act.
According to a joint survey by Gallup and the Knight Foundation, 79 percent of respondents think tech companies should be regulated the same way the news media is. The results, which were released on Wednesday, shed light on how users’ expectations don’t always align with how the platforms actually operate.
The research is part of an ongoing series about trust and democracy and comes at a time when regulators and tech companies are increasingly grappling with questions of how to address problems with false information, hate speech and bias on the world’s largest social media platforms. Fifty-four percent of respondents said internet companies should help people “stay better informed,” while 85 percent said the companies aren’t doing enough to stop misinformation from spreading, according to the report.
“What we’re learning from the results is there’s a difference between what people act on and what they think we as a society should be looking at,” said David Askenazi, director of learning and impact at the Knight Foundation.
According to Askenazi, the questions in the latest online survey of 2,000 adults were inspired by a larger one in January, when 19,000 people were asked how they’d like the news to help inform the nation. And while people understand the business models and marketing effectiveness of targeted content, the latest survey suggests people are worried about how tech companies’ bias distorts the reality of news. For example, around 85 percent of Republicans and 60 percent of Democrats said they thought tech companies’ methods of showing content reflect the companies’ own political bias.
There’s something else internet giants might want to note: Americans are wary of algorithms. Almost nine in 10 (88 percent) of respondents said companies should be transparent about how they deliver news content to users, while 63 percent said they’re “very concerned” that news feeds give readers a “biased picture” of news by excluding content. They also apparently don’t like filter bubbles—80 percent said they’d rather have everyone see the same content from the same news organization—a seemingly direct rebuke of the personalization of media and marketing that has made companies like Facebook and Google the giants they are today.
The awareness of curated content in 2018 is starkly different from what it was just three years ago. A 2015 study found that 62.5 percent of users didn’t realize Facebook’s news feed hid stories.
“However, with no way to know if their knowledge of these invisible algorithms is correct, users cannot be sure of the results of their actions,” wrote the authors, who included researchers at the University of Michigan and the University of Illinois at Urbana-Champaign. “Algorithmic interfaces in internet applications rarely include a clear enough feedback mechanism for users to understand the effects of their own actions on the system. Without such feedback, it can be difficult to assess the influence of either algorithm knowledge or ignorance.”
The shifts in public sentiment could help bolster support for any legislation Congress chooses to pursue when lawmakers return from summer recess. Earlier this month, Sen. Mark Warner, D-Va., released a white paper detailing ideas for how tech companies’ data practices might be reined in. For example, Warner said the federal government could create standards for auditing tech companies’ algorithms to assess fairness and detect bias.
Companies have begun to somewhat timidly—and very selectively—wade into the waters of content moderation. Earlier this month, Apple, Facebook, Google and Twitter all banned Infowars founder Alex Jones from their platforms for violating policies related to hate speech. However, while Jones is also known for spreading false information and conspiracy theories—including calling survivors of mass shootings “crisis actors”—some of the platforms have said false information on its own is not enough to warrant removing content.
The advertising-funded giants have also put in place additional policies to try to safeguard their platforms from foreign interference. And yet, that hasn’t entirely stopped some “bad actors” from using social networks to spread fake news through advertising, organic content and event pages. In July, Facebook announced it had discovered and removed 32 accounts that were part of a coordinated misinformation campaign.
But even voters who are concerned about bias and fake news are divided on how to handle it. While a majority think tech companies should be held to the same standards as broadcast outlets and newspapers, only 16 percent said the government is responsible for providing people with accurate news. Another 46 percent said the companies are responsible, followed by 38 percent who said it falls on users themselves.
“Targeted content has changed the way we do business over the past 20 to 30 years,” Askenazi said. “And yet, here are 1,200 people that are saying pretty strongly that they don’t like it and yet it makes a huge difference in the way we live our lives.”