Jun 18, 2020

Facebook, Twitter, Google Testimony Transcript on Election Interference, Disinformation

Facebook, Twitter, Google Officials Testify on Election Security

On June 18, officials from Google, Facebook, and Twitter testified before Congress on disinformation and election interference. Read their testimony transcript to U.S. lawmakers here.


Adam Schiff: (02:49)
I want to thank, again, our members and supporting staff for helping to make Monday’s event on Sub-Saharan Africa, our first remote hearing, possible as we continue to master this new way of conducting business. Without objection, the chair may declare a recess at any time. Before we begin with our topic today about emerging trends in online foreign influence operations, I want to address some housekeeping matters.

Adam Schiff: (03:16)
First, today’s session will be conducted entirely on an unclassified basis. All participants should refrain from discussing any classified or other information protected from public disclosure. Second, the committee is conducting this virtual hearing in compliance with House Resolution 965 and the regulations for remote committee proceedings. It is being broadcast live on the committee’s website. Like many of you, I would have preferred to hold this hearing in person in Washington DC. However, because the threats posed by the COVID-19 pandemic remain serious and widespread, we are proceeding remotely in order to ensure the safety of our witnesses, members, staff and the public. Today’s important conversation is essential to our oversight of how the intelligence community and nation are working to keep our elections and political discourse safe from foreign interference. I had hoped it would be a bipartisan discussion. Unfortunately, and without reason or justification, our Republican colleagues once again have decided to absent themselves from the work of the committee. I repeat my hope that they will reconsider this ill-considered path and join us for future hearings. Whether conducted remotely or in person, these hearings and supplemental roundtables are official business and integral to our responsibilities in the classified and unclassified realm. Pandemic or no pandemic, the American people have a right to expect us to do our work and to conduct our business in a way that prioritizes the safety of witnesses, members and staff. I want to remind our members of our remote hearing procedures.

Adam Schiff: (04:55)
First, consistent with the regulations, the committee will keep microphones muted to limit background noise. Members are responsible for unmuting themselves when they seek recognition or when recognized for their five minutes. Because there are sometimes delays when muting or unmuting microphones, I would ask that members and witnesses allow sufficient time before speaking to ensure that the last speaker has finished talking. Second, members and witnesses must have their cameras on at all times. If you need to step away from the proceeding, please leave your camera on. Third, if you encounter technical difficulties, please contact technical support through the channels established prior to the hearing. Our technical staff will work to get you back up and running as quickly as possible. Finally, consistent with past practice, I will at appropriate time recognize members for their five minute questions in order of seniority, starting with those who were present at the commencement of the hearing.

Adam Schiff: (05:54)
Thank you again for all your patience as we proceed under these extraordinary circumstances. This is the second hearing of the House Intelligence Committee held with witnesses from Google, Facebook and Twitter. The first was in November 2017, where we continued to piece together the full breadth of the Russian attack on our democracy one year earlier and inform the public about what we had found. It was a breathtaking and audacious attack that took place on several fronts, including social media platforms used daily by millions of Americans. From subsequent disclosures by the technology companies, Department of Justice and this committee, the world learned that Russia’s Internet Research Agency undertook a determined effort to use social media to divide Americans in advance of the 2016 election. These IRA trolls took to a broad array of platforms to launch a sophisticated and pernicious campaign that exploited wedge issues already challenging our nation, such as immigration, the Second Amendment, race relations and other issues. Today’s hearing is not intended to look back at 2016 as much as it is to look forward.

Adam Schiff: (07:07)
Election day is a mere five months away. And malicious actors, including Russia but also others, persist in attempts to interfere in our political system in order to gain an advantage against our country and to undermine our most precious right: that of a free and fair vote. We’re holding this hearing, and we engage regularly with tech and social media companies, because they are arguably best positioned to sound the alarm if and when another external actor attempts to interfere in our democratic discourse. First, because their technical capacity and security acumen allow them to detect malicious activity on their platforms and make attributions through technical indicators that are available only to the companies themselves. And second, because we cannot have complete confidence that the White House will allow the intelligence community to fully and properly inform Congress if it detects foreign interference, especially if that interference appears to assist the president’s reelection. That is a dangerous and unprecedented state of affairs, but nonetheless, it reflects the reality and why this hearing is so important.

Adam Schiff: (08:19)
To the witnesses, as you describe in your respective written statements, a lot has changed since 2016. In many ways, we’re better prepared today than we were four years ago. Each of your companies has taken significant steps and invested resources to detect coordinated, inauthentic behavior and foreign interference. And while there cannot be a guarantee, it would be far more difficult for Russia or another foreign adversary to run the same 2016 playbook undetected. Facebook and Twitter now regularly update the public, the committee and Congress on their findings as they identify and disrupt coordinated inauthentic behavior and foreign interference targeting the United States and other nations globally. US government agencies with responsibility to unearth and fight foreign interference coordinate and meet regularly with technology companies and with us. The companies themselves have established mechanisms to share threat information and indicators both among themselves and with smaller industry peers. Independent researchers have taken up the mantle, in cooperation with platforms, to apply their skills and knowledge to detecting and analyzing malicious networks and publishing comprehensive public reports.

Adam Schiff: (09:34)
These are positive developments. But as I look across the landscape, I can’t say that I’m confident that the 2020 election will be free of interference by malicious actors, foreign or domestic, who aspire to weaponize your platforms to divide Americans, pit us against one another and weaken our democracy. We are learning, but our adversaries are learning as well. And not only Russia. Modest investments in the IRA and the hacking and dumping campaign aimed at the Clinton campaign paid off in spades, helping to elect the Kremlin’s favored candidate and widening fissures between Americans. The lesson being that influence operations on social media are cheap and effective, and attribution to specific threat actors isn’t always straightforward. In March, Facebook and Twitter took down a network comprised of Ghanaian and Nigerian individuals operating out of West Africa who were acting essentially as cutouts for IRA-linked parties in Russia. This recruited network was tasked with targeting US audiences with race-oriented content, echoes of 2016 to be sure, but also a sign of new tactics.

Adam Schiff: (10:47)
And just this week, we saw the release of a Graphika report detailing a substantial network of accounts attributed to Russia which the researchers dubbed Secondary Infektion. And while neither network succeeded on the scale of the 2016 IRA efforts in generating viral content, they showed that Russia-linked actors remain determined and capable of sophisticated and malicious social media activity targeting US politics and society. Secondary Infektion’s operational security was reportedly very good, and their deployment of convincing forgeries should worry us all.

Adam Schiff: (11:28)
Other countries have watched and learned from Russian active measures and they may well seek to replicate them. As takedowns of coordinated inauthentic behavior have demonstrated, China, Iran and other nations are using similar techniques aimed at international and domestic audiences. And they may choose to ramp up foreign influence operations in the future. And the question is, will your companies be able to keep up? Technology has evolved, including the rapid advent of deepfake technology, which was the subject of a hearing by the committee last year. Deepfakes and manipulated media could be weaponized by malicious actors to upend an election by laundering false images, audio or videos into the information stream through social media and traditional media outlets.

Adam Schiff: (12:16)
While each of your platforms has begun to adopt policies around deepfakes and manipulated media, it remains to be seen whether they are sufficient to detect and remove sinister, manipulated media at speed. For once a visceral first impression has been made, even if proven false later, it is nearly impossible to repair the damage. I’m also concerned because the nature of your platforms, all of them, is to embrace and monetize virality. The more sensational, the more divisive, the more shocking or emotionally charged, the faster it circulates. A tweet or Instagram photo or a YouTube video can be viewed by millions of Americans in the span of hours. A policy that only identifies and acts upon misinformation, whether from a foreign or domestic source, after millions of people have seen it is only a partial response at best. I recognize that, at this scale, the challenge of moderation is daunting.

Adam Schiff: (13:19)
As we get closer to November, the stakes will only grow. And make no mistake, foreign actors and presidents alike are testing the limits of manipulated media right now. Finally, I am concerned because of an issue that I raised back in 2017, and repeatedly since. I’m concerned about whether social media platforms like YouTube, Facebook, Instagram, and others wittingly or otherwise optimize for extreme content. These technologies are designed to engage users and keep them coming back, which is pushing us further apart and isolating Americans into information silos. Ultimately the best and only corrective measure to address the pernicious problem of misinformation and foreign interference is ensuring that credible, verified, factual information rises above the polluting disinformation and falsehoods, whether about the location of polling places or about the medical consensus surrounding COVID-19.

Adam Schiff: (14:19)
It remains paramount that all sectors of our society, including the technology companies with us today, stand vigilant and postured to detect and disrupt foreign malign attempts to influence our political and societal discourse. Americans must decide American elections. With that, I want to thank again and welcome our witnesses who are joining us today. We’ll proceed with five minute opening statements going in alphabetical order. First, Nathaniel Gleicher, Head of Security Policy at Facebook. Then Nick Pickles, Director of Global Public Policy Strategy and Development at Twitter. And finally Rick Salgado, Director of Law Enforcement and Information Security at Google. Mr. Gleicher, we’ll begin with you and you are recognized for five minutes.

Nathaniel Gleicher: (15:09)
Thank you, Mr. Chairman. Chairman Schiff, Ranking Member Nunes, and members of the committee, thank you for the opportunity to appear before you today. And I appreciate the opportunity to appear before you virtually. My name is Nathaniel Gleicher, and I’m the Head of Security Policy at Facebook. My work is focused on addressing the adversarial threats we face every day to the security and integrity of our products and services. I have a background in computer science and law. And before joining Facebook, I prosecuted cyber crime at the Department of Justice and served as Director for Cybersecurity Policy at the National Security Council. These are incredibly challenging times, and that’s why it’s more important than ever that people can have authentic conversations on our platforms about the issues that matter to them, whether that’s COVID-19, racial and social injustice, family and economic concerns or the upcoming elections. But we also know that malicious actors are working to interfere with these conversations, to exploit our societal divisions, to promote fraud, influence our elections, and de-legitimize authentic social protest.

Nathaniel Gleicher: (16:17)
My team was built to find and stop these bad actors, and we’re working tirelessly to do so. Facebook has made significant investments to help protect the integrity of elections. We now have more than 35,000 people working on safety and security across the company, with nearly 40 teams focused specifically on elections and election integrity. We’re also partnering with federal and state governments, other tech companies, researchers, and civil society groups to share information and stop malicious actors. Over the past three years, we’ve worked to protect more than 200 elections around the world. We’ve learned lessons from each of these, and we’re applying these lessons to protect the 2020 election in November. We’ve taken a variety of steps to support the integrity and security of the electoral process, including launching Facebook Protect, a program that helps secure the accounts of elected officials, candidates and their staff, increasing political and issue ad transparency, investigating and stopping coordinated inauthentic behavior.

Nathaniel Gleicher: (17:17)
We’ve removed more than 50 deceptive networks in 2019 alone. We are also labeling posts by state-controlled media outlets, so that people understand where their news is coming from. Yesterday, we began blocking ads in the United States from these state-controlled outlets to provide an extra layer of protection against foreign influence in the public debate ahead of the 2020 election in November. In addition, we know that misinformation and influence operations are at their most virulent in information vacuums. So we combine our enforcement efforts with ensuring that people can access authentic, accurate information about major civic moments, like this global pandemic or voting. This is why we’re creating a new voter information center to fight misinformation, to encourage people to vote and to make sure voters have accurate and up-to-date information from their local, state and federal election authorities. Because authenticity is the cornerstone of our community, we’ve also invested significantly in combating inauthentic behavior, both individual fake accounts and coordinated networks.

Nathaniel Gleicher: (18:19)
For example, we disabled approximately 1.7 billion fake accounts between January and March of this year. Over 99% of those we identified proactively before we received any report. And for the vast majority, these were identified and removed within a very short period after they were created. We’ve also created tools to particularly identify fake accounts targeting civic issues like elections. In addition, so far this year we’ve taken down 18 coordinated networks seeking to manipulate public debate, including three networks originating from Russia, two from Iran and two based here in the United States. We’ve shared information on these networks with third party researchers to enable their own independent assessments. We released monthly reports to highlight the actions we’re taking in one place. We’ve also been proactively hunting for bad actors trying to interfere with the important discussions about injustice and inequality happening around our nation. As part of this effort, we’ve removed isolated accounts seeking to impersonate activists and two networks of accounts tied to organized hate groups that we’d previously banned from our platforms.

Nathaniel Gleicher: (19:25)
Finally, we’re also working to stop misinformation and harmful content related to the COVID-19 pandemic spreading on our platform. On Facebook and Instagram, we remove COVID-19 related misinformation that could contribute to imminent physical harm, such as posts and ads about fake cures. And we continue to work with our network of independent fact checking partners to debunk other false claims and connect people with information from authoritative sources. We’re proud of the progress we’ve made to protect authentic discourse on our platforms, but there’s always more work to do. We’re up against determined adversaries, and we’ll never be perfect, but we are fully committed to the vital work to stop bad actors and give people a voice. Thank you, and I look forward to answering your questions.

Adam Schiff: (20:07)
Thank you, Mr. Gleicher. Mr. Pickles, you’re up.

Nick Pickles: (20:17)
Chairman Schiff, members of the committee, thank you for the opportunity to appear before you today. The purpose of Twitter is to serve the public conversation, and that conversation is never more important than during global elections and civic events, the cornerstone of democracies across the globe. Our service gives people the ability to share what is happening and provides people insights into a diversity of perspectives on critical issues all in real time. We are humbled by the way that our platform is used by those seeking to speak out against injustice, to hold those in power accountable and to build movements for change. Threats of interference in elections from foreign and domestic actors are real and evolving. Since 2016, we’ve made a number of significant investments to address these challenges and prepare against bad actors, taking lessons from the 2018 midterms and elections around the world. I’m grateful for the opportunity to discuss our approach today.

Nick Pickles: (21:10)
And I’ll begin by focusing on the policies, product changes and partnerships Twitter now has in place. The Twitter rules directly address a number of potential threats to the integrity of elections. Under our civic integrity policy, individuals may not use Twitter for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process. We recently expanded this pilot policy to cover civic events, for example, the census, in addition to elections. We prohibit the use of Twitter services in a manner that’s intended to artificially amplify or suppress the conversation. Our rules prohibit fake accounts and those impersonating others. We do not permit the distribution of hacked materials that contain private information, trade secrets, or could put people in harm’s way. In addition to these new rules, Twitter’s advertising policies also play an important part in protecting the public conversation.

Nick Pickles: (22:13)
Firstly, Twitter does not allow political advertising. Online political advertising represents an entirely new challenge to civic discourse that today’s democratic infrastructure may not be prepared to handle, particularly the machine learning based optimization of messaging and micro-targeting. Secondly, Twitter does not allow news media entities controlled by state authorities to advertise. This decision was initially taken with regard to Russia [inaudible 00:22:40] Sputnik, based on the Russian activities during the 2016 election. Last year, we expanded this policy to cover all state controlled media entities globally, in addition to individuals who are affiliated with those organizations. While our policies are vital to protect the conversation, we also want to be proactive in helping people on Twitter find credible information by providing them with additional context. This approach is informed by direct feedback from the people who use Twitter. In 2019, we opened up a public comment period, and heard two clear points.

Nick Pickles: (23:13)
Firstly, Twitter shouldn’t determine the truthfulness of tweets. And secondly, Twitter should provide context to help people make up their own minds in cases where the substance of a tweet is disputed. We prioritize interventions regarding misinformation based on the highest potential for harm, and are currently focused on three main areas of content: manipulated media, elections and civic integrity, and COVID-19. Where content does not break our rules and warrant removal in these three areas, we may label tweets to help people come to their own views by providing additional context. These labels may link to a curated set of tweets posted by people on Twitter that include factual statements, counterpoint opinions and perspectives, and ongoing public conversation around the issue. To date, we’ve applied these labels to thousands of tweets around the world across these three policy areas. Finally, I’d like to outline how Twitter is empowering public understanding of the attempts to manipulate the public conversation.

Nick Pickles: (24:12)
In 2018, we launched a public archive of the material that we have removed as part of our work to tackle platform manipulation. This one-of-a-kind resource used by researchers, journalists, and experts around the world now spans operations across 15 countries, including more than nine terabytes of media and 200 million tweets. The use of this archive by a range of stakeholders highlights the importance of information sharing and partnership. Collaboration is critical to Twitter’s efforts in preventing hostile actors from interfering in the public conversation. In certain circumstances, only government agencies have access to information critical to our efforts. And we’re grateful for the continued partnership with federal, state and local agencies, in particular the FBI’s Foreign Influence Task Force and the US Department of Homeland Security’s election security task force. We also work in close collaboration with the National Association of Secretaries of State and the National Association of State Election Directors.

Nick Pickles: (25:09)
We also partner with Civic Alliance, Vote Early Day and National Voter Registration Day to amplify credible election related content. We want people to have confidence in the integrity of the information found on Twitter, especially with respect to information relevant to elections and processes. We know that the threats and challenges are evolving, and we continue to invest in our efforts to address these threats posed by hostile actors and foster an environment conducive to healthy, meaningful conversations. We look forward to working with the committee on this vital issue.

Adam Schiff: (25:44)
Thank you, Mr. Pickles. Mr. Salgado, you are now recognized for your opening statement.

Richard Salgado: (25:50)
Chairman Schiff and members of the committee, thank you for inviting me to testify today to provide an update on Google’s efforts to protect election integrity and prevent platform abuse. My name is Richard Salgado. I am the Director of Law Enforcement and Information Security at Google. Google created its search engine in 1998 with a mission to organize the world’s information and make it universally accessible and useful. As we cope with a global pandemic and are once again reminded of the injustices that continue to exist in our society, our role in helping people access high quality information is more important than ever. Today, I’ll be focusing on three main areas. First, our efforts to combat election related interference. Second, how we are empowering people with authoritative information. And third, how we are improving transparency and accountability in advertising. I’ll start by highlighting our continued investigative and preventive work to combat election related interference.

Richard Salgado: (26:59)
As we previously reported to the committee, our investigation into the 2016 elections found relatively little violative foreign government activity on our platform. Entering the 2018 midterms, we continued to improve our ability to detect and prevent election-related threats and to engage in information sharing with others in the private sector and the government. While we saw limited misconduct linked to state-sponsored activity in the 2018 midterms, we continue to keep the public informed. We recently launched a quarterly bulletin to provide additional information about our findings concerning coordinated influence operations. This joins other public reporting across products as we shed light on what it is that we’re seeing. Looking ahead to the November elections, we know that the COVID-19 pandemic, widespread protests and other significant events can provide fodder for nation-state-sponsored disinformation campaigns. We remain steadfast in our commitment to protect our users. Second, we have continued to improve the integrity of our products.

Richard Salgado: (28:08)
Our approach is built on a framework of three strategies, making quality count in our ranking systems, giving users more context and counteracting malicious actors. In search, ranking algorithms are an important tool in our fight against disinformation. Ranking elevates information that our algorithms determine is the most authoritative above information that may be less reliable. Similarly, our work on YouTube focuses on identifying and removing content that violates our policies and elevating authoritative content when users search for breaking news. At the same time, we find and limit the spread of borderline content that comes close but just stops short of violating our policies. The work to protect Google products and our users is no small job, but it’s important. We invest heavily in automated tools to tackle a broad set of malicious behaviors and in people who review content and help improve these tools. We applied many of these strategies in response to the COVID-19 pandemic and developed new ways to connect users to authoritative government information.

Richard Salgado: (29:22)
Similarly, we work to remove misinformation that poses harm to people and undermines efforts to reduce infection rates. On YouTube, we have clear policies prohibiting content that promotes medically unsubstantiated treatments or disputes the existence of COVID-19. We also reduce recommendations of borderline content. Third, Google has made election advertising far more transparent. We now require advertisers purchasing US election ads to verify who they are and disclose who paid for the ad in the ad itself. We launched a transparency report with a searchable ad library as well. Micro-targeting of election ads was never allowed on Google systems, but targeting of election ads in the US is now further limited to general geographic location, age, gender, and context where the ad would appear.

Richard Salgado: (30:17)
This aligns with long established practices in media such as TV, radio and print. Finally, this April, we announced that we will extend identity verification to all advertisers on our platform with a rollout beginning this summer. We certainly can’t do this important work alone. Preventing platform abuse, combating disinformation, and protecting elections requires concerted effort and collaboration across the industry and with governments. We will continue to do our part, seek to improve and correct our mistakes and learn from them along the way in order to better protect the collective digital ecosystem. Thank you for the opportunity to discuss these issues.

Adam Schiff: (31:04)
Well, thank you to all of our witnesses for your statements. We’ll now begin the question period, and I’ll recognize myself to begin the questioning. Mr. Salgado, if I could begin with a question for you, and let me just frame it this way. I think the conventional wisdom among observers of the tech sector, people in academia and others is that Google is probably the least transparent of the major technology companies, contrasting, for example, Twitter’s establishment of a database to make available to researchers and the public what it finds in terms of inauthentic activity on its platform. There is no such equivalent in terms of disclosures by Google. Are you contemplating a change in terms of making data available to the public and researchers that would facilitate analysis of-

Adam Schiff: (32:03)
That would facilitate analysis of foreign or even domestic efforts at inauthentic content. And how do you respond to the criticism that Google has essentially adopted a strategy of keeping its head down and avoiding attention to its platform while others draw heat?

Richard Salgado: (32:25)
Well, I certainly hope that’s not the perception. If it is, it’s a misperception, Mr. Chairman. We’ve been very transparent. Particularly on YouTube, we’ve got a transparency report that actually details quite a bit about the actions that are taken on videos, lots of statistics there that may be useful for public policymakers and the public and researchers as well, including on comments, not just on the videos. We have launched a bulletin that will be published quarterly that goes into influence operations that we’ve seen. We released a set just a few weeks ago, I believe it is.

Richard Salgado: (33:04)
So the transparency into that is important to us, the information sharing is important to us, and we’ve certainly engaged in the debate with public policymakers and the public on these important issues. So it’s true we don’t see the volume that the others in the industry see, but we are talking about what it is we see in fighting it and taking it seriously.

Adam Schiff: (33:29)
Would you be supportive of establishing the kind of database that Twitter has announced it’s establishing to share more data along those lines?

Richard Salgado: (33:38)
I’d have to understand a little better about what it is. We’ve done that with regard to advertisements. If you look at the ads transparency report, you’ll actually see the details of ads and the content of it, how much was paid, the penetration of those ads, who’s behind them. So where we see our products deeply involved, we’ve done just that. If the focus is on YouTube, we can take it back and see if there’s something useful in that arena.

Adam Schiff: (34:09)
Well, let me ask, if I could, Facebook and Twitter, can you tell us what you were seeing on a couple issues that are of great concern to us? The first is this: I applaud Twitter for putting those labels on the presidential tweets regarding absentee voting, but I’m concerned that foreign powers may be amplifying that misinformation about whether voting by absentee is a safe, secure way to vote. Are you seeing any foreign manipulation or amplification of that false information? And likewise, are you seeing yet efforts by foreign powers to exploit divisions as they did in 2016 around Black Lives Matter or now regarding the pandemic?

Nick Pickles: (34:59)
Thank you, Mr. Chairman. In terms of both of those issues, I can begin by reassuring the committee that we haven’t found evidence of concerted platform manipulation by foreign actors in either of those areas. Where you’re absolutely right to draw the connection is we have seen a change in tactics, and I think this in part is a result of the success that we’ve had in clamping down on inauthentic platform manipulation operations. So activity particularly around COVID and the geopolitics, but also the issues in the United States, particularly around policing, has transferred into state-controlled media, has transferred into the geopolitical space.

Nick Pickles: (35:41)
And so we are seeing the public use of Twitter, accounts that use Twitter are visible to anybody with or without an account, and those media entities and those government accounts are engaging in the geopolitical conversation. And so we have seen, for example, some crossover from the Chinese actors comparing the police response in the United States to recent protests with the police response in Hong Kong. And so that shift from platform manipulation to overt state assets is something that we’ve observed.

Nick Pickles: (36:13)
And I think it reminds us that we have to be vigilant that the challenges we faced in 2016 are constant and that this remains an evolving security challenge that we have to keep one step ahead of and keep looking at how bad actors change their behavior.

Adam Schiff: (36:28)
Mr. Gleicher.

Nathaniel Gleicher: (36:31)
Thank you, Mr. Chairman. I agree particularly with something Mr. Pickles just said, we definitely see the tactics in this space evolving, and we see the threat actors trying new efforts to get around the controls that are put in place. We haven’t seen coordinated inauthentic behavior on the part of foreign governments, particularly targeting voting systems or how to vote in the United States. It’s definitely something we’re monitoring. One of the most important tools in this context is ensuring that people do have accurate information about how to vote and how to vote safely.

Nathaniel Gleicher: (37:03)
Part of the reason we launched our voting information center and announced it yesterday actually feeds directly into our security strategy. Providing that accurate information is one of the best ways to mitigate those types of threats. You also asked, Congressman, about coordinated inauthentic behavior engaging with the protests. What I would say is we’ve seen some cases of fraudsters and spammers trying to make money off of public debate around the protests, even going so far as trying to sell nonexistent t-shirts to attendees or to people who might be attending.

Nathaniel Gleicher: (37:38)
We’ve seen people trying to run financially motivated scams. We have not seen foreign actors engage in coordinated inauthentic behavior around the protests yet. And we have teams that are proactively hunting for that, so that if we do find it, we could announce it publicly, people would be aware of it, and we’d share that information here with the committee and with our partners in industry and in government.

Adam Schiff: (38:00)
Thank you. Jim Himes.

Rep. Jim Himes: (38:07)
Thank you, Mr. Chairman. And thank you all for being here. Mr. Gleicher, I want to ask you a question that I kind of feel has been insufficiently addressed here. I’m glad, and I’ve read the testimony. I’m glad everybody’s doing so much work to try to identify foreign presence and all of that sort of thing, but I’m pretty convinced that when this republic dies, it doesn’t happen because the Russians broke into Ohio voting machines, or they managed to buy ads on Facebook or Twitter. It happens because our politics become so toxic, so polarized that we don’t recognize each other anymore as Americans.

Rep. Jim Himes: (38:47)
And there is a foreign nexus here, because if every single American household is full of toxic, explosive gas, as I think it is today, all it takes is a match from Russia or from Iran or from North Korea or from China to set off a great conflagration. So Mr. Gleicher, I read the Wall Street Journal article about the work that has happened inside Facebook. And I was very troubled by the apparent unwillingness of Facebook to, in a very public and specific way, come to terms with the notion that its algorithm, which is really what worries me in terms of the security of this country, promotes polarization, division and anger.

Rep. Jim Himes: (39:39)
You keep using the words community and authentic. I hear them over and over again. Those are value neutral words. There’s nothing good or bad about authenticity, or good or bad about community. I’m old enough to remember the 1984 Olympics, held in Sarajevo. In 1984, Sarajevo was Muslims and Bosnians and [inaudible 00:39:59] coming all together, and it was wonderful. And then Slobodan Milosevic in the 1990s injected some authenticity, some anti-Muslim bias on the part of the Serbs, and created new communities: murderous Serbian nationalists.

Rep. Jim Himes: (40:16)
Those are value neutral words. So Mr. Gleicher, the real threat to me feels like Facebook’s underlying business model and algorithm which promotes engagement, but engagement means it’s like me driving a highway and watching a car crash. I can’t not look at it. So that is what scares me most. And I’ve got two minutes and 20 seconds. So Mr. Gleicher, I really want to understand what Facebook is specifically doing, and to some extent this pertains a little bit, I think, to Twitter and YouTube, etc., but what is Facebook doing to not be the Slobodan Milosevic of the destruction of the American Republic?

Nathaniel Gleicher: (41:00)
Congressman, understanding how to ensure not just authentic, but positive and collaborative public debate is absolutely critical. I completely agree. What we’ve found is that people who are on our platforms, users, they don’t want to see click bait. They don’t want to see the type of divisive content that you’re describing. If we were to show only that, users actually wouldn’t want to engage and they wouldn’t come back. That’s why we down rank content that qualifies as click bait. That’s why we take steps, for example, not to recommend groups that are repeatedly sharing information that crosses certain lines.

Nathaniel Gleicher: (41:38)
That’s why we have refocused the public debate around our platform, and when we think about the algorithm in particular, toward content from friends and family, content that centers around discussions and public conversations, not the type of divisive narrative that you’re describing.

Rep. Jim Himes: (41:56)
Mr. Gleicher, I’m in politics and in the political realm, and I’d like to see the facts behind the studies underlying the notion that people don’t like divisiveness and that they don’t like click bait. I mean, click bait is a thing because people like click bait, but I’m in politics and I know that there’s a difference that when I walk into a room full of people who think like I do and present nuances and complications and shades of gray, that’s a pretty boring meeting and I’m a pretty boring guy. But when I walk into a room and I present things as good versus evil, as the system is rigged against you, that is an energized room.

Rep. Jim Himes: (42:36)
And so what you’re telling me in the political realm is just not resonating with me. And again, I get it. You guys want excitement. That’s what draws me to Facebook, but I want to understand specifically, and I understand this probably means less profit, but specifically, what is Facebook going to do to be more constructive? Your word.

Nathaniel Gleicher: (42:59)
Congressman, the interesting distinction, the important distinction we’ve seen: people will certainly click on click bait, hence the name and the intent. But in the long term, if they’re looking for a community, if they’re looking for a place they want to engage, they don’t want that community to be rife with it. And that’s why we’ve taken steps to adjust the way we’re ranking content to ensure that we’re not prioritizing and promoting that. And it directly aligns with having a community, having a platform, that will persist and that users will want to use.

Rep. Jim Himes: (43:28)
Well, Mr. Gleicher, I’m out of time, but I’m going to continue this discussion because you’re just not resonating with me. Look at the presidential candidates. And if the chairman will give me just 10 more seconds, look at the presidential candidates. Look at the current president. Americans are drawn to people who are explosive and controversial and paint the world in terms of good and evil and black and white. My friend, John Delaney. Who remembers John Delaney? He was constructive and thoughtful and moderate in his approach.

Rep. Jim Himes: (44:02)
So I end this conversation, Mr. Gleicher, more concerned than when I started because you’re telling me that people don’t seek that kind of thing in politics. And that’s just contrary to everything I observe in my own political life and in the political life of the country. So Mr. Chairman, thank you for your forbearance. And I yield back.

Adam Schiff: (44:21)
Thank you. Terri Sewell.

Rep. Terri Sewell: (44:23)
Thank you, Mr. Chairman. I also want to thank our speakers and panelists today. We know from past disclosures that foreign actors have taken advantage of our platforms to spread misinformation, which only undermines our democratic discourse. Through your platforms, these actors can attempt to covertly influence or skew our national conversation towards chaos and confusion, and in fact they’ve done so. It is therefore incumbent upon each of your companies to quickly expose foreign influence operations and disable them before this misinformation spreads. You must stay ahead of their sophisticated tactics, which are ever evolving as they gain a better understanding of what flames they can stoke in order to sow more discord. We all saw that Twitter responded to fact-check misleading information about mail-in voting tweeted by President Trump last month. Mr. Pickles, I represent America’s voting rights district, the heart of which is my hometown of Selma, Alabama, and I’m in my Selma office today. We know that marchers bled, fought and died for the rights of all Americans, especially African Americans, to vote in this country.

Rep. Terri Sewell: (45:36)
We have a sacred responsibility, I believe, to protect the rights and the votes of all Americans, which includes actions like the one taken by Twitter to counter misinformation about voting, regardless of how powerful the person sharing the misinformation is. Propaganda designed to suppress the black vote has been a part of our democracy since we were able to vote. While it is not new, social media creates the potential for such voter suppression tactics and misinformation to spread even further.

Rep. Terri Sewell: (46:09)
I therefore urge all of your companies to vigorously uphold your commitments to preempting misinformation whether from a foreign or domestic force that could interfere with the voting process. My community has been the target of misleading information about voting for generations, always bearing the brunt when institutions like yours don’t take responsibility to stop the spread of misinformation. Russian tactics to interfere in our elections, which we saw all too well in 2016, largely targeted black Americans and other communities of color.

Rep. Terri Sewell: (46:44)
Between the disparate impact COVID-19 has had on the black community and the growing racial tensions our country has over the murders of George Floyd and Breonna Taylor and so many others, the political landscape is fertile ground for foreign adversaries, as well as domestic aggressors, to undermine trust and sow confusion in this country. I’d like to submit, Mr. Chairman, for the record, two articles, one entitled Facebook, Twitter Suspend Russian-Linked Operations Targeting African Americans on Social Media from the Washington Post and the other Russian Election Meddling is Back, via Ghana and Nigeria, and in Your Feeds from CNN.

Rep. Terri Sewell: (47:28)
Can I submit these for the record, sir?

Adam Schiff: (47:31)
Without objection.

Rep. Terri Sewell: (47:32)
I’d also like to enter into the record a report from Graphika called IRA in Ghana: Double Deceit. These articles tell us about a sophisticated cross-platform influence operation that targeted black communities in the United States. The operation, which was exposed by CNN, Twitter, Facebook, Graphika and Clemson University professors, was run from Ghana. And I’d like to ask the question of you, the representative from Twitter. Could you have found this effort quicker? And if so, what could you have done to stop it even quicker?

Nick Pickles: (48:13)
Well, thank you for raising these critical issues. And we, as you say, have taken down a number of campaigns. In the 2018 midterms, we took action on around 6,000 tweets of voter suppression that were domestic in origin, primarily. You’re absolutely right to highlight the challenge here of moving as fast as possible. And that’s about investment. It’s also about partnerships. And so as you highlighted there, industry working together, working with expert researchers, and the work at Graphika, Camille Francois, Ben Nimmo, doing incredible work that we saw this week in their report on Secondary Infektion.

Nick Pickles: (48:46)
So I think the answer is we need stronger partnerships. We need more information sharing, both with government and across industry. And since 2016 in both of those areas, we’ve made significant progress and are in a much stronger position today.

Rep. Terri Sewell: (49:00)
Mr. Gleicher, what could your platform have done to spot these kinds of operations sooner? What lessons were learned, and how are we to create barriers to more sophisticated actors?

Nathaniel Gleicher: (49:15)
Thank you for the question, Congresswoman. What we saw, and one of the things I’m really proud of here, is that this was a network that our teams found, exposed, and worked with our colleagues in industry and in journalism and elsewhere to make sure it was taken down and people were aware of it. Two things that we learned: first, this technique is actually not new. It’s a callback to techniques that Russian actors have used for decades. We’re seeing more and more that they are returning to techniques from the ’60s, ’70s, and ’80s in attempts to evade or get around the detection we’ve put in place, and we as a community have to be vigilant to that.

Nathaniel Gleicher: (49:51)
One key thing we learned here was that the platforms are able to do a particular kind of investigation on our platforms, on the networks and connectivity on the platforms. The CNN reporting in this context, where they are able to go on the ground and interview individuals, was an incredibly powerful complement to that work. And I think it reinforces how you need to have close ties between these communities to surface this information quickly, expose it and get it down before it can have significant impact.

Rep. Terri Sewell: (50:21)
Thank you, Mr. Chairman.

Adam Schiff: (50:26)
Thank you. Jackie Speier.

Rep. Jackie Speier: (50:29)
Mr. Chairman, thank you. Thank you all for joining us this, well it’s morning here, but afternoon there. Mr. Gleicher, you may or may not know that Facebook is headquartered in my congressional district. I’ve had many conversations with Sheryl Sandberg and I’m still puzzled by the fact that Facebook does not consider itself a media platform. Are you still espousing that kind of position?

Nathaniel Gleicher: (51:05)
Congresswoman, we’re, first and foremost, a technology company.

Rep. Jackie Speier: (51:11)
You may be a technology company, but your technology company is being used as a media platform. Do you not recognize that?

Nathaniel Gleicher: (51:22)
Congresswoman, we’re a place for ideas across the spectrum. We know that there are people who use our platforms to engage and in fact, that is the goal of the platforms to encourage and enable people to discuss the key issues of the day and to talk to family and friends.

Rep. Jackie Speier: (51:39)
How long or maybe I should ask this. When there was a video of Speaker Pelosi that had been tampered with, slowed down to make her look like she was drunk, YouTube took it down almost immediately. What did Facebook do and what went into your thinking to keep it up?

Nathaniel Gleicher: (52:01)
Congresswoman, for a piece of content like that, we work with a network of third party fact checkers, more than 60 third party fact checkers around the world. If one of them determines that a piece of content like that is false, then we will down rank it and we will put an interstitial on it so that anyone who would look at it would first see a label over it saying that there’s additional information and that it’s false. That’s what we did in this context. When we down rank something like that, we see the shares of that video radically drop.

Rep. Jackie Speier: (52:32)
But you won’t take it down when you know it’s false or been tampered with?

Nathaniel Gleicher: (52:36)
Congresswoman, you’re highlighting a really difficult balance and we’ve talked about this amongst ourselves quite a bit. And what I would say is if we simply take a piece of content like this down, it doesn’t go away. It will exist elsewhere on the internet. People who are looking for it will still find it.

Rep. Jackie Speier: (52:51)
I understand that, but there will always be bad actors in the world. That doesn’t mean that you don’t do your level best to show the greatest degree of credibility. I mean, if YouTube took it down, I don’t understand how you couldn’t have taken it down, but I’ll leave that where it lays. You had said in your opening statement that you have taken down, I believe that’s what you said, taken down some networks from Iran, from Russia, and you might’ve mentioned another country. Could you drill down for us and tell us specifically what they were selling?

Rep. Jackie Speier: (53:38)
What was it that you found offensive that you actually took that down where you didn’t take the video of Nancy Pelosi down?

Nathaniel Gleicher: (53:48)
Congresswoman, the focus of my team’s work is on what we call inauthentic behavior. That is not the content that’s being shared, but behavior or techniques that these actors use to hide their identity, make their content appear more popular than it is or otherwise mislead users. Last year, we took down 50 networks or more than 50 networks around the world for engaging in this from many different countries. This year, so far, we’ve taken down 18.

Rep. Jackie Speier: (54:14)
Can you drill down as to what they were doing?

Nathaniel Gleicher: (54:17)
Absolutely. So a network like this will be using fake accounts, an organized group of fake accounts, to mislead users about who’s behind the network. For example, we saw a network based right here in the United States that was representing itself as a US news source when in fact it was using networks of accounts run by actors from overseas to write its content and purport to be Americans. We’ve taken down networks linked to entities coming out of Russia that present themselves as local when in fact they are centrally controlled by another organization.

Nathaniel Gleicher: (54:58)
So for example, we did a takedown of a network linked to Sputnik News, a state media organization out of Russia, that ran seemingly independent news organizations across Europe, representing themselves as independent and claiming to be independent, when in fact they were all centrally controlled, driving a message directly back from the organization that was running them. That’s the type of behavior that we enforce against when we see these actors engage in deceptive techniques. We announce it publicly, we share information with third parties and we make sure that it’s very clear in public.

Nathaniel Gleicher: (55:35)
We have a monthly report where we detail this at length, and I can ensure that our teams share with you our recent reports so you can have more detail, if that’s helpful.

Rep. Jackie Speier: (55:42)
I’m sure that would be. Thank you. I yield back.

Adam Schiff: (55:48)
Thank you. Mike Quigley.

Rep. Mike Quigley: (55:50)
Thank you, chairman. Thank you for all for participating. You all reference sharing and collaboration and its value. Tell us what’s impairing that, if anything. Is there anything legally that makes this more difficult? Are there actors that make it more difficult or just internal measures that limit your capacity to share information and to collaborate with other platforms, third parties, federal, state, and local officials? Anyone?

Nick Pickles: (56:28)
I’m happy to offer some thoughts and then let colleagues chip in. Thank you, Congressman, for raising this important issue. The space that we work in often involves a tension between privacy and security. So on the one hand, laws may compel us to not store data for longer than we need it. On the other hand, we may not know the information is relevant at the time that we removed the accounts. Information operations and actors who are trying to hide their behavior often use a variety of techniques. And while we focus on the social media end of the spectrum where you see the content, we often don’t focus on the technical infrastructure the actors may use.

Nick Pickles: (57:07)
So we don’t have all the information straight away. So there’s a tension there between removing content and then removing the data, versus holding onto data in case we learn something later that would actually enable us to say, these accounts that we removed at the time, we didn’t realize they were state-linked, but now we think they are state-linked. So that tension is definitely one. And secondly, as I referenced in my opening statement, the more government can declassify information, the better. I think one of the striking things from Graphika’s report, Secondary Infektion, was that there were 300 platforms used.

Nick Pickles: (57:41)
And while the larger platforms have invested significantly, there’s also a responsibility on us to mentor and support our peers in industry based on our skills and expertise. I think government being able to share more information publicly would enable more of those small companies, who are often not part of these discussions firsthand, to learn and take steps to protect themselves. So happy to let others weigh in, but those would be the two areas I’d highlight.

Nathaniel Gleicher: (58:09)
Congressman, I’d just add, I think in the last couple of years, particularly among industry and with our partners in government, our ability to share information has gotten much, much better. We’ve had a number of cases where we’ve gotten tips from, for example, the FBI that has helped us take rapid action. Two things that are worth considering, I think there’ve been some questions about transparency and sharing data about these takedowns publicly. That type of sharing information publicly we think is really important because it helps people understand what’s happening.

Nathaniel Gleicher: (58:39)
The legal framework around what you can share and what you should share is not as clear as it could be right now. I think that raises questions for all the platforms. How do we ensure that the public is aware of what’s happening and researchers are aware of what’s happening in a way that they can dig in without impacting the privacy of innocent users who could get swept up in it? That’s an area where I think it could be particularly valuable. But at the same time, as Mr. Pickles noted, the information that we’re sharing here often is very sensitive and it needs to be handled very carefully.

Nathaniel Gleicher: (59:12)
So I don’t think we’re ever going to remove the tension here, but I do think that some clarity would help us be able to share faster, share more with the public and share more with our partners.

Richard Salgado: (59:26)
Thank you. I think the biggest struggle we’ve had on the information front was referred to by Mr. Pickles already, and it is quite understandable: the formidable difficulty that the intelligence community has in sharing classified information with the companies. And I suspect, as Mr. Pickles suggested, that there’s an issue with over-classification of this, or perhaps a difficulty, bureaucratic or otherwise, in the agencies finding ways to declassify the less sensitive parts of information, threats, leads that would be useful for the platforms.

Richard Salgado: (01:00:06)
Other than that as Mr. Gleicher pointed out, the information sharing among the companies and with governments has improved greatly since 2016. It’s almost to the point of being unrecognizable compared to where we were back then. And we’re not seeing a lot of legal impediments that we need to see addressed other than the issue with the classification.

Rep. Mike Quigley: (01:00:28)
Thank you all. Mr. Chairman, I yield back.

Adam Schiff: (01:00:33)
Thank you, Mr. Quigley. Eric Swalwell.

Rep. Eric Swalwell: (01:00:36)
Thank you, Chairman. Mr. Salgado, do YouTube’s comments fall under the same policy as Google display ads, your comments policy on YouTube?

Richard Salgado: (01:00:48)
I’m afraid I don’t know the policies deeply enough to be able to give you useful information on that. We have so many products with so many policies, and I have to confess I’m not …

Rep. Eric Swalwell: (01:00:57)
Okay. If you could follow up with a letter on that. Have you ever taken down One America News Network videos for misinformation?

Richard Salgado: (01:01:07)
That’s an issue I’m not sure I can give you a direct answer on, a specific question about a specific publisher anyway …

Rep. Eric Swalwell: (01:01:16)
How about Fox News?

Richard Salgado: (01:01:19)
Again, I don’t know if there is an example of that. I’d have to check. There are an awful lot of removals that you can see on our transparency report, but that detail I don’t know right now.

Rep. Eric Swalwell: (01:01:28)
The New York Times, Mr. Salgado, recently reported that conspiracy news site Epoch, E-P-O-C-H, has spent $1 million on ads with YouTube. Does that sound accurate?

Richard Salgado: (01:01:42)
I don’t know the figure. I understand they may be an advertiser. Yeah.

Rep. Eric Swalwell: (01:01:45)
And according to Social Blade, a website that estimates the revenues that content creators get paid by YouTube for their content, RT, One America News Network and Epoch are earning collectively up to $2 million in revenue this year. Does that seem accurate to you?

Richard Salgado: (01:02:07)
I need to check with the team to be able to come up with real figures that I can testify to and help you.

Rep. Eric Swalwell: (01:02:12)
Could you walk me through how a creator like One America News Network, which has been, I think, called out by most credible news agencies as propagating Russian materials, could get paid by Google when it creates a video that people watch? Can you just explain how they would actually make money in addition to running ads?

Richard Salgado: (01:02:35)
Well, there are of course two different products that you’re talking about here. One is an offering where you can advertise, you can pay to have your advertisement appear on the blog posts or websites of other publishers. And then there’s the ability to actually monetize the content you upload, for example, to YouTube. There are policies, as you referred to earlier on, on both of those around who can advertise and what can be advertised. And then there are also policies on what sort of content we are willing to actually run advertisements on. So there are …

Rep. Eric Swalwell: (01:03:18)
Will Google have a policy for vaccine misinformation on YouTube?

Richard Salgado: (01:03:23)
There will be policies that address, on ads in particular, ads that can cause public health damage, deceptive ads. There’s a range of policies. They’re actually all publicly available and can be looked at by anybody, including advertisers, of course.

Rep. Eric Swalwell: (01:03:38)
And can I just do a little round robin here, and this is an unclassified briefing, but I do want to know how recently have any of you met with the FBI about misinformation? Again, I don’t want any details about the case, about the country. Just when was the last time-

Rep. Eric Swalwell: (01:04:03)
you had a conversation about something you saw? I’ll start with Mr. Salgado.

Richard Salgado: (01:04:11)
These are routine conversations. I won’t say that they are necessarily weekly, but it starts to approach the conversational cadence that we have. I’m fine to be open about this, there’s nothing classified about it. They tend to be largely with the local field office out here in California. Sometimes they’re ad hoc, and we also have a regular cadence of meetings, but it’s actually rather routine at this point.

Rep. Eric Swalwell: (01:04:43)
Thank you. Mr. Pickles?

Nick Pickles: (01:04:46)
[inaudible 01:04:46] Our engagement with the FBI is incredibly regular, whether it be phone calls or emails. We have formal meetings as well on a monthly basis, but the dialogue is as-needed, so actually, if the FBI has concerns about a specific tweet, a specific issue, we’ll have realtime dialogue; we’re not going to wait for meeting dates on the calendar.

Rep. Eric Swalwell: (01:05:07)
Mr. Gleicher, just before I go to you, I ask because this committee has worked to pass legislation that would essentially put a duty to report on social media companies if they see foreign interference on their platform and that has not become law yet, it has not been passed in the Senate, but could you just tell us about your interactions with the FBI when you see misinformation?

Nathaniel Gleicher: (01:05:30)
Certainly Congressman. As Mr. Pickles and Mr. Salgado mentioned, we actually have a periodic monthly meeting with FBI and DHS and government partners that we all participate in that is at a strategic level so we can talk about the threats that we’re seeing. We can all make sure that we’re sort of aligned and working together as effectively as possible. In addition, whenever we see foreign interference or a CIB takedown, we’ll share information about that with law enforcement. So for example, we announced our latest monthly report of all the takedowns we’ve done and that was last week or the week before. When we did that, we would have shared information ahead of time with our law enforcement partners to make sure that they can follow up if there’s something particularly that implicates foreign interference in the United States.

Rep. Eric Swalwell: (01:06:13)
Great. Thank you Chairman, I yield back.

Adam Schiff: (01:06:19)
Thank you. Next up. We have Mr. Heck. Denny Heck.

Denny Heck: (01:06:25)
Thank you, Mr. Chairman, and thank you to the panelists for being here. I appreciate the updates on some of your efforts growing out of the learning experiences of the last few cycles. I choose not to spend my time, however, on the issues of election interference per se, nor disinformation, but more along the lines of what Mr. Himes was pursuing. In fact, the exchange, Mr. Gleicher, with you leads me to ask a variation on what my originally intended question was. I must also add that, like Mr. Himes, your answer did not resonate with me, sir. It is axiomatic that civic discourse in America has degraded. That is inarguable. It is also equally self-evident that the social media platforms that we are here talking about have amplified that degraded civic discourse and, as a corollary, that you have also profited off of it.

Denny Heck: (01:07:36)
So I want to ask you again, do you not accept any responsibility for this, and if you don't, for the love of god, tell me your logic for not accepting any responsibility, and I'm going to say just a couple things before I give you your shot. The first of which is politicians aren't exempt. Our trade craft has fully utilized these tools to our benefit, and to suggest otherwise would be hypocrisy. But it reminds me a little bit of if somebody had a bullhorn to amplify their communication, and they walked up next to you and they put it right in your ear and they kept using it until you went deaf; for you to not accept responsibility as the bullhorn maker seems to me a bit of a stretch of product liability immunity. The fact is, civic discourse has degraded, and as Mr. Himes has set forth, this poses an extreme threat to our country. The fact is that you amplify it. The fact is that you profit off of it, and First Amendment considerations, which are really important, notwithstanding, do you not accept some responsibility for this? Mr. Gleicher, let's start with you, sir.

Nathaniel Gleicher: (01:09:05)
Congressman, I think we have critical responsibilities. Yes to ensure that debate on our platforms is authentic, also to ensure that it is as open and positive and collaborative as possible. Part of what you’re identifying Congressman is how humans interact in public discussion. It’s why we’ve taken very serious looks, it’s why we have thought about what we promote, how we promote, what we recommend to address exactly these challenges. I do also think that the rise of social media platforms, the rise of the internet, has led to voices being heard at volumes that have never happened before and the most difficult challenge here is how to peel these two apart. How do you mitigate some of the challenges you’re describing, and I agree, these are essential challenges that we’re all grappling with, without also undermining the incredible profusion of new voices we’ve heard in public debate. We’ve looked at and we’ve done a number of changes to attempt to tackle this. I would never suggest that we can solve this problem alone. I think part of this is how humans engage and the platforms have an opportunity and a responsibility to do everything we can to encourage and enable the best discussion but I’d never suggest that we can solve this problem, Congressman.

Denny Heck: (01:10:22)
Well I’m reminded of what Dr. King said, the moral arc of the universe bends towards justice, in this case, the moral arc of social media platforms isn’t bending towards justice fast enough. Look, there isn’t a person on this call that hasn’t been told a thousand times by their staff, “Stop reading the comments.” They asked us to stop reading the comments because they are so unbelievably uncivil and personal. It’s character assassination and demonization, and it manifests itself in this polarization, the Exhibit A for which is there are no members of the minority party sitting in on this, so polarized has our political culture become. The fact is, it is toxic. The fact is, it is a threat. The fact is, you are the bullhorn manufacturer, and the fact is, you’re not moving fast enough. Thank you Mr. Chairman I yield back.

Adam Schiff: (01:11:29)
Thank you Mr. Heck. Mr. Krishnamoorthi, Raja Krishnamoorthi.

Raja Krishnamoorthi: (01:11:37)
Hello Mr. Chairman, can you hear me?

Adam Schiff: (01:11:39)
Yes we can hear you.

Raja Krishnamoorthi: (01:11:40)
Okay, great. Thank you so much. I just want to direct a couple questions, first to Mr. Pickles. Back when the protests were going on in Minneapolis, the president put out a now infamous tweet calling the protesters thugs and making a reference to an infamous quotation that “When the looting starts, the shooting starts.” I thought that you at Twitter took the right approach in putting a label on that particular post. Can you tell us a little bit about why you did that?

Nick Pickles: (01:12:26)
Happy to Congressman. This is a policy that we launched last year. We announced that in situations where public figures who are verified on Twitter make a statement that we have deemed to break our rules, but we feel the preservation of that tweet allows essential public scrutiny and essential public debate, then we would strike a balance: rather than remove the content, which we fear would stop that debate, we allow the content to remain on Twitter. [inaudible 01:12:58]

Raja Krishnamoorthi: (01:12:58)
[inaudible 01:12:58] for one second? Sorry, why did you say it breaks your rules?

Nick Pickles: (01:13:03)
So in the case of this specific tweet, we actually have this in the label. We felt this tweet violated our rules on the glorification of violence, and when we apply that label, we actually stop people retweeting it as well. So, to the previous comment about engagement, we preserve it for discussion but we don't allow further engagement.

Raja Krishnamoorthi: (01:13:20)
So let me direct my next question to Mr. Gleicher of Facebook. I cannot for the life of me understand why you folks allowed that post to stay up for as long as you did and not issue any kind of similar comment or put any similar label on that post as Twitter did and I’d like you to have a chance to respond to why you don’t think that that was glorification of violence or that it was proper material for a post on your site.

Nathaniel Gleicher: (01:14:01)
Congressman, thank you for the question. I personally found that post to be abhorrent. I know that that view is widely shared. My team doesn't make direct content decisions, but what I can tell you is that as Mark has made very clear, we frame our approach in this space anchored in freedom of expression and respect for the democratic process, it's very critical –

Raja Krishnamoorthi: (01:14:23)
How does that show respect for anything? I mean, the reason why you reacted the way you did and called it abhorrent is that it completely eviscerates civil discourse. Now let me ask you another question. What if the Internet Research Agency took that post and put … I don’t know, a million bots on it, and just decided to say to everyone in the United States and put a billion dollars behind that post in sponsored ads that, “When the looting starts, the shooting starts, so go start shooting.” What would you do in that instance?

Nathaniel Gleicher: (01:15:05)
Congressman, we have policies around content and we have policies around behavior. Any activity that uses fake accounts to amplify something would come down.

Raja Krishnamoorthi: (01:15:15)
What if there was no fake account? What if it was an authentic account from the Russian Federation, even let’s just say it was a state actor. They did not do anything to modify the post that Donald Trump put up, but just put money in sponsored ads behind that post and said, “We are the Russian Federation. See what your own president is telling you to do.” What would you do in that instance?

Nathaniel Gleicher: (01:15:45)
Congressman, given the hypothetical there, it’s a little hard to say, but what I can tell you is we have particular policies around ads. You mentioned ads as an example. Just yesterday we began blocking ads from state media including from Russia coming into the United States ahead of the election, so for example if we’re talking about a state media agency from Russia, they wouldn’t be able to do that, they wouldn’t be able to run ads into the United States.

Raja Krishnamoorthi: (01:16:09)
Okay, what if it’s a private actor, there’s this thug in Russia who runs the IRA, and let’s say he … I forgot his name, Denny Heck knows him well but Yevgeny Prigozhin I think, but anyway that guy, what if he just puts a billion dollars behind it, it’s not a state actor, are you saying that you would prohibit him from doing that?

Nathaniel Gleicher: (01:16:33)
Congressman, Prigozhin and his organizations, the Internet Research Agency, are banned from our platforms. So we have enforcement on content, we have enforcement on behavior, and then we have enforcement on actors. For organizations like the Internet Research Agency, given the history, given the activity they have engaged in, they have no place on Facebook. If they were to try to come back, as we've seen them do, we will identify that and remove it, so we would not permit that because of the organization it's coming from.

Raja Krishnamoorthi: (01:17:00)
I'll just close with this, which is that that post was so abhorrent, as you said, not I, that I find it abhorrent that you would have allowed that to stay up, and I see that Mr. Zuckerberg is dancing around this post, but this is exactly why people [inaudible 01:17:22] right now. Thank you.

Adam Schiff: (01:17:28)
Thank you Mr. Krishnamoorthi.

Val Demings: (01:17:31)
Thank you so much Mr. Chairman and thank you to our witnesses for joining us here today. What a critical discussion, but I think we may leave with more questions than answers. We just want to feel better. Mr. Salgado, I'd just like to start with you. You said your mission statement in 1998, and don't get me wrong, we were really excited about this new platform, a way to communicate and connect and receive information, was to organize the world's information and make it universally accessible and useful. You also indicated that the integrity of your product continues to improve. Could you just give me a few examples of how the integrity of your product has continued to improve, and if you were rewriting that mission statement today, what would it say, what would you add or take away from it?

Richard Salgado: (01:18:33)
Thank you for that question. I guess to the integrity of the products part, there are so many examples. I guess a good one to go to is the core product for Google that it's so known for, which is search, and the constant improvements in our algorithms to improve the results that you get when you go type in a search query, making sure that the authoritative plus relevant information is what appears at the top. We use algorithms to do it, but it's also informed through real people who check the results, make sure that things are coming out as you would expect and want for users, and we're able to adjust algorithms, and it is a constant tweaking of our algorithms, to improve these search experiences that people rely on, and we can –

Val Demings: (01:19:22)
How would you give yourself a letter grade there? What would that letter grade be? From 1998 till now [inaudible 01:19:32]?

Richard Salgado: (01:19:32)
The way search is, I suppose expectations in 1998 are very different than they are in 2020. We've had Google around for so long and people just expect it to work, but it's amazing how fast it is. We have billions of hits, and yet your results are right there like you're the only person using it. So when you think about that, I think we are in a solid A category, and I am not an easy grader, and it continues to improve, adding different features as well. We know people obviously are very concerned about the COVID-19 pandemic, so in the search product we have made it much easier for users to find good, authoritative, reliable medical information for their queries on COVID-19. So it also can be a flexible product that notes what's really important to a vast number of people who are using it at any given moment.

Val Demings: (01:20:31)
Okay, let me move on, but thank you for that. Mr. Gleicher, I believe you said that voices are being heard that have never been heard before, but I also believe that more chaos and disinformation is being heard or seen like never before. I do believe that all of your platforms are the vehicle by which disinformation, racism, hatred, sexism, and any other kind of ism has traveled the most. I'd like to ask each of you this question. Do you believe, as you strive to get information out and make connections and all, do you have a moral obligation, yes, no, and if you feel you do, what do you see that moral obligation as? Mr. Salgado, we can start with you, but I'd like to hear from everybody.

Richard Salgado: (01:21:35)
Well I think we have moral and ethical obligations to our users. I think we have a great focus on making sure that the data that we hold for our users is secure, the accounts are secure. An awful lot of the election interference that we saw in 2016, and even the disinformation and influence operations that we're seeing today, implicate Google mostly in the phishing attempts that we see against our accounts, and so there's a good deal of focus on Google in that respect: to make sure that accounts remain secure, that users with little action on their part can remain confident that the accounts are being protected, but at the same time, to try to educate users on the better security practices that are available now and that are not difficult to implement. I think there's an awful lot of responsibility on the part of Google to make sure that the security of the data that users entrust with us is maintained, and to continue to improve that.

Val Demings: (01:22:42)
So in terms of being … I believe the number one vehicle for transporting disinformation, racism, hatred and sexism, you believe the number one priority is the security of the product because your answer centered mostly around that.

Richard Salgado: (01:22:58)
I did center on that, and really the reason … I'll just give you a little background there. Really the reason is that what we have found historically is that most of the involvement, not all of it, and we can talk about YouTube and we can talk about some of the other platforms, but really the bulk of the activity that we saw was the use of Google platforms like Google accounts and Gmail to create accounts on other services that were then used for disinformation campaigns. So making sure that we were able to identify those accounts, particularly where they might be compromised accounts that are being misused for those purposes to help build a misinformation infrastructure on other companies' platforms, is important because that helps the whole ecosystem [inaudible 01:23:45]. But you're right to point out there are other touch points, and those include, even if to a much lesser degree than we see with other companies, platforms like YouTube and even to some extent search, where we work to keep the disinformation off of them entirely, and as I mentioned in my verbal statement, where it starts to get close to our lines, our policy lines, we make it less discoverable and certainly don't recommend it to viewers.

Val Demings: (01:24:14)
Thank you. Mr. Chairman, I’m not sure where I am on time. Do I have time to get an answer from the other –

Adam Schiff: (01:24:20)
Yes, you do.

Val Demings: (01:24:22)
Okay. Thank you so much Mr. Chairman. Mr. Gleicher.

Nathaniel Gleicher: (01:24:28)
Congresswoman, I think you’re identifying and I agree, this is sort of the fundamental tension that we’re all struggling with. Collectively if you look at social media platforms on the internet, I think they are the number one platform for public debate on a whole range of issues and what we’ve seen is that wherever you see that public debate happening, bad actors will participate and will try to use that to spread racism, to spread division, to target public debate in all of these ways.

Val Demings: (01:24:59)
So do you have a moral obligation and if you feel you do, what is it?

Nathaniel Gleicher: (01:25:03)
Congresswoman, we have an essential responsibility, an obligation, to do everything we can to combat that when we see it. That includes ensuring voice for people on the platform so they can speak and it also ensures addressing harm as it emerges on the platform. We have a team that I collaborate with that focuses on dangerous organizations and hate groups. Groups that promote violence, groups that glorify violence. We identify, investigate and we remove them from the platform whenever we see it. We actually just removed two networks linked to a couple of those groups earlier this week. We have teams that hunt proactively for actors that are hiding their identity and using that inauthenticity to drive division, to drive racist narratives. One of the things we’ve seen is that not just foreign actors but domestic actors, when they can operate with impunity, when they can mislead people [inaudible 01:26:00], they will drive more of this harmful and divisive content.

Val Demings: (01:26:04)
Thank you, thank you so much, to our next witness.

Nathaniel Gleicher: (01:26:07)
Thank you.

Nick Pickles: (01:26:10)
Simply yes. Twitter exists to serve the public conversation and we have a responsibility to promote the health of that public conversation. So that's why we've changed our ads policies, because we recognize that political ads in the digital era may not be something that democracies are equipped to deal –

Val Demings: (01:26:27)
So the answer was to get rid of them as opposed to take the disinformation down or hold them accountable.

Nick Pickles: (01:26:36)
I think that's one of the unique things about Twitter: Twitter is a public platform, that [inaudible 01:26:40] Twitter are held to account, their views, abhorrent and otherwise, are exposed to the world. And if I may offer a moment of optimism, since March, we have [inaudible 01:26:52] 250 million Tweets of people expressing gratitude. So I think while we focus on the worst of the conversation, we as a company are focused on being more proactive; much of the content we removed last year we detected ourselves. We're focused on protecting the conversation through our policies. We're focused on transparency, to inform people, and overall that work is having a positive impact. But the value of Twitter, to give people a voice and allow people to express gratitude in these difficult times, we think it's still an incredibly important part of the public [inaudible 01:27:26].

Val Demings: (01:27:27)
Mr. Chairman, thank you so much for your indulgence. Thank you all.

Adam Schiff: (01:27:30)
You bet. Peter Welch.

Peter Welch: (01:27:33)
Thank you very much. One of the questions I think that really is the focus of this is what Mr. Himes brought up, and that is what is happening to public debate and public discourse. We're in a situation now where nobody is prohibited from asserting whatever facts they want, the president is able to say things that are really quite terrible to many of us, and truth has just been a casualty of the whole public debate, and it's a toxic influence on democracy. The fact is as well that social media platforms are incredibly popular, and in some cases they're used for very constructive things; people have gathered for Black Lives Matter rallies often using public platforms to do so. But then you also have state actors and you have pernicious political actors who are using it to undermine everything in the public weal. So each of your companies is trying to deal with that. Mr. Zuckerberg testified before the Energy and Commerce Committee, Mr. Dorsey testified, and each company is trying to make some rules and regulations that it follows to try to bring some order to this. But at a certain point, the question isn't what each individual company or each executive does, it's whether there are laws that impose obligations, and of course what has been perceived as very important to your platforms is the decision Congress made some time ago to not hold you to the responsibility of a publisher.

Peter Welch: (01:29:21)
Publishers of regular newspapers do have to exercise editorial judgment and you do but you’re not legally obligated to. I want to ask each of you, what recommendations you would make for legal changes that would impose some obligations on each of your platforms that are similar to the obligations a publisher has about content and I’ll start Mr. Gleicher with you from Facebook.

Nathaniel Gleicher: (01:29:56)
Congressman, thank you for the question. In my world, I’m focused on our security threats and tackling the challenges we face and what I can tell you is that for my team and the teams that work in this area, the shield created by Section 230 is absolutely essential for us to do our work. We’ve seen threat actors try to target us in response to consistent enforcement we’ve taken and –

Peter Welch: (01:30:21)
Let me interrupt for a second because here is the dilemma. I get it, you're trying to do this in good faith, but the bottom line here is that in Congress, or wherever there is public representation authority, we can't keep up with each one of these things that comes your way. You're doing your best, but at a certain point, we're always chasing after the fact. So is there … Or would it be your view that Section 230 has to be sacrosanct, which in effect leaves the final authority to you as opposed to the final authority to people who have been elected representatives of Americans?

Nathaniel Gleicher: (01:31:01)
Congressman, there is a healthy and important debate right now about how to adjust to the reality of what we’re all facing. We’re going to comply with –

Peter Welch: (01:31:10)
I’m asking how to do it. That’s what I’m asking.

Nathaniel Gleicher: (01:31:12)
Congressman, we’ll comply with the law if Congress wants to make changes. My hope and the piece that I think I can contribute, I hope that as we evaluate that, we do remember the importance of the shield to the ability to protect this voice and we preserve that.

Peter Welch: (01:31:27)
Okay, I get that. Mr. Pickles, how about you?

Nick Pickles: (01:31:40)
Firstly, let me start by saying this isn't just something that protects the companies before you today. This is an instrument that has protected the whole internet, and one of the things at Twitter that we believe in is the open internet. The way that the law currently works, it provides protection for companies to do the very content moderation that we're asked to do by policymakers. In my role, when I hear from governments around the world, one thing I'm often asked is how did the United States build this world class technology industry? How did companies like ours grow from one part of one country? And the answer is Section 230. I think one of the concerns that we have as we move into this space is that while the rest of the world is looking to emulate the United States' domination and success in this area, we're also thinking in the United States about how to weaken that world class –

Peter Welch: (01:32:35)
I understand that, and I’m sympathetic to it, but we’re seeing the downside now. The Section 230 everyone acknowledges was extremely important to giving us an opportunity in the U.S. to build what you’ve accomplished, but there’s a downside that we’re all seeing. Is it time for reconsideration to have some legal standards that apply to all tech platforms?

Nick Pickles: (01:32:59)
I think it's time … We're having this discussion right now. There's proposals, there's hearings, there's a range of discussions. I think my concern is, firstly, just to remind everybody, federal criminal law is not protected by Section 230, so the concerns about content that's criminal, that's not covered by this debate, and I think that's sometimes folded in here. Secondly, there's this idea that one side will say we want to stop moderation and so the solution is to get rid of 230, and the other side will say we want much more moderation, and the same answer is offered. So I am concerned that … It's essential to investigate before legislating, and I think we're still only at the beginning of the investigating stage. There are consequences of changes that could be damaging to competition, damaging for innovation, and damaging for our ability to actually promote and protect our users' speech.

Peter Welch: (01:33:52)
Thank you. Mr. Chairman, I see my time is up. I yield back.

Adam Schiff: (01:33:57)
Thank you Mr. Welch. I have a couple questions I want to follow up on, and then I thought I would ask my colleagues if they have any short follow-up questions as well. It's interesting: in 2017, the last time we had representatives of your companies in, I think it was then your general counsels, it wasn't until the second round of questioning that we got to the issue of social responsibility and how the function of the algorithms might be serving to divide the public. When I asked the question about that in 2017, I remember the counsel for Facebook saying that the jury was still out on whether the platforms were having the effect of Balkanizing or dividing the public. I don't know whether the jury was still out even then, but the jury I think has certainly come back since then, and it's reflected in the degree to which you've gotten questions about that issue, and so let me follow up with a question of my own on that subject. Mr. Gleicher, can you describe for us, because your algorithms are so opaque to the public, to what degree your algorithms prioritize amplification on the basis of engagement or attention, as opposed to factoring in or prioritizing things like friends or family or even truth and accuracy? To what degree do your algorithms currently amplify on the basis of attention and engagement, and has that changed since this problem became apparent? Has that prioritization in your algorithm been downgraded to be less of a priority?

Nathaniel Gleicher: (01:35:50)
Congressman, I can say and I know that we have made a number of changes to prioritization to address this type of risk. Unfortunately, my role isn't focused on our algorithms and our algorithmic work, so I can't speak to detail on it. I'd be happy to have the team who can give you a more accurate answer follow up. I want to make sure we get you the most accurate response.

Adam Schiff: (01:36:09)
Well, I would appreciate it if you could follow up with me in writing, but can you even answer the more basic question: does your algorithm still give first priority, higher than any other factor, to amplification based on engagement and attention?

Nathaniel Gleicher: (01:36:30)
Congressman, our algorithm prioritizes a range of different factors, not any one single one. Certainly not a single factor like attention or engagement. But as I said, I can have the team follow up with specific detail on that.

Adam Schiff: (01:36:44)
No, I realize that there are obviously lots of factors in your algorithm, but is that the number one factor? Are you able to tell us that?

Nathaniel Gleicher: (01:36:51)
Congressman, let me follow up with the detail on that.

Adam Schiff: (01:36:55)
Okay. Last question I wanted to ask, before I see if my colleagues have any follow-up questions, is how do you assess your working relationship with the FBI, with the IC, their willingness and ability to share information with you, and your ability to share information with them or with each other when it comes to foreign interference on your platforms?

Nathaniel Gleicher: (01:37:21)
Congressman, the collaboration within industry and with government is much, much better than it was in 2016. I think we have found the FBI, for example, to be forward-leaning and ready to share information with us when they see it. We share information with them whenever we see indications of foreign interference targeting our election. The best case study for this was the 2018 midterms where you saw industry, government and civil society all come together, sharing information to tackle these threats. We had a case on literally the eve of the vote where the FBI gave us a tip about a network of accounts where they identified subtle links to Russian actors. We were able to investigate those and take action on them within a matter of hours.

Nathaniel Gleicher: (01:38:06)
I do think the points that Mr. Pickles and Mr. Salgado raised earlier about classification are important, and it’s not necessarily… We don’t necessarily need all of the classified details. In fact, there wouldn’t be a good way for us to consume that. But to downgrade some of the information so that we can act on it quickly, that is a really important value, and I know it’s something that our partners in government are working on.

Adam Schiff: (01:38:31)
Any other comments from your colleagues?

Nick Pickles: (01:38:37)
I would just add, the distinction and the contrast between 2016 and where we are now is really night and day. The partnerships are built on incredibly strong personal relationships. As we were discussing earlier, the dialogue is regular, is deep, is valuable. And that's true of our industry peers and it's true of government. So I'd just like to express our gratitude for everyone who's working across the government, particularly those in the FBI and DHS who are working on these issues. That collaboration is critical to our success, and their hard work and our investment really are a force multiplier. So we're grateful for that continued collaboration.

Richard Salgado: (01:39:18)
I agree with everything that's been said, and I would add only that we have been able to be very nimble in shifting when we've needed to, and in the information sharing when COVID-19 sprang up. We recognized that there's a whole new attack surface now that we've got to deal with. We were able to pivot immediately, not that we had all the information or all the answers, but we were able to immediately start focusing attention, asking the right questions, engaging with the government, which was very receptive to this topic. And the same was true with the protests. So it's been a very nimble, quick process to be able to address the changes that come at us so quickly.

Adam Schiff: (01:39:58)
Thank you. Mr. Himes, any follow up questions?

Rep. Jim Himes: (01:40:04)
Yeah, I do. Thank you, Mr. Chairman. And again, thank you all to our witnesses. We’ve had a really interesting conversation today. And Mr. [inaudible 00:04:11], you’ve answered more than your fair share of questions, and felt, I think, a fair amount of concern.

Rep. Jim Himes: (01:40:17)
Let me acknowledge up front that I think a lot of these issues are really hard. I tend to be a First Amendment absolutist. I really don't want Facebook telling me what's true and what's not true, mainly because most statements are some combination of both. I'm not quite sure how I come out on 230, and I get that these are really hard issues.

Rep. Jim Himes: (01:40:38)
But to me, I keep coming back to the algorithm. Because I believe that I’ve got some obligation as a citizen to sort true from false. That’s actually a key act of citizenship. And frankly, if we can’t rely on American citizens to be critical thinkers, we should probably just throw the towel in anyway.

Rep. Jim Himes: (01:40:55)
But the algorithm is different. Because the algorithm takes away my choice. I see what you want me to see. And look, I'll be the first to admit, and I think probably 95% of people are like this: I would rather have a big bowl of Doritos than, if I could use Facebook's internal language, eat my vegetables. And so I get really worried when you say, and I want to give you an opportunity to elaborate on this and then I have one more question for you. I thought I heard you say that people on Facebook aren't actually drawn to the explosive, the controversial. That they are, your word was, constructive. Is that really right? And if so, I'm a little influenced by the May 26th Wall Street Journal article in which this seemed to be a real, profound debate inside Facebook.

Nathaniel Gleicher: (01:41:53)
Thank you, Congressman. I agree that this is an incredibly challenging issue, and I think there's more work to be done. A lot more work to be done. The nuance that I was highlighting there: certainly people are drawn to clickbait. They're drawn to explosive content. It is the nature of clickbait to make people want to click on it. But what we've found is that if you separate it out from the particular content, people don't want a platform or an experience that is just clickbait. They will click on it if they see it, but they don't want it prioritized. They don't want their time to be drawn into that and all the emotional freight of it.

Nathaniel Gleicher: (01:42:26)
And so we are trying to build an environment where that isn’t the focus. Where they have the conversations they want to have. But I agree with you, a core piece of this challenge is people seek out that type of content, wherever it is. I should note that as we’re thinking about how we prioritize this, one of the key factors is who your friends are, the pages and accounts that you follow and the assets that you engage with. That’s the most important factor in what you see. And so people have direct control over that because they’re choosing the people they want to engage with.

Rep. Jim Himes: (01:43:03)
I’d like to ask you a specific question, and what strikes me about this and what concerns me about the algorithm is that there is a way of thinking that humans use that is rational and deliberative: the consumption of information and the weighing of pros and cons. And then there’s a different way of being a human. And that is emotional and angry and tribal. And I think those are different parts of our brain. I do think that in a rational, analytical environment, First Amendment absolutism is justifiable. But I’m profoundly concerned about, as the Chairman said, an opaque algorithm that may lead to the churning up of the sediment of this anger and emotion and tribalism.

Rep. Jim Himes: (01:43:50)
So my question to you is can we address this starting with one of the things that is usually a good place to start, which is transparency? Would Facebook be willing to make, not just the attributes of the algorithm publicly available, but the effects of the algorithm? Quite frankly, I’d like to know how I behave on Facebook. I don’t track it all that well, but late at night, do I look at nasty political stuff? I think transparency would be good. So how open is Facebook to sharing with the public what the actual algorithm looks like and, more importantly, what the behavioral outcomes of the algorithm are? Because if you can show me data that suggest that people look at Facebook and they expose themselves to new ideas and critical thinking, wow, I’m going to be happy. But if the facts and the data show that people go into very dark places, I have a very different response.

Nathaniel Gleicher: (01:44:45)
Congressman, transparency is important here. I think one of the challenges is of course the algorithms we’re talking about, the decision making process we’re talking about is incredibly complex. Showing that information in a way that is consumable and meaningful is extremely important because it’s very easy to jump to conclusions.

Nathaniel Gleicher: (01:45:02)
Two things that I would offer; the first is when thinking about thin slicing the way humans make these very quick decisions, one of the most important pieces is whether they have context in order to make the assessments they’re trying to make. And one of the challenges with the internet, not just social media, but the internet generally, is that it’s historically been a context stripper. And so one of the things we’re focused on is how do we provide more context to users so that they can make those assessments? Whether it’s on content that’s rated false by our fact checkers, state controlled media entities and others.

Nathaniel Gleicher: (01:45:36)
We’re exploring other ways to be more transparent, and I’d be happy to talk more about that. One piece of research that’s interesting, there’s a team at Harvard and elsewhere, [inaudible 01:45:46] and some others who’ve done some really interesting research on polarization and the impact of it, both in social media and traditional media, and had some pretty interesting conclusions, including that for certain people, it actually broadens and pierces bubbles, and for others, it reifies them. That kind of research is incredibly valuable here, and I’d be happy to talk more about that.

Rep. Jim Himes: (01:46:10)
Thank you. Thank you, Mr. [inaudible 00:10:12]. I’ll follow up and yield back my time.

Adam Schiff: (01:46:15)
Thank you. Jackie Speier, do you have any followup questions?

Rep. Jackie Speier: (01:46:18)
I do, Mr. Chairman, thank you. And thank you all again for being here. This is a complicated issue and we’re all struggling to make sure it’s fair. Let me just point out one thing before I ask my question. To the point that you all are media outlets, Pew Research Center found that 44% of Americans used Twitter, Facebook, and Instagram as sources of information for the 2016 presidential campaign. And among people surveyed ages 18 to 29, 35% use social media as their primary source of political news. So you can understand why we are concerned that what is put on your platforms is in fact factual.

Rep. Jackie Speier: (01:47:09)
I want to point out that all of you are men at this hearing. And there was a recent study that looked at the 2020 Democratic party presidential primaries. The study is entitled #ShePersisted: Women, Politics and Power in the New Media World. The analysis of the 2020 primary showed that female candidates are attacked more often than male candidates by fake news accounts, and interviews with female politicians around the world suggest the same phenomenon. Early in the Democratic presidential primary, social media narratives of female candidates were mostly negative and mostly about their character. For example, the top narrative about Kamala Harris was that she was not authentically progressive, American or black. The top narrative about Warren was that she lied about her ethnic heritage.

Rep. Jackie Speier: (01:48:11)
Attacks on women tend to fall into three buckets: untrustworthy, emotional or dumb. Within these themes are a high volume of sexualized content. Memes, graphics or accusations of women sleeping their way to the top. So this is pretty toxic. And I can tell you personally that I endure a whole lot of that on the platforms. Pretty disgusting stuff. So all three of you are men. I want to know how you are going to address this issue in the 2020 election? Mr. [inaudible 01:48:52] why don’t we start with you?

Nathaniel Gleicher: (01:48:55)
Thank you, Congresswoman. I would say, I have seen as well the increased targeting of women, particularly online, and the way in which they can get brigaded. The way in which prominent women experience this. We have teams focused on coordinated harassment and how we can tackle them. One of the things that my team focuses on in particular is cases where we see individuals singled out using either networks of fake accounts or networks of deceptive behavior to amplify-

Rep. Jackie Speier: (01:49:23)
How many women do you have in your team?

Nathaniel Gleicher: (01:49:26)
Congresswoman, right now I think about 50% of my team is either ethnically diverse or women.

Rep. Jackie Speier: (01:49:32)
Well, how many are women?

Nathaniel Gleicher: (01:49:34)
It’s about 30% of my team.

Rep. Jackie Speier: (01:49:36)
All right, go ahead.

Nathaniel Gleicher: (01:49:38)
One of the things that I’ve found, and it applies to women, it applies to the minority communities as well, the actors who target using these inauthentic techniques target minority communities. They target people they believe they can victimize. And so it’s extremely important that our teams be diverse so that we can have the sort of creative responses we need, and so that we can put ourselves in the shoes of the people who are being targeted and do everything we can to-

Rep. Jackie Speier: (01:50:05)
What do you do when female candidates are targeted in this way? How do you address it?

Nathaniel Gleicher: (01:50:14)
Congresswoman, it depends a little bit on the exact threat. We are building tools that allow people, first, to mute or not have to see some of these threads. When we see threads-

Rep. Jackie Speier: (01:50:28)
But that hides from the individual, not from you. So if you don’t want to see something, you can click it off, but how about the fact that it just gets communicated throughout the platform and it gains traction. And if it’s negative, it gains more traction based on the algorithm.

Nathaniel Gleicher: (01:50:49)
Congresswoman, if we see direct threats of violence against someone, we will take that down. And we’ve done that in a number of cases targeting women. If we see content that violates our community standards, we proactively hunt for this and take action against it when it emerges. I do think also thinking about how to change the product to empower people and to disincentivize this type of stuff is an important piece as well. If we’re only removing things we see, we’ll always be in sort of a whack-a-mole world. We need to keep doing that, and we have a responsibility to keep doing that. We also need to change the environment to make this type of thing harder and give people more tools to protect themselves.

Rep. Jackie Speier: (01:51:30)
Okay. Well, I’m going to share this report with Sheryl Sandberg in hopes that you’re going to do more than what you’ve done already. All right? Can I-

Nathaniel Gleicher: (01:51:38)
Thank you, Congresswoman.

Rep. Jackie Speier: (01:51:38)
[crosstalk 00:15:40], representatives, please? I know my time’s expired, Mr. Chairman, but I’d like to hear from the other two.

Adam Schiff: (01:51:47)
That’s fine, thank you.

Nick Pickles: (01:51:51)
Thank you, Congresswoman, for raising this topic. I’ve myself run for office and it’s something that I feel very passionate about. In terms of my team, I’m the only man at my level in my team. My boss is a wonderful woman, and we have a remarkable team of people that is majority female. This issue also affects journalists, I think it’s important to note. And it’s particularly acute for candidates and journalists of color, who face particular challenges at different intersections of abuse and harassment.

Nick Pickles: (01:52:23)
The most important thing is being proactive. You’re absolutely right to highlight that if we wait for users to report this content, we’re failing. So that’s been a big investment for us in recent years. Now, more than half of the content that we remove for violating the Twitter rules we action without a user report. Secondly, we’ve made specific improvements to our reporting flow. Something we see is doxxing, an issue of sharing private information. And so we’ve made it much easier to highlight when people report things to us that private information has been posted. That means we can move faster and we’re taking more action. So over the course of the year, our action rate increased more than 100% on private information posts.

Nick Pickles: (01:53:03)
So we have to be more proactive. We also have to have partnerships. And that’s why working with civil society, working with candidates directly, making sure we’ve got those strong relationships to escalate things is important, but I will absolutely tell you now there is more to do in this area, and it is something that we remain deeply committed to working on and investigating.

Rep. Jackie Speier: (01:53:22)
Thank you.

Richard Salgado: (01:53:26)
For Google, it’s similar. On my team, about 50% of the team members are female and my leadership chain is roughly 50% female as well. It’s an acute problem. We’ve seen this going back to Gamergate and all sorts of horrific events, where women and minorities and others are targeted online.

Richard Salgado: (01:53:52)
For Google, perhaps the investment that we’ve made, as Nick was referring to for Twitter, on comments on YouTube videos has been pretty successful of late. We made some changes in 2019 around recommendations, but we’ve also had some great success in automated removal of comments on YouTube videos that violate our policies. Most of it’s automated. I think in the high 90s, percentage-wise, the removals of those comments on YouTube happen automatically, not through user reports. But user reports remain very important. And this sort of conduct, of course, violates the policies, and it’s a matter of effective, quick enforcement of those policies, certainly not a lack of willingness. But it is an enforcement effort of policies that exist. And we recognize it as important and treat it as one of the areas where we need to continue to improve.

Rep. Jackie Speier: (01:54:53)
Thank you. Again, thank you, Mr. Chairman, for indulging me on that.

Adam Schiff: (01:54:57)
Representative Speier, Eric Swalwell.

Rep. Eric Swalwell: (01:55:00)
Thank you, Chairman. I’m seeing, coming across the news right now, that this morning Facebook took down President Trump ads that related to symbols the Nazis had used to designate political prisoners in concentration camps. A red inverted triangle, which was first used by the Nazis to identify communists, social democrats, liberals, Freemasons, and other members of opposition parties, was used in an ad by Donald Trump, and also by Vice President Pence and the Team Trump page, and viewed in fewer than 24 hours by just under a million people. So I wanted to allow Facebook to address this, but also, what will you do with the spread that’s already out there with this hate symbol? And what sanctions will you take against the Trump campaign? Because this is not the first time an ad has been taken down. I believe it’s the third time.

Nathaniel Gleicher: (01:56:04)
Thank you, Congressman. Yes, we don’t allow symbols that represent hateful organizations or hateful ideologies, unless they’re put up with context or condemnation. You obviously want to be careful to allow someone to put up a symbol to condemn it or to discuss it, but in a situation where we don’t see either of those, we don’t allow it on the platform and we will remove it. That’s what we saw in this case with this ad, and anywhere that that symbol is used, we would take the same action. So we’ll be consistent in enforcing wherever our systems identify those symbols. And as you’d expect, when we identify something like this, we bank it within our system so that we can look for other instances of it where it might appear, so we can find it and remove it automatically. And also, if there’s something we miss, because we certainly aren’t perfect, if someone were to bring that to our attention, we would take action there as well if it’s the same symbol.

Rep. Eric Swalwell: (01:56:52)
How many symbols would a campaign have to run that are taken off the platform before the page is taken down? The campaign’s account is taken down?

Nathaniel Gleicher: (01:57:04)
Congressman, my focus isn’t on our ads policies. What I can tell you is if we see repeated instances of violations, repeated instances of misinformation, for example, we will take increasing actions. I don’t have the details on the specific thresholds there. I’d be happy to have the team that is specifically working on this follow up with you.

Rep. Eric Swalwell: (01:57:25)
Well, have accounts been taken down because of repeated efforts to put misinformation out there?

Nathaniel Gleicher: (01:57:32)
Congressman, I’d be happy to follow up with you for that specific detail.

Rep. Eric Swalwell: (01:57:35)
But do you know the answer to it?

Nathaniel Gleicher: (01:57:37)
I don’t know the answer to it off the top of my head.

Rep. Eric Swalwell: (01:57:42)
One of my fears is that in addition to the misinformation we see right now, that one of the most perilous times is going to be between election day and inauguration day. And I think the president with his rhetoric in characterizing mail in ballots as fraudulent and implying that undocumented Americans will be voting in the election, I believe that if the result does not go the way he wants, he is seeding what will be frivolous lawsuits and assaults on the election. My fear is that all three of your platforms will be used, not only by the president, but by outside meddlers to try and amplify discord and confusion in our country. And I just want to get a pledge from each of you as to what will you do if that is indeed the case? That we have a president who does not accept the results and is welcoming or not condemning outside interference that may try and amplify misinformation during what is supposed to be a peaceful transition of power. Mr. Pickles, start with you.

Nick Pickles: (01:58:56)
Well, there’s two very important things. Firstly, our rules are global and our rules apply at all times. So irrespective of whether activity happens before election day or after election day, we will take action on any user that breaks our rules, and we’ll take action on any fake accounts. We’ll take action on foreign actors, we’ll take action on domestic actors. And you have our absolute commitment that we will enforce our rules impartially around the world between now, inauguration day, and beyond.

Rep. Eric Swalwell: (01:59:27)
Thank you. And Mr. Salgado?

Richard Salgado: (01:59:29)
Same is true with Google. We’re committed to enforcing our policies. We will continue to improve those policies and the enforcement, but the commitment to do so remains.

Rep. Eric Swalwell: (01:59:40)
Thank you. Mr. [inaudible 00:23:40]?

Nathaniel Gleicher: (01:59:42)
Congressman, we will continue to enforce our policies consistently around the world, and at any time. I would add, I think you highlight a really important point. We have teams that are running red team exercises and threat ideations within the company and with colleagues outside the company to ask when are the periods of greatest risk? What are the most likely threats? And we’ve always known that the period after the election is a critical one. There are some particular characteristics with this election, given that we expect an increase, for example, in vote by mail ballots. That’s important so that people can engage in a time like this. It takes time to count vote by mail ballots. So there may be periods of uncertainty after the election when work will be especially critical. So what I would say is we are focused on the time after the election with just as much laser focus as the time immediately before.

Rep. Eric Swalwell: (02:00:31)
Well, I’m afraid a storm is coming and we really need you all to be ready. And with that, Chairman, I’ll yield back.

Adam Schiff: (02:00:39)
Thank you, Mr. Swalwell. We have two last questioners, Mr. Heck and then Mr. Welch.

Denny Heck: (02:00:46)
Thank you, Mr. Chairman. Well, insofar as Mr. Himes seems to have preemptively channeled 100% of my thoughts, I’ll forgo my second round, except to express my appreciation again to the panelists for their presence today. Thank you very much, one and all.

Adam Schiff: (02:01:02)
Thank you, Mr. Heck. Mr. Welch, would you like the last question?

Peter Welch: (02:01:06)
… Along with Mr. Heck. Thank you all for an excellent hearing. I very much appreciate it.

Adam Schiff: (02:01:13)
Thank you, Peter. And this will then conclude our hearing today. I want to join in thanking our witnesses for appearing before the committee and testifying under these extraordinary circumstances. We will follow up with you with respect to questions that you took back with you to make sure that we can complete the record. But once again, my thanks to all of you for your participation today, and thanks to the members and staff as well. And with that, we are adjourned.

Nathaniel Gleicher: (02:01:44)
Thank you.
