Apr 27, 2021

Senate Hearing on Social Media Algorithms Full Transcript April 27

Social media executives testified before the Senate on April 27, 2021 about social media algorithms and their effects on public discourse. Read the transcript of the full Senate hearing below.

Chris Coons: (00:00)
But as many of us have become increasingly aware, algorithms impact what literally billions of people read and watch and impact what they think every day. Facebook, Twitter, YouTube, the three major tech companies represented in today’s hearing, use algorithms to determine what appears on your screen when you open and engage with their applications. There’s nothing inherently wrong about that. With billions or even trillions of pieces of content to choose from on each platform, it makes sense that they have a way to help us sift through it all, surfacing what they think their users are looking for and what we’re actually seeking. Advances in machine learning that made this technology possible have led to enormous good in other contexts. Machine learning has driven innovation across many industries, from medical science to public transportation, and has allowed companies to deliver better services. But many have also recently argued that this advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful to our kids’ attention spans, to the quality of our public discourse, to our public health, and even to our democracy itself.

Chris Coons: (01:10)
What happens when algorithms become so good at amplification, at showing you content that a computer thinks you’ll like, that you or your kids or family members end up spending hours each day engaged, staring at the screen? What happens when algorithms become so hyper tailored to you and your habits and interests that you stop being exposed to ideas you might find disagreeable or even so different from yours as to be offensive? What happens when they amplify content that might be very popular, but is also hateful or just plain false? As I noted, ranking member Sasse and I worked on this hearing and one of the main reasons for that is because we truly don’t see these as partisan questions and don’t come to this hearing with a specific regulatory or legislative agenda.

Chris Coons: (01:57)
But this is an area that requires urgent attention. As Mark Zuckerberg himself recently put it, and I quote, “When left unchecked, people will engage disproportionately with sensationalist and provocative content, which can undermine the quality of public discourse and lead to civic polarization. And if we’re so polarized and angry we can no longer hear each other’s points of view, then our democracy itself suffers.” So as quaint as some might think it, ranking member Sasse and I plan to use this hearing as an opportunity to learn about how these companies’ algorithms work, what steps may have been taken to reduce algorithmic amplification that is harmful, and what can be done better, so we can build on that knowledge in considering a potential path forward, whether voluntary, regulatory or legislative. I look forward to hearing from the representatives of Facebook, Twitter, and YouTube who’ve agreed to testify.

Chris Coons: (02:49)
Each of these platforms has taken a number of measures in recent years to curb some of the harms that algorithmic amplification can cause. It’s also my hope that these companies can build upon good practices, learn from each other, and make a significant difference. We’ll also hear from two outside experts who can help us ask some bigger picture questions and narrow in on some of the strategies and tactics we could or should follow moving forward, including whether and how legislation might improve the practices that these and many other platforms use. Thank you, and I’m now going to turn to my ranking member, Senator Sasse, for his opening remarks.

Ben Sasse: (03:27)
Thank you, Chairman Coons. Congratulations on having a gavel for the first time in six years. Hopefully you don’t get to keep it long, but I have enjoyed the preparation for this hearing with you and with your team. They’ve been thoughtful to deal with. And I appreciate your opening statement. I guess I should acknowledge the witnesses too. Thank you to all four of you. Mr. Harris, it’s actually 85 degrees this afternoon in D.C., so you didn’t have to avoid us in Hawaii and have to testify at 4:00 AM. But thank you for participating in the pre-dawn hours there nonetheless. Chris, I want to applaud your opening statement. It’s too easy in D.C. for us to take any complicated issue, reduce it immediately to heroes and villains, and then slam whatever predetermined regulatory or legislative tool we had down on the newly defined problem.

Ben Sasse: (04:16)
And I think you underscored a number of really important points. The simplest one is that algorithms, like almost all technologies that are new, have costs and benefits. Algorithms can make the world a better place. Algorithms can make the world a worse place. And one of the most fundamental questions before us as a people isn’t first and foremost governmental or legislative or regulatory, though those issues do exist. The first one is that in the new digital economy, or the attention economy, the old adage holds that if a product is free, you’re probably the product. And the American people need to understand, we parents and neighbors need to understand, that we’re being given access to these unbelievably powerful tools that can be used for lots and lots of good. And in most cases, because it’s free, there’s somebody who would really like to capture our attention, shorten our attention spans, and drive us into often poisonous echo chambers. So algorithms have great potential for good.

Ben Sasse: (05:13)
They can also be misused, and we, the American people, need to be reflective and thoughtful about that, first and foremost. To the tech companies who’ve shown up today and to those of you who are also adjacent to the Silicon Valley conversation, thank you for your interest and attention to this conversation. I think it’s very important for us to push back on the idea that really complicated qualitative problems have easy quantitative solutions. In some hearings that were not narrowly on this topic, but in other technology-related big tech hearings that we’ve had in this committee over the course of the last two or three years, when we’ve wrestled with really hard, nettlesome problems, we’ve been told that as soon as the supercomputers were better, they would solve these problems.

Ben Sasse: (05:57)
The truth is we need to distinguish between qualitative and quantitative problems. But I appreciate the chairman’s perspective on the way we’re beginning this hearing, which is that this isn’t a rush to pretend politicians know a lot more about these problems than we really do; it’s an acknowledgement that there are some big problems and challenges in this area, and prudence and humility and transparency are the best way to begin. I’m grateful for the chairman’s leadership of this committee in this particular hearing.

Chris Coons: (06:25)
Thank you, Senator Sasse. I’ll now turn to Chairman Durbin for his opening remarks.

Dick Durbin: (06:29)
Thanks. I’ll be brief, and I appreciate the opportunity to join you and Senator Sasse and make a statement. Congratulations, Senator Coons, for taking the reins as Chair of the Privacy, Technology, and the Law Subcommittee, which I was pleased to reconstitute in this Congress. You have already demonstrated significant leadership. I look forward to your work and the cooperative efforts of Senator Sasse in bringing, hopefully, policy and legislation before the full committee. This country stands at a crossroads as we grapple with the role of technology and social media in our lives and culture. I think Senator Sasse summarized it: it’s plus and it’s minus. We’re trying to look to the minus side, but we should never overlook the plus side. For example, the right to privacy, especially for children, is one of the persistent concerns I share with many members of this committee. Every day, internet companies collect reams of personal data on Americans, including kids. But we cannot expect children to fully understand the consequences of their internet use and this collection process.

Dick Durbin: (07:33)
Kids deserve, I believe, a chance to request a clean slate once they’re old enough to appreciate the nature of internet data collection. That’s why later this week I’ll be re-introducing the Clean Slate for Kids Online Act, which would give every American an enforceable legal right to demand that website companies delete all personal information collected from or about the person when he or she was a child under the age of 13. The right to privacy and access to one’s data could keep the subcommittee completely occupied. There’s a lot more to explore, including the subject of today’s hearing, which will examine how social media platforms use highly targeted algorithms to captivate and persuade us in every aspect of our lives. Algorithms influence what we read, watch, and buy, and how we engage. And they don’t just affect our personal lives, they affect us on a global basis.

Dick Durbin: (08:25)
For example, an independent civil rights audit last year found that Facebook is not sufficiently attuned to how its algorithms “fuel extreme and polarizing content” and can drive people towards self-reinforcing echo chambers of extremism. Following the release of that audit, Chairman Coons led a letter to Facebook, which I was proud to join, that called on the company to do more to mitigate the spread of anti-Muslim extremism and bigotry on their platform. And last November, when Facebook CEO Mark Zuckerberg testified in this committee, I asked him about recent incidents where hate and conspiracy groups used Facebook to plan and recruit, including the organizer of the conspiracy to kidnap Michigan’s Governor Gretchen Whitmer and the so-called Kenosha Guard militia, which posted a, quote, “call to arms” on Facebook in the aftermath of the shooting of Jacob Blake in Kenosha, Wisconsin.

Dick Durbin: (09:22)
That call to arms spread widely and was read by a 17-year-old vigilante named Kyle Rittenhouse, who traveled from Illinois to Wisconsin, where he allegedly shot and killed two people in the streets of Kenosha on August 25th, 2020. That militia page was reportedly flagged at least 455 times to Facebook. However, Facebook found the page did not violate its standards, so it was left up. The response from Mr. Zuckerberg at the hearing was, and I quote, “It was a mistake. It was certainly an issue and we’re debriefing and figuring out how we can do better.” Unfortunately, it’s clear that they didn’t figure out how to do better quickly enough. Not even two months later, a mob of domestic terrorists and violent extremists stormed this Capitol building in the January 6th coup attempt, fueled by widespread lies and conspiracy theories that claimed the election had been stolen from the former president.

Dick Durbin: (10:18)
While the efforts to overturn a free and fair election were ultimately unsuccessful, the trauma of that harrowing day lingers on. After January 6th, the consequences of rampant hate and misinformation on social media platforms have never been clearer. We need social media companies to finally take real action to address the abuse and misuse of their platforms and the role that algorithms play in amplifying it. I look forward to hearing from the witnesses, and I’m hopeful that this subcommittee can accomplish, under Chairman Coons’ leadership, what we are expecting: an opportunity to move this country in the right direction.

Chris Coons: (10:54)
Thank you, Mr. Chairman. I will now briefly introduce our witnesses for today and then swear them in. First is Monika Bickert, Facebook’s Vice President of Content Policy. She originally joined Facebook in 2012 as lead security counsel advising the company on child safety and law enforcement. Prior to joining Facebook, Ms. Bickert served as Resident Legal Advisor at the US Embassy in Bangkok, where she specialized in Southeast Asia rule of law development and response to child exploitation and human trafficking. She also served as a prosecutor with the Department of Justice for 11 years in Washington. Lauren Culbertson is Twitter’s Head of US Public Policy, based in Washington, D.C. She leads the company’s federal and state public policy teams and initiatives, serves as Twitter’s Global Lead for Intermediary Liability Policy, and spearheads the company’s efforts to help combat the opioid crisis. Previously, Ms. Culbertson worked in the US Senate for my friend, Senator Johnny Isakson of Georgia. She also founded a business, Millennial Bridge, to promote public policy.

Chris Coons: (11:52)
Alexandra Veitch leads YouTube Government Affairs and Public Policy for the Americas, where she advises the company on public policy issues around online and user generated content. She previously served as Special Assistant to President Obama and as Deputy Assistant Secretary for the Department of Homeland Security. Before that, she served as a member of Speaker Pelosi’s senior staff and began her career working for Senator Barbara Mikulski of Maryland. Ms. Veitch’s private sector experience also includes leading North American government affairs for Tesla and CSRA. Tristan Harris has spent his career studying today’s major technology platforms and how they have increasingly become the social fabric by which we live and think and communicate. Mr. Harris is the co-founder and president of the Center for Humane Technology, which aims to catalyze a shift toward humane technology that operates for the common good.

Chris Coons: (12:41)
Mr. Harris was the primary subject of the Netflix documentary, The Social Dilemma. Mr. Harris also led the Time Well Spent movement, which sparked product changes at Facebook, Apple and Google. Dr. Joan Donovan is a leading public scholar and disinformation researcher specializing in media manipulation, critical internet studies and online extremism. She’s the Research Director at the Harvard Kennedy School Shorenstein Center and Director of the Technology and Social Change project. Dr. Donovan is a co-founder of Harvard Kennedy School’s Misinformation Review. Her research can also be found in peer reviewed academic journals such as Social Media + Society, the Journal of Contemporary Ethnography, Information, Communication and Society, and Social Studies of Science, and she’s a columnist at MIT Technology Review. You’re all virtual, which makes this next step just a little different or novel for me. Would our four witnesses please stand to be sworn. Raise your right hand. And since I can’t see you, I can’t affirm that you’re doing that. But do you affirm that the testimony you are about to give before this committee will be the truth, the whole truth and nothing but the truth, so help you God?

Alexandra Veitch: (13:58)
I do.

Monika Bickert: (13:58)
Yes.

Chris Coons: (14:01)
Thank you. We will now proceed with witness statements. Each of you has five minutes to make an opening statement to this subcommittee. Ms. Bickert, please proceed with your testimony.

Monika Bickert: (14:14)
Thank you. Chairman Coons, Ranking Member Sasse, and distinguished members of the subcommittee, thanks for the opportunity to be here with you today. I’m Monika Bickert and I lead content policy for Facebook. Facebook uses algorithms for many of our product features, including enforcing our policies. However, when people refer to Facebook’s algorithm, often they’re referring to our content ranking algorithm that helps us order content for people’s news feeds. So I’ll just dive into that one briefly. And Chairman Coons, as you pointed out, the algorithm ranks content because when people come to Facebook, they have so much potential content they could see. The average Facebook user has thousands of eligible posts every day that she could see in her newsfeed, and they’re all there. But what we do is we try to save them the time of sorting through all of that to find what’s most meaningful to them by instead using a ranking algorithm that ranks each post and tries to put at the top the content the person will find the most meaningful.

Monika Bickert: (15:21)
The algorithm looks at many signals, including things like how often the user typically comments on or likes content from this particular source, how recently that content was posted, and whether the content is in a format, such as a photo or a video, that that user tends to engage with. The process results in a newsfeed that is unique to each person. Now naturally our users don’t see the underlying computer code that makes up the algorithms, but we do publish information about how the ranking process works. That includes describing the inputs that go into that ranking process, and we also have a blog post that we put out whenever we make significant changes to how we are ranking content in the algorithm. Additionally, people can actually click on any post in their newsfeed, toggle the menu, and go down to where it says, “Why am I seeing this post?”

Monika Bickert: (16:19)
And they will see the factors and an explanation for why the algorithm put that piece of content where it did in their newsfeed. And this helps people understand what the algorithms are doing and why they’re doing it. Now I do want to underscore that people can opt out of this ranking algorithm. They can toggle over to a most recent newsfeed, which basically means that all of that eligible content that you could see is simply ordered in reverse chronological order. And they can also choose an option that we call the favorites feed, which basically allows you to select pages or accounts that are favorites of yours, and then those will be the only things that will be ranked in your newsfeed. We’ve recently released a feature that allows people to toggle among those different options. As we work to bring more transparency to the algorithm and also give people more control over how it works for them, we also are working to improve the way that the ranking system itself works.
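
To make the ranking process described above concrete, here is a minimal sketch of how a score-based feed ranker might combine the signals Ms. Bickert lists (how often the user engages with the source, recency, and preferred format), and how the reverse-chronological opt-out differs. The weights, field names, and functions are illustrative assumptions, not Facebook’s actual code.

```python
import time

# Illustrative weights; a production system would learn these from data.
WEIGHTS = {"affinity": 0.5, "recency": 0.3, "format": 0.2}

def score_post(post, user, now=None):
    """Combine the signals described in the testimony into one ranking score."""
    now = now or time.time()
    # How often this user comments on or likes content from this source (0.0-1.0).
    affinity = user["interaction_rate"].get(post["source_id"], 0.0)
    # Newer posts score higher; linear decay over 24 hours.
    age_hours = (now - post["created_at"]) / 3600
    recency = max(0.0, 1.0 - age_hours / 24)
    # Whether the post's format (photo, video, text) is one the user tends to engage with.
    format_pref = user["format_preference"].get(post["format"], 0.0)
    return (WEIGHTS["affinity"] * affinity
            + WEIGHTS["recency"] * recency
            + WEIGHTS["format"] * format_pref)

def build_feed(posts, user, most_recent=False):
    """Rank eligible posts, or fall back to the reverse-chronological 'most recent' option."""
    if most_recent:
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    return sorted(posts, key=lambda p: score_post(p, user), reverse=True)
```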

Monika Bickert: (17:24)
And we announced last week that part of that includes expanding our surveys to understand what’s meaningful to people and what’s most worth their time, and also making it easier for them to give us feedback on individual posts. And that’s feedback that we’ll take from them and build into the ranking algorithms, in the hope that as we make this process better and better, people will leave Facebook feeling inspired [inaudible 00:17:52]. Now, newsfeed ranking isn’t the only thing that determines what people might see when they come to Facebook. We also have a set of community standards that says, “There are certain categories of content that simply aren’t allowed on our service.” Those are public standards that we’ve had for years, and we publish a quarterly report on how we are doing at finding that content and removing it. And as the report shows, we’ve gotten better and better and made significant strides over the past years.

Monika Bickert: (18:21)
Now if content is removed for violating those standards, then it does not appear in our newsfeed at all. There are other types of content that don’t violate the standards but that people nevertheless don’t want to see, like clickbait or borderline content, and the algorithms down-rank those. The reality is it is not in our interests, financially or reputationally, to push people towards increasingly extreme content. If we do something like that to keep somebody on the site for a few extra minutes, but it makes them have a worse experience and be less likely to use our products, then that is self-defeating. Our long-term interest is to make sure that people want to value our products for years down the road. The algorithms are a key part of how we help people connect and share and how we fight harmful content and misinformation on our site, and we’ll continue to do more to help people understand how the systems work and how they can control their experience. Thanks, I look forward to your questions.

Chris Coons: (19:29)
Thank you very much, Ms. Bickert. Ms. Veitch, would you please proceed with your testimony?

Alexandra Veitch: (19:36)
Chairman Coons, Ranking Member Sasse, and distinguished senators of the subcommittee, thank you for inviting me to appear before you today. My name is Alexandra Veitch and I’m the Director of Government Affairs and Public Policy for the Americas and Emerging Markets at YouTube. I appreciate the opportunity to explain how algorithms and machine learning support YouTube’s mission to give everyone a voice and show them the world. Through the adversity and uncertainty of the last year, YouTube has helped bring people together as we’ve stayed apart. More viewers than ever have come to YouTube to learn new skills, to understand the world more deeply and to be delighted by stories that can’t be found elsewhere. YouTube’s business relies on the trust of our users, our creators and our advertisers. That’s why responsibility is our number one priority. Our approach is based on what we call the four Rs. We remove content that violates our community guidelines. We raise authoritative voices. We reduce the spread of borderline content and we reward trusted creators.

Alexandra Veitch: (20:47)
Our written submission explains each pillar in detail, but I want to focus my comments today on how machine learning supports this responsibility work when it comes to recommendations. Recommendations on YouTube help users discover content that they will enjoy. And on key subjects, we want to recommend content to our users that is authoritative. Recommendations are based on a number of signals, including, if enabled, a user’s watch and search history. We also consider factors like country and time of day, which help our system show relevant news consistent with our efforts to raise authoritative voices. But we also give our users significant control over how their recommendations are personalized. Users can view, pause, edit or clear their watch or search history at any time. We also give users the opportunity to provide direct feedback about recommendations so they can tell us if they are not useful.
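
As a rough illustration of the recommendation signals described above (watch and search history when enabled, country, time of day, and an extra weight for authoritative sources on key topics), here is a hedged Python sketch; the field names, weights, and scoring are assumptions for illustration, not YouTube’s actual system.

```python
def recommend(videos, user, country, hour, history_enabled=True, top_n=10):
    """Score candidate videos with the kinds of signals described in the testimony."""
    def score(video):
        s = 0.0
        if history_enabled:
            # Similarity to the user's watch/search history (0.0-1.0), assumed precomputed.
            s += 0.5 * user["history_similarity"].get(video["id"], 0.0)
        # Prefer content relevant to the viewer's country and time of day (e.g., local news).
        if video.get("country") == country:
            s += 0.2
        if hour in video.get("peak_hours", []):
            s += 0.1
        # On key subjects, authoritative sources outweigh raw engagement signals.
        if video.get("is_news_topic") and video.get("authoritative"):
            s += 0.3
        return s

    return sorted(videos, key=score, reverse=True)[:top_n]
```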

Alexandra Veitch: (21:47)
We also believe we have a responsibility to limit recommendations of content that is not useful or may even be harmful. That is why in January of 2019, we launched more than 30 changes to our recommendation systems to limit the spread of harmful misinformation and borderline content, which is content that comes close to, but doesn’t cross the line of, violating our community guidelines. As a result, we saw a 70% drop in watch time of such content from non-subscribed recommendations in the US that year. This borderline content is a fraction of 1% of what’s watched on YouTube in the US, but we know that it’s too much and we are committed to reducing this number. We know there’s interest in the quality of the content we recommend to our users. Researchers around the world have found that YouTube’s recommendation systems move users in the direction of popular and authoritative content.

Alexandra Veitch: (22:45)
Our efforts to raise up content from authoritative sources and reduce recommendations of borderline content and harmful misinformation outweigh other recommendation signals, even if the net result is decreased engagement. We are proud of our record here, but we also work continuously to improve. Because responsibility and transparency go hand in hand, I would like to close with three recent transparency efforts we have undertaken to facilitate a better understanding of our platform. First, in May 2020, we collaborated with Google to launch the first Threat Analysis Group bulletin. It regularly discloses actions that we have taken to combat coordinated influence operations from around the world. Second, in June of 2020, we launched a website called How YouTube Works to answer frequently asked questions. It explains our products and policies in detail and provides information on critical topics such as child safety, harmful content, misinformation and copyright. Third, earlier this month, we added a new progress metric to our quarterly community guidelines enforcement report. Our violative view rate estimates the percentage of views on content that violates our policies. Last quarter, this number was 0.16% to 0.18%, meaning that out of every 10,000 views on YouTube, only 16 to 18 come from violative content. This is down by over 70% compared to the same quarter of 2017, thanks in large part to our investments in machine learning. As we work to balance the open nature of our platform with our important work to be responsible, we appreciate the feedback we receive from policymakers. We will continue to do more. Thank you again for the opportunity to appear before you today. I look forward to your questions.
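
The violative view rate she cites is a simple proportion; this short snippet just spells out the arithmetic behind the reported figures (16 to 18 violative views per 10,000 total views is 0.16% to 0.18%).

```python
def violative_view_rate(violative_views, total_views):
    """Estimated share of views that land on policy-violating content."""
    return violative_views / total_views

# Reported range: 16 to 18 violative views out of every 10,000 views.
low = violative_view_rate(16, 10_000)   # 0.0016
high = violative_view_rate(18, 10_000)  # 0.0018
print(f"{low:.2%} to {high:.2%}")       # "0.16% to 0.18%"
```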

Chris Coons: (24:43)
Thank you, Ms. Veitch. Ms. Culbertson from Twitter, if you’d now present your opening statement, your testimony, that’d be wonderful. Thank you.

Lauren Culbertson: (24:51)
Thank you Chairman Coons, Ranking Member Sasse and members of the subcommittee for the opportunity to testify on behalf of Twitter today on the role of algorithms and amplification of content. Twitter’s purpose is to serve the public conversation. In the early days, we were where you could go to share 140-character status updates. Now our service has become a go-to place to see what’s happening in the world and to have conversations about a wide range of topics including current events, sports, entertainment and politics. While much has changed since the company was founded 15 years ago, we believe our mission is more important than ever. While many of the challenges we grapple with today are not new, the creation and evolution of the online world have affected the scale and scope of these issues. Moreover, we must confront these issues amidst increasing global threats to free expression.

Lauren Culbertson: (25:48)
We believe that addressing the global challenges that internet services like ours face requires a free and open internet. We’re guided by the following principles as we seek to build trust with the people we serve. This includes increasing transparency, providing more consumer control and choice, and improving procedural fairness. Let me expand on the principle of consumer control and choice, as it’s particularly relevant to today’s discussion on algorithmic choice. In 2018, we introduced a feature to give people on Twitter control over the algorithms that determine your home timeline. Through the sparkle icon you see on the top right corner of your screen, you can choose to see your tweets ranked or toggle to view tweets in reverse chronological order. When we implemented this, some suggested it would be bad for our business. We thought it was the right thing to do for our users and it’s been a core feature ever since.
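
A minimal sketch of the user-facing choice Ms. Culbertson describes, toggling between an algorithmically ranked home timeline and a reverse-chronological one; the relevance scores here are a stand-in assumption, not Twitter’s actual ranking model.

```python
from datetime import datetime

def home_timeline(tweets, relevance_scores, ranked=True):
    """Return tweets ranked by relevance or in reverse chronological order,
    mirroring the 'sparkle icon' toggle described in the testimony."""
    if ranked:
        # relevance_scores maps tweet id -> model score, assumed precomputed upstream.
        return sorted(tweets, key=lambda t: relevance_scores.get(t["id"], 0.0), reverse=True)
    return sorted(tweets, key=lambda t: t["created_at"], reverse=True)

# Example: the latest-tweets-first view.
tweets = [
    {"id": 1, "created_at": datetime(2021, 4, 27, 9, 0)},
    {"id": 2, "created_at": datetime(2021, 4, 27, 12, 0)},
]
print(home_timeline(tweets, relevance_scores={1: 0.9, 2: 0.4}, ranked=False))
```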

Lauren Culbertson: (26:45)
Further, in line with our commitment to choice and control, Twitter is funding Bluesky, an independent team of open source architects, engineers and designers, to develop open, decentralized standards for social media. It’s our hope that Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice. These standards could support innovation, making it easier for startups to address issues like abuse and harmful content at a lower cost. We recognize that this effort is complex, unprecedented, and will take time. But we’re currently planning to provide the necessary exploratory resources to push this project forward. As we make investments to provide more transparency and choice, we’ve also launched our responsible machine learning initiative to conduct in-depth analysis and studies to assess the existence of potential harms in the algorithms we use. We plan to implement our findings and share them through an open process to solicit feedback.

Lauren Culbertson: (27:58)
Finally, as policy makers and members of Congress here debate internet regulation, I urge you to consider the ways algorithmic choice and machine learning make Twitter and other services a safer place for the public conversation. Technology is essential for rooting out harmful content like terrorism and child sexual exploitation content. We also rely heavily on machine learning tools to surface potentially abusive or harmful content for human moderators to review. Simply put, we must ensure that regulations enable companies to tap technology to help solve some of the problems that technology itself poses. In summary, we believe that moving toward more open systems will increase transparency, provide more consumer control and choice, and increase competition in our industry. This will ultimately lead to more innovation to solve today’s and tomorrow’s challenges. We appreciate the enormous privilege we have to host some of the most important conversations in the world.

Lauren Culbertson: (29:03)
We’re committed to working with a broad group of stakeholders to get this right for the future of the internet and for the future of our society. Again, thank you for the opportunity to be here with you today.

Chris Coons: (29:14)
Thank you, Ms. Culbertson. Mr. Tristan Harris of the Center for Humane Technology, if you’d now please give your opening statement.

Tristan Harris: (29:23)
Thank you, Senator Coons, Senator Sasse and Chairman Durbin. It’s an honor to be here with you today. My background is I used to be a design ethicist at Google. That was before I was recently featured in the film, The Social Dilemma, which many of you might’ve seen, and which really features the insiders who understood how these technologies were built in the first place explaining how they have affected society. My friends in college were some of the people who ended up working at these companies in the very early days, including my friends Mike and Kevin, who actually started Instagram. And what we really are missing in this conversation is a focus on the business model and the intrinsic nature of what these platforms are about. Not because they’re evil, and none of the people who are here with us today are intentionally causing any harm, nor do I believe that the tech companies who created these systems have intentionally wanted any of these harms to happen, but we’re now in a situation where if we don’t diagnose the problem correctly, we’re going to be in a bit of trouble.

Tristan Harris: (30:21)
While you’re hearing from the folks here today about the dramatic reductions in harmful content, borderline content, hiring tens of thousands more content moderators, et cetera, it can sound very convincing, but at the end of the day, a business model that preys on human attention means that we are worth more as human beings and as citizens of this country when we are addicted, outraged, polarized, narcissistic, and disinformed, because that means that the business model was successful at steering our attention using automation. We are now sitting through the results of 10 years of this psychologically deranging process that has warped our national communications and fragmented the Overton window, and the shared reality that we need as a nation to coordinate to deal with our real problems, which are existential threats like climate change, the rise of China, pandemics, education and infrastructure.

Tristan Harris: (31:15)
So long as these companies profit by turning the American conversation into a cacophony, into a Hobbesian war of all against all, because that is the business model (again, not the advertising, but the model of everyone getting a chance to speak and have it go viral to millions of people), and so long as that is the promise with personalization, we are each going to be steered into a different rabbit hole of reality, which Joan will do such a good job of talking about. If you care about or believe that masks work, you will see infinite evidence that masks work. If you click on a couple of articles that say masks don’t work, and here are the stats in Florida showing that the data was different, you will see infinite evidence that masks don’t work. And then we are pitted against each other with this infinite virality, where anything that is said can go viral.

Tristan Harris: (32:05)
Fundamentally, this is breaking many different aspects of the nation’s fundamental life organs. For children, increased cyberbullying leads to an increase in suicide. It takes momentary drama and it turns it into drama snowballs that drown out the efforts of teachers in classrooms, who have to spend two hours on Monday morning clearing up all the drama that occurred on social media over the weekend. It can reverse huge progress that we’ve made in civil rights and in not perpetuating racial stereotypes, by increasing online harassment and rewarding the presentation of minorities in ways that are demeaning. It can inhibit our progress on climate change, because climate disinformation has gone viral on these platforms. It can pose a threat to national security in the sense that if Russia or China tried to fly a plane into the United States, they’d be shot down by our Department of Defense, but if they try to fly an information bomb into the United States, they’re met by a white-gloved algorithm from one of these companies that asks exactly which zip code you would like to target. It is the opposite of national security with social media.

Tristan Harris: (33:06)
What a cannon was to a castle, social media is to the nation-state, because it removes the power asymmetries of the millions and billions of dollars that we’ve spent on F-35s, on passport controls at the Department of Homeland Security. Once your society becomes virtual, all those protections go away.

Tristan Harris: (33:23)
Most importantly, if we are not coordinated as a society, if we cannot even recognize each other as Americans, we are toast. That is the only thing that matters. If we don’t have a truth that we can agree on, then we cannot actually do anything on our existential threats. And we’re really sitting at a moment in history where we’re transitioning into becoming a digital society, and we kind of already have a Neuralink brain implant for our society. Right now we have two options. We have the Chinese brain implant, which leads to an Orwellian control of thought, mass behavior modification and surveillance, or we have the Western brain implant that’s built on this business model that turns us into a performative culture. And there’s the Orwellian dystopia or the Aldous Huxley, Brave New World dystopia, in which we fall into a kind of devolvement of amusing ourselves to death, constantly immersed in distractions and unable to focus on our real problems.

Tristan Harris: (34:18)
So what I really encourage us to think about is someone is going to be controlling the 21st century. Will it be open societies or closed societies? Either we beat China at becoming China in a digital way, which we don’t want to do, or we figure out how to be a digital open society that doesn’t actually lose to that. That is our task, either we figure it out or the American experiment may be in question.

Chris Coons: (34:43)
Thank you very much, Mr. Harris. Dr. Donovan, if you’d now give your opening statement please.

Dr. Joan Donovan: (34:50)
Great. Thank you to the esteemed members of the subcommittee, Chairman Coons, and ranking member Senator Sasse for inviting me, and thank you to your staff as well. I appreciate the opportunity to talk about how algorithms and amplification shape public discourse. I’m Joan Donovan, the research director of the Shorenstein Center at Harvard Kennedy School, and I study the internet. I want to remind everyone that the internet is a truly global technology requiring massive amounts of international labor. So whatever policy ends up coming from the US, will undoubtedly become the default settings for the rest of the world. I also want to begin by saying that I believe a public interest internet is possible, and I have to believe that in order to do the heinous job of researching hate, incitement, harassment, and disinformation on these social media products.

Dr. Joan Donovan: (35:41)
What a public interest internet means practically is crafting policy that draws together the best insights across many different professional sectors, matched with rigorous independent research into how automation and amplification shape the quality of public life. We should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to curate timely, local, relevant and accurate information, as well as providing robust content moderation services and options. But today, let’s try to name the problem of misinformation at scale and its impacts. In the US when we talk about politics, we’re really talking about media about politics. And when those news and information flows get laced with strategic misinformation, then a simple search for something like coronavirus origin or mail-in ballots can lead people down the rabbit hole of medical misinformation or political disinformation.

Dr. Joan Donovan: (36:42)
In October 2020, I testified about misinformation at scale having similar harmful societal impacts as secondhand smoke. And it took a whole of society approach to address the burden of disease caused by secondhand smoke, which led us to clear the air in workplaces, schools and airports. So when I say misinformation at scale, I’m not complaining that someone is wrong on the internet. What I’m pointing to is the way that social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant and accurate information can reach them. Post 2020, our society must assess the true cost of misinformation at scale and its deadly consequences. Disinformers, scammers, grifters use social media to sell bogus products, amplify wedge issues, impersonate social movements, and push conspiracies.

Dr. Joan Donovan: (37:32)
What I’ve learned over the last decade of studying the internet is that everything open will be exploited. Moreover, misinformation at scale is a feature of social media, not a bug. What do I mean when I say that? Well for example, because of what I study, I often joke nervously that my computer thinks I’m a white supremacist. For researchers going down the rabbit hole means getting pulled into an online subculture where the keywords, slang, values and norms are unfamiliar, but nevertheless, the content is plentiful. There are four aspects of the design of social media algorithms that can lead someone into the rabbit hole. Coincidentally, they are also four Rs.

Dr. Joan Donovan: (38:14)
Repetition relates to seeing the same thing over and over on a single product, which clicks, likes, shares, and retweets drive. Redundancy is seeing the same thing across different products. That is, you see the same thing on YouTube that you see on Twitter. It tends to produce a feeling that something is more true. Responsiveness is how social media and search engines always provide some answer, even if it’s wrong, unlike other forms of media. And then lastly, reinforcement refers to the ways that algorithms work to connect people and content so that once you’ve searched for a slogan or keyword, algorithms will reinforce these interests time and time again. Nowhere, of course, is this more prevalent than on YouTube, where any search for conspiracy or white supremacist content using the preferred keywords of the in-group will surface numerous recommendations and even offer up direct engagement with these communities and influencers.

Dr. Joan Donovan: (39:12)
If you’ve recently searched for contentious content like Rittenhouse, QAnon, Proud Boys or Antifa, you’re likely to enter a rabbit hole where extracting yourself from reinforcement algorithms ranges from the difficult to the impossible. The rabbit hole is best understood as an algorithmic economy, where algorithms pattern the distribution of content in order to maximize growth, engagement, and revenue. I have a few things that companies could implement, if we want to talk about that later, but I think tackling a problem this big will require federal oversight for the long-term. We didn’t build airports overnight, but tech companies are flying the plane with nowhere to land at this point. And of course, the cost of doing nothing is nothing short of democracy’s end. Thank you.

Chris Coons: (40:04)
Well, thank you very much for your thoughtful testimony, to all of our witnesses. Given the limited number of members, we may get several rounds of questioning, which is exciting to me. I just want to say to Ms. Bickert, Ms. Culbertson, Ms. Veitch, your efforts to down-rank borderline content, to improve transparency, and to empower users are all positive steps. We need to continue to find ways to preserve the positive benefits of algorithms in showing content to people that’s meaningful to them while addressing the very clear threats and challenges, the very real potential for the harmful impacts of algorithmic amplification. So the questions I have today are meant to get a better understanding of how one might further build on your efforts and strike the right balance.

Chris Coons: (40:51)
Some have proposed that social media platforms create virality circuit breakers (we’re all familiar with the phrase “blowing up on the internet”) to detect content that is rapidly gaining widespread viewership so that humans can review whether it actually complies with platform policies before it racks up tens or hundreds of millions of views. Professor Donovan, could you just briefly and concisely explain why this kind of mechanism might be particularly valuable?

Dr. Joan Donovan: (41:24)
Yeah. I think one of the things that we know now from decades of tracking flagging, especially in conspiracist communities and hate communities, is that people only tend to flag things as a way of trying to get retribution on one another. They search for this content and they enjoy it. So systems that are built in don’t tend to work when it comes to particular kinds of strategic misinformation, especially hate or harassing content as well. And so as a result, what you need to do as a corporation is really look for it. I know that there’ve been a couple of different instances recently where corporations have found and rooted out some really heinous stuff, but it obviously has to be part of the business process and the process of content moderation to seek out content that is essentially out of skew with signals from the past.

Chris Coons: (42:29)
Wonderful, thank you.

Dr. Joan Donovan: (42:30)
So, that’s one of the ways that they could incorporate this.
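
One way to picture the “virality circuit breaker” idea discussed here is as a check that pauses amplification of content whose growth is far out of line with historical signals until a human reviews it; this is a hedged sketch of that concept, with thresholds chosen purely for illustration.

```python
def virality_circuit_breaker(content_id, views_last_hour, historical_hourly_mean,
                             review_queue, spike_factor=20, min_views=50_000):
    """Pause further amplification and request human review when growth is anomalous."""
    anomalous = (views_last_hour >= min_views
                 and views_last_hour > spike_factor * max(historical_hourly_mean, 1))
    if anomalous:
        # Hold the content out of recommendation and amplification surfaces until reviewed.
        review_queue.append(content_id)
        return "paused_pending_review"
    return "normal_distribution"
```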

Chris Coons: (42:33)
Thank you. Thank you. Thank you, Professor. Ms. Bickert, Facebook said last fall, it was piloting this very concept. What did you find through this experience? And do you expect to further roll this out more broadly? And please explain briefly, if you might.

Monika Bickert: (42:51)
Senator, thank you for the question. We do look at virality of content as a signal for when we should assess, proactively as Dr. Donovan is suggesting, whether or not something violates our policies or should be referred to our fact checkers. And the fact checkers, as you may know, are more than 80 independent fact-checking organizations that we work with. They can proactively rate content, or we can send it to them, and that could be based on user reports too. Either way, if they rate something false, then that’s when we will put on it a label saying this content is false, directing people to the fact check, and we’ll also reduce the distribution of that content in our newsfeed. And yes, we are seeing that those efforts are paying off. In fact, we see that when we put one of those informational labels on top of a piece of content, people are far less likely to actually click through and see the content than they would if we didn’t have that label.

Chris Coons: (43:57)
Ms. Bickert, I appreciated several steps Facebook announced it was taking just in advance of the Derek Chauvin verdict. One of these steps was limiting the spread of content, this is a quote from Facebook, “that systems predict is likely to violate community standards in the areas of hate speech, graphic violence, and violence and incitement.” Facebook’s statement also noted the company had done this in other emergency situations in the past. My question for you is why Facebook wouldn’t always limit the rapid spread of content likely to violate these standards? Could you help us understand that?

Monika Bickert: (44:38)
Senator, yes. I put that blog post out, so let me explain what I meant by it. We use systems to proactively identify when content is likely to violate or is maybe borderline, and often what that can help us do is send that content to our reviewers and have them assess whether or not it violates. Of course, not all of that content will violate; there will be some false positives. So there’s a cost to, for instance, taking action on that content without having real people look at it. And so generally, we use those measures to find content that we can send to reviewers. But in situations where we know that there is an extreme and finite risk, in terms of dates, such as an election in a country that’s going through civil unrest or the situation in Minneapolis with the Chauvin trial, we’ll put in place a temporary measure where we will de-emphasize content that the technology, that the algorithms, say is likely to violate.
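
Ms. Bickert’s distinction, routing likely-violating content to human reviewers in normal times and only automatically de-emphasizing it during a declared high-risk window, can be sketched roughly as a threshold policy; the score source and threshold below are assumptions for illustration, not Facebook’s implementation.

```python
def handle_prediction(content_id, violation_score, emergency_mode, review_queue,
                      review_threshold=0.7):
    """Route content that a classifier flags as likely violating.

    Classifiers produce false positives, so the default action is human review;
    only during a temporary, pre-announced emergency window is distribution
    automatically reduced while the review is pending.
    """
    if violation_score < review_threshold:
        return "no_action"
    review_queue.append(content_id)  # always send it to human reviewers
    if emergency_mode:
        return "distribution_reduced"
    return "pending_review"
```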

Chris Coons: (45:47)
Let me ask a last question of the three social media representatives before I turn this to my ranking member. Ms. Bickert, Facebook has said, and I think you said in your opening statement, it’s not in your long-term interest to pursue maximum engagement if it comes at the cost of spreading polarizing and sensationalized content. That it’s not really long-term in the financial interest of the company, let alone its reputational interest, to have algorithms that amplify harmful or divisive content. I agree with this. I am concerned about what the underlying incentives are at all three of your platforms, for those who have to make decisions day in and day out about exactly how your companies operate. The MIT Technology review reported last month that pay incentives at Facebook for employees broadly are still tied to growth metrics and engagement metrics.

Chris Coons: (46:40)
So if I’m a Facebook employee who works on its newsfeed, are the metrics the company has set up to measure my performance directly related simply to engagement and growth metrics, or is there some way that these broader, more positive social objectives are incorporated? If you could, all three, just answer briefly. Ms. Bickert for Facebook, Ms. Veitch and Ms. Culbertson, do you provide pay incentives to algorithm teams, directly or indirectly, based on engagement and growth related metrics? Thank you.

Monika Bickert: (47:16)
Senator, the engineers are not specifically goaled or given pay incentives simply to increase time on the site. The focus is really on making sure that the products are services that people find useful and will want to use for years to come.

Lauren Culbertson: (47:33)
Senator, for Twitter, a top priority for our company across our teams is to serve a healthy public conversation. I’d love to share with you our transcript from our latest analyst day, which is what we share with our investors and our advertisers, and all of the concerns and priorities that we’ve talked about thus far today. You’ll see that what we’re telling you is exactly what we tell our investors and our advertisers, because they have the same concerns. We have no incentive to have a toxic or unhealthy conversation on the service.

Chris Coons: (48:07)
Thank you.

Alexandra Veitch: (48:08)
Similarly Senator. So responsibility is our number one priority. And when we set goals, we set those goals around what we define as responsible growth. So we may set a goal to encourage adoption of a feature, but also we want to take into account how that feature may be used or misused and how we can ensure it’s adopted responsibly.

Chris Coons: (48:29)
And Mr. Harris, if you could just provide a brief comment on your understanding of the incentives of employees and how it aligns with responsible growth versus growth at all costs.

Tristan Harris: (48:41)
Yes. My understanding is even to this day, I think there was a brief experimentation at Facebook with non-engagement-based performance incentives for social impact, but those have largely gone away, and it’s actually still a measure of engagement. These are things like, not time on site, but sessions and seven-day active user growth, and that is still the focus. And everything else we’re going to be talking about today, it’s almost like having the heads of Exxon, BP and Shell asking about what are you doing to responsibly stop climate change. Again, their business model is to create a society that is addicted, outraged, polarized, performative, and disinformed. That’s just the fundamentals of how it works. And while they can try to skim the major harm off the top and do what they can, and we want to celebrate that, we really do, it’s just fundamentally they’re trapped in something that they can’t change.

Chris Coons: (49:27)
Thank you, all. Let me turn to my ranking member, Senator Sasse.

Ben Sasse: (49:31)
Thanks, Chris. My first question is actually building, basically exactly, pardon me, on where the Chairman just finished. And I really do think that constructive engagement in these committees is better than people trolling for sound bites. So, I’m not trying to get you all to fight, but the truth of the matter is this hearing would work a lot better if we were in the same room, so we didn’t have to try to bring you all into dialogue, but the last three answers from the social media companies and Mr. Harris’ answers are just ultimately not reconcilable, I don’t think. So I want to go back to, I’ll start with Ms. Bickert as well, but saying that you aspire to healthy engagement, as opposed to just more quantity. I agree with Mr. Harris’s line that you definitely aspire to skim the most destructive habits and practices off the top of digital addiction, but the business model is addiction, right? I mean, money is directly correlated to the amount of time that people spend on the site.

Ben Sasse: (50:36)
So I guess what would be useful for me is to hear each of the three of you say what you think is wrong with Mr. Harris’ argument, because right now I think we’re talking past each other and I know that there is bad content and there’s disinformation content that you all, well-intentioned as your companies surely are, want to curtail, but his argument is really more broadly about the business model, and the business model is addiction. Isn’t it? Ms. Bickert, can we start with you? What is Mr. Harris missing?

Monika Bickert: (51:07)
Senator, thanks for the question. I’ll say two things that I hope will be helpful. One is, for us, the focus is always on the long-term, and I’ll give one concrete example of that. In January of 2018, we put out a post announcing that we were going to be prioritizing content from family and friends over, say, news content. It was called meaningful social interactions. We suspected that it would lead to less time spent on the service, and it did. It led to people spending tens of millions of fewer hours on Facebook every day. But that was something that we did because we thought that longer term, it was more important for people to see that sort of content, because they would find it meaningful and they would want to continue to use the site. So it’s a long-term picture.

Monika Bickert: (52:01)
And the other thing I would say is about the teams that I work with, who include the engineers who are focused on safety issues, removing content, say bullying content or hate speech, and the engineers who are focused on the way that we reduce, for instance, misinformation that’s been labeled on the site. A key statistic for those engineers is the prevalence of violating content. That’s their goal. And we put out public reports on that prevalence. So, that’s an example of how we are focused on the long-term and making sure that we are stopping abuse and maintaining a healthy environment.

Ben Sasse: (52:39)
I want to be clear that I’m not targeting the three of you, because my opening statement is very sincere. I think that there is a danger in politics and governance where, if you agree that there’s a problem, then there must be a definitive regulatory solution that can come real fast and easy. And on the other hand, if you’re not persuaded there’s a regulatory fix right away, then you have to deny there’s a problem. I’m sort of a heterodox tweener on this, in that I don’t have clarity about what the regulatory fixes would be, but I think society-wide, we should admit that there is a problem in the last 12, 14 years, as we’ve consumed more and more digital stuff that seems to be correlated with some benefits, but also some very real costs. And I don’t think it’s just your companies.

Ben Sasse: (53:24)
I mean, there have been reports out of the New York Times about their own internal deliberations about how they’d like to have more of Americans engaging healthy content, and they’re just printing money right now over the course of the last four or five years. But engagement is much higher when readers are angry. So when the content is angry, it leads to more engagement. I don’t think any of you are really going to dispute that, but I’d like to stay where the question was two minutes ago, which is: Ms. Culbertson, will you tell me what you think is wrong with Mr. Harris’ argument?

Lauren Culbertson: (53:57)
We’re really focused on serving the public conversation, and that includes having controls in place so people can also control their experience. But I think as we’re talking about algorithms today, Twitter really does one thing. We do tweets. We have a home timeline. And as we’re talking about algorithms, we have a ranking algorithm that’s designed to show you what might be most relevant to you. Also, if we’re talking about screen time, or how much time you spend on a service, I think that’s really relevant, because I know as a user of Twitter myself, I rely on that so I can see what happened in the day, what people are talking about. And then I log off and move on with my day. So I think it’s important to look at this in a nuanced view and recognize that algorithms can also be helpful in terms of cutting down on screen time or providing a more valuable experience for people.

Ben Sasse: (54:48)
Sure, but the reality is the loop between the products that are being produced and the way we, as narcissistic centers, consume it is… Maybe I’ll ask it as a direct question. Is it or is it not true that when somebody tweets something that’s really anger-invoking and outrageous, and it goes viral, but then two hours later they realize they were wrong and they correct it, isn’t the correction usually like 3% of the traffic of the original outrageous, but false, thing? I mean, it seems to me that what we know is that people are pretty good at short-term rage and the product capitalizes on that, doesn’t it?

Lauren Culbertson: (55:29)
Well, I think when looking at Twitter, it’s important to remember that it’s an open public conversation. So everything that happens is in the open in the public, and typically, you’ll see the-

Mr. Rhodes: (55:42)
The buyer realizes there’s no product. Those involve upfront payments, payments into escrow that the seller pays into, or I’m sorry, the buyer pays into and the seller can access. And other seems like-

Marsha Blackburn: (55:57)
Sir, I want to interrupt you right there. So, you are conducting education to both the individual consumer and also to purchasing agents. Is that correct?

Mr. Rhodes: (56:11)
That’s correct. We have worked with state level purchasing agents too.

Marsha Blackburn: (56:14)
My time is so limited. I want to move on. Okay. So Mr. Kauffman, when you all are trying to go through your enforcement, what kind of participation are you getting from the online marketplace, from vendors that have third parties that are selling?

Mr. Kauffman: (56:34)
So, the answer would be two-fold. When we see bad claims, in addition to reaching out to the party that’s involved in the bad claims, we are also contacting the platforms to make sure that they’re aware that there’s bad activity on their platform.

Marsha Blackburn: (56:46)
Do they respond appropriately?

Mr. Kauffman: (56:48)
They have responded appropriately, but I wish they would respond more without being invited to respond by the FTC.

Marsha Blackburn: (56:53)
Okay. Do they come to you with due diligence or information on their own? We think we have someone who needs to be investigated.

Mr. Kauffman: (57:04)
We get occasional referrals from platforms and from companies as well.

Marsha Blackburn: (57:08)
Okay. All right. Professor Kovacic, I want to come to you on this. I know you're going to tell me you need more budget and staff; that's going to be a part of your answer. And Mr. Kauffman, to you as well. When we look at what the FTC is doing, one of the things that we have discussed for a couple of years now is the need to scale enforcement. So, as you look at this, other than budget and agents, what kind of authorities do you need? Mr. Kauffman, you first, and then Professor Kovacic.

Mr. Kauffman: (57:47)
So, certainly the most pressing issue right now is restoring our 13(b) authority. Without that authority, it is a huge, huge blow to the FTC. And there are other areas where we could definitely use civil penalty authority, most notably data security and privacy violations. To be able to penalize first-time privacy violators would be a huge benefit for consumers and for the agency.

Marsha Blackburn: (58:11)
Okay. Professor?

Professor Kovacic: (58:15)
I would echo Daniel’s comments. Put those on my list as well. I’d simply add, I would like to see the development of a comprehensive strategy that brings together the full community of law enforcement authorities to decide in a more collective way, what are we doing now and what do we have to do to be more effective? I think that would be very profitable. A way to make sure we got the full value for the additional expenditures.

Marsha Blackburn: (58:42)
Thank you. Thank you, Mr. Chairman.

Speaker 2: (58:44)
Thanks, Senator Blackburn. Senator Klobuchar.

Amy Klobuchar: (58:48)
Thank you very much. Thank you to all our witnesses. Welcome to Mr. Rhodes from 3M Company in my home state. Good to see you again, Mr. Rhodes. Thank you to the Chairman and to the ranking member for holding this really important hearing. It couldn’t come at a more important time, as you point out, with the Supreme Court decision with 13(b).

Amy Klobuchar: (59:12)
Mr. Kaufmann, in your testimony, you highlight that the FTC has sent out more than 350 warning letters directing companies to remove deceptive claims, from fake coronavirus treatments to ads for products that were never delivered. How do these companies react to these warning letters, and why is 13(b) so important?

Mr. Kauffman: (59:33)
So the companies, to their credit, to the extent that we want to give them credit, have been very responsive. The response has been overwhelmingly positive, and within 48 hours the claims are removed. So, it has been highly successful from that perspective. But in terms of 13(b), there is a lot of litigation we have ongoing that relies solely upon 13(b) for monetary relief. So rather than getting millions and millions, or tens of millions of dollars for consumers, we will be stuck getting nothing if there’s no congressional fix to [crosstalk 01:00:02].

Amy Klobuchar: (01:00:02)
Do you have any sense of how much money could be at stake? A range? I know you can’t put an exact dollar figure.

Mr. Kauffman: (01:00:08)
Well, for the past five years we've given back 11 billion, with a B, dollars to consumers. So I would estimate it's certainly hundreds of millions per year. That seems to be a fairly conservative estimate.

Amy Klobuchar: (01:00:20)
And that’s going forward, of course, too. Not just going backward.

Mr. Kauffman: (01:00:25)
Correct.

Amy Klobuchar: (01:00:26)
And because of the pandemic and some of this fraud that’s going on that we’ve identified, is it possible you’d even have more than usual?

Mr. Kauffman: (01:00:33)
It is certainly possible. We are always looking for the cases that cause the most harm to consumers.

Amy Klobuchar: (01:00:38)
Okay. Very good. And Professor Kovacic, thank you for joining us again. You recommend a significant increase in the FTC budget in your testimony. I agree with that. Senator Grassley and I have this bill, the Merger Filing Fee Modernization Act, whose time has come. It would not be everything; a lot of what you're talking about I think would belong in the appropriations process. But what this would do is increase the budgets of the FTC and the Antitrust Division by $67.5 million each. And it's paid for, actually, in this case not by taxpayer money, but by an increased merger filing fee on the biggest deals. It actually helps some of the smaller deals. And I call for even more, of course: a $300 million increase to the FTC budget in the appropriations process.

Amy Klobuchar: (01:01:32)
But could you go even further? Could you talk about why this is so important at this moment of time? Not just from the pandemic fraud cases you’re seeing, but also from what you’re seeing with those antitrust cases. I think it’s so important that the FTC under Chairman Simons brought the Facebook case, but I bet that you could use nearly all the personnel in the FTC to work on those cases and then we’d have no one left for anything else. So, could you talk about the importance of this increase in budget?

Professor Kovacic: (01:02:06)
Yeah. Thank you, Senator Klobuchar. I'm thinking from my perspective as someone who started his career at the FTC in the 1970s, and I have watched since then the efforts of our public institutions to do expansive, ambitious things with modest resources. I think as a starting point, as a nation, we have to realize that if we want the equivalent of superior results … We've been talking in the last couple of years about the equivalent of a public policy moonshot to put competition law in a much different place, consumer protection in a much different place. We look at our experience with NASA. That was expensive, but it was worth it, arguably. And I think we have to have the conversation that says, "It is worth it for our nation to spend what it takes, not just in aggregate numbers, but to recruit and retain the personnel that we're going to need to drive these programs home successfully."

Amy Klobuchar: (01:03:09)
Thank you.

Professor Kovacic: (01:03:09)
And I think if we’re not willing to spend more, and suppose we don’t spend triple the agency’s budget. If we don’t take the steps that you have suggested step-by-step, I think we’re going to find ourself in a chronic position that we do in public policy, where we have grand policy aspirations and we scratch our heads and say, “Why isn’t it working?” Well, in part, it’s not working because we’re not willing to pay for it.

Amy Klobuchar: (01:03:32)
Thank you. Mr. Rhodes, hometown company. In your testimony, you note that 3M partnered with the U.S. Department of Homeland Security to help seize approximately 11 million counterfeit 3M N95 masks, and in January 3M helped Minnesota avoid buying nearly 500,000 counterfeit N95 masks from a fraudulent vendor. Can you speak to the important role of public/private partnerships in combating fraud?

Mr. Rhodes: (01:03:58)
Yes. Thank you for the question, Senator Klobuchar. That has been an integral part of our efforts from the beginning in processing those 13,800 reports. Forming close partnerships with law enforcement and with agencies has really allowed us to extend beyond what we can do with our civil litigation authority, and the opening comments, I thought, were spot-on: the criminal enforcement really helps to make the difference to stop the bad actors. So that's been a critical part of our efforts beginning last March, when we sent a letter to the Department of Justice, to the National Association of Attorneys General, and to the National Governors Association, reaching out. We conducted briefings and formed working relationships with the DOJ Task Force, with the Department of Homeland Security, Homeland Security Investigations and their National IPR Center, as well as Customs and Border Protection. Those have really been among the most effective actions that we've been able to [crosstalk 01:05:04].

Amy Klobuchar: (01:05:04)
Thank you. Thank you. Just last, I’ll just put this one on the record because my colleagues are waiting to ask questions. But Ms. Patton, thank you for your work. And I will say Senator Luján and I have done a letter after a hearing that he conducted on misinformation on vaccines. And I’ll follow up with you on this, but just some astounding findings that maybe if we could take 12 accounts down, or get the social media companies to do it, we would be in a lot better place. So, thank you.

Speaker 2: (01:05:37)
Thanks, Senator Klobuchar, and thanks for your very thoughtful book which deals with many of these topics.

Amy Klobuchar: (01:05:43)
Thank you.

Speaker 2: (01:05:44)
Senator Thune.

John Thune: (01:05:46)
Thank you, Mr. Chairman. Mr. Kaufmann, following the enactment of the TRACED Act in December of 2019, the FTC has seen a drop in robocall complaints. Could you elaborate on some of the FTC’s recent robocall enforcement actions?

Mr. Kauffman: (01:06:04)
Sure. We are engaging quite a lot with the Federal Communications Commission and with DOJ on a number of robocall issues. We've got several cases in litigation, and we have a lot of cases in the pipeline that will be challenging more bad robocall conduct. So, it's an area we are very actively involved in.

John Thune: (01:06:25)
Could you speak to the FTC’s efforts on engaging on that issue with industry initiatives like the Robocall Traceback Group, and whether this public/private partnership has been successful in identifying illegal robocallers?

Mr. Kauffman: (01:06:39)
Absolutely. It has been a very successful partnership. It’s been a robust source of leads and helpful information to allow us to build successful law enforcement actions. So, highly successful.

John Thune: (01:06:51)
Okay. Good. Justice Breyer’s opinion last week regarding the FTC’s authority to seek equitable relief cited the SAFE DATA Act, which is comprehensive privacy legislation I’ve sponsored with Senators Wicker, Fischer, and Blackburn as an example of Congress considering providing for a fix to the gap and the FTC’s 13(b) authority. Mr. Kaufmann, in your view, would this particular provision be helpful to the FTC after the Supreme Court’s unanimous ruling that the FTC could no longer secure consumer redress in federal courts and under the FTC’s 13(b) authority?

Mr. Kauffman: (01:07:29)
Absolutely. We really do appreciate the inclusion of 13(b) reform in the SAFE DATA Act, and very much are very supportive, and think it’s a very important thing for Congress to take up.

John Thune: (01:07:44)
Mr. Rhodes, I appreciate all the work that 3M has done throughout the pandemic to increase its production of PPE to support individuals and frontline workers. In fact, last November I had the opportunity to visit 3M’s manufacturing facility in Aberdeen, South Dakota, to see the expansion of 3M’s N95 mask production lines. At the same time you were increasing your manufacturing capability, scammers were trying to exploit the pandemic by offering a number of fraudulent products. What steps did 3M take to combat fraudulent activity during the pandemic, and can you talk about your partnership with law enforcement officials when identifying fraud?

Mr. Rhodes: (01:08:24)
Yes. Thank you for the question, Senator Thune. So, our efforts to address the pandemic really started with getting information out there. We established our website and our fraud hotlines to allow customers to verify that offers of product were authentic and from authorized 3M distributors. We put online information about how to spot fraud in all of its forms and how to spot counterfeits. We published the list prices for our commonly-sold N95 respirators so that customers could identify and avoid inflated pricing. At the same time, once we established intakes for reports of suspected fraud, we reached out, as I mentioned, to the Department of Justice, to state AGs' offices, to DHS, to CBP, to the FBI, and started a process of sharing information and referring reports. We've referred thousands of reports, which has really extended the reach and effectiveness of our actions. We've also brought our own actions, including 33 lawsuits in courts across the country. We've been very successful in stopping the unlawful activity, and we've donated all of our recoveries in those cases, monetary recoveries, to COVID-19-related charities. We've also partnered with online retailers and internet companies to take down tens of thousands of false or deceptive product listings, social media posts, and websites. So it's been really a case of addressing fraud, price gouging, and counterfeiting from all angles.

John Thune: (01:10:08)
Mr. Kaufmann, there have been a number of discussions around vaccine passports since the Biden administration took office. Vaccination is key to ending this pandemic, but the idea that a vaccine passport or lack thereof could be used to track or restrict Americans’ movement is concerning. I understand several technology companies have been working to develop digital tools or passports, which may be appealing to consumers as they seek ways to facilitate the return to normal activities. While convenient to the consumer, I am concerned about the privacy and security of that health data that the consumer may opt to provide these companies. Has the FTC taken steps to conduct oversight of the privacy implications of digital vaccine passports?

Mr. Kauffman: (01:10:48)
At the moment we have not done anything publicly about it, but it is an issue that we are very concerned about. Obviously, there are enormous implications involving vaccines and privacy issues. We've also been active in a lot of other health privacy areas. We brought a recent case, Flo, involving an online health app that collected a lot of information from hundreds of millions of people. So the issue of health and apps is an area we're very focused on, and I think you raise a very important issue that we will be looking at.

John Thune: (01:11:17)
Good. Okay. I’m glad you’re going to be looking at it. Thank you, Mr. Chairman.

Speaker 2: (01:11:21)
Thanks, Senator Thune. And now Senator Markey.

Ed Markey: (01:11:26)
Thank you. Thank you very much, Mr. Chairman. Thank you for holding this very important hearing. A year ago, I sent a letter to the Federal Trade Commission urging it to develop and implement a comprehensive plan to stop bad actors from preying on innocent consumers with COVID-related scams and price gouging. Since then the commission has taken a number of important steps to stop this type of behavior, including issuing warning letters and charging a scammer who deceptively marketed a fake COVID treatment. But there is so much more that needs to be done. The commission also needs to address consumer protection threats that have indirectly arisen as a result of the pandemic. Specifically, we need to stop websites that are taking advantage of kids who are online and on their devices more than ever during the period of distance learning.

Ed Markey: (01:12:25)
I recently sent a letter to the Federal Trade Commission urging it to investigate whether Google violated Section 5 of the Federal Trade Commission Act by misleadingly marketing children’s apps as compliant with the Children’s Online Privacy Protection Act, a law which I authored, despite evidence that many of those apps appear to illegally track children and share their personal information without consent. Mr. Kaufmann, as children’s time online skyrockets during this pandemic, is the Federal Trade Commission committed to cracking down on platforms that are unfairly manipulating kids and deceiving parents while they do it?

Mr. Kauffman: (01:13:08)
Absolutely. I agree very much with you. What has happened from the pandemic, everybody has gone online, and the privacy and security issues have really been magnified by the movement of everything online. So, we are very familiar with your letter. I can’t comment on a specific investigation, but issues of children’s privacy are very important to myself, to my bureau, and to our acting chairman.

Ed Markey: (01:13:30)
Okay. Great. Thank you. Last week, the Supreme Court stripped the Federal Trade Commission of its ability to obtain monetary relief for victims of scams under Section 13(b) of the Federal Trade Commission Act. Under this ruling, the Federal Trade Commission will operate without its primary tool for compensating cheated consumers. The Supreme Court's decision was nothing short of a gut punch to the Federal Trade Commission and to the consumers it serves. I look forward to working with Chair Cantwell, and Subcommittee Chair Blumenthal, and all of my colleagues on this committee to quickly enact legislation that restores the Federal Trade Commission's rightful authority under Section 13(b). But until Congress acts, it is imperative that the Federal Trade Commission uses its full authority to deter bad actors from engaging in illegal activity. We need to stop these scams before they happen.

Ed Markey: (01:14:37)
Federal Trade Commissioner Chopra has proposed that one way the Federal Trade Commission can deter harmful behavior is by using the commission's Penalty Offense Authority. Under the Federal Trade Commission Act, if the commission formally condemns a particular illegal practice in one case, other companies who then knowingly engage in the same practice can face big fines. Here's an example. When a company recently posted fake positive reviews that tricked consumers into buying its products, the Federal Trade Commission issued an order condemning that practice.

Ed Markey: (01:15:16)
Using its Penalty Offense Authority, the FTC could potentially now issue serious fines against any company that knowingly defies the Federal Trade Commission by posting fake reviews for their own products. In other words, using Penalty Offense Authority, the Federal Trade Commission could potentially put whole industries on notice and hit companies where it hurts if they scam consumers. Mr. Kaufmann, what are your thoughts on this approach, and is the Federal Trade Commission currently inventorying existing orders to see if it can use them to collect civil penalties from scam artists who are operating today?

Mr. Kauffman: (01:15:57)
The Penalty Offense Authority is one of the additional tools that we are very closely looking at to make sure we can do everything we can in our power to protect consumers despite the loss of 13(b). One issue I have with that authority is that it gives you penalties. At the FTC, one of our first priorities, we want to stop the bad conduct and get money back to consumers. Penalty authority doesn’t allow us to do that, but it is a good alternative given the unfortunate decision of the Supreme Court last week.

Ed Markey: (01:16:27)
Okay. Thank you, Mr. Kaufmann. Again, I just hope that the FTC will now use the full range of authorities under the Federal Trade Commission Act to protect consumers wherever possible, while simultaneously, I realize it's imperative that we pass legislation to empower the FTC to ensure that it can get money back for consumers.

Chuck Grassley: (01:16:52)
… media site. People can make their voices heard, share their opinions, and interact. Increasingly, however, these big tech companies are deciding what we can and cannot say and infringing on Americans' freedom of speech. I constantly hear from Iowans about their concerns with the control that big tech has over the discourse in this country, as well as the biases that these platforms have against conservative voices and Middle America. I've heard numerous stories about posts being deleted, businesses removed, and creators silenced. Many times this happens without warning and with very little, if any, due process.

Chuck Grassley: (01:17:37)
These platforms have monopoly powers with very few competitors, aren't constrained by market forces, and consumers have no alternative. Big tech is also immune from liability under Section 230. This immunity, combined with monopoly, allows them to censor, block, and ban whatever they want. We must look at the power and control that a handful of companies have over speech, and their silencing of voices with which they disagree. So my questions are to Ms. Bickert, Ms. Culbertson, and Ms. Veitch. When you decide to remove certain content from your platforms, do you believe that you do that consistent with First Amendment free speech principles, such as viewpoint neutrality? And if you believe that you're doing that, then why is it that conservative voices are consistently the ones being censored?

Monika Bickert: (01:18:42)
Senator, thank you for the question. We are a platform for ideas across the political spectrum. I do believe that we enforce our policies without regard to political affiliation. And I do hear questions from both sides of the aisle, if you will, about whether or not we are fair in our content policy enforcement. But I can tell you that we enforce our policies without regard to political ideology.

Alexandra Veitch: (01:19:10)
Senator, I also appreciate the question here. So, we want YouTube to be a place where a diversity of viewpoints is heard. We do have public-facing community guidelines that govern what is allowed on our platform and what is not. We do enforce these consistently without regard for political viewpoint. You did mention due process, so I wanted to call out that when content is removed from a creator, that creator does receive an email explaining that and is given an opportunity to directly appeal. We make public the data around our appeals. So in the last quarter of 2020 we did have 223,000 appeals and 83,000 reinstatements, showing we don't always get this right, but we certainly want to apply our policies evenly.

Lauren Culbertson: (01:20:05)
As for Twitter, and Senator, thank you for the question. We love to see your tweets on Twitter; you're one of my favorite follows. As you probably appreciate, Twitter wouldn't be Twitter if everyone had the same viewpoints, and we welcome diverse perspectives. It's what makes our service Twitter. We have rules in place and we enforce them impartially, and I know people have concerns, and they believe that companies like ours should be more transparent. That's why we've put forth three core solutions, which we think would go a long way to addressing some of these concerns. The first is increased transparency, the second is more user control and choice over algorithms, and the third is enhanced due process, so that if we do make a mistake, users have the ability to appeal and have the decision reviewed against our terms one more time.

Chuck Grassley: (01:20:59)
There are countless examples of material being removed by a platform stating that it is misinformation, but it is actually just viewpoints that liberals might disagree with. What are your platforms doing to ensure that they’re not using pretextual reasons to censor differing opinions? And then that’s my last question.

Lauren Culbertson: (01:21:23)
I’m happy to take this one, and Twitter’s taken a very narrowly-scoped focus on misinformation at this time. We have three categories that govern our policies. The first is synthetic and manipulated media, the second is civic integrity, and the third is COVID-19 misinformation. We are piloting a program called Birdwatch that would crowdsource annotations to potential misinformation. So this is something to address all forms of misinformation, and it would also bring more voices in to help us with that work.

Alexandra Veitch: (01:22:00)
Senator, we do have robust community guidelines on YouTube. Those exist to keep people safe. To your point, it’s important to note that those community guidelines are public-facing and can be reviewed by any of our users.

Chuck Grassley: (01:22:23)
I guess, Mr. Chairman, the third person didn’t want to comment. So, I’ll give up my time. Go ahead.

Speaker 3: (01:22:31)
Thank you, Senator Grassley. I appreciate that. Senator Klobuchar.

Amy Klobuchar: (01:22:34)
Hard act to follow there. Okay. Thank you, Senator Grassley, for your interest in this issue. Mr. Harris, you and I were on a panel together in March, and good to see you again. Could you explain more about how companies’ market power exacerbates problems of disinformation, extremist conduct, and bias?

Tristan Harris: (01:22:57)
Yeah. Well, it’s great to see you again too, and thank you for the question, Senator. If there’s anyone with an alternative model to the current problems that plague us in misinformation, disinformation, and virality, can they succeed in the marketplace? There’s something in the literature called Metcalfe’s law, where the power of a network grows exponentially with the number of participants. Really, what we have between social media platforms is a race to Metcalfe. Once you have a dominant platform, it’s very hard for there to be an alternative. So market concentration means that even if there are alternatives that are trying to do any and solve any of the problems we’re talking about today differently, they’re going to get bought up by the existing platforms. And if you’re a venture capitalist, the only way you’re going to fund an existing company is by knowing that there’s an exit pathway. We kind of all learned the lesson, as all the sort of competing platforms and things that have come out have just been acquired by existing companies. We also know from-

Amy Klobuchar: (01:23:54)
I think that point just can’t be lost, because there are regulations we can put in place. That’s one way to do it, and you can do both things at once. But if you have a company that buys out everyone from under them, in the words of Mr. Zuckerberg they’d rather buy than compete, and buys companies like Instagram and/or WhatsApp, we’re never going to know if they could have developed the bells and whistles to help us with misinformation because there is no competition. Do you want to comment more on that, Mr. Harris?

Tristan Harris: (01:24:24)
Yeah. Well, I mean, just as you said. If WhatsApp remained independent, and let’s say we’re living in some alternative reality where now WhatsApp was separate, and we saw these problems, and WhatsApp decided they’re going to spend billions of more dollars on content moderation because they want to actually be the platform where people can trust. They can’t make that choice because Facebook bought them, and now they’re sort of integrated in how much they’re working on these problems. It’s a race to sweep the garbage under someone else’s rug. What we’ve seen, unfortunately, is instead of collaboration between all these platforms, in some cases we’ve seen, “Hey, look how bad their problems are, because we don’t want to pay attention to ours.” Not, again, because they’re evil. It’s just game theory happening between the companies. It really does-

Amy Klobuchar: (01:25:04)
Okay. Yeah. Thank you. Dr. Donovan, in your research you’ve looked at medical misinformation at scale and the role of the social media platforms. Could you please comment on how the sheer size of a few powerful platforms affects the problems that we should be addressing?

Dr. Joan Donovan: (01:25:20)
Yeah, and thank you, Senator Klobuchar. I really look forward to reading your book Antitrust. The problem of medical misinformation, of course, is one that was exacerbated by the pandemic, but anti-vaccination activists have a long history of using social media in order to attack the public understanding of science. But during the pandemic, of course, the way in which the tech companies have turned to medical misinformation is really … It’s like putting a band-aid on an open wound. Right now, what we need is a comprehensive plan for ensuring that people have access to timely, local, relevant, and accurate information like public interest obligations. But instead, what we have is a very slapdash approach to whatever the breaking news event is of the day. So I do think that the size of the platform and the way in which medical misinformation scales much more quickly than any intervention is probably the most pressing public health issue of our time.

Amy Klobuchar: (01:26:23)
Okay. Thank you. Ms. Bickert, a recent poll found that nearly one in four Americans said they will not get the coronavirus vaccine. Meanwhile, a recent report from the Center for Countering Digital Hate identified 12 specific content producers as the original source of an estimated 65% of coronavirus disinformation online. Recently, Senator Luján and I, after he conducted a hearing, sent a letter to Jack Dorsey and Mark Zuckerberg calling on them to remove these individuals from the platforms. Do you agree that more action needs to be taken?

Amy Klobuchar: (01:27:03)
What's your response to our letter? I guess I'd start with you, Ms. Bickert, and then I'll go to you, Ms. Culbertson.

Monika Bickert: (01:27:09)
Senator, thank you, and thank you for the letter as well. I know that we’ve assessed that content and removed those accounts that were violating, and I can follow up more with you on the specific details of that. But, more broadly, and I think this is a really important issue, we know that we have to get it right when it comes to misinformation around COVID. One of our goals is to help 50 million people get vaccinated. We’re doing that both proactively, through partnerships with local and national health authorities, making sure that we’re directing people to authoritative health information, including where they can get vaccinated. And we’ve connected more than two billion people with those authoritative health resources. But we also, since the very beginning, have been partnering with the CDC to remove content that contradicts CDC guidance that could lead to an increased risk that people could contract or spread COVID. And that includes removing over 12 million pieces of safety related COVID-19 misinformation.

Amy Klobuchar: (01:28:13)
Okay. And in my role on the Commerce Committee, of course, where Senator Cantwell is leading a bill on privacy: do you agree that consumers should have the ability to access their data and control how it is used, including what data is used in social media company algorithms? Do you give customers that ability now, for both content and advertising algorithms? Ms. Bickert?

Monika Bickert: (01:28:40)
Senator, thank you. We do give people a number of controls. That includes everything from the ability to download your own information, remove it, and control who can see your posts. You can opt out of our algorithm, you can see who can see your content at any time, and you can change those settings.

Amy Klobuchar: (01:29:03)
Is the company then supportive of our bill on privacy? Senator Cantwell [crosstalk 01:29:12].

Monika Bickert: (01:29:11)
I’d have to have our US Public Policy team follow up with you on the specifics of that [crosstalk 01:29:16].

Amy Klobuchar: (01:29:16)
All right, thank you. I appreciate that. I can see, Senator Coons, over his mask, is raising his eyebrows at me. That’s a sign, enough is enough.

Senator Coons: (01:29:26)
No, no. The Chairman welcomes additional questions from the celebrated author of an outstanding book on [inaudible 01:29:33].

Amy Klobuchar: (01:29:33)
Okay. Well, I’ll ask one more question then, of Ms. Culbertson from Twitter. Just the original question that I had asked Ms. Bickert about the disinformation dozen, as we call them, the accounts online. And of course, some of these issues that I’ve had, I’m talking about with market power, is not as applicable to Twitter, which I appreciate, but as a competitive platform, but could you at least answer the question here about this disinformation dozen?

Lauren Culbertson: (01:30:10)
Certainly, thank you for the question. We have, and are continuing to review, this particular group of individuals against our policies, and we have taken enforcement action on several of these individuals. Our team will be following up this week with all of the details around that. And also, I just wanted to note that while we are competitors, we are partners in addressing a lot of really harmful content issues. We've collaborated on COVID, and we've worked together on terrorism, child sexual exploitation, and opioids. So I take issue with the premise that was mentioned earlier; there is collaboration across industry to address some of the most harmful content on the internet. And we also invest heavily in our partnerships with experts, especially around COVID. We've worked very closely with the CDC, HHS, and the White House, to not only enforce our rules, but to also ensure that people have [inaudible 01:31:06] to credible information on our service.

Amy Klobuchar: (01:31:08)
So were you’re saying you take issue with something that I’d said? Or was it something?

Lauren Culbertson: (01:31:12)
No, no, no, Senator. One of the other panelists suggested that we compete with each other on addressing these harms, where we actually collaborate in a lot of these areas.

Amy Klobuchar: (01:31:25)
Okay. Thank you very much. I appreciate it.

Lauren Culbertson: (01:31:29)
Thank you.

Senator Coons: (01:31:29)
We now go to Senator Kennedy, remote. Can you hear us, Senator Kennedy?

Senator Kennedy: (01:31:36)
I can hear you, Mr. Chairman, can you hear me?

Senator Coons: (01:31:39)
Yes, I can. The time is yours, take it away.

Senator Kennedy: (01:31:43)
Thank you. It seems to me that in the guise of giving consumers what they want, a lot of our social media platforms first use surveillance to identify a person’s hot buttons, and then they use algorithms to show that person stuff that pushes those hot buttons. This is called, as you know, optimizing for engagement. The social media platform wants a person to visit its platform early and often, that’s how it makes more money advertising. In any event, when that person that we’re talking about, as a result of those algorithms, gets all revved up with no place to go, he posts something outrageous, not every time, but quite frequently. And that is why you can still find kindness in America, but you have to go offline to do it. Mr. Harris, I’d like a straight answer from you. I have a bill, others have a similar bill, a bill that would say that Section 230 immunity will no longer apply to a social media platform that optimizes for engagement. If you were a Senator, would you vote for it?

Tristan Harris: (01:33:33)
I’d have to see the way that the bill is written.

Senator Kennedy: (01:33:36)
Don’t do that to me, don’t do that to me, Mr. Harris. Give me a straight answer. We all want to read the bills. Would you vote for it or not?

Tristan Harris: (01:33:45)
Well, I would be in support of a bill that had technology companies not measure as their primary mode of success any of the engagement metrics of time spent, clicks, shares, etc.

Senator Kennedy: (01:33:58)
That’s swell. But if the bill said, I don’t like to waste time in these hearings, if the bill said no Section 230 immunity if you optimize for engagement, would you vote for it? [crosstalk 01:34:15] If you don’t want to answer, just tell me.

Tristan Harris: (01:34:17)
It sounds like a very interesting directional proposal, I’d have to know the details, but I’m sorry for not being more clear.

Senator Kennedy: (01:34:23)
Well, you’re being very clear. You’re dodging the answer. Dr. Donovan, would you vote for it?

Dr. Joan Donovan: (01:34:33)
Um, yeah, when it comes to bills, the reason why I’m in research is so I don’t have to make those decisions. But I would say that when we’re talking about what these companies optimize for, and the way in which [crosstalk 01:34:45]-

Senator Kennedy: (01:34:45)
Doc, doc, doc, doc, doc, would you vote-

Dr. Joan Donovan: (01:34:50)
Please?

Senator Kennedy: (01:34:51)
Would you vote for the bill?

Dr. Joan Donovan: (01:34:53)
I would vote for some form of bill that required oversight of these algorithmic systems.

Senator Kennedy: (01:35:01)
All right. I mean, we have these hearings and I appreciate them, but we never get down to it. We all talk, I’m as guilty as anyone else, but at some point you got to get down to it. And that’s where I’m coming from, I’m not trying to be rude. I’m just trying to get an answer out of you. You’ve both been very critical of what we have today. I am too. I’m looking for solutions. I’m not just looking for us to all show how intelligent we are, or are not [crosstalk 01:35:43].

Dr. Joan Donovan: (01:35:43)
We could address.

Senator Kennedy: (01:35:44)
I appreciate it, doc. I’m going to run out of time. Let me ask. I’m thinking about introducing a bill, in fact, we’re working on it, to take the principles of the General Data Protection Regulation in the EU. I never thought I would do something like this, but take the principles in the General Data Protection Regulation in the EU, and to have those principles apply here in the United States. Ms. Bickert, would you support that bill?

Monika Bickert: (01:36:23)
Senator, I focus on content, but there are people in our company we can have follow-up on that.

Senator Kennedy: (01:36:29)
That’s a dodge. Ms. Culbertson, would you vote for it?

Lauren Culbertson: (01:36:34)
We certainly comply with GDPR, there are some tensions with the First Amendment in the US, but we’d welcome a longer conversation about this, but generally, yes.

Senator Kennedy: (01:36:44)
Yes?

Lauren Culbertson: (01:36:46)
Yes, Senator.

Senator Kennedy: (01:36:48)
Oh, God bless you, thank you for an answer. Ms.. I’m sorry if I’m mispronouncing your name, Vich? Veech?

Alexandra Veitch: (01:36:59)
Senator, it’s Veech, yes.

Senator Kennedy: (01:37:00)
I’m sorry, Ms. Veech, I apologize. If you were a Senator, would you vote for it?

Alexandra Veitch: (01:37:06)
Senator, I’m not an expert on GDPR. I can tell you on privacy what we want to do is give our users security-

Senator Kennedy: (01:37:12)
I know, I know, you want privacy, but your whole model is built around finding out everything you can about me, other than my DNA, and you may have it for all I know. And I'm not trying to be rude, but I can't tell you the number of these hearings I have been to, and I learn something every time. But when we get down to it, what are we going to do about it? Nobody wants to answer, and you're supposed to be our experts. And I would strongly encourage you to come to these hearings with positions, firm positions, on behalf of yourselves or on behalf of your companies, that you're ready to take. Don't just play word games with us, we're trying to solve a problem here.

Senator Coons: (01:38:01)
Senator Kennedy?

Senator Kennedy: (01:38:03)
Yes, sir?

Senator Coons: (01:38:04)
I have to ask you for a yes or no answer. Do you realize you’ve gone over time?

Senator Kennedy: (01:38:08)
Well, I realized that, yes. And I realize everybody else has gone over time.

Senator Coons: (01:38:14)
Take another minute, and then please wrap it up.

Senator Kennedy: (01:38:18)
I’m done.

Senator Coons: (01:38:20)
Thank you, sir. Senator Ossoff? Remote.

Senator Ossoff: (01:38:25)
Thank you, Mr. Chairman. Thank you to the panel. Ms. Bickert, much of the public discussion is focused on Facebook’s moderation practices. But there’s a compelling argument that the real problem is not the quality of your moderation policies, or the nature of the algorithm, but the underlying business model, your scale and your power. And while you clearly have an obligation to remove certain content, for example, incitement to violence or hate speech, I’m not at all enthusiastic about huge multi-national tech companies becoming the arbiters of legitimate speech and expression, especially when the decisions about what you may boost or suppress algorithmically are often made in secret and under heavy pressure from politicians and advertisers and public opinion. So, on the subject of your scale and your power, I’d like to ask, does Facebook anticipate that it will embark on further acquisitions of competitor services, in light of the suit that you’re already facing from the FTC and a number of State Attorneys General, alleging that your acquisitions of Instagram and WhatsApp constituted anti-competitive activity?

Monika Bickert: (01:39:52)
Senator, thank you for the question. Of course, I can't comment on any litigation. I can tell you, because I'm responsible for our content policies and a lot of what we do around moderation, that we do take very seriously both the balance between expression and safety, and also the need for transparency. And so with, for instance, our algorithm, over the past few years, we have put out a number of blog posts and other communications where we've actually given the inputs for what goes into the ranking algorithm. We've explained any significant ranking changes. We've introduced this tool where, on any post on Facebook, you can click on it and go under "Why am I seeing this post?," and it'll tell you why that post is appearing in your newsfeed where it is. And then, significantly, we have made it more visible how you can opt out of that newsfeed ranking algorithm, so if people just want to see their content in reverse chronological order, excuse me-

Senator Ossoff: (01:40:58)
Yeah. Respectfully, and I greatly appreciate your response, and I heard some of these points earlier in the hearing, and I’m not asking you to comment on any specific litigation. And to be clear, my point is actually that everything you just said about improving the quality of your moderation practices, disclosing some of the decisions underlying the algorithm, are not the root issue. The root issue is that Facebook has too much power. And one company perhaps should not be such a massive gatekeeper that determines what ideas prosper and what ideas don’t. And that’s why the question that I asked was, does Facebook anticipate that it will embark on any further acquisitions of competitor services?

Monika Bickert: (01:41:54)
Senator, acquisitions is really not my area at all, I’m focused on content. I can tell you though, from where I sit, from my perspective, it is a highly competitive space. And I know that not only from being an executive working on content at Facebook, but also being the parent of two teenage daughters, both of whom use social media, and there are a lot of services out there that people use. Nevertheless, I do think it’s really important that we recognize that these content moderation rules are really important, and we have to be very transparent about what they are so people can make informed choices about whether or not they want to use our services.

Senator Ossoff: (01:42:34)
Thank you, Ms. Bickert. Ms. Bickert, Apple’s recent iOS update will require apps to seek additional explicit authorization from users in order for those apps, presumably some of your products included, to continue tracking users across the internet. Tracking cookies and other technologies allow Facebook and other entities to monitor virtually all of their user’s web browsing activity. I want to commend Apple for taking this step, and ask whether you will take significant steps in the short-term to reduce your tracking, your ubiquitous tracking, of your user’s web activity, location, data, the technology that they use, and whether you will consider extending the feature that allows the removal of personal data from Facebook, to include the removal of personal data, not just from Facebook, but from any entities to whom Facebook sold such data? And including in your contracts with those to whom you sell data, a provision that they must delete all data that they’ve purchased from Facebook at the command of the user?

Senator Ossoff: (01:43:50)
So again, it’s two questions. Will you follow Apple’s lead in ceasing tracking of users across the web? And will you include, in contracts with those to whom you sell data, a provision requiring them to permanently delete and verify the deletion of all data you’ve sold to them about any user who activates the Facebook feature to remove their data from Facebook? Thank you so much.

Monika Bickert: (01:44:19)
Senator, thank you for the question. First, let me be really clear, we don’t sell user data. That’s not the way our advertising works. The way that it works is an advertiser selects from among different targeting criteria, and then we deliver that ad to a relevant audience, and we can follow up with more details on how that works. With respect to controls, I know we have introduced controls around people’s off Facebook experience, I’m not an expert in that area. There are those at the company who are, so I can get that information and follow up with you.

Senator Ossoff: (01:45:00)
Thank you, Ms. Bickert. Thank you, Mr. Chairman.

Senator Coons: (01:45:02)
Thank you, Senator. Senator Blackburn, are you available by remote?

Senator Blackburn: (01:45:07)
Yes, I am. Thank you, Mr. Chairman. I appreciate the witnesses and the hearing today, and I think that all the witnesses are hearing that Americans are pretty much fed up with the arrogance of big tech. You're seeing it from all sides, and certainly Twitter's CEO, Jack Dorsey, had his contempt for Congress on full display in a House Energy and Commerce Committee hearing last month, I think it was. And he tweeted out a poll on possible answers to the questions, basically treating the hearing as a joke. So Ms. Culbertson, do you agree it is unacceptable for Twitter's CEO to tweet while he is testifying before Congress? Yes or no?

Lauren Culbertson: (01:46:00)
Certainly he’s the CEO and creator of Twitter, and he likes to tweet, and that’s the [crosstalk 01:46:07] communicates.

Senator Blackburn: (01:46:07)
Okay. I asked for a yes or no, but I will say I'm pleased you are looking and appearing more presentable than your CEO in his testimonies before us. When he behaves disrespectfully in a congressional hearing and before the American people, he embarrasses Twitter. It is just such proof of how out of touch big tech is with the rest of the country. Big tech is, in my opinion, destroying news, free speech, competition, and original content. It is responsible, also, for much of what is filling our children's minds. This is something that bothers me, as a mom and a grandma: the power of Facebook and YouTube algorithms to manipulate social media addiction, where we're even reading that it exists among babies, toddlers, kids, tweens, and teens. And this is something that should terrify each of us. YouTube deploys algorithms to breed this addiction to clickbait in children, and they do it because it pays well. Our children's brains are being trashed so that you Silicon Valley CEOs can pocket billions of dollars in ad revenue. YouTube algorithms create an un-policed automated reward system where videos with little educational content are amplified to unsuspecting toddlers and kids, and to their unsuspecting parents.

Senator Blackburn: (01:47:57)
And Senator [inaudible 01:47:59] mentioned that we are re-introducing the bipartisan Filter Bubble Transparency Act to force big tech to disclose if their secret algorithms are manipulating customers. So Ms. Veitch, YouTube has a history of exploiting children to harvest and profit off of their viewing history. Isn't it true YouTube has illegally collected data on kids under age 13, in violation of COPPA, and marketed that data to companies? Ms. Veitch?

Alexandra Veitch: (01:48:35)
Thanks for the question, Senator. I’m familiar with COPA that you’re referring to. That was a novel interpretation of COPA. We worked directly with the FTC to reach an agreement about how we treat made for kids content on YouTube main. We do-

Senator Blackburn: (01:48:55)
Okay, Ms. Veitch, yes, you reached a settlement in 2019. You were fined a record $170 million. You recall that?

Alexandra Veitch: (01:49:08)
Yes ma’am.

Senator Blackburn: (01:49:09)
Okay. So, the FTC order does not require you to police the channels that deceive by mis-designating their content. However, Commissioner Slaughter said YouTube should have to take the extra step of creating an algorithmic classifier to better police YouTube content for kids. I know your engineers are capable of designing algorithms for all sorts of purposes, good and evil. So let me ask you this: is the YouTube engineering team capable of designing an algorithm that can identify designated child-directed content and turn off behavioral advertising?

Alexandra Veitch: (01:50:01)
Senator, they are capable of that, and they have done that. We do require creators to designate their content as made for kids or not. But we also run classifiers, as you mentioned, to check that system and to determine what content is appropriate to be made for children, and served to children. We also, just to be clear, Senator, do not allow personalized advertising on made for kids content.
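To make the mechanism Ms. Veitch describes concrete, here is a hedged sketch of how a creator's own designation and an automated classifier's score might combine to switch off personalized advertising on children's content. This is not YouTube's implementation; the function names, threshold, and ad categories are assumptions for illustration.

```python
def is_made_for_kids(creator_designation: bool, classifier_score: float,
                     threshold: float = 0.8) -> bool:
    # Treat content as made for kids if the creator designates it as such,
    # OR if an automated classifier is sufficiently confident.
    return creator_designation or classifier_score >= threshold

def select_ads(content_id: str, made_for_kids: bool) -> dict:
    if made_for_kids:
        # No behavioral/personalized targeting on children's content.
        return {"content": content_id, "ad_type": "contextual_only"}
    return {"content": content_id, "ad_type": "personalized"}

label = is_made_for_kids(creator_designation=False, classifier_score=0.93)
print(select_ads("video-123", label))  # {'content': 'video-123', 'ad_type': 'contextual_only'}
```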

Senator Blackburn: (01:50:29)
Are you prioritizing profit over children?

Alexandra Veitch: (01:50:35)
No, Senator, child safety on our platform is our top priority. We build our product with parental controls baked right in, things like timers that parents can control.

Senator Blackburn: (01:50:46)
Well, the FTC is prioritizing children and taking steps to safeguard them. Under the settlement, you promised to stop illegally marketing targeted ads to children. Videos now have been labeled as made for kids, as you just mentioned, and made-for-kids videos will no longer include a comment section or end screens that allow viewers to subscribe to channels. So are you allowing this behavioral advertising to be turned off?

Alexandra Veitch: (01:51:25)
Yes, Senator, we do not serve personalized advertisements on made for kids content.

Senator Blackburn: (01:51:30)
Okay. I am over my time. Ms. Bickert, I have a question for you I will submit for the record. Thank you, Mr. Chairman.

Senator Coons: (01:51:38)
Thank you, Senator Blackburn. Senator Blumenthal?

Senator Blumenthal: (01:51:41)
Thank you, Mr. Chairman. Thank you to all of our witnesses for being part of this hearing, and to the Chairman for holding it. It's a very, very critically important topic and hearing, and I apologize that I am late coming here, because I was chairing a Subcommittee of Commerce on Consumer Protection dealing with COVID scams. I am very proud that last week the United States Senate approved the bipartisan Jabara-Heyer NO HATE Act, which I led alongside Senator Moran. We've known for a long time that hate crimes are on the rise; they are exploding in this very polarized and vitriolic time. Viral videos of individual crimes posted on Facebook, Twitter, and YouTube, no matter how horrifying or stomach-turning, really tell only part of the story. The NO HATE Act will improve hate crime reporting, because so many of them are invisible and unreported, and it will expand assistance and resources for victims of hate crimes as well as for law enforcement, and hopefully will enable us to understand the full scope of the problem so that we can take more effective action against hate crimes.

Senator Blumenthal: (01:53:11)
We know that the tech platforms play a role in hate crimes and hate speech, online and off. The Anti-Defamation League recently found that as many as one in three Americans experience hate crimes and harassment online. Following the ADL's very concerning findings, I teamed up with Representative Raskin to request a Government Accountability Office study specifically on the prevalence of online hate crimes and hate speech in the United States. During the 2020 election, Facebook spoke about the "break the glass" measures it was taking to "dial down" the hate, incitements to violence, and misinformation on its platform. Last week, Ms. Bickert, you wrote a blog post about turning the dial down on hate speech, graphic violence, violence and incitement, as the country was anticipating the verdict in the Chauvin trial.

Senator Blumenthal: (01:54:21)
If Facebook does in fact have a dial for hateful content, can the company dial it down now? Why doesn't it dial it down already? And to all of the representatives who are here today from YouTube, Twitter, as well as Facebook: can you commit to providing access to data for independent researchers to help us better understand and address the scourge of hate and harassment online?

Monika Bickert: (01:54:58)
Senator, thank you. And let me start by saying I completely agree that the rise of hate speech and hate crimes is very concerning and needs to be a priority for us, and is a priority for us. And I’ll just point to one quick example, which is we’ve started publishing the prevalence. In our quarterly reports that we put out, our community standards enforcement reports, we now publish the prevalence of hate speech, which means we go through with a fine tooth comb and see what we missed for a statistically significant subset of content. And the prevalence of hate speech on our service is very low, less than a 10th of 1%, but it’s something that we’re really focused on finding. And now, I’m happy to say, that more than 95% of the content that we remove for hate speech violations, we find ourselves before anybody reports it to us, so we are making strides. But to respond to your point that the measures we took around the Chauvin trial and the election, why we don’t always do that, let me sort of give you an example.

Monika Bickert: (01:56:03)
… and why we don’t always do that. Let me give you an example the cost of those measures because they have benefits, but they have costs. In the run-up to the election, for instance, we took some very aggressive measures to reduce the distribution of content that might be violating our policies. We did that with the Chauvin trial as well. Those measures aren’t perfect. And so there will be content that actually doesn’t violate our policies that was flagged by our technology that really shouldn’t be reduced.

Monika Bickert: (01:56:33)
And so when we take those measures, we’re mindful of the cost. It’s always this balance between trying to stop abuse and trying to make sure that we’re providing a space for freedom of expression and being very fair. And so we take those measures where there’s a risk of false positives only when there is an additional risk of abuse.
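On the prevalence figure Ms. Bickert cites earlier in this answer: prevalence, as she describes it, is estimated by having reviewers label a statistically significant random sample of content views. A minimal sketch of that kind of estimate follows, with made-up numbers; the sample size, population, and rate are all illustrative assumptions.

```python
import random

def estimate_prevalence(view_is_violating, sample_size, seed=0):
    # view_is_violating: one boolean per content view in the population,
    # True if human review would find the viewed content violating.
    # Reviewers label a random sample; the sample rate estimates prevalence.
    random.seed(seed)
    sample = random.sample(view_is_violating, sample_size)
    return sum(sample) / sample_size

# Hypothetical population: 1,000,000 views, 0.08% of them on violating content.
population = [True] * 800 + [False] * 999_200
print(f"estimated prevalence: {estimate_prevalence(population, 50_000):.3%}")
```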

Sen. Blumenthal: (01:56:52)
Thank you. My time has expired. Thank you very much. Thanks Mr. Chairman.

Chris Coons: (01:57:04)
Thank you so much, Senator Blumenthal. We're going to have a second round of questioning that may be participated in only by the ranking member and myself, given that there are votes actively ongoing on the floor. But let me thank our five witnesses again, and the many members of this subcommittee who have come to question. Ms. Veitch, I understand that 70% of the views on YouTube are driven by its recommendation algorithm. With two billion users worldwide and over a billion hours of video watched each and every day, that makes your recommendation algorithm incredibly powerful.

Chris Coons: (01:57:44)
Members of the public can see how many times any video has been viewed, but members of the public can’t see how many times a video has been recommended. Though I understand YouTube does collect this information and gives it to content providers. So if a video ends up getting taken down by YouTube for violating its content policies, we have no way of knowing how many times it was recommended by your algorithm before it was ultimately removed. Could YouTube commit today to providing more transparency about your recommendation algorithm and its impact?

Alexandra Veitch: (01:58:22)
Thanks for this question, Senator. So just generally speaking, if content violates our policies, we want to remove it as quickly as possible. As you'll see in our public community guidelines enforcement report, of the 9.3 million videos we removed in the fourth quarter of 2020, more than 70% were removed before they had 10 views. I think you've brought up an interesting idea, and we're always looking to expand transparency when it comes to our platform.

Alexandra Veitch: (01:58:51)
One way we’ve done this recently is by making public what we call our violative view rate. It’s the percentage of views on our platform that violate our community guidelines. Last quarter it was between 0.16% and 0.18%.

Chris Coons: (01:59:05)
Ms. Veitch, if I might, I just want to know if you’re willing to release the data I believe you’re already collecting about how many times videos that violate your content standards have been recommended by your recommendation algorithm.

Alexandra Veitch: (01:59:21)
Thank you, Senator. I can’t commit to releasing that today, but it’s an interesting idea. We want to be more transparent, so let us work with you on that.

Chris Coons: (01:59:28)
Thank you. I look forward to getting an answer as soon as is reasonably possible. Ms. Bickert, several publications have reported that significant portions of misinformation and polarizing content on Facebook come from readily identifiable hyperactive users, or super inviters, who generate a lot of activity on your platform. Dr. Donovan, can you comment briefly on how these hyperactive users create problems? And then Ms. Bickert, I want to ask about whether or not Facebook intends to tackle this challenge. Dr. Donovan.

Dr. Joan Donovan: (02:00:02)
Yeah, I believe you’re referring to the Buzzfeed article that reported on an internal memo from Facebook that showed there’s a power law at play, where it skews heavily… Misinformation tends to be most potent when you have a densely networked and highly coordinated small group of people working essentially around the clock to try to get their groups stocked with the public.

Dr. Joan Donovan: (02:00:30)
And what’s been interesting about reading the document internal to Facebook is that even as they tried to counter super inviters, their own internal systems and teams were not able to overcome that coordinated small network. And so there’s a lot that the company needs to do to address adversarial movements, and in this case they were looking at the formation of Stop the Steal groups and the Patriot Party.

Chris Coons: (02:00:58)
Thank you, Dr. Donovan. Ms. Bickert, the Wall Street Journal reported last year that Facebook seriously considered, but ultimately declined to take, measures that would put limits on these users’ activities. There was a proposal reportedly called “sparing sharing,” which would have reduced the spread of content that was disproportionately favored by these so-called hyperactive users. Could you speak to how Facebook is intending to approach this issue?

Monika Bickert: (02:01:28)
Yes, Senator, and let me say, we did actually put a restriction, a limit, on the number of invites that an individual user could send out in a day to a group during the election period. But I want to speak also to the point that Dr. Donovan raised, and I completely agree. There are networks of bad actors who are particularly sophisticated, who try to target and use social media to achieve their objectives, and understanding the way that those networks work has been something that we have really been focused on in the past few years.

Monika Bickert: (02:02:05)
We’ve built a team under Nathaniel Gleicher, who has expertise in this area and, I know, knows Dr. Donovan as well, focused on identifying the sophisticated actors who are often engaged in shell games and other attempts to obfuscate what they’re doing using inauthentic identities. We’ve gotten far better at that. We’ve removed more than a hundred such networks since the beginning of 2017. We are public about it when we do; we publish the results of those takedowns. And we’ve also gotten better generally at identifying fake accounts. We remove more than a million fake accounts at or near the time of upload every day now.

Chris Coons: (02:02:42)
Well, thank you. I look forward to delving into this further with you and with other folks at Facebook. Let me just ask two more, maybe three more, Mr. Ranking Member. A quick, just structural question. I know it’s common for employees at major tech companies to be required to sign non-disclosure agreements as a condition of employment. When I was in the private sector, that was a common practice in the businesses that I knew about and practiced for. Ms. Bickert, Ms. Culbertson, Ms. Veitch, do each of your companies generally require your employees to sign NDAs? Should be an easy yes or no question.

Monika Bickert: (02:03:23)
I’ll go first, Senator. I don’t know the answer, but I’ll have the team follow up with you.

Alexandra Veitch: (02:03:29)
Senator, I want to be careful here because I’m not a lawyer or an employment lawyer, but I do believe that we have standard agreements to protect proprietary information with our employees.

Lauren Culbertson: (02:03:43)
I’d want to come back to you with the answer, but of course we have certain provisions in place to make sure people aren’t sharing private data that they might be handling. But I would just say, generally, the Twitter spirit among our employees is to share their perspectives. You’ll oftentimes see our employees tweeting about our different products and services.

Chris Coons: (02:04:05)
Well, thank you. In general, my concern is that if a former employee from one of your companies wants to question or criticize the company or its decision-making, they might risk facing legal action. And Mr. Harris, Dr. Donovan, I’d welcome some more input from you following this hearing on that dynamic and whether or not NDAs actually prevent some of the most relevant information about algorithms from getting out to the general public.

Chris Coons: (02:04:33)
Two last questions, if I might, one on transparency. I appreciate the information that’s been shared today about how algorithms work at a high level. Many independent researchers have said it’s critical to know the details, the dials and knobs of algorithms, to understand how the components that drive decisions are weighted: for example, how much a metric like meaningful social interaction is actually correlated with growth and engagement, which Mr. Harris has repeatedly asserted, and as I fundamentally believe, the business model of social media requires you to accelerate.

Chris Coons: (02:05:08)
So given the immense impact of the knobs and dials of your algorithms, in potentially both positive and negative ways, I think greater transparency about those matters, about how your algorithms actually work and how you make decisions about them, is critical. Ms. Bickert, Ms. Veitch, Ms. Culbertson, could you speak to whether your companies are considering the release of more details about this kind of information, or other types of enhanced transparency measures or audits about the impact of your company’s algorithms moving forward?

Lauren Culbertson: (02:05:46)
I’m happy to start. We are constantly thinking about how we can be more transparent about any actions that we take or the systems we have in place, including our algorithms. That’s why we’re investing in our responsible machine learning initiative. We’d be happy to provide more details, but in the interest of time: we have an interdisciplinary group at Twitter looking at our algorithms and studying our machine learning.

Lauren Culbertson: (02:06:15)
We’ll also be sharing some of our findings with the public so we can be open throughout this process. And then just more broadly, we totally agree that we should be more transparent. We should also provide more consumer control and choice. We’re also committed to improving procedural fairness, but to those first two points, we’ve invested in this independent project called Bluesky, which is aimed at creating open protocols that could essentially create more controls for the people who use our services, as well as more transparency.

Chris Coons: (02:06:53)
Thank you. My last comment will be just this one. Mr. Harris spoke forcefully and pointedly about how the business model of social media is attention harvesting, and that after a decade of the positive and negative impacts of social media, which has accelerated to become one of the most important forces in our society today, we’ve more often than not seen the toxic impacts of division and disinformation.

Chris Coons: (02:07:19)
Mr. Harris has asserted that your entire business model is based on dividing society, and that as we transition into a digitized society in the 21st century, in order for Western open democratic societies to survive, we have to develop model humane standards for how social media works. It is my hope, and I’ll share this with my ranking member, that the next time we convene it might be to consider what sorts of steps are possible, necessary, or appropriate to make the progress that Mr. Harris speaks about. To my ranking member, Senator Sasse.

Ben Sasse: (02:07:56)
Thank you, Chairman Coons. And again, thank you to all five of you for being here. I do want to put another question to Mr. Harris, but before I start that second round of questioning, I would like to just briefly address colleagues on both sides of the aisle, because both Republican and Democratic colleagues today have said a number of things that presumed more precision about the problem than we’ve actually identified here, and then picked up the most ready tool, usually the Section 230 discussion.

Ben Sasse: (02:08:27)
I think I’m a lot more skeptical than maybe most on this committee about pushing to a regulatory solution at this stage, and I think in particular, some of the conversations about Section 230 have been well off point from the actual topic at hand today. And I think much of the zeal to regulate is driven by short-term partisan agendas. I think it would be more useful for us to stick closer to the topic that the chairman identified for this hearing.

Ben Sasse: (02:08:57)
I also think it’s important for members of Congress to constantly remind ourselves that we’re bound by First Amendment constraints in our job. A number of the lines of questioning today, again on both the right and left sides of this panel, talked as if the First Amendment is this marginal topic that we don’t have to be obsessively concerned about. And yet we need to draw a distinction between the First Amendment and the true public square as regulated by the powers of the government, and the fact that the companies we’re talking about…

Ben Sasse: (02:09:29)
Amy Klobuchar has raised some important topics about scale and antitrust issues, but the companies we’re talking about are private companies. And so I just think there are a number of First Amendment public-private distinctions that we should be attending to a little more closely than maybe we did today. But Mr. Harris, can you tell us what discussions you’ve seen or been a part of, either inside the extant companies or in the VC/PE environment, about potential different business models besides an ad-revenue-centric business model? Can you just give us a blue sky on that question?

Tristan Harris: (02:10:08)
Yeah. Fantastic question. Thank you, Senator. I mean, obviously there are subscription models, there are public interest models, more like Wikipedia, but I want to make an additional distinction, which is not just the funding model but the design model. The engagement and advertising model works because of the design that relies on user-generated content. We’re all the unpaid journalists.

Tristan Harris: (02:10:29)
Previously, you had to pay a journalist at a Fox News or a New York Times $100,000 a year to write content to get people to look at it, and that’s the cost of attention production. But what if you could harvest each of us as useful idiots, take our five minutes of moral outrage, and then use that to generate attention production for free? So we’re the unpaid labor for the attention, duped into sharing information with each other, which reduced the costs for all these technology companies. And then on the editorial side, instead of paying an editor at a New York Times, at a Fox News, at a whatever, $100,000 a year, $20,000 a year…

Tristan Harris: (02:11:05)
We actually have algorithms, which we also don’t have to pay, to randomly sort that content to people. This happens in a values-blind process, which means that in general, you get harm showing up in all of the blind spots. Suddenly Joan wakes up and says, “Hey, there’s this problem, there’s this problem, this problem.” The companies will respond and say, “Okay, fine, we’ll take the whack-a-mole stick and we’ll deal with those three problems.” But in general, values blindness destroys our democracy faster than people like Renee and Joan and so many of our friends in this community can raise the alarms about it. And that’s fundamentally the core design model, more so than the funding model.

Tristan Harris: (02:11:39)
We could have public interest technology that’s funded for the public interest. We could tax these companies to pay into a regenerative fund. There’s a whole bunch of models we could do. One is from energy: energy companies have this perverse incentive where they make more money the more energy you use, so theoretically, leave the lights on, leave the faucets on, we make more money. But they don’t do that, because instead they have a model that’s regulated so that after a certain amount, they double-charge you, triple-charge you, to disincentivize your energy use.

Tristan Harris: (02:12:06)
But then that money, instead of going into the private business model of the companies, the balance sheets, gets put into a regenerative fund to increase the transition to solar. Imagine that technology companies, which today profit from an infinite amount of engagement, only made money from a small portion of that, let’s say some small amount of time, and the rest basically was taxed to put into a regenerative public interest fund. And this could fund things like the fourth estate, fact-checkers, researchers, public interest technologists, things like that, because what we really need to do is, as we said, organize a comprehensive shift to more humane technology, so a digital open society can compete with digital closed societies.

Ben Sasse: (02:12:43)
Very helpful. If we had more time, I was going to ask some questions, given your role as an ethicist, about whether or not there are debates inside the companies about what the optimal user time on a site is on any given day, and whether there is a distinction between a fully consenting, assenting 49-year-old like myself and how those platforms think about it for a 13-year-old and a 17-year-old as well. But I know that Chris and I both need to go and vote. So I will just echo the thanks to all five of you for the fulsome discussion today, and to be continued.

Chris Coons: (02:13:22)
Thank you, Senator Sasse. Let me conclude by thanking all five of our witnesses for appearing today and my 11 colleagues who have appeared and engaged in robust questioning. I appreciate in particular the willingness of witnesses from Facebook, Twitter, and YouTube to answer some direct and difficult questions about their platforms and their business models. And I’m encouraged to see that these are topics that are broadly of interest, and I think there could be a broadly bipartisan solution.

Chris Coons: (02:13:52)
None of us wants to live in a society that, as the price of remaining open and free, is hopelessly politically divided, where our kids are hooked on their phones and being delivered a torrent of reprehensible material. But I also am conscious of the fact that we don’t want to needlessly constrain some of the most innovative, fastest-growing businesses in the West. Striking that balance is going to require more conversation, and I look forward to continuing to work with Ranking Member Sasse on these matters, whether by round table or additional hearing, and whether by seeking voluntary reforms, regulation, or legislation.

Chris Coons: (02:14:32)
That includes exploring how best to align incentives, both within companies and with the rest of our society to ensure greater transparency and user choice. And I think we have to approach these challenging and complex issues with both humility and urgency. The stakes demand nothing less. Members of the committee may submit questions for the record for the witnesses. They’re due by 5:00 PM one week from today on May 4th. And with that, this hearing is adjourned.
