Oct 25, 2021

Facebook Whistleblower Frances Haugen Testifies Before UK Parliament Transcript

Facebook whistleblower Frances Haugen testified before the UK Parliament on October 25, 2021. Read the transcript of the full hearing here.

Frances Haugen: (00:00)
… critical time to act. When we see something like an oil spill, that oil spill doesn’t make it harder for a society to regulate oil companies, but right now the failures of Facebook are making it harder for us to regulate Facebook.

Damian Collins: (00:16)
On those failures, looking at the way the platform is moderated today, unless there is change, do you think it makes it more likely that we will see events like the insurrection in Washington on the 6th of January this year, more violent acts that have been driven by Facebook systems? Do you think it’s more likely we will see more of those events as things stand today?

Frances Haugen: (00:35)
I have no doubt that the events we’re seeing around the world, things like Myanmar, Ethiopia, those are the opening chapters, because engagement based ranking does two things: one, it prioritizes and amplifies divisive, polarizing, extreme content; and two, it concentrates it. And so Facebook comes back and says, “Only a tiny sliver of content on our platform is hate,” or, “Only a tiny sliver is violence.” One, they can’t detect it very well, so I don’t know if I trust those numbers; but two, it gets hyper-concentrated in 5% of the population. And you only need 3% of the population on the streets to have a revolution, and that’s dangerous.

Damian Collins: (01:11)
I want to ask you a bit about that hyper-concentration-

Frances Haugen: (01:13)
Sure.

Damian Collins: (01:13)
… particularly an area that you worked on in particular, and that’s Facebook Groups. I remember being told several years ago by a Facebook executive that the only way you could drive content through the platform is advertising. I think we can see that is not true; groups are increasingly used to shape that experience. We talk a lot about the impacts of algorithm-based recommendation tools like the newsfeed. To what extent do you think groups are shaping the experience for many people on Facebook?

Frances Haugen: (01:38)
Groups play a huge and critical role in driving the experience on Facebook. When I worked on civic misinformation, this is based on recollection, I don’t have a document, but I believe it was something like 60% of the content in the newsfeed was from Groups. I think a thing that’s important for this group to know is that Facebook has been trying to extend the length of sessions, like get you to consume longer sessions, more content. And the only way they can do that is by multiplying the content that already exists on the platform. And the way they do that is with things like Groups and re-shares. If I put one post into a half million person group, that can go out to half a million people. And when combined with engagement based ranking, that group might produce 500, a thousand pieces of content a day, but only three get delivered. And if your algorithm is biased towards extreme polarizing divisive content, it’s like viral variants. Those giant groups are producing lots and lots of pieces of content, and only the ones most likely to spread are the ones that go out.

Damian Collins: (02:32)
It was reported, I think last year by the Wall Street Journal, that 60% of people that joined Facebook Groups that shared extremist content and promoted extremist content did so at Facebook’s active recommendation. So this is clearly something Facebook is researching. What action is Facebook taking about Groups that share extremist content?

Frances Haugen: (02:51)
I don’t know the exact actions that have been taken in the last six months to a year. Actions regarding extremist groups that are recommended actively to users, promoted to users, is a thing [inaudible 00:03:10] here’s our five point plan. And here’s the data that would allow you to hold us accountable, because Facebook acting in a non-transparent, unaccountable way will just lead to more tragedies.

Damian Collins: (03:19)
You think that five point plan exists?

Frances Haugen: (03:21)
I don’t know if they have a five point plan.

Damian Collins: (03:22)
Or any plan. Do they-

Frances Haugen: (03:24)
Yeah. I don’t know. I didn’t work on that.

Damian Collins: (03:26)
Okay. But I mean, to what extent should we be considering Groups, or sort of regulate it, you can’t regulate, but asking these questions about Facebook Groups? I mean, from what you were saying, they are a significant driver of engagement. And if engagement is part of the problem, the way Facebook designed it, then Groups must be a big part of that too.

Frances Haugen: (03:44)
Part of what is dangerous about Groups is that, we sometimes talk about this idea of, is this an individual problem or is this a societal problem? One of the things that happens in aggregate is the algorithms take people who have very mainstream interests and they push them towards extreme interests. You can be someone center left and you’ll get pushed to radical left. You can be center right, you’ll be pushed to radical right. You can be looking for healthy recipes, you’ll get pushed to anorexia content. There are examples in Facebook’s research of all of this. One of the things that happens with groups and with networks of groups is that people see echo chambers that create social norms. So if I’m in a group that has lots of COVID misinformation, and I see over and over again that if someone brings up the COVID vaccine and encourages people to get vaccinated, they get completely pounced upon. They get torn apart. I learn that certain ideas are acceptable and unacceptable. When that context is around hate, now you see a normalization of hate and normalization of dehumanizing others. And that’s what leads to violent incidents.

Damian Collins: (04:49)
I mean, many people would say that groups, particularly large groups, and some of these groups have hundreds of thousands of members in them, millions, they should be much easier for the platform to moderate because people are gathering in a common place.

Frances Haugen: (05:01)
I strongly recommend that above a certain size group, they should be required to provide their own moderators and moderate every post. This would naturally, in a content agnostic way, regulate the impact of those large groups. Because if that group is actually valuable enough, they will have no trouble recruiting volunteers. But if that group is just an amplification point, like we see foreign information operations using groups like this in virality hacking, that’s the practice of borrowing viral content from other places to build a group. We see these places as being… If you want to launch an advertising campaign with misinformation in it, we at least have a credit card to track you back. If you want to start a group and invite a thousand people every day, the limit is I think 2,200 people you can invite every day, you can build out that group and your content will land in their Newsfeed for a month. And if they engage with any of it, it will be considered a follow. And so things like that make them very, very dangerous and they drive outsized impact on the platform.

Damian Collins: (06:04)
From what you say, if a [inaudible 00:06:06] or agency wanted to influence what a group of people on Facebook would see, you’d probably set up Facebook groups to do that more than you would Facebook pages and run advertising.

Frances Haugen: (06:15)
And that is definitely a strategy that is currently used by information operations. Another one that’s used, which I think is quite dangerous, is you can create a new account and within five minutes go post into a million person group, right? There’s no accountability, there’s no trace. You can find a group to target any interest you want to. Very, very fine grain. Even if you removed micro-targeting from ads, people would micro-target via groups.

Damian Collins: (06:41)
And again, I mean, what do you think the company’s strategy is for dealing with this? Because again, there were changes made to Facebook Groups, I think in 2017, 2018, to create more of a community experience, I think Mark Zuckerberg said, which is good for engagement. But it would seem similar to changes to the way Newsfeed works in terms of the content that it prefers and favors. These are reforms that the company has put in place that have been good for engagement, but have been terrible for harm.

Frances Haugen: (07:06)
I think we need to move away from having binary choices. There’s a huge continuum of options that exist. And coming in and saying, “Hey, groups that are under a thousand people are wonderful. They create community. They create solidarity. They help people with connections,” but you get above a certain size, maybe 10,000 people, you need to start moderating that group. Because that alone, that naturally rate limits it. And the thing that we need to think about is, where do we add selective friction to these systems so that they are safe in every language? You don’t need the AIs to find the bad content.

Damian Collins: (07:42)
In your experience, is Facebook testing its systems all the time? Does Facebook experiment with the way its systems work around how you can increase engagement? And obviously, in terms of content on the Newsfeed, we know it experimented around the election time around the sort of news that should be favored. So how does Facebook work in experimenting with its tools?

Frances Haugen: (08:00)
Facebook is continuously running many experiments in parallel, on little slices of the data that they have. I’m a strong proponent that Facebook should have to publish a feed of all the experiments they’re running. They don’t have to tell us what the experiment is, just an ID. And even just seeing the results data would allow us to establish patterns of behavior. Because the real thing we’re seeing here is Facebook accepting little, tiny additions of harm when they weigh off how much harm is worth how much growth for us. Right now, we can’t benchmark and say, “Oh, you’re running all these experiments. Are you acting in the public good?” But if we had that data, we could see patterns of behavior and see whether or not trends are occurring.

Damian Collins: (08:40)
You worked in the civic integrity team at Facebook.

Frances Haugen: (08:42)
Mm-hmm (affirmative).

Damian Collins: (08:42)
So if you saw something that was concerning you, who would you report to?

Frances Haugen: (08:45)
This is a huge weak spot. If I drove a bus in the United States, there would be a phone number in my break room that I could call that would say, “Did you see something that endangered public safety? Call this number.” Someone will take you seriously and listen to you in the Department of Transportation. When I worked on counter-espionage, I saw things where I was concerned about national security, and I had no idea how to escalate those, because I didn’t have faith in my chain of command at that point. They had dissolved civic integrity. I didn’t see that they would take that seriously. And we were told just to accept under resourcing.

Damian Collins: (09:19)
But I mean, in theory you’d report to your line manager. And would it be then up to them whether they chose to escalate that?

Frances Haugen: (09:27)
I flagged repeatedly, when I worked on civic integrity, that I felt that critical teams were understaffed. And I was told, at Facebook, we accomplish unimaginable things with far fewer resources than anyone would think possible. There is a culture that lionizes kind of a startup ethic that is, in my opinion, irresponsible, right? The idea that the person who can figure out how to move the metric by cutting the most corners is good. And the reality is, it doesn’t matter if Facebook is spending $14 billion on safety a year; if they should be spending 25 billion or 35 billion, that’s the real question. And right now there are no incentives internally; if you make noise saying, “We need more help,” people will not rally around to help, because everyone is underwater.

Damian Collins: (10:15)
In many organizations that ultimately fail, I think that sort of culture exists: a culture where there’s no external audit and people inside the organization don’t share problems with the people at the top. What do you think people like Mark Zuckerberg know about these things?

Frances Haugen: (10:29)
I think it’s important that all facts are viewed through a lens of interpretation. And there is a pattern across a lot of the people who run the company, or senior leaders, which is, this may be the only job they’ve ever had, right? Like, Mark came in when he was 19 and he’s still the CEO. There’s a lot of other people who are VPs or directors who this is the only job they’ve ever had. And so there is a lack of… The people who have been promoted were the people who could focus on the goals they were given, and not necessarily the ones that asked questions around public safety. And I think there’s a real thing that people are exposed to data, and then they say, “Look at all the good we’re doing.” Yes, that’s true, but we didn’t invent hate. We didn’t invent ethnic violence. And that’s not the question. The question is, what is Facebook doing to amplify or expand hate? What is it doing to amplify or expand ethnic violence?

Damian Collins: (11:22)
Right. I mean, Facebook didn’t invent hate, but do you think it’s making hate worse?

Frances Haugen: (11:24)
Unquestionably it’s making hate worse.

Damian Collins: (11:27)
Thank you. Joining us remotely, Jim Knight.

Jim Knight: (11:31)
Thank you very much, Chairman. Thank you, Frances, for coming in and talking to us. First of all, just on some of that last fascinating discussion that you were having, you talked about if you were calling out for help, you wouldn’t necessarily get the resource. Would the same be true if you were working in PR or legal within Facebook?

Frances Haugen: (11:53)
I have never worked in PR or communication, so I’m not sure. I do know that there is… I was shocked to hear recently that Facebook wants to double down on the metaverse and that they’re going to hire 10,000 engineers in Europe to work on the metaverse. Because I was like, “Wow. Do you know what we could have done with safety if we’d had 10,000 more engineers?” It would have been amazing. I think there is a view inside the company that safety is a cost center, it’s not a growth center, which I think is very short-term in thinking because Facebook’s own research has shown that when people have worse integrity experiences on the site, they are less likely to retain. I think regulation could actually be good for Facebook’s long-term success, because it would force Facebook back into a place where it was more pleasant to be on Facebook, and that could be good for the long-term growth of the company.

Jim Knight: (12:41)
Thank you. And then let me go back also to the discussion about Facebook groups, by which we’re essentially talking about private groups, clearly. If you were asked to be the regulator of a platform like Facebook, how do you get the transparency about what’s going on in private groups, given that they’re private?

Frances Haugen: (13:04)
I think there’s a real bar. We need to have a conversation as a society around how many people… After a certain number of people have seen something, is it truly private, right? Is that number 10,000? Is it 25,000? Is it really private at that point? Because I think there is an argument that Facebook will make, which is that there might be a sensitive group, which someone might post into, and we wouldn’t want to share that even if 25,000 people saw it, which I think is actually more dangerous, right? That if people are lulled into a sense of safety, that no one’s going to see their hate speech or no one’s going to see maybe a more sensitive thing. Like maybe they haven’t come out yet, right? That is dangerous because those spaces are not safe. When a hundred thousand people see something, you don’t know who saw it and what they might do.

Frances Haugen: (13:51)
So I’m a big proponent of, both Google and Twitter are radically more transparent than Facebook. People every day download the search results on Google and analyze them, and people publish papers. And because Google knows this happens, they staff software engineers who work on search quality to write blog posts. Twitter knows that 10% of all the public tweets end up going out on their firehose, and people analyze those and do things like find information operation networks. And because Twitter knows someone is watching, they behave better. I think in the case of Facebook, and even with private groups, there should be some bar above which we say, “Enough people have seen it. It’s not private.” And we should have a firehose just like Twitter, because if we want to catch national security threats, like information operations, we need to have not just the people at Facebook looking at it, we need to have 10,000 researchers looking at it. And I think in addition to that, we’d have accountability on things like algorithmic bias or understanding whether or not our children are safe.

Jim Knight: (14:52)
That’s really helpful. And just on Twitter and algorithmic bias, they published a report on Friday suggesting that there’s an algorithmic bias politically. Do you think that is unique to Twitter, or would you say that that would also be the case in Facebook? Is that something implicit in the way that these platforms, with all of their algorithms, are designed to optimize clicks, and therefore there’s something about certain types of political content that makes it more extreme, something that is endemic to all of these social media companies?

Frances Haugen: (15:26)
I am not aware of any research that demonstrates a political bias on Facebook. I am familiar with lots of research that says the way engagement based ranking was designed… So Facebook calls it meaningful social interactions, though meaningful could have been hate speech or bullying up until November 2020, and it would still be considered meaningful. So let’s call it social interaction ranking. I’ve seen lots of research that says that kind of ranking, engagement based ranking, prioritizes polarizing, extreme divisive content. It doesn’t matter if you’re on the left or on the right, it pushes you to the extremes and it fans hate. Anger and hate is the easiest way to grow on Facebook. There’s something called virality hacking where you figure out all the tricks on how to optimize Facebook. And good actors, good publishers are already publishing all the content they can do, but bad actors have an incentive to play the algorithm. And they figure out all the ways to optimize Facebook. And so the current system is biased towards bad actors and biased towards people who push people to the extremes.

Jim Knight: (16:32)
Thank you. And then currently we have a draft bill, which is focusing on individual harm rather than societal harm. Given the work that you’ve done around democracy as part of your work at Facebook, do you think that it is a mistake to omit societal harm?

Frances Haugen: (16:54)
I think it is a grave danger to democracy and societies around the world to omit societal harm. A core part of why I came forward was I looked at the consequences of choices Facebook was making, and I looked at things like the global south, and I believe situations like Ethiopia are just part of the opening chapters of a novel that is going to be horrific to read. We have to care about societal harm, not just for the global south, but our own societies. Because like I said before, when an oil spill happens, it doesn’t make it harder for us to regulate oil companies. But right now Facebook is closing the door on us being able to act. We have a slight window of time to regain people’s control over AI. We have to take advantage of this moment.

Jim Knight: (17:39)
And my final question, and thank you, undoubtedly, just because you’re a digital company, you’ll have looked at user journeys and analyzed in a lot of detail the data around how different user journeys work. Is there any relationship between paid for advertising and then moving into some of these dangerous private groups, possibly then being moved into messaging services, into encrypted messaging? Are there user journeys like that, that we should also be concerned about, particularly given that paid for advertising is currently excluded from this bill?

Frances Haugen: (18:17)
I am extremely concerned about paid for advertising being excluded, because engagement based ranking impacts ads as much as it impacts organic content. I’ll give you an example. Ads are priced partially based on the likelihood that people like them, they share them, do other things to interact with them, click through on a link. An ad that gets more engagement is a cheaper ad. We have seen over and over again in Facebook’s research, it is easier to provoke people to anger than to empathy or compassion, and so we are literally subsidizing hate on these platforms. It is cheaper, substantially, to run an angry, hateful, divisive ad than it is to run a compassionate, empathetic ad. And I think there is a need for things like disclosures of what rates people are paying for ads, having full transparency on the ad stream, and understanding what those biases are and how ads are targeted. In terms of user journeys from ads to extreme groups, I don’t have documents regarding that, but I can imagine it happening.

Damian Collins: (19:28)
[inaudible 00:19:28].

Beeban Kidron: (19:29)
Thank you, Frances, and thank you really very much for being here and taking a personal risk to be here. We are grateful. I really wanted to ask a number of questions that sort of speak to the fact that this system is entirely engineered for a particular outcome. And maybe you could start by telling us, what is Facebook optimized for?

Frances Haugen: (19:53)
I think a thing that is not necessarily obvious to us as consumers of Facebook is that Facebook is actually a two-sided marketplace. It is about [inaudible 00:20:00] in addition to being about consumers. You can’t consume content on Facebook without getting someone to produce it. Facebook switched over to engagement based ranking. They said, “The reason we’re doing this is we believe it’s important for people to interact with each other. We don’t want people to mindlessly scroll.” But a large part of what was disclosed in the documents was that a large factor that motivated this change was that people were producing less content. Facebook has run things called producer side experiments, where they artificially give people more distribution to see what is the impact on your future behavior of getting more likes, more re-shares, because they know if you get those little hits of dopamine, you’re more likely to produce more content. And so right now, Facebook has said repeatedly, “It’s not in our business interest to optimize for hate. It’s not in our business interest to give people bad experiences,” but it is in Facebook’s interest to make sure the content production wheel keeps turning. Because you won’t look at ads if your feed doesn’t keep you on the site. And Facebook has accepted the costs of engagement based ranking, because it allows that wheel to keep turning.

Beeban Kidron: (21:10)
So that actually leads beautifully to my next question, which is, I was really struck, not so much by the harms, because in a funny way, they just gave evidence to what a lot of people have been saying for a long time and a lot of people have been experiencing. But what was super interesting was that again and again, the documents show that Facebook employees were saying, “Oh, you could do this. You could do that.” And I think that a lot of people don’t understand what you could do. So I’d really love you to say to the committee, unpack a little bit, what were Facebook employees saying we could do about the body image issues on Instagram? What were they saying about ethnic violence, and what were they saying about the democratic harms that you were just referring to?

Frances Haugen: (21:59)
I have been mischaracterized repeatedly in certain parts of the internet that I’m here as a plant to get more censorship. One of the things that I saw over and over again in the docs was that there are lots and lots of solutions that don’t involve picking good and bad ideas. They’re about designing the platform for safety, slowing the platform down. And that when you focus, when you give people more content from their family and friends, you get for free, less hateful, divisive content, you get less misinformation. Because the biggest part that’s driving misinformation is these hyper distribution nodes, these groups, where it goes out to 500,000 people. Some examples of non-content based interventions are things like, let’s imagine Alice posts something and Bob re-shares it, and Carol re-shares it, and now it lands in Dan’s newsfeed. If Dan had to copy and paste that to continue to share it, so his share button was grayed out, that’s a two-hop re-share chain, that has the same impact as the entire third-party fact-checking system. Only it’s going to work in the global south. It doesn’t require us to have a language by language system. It just slows the platform down. Moving to systems that are human scaled instead of having AI tell us where to focus is the safest way to design social media.

Frances Haugen: (23:18)
And I want to remind people, we liked social media before we had an algorithmic feed. And Facebook said, “If you move to a chronological feed, you won’t like it.” And it’s true with groups that are 500,000 people where it’s just spraying content at people, you’re not going to like it. But Facebook has choices that it could do in different ways. It could have Groups that were designed like these things called Discord servers, where it’s all chronological, but people break out into different rooms as it gets too crowded. That’s a human intervention, a human scale solution, not an AI driven solution. And so slowing the platform down, content agnostic strategies, human scale solutions, that’s the direction we need to go.

Beeban Kidron: (24:00)
Why don’t they do it?

Frances Haugen: (24:01)
Oh [inaudible 00:24:03]. So each one of these interventions, so in the case of re-shares, there are some countries in the world where 35% of all the content in the newsfeed is a re-share. And the reason why Facebook doesn’t crack down on re-shares or do friction on the Middle East is because they don’t want to lose that growth. They don’t want 1% shorter sessions because that’s also 1% less revenue. And so Facebook has been unwilling to accept even little slivers of profit being sacrificed for safety. And that’s not acceptable.

Beeban Kidron: (24:35)
And I wanted to ask you in particular about what a break glass measure is, if you would tell us.

Frances Haugen: (24:45)
Facebook’s current security strategy, safety strategy, is that it knows engagement based ranking is dangerous, but the AI is going to pick out the bad things. But sometimes the heat in a country gets hotter and hotter and hotter. It might be a place like Myanmar that didn’t have any misinformation classifiers or labeling systems, no hate speech classifiers or labeling systems, because their language wasn’t spoken by enough people. They allow the temperature in these countries to get hotter and hotter and hotter. And when the pot starts boiling over, they’re like, “Oh no, we need to break the glass. We need to slow the platform down.” And Facebook has a strategy of slowing the platform down only when a crisis has begun, instead of watching as the temperature gets hotter and making the platform safer as that happens. So that’s what break glass measures are.

Beeban Kidron: (25:31)
Okay. So I guess why I’m asking these questions is, if you could slow it down, make the groups smaller, have break glass as a norm, rather than in emergency, these are all really safety by design strategies. These are all just saying, make your product fit for purpose. Can you just say if you think that those could be mandatory in the bill that we’re looking at?

Frances Haugen: (25:58)
Facebook right now has characterized the reason why they turned off their break the glass measures after the US 2020 election was because they don’t believe in censorship. These measures had largely nothing to do with content. They were questions around how much do you amplify live video? Do you go to 600X multiplier or a 60X multiplier? Little questions where Facebook optimized their settings for growth over safety. And I think there’s a real thing that we need to think about safety by design first, and Facebook should have to demonstrate that they have assessed the risks, they must be mandated to assess the risks. And we need to specify how good is that risk assessment? Because Facebook will give you a bad one if they can. And we need to mandate that they have to articulate solutions, because Facebook is not articulating what’s the five point plan to solve these things.

Beeban Kidron: (26:49)
I also want to raise the issue of white listing, because a lot of the bill actually talks about terms and conditions, them being very clear, and then upholding terms and conditions and having a regulatory sort of relationship to upholding them. But what about white listing, where some people are exempt from terms and conditions? Can you give us your view on that?

Frances Haugen: (27:13)
For those who are not familiar with the reporting by the Wall Street Journal, there is a program called XCheck. So XCheck was a system where about 5 million people around the world, I think it’s maybe 5.7 million, were given special privileges that allowed them to skip the line, if you will, for safety systems. So the majority of safety systems inside of Facebook didn’t have enough staffing to actually manually review. So Facebook claimed this is just about a second check, about making sure the rules are applied correctly. And because Facebook was unwilling to invest enough people to do that second check, they just let people through. And so I think there’s a real thing of, unless we have more avenues to understand what’s going on inside the company, like for example, imagine if Facebook was required to publish its research on a one-year lag, right? If they have tens of billions of dollars of profit, they can afford to solve problems on a one-year lag. We should be able to know that systems like this exist, because no one knew how bad the system was, because Facebook lied to their own oversight board about it.

Beeban Kidron: (28:17)
I think the last area I really want to think about is, obviously all the documents you bring come from Facebook, but we can’t really regulate for this company in this moment. We have to look at the sector as a whole, and we have to look into the future. And I just wonder whether you have any advice for that. Because we’re not trying to kill Facebook, we’re trying to make the digital world better and safer for its users.

Frances Haugen: (28:45)
Engagement based ranking is a problem across all sites, right? All sites are going to be… It’s easier to provoke humans to anger. Engagement based ranking figures out our vulnerabilities and panders to those things. I think having mandatory risk assessments and mandatory remediation strategies, ways to hold these companies accountable, is critical, because companies are going to evolve. They’re going to figure out how to sidestep things. And we need to make sure that we have a process that is flexible and can evolve with the companies over time.

Beeban Kidron: (29:17)
Fantastic. And finally, really, just do you think that the scope of the bill, it’s user to user and search, do you think that’s a wise move or should we be looking for some systemic solutions for the sector more broadly?

Frances Haugen: (29:33)
User to user and search. That’s a great question. I think any platform that has a reach of more than a couple million people, the public has a right to understand how that is impacting society, because we’re entering an age where technology is accelerating faster and faster. Democratic processes take time, if they’re done well. And we need to be able to think about, how will we know when the next danger is looming? Because for example, in my case, because Facebook is a public company, I could file with the SEC for whistleblower protections. If I had worked at TikTok, which is growing very, very fast, that’s a private company and I wouldn’t have had any avenue to be a whistleblower. And so I think there is a real thing of thinking about any tech company that has a large societal impact, we need to be thinking about how do we get data out of that company? Because for example, you can’t take a college class today to understand the integrity systems inside of Facebook. The only people who understand it are people inside of Facebook. And so thinking systematically about for large tech companies, how do we get the information that we need to make good decisions, is vital.

Beeban Kidron: (30:46)
Thank you so much. Thanks, [inaudible 00:30:48].

Damian Collins: (30:48)
You mentioned the oversight board. I know you’re going to be meeting with the oversight board. They themselves don’t have access to the sort of information you’ve been publishing, or the information you’ve been discussing. But do you think the oversight board should insist on that transparency or disband itself?

Frances Haugen: (31:04)
I always reject binary choices. I’m not an A or B person. I love C and D. I think there is a great opportunity for the oversight board to experiment with what is its bounds. This is a defining moment for the oversight board. What relationship does it want to have with Facebook?

Frances Haugen: (31:25)
I hope the oversight board takes this moment to step up and demand a relationship that has more transparency, because they should ask the question, why was Facebook able to lie to them in this way? What enabled that? Because if Facebook can come in there and just actively mislead the oversight board, which is what they did, I don’t know what the purpose of the oversight board is.

Damian Collins: (31:50)
More of a hindsight board than an oversight board.

Frances Haugen: (31:54)
Yeah.

Damian Collins: (31:54)
Tim Clement-Jones?

Tim Clement-Jones: (31:55)
Frances, hello. You’ve been very eloquent about the impact of the algorithm. You talked about ranking pushing extreme content, the amplification of that sort of content, an addiction driver, I think you’ve used the phrase. And this follows on really from talking about the oversight board or a regulator over here, or indeed, trying to construct a safety by design regime. What do we need to know about the algorithm, and how do we get that, basically? Should it be about the output of an algorithm, or should we be actually inspecting the entrails of the code? When we talk about transparency, it’s very easy just to say, “Oh, we need to be much more transparent about the operation of these algorithms,” but what does that really mean?

Frances Haugen: (32:55)
I think it’s always important to think about Facebook as a concert of algorithms. There are many different algorithmic systems, and they work in different ways. Some are amplification systems; some are downregulation systems. Understanding how all those parts work, and how they work together, is important.

Frances Haugen: (33:11)
So I’ll give you an example. Facebook has said engagement-based ranking is dangerous unless you have this AI that’s going to pick out the extreme content. Facebook has never published which languages are supported and which integrity systems are supported in those languages. Because of this, they are actively misleading the speakers of most large languages in the world by saying we support 50 languages, but most of those countries have a fraction of the safety systems that English has.

Frances Haugen: (33:43)
When we say, “How does the algorithm work?” we need to be thinking about, what is the experience of the algorithm for lots of individual populations? Because the experience of Facebook’s News Feed algorithm in a place that doesn’t have integrity systems on is very different than, say, the experience in Menlo Park.

Frances Haugen: (34:02)
I think some of the things that need to happen are… There are ways of doing privacy-sensitive disclosures of, we call it segmentation. So imagine if you divided the United States up into 600 communities based on what pages and groups people interact with, their interests. You don’t need to say, “This group is 35 to 40-year-old white women who live in the South.” You don’t need to say that. You can have a number on that cluster, but understanding, are some groups disproportionately getting COVID misinfo? Right now, 4% of those segments are getting 80% of all the misinfo. We didn’t know that until my disclosure. For hate speech, it is the same way. For violence incitement, it’s the same way.

Frances Haugen: (34:51)
So when we say, “Do we understand the algorithm?” we should really be asking, “Do we understand the experiences of the algorithm?” If Facebook gives you aggregate data, it will likely hide how dangerous the systems are, because the experience of the 95th percentile for every single integrity harm is radically different, or the 99th percentile is even more radically different, than the median experience. I want to be really clear, the people who go and commit acts of violence, those are people who get hyper-exposed to this dangerous content, and so we need to be able to break out by those extreme experiences.

Tim Clement-Jones: (35:28)
That’s really interesting. Do you think that that is practical for Facebook to produce? Would they need to have further research, or have they got ready access to that kind of information?

Frances Haugen: (35:45)
You could produce that information today. The segmentation systems exist. That was one of the projects that I founded when I was at Facebook. That segmentation has been used since for different problem areas, like COVID misinformation, and they already produce many of these integrity statistics. So part of why it’s extremely important that Facebook should have to publish which integrity systems exist, and in which languages, is that right now…

Frances Haugen: (36:11)
Let’s imagine we are looking at self-harm content for teenagers. Let’s imagine we came and said, “We want to understand, how is self-harm concentrated across these segments?” Facebook’s most recent position, according to a governmental source we talked to, was they said, “We don’t track self-harm content. We don’t know who’s overexposed.” If they were forced to publish what integrity systems exist, we could say, “Wait, why don’t you have a self-harm classifier? You need to have one so we can answer this question of, is the self-harm content focused on 5% of the population?” Because we can answer that question if we have the data.

Tim Clement-Jones: (36:45)
And we should wrap that into a risk assessment that we require to be delivered to us, basically.

Frances Haugen: (36:52)
If I were writing standards on risk assessments, a mandatory provision I would put in there is you need to do segmented analysis, because the median experience on Facebook is a pretty good experience. The real danger is that 20% of the population has a horrible experience or an experience that is dangerous.

Tim Clement-Jones: (37:11)
Is that the core of what we would need by way of information from Facebook or other platforms, or is there other information or data we need? What else do we need to be really effective in risk assessment?

Frances Haugen: (37:27)
I think there’s an opportunity. Imagine if for each of those integrity systems, Facebook had to show you a sampling of content at different scores. We would be able to come in there… A problem that I’m really concerned about is that Facebook has trouble differentiating for, in many languages, the difference between terrorism content and counterterrorism content. So think about the role of counterterrorism content in society. It’s how people make society safer.

Frances Haugen: (37:55)
Because Facebook’s AIs don’t work very well for the language that was in question, I believe it was an Arabic outlet, 76% of the counterterrorism content was getting labeled as terrorism. If Facebook had to disclose content at different scores, we could go and check and say, “Oh, interesting. This is where your systems are weak, and for which languages,” because each language performs differently. I think there is a real importance for, if there were a firehose for Facebook, and Facebook had to disclose what the scoring parameters were, I guarantee you researchers would develop techniques for understanding the roles of those scores in amplifying which kinds of content.

Damian Collins: (38:39)
John Nicolson?

John Nicolson: (38:41)
Thank you very much indeed, chair, and thank you so much for joining us. You might be interested to know that you’re trending on Twitter.

Frances Haugen: (38:48)
Right now? Fascinating.

John Nicolson: (38:51)
So people are listening. I thought the most chilling sentence that you’ve come out with so far this afternoon, I wrote it down: “Anger and hate is the easiest way to grow on Facebook.” That’s shocking, isn’t it? What a horrendous insight into contemporary society on social media that that should be the case.

Frances Haugen: (39:18)
One report from Facebook demonstrates how there are different kinds of feedback cycles that are all playing in concert. It said when you look at the hostility of a comment thread… So let’s look at a single publisher at a time and take all the content on Facebook and look at the average hostility of that comment thread. The more hostile the comment thread, the more likely a click will go out to that publisher. Anger incites traffic outwards, which means profit. We also see that people who want to grow really fast, they do that technique of harvesting viral content from other groups and spreading it into their own pages, into their groups. They bias towards stuff that gets an emotional reaction, and the easiest emotional reaction to get is anger. Psychology research has shown this for decades.

John Nicolson: (40:03)
So those of us who are adults, or aspiring adults, like members of the committee, will find that hard enough to deal with. But for children, this is particularly challenging, isn’t it? I’d like to follow up on some of Baroness Kidron’s very good questions specifically on harm to children. Perhaps you could tell us, for people who don’t know, what percentage of British teenagers can trace, for those who feel like this, their desire to kill themselves back, I can’t even believe I’m saying that sentence, but back to Instagram.

Frances Haugen: (40:36)
I don’t remember the exact statistic. I think it was around 12%. 13%?

John Nicolson: (40:43)
It’s exactly that. And body image is also made much worse, isn’t it?

Frances Haugen: (40:48)
Yes.

John Nicolson: (40:49)
And why should that be, for people who don’t understand that? Why should it be that being on Instagram makes you feel bad about the way your body looks?

Frances Haugen: (40:59)
Facebook’s own reports say it is not just that Instagram is dangerous for teenagers; it is actually more dangerous than other forms of social media-

John Nicolson: (41:09)
Why?

Frances Haugen: (41:09)
Because TikTok is about doing fun activities with your friends. It’s about performance. Snapchat is about faces and augmented reality. Reddit is at least vaguely about ideas. But Instagram is about social comparison and about bodies. It is about people’s lifestyles. And that’s what ends up being worse for kids.

Frances Haugen: (41:29)
There’s also an effect, which is a number of things are different about a life mediated by Instagram than what high school used to be like. So when I was in high school, it didn’t matter if your experience at high school was horrible. Most kids had good homes to go home to, and they could, at the end of the day, disconnect. They would get a break for 16 hours. Facebook’s own research says now the bullying follows children home. It goes into their bedrooms. The last thing they see at night is someone being cruel to them. The first thing they see in the morning is a hateful statement. And that is just so much worse.

John Nicolson: (42:05)
So they don’t get a moment’s peace.

Frances Haugen: (42:06)
They don’t get a moment’s peace.

John Nicolson: (42:07)
If you’re being bullied, you’re being bullied all the time.

Frances Haugen: (42:09)
Yeah.

John Nicolson: (42:09)
Now, you’ve already told the Senate and you’ve also told us what Facebook could do to address some of these issues, but some of your answers were quite complicated.

Frances Haugen: (42:22)
Oh no. Sorry-

John Nicolson: (42:23)
So perhaps you could tell us in a really simple way that anybody can get what Facebook could do to address those issues. Children who want to kill themselves, children who are being bullied, children who are obsessed with their body image in an unhealthy way, and all the other issues that you’ve addressed, but what is it that Facebook can do now, without difficulty, to solve those issues?

Frances Haugen: (42:52)
There are a number of factors that interplay that drive those issues. On a most basic level, children don’t have as good self-regulation as adults do. That’s why they’re not allowed to buy cigarettes. When kids describe their usage of Instagram, Facebook’s own research describes it as an addict’s narrative-

John Nicolson: (43:11)
A what?

Frances Haugen: (43:12)
As an addict’s narrative. The kids say, “This makes me unhappy. I feel like I don’t have the ability to control my usage of it. And I feel that if I left, I’d be ostracized.” I am deeply worried that it may not be possible to make Instagram safe for a 14-year-old, and I sincerely doubt it’s possible to make it safe for a 10-year-old.

John Nicolson: (43:33)
So they shouldn’t be on it.

Frances Haugen: (43:39)
I would love to see a proposal from an established independent agency that had a picture of what a safe version of Instagram for a 14-year-old looks like.

John Nicolson: (43:47)
You don’t think such a thing exists.

Frances Haugen: (43:48)
I am not aware of something that I would feel confident about.

John Nicolson: (43:50)
Does Facebook care whether or not Instagram is safe for a 10-year-old?

Frances Haugen: (43:55)
What I find very deeply misleading about Facebook’s statements regarding children is they say things like, “We need Instagram Kids because kids are going to lie about their age, and so we might as well have a safe thing for them.” Facebook should have to publish what they do to detect 13-year-olds on the platform, because I guarantee what they’re doing today is not enough. And Facebook’s own research does something where… Facebook can guess how old you are with a great deal of precision because they can look at who your friends are, who you interact with-

John Nicolson: (44:28)
Because you’re at school.

Frances Haugen: (44:29)
Yeah, because they can figure-

John Nicolson: (44:30)
That’s a bit of a giveaway. If you’re wearing a school uniform, chances are you aren’t 20.

Frances Haugen: (44:33)
But I want to disclose a very specific thing. This is something actually the Senate found when we disclosed the documents to them. They found that Facebook had estimated the ages of teenagers and worked backwards to figure out how many kids lied about their ages and how many were on the platform. And they found for some cohorts, 10% to 15% of 10-year-olds were on the platform. Facebook should have to publish those stats every year so we can grade, how good were they at keeping kids off the platform?

John Nicolson: (45:00)
Right. So Facebook can resolve this, solve this, if it wants to do so.

Frances Haugen: (45:06)
Facebook can make a huge dent on this if they wanted to, and they don’t because they know that young users are the future of the platform. And the earlier they get them, the more likely they’ll get them hooked.

John Nicolson: (45:14)
Okay. And obviously, young users are no different from the rest of us. They’re also getting to see all the disinformation about COVID and everything else that the rest of us are getting to see. And just remind us what percentage of disinformation is being taken down by Facebook.

Frances Haugen: (45:31)
I actually don’t know that stat off the top of my head-

John Nicolson: (45:35)
From what I understand, it’s 3% to 5%.

Frances Haugen: (45:37)
That’s for hate speech. But I’m sure it’s approximately the same.

John Nicolson: (45:42)
Well, I would guess so.

Frances Haugen: (45:43)
It’s probably even less, given that the only information that counts as false at Facebook is information that has been verified as false by the third-party fact-checking system, and that can only catch viral misinformation. That’s misinformation that goes to half a million, a million people. And this is the most important thing for the UK: I don’t believe there’s anywhere near as much third-party fact-checking coverage for the UK compared to the United States.

John Nicolson: (46:09)
Okay. And the wonders of texting tell me that actually the figure is 10% to 20% for disinformation. Yeah, so I stand corrected, 10% to 20% for disinformation and 3% to 5% for hate speech. So a vast amount of disinformation and hate speech is getting through to children, which must present children with a very peculiar, jaundiced sense of the world. And we have absolutely no idea, do we, how those children are going to grow up and change and develop and mature having lived in this very poisonous society at this very delicate stage in their development.

Frances Haugen: (46:50)
I’m extremely worried about the developmental impacts of Instagram on children, beyond the fact that if you get an eating disorder when you’re 16, you may have osteoporosis for the rest of your life. There are going to be women who are walking around this earth in 60 years with brittle bones because of choices Facebook made now.

Frances Haugen: (47:05)
But I think the secondary thing that I’m super scared about is kids are learning that people they care about treat them cruelly, because kids on Instagram, when they’re removed from the feedback of watching someone cry or watching someone wince, they’re much more hateful, they’re much meaner to people, even people who are their friends. Imagine what the domestic relationships will be like for those kids when they’re 30 if they learn that people who care about them are mean to them.

John Nicolson: (47:28)
It’s a very disturbing thought. And the other very disturbing thing that you’ve told us about, which I think most people haven’t focused on, is the idea that language matters. So we think Facebook is bad now, but what we don’t tend to realize in our very Anglocentric culture is that all the other languages around the world are getting no moderation of any kind at all.

Frances Haugen: (47:56)
I think the thing that should scare you even more living in the UK is the UK is a diverse society. Those languages aren’t just spoken abstractly in Africa, where people get the raw, dangerous version of Facebook, a version of Facebook that Mark has said himself is dangerous. Engagement-based ranking is dangerous without AI. That’s what Mark Zuckerberg said. Those people are also living in the UK and being fed misinformation that is dangerous, that radicalizes people. So language-based coverage is not just a good-for-individuals thing. It is a national security issue.

John Nicolson: (48:28)
That’s interesting. On the social front, you pointed out that there might be differences between the United Kingdom and the United States which it’s not picking up. I’ve said this to the committee before, but I’ve got personal experience of this in Twitter, where I was… I’m a gay man. I was called a greasy bender on Twitter, and I reported it to Twitter. Twitter wrote back and told me that there was nothing wrong with being called a greasy bender. Then I wrote back giving the exact chapter and verse from their community standards which showed it was unacceptable, and somebody wrote back to me, presumably from California, telling me that it was absolutely acceptable. Now, to be generous, it may just be that they didn’t know what a bender was because it’s not in use in the United States. But honestly, I think I’d have googled if I’d been them, just to find out why this MP was being so persistent about this particular word.

John Nicolson: (49:23)
In a nutshell, what do you want us to do in this? What’s the most useful thing in addressing the concerns that you’ve raised here?

Frances Haugen: (49:35)
I think forcing Facebook… I want to be clear. Bad actors have already tested Facebook. They’ve gone and tried to hit the rate limits. They’ve tried experiments with content. They know Facebook’s limitations. The only ones who don’t know Facebook’s limitations are good actors. Facebook needs to disclose what its integrity systems are and which languages they work in, and the performance per language or per dialect, because I guarantee you, the safety systems that are designed for English probably don’t work as well on UK English versus American English.

John Nicolson: (50:12)
All of this makes Facebook sound relatively benign, doesn’t it, as if it’s just not doing quite what it should be doing? But what your evidence has shown to us is that Facebook is failing to prevent harm to children. It’s failing to prevent the spread of disinformation. It’s failing to prevent hate speech. It does have the power to deal with these issues. It’s just choosing not to, which makes me wonder whether Facebook is just fundamentally evil. Is Facebook evil?

Frances Haugen: (50:47)
I cannot see into the hearts of men. I think there is a real thing of good people, and Facebook is overwhelmingly full of conscientious, kind, empathetic people-

John Nicolson: (50:59)
You have to leave.

Frances Haugen: (51:00)
Good people who are embedded in systems with bad incentives are led to bad actions, and there is a real pattern of people who are willing to look the other way are promoted more than people who raise alarms.

John Nicolson: (51:14)
We know where that leads in history, don’t we? So could we compromise that it’s not evil, maybe that’s an overly moralistic word, but some of the outcomes of Facebook’s behavior are evil?

Frances Haugen: (51:27)
I think it’s negligent.

John Nicolson: (51:29)
Malevolent?

Frances Haugen: (51:30)
Malevolent implies intent, and I cannot see into the hearts of men. But I do believe there is a pattern of inadequacy, that Facebook is unwilling to acknowledge its own power. It believes in a world of flatness, which hides the difference… Children are not adults. They believe in flatness, and they won’t accept the consequences of their actions. I think that is negligence, and it is ignorance. But I can’t see into their hearts, and so I don’t want to consider it malevolent.

John Nicolson: (52:02)
Well, I respect your desire, obviously, to answer the question in your own way, but given the evidence that you’ve given us, I think a reasonable person running Facebook, seeing the consequences of the company’s behavior, would, I imagine, have to conclude that what they were doing, the way their company was performing, and the outcomes, were malevolent and would want to do something about it.

Frances Haugen: (52:28)
I would certainly hope so.

John Nicolson: (52:29)
Back to you, chair.

Damian Collins: (52:31)
Thank you very much. Just on that point about-

Frances Haugen: (52:34)
Would you mind if I rested my voice for five minutes? Can we take a break for a second? Oh, sorry. I don’t know how long we’re going to go. If we go for two and a half hours, I’ll… Never mind. Ask your question.

Damian Collins: (52:47)
Okay, thank you.

Frances Haugen: (52:48)
We’ll do it after.

Damian Collins: (52:49)
On that point about intent, someone may not have intended to do a bad thing, but if their actions are causing that, and they’re told about it, and they don’t change their strategy, then what do you say about them then?

Frances Haugen: (53:01)
I am a big proponent of the idea that we need to look at systems and how systems perform, and this idea that… Actually, this is a huge problem inside of Facebook. Facebook has this philosophy that if they establish the right metrics, they can allow people free rein.

Frances Haugen: (53:18)
Like I said, they’re intoxicated by flatness. They have the largest open-floor-plan office in the world. It’s a quarter of a mile long in one room. They believe in flatness. They believe that if you pick a metric, you can let people do whatever they want to move that metric, and that’s all you have to do. If you had better metrics, you could do better actions. But that ignores the fact that if you learn from the data that the metric is leading to harm, which is what meaningful social interactions did, the metric can get embedded, because now there are thousands of people who are all trying to move that metric. And people get scared to change a metric because it will make people not get their bonuses.

Frances Haugen: (53:51)
I think there is a real thing of there is no will at the top. Mark Zuckerberg has unilateral control over three billion people. There is no will at the top to make sure these systems are run in an adequately safe way. I think until we bring in a counterweight, things will be operated for the shareholders’ interest and not for the public interest.

Damian Collins: (54:14)
Thank you. Joining us remotely, Dean Russell.

Dean Russell: (54:17)
Thank you, chair. And thank you again for joining us today. It’s incredibly important, and your testimony has been heard loud and clear.

Dean Russell: (54:27)
The point I want to just pick up on was about addiction. If Facebook were optimizing algorithms in the same way, or viewed to be doing it in the same way, as a drug company trying to make its product more addictive, it would probably be viewed very differently. And I just wondered if you could explore a bit further this role of addiction and whether, actually, Facebook is doing something that we perhaps have never seen in history before, which is creating an addictive product that perhaps isn’t consumed through taking the drug, as it were, but is consumed via a screen.

Frances Haugen: (55:05)
Inside of Facebook, there are many euphemisms that are meant to hide the emotional impact of things. So for example, the ethnic violence team is called the social cohesion team, because ethnic violence is what happens when social cohesion breaks down. For addiction, the metaphor is problematic use. People are not addicted; they have problematic use.

Frances Haugen: (55:27)
The reality is that, using large-scale studies, so these are 100,000 people, Facebook has found that problematic use is much worse in young people than in people who are older. The bar for problematic use is that you have to be self-aware enough and honest enough with yourself to admit that you don’t have control over your usage and that it is harming your physical health, your school, or your employment. And for 14-year-olds, it peaks: in their first year they haven’t developed quite enough problematic use yet, but by the time they are 14, between 5.8% and 8% of kids say they have problematic use. And that’s a huge problem. If that many 14-year-olds are that self-aware and that honest, the real number is probably 15%, 20%.

Frances Haugen: (56:16)
I am deeply concerned about Facebook’s role in hurting the most vulnerable among us. Facebook has studied who has been most exposed to misinformation, and it is people who have been recently widowed, people who are recently divorced, people who move to a new city, people who are socially isolated. And I am deeply concerned that they have made a product that can lead people away from their real communities and isolate them in these rabbit holes, in these filter bubbles. Because what you find is that when targeted misinformation is sent to a community, it can make it hard for people to reintegrate into larger society, because now you don’t have shared facts. So that’s the real harm. I like to talk about the idea of the misinformation burden instead of thinking of it as… Because it is a burden when we encounter this kind of information.

Frances Haugen: (57:05)
Facebook right now doesn’t have any incentive to try to do high-quality, shorter sessions. Imagine if there was a sin tax, if it was a penny an hour. This is like dollars a year for Facebook per user. Imagine if there was a sin tax that pushed Facebook to have shorter sessions that were higher quality. Nothing today is incentivizing them to do this. All the incentives say, “If you can get them to stay on the platform longer, you’ll get more ad revenue. You’ll make more money.”

Dean Russell: (57:36)
Thank you. That’s very helpful. But often the discussion around the bill that we are looking at, the Online Safety Bill, is around the comparison with it being a publisher or a publishing platform. Should we be looking at this much more as a product approach, a product which, in essence, is causing addiction, as you say, in young people, and, as you mentioned earlier, pushing them to seek a greater high almost through the dopamine in their brains?

Dean Russell: (58:05)
We’ve heard previous testimony from experts highlighting that children’s brains actually seem to be changing because they’re using Facebook and other platforms to a large extent over many, many hours. And if they were being given a white powder and they were having the same symptoms, the same outcomes, we’d be very quick to clamp down on that, but because it’s via a screen and we call it Facebook, and we think everyone’s using it nicely, that doesn’t happen. So I’d just be interested in your view on the impact on children, whether Facebook have actually been looking at that, but also whether we should be doing this almost with regard to Facebook being a product rather than a platform.

Frances Haugen: (58:44)
I find it really telling that if you go to Silicon Valley and you look at the most elite private schools, they often have zero social media policies, that they try to establish cultures where you don’t use phones and you don’t connect with each other on social media. The fact that that is a trend in the most elite private schools in Silicon Valley, I think should be a warning to us all.

Frances Haugen: (59:07)
It is super scary to me that we are not taking a safety-first perspective with regard to children. Safety by design is so essential with kids because the burden that we’ve set up up until now is the idea that the public has to prove to Facebook that Facebook is dangerous. Facebook has never had to prove that their product is safe for children. And we need to flip that script.

Frances Haugen: (59:32)
With pharmaceuticals, a long time ago we said it’s not the obligation of the public to say this medicine is dangerous; it’s the obligation of the producer to say this medicine is safe. We’ve done this over and over again, and this is the right moment to act. This is the moment to change that relationship with the public.

Dean Russell: (59:50)
And if I may, just on that point… Sorry, my screen seems to be switching off. My apologies. With regards to that point of addiction, have there been any studies, from your awareness, within Facebook and within the documents you’ve seen where they have actually looked at how they can increase addiction via the algorithms?

Frances Haugen: (01:00:13)
I have not seen any documents that are as explicit as saying Facebook is trying to make addiction worse, but I have seen documents where, on one side, someone’s saying the number of sessions per day that someone has, the number of times they visit Facebook, is indicative of their risk of exhibiting problematic use. But on the other side, they’re clearly not talking to each other, someone says, “Interesting. An indicator that people will still be on the platform in three months is if they have more sessions every day. We should figure out how to drive more sessions.”

Frances Haugen: (01:00:47)
This is an example of Facebook is not… Because their management style is flat, there isn’t enough cross-promotion. There’s not enough cross-filtration. The side that is responsible for growing the company is kept away from the side that highlights harms, and that kind of world where it’s not integrated, that causes dangers and it makes the problem worse.

Damian Collins: (01:01:08)
Thank you.

Dean Russell: (01:01:09)
Thank you-

Damian Collins: (01:01:10)
Sorry, Dean, if we could pause there. It’s 25 to 2:00. We’ve been going for over an hour, so I think we’ll take a 10-minute break at this point.

Frances Haugen: (01:01:15)
That’d be lovely.

Damian Collins: (01:01:15)
Thank you.

Frances Haugen: (01:01:15)
Thank you so much.

Damian Collins: (01:13:50)
(silence). It’s unusually warm in this room so we’ve tried to open the windows a bit more and turn the heating off. [crosstalk 01:13:52].

Frances Haugen: (01:14:02)
No, no, it’s all good. I’m excited to be here.

Damian Collins: (01:14:02)
Yeah. Thank you. The evidence session will now resume and I’d like to ask Dean Russell to continue with these questions.

Dean Russell: (01:14:13)
Thank you, Chair, and thank you again for your responses earlier. The question I just want to build upon, I’ve got a few more questions, but hopefully it won’t take too long, one just continues on the addictivity, if there’s such a word, of Facebook and similar platforms. You mentioned before that you hadn’t seen any specific research in that area, but I just wondered if there’s any awareness within Facebook of the actual effect of long use of Facebook and similar platforms on children’s brains as they’re developing?

Frances Haugen: (01:14:51)
I think there’s an important question to be asked, which is, what is the incremental value added to a child after some number of hours of usage per day? I’m not a child psychologist. I’m not a neurologist. I can’t advise on what that time limit should be, but I think we should weigh a trade-off, which is, I think it’s possible to say that there is value that is given from Instagram. But I think there’s a real question of, how valuable is the second hour after the first hour? How valuable is the third hour after the second hour? Because the impacts are probably more than cumulative. They probably expand substantially over time. And so I think those are great questions to ask. I don’t have a good answer for you.

Dean Russell: (01:15:35)
Thank you. Just to verify with you on that point before I move on, it’s just a small extra point. Do you think, from your experience, that the senior leadership, including Mark Zuckerberg, at Facebook actually care if they’re doing harm to the next generation of society, especially children?

Frances Haugen: (01:15:52)
I cannot see into the hearts of men and so I don’t know. I don’t know what their position is. I know that there is a philosophy inside the company that I’ve seen repeated over and over again, which is that people focus on the good. There is a culture of positivity, and that’s not always a bad thing. But the problem is that when it’s so intense that it discourages people from looking at hard questions, then it becomes dangerous. And I think it’s really a thing of they haven’t adequately invested in security and safety, and consistently, when they see a conflict of interest between profits and people, they keep choosing profits.

Dean Russell: (01:16:33)
So you’d agree that actually it’s a sign that they perhaps don’t care by the fact they haven’t investigated or even done research into this area?

Frances Haugen: (01:16:40)
I think they need to do more research and I think they need to take more action and they need to accept that it’s not free, that safety is important and is a common good, and that they need to invest more in it.

Dean Russell: (01:16:51)
Thank you. And if I may, just on a slightly different point: you are obviously now a globally known whistleblower, and one of the aspects that we’ve looked at over the past few weeks is around anonymity. And one of the regular points that’s made is that if we pushed on anonymity within this bill, that would do harm to people who want to be whistleblowers in the future. I just wanted to get your sense of whether you agree with that and if you have any particular view on anonymity.

Frances Haugen: (01:17:25)
I worked on Google Plus in the early days. I was actually the person in charge of profiles on Google Plus when Google internally had a small crisis over whether or not real names should be mandated. And there was a movement inside the company called Real Names Considered Harmful, and it detailed at great length all the different populations that are harmed by excluding anonymity, and that’s groups like domestic abuse survivors, whose personal safety may be at risk if they are forced to engage with their real names. On anonymity, I think it’s important to weigh, what is the incremental value of requiring real names? Real names are difficult to implement. Most countries in the world do not have digital services where we could verify someone’s ID against their picture in a database. And in a world where someone can use a VPN and claim they’re in one of those countries and register a profile, that means they could still do whatever action you’re afraid of them doing today.

Frances Haugen: (01:18:29)
The second thing is that Facebook knows so much about you, and if they’re not giving you information to facilitate investigations, that’s a different question. Facebook knows a huge amount about you today. The idea that you’re anonymous on Facebook is, I think, not actually accurate for what’s happening, and we still see these harms. But the third thing is, the real problem here is the systems of amplification. This is not a problem about individuals. It is about having a system that prioritizes and mass distributes divisive, polarizing, extreme content, and in situations where you don’t limit content but just show more content from your family and friends, you get, for free, safer, less dangerous content. And I think that’s the greater solution.

Dean Russell: (01:19:18)
And so very finally, just in terms of anonymity for this report then, are you saying that we should be focusing more on the proliferation of content to large numbers than we should on anonymity of the source of the content?

Frances Haugen: (01:19:32)
Yes. I think the much more scalable, effective solution is thinking about, how is content distributed on these platforms? What are the biases of the algorithms? What are they distributing more of? And concentration, are certain people being pounded with this bad content? For example, this happens on both sides, it’s both people being hyper exposed to toxicity and hyper exposed to abuse.

Dean Russell: (01:19:57)
Thank you. I’ll pass back over to the Chair next, and I’ve got colleagues who wanted to come in. Thank you very much.

Damian Collins: (01:20:02)
Thank you. I just wanted to ask you about the point you just made on anonymity. From what you’re saying, it sounds like anonymity currently exists to hide the identity of the abuser from their victim, but not the identity of the abuser from the platform.

Frances Haugen: (01:20:17)
Platforms have far more information about accounts than I think people are aware of, and platforms could be more helpful in identifying those connections in cases of crimes. And so I think it’s a question of Facebook’s willingness to act to protect people more so than a question of, are those people anonymous on Facebook?

Damian Collins: (01:20:39)
The reason I ask, and I think it’s a particularly pertinent point here having this debate on anonymity, is that one of the concerns is that if you say, “Well, the platform should always know who the account user is,” so that if there was a request from law enforcement they could comply with it, some people say, “Well, if we do that, there’s a danger of their systems being hacked or that information being got out another way.” But from what you’re saying, practically, the company already has that data and information anyway. It knows so much about each one of its users regardless of the settings for an account. And, obviously, on Facebook, you have to have your name for the account anyway, in theory. I think what you’re saying is, in practical terms, anonymity doesn’t really exist because the company knows so much about you.

Frances Haugen: (01:21:17)
You could imagine designing Facebook in a way where, as you used the platform more, you’ve got more reach, the idea that reach is earned, it is not a right. In that world, as you interact with the platform more, the platform will learn more and more about you. The fact that today you can make a throwaway account and take an action, that opens up all sorts of doors. But I just want to be clear, in a world where you require people’s IDs, you’re still going to have that problem because Facebook will never be able to mandate that for the whole world, because lots of countries don’t have those systems and as long as you can pretend to be in that country and register an account you’re still going to see all those harms.

Damian Collins: (01:21:54)
[inaudible 01:21:54].

Speaker 1: (01:21:57)
Thank you. If I could join other colleagues and thank you so much for being here today, because this is so important to us. The bill, as it stands, exempts legitimate news publishers, and the content that comes from legitimate news publishers, from its scope. But there is no obligation on Facebook and indeed the other platforms to carry that journalism. Instead, it’ll be up to them to apply the codes which are laid down by the regulator, directed by government in the form of the Secretary of State, ostensibly, to make their own judgements about whether or not to carry it. Now, it’s going to be AI which is doing that. It is going to be the black box which is doing that, which leads to the possibility, in effect, of censorship by algorithm. And what I’d really like to know is, in your experience, do you trust AI to make those sorts of judgments, or will we get to the sort of situation where all legitimate news about terrorism is in fact [inaudible 01:23:14] censored out because the black box can’t differentiate between news about terrorism and content which is promoting terrorism?

Frances Haugen: (01:23:25)
I think there’s a couple of different issues to unpack. So the first question is around excluding journalism. Right now, my understanding of how the bill is written is that a blogger could be treated the same as an established outlet that has editorial standards. People have shown over and over again that they want high quality news. People are willing to pay for high quality news now. It’s actually interesting, one of the highest rates of news subscription is amongst 18-year-olds. Young people understand the value of high quality news. When we treat a random blogger and an established high quality news source the same, we actually dilute people’s access to high quality news. That’s the first issue. I’m very concerned that if you just exempt it across the board, you’re going to make the regulations ineffective.

Frances Haugen: (01:24:16)
The second question is around, can AI identify safe versus dangerous content? And part of why we need to be forcing Facebook to publish which integrity systems exist in which languages, along with performance data, is that right now those systems don’t work. Facebook’s own documents say they have trouble differentiating, at a huge rate, between content promoting terrorism and counter-terrorism speech. The number I saw was that 76% of counter-terrorism speech in this at-risk country was getting flagged as terrorism and taken down. And so any system where the solution is AI is a system that’s going to fail. Instead, we need to focus on slowing the platform down, making it human scale, and letting humans choose what we focus on, not letting an AI, which is going to mislead us, make that decision.

Speaker 1: (01:25:13)
And what practically could we do in this bill to deal with that problem?

Frances Haugen: (01:25:14)
Great question. I think mandatory risk assessments with standards like, how good does this risk assessment need to be? Analysis around things like segmentation, like understanding whether some people are hyper-exposed. All those things are critical, and I think the most important part is having a process where it’s not just Facebook articulating harms. It’s also the regulator going out and collecting harms from other populations and then turning back to Facebook and saying, “You need to articulate how you’re going to solve these problems.” Because right now the incentives for Facebook are aligned with their shareholders. And the point of regulation is to pull that center of mass back towards the public good.

Frances Haugen: (01:25:51)
And right now Facebook doesn’t have to solve these problems. It doesn’t have to disclose they exist and it doesn’t have to come up with solutions. But in a world where they were regulated and mandated, you could say, “You have to tell us what the five-point plan is on each of these things, and if it’s not good enough, we’re going to come back to you and ask you again.” That’s a world where Facebook now has an incentive, instead of investing 10,000 engineers in making the metaverse, to have 10,000 engineers making us safer. And that’s the world we need.

Speaker 1: (01:26:19)
We need to give the power to the regulator to do that because at the moment, the bill, as I understand it, doesn’t.

Frances Haugen: (01:26:26)
Oh, okay. I believe that if Facebook doesn’t have standards for those risk assessments, they will give you a bad risk assessment, because Facebook has established over and over again, when asked for information, they mislead the public. So I don’t have any expectation that they’ll give you a good risk assessment unless you articulate what a good one looks like. And you have to be able to mandate that they give solutions, because on a lot of these problems, Facebook has not thought very hard about how to solve them, or because there’s no incentive forcing them away from shareholder interests, when they have to make those little sacrifices like 1% growth here or 1% growth there, they choose growth over safety.

Speaker 1: (01:27:06)
So just leading on from that, just a very quick general question, as things stand at the moment, do you think that this bill is keeping Mark Zuckerberg awake at night?

Frances Haugen: (01:27:20)
I am incredibly excited and proud of the U.K. for taking such a world leading stance with regard to thinking about regulating social platforms. The global south currently does not have the resources to stand up and save their own lives. They are excluded from these discussions and the U.K. has a tradition of leading policy in ways that are followed around the world. And I can’t imagine Mark isn’t paying attention to what you’re doing, because this is a critical moment for the U.K. to stand up and make sure that these platforms are in the public good and are designed for safety.

Speaker 1: (01:27:59)
We probably need to do a little more in this bill to make sure that that’s the case, is what you’re saying?

Frances Haugen: (01:28:03)
I have faith in you guys.

Speaker 1: (01:28:06)
Thank you very much.

Damian Collins: (01:28:09)
You’ve presented a very compelling argument there on the way regulation should work. Do you not think it’s, let’s say, disingenuous of companies like Facebook to say, “We welcome regulation. We actually actively want parliaments around the world to regulate”? Nick Clegg was critical of Congress for not regulating. And yet the company does none of the things that you said it should do and doesn’t share any of that information with the oversight board it, in theory, created to have oversight of what it does.

Frances Haugen: (01:28:34)
I think it’s important to understand that companies work within the incentives and the context they’re given. I think today Facebook is scared that if they freely disclosed information that wasn’t requested by a regulator, they might get a shareholder lawsuit. I think they’re really scared about doing the right thing because, in the United States, because they’re a private company, they have a fiduciary duty to maximize shareholder value. And so when they’re given these little choices between 5% more misinformation or 10% more misinformation and 1% of sessions, they choose sessions and growth over and over again. And I think there’s actually an opportunity to make the lives of Facebook employees, rank and file employees, better by giving appropriate goal posts for what a safe place is. Because right now I think there are a lot of people inside the company who are uncomfortable about the decisions that they are being forced to make within the incentives that exist, and creating different incentives through regulation gives them more freedom to do things that might be aligned with their hearts.

Damian Collins: (01:29:41)
Going back to the oversight board, they don’t have access to data. I mean, so much of your argument is about how data drives engagement, which drives up revenue, and that’s what the business is all about. The oversight board can’t see that. I mean, even when they ask for it, they’re told they can’t have it, and to me that does not look like a company that wants to be regulated.

Frances Haugen: (01:29:57)
Again, I think it’s one of these things where, like I said before, I can’t see into men’s hearts, I can’t see motivations, but knowing what I know, and I’m an odd nerd in that I have an MBA, given what the laws are in the United States, they have to act in the shareholders’ interest or they have to be able to justify doing something else. And I think here a lot of the long-term benefits are harder to prove. I think if you make Facebook safer and more pleasant, it’ll be a more profitable company 10 years from now, because the toxic version of Facebook is slowly losing users. But, at the same time, the actions in the short term are easy to prove, and I think they worry that if they do the right thing, they might get a shareholder lawsuit. I just don’t know.

Speaker 2: (01:30:41)
Thank you, Chair, and thank you, Frances, so much for being here. It’s truly appreciated, as is everything you’ve done to get yourself here over the last year. Look, what will it take for Mark Zuckerberg and the Facebook executives to actually be accountable? And do you think they’re actually aware of the human cost that has been exacted? I feel they’re not accountable enough and there has been a human price to this.

Frances Haugen: (01:31:12)
I think it is very easy for humans to focus on the positive over the negative, and I think it’s important to remember that Facebook is a product that was built by Harvard students for other Harvard students. When a Facebook employee looks at their news feed, it is likely they see a safe, pleasant place where pleasant people discuss things together. Their immediate visceral perception of what the product is and what’s happening in a place like Ethiopia, those are completely foreign worlds. And I think there’s a real challenge of incentives, where I don’t know if all the information that’s really necessary gets very high up in the company: the good news trickles up, but not necessarily the bad news. And so I think it’s a thing where executives see all the good they’re generating, and then they can write off the bad as the cost of all that good.

Speaker 2: (01:32:01)
I’m guessing, having probably watched from afar what’s been going on here in Westminster, that they are very much aware of what has been going on. I really, truly hope that they are bearing in mind all the evidence sessions that we’ve had and the people coming here with stories that are just quite unbelievable, and loss of life as well. So has there ever been a message, to your knowledge, internally or privately, that they have got it wrong?

Frances Haugen: (01:32:30)
There are many employees internally. I think the key thing that you’ll see over and over again in the reporting on these issues is that countless employees said, “We have lots of solutions. We have lots of solutions that don’t involve picking good and bad ideas. It’s not about censorship. It’s about the design of the platform. It’s about how fast it is, how growth optimized it is. We could have a safer platform and it could work for everyone in the world, but it’ll cost little bits of growth.” And I think there’s a real problem that those voices don’t get-

Frances Haugen: (01:33:03)
… that those voices don’t get amplified internally, because they’re making the company grow a little slower, and it’s a company that lionizes growth.

Speaker 2: (01:33:12)
And what is your view, perhaps, on criminal sanctions for online harm content? Do you believe that there’s a route for criminal sanctions?

Frances Haugen: (01:33:22)
My philosophy on criminal sanctions for executives is that they act like gasoline on a law. Whatever the terms are, the conditions of a law, if you have really strong confidence that you’ve picked the right thing, they will amplify those consequences. But the same can be true if there are flaws in the law. And so it’s hard for me to articulate, with where the law stands today, whether or not I would support criminal sanctions, but it is a real thing that it makes executives take consequences more seriously, and so it depends on where the law ends up in the end.

Speaker 2: (01:33:55)
Okay, thank you. Just quick one now. You mentioned earlier that it’s easier to promote hate and anger, and I know you touched on this earlier and had that conversation. Quick question: Is the promotion of hate and anger… Is it by accident, or is it by design?

Frances Haugen: (01:34:10)
Facebook has repeatedly said, “We have not set out to design a system that promotes anger, divisive, hateful content.” They said, “We never did that. We never set out to do that.” But there’s a huge difference between what you set out to do, which was prioritize content based on its likelihood to elicit engagement, and the consequences of that. And so I don’t think they set out to accomplish these things, but they have been negligent in not responding to the data as it is produced. And there is a large number of data scientists internally who’ve been raising these issues for years.

Frances Haugen: (01:34:45)
And the solutions that Facebook has implemented, which is, in countries where it has civic classifiers, which is not very many countries in the world, not very many languages in the world, they are removing some of the most dangerous terms from engagement-based ranking. But that ignores the fact that the most vulnerable, fragile places in the world are linguistically diverse. Ethiopia has 100 million people and they speak six languages. Facebook only supports two of them, and only with a few integrity systems. And so there’s this real thing of… If we believe in linguistic diversity, the current design of the platform is dangerous.

Speaker 2: (01:35:24)
Just one quick one. Sorry, Chair. Online harm has been out there for some time. We’re all aware of it. It’s very much in the public domain, as I touched on briefly before. Why aren’t the tech companies doing anything about it? Why don’t they just get on? Why are they having to wait for this bill to come through to make the most obvious changes to what is basically proliferating online harm and creating… As I say, there’s human loss to this. Why aren’t they doing something now about it?

Frances Haugen: (01:35:58)
I think as we look at the harms of Facebook, we need to think about these things as system problems, the idea that these systems are designed products, that these are intentional choices, and that it is often difficult to see the forest for the trees. Facebook is a system of incentives. It’s full of good, kind, conscientious people who are working with bad incentives. And there’s a lack of incentives inside the company to raise issues about flaws in the system, and there’s lots of reward for amplifying and making things grow more.

Frances Haugen: (01:36:29)
And so I think there is a big challenge of… Facebook’s management philosophy is that they can just pick good metrics and let people run free. And so they have found themselves in a trap where in a world like that, how do you propose changing the metric? It’s very, very hard, because 1,000 people might have directed their labor for six months trying to move that metric, and changing the metric will disrupt all of that work. And so I don’t think any of it was intentional. I don’t think they set out to go down this path, but they’re kind of trapped in it. And that’s why we need regulation, mandatory regulations, mandatory actions to help pull them away from that spiral that they’re caught in.

Speaker 2: (01:37:06)
Thank you.

Damian Collins: (01:37:07)
Debbie Abrahams.

Debbie Abrahams: (01:37:07)
Thank you. And again, I reiterate all my colleagues’ thanks to you for coming over and giving evidence to us. I just wanted to ask in relation to an interview you gave recently… You said, “Facebook consistently resolved conflicts in favor of its own profits.” You have speckled the testimony that you’ve given so far with examples of this, but I wonder if you could pick two or three that you think really highlight this point.

Frances Haugen: (01:37:41)
I think overall, their claim that engagement-based ranking is safe once you have AI, I think that is the flagship one, showing how Facebook has non-content-based tools they could use to keep the platform safe, but each one of those costs a little growth. So for example, limiting reshare chains, as I said, to two hops. That’s going to carve off maybe 1% of growth. Requiring someone to click on a link before they reshare it, this is something Twitter’s done. Twitter accepted the cost of that change, but Facebook wasn’t willing to. Lots of things around language coverage. Facebook could be doing much, much more rigorous safety systems for the languages that they support, and they could be doing a better job with the countries they have already identified as what they believe are the most at-risk countries in the world, but they’re not giving them equal treatment. They’re not even out of the risk zone with them. And I think that pattern of behavior of being unwilling to invest in safety is the problem.

Debbie Abrahams: (01:38:40)
Okay. So looking specifically at the events in Washington on the 6th of January, and there’s been a lot of talk about Facebook’s involvement in that, and at the moment that evidence is being looked at in terms of depositions. So would that be an example? Would somebody have highlighted this as a particular concern and taken it to the executives? I’m absolutely horrified about what you say about the lack of risk assessment and risk management in the organization. I think it’s a gross dereliction of responsibility. But would that have been one example of where Facebook was aware of the potential harm that this could create, that was created, and where they chose not to do anything about it?

Frances Haugen: (01:39:42)
What is particularly problematic to me is that Facebook looked at its own product before the US 2020 election and identified a large number of settings. These are things as subtle as, should we amplify live videos 600 times, or 60 times? Because they want live video on the top of your feed. They came in and said, “That setting is great for promoting live video, for making that product grow, for having impact with that product.” But it is dangerous, because, as on January 6th, live video was actually used for coordinating the rioters. Facebook looked at those risks across those maybe 20 interventions and said, “We need to have these in place for the election.” Facebook has said the reason they turned them off was they don’t… Censorship is a delicate issue.

Frances Haugen: (01:40:34)
And I find this so misleading. Most of those interventions have nothing to do with content. They involve questions like, for example, promoting live video 600 times versus 60 times: have you censored someone? I don’t think so. So Facebook has characterized it as… They turned those off because they don’t believe in censorship. On the day of January 6th, most of those interventions were still off at 5:00 PM Eastern Time. And that’s shocking, because they could have turned them on seven days before. And so either they’re not paying enough attention for the amount of power they have, or they are not responsive when they see those things. I don’t know what the root cause of that is, but all I know is that’s an unacceptable way to treat a system that is as powerful and as delicate as this.

Debbie Abrahams: (01:41:24)
Your former colleague, Sophie Zhang, was giving evidence to the committee last week, and she made the point that we have freedom of expression, freedom of information. We don’t have freedom of amplification. Is that something you’d agree with, in terms of the censorship-

Frances Haugen: (01:41:40)
The current philosophy inside of the company is almost like they refuse to acknowledge the power that they have. The choices they’re making, they justify them based on growth. And if they came in and said, “We need to do safety first. We need safety by design,” I think they would choose different parameters in terms of optimizing how amplification works. Because I want to remind people, we liked the version of Facebook that didn’t have algorithmic amplification. We saw our friends. We saw our families. It was more human-scale. And I think there is a lot of value and a lot of joy that could come from returning to a Facebook like that.

Debbie Abrahams: (01:42:27)
You made a very important point that this is a private company. They have a fiduciary responsibility to their shareholders and so on. Do you think, though, that there are breaches in their terms and conditions? So is there, again, that conflict there?

Frances Haugen: (01:42:46)
I think there are two issues. One is terms and conditions. Saying that private companies can define them, that’s like them grading their own homework. They’re defining what’s bad, and we know now they don’t even find most of the things that they say are bad. They don’t have any transparency, any accountability. The second question is around: Do companies have duties beyond those to their shareholders? And we have had a principle for a long, long time that companies cannot subsidize their profits, that they cannot pay for their profits at public expense. If you go and pollute the water and people get cancer, the public has to pay for those people. Similarly, if Facebook is sacrificing our safety because they don’t want to invest enough… And don’t listen to them when they say, “We spend $14 billion on safety.” That’s not the question. The question is: How much do you need to pay to make it safe?

Debbie Abrahams: (01:43:40)
Do you think then… One of the things that the committee has been looking at is a duty of care. Is that something that we should be considering very carefully to mandate?

Frances Haugen: (01:43:53)
I think a duty of care is really important. We have let Facebook act freely for too long, and they’ve demonstrated that… I like to say there are multiple criteria necessary for Facebook to be allowed to act completely independently. The first is that when they see conflicts of interest between themselves and the public good, they resolve them aligned with the public good. And the second is that they can’t lie to the public. Facebook has violated both of those criteria, and has demonstrated they need oversight.

Debbie Abrahams: (01:44:24)
Thank you so much. My final question is: Do you think the regulator is going to be up to the job?

Frances Haugen: (01:44:33)
I am not a lawmaker, and so I don’t know a ton about the design of regulatory bodies, but I do think that things like having mandatory risk assessments with a certain level of quality is a flexible enough technique that, as long as Facebook is required to articulate solutions, that might be a good enough dynamic, and as long as there’s also community input into that risk assessment, that might actually be a system that could be sustainable over time. Because the reality is Facebook is going to keep trying to run around the edges, and so we need to have something that can continue over time, not just play Whac-A-Mole on specific pieces of content or specific incidents.

Debbie Abrahams: (01:45:10)
Thanks so much, Frances.

Damian Collins: (01:45:12)
Joining us remotely, Darren Jones.

Darren Jones: (01:45:15)
Thank you, Chair. And just following on from the discussion I have [crosstalk 01:45:21] some of the provisions in this bill might be operationalized in the day-to-day of Facebook. Firstly, there’s a distinction in the bill between illegal content, such as terrorism, and legal but harmful. And the question about how you define what is harmful is based on the idea that a company like Facebook would reasonably foresee that something was causing harm. And we’ve seen through some of the leaks over the last few weeks that Facebook undertakes research internally, but maybe doesn’t publish that or share the information with external researchers. If Facebook just stopped researching potential harms and claimed they had no reasonable foresight of new harms, would they just be able to get around the risk assessment, in your view?

Frances Haugen: (01:46:05)
I am extremely worried about Facebook ceasing to do important research, and it is a great illustration of how dangerous it is to have a company as powerful as Facebook where the only one who gets to ask questions of Facebook is Facebook. We probably need something like a post-doc program where public interest people are embedded in the company for a couple of years, and they can ask questions. They can work on real problems, they can learn about these systems, and then go out and seed academia and train the next generation of integrity workers. I think there are big questions around… So legal but harmful content is dangerous. For example, COVID misinformation actually leads to people losing their lives. There are large societal harmful consequences of this. I’m also concerned that if you don’t cover legal but harmful content, this bill will have a much, much smaller impact, especially on children, for example. A lot of the content that we’re talking about here would be legal but harmful content.

Darren Jones: (01:47:10)
Thank you. And I know that in your answers today, you’ve said that there’s not enough internal work to understand harms, and that this research should just be shared with external academics and regulators, and I agree with you on those points. My second question is: Say the company found some new type of harmful content, a new trend or some type of content that was leading to physical or mental harm to individuals. We’ve talked today about how complex the Facebook world is, whether it’s about content promotion or groups or messaging. How would you go about auditing and assessing how this harm is being shared within the Facebook environment? It sounds like a very big job to me. How would you actually go about doing it?

Frances Haugen: (01:47:52)
One of the reasons why I’m such a big advocate of having a fire hose, so that’s picking some standard where you’re like, “If more than X thousands of people see this content, it’s not really private,” is that you can include metadata about each piece of content. For example, did it come via a group? What group? Where on average in someone’s feed did this content show up? Which groups are most exposed to it? Imagine if we could tell, “Oh, Facebook is actually distributing a lot of self-harm content to children.” There’s various metadata we could be releasing. And I think there is a really interesting opportunity that, once more data is accessible outside of the company, a cottage industry will spring up amongst academics, amongst independent researchers. If I had access to this data, I would start a YouTube channel, and I would just teach people about it. I can tell jokes for 15 minutes. I think there are opportunities where we will develop the muscle of oversight, but we’ll only develop the muscle of oversight if we have at least a peephole to look into Facebook.

Darren Jones: (01:48:58)
So if Facebook were to say to us, for example, “Well, there’s a unique and new type of harm that’s been created, a new trend that’s been appearing in private groups, and has maybe been shared a little bit. Because of the amount of content on the platform, it’s really difficult for us to find it and assess that properly,” you’re saying that that’s not true, and they have the capabilities to do that.

Frances Haugen: (01:49:18)
So as I said earlier, I think it’s really important for Facebook to have to publish which integrity systems exist. What content can they find? Because we should be able to have the public surface like, “We believe there’s this harm here,” and be like, “Oh, interesting. You don’t actually look for that harm.” So the example I heard was around self-inflicted harm content and kids. And are some kids being overexposed? And Facebook said, “We don’t track that content,” or, “We don’t want to track that.” We don’t have a mechanism today to force Facebook to answer those questions, and we need to have something that would be mandatory, where we could come and say, “You need to be tracking this harm.” I’m sorry, I forgot your question. My apologies.

Darren Jones: (01:50:05)
I think you’ve broadly answered it. I just wanted you to answer to the question that they have the capacity to do it. My last two questions are more about corporate governance of the business, and I’m interested to know from your experience how the different teams within the business operate. So a question I’ve asked previously in this committee is… The programmers who might be down the end of the corridor who are coding all of these algorithms will know a certain amount of information. The product teams that build products will know a bit about what the programmers have done, but probably not the whole thing and how it works properly.

Darren Jones: (01:50:34)
And then the compliance and PR team will just have to receive answers probably from the product team in order to produce this risk assessment that they then present to our regulator. And my concern is that the real truth about what’s happening is with the programmers, that it may not get through, unless we force it to, in that audit submission to our regulator. Am I wrong in those assumptions? Do those teams work together well in understanding what they all do and how it links together?

Frances Haugen: (01:50:59)
I think it’s really important to know that there are conflicts of interest between those teams. So one of the things that’s been raised is the fact that at Twitter, the team that is responsible for writing policy on what is harmful reports separately to the CEO from the team that is responsible for external relations with governmental officials. And at Facebook, those two teams report to the same person. So the person who is responsible for keeping politicians happy is the same person who gets to define what is harmful or not harmful content. I think there is a real problem of the left hand not speaking to the right hand at Facebook. It’s like the example I gave earlier, of, on one hand, someone on integrity saying, “Problematic use, addiction, one of the signs of it is you come back a lot of times a day.”

Frances Haugen: (01:51:49)
And then someone over on the growth team is saying, “Oh, did you notice? If we get you to come back a bunch of times a day, you still use the product in three months.” It’s just that there’s a world that’s too flat, where no one is really responsible. [inaudible 01:52:00] said in her Senate testimony… When she was pressed on Instagram Kids and various decisions, she couldn’t articulate who was responsible. And that is a real challenge at Facebook, that there is not any system of responsibility or governance. And so you end up in situations like that, where you have one team probably unknowingly pushing behaviors that cause more addiction.

Darren Jones: (01:52:26)
So it may be that we need to look at forcing a type of risk committee, like an audit committee in the business, where the relevant people are coming together to at least-

Frances Haugen: (01:52:33)
It might be good to include in the risk assessments saying, “What are your organizational risk assessments,” not just your product risk assessments, because the organizational choices of Facebook are introducing systemic risk.

Darren Jones: (01:52:47)
And my very last question is: This is a piece of law here in the UK, has some international reach, but it’s obviously UK law. We’ve heard evidence before that employees of different technology companies here in London will want to be helpful and give certain answers, but actually the decisions are really made in California. Do you think there’s a risk that the way in which power is structured in California means that the UK-based team trying to comply with the law here may not be able to do what they need to do?

Frances Haugen: (01:53:20)
Facebook today is full of kind, conscientious people who work within a system of incentives that unfortunately leads to bad results, results that are harmful to society. There is definitely a center of mass that exists in Menlo Park. There is definitely a greater priority on moving growth metrics than safety metrics, and you may have safety teams, located in London or elsewhere in the world, whose actions will be greatly hindered or even rolled back on behalf of growth. A real problem… Again, we were talking about this idea of the right hand and the left hand. There are reports inside of Facebook that talk about how an integrity team might spend months pushing out a fix that lowers misinformation 10%, but because the AI is so poorly understood, people will add in little factors that basically recreate whatever was fixed and reproduce those harms. And so over and over, if the incentives are bad, you’ll get bad behavior.

Darren Jones: (01:54:25)
Understood. Thank you. Those are my questions, Chair. Thanks.

Damian Collins: (01:54:30)
Thank you. Wilf Stevenson.

Wilf Stevenson: (01:54:32)
Thank you very much, and I add my thanks to everybody else’s for the incredible picture you’ve painted. I mean, I think we’ve mentioned that we recognize the problem you face by doing what you’ve done, but it is fantastic to be able to get a real sense of what’s happening inside the company, and I just wanted to pick up on what was being said just a few minutes ago. It’s almost like 1984, some of your descriptions, when you talked about the names given to the parts of the organization that are actually supposed to be doing the opposite of what the name seems to imply. That raises an issue. I just wanted to ask about the culture.

Wilf Stevenson: (01:55:12)
You ended up by saying there were lots of people in Facebook who got what you’re saying, but you also said that the promotion structure possibly pointed in a different direction, and therefore these people didn’t necessarily get up to the positions of power that you might expect. Now, organizations have a culture of their own, so in a sense, my question is really about culture. Do you think that there is a possibility that with a regulation structure of the type we’re talking about, being seen as the way forward in the way the world deals with these huge companies that we’ve never had to deal with before, there are sufficient people of good heart and good sense in the company to rescue it? Or do you feel that somehow the duality that you’re talking about, the left hand and the right hand, has got so bad that it will never, ever recover itself and it has to be done by an external agency? It’s a very complicated question, but it comes back to culture, really. Do you think there’s a way which it might happen?

Frances Haugen: (01:56:11)
I think until the incentives that Facebook operates under changes, we will not see changes from Facebook. And so I think it’s this question of… Facebook is full of kind, conscientious, good people, but the systems reward growth. And the Wall Street Journal has reported on how people have advanced inside the company, and disproportionately the people who are the managers and leaders of the integrity teams, these safety teams, come originally from the growth orgs. The path to management in integrity and safety is via growth, and that seems deeply problematic.

Wilf Stevenson: (01:56:51)
So it’s a bit doomed.

Frances Haugen: (01:56:52)
I think there is a need to provide an external weight and a pull to move it away from just being optimized on short-termism and immediate shareholder profitability, and more towards the public good, which I think actually is going to lead to a more profitable, successful company 10 years down the road.

Wilf Stevenson: (01:57:10)
And you may not be able to answer this, in the sense that it may not be an easy question to answer, but inside the company, the things that you’ve been saying are being said at the water fountain and in the corridors; I think you’ve said that people do talk about these things. What is it that stops it getting picked up as official policy? Is there actually a gatekeeper on behalf of the growth group which simply says, “Fine, they’ve had enough time talking about that. Move on”?

Frances Haugen: (01:57:43)
I don’t think there is an explicit gatekeeper. It’s not a thing of there being things we cannot say, but I think there is a real bias in that… Experiments are taken to review, and the costs and the benefits are assessed. And Facebook has characterized some of the things that I’ve talked about in terms of “We’re against censorship.” But the things that I’m talking about are not content-based. I am not here to advocate for more censorship. I’m asking, “How do we make the platform more human-scale? How do we move back to things like chronological ranking, finding ways to move towards solutions that work for all languages?” But in order to do that, we have to accept the cost of little bits of growth being lost. And I love the radical idea of… What if Facebook wasn’t profitable for one year? They have this giant pile of cash. What if for one year they had to focus on making it safe? What would happen? What kind of infrastructure would get built? And I think there’s a real thing of, until incentives change, Facebook will not change.

Wilf Stevenson: (01:58:45)
I’ll leave it there.

Damian Collins: (01:58:48)
I just wanted to ask you about that. Because if I was a charity that works, say, with teenage girls who had self-harmed, and I said, as this organization, I’ve got the Facebook and Instagram profiles of lots of people who have interacted with our charity, what I want to do is reach out on Facebook and Instagram to other people who are like those people and see if we can help them before they do themselves too much harm.

Frances Haugen: (01:59:09)
Yeah, that’d be wonderful.

Damian Collins: (01:59:10)
Yeah, but I could do that. I could go to Facebook and say, “Could I use the lookalike audiences ad tool in order to reach those people?” And they would happily sell that to me to do it. And they have the data to find the closest possible matches to young people who are self-harming for the purpose of selling advertising. Couldn’t be simpler, the way the platform’s designed. But yet if you ask the same question and say, “Well, why don’t you do more to reach out and help people who are actually in danger of self-harming? Why don’t you stop it? Why don’t you practically reach out?”, not only do they not do that, they’ll sell you an ad to do it. They won’t do it themselves. But actually worse than that, they’re continually feeding those selfsame vulnerable people with content that’s likely to make them even more vulnerable. And I don’t see how that is a company that is good, kind, and conscientious.

Frances Haugen: (01:59:59)
There’s a difference between systems. And I always come back to this question of: What are the incentives, and what system do those incentives create? And I can only tell you what I saw at Facebook, which is that I saw kind, conscientious people, but they were limited by the actions of the system they worked under. And that’s part of why regulation is so important. And the example you give of amplification of interests… Facebook has run the experiment where an account gets exposed to very centrist interests, like healthy recipes, and just by following the recommendations on Instagram, it is led to anorexia content very fast, within a week. Just by following the recommendations, because extreme, polarizing content is what gets rewarded by engagement-based ranking.

Frances Haugen: (02:00:48)
I have never heard described what you just described, i.e., using the lookalike tools that exist today… If you want to target ads today, you can take an audience like maybe people who bought your product previously, and find a lookalike audience. It’s very profitable. Advertisers love this tool. I’ve never thought about using it to reach out with critical content to people who might be in danger. Right now, the number of people who see the current self-harm tools… Facebook loves to brag about how they built tools to protect kids, or protect people who might have eating disorders. Those tools trigger on the order of hundreds of times a day, single hundreds. [crosstalk 02:01:24], hundreds globally. And so I think unquestionably, Facebook should have to do things like that and have partnerships with people who can help connect them to vulnerable populations. Because you’re right, they have the tools, and they just haven’t done it.
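
For readers unfamiliar with the ad product being discussed: a lookalike tool starts from a “seed” audience and ranks the wider population by similarity to it. The sketch below is a minimal, hypothetical illustration of that general idea (feature vectors plus a cosine-similarity ranking), not Facebook’s implementation.

```python
# Toy sketch of the general idea behind a "lookalike audience" tool:
# represent users as feature vectors and rank the wider population by
# similarity to a seed audience. Purely illustrative; hypothetical data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical interest vectors (e.g. topic affinities inferred from activity).
seed_audience = {"u1": [0.9, 0.1, 0.8], "u2": [0.8, 0.2, 0.9]}
population = {"u3": [0.85, 0.15, 0.82], "u4": [0.1, 0.9, 0.05], "u5": [0.7, 0.3, 0.75]}

# Average the seed vectors, then rank everyone else by similarity to that centroid.
dims = len(next(iter(seed_audience.values())))
centroid = [sum(v[i] for v in seed_audience.values()) / len(seed_audience) for i in range(dims)]
lookalike = sorted(population, key=lambda u: cosine(population[u], centroid), reverse=True)
print(lookalike)  # ['u3', 'u5', 'u4']: closest matches first
```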

Damian Collins: (02:01:38)
When you were working in the civic integrity team, when the civic integrity team existed, could you have made a request like that to Facebook? Say, “We’ve identified some accounts here as people we think are very problematic, and actually we’d like to use some of the ad tools to try and identify more people like that.” Would that have been a conversation you could have had?

Frances Haugen: (02:01:54)
[inaudible 02:01:54] I’ve had that conversation. There is a concept known as defensibility inside the company, where they are very careful about any action which they believe is not defensible. And things that are statistically likely, but not proven to be bad, they’re very hesitant to act on. So let’s imagine you found some terrorists, and you were looking for other people who are at risk of being recruited for terrorism or cartels. This happens in Mexico; the platforms are used to recruit young people into cartels. You could imagine using a technique like that to help people who are at risk of being radicalized. Facebook would’ve come back and said, “There’s no guarantee that those people are at risk. We shouldn’t label them in a negative way, because that wouldn’t be defensible.” And so I think if you came in and changed the incentives, making them articulate risks and making them articulate how they’ll fix those risks, pretty rapidly they would shift their philosophies on how to approach problems, and they’d be more willing to act.

Damian Collins: (02:02:56)
But it would be defensible to sell a terrorist guns using an ad tool. Their known interest in that subject would be defensible to use for selling advertising, but not defensible to use for civic integrity outreach.

Frances Haugen: (02:03:08)
I am not sure if Facebook allows gun sales. I do know that they have-

Damian Collins: (02:03:11)
Sorry, it’s a hypothetical example.

Frances Haugen: (02:03:12)
Yes, yes.

Damian Collins: (02:03:13)
What I’m saying is that, in this scenario, you could feed someone’s addiction by advertising to them, but you can’t use the same technology to reach out and help them.

Frances Haugen: (02:03:19)
Yes, and it’s actually worse than that. In my Senate testimony, one of the senators showed this example of an ad that was targeted at children, where in the background of the ad was an image of a bunch of pills, like a bunch of pills and tablets, very clearly a pile of drugs, and said something like, “Want to have your best skittles party this weekend? Reach out,” or something. And a skittles party apparently is youth code for a drug party. And that ad got approved by Facebook. And so there is a real thing where Facebook says they have policies for things… They might have a policy saying, “We don’t sell guns,” but I bet there’s tons of ads on the platform that are selling guns.

Damian Collins: (02:03:55)
They have policies that don’t allow hate speech, and there’s quite a lot of hate speech on Facebook.

Frances Haugen: (02:03:58)
Oh, there’s tons of hate speech. Yeah.

Damian Collins: (02:03:59)
But it sounds to me that what you’ve got is one part of the business that, with a razor-like focus sharper than probably anything ever created in human existence, can target people’s addictions through advertising. And yet the other part of the business, the part that is there to try and keep people safe, is largely feeling around in the dark.

Frances Haugen: (02:04:15)
There is a great asymmetry in the resources that are invested to grow the company versus to keep it safe. Yeah.

Damian Collins: (02:04:21)
It’s not just that, though. There’s not just an asymmetry; it’s that you can’t even make the request. You can’t even go to the people with all the data and all the information and say, “Could we use some of your tools to help us do our job?” Because they would say, “Well, it’s not defensible to do that.”

Frances Haugen: (02:04:36)
I never saw the usage of a strategy like that, but it seems like a logical thing to do. Facebook should be using all the tools in its power to fight these problems and it’s not doing that today.

Damian Collins: (02:04:47)
I think a lot of people would extrapolate from that that if the senior management team really wanted to do it, it would’ve been done. But for some reason it appears that they don’t.

Frances Haugen: (02:04:57)
I think there are cultural issues, and there are definitely issues at the highest levels of leadership where they are not prioritizing safety sufficiently. And it’s not enough to say, “Look, we invested $14 billion.” We need to come back and say, “No, you might have to spend twice that to have a safe platform. The important part is that it should be designed to be safe, not that we should have to plead with you to make it safe.”

Damian Collins: (02:05:17)
Thank you. Beeban Kidron.

Beeban Kidron: (02:05:22)
Frances, I was really struck earlier when you said they know more about abuse, but it’s a failure of willingness to act; I think that was the phrase. And it brought to my mind the particular case of Judy and Andy Thomas, whose daughter committed suicide, and who struggled for the last year to get anything beyond an automated response to their request for access to Frances’ account. I made a small intervention. They did eventually get an answer, but that answer basically said, “No, you can’t.” And I have to be clear, just for legal reasons, it didn’t say, “No, you can’t.” It was a very complicated answer, a complicated legal answer. But what it was saying is that they had to protect the privacy of third parties on Frances’ account. Sorry, I called her Frances; it was Frankie. But I just really wanted you to say whether you think that privacy defense is okay in this setting, or whether the sort of report-and-complaint piece is another thing we really need to look at, because that seems a pretty horrendous way to give grieving parents some sort of closure.

Frances Haugen: (02:06:53)
From what you describe, I think their argument is that on the… I think there’s a really interesting distinction around private versus public content. They could have come in and said, at least for the public content that she viewed, like the worldwide available content, “We can show you that content.” I think they probably should have come in and done that. I wouldn’t be surprised if they no longer had the data. They delete the data after 90 days. And so unless she was a terrorist and they were tracking her, which I assume she wasn’t, they would’ve lost all that history within 90 days of her passing. And that is a recurrent thing that Facebook does is that they know that whatever their sins are, it will recede into the fog of time within 90 days.

Beeban Kidron: (02:07:38)
The idea of user privacy being a reason for not giving a parent of a deceased child access to what they were seeing, I’m just interested in that more thematic piece.

Frances Haugen: (02:07:53)
I think there is an unwillingness at Facebook to acknowledge that they are responsible to anyone, right? They don’t disclose data. There are lots of ways to disclose data in a privacy-conscious way; you just have to want to do it. And Facebook has shown over and over again not just that they don’t want to release that data, but that even when they do release that data, they often mislead people and lie in its construction. They did this with researchers a couple of months ago: they literally released data built on assumptions that they did not disclose to the researchers, assumptions that were misleading. In the case of those grieving parents, I’m sure that Facebook has not thought holistically about the experience of parents that have had traumas on the platform, right? Because I’m sure that her parents aren’t the only ones that have suffered that way. And I think it’s cruel of Facebook not to think about how they might take even minor responsibility after an event like that.

Beeban Kidron: (02:08:47)
Okay. A lot of colleagues have talked to you about children and that’s a particular interest of mine. The one thing that hasn’t come up this afternoon is age assurance, and specifically privacy preserving age assurance. And this is something that I’m very worried about, that age assurance used badly could drive more surveillance, could drive more resistance to regulation. And we need rules of the road. And I’m just interested to know your perspective on that.

Frances Haugen: (02:09:17)
I think it’s a twofold situation. So, on one side, there are many algorithmic techniques Facebook could be using to keep children off the platform that do not involve asking for IDs or other forms of information disclosure. Facebook currently does not disclose what they do. And so we can’t, as a society, step in and say, “You actually have a much larger tool chest you could have been drawing from.” It also means we don’t understand what privacy violations are happening today; we have no idea what they’re doing. The second thing is that we could be grading Facebook’s homework instead of relying on them to grade their own homework. Facebook has systems for estimating the age of any user, and within a year or two of a user turning 13, enough of their actual age mates have joined that Facebook can accurately estimate the real age of that person.

Frances Haugen: (02:10:11)
And Facebook should have to publish both that protocol of how they do that and publish the result going back a couple years of saying one, two, three, four years ago, how many 10-year-olds were on the platform? How many 12-year-olds were on the platform? Because they know this data today and they are not disclosing it to the public. And that would be a forcing function to make them do better detection of young people on the platform.
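
The kind of graph-based estimate Haugen describes can be pictured with a very small sketch: once enough of a user’s real-life age mates are on the platform, a robust statistic of their friends’ stated ages is a strong signal of the user’s true age. The data, threshold, and flagging rule below are hypothetical, offered only as an illustration.

```python
# Minimal sketch of a graph-based age estimate: the median of a user's
# friends' stated ages is a strong signal of the user's true age once enough
# real age mates have joined. Data and thresholds are hypothetical.
from statistics import median

def estimate_age(stated_age: int, friends_stated_ages: list, min_friends: int = 20):
    """Return (estimated_age, flag) where flag means 'stated age looks wrong'."""
    if len(friends_stated_ages) < min_friends:
        return stated_age, False           # not enough signal yet
    est = median(friends_stated_ages)
    return est, abs(est - stated_age) > 5  # flag large discrepancies

# A user who claims to be 21 but whose friends are mostly 12-13 year olds:
print(estimate_age(21, [12, 13, 12, 14, 13, 12, 13, 12, 13, 12,
                        13, 12, 14, 13, 12, 13, 12, 13, 14, 12]))
# -> (13.0, True): the account is probably much younger than stated.
```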

Beeban Kidron: (02:10:34)
Okay. I have been ticking off a list while you’ve been speaking this afternoon: mandatory risk assessments and mitigation measures, mandatory routes of transparency, mandatory safety by design, moderation with humans in the loop, I think you said, algorithmic design, and-

Speaker 3: (02:10:59)
[inaudible 02:10:59].

Beeban Kidron: (02:10:59)
… application of their own policies, right? Will that set of things in the regulator’s hand, in our bill that we are looking at, would all of that keep children safe? Would it save lives? Would it stop abuse? Would it be enough?

Frances Haugen: (02:11:15)
I think it would unquestionably be a much, much safer platform if Facebook had to take the time to articulate its risks. And I think this is the second part. They can’t just articulate their risks. They have to articulate their path to solve those risks. And it has to be mandated. And they can’t do a half solution. They need to give you a high quality answer. Because remember, we would never accept a car company that had five times the car accidents coming out and saying, “We’re really sorry, but brakes are so hard,” right? “We’re going to get better.” We would never accept that answer, but we hear that from Facebook over and over again. And I think between having more transparency, mandatory transparency, privacy conscious, and having a process of that conversation around what are the problems and what are the solutions, that is a path that should be resilient in moving us forward to a safer Facebook or any social media.

Beeban Kidron: (02:12:10)
Thank you.

Damian Collins: (02:12:10)
Thank you. Tim Clement-Jones.

Tim Clement-Jones: (02:12:14)
Hello. I [crosstalk 02:12:15]-

Damian Collins: (02:12:15)
Are you [inaudible 02:12:17]. That’s fine. In that case, I think [inaudible 02:12:19] Jim Knight had a question he wanted to bring back.

Jim Knight: (02:12:22)
Yeah. Thank you. We’ve been informed that your comments to the media on end-to-end encryption have been misrepresented. I’m interested in whether it’s something that we should be concerned about on this committee in terms of whether there’s a regulatory risk, and certainly there are security risks that some parts of government are concerned about with end-to-end encryption. But I’d like to give you the opportunity first to clarify what your position is. And if there’s any comment that you’ve got for us on whether we should be concerned about that area, then I’d be grateful.

Frances Haugen: (02:13:01)
I want to be very, very clear. I was mischaracterized in the Telegraph yesterday on my opinions around end-to-end encryption. So, end-to-end encryption is where you encrypt information on a device, you send it over the internet, and it’s decrypted on another device. I am a strong supporter of access to open source end-to-end encryption software. Part of why I am such an advocate for open source software in this case is that if you are an activist, if you’re someone who has a sensitive need, a journalist or a whistleblower, this matters. My own primary form of social software is an open source end-to-end encrypted chat platform.
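
For clarity on the definition she gives, encrypt on one device and decrypt only on the other, here is a minimal sketch using the open source PyNaCl library; the library choice and the example message are illustrative assumptions, not anything named in the hearing.

```python
# Minimal sketch of the end-to-end idea: the message is encrypted on the
# sender's device and can only be decrypted on the recipient's device.
# Uses the open-source PyNaCl library (pip install pynacl); library choice
# is an assumption made for illustration.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; only public keys ever leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key plus Bob's public key...
sealed = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6pm")

# ...so any server in the middle only ever sees ciphertext.
# Bob decrypts with his private key plus Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(sealed)
print(plaintext)  # b'meet at 6pm'
```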

Frances Haugen: (02:13:40)
But part of why that open source part is so important is that you can see the code. Anyone can go and look at it. And for the top open source end-to-end encryption platforms, those are some of the only ways you’re allowed to do chat in, say, the Defense Department in the United States. Facebook’s plan for end-to-end encryption, I think, is concerning because we have no idea what they’re going to do. We don’t know what it means. We don’t know if people’s privacy is actually protected. It’s super nuanced. And it’s also a different context, right? So, on the open source end-to-end encryption product that I like to use, there is no directory where you can find 14-year-olds, right? There is no directory where you can go and find the Uyghur community in Bangkok. On Facebook, it is trivially easy to access vulnerable populations and there are nation-state actors that are doing this.

Frances Haugen: (02:14:36)
And so I want to be clear, I am not against end-to-end encryption in Messenger, but I do believe the public has a right to know, what does that even mean? Are they really going to produce end-to-end encryption? Because if they say they’re doing end-to-end encryption and they don’t really do that, people’s lives are in danger. And I personally don’t trust Facebook currently to tell the truth, and I am scared that they are waving their hands at a situation where they’re concerned about various issues and they just don’t want to see the dangers anymore. I’m concerned about them misconstruing the product that they build, and they need regulatory oversight for that. That’s my position on end-to-end encryption.

Jim Knight: (02:15:15)
So just to be really clear, there’s a really important use case for end-to-end encryption in messaging. But if you ended up with an integration of some of the other things that you can do on Facebook with end-to-end encryption, then you can create quite a dangerous place for certain vulnerable groups.

Frances Haugen: (02:15:33)
I think there are two sides. So, I want to be super clear: I support access to end-to-end encryption, and I use open source end-to-end encryption every day, right? My social support network is currently on an open source end-to-end encrypted service. But I am concerned, on one side, that the constellation of factors related to Facebook, things like access to the directory and those amplification settings, makes it even more necessary to have public oversight of how end-to-end encryption is done there. The second side is just about security. If people think they’re using an end-to-end encryption product and Facebook’s interpretation of that is different from what, say, an open source product would do… With an open source product, we can all look at it and make sure that what it says on the label is actually in the can. But if Facebook claims they’ve built an end-to-end encryption thing and there are real vulnerabilities, people’s lives are on the line. And that’s what I’m concerned about. We need public oversight of anything Facebook does around end-to-end encryption, because they are making people feel safe when they might be in danger.

Jim Knight: (02:16:36)
Thank you very much.

Damian Collins: (02:16:39)
Debbie Abrahams.

Debbie Abrahams: (02:16:40)
Thank you so much. Just a quick follow up. Are you aware of any Facebook analysis in relation to the human cost of misinformation? So, for example, COVID is a hoax or anti-vax misinformation. Have they done anything? Have they actually tried to quantify that both in terms of illness, deaths, the actual human costs?

Frances Haugen: (02:17:06)
Facebook has done many studies looking at the… The misinformation burden is not shared evenly, right? The people most exposed to misinformation are the recently widowed, the recently divorced, people who have moved to a new city. And when you put people into these rabbit holes, when you pull people from mainstream beliefs into extreme beliefs, it cuts them off from their communities. Because if I begin to believe in flat-earth things and I have friends that are flat-earthers, it makes it hard for me to reintegrate into my family. In the United States, the metaphor we often use is: is Thanksgiving dinner ruined, right? Did your relative go and consume too much misinformation on Facebook, and now that’s what Thanksgiving dinner has become? And I think when we look at the social costs, when we look at the health costs…

Frances Haugen: (02:17:57)
I’ll give you an example. Facebook under-enforces on comments, right? Because comments are so much shorter, they’re really hard for the AI to figure out. Right now groups like UNICEF have really struggled even with the free ad credits that Facebook has given them, because they will promote positive information about the vaccine or about ways to take care of yourself with COVID and they’ll get piled on in the comments. And so Facebook’s own documents talk about UNICEF saying how much more impact would those ad dollars have had if they hadn’t been buried in toxic content.

Debbie Abrahams: (02:18:31)
Thank you so much. Thank you.

Damian Collins: (02:18:34)
Thank you. Dean Russell.

Dean Russell: (02:18:37)
Thank you, Chair. I just wanted to build on a comment you made earlier, if that’s okay. You mentioned this idea of skittles parties, I think, if I heard it right, and it occurred to me: how do the platforms evolve in terms of language? Obviously we’ve talked about different languages, but think of English. Within English, there are huge differences between American English and English English, if you can call it that, British English. But also the slang that’s used. When you were mentioning that, it occurred to me that a few weeks ago I had someone on Facebook, who’d met me in a pub previously, say they wished they’d given me a Glasgow hug, which it turns out is worse than a Glasgow kiss; it actually means to stab me. At the time that was reported, and I only found out about this afterwards, it was reported initially to Facebook, and they said it didn’t break any of their rules. I believe someone else and a few others then reported it, and it eventually got taken down, either by the page it was on or by Facebook.

Dean Russell: (02:19:34)
In that time, someone else had put on it that they’d met me during an election campaign and they wished they’d given me a Glasgow hug as well. So, in other words, two people wanting to stab me, publicly, on the platform. Now, when that was reported and clarified to Facebook, that a Glasgow hug meant to stab me, separate from a Glasgow kiss, which I think is a headbutt, which is just as awful, I wonder, do you know, within Facebook, would it learn from that? Would it know the next time somebody says they want to give someone a Glasgow hug that they mean to stab them, or would that just be lost into the ether?

Frances Haugen: (02:20:09)
I think it is likely that it would get lost into the ether, right? Facebook is very cautious about how they evolve lists around things like hate speech or threats. And I did not see a great deal of regionalization, or investment in the very high level of regionalization that would be necessary to do content-based interventions effectively. And I think there are interesting design questions where, if we as a community, meaning government, academics, independent researchers, came together and said, “Let’s think about how Facebook could actually gather enough structured data to be able to get that case right,” or the case of the insult to the other member… How do you do that, right? And I think if they took a strategy that was closer to what Google has done historically, they would likely have a substantially safer product. Google committed to being available in, I think, 5,000 languages, right?

Frances Haugen: (02:21:15)
How do you make Google’s interfaces, how do you make the help content available in basically all the major languages in the world? And the way they did that was they invested in a community program where they said, “We need the help of the community to make this accessible.” And if Facebook invested the time and effort, and the collaborations with academia, with other researchers, and with government, to figure out collaborative strategies for producing that structured data, I think we’d have a way safer Facebook. I don’t support their current integrity strategies. I think content-based solutions are not great, but they could be so much better than they are today. So, let’s make the platform safer. And if you want to continue to use AI, let’s figure out how to actually do it in a way that protects people in Scottish English, not just in American English, right?
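
One very simple form of the community-sourced “structured data” Haugen is gesturing at is a per-region slang lexicon, so that a phrase like “Glasgow hug” is caught even though a model trained only on American English would miss it. The sketch below is purely illustrative; the region codes and lexicon entries are hypothetical.

```python
# Toy illustration of the "regionalization" point: a classifier that only knows
# American English misses regional slang like "Glasgow hug". A community-sourced,
# per-region lexicon is one simple form of structured data that could help.
# Lexicon contents and region codes are hypothetical.
REGIONAL_THREAT_LEXICON = {
    "en-US": {"put a hit on"},
    "en-GB-SCT": {"glasgow kiss", "glasgow hug"},  # headbutt / stabbing slang
}

def flag_threat(text: str, regions=("en-US",)) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for region in regions
               for phrase in REGIONAL_THREAT_LEXICON.get(region, ()))

report = "wish I'd given him a Glasgow hug"
print(flag_threat(report))                          # False: US-only lexicon misses it
print(flag_threat(report, ("en-US", "en-GB-SCT")))  # True: regional lexicon catches it
```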

Dean Russell: (02:22:01)
Thank you. And just a further question on that, because that for me builds into this question of the evolution of these platforms, the learning that they have, but also future-proofing this bill against future changes. At the moment, we talk a lot about Facebook and Google and Twitter and all those things as being looked at on a screen, but increasingly they are working in the realms of virtual reality. Obviously there’s Oculus Rift as well, which will increasingly enable user-generated content or engagement. Do you know whether any of the principles that we are talking about here around safety and reducing harm are being discussed for those future innovations? Because my concern, to be honest, is that we’ll get this bill right and then actually the world will shift into a different type of use of platforms and actually we won’t have covered the bases properly.

Frances Haugen: (02:22:54)
I’m actually a little excited about augmented reality because often augmented reality attempts to recreate interactions that exist in physical reality, right? In this room, we have maybe 40 people total and the interactions that we have socially are at a human scale. Most augmented reality experiences that I’ve seen have been more about trying to recreate dynamics of an individual. They’re either games that you play, or they are communications with one or maybe a handful of people. And those systems have a very different consequence than the hyper-amplification systems that Facebook has built today. The danger with Facebook is not individuals saying bad things. It is about the systems of amplification that disproportionately give people saying extreme, polarizing things the largest megaphone in the room. And so I agree with you that we have to be careful about thinking about future-proofing.

Frances Haugen: (02:23:46)
But I think the mechanisms that we talked about earlier, about the idea of having risk assessments, and risk assessments that aren’t just produced by the company, but need to also be the regulator gathering from the community and saying, “Are there other things we should be concerned about?” Right? A tandem approach like that, that requires companies to articulate their solutions, I think that’s a flexible approach. I think that might work for quite a long time. But it has to be mandatory and there have to be certain quality bars, because if Facebook can phone it in, I guarantee you they’ll phone it in.

Dean Russell: (02:24:17)
Thank you. Back to you, Chair. Thanks very much.

Damian Collins: (02:24:20)
Thank you. Just a couple of final questions from me. In the evidence session last week, I think it was [Guillaume Chaslot 02:24:31] who said, based on his experience at YouTube, that the way algorithmic recommendation works is that it’s not there just to give you more of what you want; it’s there to discover which rabbit hole you should be pushed into. Do you think that’s a fair characterization?

Frances Haugen: (02:24:44)
There’s a difference between the intended goals of a system, so Facebook has said, “We never intended to make a system that amplifies extreme polarizing content,” and the consequences of a system. So, all recommender systems intend to give you content that you will enjoy because as Facebook has said, “That will keep you on the platform longer.” But the reality is that algorithmic systems, AI systems, are very complicated and we are bad at assessing the consequences or foreseeing what they’re going to be. But they’re very attractive to use because they keep you on the site longer, right? If we went back to chronological ranking, I bet you’d view 20% less content every day, but you might enjoy it more. It’d be more from your friends and family. The question on rabbit holes, I don’t think they intended to have you go down rabbit holes.

Frances Haugen: (02:25:30)
I don’t think they intended to force people into these bubbles, though they have made choices that have unintended side effects. So, I’ll give you an example: Autoplay. I think Autoplay on YouTube is super dangerous, right? Autoplay, instead of having you choose what you want to engage with, chooses for you and keeps you in a stream, a flow, where it just keeps you going. There’s no conscious action of continuing, of picking things, or of deciding whether or not to stop, right? And that’s where those rabbit holes come from.
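
The autoplay dynamic she describes can be sketched as a loop in which the system, not the viewer, picks each next item, so the default is simply to keep watching. The recommender and session logic below are hypothetical stand-ins, not YouTube’s or Facebook’s actual code.

```python
# Illustrative sketch of the autoplay dynamic: when the system picks the next
# item, the default is to keep going, and the "next" choice can drift toward
# whatever the model predicts will hold attention. Entirely hypothetical.
import random

random.seed(0)

def recommend_next(current_topic: str) -> str:
    """Stand-in recommender: sometimes nudges toward a more extreme variant."""
    return random.choice([current_topic, current_topic + " (more extreme)"])

def autoplay_session(start_topic: str, minutes_available: int) -> list:
    """No per-item decision: playback continues until the viewer runs out of time."""
    watched, topic = [], start_topic
    for _ in range(minutes_available):
        watched.append(topic)
        topic = recommend_next(topic)   # the system chooses; the viewer just stays
    return watched

print(autoplay_session("healthy recipes", 5))
# With user-initiated choice, each step would instead require an explicit pick
# (or a decision to stop), which is the "conscious action" referred to above.
```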

Damian Collins: (02:26:05)
The equivalent on Facebook would seem to be, from what you said earlier on, that someone signs you up without your consent to a group that’s focused on anti-vax conspiracy theories about COVID. You see it in your newsfeed, even though you never asked for it, and you engage with one of the postings; you’re now automatically a member of the group. And probably you don’t just get more content from that group: because that’s quite an interesting deviation from stuff you’ve done before, the whole system will probably give you more of that kind of content. That’s what I mean, I think, about the system recognizing a kind of interesting new line of inquiry from a user and then piling in on that with more stuff.

Frances Haugen: (02:26:37)
And I think that’s what’s so scary. There’s been some reporting on a story about a test user. Facebook has said it takes two to tango; Nick Clegg wrote a post back in March saying, “Don’t blame us for the extreme content you see on Facebook. You chose your friends. You chose your interests. It takes two to tango.” But when you make a brand new account and you follow some mainstream interests, for example Fox News, Trump, Melania, it will lead you very rapidly to QAnon. It’ll lead you very rapidly to white genocide content. And this isn’t just true on the right; it’s true on the left as well. These systems lead to amplification and division. And I think you’re right: it’s a question of the system wanting to find the content that will make you engage more, and that is extreme polarizing content.

Damian Collins: (02:27:26)
Yeah. I think to claim, as Nick Clegg did in that situation, that it takes two to tango, it’s almost like it’s your fault that you’re seeing all this stuff. It’s, again, a massive misrepresentation of the way the company actually works.

Frances Haugen: (02:27:37)
Facebook is very good at dancing with data. They have very good communicators. And the reality is, the business model is leading them to dangerous actions.

Damian Collins: (02:27:47)
Yeah. So if it takes two to tango, the other party is actually Facebook, not another user.

Frances Haugen: (02:27:52)
Yes.

Damian Collins: (02:27:54)
On fake accounts, we heard from Sophie Zhang last week about her work identifying networks of inauthentic activity. Based on your work at Facebook, how big a problem do you think that is for things like civic integrity around elections? I mean, in some of Sophie’s evidence, we’re talking about networks of hundreds of thousands of accounts that have been taken down. But how much of a problem is it in your area of work?

Frances Haugen: (02:28:14)
I’m extremely worried about fake accounts. And I want to give you guys some context on a taxonomy around fake accounts. So, there are bots. So, these things are automated. Facebook’s reasonably good at detecting bots. It’s not perfect, but it’s reasonably good. Then there are things called manually-driven fake accounts. So, a manually-driven fake account is, for example, there are cottage industries in certain pockets of the world. There’s certain parts of Pakistan. There’s certain parts of Africa. There’s certain pockets where people have realized that you can pay a child a dollar to play with an account, like a 12-year-old, to be a fake 35-year-old for a month. And during that window, you will have passed the window of scrutiny at Facebook, and you will look like a real human because you are a real human. And that account can be resold to someone else because it now looks like a real human account.

Frances Haugen: (02:29:05)
And those accounts, there are at least 100,000 of them among the… Back when I left, so this is May, there were approximately 800,000 Facebook Connectivity accounts, I believe. These are the accounts where Facebook is subsidizing your internet. Among those, there were 100,000 of these manually-driven fake accounts that were discovered by a colleague of mine. They were being used for some of the worst offenses on the platform. And I think there is a huge problem around the level of investment Facebook has made in detecting these accounts and preventing them from spreading harm on the platform.
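
The “window of scrutiny” problem she describes is easy to see in a toy form: if extra checks apply only to young or bot-like accounts, a manually warmed-up account that is later resold sails past them. The threshold and fields below are hypothetical.

```python
# Toy sketch of the "scrutiny window" problem: if new accounts get extra
# scrutiny only for their first N days, an account warmed up by a real human
# for a month and then resold passes the check. Threshold is hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    human_like_activity: bool   # posts, comments, friends added by a real operator

SCRUTINY_WINDOW_DAYS = 30

def under_extra_scrutiny(acct: Account) -> bool:
    """Naive policy: only young or bot-like accounts get the extra checks."""
    return acct.age_days < SCRUTINY_WINDOW_DAYS or not acct.human_like_activity

farmed = Account(age_days=35, human_like_activity=True)   # warmed up, then resold
print(under_extra_scrutiny(farmed))  # False: the manually-driven fake slips through
```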

Damian Collins: (02:29:42)
How confident are you that the number of active users on Facebook is accurate, that those people are real people?

Frances Haugen: (02:29:55)
I think there are interesting things around the general numbers. As we talked about before, this is about distributions; on social networks, things aren’t necessarily evenly allocated. Facebook has published a number of, I believe, 11%: they believe 11% of their accounts are not people, they’re duplicates. Amongst new accounts, they believe the number is closer to 60%, but to my awareness that has never been disclosed in a public statement. And so there’s this question of, if investors are valuing the company based on a certain number of new accounts every month and 60% of those are not actually new people, they’re over-inflating the value of the company.

Damian Collins: (02:30:37)
And I mean, if those audiences are being sold to advertisers as real people, that’s fraudulent. You’re selling people you’ve got reason to believe are probably fake but you’re selling them as real people to advertisers.

Frances Haugen: (02:30:48)
So, there is a problem known as SUMAs, which are same-user, multiple accounts. And we had documentation that said… Or, excuse me, I found documentation about reach and frequency advertising. Let’s say you’re targeting a very specific population. Maybe they’re highly affluent and slightly quirky individuals, and you’re going to sell them some very specific product. Facebook is amazing for these niches because maybe there are only 100,000 people in the United States that you want to reach, but you can get all of them, right? Facebook has put in controls called reach and frequency advertising, so you can say, “I don’t want to reach someone more than seven times, or maybe 10 times, because that 30th impression is not very effective.” And Facebook’s internal research says that those reach and frequency systems were not accurate, because they didn’t take into consideration these same-user, multiple-account effects. So that is definitely Facebook overcharging people for their product.
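
The overcharging mechanism follows directly from capping impressions per account rather than per person. The sketch below, with a hypothetical account-to-person mapping, shows a person with two accounts receiving double the intended frequency cap.

```python
# Sketch of the same-user-multiple-accounts (SUMA) problem: a frequency cap
# enforced per *account* overshoots when one person owns several accounts,
# so the real person sees more ads than the advertiser paid to cap at.
# The account-to-person mapping here is hypothetical.
from collections import Counter

FREQUENCY_CAP = 7  # max impressions the advertiser wanted per person

# Which accounts belong to which real person (unknown to the capping system).
account_owner = {"acct_1": "person_A", "acct_2": "person_A", "acct_3": "person_B"}

def serve_with_account_level_cap(accounts, cap=FREQUENCY_CAP):
    """Cap is tracked per account, the only identity the ad system sees."""
    impressions_per_account = {a: cap for a in accounts}   # each account maxes out
    per_person = Counter()
    for acct, shown in impressions_per_account.items():
        per_person[account_owner[acct]] += shown
    return per_person

print(serve_with_account_level_cap(account_owner))
# Counter({'person_A': 14, 'person_B': 7}) -> person_A saw double the intended cap.
```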

Damian Collins: (02:31:48)
And that presumably works in the same way on Instagram as well.

Frances Haugen: (02:31:52)
I’m sure it does.

Damian Collins: (02:31:53)
And there have been concerns raised about, in particular, young people being encouraged to have multiple duplicate accounts, sorry, not on Facebook, on Instagram. Do you share that concern from a safety point of view?

Frances Haugen: (02:32:10)
I was present for multiple conversations during my time in civic integrity where they discussed the idea that on Facebook, the real-names policy and the authenticity policies are security features. On Instagram, because they don’t have that same contract, there were many accounts that would’ve been taken down on Facebook for coordinated behavior and other things, but because they weren’t inauthentic under Instagram’s policies, it was harder to take them down. And so in the case of teenagers, encouraging teenagers to make private accounts, so that, let’s say, their parents can’t understand what’s happening in their lives, I think is really dangerous. And there should be more family-centric integrity interventions that think about the family as an ecosystem.

Damian Collins: (02:33:01)
Yeah. Because, as you say, a young person engaging with harmful content, problematic content, would probably do it using a different account, while their parents see the one that they think they should have. But do you think that policy needs to change? Do you think the system can be made to work on Instagram as it does today?

Frances Haugen: (02:33:18)
I don’t think I know enough about Instagram’s behavior in that way to give a good opinion. Yeah.

Damian Collins: (02:33:22)
Okay. But as a concerned citizen who’s worked in technology?

Frances Haugen: (02:33:26)
I strongly believe that Facebook is not transparent enough today and that it’s difficult for us to actually figure out the right thing to do, because we are not told accurate information about how the system itself works. And that’s unacceptable.

Damian Collins: (02:33:41)
I think we’d agree with that. I think that’s a good summation of a lot of what we’ve been talking about this afternoon. That concludes the questions from the committee, so we’d just like to thank you for your evidence and for taking the trouble to visit us here in Westminster.

Frances Haugen: (02:33:52)
Thank you so much for the opportunity.

Damian Collins: (02:33:52)
Thank you. Thank you.

Members of Parliament: (02:33:52)
Thank you. Thank you. Thank you.

Beeban Kidron: (02:33:53)
Thank you very much.

Debbie Abrahams: (02:33:54)
Thank you very much.

Damian Collins: (02:33:54)
Thank you.
