Social Media Company Executives Questioned by UK Lawmakers

Representatives from TikTok, Meta, and Roblox appear before the British Parliament's Education Committee after Snapchat withdrew from the session. Read the transcript here.


Moderator (12:11):

Order, order. Welcome this morning to this oral evidence session of the Education Select Committee. Our session this morning is part of the first of two oral evidence sessions that the committee is holding to look at issues around screen time and social media in the context of the government's consultation on future policy in this area. The committee, our predecessor committee undertook some important work in this area and what we're wanting to do is to refresh the evidence that we've taken in public so that we can make a contribution to the government's consultation. This sits alongside a longer-term piece of work that we're undertaking about the role of AI and EdTech within children's lives and within the education system at the moment, which will take a bit longer to conclude. But we're grateful to our witnesses for joining us today.

(12:59)
I need to put on record that we are extremely disappointed that Snapchat, who were due to come to give evidence today, withdrew from that commitment at quite short notice. We are hoping very much to hear from Snapchat on their own at our meeting this time next week. But I should put on record that we have taken the formal decision today that we will use our powers to summon a witness from Snapchat in the event that they're not cooperative with the committee in coming to give their evidence. This is important given the prevalence of children's use of Snapchat and their relevance to the debate that we are having as a nation and within this committee at the moment. I'm very grateful to the three witnesses who have joined us today. Can I invite you to introduce yourselves to the committee, please, starting with Alistair Law?

Alistair Law (13:55):

Thank you, Chair, and fully share your view on the importance of being able to give evidence in this debate. I'm Alistair Law. I'm Director of Public Policy for Northern Europe at TikTok.

Moderator (14:04):

Thank you. Rebecca Simpson.

Rebecca Simpson (14:06):

Good morning. I'm Rebecca Simpson. I'm the Director of Public Policy UK for Meta.

Moderator (14:11):

Thank you. And joining us virtually, we have Laura Higgins.

Laura Higgins (14:17):

Thank you for inviting me and facilitating the virtual joining. Hi, I'm Laura Higgins. I'm the Senior Director of Community Safety and Civility at Roblox, an immersive gaming and creation platform.

Moderator (14:28):

Thank you very much. I'll begin our questioning this morning. The Prime Minister met some of the most senior representatives of each of your companies, along with some other companies at Downing Street last Thursday to discuss children's use of social media. He told those representatives that things cannot go on like this. Do you agree with that? Can I start with you, Rebecca Simpson, please?

Rebecca Simpson (14:54):

Yeah. We were really pleased to be invited to that conversation. Very important, timely conversation, as is this one today. We welcomed the opportunity there to lay out the steps we have taken, particularly around 13 to 18 year olds on our platforms, and I'm sure we will get into that in this conversation this morning. But we also absolutely agree that this is not a job that is finished and done, unfortunately, and that there is always more to do to ensure safety, particularly as technology evolves. So we recognize the very real concerns that were expressed at that round table and we look forward to working with the government and parliament as we all continue through the consultation process on what next steps might be the best idea.

Moderator (15:37):

Thank you. Alistair Law?

Alistair Law (15:39):

So look, I represented TikTok at that meeting in my capacity as a Director of the TikTok UK board. And I was very pleased to do so. I share the concerns that the Prime Minister set out, and it was a number of different concerns that, again, I'm sure we'll go and talk through in more detail. I was pleased to be able to give an overview of the age-appropriate experience that TikTok provides for under 16s on our platform, but completely agree with the notion that safety is a race that's never run. And this is an area where we'll continue to review the evidence and continue to make changes and invest in ensuring that children on our platform are protected.

Moderator (16:18):

Thank you. And Laura Higgins?

Laura Higgins (16:21):

Thank you. So I understand that the meeting held by the Prime Minister was specifically in relation to social media platforms. So we weren't invited to that conversation. We are, however, very pleased to be part of this broader conversation. We share the committee's commitment to keeping children safe online. We want to work with the committee and with government to make sure that we are all helping to contribute to this conversation, to make sure that any rules that come out of the process are proportionate and well-targeted.

(16:50)
We recognize as a platform that is used widely by children, that it carries real responsibility and we don't take that lightly. So we're looking forward to the conversation and sharing some of the work that we're doing.

Moderator (17:01):

If I may say that the tone of those initial answers is very much that there is work to do, but this is a managed, business-as-usual progression with some monitoring and possibly some tweaks. Alistair Law, the committee's been briefed today on a report coming from a police investigation that children on your platform are being groomed into sexual activity, are selling themselves through that grooming, and that content of that nature via TikTok is ending up on the dark web in the hands of predators. Do you think monitoring, engaging in a debate, and taking some further steps is enough in that context?

Alistair Law (17:59):

In the context that you mentioned just there, we absolutely abhor that kind of activity. It has no place on our platform. When we became aware of the report that you mentioned, which was first reported in The Telegraph, we immediately contacted the Home Office and the National Crime Agency, and within a couple of days we were talking directly to the police force itself.

(18:21)
We have a number of different steps that prevent and act against the kind of activity that you've just described, which I'm very happy to go into. But in terms of taking that report and trying to make sure that we identify direct harm and learn lessons from it, we were speaking to the police force within a couple of days. We have law enforcement teams that are designed to engage directly with police forces so that if there's any violative activity that is taking place on our platform, we can respond to it in real time. We can then continue to build our learning because, as you say, the predators and the objectionable bad actors in this situation are always going to be looking to use whatever means they can to achieve their objectives.

(19:03)
We have a whole set of things that we do on our platform, use of content moderation, taking down and prevention of any live-streaming by somebody under the age of 18, additional checks as to how old they should be. But when we identify activity that is seeking to circumvent that, we want to know about that as quickly as possible, and we want to be able to take action and close any gaps as quickly as possible as well. So we're working with the police force, OCCIT, to further understand what is it that they've learned, particularly in an off-platform environment because, as you say, sometimes what we find is that people will be trying to direct people off platform, and the more that we can learn about what takes place in closed spaces, be it the dark web or encrypted messaging services, the more that we can strengthen our measures.

(19:52)
But to completely agree with you, that needs robust direct action and it's action that we're taking.

Moderator (20:00):

And again, that response might be a reasonable and credible response if this was an exceptional type of activity that was occurring sometimes and you were reacting immediately to something that was occurring sometimes. But the report says, "The abuse and sexualization of children is frequently noted taking place on the platform itself. Within just a few days of reviewing offender behaviors on the platform, OCCIT noted hundreds of accounts dedicated to the sexualization of children, many of which specifically focused on those from the UK." So this is normal across your platform. This is not exceptional harmful behavior that's happening occasionally. Why aren't you stopping it?

Alistair Law (20:48):

As I say, we're very grateful for the report and in particular for being able to learn the ways in which bad actors try and evolve [inaudible 00:20:56]-

Moderator (20:55):

Why do you need the police to tell you what's happening on your own platform?

Alistair Law (20:58):

So we constantly look at trying to make sure that we are enforcing the rules that we have with all of the different content moderation approaches that we have. Just to set it out for a minute: any time any video is uploaded to TikTok, it is scanned on upload by auto-moderation technologies. They will look for a range of different harms, anything that breaches our community guidelines. And there are certain things that AI models have actually been pretty effective at being able to identify. Nudity is a great example. It's prohibited on our platform in general, leaving aside, obviously, even the most abhorrent images, and we're quite effective with our AI models because they have good training data and they're very effective at blocking it on upload.

(21:42)
We take other measures such as ensuring that under 16s don't have access at all to direct messaging. And where direct messaging is allowed when you're over the age of 16, still ensuring that we're proactively scanning any images for known or novel CSAM as well. None of that means that we have completed our work in terms of preventing that activity, because people evolve their approaches. And what we have met with OCCIT about, and are continuing to learn more of, is what evolutions predators are undertaking to try and circumvent our approaches and to evolve their behavior, so that maybe they're using keywords that are discussed on the dark web, and we need to understand those keywords so that we can block them and prevent people from off-platforming on that basis. It's a constant level of evolution, but I do want to assure the committee that we are dedicating significant priority resources to tackling it.
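
A minimal sketch of the kind of upload-time flow described there, automated scanning at upload, proactive removal of high-confidence violations, human review of uncertain cases, and keyword blocklists informed by law-enforcement intelligence, might look as follows. Every class name, label, and threshold below is a hypothetical illustration of that general approach, not TikTok's actual system.

# Illustrative sketch only: a generic upload-time moderation flow of the kind
# described in the testimony (automated scan on upload, block or queue for
# human review, and a keyword blocklist updated from law-enforcement reports).
# Every class, label, and threshold here is hypothetical.

from dataclasses import dataclass

@dataclass
class ScanResult:
    label: str          # e.g. "nudity", "csam_match", "none"
    confidence: float   # 0.0 to 1.0 from the automated classifier

BLOCKED_KEYWORDS = {"example-coded-term"}   # refreshed from police/NCA intelligence
AUTO_BLOCK_THRESHOLD = 0.95                 # high-confidence violations removed with zero views
HUMAN_REVIEW_THRESHOLD = 0.60               # uncertain cases go to a human moderator

def moderate_upload(caption: str, scan: ScanResult) -> str:
    """Decide what happens to a video at upload time."""
    if scan.label == "csam_match":                      # hash match against known CSAM
        return "block_and_report_to_law_enforcement"
    if any(term in caption.lower() for term in BLOCKED_KEYWORDS):
        return "block_upload"                           # off-platforming keywords
    if scan.label != "none" and scan.confidence >= AUTO_BLOCK_THRESHOLD:
        return "block_upload"                           # removed proactively, zero views
    if scan.label != "none" and scan.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "publish"

# Example: a borderline nudity classification is routed to a human moderator.
print(moderate_upload("summer vlog", ScanResult("nudity", 0.72)))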

Moderator (22:38):

I just wanted to drill into that example early in the evidence session, because I think it demonstrates that with the best will in the world you can't control it and you haven't been controlling it. We can't accept that that level of harmful activity is normal across the platform and that you're doing best endeavors to make sure that it doesn't happen when it continues to happen. Would you agree that the approaches that you're deploying so far aren't working and this is something that a platform like yours simply can't get a grip on, and therefore we need different approaches to the ones that have currently been taken to keep our children safe?

Alistair Law (23:18):

I think any single example of harm relating to that kind of activity occurring on the platform is obviously a failure and something that we need to directly address. At scale, the kind of moderation approaches that I was talking about mean that of the violative content that is available on our platform in general, 99% of it is taken down on a proactive basis under our activity. And close to 90% of that is taken down with zero views on pure automation, so not even involving humans as part of that.

(23:50)
Now, is that a complete and total delivery of what we need to do in this area? No, and we need to continue to work on that. But from a scale perspective, the models, the investment that we have in AI and our content moderation approach is doing huge work to ensure that our community is kept safe. Our responsibility is to continue to enforce, invest in improvements in that enforcement and, as I say, work with partners, like this police force, like the NCA, to make sure that as bad actors evolve their techniques, we're aware of that as soon as possible and we can put our own measures in place to address them.

Moderator (24:29):

Let me go to Chris now.

Chris (24:30):

Thank you. And thank you all for your time today. The previous iteration of this committee reported in 2023 with a distinct concern about the harms of screen time. In fact, they specifically said, "There's an overwhelming weight of evidence submitted to us suggesting that the harms of screen time and social media use significantly outweigh the benefits for young children." So I just wondered whether you would agree with that assessment and, whether you do or you don't, what benefits do you see that young people gain from using your service? And that's definitely [inaudible 00:25:01]. Rebecca, do you want to go first?

Rebecca Simpson (25:04):

Yeah, of course. So I think there's lots of benefits that people gain. You can see on our platforms on Instagram in particular, people are following things like BBC Bite Size, some of Britain's best institutions, museums. There's great tutoring and school and education support, which I know would be particularly relevant to this committee.

(25:24)
I think what we've generally found is that clearly screen time can be a problem if someone is passively scrolling. Overconsumption of anything can be a problem. I think it depends a bit on what you're doing. And partly because of that, we have this thing called Teen Accounts, where we have defaulted all 13 to 18 year olds into a much more restricted experience. We give teens a nudge after an hour of being on the platform, it's muted overnight, and we have parental controls so parents can set the right screen time for their particular family, and that can be down to as little as 15 minutes a day on the app. So I think there's a mixed picture about screen time, but we absolutely hear it and we've tried to respond to that by giving people choices.

Chris (26:07):

Can I just come back to that really quickly, Rebecca? Because it's interesting what you said about the difference, the experience that you give to teens. I mean, I've got Facebook because I'm that age, right? And I think there's a real danger of spending a lot of time scrolling because the algorithm's specifically designed to give you more content that you want. So I get a lot of standup comedy because that's what I'm interested in. And the problem with that, I think, is that it potentially extends the amount of time people spend on a screen to levels that aren't actually good for them. So have you done anything for younger users of your accounts to tackle that potentially addictive algorithm problem?

Rebecca Simpson (26:46):

Yeah. So as I said, our recommender system is designed to offer you connections with your friends and your family and the things you enjoy. And that can, as you say, lead to, if you like comedy, it keeps giving you that. So what we've done, particularly, as I say, for 13 to 18 year olds who have been defaulted into a more restrictive set of settings, is they get told after an hour to leave the app. The app is muted overnight, so between, I think it's 10:00 PM and 7:00 AM, but also we've given ... That account is then managed by a parent and they have full control over the time. So if they felt like they wanted to restrict their child's time online, it can be reduced to as little as 15 minutes a day.

(27:26)
We haven't enforced that centrally, because there are lots of examples of young people making really great use of these apps to connect with issues they care about for activism, for hobbies and interests and education. So we think it's better to leave it to give some control to the families over it. Like I said, we interrupt after an hour.

Chris (27:44):

Is there, sorry, just in terms of the idea of having a teen account, my concern would be, is there not a danger that young people, teenagers, just set up an account? And there's probably an answer to this and I don't know what the answer is. How do you ensure

Chris (28:00):

that people under the age of 18 or under the age of 16 don't just set up an adult account? How do you control that?

Rebecca Simpson (28:06):

We have a whole range of ways that we try to mitigate that. Absolutely accurate age assurance, I think we recognize, is an industry-wide challenge, but we take a multifaceted approach to ensuring that people aren't lying about their age. One thing that AI has really helped us advance recently is that if you lie and say your birthday is in 1975, it can then scan and detect what you're doing, your friends, what you're posting, and your image, and it has a very good way of telling that you're not that age and you've lied. So then you are defaulted into that experience. So we recognize that challenge, but we try as much as possible to prevent people from setting up fake accounts or accounts where they're claiming to be an adult and they're not. And then now, as I said, in this new restricted experience, it's linked to a parent's account, who can then have a great deal of autonomy and control over what's happening for that teen on our platforms.

Chris (28:59):

Same question, really. Just on the general point about screen time, but also on Rebecca's point, what are you doing? What is TikTok doing to deal with the issue of age verification?

Alistair Law (29:10):

Sure. I just wanted to go back to the first part of your question quickly on the positive elements of our platform, which is obviously the ability to create, discover, express yourself. But very much from a learning perspective, we introduced on the homepage a STEM feed in 2024, so this is curated content specifically on science, technology, engineering, and maths. It's available to all under 18s by default. We find that a little under a third of them visit on a weekly basis. TikTok has been a place where communities like BookTok have created huge levels of new interest in reading, which is I know something else that this committee has looked at previously as well. So I think that the benefits of connection of community and expressions of creativity are meaningful and material. But of course, we recognize what it is that you're saying around concerns around screen time.

(30:03)
We also have an age appropriate experience. So the experience that you get on TikTok if you are 15 is very different to if you are 25. That actually starts by ensuring that you have a default one hour screen time cap. So you won't be able to use the app after an hour. Within that, we also have screen time break recommenders. So if you've been using it for half an hour straight, then a popup will come up saying you've been using this for 30 minutes. You can snooze that, you can dismiss it, or choose to take a different view. We also have a notification curfew, so nothing comes through as a notification on the app from 9:00 PM overnight. If you're using the app at 10:00 PM, we have a thing called sleep hours reminder, which is a full screen takeover that prompts people that it might be time to put down the phone, move away, go to bed. It actually takes them through a meditative breathing exercise as a way to shift their energy as well.

(30:56)
So there's a huge number of individual nudges adding up to a collective whole. If you are 15 on the app, and I haven't mentioned the features that are restricted like direct messaging and going live, I think we can leave that for the safety discussion, but if you are 15 on the app, there's a whole host of things that are acting together to present a balanced and healthy relationship with the app.

Chris (31:18):

On the hour cap that you mentioned. So after an hour, does it close completely or does it ... And if that's the case, how long after that can you then open up the app again?

Alistair Law (31:31):

So as default, it's an hour cumulative over a 24 hour period. And yes, it will tell you that you've reached your limit and it will close the app. It isn't completely mandatory, but I think what we have done, when we designed it, we actually did so with the Boston Children's Hospital's Digital Wellness Task Force, and we asked them, because there isn't actually an awful lot of academic research out there that says what a level of screen time should be for somebody of that age, and this is applicable for people between the ages of 13 and 15. So as part of their research, they alighted on an hour as a starting point.

(32:06)
I think as Rebecca said, the ability to go beyond that, lower than that, we also have parental tools so that if you have linked with your child on our family pairing feature, then you have additional levels of control. You can set lower screen time caps, you can set blocks of time away if you don't want people to access it during school hours, for example. But our focus is very much thinking about the multifaceted different ways that people might be using the app and might be nudged into considering their use of it so that it's balanced.
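
A rough sketch of the default-plus-override model described above, a one-hour default cumulative cap, a parental option to lower it through family pairing, and an evening notification cutoff, is given below. The field names, the 8 AM end of the curfew, and the override rule are illustrative assumptions rather than TikTok's implementation.

# Illustrative sketch only: one way a default daily screen-time cap with a
# parental override and a notification curfew might be modelled. The defaults
# mirror figures given in the testimony (60-minute cap, 9 PM notification
# cutoff); all names, structures, and the 8 AM curfew end are hypothetical.

from dataclasses import dataclass
from datetime import time

@dataclass
class TeenSettings:
    age: int
    daily_cap_minutes: int = 60                # default one-hour cumulative cap
    parental_cap_minutes: int | None = None    # parent may set a lower cap via family pairing
    notification_curfew: time = time(21, 0)    # no push notifications after 9 PM

    def effective_cap(self) -> int:
        if self.parental_cap_minutes is not None:
            return min(self.daily_cap_minutes, self.parental_cap_minutes)
        return self.daily_cap_minutes

def should_interrupt(settings: TeenSettings, minutes_used_today: int) -> bool:
    """True when today's cumulative use reaches the effective cap."""
    return minutes_used_today >= settings.effective_cap()

def notifications_allowed(settings: TeenSettings, now: time) -> bool:
    """Suppress push notifications overnight, from the curfew until 8 AM."""
    return time(8, 0) <= now < settings.notification_curfew

# Example: a parent lowers the cap to 15 minutes; the app interrupts at 15 minutes,
# and a 10:30 PM notification is suppressed.
s = TeenSettings(age=15, parental_cap_minutes=15)
print(should_interrupt(s, 15), notifications_allowed(s, time(22, 30)))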

Chris (32:39):

Thank you. To come back to the initial part of the question. So the previous committee found that social media and screen time do pose harm. Would you agree that to be the case? And I'm talking in terms of ... I think what you're saying is, you've put that hour cap in place for a reason, so you would say you would have concerns about a young person particularly using social media for a longer period of time in a 24 hour period.

Alistair Law (33:05):

I think, as Rebecca said, the idea of over-consumption of anything is something that we want to be concerned about. We will always be led by the evidence on this, but I think that the evidence is contested, be it UNICEF findings or others, on screen time as a whole, and particularly the idea of a particular limit, I don't think, has been firmly established from an academic perspective. I know you're hearing from academics later on. So from our perspective, it's about putting in place a range of different measures that act together and giving people agency as well. So you can think of the interventions that we have in three buckets. There's some stuff that is just on and you can't change it. The notification bit that I was talking about earlier after 9:00 PM, or the screen time report that you're given at the end of the week, that's just on.

(33:50)
Then there's things that are on by default. You could opt out if you want, but I mentioned, for example, the full screen takeover meditation breathing exercise; we found that 98% of under 16s kept that on, and as a result, we extended that to under 18s as a whole. And then you have things that you can opt into, like the family pairing parental controls. So all of them are little nudges, little features that collectively add up to something I think quite powerful.

Chris (34:16):

Yeah. And I take your point about the evidence, but I mean, I always mention this in every select committee, but I did used to be a teacher. And one of the things that I'm concerned about, and Rebecca touched upon it briefly in her statement, was about the amount of time young people are spending on these platforms late into the night. So you've mentioned that you've got this 9:00 PM curfew, you call it that, but actually that isn't mandatory. So there is a danger that young people will ignore that. Have you got any evidence to suggest ... I mean, I think you said 98% didn't change that function, but have you got any evidence in terms of whether they could still then go back on it afterwards, presumably. So do you think there's more you could do to ensure that young people aren't potentially sitting up scrolling through TikTok, whatever it is, till two o'clock in the morning, and then the impact that will have on their education? So what's your thoughts on that?

Alistair Law (35:06):

So there's a couple of different things in there. There's the 9:00 PM notification cutoff. So at that point you won't get anything that buzzes via our app that says you've received a message or anything like that. Then there's the 10:00 PM sleep hours reminder, which yes, you're right, you can dismiss. It will then come back, I think it's either 10 or 20 minutes later as another takeover as well. It's not at the moment mandatory. I think this goes back to my point around we will be led by evidence. So it's something that we introduced, I think probably around about a year ago, saw a good deal of take up, and we're constantly working both with our partners, I mentioned the Boston Children's Hospital, but also we have our own safety advisory councils, including a global youth safety council to evaluate what more we can do.

(35:52)
What young people tell us primarily is that they want agency. They want the ability to set their experience, and that's why we have both these default settings, but also a whole range of additional tools that you can opt into. We've got a digital wellbeing center, which I haven't touched upon, but I can go into more detail later, where we create missions and people can earn badges on the basis of educating themselves about the features that they have, like the screen time cap and the ability to set limits. So there's a whole panoply of different ways that we're thinking about trying to come together, but of course, as more evidence comes out and as we learn more, we will do more.

Chris (36:29):

Last question, then I'll come to Rebecca. Just briefly, I mean, do you think part of the challenge ... We talk about the evidence, but the reality is social media has grown at such a quick ... When I was at school, was it MySpace? I mean, that died off. But do you think part of the issue is that these services, these social media platforms are being developed before the evidence of their impact has fully been understood. Do you think that's a potential challenge that young people face or users face?

Alistair Law (36:58):

So I think it's absolutely right that we exist in a very fast moving industry, but I think that in part, that's one of the reasons why we're continuing to invest in the features that I've outlined. Prior to working at TikTok, I did 10 years in the TV industry. And obviously, there are more channels now than ever and there are a greater amount of content on streaming, et cetera. From our perspective, doing things like an hour screen time cap as default, the sleep hours reminder, et cetera, probably goes beyond what we see in other areas, be it TV, gaming, messaging. And part of the reason that we're doing that is because we're conscious of the potential impact and we want to make sure that our users have both set features and then also tools available to them to guard against that.

Chris (37:42):

Thank you. I'm sorry, Laura, I've kept you waiting. I apologize, Laura. Just on the initial question, just about obviously the predecessor committee, so it was a while ago I asked that question.

Laura Higgins (37:51):

That's fine, that's fine. Thank you. So I mean, for us, one of the things is we're not a social media platform. So the primary engagement that happens in Roblox, it happens within the experiences and games themselves. So we have children who are either actively playing with their friends, they're building and creating. They're not really passively consuming feeds, algorithmic feeds. So that's one element. It really is about what people do, I think, when they're in these online spaces. But we absolutely agree, balance is everything. We really want to encourage this healthy online and offline and physical activity, as well as the creation and play that happens within Roblox. We have a suite of tools for parents, so they can set daily play limits, which then when they cut off, they don't come back on until the following day, as well as other ways that they can manage when and how their children are using Roblox.

(38:47)
We don't serve any push notifications to under 13 users. And we also worked with our Roblox Teen Council to get their input around the wellbeing tools that they wanted. And very much, as Allie mentioned, we heard about their desire for agency. So we introduced do not disturb mode so they can opt out if they just want some quiet time to themselves. They have online status controls and their own screen time insights. What's been quite interesting, hearing from the young people, is that when they get that little bit older and perhaps the parents are a little less involved, the young people are still aware that they might be playing a little bit longer from time to time, and they really find those screen time insights very helpful.

Chris (39:31):

Yeah, I mean, obviously, I appreciate you're a slightly different platform in terms of social media, but obviously you've still got direct messaging, so there are potential challenges there. What do you do specifically to protect young people when it comes to that sort of direct messaging and the potentially bullying messages they could get from other users? And also just on the point that I made to Alistair about screen time, the nature of gaming is addictive, lowercase A addictive. You want to carry on, you want to get to the next level. So there is still that danger, isn't there, about potentially playing till two o'clock in the morning. And this isn't a new phenomenon. I remember kids playing on Grand Theft Auto or whatever it might be, that's not a good example, because they shouldn't be, until two o'clock in the morning. So what are you doing to tackle that particular issue?

Laura Higgins (40:19):

So we'll talk about the communication piece first. So all communication is off by default for under nines. We actually just launched a new feature called Roblox Kids and Roblox Select, which is our new age-based account frameworks. I'll talk a little more about that in a moment, hopefully. But Roblox Kids is really for the under nines. It's a ring-fenced experience where they have games that are really specifically suited for their age group and no communication. In terms of direct messaging, communications are off by default, they are opt-in and we need parental consent to access those messages. So that's just one piece. In terms of the problematic use of any platform, of course, we don't want that to happen. We want people to be having a healthy and thriving time on the platform.

(41:12)
So for the younger ones, as I mentioned, we do encourage parents to be involved. We take the responsibility for safety; that's where it sits, with us, but parents also know what's right for their families. So that's why we try to give them that more granular control to manage what's right for each individual child. What we see on our platform tends not to be very long play sessions. It tends to be more weekends, it's kids getting together after school. We encourage and we work with expert organizations, for example, to create resources, to produce guidance, to provide wellbeing tools for the community and for parents, to try to prevent that from happening.

Moderator (42:01):

I'm going to go to Caroline in a moment, but can I just ask you very briefly: broadly within society, where there are behaviors and activities that are addictive, we say to adults, "Here are a bunch of regulations and interventions and advice and guidance that helps you to avoid getting into difficulty with that behavior." We think about gambling, we think about alcohol. But we say to children, "You can't go there at all." Why is this any different?

Laura Higgins (42:37):

So there is no evidence directly that says that games are addictive by nature. We know that there is still a lot of work going on in that space. Anything that is consumed excessively is harmful, so we would discourage that. We want to see this healthy balance of activity, in the same way that parents perhaps would get involved with their kids around what they watch on TV and how long they would be allowed to watch that for, or what books they might read. And some of it might be supervised, some might be unsupervised, but there's always a conversation that happens around, "It's time to go to bed now, please can we turn it off?" So we would really encourage those conversations to still be happening. We do appreciate that for some young people there can be problematic online use. So we appreciate the work that goes into supporting organizations to prevent that from happening and to provide support for those young people should they need it.

Moderator (43:40):

Either of you like to respond on that broad question of what ...

Rebecca Simpson (43:45):

I'm happy to, yeah. So we don't design Instagram or Facebook to be addictive, and independent research only recently in the US has shown that the vast majority of parents and teens using the platform find it to be a positive experience, for some of the reasons we were just discussing. But similar to Laura's answer, I think we absolutely recognize that there can be risks of people misusing the platform and poor behaviors, and where we do see that, we have defaulted teens into a much more restricted experience and given parents a stronger ability to intervene. That includes things like time, but importantly, it includes an ability to reset your teen's algorithm. If, for example, they're going down a rabbit hole of content that perhaps you as the parent might not think is healthy or suitable for them, you can completely reset that. So it's not the way our platforms are designed to operate, but that doesn't mean that we've waited and not had these tools built and made available.

Alistair Law (44:43):

Yeah, and I would share a couple of the views there. I don't think that there's been a clinical finding of addictiveness on this, but that doesn't mean that we don't recognize responsibility to drive a healthy use, and that's why we have ... I mentioned the variety of different things that we have in place as default. I think that we're the only major platform that has a screen time cap as default for under 16s of an hour, along with the other measures that I mentioned as well. So we're very cognizant of the potential for overconsumption and we put in place measures as appropriate on that.

Moderator (45:15):

I mean, I would just say, I think that there'll be parents, and indeed adults of all types, watching this session who find the claim that there is no evidence that this is addictive simply not a credible claim to make at all, based on their experience. But I'll leave it there and go to Caroline.

Caroline (45:40):

Thank you, chair. Good morning. So I'd like to move on now to talk about a ban and obviously a conversation that has been led by Australia going first, but we know that governments right across the world are now waking up to the dangers of social media and discussing how they're going to introduce a ban. Would you agree that the failure of companies like yourselves to protect children and young people adequately from addictive algorithms, violent content, sexual predators, and so on, has led to this worldwide push for a ban, and are you concerned that there could well be one coming in the UK as well? I'll start with you, Alistair?

Alistair Law (46:25):

Can I start? So look, I think we recognize the concerns that you've just talked about, and I think what's really important in this debate is that there are a number of concerns, all valid, that come together. Concerns around harmful content, concerns around level of time spent, concerns around impact on wellbeing. Those are the sorts of concerns that we and my trust and safety team who are all dedicated professionals, we've got people there who are clinical psychologists, people who are ex law enforcement, people who have worked for NGOs on human trafficking and things, are dedicated to trying to deliver on. We were a later platform than many of our competitors, only launching in the UK in 2018, and we designed our platform from the start with safety in mind, both as a way to deliver for our users, but also as a competitive advantage.

(47:12)
But I recognize the level of concern. Our response to it is to set robust guidelines and enforce against them, create an age appropriate experience, and I've spoken about some of the wellbeing elements that we have on there, but also from a safety perspective, there's additional content that can't be seen by under 16s, something like graphic fictitious violence, for example, that might be available to over 18s, but not to under 16s. I mentioned earlier direct messaging. That feature is completely turned off. You cannot access it if you're under the age of 16.

Caroline (47:42):

But you could access it if you were pretending to be over 16.

Alistair Law (47:46):

So similar to the answer that Rebecca gave earlier, we start from the perspective of if you're set at the app store as being 12, under 13, which is the age that you can join TikTok, the app won't even appear. Then if you try and sign up, we'll ask for your date of birth with a neutral age gate. And if you put something in under the age of 13, then we'll block you from reapplying. Obviously, people will try and circumvent that, but we too default people into an under 18 content experience until we have the level of confidence using signals on our platform that they are the age they have said that they are. So we recognize, as Rebecca said, that this is an industry-wide challenge in terms of accuracy, but we adopt a prudent approach to that.

(48:32)
In terms of your question about a ban, I think that the UK government consultation is a thoughtful and considered one that is asking a wide range of questions, and importantly of a wide range of services. Now the OSA regulates 150,000 services by, I think, Ofcom's own measures, and the three different buckets of concerns, concerns about harmful content, concerns about time spent, and concerns about wellbeing, are ones that are applicable to children's experience online as a whole. So the most important thing, we think, is that we've got a good model: setting rules and enforcing them, and an age appropriate experience. Can you find a way to bring other services into that level of model and have a level of collective learning about what age appropriate experiences look like? If you can't, then clearly for policymakers, for this committee, for government, a more robust option is possible, but one that I think needs to operate across the board if it's going to go to the heart of what it is that parents are concerned about.

Caroline (49:36):

Thank you. Rebecca.

Rebecca Simpson (49:38):

Yeah. So I mean, Facebook's one of the oldest of these apps around, been here for more than 20 years. Similar to what Alistair had just said, we've had safety features and policies built in from the very beginning and we work with hundreds of experts to design our products and policies, including here in the UK. I think the conversation at the moment just reflects that this is the evolution of the same ... The concerns around safety, which are perfectly valid and legitimate. What we hear most consistently from parents is screen time, what content can my teens see, and who can contact them. Our teen accounts experience, which we've had since 2024 ... 2024, yes, that's right, reflects those concerns, both defaulting people in, because I think we also recognize it's quite overwhelming for parents; the average teen has about 40 apps on their phone, everything has different settings, everything has different things.

(50:31)
So we have realized that it's actually much more helpful to default in and allow parents to opt out if they choose to do so. Those things can only be turned off for under 16s by the parent. But there are interesting things, as mentioned in the consultation, around what features and functions are right for younger people. I think that's an evolving conversation, and we're really interested in having that conversation, to look at whether things like Autoplay, infinite scroll, and other features should now also be looked at. We have some restrictions on that already, but we're interested in where that consultation might go as the conversation evolves around the right measures for online safety for younger users.

Caroline (51:11):

And do you think ban is the right word? I mean, it's an interesting word, isn't it? Because we don't allow children to go into nightclubs because they would be exposed to alcohol and other harms. We don't allow children to drive a car. We don't allow them to smoke. We don't call it a ban. We don't say 14 year olds are banned from going into nightclubs. We just say you have to be 18 to go to a nightclub. Do you think that given everything that we're seeing with children's problematic use of social media and the effect it's having on mental health and wellbeing, the fact that 93% of parents believe it's harmful to their children and over two thirds want to see it banned, do you think actually we're using the wrong language and we should just say that this isn't something that's safe for children and young people and that they should be over 16 before they're allowed exposure to these platforms?

Rebecca Simpson (52:07):

I think part of the reason why we don't think a ban is the way to go is partly because as you said, I think it's going to lead people to believe that it's impossible to access these apps. And as we're seeing in Australia, it's actually not really enforceable or effective as a measure. So I think the better conversation is around where are young people spending their time and what is the evidence of certain features and functionalities of those platforms that we may want to look at. I think a ban is misleading in that sense. I don't think it's a helpful language because we don't think it's going to be something that's actually possible in practice.

(52:43)
I do just want to go back to some of the things we said, which is that even DCIP, under the last Secretary of State, quite recently undertook a year-long study of the available evidence, and they also concluded that there is no strong, robust, concrete evidence of harm either way. It doesn't mean we shouldn't have a conversation about people's concerns, and I think you can see that we've made huge investments and huge strides around safety online, but I do think that what the government is trying to do around the evidence gathering for where is the best and most effective policy intervention is the right approach.

Caroline (53:17):

I think that evidence argument is on very shaky ground now. I mean, I think the evidence of most parents in the country who would say that their children are spending too much time online would suggest that there is a problem. Anyway, I'd just like to move on to the problems with the Australian ban, which you mentioned, Rebecca. So the eSafety Commissioner in Australia has criticized practices from platforms, including both ... Apologies, Laura, we'll come to Roblox in a second, platforms including both TikTok and Meta for repeatedly messaging children who are under 16 to try and encourage them to age assure themselves for the platform, using unreliable facial recognition software, letting children repeatedly try to age assure if they fail at the first attempt, and making it hard to report age restricted accounts. So I'll start with you, Alistair, briefly. How would you respond to these criticisms?

Alistair Law (54:14):

So I think we, as I say, have a multi-layered approach to age assurance that represents the investment that we've made into AI models and signals that we can use on our platform. And at the moment we use that for 13, which is our cutoff point, and for identifying whether or not people are the age that they say they are under 18 or otherwise. And from an Australian perspective, it's what we're using plus some additional aspects to allow them to appeal for under 16 and stopping use there. I think this goes back to the point which is a really important part that the consultation needs to establish and address, which is, what is the collective view from an age assurance perspective about how much confidence you want? You talked there about the idea of unreliable facial age estimation technology. That's an element that is at use under the OSA for pornographic sites here in the UK, and there are obvious challenges and trade offs that you begin to make between privacy and safety when you go down the route of using that as a sole way.

(55:24)
We have an approach that at the moment uses your activity on the platform to estimate whether or not you're the age that you say you are, and when you want access to riskier features, such as going live for the very first time, we have a more robust series of checks and we ask for an ID check or other facial age estimation. But it does show that there is a variety of different drawbacks and benefits to different versions of age verification. I think it's a critical element that the consultation will have to opine on, because whether you set a limit at 16 or you set different experiences under the age of 16, it does come back to

Alistair Law (56:00):

The level of confidence that you can have in people being the age that they are, that they say they are.

Caroline (56:04):

Would you say that TikTok is guilty of messaging children under 16 to try and get them to age assure, and to let them repeatedly try if they-

Alistair Law (56:12):

I think my understanding is that what that referred to was, as we were coming up to the period of the ban, we were encouraging people to verify at the higher level if they were over the age of 16. If people had lied and we hadn't caught that, then that results in them being messaged. But obviously then, the additional level is aimed at identifying that.

Rebecca Simpson (56:37):

Yes, similar. So as I mentioned previously, we use AI detection. So even if you've lied about your age, we try and detect that, and then we'll age-gate you into then proving your age. We use Yoti, as guided by the OSA, for facial recognition, but also document verification. We obviously try and take a proportionate approach to those higher levels of asking people for their ID, for obvious reasons. We also know many people don't have any formal ID, so we have to have a multi-layered system. And we're also working out... We are a founder of a project called the OpenAge Initiative, which is looking at having age verified once on your device. Because again, similar to what I said to you previously, we're trying to think about how we make this as easy as possible, for parents in particular. And we think that device-level and app-store-level age verification has a part to play, where people who run app stores know the bill payer of that phone, have access to information that we wouldn't have, and can then block at the app level.

(57:38)
Not to say that we would then stop doing what we do. That doesn't remove our responsibility. But there's a missing piece of the jigsaw there, which we think matters for any steps that the government takes as a result of the consultation. You're absolutely right: accurate age assurance, or improving it, is going to be really important. And it wasn't really in the Online Safety Act, and Ofcom have done a bit of work looking at it, but we think that is a really important part of this conversation, to absolutely address what you're raising.

Caroline (58:04):

Okay. Thank you. And Laura, I'd just like to turn briefly to Roblox, because I know that you haven't been covered by the ban in Australia. And there has been criticism that the Australian ban doesn't go far enough, and that the harms of sites such as Roblox are comparable to social media, even though I appreciate you say you're not a social media site and you don't share the same harms. I would challenge the idea that Roblox is not addictive from what I've heard firsthand from many friends and colleagues whose children are fairly addicted to it.

(58:39)
So, how would you respond to that? Do you expect that Roblox would be covered by a ban if it came in the UK?

Laura Higgins (58:47):

I think, again, the reason we weren't included in the Australia ban was because of the fundamentally different design and purpose of our platform, in the way that we are much more about the active play piece and not providing those social media services and features. One of the things, I think, that... Again, the different types of experience that young people have when they're on our platform. I think a blanket ban that captured us would really remove access to a lot of education and creative experience for a lot of young people, particularly here in the UK. We are also concerned about pushing young people to less regulated environments. We know, for example, from what's happened in Australia, that a lot of young people were using VPNs and still going into these less regulated spaces.

(59:37)
I think we are not pushing back on regulation. We do understand that there is a need for it, but we really want to make sure that the regulation follows all of the evidence around what features are appropriate and how young people are actually using these different platforms.

Caroline (59:54):

So if a ban were to be based on features and functionality rather than a platform, which is the way Australia chose to do it, then it could well include Roblox. For example, banning anything that would allow anyone under 16 to be direct messaged by a bad actor.

Laura Higgins (01:00:16):

We've already rolled out a huge number of safety and policy updates. 145, I think, in the last year and a half. Most recently, just last week, we announced Roblox Kids and Roblox Select. So this is our new, as I say, age-based framework for making sure that young people are having the best experience for their different ages. This follows on from the previous work we did around facial age estimation, which we launched in January, which really narrows down who children can talk to, putting them in buckets with children who are a similar age. They can talk with children just a little older and a little younger, but it would be in the same way that, for example, they would probably hang out with kids from a year above them in the playground at school. We're also now adding this age-gated access, so being much stricter about the types of games and experiences that they can access. And also, rolling out additional tools for parents to have more oversight up until age 16, and tools where they can help restrict or add communication to trusted friends.

(01:01:25)
So, for example, an adult being able to make contact with an unknown teenager would not be able to happen. Facial age estimation is mandatory: if somebody chooses to join the platform and chooses not to undergo facial age estimation, they will automatically be defaulted to our lowest settings, which allow no communication with anyone else on the platform.
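
The age-based defaults described here, mandatory facial age estimation, a fallback to the most restrictive settings when it is declined, and chat limited to users of a similar age, might be modelled roughly as follows. The bucket boundaries, the adjacency rule, and all names are hypothetical rather than Roblox's actual rules.

# Illustrative sketch only: one way the age-based defaults described in the
# testimony (mandatory facial age estimation, lowest settings when declined,
# chat limited to nearby age buckets) might be modelled. All boundaries,
# rules, and names here are hypothetical.

from dataclasses import dataclass

AGE_BUCKETS = [(0, 8), (9, 12), (13, 15), (16, 17), (18, 120)]  # hypothetical boundaries

def bucket_of(age: int) -> int:
    """Index of the age bucket a verified age falls into."""
    for i, (low, high) in enumerate(AGE_BUCKETS):
        if low <= age <= high:
            return i
    raise ValueError("age out of range")

@dataclass
class Account:
    age_verified: bool          # completed facial age estimation (or ID check)
    verified_age: int | None    # None when verification was declined

def communication_allowed(a: Account, b: Account) -> bool:
    """Unverified accounts default to no communication; verified accounts may
    only chat with users in the same or an adjacent age bucket."""
    if not (a.age_verified and b.age_verified):
        return False                                   # lowest settings: no communication
    return abs(bucket_of(a.verified_age) - bucket_of(b.verified_age)) <= 1

# Example: an unverified adult account cannot message a verified 13-year-old,
# while two teens in adjacent buckets can communicate.
print(communication_allowed(Account(False, None), Account(True, 13)))   # False
print(communication_allowed(Account(True, 14), Account(True, 16)))      # True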

Caroline (01:01:49):

Okay. And you're confident that that could never happen? That a bad actor couldn't contact somebody who is under 16 through your platform?

Laura Higgins (01:01:59):

One thing, I think, is important to acknowledge is that bad actors come in all ages. So, we have a range of other tools that are constantly running across the platform at the same time. We know that peer-to-peer abuse happens on all platforms, and so age rating... Sorry, facial age estimation wouldn't tackle that. We have other tools such as our AI tool Sentinel, which is a grooming-detection tool. It picks up contextual conversations, and we proactively report. We work very closely with law enforcement. If we detect any signals, we will escalate those to law enforcement, both here in the UK and through NCMEC. We're also members of Project Lantern. I believe my colleagues here, who are speaking, are also members. It's a signal-sharing project run by the Tech Coalition, where if we detect signals about particular bad actors, we are able to share information to prevent them from being on the platforms collectively. And that also helps us when it comes to things like people setting up alt accounts. We're able to track them and ban them across the platform.

Moderator (01:03:05):

And there was an expert who recently advised parents that they should be with their children at all times while they were on Roblox, because it isn't a safe platform for young people, in the opinion of that expert. How would you answer that?

Laura Higgins (01:03:21):

I would push back on that. Millions of people do have a really safe and healthy experience on Roblox all the time. For the younger users on the platform particularly, we do appreciate that on Roblox we have a younger audience than a lot of the other platforms, and we take that very seriously. But we also encourage, as I say, for the younger ones, this might be their first experience of going into an online space. So we provide the safety tools, and want... It is safe for young people, but every child is different, and we encourage parental involvement. This is not about us pushing responsibility back on parents. We want to work in partnership with parents so that they have the tools, they feel that they are still in charge of their child's experience. And as their child develops and grows more skills, builds more resilience in these spaces, then the parent can sit back a little bit further and let their child go and explore more.

(01:04:15)
But I think in those very early stages, I think it's a real positive for families to walk this together. Particularly, somewhere like Roblox, it's actually fun to sit down as a family and play together. It's a really good opportunity for those kind of conversations that, otherwise, you might have to sit down and have a conversation about online safety, whereas this actually offers a natural place to do that.

Moderator (01:04:39):

Thanks. I'm going to go to Chris next.

Chris (01:04:41):

Thank you. So at present, children can only consent to have their data processed by companies from the age of 13. And obviously, younger children need parental consent, but the government's consultation has proposed to raise this age of consent. So, would you support those changes?

(01:04:57)
Rebecca looked at me first, so I'll go to you first.

Rebecca Simpson (01:05:02):

Yeah, I think it's obviously a different thing to the ban; it is around the age at which we can process a person's data without their parental consent. I think sometimes people think it's a different way of getting to the same outcome. We obviously comply with the GDPR in this country. If they did raise the age of data-processing consent, it would obviously capture many, many businesses in the UK who are data processing under the GDPR, far beyond social media companies. So, we don't have a strong opinion about whether that's the right measure or not. I guess the point is some people conflate it slightly, as if it would result in a ban, and that's not my understanding of how that works.

Chris (01:05:45):

Yeah, I agree.

(01:05:45)
Alistair?

Alistair Law (01:05:46):

I really don't have much more to add to what Rebecca said. I think that pretty much represents our view, too.

Chris (01:05:52):

Laura, can you add anything or...

Laura Higgins (01:05:54):

So again, it applies slightly differently to us because we already don't process the data of young people, and we actually already have parental involvement and parental consent for most features up until age 16.

Chris (01:06:06):

Okay. It's me again, isn't it? So just on the government consultation then, it asks about restricting or banning the following features: direct messages, live-streaming, location sharing, and sending/receiving images and videos that contain nudity.

(01:06:23)
And you mentioned some of these things, but what restrictions do you already have on these features on your sites?

Alistair Law (01:06:33):

I'm happy to go first. So as I say, for the few that you called out there, you can't access them if you're under 16. Direct messages is a good example. They're not just defaulted off with the option to opt in; direct messages are not available to under-16s at all. Live-streaming is another good example. You can't go live until you're 18, and that's an area of our site where you then have to provide a greater level of age verification, via facial age estimation or digital ID proof, and so on.

(01:07:03)
I think that goes back to the point that I was making about trying to make sure that we're both understanding the evidence and designing our platform in a way that provides an age-appropriate experience, but makes sure that the most risky features are prevented from being accessible. So, the examples that you called out there would reflect on-

Chris (01:07:22):

Location settings as well, you can't do that if you want to?

Alistair Law (01:07:25):

Yeah. So on location settings, we've only recently introduced a kind of nearby element to what you see. But in terms of people uploading content, the content that you upload, if you're under the age of 16, is not available in other people's feeds. So again, you are set to private by default. And the only people who will be able to see what you're posting at all would be people who you have directly contacted and accepted as a friend. Even then, you're not able to direct message, because it's not available on our platform.

Chris (01:07:58):

Rebecca?

Rebecca Simpson (01:07:58):

Yeah, similar. So they're defaulted private, so no location. You're not discoverable by someone you don't know. You can't be contacted by somebody you don't know. You can't be tagged or mentioned in anything by somebody you don't know. Who you know is also visible to your parents and can be controlled. And if someone sends you a friend request or a follower request... that's all. No live-streaming. We've talked about the time restrictions. There's quite a long list. And for all of those, I think similar to what Ali said, under 16 you need parental approval to change any of those settings.

Chris (01:08:38):

Thank you. And Laura? Oh, Laura, I've got a specific question meant for you. Sorry, I've got a bit of paper out.

Laura Higgins (01:08:41):

Okay, sorry.

Chris (01:08:43):

So, 2 in 5 of your users... And you mentioned that your platform specifically does attract younger users, and I think you recognize the importance of the work you do on that. So, 2 in 5 of your users are under 13. How do you keep those very young children, specifically, safe?

Laura Higgins (01:09:04):

As I mentioned, I will just answer the previous question because it's a quick answer. We don't have any of those features on the platform. We don't have any image sharing. There's no encryption in chat. We filter and monitor everything. To this specific question around under 13s, so as I mentioned, we have now rolled out facial age estimation, which is mandatory for all users. This does mean that we're much more accurate in who is in which age group and what they can access on the platform. We're really bringing parents into the conversation and giving them visibility. So, we have synced parent accounts. They have their own dashboard where they can see what games their kids are playing. They can actually opt in or out of specific games.

(01:09:46)
So for example, if a child is age-rated into the under-nine age group, they have that much narrower access to the mildest and most minimal experiences and games on the platform. But if they have an older sibling and they want to be able to play with that older sibling, their parents are able to adjust that so that they have access to specific ones. Parents can see who their friends are, they can help manage their friends list, and they can control the communication as well. As well as that, as I say, by default we don't have a lot of those features that are on the more risky end. Yeah, I think those are the main things.
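
For readers who want a concrete picture of the permissions model being described, here is a minimal sketch of age-bucketed access with per-title parental overrides. The bucket names, ratings, function names and threshold are illustrative assumptions, not Roblox's actual implementation.

```python
# Illustrative sketch only: age-bucketed access with per-title parental
# overrides. Bucket names, ratings, and functions are invented.

AGE_BUCKETS = ["under9", "9to12", "13to15", "16plus"]

def default_allowed(title_rating: str, child_bucket: str) -> bool:
    """By default, a title is allowed if its rating bucket is at or below the child's bucket."""
    return AGE_BUCKETS.index(title_rating) <= AGE_BUCKETS.index(child_bucket)

def can_play(child_bucket: str, title_rating: str, parent_overrides: dict, title_id: str) -> bool:
    # A parent can explicitly allow or block a specific title;
    # otherwise the default age-bucket rule applies.
    if title_id in parent_overrides:
        return parent_overrides[title_id]
    return default_allowed(title_rating, child_bucket)

# Example: an under-9 child, with one older-rated game allowed by a parent.
overrides = {"older_sibling_game": True}
print(can_play("under9", "13to15", overrides, "older_sibling_game"))  # True
print(can_play("under9", "13to15", overrides, "some_other_game"))     # False
```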

Moderator (01:10:24):

Just briefly, Alistair Law: the OSIT report refers to children as young as five live-streaming. The measures that you've outlined (and it's the point where I started) take us so far, in a world where everybody is behaving as they should. But all of the measures that you've talked about are optional and can be turned off. We know that not every parent understands social media, not every parent knows what their children are accessing, not every parent is on this all of the time. The report of children as young as five live-streaming, and live-streaming harmful content in the context of being groomed, would imply that the safety measures that you've outlined aren't working.

Alistair Law (01:11:10):

So, just on the first point that you mentioned there, about these things not being mandatory and being able to be turned off: direct messages cannot be turned on if you're under 16. And as I say, for live-streaming, we go through the process of additional levels of age verification with facial age estimation or document ID proof. We're speaking to OSIT about some of the specific examples that they gave in there, some of which, by the way, were off-platform as opposed to directly happening on the platform itself. But it's absolutely a challenge that I think we're all alive to, which is whether or not the age verification elements that you have for those risky aspects of your site are sufficient, and it's one that we're putting a lot of work into.

Moderator (01:11:49):

Thank you. I'm going to bring in Caroline Johnson. We are at a point in the session where we're going to have to speed up our questions to get through all the topics we want to cover. So, if I can ask both members and our witnesses to be as brief as possible in your contributions from now on, that would be very helpful.

(01:12:04)
Caroline.

Caroline Johnson (01:12:05):

Thank you, Chair. We've talked a little bit about the potential harm, and it feels a little bit, to me, like that's not being properly acknowledged. I also work as a consultant pediatrician, and I still do clinics to maintain my registration whilst being a member of parliament. I frequently see children with headaches, with behavior changes, with tiredness and exhaustion, and they come to clinic with parents very worried that their child has something clinically seriously wrong with them. And then we find that they're clinically well, thankfully, but they're spending hours and hours and hours on social media. You'll be aware of Lord Darzi's report on the state of the health service, which was published at the end of 2024. The technical annex for that shows graphs relating very clearly, particularly in girls, the time spent on social media to mental health problems. So, I think there are significant issues there.

(01:13:02)
Do you have statistics within your companies on the amount of time children are spending on social media? Can you tell how long someone spent on the platform? Presumably, you can, because you can put in restrictions on that time. So presumably, you have statistics on how long children on average are spending, and what proportion of children are spending a long time per day on your platforms.

Alistair Law (01:13:23):

I'll go first. And is it-

Caroline Johnson (01:13:32):

And an answer from each of you on what figures you have. And if you don't have them to hand, provide what you can to the committee in writing afterwards.

Alistair Law (01:13:35):

I'm happy to go away and see what I can provide to the committee in writing. I think the thing that I would most highlight is that we don't want a situation where there is overconsumption. That isn't the right thing to do, but it also doesn't serve us from a commercial perspective, because actually, as you say, overconsumption risks leading to burnout and a situation where people are not actually enjoying using the app. Our business objective is to create a healthy and sustainable relationship with users. Again, it's not dissimilar to the experience that I had in the TV industry for 10 years, where you want people to return to your channel but you don't want excessive levels of consumption. So, that's why we've designed the experience in the way that we have: we've got default screen time caps, take-a-break reminders, sleep-hour reminders, notification cutoffs, as well as Family Pairing parental controls that we can look at.

(01:14:28)
So we're alive to the risk, as you say, and we are focused on working with partners and our dedicated trust and safety team to create an environment that-

Moderator (01:14:40):

I'm really sorry, we need to be brief. You had a really specific question from Caroline Johnson, which was, how long are children using social media on your site? Do you have the data, and can you provide it to the committee? Please, could you answer the question?

Alistair Law (01:14:52):

So as I say, I will go away and see what I can share with the committee.

Moderator (01:14:55):

Thank you.

Caroline Johnson (01:14:56):

Laura?

Laura Higgins (01:15:00):

Yes, I will go back and we will confirm in writing for you.

Caroline Johnson (01:15:03):

Thank you. And Rebecca? One specific question for Rebecca. There's been talk about work with law enforcement. I had a horrific constituency case where a child was being bullied at a really, really serious level, and the family moved a long distance across the country to escape that. The bully used Instagram, and his knowledge of the child's interests and hobbies, to find this child at the other end of the country, and then used a profile on Instagram to bully and threaten, and threaten to burn down the family's house. And the police had very great difficulty in finding out from Instagram who was the owner of that profile.

(01:15:47)
So perhaps things have changed in recent times, but if the police and law enforcement came to you and said a serious crime is being committed using a profile on your platform, how long would it take you to provide that police force with the data on which IP address, and which person, is using that account?

Rebecca Simpson (01:16:02):

It should be very, very quick. I'm very sorry to hear about that incident, and I'd be happy to look at it if it's an ongoing problem, though hopefully it may have been a while ago. We have a dedicated law enforcement team. And also, since the coming into force of the illegal harms duties under the Online Safety Act, we have a dedicated illegal harms reporting channel, and we do have that for law enforcement who are onboarded. It should be very quick. It sounds like it wasn't in that instance, and I'm very sorry, because that sounds like a really awful case.

(01:16:31)
But if it's something that's ongoing, I'd be happy to talk to you about it as well. But it should be instant. We do aim for it to be as quick as possible.

Caroline Johnson (01:16:41):

Thank you.

Moderator (01:16:41):

Thank you. Manuela?

Manuela (01:16:43):

Thank you, Chair. Ofcom's data shows 23% of under-13s in the UK have a TikTok account, 14% an Instagram profile, and 19% a Facebook profile, despite being under the minimum age restriction. Your age assurance or verification measures are not working, are they?

Rebecca Simpson (01:17:09):

As we've said, there is a real problem with age assurance. We take a huge range of steps to try and make it as accurate as possible. As I mentioned in my response to your colleague, we think greater involvement from app stores, who are linked to the bill payer of that device, would be really helpful, because we can take that signal and use it through our systems. I do know that in the Ofcom report you're referring to, that data is reported by parents, which suggests that parents may be helping their children get online or are aware of it. We also saw some evidence recently that, if there were a social media ban, half of parents would put their children back onto it.

(01:17:49)
I think it's a collective challenge. We absolutely have a responsibility. The OSA includes requirements around highly effective age assurance. There is a limit to that technology at the moment, which is I think what you're calling out in your question.

Manuela (01:18:04):

Alistair, we have heard that children as young as five use your live-streaming facilities. We also know from the OSIT report that these livestream files end up shared by sex offenders. What are you doing about it?

Alistair Law (01:18:24):

As I said on that specific one, we've met, and are continuing to talk to, the police force about some of those examples, some of which actually took place off-platform. To answer your question on under-13s, we report on a quarterly basis the number of under-13 accounts that we remove, and I think that figure is published in our transparency report every quarter. We recognize it as a thing that people will try and circumvent. The collection of measures that we have to take against it is one thing. The other thing, though, is to return to the point that I made earlier. Even if people are trying to circumvent, we will use AI models and AI signals to identify how old they are, but we are focused in the first instance on creating a safe environment for everyone. And that means, as I say, that when you sign up to TikTok, you're entered into an under-18 content experience regardless of the age that you say that you are.

(01:19:17)
We will obviously then look to work quickly and swiftly to identify whether or not somebody has been trying to circumvent our approaches, and we report on that on a quarterly basis.

Manuela (01:19:29):

Laura, we also heard about some serious concerns regarding Roblox. There have been accusations that it is possible for people to create games depicting mass shootings such as Sandy Hook and Columbine, and even recreating Epstein's island. What are you doing about that?

Laura Higgins (01:19:53):

Again, we have no space for those experiences and games on our platform. They're strictly prohibited. We use technology to scan for game names, so they should be flagged up by our filters. In terms of the content within the experiences themselves, before a game can be published, it goes through some automated moderation processes where we scan for things like audio files and video files within the experience itself. What we've actually found with a lot of these experiences is that they did not originally start out with that intent. They were not designed and published on the platform as those experiences, but they have then been modified later and made to recreate some of these scenarios. But we absolutely have no space for this. We are constantly monitoring and taking down these accounts.

(01:20:47)
They're strictly prohibited, as I say. So, it is very unfortunate that any of them have ever appeared. We're working on new technologies all the time to detect and prevent them from being uploaded in the first instance.

Manuela (01:21:00):

But a game developer has said that, actually, 30% of games flagged up for concerns get accepted. How do you respond to this serious, serious-

Laura Higgins (01:21:12):

I'm afraid-

(01:21:13)
I mean, that is a very serious allegation. I can't comment on that, because we certainly haven't seen those statistics ourselves. We believe that our systems are robust, both in terms of checking content before it is uploaded and, on the rare occasion when there is something really bad, in being very swift in taking it down.

Moderator (01:21:34):

Thank you. Peter.

Peter (01:21:36):

Thank you. The way that Instagram, TikTok and Facebook work is that users upload content, and we've talked a lot about that content today and about the various ways people use your apps. But broadly speaking, other users are then fed that content through their For You page or through their Instagram or Facebook feeds. And what they are fed is driven by an algorithm. And we know that algorithms are designed to promote the content that will drive the most engagement, and that often includes negative engagement. Speaking to young people in my constituency, young people themselves are aware of this. They're aware that the content that they are seeing often has a very negative effect on their mental health. They have told me that they have often felt ashamed at some of the content that they have been fed. They have felt unable to talk to a trusted adult about what they have seen. And they are concerned that the addictive nature of the algorithm has left them spending far longer on social media than perhaps they intended when they loaded up the app to just check it quickly after a long day at school, or whatever the case may be.

(01:23:07)
I am certain that your companies have both done auditing of your platforms, auditing of the effects of your algorithms on young people, on compulsive use, on displacement of sleep, on emotional distress. Will you share the results of those audits with this committee, Rebecca?

Rebecca Simpson (01:23:31):

Just before I answer your direct question there, our algorithm is not designed to promote the virality of harmful content. It's quite the opposite. It's designed... Everything you get on Instagram and Facebook is personalized to you, mostly your friends and family, and your interests, and the things you follow. It does work in a different way.

Peter (01:23:48):

Rebecca, I'm going to stop you there. I am a Labour MP. I am, broadly speaking, on the center left of politics. Whenever I go into my Facebook feed, all I am served, endlessly, is far-right content. That is not designed for me. It's designed to get my engagement. It is not designed for me. Now, obviously this is the experience of a 33-year-old man, not a child, but I do not accept that your algorithm serves content that is designed for people.

Caroline (01:24:19):

Can I also just contribute to that? You say it's mostly friends and family. When I go on Facebook, which I don't do very often anymore, but if I go on my personal Facebook page, all I get is adverts. Adverts, adverts, adverts, adverts, designed for a middle-aged woman: beauty products, health products, menopause stuff. It's relentless, and you barely see anything from somebody you actually know anymore. It's just not there.

Rebecca Simpson (01:24:44):

To go back to your question, we do publish transparency reports about how successful we are at finding and removing content, which is all algorithmically driven. It's not just the recommender system, which was the point that you two were making, but also the safety features. To answer your question directly, we would be happy to share with you information about how the algorithm works and the kind of results that it's achieving.

Peter (01:25:05):

And Alistair Law?

Alistair Law (01:25:06):

Yeah. So again, I think it starts with the content moderation element. We're clear under our community guidelines that hate and harassment are prohibited, and that hateful ideology is prohibited as well. So the starting point should be, and I'm happy to pick up with you on any examples, that the kind of extremist content that you're talking about shouldn't be present. When it then comes to the algorithm itself, TikTok is actually a place for discovery. One of the things that you see commonly if you're on the For You feed is that you will go through popular videos, but you'll also go through videos that have had zero views and very few likes. And the idea is that we are comparatively content agnostic. When you first join TikTok, you're served a series of the most popular videos. And depending on how you engage with that, if you like it, if you watch it, if you share it, we will essentially ask: which other users have exhibited a similar liking for videos as you have?

(01:26:00)
They're clustered over here; here are a series of videos that they also like, and we'll then present those to you. So it's content agnostic. Two final points, if I may. In terms of what we then do to try and, I suppose, disperse people's activities, we recognize that we don't want people to go down any kind of rabbit hole. So one of the things that we additionally do is insert different kinds of content in there, content that you might not have expressed an interest in but that might surprise you or that you might discover. And the final thing that we also do is allow you to reset your algorithm at any given point. I think it's two, maybe three clicks to be able to wipe the slate clean as well. Our algorithm is designed to provide an enjoyable experience, to provide an experience that gives you content that you value and that can surprise and excite you. And that's how we set the platform up.
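
As a rough illustration of the mechanism described in this exchange, a cluster-style recommender with diversity injection and a user-triggered reset could be sketched as follows. The class, the 20% diversity share and all names are assumptions made for illustration; this is not TikTok's actual system.

```python
# Illustrative sketch only: recommend videos liked by users with overlapping
# tastes, reserve some slots for unrelated content, and allow a full reset.
import random
from collections import defaultdict

class SimpleRecommender:
    def __init__(self, diversity_share=0.2):
        self.likes = defaultdict(set)        # user_id -> set of liked video_ids
        self.diversity_share = diversity_share

    def record_like(self, user_id, video_id):
        self.likes[user_id].add(video_id)

    def reset(self, user_id):
        # "Wipe the slate clean": forget everything used to personalize the feed.
        self.likes.pop(user_id, None)

    def recommend(self, user_id, catalog, k=10):
        seen = self.likes[user_id]
        # Find users whose liked videos overlap with this user's.
        neighbours = [u for u, vids in self.likes.items() if u != user_id and vids & seen]
        # Count how often those neighbours liked each video this user hasn't seen.
        scores = defaultdict(int)
        for u in neighbours:
            for v in self.likes[u] - seen:
                scores[v] += 1
        ranked = sorted(scores, key=scores.get, reverse=True)
        # Reserve a share of slots for unrelated content ("dispersal").
        n_diverse = int(k * self.diversity_share)
        picks = ranked[:k - n_diverse]
        pool = [v for v in catalog if v not in seen and v not in picks]
        picks += random.sample(pool, min(n_diverse, len(pool)))
        return picks

# Example usage with invented IDs.
rec = SimpleRecommender()
rec.record_like("alice", "v1")
rec.record_like("bob", "v1")
rec.record_like("bob", "v2")
print(rec.recommend("alice", catalog=["v1", "v2", "v3"], k=2))  # ['v2']
```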

Peter (01:26:52):

Rebecca, how does the Facebook algorithm rate an angry react or a laughing react compared to a like react, in the way that it then feeds into the algorithm that curates the feed?

Rebecca Simpson (01:27:07):

We take into account a number of signals to work that out. Obviously, first and foremost, anything that violates our community standards is removed before it is-

Peter (01:27:16):

Well, just to be clear, because I don't want to go down this track again, I'm not talking about content that violates. I'm talking about content that falls well within your guidelines, but is nevertheless curating a negative experience for young people.

Rebecca Simpson (01:27:32):

Where we get something like an angry reaction, or people dismiss something, or they report it, anything like that that might suggest it is not a high-quality, helpful, pleasant piece of content, the algorithm is designed to downrank that. It also does that if it's been fact-checked or screened for some reason. Multiple signals, from the multiple different ways that both we and our users interact with that content, mean that it will be downranked. When we downrank something, it means it will be seen by vastly fewer people.
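
To make that signal-weighting idea concrete, here is a minimal sketch of how negative feedback signals could be folded into a single multiplier that shrinks a post's distribution. The signal names and weights are invented for illustration and are not Meta's actual scoring.

```python
# Illustrative sketch only: combining negative feedback signals into a
# distribution multiplier. Signal names and weights are invented.

DOWNRANK_WEIGHTS = {
    "angry_reaction": 0.05,   # each angry react trims distribution slightly
    "dismissed": 0.10,
    "reported": 0.30,
    "fact_check_flag": 0.50,  # a fact-check flag cuts reach sharply
}

def distribution_multiplier(signals: dict) -> float:
    """Return a 0..1 factor applied to a post's baseline reach."""
    multiplier = 1.0
    for signal, count in signals.items():
        weight = DOWNRANK_WEIGHTS.get(signal, 0.0)
        # Each occurrence shrinks the remaining reach multiplicatively.
        multiplier *= (1.0 - weight) ** count
    return multiplier

# Example: 3 angry reacts and 1 report leave roughly 60% of baseline reach.
print(distribution_multiplier({"angry_reaction": 3, "reported": 1}))
```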

Peter (01:28:01):

So if someone puts a negative emoji reaction onto a piece of content, that is scored against it in terms of engagement.

Rebecca Simpson (01:28:10):

Correct.

Peter (01:28:10):

Okay. That's helpful to know. The government's consultation has asked respondents whether personalized algorithms, these algorithms that are designed to target content specifically at users, in this case young users, should be age restricted. Would you support that move?

Rebecca Simpson (01:28:28):

We think the personalized algorithm is really one of the reasons people come to our platforms: they want to see the content that they want to see, their friends, their families, the things they follow. The personalized algorithm is also a really important part of how we keep people safe, for example, by knowing who you are, how old you are and other features about you, so that we make sure, as much as possible, you don't see harmful content. But like Ali has said, we also allow both parents and anyone else to completely reset the algorithm, to reset the parameters on which we may be targeting content at you if it's not what you want. So we don't think it's the most fruitful area of discussion in the current consultation. We think there are more interesting areas, some of the features that we've been talking about in this conversation today.

Peter (01:29:09):

Alistair Law?

Alistair Law (01:29:11):

Yeah, I think a personalized algorithm has huge levels of benefit in showing you content that is relevant to you. And, at a slightly removed point, it's also the way that lots of online services operate. If you're a subscriber to a newspaper or magazine or anything like that, you are still shown content that is personalized to your preferences. I think our responsibility is primarily to keep people safe with our content moderation, and then to make sure that there are appropriate dispersals, so that a level of personalization isn't getting you locked into particular types of content, and that there are still dispersal techniques and still injections of new types of content that will challenge you and surprise you.

Peter (01:29:56):

You make money out of having a personalized algorithm.

Alistair Law (01:29:59):

So we don't serve personalized targeted ads to any of our younger users. Obviously, as you get older and you're over the age of 18, they will be personalized. And you can't raise revenue as a TikTok seller if you're under 18. So we make a very negligible amount of money from under-16s.

Peter (01:30:21):

Last question on this point, very brief answers appreciated. You've both referred to the way that your algorithms support your age verification process. What level of quality assurance have you done on that? As in how accurate is that process?

Rebecca Simpson (01:30:39):

Age assurance process specifically, you mean?

Peter (01:30:41):

Specifically the process of looking at what users are engaging with to check that they are the age that they say that they are.

Rebecca Simpson (01:30:48):

Yeah. It's part of how we look at this. So as we've been saying, we also do Yoti facial-

Peter (01:30:54):

Sure, but how accurate is that?

Rebecca Simpson (01:30:57):

Over and under 18, telling the difference between that, it's very accurate. If you're trying to tell a 13-year-old from a 14-year-old, the accuracy does drop, because that is just more difficult to do. It's an ongoing thing we're working on. And as we say, it's one of the reasons we think that verifying at device and app store level would be an incredibly helpful addition to the work that we're doing on age assurance.

Peter (01:31:17):

Okay. So it's not accurate enough. And Alistair Law?

Alistair Law (01:31:21):

So I share what Rebecca said: in terms of the differential between over 18 and under 18, accuracy is high. We obviously share information with Ofcom under the Online Safety Act, and it's an area that they're looking at closely, the efficacy of these techniques. From an under-13 perspective, we have a specific dedicated AI model, the under-13 model, which we launched in the UK first, before any other jurisdiction, and which is designed to try and pinpoint signals that might give that differentiation. It's an area we continue to invest in and it's an area we're talking to Ofcom about.

Peter (01:31:56):

Again, I note that you're not putting a specific number on it. Thank you.

Moderator (01:32:03):

Thank you. We'll go to Caroline Voaden. We're really, really running out of time. Okay.

Caroline (01:32:10):

Could you tell the committee whether any of your engineering or product teams are rewarded, either financially or professionally, for increasing time spent or session length among users, including children? Start with Rebecca.

Rebecca Simpson (01:32:27):

Yeah, that's not how our engineering teams are rewarded. In fact, one of the principles that we have guiding the work that we do at the company is to ensure that our users have a safe and enjoyable experience. It's not measured by time spent.

Caroline (01:32:39):

There's no incentive to increase the amount of time someone spends on Instagram.

Rebecca Simpson (01:32:44):

No, and we reset our algorithm entirely a few years ago to prioritize time well spent rather than just time as a metric, because we recognize that that's not actually the right way to look at how people are using our platform.

Alistair Law (01:32:57):

I think, again, it's back to my answer to Chris earlier, which is that we look for user growth as a whole, and that is not based on individual sessions' worth of time spent. It's based on the overall experience that people have. There's a good thing that we have at TikTok, which is that you can look at anybody's key objectives, right up to and including the CEO, and you can see what they're being benchmarked against. The first one that he has is safety, that we as a company become the most trusted and safe platform that there is, and that's really important guidance for all of our work.

Caroline (01:33:26):

Okay. So, looking at addiction. Rebecca, Meta was recently found, in a Los Angeles court, to have deliberately designed addictive products that harmed a young user. And we know she's not alone, because there are thousands of other court cases now in the pipeline. There is research showing that reward schedules are especially potent for young people, because they don't have the cognitive ability to resist the dopamine hits. Now, all three of your apps contain features like infinite scrolling, autoplay and algorithms that reinforce addiction, and Roblox, you are adding a new function with a reward scheme into it. So could you just tell us, very briefly and very succinctly, what exactly each of you is doing right now to reduce the addictive nature of your platforms? I'll start with you.

Alistair Law (01:34:13):

So I don't think that we accept the premise that there is an inherent addictiveness, but the measures that we're taking are the measures that I've set out already. Something like the screen time cap, which exists as a default of an hour for anybody under the age of 16, acts as a way to try and ground people in a particular experience. Yes, it can be varied, and yes, there are other kinds of experiences, but that is there. Something like sleep hours is something that we introduced last year. We've introduced updates to Family Pairing. It's a constant level of ongoing review of the evidence, working with partners, and making sure that we understand the experiences of people so that we can build the tools that give them agency and a balanced environment.

Caroline (01:34:54):

You settled that court case, so you didn't actually go into the courtroom. You settled out of court.

Alistair Law (01:34:59):

Yeah. So it was a US litigation process. My understanding is that it's going to appeal as well. I think that there is kind of more to come in that area, but it wasn't a conclusion on TikTok as a result of that. And we will continue delivering on our responsibility, which is to make sure that users have a safe and balanced and healthy experience on our app.

Caroline (01:35:19):

What are you doing to reduce addiction?

Rebecca Simpson (01:35:21):

So, just to answer that very quickly: we obviously are appealing that court case, and we also don't accept the premise that our platforms are addictive, so I can't talk about that too much. I mentioned the algorithm reset we did, which led to 50 million fewer hours spent online. Parents can set a 15-minutes-a-day limit for total time on our apps. We have the interruptions we've talked about. We are not intending for our platforms to be overconsumed by anyone, and we've introduced a whole range of ways to try and prevent that.

Caroline (01:35:50):

Laura.

Laura Higgins (01:35:52):

So we don't actually build those algorithmic systems that maximize time on platform. We do not have infinite scroll. We do not have autoplay. We don't have follow accounts, et cetera, that I know are part of this debate. For us, the main motivation of the platform and the people who create on the platform is to have fun and build fun experiences where people come and just play together. So again, we're going to continue helping the parents to have more autonomy around what's right for the youngest children on the platform and ensuring that wellbeing is really at the heart of everything within Roblox.

Moderator (01:36:28):

Manuela.

Manuela (01:36:29):

Thank you, Chair. The European Commission's preliminary findings in October last year said, and this is for you, Rebecca, that Meta does not have sufficient reporting mechanisms for children and parents to report illegal content. So, for example, and we have heard about the sex offenders, the suicide fora, the violent pornography, violent misogyny and so on. So what are you doing about it? How are you tackling these concerns?

Rebecca Simpson (01:37:06):

So absolutely everything on any of our platforms can be reported to us. There's three dots at the top. It's pretty straightforward to do. We have also built a dedicated illegal harms reporting channel for our platforms following the introduction of the OSA.

Manuela (01:37:24):

I have tried to report illegal content, and I don't understand what happens after you report it. What happens after you report the illegal content?

Rebecca Simpson (01:37:33):

What happens depends a little bit on what the content is. It will either go direct to dedicated law enforcement specialists, or it can go to specialist teams looking at things like, as you mentioned, child sexual abuse material, all that kind of thing. It could go through a human reviewer working generally across our platform. It can also go through our automated systems. It does depend a little bit on what it is that's being reported.

Manuela (01:37:57):

Thank you. And in November 2025, the NSPCC recommended that tech companies should take steps such as using metadata to identify suspicious behavior and restricting the ability of adults to search for and communicate with child accounts. Are you actively, and I want to hear from Laura as well, are you actively implementing such processes and what is the timeframe for completing implementation?

Rebecca Simpson (01:38:27):

So we do absolutely use metadata, as you say, to pick up where someone might be acting suspiciously. So if you're an adult account and perhaps you're sending lots of follow requests or message requests to younger users, that kind of behavior will get detected and can be looked at. And then, since 2024, with teen accounts, as I said, younger people between 13 and 18 are defaulted in: they're not discoverable by anyone, they're not contactable by anyone that they don't follow, and who they follow is approved and seen by their parents.
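
As a simplified illustration of the metadata-based detection being discussed, a rule like the one below flags adult accounts whose recent contact requests skew heavily toward minors. The threshold, field names and function are assumptions for illustration, not any platform's actual logic.

```python
# Illustrative sketch only: flag an adult account sending an unusual volume
# of contact requests to minors. Threshold and field names are invented.

def looks_suspicious(account, recent_requests, minor_request_threshold=10):
    """Flag adult accounts whose recent friend/message requests skew heavily toward minors."""
    if account["age"] < 18:
        return False
    to_minors = [r for r in recent_requests if r["target_age"] < 18]
    return len(to_minors) >= minor_request_threshold

# Example: an adult account that sent 12 of its recent requests to under-18s
# would be flagged for review.
adult = {"age": 34}
requests = [{"target_age": 14}] * 12 + [{"target_age": 25}] * 3
print(looks_suspicious(adult, requests))  # True
```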

Alistair Law (01:38:56):

Very similar, I suppose, with the added element of direct messaging not being available to under-16s. And then, as Laura mentioned earlier, we've increased the amount that we're working together as an industry to make sure that signals are being shared between platforms, so that we can align where there might be cross-platform risk. And this is another area where working with law enforcement is absolutely critical as well, making sure that we have information from them as to things from an outside, off-platform perspective that might be relevant to an assessment that we do on-platform.

Manuela (01:39:28):

And Laura?

Laura Higgins (01:39:34):

Yes. So it's not possible for adults to communicate with children on the platform, due to our facial age estimation, which buckets people into similar age groups. The only way that an adult can talk with a child is if they become trusted friends, and that is by parental consent only. So that's the first safeguard that we have in that space.

Moderator (01:39:53):

I'm really sorry to interrupt. You say it's not possible. There is evidence that it happens. How do you reconcile those two statements?

Laura Higgins (01:40:02):

Yeah. So I mean, any incident where a child has been harmed in any way because of contact through Roblox is absolutely awful, and we really are truly, truly sorry that anything like that has happened. We rolled out facial age estimation in January this year, so we're quite confident now that we are preventing that contact between adults and children on our platform. We are also really focused on preventing off-platforming, which, as Alistair mentioned, can sometimes be where harm actually happens. So we actually have a PII classifier, which detects and prevents the sharing of personal information, to try to prevent children being taken from Roblox into other spaces as well. As Alistair said, we work very closely with law enforcement and with our partners through [inaudible 01:40:52], and we continue to use our AI tooling, such as Sentinel, to detect any grooming-type language or behavior on the platform, which we then proactively report to law enforcement.

Moderator (01:41:05):

Thank you.

Peter (01:41:10):

As many members of parliament have done, I've been speaking to constituents about social media for under-16s as part of the government consultation into this. And I was contacted by a number of constituents who had concerns around bullying on social media, including Amy, who works as a social worker. She says that she has worked with children and families where they've been contacted and groomed via social media platforms. She says, "I don't think parents understand the risks of social media enough." Social media gives children access to each other 24 hours a day. Children have no safe place anymore. We know that bullying takes place in the real world, offline, but it has also been exacerbated by social media, by online spaces. So my direct question is, when bullying does occur on your platforms, how does it manifest? What features are most typically used to drive that bullying behavior? Rebecca Simpson, if we can start with you.

Rebecca Simpson (01:42:21):

Yeah. So it's probably easier to answer by saying we have built the features in response to where we see it manifest. So that can be, for example, being tagged, or having comments around you where someone might be posting something about you to try and bully you, or they might be commenting on your posts to try and bully you. And that's where we've put in a whole range of features where, if you unfollow someone, or you block and report someone, they will not be able to do that. We also-

Peter (01:42:47):

So just on that point, according to Childline, if I block someone on Instagram, they can still find my profile and therefore the advice is that I change my username. Is this the case? And if it is the case, why is the advice that the victim should change something about the way that they're responding on your platform?

Rebecca Simpson (01:43:07):

I don't know when that Childline advice was from, but that is no longer the case. Since 2024, with teen accounts, as I've been saying, those accounts are private by default, forgive me, can't be discovered, can't be contacted by anyone that doesn't follow them. The act of not following someone, not being connected to them, means they will not be able to do any of the things I was just talking about. The other thing is, we default everyone on teen accounts into our strongest protection, Hidden Words. So we have a whole range of terminology that tends to be associated with bullying comments, and where the systems detect that, it is immediately moved into a different inbox. The victim of those comments never has to see them. They can either mass-delete them or mass-report them, because, as you're rightly pointing out, asking the victim to deal with some of that is clearly not always the right approach.

(01:43:52)
So Hidden Words is on for all under-18 accounts to pick up what we think are bullying remarks. And as I say, they're moved away, and the victim can deal with them en masse rather than having to experience those comments.

Peter (01:44:07):

And when bullying is detected either automatically, as you say, or because it's reported by the victim, what's the typical response time and what feedback do they receive?

Rebecca Simpson (01:44:17):

So the vast majority of bullying, where we detect it, is found and removed before it's seen by anyone. It's under 1% of bullying comments online, in our recent community standards report, which is publicly available. We're very successful in finding it. When it's reported to us, as you said, it will go to either an automated review or a human reviewer, and usually the response time is extremely quick.

Peter (01:44:43):

Yes. So obviously we're talking here about bullying of children. I'm a member of parliament. As you can imagine, I sometimes have people on social media say very unkind things about me, and that is of course their democratic right to do. There are things that take place on social media that are deeply unpleasant but are not illegal, and might not necessarily be sufficiently severe to require you to flag them as breaking protocols. I would put it to you that, in a situation involving children, those protocols should be much stricter. Are your protocols stricter? If a young person receives an abusive comment on your platform from another young person, what action is taken against that bully?

Rebecca Simpson (01:45:35):

As I say, they are stricter, because I completely agree with what you're saying. And as you said, these measures go beyond the law, because we're not necessarily talking about illegal harm, as you've also recognized. So the default settings are all stricter, and that is the answer I've just given you. Where we find someone who is bullying, the actions will vary. It can be up to and including removing that person's account. It can be about removing posts. We do sometimes, when we're dealing with younger people, try and give them the opportunity to learn, so you might not automatically go to removal of their account; we can do things where, if we detect you're about to say something that we believe could be bullying, we give you a nudge to make you think about it, to try and encourage better behavior. But there's a range of sanctions we would take against either the content or the individual posting it, depending on what's happening.

Peter (01:46:22):

When these incidents happen, how do you inform the parents of both sides?

Rebecca Simpson (01:46:27):

So with teen accounts, which is relatively recent, the last couple of years, they can see everything that's happening on their teen's account. We also give them notifications if their teens are looking for, at the moment, it's mostly suicide and self-injury content, but we're looking to expand that as well, which could include-

Peter (01:46:45):

"We're looking to expand" to letting parents know that their children have been involved, either as the victim or the perpetrator of bullying. So you don't currently actively inform parents.

Rebecca Simpson (01:46:55):

We do, in the sense that if you have a parent-managed account, you can see everything that your child is doing. What I meant in my answer was-

Peter (01:47:01):

That's not actively informing them.

Rebecca Simpson (01:47:03):

Yeah. So that was what I was about to say. So if you mean an active notification at the moment that is just suicide and self-harm, but we're looking to expand that.

Peter (01:47:11):

We're very short on time. So Alistair, if I can briefly come to you.

Alistair Law (01:47:15):

So very quickly then, there are a lot of similarities. If you're under 16, you're private by default, and your content won't appear in other people's For You feeds. As I've been mentioning, direct messages are not available to you, so the idea of people directly contacting you and bullying you by direct message is simply not applicable. Our community guidelines prohibit bullying and harassment. To your point, they do apply a different standard to young people and the general population than they do to democratic discourse with publicly elected officials. We also have Family Pairing, which is our parental control tool, and with that, you can see where people have requested, or friended, or then been blocked. If you have an under-16 who has blocked someone, that will be available in Family Pairing as well, and we're continually looking at additional features in that area.

Peter (01:48:06):

Very briefly, Laura.

Laura Higgins (01:48:09):

So we have chat filtering for in-game chats, so that will prevent any kind of harmful language. It doesn't necessarily have to be swearing; even just mean language will be detected. We are currently rolling out a rephrasing project where, if somebody types something that may be a little bit unkind, it will reword it using AI into more appropriate language. We work with anti-bullying organizations globally to create resources to support parents, either of a child who is bullying somebody else or of a child who is a victim of bullying, in how they can support their child, as well as what we do on the platform. So again, there are a number of sanctions, depending on the seriousness of the incident, and reports to us will be escalated through our internal channels. On our community standards, this year we actually worked with our teen council to create youth-friendly versions of the community standards, just to make sure that they're out and available and as clear as possible about what's allowed on the platform.

Moderator (01:49:12):

Thank you all very much. I'm going to have to draw it to an end here. There were one or two topics that we didn't get to ask you about because of time constraints, which we might write to you about to follow up after the session, if that's okay. But thank you all very much for being with us this morning and for giving us your evidence. Order, order, we'll just suspend the sitting while we change to our...
