
Report of a meeting, held on 10 February 2022, on the role of social media platforms

Information and communication technology (ICT)

Report of a hearing / round-table discussion

Number: 2022D09296, date: 2022-03-10, last updated: 2024-02-19 10:56, version: 3




Tweede Kamer der Staten-Generaal 2
Parliamentary year 2021-2022

26 643 Information and communication technology (ICT)

No. 822 REPORT OF A MEETING

Adopted 10 March 2022

The standing committee on Digital Affairs held a meeting with Ms Haugen on 10 February 2022 about the role of social media platforms.

The committee hereby issues the enclosed edited verbatim report of that meeting.

The chair of the committee,
Kamminga

The clerk of the committee,
Boeve

Chair: Leijten

Clerk: Boeve

Present are five members of the House, namely: Van Ginneken, Van Haga, Leijten, Rajkowski and Van Weerdenburg,

as well as Ms Haugen.

The meeting opens at 16.29.

De voorzitter:

Goedemiddag. Ik heet u allemaal welkom bij het gesprek van de vaste Kamercommissie voor Digitale Zaken met mevrouw Haugen over de rol van socialmediaplatformen. Good afternoon and welcome to this meeting of the Dutch House of Representatives with the standing committee on Digital Affairs. Welcome to Mrs Haugen, who joins us through a video connection. Thank you for joining us in this meeting and for the opportunity to discuss the role of social media platforms with you.

Before we start, I will introduce the Members of Parliament present here today: Ms Rajkowski of the Liberal Party, Ms Van Ginneken for Democrats 66 and Ms Van Weerdenburg for the Party for Freedom. I also see Mr Van Haga of Group Van Haga, and my name is Ms Leijten. I am from the Socialist Party. I will chair this meeting.

Ms Haugen, I would like to give you the floor for an introduction on this topic. Please introduce yourself and tell us about your experience as a former Facebook employee.

Mevrouw Haugen:

Good afternoon. Before I begin: will simultaneous translation be occurring? Do I need to build in gaps?

De voorzitter:

We will hold this meeting in English, so you can continue.

Mevrouw Haugen:

Okay, cool. I just wanted to make sure. One of my core issues is around linguistic diversity, so I wanted to make sure I was inclusive.

My name is Frances Haugen. I am a former product manager at Facebook and a handful of other large tech companies. I am an algorithmic specialist. I have worked on systems that determine how content is selected and prioritised for the home feed. I worked on search quality, which is the process of how you compose search results at Google. I founded the computer vision team at Yelp.

The reason I am before you today, though, is because of the experiences I had at Facebook and the issues that came to my attention in the process of learning more about how their systems work and how the organisation responds to them.

Facebook is facing serious, serious challenges. These challenges are actually similar to those of other large social media platforms. Because of the incentives that they work under, they are unable to fix those problems in isolation. I came forward because I saw that people's lives were on the line, and that the systems and the way Facebook has chosen to keep its platform safe do not scale in a linguistically diverse world. Its focus on content censorship is not effective. Facebook's own documents outline these problems.

Because of the subtleties in languages, it is unlikely that, until we have something called strong AI, or artificial general intelligence, we will be able to get more than 10% or 20% of hate speech on these platforms using AI. It is unlikely that we will get violence-inciting content at any rate close to what we need to. Instead of focusing on censorship, I have been trying to explain for the last six months that we need to start thinking about how we build safety by design into these platforms and how we incorporate accountability and transparency, so that Facebook can have incentives other than just its profit and loss statement. Because Facebook knows many, many solutions that do not focus on content and that would make these platforms safer. There are questions about the dynamics of the systems, or how the algorithms work, how content is selected, about product choices and around the routes of content that are prioritised or facilitated. All these things together add up to a system that gives the most reach to the most extreme content.

Facebook did not set out to design a system like this. In 2018, they made a choice, in their own best business interest, to begin prioritising what content you see based on the likelihood that it would elicit a reaction from you. That might be a comment, a like or a reshare. But in the process, it ended up giving the most distribution to the most extreme ideas. When Facebook looks at a post that creates angry comments or that makes people fight back and forth, it sees a high-quality post that causes lots of engagement.
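
A minimal sketch, in Python, of the engagement-based ranking described above. The weights, names and numbers are hypothetical, and nothing here is Facebook's actual code; it only illustrates the mechanism: each candidate post is scored by the predicted probability that the viewer will react to it, and the few highest-scoring posts win the feed slots.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float      # predicted probability of a like
    p_comment: float   # predicted probability of a comment
    p_reshare: float   # predicted probability of a reshare

# Hypothetical weights: comments and reshares count for more than likes,
# regardless of whether the reaction is friendly or angry.
WEIGHTS = {"p_like": 1.0, "p_comment": 15.0, "p_reshare": 30.0}

def engagement_score(c: Candidate) -> float:
    return (WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_comment"] * c.p_comment
            + WEIGHTS["p_reshare"] * c.p_reshare)

def rank_feed(candidates: list[Candidate], slots: int = 4) -> list[Candidate]:
    # Out of thousands of options, only the few highest-scoring posts are shown.
    return sorted(candidates, key=engagement_score, reverse=True)[:slots]
```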

At the same time, our democracies are in danger if only one side of an argument gets lots of distribution. If a side or a party goes more extreme – that could be on the left or the right – it will get more reach. Someone may try to come from the middle, or someone may try to bridge those sides and say: we do not need to argue; we have commonality and there is a way forward that is about compromises. Or: we can think creatively and we could both win. Those voices are not as clickbaity. The shortest path to a click is anger. In pluralistic democracies, the only way we can continue forward is if we continue forward together. But these systems devalue speech that helps people see the common ground.

The last area that I want to flag as a critical thing that democratic leaders need to be aware of, is that Facebook's advertising platform follows the same principles. Engagement-based ranking, or prioritising content based on its ability to elicit a reaction, is used in ads as well. Facebook believes that an ad that elicits more interaction is a higher-quality ad. What this means, though, is that an angry, divisive, polarising ad is going to be five to ten times cheaper than a compassionate, empathetic ad, or an ad that relies more on facts than on heated emotions. When we subsidise angry, divisive, polarising, extreme speech, we tear our democracies apart, because we build a cheat code into the democratic process.

This impacts all of us. As far back as 2018 – that is over three years ago now – political parties told Facebook: we know that our constituents do not like some of the positions we are running, either distributed organically through Facebook or in our ads; we know they do not like these positions. But we feel forced to run them, because those are the ones that get distributed on social media. We used to be able to put out a white paper on our agricultural policy, which is a topic that is vitally important to all of us. We all eat. But because that content is not compelling in the same way as something that makes you rage-click the angry face, or rage-comment something about how frustrated you feel, that white paper does not get distributed anymore. The problem with that is that by the time we reach the ballot box, Facebook's product choices and their algorithmic choices have already filtered out ideas that we might have wanted to vote for. Someone in Menlo Park is going to influence what Dutch voters get to choose in the ballot box without any accountability to people in the Netherlands. That is unacceptable.

If we want to move forward with this new technology, we have to begin thinking about the rules of the road. There is no comparably powerful industry in the world that has as little accountability or transparency as these social media platforms.

The last thing I want to talk about is the question of whether Facebook is a mirror or a megaphone. A lot of people like to say: these are just problems of human nature. This is not Facebook's fault. People will always argue. People have always had extreme ideas. People have had extreme ideas since human beings started talking, but we used to manage the exchange of information at a human scale. We had humans who chose what we focused on. Suppose you have a big family dinner and there are fifteen people there. If someone has a really extreme idea, then you have the space, the opportunity and the relationships to work through that idea and to come back to something closer to consensus. When you have a platform that gives the most reach to the most extreme ideas, you no longer get a chance to have good speech counter bad speech, because bad speech gets to have the most reach.

In conclusion, I want to remind everyone that we have paths forward. Facebook knows a lot of solutions. The reason why they have not chosen these solutions is that there are no incentives for them today to do so. We need to start having mandated transparency and safety by design, so that the good people inside of Facebook, people who are working very, very hard today and who come up with these solutions have the opportunity to implement them.

I thank you for inviting me. It is always an honour to get to collaborate with leaders around the world who care about these issues. I am happy to help you in any way that would be constructive.

De voorzitter:

Thank you very much for your introduction. I will give the Members of Parliament present here the floor for a question to you. My wish is that you answer straight away, so that we can sort of have a conversation.

Mevrouw Haugen:

Yes. Will do.

De voorzitter:

Good. Then I give the floor to Mrs Van Ginneken. She is talking to you on behalf of the party D66.

Mevrouw Van Ginneken (D66):

Thank you, chair. And thank you, Mrs Haugen, for attending our meeting and sharing your ideas and experiences with us. That is very important for us. I am very much aware of the fact that you have put yourself at some risk by disclosing all kinds of information about Facebook's policies and how the platform works. I admire your bravery and I am very happy that you want to be with us today.

I guess we have all read or heard your testimonies before the American Senate, or the UK Parliament. One of the things I read is that some people accused you of advocating censorship. What do you have to say to those people?

Mevrouw Haugen:

The Wall Street Journal did some excellent reporting on actions of the Facebook PR team. The Facebook PR team actively reached out to basically every conservative outlet in the United States and said that I was a dark agent, a Democrat that was there to promote censorship. Part of what I find frustrating about this narrative is that I have said from the beginning that I do not believe in strategies that are about picking out bad ideas, differentiating between good and bad ideas. I do not think they work. The reason why I think they do not work is in Facebook's own documents about these strategies. Human language has so much context in it that you can have sentences that are very similar, where one will be hate speech and one will not be hate speech. Unless you understand all of those cultural cues, you are not going to figure out what is hate speech. While I was at Facebook, 3% or 5% of hate speech was successfully taken down by Facebook's AIs. Facebook's internal discussions said: in the best-case scenario, even with massive investments in pushing this technology forward, we are only going to get to 10% to 20%.

This sounds like: maybe that is not the thing we should be relying on. Maybe we should be figuring out how to have more accountability about what the consequences of the system are. Because then Facebook could choose to do certain things. Let me give you a really simple example of something that Twitter does. They require you to click on a link before you reshare it. Have you been oppressed if you have to click on a link before you reshare it? I do not think so. That alone reduces misinformation by 10%. Facebook chooses not to do it, because there are countries in the world where 35% of all the impressions of things that are viewed in the newsfeed are reshares. There are countries where people do not produce as much original content. So, there is a strong business interest in not doing something that is for the public good.

The second thing is that I do not believe that focusing on censorship is a just strategy. In some of the most fragile places in the world, Facebook is the internet, almost universally. Most people are not aware that in the majority of languages in the world, 80% or 90% of all the content that exists on the internet for that language only exists on Facebook. So, Facebook is the internet in places like Ethiopia. But Ethiopia, which has 100 million people, is also profoundly linguistically diverse. There are 100 dialects and there are six major language families. At the time that I left, Facebook supported only two of those six language families even mildly. Remember: dialects matter. The UK is meaningfully less safe than the United States. Scotland is even worse, because dialect differences really confuse AI. AI is not actually intelligent.

So, I find it very humorous that people say: she is a dark horse for censorship. Because I have repeatedly said: if we focus on censorship, we are giving up on places that speak smaller languages. A language spoken by 10 million people, 15 million people or less, is not anywhere on Facebook's radar. That is just not fair. All people who use Facebook deserve to have a safe experience. We need to focus on product design choices. Questions such as: should you have a multi-picker that lets you blast out something to ten Facebook groups at the same time, or should you have to manually reshare each time? That change alone reduces misinformation on the platform by something like 20% or 30%. It is little things like this that really, really matter. They work across every single language.
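
A sketch of the kind of content-neutral product friction mentioned here, under assumed rules and with hypothetical names (click before resharing, and no multi-picker that blasts a post to many groups at once); it is an illustration, not any platform's real API.

```python
MAX_GROUPS_PER_SHARE = 1  # no multi-picker blasting a post to ten groups at once

def may_reshare(user_clicked_link: bool) -> bool:
    # The reshare button only becomes active once the user has opened the link.
    return user_clicked_link

def share_to_groups(post_id: str, group_ids: list[str]) -> list[str]:
    # Only the first group is accepted; further groups require a manual reshare.
    return group_ids[:MAX_GROUPS_PER_SHARE]
```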

De voorzitter:

Thank you very much. There is a question from Mr Van Haga.

De heer Van Haga (Groep Van Haga):

Thank you very much, Ms Haugen, and thank you for being with us and enlightening us. I must say that it is deeply worrying to hear all these things. My problem is not so much with the algorithms. We have had a brief chat with three people from NGOs before you came in. There was a discussion about private or financial interest versus the public interest. Next to the algorithms, there is always human intervention. I have personally been cancelled by YouTube and LinkedIn. I have taken that to court. I was wondering what your idea is about the human intervention behind all these algorithms when it comes to cancelling politicians, or basically to cancelling differences of opinion. Sometimes it is a difference of opinion, but at some point it becomes wrong or right. Then somebody takes the moral decision to cancel one of the sides. I fail to see the business model behind that. I can understand financial business models, but here I am in a blur. You have been inside the company, so maybe you could tell us more about that.

Mevrouw Haugen:

First of all, I think the issues you raise are very important and very serious. I think taking unilateral actions to remove people or content off the platform is something that needs a lot more transparency. The public deserves to understand how these things happen in a way that is much more accessible than today.

One of the things I want to flag for everyone here – which is another reason why I am against that magical «AI will save us» strategy – is that often, content gets taken down that is not violating. Some of the documents talk about how counterterrorism speech in languages like Arabic [...] 75% of the counterterrorism speech is labelled as terrorism and taken down. That is part of why we need to have transparency on the performance of these [...], because they are actually substantially less safe, because they took that content down. What I want to reinforce again, is the idea that we need transparency and accountability.

The second point is around why they remove some people from platforms. There are places in the world, like Taiwan, where people face extreme societal risks from China interfering in their information environment. They have thought carefully, and there is a lot of research coming out of there about how we design systems that encourage discipline and how we have [...] deliberations. How do we actually have [...] discourse? I think there would be a lot more tolerance on the platforms for extreme views if they were intentionally designing for ways that facilitated deliberations. This is not a thing ...

De heer Van Haga (Groep Van Haga):

Sorry, the connection is really breaking up.

De voorzitter:

The connection is a little bit poor, Mrs Haugen.

Mevrouw Haugen:

I am sorry. It is still breaking up.

De voorzitter:

Unfortunately, the connection is a little bit poor. We are looking at how we can sort it out on our side.

Mevrouw Haugen:

Okay. I will log out and come back in.

De voorzitter:

Yes, fine.

The meeting is suspended for a few moments.

De voorzitter:

We had some technical problems, but the connection is back again. Mrs Haugen, you can continue your answer to Mr Van Haga.

Mevrouw Haugen:

Platforms do not want to be accountable for the process of how they amplify content. There are no mechanisms for that, and they do not design software for deliberation. How do we as a society come to consensus? How do we have meaningful discussions? They are not willing to follow that research. There are places in the world, like Taiwan, where they are investing in how to design social media in such a way that you can have free speech and an open society, and it is still safe.

I am sorry that you went through that. I do not think it is appropriate for that to be done in such a unilateral and non-transparent way. Beyond that, I do not have a lot of context.

De voorzitter:

Thank you very much. Do you wish to ask a follow-up question? That is not the case. Then I give the floor to Ms Rajkowski from the Liberal Party.

Mevrouw Rajkowski (VVD):

Thank you, chairwoman, and thank you Mrs Haugen for being here and sharing with us all of your insights and your experiences. For me, it is especially valuable that you have not only worked at Facebook, but that you know other big tech companies from the inside as well.

One of my questions is about the fact that Facebook repeatedly promises that they will remove extreme and harmful content. Experts, one of whom is you, keep telling us that those promises are actually false.

In Europe, we are working on some legislation, such as the Digital Services Act. I do not want to ask you a question about that regulation, but I do want to ask you whether it is possible for Facebook to take action now and deliver on its promises.

Mevrouw Haugen:

Like I mentioned in my opening statement, as long as Facebook continues to double down on the strategy of saying that AI will save us, and that we can magically pluck out the content that is bad, I do not think they will be able to adequately keep us safe. At a minimum, they will definitely not be able to fulfil even the basic things they claim they are doing.

Facebook's internal research says that about 3% to 5% of hate speech is taken down. To show you the reason for that, here is a real example from the documents. Imagine the phrase: «White paint colours are the worst.» You might ask: why did that sentence get flagged? Why is it that anyone would think that that could be hate speech? That is because AIs are not smart. They do not understand context. They have never painted a room. They do not understand the experience of walking into a hardware store and having to see a hundred and fifty shades of white. I do not know the difference between these.

What AIs do know is how to look for words. They look for patterns in how these words are associated with each other. Unfortunately, that is how they make decisions. As long as we have such blunt technologies for being able to identify hate speech or any kind of violating content, which could be violence-inciting content, bullying, or whatever it is, you are only going to be able to get, according to Facebook's documents, 10% to 20% of the violating content that exists. This is unless they start having a much more transparent approach.

I think there are ways in which they could be disclosing more information on their systems. Imagine all the different scores that come out of one of these systems. They are scoring how likely it is that content is bad. Imagine that we have 100 or 1,000 samples in Dutch. We could then look at each point in the scoring system and say: hey Facebook, your system is broken; these things that you think are hate speech are not hate speech, and you are taking down good people's content. And: you are missing all these examples; you do not have context on what hate speech is in our society.
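
A sketch of what such a transparency check could look like, assuming a hypothetical labelled sample of Dutch posts together with the classifier scores a platform would disclose; the function names, threshold and numbers are illustrative only.

```python
def evaluate(samples: list[tuple[float, bool]], threshold: float = 0.8) -> dict:
    """samples: (classifier_score, is_actually_hate_speech) pairs for Dutch posts."""
    wrongly_flagged = sum(1 for score, label in samples if score >= threshold and not label)
    missed = sum(1 for score, label in samples if score < threshold and label)
    total_violating = sum(1 for _, label in samples if label)
    share_caught = (total_violating - missed) / total_violating if total_violating else 1.0
    return {"wrongly_flagged": wrongly_flagged,
            "missed_violations": missed,
            "share_caught": share_caught}

# With data like this, outside reviewers could point out which flagged posts are
# not hate speech and how many real violations the system overlooks.
print(evaluate([(0.9, False), (0.95, True), (0.4, True), (0.1, False)]))
```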

Until we start having transparency initiatives like those, there is no way Facebook is going to perform at the level they need to perform at for the public good.

Mevrouw Rajkowski (VVD):

Thanks for that insightful answer. As you mentioned earlier, you have also worked at Google, for example. You have worked at several big tech companies, each with its own challenges, I might say, when it comes to data collection, use of market power, et cetera. What happened at Facebook that made you decide to step out and stop working there, when you did not do that at all the other companies? Are they the worst in class? Can you elaborate a bit on that?

Mevrouw Haugen:

The thing that motivated me to come forward was that Facebook is very different from other big tech platforms. That is because they literally went into the most vulnerable and fragile places in the world and said: if you use Facebook, your internet is free; if you use anything else, you are going to pay for it. So, you are in places that are very fragile, linguistically diverse, have long histories of conflict and ethnic tensions, and Facebook has come in there and has choked off the opportunity for there to be a free and open internet. They choked off the opportunity to have alternatives. If you go to some of these places and ask people on the street what they think the internet is, they will think it is Facebook.

In a situation like that, you have a higher level of responsibility for making sure those people are safe. They are the victims of misinformation. The thing that really is happening in conflict zones around the world is that people are sending out images claiming that your cousins are in the village and that there has been a massacre. Like: go grab your guns and save your cousins. But the photo that they have just sent out is actually from seven years ago and six countries away. That kind of stuff is destabilising places. Talk to NGOs around the world. It is really, really profoundly dangerous. What we saw in Myanmar and Ethiopia is just the opening chapters of a dystopian fairy tale that I personally do not want to read the end of.

I think it is important to contrast that expansion strategy with what Google has done. Google is a similar large tech company, which has also spent a lot on trying to get more people online. But Google has spent that money on other things. When I worked at Google, internet in Africa was very, very expensive. That was because there were limited fibre optic pipelines that came into the continent. Google paid for new fibre optic pipelines to get dropped into the continent, because that significantly decreased the cost of using the internet. That still allows there to be competitors to Google. It allows there to be an open and free information environment. It does not choke off other parties from being able to develop their own ways of interacting online. Facebook, by contrast, did do all of those things. I think that means that they have a higher responsibility for making sure that people in those fragile places are safe.

De voorzitter:

Mevrouw Van Weerdenburg for the Party for Freedom.

Mevrouw Van Weerdenburg (PVV):

Thank you, madam chair. Thank you, Mrs Haugen, for the information you have given us so far. I think the question you asked, whether Facebook is a mirror or a megaphone, is the question at hand. From what I understand, you are saying that it is a giant megaphone and that we should really take those megaphones away. I can understand that to some level, when you talk about fragile states like Ethiopia, where Facebook is the whole internet. But what about Western countries with healthy democracies that have a population with a reasonable level of tech savviness? Why should those people be saved from the megaphones? Are they not able to distinguish between the online world and the real world? I fail to see why this is such a great danger that we should protect them from it.

Mevrouw Haugen:

To be really clear: I am not against things going viral. I believe everyone has a right to express themselves online. The thing that I believe is concerning is the following. This is a thing that changed in 2018. Before 2018, Facebook prioritised the content in your newsfeed. Every time you opened Facebook, Facebook surveyed the landscape. There were tens of thousands of options they could have shown you. They needed to show you only tens or hundreds of pieces of content. Prior to 2018, their definition of success was keeping you on your phone: can you keep scrolling? They made a conscious choice to change to: can we elicit an action from you? Can we provoke you to click the like-button or angry-button, or provoke you to make a comment? The content that was distributed, the content that now got the most reach, changed. It changed so distinctly that political parties in Europe told researchers at Facebook: we are no longer able to talk to our constituents about a broad set of issues. There are lots of issues that impact our daily lives that are not riveting. They are not things that provoke an emotional reaction. These are things like talking about our health policy, about agriculture, about education, or about how we take care of the elderly. All these things are not topics that provoke a knee-jerk click.

Facebook made that choice not for our well-being. They made that choice because they had run experiments on people showing that if you get more likes and comments, you produce more content, which means more money for Facebook. So, Facebook came and said: we are making a business decision that is good for us; even if the comments are bullying or hate speech, they are still going to count as meaningful social interactions.

When polled six months later, Facebook's own users said: my feed is less meaningful. So this was not done for our benefit. It was done for Facebook's benefit. After that change happened, suddenly political parties said: we cannot talk about how we take care of the elderly anymore; we cannot talk about education policy; we cannot talk about how we feed ourselves, because that is not exciting enough anymore. The stuff that gets distribution now is extreme, polarising, divisive content. If we do not put out that content, it does not get distributed.

I am not saying that people do not deserve to go viral. I could write a reasoned piece that tries to give context on an extreme claim that is made. I could say: this is actually a lot more complicated than that; here are three different ways to look at this, and I personally think it is this one. Right now, that kind of thoughtful, deliberative speech does not get a chance to get anywhere near as much distribution as something that pisses you off. Things that anger us and things that cause us to fight with each other in the comment threads, are seen as high quality. If I try to explain that we actually have a lot of common ground together, that does not inspire a click as fast and is viewed as lower quality.

When I ask whether it is a mirror or a megaphone, my point is that what we see online is not representative of what is actually being said. It is representative of what Facebook views as high quality. The question I want to ask you is whether we should get at least some transparency into what gets to win and lose. We talk about being shadowbanned. A lot of shadowbanning is happening unintentionally, because Facebook's algorithms have decided that certain kinds of discourse are more valuable than other kinds of discourse.

De voorzitter:

Thank you very much. I have a question myself as well. The Netherlands is a member of the European Union. The European Union is preparing legislation on digital markets and digital services. Do you know these acts? And if you know them, do you think they are going to help at all against this megaphone effect of the Facebook algorithm?

Mevrouw Haugen:

I keep stressing that Facebook today already knows interventions that would let us all speak freely online and that would decrease the things that we think are toxic. Those are things like just requiring you to click before you reshare. I do not think that oppresses anyone. If you had to decide between taking one of you off the platform or asking people to click on links before they reshare them, which would you choose?

Twitter has come down on the opposite side on a lot of these things. They have chosen to do network-based interventions and product-choice interventions, because they scale better. They are actually cheaper to implement than the things Facebook is doing today. They are willing to make that trade-off. They are willing to lose 0.1% of profit, because they would rather do this product intervention.

I think that some of the things in the DSA will help companies go in that kind of direction, with a more Twitter-style intervention. They have mandatory risk assessments. I think mandatory risk assessments are great. I am a big proponent of requiring Facebook to disclose harms and having some kind of independent commission, body or whatever you want to call it, that can raise additional issues. Requiring Facebook to articulate how it is going to solve those problems is really important. There are many kind, conscientious people inside of Facebook that are coming up with very creative, interesting solutions. But today, they do not have space to act, because the only things that are reported externally are profit, losses and expenses. When you have mandatory risk assessments, when you have mandatory transparency, it suddenly becomes in Facebook's interest to say: losing 0.1% of profit for requiring people to click on links before they reshare them is totally worth it, because it is going to make all these other numbers that we now have to report look better. I think there is a lot of opportunity there.

I do not know a lot of details on the Digital Markets Act. I therefore feel hesitant to comment on it. I know there have been discussions on the question whether the DSA should only focus on illegal content, or also on legal but harmful content. I have had far too many conversations with parents who have lost their children to give up on the idea that we should care about Facebook's effects on teen mental health. Most of these effects are not about illegal content. They are about patterns of amplification. You can follow very moderate interests on Instagram, like healthy eating, just by clicking on the content. You can even take a brand new account, follow some innocuous searches, just click on the content Instagram gives you and be led to anorexia content. I do not want to have a world in which Facebook's choices to have the system work that way do not have consequences. I do not want to see any more kids hurt themselves.

De voorzitter:

Thank you very much. I give the floor again to Ms Van Ginneken.

Mevrouw Van Ginneken (D66):

Thank you, madam chair. You are very eloquent, Mrs Haugen, about what is happening. This is very insightful to all of us. I heard you state very clearly that the spreading of hate speech and polarisation is not collateral damage that comes from using the platform. It is a deliberate choice. That is very worrying, obviously. I want to dive into that a bit deeper. If you say it is their choice, do you then mean that this is something that automatically comes from the company's culture? Is it a blind belief in technological possibilities? Is it an explicit company policy? Or maybe all of the above? What is the driving force behind the continued focus on turnover instead of societal responsibility?

Mevrouw Haugen:

I think there are a couple of different factors that are causing the current situation inside of Facebook. The first is that there is a long tradition inside of Facebook of focusing on the positive rather than the negative. There is a culture that, if you want to get promoted, you toe the company line. There have been previous disclosures from former employees. These showed internal posts from executives that say things like: the only thing that matters, and the most important thing, is connecting people; if people die, that is fine. A few people dying is not as important as connecting people. There really is a blind faith and a religion around that.

I think a secondary thing is that they have a culture that undervalues the role of individual leadership, or individual responsibility. In the case of individual leadership, they have a culture that believes that if you set the right metrics, you can let people run wild and free. They can do whatever they want, as long as they move those metrics. That is fine, as long as the metrics themselves are not the problem.

As I described earlier, Facebook changed its definition of success from «how many minutes are you on the site every day?» to «how much engagement did we elicit from you?» It happened to be that this choice, changing the definition of success, led to the most extreme content getting the most distribution. The way you pull back from something like that is having voices in leadership say: this has extreme consequences; we need to figure out a different path forward. Facebook has a strong culture that values flatness. At Facebook they have the largest open floor plan office in the world. That is how religious they are about flatness. It is a quarter of a mile long. It seats 5,000 people in a single room. It is extremely COVID-friendly. I am sure they will get lots of use out of it in the near future. When you have a culture that does not value individual leadership and that devalues personal responsibility, there is one thing that you hear over and over again from executives at Facebook. You hear it in their congressional testimony or in public comments, when they are asked: can you tell me who is responsible for X? That may be shipping Instagram Kids. Who is going to make that decision? It may be what the safety policy should be. Over and over again they say: at Facebook, we make decisions as teams. That is true. Decisions to launch products literally go to committees, and 20 or 30 people will weigh in. To a certain extent, no one's hands are ever dirty, because all of the hands are dirty.

I think that there is a real need across many tech companies to value leadership and that role more, because we need humans making decisions. In the history of newspapers, radio and TV, humans decided what we focused on. Facebook has kind of abdicated responsibility for caring about the consequences of the choices they are making. They want to believe that all that Facebook is, is a mirror, ignoring the fact that publishers have written to them, as BuzzFeed did after that change I described: the content that gets the most distribution on Facebook now is the content we are the most ashamed of; I think there is something broken about your system.

We need to think about where we want humans in the loop. Can we, the public, provide a counterweight through things like transparency, saying: the number of complaints about various kinds of harmful content, nudity, violence, or whatever, is going up; let us dig into that. Right now, people who raise the red flag inside of Facebook just do not get promoted. We need to think about how we can change the incentives, so that they can have the space to do the right thing.

De voorzitter:

Then I will give the floor to Mr Van Haga.

De heer Van Haga (Groep Van Haga):

Thank you very much. Coming to these incentives, some time ago you said in an interview that at Facebook, there are only good, kind and conscientious people, but that they are all working with bad incentives. That sounds very positive. Is it not possible that some people behind these algorithms are just bad, and motivated by greed, power and maybe even political motives, and that this is why we get this silly or Machiavellian AI with unprecedented impact? The related question is of course how we get to the responsible people. How are we going to hold them accountable, especially since this spans the globe, crosses borders and falls under different judicial systems?

Mevrouw Haugen:

One of the things I have strongly encouraged, and which I think should be in the DSA (I do not know if you are doing that or not), is that tech companies have to put a single name on every change that goes out, or at least on every change that impacts more than 100 million people. I think that it would actually change things if you said: even if it is a little change, if that thing that you are doing impacts 100 million people, someone's name has to be on it; someone has to be the responsible party. Someone would take a deep breath and they would probably be a bit more conscientious. At Facebook, there are no singular names. There are groups of names. We have seen throughout history that when we diffuse responsibility, people do not make decisions that are as good.

The second thing is a small correction. I did not say there are only good people at Facebook. I said that most of the people inside of Facebook are really kind and conscientious. I think it is important that, when we look at people and say they made a decision for money, it is often more complicated than just that. You never know if someone has a large family they support. Maybe they support family back home, where they came from. Maybe they have a disability or a child of theirs has disabilities. So, people sometimes feel economically fragile. That keeps them from harnessing their courage as much as they could. They might go along to get along, because they feel that they do not have another option.

I think that we can put in systems of responsibility. We need to in this case, because Facebook has shown time and time again that they are not willing to take personal responsibility. That is a pretty fundamental thing when you wield as much power as Facebook does.

De voorzitter:

Mrs Rajkowski.

Mevrouw Rajkowski (VVD):

Thank you, chairwoman. I have another question for you, Ms Haugen. It is becoming more and more clear to me that social media platforms like Facebook can actually take action now. In one of the earlier sessions that we had today, someone said: I can imagine that companies like Facebook grew so fast that maybe they do not know what is going on everywhere inside of the company. I would actually beg to differ when it comes to the societal impact that the algorithm has on democracies. They have been doing research. They know what their algorithms can and cannot do. If some kind of research concludes that this has a negative effect on democracies, who knows about these conclusions? Who is sitting in the boardroom together, discussing the report? Who is making the decision to keep it a secret?

Mevrouw Haugen:

I do believe that, in the end, responsibility lies with Mark Zuckerberg. I am not willing to call Mark evil, but I do believe that he has made choices to not seek the truth as assertively as he has a duty to. He has surrounded himself with people who reinforce the idea that connecting people is the only thing that matters, and that when all of this has gone past, everyone will remember Mark Zuckerberg favourably because of the work that he did.

Unfortunately, there are currently no mechanisms that act as a check on that. He has the majority of votes. He controls the board, he is the chairman, he is the CEO. Right now, there is an active disincentive for raising this research to higher levels, even though this research is being done by individuals who are highly conscientious, who are good people. There are incentives at every single step of the hierarchy to water down findings. There are many, many reports coming out of Facebook right now about their human rights assessments, and the pressure being put on people to water down findings. Even when they are working with independent third parties, they will stonewall things if they are not positive enough.

I do not know how high this information goes up. I think there is an echo chamber at the top where, if you can maintain the company line and the reality distortion field, you get promoted. At some level, in the end, Mark is responsible for the company that he built and for the corporate governance internal to the company. He is responsible for that. He is the CEO.

I think there are serious problems. Actually, I need to flag something for you guys. This is really, really important. I think it is one of the red flags. After my disclosures, Facebook locked down all the safety information, so that it was only accessible by people who worked in Integrity. This has been widely reported. Think about this for a moment. The people who work in Integrity are a minority at the company. Maybe it is one tenth of the company. People internally pointed out that locking down the materials this way would not have stopped me, because I worked in Integrity. So, they did not do this to prevent another Frances Haugen. The only thing that this allowed or enabled was that now, current employees could not confirm that these things were true. That shows you the level of problems inside of Facebook. Facebook knows that if their average employees, who have concerns and who read the news and saw this, could read the documents themselves, they would demand change. Facebook took a unilateral action to make sure that employees who did not work on safety could no longer see how unsafe the company was.

De voorzitter:

Thank you. Well, that is something. Ms Van Weerdenburg.

Mevrouw Van Weerdenburg (PVV):

I have a question. You said that we all agree that they should moderate illegal content. You made a case to also look at harmful content. You made that case very cautiously, by giving us an example on anorexia groups. I think that we would all agree on that example. When you speak about harmful content, you are kind of opening a can of worms. That is a subjective judgement. My colleague two doors down would, for example, believe that my views on climate change are very harmful, and that I should not broadcast these on social media. How can we ever find political consensus on what really is harmful?

Mevrouw Haugen:

What I think is interesting is the following. Let us imagine a world where we get to have more public involvement in how solutions are selected. I want to be clear: I feel that the far right draws a lot of attention around how it gets regulated on these platforms. There is a lot of extreme content on the left too. These platforms are amplifying and giving the most space to people on the far left and the far right. Everyone in the middle gets to have a smaller voice.

I think that if we all sat down at the table, we would want solutions that are content-neutral. We want to focus on the dynamics of these platforms. Should we have million-person groups that have no moderators? I am a strong proponent of groups having to propose their own moderators. Just the act of having to recruit volunteers and maintain your group will decrease the volume of content that goes out through mass distribution channels. We should have a world in which these platforms are responsible for the consequences of their actions. That might be addiction. The highest rate of problematic use reported on the platform is among 14- and 15-year-olds. That comes from studies with 100,000 participants. These are not small studies. Those consequences are things like addiction or depression. Facebook has internal research on depression on its platforms. A lot of these things are really harming people. All I am saying is that the public has a right to have conversations about it, to weigh in and to have influence.

And guess what? There are lots of solutions that will not focus on censorship. Censorship does not work and it does not scale. If we sat at the table, I think we would converge to: let us make product changes, let us not make content changes. We could have transparency and then we could see the harms. They would have to show that they were making progressive changes over and over again. They would say: we know the amount of this type of content is increasing every time we do this. Should they then not be responsible for intentionally choosing over and over again to increase that kind of content?

De voorzitter:

Thank you very much. I was just looking around to see if there was another question from one of the Members of Parliament. That is the case, so I will give the floor to Ms Van Ginneken.

Mevrouw Van Ginneken (D66):

You disclosed all this information in September or October, if I am not mistaken. What is your observation? Has anything changed in the behaviour and the policies at Facebook?

Mevrouw Haugen:

Facebook did say that they are going to invest more money in safety. The average amount spent on safety over the previous five or six years was, I think, on the order of about 2 to 2.5 billion dollars a year. I love that Facebook likes to roll together all of its spending since 2016 when they say: look how much we spent. It makes the number look bigger. They did increase it. I think they said they were planning on spending about 5 billion dollars over the coming year, which is good. They are moving in the right direction.

I also want you to be aware that they are planning on buying back 75 billion dollars of their own stocks. That is one of these things where you think: hmmm, 5 billion dollars on safety and 75 billion dollars lighting money on fire. Is 5 billion dollars for safety enough if we are not there yet?

I do think they are moving in the right direction, but we need to keep pushing. I think the DSA is a lot further along. I think the Online Safety Bill in the UK is a lot further along, as a result of the conversations we have had. I have been extremely heartened by non-profits reaching out to me. There is a child safety group that reached out to me. They said: we just threw our annual fundraiser, back in December, and we raised five times as much as we did in any previous year. I think there are a lot of people out there who are feeling much more motivated, because we are not having a hypothetical discussion anymore.

The fundamental problem with Facebook, I think, is that we each only see our individual experiences. We do not see the systemic impacts, because only Facebook gets to see all the data. For years, we had activists coming forward and saying: kids are hurting themselves because of Facebook. Human trafficking is a huge problem with Facebook. Human beings are being sold, literally. A lot of human organs are being sold. Over and over again, Facebook would say: that is just anecdotal; this is not a widespread problem. As in: yes, you found a couple of examples, but that is not really a problem. In fact, the internal documents showed that it was a widespread problem.

When you move from your individual experience to realising that something is a more universal experience, that is very motivating. I am super excited to go forward in the coming year and to do more around organising young people. When people find out that there are paths forward, especially paths forward that do not involve censorship, they get excited.

I think there is a giant opportunity for us to go to Facebook and say: if Twitter can require people to click on a link before they reshare it, you can too. Or you can cut reshare chains at two. Alice writes something, Bob reshares it, Carol reshares it and it lands in Dan's newsfeed. Facebook's internal research says that, when they factcheck your content and take it down, it drives you crazy. It infuriates people. Cutting the reshare chains at two means that once something is outside your friends of friends, you have to copy and paste it if you want to share it again. You are still free to do whatever you want, but you have to make a choice. You have to consciously copy and paste it. You cannot just knee-jerk and reshare it. That change alone has the same impact as all of the third-party factchecking. Would you rather have your content taken down? Or would you rather have to copy and paste if something got beyond friends of friends?
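
A sketch of the reshare-depth rule described here, with hypothetical names; the cut-off of two corresponds to the Alice, Bob, Carol, Dan example above and is not any platform's actual implementation.

```python
MAX_RESHARE_DEPTH = 2

def can_one_click_reshare(reshare_depth: int) -> bool:
    """reshare_depth: how many reshares the post has already gone through."""
    return reshare_depth < MAX_RESHARE_DEPTH

# Alice's original post (depth 0): Bob may reshare it with one click.
# Bob's reshare (depth 1): Carol may still reshare it with one click.
# Carol's reshare (depth 2): Dan has to copy and paste to pass it on.
for depth in (0, 1, 2):
    print(depth, can_one_click_reshare(depth))
```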

These solutions exist, and people get excited when they hear that there are paths forward that will not make everyone crazy. Let us find solutions that are about making it possible for us to have conversations. Things can still go viral, but let us have them go viral in intentional ways, and not because Facebook gamified us.

De voorzitter:

Mr Van Haga.

De heer Van Haga (Groep Van Haga):

I just wanted to say thank you. That was very informative. I wish you all the best on your mission. I am afraid I have to be excused now, because I have another meeting to attend. Thank you very much. Goodbye.

Mevrouw Haugen:

Thank you so much. I am glad you were here.

De voorzitter:

Ms Rajkowski.

Mevrouw Rajkowski (VVD):

Thanks. Back to selling organs on the Facebook platform. That is still echoing in my head. The systems with which they make money, with personalised ads and everything, work perfectly for their benefit. They can personalise ads so well that we did not even know that we needed some kind of shoes or something. When it comes to tracking down the selling of human organs, they say: we have so much trouble with our systems, and this is technically very hard. Can you elaborate on that a bit? You also have this technical knowledge. Is it true that this is a technical issue? Or is it a business choice that they are making?

Mevrouw Haugen:

There are a couple of different dimensions. Remember how I talked about the idea before that when we focus on content and not on systems, we have to rebuild safety systems in each individual language. Part of the problem is that Facebook has chosen not to build out adequate safety systems in enough languages. There are paths forward. A strategy Google used in order to function in a linguistically diverse world, is that they built software that allowed individual communities to say: I know that my community only has 5 million or 7 million speakers of our language, but we are willing to have hackathons and help translate Google into our language, because we want to participate on the internet too.

Facebook could say: organ trafficking happens all around the world; the places where people are most likely to sell their organs are places where they are economically disadvantaged. It happens to be that often, those places are also linguistically diverse, or people there speak more peripheral languages. Facebook could say: we are going to build systems in such a way that an NGO in a place that has 7 million speakers can make sure that there are enough examples, that they have found enough content so that we can do a good job. They could do that, but they have chosen not to. I think that is partly because Facebook's philosophy is about a closed system and not an open system. Google is part of the open internet; Facebook is part of the closed internet.

You can imagine having standards around equity with languages. That could actually force Facebook to invest in systems and technology that would allow more languages and dialects to get adequate coverage. Because dialects also matter.

A second question is around staffing. I worked on the threat intelligence team. That was my last team at Facebook. I was there for eight months or something. One of the teams in that group is called human exploitation. HEx is their shortened name. About six or seven people were responsible for all cases of organ trafficking, human trafficking, sex trafficking, domestic servitude, and four more kinds of human exploitation. You can imagine staffing a team with twenty people for the selling of organs, but they chose instead to have a team with maybe six or seven people that covered nine areas of human exploitation. That is a question around allocating resources. Facebook has chosen not to make registers transparent, so no one can help them. They have chosen not to invest in reporting channels. There are some problem cases. For example, in the US 2020 election, there was a way for NGOs to report when voter suppression or disenfranchisement was occurring. People could report examples of false claims that a polling place was closed, for example. Facebook could do that for organ trafficking, and they choose not to. They choose not to allocate adequate resources.

This is an example of the fact that we really need to make sure that Facebook has to report externally on things other than profit, loss and expenses, so that it has other incentives. You can imagine a world where we said: we are going to require you to disclose which AI systems you have, and give us some sample data, so that we can see the performance of those systems. That would allow us to see if Dutch is adequately served. Just to be super blunt: I am guessing that Dutch is not supported. If there is any detection of human trafficking or organ selling, I am guessing that it is not supported in Dutch. I think that is wrong. We live in a linguistically diverse world and Facebook is responsible for too much of the internet to not serve languages with some equity.

Mevrouw Rajkowski (VVD):

Thanks.

De voorzitter:

Ms Van Weerdenburg.

Mevrouw Van Weerdenburg (PVV):

Thank you. We are talking about organ selling, human trafficking and anorexia. We can all agree that those are really bad things. I think the majority of Facebook users or users of social media are in Western democracies. They are people who come home after having worked all day. They just want to read up on their family and everything in five minutes. They are perfectly happy with how Facebook is right now. Are we not breaking something that the vast majority of users are perfectly okay with and happy with? Are we not solving something that for the majority of the people is not even an issue? Is that fair? That is what I am trying to see. It has worth for some people. It is a one-stop shop: five minutes and I am up to speed with everything, just like with the big Amazons. It is handy to be able to find everything in a one-stop shop. Are we not forgetting that the majority of users are perfectly happy? Also, if they are not happy, they can just cancel. You are not obliged to go on Facebook.

Mevrouw Haugen:

Let us hop on our time machine together. Ready? Has everyone put on their time traveler hats? Let us go back to 2009. In 2009, Facebook was about our friends and our families. If we had Facebook from 2009, we would not be sitting here having this conversation today. We can have social media that is like that. You do not actually have to have that many safety systems, because when our conversations are with our family and friends, we do not have a lot of the problems that we are worried about right now. The real problems at Facebook are not about us saying whatever we want to our family and friends.

Most of my social networking happens on a chat app called Signal. Signal is not very dangerous, because it lacks a lot of the dynamics that Facebook has. We loved Facebook in 2009. Back then, we did not have these conversations about how toxic and horrible Facebook is.

When it was about our family and friends, our conversations were about our breakfasts, our babies, cool vacations we went on or a book we read recently. But Facebook saw that we did not spend enough time on Facebook when the only content we had to consume was from our family and friends. When we got content just from our family and friends, we stayed for 20 minutes. Like you said: it was convenient. You could catch up with people, you could stay in touch with your friends from high school that you otherwise would not have heard from, and then you were done.

Guess what? That version of Facebook was worth a lot less money than the version of Facebook we have today. In the version of Facebook we have today, 60% of the content is not from our family and friends. It is from groups. Facebook keeps pushing us into giant groups, which it has no control over. If you belong to a million person group, every day a thousand pieces of content are created in that group. The algorithm does not show you all thousand. It chooses to show you maybe three or four pieces of content. It needs to show you stuff from other groups too. The algorithm gets the most reach with the angriest, most divisive, most polarising content. That is not the post that says: let us think about this issue with more nuance. Or: I think there is a place where we could compromise. That post does not get chosen out of the thousand pieces of content. It is the things that cause anger in the comment threads and that cause people to fight with each other.

I want to be clear. I think that the version of Facebook that I am advocating for, which is more about our family and friends again, is the version that people wish they had. A lot of people have signed off Facebook because it is too toxic. I am suggesting solutions that work in every language around the world and that make things more human scale, or saying: your community can say whatever it wants, but if your group has 100,000 users or more and someone posts something into it, at least 1% of your own group has to have seen that content before it goes out more widely. Just that change alone would make it a lot less toxic.
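
A sketch of that suggested rule, with the 100,000-member threshold and the 1% figure taken from the passage above and everything else (names, structure) assumed for illustration.

```python
LARGE_GROUP_THRESHOLD = 100_000
REQUIRED_SHARE_OF_MEMBERS = 0.01  # at least 1% of the group's own members

def eligible_for_wide_distribution(group_size: int, members_who_saw_post: int) -> bool:
    if group_size < LARGE_GROUP_THRESHOLD:
        return True  # smaller groups are unaffected by the rule
    return members_who_saw_post >= group_size * REQUIRED_SHARE_OF_MEMBERS

print(eligible_for_wide_distribution(1_000_000, 5_000))   # False: fewer than 1% have seen it
print(eligible_for_wide_distribution(1_000_000, 12_000))  # True: at least 1% have seen it
```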

I am going to be clear: I have never advocated people sign off. I have never said that social media is bad. I do want to remind people that we love social media when it is about our family and friends. I think there is a happy path that we can go back to.

De voorzitter:

Thank you very much. We have had several rounds with the Members of Parliament. I wish to thank you for attending this meeting. We are out of questions. We did not run out of time. That is a good sign. Thank you for being our guest and informing us today about the world of social media platforms. Tomorrow we will talk to them, in a hearing with Facebook, Twitter and others. We can use your information for the conversation we will have tomorrow. I hereby close the meeting, unless you wish to say something? We have time, so feel free.

Mevrouw Haugen:

I have a question that I would love you to ask them. They keep saying: Frances has taken our documents out of context. There is an easy solution, which is that they could publish more documents. You can ask them about that. That would make me happy, because we all deserve to actually understand what we are consuming. It is like the fact that the government does not tell us what food to put into our mouths, but it does say that we do deserve to know what goes into our food. We deserve to know what goes into our information diet. I hope Facebook publishes more documents and gives out more data. I think there are wonderful paths forward and that, if the public was involved, we could find things that we both enjoy and that are good for us.

Thank you so much for inviting me today.

De voorzitter:

Thank you very much. We will address your question tomorrow, I reckon. Definitely.

Mevrouw Haugen:

Thank you.

De voorzitter:

I hereby close this meeting of the standing committee on Digital Affairs. Have a good evening to you all.

The meeting closes at 17.46.