A livestreamed “debate” yesterday between Facebook CEO Mark Zuckerberg and Thierry Breton, the European commissioner shaping digital policy for the internal market, sounded cordial enough on the surface, with Breton making several chummy references to “Mark” — and talking about “having dialogue to establish the right governance” for digital platforms — while Zuckerberg kept things respectful-sounding by indirectly addressing “the commissioner”.
But the underlying message from Europe to Facebook remained steely: Comply with our rules or expect regulation to make that happen.
If Facebook chooses to invest in ‘smart’ workarounds — whether for ‘creatively’ shrinking its regional tax bill or circumventing democratic values and processes — the company should expect lawmakers to respond in kind, Breton told Zuckerberg.
“In Europe we have [clear and strong] values. They are clear. And if you understand extremely well the set of our values on which we are building our continent, year after year, you understand how you need to behave,” said the commissioner. “And I think that when you are running a systemic platform it’s extremely important to understand these values so that we will be able to anticipate — and even better — to work together with us, to build, year after year, the new governance.
“We will not do this overnight. We will have to build it year after year. But I think it’s extremely important to anticipate what could create some ‘bad reaction’ which will force us to regulate.”
“Let’s think about taxes,” Breton added. “I have been a CEO myself and I always talk to my team, don’t try to be too smart. Pay taxes where you have to pay taxes. Don’t go to a haven. Pay taxes. Don’t be too smart with taxes. It’s an important issue for countries where you operate — so don’t be too smart.
“‘Don’t be too smart’ — it may be something that we need to learn in the days to come.”
Work with us, not against us
The core message that platforms need to fit in with European rules, not vice versa, is one Breton has been sounding ever since taking up a senior post in the Commission late last year.
Yesterday he was careful to throw his customary bone alongside it too, saying he doesn’t want to have to regulate: his preference remains for cooperation and ‘partnership’ between platforms and regulators, in service of citizens — unless, of course, he has no other choice. So the message from Brussels to big tech remains: ‘Do what we ask or we’ll make laws you can’t ignore’.
This Commission, of which Breton is a part, took up its five-year mandate at the end of last year — and has unveiled several pieces of a major digital policy reform plan this year, including proposals on sharing industrial data for business and research, and rules for certain ‘high risk’ AI applications.
But a major rethink of platform liabilities remains in the works. Yesterday Breton declined to give any fresh details on the forthcoming legislation, saying only that it would arrive by the end of the year.
The Digital Services Act could have serious ramifications for Facebook’s business, which explains why Zuckerberg made time to dial into a video chat with the Brussels lawmaker — something the Facebook CEO has consistently refused the British parliament, and denied multiple international parliaments when parliamentarians joined forces to try to question him about political disinformation.
The hour-long online discussion between the tech giant CEO and a Brussels lawmaker intimately involved in shaping the future of regional platform regulation was organized by Cerre, a Brussels-based think tank which is focused on the regulation of network and digital industries.
It was moderated by Cerre director general Bruno Liebhaberg, who posed and chose the questions, with a couple selected from audience submissions.
Zuckerberg brought along the laundry list of talking points he reaches for whenever regulation that might limit the scope and scale of his global empire is discussed — seeking, for example, to frame the only available options vis-a-vis digital rules as a choice between the US way and China’s.
That’s a framing that does not go down well in Europe, however.
The Commission has long talked up the idea of championing a third, uniquely European way for tech regulation — saying it will put guardrails on digital platforms in order to ensure they operate in service of European values and so that citizens’ rights and freedoms are not only not eroded by technology but actively supported. Hence its talk of ‘trustworthy AI’.
(That’s the Commission rhetoric at least; however its first draft for regulating AI was far lighter touch than rights advocates had hoped, with a narrow focus on so-called ‘high risk’ applications of AI — glossing over the full spectrum of rights risks which automation can engender.)
Zuckerberg’s simplistic dichotomy of ‘my way or the China highway’ seems unlikely to win him friends or influence among European lawmakers. It implies he simply hasn’t noticed — or is actively ignoring — regional ambitions to champion a digital regulation standard of its own. Neither of which will impress in Brussels.
The Facebook CEO also sought to leverage the Cambridge Analytica data misuse scandal — claiming the episode is an example of the risks should dominant platforms be required to share data with rivals, such as if regulation bakes in portability requirements to try to level the competitive playing field.
His claim was that it was too much openness in the past that led to Facebook users’ data being nefariously harvested by the app developer working for Cambridge Analytica.
That claim is also unlikely to go down well in Europe where Zuckerberg faced hostile questions from EU parliamentarians back in 2018, after the scandal broke — including calls for European citizens to be compensated for misuse of their Facebook data.
Facebook’s business, meanwhile, remains subject to multiple, ongoing investigations related to its handling of EU citizens’ personal data. Yet Zuckerberg’s only mention of Europe’s GDPR during the conversation was a claim of “compliance” with the pan-EU data protection framework — which, he also suggested, has led Facebook to raise the standards it offers users elsewhere.
Another area where the Facebook CEO sought to muddy the waters — and so lobby to narrow the scope of any future pan-EU platform regulations — was around which bits of data should be considered to belong to a particular user, and whether, therefore, the user should have the right to port them elsewhere.
“In general I’ve been very in favor of data portability and I think that having the right regulation to enforce this would be very helpful. In general I don’t think anyone is against the idea that you should be able to take your data from one service to another — I think all of the hard questions are in how you define what is your data and, especially in the context of social services, what is another person’s data?” he said.
He gave the example of friends’ birthdays — which Facebook can display to users — questioning whether a user should therefore be able to port that data into a calendar app.
“Do your friends need to now sign off and every single person agree that they’re okay with you exporting that data to your calendar? Because if that needs to happen, in practice it’s just going to be too difficult and no developer’s going to bother building that integration,” he suggested. “And it might be kind of annoying to request that from all of your friends. So where we would draw the line on what is your data and what is your friends’ is, I think, a very critical question here.
“This isn’t just an abstract thing. Our platform started off more open and on the side of data portability — and to be clear that’s exactly one of the reasons why we got into the issues around Cambridge Analytica that we got into because our platform used to work in the way where a person could more easily sign into an app and bring data that their friends had shared with them, under the idea that if their friend had shared something with you, for you to be able to see and use that, you should be able to use that in a different app.
“But obviously we’ve seen the downsides of that — which is that if you bring data that a friend has shared with you to another app and that app ends up being malicious then now a lot of people’s data can be used in a way they didn’t expect. So getting the nuance right on data portability I think is extremely important. And we have to recognize that there are direct trade-offs about openness and privacy. And if our directive is we want to lock everything down from a privacy perspective as much as possible then it won’t be as possible to have an open ecosystem as we want. And that’s going to mean making compromises on innovation and competition and academic research, and things like that.”
Regulation that helps industry “balance these two important values around openness and privacy”, as Zuckerberg put it, would thus be welcomed at 1 Hacker Way.
Breton followed this monologue by raising what he called “the stickiness” of data, and pointing out that “access to data is the number one asset for the platform economy”.
“It’s important in this platform economy but — but! — competition will come. And you will have some platforms allowing this portability probably faster than you think,” he said. “So I think it’s already important to anticipate at the end of the day what your customers are willing to have.”
“Portability will happen,” Breton added. “It’s not easy, it’s not an easy way to find an easy pass but… what we are talking about is how to frame this fourth dimension — the data space… We are still at the very beginning. It will take probably one generation. And it will take time. But let me tell you something: in terms of personal data, more and more the customers will understand and will request that the personal data belongs to them. They will ask for portability one way or the other.”
On “misinformation”, which was the first topic Zuckerberg chose to highlight — referring to it as misinformation (rather than ‘disinformation’ or indeed ‘fakes’) — he had come prepared with a couple of stats to back up a claim that Facebook has “stepped up efforts” to fight fakes related to the coronavirus crisis.
“In general we’ve been able to really step up the efforts to fight misinformation. We’ve taken down hundreds of thousands of pieces of harmful misinformation. And our independent fact-checking program has yielded more than 50M warnings being shown on pieces of content that are false related to COVID,” he said, claiming 95% of the time people are shown such labelled content “they don’t end up clicking through” — further suggesting “this is a really good collaboration”.
(Albeit, back-of-the-envelope math says 5% of 50M is still 2.5 million clicks in just that one narrow example… )
Breton came in later in the conversation with another deflator, after he was asked whether the current EU code of practice on disinformation — a self-regulatory initiative which several tech platforms have signed up for — is “sufficient” governance.
“We will never do enough,” he rejoined. “Let’s be clear. In terms of disinformation we will never do enough. This is a disease of the center. So everything we have done has to be followed.”
“It’s a huge issue,” Breton went on, saying his preference as a former CEO is for KPIs that “demonstrate we’re progressing”. “So of course we need to follow the progress and if I’m not able to report [to other EU institutions and commissioners] with strong KPIs we will have to regulate — stronger.”
He added that platforms cooperating on self regulation in this area gave him reason to be optimistic that further progress could be made — but emphasized: “This issue is extremely important for our democracy. Extremely… So we will be extremely attentive.”
The commissioner also made a point of instructing Zuckerberg that the buck stops with him — as CEO — lightly dismissing the prospect of Facebook’s newly minted ‘oversight board’ providing any camouflage at all on the decision-making front, after Zuckerberg had raised it earlier in the conversation.
“When you’re a CEO at the end of the day you are the only one to be responsible, no one else… You have an obligation to do your due diligence when you take decisions,” said Breton, after scattering a little polite praise for the oversight board as “a very good idea”.
“Understand what I’m trying to tell you — when you are the CEO of an important platform you have to deal with a lot of stakeholders. So it’s important of course that you have bodies, could be advisory bodies, could be a board of director, it could be any kind of things, to help you to understand what these stakeholders have to tell you because at the end of the day the mission of a CEO is to be able to listen to everyone and then to take the decision. But at the end of the day it will be Mark that will be responsible.”
In another direct instruction, Breton warned the Facebook CEO against playing “a gatekeeper role”.
“Be careful to help our internal market, don’t play a role where you will be a systemic player, the gatekeeper controlling others to play with. Be careful with the democracy. Anticipate what’s going to happen. Be careful with disinformation. It could have a bad impact on what is extremely important for us — including our values,” he said, appealing to Zuckerberg “to work together, to design together the right governance tools and behavior” — and ending with a Silicon Valley-style appeal to ‘build the future together’.
The inescapable point Breton was working towards was that “just because something is not prohibited it doesn’t mean that it’s authorized”. So, in other words, platforms must learn to ask regulators for permission — and should not expect any forgiveness if they fail to do so. This principle is especially important for the digital market and the information society at large, Breton concluded.
A particular low point for Zuckerberg during the conversation came earlier, when Liebhaberg asked for his assessment of the effectiveness of the content moderation measures Facebook has taken so far — specifically in terms of how swiftly it’s removing illegal and/or harmful content. (Related: Last week France became the latest EU country to pass a law requiring platforms to quickly remove illegal content such as hate speech.)
Zuckerberg made a nod to his usual “free expression vs content moderation” talking point — before segueing into a claim of progress on “increasingly proactive” content moderation via the use of artificial intelligence (“AI”) and what he referred to as “human systems”.
“Over the last few years… we’ve upgraded all of our content review systems to now… our primary goal at this point is what percent of the content that’s going to be harmful can our systems proactively identify and take down before anyone even sees that? Some of that is AI and some of that is human systems,” he said, referring to the up to 30,000 people Facebook pays to use their brains and eyes for content moderation as “human systems”.
“If a person has to see it and report it to us we’re not going to catch everything ourselves but in general if someone has to report it to us then that means that we should be doing a bit better in future. So there’s still a lot of innovation to happen here,” Zuckerberg went on, adding: “We’re getting a lot better at this. I think our systems are continually improving.”
His use of the plural of “systems” at this point suggests he was including human beings in his calculus.
Yet he made no mention of the mental health toll that the moderation work entails for the thousands of people Facebook’s business depends upon to pick up the circa 20% of hate speech he conceded its AI systems still cannot identify. (He did not offer any performance metrics for how (in)effective AI systems are at proactively identifying other types of content which human moderators are routinely exposed to so Facebook users don’t have to — such as images of rape, murder and violence.)
Just last week Facebook paid $52M to settle a lawsuit brought by 11,000 current and former content moderators who developed mental health issues including PTSD on the job.
The Verge reported that, under the terms of the settlement, every moderator will receive $1,000 — which can be spent however they like, but which Facebook intends to go partly towards funding medical treatment, such as seeking a diagnosis for any mental health issues a moderator may be suffering.