The Facebook Dilemma: It’s Not What You Think
What happened with Facebook and all of our personal information?
Facebook being cast as the bad guy is one of the most significant distortions in this story. The social network has been castigated for mishandling users’ personal information, but that isn’t quite right.
Facebook has, overall, strongly protected user privacy; but it has made some significant mistakes, which can be corrected going forward.
The dilemma Facebook has faced is how to provide the best user experience, which involves collecting and leveraging lots of data to understand each user, improve the service, and tailor it to each person, while also managing and protecting that information from abuse. It does this free of monetary charge to users, partly by building an effective marketing engine for advertisers, which likewise relies on collecting and managing personal data.
I believe that Facebook’s intent, in its own actions and most of its practices, has been to protect users’ personal information, putting a wall between some of that data and brands. Our company, on behalf of some of the largest brands in the world, has worked closely with Facebook for about ten years. One of our clients wanted Facebook to modify its policies and provide access to more information about users who came to the brand’s Facebook page.
Facebook refused as a matter of protecting user privacy — although it would have meant more revenue. For clarity, I’m talking about the information Facebook itself would allow a brand to access directly through its services, not the information a third-party company or brand could access with its own app provided via Facebook. This distinction is where the mistakes began.
Facebook’s first mistake: it enabled third-party applications to run on its platform and allowed those apps to collect user information, including email addresses, without an explicit enough opt-in. Similarly, to build a wider advertising network, Facebook let third-party websites use its registration system as the mechanism for users to sign up for those sites.
Facebook got a great deal of tracking information this way to use for additional advertising space and research. In return, the websites collected and made use of much of the users’ Facebook information. The user got easy access to all these applications and more tailored experiences.
In some cases the users were not clearly informed about the information they were sharing and how the third-party company would use it (or sell it). This explains how Cambridge Analytica was able to collect information about users without those users being explicitly aware of it.
As part of this muddy opt-in problem, for a while Facebook allowed those same third parties, once they had gotten you to sign up for their apps, to also access your friends list and collect your friends’ information without explicit opt-in. This was a case of the industry being so excited about connecting users through social interactions (the Social Graph) that it ran too fast and missed a fundamental permission requirement. A few years ago, Facebook identified this “collect information from friends” problem and changed its API so that friends’ data can no longer be collected without those friends’ consent.
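The permission model described above can be sketched in a few lines of code. This is a conceptual illustration only, not Facebook’s actual API: all names (`User`, `accessible_data`, the scope strings) are hypothetical. The point is the rule change — an app sees only the data categories a person explicitly granted, and under the corrected model a friend’s data requires that friend’s own consent, not merely yours.

```python
# Conceptual sketch of scope-based consent (hypothetical names,
# not Facebook's real API).

class User:
    def __init__(self, name, granted_scopes):
        self.name = name
        # Data categories this user explicitly opted into sharing.
        self.granted_scopes = set(granted_scopes)
        self.friends = []

def accessible_data(app_scopes, user):
    """An app sees only the scopes the user explicitly granted."""
    return app_scopes & user.granted_scopes

def accessible_friend_data(app_scopes, user):
    """Post-change rule: each friend's data is gated by that
    friend's OWN consent, not by the signed-up user's consent."""
    return {f.name: app_scopes & f.granted_scopes for f in user.friends}

alice = User("alice", {"email", "likes"})
bob = User("bob", set())        # bob never opted in to anything
alice.friends.append(bob)

app_scopes = {"email", "likes", "friends_likes"}
print(sorted(accessible_data(app_scopes, alice)))  # ['email', 'likes']
print(accessible_friend_data(app_scopes, alice))   # {'bob': set()}
```

Under the earlier, flawed model, signing up Alice would have exposed Bob’s data too; the corrected rule returns nothing for Bob because he granted nothing.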
The second mistake: Facebook relied on third parties’ word of honor as to whether they were abiding by its policies, for example by not selling user data to someone else or using it in inappropriate, manipulative, or illegal ways. This left the door wide open for all kinds of abuse. At the time, the common belief among technology platforms was that managing user privacy was the responsibility of whichever company or app was interacting with the user.
So to the extent that Facebook itself was the venue, it was Facebook’s responsibility to protect privacy; but if a third-party app was doing the interacting, then it was that company’s responsibility.
The third mistake is the really big one. Facebook clung too long to the philosophy that “we’re just a technology platform that empowers, but doesn’t censor or control, what third parties or users do.” Essentially, Facebook turned a blind eye to how its own revolution demands a greater, if different, kind of responsibility.
The technology platform philosophy makes sense coming out of the personal computing age, but not in the world of social media, where applications mix with media publishing and user content, and where the opportunity to collect, manipulate, and abuse information and targeting is much greater.
Some say Facebook is actually a media company, with all the editorial control, responsibility, and liability that goes with that. That’s not quite true either, because it is indeed a platform that does empower independent third parties — and even more so, users — to do all kinds of things.
The problem, and the solution, rest in the fact that Facebook is neither just a platform nor just a media company. It is a mix of platform, media, and consumer content usage patterns. This last one is the most important: Facebook’s own content, brand content, and published media content together are a tiny, tiny fraction of the ever-changing user content and behavior on the network.
Think of it as a highway system — not just the highway, but the entire system of lanes, on- and off-ramps, bridges, traffic lights, traffic monitoring and management, the vehicles, and even the people in the vehicles.
Facebook should be held responsible for the infrastructure (which it has always accepted). What is new is that it is also responsible for the rules and tools of usage, including how laws are managed, and even for educating people about them. It has struggled with this responsibility, hesitated, and avoided managing it. But if it takes on all that responsibility and manages it properly, and a third party or a user still insists on crashing a car into everybody else or driving off the bridge, we cannot then blame Facebook.
Still, society needs Facebook to step up to the challenges of protecting and managing personal information in this new social media ecosystem model. But how? We will answer that in Part 3.