Terms Of Abuse

March 29, 2018


Data is the lifeblood of any algorithm. For newer AI marketing platforms to perform, they need access to multiple forms of data, which stem largely from user activity. The fact that consumers generate and provide this data is increasingly becoming a hot-button issue.


The “Face-alytic” maelstrom may someday be remembered as the real awakening of privacy awareness. It took a Facebook-sized breach to shine a glaring light on how personal data is gathered and used for marketing.


And even the most knowledgeable data experts I’ve spoken with have been surprised by the amount of data collected via Facebook’s messaging app and phone conversations.


John Montgomery, executive vice president, global brand safety at GroupM, speculates about the Facebook accusations: “It is either an issue with the data sharing rules or that they were not able to apply the rules as intended. Probably the latter.


“The level of public outrage is magnified because Facebook is a social network that people have trusted with their most personal data and they are now starting to understand how much data is being collected and used, and it has been amplified into an emotional issue,” adds Montgomery.


But it’s curious to me that this wave of negativity about Facebook is not just about the violation of terms; it is also a reaction to the conventional way marketers have used data all along, a practice legally spelled out in the terms of service users sign. In other words, Facebook’s terms weren’t understood, probably (surprise, surprise) because they weren’t read.


Seemingly no one ever reads terms of service, which are practically impossible to get through: typically, mini novellas set in four-point type and written in difficult-to-understand legalese. In his book “Future Crimes,” Marc Goodman quips that these contracts should more aptly be called “terms of abuse,” as they spell out how the user’s data will be owned and used in myriad ways, to the benefit of the company and sometimes to the detriment of the user.


Goodman uses the example of LinkedIn, whose privacy policy states: “You grant LinkedIn a nonexclusive, irrevocable, worldwide, perpetual, unlimited, assignable, sublicenseable, fully paid up and royalty-free right for us to copy, prepare derivative works of, improve, distribute, publish, remove, retain, add, process, analyze, use and commercialize, in any way now known or in the future discovered, any information you provide directly or indirectly to LinkedIn.”  


We sign because we haven’t read the terms, or simply don’t have time to fight the points that might be unfair. Usually nothing bad happens, but there is a question of whether these onerous agreements should be forced on people. Will consumers’ newfound awareness of how their data is used create more angst and more reluctance to participate in social spaces?


Like LinkedIn’s, other digital terms of service are outrageously presumptuous in laying claim to data. Would you be surprised to learn that photos sitting on your phone might be legally accessed to understand your relationship with the brands you have photographed? How would you feel if a film or book review you wrote was analyzed to create a psychological profile of you, the writer? Even if you are not personally identified, this is mighty personal information.


And who is to say these attributes won’t be linked directly to your identifiable profile, especially if a first-party agreement exists?


The reality is that data innovation using AI is laying bare details about users that they don’t even know themselves. An AI company I met recently claims it can diagnose illnesses merely by listening to and analyzing voices. What if you had signed an agreement allowing that to happen?


In a MediaPost article written a year ago by Mike Azzara, Esther Dyson predicted: “The advertising community has been woefully unforthcoming about how much data they’re collecting and what they’re doing with it. And it’s going to backfire on them, just as the Snowden revelations backfired on the NSA.” Does it feel like we are getting dangerously close to that prediction coming true?


Shelley Palmer’s recent blog post on this subject asserts that greater understanding of data is a good thing, and that users will ultimately have to get smart about finding ways to control their own data.


Platforms such as Facebook will continue to leverage user data, but they will need to make clear where the line is drawn and let their users know the score. Facebook has already promised to make privacy more straightforward and easier to understand.


“I think there is an opportunity to be completely transparent with how it’s done,” offers John Montgomery. 


Whatever happens next, consumers will be more tuned in to their data and how it’s used. So let’s please get it right.
