Let’s Just Agree That Location Data Is Inaccurate
Location data sellers typically tell brands that a user went to the Starbucks at 33rd and 5th in New York at 5 p.m. But if you know how location data works, you know that this degree of precision is impossible in the vast majority of cases, especially in big cities, where satellite signals bounce off tall buildings, or in sprawling suburban malls with multiple levels.
Location data, by its very nature, is inaccurate. Yet, location data today is routinely packaged and sold as “accurate.”
In fact, what marketers buy is a black box, potentially unsuitable for the intended task. With no view into a data set's limitations, or into the reliability of the data itself based on its composition and relative quality, buyers don't really know what they are getting, or how to apply location data to a given use case. For marketers to get any value out of location data, this sales process needs to change, quickly.
Different use cases require different data
Marketing disciplines work toward different objectives, so different data sets naturally align with different tasks. Brands and marketers should be able to determine, in detail, the quality and composition of different location data sets when assessing their relative value, ROI, and a host of other significant KPIs.
Attribution measures campaign effectiveness by tracking how many consumers saw an ad and then visited a store. Here, location data is quite valuable for measuring ROI, and a visits dataset often provides the most value for use cases that require this degree of accuracy. Such a set is a synthesized pool of a user's visits to known points of interest (POIs), including a timestamp, the latitude and longitude of the visited location, a unique venue ID, the number and names of nearby venues, and the user's dwell time (or length of visit), among several other data points.
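To make the attribution use case concrete, here is a minimal sketch of exposure-to-visit matching. The log shapes, device IDs, and the seven-day lookback window are illustrative assumptions, not any vendor's actual methodology:

```python
from datetime import datetime, timedelta

# Hypothetical minimal logs: (device_id, time) pairs for ad exposures and store visits.
exposures = [("dev-1", datetime(2024, 5, 1, 12, 0)),
             ("dev-2", datetime(2024, 5, 1, 12, 5))]
visits = [("dev-1", datetime(2024, 5, 1, 16, 30)),
          ("dev-3", datetime(2024, 5, 1, 17, 0))]

def attributed_visits(exposures, visits, window=timedelta(days=7)):
    """Count store visits that follow an ad exposure on the same device
    within the attribution window."""
    first_exposure = {}
    for device, t in exposures:
        # Keep the earliest exposure seen for each device.
        first_exposure.setdefault(device, t)
    count = 0
    for device, t in visits:
        if device in first_exposure and \
                timedelta(0) <= t - first_exposure[device] <= window:
            count += 1
    return count
```

In this toy data, only `dev-1` both saw the ad and later visited, so the function counts one attributed visit.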
In other scenarios, a pure dataset may suffice. These lightly filtered streams of aggregated location data still include a timestamp and the coordinates of a given interaction, but also account for horizontal accuracy, IP address and source-ID. This is a good fit for a sophisticated brand with the capability to attach their own POIs, and analyze and extract value from data in its raw form.
These different location data sets have different intrinsic value, and they should be sold as such. This is the only way that those who buy location data will be able to do so with confidence and clarity.
Pulling back the curtain on location data markets
To achieve this state of transparency, data providers must measure the accuracy of their data, freely relate the methods they use to arrive at that measurement, and then score location data based on its quality for a given use case.
Think of it this way: butchers sell many cuts of beef, each at a different quality level and price point. Ultimately, we buy what suits our palate on a given occasion. Why should location data be any different? Data sets built on one or two data points per user interaction with a physical location signal low value. Buyers can have more confidence in sets with 50 or 100 data points: these are the prime cuts, if you will.
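A quality score of the kind proposed above could be as simple as a tiering heuristic. This is a toy sketch under assumed thresholds; a real scoring model would weigh many more signals (supply source, freshness, deduplication rate, and so on):

```python
def score_dataset(fields_per_record: int, median_horizontal_accuracy_m: float) -> str:
    """Toy quality tier: richer records and tighter accuracy rank higher.

    Thresholds are illustrative, echoing the 50-to-100-data-point
    'prime cut' framing; they are not an industry standard.
    """
    if fields_per_record >= 50 and median_horizontal_accuracy_m <= 20:
        return "prime"
    if fields_per_record >= 10:
        return "standard"
    return "low"
```

Publishing the thresholds and inputs behind such a score, however it is computed, is exactly the transparency the marketplace currently lacks.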
Location data’s long-term value comes from how well buyers understand how a particular dataset can be applied in a given use case. As an industry, we must become more transparent in how we communicate the quality and composition of location data. The only way to build a valuable marketplace is to build it on trust.