The concept of “sensitive” categories pervades the policy structures governing online ad targeting; there is a sense that certain online activities are “out of bounds” when it comes to behavioral advertising. Both the Network Advertising Initiative and the Digital Advertising Alliance have defined “sensitive” categories that participants must avoid when targeting ads.
Google, which operates the world’s biggest ad network, recently established a third standard, and it’s one that the industry should embrace.
Google applies its rule to advertisers who want to use the Google network for “remarketing.” This is the practice of reaching users who have previously visited your site in order to woo them back. A site using remarketing allows Google to tag users when they visit the site, and Google later uses that same tag to find those users and show them ads on other sites they visit. The use of sensitive information in this way could cause greater concern than typical anonymous network advertising, since the website doing the remarketing may have stored user behaviors and characteristics tied to personally identifiable information.
Here’s how Google defines the boundary around “sensitive” data that cannot be used in remarketing campaigns on its network:
Company may not use User Lists to select or target advertisements (i) based on past or current activity by Users on adult or gambling sites, government agency sites, or sites directed at children under the age of 13 years or (ii) based on other inferred or actual sensitive information (including without limitation, health or medical history or information, financial status or other detailed information pertaining to a person’s finances, racial or ethnic origins, religious beliefs or other beliefs of a similar nature, the commission or alleged commission of any crime, political opinions or beliefs, trade union membership, or sexual behavior or orientation).
Google also provides considerable detail and examples for these categories (though it isn’t stated whether these boundaries also apply to Google’s direct targeting on AdSense, which uses a more limited definition).
Google’s definition above is not only more expansive than either the NAI’s or the DAA’s, it also contains a critical difference: it applies not just to the actual status of the user, but also to inferred classifications. For example, a site might infer that someone probably has high blood pressure because they researched that condition on a medical website. By contrast, the NAI and DAA standards, while differing from each other in important ways, speak only of “precise” health information, as if it would need to come from direct medical records to be deemed off-limits. Yet I’d be surprised if most consumers viewed such “inferred” health classifications with any less concern than “actual” ones.
The definition of “sensitive” ad targeting goes to the heart of consumer fears about data collection — that sensitive profile data might end up being used for harmful purposes, like insurance eligibility. The self-regulatory effort would be more credible if the various standards for sensitive boundaries were unified and strengthened along the lines of Google’s definition. Tighter restrictions on sensitive categories no doubt would destroy some ad targeting revenue, but the benefits to consumer peace of mind would be well worth it.