What do you mean, we?

Enough with the "we" stuff about fixing web advertising.

This is not a "we" problem. "We" can't promise to replace "ads that provoke blocking" with "better-performing ads", because ads that provoke blocking are the high-performing ads. As a web user, you're not seeing crap ads because the advertisers want to waste money and annoy you. You're seeing them because they test well.

Crappy, annoying, deceptive ads get clicks.

The terrible stuff on the web is there because it works.

Everyone agrees that "we" need to get rid of "bad" ads. Naturally, "we" is defined as "you" and "bad" is "not the ads that work for me." But because the same qualities that get response also provoke blocking, there's no equilibrium strategy here.

Imagine that all the right-thinking people agreed to L.E.A.N. or some other set of self-regulation terms. No auto-playing videos, no NSFW animations, no fake error dialogs.

The more that self-regulation limits crappy/click-getting practices, the more incentive for any advertiser who is willing to bend the rules and offer a little more money to run an ad that's a little bit creepier, a little more attention-getting or finger-fumble-attracting.

Incentives for bad practices are there because users can be tracked from site to site. That marginally extra-annoying advertiser will be able to find a publisher with marginal reputation, who claims to be able to reach the desired users and is willing to accept the ad. And self-regulation breaks down, or never really gets going in the first place. Cross-site tracking gives everyone an incentive to do advertising that gets clicks today and provokes ad blocking tomorrow.

So there's no "we" solution. The fixes for the web advertising problem will have to happen one user at a time. Every user who becomes harder to track from site to site helps give high-reputation sites a little more market power to enforce ad standards.

Publishers and brands need action from users

In today's web advertising, high-reputation and low-reputation publishers compete to reach the same users. And high-reputation brands are hard to tell apart from low-reputation ones.

High-reputation publishers and brands win when users get less trackable, but users have to be the ones to take the action.

So instead of putting everything in terms of "we", it's time to think about reciprocity and measuring the benefit from each additional tracking-protected user. Instead of hippy-dippy "we" stuff, relying on everyone to cooperate, let's talk exchange of value. Big Data is not just a tool to help with low-reputation strategies. Data-driven projects can help with high-reputation strategies, too.

Questions might include:

  • Which customers are most valuable to me when they're protected from tracking by low-reputation competitors? (For an HMO, what's the net present value of protecting a customer from quack diet ads? For a car insurer, how much is it worth to keep the most profitable customers from being picked off based on their social media usage?)

  • Which categories of readers are most valuable to the best advertisers on my site? How much does it cost me when adtech intermediaries can follow them elsewhere? What's a cost-effective tracking protection solution that I can offer them, to keep them from being reachable on low-value sites?

I'm not against "we need to work together" messages in situations where a cooperative solution is really workable or necessary. But for fixing web ads? Time to give it up.

ANSI standard ad-supported piracy?

The Trichordist blog started pointing out the ad-supported piracy problem quite a while ago, so let's have a quick look to see how well the adtech business has done at cleaning up its act.

Should no longer be a story, right? The Internet solves problems on Internet time, after all.

Here's the plan. I'll spend one minute doing a basic check, then go work on something else. It's not as if there isn't enough broken stuff on the Internet I could be figuring out.

So I'll do a web search for

[Michael Jackson MP3]

I'll make it easy for them by picking a well-known non-Creative-Commons recording artist. I'm expecting to come up dry here. (After all, why would any sensible Internet company send me to a pirate site when they could make some money by sending me to a legit music download site, or sell me some tracks themselves?)

ANSI ad on a pirate site

Ouch. Probably the most obvious copyrighted works in the world, and who's got their fingers in the pie?

  • Amazon
  • BuzzCity
  • Google, Google, Google
  • LinkShare
  • LiveInternet
  • OpenX

But turn off your banner blindness for a minute, and check out that banner ad.

It's an ad for the American National Standards Institute (ANSI).

Why is ANSI running an ad on a questionable, infringing site, when it could be buying ads on a legit site that covers engineering and science? Spewing ads into the web's less reputable corners just feeds the growing impression that "technology" is a rent-seeking, deluxe-bus-riding racket that's focused on diverting value from others instead of creating new wealth.

So here are a few questions for ANSI.

  • How did your ad end up on an infringing site? Can you retrace its steps?

  • What agencies or other intermediaries did you work with to place the ad? Did they make any guarantees about what kind of site it would show up on?

  • Have you received a refund for ad impressions on problem sites?

  • If you don't have the information to answer the first three questions, what is broken about the way you buy advertising?

I'll keep you posted on what I come up with.

The NAA just did privacy tools a big favor.

The Newspaper Association of America has filed a complaint (PDF) with the US Federal Trade Commission about four ad blocking practices. The NAA asks the FTC to:

  1. Require ad blockers engaged in “paid whitelisting” programs to end such programs or to cease misrepresenting the nature of their services to consumers.

  2. Require ad blockers to discontinue ad substitution practices.

  3. Require ad blockers claiming that they make publishers whole to cease making deceptive statements that mislead consumers.

  4. Prevent ad blockers from evading metered subscription services and paywalls.

(Washington Post story: Newspapers escalate their fight against ad blockers by Elizabeth Dwoskin)

If we clarify number 4 to include only deliberate paywall avoidance, and not privacy measures that accidentally reset the article count for "soft paywalls", then NAA has just done a huge favor for the developers of legit privacy tools.

The NAA has written a pretty good start for a code of conduct for privacy tool developers and users.

Legit privacy tools are in "compliance" with the NAA's rules already. If you look at the aloodo.org tracking protection tools page, everything we link to or recommend already avoids the four no-nos. It shouldn't be a problem for any tool to avoid all of these. Paid whitelisting is a naked protection racket; ad substitution is reputation-harming scribbling of unreviewed ads into a publisher's context (yes, adtech does it too, but that's not the point); and deception and sneaking in without paying are just so obviously wrong that why am I even typing this?

Some privacy tools can have the side effect of resetting a soft paywall, but publishers can protect a soft paywall from accidental resets, and I can get behind a code of conduct that bans functionality specifically intended to get around paywalls.


The first reaction to the NAA complaint was disappointing. (Please, Twitter and Medium, copy this YouTube feature already.) A bunch of early comments were along the lines of "well, existing adtech is bad, too!"

Yes, we know. Third-party tracking is not just a privacy issue. The trackability of users from high-value to low-value sites causes data leakage, which results in lower revenue for publishers, and enables fraud. And adtech targeting breaks economic signaling, which means publishers aren't just getting a smaller piece of the pie, it's a smaller pie.

Today's adtech is a trash fire of fraud, malware, and low revenue. But that means privacy tools have the opportunity to be different, by avoiding publisher-hostile schemes. When software developers send a privacy message but then just set a competing trash fire, they're wasting that opportunity.

Legit privacy tools and high-reputation publishers, working together, can transform advertising on the web. Tools and sites can help users block low-value, cold-call-like targeted ads while permitting signal-carrying ads, the ones that respect users' choices not to be tracked.

High-reputation publishers have a responsibility to both educate readers about the problems of adtech as usual and hold tool vendors to high standards. The NAA is making some real progress here.

How C.H.E.D.D.A.R. is your browser?

(Update: switched to BlockAdBlock.)

Doc Searls writes,

To have a deal, both parties need to come to the table with terms the other can understand and accept. For example, we could come with a term that says, Just show me ads that aren’t based on tracking me. (In other words, Just show me the kind of advertising we’ve always had in the offline world — and in the online one before the surveillance-based “interactive” kind gave brain cancer to Madison Avenue.)

Read the whole thing.

"Just show me ads that aren’t based on tracking me" is a message that you can send and receive today. You can build C.H.E.D.D.A.R. ads if you go to the right ad network, or install the right ad server. You can run a C.H.E.D.D.A.R. browser today, if you install (for example) Privacy Badger and Self-Destructing Cookies on Firefox.

All the pieces of C.H.E.D.D.A.R. exist, but they're just not integrated, branded, or made easy to install everywhere.

Are you already running a C.H.E.D.D.A.R. browser? Let's find out.


So, now all the JavaScript programmers have done a "View Source" on this page, and you're all like, wtf, that's it? A tracking protection detector and a check for a first-party ad element?

Yes, that's it. You can always write more refined versions of these, but the point is that you can do C.H.E.D.D.A.R. on the client without waiting for any new code on the server side, and you can do C.H.E.D.D.A.R. on the server without waiting for any new code on the client side.
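
As a minimal sketch of those two checks (the DOM measurements are stubbed out as boolean inputs, and the function and message text are invented for illustration), the client-side logic just combines the two signals:

```javascript
// Sketch of a client-side C.H.E.D.D.A.R. check. In a real page,
// `trackingBlocked` would come from a tracking-protection detector
// (e.g. a third-party script that fails to set cross-site state), and
// `adVisible` from measuring a first-party ad element after page load.
function cheddarStatus(trackingBlocked, adVisible) {
  if (trackingBlocked && adVisible) {
    return "C.H.E.D.D.A.R. browser: tracking blocked, first-party ad shown";
  }
  if (trackingBlocked) {
    return "conventional ad blocker: blocks first-party ads too";
  }
  if (adVisible) {
    return "unprotected: trackable from site to site";
  }
  return "inconclusive: ad missing but tracking allowed";
}

// In the browser, the ad check might look like (hypothetical element id):
//   var adVisible = document.getElementById('ad').offsetHeight > 0;
console.log(cheddarStatus(true, true));
```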

Wouldn't C.H.E.D.D.A.R. be better if we added an extra layer of protocol, or a special HTTP header, or something? No, because no server can tell how a client is configured. If a browser or extension sends some future new intent message, it doesn't reliably tell the site if the user is also running a conventional ad blocker. Considering that ad blockers are the most popular browser extensions, it's likely that many people who install a "we welcome ads not based on tracking" extension will also have tried an ad blocker, and might not even remember they left it on, or not know they have to turn it off.

Actually testing for the delivery of a legit ad, or fake ad element, is necessary. Combine that with DNT and tracking detection, and you get a reliable "Just show me ads that aren’t based on tracking me" message.

That doesn't mean that C.H.E.D.D.A.R. is anywhere near done. If you want to build software around it, there are a lot of potential projects.

  • Better tracking protection and tracking detection

  • New ways to test that a DNT-respecting ad has been delivered to a human user

  • DNT-respecting, fraud-resistant web analytics

  • A "Just show me ads that aren’t based on tracking me" button in privacy tools and ad blockers.

  • C.H.E.D.D.A.R. detection built into web content management systems

The problem with web ads, legit and otherwise, is much like the problem of telling opt-in email newsletters from email spam. Somehow the idea of a "spam-free replacement for SMTP" never really caught on. Instead, we got:

  • Legit email is the kind of email that makes it through existing spam filters.

  • A good spam filter is the kind of filter that lets legit email through (but blocks "spam")

The first spam filters got started before legit email senders had to be concerned about deliverability—but because of spam filters, deliverability is big business today.

If you write a new spam filter, or set up an email service, you have to let through the mail that people agree is legit. If you start a new service that sends mail, you have to pass the existing spam filters.

Different services have different ToSs, but we can send and receive email as ToSs change, because they all reflect a common set of norms around what is and isn't spam. And we never actually have to agree on a common definition of "spam".

We'll never get the web advertising problem nailed down in precise legal and technical terms. There will always be a mix of old and new clients and servers, a variety of laws and norms, and new inventions and business models. Whatever we come up with will have to be messy, imprecise, and resilient in order to stand a chance.

Service journalism and the web advertising problem

There's a toenail fungus photo in my morning news.

And it looks like it's an ad for some questionable toenail-fungus-treating multi-level-marketing scheme.

Yeech. How did that get on there? Pass the ad blocker already.

Forget tracking protection, forget new standards for responsible advertising, forget all that. Gross infected body parts and MLM ads before I have even had my coffee? Burn all this stuff down.

Terrible ads are a big reason why tracking protection seems like an incomplete solution to the problems of web advertising. Web users don't just block ads because people are good applied behavioral economists, seeking signal and filtering noise. A lot of web ads are just deceptive, annoying, gross, or all three. (Oh, right, some of them carry malware, too.)

Even if we could somehow combine the efficiency and depth of the web medium with the signaling power of print or TV, won't web ads still be crap? And won't people still block them?

It doesn't have to be that way.

Publisher standards

Print ads are less crappy than web ads. Why can't publishers enforce better standards on the web? How can a newspaper have memorable, well-designed ads in print, while the ads on the web site have users looking for the computer sanitizer?

It's hard for publishers to enforce standards when an original content site is in direct competition with bottom-feeder and fraud sites that claim to reach the same audience. And that competition is enabled by third-party tracking. As Aram Zucker-Scharff mentions in an interview on the Poynter Institute site, the number of third-party trackers on a site grows as new advertising deals bring new trackers along with them. All those third-party pixels and scripts—and a news site might have 50 to 70 of them—cause slowness and obvious user experience problems. But the deeper problem, data leakage, is harder to pick out. Any of those third parties could be leaking audience data into the dark corners of the Lumascape until it re-emerges, attached to a low-value or fraudulent site that can claim to reach the same audience as the original publisher.

Publishers can try to pin down their third parties with contractual restrictions, but it's prohibitively expensive for a publisher to figure out what any one tracker is up to. You know that sign at the corner store, "only two high school students in the store at a time"? If the storekeeper lets 50-70 kids in, he can't see who shoplifted the Snickers bar. The news site is in the same situation on third parties. Because any one publisher has contact with so many intermediaries, only the perpetrators can see where data is leaking.

A security point of view

Information security is hard. When you have to maintain software, you fix a bug when you can see that there's a bug. You don't wait until someone starts exploiting it. The earlier you fix it, the less it costs.

News sites work this way for some issues. If you found a bug in your site's content management system that would allow a remote user to log in as "editor" and change stories, you would fix it. Even if you had no evidence that random people were logging in, it's not worth taking the chance. Because it's so hard to catch data leakage in the act, it makes sense to apply the same bug-fixing principle. When there is an emergent bug in the combination of your site and the user's browser that allows for data leakage, then it is more effective to proactively limit it than to try to follow audience data through multiple third parties.

That doesn't mean just walking away from all third-party tracking. Henk Kox, Bas Straathof, and Gijsbert Zwart write, in Targeted advertising, platform competition and privacy:

We find that more targeting increases competition and reduces the websites' profits, but yet in equilibrium websites choose maximum targeting as they cannot credibly commit to low targeting. [emphasis added] A privacy protection policy can be beneficial for both consumers and websites.


If websites could coordinate on targeting, proposition 1 suggests that they might want to agree to keep targeting to a minimum. However, we next show that individually, websites win by increasing the accuracy of targeting over that of their competitors, so that in the non-cooperative equilibrium, maximal targeting results.

When publishers lack market power, they have to play a game that's rigged against them.

Changing the game

So how to turn web advertising from a race to the bottom into a sustainable revenue source, like print or TV ads? How can the web work better for high-reputation brands that depend on costly signaling?

C.H.E.D.D.A.R. is a basic set of technical choices that make web ads work in a signal-carrying way, and restore market power to news sites.

Some of the work has to happen on the user side, but tracking protection for users can start paying off for sites immediately. Every time a user gets protected from third-party tracking, a little bit of competing, problematic ad inventory goes away. For example, if a chain restaurant wants to advertise to people in your town, today they have a choice: support local content, or pay intermediaries who follow local users to low-value sites. When the users get protected from tracking, opportunities to reach them by tracking tend to go away, and market power returns to the local news site.

And users see a benefit when a site has market power, because the site can afford to enforce ad standards. (And pay copy editors, but that's another story.)

Service journalism

Users are already concerned and confused about web ads. That's an opportunity. The more that someone learns about how web advertising works, the more that he or she is motivated to get protected. A high-reputation publisher can win by getting users safely protected from tracking, and not caught up in publisher-hostile schemes such as paid whitelisting, ad injection, and fake ad blockers.

Here is a great start, on the New York Times site. Read the whole thing:

Free Tools to Keep Those Creepy Online Ads From Watching You by Brian X. Chen and Natasha Singer

The next step is to make it more interactive. Use web analytics to pick out a reader who is

  • valuable as an audience member

  • vulnerable to third-party tracking

  • using a browser for which you know a good protection tool

and give that reader a nice "click here to get protected" button that goes to your tool of choice. There is JavaScript to do this.
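
A sketch of that selection logic (the reader fields and the tool mapping here are invented placeholders; a real version would be driven by your analytics and your own tool recommendations):

```javascript
// Sketch: decide whether to show a reader a "click here to get protected"
// button. All field names and the tool list are hypothetical examples.
const protectionTools = {
  firefox: "https://www.eff.org/privacybadger",  // example tool per browser
  chrome:  "https://www.eff.org/privacybadger"
};

function protectionOffer(reader) {
  if (!reader.valuableAudience) return null;      // not worth interrupting
  if (!reader.vulnerableToTracking) return null;  // already protected
  return protectionTools[reader.browser] || null; // only if we know a good tool
}

const offer = protectionOffer({
  valuableAudience: true,
  vulnerableToTracking: true,
  browser: "firefox"
});
console.log(offer ? "Show button linking to " + offer : "No offer");
```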

Tracking protection for users means fewer ad impressions available at bottom-feeder and fraud sites, which means more market power for news sites, which means sites gain the ability to enforce standards. Put it all together, and no more toenail fungus ads before breakfast.

Markets for intent data?

This is an extended version of a question that I want to ask the participants at VRM Day 2016.

An idea that keeps coming up is the suggestion that prospective buyers should be able to sell purchase intent data to vendors directly. I'm having trouble thinking about how this would work.

Here's an offline example. It's a summer weekend, and I'm walking through an anchor store at the mall, looking at khaki trousers.

Here are two pieces of intent information.

  • "I'm cutting through the store on the way to buy something else. I wonder if there are any decent clothes on sale, since I could probably use some extras."

  • "I ripped my last pair of pants and I have a meeting on Monday morning. I have a lot of stuff to get done and I'm not leaving this store without a new pair."

On a hypothetical intent trading platform, what's my incentive to reveal which intent is the true one?

My intent information is worth something to me as confidential information going into a negotiation. The value could be low (I'm just looking) or high (I ripped my last pair). If a vendor is willing to pay me some price for my intent information, then in order for me to accept it, that price has to be greater than the value of the intent information to me plus the transaction costs to me of selling the information.
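
That acceptance condition is simple enough to write down (a toy model; the dollar figures below are invented):

```javascript
// Toy model of the condition above: a shopper sells intent data only
// when the offer beats the information's value as confidential
// negotiating material plus the shopper's transaction costs.
function willSellIntent(offer, confidentialValue, txCost) {
  return offer > confidentialValue + txCost;
}

// "Just looking": the information is worth little to me in negotiation.
console.log(willSellIntent(1.00, 0.10, 0.25)); // true
// "Ripped my last pair": high confidential value going into negotiation.
console.log(willSellIntent(1.00, 20.00, 0.25)); // false
```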

If I'm "just looking" and don't need the product right away, I'm willing to sell intent information for almost any price. But it's of little value to the vendor, because it just tells them that my intentions are to only accept an incredible bargain. I even have an incentive to spoof my intentions. If I can convince a vendor that I'm not interested, I might get a better deal. And a fraud perpetrator has an incentive to simulate a serious buyer. It seems that in any market for user intent data where the user gets paid, wrong data will be over-represented.

So where's the market for purchase intent data?

I can think of a few possibilities.

  • Consultative sales: Some sellers are willing to do valuable work for me if I'm serious. Taking my measurements at a clothing store, or supporting an evaluation for a business IT product.

  • Controlled circulation: The basic exchange here is an old magazine model. Give a free magazine to people who do something to prove that they're in the market for a certain product. On the Internet, you can make this as fine-grained as you want. Give the user the ability to share some attribute in exchange for some content. (For example, a local news site might let you read the music and theater section free, if you can share the fact that you recently bought a ticket for a show—then the site's ads can command a higher price because they reach known buyers.)

It's not rational for users to "leak" purchase intent data without compensation, and therefore it's rational to block or spoof any kind of asymmetrical data collection. But are there special cases where a trade for purchase intent data can happen?

  • In consultative sales, the sales person can set priorities based on purchase intent—the more the customer appears likely to spend, the more time and other value he or she can get. There's no up-front payment for intent data, but an ongoing exchange of value for data.

  • In controlled circulation, the marginal cost of adding a subscriber is small. The 1,001st subscriber costs much less than 0.1% of the total budget to serve. This is different from directly paying for intent data, where all 1,001 prospects (or 1,001 copies of the same fraudbot) cost the vendor the same.

So I guess I'm still a Big Data optimist. Remember, when email marketing started, most of the people who used email for marketing were spammers. Today, most of the marketing email in the world is still spam, but most senders are legit, opt-in email newsletters work, and we have a set of technologies and norms to separate the two. But pay-to-spam concepts never really worked out.

Are there any other examples of how a market for intent data can work?

Bonus links

Let's make an acronym.

The IAB has come up with "L.E.A.N." and "D.E.A.L." for strategies to face down the ad blocking problem. But if that's all we do, we would be wasting a crisis here. Worse, we have the adfraud crisis happening at the same time, so we would be wasting two crises.

The big problem from the web publisher point of view is:

The same content brings in an order of magnitude less ad revenue on the web than in print.

From the advertiser point of view, that looks like:

The web is a low-value advertising medium.

Making changes around the edges to try to slow down ad blocking won't help that. Web advertising is still on the downward slope of the peak advertising curve that any targetable ad medium goes through. For example, the "E" in "D.E.A.L." is a weak link. Explaining how web ads work today is likely to build more interest in blocking. The more targetable an ad is, the more rational it is to ignore, block, or regulate it. It's only good behavioral economics to pay attention to advertising when the ad medium can carry a hard-to-repudiate signal.

We can't get web ads out of the ad blocking rut, but there are ways to make the web work as a low-fraud, high-signal medium and get it off the peak advertising curve entirely. Doc Searls writes,

For example, we could come with a term that says, Just show me ads that aren’t based on tracking me.

Good idea. We can take the qualities that next-generation web advertising must have, and make them spell out a word. Best if the word makes it clear that we're working on the core problem. We don't have an "ad blocking problem" and an "ad fraud problem". We have one problem, and ad blocking and ad fraud are two symptoms.

So, acronym. Right. Let me take a whack at it.

CNAMEs: Ads, and other third-party resources such as analytics scripts, served from what looks to the browser like a subdomain of the publisher's domain, not from a third-party domain that appears on multiple publisher sites. This is a small change for third parties, but a big barrier to cookie-stuffing fraud. And responsible privacy tools won't block a dedicated subdomain that can't be used to track users across sites.

HTML5: avoid the malvertising risks of vintage plugins by using web standards only. Maintaining a reasonably secure device on today's Internet is hard enough. Users can't be expected to maintain problematic plugin software just to see the ads.

Encryption: Limit the ability of ISPs and other observers to gather user data that can be used for targeting later.

Data leakage protection: Many users are still unprotected from web tracking. When appropriate, notify them and offer incentives to get protected. (This is especially important for brands in data-sensitive categories such as health care, and for high-reputation brands that compete with low-reputation ones in categories such as financial services and travel.)

DNT: Respect user norms on tracking across sites. (Update 26 April 2016: Show that you do this by hosting a copy of the EFF DNT policy on the ad server.) Respecting DNT is better than tricking users into giving up information, because eventually, users figure out what they're uncomfortable with and take steps to protect themselves. Meet the users where they are instead of trying to move norms.

Accountability: accurate WHOIS info for everything. No anonymous registrations. Rob Leathern explains this better than I can. Malvertising and fraud are too easy otherwise. (Update: important for publisher sites too because of the brand-supported piracy problem. Any real solution to brand-supported piracy depends on cleaning up both third-party tracking, to protect users from being tracked to infringing sites, and contact info for any site where an ad can appear.)

Reciprocity: Now you have an ad medium that's worth something to both ends. It restores the essential bargain of advertising: an offer of signal from the advertiser for attention from the audience. The result is an ad system that's harder for scammers to defraud, valuable for the advertisers who pay for it, and rational for users to accept.
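
For the CNAMEs item, the server-side change is small: something like this one-line addition to the publisher's DNS zone (both domain names here are hypothetical):

```
; Serve ads from a publisher-specific subdomain instead of a shared
; third-party domain that appears across many sites.
ads.examplenews.com.    IN    CNAME    serve.example-adnetwork.com.
```

Because the subdomain is unique to the publisher, cookies set on it can't be used to recognize the same user on other sites, which is why responsible privacy tools can leave it unblocked. (For the DNT item, the EFF's policy file conventionally goes at /.well-known/dnt-policy.txt on the ad server.)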

So does C.H.E.D.D.A.R. work for you? Let me know, or just get started.

New BlockAdBlock-based method to detect tracking

Here's a new track.js script, based on BlockAdBlock.

If you are already using BlockAdBlock to alert users of "dumb" ad blockers (which is a good idea, because the best-known ad blocker gives users a false sense of security by participating in covert tracking) you can now use Aloodo with an almost identical interface.

Include the script with:

<script src="https://ad.aloodo.com/track.js"></script>

And set up your callbacks with:

if (typeof aloodo === 'object') {
    aloodo.setOption('debug', true);
    aloodo.onLoad(function () {
        // fake tracker iframe loaded (can also fire for an untrained Privacy Badger)
    });
    aloodo.onDetected(function () {
        // cross-site tracking confirmed
    });
}

The onLoad function gets called when the fake tracker iframe loads, and the onDetected function gets called when tracking is confirmed. The difference is because of the problem of an "untrained" Privacy Badger. If Privacy Badger is installed but has not learned to block ad.aloodo.com, then the onLoad function will get called even though the user has protection.


  • Use onLoad to correctly alert more users of list-based protection. (In this case you will have to let Privacy Badger users know that they can take a test to check their results.)

  • Use onDetected to avoid alerting untrained Privacy Badger users. (You will fail to alert some vulnerable users of list-based protection.)

Because Aloodo has to use a third party and wait for the iframe to load, this script can't be as fast as BlockAdBlock.



View source here or check out the project on GitHub for more info or to report a bug.

TV shopping with Rory Sutherland

Here's a great podcast episode: Ogilvy and Mather UK vice-chairman Rory Sutherland, on the Echo Chamber podcast. It covers a lot of topics, including the problem of how to shop for a TV.

That's a hard problem. TVs are complex, hardly anyone buys the same kind twice, the market may contain deceptive sellers, and shoppers have limited decision-making time. Sutherland can't afford to become perfectly well-informed about every available TV in order to make the buying decision that Homo economicus would. He has to make a satisfactory decision with limited information.

Brand advertising solves a problem.

How does Sutherland manage to buy a TV when he doesn't have time to learn much about TVs? The same way most people do, by using our existing human-reputation-evaluating wetware. "We solve the easier problem, which is can I trust the person selling it," he says.

It would be costly for a company that already has brand equity, such as Sony or Panasonic, to sell a bad product. A company selling crappy TVs under a low-value brand name can easily spend $5 on a new logo when people catch on, but a company with an established brand would be taking a bigger risk. Sutherland says,

We interpret the significance of a message not only by the information it ostensibly contains, but by the cost in terms of effort and expense that has gone into the creation of that message, and also by the cost consequent on the sender if that message is wrong.

Listen to the whole thing.

You're back? Good. I know that the podcast covered a lot of topics, but I'm going to dig into the TV shopping part a little bit more.

What does a reputable brand need from an advertising medium?

Low-reputation and high-reputation sellers need fundamentally different qualities from an advertising medium.

  • Low-reputation sellers need to split the audience into small groups in order to deceive the uninformed while escaping the attention of those who seek to build their own reputations by enforcing norms.

  • High-reputation sellers need to send a costly, hard-to-repudiate signal to a large audience that includes low-information shoppers and well-informed recommenders and norm enforcers.

With today's web advertising, sellers have the ability to target but not the ability to signal. Web ads therefore meet the needs of low-reputation sellers very well, but are inadequate for the needs of high-reputation sellers. A recent meeting at PageFair on the subject of fixing web ads came up with this:

Use of contextual targeting to establish ad relevance should be increased. This will end the over-reliance on behavioral tracking of users.

That's a good start, but it doesn't solve the problem for brands that are working on building brand equity. If, when a user sees an ad, that ad could have been targeted to him or her, it's worthless for signaling. Remember, we're all pretty good applied behavioral economists here. We can tell a high-signal ad medium from a low-signal one because we can see where ads can "follow us around".

As Bob Hoffman's refrigerator test shows, brand equity isn't built on the web. This is not just a concern for web publishers that want a piece of the market for high-value advertising. It's also a problem for brands that need to be able to move signal-carrying ads to the web as print advertising goes away.

Tracking protection for brand advertisers

Instead of simply relying on the web medium to slowly fix itself, high-reputation sellers can now experiment with tracking protection campaigns in order to give signaling power to advertising on the web. When a web visitor comes to check out the specs of a TV, test if the browser is vulnerable to third-party targeting. (There are easy ways to do this with a script included on a page.) Offer unprotected users a chance to get protected by installing or turning on some kind of tracking protection.
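One way to run such a test, in the spirit of how Aloodo works: try to load a pixel from a domain that appears on common tracking-protection blocklists, and see whether the request goes through. A minimal sketch, with `tracker.example` standing in for a real blocklisted probe domain:

```javascript
// Classify the result of a tracking-protection probe.
// probeLoaded: did the image from the blocklisted domain load?
function checkProtection(probeLoaded) {
  return probeLoaded ? "unprotected" : "protected";
}

// Browser wiring (does nothing outside a browser): fire a probe
// request at a domain known to appear on tracking blocklists.
if (typeof document !== "undefined") {
  const probe = new Image();
  probe.onload = () => console.log(checkProtection(true));   // trackable
  probe.onerror = () => console.log(checkProtection(false)); // protected
  probe.src = "https://tracker.example/pixel.png";
}
```

An unprotected visitor can then be offered the install or settings link; a protected one can simply be shown the signal-carrying ad.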

Now you're communicating with a tracking-protected user for whom the web works as a signal-carrying medium—a user who's no longer in the pool of commodity eyeballs being offered to deceptive sellers. Besides electronics, other high-reputation categories that are potential good fits for tracking protection include:

  • Insurance and other financial services

  • Health care

  • Hotels

With tracking protection, a brand's audience can see signal-carrying ads, like the Canvas ads on The Next Web, without the targeting-based low-value ads that tend to devalue the medium. I'll save a spot on my refrigerator shelf for a brand that can build its reputation with a tracking protection campaign and pass the refrigerator test.

Look for goods that are difficult to evaluate at point of purchase and where an experience with a deceptive seller can be costly, and you'll find a client for a tracking protection campaign. Shoppers are usually much better at applied behavioral economics than marketers give them credit for. Tracking protection is a way to make a better-informed, better-signaled-to audience work for brands.

HTTPS support on the Kloudsec CDN

The ad.aloodo.com site is now on the Kloudsec CDN, using Kloudsec's automatic support for Let's Encrypt.

I'm not bragging on my own elite skills here. It only took about five minutes of actual work. (Not counting fixing a problem on our side that one of our users ran into when switching to the HTTPS version of Aloodo.)

Why is going encrypted now so simple from the webmaster point of view? Because Kloudsec took the great work of the Let's Encrypt project and built a straightforward web workflow around it, complete with checking and troubleshooting for the parts they can't automate, such as setting up the right records in DNS. We use GitHub Pages to host ad.aloodo.com, and Kloudsec has a GitHub Pages integration to make it even easier.

By the way, we also now have a global CDN.

Protecting users from targeted ads

Putting your site on HTTPS can block ad injection attacks on users of your site. I remember the first time I went to the Oakland airport and saw injected ads on one of my own sites. Please check out Let's Encrypt and Kloudsec to help protect your users.

And of course, another important kind of protection you can offer is a tracking protection warning, with Aloodo. Here's how to get started.

Bike helmets

An adtech proponent once suggested this to me (slightly paraphrased).

You can keep track of households that have children and are interested in cycling. Then when the parents visit a news site, you can offer them a 25% discount on children's bike helmets!

Cool story bro.

But that kind of thing is the lowest-value part of advertising. Adtech people are trying to chase another $50 billion by rearranging database marketing and throwing more computers at the problem. But that's not where the money is.

Let's think about that bike helmet example a little more.

Very few parents, fortunately, have direct experience of the quality of a children's bike helmet. You buy the thing, keep it for a while, the kid outgrows it or loses it, and you're done with it.

Unless something really bad, that you don't want to think about, happens. Some event where the difference between a good bike helmet and a not-so-good bike helmet means the difference between remember that time I crashed my bike and we had to go buy a new helmet? and something else.

Bike helmets are really hard for shoppers to tell apart. It looks like they're all basically made out of the same stuff. Some hard plastic, some foam, some nylon straps. But some designs are better than others. Specific products change all the time, but some companies are good at some product categories. Some brands have a good reputation.

Reputation. Some company is known for doing well at engineering, testing, and manufacturing? And I can pay that company to feel better about the protection of my child's brain? Like the man says, shut up and take my money.

Where does that reputation come from? It doesn't come from stalking users with the digital version of windshield flyers. Building brand equity is complicated, and depends only partly on advertising. And the advertising piece of the puzzle depends on where the ad appears, and how consistently and expensively it appears.

Product ads and product-related editorial need each other. High-reputation news and reviews come from a high-reputation source, and a publication's reputation depends on having the budget to write independently. The budget depends on ad sales. And signaling connects it all up. A brand's ability and willingness to advertise in a high-reputation publication carries a powerful signal of its intent in the market. A high-reputation publication isn't afraid to write bad things about problem products—even the ones that come from its advertisers.

Publisher reputation is even more important in regulated product categories. Shoppers know about regulatory capture even if they don't know to call it that. Government standards need independent reviews just as much as the products do. Same problem, one level up, even harder to do. (Anyone got a good link to a story about cadmium, a toxic metal, in art materials sold for children's use? And what regulations cover it? Let me know, I'll be over here chewing crayons.)

In real life, customers aren't in a funnel or a distillery, working their way down the pipe to purchase. Customers are active participants in markets and in communities of practice. Humans are wired to constantly try to measure the reputation and honesty of others. Everyone is picking up on signals, all the time.

So reputable publications run obviously expensive, hard-to-repudiate ads, which pay for more and better journalism and cultural works, which build publisher reputation, and that reputation brings in more ad money, and around and around the engine of wealth creation goes. Tracking and targeting users doesn't just take a piece out of publishers' share—it breaks the cycle. (Am I always getting ads for crap because someone thinks I don't know any better? Or am I seeing the same thing that more knowledgeable shoppers are?) Signaling and the open web are a great match, as soon as we web people can fix up all those 1990s bugs that allow for cross-site tracking.

Let's make web ads work. The first step in growing web advertising from a targeted medium to a signal-carrying medium is to get more users protected from third-party tracking, so that signal-carrying ads will stand out. Take a tracking protection test and be part of the transformation. Firefox users, try the new pq add-on to turn on Firefox's Tracking Protection functionality. The social silos can growth-hack their funnels and distilleries of stalking, couponing, and crap, while the open web makes mad cash with signaling.

What is data leakage?

We hear a lot about ad blocking and ad fraud, but surprisingly little about data leakage, the underlying problem that feeds both. What is data leakage, anyway?

Sophia Cope at NAA: Data leakage is the collection and monetization of an online publisher’s audience data by a third party without the publisher’s knowledge and consent.

John McDermott at Digiday: Data leakage typically occurs when a brand, agency or ad tech company collects data about a website’s audience and subsequently uses that data without the initial publisher’s permission.

Both definitions are based on publisher permission. Neither says anything about user permission; that's a separate issue. Data leakage is a publisher/third-party problem, not a user/publisher problem.

That's a good start. But "permission" is hard. Publishers often don't understand all the third-party resources on their own sites. Third parties bring other third parties along, and pages often end up with 50-70 third-party pixels, scripts, and iframes. (There's even a zone on the Lumascape devoted to services that a publisher can use to figure out what other services they're using.) Every one of those third parties comes with some kind of contractual "permission." All those permissions, though, are expressed in complex contracts. Even if a publisher can afford to comprehend the terms of one contract, the set of resources on a page can introduce unwanted data transfers working in combination. And the combinations of technology and contracts change faster than anyone can evaluate them.
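The scale of the problem is easy to see with a quick audit of how many distinct third-party hosts a page pulls in. A simplified sketch; it compares bare hostnames, so first-party subdomains and eTLD+1 grouping would need extra handling:

```javascript
// Given a page URL and the URLs of the resources it loads,
// return the distinct hosts that don't match the page's own host.
function thirdPartyHosts(pageUrl, resourceUrls) {
  const firstParty = new URL(pageUrl).hostname;
  const hosts = new Set();
  for (const u of resourceUrls) {
    const h = new URL(u).hostname;
    if (h !== firstParty) hosts.add(h);
  }
  return [...hosts];
}
```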

Learning from security software

It's inadvisable for security companies to call malware "malware" when the malware developer has a lawyer. The solution has been to come up with a new category name, Potentially Unwanted Programs. The definition is based on whether the software works against the user's interests, not on whether the user has clicked on "I agree" after not reading a complex, deceptive contract.

Labeling programs as PUPs is a small win. It is possible to define a problem in terms of interests and norms, not in terms of what someone has "agreed" to. We will be able to have a more productive discussion of the data leakage problem if we can shift the definition from one based on "permission" to one based on publisher interests and business norms. Data leakage is not a user permission problem (user permission is another story), but user norms and expectations can help establish the definition.

Here's a new definition.

Data leakage is collection or transfer of publisher audience data in a manner that violates the norms and expectations of the members of the audience.

Why is it important to define data leakage? Because we still have trouble collecting data on it, and we need to know what to measure. We don't know how much of the ongoing problem that newspapers have with building online ad revenue is attributable to data leakage.

In order to measure data leakage, we have to define it based on a reasonable standard, not on a mess of contracts that results in publishers unknowingly "agreeing" to practices that work against their interests.

Finding data leakage

We can look for leaks at the top of the pipe and at the bottom.

Bad bots visiting good sites: Fraudulent ad inventory gains value from associated user data, which is why even the best-run sites still get some fraudbot traffic. Bots are collecting cookies on legit sites, then showing up as valuable "users" on fraud sites. If your site has a small bot problem, then any advertiser paying to reach your users on other sites has a big bot problem.

Good audiences available in bad places: We don't have access to the "user data chop shops" that sit between legit publishers and sales of targeted impressions based on publisher data. But we can look at demand-side platforms to find out where valuable impressions are showing up, and measure the impact of anti-data-leakage tests.

We can't deal with ad blocking, ad fraud, and low ad revenue without understanding the data leakage problem. The good news is that data leakage is measurable, if we work it from the legit publisher end and the questionable inventory end.

Tracking protection layers and the next-generation browser

Joe Marchese writes, in the Wall Street Journal,

The situation with digital advertising is so dire that the only fix might be to reset. Start at zero. And how do we do that? We create a world where every consumer has an ad blocker. Then, we focus on how to earn consumers’ attention. To ask them to opt into quality advertising rather than dealing with a world where they’re opting out in every way they can.

Marchese is right that the wide-open, completely trackable web browsers of the 1990s are no longer workable on today's Internet. When a browser allows third parties to follow users from site to site, it enables data leakage, fraud, and malware, and makes the web as a medium less trustworthy.

But dumb ad blockers aren't the solution either. If you like reading William Gibson stories today, thank an OMNI Magazine advertiser from the 1980s. Advertising done right can support great photography, reporting, and fiction that would not otherwise exist.

Somehow, the browser needs to both protect sites and users from the problems of intrusive tracking and also protect the power of advertising to support journalism and cultural works. We can get a peek at the browser of the future by looking at what works on the leading browsers and privacy extensions today.

0. Send accurate Do Not Track

Inform sites of the user’s preferences on data sharing. Mainstream browsers already do this, and now that MSIE does not set DNT by default, it's a clear message about what the user intends.
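At the time, browsers exposed the setting to page scripts as `navigator.doNotTrack`, with "1" meaning the user asked not to be tracked. A sketch of a site script honoring it (older IE builds used differently named properties, which this ignores):

```javascript
// Returns true when the visitor has asked not to be tracked.
function dntEnabled(nav) {
  return Boolean(nav) && nav.doNotTrack === "1";
}

// In a page: if (dntEnabled(navigator)) { /* skip tracking calls */ }
```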

1. Block connections to third-party trackers

Browsers have to be able to avoid connecting to any third-party site that does not comply with user norms. This may take the form of honoring a blocklist, like Tracking Protection for Firefox, monitoring tracking behavior, like Privacy Badger, or both. (In some cases, third-party tracking is gatewayed through a first-party host, so layer 1 protection may need to block URL paths, not just hosts.)
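Layer 1 matching can be sketched as a check of each outgoing request against a blocklist of hosts, optionally narrowed to URL path prefixes for the gatewayed case. (Real blocklists, such as the one behind Tracking Protection for Firefox, use richer rule formats; the rule shape here is made up for illustration.)

```javascript
// Decide whether to block a request, by host and optional path prefix.
function shouldBlock(requestUrl, blocklist) {
  const { hostname, pathname } = new URL(requestUrl);
  return blocklist.some(rule =>
    rule.host === hostname &&
    (!rule.pathPrefix || pathname.startsWith(rule.pathPrefix)));
}

const blocklist = [
  { host: "tracker.example" },                      // block the whole host
  { host: "cdn.example", pathPrefix: "/beacon/" },  // gatewayed tracker path
];
```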

2. Limit data sent to third-party sites

Apple Safari's third-party cookie policy is the best-known example of layer 2 protection. Cookie double keying, where the same tracker will see different values for the same cookie on different sites, is another way to limit third-party data collection. The Crumble extension for Google Chrome appears to implement cookie double keying.

Layer 2 protection is important in the case of third-party sites that both provide user-visible functionality and permit data leakage. (Privacy Badger uses layer 2, in the form of cookie blocking, for its "yellowlist" domains.)
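The idea behind double keying can be modeled as a cookie jar partitioned by the top-level site: the same tracker reads a different value on each site that embeds it, so its cookie can't link visits across sites. (A toy model, not Safari's or Crumble's actual implementation.)

```javascript
// A third-party cookie jar keyed by (top-level site, third party).
class DoubleKeyedJar {
  constructor() { this.jar = new Map(); }
  key(topSite, thirdParty) { return `${topSite}|${thirdParty}`; }
  set(topSite, thirdParty, value) {
    this.jar.set(this.key(topSite, thirdParty), value);
  }
  get(topSite, thirdParty) {
    return this.jar.get(this.key(topSite, thirdParty));
  }
}
```

The tracker still gets working cookies for per-site functionality, but the identifier it set on one site is invisible on the next.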

3. Scramble or delete unsafe data

If a tracking cookie or other identifier does get through, delete or scramble it on leaving the site or later, as the Self-Destructing Cookies extension does. This layer of protection matters in two important situations.

  • Clean up browser state after a site visit in which the user turned off layer 1 or 2 protection. When sites ask the user “turn off your ad blocker to see this content” the user is choosing to accept the ads on the page, but is not choosing to leak data to unknown third parties.

  • If a particular cookie or other piece of browser state is discovered to be involved in tracking after it has already been set for some users, clean up existing copies in a way consistent with the user’s preferences, much as antivirus software regularly downloads updates for newly discovered malware.
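The first case can be sketched as a periodic sweep that keeps cookies only for hosts the user still has open, in the spirit of Self-Destructing Cookies (a toy model keyed by bare hostname):

```javascript
// Drop stored cookies from any host whose site the user has left.
// cookieStore: Map of host -> cookie value; openHosts: Set of hosts
// with tabs still open. Returns the cleaned-up store.
function sweepCookies(cookieStore, openHosts) {
  const kept = new Map();
  for (const [host, value] of cookieStore) {
    if (openHosts.has(host)) kept.set(host, value);
  }
  return kept;
}
```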

Bringing it all together

No one browser or extension has all of the pieces today. Persistent users can put together a reasonable protection toolkit with a little downloading and configuration, but for mainstream adoption there is room for a lot of development and testing.

The prize for success, though, is about much more than saving the existing web advertising business from ad blocking. It's about leveling up web advertising from a low-value medium that can't pay for new works, to a higher-value medium that can. For example, today it's hard for authors to earn money from short fiction because the high-revenue ad model of magazines doesn't work on the web. Cut out tracking, and the data leakage, fraud, malware and loss of signal that come with it, and it's a whole new web ad business.

While the ad blocking debate makes a lot of noise, the real action is in improving the browser to enable advertising to pull its weight—starting with enforcing common-sense norms about user tracking.

Conversation with Doc Searls

(Mailing list discussion between Doc Searls and Don Marti, formatted and edited for the web. The original question was about the SilverPush tracking system.)

Don: When a communications platform facilitates surveillance marketing, it has a bug and needs to be fixed.

Adtech and malware use many of the same vulnerabilities, and even if all adtech companies reform, or governments regulate them out of using vulnerabilities, malware is still out there. Platform developers have to fix the bugs, keep hardening the platform to resist future ones, and move on. (The alternative is a Peak Advertising effect, where a targetable medium becomes less and less valuable for advertising over time.)

Doc: Can you differentiate adtech from what Google and Facebook do with their ad platforms? I’m getting questions about that lately, as all get lumped together, and ads on all are blocked by ad blockers.

Adtech can have narrow or broad definitions.

The narrowest definition is a system that implements the Fundamental Value Proposition of Adtech as defined by Michael Tiffany: diverting advertising money from high-value sites to low-value sites by tracking users who visit both.

An economic definition of adtech would be any system that relies on information about the user to reduce the signaling value of an advertisement.

Google has search ads that have some adtech built into them, but could work without it, and probably better. Some other Google ad products are pure adtech.

Facebook ads are pretty close to pure adtech.

Tracking makes mountains of money for Facebook: $4.5 billion in the last quarter. Not bad for a system that trades off signaling value. (In fact nearly all the advertising I see on Facebook is a breed of spam. But maybe that’s me.)

Facebook is as much an outlier in web advertising as Apple is in hardware.

Facebook takes up about 40 minutes per day for the average person in the USA.

Print newspapers are still around 14 minutes/day for the average person in the USA. (Both averages include non-users.)

Newspaper ads in this country are roughly as big a business as Facebook advertising in total, worldwide. ($16.4B in 2014.)

Take the crappiest known ad medium on a per user-minute basis—Facebook ads—multiply by an insane number of user minutes, and you still get real money. But advertising that's only as valuable as Facebook advertising isn't strong enough to support news gathering and cultural works, the way that other ad media are.

Aside from that, I think the simplest definition of adtech is surveillance-based advertising. Maybe we should call it SBA.

I also think the difference with Google and Facebook is that users of both know they’re being followed and live with it (and, in some cases, like it) — while adtech (or SBA) outside the obvious orbits of both those companies has unknown provenance.

That's what was so interesting about retargeting. As soon as people started seeing ads "following them around", reactions to the whole medium started to change, most noticeably in ad blocking behavior.

The bigger question is: Why is surveillance marketing even a thing? Why do new exploit-based companies like SilverPush keep getting funded?

First, everyone on the Internet is inside some kind of filter bubble. The more complexity and consensus found within the filter bubble, the less information can enter from outside. (Windows NT and Unix were both difficult to keep working, so most of the people who were up to date on either were late to see Linux. Today, Linux is complex, popular, and loud.)

The adtech scene is so complex and shares so many assumptions that it's hard to consider alternatives.

There needs to be a master list of those assumptions. Or at least of the arcana involved.

Writing down the hidden assumptions of surveillance marketing would be like writing down the Bro Code (and they're very similar.) Start with the assumption that marketers are experimenters, customers are subjects, not that both sides are playing the same market game, and go downhill from there.

Another problem is that advertising is full of principal-agent problems. Today's adtech ecosystem is structured to be a win for ad agencies, which are at risk of being disintermediated by the Internet.

Advertising is always in tension between "sell products to customers" and "sell advertising to clients." It seems to have shifted further over to the second, because user tracking makes it so easy to generate mathematical-looking graphs.

Numbers are crack. I was in the business and I’ve seen it first hand. I even made money coming up with an algorithm for factoring out seasonal variations in radio station ratings, all the better to win bets with buddies when each new “book” came out.

And the numbers today are so many more, and so much more precise, and so much more intoxicating, and so much prettier in PowerPoints. For an agency to sell a company today on a truly creative and signal-ful ad must be ridiculously hard. Though it can happen. I thought Volkswagen’s diesel test drive ads were terrific. (Alas, the company later shot itself in the tires, but that’s a different matter.)

Ad agencies won't give their own numbers away, but some will have an advantage when some of the surveillance marketing numbers stop working and other kinds of data collection become more important. (The big opportunity for VRM comes as soon as Marketing realizes that fair trade data is easier and/or more accurate than surveillance-derived data. Making surveillance harder, less accurate, and more expensive pushes the balance towards VRM.)

Finally, Internet people were slothful in fixing early bugs that enabled the current generation of surveillance marketing, and we aren't clearly sending a message to investors (who should be scared off by the prospect of investing in a known software bug) that we're going to do better in future.

What are those, exactly, and can they still be fixed?

Any time a browser allows for fingerprinting or tracking, it's a bug. Mozilla tracks fingerprinting bugs, and the Tor Browser depends on that work.

The bugs will never all be fixed—software is hard and browsers keep getting new, potentially fingerprintable, functionality—but we can reduce the expected returns on surveillance marketing investments, and increase the returns on positive-sum investments, by reducing the time that any given bug remains open.

The good news is that the best browser for security and privacy is also the best browser for legit business. I don't agree with most of what the retro, anti-digital, old-school ad people are saying. "Big Data" could be much, much more useful for everybody, if people would only stop thinking of it as a way to automate the carny trick of putting chalk marks on audience members' backs and put it to uses that people agree on and don’t have to be hidden or made confusing. (Clicking “I agree" is not agreeing.)

Ya still gotta like the likes of Why I Lied by Bob Hoffman the Ad Contrarian.

Sure, but fixing tracking does not have to mean taking a step backward in technology.

IMHO, creative ad people and creative web people could have an awesome conference if we managed to exclude all surveillance marketing from it.

Is advertising ruining the web? Is the web ruining advertising? No way. Web people who think that "advertising" is creepy are just as mixed up as advertising people who think that "the web" is creepy.

Every set of new technologies has the obvious, "hey we could do THIS with it" application, and surveillance marketing is the one that a lot of people have come up with for the Internet. But once we can get past it—and make the net more trustworthy for more people—there are better opportunities.

Publishers complain about ad blocking, but that's only a tiny part of the problem. The real loss to advertising on the Internet is that users have mentally marked down the value of an ad online to the value of spam.

"For the past five years, newspaper ad revenue has maintained a consistent trajectory: Print ads have produced less revenue (down 5%), while digital ads have produced more revenue (up 3%) – but not enough to make up for the fall in print revenue."

Newspapers: Fact Sheet By Michael Barthel

The same news stories can bring in more ad revenue in a low-tracking medium, print, than in a high-tracking medium, the web.

Fixing the bugs that enable surveillance marketing is a way to build a more valuable ad medium. Web sites and ad agencies can get a head start now. It's easy to get started.


Why users will have a L.E.A.N. beef with adtech

Scott Cunningham, Senior Vice President of Technology and Ad Operations at the adtech industry organization IAB, is introducing a new concept called "L.E.A.N. Ads."

Today, the IAB Tech Lab is launching the L.E.A.N. Ads program. Supported by the Executive Committee of the IAB Tech Lab Board, IABs around the world, and hundreds of member companies, L.E.A.N. stands for Light, Encrypted, Ad choice supported, Non-invasive ads. These are principles that will help guide the next phases of advertising technical standards for the global digital advertising supply chain.

The media are covering this as some kind of change in direction for IAB, but the more I look at it, the more it looks like it follows the Law of Ad Blocking Solutions, which is: your first idea on what to do about the ad blocking problem is to do more of what you were planning to do anyway.

Let's see what L.E.A.N. is all about.

L is for Light. Don't hog the user's battery and network. Nothing wrong with that.

E is for Encrypted. Can't argue with that, either. (But it's kind of creepyweirdbadwrong that IAB members were willing to send user data in the clear for so long in the first place. Encrypting user data is a good thing, but it's the "zip up your fly before you go to work" of Internet business.)

A is for Ad Choice Enabled. The first two were basics, but now we're starting to get to the silly stuff. Have you tried AdChoices? Not the malware AdChoices, the real AdChoices. The adtech firms have to put some kind of privacy thing up in order to appease the FTC and the big bad Eurocrats, but nobody has an incentive to make AdChoices work for real, so it doesn't.

So far we're at two basic software-quality items and one nowhere-near-done, fragile opt-out thing. One more.

N is for "Non-invasive". That takes a little more unpacking. From a creative point of view, different kinds of ads will be "invasive" or not depending on what kinds of sites and devices they run on. So what does IAB want to do in order to get to a basic level of non-invasiveness?

Among the many areas of concentration, we must also address frequency capping on retargeting in Ad Tech and make sure a user is targeted appropriately before, but never AFTER they make a purchase.

Hold on a minute.

Never see an ad for something after you purchase one?

Finally, we must do this in an increasingly fragmented market, across screens.

So when I buy a pair of socks on my desktop computer, L.E.A.N. means I stop seeing ads for the same socks on my phone. From the adtech point of view, a little efficiency gain. But in order to make "never after" work, the adtech companies need to correlate all my purchases to all my web activity. Yes, that breaks signaling, but there's a more immediate reason why the "N" in L.E.A.N. will create more problems than it's worth.

People already don't like ads targeted using browsing history. And retargeting is already helping to make ad blocking go mainstream by making it obvious when ads are "following users around."

Now imagine the L.E.A.N. web, where you don't see an ad for the specific product you just bought. But because of all that data that the IAB companies needed to collect in order to get closer to "never after", more and more of the other ads you see can be targeted using browsing history plus purchase history. Adtech can never get all the way to "never after" but the closer it gets to having enough information to do it, the more opportunities it has to target users with not the same product, but related ones.

What would a L.E.A.N. web look like?

You bought a wood carving set, you start getting ads for first aid kits. Bought a chest freezer, get ads for bowhunting gear. It won't be just the one item you left in your shopping cart that follows you around, but many different related items somehow connected to everything you bought recently. Once the data is there to implement "never after," it will get used for other purposes. So users end up seeing more, not fewer, creepy ads.

And humans are great pattern matchers, even when no pattern exists. Once we start seeing ads for products that go with what we recently bought or looked at, then every ad will start to look like that. Is that jacket following me because I went to Super Duper Burgers and there was a guy in there wearing a jacket like that? Once you start to see related-product ads following you around, it will be hard to stop seeing them, and the sense of being watched, along with the incentive to block, can only grow. (What will the "that ad must be following me, but why?" environment do to personal hygiene product advertising?)

Protecting the web ad business?

L.E.A.N. is missing some important parts, but nobody has a whole solution to ad blocking and ad fraud, two problems that came up together and have to be addressed together. (You can't even measure one without the other.) But the real mistake is a bigger one. Too often, we're looking at the problem of how to protect the existing web advertising business, when web advertising still isn't working. The same content can bring in an order of magnitude more ad revenue in print than online. In any other technology business, failing to keep up with 19th-century technology would be cause to reinvent things from the ground up. It's time to apply the same standard to web ads, and not just protect the existing web ad business from ad blocking, but make a new web ad business that works.

Web advertising is not yet ready to take on the burden of supporting news and cultural works, a burden that print and broadcast have carried for years. What can a great ad medium do that the web, so far, can't?

  • Facilitate creative work. Ads must be able to use the force multiplier of memorable creative, to have a chance of getting more audience attention than the advertiser paid for.

  • Implement publisher-based standards. The medium must facilitate the banning or punishment of ads that are fraudulent, risky, or otherwise out of compliance with market norms. Publishers need to be able to set their own standards. An ad that works for one site might not be allowable where users' eyes, devices, and network connections are different.

So far so good on the first two. For example, The Next Web has broken out of the banner box with a "Canvas" ad format that gives designers a big space to work with, and many new publishers are doing without intermediaries that tend to bring crappy ads with them. Some other qualities of a great ad medium still need work.

  • Stable rates. The price of an ad in a great medium is relatively high compared to the amount of user time it occupies, and price for comparable space tends to be stable, to make it easier for a user's "inner economist" to compare the signal level.

  • Hard to repudiate. An advertiser must risk some reputation if it commits to an untruthful ad, or makes a promise in an ad that it breaks later. The more reputation you risk by running a bad ad, the more you stand to gain with a consistent campaign of good ones.

  • Not subject to per-user targeting. The more that a member of the audience sees an ad as custom-targeted, the less information the ad can convey about the advertiser's intentions in the market.

Some of the IAB member companies have profitable niches, sitting between publishers who still aren't getting paid what they need for online ads and advertisers who still aren't getting the results they need. But nobody has it right yet. Instead of L.E.A.N.ing on more adtech as usual to keep the existing system going, it's time to consider better alternatives.

(Update: Here's the sustainable alternative with its own easy-to-remember acronym.)