Google CEO Eric Schmidt defended the company’s privacy screw-ups at a Gartner Symposium on Wednesday. As reported by CNET, Schmidt responded to a question about Google’s privacy sensitivity, or lack thereof. According to Schmidt, the company is guided by its founding motto, “Don’t be evil.”
In response to a question about how Google treats consumer privacy, he tried to illustrate how the company’s don’t-be-evil philosophy trumps technology by recounting a meeting he attended with company co-founders Larry Page and Sergey Brin. In it, a business executive suggested a particular change at Google.
“One of the engineers says, ‘That’s evil.’ It was like setting off a bomb in the middle of the table,” Schmidt said. The concern was taken seriously: “You can pull the ripcord and stop the production line.”
He added that after a long debate, the engineer’s assessment prevailed over the business executive’s idea. “They concluded it was (evil), and this poor person was thrown out the room.”
Unfortunately, Schmidt’s story is indicative of a culture that doesn’t understand the dynamics of privacy problems. When some random engineer in a meeting decides that something poses a privacy problem, and raises the issue, it’s a happy event. But in my lengthy experience, it’s also a rare event.
It’s not the job of that engineer to raise obstacles to other people’s ideas. Indeed, the smart engineer will learn that after shooting down some big-wig’s idea, he’s a lot less likely to get invited to future meetings. “Oh, no! That wouldn’t happen at a cool and funky place like Google,” I can hear someone say. Yeah, right.
As I’ve pointed out on numerous occasions, many of Google’s privacy blow-ups aren’t nearly as dire as some might make them out to be. But the sheer frequency of these privacy missteps has contributed to an image of a company that, at best, has a tin-ear when it comes to privacy matters, and at worst, is actually evil.
Touting a corporate motto of “Don’t be evil” is cute. But “Don’t be evil” is significantly different from “Be good.” Google-up a definition of “amoral” and you’ll see what I mean.
“Don’t be evil” is a passive statement. In a company guided by such a passive directive, it’s a tacit admission that, from time to time, the company is going to do something evil without realizing it. Sure, the company can go back and fix things, and Google has shown its willingness to be reactive when its latest creation turns into Frankenstein’s monster. But by then the damage is done: trust is further frayed.
When it comes to privacy matters, reactive is bad. Being passive is a prescription for disaster. Privacy protection requires being proactive. In a business where it’s this easy to stumble into trouble, shouldn’t somebody have the job of watching out for the pitfalls in the first place, instead of mobilizing afterwards to dig the company out of the hole?
To the best of my knowledge, Google still doesn’t have a Privacy Officer, or anyone else similarly situated whose job it is to be the curmudgeon, to look at everything askance, to not merely wait for somebody to discover a problem but to go looking for trouble and blow the time-out whistle.
Yes, it’s nice to think that everybody in an organization is going to be constantly vigilant, and will uncover the privacy risks associated with anything the company does. But with a new privacy-related problem arising on an almost weekly basis, clearly the “neighborhood watch” approach isn’t working. It’s time for Google to hire itself a dedicated privacy cop.