I’m trying to work up a good headful of outrage about Facebook’s ever-changing privacy “policies”…and really, I can only get as far as “that’s kinda skeezy” followed immediately by “but what exactly did you expect from a for-profit entity?” Plus: “and you’re paying how much for it?”
Anyway: I will concede that it would be far, far better if Facebook were honest with its users and made it simpler for them to keep private whatever they don’t want publicly available. The most troubling item in the litany of recent changes is that third-party app developers are often granted access to some of this info by default (since users know they’re revealing info on Facebook, that it’s in some way available to other users of Facebook seems implicit).
I think what we’re seeing is inevitable for any business trying to find a way to monetize eyeballs while keeping its service free to users. (All those bogus warnings that Facebook plans on charging money? They’re not about to – and if they were, they certainly wouldn’t just pop everyone with a flat fee all at once.) And something that started as, essentially, a way for a smallish crowd of college buddies to keep in touch has mushroomed into one of the largest portals on the internet: the changes that have occurred are, as I said, pretty much inevitable in a capitalist system. (Short microrant: so if you don’t like it, please direct your protests at the economic system that makes it possible, since in fact that’s what’s really screwing you. And in many much, much worse ways.)
But let’s look at this for a moment: what sorts of information is Facebook making “public,” and what are the consequences of that? The basic strategy seems to be: Facebook wants to make information about people’s likes and preferences available to marketers, so marketers can try to sell stuff to users by way of Facebook, with Facebook getting itself a cut as intermediary.
So are targeted ads annoying? Actually, I would say they’re annoying if they meet any of the following conditions: (1) they’re intrusive to the experience someone’s using the website for; (2) they’re inaccurate (gmail’s ads are laughably so, for example, based only on decontextualized language fragments); or (3) they reveal information to others or to the general public which a person would rather keep private.
Looking at the last point first: Facebook’s bungled Beacon thingy is an excellent example of info used badly: publicizing to one’s friends the things one has just bought can obviously go badly wrong if, for example, one wants to surprise a friend on his birthday. As for more serious sorts of private info: I would say that sort of info shouldn’t be revealed on Facebook anyway. I mean, if you don’t want people to know you dress up like an enormous squirrel and get off only if someone dressed as Mark Trail punches a man with a mustache in front of you, don’t “like” a Facebook group dedicated to fans of exactly that.
But the other two are, I think, more serious problems, since a lot of the information users reveal might seem trivial in itself, but the cumulative effect of that info being made available to marketers might create unanticipated problems. But what causes those problems? I’d propose this: if the only ads you ever saw were accurately targeted (i.e., they were about items or information that you actually were interested in) and were not intrusive to your web-browsing experience, would you really mind? Why would you? If it’s not interfering with what you want to be doing, and it’s actually providing you with information you want, why would you be upset? The usual problem is that ads are obnoxious and intrusive and they’re only vaguely accurate. Just because you liked The Big Lebowski doesn’t mean you want to see every movie that features a Bob Dylan song. Curiously, though, in theory the solution to the problem of vague matching is…more information, better analyzed. Which is to say: the more you reveal, the likelier any targeted ads are to actually be relevant to you.*
I guess I’m also puzzled about the outrage, given that Facebook is a “social network”: that is, it seems inherently a medium wherein information is shared, not kept private in an in-group. And I think it’s that sharing that’s driven its growth: part of the fun is the unpredictability of which ghost from your past might pop up, or whom you might find listed among the friends of a friend of a friend, and so on. I’ve gotten back in touch with people I hadn’t heard from for years, and I don’t think that would have happened with a rigidly private network.
I’m certainly willing to change my mind: if my e-mail inbox is suddenly flooded with random ads for products vaguely associated with concepts I mentioned once in a six-month-old status update, that would be a problem. But that sort of “throw open the gates of information to any and all spammers!” approach doesn’t really seem close to where Facebook is, or even where it’s going. It knows that most of its newer users never knew the micro-Facebook of a few years ago, and that plenty of slightly older users are pretty well hooked, with Facebook now their primary means of retaining contact with large numbers of friends (actual friends, not just nominal friends). I’m thinking that for myself, I’m likely to remove my e-mail addresses from my profile (in case that info does end up being sold, purposely or accidentally) and avoid third-party apps (I already do that), but otherwise, really, I don’t particularly care if marginal ads target my interests: either I’m interested, or I’m not, but (a) they’re pretty ignorable for most people and (b) better yet, completely ignorable for those of us using AdBlock and the like.
But it doesn’t really seem to me that the sky is falling, nor that Facebook is busily dismantling its supports. If something seems free, it probably isn’t: most often, someone is likely trying to extract information from usage patterns in order to alchemize that info into dollars. That’s true pretty much anywhere online. (And again: see my parenthetical microrant above if that’s displeasing.)
*It occurs to me that one thing that might bug people is the Sherlock Holmes effect: a synthesis of browsing info and other data that truly accurately guesses one’s likes and dislikes seems creepy and uncanny, even if, upon explanation, the derivation of that synthesis might seem obvious, even trivial. Then again, it’s also ironic that most people in this world seem to go around constantly advertising their likes, dislikes, and other preferences, making sure their social and subcultural status and identity are clearly legible, at least to anyone to whom those people want it to matter. The fact that the phrase “emo haircut” makes sense proves as much.