The End of “Social DRM” is in Sight

I am pleased to see a major shift underway in the prevailing thinking on one of the most important topics relating to data portability, interoperability, and the emergence of the Social Web: the question of whether service providers need to protect us with “social DRM” or trust us to do the right thing. Microsoft’s Dare Obasanjo has an excellent post on the topic, outlining the two schools of thought and publicly declaring that he has shifted sides in this critical debate:

The issue of what to do with content a user has shared when they decide to delete the content or attempt to revoke it is an interesting policy issue for sites geared around people sharing content. When I’ve discussed this with peers in the industry I’ve heard two schools of thought. The first is that when you share something on the Web, it is out there forever and you have to deal with it. Once you post a blog post, it is indexed by search engines and polled by RSS readers and is then available in their caches even if you delete it. If you send an inappropriate email to your friends, you can’t un-send it. This mirrors the real world where if I tell you a secret but it turns out you are a jerk I can’t un-tell you the secret.

The other school of thought is that technology does actually give you the power to un-tell your secrets especially if various parties cooperate. There are ways to remove your content from search engine indexes. There are specifications that dictate how to mark an item as deleted from an RSS/Atom feed. If your workplace uses Outlook+Exchange you can actually recall an email message. And so on. In the case of Facebook, since the entire system is closed it is actually possible for them to respect a user’s wishes and delete all of the content they’ve shared on the site including removing sent messages from people’s inboxes.

I used to be a member of the second school of thought but I’ve finally switched over to agreeing that once you’ve shared something it’s out there. The problem with the second school of thought is that it is disrespectful of the person(s) you’ve shared the content with. Looking back at the Outlook email recall feature, it actually doesn’t delete a mail if the person has already read it. This is probably for technical reasons but it also has the side effect of not deleting a message from someone’s inbox that they have read and filed away. After all, the person already knows what you don’t want them to find out and Outlook has respected an important boundary by not allowing a sender to arbitrarily delete content from a recipient’s inbox with no recourse on the part of the recipient. This is especially true when you consider that allowing the sender to have such power over recipients still does not address resharing (e.g. the person forwarding along your inappropriate mail, printing it or saving it to disk).

And, as he points out, Dare is not alone in this shift. Mark Zuckerberg and the team at Facebook clearly appear to be shifting stance as well. In his epic post, On Facebook, People Own and Control Their Information, written in response to the confusion over the update to the Facebook TOS, Zuckerberg wrote:

Still, the interesting thing about this change in our terms is that it highlights the importance of these issues and their complexity. People want full ownership and control of their information so they can turn off access to it at any time. At the same time, people also want to be able to bring the information others have shared with them—like email addresses, phone numbers, photos and so on—to other services and grant those services access to those people’s information. These two positions are at odds with each other. There is no system today that enables me to share my email address with you and then simultaneously lets me control who you share it with and also lets you control what services you share it with.

We’re at an interesting point in the development of the open online world where these issues are being worked out. It’s difficult terrain to navigate and we’re going to make some missteps, but as the leading service for sharing information we take these issues and our responsibility to help resolve them very seriously. This is a big focus for us this year, and I’ll post some more thoughts on openness and these other issues soon.

Some of us tried to get this debate started in September of 2007, with the publication of the Bill of Rights for Users of the Social Web, by Joseph Smarr, Marc Canter, Michael Arrington, and Robert Scoble. In hindsight, the world was not yet ready for that debate; few took notice, and no actions came in response. Then, in January of 2008, when Plaxo was trying to get a Facebook contacts importer ready to launch, which would have enabled social address book sync between Facebook, Plaxo, Outlook, the Mac address book, Yahoo Mail, and more, the effort turned, through accident and miscommunication, into a major incident. By then the world was ready to argue and debate the key questions, but not ready to come to any consensus.

But over the course of 2008, projects like Google Friend Connect, Facebook Connect, MySpaceID, and the quickening drumbeat of progress for OpenID and the Open Stack helped the industry to think through the issues preventing data portability and interoperability. In the end, we’re all coming to realize that rather than try to prevent anything bad from ever happening via “social DRM,” we’re going to have to trust our users, so that we can enable amazing things to happen — like all your tools and services working well together!


7 thoughts on “The End of “Social DRM” is in Sight”

  1. […] It’s more than just email messages or shared photos. This is about who is in charge. When users leave a system, their content should be deleted, except if it was part of a “conversation” – where others contributed as well. […]

  2. Bob Kerns says:

    Hi, John. I like your post, and your summation of the history, but I have a bit different take on things.

    I don’t see this as being about ownership of information. I think it’s fundamentally a problem of user model — the model users have of what they are doing when they publish their information. People tend to underestimate their potential reach when they write.

    I think it’s not a new issue either — just higher profile.

    I wrote up my thoughts on my blog:

    And I give a 100-year-old example….

    (And I mention you, and your post as well).

    This really ends up with me coming down strongly on the “it’s out there” side, but I think I get there along a somewhat different path.

    Enjoy — thoughts welcome!

  3. […] Certainly, this model has worked very well for email, and people like Plaxo’s John McCrea are hailing the fall of ‘social DRM’. However, content that is shared behind the scenes via APIs, and content that is shared […]

  4. Joe says:

    The fact that Facebook changed their TOS back so quickly is like an admission that they knew they were wrong.

  5. […] Creative Commons, Facebook, Magnolia, Plaxo, TOS, YouTube Who owns your data? Should there be DRM on the content/data you share with others? What happens to your stuff when the service you shared it on has a catastrophic failure? These and […]

  6. Bertil says:

    I’m not sure that the two models can’t coexist, or that everyone has a unique position for every partner and every bit of information. More importantly, just like default copyright and Creative Commons, I see value in both, or rather an optimal solution in their coexistence.

    Like most exhibitionist web-dwellers, I’d prefer trust; but honestly, most people who have used the mail recall feature in Microsoft Exchange (me included) liked it: it let you invisibly improve your output, usually in a professional context, i.e. not have your boss bothered with your initial clumsiness. But that depended both on the context (no scripting features, no Xobni-like mail statistics, no security against external threats) and on the implementation: limiting the tool to unopened e-mail was a stroke of genius. It obviously wouldn’t scale with automated responses and e-mail forms, but internally, it’s great to have.

    If we let users declare which information they are comfortable calling back, you’ll get more information shared; of course, not giving them the choice, or even setting an open default, might force some into more sharing than they would have chosen otherwise, and they *might* like it. But serendipity doesn’t impress anyone, while it is far more likely that a random accident will (wrongly) set aflame a moral panic around how wrong Facebook’s settings are. I might have the last few years of media coverage on my side on this one.

    Of course, if a friend refuses to give me permission to automagically update my Address Book with his Facebook changes, it will become what it always has been: a trust issue that needs to be addressed at the social level; if someone somehow wants to prevent me from calling him, that’s his loss anyway.

    Making that information more visible might stir up more drama between weak ties, and we don’t need that any more than the other drama avoided by Facebook’s discretion around severed links, etc., so I’d argue against making that information attributable or aggregated into stats; but this is a different issue.

    The main issue with allowing both options (or more, if you want flexibility by bit of information and by group of friends) is designing the privacy control board, but I do believe that this is something we can overcome.

  7. I actually think it depends on the content and context. I guess few people think they can delete an email they have just sent out. And the same probably goes for messages sent from user A to user B. But what about photos I publish on my website in a password-protected area that can be accessed by only three people? I could delete them and they’d be gone, and nothing would be public, as long as I trust those three people and they are worth that trust.

    The difference here is maybe that in the first case I transferred the information, while in the second I didn’t; I just sent a link. Technically, of course, it is still transferred to their computer, but I am talking more about the model behind it. It’s like comparing a letter sent from house A to house B with a visit to house A to see those photos.

    And I also think that so far nobody has really cared that much about the privacy problem. Social networks just happened, and in the beginning you usually omit the privacy controls, probably because it’s easier. Later, though, I think they should be added. And I also think that if some data is going from A to B, there should at least be some sort of license attached, in which the user (not the service, e.g. FB) defines exactly what is allowed to be done with it. This could be “only show it to me”, “show it to friends on list X”, or “show it to everybody”.

    I don’t really like the notion (as it’s sometimes articulated) that it’s all out there anyway and you cannot do anything about it, so let’s not even think about how we could change it. Would this apply to everything? My health data? My genome?

    Ok, I am moving a bit away from the deletion problem, but I think this belongs in the broader picture.

    At least I think users should always know what will happen to their data. This might mean easier-to-understand TOS, or some sort of icon set that describes what will be done with their data. This, by the way, is what we are working on right now in the TOS&EULA Taskforce of the DataPortability Project, see

    That way a user can at least choose which provider to use, and it might also create some healthy competition between providers, if users really insist on these things instead of ignoring them or only becoming aware of the problems after it’s too late.

    As for teenagers, there was also a great talk at the 25C3, which I live-blogged here:
