Nope, I didn't get involved in a land war in Asia, but I did let yet another exciting thought exercise trick me into picking up a coding project I don't have time for. Happenstance was supposed to be just a reflection on whether the perceived benefits of central control could be achieved in a decentralized, distributed and federated status network.
And boy, is designing the next twitter the new hawtness right now. But while the exercise was quite interesting and I have answered the question of "is it possible" to my own satisfaction, I just don't have the energy or clout to implement and evangelize it, given the other projects I'm already on that are giving me the evil eye over this distraction. So, rather than letting it linger, I'm just gonna admit that I'm going to stop working on the reference implementation of Happenstance.
Following is already well established in RSS, and PuSH solves the delivery of content in near-realtime fashion. OStatus does a fine job of formalizing the existing infrastructure in a way to track other people's updates. And a feed having a URI gives it a canonical name, so to the most basic extent the question of origin is solved as well. You could make the claim that this solves all the issues of distributed, federated and decentralized. But it really doesn't; it just tells you where to get things.
What makes twitter, and by extension every walled garden, valuable is not the publishing of content, but the trusted aggregation of content that you are interested in. This is both the subscription of feeds and the delivery of what's generally called mentions now.
Mentions are like emails you are CC'ed on by virtue of your name occurring in the message. But unlike email, the keeper of the walled garden instills the trust that we've all lost in email authenticity due to the ease of faking the origin. Sure, Twitter has a spam problem, but you never doubt the origin of a message. You just don't care about certain messages arriving at all.
The way we read our timeline, we don't wonder if the content is faked; we trust that what is in our timeline is at least from the claimed sources. When aggregating distributed feeds you can trust that you only get messages that came from a feed you've personally validated, but the moment you add the capability to receive ad hoc messages, every message has to become suspect.
This is why I spent more time on the message, name server and meta-data signing than anything else in Happenstance. In the end, you are accumulating a store of messages that represents your own copy of messages passed around that may or may not have come directly from a trusted canonical uri. And for that network to be as valuable as twitter, the ability of your reader to authenticate content as originating from where it claims to come from will make the difference between a valuable resource and the next spam haven that people abandon for the safety of the walled gardens.
And that's the one thing i've not seen anyone else address: Everyone is talking about all these competitors to twitter being able to interchange messages, which of course they must since none can rival the reach twitter has on its own. But none of the existing and proposed mechanisms address the issue of trust in content of a remote origin. Hopefully my exposition of the Happenstance design will at least inspire the emerging standard for status distribution to avoid the pitfalls that turned email into more spam than actual discourse.
Fundamental to the design of Happenstance is the idea that everything you publish, messages and meta-data, is cryptographically signed. The need for the signature stems from the distributed nature of the system: as messages are published and copied to any number of aggregators to be included in any number of feeds, there needs to be a way to know that the message you are reading came from the claimed author, without having to read all messages directly on the author's feed.
The challenge this brings with it, especially when striving for consumer adoption, is that the goals of accessibility/usability and security can be at odds. For this reason, I've designed (and am currently refining/revising) the use of public key infrastructure (PKI) in such a fashion that private key compromise does not fatally affect the integrity of an author's identity.
A full explanation of how content is signed can be found in the Signing Spec. The short version is that feeds provide RSA public keys, and all message blocks are signed with an RSA private key in base64-encoded RSA-SHA256 format creating a _sig key in the message block like this:
In order for content to be signed, a private key has to be used. If anyone else gets their hands on your private key, they could fake messages appearing to come from you. While the fakes won't hold up to scrutiny, as I will explain below, it's still a trust issue. So the private key needs to be protected. Before I get into how a compromised key can be dealt with, let me explain why PKI for Happenstance may violate normal best practices for dealing with private keys, creating greater risk of compromise.
Ideally only you would have your private key, preferably encrypted and only decrypted in memory for the purpose of signing. That implies that all authoring has to happen client side. For some clients and users that is certainly feasible, but for non-technical users accustomed to web applications the added level of complexity may be too cumbersome. These users have come to expect the experience provided by walled garden services with their implicit trust via authority (not that that trust can't be and hasn't been compromised).
Using a service to author and sign your messages will expose your private key to attack vectors one way or another. But since a hosted authoring experience can remove the complexity of key management, I've tried to enumerate some approaches to reduce the dangers, while simultaneously designing the ecosystem to survive the compromise of keys.
Note that all strategies assume that communication with the host always happens over HTTPS.
The first option keeps the key encrypted by the user's passphrase at the server and sends it to the client at authoring time. The client side application can then prompt the user to decrypt it, sign content and send the signed content back. Basically, the authoring experience is implemented on the client, but hides the complexity of key management.
- Need the password for every message (could cache it in memory, which has dangers of its own)
- The encrypted key is frequently sent over the wire, and its use by the client application may be exploited
- If the service is compromised, the passphrase for each key stored at the server must be cracked, which isn't outside the realm of possibility with the processing power available these days
It should be noted that this is the only approach in which the service provider might be considered untrusted, since they never get the passphrase. However, they still have the key, so they could crack it or compromise the client authoring experience (since they provide it) to capture the key, meaning the service provider still needs to be trusted.
This approach moves authoring to the server, which means that the passphrase to decrypt the key must be sent to the server. This makes the client a lot thinner. Fewer moving parts means less chance for errors or attack vectors on the client, and once again the complexity of key management is hidden.
- Need the password for every message (server could cache it in memory, again introducing new dangers)
- Passphrase is frequently sent over the wire
- If the service is compromised, the passphrase for each key stored at the server must be cracked, which isn't outside the realm of possibility with the processing power available these days
- Even worse, if the service is compromised, it could be made to capture the passphrases as well
You might think you could just use the password the user uses for login as the key passphrase, but in a properly designed system, the login password is stored in a one-way hashed form, i.e. the service has no way to "decrypt" the password; it just hashes the incoming password and compares hashes. The private key, however, even though it is part of an asymmetric encryption scheme, is itself encrypted symmetrically, i.e. you need the same passphrase you used to encrypt the private key in order to decrypt and use it. So the key's passphrase must be available for every signing event.
This final approach is the most user friendly but wrests control over the private key from the user. The server keeps the key in a form in which it can use it to sign messages on behalf of the user without any additional authentication. The entire key management mechanism is hidden.
- Complete reliance on the service to protect the key in case of service compromise
This one seems to require more trust in the service provider, but in reality each version is vulnerable if the service is a bad actor or compromised. However, this last method removes all need for the private key to ever be communicated between client and server. While the security of the key is now completely in the hands of the service provider, much like credit card storage, there are established practices for securely storing and using such data, such as some form of one-way vault (usually a dedicated server not on the public network that accepts keys and can sign messages, but has no mechanism for retrieving the keys). Given that Happenstance allows for multiple keys and key expiration, a service provider that never divulges the private key to the user is likely to be the most secure and convenient approach.
Since all methods (even you storing your own key and never giving it out) can be compromised, the focus really should be on mitigating the effects a compromise can have. Since Happenstance does not actually encrypt data to hide it from prying eyes, but only uses it to authenticate data in the clear, a compromised key's only use is for impersonation.
A compromised key could be used to push fake posts into the ecosystem in one of two ways: by creating mentions to deliver messages directly to users while impersonating the owner of the key, or by "re-posting" a faked post to their followers. Fortunately, a compromised key does not mean that the user has lost control over their feed; the user can simply generate a new key and remove the existing one from their feed meta-data. This way any aggregator would immediately detect a fake when validating incoming messages.
The side-effect of key revocation is that all messages currently in flight and older messages being re-checked would also become invalid. To guard against this, a message failing validation should be re-fetched via its canonical uri and be re-validated. This way, upon removing a compromised key, all previously signed messages can be re-signed with a new key.
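The re-fetch fallback could be sketched like this in node.js, with `verify` and `fetchByUri` passed in as assumed helpers rather than any published API:

```javascript
// If a stored copy fails signature checks (e.g. after key revocation),
// re-fetch the canonical copy, which the author will have re-signed
// with a current key.
async function revalidate(message, currentKeys, fetchByUri, verify) {
  if (currentKeys.some(key => verify(message, key))) {
    return message;                              // still valid under a current key
  }
  const fresh = await fetchByUri(message.uri);   // canonical, re-signed copy
  if (currentKeys.some(key => verify(fresh, key))) {
    return fresh;                                // replace the stale copy
  }
  return null;                                   // genuinely fake or unrecoverable
}
```

The cost of the extra round-trip is only paid for messages that fail validation, so the common case stays cheap.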
Because loss of access to a key or compromise through personal or service provider negligence are realities that cannot be fully eliminated, it is important that Happenstance messaging can mitigate the loss of a key without compromising identity.
A key being compromised isn't the only reason an author might discontinue its use. A service provider could have never given out the private key (as suggested above). Or a service provider went out of business and the status of their key storage is unknown. Or the author forgot their passphrase, or lost the only backup of a key used in a client-side authoring environment. And so on.
In addition a user may have need for using multiple different keys due to using multiple authoring environments with different private key strategies. To support multiple keys, public keys are represented in the feed meta data like this:
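The original example is elided here, but a hypothetical shape (the key ids and field names below are illustrative guesses, not spec'd values) might look like:

```json
{
  "public_keys": {
    "key-2012-03": {
      "key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBg...\n-----END PUBLIC KEY-----",
      "expired": "2012-07-01T00:00:00Z"
    },
    "key-2012-07": {
      "key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBg...\n-----END PUBLIC KEY-----"
    }
  }
}
```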
Revocation of a key is simply the removal of the key from the author meta-data's public_keys hash and the re-signing of all content signed by that key. Alternatively, a key can also be marked as expired, meaning that any message signed by that key after the expiration time should no longer be trusted.
Expiration is best suited for keys that are no longer accessible rather than compromised, since a compromised key could still be used to create faked messages with old timestamps.
Hopefully those better versed in and more serious about the application of PKI haven't stopped reading before this. I know that these strategies go against established best practices for guarding private keys.
So to answer the above question: No, I do not. Happenstance is a spec and the implementation details are completely open. I'm simply providing strategies that I personally consider secure enough and which do not hinder broad user adoption. But it does not preclude anyone from creating authoring environments that only work on local, encrypted keys that are never shared. That's the point of Happenstance being broken up into small interoperating parts rather than being a large platform with lots of hard API requirements. Let the ecosystem and users decide which implementation strategy best suits their comfort level.
Just 2 days left before app.net either funds or fades away. It's pretty close, so a last minute push might do it. But regardless of funding, they will have a tough road ahead, since weaning the internet off the "free" user-as-product paradigm has been a battle since the first .com boom, when customer acquisition costs of over $1000/user with no revenue model were considered successful. I've signed up with app.net at the developer level and wish for them to succeed, but am afraid that it will either be a niche product or an interesting experiment.
Meanwhile MG Siegler has a bit more pessimistic of an assessment, i.e. either app.net will die because it sticks to its principles or succumb to the realities that being successful corrupts. If history is to be any indicator, his prognosis is a fairly safe one to make. And it goes to the heart of the problem of tackling something as fundamental as status network plumbing via a single vendor. As long as there is central control, the success of the controlling entity is likely to cause their best interest and their user's best interest to drift apart over time.
The annoying thing about this is that there doesn't have to be, and shouldn't be, central control. Do you think that blogs would ever have been as big as they are now if blogger.com or wordpress.com had been the sole vendor controlling the ecosystem and you could only blog or read a blog by having an account on their system? Nobody owns RSS or Atom; they are just specifications that address a simple but common pain of trying to syndicate content. Yet despite this lack of a benevolent dictator owning the space, a lot of companies were able to spring up and become very successful, without ever having to sign up and get a developer key.
And that is why I am working on the Happenstance specification. Mind you, I don't have the clout of Dalton Caldwell, but most of the plumbing we now take for granted did not originate from established entrepreneurs in our space, so I hope I will hit a nerve here with Happenstance that can get some traction. My aim is to design and implement something simple, useful and expandable enough that adopting it is a tiny lift and opens opportunities. The specification doesn't force any revenue model, so if implementers think they are better served by an ad, subscription or some other model, they can all co-exist and benefit from the content being posted. I'm gonna keep plodding away at Happenstance and see whether the web is interested in being in control of its message feeds and able to innovate on top of the basic feed without asking permission from a Benevolent Dictator.
So everyone is talking about join.app.net and I agree with almost everything they say. I'm all for promoting services in which the user is once again the customer rather than, as in ad supported systems, the product. But it seems that in the end, everyone wants to build a platform. From the user's perspective, though, that just pushes the ball down the court: You are still beholden to a (hopefully benevolent) overlord. Especially since the premise that such an overlord is required to create a good experience is faulty. RSS, email and the web itself are just a few of the technologies we rely on that have no central control, are massively accepted, work pretty damn well and continue to enable lots of successful and profitable businesses without anyone owning "the platform". So can't we have a status network truly as infrastructure, built on the same principles and enabling companies to build successful services on top?
Such a system would have to be decentralized, distributed and federated, so that there is no barrier for anyone to start producing or consuming data in the network. As long as there is a single gatekeeper owning the pipeline, the best everyone else can hope to do is innovate on the API surface that owner has had the foresight to expose.
Thinking about a simple spec for such a status network, I started on a more technical spec over at github, but wanted to cover the motivations behind the concepts separately here. I'm eschewing defining the network on obviously sympathetic technologies such as Atom and PubSubHubbub, because they bring lots of legacy that is not needed while still not being a proper fit.
I'm also defining the system as a composition of independent components to avoid requiring the development of a large stack -- or platform -- to participate. Rather, implementers should be able to pick only the parts of the ecosystem for which they believe they can offer a competitive value.
A crucial feature of centralized systems is that of being the guardian of identity: You have an arbiter of people's identity and can therefore trust the origin of an update (at least to a degree). But conversely, the closed system also controls that identity, making it vulnerable to changes in business models.
A decentralized ecosystem must solve the problems of addressing, locating and authenticating data. It should also be flexible enough that it does not suffer from excessive lock-in with any identity provider.
The internet already provides a perfect mechanism for creating unique names in the user@host scheme most commonly used by email. The host covers locating the authority, while the user identifies the author. Using a combination of convention and DNS SRV records, a simple REST API can provide a universal whois look-up for any author, recipient or mentioned user in status feeds:
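As a sketch of that convention in node.js (the SRV label and URL path below are placeholders I made up, not values fixed by the spec):

```javascript
// Turn a user@host address into the two look-up targets: the SRV record
// that locates the name service, and a conventional REST fallback URL.
function identityLookup(address) {
  const [user, host] = address.split('@');
  return {
    srv: `_happenstance._tcp.${host}`,   // DNS SRV query to find the name server
    url: `https://${host}/who/${user}`   // conventional whois-style REST look-up
  };
}
```

A resolver would query the SRV record first and fall back to the conventional URL, so a plain web host with no special DNS setup can still participate.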
The identity document returned by that look-up includes the uri to the canonical status location (as well as the feed location -- more on that difference later). That solves the locating.
Finally, public/private key encryption is used to sign the identity document (as well as any status updates), allowing the authenticity of the document to be verified and allowing all status updates to also be authenticated. The public key (or keys -- allowing for creating new keys without losing authentication ability for old messages) is part of the feed, while the nameservice record is signed by it. And that solves authentication.
Note that name resolution, name lookup and feed hosting are independent pieces. They certainly could be implemented in a single system by any vendor, but do not have to be and provide transparent interoperation. This separation has the side effect of allowing the user control over their data.
The reason for using an HTML page is that it allows you to separate your status page from the place where the data lives. You could plop it on a plain home page that just serves static files and point to a third party that manages your actual feed. It also allows your status page to provide a user friendly representation of your feed.
While the feed provides meta data about the author and the feeds the author follows, the focus is of course status update entries, which have a minimal form of:
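The entry example itself is not reproduced here; a hypothetical minimal entry (all field names other than `_sig` are my guesses for illustration) might look something like:

```json
{
  "uri": "https://example.com/status/feed/8f31a09c",
  "author": "alice@example.com",
  "created_at": "2012-08-12T16:10:00Z",
  "text": "Reading @bob's take on federation",
  "entities": { "mentions": ["bob@example.org"] },
  "_sig": "base64-encoded RSA-SHA256 signature over the entry"
}
```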
In essence, the status network works by massively denormalized propagation of json messages. Even though each message has a canonical uri, the network will be storing many copies, so updates are basically read-only. There is a mechanism for advisory updates and deletes, but of course there is no guarantee that such messages will be respected.
As for the message itself, I fully subscribe to the philosophy that status updates need to be short and contain little to no mark-up. Status feeds are a crappy way to have a conversation, and trying to extend them to allow it (looking at you, g+) is a mess. In addition, since everybody is storing copies, arbitrary length can become an adoption issue with unreasonable storage requirements. For these reasons, I'm currently limiting the size of the message body to 1000 characters (not bowing to SMS here). For the content format, I'm leaning towards a template format for entity substitution, but also considering a severely limited subset of html.
While the feed itself is simply json, there needs to exist an authoring tool, if only to be able to push updates to PubSub servers. Since the only publicly visible part of the feed is the json document, implementers have complete freedom over the authoring experience, just as wordpress, livejournal, tumblr, etc. all author RSS.
If the status feed is about the publishing of one's updates, the aggregators are the maintainers of one's timeline. The aggregator service provides two public functions: receiving updates from subscriptions (described below) and receiving mentions. Both are handled by a single required REST endpoint, which must be published in one's feed.
Aggregators are the most likely component to grow into applications, as they are the basis for consumption of status. For example, someone wanting to create a mobile application for status updates would need to rely on an aggregator as its backend. Likely an aggregation service provider would integrate the status feed authoring as well, and maybe even take on the naming services, since combining them allows for significant economies of scale. The important thing is that these parts do not have to be a single system.
This separation also allows application authors to use a third party as their aggregator. Since aggregators will receive a lot of real-time traffic and are likely to be the arbiters of what is spam, being able to offload this work to a dedicated service provider may be desirable for many status feed applications.
A status network relies on timely dissemination of updates, so a push system to followers is a must. Taking a page from PubSubHubbub, I propose subscription hubs that feed publishers push their updates into. These hubs in turn push the updates to subscribers and mentioned users.
The specification assumes these hubs to be publicly accessible, at least for the subscriber, and only requires a rudimentary authentication scheme. It is likely that the ecosystem will require different usage and pricing models with more stringent authorization and rate-limiting, but at the very least the subscription part will need to be standardized so that aggregators can easily subscribe. It is, however, too early to try to further specialize that part of the spec, and the public subscription mechanism should be sufficient for now.
In addition to the REST spec for publish and subscribe, PubSub services are ideal candidates for other transports, such as RabbitMQ, XMPP, etc. Again, as the traffic needs of the system grow, so will the delivery options, and extension should be fairly painless, as long as the base REST pattern remains as a fallback.
In addition to delivering updates to subscribers, subscription hubs are also responsible for the distribution of mentions. Since each message posted by a publisher already includes the parsed entities, and a user's feed (and thereby aggregator) can be resolved from a name, the hub can deliver mentions in the same way as subscriptions: as POSTs against the aggregator uri.
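A hub's fan-out could be sketched as follows in node.js; `resolveAggregatorUri` and `post` stand in for the name-service look-up and HTTP delivery and are assumptions, not spec'd APIs:

```javascript
// Deliver a message to each recipient's aggregator endpoint. Subscribers
// and mentioned users are treated identically: resolve the name to an
// aggregator uri, then POST the message to it.
async function deliver(message, recipients, resolveAggregatorUri, post) {
  for (const name of recipients) {
    const uri = await resolveAggregatorUri(name);  // via name service look-up
    await post(uri, message);                      // same POST for subs and mentions
  }
}
```

Because the mention path reuses the subscription path, an aggregator only has to expose the one REST endpoint described above.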
Since PubSub services know about the subscribers, they are also the repository of followers for a user -- although this is only available to the publisher, who may opt to share this information on their status page.
Discovery in existing status networks generally happens in one of the following ways: Reshared entries received from one of the subscribed feeds, browsing the subscriptions of someone you are subscribed to or search (usually search for hashtags).
The first is implicitly implemented by subscribing and aggregating the subscribed feeds.
The second can be exposed in any timeline UI, given the mechanisms of name resolution and feed location.
The last is the only part that is not covered by the existing infrastructure. It comes down to the collection and indexing of user and feed data that is already publicly available and can even be pushed directly to the indexer.
Discovery of feeds happens via following mentions and re-posts in existing feeds, but that does mean there is no easy way to advertise one's existence to the ecosystem.
All of these indexing/search use cases are opportunities for realtime search services offered by new or existing players. There is little point in formalizing APIs for these, as they will be organic in emergence and either be human-digestible or offer APIs for application developers to integrate with the specific features of the provider.
To name just a few of the unmet needs:
- Deep search of only your followers
- Indexing of hashtags or other topic identifiers, with or without trending
- Indexing of name services and feed meta-data for feed and user discovery
- Registry of feeds with topic meta-data
As part of my reference implementation I will set up a registry and world feed (consuming all feeds registered) as a public timeline to give the ecosystem something to get started with.
As outlined above there are 5 components that in combination create the following workflow:
1. Users are created by registering a name with a name service and creating a feed
2. Users create feed entries in their Status Feed
3. The Status Feed pushes entries to PubSub
4. PubSub pushes entries to recipients -- subscribers and mentions
5. Aggregation services collect entries into user timelines
6. Discovery services also collect entries and expose mechanisms to explore the ecosystem's data
It's possible to implement all of the above as a single platform that interacts with the ecosystem via the public endpoints of name service, subscription and aggregation, or to just implement one component and offer it as a service to others that also implement a portion. Either way, the ecosystem as a whole can function wholly decentralized, and by adhering to the basic specification put forth herein, implementers can benefit from the rest of the ecosystem.
I believe that sharing the ecosystem will have cumulative benefits for all providers and should discourage walled garden communities. Sure, someone could set up private networks, but I'm sure an RBL-like service for such bad actors would quickly come into existence to prevent them from leeching off the system as a whole.
The worth of a social network is measured by its size. Clearly there is an adoption barrier to creating a new one. For this reason, I've tried to define a specification as only the simplest thing that could possibly work, given the goals of basic feature parity with existing networks of this type while remaining decentralized, federated and distributed. I hope that by allowing many independent implementations without the burden of a large, complicated stack or any IP restrictions, the specification is easy enough for people to see value in implementing while providing enough opportunities for implementers to find viable niches.
Of course, simplicity of implementation is meaningless to endusers. While all this composability provides freedom to grow the network, endusers will demand applications (web/mobile/desktop) that bring the pieces together and offer a single, unified experience, hiding the nature of the ecosystem. In the end I see all this a lot like XMPP or SMTP/POP/IMAP: most people don't know anything about them, but use them every day through gmail, gtalk, etc.
I'm currently working on a reference implementation in node.js, less to provide infrastructure than to show how simple implementation should be. I was hesitant to post this before having the implementation ready to go, since code talks a lot louder than specs, but I also wanted the opportunity to get feedback on the spec before a de facto implementation enshrines it. I also don't want my implementation to be mistaken as "the platform". We'll see how that approach works out.