XAuth: a useful service with a poor name

Published April 19th, 2010

The Authosphere is abuzz this morning with XAuth. My first instinct when I started seeing headlines about it was: yet another OAuth competitor?

It turns out to be nothing of the sort, and it could prove to be a very useful service. “Service” you say, isn’t it a standard? It really is a service and not a new authentication standard. But I’ll get to that.

What is it?

You can think of XAuth as a cross-domain cookie containing a list of the services you use.

What does it solve?

The problem this aims to fix is really a UI one, not a security one. If you know the user already uses Google accounts and Meebo, there is no need to clutter your user interface with login buttons for 40 different social networks.

It could work really well hand in hand with both OpenID and OAuth to minimize the user interface for connecting to them. OpenTransact also has potential uses for XAuth as it could be used on a payment form to filter the list of payment types a user uses.

So how does it work?

XAuth essentially defines 3 different parties to the flow:

  • Extenders are web services that a user is logged into and that present some public API. These often play the same role as OpenID Providers and OAuth Service Providers.
  • Retrievers are web services that want to discover and consume one or more of the Extenders. In OpenID terms these are relying parties and in OAuth terms these are Consumers.
  • XAuth.org is the final party. All XAuth communication happens through an iframe and javascript hosted here. This is just static hosting; all data is stored in the user’s browser.

How do I use it?

As a developer it is pretty easy to use. Most of the work is in the javascript layer and there is very little that needs to be done on the server side. As I understand more about the process I will update this page.

XAuth is implemented as a javascript library you import in your header:

<script type="text/javascript" src="http://xauth.org/xauth.js"></script>

This script creates a hidden iframe on your page pointing at http://xauth.org/server.html.

Extender implementation

If you are an Extender, you extend your service by calling the XAuth.extend() function with a token. I would expect this to be done once a user logs in to the Extender service.

XAuth.extend({
  token: "bb87aef641b8f29023a8c207f",
  expire: 1271773157402,
  extend: ["www.meebo.com", "www.youface.com", "*"],
  callback: extendCallback
});

The token is an XAuth-specific token, which I don’t completely understand yet. The demo code just generates a random token. Here is what the spec says:

this is the XAuth token, created specifically for use with XAuth and used by an auth extender to request additional services from the extender. This token will be stored with the canonical domain as the key.

My guess is that it should say:

this is the XAuth token, created specifically for use with XAuth and used by an auth Retriever to request additional services from the extender. This token will be stored with the canonical domain as the key.

The spec doesn’t say more here, but I imagine this could be used together with OAuth somehow as a really low-permission OAuth token. But let’s see what implementers do with it.
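Since the spec leaves token generation entirely up to the Extender and the demo code just picks random characters, a minimal sketch of that approach might look like this (the 25-character length and hex alphabet are my assumption, mirroring the token shape in the example, not anything the spec mandates):

```javascript
// Generate a random hex token for use with XAuth.extend().
// Purely illustrative: the spec does not mandate any format;
// the demo code simply generates a random string.
function generateXAuthToken(length) {
  var chars = "0123456789abcdef";
  var token = "";
  for (var i = 0; i < length; i++) {
    token += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return token;
}

var token = generateXAuthToken(25); // e.g. "bb87aef641b8f29023a8c207f"
```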

The extend parameter allows you to limit the token to certain trusted domains. Use “*” to allow any domain access.
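My reading of how this allow-list works (the actual check lives in xauth.org’s hosted javascript, so this is a sketch of the semantics, not the real implementation): a token extended to a list containing “*” is visible to any Retriever, while an explicit list restricts it to those domains only.

```javascript
// Sketch of the allow-list check I assume the xauth.org page performs
// when deciding whether a Retriever may see an Extender's token.
function domainAllowed(extendList, retrieverDomain) {
  for (var i = 0; i < extendList.length; i++) {
    if (extendList[i] === "*" || extendList[i] === retrieverDomain) {
      return true;
    }
  }
  return false;
}
```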

The optional callback method is called with the following data:

{
  "cmd": "xauth::extend",
  "id": 0
}

Once this has been called, a record is created in the HTML5 Local Storage of the http://xauth.org/server.html page.

If you are unfamiliar with HTML5 Local Storage, think of it as a more evolved form of cookies. Some people have called it browser-based NoSQL.

As part of your logout code you should also call XAuth.expire(), which removes your token from the store.
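Under the hood, then, the xauth.org page just keeps one token record per Extender domain and deletes it on expire. A rough simulation of that bookkeeping, using a plain object where the real page uses HTML5 Local Storage (the record shape follows the extend() call above; the function names here are stand-ins, not the library’s API):

```javascript
// Simulated token store keyed by canonical Extender domain,
// standing in for the Local Storage on xauth.org/server.html.
var store = {};

// Called on login: record the Extender's token.
function extend(domain, record) {
  // record: { token: "...", expire: <ms timestamp>, extend: [...] }
  store[domain] = record;
}

// Called on logout: drop the record for this domain.
function expire(domain) {
  delete store[domain];
}

extend("www.youface.com", {
  token: "bb87aef641b8f29023a8c207f",
  expire: 1271773157402,
  extend: ["*"]
});
expire("www.youface.com"); // after logout the record is gone
```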

Retriever implementation

The retriever is also pretty simple to implement. You make the call to the XAuth.retrieve() function:

XAuth.retrieve({
  retrieve: ["www.meebo.com", "www.youface.com", "xauth.org"],
  callback: retrieveCallback
});

You must list all the Extender domains you are interested in, and the callback function is called with tokens for each of the available Extenders.

{
  "cmd": "xauth::retrieve",
  "id": 4,
  "tokens": {
    "xauth.org": {
      "token": "bb87aef641b8f29023a8c207f",
      "expire": 1271773157402
    }
  }
}

Now remember, XAuth doesn’t say what to do with this information. Everything from here on follows the regular APIs you already use. I expect this will initially be used to hide various OpenID login widgets.
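For example, a retrieve callback could whittle a long list of login buttons down to the providers the user actually has tokens for. A sketch (the provider list is made up for illustration; only the callback data shape comes from the spec above):

```javascript
// Given the callback data from XAuth.retrieve(), return only the
// login providers the user actually appears to use.
function providersToShow(callbackData, supportedProviders) {
  var show = [];
  for (var i = 0; i < supportedProviders.length; i++) {
    if (callbackData.tokens[supportedProviders[i]]) {
      show.push(supportedProviders[i]);
    }
  }
  return show;
}

// Sample callback data, matching the retrieve response shown above.
var data = {
  cmd: "xauth::retrieve",
  id: 4,
  tokens: {
    "xauth.org": { token: "bb87aef641b8f29023a8c207f", expire: 1271773157402 }
  }
};

providersToShow(data, ["www.meebo.com", "xauth.org"]); // → ["xauth.org"]
```

A real page would then render only those providers’ login widgets and fall back to the full list when XAuth returns nothing.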

For an interactive spec/testing console go to the XAuth Specs page.

User privacy issues

So as a user how do you control this? Do I really want these sites to know what I use?

At the XAuth home page you will find a control panel.

It allows you to switch XAuth off completely and to see and manage the tokens Extenders are publishing for you.

Service or Standard

I guess it really doesn’t matter, but strictly speaking this is a service, not a standard. It relies on the xauth.org server to provide the javascript and the iframe page.

This is Discovery not Auth

Probably the biggest issue I find with XAuth is the choice of name. XAuth is not an authentication protocol; I would call it a discovery protocol. It has more in common with WebFinger (again in a complementary way) than with authentication protocols like OpenID and OAuth.

Having spent more than 2 years explaining to people that OAuth and OpenID are not the same, this just adds more confusion. Names are really important.

I also think the choice of Extenders and Retrievers is confusing. Using Provider and Client would be clearer. Let’s not invent more terminology than necessary.


I think this could be really useful. It looks very easy to implement for both Extenders and Retrievers. The usage of HTML5 functions also hints at a bunch of new services that are now possible.

Of course, as with any service of this kind, until there are Extenders out there very few Retrievers are going to implement this. I understand that both Meebo and Google will support it.

As it is so easy to implement, it might not hurt at all for smaller startups to implement Extenders. It could be a way to promote your API.

How OAuth beat Chip and Pin

Published February 12th, 2010

Two news stories on the same day make for quite an interesting contrast.

Chip and Pin is broken

The first concerns the collective might and minds of the European banking system and their suppliers, who overlooked a slight issue in their protocol for authenticating chip cards with a PIN. In Europe most Visa/MC cards are smart cards and have to be authenticated with a PIN. This in theory allows for an authenticated payment message.

The only problem was that one very important bit of the message was not authenticated, leaving a gaping hole. I won’t go into the details as well as Ross Anderson does; he is one of the security researchers who discovered the flaw. Unfortunately it sounds like carders discovered it before they did.

Now what to do with these supposedly safe authenticated transactions? There is no way of knowing which ones were fake, and you can’t mass-revoke all European cards. Someone is in a bit of a bind right now.

Grader’s security screw up

The second story was about HubSpot, a Cambridge, Mass.-based startup who self-admittedly screwed up and let a malicious user compromise the security of their Grader service, a rating service for Twitter users.

Granted, we are not talking about a system that handles the majority of Europe’s electronic point-of-sale transactions here. But they know they screwed up. And because Grader used OAuth, they were able to mitigate any damage pretty quickly by asking Twitter to revoke their Consumer credentials and any tokens that had been issued to it.


The difference is that while both Chip and Pin and OAuth are ways of doing delegated authentication, the only token to revoke in the Chip and Pin scheme is in the card itself. The standards behind Chip and Pin assume that the technology is perfect and, through their rule books, that all parties along the long chain from the card to the issuing bank can automatically be trusted.

This is basically the exact issue I described in The sorry state of Payment Standards.

OAuth does not define how a user authenticates himself to either of the services involved, rather it is focused on the delegation.

The delegation is done in the form of an authorized token that can be equipped with limits and can be revoked at any time. It is under the control of the user. In this case Grader themselves requested the revocation, as they knew that all of their credentials were compromised. Where do the European bankers even start to clean up this mess?

I think OAuth, a simple standard (as authentication standards go) developed on a mailing list by a small group of developers, has incredible potential in payment applications. This is of course why we picked it as one of the fundamental building blocks for OpenTransact.

Is OAuth perfect? Probably not; nothing is 100% secure. It has had one serious security flaw, which was fixed. But by design it is revocable: you can do something about it if something goes wrong. There is now an IETF OAuth Working Group working on making it an official internet standard.

OpenTransact a tiny payment standard

Published February 11th, 2010

As I mentioned in my last post, The Sorry State of Payment Standards, I wanted to blog about OpenTransact, which so far is probably the most important outcome of the Agile Banking mailing list set up after my talk last year on Agile Banking.

OpenTransact was designed to be very simple and is based on REST, the current best practice for web API design. It should be so easy for developers to implement that there really is no excuse not to.

An OAuth-authenticated HTTP POST request is made to an asset URL with the parameters amount, to and an optional memo field. The payer is deduced from the OAuth token.


POST /transactions/usd HTTP/1.1
Authorization: OAuth ... oauth_token="ad180jjd733klru7", ...
Content-length: 239

amount=10.00&to=[email protected]&memo=Milk

The above example shows a payment of $10 to [email protected] from the user who authorized the “ad180jjd733klru7” OAuth token.

Using the Ruby OAuth gem you would write the above as:

@token.post "/transactions/usd", :amount => 10, :to => "[email protected]", :memo => "Milk"

That is pretty much it. But please do read on for more about motivations, issues and applications.

The sorry state of payment standards

Published February 4th, 2010

Since my talk last year at Reboot on Agile Banking, the Agile Banking Mailing List has been quite active with lots of different ideas. One of the most important yet simple products of the list has been OpenTransact, which I now realize I haven’t even mentioned on my blog.

OpenTransact aims to be the world’s simplest technical standard for transferring some sort of value between 2 accounts. We wanted it to be so simple that there wouldn’t be a good technical excuse for not implementing it, and so that it would be extremely simple to build all sorts of new value-added services on top of it. You can find out more at the OpenTransact site; I will post a detailed article about it tomorrow.

In this article I will focus on what we have today. Current payment standards and bank business processes are incredibly complex and I don’t pretend to understand all of it, but I will try to present a much simplified version of what goes on.

What is wrong with existing Payment Standards?

Complex messages

The big issue with most of them is that they are ancient (SWIFT, the standard banks use to transfer funds internationally, dates back to the mid 70s). Most of them are designed for ancient technology like mainframes with batch processing, as well as ancient business practices such as delayed settlement and bank opening hours.

Most of these standards are message based and extremely complex.

SWIFT has 25 standard data elements and 79 types of messages with strange names like MT542. People make good money specializing in specific message types; you often see programmer jobs in the financial industry calling for 3 years of experience in, say, MT530.

ISO 8583, which is used for handling most payment card transactions, dates from 1987 and is a little better in that it at least expects to be able to send one or two small messages online, but still most of the heavy lifting is done after the fact in the settlement phase.

Most countries have their own electronic banking clearing systems with their own set of standards and procedures. The largest US clearing system ACH won’t even let you at their online rules website without buying a $79 book.

Security nightmares

Any non-cash financial transaction consists of one or more entries, known as book entries, in various companies’ databases. By definition, any act of creating a financial transaction is delegated by you to someone else. Whenever your transaction involves another bank, your original delegation gets delegated onwards to any number of other institutions. Unfortunately the transactions are often unauthenticated. Whether you are sitting in a bank office ordering a wire transfer or handing a waiter your credit card, you are delegating authority to transfer funds out of your account. This is why a check says “Pay to the order of…”.

Card payments do support some level of authentication, from PINs to signing via an onboard smart card. But these aren’t used when performing an internet payment, which is obviously where much of the fraud goes on. Credit card or ACH fraud essentially boils down to convincing (delegating) someone to transfer funds from the victim’s account.

Since you can’t satisfactorily secure the payment message, most of these standards come with complex reconciliation procedures, which make up most of their complexity, and even larger and more complex rule books. These rules are what allow you to dispute a transaction when you get your statement a month after the fact.

Enter PayPal

PayPal’s original Website Payment API, while not a standard, was a big step up over the actual standards in that it was born of the web. Web 1.0, yes, but still the web. Rather than using REST, though, it hides some of the complexity behind a single URL with lots of different fields, including optional non-payment fields such as address and shipping details.

A big issue for PayPal was that they still had to interface with the traditional banking world, with its settlement times and insecure payments. To make it economical they need to use ACH and similar slow, low-cost national payment networks, guaranteed by credit cards if these don’t go through. This is where much of the complexity of dealing with their Instant Payment Notification comes from.

Most of PayPal’s competitors use APIs similar to PayPal’s, but there hasn’t been any real standardization yet.

Their new Adaptive Payment API is much clearer, yet still fairly complex for a new developer faced with it. And no, it’s still not REST.

Why is it like this?

Closing Time

There are many reasons why banking-related standards are complex. Most of them are historical. For example, banks still operate 5 days a week with close of business at some time like 6pm, decided many decades ago by gentlemen in top hats. Of course they do work 24 hours, but all their processes are still based on this pre-online world.

This means there is still a lot of batch processing going on in that last hour before closing. Most banks have a cutoff time a couple of hours before their official closing time to give them time to batch up and reconcile the day’s business. This is why you have to rush to the bank with your checks before 3pm on a Friday or they won’t get credited until Monday evening.


Settlement is another important issue that all of these ancient standards have to deal with. Once you’ve bought that dry martini in the bar after work with your debit card, the money doesn’t actually leave your account yet. Your bank basically puts a hold on your money first. Then over the next couple of days they settle it through a network of banks and institutions, depending on the country. Most of the settlement process is done through batch processes once a day; the more steps between your bank and the bar’s bank, the longer settlement takes.

Most of these settlement (clearing) times are standardized by rules set out by central banks or banking associations. The holy grail for many of these systems is to reach what is known as T+0 clearing, which is fancy banker speak for instant clearing. It will take a long time for this to happen in most large economies like the US, as there isn’t really demand for it from anyone except consumers and small businesses.


Any new standard or API that attempts to deal with the existing banking standards is very difficult to simplify. You need to account for each country’s settlement and fraud prevention rules. SWIFT was updated to a fancy-schmancy new XML standard, which didn’t do anything to simplify it besides making it easier to parse.

Any kind of innovation here needs to be designed for an online world running 24 hours a day, 7 days a week, with clear, simple standards that only aim to solve a simple need. We have had this magic technology called HTTP for close to 20 years now, and it’s been kind of successful. OpenTransact is only a very thin layer on top of HTTP and uses OAuth for its authentication. I think it is going to be big, but I will write more on that in my next post.

Portable Contacts in Ruby

Published October 6th, 2009

One of the great new standards made possible by OAuth is PortableContacts. Joseph Smarr from Plaxo is the main force behind it together with Chris Messina and others.

Portable Contacts, as the name suggests, is a standard for allowing one application access to your contact data on another application. The standard is a nice example of a very clean API and supports both JSON and XML.

I was looking around and I couldn’t find a Ruby library for it, so I wrote this simple Portable Contacts Ruby Gem to allow me and others to use it.

It requires OAuth and the Ruby OAuth Gem. If you are using it with a Rails application, use the OAuth Plugin; with straight Ruby, use the Ruby Gem. All you need is an AccessToken object and a Portable Contacts URL to get started.

@access_token = ... # instantiate access token
@client = PortableContacts::Client.new "http://www-opensocial.googleusercontent.com/api/people", @access_token

# Get users profile

@profile = @client.me

puts @profile.display_name
=> "Bob Sample"

# Get users contacts
@contacts = @client.all

If you are using the plugin it is very easy to get started accessing Google.

Just install it and configure it. Once you have a GoogleToken for your user:

@google_token = GoogleToken.find_by_user_id @user.id
@client = @google_token.portable_contacts

# Get users profile

@profile = @client.me

puts @profile.display_name
=> "Bob Sample"

# Get users contacts
@contacts = @client.all

As this is the very first version there are likely to be issues, please do submit patches. I would also love to include a PlaxoToken etc for the OAuth Plugin if someone wants to submit one.


Eventually there will be some sort of auto discovery so you don’t need to provide a Portable Contacts URL.

I would also like to add a server component to the gem, making it easy for you to provide portable contacts for your own application.

About me


My name is Pelle Braendgaard. Pronounce it like Pelé the footballer (no relation). I live in wonderful Managua, Nicaragua. I work with Clojure, Bitcoin and Ethereum.
