
I have now upgraded and moved my blog from to

As far as I’m aware, I’ve successfully mapped my blog to my new domain. This means that any visits to my old blog will redirect you to my new domain automatically. This also means that any linking to those old posts, any bookmarks and (hopefully) RSS subscriptions should all still be OK – they too should ultimately get to their intended destinations.

Having said that, I’m not sure if all RSS readers will be happy with it, so if you do subscribe please update the feed to

See you there!


This month’s London Web Standards group met to discuss Designing for the Social Web, by Joshua Porter. Big thanks to Jeff Van Campen, the group’s organiser, for kindly giving me a copy!

I liked this book, and I don’t know if I had expected to. From the first page it’s immediately clear that the ‘design’ of the title doesn’t refer to graphic or Web design. It’s a 101 in creating successful social Web applications, covering the whole spectrum of ‘design’ and development and how to tackle the issues you’ll come across along the way.

Quoting names as varied as Darwin, Freud, Berners-Lee and Douglas Adams, Designing for the Social Web offers a great amount of depth in its near-200 pages, more so than I had anticipated. Porter presents detailed studies in all aspects of social media ‘design’: the interface and UX design itself, of course, but also user behaviour, online identity, social economics and particularly the application life cycle.

The book lays the foundations for constructing the ‘perfect’ social website. It defines important goals and principles, and recommends research methodologies to identify your audience. From there, it takes you through the process of determining your users’ intentions, their goals and incentives – ultimately, so you can distinguish your core features and the site’s functions.

He looks at popular social sites – the big names, as you’d expect – and reflects upon their legacies, but others too that invite further investigation. Throughout, he refers to good supporting materials – important interviews, blogs and key names in this current social media boom.

Early on, Porter constructs a framework for development, his ‘AOF’ framework, describing a simple prioritisation scheme that he uses for reference thereafter. He expands upon its three building blocks – of audience, objects and features – and moves forward to studying each topic and their importance within the chapters that follow. It’s detailed and well deconstructed.

But the real kick of the book is in the four chapters where Porter concentrates on and dissects four very focused areas of your application’s development: designing for sign-up, for ongoing participation, for collective intelligence and for sharing. These parts make up the bulk of the book, and they’re very well researched and informed.

Without wanting to relate the whole book here, these build upon four ‘hurdles’ that Joshua outlines at the very beginning, describing how to achieve, maintain and build upon a deep level of user engagement.

Porter's User Engagement Hurdles
Ultimately, Porter concludes by offering methods of analysis in order to understand and optimise your application once published. He gives pointers to evaluate performance, measure and act upon usage statistics, describing techniques to gather meaningful metrics and how to react accordingly.

As I say, I enjoyed the book all in all. Throughout, it is very contemporary and up-to-date in both its principles and its examples.

I needn’t say how powerful and useful – lately, almost essential – social Web apps have become and can be. With giants like Facebook, MySpace, YouTube et al, and even sites that you might not think at first to be ‘social’ (although they very much are) – Nike+, LibraryThing – it is an incredibly hard market for any new startup to break into.

I think for anyone intending to do just that, this book is both extremely relevant and important reading.

Last year Facebook released Facebook Connect and, at about the same time, Google released Friend Connect. They’re two very similar services that allow users to connect with information and with their friends from the respective native platforms on third-party enabled sites. The intention, as I’ve written about before, is to add a layer of social interaction to ‘non-social’ sites – to connect your information and activity on these third-party sites to your information and activity (and contacts) on the original platforms.

Then in March, Yahoo! announced their sign-on service, called Yahoo! Updates.

Now, this week, Twitter have announced their connection service, called ‘Sign in with Twitter’. It too gives you secure, authenticated access to your information and contacts, in exactly the same way the others do – except this time, it’s Twitter.

Sign in with Twitter

You might ask: if we have three, do we need a fourth? Have you ever used any of the other three?

But don’t dismiss it, or think Twitter are jumping on any kind of bandwagon: Twitter’s implementation is fundamentally different to the others – and it could cause quite a stir.

The problem with the other services (ultimately, the problem with the platforms) is that, more often than not, they are completely closed and non-portable. Although you can sign in to a third-party site and access your data, there are a lot of limitations on what you can retrieve and publish. These popular social networks have grown and amassed huge amounts of members and data, which they hoard and keep to themselves. I’m not talking about privacy; I’m referring to data portability.

The infrastructures are like locked-in silos of information, and each is built differently – because either they never considered that you’d want to make your data portable, or they didn’t then want (or see value in) you moving your data anywhere else. The services they’ve created to ‘connect’ to your data are also proprietary methods – custom built to channel in and out of those silos. Each of those services, too, is a singularity; they won’t work with each other.

Twitter, though, have come up with a solution that adheres to agreed-upon standards – specifically, by using OAuth to facilitate its connection. Technically, it’s significantly different, but in practice, you can expect it to do everything the others can do.
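To illustrate why a shared standard matters: every OAuth 1.0 provider signs requests in exactly the same way, so one client library can talk to any of them. Here’s a rough, simplified Python sketch of that HMAC-SHA1 signing step (the parameter names and secrets below are made up, and a real client would also handle nonces, timestamps and the token exchange):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0 HMAC-SHA1 signature: percent-encode and sort
    the request parameters, build the signature base string, then sign it
    with the consumer secret and (optional) token secret."""
    enc = lambda s: quote(str(s), safe="")
    # 1. Normalise the request parameters (sorted, percent-encoded)
    param_string = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. Build the signature base string: METHOD & URL & params
    base_string = "&".join([method.upper(), enc(url), enc(param_string)])
    # 3. Sign with the shared secrets
    key = enc(consumer_secret) + "&" + enc(token_secret)
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Because any provider adopting OAuth computes exactly this signature, there’s no proprietary ‘special sauce’ in the connection – the parties only need to share the secrets, not a bespoke protocol.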

The community’s thoughts

Yahoo’s Eran Hammer-Lahav (a frequent contributor to OAuth) has written a good post discussing his thoughts. He says it’s ‘Open done right’ – no proprietary ‘special sauce’ clouds interoperability, as happens with Facebook Connect. I think he’s right.

He looks at what happened when Facebook Connect was introduced: it essentially offered third-party sites two key features – the ability to use existing Facebook accounts for their own needs, and access to Facebook social data to enhance the site. The value of Facebook Connect is to save sites the need to build their own social layer. Twitter, though, is not about yet another layer, but about doing more with what you’ve already got.

Marshall Kirkpatrick also wrote about the announcement; his metaphor for the other ‘connection’ services best describes how they function – ‘it’s letting sites borrow the data – not setting data free’.

But then he talks about Twitter ‘as a platform’, and I think this is where things get interesting. He says:

Twitter is a fundamentally different beast.

All social networking services these days want to be “a platform” – but it’s really true for Twitter. From desktop apps to social connection analysis programs, to services that will Twitter through your account when a baby monitoring garment feels a kick in utero – there’s countless technologies being built on top of Twitter.

He’s right. Twitter apps do pretty much anything and everything you can think of on top of Twitter, not just the primary use of sending and receiving tweets. I love all the OAuth and open standards adoption – but that’s because I’m a developer. Thinking about Twitter as a platform makes me wonder what kind of effect this will have on the users – how it could affect the climate, even the landscape, of social media if, already being great, Twitter is given some real power.

People have long questioned Twitter’s future – its business model, how it can be monetised; those things are important – but where can it otherwise go, and how can it expand? Does it need to ‘expand’? Its service is great; it doesn’t need to start spouting needless extras, and I don’t think it will. But widening its connectivity and its adaptability could, I think, change our perception of Twitter – its longevity and road map, the way we use it and think of ourselves using it.

My Thoughts

Regardless of Richard Madeley or Oprah Winfrey’s evangelism, Twitter is an undeniable success.

When Facebook reworked and redesigned their feed and messaging model, I almost couldn’t believe it. What was the ‘status’ update basically is Twitter now, and it’s the backbone. It’s Twitter’s messaging model; it asks ‘What’s on your mind?’.

I’m probably not the only one who thought this; I’d guess any complaints about it being a bit of a blatant rip-off were bogged down by all the negativity about the interface redesign.

I think Facebook realised that Twitter has become a real rival. I think (and I guess Facebook also think) that as people become more web-savvy and literate with these social websites, they want to cleanse.

The great appeal of Twitter for me was that, ingeniously, they took a tiny part of Facebook (this is how I saw it two years ago, anyway) and made it their complete function – simple, short updates. Snippets of personal insight or creative wisdom; it didn’t matter really. What was important was that it ignored the fuss and noise of whatever else Facebook had flying around its own ecology (and this was before Facebook applications came around) and took a bold, single, straight route through the middle of it.

Looking back, a lot of Facebook’s early adoption could be attributed to people growing restless with the noise and fuss of MySpace at the time – Facebook then was a cleaner and more structured option.

I remember Twitter was almost ridiculed for basing its whole premise on such a minute part of Facebook’s huge machine. Now look at the turnaround.

Now people are growing up out of the Web 2.0 craze. A lot went on, there was a lot of ‘buzz’, but a lot of progress was made in connecting things. People now are far more connected – but perhaps they’re over-connected, struggling from what Joseph Smarr calls ‘social media fatigue’. People have multiple accounts on a ton of dispersed and unconnected sites around the web – true, each unique and successful for its own achievements – but it can’t go on.

Twitter for me is streamlined, cleansed, publishing. Whether talking about what I’m doing or finding out information from people or about topics that I follow, the 140 character limit constrains these utterances to be concise and straight-to-the-point pieces of information. The ‘@’ replies and hashtags are brilliant mechanisms conceived to create connections between people and objects where there is almost no space to do so.

I use my blog to write longer discourse, I use my Twitter to link to it. Likewise with the music I listen to, I can tweet Spotify URIs. I link to events and anything particularly good I’ve found (and probably bookmarked with Delicious) I’ll tweet that out too.

Twitter for me is like a central nervous system for my online activities. I won’t say ‘backbone’ – because it’s not that heavy. A nervous system specifically in the way it intricately connects my online life, spindling and extending out links – almost a lifestream in micro itself.

Recently, I saw Dave Winer‘s ‘Continuous Bootstrap‘ which, although admittedly a bit of fun, describes the succession of platforms deemed social media ‘leaders’ (see the full post here).

What I initially noticed is that he aligns successful platforms – blogging, podcasting – with a single application: Twitter. It doesn’t matter whether he is actually suggesting that Twitter alone is as successful as any single publishing form, but it did make me wonder if Twitter, rather than being the current ‘holder of the baton’, will actually be the spawn for whatever kind of Web-wide platform does become popular next.

If the real Data Portability revolution is going to kick in, if it’s on the cusp of starting right now and everything will truly become networked and connected – would you rather it was your Twitter connections and voice that formed that basis for you or your Facebook profile?

I know I’d much rather explore the connections I’ve made through Twitter. The kind of information I’d get back from the type of people who’d connect in this way would be far more relevant coming from my pool of Twitter connections than from the old school friends and family members (notoriously) who’ve added me on Facebook – the kind that just add you for the sake of it.

If Web 3.0 (or whatever you want to call it) is coming soon, I’d rather detox. Twitter is slimmer and still feels fresh to start it out with. For me, Facebook feels far too heavy now, out of date and messy. Maybe I’m being unfair and I feel that way because I’ve fallen out of touch with it and now I visit less frequently, but all the negativity hasn’t done it any favours – and those complaints aren’t unfounded.

If you’re like me and cannot resist watching YouTube videos in High Definition or High Quality whenever the option is available, you might also get a bit disgruntled that no-one ever seems to link directly to these versions – or might not know how to.

I’ve not seen it documented anywhere on YouTube’s site (maybe you’re told when you upload a video – I’ve not tried), but you can link directly to High Quality and High Definition versions of a video by adding or altering a single argument on the URL string.

For example, take a normal YouTube link:

If you add ‘fmt=18’ to the end of the URL (as an extra query argument), you’ll automatically view the High Quality version:

If a High Definition version is available, add ‘fmt=22’:


If you want to embed higher quality versions of the videos through the YouTube player though, you’ll have to use two arguments like so:


The HD videos on YouTube’s site play at 854 x 505 pixels by default (including the player chrome), but these 720p videos will support up to 1280 x 720 pixels.

You can edit the dimensions of the player in the embed code you’re provided with. So your final HD embed code will look something like this:

<object width="854" height="505"><param name="movie" value=""></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="854" height="505"></embed></object>

Another thing: you can also jump straight into a specific part of a video by adding the ‘#t’ parameter and specifying the time value like so:

These arguments can be paired, of course – for example, the same in High Quality:

I recently found out you can do this with Spotify URIs, too:


Albeit formatted differently, it does the same job. Note this only works with Spotify URIs; it won’t work with the HTTP links.

It would be nice to see a standard adopted for such features, but it’s really up to the platform developers to decide upon the mechanism.

Unfortunately, too often each one wants to make its own mechanism unique. This is a micro example of a much larger problem I tend to go on about (see Data Portability and Linked Data).

Anyway, I think YouTube should definitely make their quality selection easier. These parameter tweaks feel like code hacks. It would be much nicer if YouTube allowed you to specify ‘quality=HD’, or something similar, on any video link.

Last month I managed to get to This Happened London #7, another great night of inspiring talks showcasing recent innovations in digital art and interactive media.

Although it was nearly a month ago now, I noticed Chris O’Shea recently posted all the videos and presentations online, so thought I’d use that as an excuse to talk about it.

I’m always impressed by these evenings and their inspirational talks; it’s incredible that they’re free – and you’re incredibly lucky if you manage to get hold of a ticket. I think the latest batch disappeared in about thirty seconds.

All the talks were great, but I especially want to mention Adam Cubitt of Glassworks.

Another former Bournemouth University student and a Computer Animation graduate, he led a team, in collaboration with a host of anaesthetists and surgeons from the London Heart Hospital, to create an incredible real-time, relative-sized interactive simulation of the human heart to assist in the teaching of transesophageal echocardiography to medical students.

The level of detail in the final model is astonishing, seemingly surpassing the expectations of everyone associated with the project. The medical staff themselves claim never to have seen anything quite so realistic, nor to have expected that such a replication – a project so ambitious – could be so successful.

And as if the software alone isn’t impressive enough, students can actually use a specially developed probe on a mannequin connected to the system to visualise what the camera would actually be seeing inside a real human body, reacting (again in real-time) to its position and rotation within the chest cavity as a real echocardiography transducer would.

It really, really is worth a watch:

All the videos and presentation files (for all the talks) are on the This Happened site, or you can watch the videos on their Vimeo channel.

There’s also a lot of content on Flickr – the This Happened London group, tags for thishappened-london-0309 and thishappened-london-7 – and Twitter discussion with #thlon and updates from this_happened.

For a change, this month’s LFPUG didn’t actually feature any Flash. Instead, we were presented with an introduction to iPhone application development: two single-hour talks delving into developing with Objective-C and the Cocoa Touch environment, but specifically tailored to a Web developer’s perspective.

I really don’t know how I feel about iPhone development – as a Flash developer, anyway. There have been a lot of ‘iPhone for Flash developers’ or ‘Actionscript for the iPhone’ or similarly titled tutorials springing up around the Web lately, which I guess have come about on the back of the overnight success stories from applications booming in popularity and the ease with which the App Store lets a small dev team get equal distribution and visibility for their application. But why Actionscript?

The profitability is understandable; I just think it’s strange that Flash developers specifically assume that iPhone development is something they can or should lay claim to – that it’s within their domain or their right to be creating these apps. It’s got to the point where a Flash platform meeting is hijacked (hijacked isn’t the right word – it wasn’t unwelcome, and it drew the biggest turnout I’ve ever seen). Is it selfish that there’s such a demand for tutorials made easily digestible for them, when traditionally any other kind of development is approached by everyone else peeking in?

I’m not sure; I can’t decide. Maybe it’s more honest to say that it’s the look and feel – the slick presentation layer and the interactivity of the interface – that’s so attractive (it’s Flash-like), and a lot of people do start playing with Flash because it looks good. Flash is inherently a visual platform. Maybe Flash just is the closest platform, and Flash development easily lends itself to iPhone development.

Whatever the case, thinking as a platform-agnostic programmer, I was looking forward to the meeting. I have an addiction to learning (or trying) new languages even if I needn’t. Plus, it was honestly disclosed that this week there would be no Flash content – not an iPhone 101 bent into shape for Flash developers to understand, but one tailored for Web developers generally. I was hoping for more conceptual comparisons rather than perhaps just pointing out syntactical differences – either way, curiosity had the better of me.

First up was Masi Woermann with An Introduction to iPhone Application Development. He introduced the broad concepts of iPhone development and the different approach developers must undergo to create applications, looking mainly at the workflow. Being primarily a Flex developer, Masi maintained comparisons between the architectures of a Flex app and an iPhone app – drawing parallels between Objective-C and Actionscript coding and their relationship to UI components created with Interface Builder versus MXML.

He introduced the development tools and the iPhone SDK, discussed some basics of Objective-C (pointers, memory management, classes) and eventually produced a very simple application – demonstrating the basics of interacting with visual components, straightforward methods, getters, setters – some simple OOP.

It was good to see some hands-on coding and to see that, although it’s a completely different kind of development, it might be intimidating but it’s not impossible. Watch it here:

Masi Woermann - An Introduction to iPhone Application Development

Then Matt Biddulph presented iPhone Development for Web Developers. Matt is primarily a server-side developer working with the likes of Python, Ruby and Rails, but instead of going into any code he explained that his real interest in iPhone applications lies in the device’s connectivity – specifically the capability to connect to the Web and interoperate and network with the data and objects found there.

He looked at the applications that Twitter and Facebook developed, quoting Joe Hewitt‘s development wisdom with his work at Facebook.

He also criticised some of the failures of the current SDK, as Hewitt has also done – specifically that some of the native features you’ll see in Apple’s applications still aren’t available for third-party developers to utilise. I hadn’t realised this was the case, nor would I have thought Apple would hold back on anything – I guess with later releases more features will become available. The iPhone OS 3.0 SDK is due for release this summer, so perhaps more will open up then.

Watch Matt’s talk here:

Matt Biddulph - iPhone Development for Web Developers

Matt also mentioned PhoneGap, an open source cross-platform mobile framework for building apps with JavaScript. It’s been labelled as being ‘like AIR for the iPhone’ and operates on Android and BlackBerry, too. Again, maybe it’s just another means to cut a corner and not develop with the native environment, but it looks impressive – and it seems powerful. You can take advantage of all the core features of the various platforms – geolocation, the accelerometer, etc.

It’s also created entirely by Web developers. As the video on their site claims, there aren’t many Objective-C developers, but there are a lot of Web developers – so, in keeping with the rest of these observations, there really are more and more opportunities and points of entry for Web developers to get into mobile and iPhone app development; it’s purely demand that has created them.

All in all, whether it’s ‘in favour’ or not for Flash developers to want to develop iPhone apps is probably irrelevant – whether it’s for profit or to expand a skillset probably doesn’t matter either. Hopefully all the attention will cause a shift in the perception of developing for mobile devices in general. I know I still cringe whenever anyone mentions Flash Lite – but it seems that’s starting to enjoy the beginnings of a resurgence too.

Then I think of initiatives like Adobe’s Open Screen Project and think this could be a really exciting time for mobile devices regardless, maybe there’s just so much fuss over the iPhone right now because it’s the iPhone.

As I write this post, an email has just arrived in my inbox announcing a new group meeting specifically for iPhone development, the London iPhone Bootcamp – ‘part seminar, part hackathon, part workshop’ – they too are looking for the next killer iPhone app!


I’m a regular attendee of the London Flash Platform User Group and the London Flex User Group, and I get notified each time a new technology group is formed or arranges an event. Anyway, through that I recently heard about the London Web Standards group.

The London Web Standards group meet monthly to discuss topics like creating websites, web standards, the W3C, XHTML, CSS, the DOM, ECMAScript, and how they impact you, your organisation, your audience and your clients.

This month they meet to discuss Joshua Porter’s ‘Designing for the Social Web’ (Amazon) which, not having read it, I would usually have passed on were it not for the group’s organiser, Jeff Van Campen, kindly giving me a free copy.

London Web Standards Book Drop Off by otrops

So hopefully I’ll be able to rattle through it before this month’s meeting on April 14th – the event page is here.

You can also follow ‘webstandards’ on Twitter for notices, meetup dates and agendas. Look forward to it!

Last month I talked about Ted Patrick‘s “Facebook on The Flash Platform“, an Adobe eSeminar discussing development with the Facebook API using Flash. In particular, I pointed to an open source Actionscript API library to work with, in the absence of ‘official’ support from Facebook themselves, who (then) offered it only for PHP and Javascript development.

This week, though, Adobe and Facebook announced a partnership they’d managed to keep neatly under wraps, and have now released a new official open source client library for Actionscript 3.

This joint effort is intended to be a complete resource supporting all Facebook APIs, including Facebook Connect, for all Flash and Flex applications for Facebook.

Over on Adobe’s Facebook Developer Connection, Adrian Ludwig (Adobe) and Josh Elman (Facebook) talk about the library and the partnership. There’s also documentation, example applications, quick starts, inspiration and code.

Adobe Flash Platform - Facebook Platform | Adobe Developer Connection

In his post, Serge Jespers points to a nice quick start by Danny Dura that uses the library to set up a simple connection to Facebook in about ten minutes.

You can tell both Adobe and Facebook are excited about this; adding social elements to games, user experiences or interactions makes them endlessly more engaging – it almost goes without saying. But now that these experiences can be so easily enabled – and, in my opinion, enhanced – by Flash, I hope it will prove to facilitate some great applications.

It’s equally beneficial for Facebook and Adobe. It means even more applications for the Flash platform and an easy entry point for yet another set of developers to integrate with the Facebook API.

And to help kick that off, Rich Tretola at InsideRIA has announced a new series of articles covering development with the new AS3 library, authored by Mirza Hatipovic – an ambitious 20 articles, from a simple Hello World to advanced PHP and database-supported applications.

I’m particularly looking forward to seeing and playing with the Facebook Connect API – not sure if InsideRIA will cover these – but hopefully whatever I do get up to, I’ll get round to writing about.

In the latest Nodalities podcast, Paul Miller talks to Dame Wendy Hall, Professor of Computer Science at the University of Southampton and a founding Director of the Web Science Research Initiative.

The Web Science Research Initiative (WSRI) is a joint venture between MIT and the University of Southampton to teach the literal academic ‘science’ of the web.

The initiative was founded in 2006 alongside Sir Tim Berners-Lee, Professor Nigel Shadbolt and Daniel J. Weitzner, and Dame Wendy talks with Paul about some of the thinking she and Sir Tim shared that eventually resulted in its conception.

They recognised that there are many determining factors outside of pure technology that shape the evolution of the Web – that, as a human construct, it demands new ways of thinking: we need to understand how and what effect humans have on its evolution as much as how the Web affects our society.

The Web is one of the most (if not the most) transformative applications in the history of computing and communications. It’s changed how we teach, publish, govern and do business; it’s studied in anthropology, sociology, psychology, economics – needless to say, a lengthy list – and Web Science is to consider the Web in all these fields (and more), not only as a computer science.

It’s also intended to anticipate future developments – to foresee the good and bad consequences of its change.

They’ve been working with the Web for a long time – since the earliest days of hypertext and hypermedia – and with such experience have recognised the cyclical nature of Web trends: every five years or so sees great advances in the Web’s evolution. Think Web 2.0 for the latest phase – the next (apparently) being Web 3.0 (or the ‘Data Web’, the ‘Web of Linked Data’ or the Semantic Web – whatever buzzword you want to ply it with). The WSRI, in part, stands to find out what’s likely to come, to inform us and our decisions.

Of course, it was also in part founded to evangelise the Semantic Web. The Semantic Web was, and still is, Berners-Lee’s original vision for the Web – one he had as early as WWW94 (though then ‘unnamed’). These small phases add up to the larger realisation of this original dream – and with that, Dame Wendy discusses her thoughts on how this will continue in future. She talks about the WSRI’s efforts to create a wide network of key Web Science labs across the globe, their work with curriculum developers and government agencies, and their training of university teachers and educators to inject Web Science into higher education as recognised academia.

Paul Miller also shares some thoughts on his ZDNet blog. At first he was sceptical, suggesting that we really don’t need yet another academic subject just to ‘permit’ us to study the Web – that we’re perfectly well served by enough areas of study (those listed above) that already seek to understand both the Web and its impact upon all of us. But he too can’t deny that Web Science as a ‘label’ can be beneficial to the Semantic cause, both in the evangelistic sense and by providing ‘institutional credibility’ to their area of research.

I collected a number of Web Science and WSRI-related bookmarks during my thesis research, for further reading:

Yahoo! have announced Yahoo! Updates, their answer to Facebook Connect and Google Friend Connect.

I’ve spoken about Friend Connect and Facebook Connect a couple of times already as they vie for the mantle of the primary single sign-on for social web applications, but I hadn’t heard anything about Yahoo! offering an implementation on their part – it turns out to be a pretty good challenge.

They announced on their Developer Network Blog yesterday that it’s a collaboration with JS-Kit, a leading distributed social network that currently connects over 600,000 sites through powering comments and ratings services. JS-Kit have already got some huge implementations, including sites for AOL, Evite and Sun Microsystems.

As the JS-Kit blog explains, in this scenario these third-party sites can share user-generated content (e.g. blog comments) directly to a user’s social connections on Yahoo! via the “Updates from My Connections” feed.

This new capability offers a potentially massive new distribution channel for content and a great engine for driving referral traffic from Yahoo!’s global audience. The integration with the Yahoo! Social Directory API enables these sites to let users leverage their Yahoo! identity and display their Yahoo! profile photo with their comments. By enabling commenters to use their Yahoo! profiles, publishers can build a stronger sense of community at their sites.

Yahoo! have implemented the OAuth open standard to achieve this: a trusted connection to third-party sites that not only makes users more willing to add content but, by leveraging the social profile data already stored by Yahoo!, makes commenting and rating an easier process.

This in turn not only means no need for re-registration (cue repetitive form filling), but also that a singular, portable syndication of activity can be aggregated on Yahoo! for those users – which, as Marshall Kirkpatrick suggests, could then be used by site owners to access verified information about their readers’ profiles and interests.

If this were possible, it could be used to serve more relevant content to those users, as well as offering site owners assurance as to who is contributing to their sites.

Yahoo! talked about their Open Strategy back in October, so it’s good (and about time!) to see some of this come to fruition. They’ve hit the ground running, but the JS-Kit support won’t win this alone – those sites also support Facebook Connect.

I’m more impressed that Yahoo! consistently champion open standards, as they do with SearchMonkey also, so I hope this will expand to other networks in the future. But as much as I enjoy a third contender at the table heating things up, I hope this doesn’t result in three (or more) incompatible platforms heading in their own directions, leaving us back at square one.