
Category Archives: google

Last year Facebook released Facebook Connect and, at about the same time, Google released Friend Connect. They're two very similar services that allow users to connect with their information and their friends on the respective native platforms from third-party enabled sites. The intention, as I've written about before, is to add a layer of social interaction to 'non-social' sites – to connect your information and activity on these third-party sites to your information, activity and contacts on the original platforms.

Then in March, Yahoo! announced their sign-on service, called Yahoo! Updates.

Now, this week, Twitter have announced their connection service, called 'Sign in with Twitter'. It too gives you secure, authenticated access to your information and contacts, in exactly the same way the others do – except this time, it's Twitter.

Sign in with Twitter

You might ask: if we have three, do we need a fourth? Have you ever used any of the other three?

But don't dismiss it, or think Twitter are jumping on any kind of bandwagon – Twitter's implementation is fundamentally different to the others, and it could cause quite a stir.

The problem with the other services (ultimately the problem with the platforms) is that, more often than not, they are completely closed and non-portable. Although you can sign in to a third-party site and access your data, there are a lot of limitations on what you can retrieve and publish. These popular social networks have grown and amassed huge numbers of members and huge amounts of data, which they hoard and keep to themselves. I'm not talking about privacy, I'm referring to data portability.

The infrastructures are like locked-in silos of information, and each is built differently – either because they never considered that you'd want to make your data portable, or because they didn't then want (or see value in) you moving your data anywhere else. The services they've created to 'connect' to your data are also proprietary methods, custom built to channel in and out of those silos. Each of those services, too, is a singularity; they won't work with each other.

Twitter, though, have come up with a solution that adheres to agreed-upon standards – specifically, by using OAuth to facilitate its connection. Technically, it's significantly different, but in practice, you can expect it to do everything the others can do.
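As a rough illustration of what OAuth buys you, here's a minimal Python sketch of the HMAC-SHA1 request signing at the heart of OAuth 1.0a. The URL and secrets below are hypothetical, and a real 'Sign in with Twitter' flow also involves request-token and access-token exchanges that this skips entirely:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature for a request."""
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    # 1. Normalise the parameters: percent-encode, sort, join with '&'.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. The signature base string is METHOD & encoded URL & encoded params.
    base = "&".join([method.upper(), enc(url), enc(param_str)])
    # 3. The signing key is the consumer secret and token secret, '&'-joined.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The point is that any consumer and any provider computing this the same way can verify each other without the user's password ever changing hands – which is exactly why a standard beats each silo's proprietary 'connect' channel.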

The community’s thoughts

Yahoo's Eran Hammer-Lahav (a frequent contributor to OAuth) has written a good post discussing his thoughts. He says it's 'Open done right' – no proprietary 'special sauce' clouds interoperability as happens with Facebook Connect. I think he's right.

He looks at what happened when Facebook Connect was introduced: they essentially offered third-party sites two key features – the ability to use existing Facebook accounts for their own needs, and access to Facebook social data to enhance the site. The value of Facebook Connect is to save sites the need to build their own social layer. Twitter, though, is not about yet another layer, but about doing more with what you've already got.

Marshall Kirkpatrick also wrote about the announcement. His metaphor for the other 'connection' services best describes how they function – 'it's letting sites borrow the data – not setting data free'.

But then he talks about Twitter ‘as a platform’, and I think this is where things get interesting. He says:

Twitter is a fundamentally different beast.

All social networking services these days want to be "a platform" – but it's really true for Twitter. From desktop apps to social connection analysis programs, to services that will Twitter through your account when a baby monitoring garment feels a kick in utero – there's countless technologies being built on top of Twitter.

He's right. Twitter apps do pretty much anything and everything you can think of on top of Twitter, not just the primary use of sending and receiving tweets. I love all the OAuth and open standards adoption – but that's because I'm a developer. Thinking about Twitter as a platform makes me wonder what kind of effect this will have on users – how it could affect the climate, even the landscape, of social media if, already being great, Twitter is given some real power.

People have long questioned Twitter's future – its business model, how it can be monetised; those things are important – but where can it otherwise go, and how can it expand? Does it need to 'expand'? Its service is great; it doesn't need to start spouting needless extras, and I don't think it will. But widening its connectivity and its adaptability could, I think, change our perception of Twitter – its longevity and road map, the way we use it and think of ourselves using it.

My Thoughts

Regardless of Richard Madeley or Oprah Winfrey's evangelism, Twitter is an undeniable success.

When Facebook reworked and redesigned their feed and messaging model, I almost couldn't believe it. What was the 'status' update basically is Twitter now, and it's the backbone. It's Twitter's messaging model; it even asks 'What's on your mind?'.

I'm probably not the only one who thought this. I'd guess any complaints about it being a bit of a blatant rip-off were drowned out by all the negativity about the interface redesign.

I think Facebook realised that Twitter has become a real rival. I think (and I guess Facebook also think) that as people become more web-savvy and literate with these social websites, they want to cleanse.

The great appeal of Twitter for me was that, ingeniously, they took a tiny part of Facebook (this is how I saw it two years ago, anyway) and made it their complete function – simple, short updates. Snippets of personal insight or creative wisdom; it didn't matter really. What was important was that it ignored the fuss and noise of whatever else Facebook had flying around its own ecology (and this was before Facebook applications came around) and took a bold, single, straight route through the middle of it.

Looking back, a lot of Facebook's early adoption could be attributed to people growing restless with the noise and fuss of MySpace at the time – Facebook then was a cleaner and more structured option.

I remember Twitter was almost ridiculed for basing its whole premise on such a minute part of Facebook's huge machine. Now look at the turnaround.

Now people are growing out of the Web 2.0 craze. A lot went on, there was a lot of 'buzz', but a lot of progress was made in connecting things. People now are far more connected – but perhaps they're over-connected, suffering from what Joseph Smarr calls 'social media fatigue'. People have multiple accounts on a ton of dispersed and unconnected sites around the web – true, each unique and successful for its own achievements – but it can't go on.

Twitter for me is streamlined, cleansed, publishing. Whether talking about what I’m doing or finding out information from people or about topics that I follow, the 140 character limit constrains these utterances to be concise and straight-to-the-point pieces of information. The ‘@’ replies and hashtags are brilliant mechanisms conceived to create connections between people and objects where there is almost no space to do so.
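Those mechanisms are simple enough to sketch. Here's a purely illustrative Python snippet (Twitter's own entity parsing is more involved than this) that pulls the @replies and hashtags out of a tweet:

```python
import re

def extract_entities(tweet):
    # '@' replies and '#' hashtags are plain-text conventions, so a simple
    # regex is enough to recover the connections from a 140-character message.
    mentions = re.findall(r"@(\w+)", tweet)
    hashtags = re.findall(r"#(\w+)", tweet)
    return mentions, hashtags
```

That the whole convention fits in two regular expressions is rather the point – the connections live inside the message itself, costing almost none of the precious 140 characters.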

I use my blog to write longer discourse, I use my Twitter to link to it. Likewise with the music I listen to, I can tweet Spotify URIs. I link to events and anything particularly good I’ve found (and probably bookmarked with Delicious) I’ll tweet that out too.

Twitter for me is like a central nervous system for my online activities. I won't say 'backbone' – because it's not that heavy. Specifically a nervous system in the way it intricately connects my online life, spindling and extending out links, almost to be itself like a lifestream in micro.

Recently, I saw Dave Winer's 'Continuous Bootstrap', which, although admittedly a bit of fun, describes the succession of platforms deemed social media 'leaders' (see the full post here).

What I initially noticed is that he aligns successful platforms – blogging, podcasting – with a single application: Twitter. It doesn’t matter whether he is actually suggesting that Twitter alone is as successful as any single publishing form, but it did make me wonder if Twitter, rather than being the current ‘holder of the baton’, will actually be the spawn for whatever kind of Web-wide platform does become popular next.

If the real Data Portability revolution is going to kick in, if it’s on the cusp of starting right now and everything will truly become networked and connected – would you rather it was your Twitter connections and voice that formed that basis for you or your Facebook profile?

I know I'd much rather explore the connections I've made through Twitter. The kind of information I'd get back from the type of people who'd connect in this way would be far more relevant coming from my pool of Twitter connections than from the old school friends and family members who've (notoriously) added me on Facebook – the kind that just add you for the sake of it.

If Web 3.0 (or whatever you want to call it) is coming soon, I'd rather detox. Twitter is slimmer and still feels fresh enough to start it with. For me, Facebook feels far too heavy now, out of date and messy. Maybe I'm being unfair, and I feel that way because I've fallen out of touch with it and now visit less frequently, but all the negativity hasn't done it any favours – and those complaints aren't unfounded.


Yahoo! have announced Yahoo! Updates, their answer to Facebook Connect and Google Friend Connect.

I've spoken about Friend Connect and Facebook Connect a couple of times already as they vie for the mantle of the primary single sign-on for social web applications, but I hadn't heard anything about Yahoo! offering an implementation on their part – it turns out to be a pretty good challenge.

They announced on their Developer Network Blog yesterday, that it’s a collaboration with JS-Kit, a leading distributed social network that currently connects over 600,000 sites through powering comments and ratings services. JS-Kit have already got some huge implementations, including sites for AOL, Evite, and Sun Microsystems.

As the JS-Kit blog explains, in this scenario these third-party sites can share user-generated content (e.g. blog comments) directly to a user’s social connections on Yahoo! via the “Updates from My Connections” feed.

This new capability offers a potentially massive new distribution channel for content and a great engine for driving referral traffic from Yahoo!’s global audience. The integration with the Yahoo! Social Directory API enables these sites to allow users to leverage their Yahoo! identity and display their Yahoo! profile photo with their comments. By enabling commenters to use their Yahoo! profile commenting, publishers can build a stronger sense of community at their sites.

Yahoo! have implemented the OAuth open standard to achieve this – a trusted connection to third-party sites that not only makes users more willing to add content but, by leveraging the social profile data already stored by Yahoo!, makes commenting and rating an easier process.

This in turn not only means no need for re-registration (cue repetitive form filling), but also that a singular, portable syndication of activity can be aggregated on Yahoo! for those users – which, as Marshall Kirkpatrick suggests, could then be used by site owners to access verified information about their readers' profiles and interests.

If this were possible, this could be used to serve more relevant content to those users as well as offer those site owners assurance as to who is contributing to their sites.

Yahoo! talked about their Open Strategy back in October, so it’s good (and about time!) to see some of this come to fruition. They’ve hit the ground running, but the JS-Kit support won’t win this alone – those sites also support Facebook Connect.

I'm more impressed with Yahoo! for consistently championing open standards, as they also do with SearchMonkey, so I hope this will expand to other networks in the future. But as much as I enjoy a third contender at the table heating things up, I hope this doesn't result in three (or more) incompatible platforms heading in their own directions, leaving us back at square one.

Quite out of the blue, and without any notification of its launch as far as I've been able to find, Google seem to be exposing semantic data in their global search results.

Try searching for ‘What is the capital city of England?’ or ‘Who is Bill Clinton’s wife?’ and you’ll see sourced direct answers returned at the top of your search results.

It's hard to tell if these direct results are actually semantic expressions or just presented to appear that way – in the expected semantic triple of subject–predicate–object. The listed sources definitely don't structure their information with semantic expressions, so perhaps quite an amount of logic and natural language processing is being done on Google's part to process non- or semi-structured data.
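To make the triple idea concrete, here's a minimal Python sketch. The facts and the store are entirely illustrative – this is the subject–predicate–object shape, not how Google actually represents anything:

```python
# A semantic triple is just (subject, predicate, object). A naive store
# can answer 'direct answer' style queries by pattern matching, with
# None acting as a wildcard in any position.
triples = [
    ("England", "capital", "London"),
    ("Bill Clinton", "spouse", "Hillary Clinton"),
]

def query(store, s=None, p=None, o=None):
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

A question like 'What is the capital city of England?' reduces to `query(triples, s="England", p="capital")` – the hard part, of course, is getting from free text and unstructured sources to triples like these in the first place.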

I've tried before to find out what Google have been up to concerning semantic technology, but found little. The coverage over at ReadWriteWeb reports that neither they nor their Semantic Web contacts had heard or seen anything about this before, but the community feedback suggests there have been variations of this for some time – including a three-year-old Google program called 'Direct Answers' – though none of the coverage of that program offers the kind of examples we're seeing here.

Marshall Kirkpatrick points to a blog post by Matt Cutts, Google search algorithm engineer, but it seems to be a dead link now. Trawling through Google's caches, though, you can find him quoted:

Many of the data points are being pulled in from the structured part of Wikipedia entries, which is interesting. Other sources are wide ranging, from a license plate website to Jason Calacanis’s Mahalo.

If Google are constructing semantic data from semi-structured or non-structured source data, then there's undoubtedly some quite powerful semantic processing technology in place. I highly doubt this will be the final product of their development with such technologies – more likely it's simply the first we've noticed, which is probably why it's slipped under most people's radar.

The inaccuracy is also an issue. Try searching ‘Who is Bob Dylan’s wife?’ – and you’ll see Sara Lownds (his ex-wife) returned. Seeing these direct answers reminds me of True Knowledge.

True Knowledge's example questions, though, are far more complex – for example, 'Who owns Universal Studios?', 'Is the Chrysler building taller than the Eiffel Tower?', 'What is the half life of plutonium 239?'.

More importantly, if it doesn't know the answer, it won't 'guess' – it'll tell you it doesn't know and ask you to deconstruct your query in order to expand its knowledge base so it can find the answer later.

As Marshall says, this is all speculation based on limited observation – and low visibility of Google’s development. Hopefully there’ll be more soon!

Whilst I’m on the subject of Google and trying to finish half-written drafts hanging over from last year, I thought I’d briefly mention the release of Analytics for Flash.

Aside from capturing all the obvious generic statistics you'd expect from a Flash tracking package – and, being fluidly compatible with the main JavaScript library, it can output all the core functionality of the existing Analytics components – the metrics offered by Google Analytics for Flash can be designed to offer interesting insight into aspects of your users' activity you might not expect. For example, you can collect data that helps you gauge levels of usability or (in a way) the success of a design. You can seemingly monitor the behaviour of users' interactions during a visit too – as well as the length of the visit itself.

It's all technically possible with Google's introduction of event tracking that can be fired from custom interactions – whether that be a button click, a video view or anything else. Along with that, the event can carry a payload, later received by your Analytics dashboard for your interpretation. It sounds simple – but it's capable of being very powerful.
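The shape of the idea is easy to picture outside Flash. Here's a hedged Python sketch – the class and method names are my own invention for illustration, not Adobe's ActionScript API – of a tracker that records custom events with a category/action/label/value payload:

```python
import time
from dataclasses import dataclass, field

@dataclass
class EventTracker:
    # Each recorded event keeps a category/action/label/value shape plus a
    # timestamp, standing in for the payload an Analytics event carries.
    events: list = field(default_factory=list)

    def track_event(self, category, action, label=None, value=None):
        self.events.append({
            "category": category,
            "action": action,
            "label": label,
            "value": value,
            "ts": time.time(),
        })
```

Any interaction handler – a button click, a video play – would call something like `track_event("video", "play", label="intro_clip")`, and the collected records are what you'd later slice in a dashboard.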

Previously, tracking your Flash content happened in isolation. That is to say, you could fire a tracking event when a user accessed a page of Flash content, but from there you were blind to their progress until they navigated again.

This payload, though, could not only detail traffic to specific sections within a Flash application (although, in turn, separate events could be created for those) but could also return data specific to that user and session – for example, the total time the user has spent in a particular place, or on the site as a whole.

Depending on how complex you wish to be (and how many stats you want to trawl through later) this could offer very valuable data. But that data need not only be of value to an agency or advertiser. Counts for clicks on specific buttons aren't anything new when you want to find out how many people click a 'News' link first, or whether anyone notices the 'Help' button. This can be far more granular – to the point, as above, where the data could be used to inform decisions on, say, design or usability.

Take a standard Flash video player as a media component you’re used to seeing on a daily basis. You can easily picture the common control bar. But how many people actually use those ‘Rewind’ and ‘Fast forward’ buttons? Could the design be improved?

Admittedly with Flash video components, you’re unlikely to see those nowadays 😉 – but that (as I’ve picked this example) is the result of user testing, something this kind of tracking can’t replace – Jesse Warden has a strong sense of this in his post about Flash Analytics.

Anyway, the custom events let you send as (overly) complex an amount of data as you wish. Flash, of course, can be used everywhere – deployed as widgets or embedded on blogs anywhere on the Web. These Analytics, though, are part of your application itself, so you can track its usage outside of the original HTML page that the previous iteration of Analytics would have restrained you to.

And it’s free! Check out the code repo.

Exactly how search engines deal with the content of Flash-based websites and information in SWF files has notoriously been a bit of a grey area for a long time. Historically, website creators had to battle with clients over whether the aesthetic potential of Flash was enough of a pay-off against their judgement of the importance of this new idea called 'SEO'.

In July of last year, Adobe announced a collaboration with Google (1, 2) and Yahoo! to develop a new Flash Player technology specifically to enhance the search results of dynamic content in Flash – ultimately, to make the SWF searchable.

But it was unclear how it worked, what it actually did and what provisions the Flash developer or content creators would have to make.

Peter Elst aired his thoughts and, as I did, agreed it looked like a 'backup' or intermediary solution. There was also no standard or recommended approach to deploying content for this new technology – presuming this new platform hadn't just become instantly intelligent to all possible methods of delivery.

Adobe later published an FAQ, but still it wasn’t very technical, so a few developers started experimenting. After seeing Peter’s attempts, Ryan Stewart announced a Flex SEO Contest – an outright declaration that we’re confused but determined to find out what exposure our content has. As well as being a bit of fun. 😉

Dominic Gelineau constructed fourteen test cases, essentially finding every possible way you could contain a simple text string in a SWF file (see 1 – 7 here, 8 – 14 here). He used both static and dynamic TextFields, populated them in various ways, MXML components, standard Flash UI components, whether to use states, etc – covering all the bases across Flash and Flex.

Initially he concluded Google wasn't really finding anything new, but in a later article for InsideRIA he listed his principal observations:

  1. Most of the content that was on the stage/timeline at compile time would be indexed even if it was outside the viewing area.
  2. The TextArea, Text, ViewStack and custom MXML component in Flex would get indexed if they were in the MXML (the Flex equivalent of being on the stage) but the Label component would not.
  3. Until October, SWF files embedded in the HTML using JavaScript (SWFObject, AC_RunActiveContent, etc) could not be found on Google.
  4. Again until October, anything related to the ActionScript 3 method addChild would not get indexed. As an example, adding a MovieClip from the library with static text in it using addChild method would not show up in Google’s search results. In the same way, using states in Flex wouldn’t work. My guess is that since states uses addChild in its MXML syntax, once compiled it would get converted to the addChild method in AS3.
  5. Finally, any content loaded externally from the embedded SWF file wouldn’t get indexed, but was clearly stated by Google.

Fortunately, Jim Corbett, Flash Player engineer at Adobe, offers some much-needed clarification, answering many of these questions at the Adobe MAX conference this year. The video can be found at Adobe TV (I'm having problems embedding it with WordPress) – it's lengthy, and gives a good insight into the Player's search mechanics.

The other day I wrote about Facebook Connect and Google Friend Connect – two recently launched, very similar services going head-to-head in the ambitious self-proclaimed aim of ‘opening up’ the social web.

But if these platforms are successful, what will that actually be like? The demo sites Google provides are good for functional demonstrations but little else. There's a complete list of sites that use Facebook Connect up on their dev wiki – there's Joost, Netvibes and TechCrunch, but no one with as diverse and active a user base as Twitter.

Then on Monday came the news that Twitter chose to connect with Google's service. It's strange that there wasn't more made of the announcement, considering what could come of it.

Twitter hardly said much about it at all on their blog; Google covered it in more depth, but also provided the first real recognisable use case for an integrated site. Now, whenever you join a 'Friend Connected' site, you can use your Twitter profile to join their service. From there, you can see a combination of your followers and those you follow who are already on the site, and connect with them there too. You can tweet about your find from the connected website's portal.

Getting a big site like Twitter on board will really kick Friend Connect up a gear, undoubtedly it’ll receive a massive increase in attention. But it’s not like Facebook Connect is by any means down or out – it’s so early. If anything, the introduction of these services to such widely used web apps as an almost unblinkingly ‘standard’ feature (this will eventually boil down to a simple ‘Connect’ button) could positively change users’ perceptions of them to being just commonplace. I’m sure that’s the ultimate intention, but right now it’ll work in favour for any such service, be it Facebook Connect or any other.

It’ll be a while before we see any real difference in the reception or growth of implementation for either service, whether by then we have a preferred leader or not.

I'm interested to see how Facebook will respond in aiming to get as big a site as Twitter integrated with Connect. Prior to the Twitter inclusion, I felt that Google's Friend Connect came across almost like a developer's toolkit – a set of ready-made widgets to bolt onto your site, boosted by the capability to network centrally. But now I've seen it in action, Facebook have an undeniable rival product.

It should be said of course that Twitter hasn’t really chosen Google over Facebook. Biz Stone wrote that there was hardly any effort required on Twitter’s part – Google maybe just got in there first.

In the same post he goes on to say that Facebook Connect integration is already in development. Twitter officially announced integration with MySpace and the Data Availability initiative seven months ago – they're embracing everything they can, good on 'em.

For some time I’ve been meaning to write about Facebook Connect and Google Friend Connect, two potentially huge social web developments that have been gathering speed and popularity over the past few weeks.

Both services are very similar. Essentially, each functions to simplify the connection between social and non-social websites by offering the connectivity (and some functionality) of its proprietary central platform on third-party websites.

The idea is that a user can 'Connect' with whichever service the site has employed and find users with whom they've already connected on the other services – rather than creating a new account and profile, and repeating the steps of entering information to then find the friends they've already added, over and over again, with every new social-enabled web app.

I first saw Facebook Connect in August with their demonstration service, The Run Around. There, you could 'Connect with Facebook' to join the site and immediately see who else (of your Facebook friends) had joined too. This is all outside of the Facebook chrome, not on the Facebook domain. What's more, as well as interacting with the linked data pulled from Facebook, the website could push data back in. The site itself was intended to track your running routes and times, so when you submitted a new 'run', it would publish to the live news feed on your Facebook profile.

The idea is simple, the effect could be game-changing. It’s been met with both cautious optimism and healthy skepticism.

If this becomes as massive as it could be, we could see a single sign-in that abolishes the need to register and re-register for every newly launched social app. We're already experiencing social fatigue within that process as consumers, and as developers we're having to build whole registration and authentication systems from scratch every time. Plugging into a platform like this – one we assume to be secure and trusted – could offer a means to develop and deploy services much more easily and quickly.

But can we trust – or do we want to trust – a proprietary platform to do this for us? The idea of a single social graph isn't new, but I don't know if I want Facebook to offer it. I'd much prefer FOAF 🙂 – but how many people outside of the development world have heard of it?

I feel I need to write another post entirely about OpenID, OpenSocial and OAuth – services that can't go unmentioned here – but Marshall Kirkpatrick at ReadWriteWeb wrote a direct comparison of Facebook Connect and OpenID that asks some interesting questions, as well as offering a good introduction to the open standards anyway. Although he started by discussing which of the two website owners should use to authenticate and learn about their users, the community expanded his initial mindmap to cover pretty much every angle of the comparison – and it's very detailed; see it here.

He also asks, even if it doesn’t become the dominant identifier online, will Facebook’s challenge breathe new life into the movement for open source, standards based, federated user identity?

Then there’s Google Friend Connect – launched in public beta the same day as Facebook Connect went public for 3rd party sites. This does use a blend of the open source services, but although integrating the open standards might suggest a weightier development process, the first thing to notice is a far less developer-oriented implementation than Facebook Connect.

Using Facebook Connect, it's down to the site creator to construct and integrate an interface to facilitate the connection – Google Friend Connect is widgety, with pretty much zero coding other than cutting and pasting the directed portions. Similarly with the functionality: Google offer widgets for simple commenting on pages, media sharing, or rating content. With Facebook Connect you have to write that yourself – although, admittedly, you then have free rein over design and interaction.

There’s a demonstration video on the Google blog’s announcement of the beta launch.

It's not like this is just a two-horse race, though, or that someone won't work out a way to use both anyway. Google and Facebook are in direct competition, but attempting to open the Web in this way extends far beyond them.

What I find interesting is the interoperability. These technologies aren't semantic, but they do push the exposure and interoperation of a user's social graph with ideas akin to the Semantic Web – utilising data to extend single-site online identities and to network social connections.

They're not Semantic Web efforts, but they have similar aims. Friend Connect's goal is an open social web; the Semantic Web is – quite simply 😉 – a fully understood, completely open web, not only its social domain.

Just lately I’ve been really interested in finding out Google’s position on semantic technologies and their view on the Semantic Web.

I’d been asked before whether Google were making any efforts in developing semantic technology, but I couldn’t really say. Then I attended the Googleworld debate, at London’s ICA, but couldn’t really find the chance to pose any technical questions.

In an attempt to satisfy my curiosity – and anyway, to investigate something I believe to be of interest that, as far as I can find, hasn’t received any real attention to date – I wrote an open letter, of sorts, forwarded to Google and Semantic Web researchers I’ve found connected to Google, simply asking:

What’s the deal?

Dear Google,

My name is Marc Hibbins, I write a blog I’m sure you wouldn’t have read, I tend to cover new web technologies, online trends, my own development issues, but I’m also extremely interested in the Semantic Web.

Over the past couple of weeks I've become increasingly interested in finding out Google's position, or even just their ideas, on the Semantic Web and semantic technologies. I recently wrote about the increase in tech blogs covering the subject over the past couple of weeks, and I've been asked a few times – what's Google up to?

I also recently attended a debate called ‘Googleworld’ – it covered, generally, the past ten years of Google and what’s to come. I wrote about it, and the chair of the meeting replied that he too, is unaware of Google’s position.

Could you shed any light on the topic? Having had a thorough look around online, I’ve found next to nothing. I’m extremely intrigued to find out if Google have any plans with semantic technology – or even if there’s any in place already that just might not be so visible?

If you’ve no plans, do you have any comments? Do you think it’ll even ever happen?

Kindest regards,

Marc Hibbins

I wasn't sure what kind of response I'd get, if any at all – or whether anybody I did get in touch with would be wary of offering any insight that might be misinterpreted as an 'official' position.

To my surprise, my first response came from executive 'Google Fellow' Jeff Dean. He works in the Systems Infrastructure Group (crawling, indexing and query systems – full bio here), but he couldn't initially offer any strong thoughts on the issue. He did say, however, that he wasn't sure Google even had any real position on the subject at all. If nothing else, at least this confirmed that my lack of findings wasn't down only to an absence of research published externally by Google – or poor investigative work on my part.

My second reply was from Stefan Decker, professor at the National University of Ireland, Galway and director of the Digital Enterprise Research Institute, an internationally recognised institute in Semantic Web and web science research. He co-presented a very interesting Google Tech Talk last year, and worked in Stanford at the same group as Sergey Brin and Larry Page.

He said, very explicitly, that:

In short: The Google management does not believe in meta-data.

Craig Silverstein is on record several times talking negatively about the topic, as is Sergey Brin. It is very clear that they are not proactive – a serious mistake from my point of view.

Interesting. I got in touch with his co-speakers, Eyal Oren and Sebastian Kruk. Both said they have contacts at Google still, but neither are aware of any public developments.

Eyal pointed me toward Sindice, a semantic search engine and index, as one that perhaps (though only speculatively – as likely as any search engine) might one day receive interest from Google, perhaps to incorporate its infrastructure for RDF and semantic data consumption. But as he said, there's absolutely no evidence of it right now.

Sebastian, on the other hand, described Google's lack of engagement specifically as:

[Their] ‘anti-semantic’ approach.

It's an increasing trend he's recognised, suggesting an almost conscious movement against any such development. He also expressed his disappointment at the very low turnout at the Tech Talk – literally only one attendee showed any real interest.

My final response was initially the most exciting – from Ramanathan V. Guha, who leads development of Google Custom Search. He said he’d be happy to comment on what’s going on, although could only offer his own personal opinion and nothing official – but I’ve not received any correspondence from him since.

All in all, at least I know I’ve not overlooked anything major. Fingers crossed I get a response back from Guha, but otherwise I guess I’m left keeping a close eye out for any other developments.

Picked up a spare ticket to the 'Googleworld' debate at the Institute of Contemporary Arts yesterday evening. Bill Thompson chaired a pleasantly 'warm' chat between New York Times columnist Randall Stross and Independent columnist Andrew Keen.

Initially I wasn't sure what to expect. Billed as a look back over the past ten years of Google and forward to whatever might come next, it wasn't as technically oriented as I'd hoped it'd be. It focused more on social and philanthropic interest – as well as being a bit of a sell for both their new books.

I would have liked the opportunity to open up discussion to Semantic technologies, perhaps to pose the question, What are Google’s intentions? – if they even have any – of introducing any Semantic Web technologies to their platform. It’s something I was recently asked about after writing my last post, but it wasn’t really the right crowd.

In other news, semantic start-up Twine goes public today. Founder Nova Spivack posted some interesting stats yesterday about user engagement on the site over the last eight months during its semi-public, semi-beta phase.

It seems their users queue up some lengthy sessions on the site – longer even, he now predicts, than on Delicious and MySpace.