
Another really cool thing Lee Brimelow spoke about at Flash Camp on Thursday was his London edition of the ‘Master Collection dead drop’.

As he did in Amsterdam and Boston before that, Lee hid a copy of Adobe CS4 Master Collection (worth almost £2,000) in an anonymous location in London and left a string of complicated clues and puzzles for anyone willing to try to decipher them.

London CS4 Master Collection dead drop

Yesterday Lee posted a video walkthrough, explaining how all the pieces fitted together. It’s quite complex – here’s how it went:

1. Lee posted an article explaining the dead drop had begun, hiding an unstyled link to this wav file in the final full stop of the paragraph.

2. As you might expect, it wasn’t just noise – using an editor like Adobe Soundbooth you can see a spectral view of the file, in which Lee had written a URL and ‘LEE FLASH’ into the sound frequencies using Adobe Audition.

Spectral view in Adobe Audition

3. At that URL, entering the username ‘LEE’ and the password ‘FLASH’, you’ll find GPS co-ordinates (51.508034630224635, -0.13934612274169922), the instruction to ‘Get a coffee and look out the window’ – and three form fields to fill in.

Looking out the window - Office To Let phone number

4. The location is a Starbucks in Piccadilly; looking out the window you’d see an ‘Office to let’ sign, the phone number of which (020 7935 1653) goes into the form above.

5. That’ll give you a link to an AIF sound file – a flat tone. In spectral view, isolating and removing that tone and amplifying the very faint remainder reveals Lee reading out a link to an image file.

Bush

6. The image was of a bush; zooming in very closely with an image editor like Adobe Photoshop reveals more GPS coordinates hidden amongst the grass (51.502250, -0.137883) – the actual location of that bush in St. James’s Park.

7. Inside that bush was an envelope with another URL and login credentials, which took you to what looked like an Apache 403 error page but was in fact a SWF file.

8. When decompiled, this SWF file had an unused image in its library – a satellite image with more co-ordinates in St. James’s Park (51.50635, -0.142883) – the location of the software.

I thought this was brilliant!

The video going through all the clues on his blog is worth a watch – the best dead drop yet.


Ted Patrick has posted a recording of his Adobe eSeminar “Facebook on The Flash Platform” that he presented last Thursday.

Josh Elman, Facebook Platform Project Manager, joins him – offering a little technical history and the strategy behind the application platform and Facebook Connect.

Facebook on The Flash Platform

It’s a good introduction to building Flash applications using the Facebook API. It’s clear that building on the Facebook platform grants developers an equal opportunity to create powerful and successful social experiences.

As Ted and Josh describe, by utilising Facebook users’ social contexts and the ease with which you can distribute through the social graph, applications can generate a huge amount of traffic – and as Josh puts it, developers can profit wildly. 😉

Ted gives a simple overview of the architecture of a Facebook application and the various application states, talks about FBML, Facebook’s XML mark-up – and shows how to build a simple single-component Flex application that really demonstrates how easy Facebook have made information retrieval via the API.

The Q&A is worth listening to (it starts around 37 minutes in), where Ted and Josh discuss important development aspects outside of the actual coding – how hosting is managed, handling session keys and such.

But Ted also points to a promising looking Actionscript library designed for Facebook application developers, simply called The Facebook Actionscript API – which definitely sounds worth checking out (no pun intended).

As yet, Facebook only ‘officially’ support their PHP library, but they continue to work with developers in the community to support the other languages. Josh claims this will only improve over the next couple of months and through the year – hopefully (by the sound of it) developing similar ‘official’ library counterparts.

Ted’s also posted his Facebook on The Flash Platform sample files.

Yesterday I visited Flash Camp London ’09, an all-day, community-run, Adobe-sponsored event on all things Flash Platform.

Last September I attended Flex Camp ’08 (essentially the same, but obviously focused on Flex), so I expected much the same – cool demos, sneak previews, maybe some insight into what Adobe have in the pipeline for the future – and got pretty much exactly that.

Flash Camp '09

Serge Jespers’ opening keynote held a lot of optimism and promise for the future of the Flash Platform, quoting the huge number of downloads to date and pointing to the constant growth in market share that the Flash Player and AIR are enjoying – throwing in a couple of digs at those who doubt the figures while he was at it.

He spoke about the Open Screen Project and Adobe’s ongoing aim to achieve a level of open portability across multiple platforms – not only in the browser and onto the desktop, but to mobile devices and television platforms too. On the subject of mobile, he discussed prototype versions of Flash Player 9 (and 10?) running on a few devices he had to hand (though unfortunately no demo) and expressed Adobe’s wish to have those ready for manufacturers by the end of the year, with the intention of having them consumer-ready by the end of 2010.

Seb Lee-Delisle was first up, showing off some of the Papervision work he’d recently completed with his agency. He also had some nice demos of the augmented reality tutorials that have been going around lately. These usually use nice applications of the ARToolKit, but Seb pointed to a Flash port I hadn’t yet come across called the FLARToolKit – with which, presumably, you have full control via Actionscript. The Papervision blog has a pretty cool example of the kind of things you can achieve with it.

Next up was Michael Chase, Senior Creative Developer at AKQA. He presented his latest work, Nike Football, which involved a lot of work with Pixel Bender – the new video processing and visual manipulation platform available with Flash Player 10.

Pixel Bender is a non-destructive way to manipulate the pixel data of images and videos by developing bespoke plug-ins that function in Flash in a similar way to the visual effects and filters in Photoshop or Illustrator.

He demonstrated the Pixel Bender Toolkit, the GUI software used to create these filters. It’s purposely almost identical to every other program in the Creative Suite. Adobe are really pushing for seamless integration across the whole family of software for creators – the vocabulary, workspace, tool sets – all feel very familiar.

For the Nike site, Michael basically developed one filter for use across all video and image content. This seems straightforward enough, but it’s a brilliant advancement only made possible by using Pixel Bender. This way, there’s no need to render every piece of video with the filter applied – or subsequently re-render when the filter is inevitably tweaked (which, of course, could only be the case if permission was given to manipulate supplied video footage in the first place). It also means the video filter doesn’t have to be designed by a creator skilled in After Effects or other video editing software – as said, the Toolkit handles very much like Photoshop, which most designers are fluent in – I think Michael said you could actually use Photoshop to create filters anyway.

It also means you can change the single filter once and apply the changes to all the assets rather than having to edit every piece individually – and as he suggested, not having manipulated the source material means the un-filtered source can be reused elsewhere. And because the compiled filter is loaded and applied through Actionscript, the whole thing can be manipulated by a Flash developer at run-time.
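
For anyone curious, this is roughly what driving a Pixel Bender filter from Actionscript 3 looks like – a minimal sketch of my own rather than Michael’s actual code, with the kernel file name, parameter name and ‘videoClip’ instance all hypothetical:

```actionscript
import flash.display.Shader;
import flash.events.Event;
import flash.filters.ShaderFilter;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.utils.ByteArray;

// Load the compiled Pixel Bender kernel (.pbj) as raw bytes.
var kernelLoader:URLLoader = new URLLoader();
kernelLoader.dataFormat = URLLoaderDataFormat.BINARY;
kernelLoader.addEventListener(Event.COMPLETE, onKernelLoaded);
kernelLoader.load(new URLRequest("filters/nikeLook.pbj"));

function onKernelLoaded(event:Event):void {
    var shader:Shader = new Shader(kernelLoader.data as ByteArray);

    // Kernel parameters (their names depend on the filter's author) can be tweaked at run-time:
    // shader.data.amount.value = [0.8];

    // Apply the same filter to any display object - video, bitmap or vector.
    videoClip.filters = [new ShaderFilter(shader)];
}
```

Because the filter is just another entry in the display object’s filters array, swapping or re-tuning it never touches the source footage – exactly the non-destructive workflow described above.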

It was good to see this in use; I’d only really seen the default demo ‘swirl’ effect that a lot of others there also seemed only to have seen (I’m not sure of its real name). That ‘swirl’ is so drastic it seems to have no possible use case, so I’d not really considered Pixel Bender since. Here though its use is subtle, well executed and well placed – I’ll have to give it a go.

Mike Chambers then discussed ‘Scripting with Actionscript 3.0’. Though relatively well-covered territory for developers, he set about debunking popular misconceptions of Actionscript 3, going through the benefits of migration and giving some examples.

He started with a little background on the new Actionscript version: how the Flash Player was hitting the limits of the performance AS2 could achieve, and how Actionscript 3 was heavily driven by the need for application development – which, by that point, a lot of (what are now) RIA developers were forcing into Actionscript 2. They also had Flex in mind.

I agree with him that, ultimately, AS3 isn’t that different to AS2 – it’s just different. It’s not harder, or ‘slower’ per se. On a language level, the syntax is still simple and very much the same – it’s the APIs that might present more difficulty for those migrating. The APIs in Actionscript 2 grew organically, expanding where needed, but unfortunately did so inconsistently. It’s that realignment that’s the larger change to overcome.

Arguably, any developer with OOP experience, where consistency is promoted, wouldn’t struggle. He suggests that learning Actionscript 3 is future-proofing yourself for new languages that will be far more digestible now that Actionscript contends as a stronger language.

The Timeline is not Evil!

With that in mind, he did admit that the way Adobe present Actionscript 3 can be somewhat intimidating to those without that kind of basic knowledge. The documentation is very much aimed at developers – the code examples are in class and package structures, assuming programming experience where the previous help documentation never did.

Timeline coding is still possible, easily, but it isn’t documented anywhere near as much as class-structured code. With one or two caveats, it actually works in almost exactly the same way.
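
To illustrate – my own trivial example rather than one from the session – this, typed straight onto a timeline frame in Flash CS4, behaves much as the AS2 equivalent would, assuming a button instance named ‘playButton’ sits on the stage:

```actionscript
import flash.events.MouseEvent;

// Plain frame script - no package or class wrapper required.
playButton.addEventListener(MouseEvent.CLICK, onPlayClick);

function onPlayClick(event:MouseEvent):void {
    trace("Play was clicked");
    gotoAndPlay("intro"); // jump to a (hypothetical) frame label on this timeline
}
```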

As well as the ‘future-proofing’ mentioned, Actionscript 3 heralds a whole load of other advantages. It’s more verbose (probably where the argued ‘slower development process’ claim lies) but in return offers better debugging – the compiler can be set to be stricter and to detect errors earlier – and it’s also the language for new libraries and APIs (think Papervision, Alchemy, the many tweening engines), both from Adobe and efforts from the community.
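
The stricter checking is easy to picture – a contrived sketch of my own, not one of Mike’s examples:

```actionscript
package {
    import flash.display.Sprite;

    public class ScoreBoard extends Sprite {
        private var total:int = 0;

        public function ScoreBoard() {
            addScore(10);
            // addScore("ten");  // strict mode: implicit coercion of String to int, caught at compile time
            // addScore(10, 20); // incorrect number of arguments, also caught at compile time
        }

        private function addScore(points:int):void {
            total += points;
            trace("Total is now " + total);
        }
    }
}
```

Errors that AS2 would have let slip through to run-time (or silently swallowed) never make it out of the compiler.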

Richard Dean presented his work on the EA Spore microsite, specifically the parts built using the Inverse Kinematics and 3D features of Flash CS4 – demonstrating some nice timeline-based animation effects and the use of the new ‘Bone’ tool to build character skeletons (more about this later) – as well as some handy tips and best practices.

James Whittaker’s presentation ‘Your First Custom Chrome AIR App With Flash CS4’ delivered exactly what it said on the tin. He offered a walkthrough on how to build your first AIR application, how to design a custom chrome and the various provisions that must be made in doing so, up to publishing an AIR application file and customising the various settings in the new CS4 GUI. He also spoke about handling icons, digital signing, then creating a nice installer badge at the end. His presentation files are already up online.

Lee Brimelow had a huge amount to say about the new CS4 version of Flash – apparently trying to cram a whole day’s session into his 45-minute slot. He spoke about the new animation model in Flash, how it’s more like After Effects now – again, the overlapping of software uses in the Creative Suite – how even the timeline in the standard workspace is at the bottom of the screen, more along the lines of video editing software.

So much more of the animation process is automated now, to great effect. Motion paths are automatically constructed, even for simple tweens. The path can thereafter be treated like any other line in Flash, allowing curvature and adjustment of Bézier angles. Adding a keyframe and point in the middle of a tween no longer creates an awkward corner, but a curve to complement the original motion path.

There’s far more control. The tween itself is handled as a unique object, so moving or resizing or changing the length of an animation is much easier and also independent of the clip being tweened – there’s no more clumsy attempt to select multiple frames to modify a complete tween.

Again there was a demonstration of the native ‘3D’ in Flash Player 10. Lee couldn’t emphasise enough, though, that these are intentionally simple 3D effects for transitions and such – not for full 3D immersive environments, for which he recommends looking to Papervision or similar. When the 3D tools are in use though, it’s seamless. There’s a tool to rotate around the Z-axis as simply as there is one for the 2D axes – in doing this, Flash starts to look like 3D rendering software.

These renders are possible because of the ‘notorious’ inclusion of a constantly-running Flash Player on the stage – it’s how Adobe have addressed the differences between author-time and run-time. In having a constantly running instance of the Flash Player, there should be far fewer discrepancies – although, as they are fully aware, it is a memory hog.

Lee also pointed out the code snippets panel Flash CS4 offers – something I thought Mike Chambers would have mentioned. They’re basically small templates of handy bits of code that let anyone unfamiliar with Actionscript (or with Actionscript 3, for migrating developers and designers alike) add common bits of functionality – mouse or frame event handlers, for example.
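
They’re along these lines – my own approximation of the sort of thing the panel drops onto the timeline, not a verbatim snippet, with ‘wheel_mc’ a hypothetical instance name:

```actionscript
import flash.events.Event;

// Rotate 'wheel_mc' a little on every frame.
wheel_mc.addEventListener(Event.ENTER_FRAME, onEnterFrame);

function onEnterFrame(event:Event):void {
    wheel_mc.rotation += 2;
}
```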

Again we saw Inverse Kinematics – these are great for character animations and (I think perfect) for mocking up prototypes when realistic proofs are required but perhaps the resource isn’t available to fully code them. They’re very quickly put together but equally very effective. Simple constraints applied to skeleton joints create faux-physics that look very convincing. Have a look here if you’ve not seen these in action.

All of that is possible with zero code. And all the drag-and-drop manipulation possible at author-time can be translated for the user to play with at run-time with the tick of a box – still with no coding.

Finally Lee demonstrated the new motion editor, which also gives a huge amount of control to the author compared to what was available before. The complexity of a tween (whether an ‘x’ position or alpha value or whatever) can now be broken down into multiple channels of manipulation.

For example, previously the complexity of control over a tween was determined (and limited) by the tweening graph. This remains, but now different types of easing can be applied to the different parameters within that graph. Say a clip was moving diagonally across the stage – the horizontal movement could have an ease-out whilst the vertical direction had an elastic ease (or obviously any combination). All the tiny tweaks and nuances to animations that couldn’t be easily achieved in previous versions of Flash, or even those only achievable by code, now look entirely possible on the timeline at author-time. Lee’s tutorial is a must-see.

To close, Serge returned to discuss ‘Flex workflows with Flash CS4’. He demonstrated some good techniques for working across Flash and Flex within single projects – firstly how to use Flex metadata tags in Flash, then how to create classes using the Flex SDK and compile those as Flex Library Projects to use as SWC files within Flash (and the Flash CS4 handling of SWCs is so much better – adding files to the library rather than to the classpath list) – then likewise compiling components in Flash to use in Flex. The latter also maintains coded methods on the Flash components that can be handled within the Flex projects, easing the workflow between Flash and Flex developers no end.
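
The best known of those metadata tags is probably [Embed] – with the Flex SDK’s SWC available to a Flash CS4 project you can embed assets directly from Actionscript rather than via the library. A rough sketch (the file path is, of course, made up):

```actionscript
package {
    import flash.display.Bitmap;
    import flash.display.Sprite;

    public class EmbedExample extends Sprite {
        // Requires the Flex SDK SWC to be linked to the Flash project.
        [Embed(source="assets/logo.png")]
        private var Logo:Class;

        public function EmbedExample() {
            addChild(new Logo() as Bitmap);
        }
    }
}
```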

Similarly, to ease the workflow between developers and designers (and as I thought would get a mention), Serge ended by demonstrating Flash Catalyst (previously ‘Thermo’). He created Flex components from Flash graphics, multi-layered PSD files and Illustrator assets – all of which generated MXML code that a developer can play with later.

All in all, a great session – Chester and the guys were never going to disappoint. 😉

Various content can be found online in a number of places if you look for the ‘flashcamp_uk’ tag – there’s a whole heap of conversation on Twitter, and I expect photos on Flickr and videos on YouTube and Vimeo will surface soon enough. I’ll also put up links to presentation files and source code as and when they find themselves uploaded online.

Update (09.03.09): Serge now has a video tutorial over on his blog demonstrating how to use simple Flex Library Projects in Flash.

It seems I was a little late in finding out about the BBC’s work on integrating and exposing semantic data in their (then) new beta trial of Artist pages a little while ago.

In an interview with Silicon.com, Matthew Shorter, BBC’s interactive editor for music, speaks about establishing data associations with MusicBrainz, an open user-contributed ‘metadatabase’, to roll out across all of their encyclopaedic artist pages on the BBC site.

MusicBrainz has been around for some time now; it’s a huge database of music metadata, storing information such as artists, their releases, song details and biographies. Right now it has information on over 400,000 artists.

As early as 2001, it was described as a ‘Semantic Web service’ (think a Semantic Web web service), in its offering of a massive store of machine-processable, openly available information (mostly public domain or Creative Commons-licensed), available via open protocols – in RDF format no less.

The BBC have adopted this open standard, mapping their data schema with that published by MusicBrainz to utilise the unique identifiers they provide. This allows the BBC site to leverage the public domain content, augmenting the profile pages found there.

Take a look at one of the records from MusicBrainz, for example, John Lennon’s information at http://musicbrainz.org/artist/4d5447d7-c61c-4120-ba1b-d7f471d385b9.html.

The unique ID here is the MBID, ‘4d5447d7-c61c-4120-ba1b-d7f471d385b9‘.

The BBC, then, have a dynamically generated page at http://www.bbc.co.uk/music/artists/4d5447d7-c61c-4120-ba1b-d7f471d385b9.

Previously, writers at the BBC would have to write (and keep up to date) interesting and relevant content on every single artist page they publish – which I’m sure you can imagine is as unenviable as it is impossible. Now, MusicBrainz populates a lot of the information here – see the Releases and Credits – and also provides the retrieval of the biography from Wikipedia.

At the same time, the BBC radio playout system (reportedly giant iPods in the basement of Broadcasting House) updates the playlist information on the right of the page.

As Matthew Shorter says, automation and dynamic publishing means the pages can be created and maintained with a fraction of the manpower. Check the Foals page for a more recent artist and you’ll see news articles automatically aggregated also.

Gathering resources in this way and adding context around the artists enables machines to process the links between these data sets, establish relationships between the information and perform interoperation based on those.

In his article above, Tom Scott (the Technical Project Team Leader) also describes these URIs as ‘web scale identifiers’ and talks about the principles of Linked Data. Whilst in this use case these locators facilitate simple data retrieval, the notion of the absolute, global URI is a far larger idea, and here, could grow to be far more powerful.

The URIs facilitate the mechanisms, but stand to play a far larger role in opening and standardising information on the Web as a whole. The MusicBrainz MBID attempts to standardise the way we reference information online regarding music, and its wide reuse is, in a sense, achieving that goal. But rather than thinking of these alphanumeric strings as pointing to the locations of database records, they can also refer to the real-world concepts they identify.

Imagine all online materials that feature a particular artist universally employing their single MBID string. Every semantically linked and annotated document and resource could be unified by an intelligent agent instructed to do so, collecting and amassing the information to describe that real-world concept in its entirety. With the Semantic Web in mind, the ultimate aim is for a machine agent to understand that concept in its entirety.

In linking to MusicBrainz, the BBC have equally made their data more portable for third parties wanting to use it elsewhere. By agreeing on these unique IDs to identify resources, the pages can be automatically linked to and accessed on the strength of that consistency.

The site provides a RESTful API – just add .xml, .rdf, .json or .yaml to the end of the artist URL.
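
Sticking with the Flash theme of the earlier posts, pulling that John Lennon record into a SWF is only a few lines (a sketch – and note the usual crossdomain.xml policy restrictions apply when loading data from bbc.co.uk):

```actionscript
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

// The artist page plus '.json' returns the JSON representation.
var artistURL:String = "http://www.bbc.co.uk/music/artists/4d5447d7-c61c-4120-ba1b-d7f471d385b9.json";

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onArtistLoaded);
loader.load(new URLRequest(artistURL));

function onArtistLoaded(event:Event):void {
    // Flash Player 10 has no native JSON parser, so in practice you'd hand this
    // string to something like as3corelib's JSON class; here we just trace it.
    trace(loader.data);
}
```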

The value of online information isn’t determined by scarcity the way physical goods are in the physical world. Reuse, repopulation and increasing visibility mean, for the BBC, an enriched repository that makes information more accessible and useful to the reader (surely the initial goal); but also, with the link to MusicBrainz now established, the information is connected out into the Web, thereby enriching the source (and then, exponentially, any other links thereon). Better for the BBC, better for the third party, better for the reader – everything is enriched – so hopefully any later applications can benefit from this network effect.

Anyway, it turns out this has been going on since July last year, so perhaps the Silicon.com article was an attempt to increase visibility – we’re six months down the line now, after all.

If so, it worked – Sarah Perez wrote up an article at ReadWriteWeb, and reports over at MusicBrainz suggest things are hotting up for this year. But if not, they should be applauded for commendable transparency and their open-minded efforts (and should accept the extra drive of users to the service that comes with it!). It’s frustrating when products that are intended to ‘open up the web’ are kept closed and private for commercial purposes.

Thing is, I’m surprised I hadn’t found out about this before now. Shorter also describes this as being part of a general movement that’s going on at the BBC, “to move away from pages that are built in a variety of legacy content production systems to actually publishing data that we can use in a more dynamic way across the web.” So I went digging for more – thinking that, if this (pretty awesome) beta went online relatively quietly and the BBC aren’t particularly shouting about these new innovations (which I think they should!), perhaps there’s more elsewhere?

Well, I found two presentations over at Slideshare, the first on “BBC Programmes and Music on the Linking Open Data Cloud”, the second titled “Semweb at the BBC” – but unfortunately, without transcripts or videos, I can only really marvel at what might be in the works.

Patrick Sinclair (software engineer at the BBC – see his post on the Music beta) said a video might surface, but I’ve yet to find one.

By the looks of things though, there could be some fully recognised Semantic Web applications coming out of the BBC in the future. They look to discuss a handful of the languages and technologies that make up the Semantic Web stack, refer to constructing their own ontologies, reason through use cases for Linked Data and look to be applying the techniques of the Music pages to the Programmes sections and onward.

Look forward to it!

As part of our agency rebranding, we’ve all been tasked with finding a suitable image for the reverse side of our business cards. Apparently it should represent our image and personality, be quirky, but important to us.

Avoiding copyright infringement means I can’t use any bad-ass images of Superman, and we can’t have any people we ‘know’ – assuming that includes famous people, that scratches out using Kirk, Kara Thrace or Tim Berners-Lee – and even his Semantic Web stack is too square to use. (Put this on a t-shirt for me and I’ll be your friend forever.)

Anyway, after trawling Flickr for anything half decent under a Creative Commons license I’ve narrowed it down to five images – bit geeky, quite unimaginative, cliché retro.

TAC-2 controller

"Yeah" by Sameli

The background is as good as the joystick itself, TMNT ftw.

Atari 2600 games

"Day 323/366" by Great Beyond

Brilliant. We could use patterns instead of photos if we like – this is almost both.

Commodore CBM

"Cutting Edge Technology, 1981" by Superbomba

Could I pass this off as me?

Girl coder!

Original by Dave & Bry

Older still, maybe a bit obvious.

Then I started looking for trash – I love photography of pretty much anything abandoned or broken. Flickr has a great pool of Abandoned Swimming pools.

Mac and Toaster

"Macintosh Plus + Toaster" by Eric__I_E

They belong together!

Abandoned Monitors

"Four Toxic Computer Monitors" by Tonx

Possibly the strongest contender. Looks like they’re holding their cables ready to cross the road. Not overly techy either?

What do you think?

Suggestions/recommendations/votes welcome – need to decide before Friday!

While I’m on the subject of data portability, I thought I’d talk about DataPortability.

A loose analogy: Consider the definition of the Semantic Web – a conceptual framework combining standardised semantic applications on the web. Similarly, the DataPortability project aims to define and implement a set of recommendations of open standards to enable (entire and complete) end-to-end portability of data.

Both ‘capitalised’ terms denote distinct, considered models – composed of specific selections of the technologies that together embody their respective namesakes.

Not that DataPortability really has anything to do with the Semantic Web, other than the shared ideal of standardisation and ‘boundless’ interoperation of data and services online…

In essence the project is a volunteer-based workgroup, as transparent and ‘frictionless’ a movement as the borderless experience they promote. Their vision describes the web as a place where people can move easily between network services, reusing the data they provide, controlling their own privacy and respecting the privacy of others (read it in full here).

They wish to see an end to every problem I described in my last post – the social network fatigue, the fragmentation and walled-garden silo landscape of current web platforms – and, too, promote the combination of open source technologies and protocols (including OpenID and OAuth) for web-wide benefit, not only with regard to social networking.

The following video, quite simply but accurately, describes the already too familiar picture:

So what technologies are we talking about?

Although our Semantic friends RDF, SIOC and FOAF are present, it’s much more familiar territory for the rest. The line up includes RSS, OPML, again OAuth, OpenID and Microformats. These are existing open standards though, not technologies still in development awaiting a W3C recommendation like some of the Semantic Web projections.

There’s some other very cool stuff I’d like to go into more detail with later. Definitely APML, for example – Attention Profiling Markup Language – an XML-based format that encapsulates a summary of your interests, your informed ‘attention data’.

As well as identifying the components that make up their blueprint (the recognition of how their goals can actually be achieved – which, and I know I keep coming back to this, is one of the largest causes of doubt over the Semantic Web, where the speculative combination of some of the technologies is almost unimaginable), the DataPortability project also documents best practices for why you should participate in the initiative – specifically tailored to how they can come together for you as developers, or consumers, or service providers and so on.

DataPortability is about empowering users, aiming to grant a ‘free-flowing web’ within your control.

How are they doing this? Are they likely to succeed? They’ve already got some huge names on board – Google, Facebook, Flickr, Twitter, Digg, LinkedIn, Plaxo, Netvibes – the list goes on. This is really happening.

Find out more at dataportability.org.

Hopefully the last of the posts that I should have written last year – a while back I wrote about Facebook Connect and Google Friend Connect, where I mentioned three open source data projects – OpenID, OpenSocial and OAuth.

I only mentioned them briefly, thinking they deserved attention separate from that topic – they’ll play a key part in the progression of social media technology, but the three are part of a bigger issue: that of data portability – one perhaps more concerned with my current Semantic Web conversation.

While the three have been separately developed over the past three (or so) years, their popularity and general implementation are becoming ever more widespread. In combination, they offer very powerful potential for leveraging data and interoperability between systems, and ultimately offer standardised methods and protocols through which data ‘portability’ becomes possible.

In very (very) short:

  • OpenSocial (wiki) is a set of common APIs for web-based social network applications.
  • OpenID (wiki) is a decentralised user identification standard, allowing users to log on to many services with the same digital identity.
  • OAuth (wiki) is a protocol to simplify and standardise secure API authorisation and authentication for desktop, mobile and web applications.

There’s a ton of reading fired from each of those links.

But more than anything, I very strongly recommend watching the following presentation by Joseph Smarr of Plaxo, taken from Google’s I/O conference last year:

Google I/O 2008 – OpenSocial, OpenID, and OAuth: Oh, My!

He covers each of these open source building blocks in detail, collectively considering them as a palatable set of options for developers in creating social media platforms. He presents the compelling engagement they can offer social websites, how they fit together in a holistic way so developers aren’t constantly building from scratch and how he envisions the social web evolving.

He critiques that today’s platforms are essentially broken, highlighting the fragmentation of social media sites – that their rapid growth forced each platform to be built separately, from scratch and therefore differently, so that each has its own silo, headed in a different direction. The very nature of social network infrastructure and architecture is still very nascent.

We are at breaking point: social media sites still assume that every new user has never been on a social network site before. We’ve all experienced having to register and re-register, upload profile information, find friends and then confirm friends – it’s not scaling any more.

Not only has it gotten to the point that we as consumers are experiencing social network fatigue, but users are also, understandably, opting out of joining even newer networks, pre-empting the nauseous motions they’ll have to repeat.

It’s very easily digestible – not at all deeply technical until the Q&A section. Do watch!

Not to be outdone by Google’s efforts this week, Ask.com have also expanded their search technology to return specific ‘direct’ answers to searches, where possible, by means of semantic language processing.

Fortunately far more public than Google, Ask.com announced on their blog yesterday that they’ve been developing proprietary semantic analysis technology since October of last year in their efforts to advance the next generation of search tools.

DADS(SM) (Direct Answers from Databases), DAFS(SM) (Direct Answers from Search), and AnswerFarm(SM) technologies, which are breaking new ground in the areas of semantic, web text, and answer farm search technologies. Specifically, the increasing availability of structured data in the form of databases and XML feeds has fueled advances in our proprietary DADS technology. With DADS, we no longer rely on text-matching simple keywords, but rather we parse users’ queries and then we form database queries which return answers from the structured data in real time. Front and center. Our aspiration is to instantly deliver the correct answer no matter how you phrased your query.

The results aren’t returned as explicitly as Google’s, mainly due to the number of adverts above the page fold, but they work. Try searching for ‘Football on TV this weekend’ or ‘Movies on TV now’ and you’ll see the results in accordingly custom-formatted sections.

Unfortunately the results are still only returned in HTML, so again – the term ‘semantics’ here describes the kind of processing Ask.com are doing behind the scenes rather than marking this as their first outright foray into the Semantic Web (capital S).

This, though, is proprietary technology and presumably it’ll stay that way. So I’m unsure whether to celebrate their realisation of the importance of semantics (in search at least) or, recognising their more ‘closed source’ ethos, consider this almost against the idea of the Semantic Web – portability, sharing, transparency – as they hold these advances close to their chest in order to gain an edge over their competitors, understandably causing others to do the same in future.

Quite out of the blue and without notification of its launch, as far as I’ve been able to find, Google seem to be exposing semantic data in their global search results.

Try searching for ‘What is the capital city of England?’ or ‘Who is Bill Clinton’s wife?’ and you’ll see sourced direct answers returned at the top of your search results.

It’s hard to tell if these direct results are actually semantic expressions or just presented to appear that way – in the expected Semantic Web triple of subject-predicate-object. The listed sources definitely don’t structure their information with semantic expressions, so perhaps quite an amount of logic and natural language processing is being done on Google’s part to process non- or semi-structured data.

I’ve tried before to find out what Google have been up to concerning semantic technology, but found little. The coverage over at ReadWriteWeb reports that neither they nor their Semantic Web contacts had heard or seen anything about this before, but the community feedback suggests there have been variations of this for some time – including a three-year-old Google program called ‘Direct Answers’ – though none of the coverage of that program offers the kind of examples we’re seeing here.

Marshall Kirkpatrick points to a blog post by Matt Cutts, Google search algorithm engineer, but it seems to be a dead link now. Trawling through Google’s caches, though, you can find him quoted:

Many of the data points are being pulled in from the structured part of Wikipedia entries, which is interesting. Other sources are wide ranging, from a license plate website to Jason Calacanis’s Mahalo.

If Google are constructing semantic data from semi-structured or non-structured source data, then there’s undoubtedly some quite powerful semantic processing technology in place. I highly doubt this will be the final product of their development with such technologies, simply the first we’ve noticed – most likely why it’s slipped under most people’s notice.

The inaccuracy is also an issue. Try searching ‘Who is Bob Dylan’s wife?’ – and you’ll see Sara Lownds (his ex-wife) returned. Seeing these direct answers reminds me of True Knowledge.

Even their example questions, though, are far more complex – for example, ‘Who owns Universal Studios?’, ‘Is the Chrysler building taller than the Eiffel Tower?’, ‘What is the half life of plutonium 239?’.

More importantly, if it doesn’t know the answer, it won’t ‘guess’ – it’ll tell you it doesn’t know and ask you to deconstruct your query in order to expand its knowledge base so it can find the answer later.

As Marshall says, this is all speculation based on limited observation – and low visibility of Google’s development. Hopefully there’ll be more soon!

Whilst I’m on the subject of Google and trying to finish half-written drafts hanging over from last year, I thought I’d briefly mention the release of Analytics for Flash.

Aside from capturing all the obvious generic statistics you’d expect from a Flash tracking package – and, being fluidly compatible with the main JavaScript library, it’s capable of all the core functionality of the existing Analytics components – the metrics offered by Google Analytics for Flash can be designed to offer interesting insight into aspects of your users’ activity you might not expect at first. For example, you can collect data that helps you gauge levels of usability or (kind of) how successful a design is. Seemingly you can monitor users’ interaction behaviour during their visit too – as well as the length of the visit itself.

It’s all technically possible, with Google’s introduction of event tracking that can be fired from custom interactions – whether that be a button click or a video view or anything else. Along with that, the event can carry a payload, later received by your Analytics dashboard for your interpretation. It sounds simple – but it’s capable of being very powerful.
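
With the Analytics for Flash library, firing one of these events from Actionscript looks roughly like this – a sketch from memory of the component’s API, with the account ID and event names obviously placeholders:

```actionscript
import com.google.analytics.GATracker;

// Set up the tracker once, e.g. in the document class or on the first frame.
// "AS3" mode talks to Google directly rather than bridging to the page's ga.js.
var tracker:GATracker = new GATracker(this, "UA-XXXXXXX-X", "AS3", false);

// Track a 'virtual pageview' for a section of the Flash site...
tracker.trackPageview("/videos/highlights");

// ...or fire a custom event carrying a payload: category, action, label and an optional value.
tracker.trackEvent("VideoPlayer", "rewind", "highlights-reel", 1);
```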

Previously, tracking your Flash content happened in isolation. That is to say, you could fire a tracking event when a user accessed a page of Flash content, but from there you were blind to their progress until they navigated again.

This payload, though, could not only detail traffic to specific sections within a Flash application (although, in turn, separate events could be created for those) but could also return data specific to that user and session – for example, the total time the user has spent in a particular place, or on the site as a whole.

Depending on how complex you wish to be (and how many stats you want to trawl through later) this could offer very valuable data. But that data need not only be of value to an agency or advertiser. Counts for clicks on specific buttons aren’t anything new when you want to find out how many people click a ‘News’ link first, or if anyone notices the ‘Help’ button. This can be far more granular – to the point, as above, where the data could be used to inform decisions on, say, design or usability.

Take a standard Flash video player as a media component you’re used to seeing on a daily basis. You can easily picture the common control bar. But how many people actually use those ‘Rewind’ and ‘Fast forward’ buttons? Could the design be improved?

Admittedly with Flash video components, you’re unlikely to see those nowadays 😉 – but that (as I’ve picked this example) is the result of user testing, something this kind of tracking can’t replace – Jesse Warden has a strong sense of this in his post about Flash Analytics.

Anyway, the custom events let you send as (overly) complex an amount of data as you wish. Flash of course can be used everywhere, deployed as widgets or embedded on blogs anywhere on the Web. These Analytics, though, are part of your application itself. So you can track its usage outside of the original HTML page the previous iteration of Analytics would have restricted you to.

And it’s free! Check out the code repo.