Saturday, January 07, 2006

Web 3.0: enabling content to find you

Over on Blogspotting in a post entitled "New Stars of Indie Content?", Heather Green asks:

And knowing that now we're going to see this avalanche of video online, my question is how will amateur content bubble up? How do you find video or audio podcasts that you like?

Great question. The current answer is that you have to pick the "right" set of keywords or tags to drill down to the content you want, maybe find a recommendation engine or collaborative filtering interface, or simply hear about it by word of mouth or by monitoring various blogs.

The better answer is that you really want a capability that we won't have until Web 3.0: enabling content to find you. The precise mechanisms for doing that are not here yet and probably require additional research, but the core concept is that the computer knows enough about your interests, needs, desires, moods, priorities, your history, and your future (e.g., calendar, plans, etc.) to detect when new content appears that might be a good match for you. In this way, the content is effectively finding you, rather than you constantly searching for new content. Obviously we're not there yet, but this is the answer to the question of how you will get the content that you "like" or might like if only you knew it existed.
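To make the core concept concrete, here is a toy sketch of "content finding you": instead of the user searching, each new item is scored against a standing user profile, and items that score well enough are pushed to the user. All of the names here (UserProfile, score_item, content_finds_you, the concept weights) are hypothetical illustrations, not any real system.

```python
# Toy sketch: content "finds" the user by being scored against a
# standing profile of interests, rather than the user searching.
from dataclasses import dataclass


@dataclass
class UserProfile:
    interests: dict  # concept -> weight, e.g. {"jazz": 0.9}
    mood: str = "relaxed"


def score_item(profile, item_concepts):
    """Sum the profile weights of the concepts an item is annotated with."""
    return sum(profile.interests.get(c, 0.0) for c in item_concepts)


def content_finds_you(profile, new_items, threshold=0.5):
    """Return the titles of new items that match the profile well enough to push."""
    return [title for title, concepts in new_items
            if score_item(profile, concepts) >= threshold]


profile = UserProfile(interests={"jazz": 0.9, "indie video": 0.8, "politics": 0.1})
new_items = [
    ("Late-night jazz podcast", ["jazz", "music"]),
    ("Campaign roundup", ["politics"]),
    ("Bedroom-studio short film", ["indie video", "film"]),
]
print(content_finds_you(profile, new_items))
# ['Late-night jazz podcast', 'Bedroom-studio short film']
```

A real system would of course need far richer profiles (mood, calendar, history) and learned rather than hand-set weights; the point of the sketch is only the inversion of direction: the matching runs when content appears, not when the user searches.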

A traditional alert is the closest we've come to content finding us, but alerts have traditionally been driven by simple keyword searches that don't support elaborate expressions of your interests, desires, moods, etc. that can vary from day to day and even moment to moment.
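For contrast, the traditional alert amounts to little more than a substring match against a saved keyword list (a hypothetical, minimal rendering):

```python
# A traditional keyword alert: fire when any saved keyword appears
# in the item's text. No notion of mood, context, or weighting.
def keyword_alert(saved_keywords, item_text):
    text = item_text.lower()
    return any(kw.lower() in text for kw in saved_keywords)


print(keyword_alert(["podcast", "jazz"], "New jazz trio session"))   # True
print(keyword_alert(["podcast", "jazz"], "Late-night improv set"))   # False
```

The second item might be exactly what the user would love, but with no match on the literal keywords, a traditional alert stays silent.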

If you insist on over-simplifying, consider Web 3.0 and content finding you as "alerts on steroids". That still doesn't capture the full essence, but it is at least a start.

The big research question is how to get audio and visual content properly categorized. Some of that may be mechanical and automated, and some may come from members of the community annotating content, not with overly simplistic tag words, but with more general concepts, so that, for example, content could find you based on the type of mood that users feel it appeals to. Given the high value of A/V content, things like lyric text, scripts, scene descriptions, parenthetical script commentary, etc. could form a solid basis for describing what is in the content.
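The community-annotation idea can be sketched as well: items carry concept-level metadata (moods, themes) rather than bare tag words, and matching then works at that level. The data structures and names below are hypothetical.

```python
# Sketch of concept-level annotation: community members attach moods
# and themes to A/V items, and discovery queries match on those
# concepts rather than on literal keywords.
annotations = {
    "bedroom-studio-short.mp4": {
        "moods": ["melancholy", "hopeful"],
        "concepts": ["coming of age", "small town"],
    },
    "garage-band-demo.mp3": {
        "moods": ["energetic"],
        "concepts": ["punk revival"],
    },
}


def match_by_mood(annotations, wanted_mood):
    """Return the items whose community annotations include the wanted mood."""
    return [name for name, meta in annotations.items()
            if wanted_mood in meta["moods"]]


print(match_by_mood(annotations, "melancholy"))
# ['bedroom-studio-short.mp4']
```

Lyric text, scripts, and scene descriptions would feed the same structure: each is one more source from which mood and concept annotations could be extracted, manually or automatically.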

Unfortunately, people (and the media) are still obsessing over Web 2.0, and we can't even get people to agree to attach transcripts to podcasts, so if Web 3.0 sounds exciting, you'll have to wait, and maybe quite a while.

That said, it is still very feasible that motivated entrepreneurs could at least begin to experiment with richer annotation mechanisms for A/V content publishers and more sophisticated profiling, matching, and alerting mechanisms. Call it a precursor to Web 3.0.

-- Jack Krupansky
