This article looks at Electronic Programme Guides (EPGs), i.e. TV metadata: snippets of descriptive information and images. It uses Gracenote as an example of a service supplying descriptive info, images, and multimedia for a show as a whole.
We have all watched a show and been given the option to see more; that option is delivered through TV tagging. For example, CNN asks viewers whether they want interactive TV. We all know roughly what tagging means. TV metadata is a tagging process, but it is bi-directional and provides IP metrics. Some value has clearly been identified, yet the process is not easy and has no common standards ...again!!! Common Standards.
The article does set a pathway toward a solution. First is automation of tagging, since tagging everything manually would require far too many geeks. This part has made some headway with speech-to-text technology, plus the use of closed captions when available. But the best way, as the article suggests, is a meshing of both, "moderated/curated by humans." Some attempts at XML standards for EPG metadata have also started to make headway... Common Standards!!!!... Many of these build on significant predecessors, mostly from the European digitalisation of TV. The first is DVB-SI, which is nearing finalization. The second is MPEG-7, which provides a broader set of tools for describing all types of multimedia content across the broadest range of networks and terminals.
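To make the "mesh both, then let humans curate" idea concrete, here is a minimal sketch (my own illustration, not from the article): it combines hypothetical speech-to-text output with closed-caption text and proposes candidate tags, on the assumption that a term confirmed by both channels appears more often and floats to the top. A human moderator would approve or reject the list.

```python
# Illustrative sketch: mesh ASR (speech-to-text) output with closed captions
# to propose candidate tags for human curation. All names are hypothetical.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "on"}

def candidate_tags(asr_text, caption_text, min_count=2):
    """Return words frequent across both sources combined; agreement between
    ASR and captions raises a word's count, so cross-confirmed terms qualify."""
    words = (asr_text + " " + caption_text).lower().split()
    counts = Counter(w.strip(".,:!?") for w in words)
    return sorted(w for w, c in counts.items()
                  if c >= min_count and w not in STOPWORDS)

asr = "breaking news on the election results in the election coverage"
cc = "Election results: live coverage of the election"
print(candidate_tags(asr, cc))  # -> ['coverage', 'election', 'results']
```

The point of the sketch is the division of labor the article argues for: the machines do the cheap, noisy first pass, and the humans only moderate a short candidate list instead of tagging from scratch.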
The article also covers the funding of projects to demonstrate how Semantic Web tech can support the convergence of TV content and the Web, with a focus on BMF 2.0 (Broadcast Metadata Exchange Format). Basically, this format allows metadata interoperability within the NoTube platform, which is pushing the standard. We will have to wait and see how this tension, a standard pushed through a single platform, will affect European TV and internet connectivity.
It will be interesting to see how this all plays out. It is amazing how televisions are connected to the web and have so many capabilities now. You can talk to some televisions (or at least if you have an Xbox, you can tell it what to do) or you can use hand gestures for navigational purposes.
Metadata on digital videos (and images) is a huge new area of opportunity, both for human indexing (professional and crowdsourced*) and for automatic machine-generated metadata. Kasie is correct about how interesting things will be!
* Here's an example of crowdsourcing: http://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/
--Dr. MacCall