
OnScreenDb

A collaborative website proposal by Carl Jamilkowski

It's Wikipedia for movies and TV, on a per-scene basis: users enter metadata describing what is happening on screen at that moment.

Device and software manufacturers have a strong interest in providing a complementary experience for people watching television and movies. A data source that covers back-catalog material would let them deliver this second-screen experience on another device. Here, users can contribute metadata for timecode segments in a given piece of media: these actors appear in this scene, this type of car is shown on screen, you can read more about the obscure reference they just made at this URL. As with Reddit's early days, it could be prepopulated with auto-generated data: face-recognition algorithms run on the media could determine which actor appears in which scene, and parseable captioning would allow auto-suggested tags.

Predecessors to this include manually looking something up on a second screen through the web (e.g. IMDb) or listening to commentary tracks on a DVD: supplemental material relevant to what is happening on screen. Today, public TV databases like TheTVDB provide user-generated show and episode information for use in DVR applications, and software like MyMovies uses user-generated metadata to build a database of information about the movies in your DVD library, unlocking software features based on your contributions.

There is financial potential embedded in this product. Second-screen implementations like Xbox SmartGlass already have demand for this sort of metadata, but no ready source producing it. Content producers want favorable and accurate data about their media; it is easy to imagine them providing most of the data for new content going forward, perhaps with the added incentive of profit sharing on product placement (e.g. a character drinks a Coke and a Coke ad appears).
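The per-scene metadata described above could be modeled along these lines. This is a hypothetical sketch only: the record and field names (`SceneAnnotation`, `media_id`, millisecond timecodes, tag kinds) are assumptions for illustration, not an existing OnScreenDb schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SceneTag:
    kind: str                  # e.g. "actor", "product", "reference" (assumed categories)
    value: str                 # e.g. an actor's name or a product name
    url: Optional[str] = None  # optional "read more" link for obscure references

@dataclass
class SceneAnnotation:
    media_id: str              # identifier for the movie or episode (format assumed)
    start_ms: int              # segment start, milliseconds from media start
    end_ms: int                # segment end, milliseconds from media start
    tags: list[SceneTag] = field(default_factory=list)

# Example: one timecode segment tagged with an actor and a product placement.
scene = SceneAnnotation(
    media_id="example-movie",
    start_ms=754_000,
    end_ms=791_500,
    tags=[
        SceneTag("actor", "Some Actor"),
        SceneTag("product", "Coke"),
    ],
)
```

Keeping tags as an open list of kind/value pairs, rather than fixed columns, is what would let fan communities invent their own categories later (catchphrases, damage totals, drinking-game fodder).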
Studios could also endorse data: all the user-submitted content for a title could receive a studio stamp of approval as accurate, and badges or awards could be granted. The other side of this demand for a data source is the consumer of the raw data: software and hardware developers who want it to sell more of their products. To get Game of Thrones content onto SmartGlass, Microsoft had to reach out and partner with the show's producers (likely with a financial incentive); that kind of investment can't scale to the full magnitude of human media. A user-produced data source like this would let these companies build superior software and hardware experiences without worrying about how to produce the data.

Small and open-source developers are a good opportunity for early engagement. The XBMC project, Plex, MediaBrowser, VLC, etc. are all potential candidates for a pre-v1 product. These existing primary-screen media interaction services provide a natural segue into a second screen, given this new data source. In fact, the first editing interface would need to be built upon one of these, because a native wiki interface does not lend itself to video-based tagging. The first interface for this data will be a trendsetter for the rest; it must therefore either be bare-bones, to spark ideas for additional features, or exquisitely executed, to set the bar high for an MVP.

Return visitors to OnScreenDb will be pulled in by notifications that their entries have been changed or updated by others. There is also the natural return visit: watching another movie and pulling up the editing interface again. Users can view changes to media they have previously annotated and optionally tweak the data, annotate another movie, or see how many people have used their entries (i.e. a hit counter showing the impact of their work).

The interfaces for these interactions can differ drastically. Entering information while a movie is playing would likely happen through a tablet or phone interface; viewing on a web page would be more information-dense and allow heavier data entry (links to actors, movies, products, etc., or long-form text), with short GIFs of the described on-screen action. There could also be "stubs" of sorts: mark something from the couch to add the full information later on a desktop.

Having initially populated the database with some automatically generated data and run a very private alpha test, the first broad wave of users would be invited by existing users wanting to show off the data they have entered (and have others add to it), while more users would be gained through adoption by small developers adding a second-screen metadata feature to their beta software. True collaborative uses will come later: data will be added for many sources, and then users will start having something to say in addition to the person before them (e.g. he might have been wearing Armani gloves, but he murdered the guy in Dior sunglasses). This community is very meticulous about curating its media libraries, so there is a very natural initial user base.
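The "stub" flow described above could be as small as this sketch: a one-tap mark from the couch, resolved later at a desktop. Every name here (`Stub`, `mark_stub`, `resolve_stub`) is an illustrative assumption, not a specified feature.

```python
from dataclasses import dataclass

@dataclass
class Stub:
    media_id: str           # which movie or episode was being watched
    timecode_ms: int        # the moment the viewer marked
    note: str = ""          # optional quick note typed on the phone
    resolved: bool = False  # becomes True once full metadata is entered

pending: list[Stub] = []    # stand-in for a per-user stub queue

def mark_stub(media_id: str, timecode_ms: int, note: str = "") -> Stub:
    """One-tap action on the second-screen device while watching."""
    stub = Stub(media_id, timecode_ms, note)
    pending.append(stub)
    return stub

def resolve_stub(stub: Stub) -> None:
    """Later, on the info-dense desktop interface, the full entry is written."""
    stub.resolved = True

# Mark a moment during playback, then fill it in later.
s = mark_stub("example-movie", 412_000, "what car is that?")
resolve_stub(s)
```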
There may also be collaborative uses among specific fandoms: Star Trek, Die Hard, Law & Order. These groups will meticulously enter all the relevant data because they are fans and are already inclined toward this kind of cataloguing behavior. A past precedent is The Fast and the Furious HD DVD release, which offered an Internet lookup of the total value of the damage taking place on screen. This type of unique cataloguing is an example of what might take place in an open-ended system. Communities around specific types of media will want to highlight specific data for each scene: the number of times a catchphrase is used, references to a scene from another film, and the general fodder for drinking games.

Most growth will be driven by developer implementations of the service. There are two user endpoints: the general consumer experience (which is read-only) and the editor experience (which requires an account to edit); these are specific API implementations.

It is worth repeating that fan communities for specific (especially geek-centric) shows are very methodical about their cataloguing. They already engage in this sort of behavior, but across disparate data sources; here we can tie it back to the TV and increase engagement with the fruits of their labors. Simultaneously engaging several strong fandoms will help prevent any one of them from overtaking the service, and keeping the experience unique to whatever you are consuming prevents people from sensing which media is more prominent on the service.

This service won't duplicate other data sources (IMDb, TheTVDB, etc.). Nor will it try to replicate the second-screen user-engagement activities of other services (AMC, Viggle, etc.): quizzes, trivia questions, polls. OnScreenDb deals with factual information and doesn't try to gamify it in any way.

If users don't return, they probably feel there is too much corporate influence: they are providing data that others could profit from. It could also be that entering data while watching a show is too difficult (this can be mitigated by tagging things to enter later and by animated GIFs of what is happening on screen).

OnScreenDb would give users a chance to share with others the minutiae of a particular piece of media. These content producers already exist and produce this material without a broad audience; dispersing it over the many scenes of a production gives more legitimacy to their labor while providing entertainment value to the general population consuming it.
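The two API surfaces mentioned earlier, a read-only consumer endpoint and an editor endpoint gated by an account, might be split roughly as follows. This is a minimal sketch, assuming simple API-key auth and in-memory storage; the paths in the docstrings, key scheme, and payload shape are all invented for illustration.

```python
# In-memory stand-ins for the real datastore and account system.
ANNOTATIONS: dict[tuple[str, int], dict] = {}
EDITOR_KEYS = {"secret-editor-key"}  # assumed auth scheme, for illustration only

def get_scene(media_id: str, timecode_ms: int) -> dict:
    """Consumer endpoint, e.g. GET /media/<id>/scene?t=<ms>.
    Read-only: no account required, so devices can query freely."""
    return ANNOTATIONS.get((media_id, timecode_ms), {})

def put_scene(api_key: str, media_id: str, timecode_ms: int, data: dict) -> bool:
    """Editor endpoint, e.g. PUT /media/<id>/scene?t=<ms>.
    Writes require an editor account; unauthenticated writes are rejected."""
    if api_key not in EDITOR_KEYS:
        return False
    ANNOTATIONS[(media_id, timecode_ms)] = data
    return True

# An anonymous device reads; only an authenticated editor can write.
put_scene("bad-key", "example-movie", 0, {"actor": "Some Actor"})      # rejected
put_scene("secret-editor-key", "example-movie", 0, {"actor": "Some Actor"})
```

Separating the surfaces this way keeps the read path cheap and cacheable for second-screen devices while concentrating abuse controls on the write path.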

Additional Resources
Concept video for using a second screen with supplementary info. https://vimeo.com/53104522
