Abstract / Description of output
The introduction of streaming and VoD platforms such as Netflix, Hulu, and YouTube has presented the film industry with a new way to deliver its product to its audience. This has led to a rise in online video and, in turn, made it harder for viewers to discover the right visual content for them. Traditional video search tools enable users to find visual content based on various metadata fields, such as genre or title; however, this information may not accurately represent the actual content of the movie. In this position paper we describe our idea of leveraging the richness of semantic technologies to (1) enrich movie metadata, and (2) create meaningful, semantically enriched descriptions of movie scenes using a variety of video and audio processing techniques. This information will enable us to create a Knowledge Graph (KG), which will be interlinked with other KGs available on the Web of Data, resulting in a more comprehensive representation of these movies. The KG enables users or agents to search and reason over movies' metadata and scenes, in a federated fashion, using both the implicit and explicit knowledge available in the graph and the interlinked resources.
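To make the interlinking and federated querying idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual schema or data): it assumes rdflib as the RDF library, schema.org as the modelling vocabulary, and Wikidata as one of the external KGs on the Web of Data; all entity names, the scene description, and the example namespace are illustrative.

```python
# Minimal sketch: a tiny movie KG with one scene, interlinked with an external
# KG (Wikidata), queried with a federated SPARQL SERVICE clause via rdflib.
# All names below are illustrative assumptions, not the paper's actual model.
from rdflib import Graph, Namespace, URIRef, Literal, RDF

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("http://example.org/movies/")  # hypothetical local namespace

g = Graph()
movie = EX["blade_runner"]
scene = EX["blade_runner/scene_42"]

# Enriched movie-level metadata plus a semantically described scene.
g.add((movie, RDF.type, SCHEMA.Movie))
g.add((movie, SCHEMA.name, Literal("Blade Runner")))
g.add((movie, SCHEMA.hasPart, scene))
g.add((scene, RDF.type, SCHEMA.Clip))
g.add((scene, SCHEMA.description, Literal("Rooftop dialogue in heavy rain")))

# Interlink the local resource with an external KG entity (illustrative link).
g.add((movie, SCHEMA.sameAs, URIRef("http://www.wikidata.org/entity/Q184843")))

# Federated query: scene descriptions come from the local graph, the director
# is fetched from the external endpoint via SERVICE (requires network access).
query = """
PREFIX schema: <https://schema.org/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?scene ?desc ?director WHERE {
  ?movie schema:hasPart ?scene ;
         schema:sameAs ?wd .
  ?scene schema:description ?desc .
  SERVICE <https://query.wikidata.org/sparql> {
    ?wd wdt:P57 ?director .   # director property in the external KG
  }
}
"""
for row in g.query(query):
    print(row.scene, row.desc, row.director)
```

The design point this illustrates is that scene-level knowledge stays in the local KG while entity-level facts (e.g. director, cast) are resolved at query time from interlinked sources, rather than being duplicated.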
Original language | English |
---|---|
Pages | 302-303 |
Number of pages | 2 |
DOIs | |
Publication status | Published - 2018 |
Event | 2018 IEEE 12th International Conference on Semantic Computing (ICSC) - Laguna Hills, United States Duration: 31 Jan 2018 → 2 Feb 2018 |
Conference
Conference | 2018 IEEE 12th International Conference on Semantic Computing (ICSC) |
---|---|
Country/Territory | United States |
City | Laguna Hills |
Period | 31/01/18 → 2/02/18 |