We present Scalable Insets, a technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Exploring many, sparsely distributed patterns in multiscale visual spaces is challenging: visual representations change across zoom levels, context and navigational cues are lost upon zooming, and navigation is time-consuming. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the annotated patterns. Insets support users in searching, comparing, and contextualizing patterns while reducing the amount of navigation needed. They are dynamically placed either within the viewport or along its boundary to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type and are visually represented as aggregated insets to enable scalable exploration within a single viewport. A controlled user study with 18 participants found improved performance in visual search (up to 45% faster) and comparison of pattern types (up to 32 percentage points more accurate) compared to a baseline technique. A second study with 6 experts in the field of genomics showed that Scalable Insets are easy to learn and effective in a biological data exploration scenario.
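As a rough illustration of the idea described above (a hypothetical sketch, not the authors' implementation): a pattern whose on-screen size falls below a legibility threshold at the current zoom level is promoted to a magnified inset, and nearby insets are clustered by location to keep the viewport uncluttered. The threshold, clustering radius, and greedy clustering strategy here are assumptions for illustration only.

```python
# Hypothetical sketch of the core Scalable Insets idea (not the paper's code):
# patterns too small to identify at the current zoom level get an inset,
# and nearby insets are merged into location clusters.

from dataclasses import dataclass

@dataclass
class Pattern:
    x: float      # world-space position
    y: float
    size: float   # world-space extent of the annotated pattern

def needs_inset(p: Pattern, zoom: float, min_pixels: float = 16.0) -> bool:
    """A pattern rendered smaller than min_pixels at this zoom gets an inset.
    The 16 px legibility threshold is an assumption, not from the paper."""
    return p.size * zoom < min_pixels

def cluster_by_location(patterns, zoom, radius_pixels=48.0):
    """Greedy single-pass clustering in screen space (an assumed strategy;
    the paper's actual clustering may differ)."""
    clusters = []  # list of (center_x, center_y, members)
    for p in patterns:
        sx, sy = p.x * zoom, p.y * zoom
        for cx, cy, members in clusters:
            if abs(sx - cx) <= radius_pixels and abs(sy - cy) <= radius_pixels:
                members.append(p)
                break
        else:
            clusters.append((sx, sy, [p]))
    return [members for _, _, members in clusters]

patterns = [Pattern(10, 10, 2), Pattern(11, 11, 3), Pattern(500, 500, 2)]
zoom = 1.0
small = [p for p in patterns if needs_inset(p, zoom)]
groups = cluster_by_location(small, zoom)
# At zoom 1.0 all three patterns fall below the threshold and form
# two location clusters: two nearby patterns merge, the distant one stands alone.
```

Zooming in raises the effective on-screen size, so patterns drop out of the inset set and are shown in place instead.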
| Publisher | bioRxiv, at Cold Spring Harbor Laboratory |
| Number of pages | 28 |
| Publication status | Published - 2018 |