This website provides details of several software solutions created in the Human Factors Lab.



CatScan is a tool for administering category construction experiments (also known as card sorting). Unlike existing software solutions, CatScan can display animated stimuli. Demonstrations are being prepared. If you are interested in this software, please contact Dr. Klippel.

Figures and a demonstration are under construction.



To serve our interest in revealing and explaining conceptualizations of any kind of information (as the result of the category construction / card sorting experiments performed with CatScan), we (specifically, Chris Weaver) implemented a visual analysis tool called KlipArt. KlipArt is realized within Improvise, an integrated environment for developing highly interactive visual analysis tools (Weaver, 2004). The first version of KlipArt was used to analyze the influence of the shape characteristics of star plot glyphs on classification (Klippel et al., 2009) and has been iteratively improved; we are currently using version 4. The KlipArt user interface displays a graph consisting of a node (yellow square) for each participant plus a node for each unique grouping of icons produced by at least one participant. Edges connect each participant to the groups that they (and potentially others) created. Bubble-like 'packs' encompass the grouping nodes of each participant. The graph supports dragging of any visual element (nodes, edges, and packs) as well as toggling of an iterative force-directed (spring-based) layout algorithm, allowing the experimenter to interactively manipulate the graph and tease apart even complex many-to-many grouping relationships.
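As an illustration of the graph structure described above (not KlipArt's actual implementation, which lives inside Improvise), the following Python sketch builds one node per participant, one node per unique grouping, and an edge for every participant-grouping pair. The participant data are hypothetical:

```python
# Minimal sketch of KlipArt's graph structure with made-up data.
# Each participant's sort is a list of groups; each group is a set of icon ids.
sorts = {
    "P1": [{"a", "b"}, {"c"}],
    "P2": [{"a", "b"}, {"c"}],
    "P3": [{"a"}, {"b", "c"}],
}

participant_nodes = set(sorts)
group_nodes = set()   # one node per *unique* grouping across all participants
edges = set()         # (participant, grouping) pairs

for participant, groups in sorts.items():
    for group in groups:
        gid = tuple(sorted(group))  # hashable id for the grouping
        group_nodes.add(gid)
        edges.add((participant, gid))

# P1 and P2 produced identical groupings, so their edges converge on the
# same grouping nodes -- exactly the many-to-many relationships that the
# force-directed layout helps to tease apart visually.
```

Because identical groupings collapse into a single node, agreement between participants is immediately visible as shared neighbors in the graph.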

Subsets of visual elements can be flexibly selected for inclusion in the graph in order to reveal individual differences in conceptualizing movement patterns. This analysis can be done from two perspectives: either from the perspective of icons that have been treated differently by different participants (for example, those with particularly high or particularly low similarity ratings), or from the perspective of individual participants or groups of participants (for example, those discussed in the previous section or, say, age/gender differences).


More figures and a demonstration are under construction.



The MatrixVisualizer (realized within Improvise by Chris Weaver) displays two half matrices. These half matrices provide different perspectives on the grouping behavior of participants (collected with CatScan). While the upper left matrix is a direct visualization of similarities between icons, the lower right matrix visualizes the results of additional similarity assessments. We discuss and explain both matrices in the following.

Each cell in the upper left matrix encodes, in color and optional text, the number of times that participants placed the icon for that row and the icon for that column into the same group. This matrix can be optionally filtered to show counts for an arbitrary subset of participants by way of interactive selection from miniature icon co-occurrence matrices along the bottom and right sides of the lower right matrix.
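The counts in the upper left matrix can be sketched in a few lines of Python. The card-sorting data below are hypothetical; the point is only that each icon pair is counted once per participant who placed the two icons in the same group:

```python
from collections import Counter
from itertools import combinations

# Hypothetical card-sorting data: per participant, groups of icon ids.
sorts = {
    "P1": [{"a", "b"}, {"c", "d"}],
    "P2": [{"a", "b", "c"}, {"d"}],
    "P3": [{"a"}, {"b", "c", "d"}],
}

def cooccurrence(sorts):
    """Count, per icon pair, how many participants grouped them together."""
    counts = Counter()
    for groups in sorts.values():
        for group in groups:
            for i, j in combinations(sorted(group), 2):
                counts[(i, j)] += 1
    return counts

counts = cooccurrence(sorts)
# "a" and "b" were grouped together by P1 and P2, so counts[("a", "b")] is 2.
```

Filtering to a subset of participants, as the interactive selection in MatrixVisualizer does, amounts to calling the same function on a restricted dictionary, e.g. `cooccurrence({p: sorts[p] for p in ["P1", "P2"]})`.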

In contrast, cells in the lower right matrix contain miniature binary difference matrices for each row-column pair of participants. Each of these matrices is accompanied by a binary similarity measure, calculated as 1 - (number of differing cells / total number of cells). This measure allows participants to be compared with each other to reveal individual differences. While the Levenshtein distance (Levenshtein, 1966), also called edit distance, allows for three operations in general (insert, delete, and replace), we only need the replace operation because all matrices have the same size (i.e., number of cells); this special case is technically the Hamming distance (e.g., Gusfield, 1997). The result of this analysis makes it possible to compare individual differences between participants.
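The similarity measure above can be sketched as follows. The sketch (with hypothetical icon ids and sorts) first derives each participant's binary co-occurrence matrix, then computes 1 minus the normalized Hamming distance between two such matrices:

```python
icons = ["a", "b", "c"]  # hypothetical icon ids, fixed order for both matrices

def cooccurrence_matrix(groups, icons):
    """Binary matrix: 1 where the participant put the two icons together."""
    together = {(i, j): 0 for i in icons for j in icons}
    for group in groups:
        for i in group:
            for j in group:
                together[(i, j)] = 1
    return [[together[(i, j)] for j in icons] for i in icons]

def similarity(m1, m2):
    """1 - (differing cells / total cells): a normalized Hamming similarity."""
    cells = [(a, b) for row1, row2 in zip(m1, m2) for a, b in zip(row1, row2)]
    differing = sum(a != b for a, b in cells)
    return 1 - differing / len(cells)

p1 = cooccurrence_matrix([{"a", "b"}, {"c"}], icons)
p2 = cooccurrence_matrix([{"a"}, {"b", "c"}], icons)
# Identical sorts yield similarity 1.0; p1 and p2 differ in 4 of 9 cells.
```

Only cell-by-cell replacements are compared, which is why the general Levenshtein machinery (insertions and deletions) is unnecessary here: both matrices always have the same dimensions.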

Screenshot 1 (XP3)

Screenshot 2 (XP4)