Contextual influences on the category construction of geographic-scale movement patterns (ConCat)
The project ConCat addresses the question of how movement patterns at the geographic scale (e.g., the paths of hurricanes) are understood by humans and how this understanding relates to qualitative spatio-temporal formalisms. The project is NSF-funded; more information can be found here.
EAGER: Spatial Awareness through Sapient Interfaces (SASI)
SASI is an NSF-funded project that addresses the question of how navigation devices can be improved to create spatial awareness. More information can be found here.
GeoCAM - Representing, extracting, mapping, and interpreting movement references in text
GeoCAM is an NGA/NURI-funded project that advances our knowledge of machine-based understanding of linguistically encoded movement patterns. The project has its own webpage, and further information can be found here.
Design of visual interfaces & geovisualization
How and why visual/graphic representations work, and why they are (sometimes) worth 10,000 words, is a research question that has been addressed in many disciplines. Together with several researchers worldwide, we are addressing this question by conducting behavioral experiments on different kinds of graphic representations.
We analyzed, for example, star plot glyphs (see Figure) to demonstrate the duality of perception and cognition and how they interact in the understanding of graphic representations. The specific question we address is whether the assignment of variables to rays has an influence on how star plot glyphs are interpreted by participants in a classification task. We know from research on perception, for instance, that shape is one of the most dominant perceptual characteristics of an object (e.g., Palmer, 1999). Previous research on multivariate point symbols, such as Chernoff faces, indicates that the assignment of variables to facial characteristics does indeed have an impact on their perception (Chernoff & Rizvi, 1975). For the shape of star plots, however, we only find a theoretical suggestion that teardrop-shaped star plots are the most perfect form (Peng, Ward, & Rundensteiner, 2004). This assumption contradicts other findings in perceptual experiments that revealed the importance of concave shapes in object recognition (Hoffman & Singh, 1997).
Our experiments used a grouping task to shed light on the influence of star plot shapes in classification tasks using car data (variables) such as miles per gallon, maximum speed, and emissions (see also Chambers et al., 1983). Figure 2 illustrates the effect that assigning variables to rays differently has on the shape of a star plot glyph. Both star plot glyphs in Figure 2 represent the attributes of the same car entity. The differences in geometric shape between these two star plot glyphs are produced by exchanging the variable assignment of only two rays.
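The geometric effect can be sketched computationally. The following minimal example (the variable names and normalized attribute values are made up for illustration, not the actual study stimuli) shows that swapping the ray assignment of just two variables changes the resulting polygon while the underlying data stay identical:

```python
import math

def star_plot_vertices(values, order):
    """Place each (normalized) variable value on a ray of the glyph.
    The ray angles are fixed; 'order' controls which variable is
    assigned to which ray."""
    n = len(order)
    verts = []
    for i, var in enumerate(order):
        angle = 2 * math.pi * i / n
        r = values[var]  # normalized value in [0, 1]
        verts.append((r * math.cos(angle), r * math.sin(angle)))
    return verts

def perimeter(verts):
    """Total edge length of the glyph polygon (a simple shape proxy)."""
    n = len(verts)
    return sum(math.dist(verts[i], verts[(i + 1) % n]) for i in range(n))

# One car entity with five normalized attributes (hypothetical numbers).
car = {"mpg": 0.9, "top_speed": 0.2, "emissions": 0.8,
       "weight": 0.3, "price": 0.7}

order_a = ["mpg", "top_speed", "emissions", "weight", "price"]
# Exchange the ray assignment of only two variables: the data are
# identical, but the resulting polygon differs.
order_b = ["mpg", "emissions", "top_speed", "weight", "price"]

shape_a = star_plot_vertices(car, order_a)
shape_b = star_plot_vertices(car, order_b)
```

Comparing, for example, the perimeters of the two polygons confirms that the same data yield geometrically different glyphs.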
The experiments showed that the shape of a star plot glyph significantly influences the classification of represented entities. These influences surfaced when shape characteristics in one condition became visually prominent, as in the has spikes shape (see Figure 2). In these cases, the shape characteristics dominate the data characteristics, that is, the represented car variables (the data semantics). In addition, we found that more salient shapes also led to faster grouping of star plot glyphs and made searching for similar icons easier. Based on this finding, our future research will focus on how to retain faster perceptual processing while diminishing the influence of perceptually salient shape characteristics. In follow-up experiments, we showed that the influence of salient shapes can be reduced by coloring the rays of a star plot glyph.
This research is ongoing, and we are currently conducting further experiments on mapping multivariate data using point symbols.
Cognitively Ergonomic Route Directions
Klippel and colleagues (Klippel, Tappe, Kulik, & Lee, 2005; Klippel, Tappe, & Habel, 2003) have developed a high-level cognitive framework for route directions based on several behavioral experiments they have conducted (Klippel et al., 2003; Klippel, Tenbrink, & Montello, to appear) and a growing body of research in the area of wayfinding and navigation (e.g., Lovelace, Hegarty, & Montello, 1999; Allen, 2000). These behavioral experiments were taken as the basis for a formal characterization of route knowledge at the conceptual level (i.e., how the movement patterns of a cognitive agent in a constraining network, like a system of streets, are conceptualized). This approach allows for automatically generating route directions that adhere to principles of cognitive ergonomics. Additionally, a translation between different modalities such as language and graphics becomes possible through a common representation at the conceptual level.
This approach to cognitively ergonomic route directions is based on the notion of conceptual primitives called wayfinding choremes (Klippel, 2003). The term choreme was coined in the theory of chorematic modeling by the French geographer Brunet (Brunet, 1987). It consists of the root of the Greek term for space (chor-) and the linguistic construct -eme, which is used to indicate the smallest meaningful unit. Wayfinding choremes are defined as mental conceptualizations of wayfinding and route direction elements (i.e., the conceptually smallest meaningful units of movement patterns). They function as terminals in the formal Wayfinding Choreme Route Grammar (WCRG) (Klippel et al., 2005). Routes and route parts are formalized as meaningfully combined wayfinding choremes. This approach allows for modeling cognitive conceptualizations of movement patterns as well as for adapting route directions to different spatial contexts or personal preferences, such as the familiarity of a wayfinder with their environment (Srinivas & Hirtle, 2007). With respect to the structuring qualities of T-intersections, for example, the instruction follow the road until it dead ends has the potential of reducing the amount of information by coarsening the level of granularity (Duckham & Kulik, 2003; Mark, 1986; Klippel, Hansen, Richter, & Winter, accepted). In terms of the WCRG, this is modeled as follows: several intersections where the wayfinder has to move straight, terminated by a left or right turn at a T-intersection, are combined into a new concept.
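The chunking rule just described can be illustrated with a short sketch. The chorem labels used here (straight, left_at_T, right_at_T) are hypothetical placeholders, not the formal WCRG terminals:

```python
def chunk_route(choremes):
    """Collapse a run of 'straight' choremes that ends in a turn at a
    T-intersection into one higher-order route direction element
    (illustrative sketch of the chunking idea)."""
    chunked, run = [], 0
    for wc in choremes:
        if wc == "straight":
            run += 1  # accumulate consecutive straight segments
        elif wc in ("left_at_T", "right_at_T") and run > 0:
            side = "left" if wc == "left_at_T" else "right"
            chunked.append(f"follow the road until it dead ends, then turn {side}")
            run = 0
        else:
            # Emit any pending straights unchunked, then the element itself.
            chunked.extend(["straight"] * run)
            run = 0
            chunked.append(wc)
    chunked.extend(["straight"] * run)
    return chunked
```

For instance, `chunk_route(["straight", "straight", "right_at_T", "left"])` collapses the first three elements into a single higher-order instruction while leaving the final turn untouched.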
An extension of the Open GIS Consortium's specification for location-based services, OpenLS (Mabrouk, 2005), has been developed to further adapt ergonomic principles to the requirements of information technology (Hansen, Richter, & Klippel, 2006; Klippel et al., accepted). The combination is possible because the OpenLS specification is built around a central object class, the Navigation Maneuver, which can be extended to match individual wayfinding choremes. In partnership with the Transregional Collaborative Research Center for Spatial Cognition in Bremen, Germany, we implemented a system that realizes the communication of route information using environmental features (Richter & Klippel, 2005). For example, structuring elements such as rivers are explicitly integrated into the description of routes (drive along the river).
The behavioral research that guides the aforementioned developments is rooted in the analysis of both verbal and graphic route directions. Klippel and collaborators investigated aspects of spatial chunking (Klippel et al., 2003) and the influence of complexity on given route directions (Klippel et al., to appear). In the latter, several corpora of route directions generated by participants while viewing simple maps were analyzed. The focus was placed on the conceptualization of direction changes at decision points of varying complexity. In this study, the variability of conceptualizations underlying turning actions at decision points and the level of detail given to specify actions were evaluated. A systematic approach was advanced to characterize route direction data in a way that accounts for the difference between structure (for example, the layout of an intersection) and function (the actions performed in an environment). This study indicates that verbalizations of motion trajectories within a route direction task may require different levels of detail depending on the spatial situation, to enable the degree of disambiguation needed to perform the route following task. Klippel and colleagues' (Klippel et al., to appear) characterization of the aspects that influence the specification of spatial relations in the context of route directions accounts for (a) the spatial structure of an intersection, (b) the action to be performed at an intersection, which demarcates functionally relevant parts, (c) the availability of additional features that can be used to anchor the action to be performed (landmarks), and (d) the conceptualization of this action as the result of structure, function, and the additional features available.
Earlier research has focused on how routes are segmented, or from the opposite perspective, how primitive elements of routes (wayfinding choremes) can be spatially chunked into so-called higher order route direction elements (HORDE) (Klippel et al., 2003). This approach is particularly interesting when combined with new technologies that allow for individually adapting route information for a user familiar with a specific environment (Srinivas & Hirtle, 2007). These users would need information on a different level of granularity than a user who is unfamiliar with the environment in question. To allow for changes in the level of granularity, rules need to be specified to adapt route information to the needs of the user and situation (see also Klippel et al., accepted).
The theoretical basis developed here is also useful for creating gold standards for computational linguistics applications. In a study on the favored conceptualization of landmarks (as point-like, linear-like, or area-like entities), classifiers were developed that allow for the mining of web pages (Furlan, Baldwin, & Klippel, 2007). This information is the basis for structuring movement patterns in environments constrained by spatial structures such as city street networks (Klippel, MacEachren, Mitra, Turton, Jaiswal, & Soon et al., 2008).
The starting point for every routing activity is knowledge about one's current position, not in terms of geographic coordinates but rather with respect to cognitively and perceptually salient features of the environment (Liben & Downs, 1993). At present, You-Are-Here (YAH) maps fulfill this function in buildings and at prominent locations. However, an analysis of YAH maps (Klippel, Freksa, & Winter, 2006) has shown that they often provide misleading information by violating YAH map design criteria (Levine, 1982), such as the correct alignment with the user's reference system. Poor map design may prove deadly during an emergency for people unfamiliar with an environment or for older map users (Aubrey, Li, & Dobbs, 1994), and getting lost in hospitals, for example, is a serious problem. Several facets of YAH maps, such as general placement principles and the effects of you-are-here symbols and map alignment, have been studied (e.g., Aretz & Wickens, 1992; Levine, 1982). Adding to this body of knowledge, we examined the relationship between map alignment and the presence of landmarks (Lynch, 1960; Sorrows & Hirtle, 1999; Caduff & Timpf, 2008) at decision points that require a turn. It is generally assumed that landmarks are a natural way for human users to make sense of their environments (Golledge, 1999) and are essential to the development of spatial knowledge (e.g., Montello, 1998). Additionally, correct alignment has been shown to be essential for the correct and fast interpretation of YAH maps (Aretz & Wickens, 1992). As most routes consist of more than one segment, a critical question to ask is "What is the relationship between map alignment and the presence of landmarks in a map?" or "Can the presence of landmarks at decision points reduce the negative effects of misalignment?" We conducted an experiment that addressed this question (McKenzie, Klippel, & Bishop, submitted) by combining the factors landmark and alignment (see Figure 3).
The landmark factor had three variants: (a) no landmarks at a decision point with a direction change, (b) one landmark, and (c) two landmarks. The rationale for using these three variants is grounded in Levine’s (1982) two-point theorem stating that orientation in the two landmarks per decision point condition should work best. The alignment factor also had three variants: (a) properly aligned first route segment, (b) 90 degree misalignment, and (c) 180 degree misalignment.
Our findings show that for properly aligned maps, landmarks indeed support the wayfinding process (i.e., participants followed routes learned from a map in a virtual environment faster). In the case of properly aligned maps, providing two anchor points (landmarks) in addition to one's own position allowed people to complete routes in the fastest time, confirming Levine's (1982) suggestion (see above). However, maps that are misaligned by 180 degrees produce contrasting results. The presence of landmarks, whether one landmark per decision point (which yielded the worst results) or two landmarks per decision point, was not helpful. Instead, maps showing no landmarks outperformed the two other conditions. After the experiments, participants reported that they modified their wayfinding strategy for cases in which no landmarks were present in the maps: they switched from anchoring their wayfinding activities on landmarks to counting blocks. This strategy proved to be significantly faster for misaligned maps compared to using landmarks.
Static and dynamic visualization of information and the conceptualization of geographic movement patterns
Several studies in the Human Factors Lab have addressed aspects of the dynamic, static, and hybrid presentation of route information (Klippel et al., 2003; Lee, Klippel, & Tappe, 2003) as well as the use of motion in displays for air traffic controllers (Lee & Klippel, 2005). Results of these studies suggest that motion fosters several cognitive aspects of human-computer interaction and tends to capture the user's attention strongly. It is necessary, however, to abandon the restricted perspective that animation is either good or bad. Rather, findings indicate that it is more pertinent to focus on the user group (their tasks, heuristics, and mental representations of space) and to integrate motion (or animation) where it is most beneficial, using alternative representation formats according to their particular representational characteristics and advantages.
Research on the dynamic display of information has been extended toward the conceptualization of geographic movement patterns (i.e., how spatio-temporal information is conceptualized into cognitively meaningful units). In a recent study, Klippel and colleagues (Klippel, Worboys, & Duckham, 2008) analyzed whether formal topological characterizations of spatial relations between moving, spatially extended entities provide an adequate basis for the human conceptualization of motion events. The focus in this study was on gradual changes in topological relationships caused by continuous transformations of the regions (i.e., translations). A series of experiments employing a grouping paradigm investigated the conceptualization and perception of conceptual neighborhoods, in particular their role in characterizing motion events. In addition, a custom-made tool for presenting animated icons was developed. The analysis examined whether paths through a conceptual neighborhood graph sufficiently characterize the conceptualization of the motion of two regions. The results of the experiments show that changes in topological relations, as detailed by paths through a conceptual neighborhood graph, are not sufficient to characterize the cognitive conceptualization of moving regions. The similarity ratings obtained show clear effects of perceptually and conceptually induced groupings, such as identity (which region is moving), reference (whether the larger or the smaller region is moving), and dynamics (whether both regions are moving at the same time).
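The notion of a path through a conceptual neighborhood graph can be made concrete with a small sketch. It uses a simplified subset of the RCC-8 topological relations (the full calculus also includes the inverse containment relations and equality); under continuous motion, consecutive relations must be identical or conceptual neighbors, so no relation can be skipped:

```python
# Conceptual neighborhood for a subset of RCC-8 relations, as relevant
# when one region moves across another:
# DC (disconnected), EC (externally connected), PO (partial overlap),
# TPP (tangential proper part), NTPP (non-tangential proper part).
NEIGHBORS = {
    "DC":   {"EC"},
    "EC":   {"DC", "PO"},
    "PO":   {"EC", "TPP"},
    "TPP":  {"PO", "NTPP"},
    "NTPP": {"TPP"},
}

def is_valid_motion(relations):
    """Check that a sequence of topological relations is a valid path
    through the conceptual neighborhood graph, i.e., could arise from
    continuous motion of the regions."""
    return all(b == a or b in NEIGHBORS[a]
               for a, b in zip(relations, relations[1:]))
```

For example, a small region entering a larger one passes through DC, EC, PO, TPP, NTPP in order, whereas jumping directly from DC to PO is not a valid continuous transition.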
Linguistic and non-linguistic conceptualization
The idea that a person's native language influences the way that he or she thinks has intrigued researchers from many fields (e.g., Gumperz & Levinson, 1996) and has been discussed controversially in the literature (e.g., January & Kako, 2007). However, the question of whether there is a distinction between linguistic and non-linguistic thought (even within one language) has not received the same attention (but see, e.g., Crawford, Regier, & Huttenlocher, 2000). Terms used to describe linguistic influences on the way we think are linguistic relativity or thinking for speaking (Gumperz & Levinson, 1996; Slobin, 1996). This topic becomes pertinent for the goals of: (a) providing spatial information in different modalities, such as linguistically and graphically, (b) deciding which modality is suitable for particular spatial information, and (c) understanding how a user processes spatial information. Klippel and colleagues (Klippel & Montello, 2007) investigated the conceptualization of turn directions along traveled routes and the influence language has on this conceptualization. Two experiments contrasted the way people group turns into equivalence classes when they expect to verbally label the turns, as compared to when they do not. A linguistic example is the question of what proportion of the space surrounding us we would consider as (turn) left. Particular emphasis was placed on the role of major axes, such as the perpendicular left and right axis. The questions posed were: Are they boundaries of sectors (dividing the front and back planes)? Are they central prototypes of the sectors? Or do they have two functions, boundary and prototype? Figure 4 shows the results of the experiment, detailing the way turns were grouped together in the non-linguistic and in the linguistic condition. Our findings extend Crawford et al.'s (2000) claim that prototypes for left and right (i.e., 90 degree left/right) in linguistic tasks serve as boundaries in nonlinguistic tasks by allowing the left and right axis to fulfill a double function (prototype and boundary). These findings have implications for cognitive models of learning environmental layouts and for route-instruction systems in different modalities. They also show that conceptual models may have to be adapted depending on the modality to which they relate.
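The contrast between the two hypotheses about the perpendicular axes can be sketched as two toy models (the angle coding and labels are illustrative assumptions, not the coding used in the experiments): under a prototype model, a turn is labeled by the nearest prototypical direction, whereas under a boundary model the 90 degree axes merely separate the front half from the back half.

```python
# Prototypical turn directions in degrees: 0 = straight ahead,
# positive = turning right, range -180..180 (hypothetical coding).
PROTOTYPES = {"straight": 0, "right": 90, "back": 180, "left": -90}

def circular_distance(a, b):
    """Smallest angular distance between two directions in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def nearest_prototype(angle):
    """Prototype model: label a turn by the closest prototypical direction."""
    return min(PROTOTYPES, key=lambda name: circular_distance(angle, PROTOTYPES[name]))

def boundary_model(angle):
    """Boundary model: the perpendicular +/-90 degree axes only divide
    the plane into front and back halves; within each half, only the
    side (left/right) of the turn is distinguished."""
    a = ((angle + 180) % 360) - 180  # normalize to -180..180
    side = "right" if a >= 0 else "left"
    plane = "forward" if abs(a) < 90 else "back"
    return f"{plane}-{side}"
```

Under the prototype model, a 100 degree turn is still simply "right", whereas under the boundary model the 90 degree axis already places it in the back half ("back-right"), which illustrates how the same axis can act as a prototype center in one model and as a sector boundary in the other.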
Klippel, A., & Montello, D. R. (2007). Linguistic and nonlinguistic turn direction concepts. In S. Winter, B. Kuipers, M. Duckham, & L. Kulik (Eds.), Spatial Information Theory. 9th International Conference, COSIT 2007, Melbourne, Australia, September 19-23, 2007 Proceedings (pp. 354–372). Berlin: Springer.