1 Introduction

This paper takes a different approach to exploring new ideas for interface design by analyzing interactive semiotics and aesthetics from the perspective of contemporary artists who continually explore ways to use their art to push the boundaries of audiovisual communication. For the past two decades, the international Art of Research conferences have fostered a dialog between artists and design practices [1]. The conferences recognize that artists produce work that challenges the way we see the world around us. Artists use their work to raise questions and prompt reactions and critical thinking about design.

As these conferences point out, art can be a catalyst that expands the horizons of many different types of design practice [1]. In this paper, various types of artwork are highlighted to illustrate semantic and cognitive models for interactive, multimedia design. Interaction design creates opportunities to explore a new syntax for communication and user experience design. I show how space and time, kinesthetic design, rhythm, and cross-modal perception define new semantic structures for creating cognitive maps. I also explore design concepts for interfaces that challenge traditional approaches to interaction design, which rely on sequential, hierarchical information architectures for content organization and navigation. The examples that illustrate these concepts include screen captures from interactive art installations where I use art as a catalyst to generate discussion and explore new approaches to interface design.

Unfortunately, it is difficult to demonstrate some of the concepts in this paper. The paper lacks interactivity and sensory stimuli beyond visual information, and it is formatted in a hierarchical, two-dimensional layout that is designed to present information sequentially. As you read this paper, it is important to keep these limitations of print design in mind and envision dynamic, multisensory information spaces that can be accessed in different ways using various types of media and interaction.

2 The Poetics of Space and Time

During the Renaissance, artists used mathematical relationships in space and time to experiment with linear perspective and define three-dimensional space in a two-dimensional plane. Since then, artists have continued to explore ways to define space and time in art and design. Artists like Josef Albers and Frank Stella explored abstract representations of space by using subtle changes in color, line, and form. Jackson Pollock, Willem de Kooning, and other abstract expressionists used the physical energy from gestures and body movements to define spontaneous shapes, color combinations, and rhythms in their paintings. Conceptual art and performance art of the 1960s highlighted interaction aesthetics by demonstrating that art can evolve and change over time through audience participation.

Any of these types of art can provide inspiration for interface designs by demonstrating how the innovative use of color, line, form, and human interaction shape the visual message. The field of visual poetry is another area of art that can inspire visual concepts for electronic interfaces. Visual poetry integrates text and forms into a new visual language. Huth [2] points out that these “verbo-visual creations that focus on the textual materiality of language” integrate shape and function [para. 3].

Numerous examples of visual poetry are available online that illustrate how text, form, and space can combine to create this new verbo-visual language. Huth’s visual poem titled jHegaf [3] and Derek Beaulieu’s piece untitled (for Natalee and Jeremy) [4] weave letters and areas of white space into rhythmic, integrated wholes. K. S. Ernst and Sheila Murphy create different levels of spatial depth using layers and color gradations in Vortextique [5]. In haiku #62 by Helmes [6], negative space carves into the text and reveals shapes that redefine the meaning of the words.

The individual elements in visual poetry, including the forms, text, and negative space, can be links in an interactive information space. As in the works mentioned above, Fig. 1 illustrates the use of text, space, and form to create a visual language with multiple levels of space and time. The design on the left is the logo for my HyperGlyphs research, which is a series of experimental art installations with interface designs that deemphasize hierarchical navigation and content organization [7,8,9,10]. Visuals such as the HyperGlyphs logo, or the visual poetry examples mentioned above, could become interface designs in which each element in the design, including the areas of space, is a link to the virtual information. In the second version of the HyperGlyphs logo on the right, words have been added to the visual to show how multiple links could be included in the design. While this type of visual integration of positive and negative space has been used in art and design in the past, it has not been adopted as a format for interface designs.

Fig. 1. The designs illustrate how text, form, and space can be integrated into a visual language that becomes the interface for an interactive program. Copyright 2001 Patricia Search. All rights reserved.

3 Digital Art

Computer graphics technology enables artists to define a new type of audiovisual aesthetic and semantic space. Artists create spatial relationships that are mapped to Cartesian grids and highlight the logic of the underlying mathematics and computer programming. Digital tools enable artists to make subtle changes in color, texture, and position that transform two-dimensional lines, forms, and planes into volumetric extensions of space. Artists use algorithms and digital technology to redefine the two-dimensional space using unique shapes and visual effects.

Transparent layers, color gradations, animations, fades, three-dimensional modeling, and viewer participation through interactive technology are just a few additional ways to use this technology to redefine space and bring the element of time into the visual semiotics of the art. Hall’s work [11] is an example of the complex new visual language that can be created with digital techniques that integrate layers of text, line, color, and form. Her art creates depth and visual layers of associations that highlight individual elements within the context of a synergistic whole. Examples of her work can be seen on her website at www.debhall.com.

With the addition of interaction design, the ability to reveal information in different configurations brings the element of time into the spatial experience. In Jim Rosenberg’s Intergrams [12], words layered on top of each other form complex, interactive graphics where the individual words become recognizable as the user removes each layer to reveal the underlying text. His art, which creates texture, depth, and spatiotemporal interaction with text, is an excellent example of Huth’s concept of “textual materiality” in verbo-visual forms of expression [2, para. 3].

All of these techniques can be incorporated into interface designs to reflect the semantic connections in the program. For example, the ability to use audiovisual layers to represent the semantic structures in the interactive program creates opportunities to define intuitive information spaces that (1) highlight individual elements within the context of the whole; (2) show the perceptual and cognitive networks of associations in the information space; and (3) highlight simultaneous as well as sequential relationships. Although artists have explored these techniques in art and design, few interaction designers have taken advantage of the semiotics of digital art to enhance the interactive experience by visualizing the complex relationships and organization of information in interactive programs.

There will be more discussion of design and semiotics later in this paper. First, it is important to see how perception and cognitive models define the user experience in interaction design.

4 Cognitive Models

Research in cognition can provide insights for designing user interfaces. Most interface designs focus on the hierarchical organization of information. However, research indicates that the cognitive models we create are not limited to this form of organization. Collins and Loftus [13] deemphasized the significance of nodes and links in hierarchical networks and emphasized the role simultaneity plays in constructing semantic models. Research has shown that we use our experiences to build cognitive collages [14]. In a non-linear, multisensory information space, layers of events and time, along with affective domains based on sensory experiences and kinesthetic memory derived from physical actions, enable us to build these cognitive collages.

Research has also shown that the creation of semantic models is a dynamic process that changes over time. Hopfield [15] explored dynamic connectionist models where semantic representations change through interaction with other cognitive processes. These models evolve as an emergent process through learning [16, 17] and contribute to the building of complex schemas and syntactic processing [18]. Schemas and the narratives within them play an important role in knowledge construction, memory, and perception [19, 20], which, in turn, contribute to the building of cognitive models. These models identify locations and include information about the way the different areas are connected, such as the topology of the space [21].

In the user interface, it is possible to design visual, spatial maps that represent the topology of the space and show layers of simultaneous relationships, as well as hierarchical connections. These maps, which can incorporate references to narratives and schemas to aid memory and knowledge retention, can help the user identify landmarks and build cognitive models for navigation through the dynamic information space. Some visual examples are discussed in Sect. 6 on the Semiotics of Interface Design.
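A map like this could be backed by a very small data model. The sketch below is purely illustrative (the `TopologyMap` class, the node names, and the edge types are my assumptions, not part of any cited system): landmarks are stored as nodes with typed edges, so hierarchical links and simultaneous associations can be rendered as separate visual layers of the same map.

```python
from collections import defaultdict

class TopologyMap:
    """Hypothetical data model for a spatial map of an information space:
    nodes are landmarks, and edges are typed so hierarchical links and
    simultaneous (lateral) associations can be drawn as separate layers."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(neighbor, edge_type)]

    def link(self, a, b, edge_type):
        # Store the association in both directions so the map stays navigable.
        self.edges[a].append((b, edge_type))
        self.edges[b].append((a, edge_type))

    def layer(self, edge_type):
        # Collect one visual layer of the map: all links of a given type.
        return sorted({tuple(sorted((a, b)))
                       for a, nbrs in self.edges.items()
                       for b, t in nbrs if t == edge_type})

topo = TopologyMap()
topo.link("home", "gallery", "hierarchical")
topo.link("gallery", "soundscape", "simultaneous")
topo.link("home", "archive", "hierarchical")
```

Keeping the edge type on the link itself, rather than in separate graphs, means one structure can answer both hierarchical and simultaneous queries, which matches the idea of layering both kinds of relationship in a single visual map.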

5 Cross-Modal Perception

Perception also plays an important role in building these cognitive associations. Research has shown that we use perception to make cognitive connections, and we integrate current perceptual experiences with past experiences [22, 23].

With multisensory interaction design, it is especially important to consider the role cross-modal perception plays in cognition. Cross-modal perception adds multiple layers of sensory information (stimuli) to the interactive experience that augments the cognitive networks of associations. Cross-modal stimuli can enhance the perception of visual and audio information and impact the perception of spatial and temporal relationships [24]. Freides [25] concluded that perception that involves more than one sensory modality is more accurate than information that is represented with one sense. Brunel et al. [23] cite established research that supports these findings. Specifically, they point to research that shows that with multisensory events it is generally easier to identify [26], detect [27], categorize [28], and recognize [29] information [23, para. 3].

Moreover, we intuitively make connections between specific characteristics in different sensory modalities. For example, a curved shape may evoke a soft tactile experience. We also have “hard” and “soft” sounds in letters and words. Even when an association is not congruent or intuitive, it is possible to assign connections between sensory modalities that the user will learn and later associate with these modalities [30,31,32]. Connolly [33] suggests that relationships can be formed by integrating perceptual experiences through an associative learning process called “unitization” which “enables us to ‘chunk’ the world into multimodal units” [para. 1].

Interaction designers can use these perceptual models and define relationships between information presented in different sensory modalities. Using familiar sensory and cognitive associations in the interface design (e.g., combining curved objects with soft sounds) makes the interface more intuitive. However, it is also possible to assign relationships that the user learns through the associative learning process [33]. This flexibility allows designers to define specific relationships. For example, they can designate unique forms as links to different types of information, images, or sounds.
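One way to picture this flexibility is as a lookup that starts from congruent defaults and lets learned pairings override them. The sketch below is a hypothetical illustration; the `CrossModalMap` class and the feature names are invented for the example and are not drawn from the cited research.

```python
# Congruent defaults the user recognizes intuitively (curved -> soft sound).
CONGRUENT_DEFAULTS = {
    "curved": "soft_sound",
    "angular": "hard_sound",
}

class CrossModalMap:
    """Hypothetical sketch: designers start from congruent pairings and can
    introduce arbitrary pairings the user acquires through associative
    learning ("unitization")."""

    def __init__(self):
        self.learned = {}  # associations acquired through repeated exposure

    def assign(self, visual_feature, audio_feature):
        # An arbitrary pairing the user will learn over time.
        self.learned[visual_feature] = audio_feature

    def sound_for(self, visual_feature):
        # Learned associations override the congruent defaults.
        return self.learned.get(visual_feature,
                                CONGRUENT_DEFAULTS.get(visual_feature))

mapping = CrossModalMap()
mapping.assign("spiral", "rising_tone")  # learned, not intuitive
```

The two-tier lookup mirrors the distinction in the text: familiar associations make the interface intuitive out of the box, while assigned associations give the designer room to define application-specific links.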

Connolly [33] notes that it is best for these perceptual systems to be flexible to perform cognitive tasks. He identifies an interactive parallel processing model, such as the one proposed by Townsend and Wenger [34], as a model for associative learning [33]. The concept of parallel processing of sensory information and the ability to make dynamic, flexible connections between information is an interesting one to incorporate into interface designs. The model presents a flexible way of thinking about cross-modal perception that supports exploration in information design. If the connections between the sensory information are not fixed or permanently integrated through associative learning, the designer has the ability to assign connections that are relevant for a specific application. Alternatively, if a more flexible interactive space is required for the user to explore different relationships, this model provides that flexibility as well. The user can make cognitive assignments that are relevant to specific objectives, and through this process, create cognitive collages that include layers of meaning that are created by perceptually integrating units of information through the associative learning process. Visual, spatial representations of these parallel layers of information in the interface design can highlight simultaneous connections in this cognitive model.

6 Semiotics of Interface Design

Designers can use various techniques in digital technology to visualize the types of cognitive models just discussed. Transparent forms create spatial cues that define simultaneous relationships and show individual elements as well as the integrated whole. Layers of text can be combined or revealed in different configurations and create a new graphic language that integrates the visual semiotics and linguistic syntax.

Figure 2 is a screen capture from the HyperGlyphs research. In this interface design, the words are links, and the user can move the shapes to reveal different layers of text over time. In this example, which illustrates some of the content in this paper, the words images, sound, kinesthesia, and senses are visible through transparent layers, and as the user interacts with the design and moves the forms, those words move to the top and are accessible as links.

Fig. 2. Design elements in an interface design, such as transparent layers, can show simultaneous networks of associations and multiple dimensions of sensory and cognitive relationships in an interactive program. Copyright 2018 Patricia Search. All rights reserved.
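The layering behavior described for Fig. 2 can be modeled as a simple z-ordered stack: word links sit on stacked transparent layers, and moving a form reorders the stack so the revealed layer's links become selectable. This is a minimal sketch, not the installation's actual code; the `LayerStack` class and the layer names are invented for the example.

```python
class LayerStack:
    """Hypothetical model of stacked transparent layers in an interface.
    layers[0] is the topmost layer; each layer carries its word links."""

    def __init__(self, layers):
        self.layers = list(layers)

    def bring_to_top(self, name):
        # Moving a form in the interface reorders the stack.
        layer = next(l for l in self.layers if l["name"] == name)
        self.layers.remove(layer)
        self.layers.insert(0, layer)

    def active_links(self):
        # Only the topmost layer's words are selectable links;
        # the rest remain visible through the transparency.
        return self.layers[0]["links"]

stack = LayerStack([
    {"name": "surface", "links": ["images", "sound"]},
    {"name": "hidden", "links": ["kinesthesia", "senses"]},
])
stack.bring_to_top("hidden")  # user drags a form aside
```

Because lower layers remain in the stack rather than being discarded, the model preserves the point made above: individual elements stay visible within the context of the whole even when they are not currently selectable.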

Figure 3 is another example of an interactive, visual representation of content relationships in an information space. As the user interacts with this interface design, the individual lines of text initially appear as illustrated in the first screen capture, and then move closer together to form a new visual language that symbolizes the integration of the content or ideas (as shown in the second screen capture). Each text link remains selectable, but the design symbolizes the synthesis of ideas and content into a new visual language and network of cognitive associations or cognitive collage.

Fig. 3. As the user moves the graphics in this interactive design, the text moves closer together to create a visual language that symbolizes the integration of ideas. Copyright 2018 Patricia Search. All rights reserved.

This multi-planar environment expands with the addition of sound. Sound defines space in diverse ways depending on the timbre, duration, and rhythm of the sound. Long sounds penetrate space and create depth that intersects with the visual planes and spatial relationships. Short, rhythmic sounds define temporal intervals and add layers of audiovisual syncopation to the information space. Sound also creates an immersive experience in the physical space. Sound can appear to come from different locations in the space and surround the viewer. Sound, like the physical interaction, creates a perceptual and cognitive bridge between the physical space that defines the viewer’s real world and the virtual space.

These interactive environments weave layers of multisensory information and diverse media syntaxes into discursive spaces that challenge traditional perspectives of space and time. Layers of sensory information continually change and form new connections and patterns that define a discursive semiotic space. Deterministic logic and fixed networks of associations give way to flexible semantic structures and a symbolic space where causation is all-inclusive, rather than exclusive [35].

As researchers point out, perception is a dynamic process that creates new meanings. In this discursive semantic space, signs take on multiple meanings as new syntactical relationships evolve. Semiotics now means “the logic of signs is the logic of possibilities” [36, p. 76]. In addition, the “space” that defines the links between the audiovisual information is integral to the semiotics of the interactive design. The physical and temporal spaces that define the actions and links are not empty spaces. Space takes on meaning as well, because it is the place where connections evolve into an endless array of associations and relationships.

6.1 Temporal Dynamics

Digital technology makes it possible to add the dimension of time to the semantic relationships. The two-dimensional screen is a dynamic space that continually changes as the user navigates through the information space. The interaction design consists of sequential and simultaneous representations of time. Although navigation itself is a sequential process, the user forms perceptual and cognitive networks of associations that define simultaneous semantic relationships.

Time is also expressed through rhythm—the rhythm of words, images, sound, movement. Rhythm is an important temporal dimension in design that can reflect different types of relationships, while also unifying the diverse layers of information into a coherent whole. For example, the juxtaposition of fluid, organic shapes with mathematical grids creates a unique blend of different types of rhythm. The organic shapes create a dynamic, fluid counterpoint to the quiet rhythm of the grid-like formations. The user’s physical interaction with the program also creates a rhythm that defines the aesthetics and semantic meaning of the actions. Djajadiningrat et al. [37] use terms like “interaction choreography” [p. 673] and “semantics of motion” [p. 669] to refer to our physical interaction with products. The rhythm and semantics of motion are integral to kinesthetic design which can help the user define landmarks, create memory, and build cognitive maps. Kinesthetic design and its role in interface design is discussed in more detail in the next section.

Designers can assign meaning to specific types of rhythm to communicate different temporal perspectives and add layers of cognitive associations to the interactive experience. The screen capture in Fig. 4 includes geometric and grid-like forms, as well as fluid, unstructured shapes. The temporal rhythm changes with each type of form. Designers can use different forms like these to represent specific kinds of information in the database. For example, the geometric areas can be links to numerical data, while the fluid forms might link to less structured information such as ambient sounds or diverse perspectives on a topic. Despite these different types of forms, the design still defines an integrated whole that symbolizes the interrelationships in the information space.

Fig. 4. In interface designs, forms and space define temporal rhythms that can symbolize different types of information. Copyright 2018 Patricia Search. All rights reserved.
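The mapping described for Fig. 4 amounts to a small routing table from form categories to kinds of information. The sketch below is hypothetical; the category names and link targets are placeholders chosen for the example, not part of the HyperGlyphs implementation.

```python
# Hypothetical routing table: each category of form in the interface
# links to a different kind of information in the underlying database.
FORM_SEMANTICS = {
    "geometric": "numerical_data",
    "grid": "numerical_data",
    "fluid": "ambient_sound",
    "organic": "topic_perspectives",
}

def resolve_link(form_type):
    # Unknown form categories fall back to a general index
    # rather than producing a dead link.
    return FORM_SEMANTICS.get(form_type, "general_index")
```

Keeping the assignments in one table makes the rhythm-to-meaning convention explicit and easy to revise, which suits the point above that designers can deliberately assign meaning to specific types of rhythm.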

The resulting semiosis in interaction design is a combination of sequential and simultaneous representations of time and space. Non-linear, pluralistic concepts of temporality emphasize simultaneity and the multiple ways time can be expressed. Audiovisual rhythms and the rhythm of gestures and other physical movements contribute additional temporal dimensions to the interactive experience.

6.2 Kinesthetic Design

The interactive experience can become more tangible with the addition of kinesthetic design which includes the physical movement of the body and gestures. Research has shown that interaction with the physical world helps clarify abstract concepts and aids cognition [38, 39]. Digital technology makes it possible to push the boundaries of embodiment and gesture, and use kinesthetic design to aid cognition and memory in the interactive experience.

With kinesthetic design, there is the movement in the physical space as well as movement in the virtual environment. In large-scale, immersive computing environments, the user can walk around in a space and move to new locations to investigate the virtual world and sound from different angles and perspectives. This experience can lead to new insights and cognitive networks of associations.

Kinesthetic design also includes the gestures or physical movements the user makes to interact with the computer program. This interaction defines patterns of movement in the physical space as well as patterns of movement on the screen. The physical interaction helps define and clarify the conceptual and abstract information in the virtual world by making the abstract concepts more tangible and concrete. Gestures and bodily movements are intuitive forms of communication and learning because they are based on shared experiences [40]. LeBaron and Streeck [40] pointed out that gestures provide a bridge between tactile experiences and the abstract conceptualization of the experiences. Physical movement and interaction with the artifacts and the physical space enable us to define symbolic representations [41] and build cognitive metaphors through image schemas which we derive from our physical orientation, movement, and interaction with the physical world [39].

These physical movements also create muscle memory or implicit memory that helps us learn and remember how to perform actions [42]. In interaction design, the physical movements that build implicit memory include the body movements or gestures that are required to navigate through the program and retrieve information. These movements help us define locations, landmarks, and spatial and temporal relationships that shape our cognitive understanding of the information space [38, 39, 41].

The correlation between these physical movements and the movements in the virtual space defines a network of cognitive metaphors that expands the semantic dimensions of the interactive design. For example, when users move the mouse or other interactive hardware, they create patterns in the physical space in order to select and move objects in the virtual world. These movements create sensory, kinesthetic, and cognitive landmarks for navigation and information synthesis. The user’s movements and the movements in the virtual space can be synchronized to have the same movement patterns, directions, and rhythm. The physical and virtual movements can also be different and define a dynamic discourse and syntax that emphasizes change and contrast in the spatial and temporal representations in the semantic models.

As previously mentioned, the space that exists between gestures or body movements, as the user selects links and navigates through the environment, is also integral to the semiotics of interactive design. It is part of the “interaction choreography” [37, p. 673], and on a conceptual level, it represents the “space” between ideas where new ideas happen. As illustrated in Fig. 1, visual space is a design element that can be incorporated into the interface design to underscore this potential for new insights and creative exploration.

6.3 Metastructural Dynamics

In multimedia computing, the integration of different media results in a metasyntax and the opportunity for transmodal analysis [43]. A new audiovisual semantic structure integrates the syntax of the media into complex affective and cognitive models. The metasyntax integrates the semantic, spatial, and temporal modalities of words, images, sound, and movements into a holistic, multisensory experience. This pluralism results in a polysemiotic semantic structure that is further enhanced by online collaborations where individuals continually modify information.

The layers of audiovisual information and interactive transitions create mediascapes that transcend the limitations of the two-dimensional screen by weaving a network of sensory, cognitive, and physical associations into spatial patterns that visualize the temporal dynamics of the space. Users can continually redefine relationships and develop new perspectives based on additional information and sensory input. Relationships can be framed within the context of a multidimensional sense of space and time without the perceptual and cognitive restrictions of Western logic and linear causality. This cognitive processing leads to reflective abstraction [44]—a process that promotes the synthesis of diverse perspectives and “leads to constructive generalizations, to genuinely new knowledge, to knowledge at higher levels of development, and to knowledge about knowledge” [45, p. 12].

As users interact with the virtual environment with gestures, body movements, or interactive hardware, they create patterns in the physical space. The patterns define lines and planes that intersect in the user’s cognitive map of the relationships. These “hyperplanes” also co-exist with the visual planes in the virtual world [10, 24]. Audio stimuli define additional hyperplanes as layers of sound in the space immerse the viewer with sensory stimuli from different angles and directions.

The hyperplanes create a counterpoint of audio, visual, and rhythmic patterns that define grids of intersecting spatiotemporal planes. The mediascapes and hyperplanes contribute to the development of a sense of place in interaction design by enabling users to weave multisensory landmarks in the physical and virtual environments into semantic maps and cognitive collages.

7 Future Directions

Interaction designers can create visual representations of these spatial and temporal semantic structures in interface designs to help users build cognitive models. However, designing interfaces that visualize the multiple dimensions of this multisensory space can be a challenge with the current software tools that are available for designing the information architecture, content organization, and navigation in an interactive program.

We need software that makes it possible to easily visualize layers of sensory and cognitive relationships. We may need to move beyond two-dimensional design and create three-dimensional visuals to reflect the layers of associations in complex information spaces. There are software packages for interactive, three-dimensional presentations that could serve as models for interface design software that would facilitate the visualization of multidimensional user interfaces [46, 47].

In addition to tools for designers, it will be important to consider interface designs that the user can actually modify to reflect new insights and associations. Interface designs should not remain static; they should be flexible so the user can revise them to show the multisensory networks that emerge as perception, associative learning, and reflective abstraction contribute to new perspectives and cognitive models.

In database design, there are precedents for creating interactive semantic webs that users can modify to create user-defined information structures. Researchers are expanding the semantic structures in databases by using multimedia metadata (images, video, animations, and sound in addition to text) and allowing users to define relationships between database objects and the metadata [48,49,50]. These flexible ontologies support user exploration and dynamic revisions that reflect the experiences and perspectives of multiple users who are collaborating on a project. Users can change ontological relationships and discover new connections between ideas and the database content [51]. Srinivasan and Huang [50] created a project called Eventspace for online exhibits that allowed users to change the relationships between nodes in the database. This process leads to a meta-level in the semantic structure of the database that Srinivasan and Huang call “metaviews” [p. 194]. These metaviews evolve when users rearrange the database elements to define different perspectives and create “multiple evolving ontologies” [p. 200]. Paul [52] also noted that meta-narratives emerge from the dynamic reorganization and exploration of the database elements.
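The kind of user-modifiable semantic web described above can be sketched as a set of relationship triples plus saved snapshots, in the spirit of the metaviews idea. The class and names below are illustrative only; this is not Srinivasan and Huang's implementation, and the triples are invented for the example.

```python
class SemanticWeb:
    """Hypothetical sketch of a user-modifiable semantic web: users rewire
    relationships between database objects, and each saved arrangement
    becomes one evolving view ("metaview") of the collection."""

    def __init__(self):
        self.relations = set()   # (subject, predicate, object) triples
        self.metaviews = {}      # view name -> frozen snapshot of relations

    def relate(self, subj, pred, obj):
        self.relations.add((subj, pred, obj))

    def unrelate(self, subj, pred, obj):
        self.relations.discard((subj, pred, obj))

    def save_view(self, name):
        # A metaview is a snapshot of one user's arrangement of the web.
        self.metaviews[name] = frozenset(self.relations)

web = SemanticWeb()
web.relate("photo_12", "depicts", "harbor")
web.save_view("curator")
# A second user rearranges the same objects into a different ontology.
web.unrelate("photo_12", "depicts", "harbor")
web.relate("photo_12", "evokes", "departure")
web.save_view("visitor")
```

Storing each arrangement as an immutable snapshot lets multiple evolving ontologies coexist over the same content, which is the flexibility the database research above points toward.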

Complex semantic structures in interactive multimedia design require new approaches to interface design from the standpoint of both the design process and the final interface design itself. Both the designer and the user need to be able to explore relationships from different perspectives so they do not limit the possible associations. They should be able to quickly build flexible, dynamic cognitive collages using different sensory stimuli, schemas and narratives, and cognitive models.

There is significant potential for using multimedia in interface designs to define spatial and temporal relationships that represent sensory and cognitive associations in an interactive program. However, there are also design challenges in creating interfaces that encourage exploration and creative input from the user. In particular, we need to consider a new approach to interface design that allows users to modify visual representations of information to reflect the new insights and associations they develop through multisensory perception and associative learning. This approach to flexible, user-centered interface design can become the foundation for the next generation of interaction design.