Examples of Multimodality

Before giving examples of multimodal forms of communication, it would be beneficial to clarify the difference between multimodality and multimedia.  Multimodality is the combination of modes of communication (text, visuals, animation, sound, etc.) used to deliver a message to a particular audience.  The combination of modes does not need to be the final deliverable to an audience, so long as multiple modes are used to reach the final message.  Multimedia, however, is a term used to describe the final deliverable.  Claire Lauer, in her piece entitled Contending with Terms: “Multimodal” and “Multimedia” in the Academic and Public Spheres, gives the CD-ROM as an example.  The CD-ROM is the final deliverable, which relies on a combination of media to deliver its message.  However, something that is multimodal wouldn’t be enclosed in a CD-ROM, because the CD-ROM is merely one medium.  For a CD-ROM ever to be considered multimodal, it would have to be used in combination with other modes and media, such as a piece requiring the use of the CD-ROM alongside text projected onto a wall with audio playing in the background, all intended to communicate ideas to an audience.

YouTube (What is New Media?)

YouTube is the epitome of a multimodal form of communicating ideas across wide audiences.  The video below perfectly illustrates Claire Lauer’s definition of multimodality: it combines multiple modes (text, visuals, visual effects, audio, animation, etc.) operated by a single user (the narrator, Dan Brown) through one interface (youtube.com and its video player).  It shows that, although monomodal communication does not require the interdependence of media to deliver a message, multimodal communication does.  For example, a physical copy of a book (the mode) requires only one medium (the text) to deliver its information to the audience (the reader).  However, the same book read on a Kindle (another mode) requires multiple media (the text, the animation of turning pages, and other visual cues) for the full multimodal effect to be delivered to the reader.  As if the video itself weren’t clear enough to illustrate such multimodal communication, the content within gives a good definition of new media and various multimodal forms of new media.

Twitter (Twitter Art Projects)

Another good way of illustrating multimodality is by looking at Twitter.  Twitter uses various modes to communicate information to its audience.  For example, not only does it use text, but it allows images to be attached to tweets, delivered to audiences through various modes: mobile applications (TweetDeck, TwitBird Pro, Twitterrific), online communication (twitter.com), and desktop applications (TweetDeck, Twhirl).  While Twitter allows these various modes of communicating ideas (through images and text), it doesn’t require that both be used.  The user has a choice in what information to relay.

Twitter has been used by the masses as a tool to convey a wide range of ideas.  To give a specific example of Twitter as a multimodal tool, the following link outlines a multitude of Twitter art projects.  Whether it be the picture mosaic created from tweets matched to Flickr photos, or the Twitter fountain that generates continually changing artwork from tweets and images, Twitter’s multimodal applications have allowed for the further integration of modes and media.

Facebook (The Facebook Project)

As anyone reading this blog already knows, Facebook is one of the biggest forms of social media in existence today.  It’s a perfect example of multimodality through its combination of text, image, video, animation, etc.  It is also available through online communication directly on facebook.com and through Facebook mobile applications that allow for the same interactivity offered through the site.  Again, this is not multimedia, as the combination of modes and media is not required to be used simultaneously to deliver a message.  The final deliverable is never the same, whereas something considered multimedia, such as a DVD or CD-ROM, will always have the same restrictions no matter the amount of information within that medium.

The Facebook Project is a great example of Facebook being used as a multimodal tool, where a combination of media such as plain text, imagery, wikis, blogs, etc. is used to portray Facebook as the hugely impactful social medium that it is.

Chicago is a Multimodal Town

In the first five chapters of his book Multimodality, Gunther Kress expresses the basic principles of semiotics that should be considered across all societies.  These three basic principles are:

  1. forms + meanings = signs
  2. the forms and meanings that make up signs are based upon the perception of the creator of those signs
  3. such signs are devised based on the sign-maker’s cultural perceptions (unfortunately, though, not always geared towards the cultures of his/her target audience).

Kress expresses how language was once treated as the sole source for communicating ideas across various audiences, but language alone can never be an accurate representation of communication, as no language translates perfectly into another.  This is why social semiotics through multimodal discourse (that is, through various modes of communication involving sounds, text, visuals, interactions, and social and cultural factors, or in Kress’ terms, “life-worlds” comprised of gender, generation, education level, profession, ethnicity, etc.) is extremely significant as a form of framing communication between various audiences.  As we know, and as Kress has adamantly expressed, without framing, there is no meaning.

In using this approach of social semiotics as the ideal method of modern-day communication, the production of social semiotics in a multimodal world requires one to implement processes of design that include the interests of the communicator, the designer, and the audience of that design.  This production is not only the creation of a particular message; it can also be the remediation of a message that already exists, which leads to the controversy of digital authoring versus canonical authoring.  When I say remediation leads to controversy, I simply mean (and Kress agrees) that different generations will feel differently about the credibility of sources depending on the medium through which they are gathered.  When I say there is controversy over digital authoring versus canonical authoring, I simply mean that information found on various new media such as YouTube, Facebook, or wikis (which can be seen as unorthodox by certain demographics) may not be seen as credible by older generations (according to Kress, 25 years and older), who are more comfortable getting their information from print and directly from the source: newspapers, books, and firsthand authorities (encyclopedia vs. wikipedia.com, doctor vs. webmd.com, anchorman vs. chicagotribune.com).  Especially with remediation, where words, images, and sounds are taken from various locations and sources, older generations may feel that the information is less credible due to the misconception that such remediation is plagiarism.

No matter how one feels about the credibility of this form of communication, one thing is certain: it is a contemporary approach to communication made possible by the preponderance of new media.  Given this contemporary phenomenon, Kress points out that “multimedia” does not work as a description of this new approach, as that term described the way communication was previously framed: through a discrete, black-and-white mindset where “this form belongs to this medium” and “that form belongs to that.”  Using multimodality as the term for this form of communication, however, implies the use of various modes as opposed to various media.  Where media can be seen as black and white, modes of communication vary vastly between different social and cultural “life-worlds” and, in turn, fit better within the field of new media.

To give meaning to an otherwise abstract term: Chicago is a multimodal town.  Everywhere you turn your head, you can see (and hear) the ambiance of various modes of communication.  The CTA (in particular the newly renovated Fullerton CTA stop) provides visual and aural cues to let you know when and where your train is and how long it will take to arrive.  You read “15 minutes until…..” on the screen, you hear “This is Fullerton. This is a Red Line train to Howard.  Transfer to Brown and Purple Line trains….” from the intercom, you feel the wind pass you by as the train approaches, and you hear the sound of the train before it actually arrives.

Or, on the corner of Jackson and Wabash in the South Loop, a sign just underneath more CTA trains, where the pigeon population seems to have increased drastically over the years, informs the reader not to feed the pigeons (Figure 1).  The creator of the sign took cultural resources into consideration, knowing that anyone in the area knows what a pigeon is and looks like, and knew that a combination of various modes was not necessary in the sign itself, as the visual cue that would most likely appear on such a sign, the pigeon itself, is, chances are, already standing underneath or in very close proximity to the sign.  In this case, the combination of the tangible pigeon and the intangible words on the sign makes the message multimodal.

Another example of Chicago as a multimodal town is U.S. Cellular Field, home of the Chicago White Sox.  The field is, in my opinion, one of the most multimodal in baseball.  Advertisements combining words and images can be found all over the park.  At various points throughout the game, visual and aural cues flash across the jumbotrons as a form of entertainment (e.g., which pizza will cross the finish line first: cheese, sausage, or pepperoni?).  Or, even simpler, on Opening Day, each player’s name and jersey number flashes across the jumbotron in combination with the oration of that name and number, “Number 14, First-baseman, Pauuuuuullll Konnnnerrrkkkkoooo,” while the player runs out wearing a jersey whose text matches not only what was verbally announced but also what was shown as a visual cue on the jumbotron (Figure 2).

One last example is the House of Blues, a venue used for performances of various genres.  The venue has various religious symbols built into the infrastructure of the building, because Chicago is very multicultural and home to many ethnicities and religious organizations (Figure 3).  This is a great example of multimodality: the cultural and religious modes, plus the aural cues of whichever artist is performing, plus the visuals of whatever is on stage, in combination with all the people who attend the shows, communicate the idea that Chicago is multimodal, multicultural, and tolerant of everyone regardless of faith.  Chicago is, indeed, a multimodal town.

Figure 1: Do Not Feed Pigeons
Figure 2: #14 Paul Konerko on Jumbotron
Figure 3: Religious Unity at House of Blues

Note:  All images were taken by me.  Feel free to disseminate according to your liking.