[BOOK REPORT] Garrett, The Elements of User Experience: User-Centered Design for the Web

This blog post is a report on Jesse James Garrett’s The Elements of User Experience: User-Centered Design for the Web, a book about what it takes to design successful web pages.  Garrett organizes user-centered design into five planes, and proper attention to each is essential to an engaging, efficient user experience.  These five planes are:

  1. Strategy Plane
  2. Scope Plane
  3. Structure Plane
  4. Skeleton Plane
  5. Surface Plane

Garrett stresses that when creating any website, it is important to define the users’ needs before production begins.  Once these needs are understood, it is easy to formulate the site objectives.  It’s important to ask, “What purpose do I want this site to serve?” and “What purpose do my users want this site to serve?”  Knowing the business goals of a site before designing lets you design toward those goals.  Garrett states that creating a brand identity is an essential first step: whether you communicate that identity through conscious choices or it comes across by accident, it will ultimately leave a mark on your audience.  An easy way to tell whether these objectives have been met is through success metrics, indicators that track whether goals are being reached after the site has launched.

It is also important to focus on user segmentation at this plane.  It’s vital to know your users’ demographics (e.g., age, education level, gender, income) as well as their psychographics (the attitudes and perceptions users hold toward certain subjects).  Knowledge of your user segments will allow you to design for their wants while steering clear of anything they find undesirable.

It is also at this level that usability and user research come into play.  Surveys, interviews, focus groups, user tests, field studies, contextual inquiries (methods for understanding your users in their everyday lives), task analysis, and card sorting (giving users cards and having them sort them into whatever groups feel most natural) are all crucial to knowing your users, gathering user feedback, and ultimately to the success of your website.

Once you’re aware of what you want from your site and what your users want from it, defining the scope of your project is easy.  List everything you and your users want the site to include, and everything you don’t want it to include.  Garrett states that once you have this understanding, you should focus on the functionality and content of your site and ask, “What am I going to make?”  Know your functional requirements and specifications.  You can discover these requirements by asking users what they want and then finding out, through testing, what they really want (sometimes what people say they want isn’t what they’re actually looking for).  Another kind of requirement is the one a user doesn’t know they want; brainstorming usually surfaces these after you’ve figured out what users say they want and what they really want.

Garrett says that when writing functional specifications, it’s important to stay positive: describe what the system will do to prevent a bad thing, rather than describing a bad thing the system shouldn’t do.  It’s also important to be as specific as possible in order to avoid confusion from alternate interpretations.  This specificity also prevents the subjective language that causes ambiguity.

Another major component of the scope plane is your ability to prioritize the site’s requirements.  Garrett says that gathering ideas for possible requirements isn’t hard; what’s difficult is sorting out which requirements the scope of your project should actually include.  Some objectives require multiple requirements, whereas others can be fulfilled by one.  Some requirements are technically impossible to implement, so alternatives may need to be pursued.  Regardless, every requirement you prioritize should be necessary to a functional website.  If time constraints are an issue, certain features can always be implemented later.  That is why ranking requirements from most to least important matters: if you are forced to launch the website before everything is implemented, the most important work will already be done.

Once requirements have been gathered and prioritized, a conceptual structure for the site can be made.  Interaction design (creating a structured experience for the user) attempts to work out how the design will accommodate possible user behaviors.  Garrett says that the approach that works best for a computer is almost never the approach that works best for a user, so the ideal is to know what will work best for your users.

Garrett stresses the importance of knowing conceptual models, or users’ impressions of how interactive components behave (e.g., a container representing a website’s shopping cart), because this knowledge allows for consistency in design decisions.  It’s important that a particular element is always treated the same way.  For example, a “checkout” metaphor for making a purchase online should always behave like checkout, not serve as just a generic term for a purchase.  In other words, when the user clicks “checkout,” treat the term as if the user were physically standing at a store register, checking out.  Another example is the “shopping cart”: treat the metaphor as a real shopping cart, and let the user “add” items to the cart and “remove” them in the same way a customer would in a physical store.

Interaction design also requires that you take user error into consideration (mistakes made by the user, not the programmer).  Knowing that people are, by nature, prone to mistakes, it’s best to make such mistakes nearly impossible (or at least very difficult).  Garrett acknowledges that no matter how hard you try to prevent human error, it’s still inevitable, so you must offer recovery and troubleshooting so the user knows how to fix the problem.

The last major component of the Structure Plane is a website’s information architecture.  Garrett describes information architecture as the organizational and navigational schemes that allow users to move through a site effectively and without confusion.  There are two ways of approaching these schemes.  In a top-down approach, you start from the site’s objectives and user needs, create broad categories, and then fit content and functionality into them.  In a bottom-up approach, you start from the existing content and functional requirements and group them into progressively higher-level categories.  Neither way is inherently better than the other; depending on the project, you might prefer one over the other.

The information architecture also arranges nodes (information in pieces or groups, no matter how big or small) in four various structures:

  • hierarchical structure
  • matrix structure
  • organic structure
  • sequential structure

In a hierarchical structure, nodes have parent/child relationships with other nodes.  Parent nodes cover broader information, while their children offer more specific information.  While not every node has children, every node (except the root) has a parent.  This is the most commonly used structure on the web.

A matrix structure arranges the same nodes along two or more criteria, which enables users with different needs to navigate through the same content in different ways.  For example, a car manufacturer’s website might let a user search the inventory by color, price, year, model, and location.  The user can search along more than one criterion at a time, but is ultimately searching through the same content.
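
A loose sketch of my own (not from the book): a matrix structure can be thought of as one set of nodes reachable along several different criteria.  The inventory data and field names below are invented for illustration.

```python
# One pool of content nodes; every search path leads into the same pool.
inventory = [
    {"model": "Civic",  "color": "red",  "year": 2003},
    {"model": "Civic",  "color": "blue", "year": 2002},
    {"model": "Accord", "color": "red",  "year": 2002},
]

def search(cars, **criteria):
    """Return every car matching all of the given criteria."""
    return [car for car in cars
            if all(car[key] == value for key, value in criteria.items())]

# Different users, different criteria, same underlying content:
print(len(search(inventory, color="red")))             # 2 red cars
print(len(search(inventory, color="red", year=2003)))  # only 1 is a 2003
```

The point of the sketch is that the structure doesn’t privilege one criterion over another; a user who thinks in colors and a user who thinks in model years both arrive at the same nodes.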

A structure that does not follow a consistent pattern when arranging its nodes is the organic structure.  Garrett explains that these structures are ideal for exploring nodes whose relationship is either unclear or evolving.  However, these structures aren’t the best for sites where users rely on being able to find their way back to the same information.

The last of these four structures is the sequential structure.  These structures organize information in, as the name suggests, sequences.  A tutorial site, for example, will use a sequential structure: perhaps you can’t proceed to step two until you complete step one, and once you do, a button lets you click through to the next step.  Another example of content arranged this way is a video: you watch it from beginning to end, with earlier information coming first and later information coming last.
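
As a rough sketch of my own (not from the book), the hierarchical and sequential structures can be modeled with plain data structures.  The page names are invented for illustration.

```python
# Hierarchical structure: every node except the root has exactly one parent.
site_map = {
    "Home": {
        "Products": {"Cars": {}, "Trucks": {}},
        "Support": {"FAQ": {}, "Contact": {}},
    }
}

def find_path(tree, target, path=()):
    """Return the parent chain leading to a node, or None if absent."""
    for name, children in tree.items():
        here = path + (name,)
        if name == target:
            return here
        found = find_path(children, target, here)
        if found:
            return found
    return None

# Sequential structure: a fixed order, like tutorial steps.
tutorial = ["Step 1", "Step 2", "Step 3"]

print(find_path(site_map, "FAQ"))  # ('Home', 'Support', 'FAQ')
print(tutorial[0])                 # the first step always comes first
```

Notice how the hierarchy gives every page a natural “breadcrumb” (its parent chain), while the sequence admits only one way through.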

This plane further refines the conceptual structure: specific criteria for the interface, navigation, and information design move from intangible abstraction to a tangible, concrete form.  When defining the skeleton, Garrett says it’s important to make sure you know how your site will work.  The structure should be fully mapped out according to your needs and your users’ needs before the skeleton is implemented.  The skeleton is defined through interface design (buttons, fields, drop-down menus, etc.), navigation design (the part of the interface that gets the user from node to node), and information design (the way information is presented to the user).

Garrett states that the most successful interfaces are those that let the user immediately notice the most important things, while unimportant information goes unnoticed (most likely because that superfluous information simply isn’t included).  When designing an interface, it is important to know what information should be visible and what should not.  HTML and Flash were, at the time of the book’s writing, the two technologies used to create interfaces on the web, and both have their limitations, so it’s important to design interfaces accordingly.  HTML and Flash allow for the following interface elements:

  • checkboxes (users select as many boxes as they deem necessary, independent of one another)
  • radio buttons (users select only one option among a set of mutually exclusive selections)
  • text fields (users are given the option to enter whatever text they find necessary)
  • dropdown lists (users select only one option among a set of mutually exclusive selections, like radio buttons, but in a compact space that allows more information to be presented efficiently)
  • list boxes (users select as many options as they deem necessary, independent of one another, like checkboxes, but scrolling allows more information to be presented in a compact space)
  • action buttons (users click a button to trigger an action or be taken to another location: other interface elements, pages, etc.)
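
As a hedged sketch of my own (the tag names are standard HTML, but the form fields themselves are invented), the elements above map directly onto HTML form controls:

```python
# A plain HTML fragment held in a Python string, so the mapping can be
# checked mechanically.  Radio buttons share a `name`, which is how the
# browser enforces their mutual exclusivity.
form = """
<form action="/checkout" method="post">
  <input type="checkbox" name="extras" value="giftwrap"> Gift wrap
  <input type="radio" name="shipping" value="ground"> Ground
  <input type="radio" name="shipping" value="air"> Air
  <input type="text" name="address">
  <select name="state"><option>IL</option><option>NY</option></select>
  <select name="colors" multiple size="3">
    <option>Red</option><option>Blue</option><option>Green</option>
  </select>
  <input type="submit" value="Checkout">
</form>
"""
# Both radio buttons belong to one mutually exclusive group:
print(form.count('name="shipping"'))  # 2
```

The `<select multiple>` element is the list box from the list above: the same choose-many behavior as checkboxes, compressed into a scrollable space.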

When designing the navigation, Garrett says it’s important to accomplish three goals simultaneously.  The first is allowing the user to get from one point to another.  The second is making sure the navigation design shows the user the relationship between a container and the elements that exist within it.  Lastly, the navigation should relate to the current page, or at least let the user know which page they are on.  For example, if the user is on a page about cars, and the link that leads to this page is labeled “cars,” then this link should look different from the rest so the user knows that “cars” has been selected.
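
That third goal can be sketched in a few lines (my own example, not from the book; the page names are invented):

```python
# Render a navigation bar that visually distinguishes the current page,
# so the user always knows where they are.
def render_nav(pages, current):
    items = []
    for page in pages:
        if page == current:
            # The current page is not a link; it is styled differently.
            items.append(f'<span class="current">{page}</span>')
        else:
            items.append(f'<a href="/{page.lower()}">{page}</a>')
    return " | ".join(items)

print(render_nav(["Home", "Cars", "Trucks"], "Cars"))
# <a href="/home">Home</a> | <span class="current">Cars</span> | <a href="/trucks">Trucks</a>
```

Making the current item unclickable, not just recolored, is a common convention: it signals location and prevents a pointless self-link at the same time.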

Different sites offer different navigation systems (systems that allow users to navigate sites through various circumstances).  The five most common navigation systems are as follows:

  • global navigation (provides users with access to the entire site no matter which page they’re currently navigating)
  • local navigation (provides users with access to nodes that are in close proximity to each other)
  • supplementary navigation (provides users with access to related content that is not readily available through the global/local navigation)
  • contextual navigation/inline navigation (provides users with access to navigation elements (links, buttons, etc) within the content itself.  i.e. a link/button within a paragraph as opposed to in the interface)
  • courtesy navigation (provides users with access to information that they wouldn’t need regularly, but made available for convenience purposes)

When implementing information design, Garrett stresses the need to present information in a way users can easily understand.  Information graphics generally give clarity to dry text; pie charts, for instance, give instant meaning to facts and figures.  The way information is grouped can also play a huge role in the success of your site, and staying consistent with conventions users already know is always worth considering.  For example, if your site offers a form for users to fill out, the order of the fields is very important.  You wouldn’t want to ask for a credit card number, then first name, then zip code, then credit card type, then last name.  That order not only makes your site seem incoherent, but also less credible.  Garrett says it’s important to stick to conventions people are familiar with.

It is also important to provide support for wayfinding: visual cues, drawn from information and navigation design, that help users know where they are, where they can go, and which choices will help them fulfill their goals.  Garrett explains that wayfinding can include specific color schemes for specific sections, as well as typography, icons, and labeling that let users know where they are.

The last crucial element of the skeleton plane is the wireframe (a page schematic, or bare-bones depiction of all the components of a page and how they are placed on it).  Garrett states that wireframes are a necessary first step in determining the aesthetics of the site.  Wireframes are valuable in that they incorporate every element of this plane: interface design through the selection and arrangement of interface elements, navigation design through the identification of navigation systems, and information design through the placement of information components.

The last plane of user-centered design, at the top of this five-plane model, is the surface plane.  Here, all the previous planes come together to provide the user with a finished, fully functional design.  Garrett says that when fine-tuning the visual design of a page, you don’t necessarily need sophisticated eyetracking equipment to determine exactly where a user’s eyes go; you can simply ask the user.  Although this is not as accurate as eyetracking, it still gets the job done.  After determining where, and how frequently, the eye moves around the page, it’s important to adjust the visual elements to fix any problems.

Generally, the primary tool designers use to get the user’s attention is contrast.  Without contrast, the visual design looks dull and featureless, and the user’s eyes wander aimlessly around the page.  Garrett argues that contrast is necessary for calling out essential aspects of the design and for differentiating between aspects such as interface and navigation elements.  Contrasting elements let the user notice that something is different, and that awareness draws their attention to the overall design and, in turn, to the functionality of the site.  It is also important that contrasting elements differ clearly from one another.  If an element varies only slightly from those around it, the user will be confused about what the slight variation means; it will look like an accident or a flaw in the site, which is not a response you want to elicit.  With that said, maintaining uniformity is essential to ensuring such responses are not invoked.  Garrett says that keeping elements uniform also benefits the designer: uniform elements can be reused in new designs and pages, where only the content changes, not the format.  Using a grid-based layout (an invisible grid imagined on the page) lets you create consistent pages with uniform elements from page to page.

Garrett also stresses the importance of maintaining consistency internally and externally.  External consistency means that every one of an organization’s divisional sites should maintain consistent themes.  For example, a university has many divisional sites; its career center site should be consistent with its student records site.  Internal consistency means that every part of a divisional site maintains consistent themes with its subpages.  The hypothetical career center site mentioned above, for instance, should be consistent with its “careers for current students” and “careers for alumni” pages.  All these subpages should keep their uniform and contrasting elements consistent, including color palettes, schemes, and typography.

Work Cited
Garrett, Jesse James. The Elements of User Experience: User-Centered Design for the Web. New York: American Institute of Graphic Arts, New Riders, 2003. Print.

Zachry, An Interview with Andrew Feenberg

Computers & Communication
Feenberg’s interest in computer technologies emerged from his work in online education at the Western Behavioral Sciences Institute in the 1980s.  He also studied computer-mediated communication in France while working on a project to introduce computer conferencing to the French Minitel system.  He said this attempt was unsuccessful due to a keyboard unsuitable for typing (which ties into Norman’s book and its emphasis on good design as a prerequisite for an object’s success).  Feenberg states that people who design technologies don’t think about human communication while designing; it is only after being influenced by user feedback that they go back and add important usability features.

Critical Theory & Design
Feenberg says the critical theory of technology is a critique of domination exercised through the organization of technically mediated institutions such as how minds are shaped and controlled.

Hacking, Creative Appropriation, and User Agency
Feenberg says that when people hack, redesign, or reinvent technology, they do so either to (1) represent themselves better or (2) represent more aspects of their lives.  He says ease of communication between victims and users is important, as users clearly influence the design of technology, and participatory design is ideal.

Social Design
Feenberg says designing for social implications is critical.  Different social purposes are served through different design implementations.  He says that while this seems like an obvious fact today, in the mid-1980s people were not designing differently for different social contexts.

Technical Communication
Feenberg says it’s important to communicate about technology in a way a nontechnical audience can understand.  He says technical users often think of solving problems in ways nontechnical users never would, so a technical writer must understand how to bridge the gap between the nontechnical and the tech savvy.

Norman, The Design of Everyday Things

In the preface of Donald A. Norman’s The Design of Everyday Things, he explains his inspiration for writing the book.  While on a sabbatical year at the Applied Psychology Unit in Cambridge, England, he encountered very poorly designed infrastructure: some faucets required the handle to be turned left for warm water, whereas others required it to be turned right, all in the same building.  Similarly, the building’s doors followed the same inconsistent patterns; some required you to push to exit, others to pull, or even to slide.

In the preface, Norman also explains that the book had been published under the title The Psychology of Everyday Things, but he realized that readers interested in design and objects (and our relationships to them) weren’t able to find it because it was always miscategorized and shelved in the psychology section.  On his editor’s recommendation, he agreed that “Psychology” should be swapped for “Design,” and every new edition from that point onward is titled accordingly.

Norman’s purpose in writing this book is to show that if an individual has trouble figuring out how to make something work properly, it’s not the user’s fault, and they shouldn’t blame themselves.  If you have to exert a lot of energy and effort to figure something out, that is the result of poor design, and it is the design that needs to be re-evaluated.  He explains that three topics in the book are of vast importance:

1.) Don’t blame yourself for not knowing how to use something because chances are a person’s inability to make that particular thing function is a result of poor design.

2.) Design principles such as conceptual models (making something that seems arbitrary more concrete), feedback (every action should produce a visible record of its effect), constraints (limiting choices to prevent errors and make something easy to use), and affordances (appropriate actions are made perceptible, while inappropriate actions are made invisible).

3.) Be very aware of the objects around you.  No longer blindly pass by objects and think nothing of them; begin to analyze and critique their design and think of ways to make them better.

The Psychopathology of Everyday Things
In this section, Norman starts off by saying an engineering degree from MIT should not be required to operate basic gadgets or household items.  A person shouldn’t have to work for hours trying to figure out how to operate an object.  These objects should instead be intuitive and leave the user happy, not frustrated.

The Frustrations of Everyday Life
A person needs guidance when using an object of any sort, and only the appropriate and necessary things should be visible: indicators of what works and how, and of how the user is to interact with those controls.  These visible indicators are important for distinguishing what is necessary from what is not.  An excessive amount of superfluous visible detail makes an object harder to understand, less intuitive, and far less user friendly.

The Psychology of Everyday Things
The psychology of everyday things puts the emphasis on how we understand these things.  As mentioned in the preface, affordances provide clues to how things work: a doorknob tells you to turn and pull.  Poorly designed things, however, may lack the necessary affordances.  Norman gives the example of a cabinet with a string attached to give it the affordance of being opened; the fact that the user needed to add a feature shows that the cabinet was designed poorly.  Had it been designed properly, the string would never have been needed.  Norman also discusses the importance of forming conceptual models of objects before designing them.  Turning an arbitrary idea into a conceptual model gives you a better sense of whether a design will succeed.  He also re-emphasizes making the necessary visible and the irrelevant invisible.  For example, you need all the number keys on a phone to make a call, but you don’t necessarily need the ten extra buttons that surround those keys to perform additional functions; perhaps the design could be revamped so that only a couple of keys perform all ten additional functions.  He also says that mapping (the relationship between two things) should be carefully thought out.  The relationship of a steering wheel to turning a car, for example, is clearly visible and provides immediate feedback: you turn the wheel left to go left, and right to go right.  Norman says that natural mapping (mapping that follows cultural standards and norms) provides immediate understanding, because the audience you are designing for is already familiar with these conventions.

Social Media Revolution

This video poses the question, “Is social media a fad, or here to stay?”  I believe it’s definitely here to stay.  The video is an amazing look at how our reality has been altered by the presence of new social media.  The segment showing that Generations Y and Z view e-mail as passé, and Boston College’s reaction to that view (in 2009 Boston College eliminated the distribution of e-mail accounts to incoming freshmen), is proof of this.  If new media is ALREADY having an effect on the procedures of higher education, imagine what the future will hold.  Shortly after, the video touches on the same idea as the Pew Internet report on digital footprints: with online technologies, the notion of our past fading into memory is no longer true.  Another thing I must comment on is the claim that Wikipedia is MORE accurate than the Encyclopedia Britannica.  This is completely mind-boggling to me; I’ve never used Wikipedia as a source of reliable information because anybody can edit it.  It just goes to show that what you perceive to be better isn’t always so, and it shows how strong social media really is and how big an impact it has on our lives.  The video itself was made with motion graphics, a form of new media that mixes instances of old media with new technologies to give the viewer a somewhat hypermediated experience.

Professional investigation: academic disciplines, courses, & technology training

Below is a listing of coursework relevant to the M.A. in New Media Studies, separated by department.

  • Department of Writing, Rhetoric, & Discourse, http://condor.depaul.edu/~wrd/
  • WRD 513: Semiotics
    An introduction to semiotics, or the study of “the sign”—a theory of meaning that is concerned with anything intended to or interpreted to stand for something else, including objects, pictures, sounds, gestures, and body language. The course examines the construction of meaning in manifold contexts, extending the notion of “text” beyond the written page to any artifact that functions as a “message” embodied in a genre and a medium. The study of semiotics is important for writers in that our understanding of and expectations for literacy have become increasingly bound up with other modes of symbolic production in digital environments such as the Internet.

    WRD 520: Computers and Writing
    Explores the cultural, institutional, professional, and pedagogical implications of digital writing technology, drawing upon theories of technology as well as discussions from the field of computers and composition.

    WRD 521: Technical Writing
    An introduction to various aspects of technical writing, including readability, document design, editing, and usability.

    WRD 524: Document Design
    Theories, concepts, and components of effective document design, including the interrelation of visual displays and written texts across a range of electronic and print genres.

    WRD 525: Writing for the Web
    An introduction to various genres of web-based communication and the roles played by writers, readers, and users of web sites. Includes analysis, design, and revision of web-based writing as well as practice producing written documents which accompany the development of web information.

    These courses are relevant because they focus on the technical formats of writing for new media and on the meaning behind signs and symbols.

  • Department of Art & Art History, http://condor.depaul.edu/~art/
  • There is no graduate-level coursework from the Department of Art & Art History, but relevant courses include Video Art, Culture & Media, Documentary Video, Color Theory, etc.  This department offers courses that focus on the visual communication aspect of new media.

  • College of Communication, http://communication.depaul.edu/
    This seminar considers the cultural ramifications of new media in shaping life experience and opportunity. As interactive digital media technologies expand opportunities for social networking, text and instant messaging, file sharing, collaborative authoring, blogging, podcasting and mobile communication, this seminar asks how these new technologies impact identity formation, creative participation and concepts of public culture. Issues of concern include race, gender, class, sexuality, cultural citizenship, fandom, subcultures and democratic participation.

    This course examines the ever-increasing influence of public relations and advertising in our society, highlighting issues of power and social responsibility. Students are asked to think critically about the societal effects of public relations and advertising and their roles in the production and maintenance of public opinion. Future practitioners consider the potentially adversarial relationship that exists between public relations and advertising and the media in societies based on a free press. Formerly CMN 505

    This foundational course examines the theories, principles, applications and standards of advertising in multiple contexts, both from the perspectives of the practitioner and the consumer. Formerly CMN 553

    These courses are relevant because they focus on the social and cultural reactions and effects surrounding the use of new media.

  • CDM’s graduate programs <http://www.cdm.depaul.edu/academics/Pages/MastersDegrees.aspx>, particularly HCI <http://www.cdm.depaul.edu/academics/Pages/MSinHuman-ComputerInteraction.asp>
  • HCI 402 Foundations of Digital Design
    Shape, line on two-dimensional surfaces. Color. Composition rules as they apply to digitally created documents. Digital manipulation of two-dimensional images. Use of commercially available draw and paint tools to create two-dimensional designs.

    HCI 422 Multimedia
    Multimedia interface design. Underlying technological issues including synchronization and coordination of multiple media, file formats for images, animations, sound and text. Hypertext. Information organization. Survey of multimedia authoring software. Topics in long distance multimedia (World Wide Web). Students will critique existing applications and create several multimedia applications.

    HCI 440 Usability Engineering
    The user-interface development process. Introduction to methods for practicing user-centered design including user and task analysis, user interface design principles and testing using low-fidelity prototypes.

    HCI 470 Digital Page Formatting I
    Problem-based applications of perceptual and communication principles to the presentation of on-line and off-screen pages. Includes experience with industry standard vector, raster and formatting software.

    HCI 454 Interaction Design
    Information architecture and interactive page design. Perception and use of menus, labels and user controls. Structuring information for navigation and presentation. Selecting and placing user controls for optimizing task flow on pages and across pages. Creating wire frames and using content managers.

    These courses are relevant because they focus on the technical creation of aesthetically pleasing, intuitive, user-friendly pages and applications, which is useful when designing for new media.

    Madden, Fox, Smith, & Vitak- Digital Footprints, Report: Identity, Search, Social Networking

    Madden, Fox, Smith, & Vitak have compiled a Pew Internet report detailing user awareness of their digital footprints.  With the emergence of Web 2.0 technologies, a person’s name, address, and phone number just scratch the surface of what really comprises personal information.  In a time when people voluntarily author personal content (thoughts, pictures, videos) on the web, where countless others can view it, we are not only willingly (though sometimes unwittingly) opening the door to public criticism; we are also offering strangers our innermost thoughts and an intimate look into our lives in a way that we normally would not.

    The phrase “digital footprints” may sound less serious than it really is.  The authors point out that unlike footprints in sand, digital footprints (or online data trails) linger long after the tide has come and gone.  According to this report, users today are much more aware of their digital footprints than they were just five years earlier: in 2002, only 22% of internet users had searched for information about themselves online, but by 2007, when this report was conducted, that figure had more than doubled to 47%.  Users under the age of 50 are more likely to conduct self-searches via the internet than those who are 50 and older.  Other statistics show that men and women search for information about themselves in equal numbers, but that those with higher levels of education and income are more likely to search for themselves and monitor their identities through search engines than those with lower levels of education and income.

    Of all self-searchers, nearly three-quarters said they check up on their digital footprints only once or twice, 22% said they check every once in a while, and only 3% said they make it a regular habit.  Internet users who are not self-searchers say they aren’t even aware of what personal information of theirs is available on the internet, beyond their e-mail addresses, home phone numbers, home addresses, and employer information; some say they aren’t certain that even this much is available.  However, privacy advocates assured the authors that most of this information is readily available on the internet, whether through online databases or the public web, regardless of whether you authored the information yourself.

    Of all internet users, 60% are not concerned about the information available about them online, and 61% of adult internet users do not find it necessary to limit the amount of information that can be found about them.  However, 38% of these online adults indicated that they have taken steps to limit the amount of information that can be found about them online.

    Madden, Fox, Smith, & Vitak have divided online adult internet users into the following four categories:

    1.) Confident Creatives (17% of online adults): Not worried about the amount of information found about them online, and active participants in authoring their digital content.  They do, however, take steps to limit the personal information that can be found about them.
    2.) Concerned & Careful (21% of online adults): Wary about the information found about them online, and regularly take steps to limit it.
    3.) Worried by the Wayside (18% of online adults): Worried about the information found about them online, but do little to limit it.
    4.) Unfazed & Inactive (43% of online adults): Neither care about the amount of information that can be found about them online nor do anything to limit their digital footprints.

    Users such as the 43% of unfazed online adults may have legitimate reason for their minimal concern or uncertainty about their digital footprints, as 38% of internet users who searched for their own names found little or no information about themselves.  Among self-searchers, 13% expressed disbelief at how little information such searches retrieved.  Meanwhile, 87% of self-searchers found the personal information retrieved to be very accurate.  Some, however, found inaccurate information, like the 4% of online adults who said that embarrassing and inaccurate information about them online has caused them to have bad experiences.

    Neale & Russell-Bennett, What value do users derive from social networking applications?

    Neale and Russell-Bennett wish to uncover the value that users derive from social networking applications.  For example, what makes one application more worthy of recommendation than another?  What does a user value most when judging the popularity of an application?

    Neale & Russell-Bennett write that only 3% of interactive advertising dollars were spent on social media, not because advertisers believe social media doesn’t work, but because conventional advertising on social media sites is expensive.  Instead, advertisers have turned to social networking applications because they are far more economical: cheaper and quicker to make.  And the more popular an application becomes, the more people will see the advertisements (presumably, users of these applications will pass the word on to friends, garnering more views).  In their investigation, Neale & Russell-Bennett ask two questions: 1.) What value do users derive from cool Facebook applications? 2.) What features of an application encourage or discourage users to recommend it to their friends?

    Customer Value
    In terms of social media, value does not depend on the monetary gain produced by sales.  Rather, value relies heavily on the time and information exchanged between user and organisation.  In their investigation, Neale & Russell-Bennett categorise four types of value generated by Facebook applications:

    1.) Emotional (pleasure/fantasy/fun resulting from the use of an application)
    2.) Functional (performance/technical features)
    3.) Social (connections of other people by use of an application)
    4.) Altruistic/Humane (helping others in society)

    What factors are needed in making an application “Cool?”
    After issuing an anonymous online survey addressing the research questions mentioned earlier in this blog post, Neale & Russell-Bennett had coders analyse responses for three kinds of features:

    1.) A feature that can encourage/discourage user recommendation.
    2.) A feature where different levels of the feature can encourage/discourage user recommendation.
    3.) A feature that’s uni-directional and either encourages or discourages, not both.

    What makes an application cool? Applications that allow self-categorisation (developing a social/personal online identity), that change regularly, that have high levels of interactivity, that are highly recommended, that display high levels of creativity, that link to pop culture, that give users access to uncommon information, and that help waste time when time is available to waste.

    What symmetrical features encourage & discourage recommendations? Time wasting can be seen as a legitimate use of an application, since it keeps a user occupied until something more important comes up, but it can also be seen as illegitimate, since wasting time may indicate a pointless activity.  Notifications may encourage user recommendations because they keep an individual up to date on what friends are doing, but may also discourage them because of the influx of messages received.  Sharing encourages recommendations because some applications require a certain number of referrals to unlock certain features, but it also discourages recommendations when the number of requests received feels like spam.  Competition, knowing how you stand in relation to friends and other users of an application, encourages recommendations, but can also discourage them because some feel it opens the door to judgement.  Lastly, an application’s ability to express one’s personality encourages recommendations, but can also discourage them because some feel it reveals too much information.

    What polar features encourage & discourage recommendations? Positive word of mouth may encourage, whereas negative word of mouth may discourage.  High interactivity may encourage, but low interactivity may discourage.  The novelty of an application may encourage, but an excessive number of users may discourage.  Positive reactions (fun, enjoyment, excitement) may encourage, but negative reactions (annoyance, anger, boredom) may discourage.  Knowing that a friend would like an application may encourage, but knowing that you dislike receiving application requests may discourage.  An application that increases user knowledge may encourage, whereas one that is not mentally stimulating may discourage.  Lastly, an application that is user-friendly may encourage, but one that isn’t intuitive may discourage.

    What uni-directional effects encourage & discourage recommendations? Applications that support a cause, that serve as virtual substitutes for tangible gifts, that allow synchronisation with devices or applications outside of Facebook, that give rewards for usage, and that provide reminders all encourage user recommendations.  Applications that are blatantly used for commercial advertising, that intrude on a user’s privacy, that appear to lack credibility, that ask too many questions before the application can actually be used, that embody immoral themes, that require monetary costs to use, that serve their purpose after only one use, that are completely irrelevant or superfluous to a user’s needs, that appear childish or childlike, and that have low user ratings and poor user feedback all discourage user recommendations.

    Kress & Leeuwen, Multimodal Discourse: The modes and media of contemporary communication

    Kress and Leeuwen introduce this piece by explaining how monomodality (the use of a single mode of communicating ideas, e.g. books containing only text, or paintings in one form and one medium) long dominated Western culture, but is being displaced as multimodality (the use of multiple modes of communicating ideas, e.g. the combination of color illustrations, typography, and layout design in comic strips, magazines, and brochures) rises to dominance.  Kress and Leeuwen wish to explain how the technical and the semiotic can be made to work in unison; they seek to theorise communication through an analysis of semiotic modes.

    The issue of meaning in a multimodal theory of communication
    Kress and Leeuwen discuss four strata, or domains of practice in which meaning is made: discourse, design, production, and distribution.

    Discourse- Describes reality through socially situated forms of knowledge, including the who, what, when, where, and why of the events that constitute that reality.

    Design- The conceptual side of semiotic products and events, in which three things are designed simultaneously: 1.) discourse(s) 2.) the interaction of discourses 3.) a way of combining semiotic modes.  Because design works with abstractions of semiotic modes, the design process is separate from the semiotic product/event.

    Production- The articulation of semiotic products/events in material form, whatever that form may be, so long as it is encoded into a comprehensible form for distribution.  The production process gives meaning to the articulation process and gives form to design.

    Distribution- The re-coding of semiotic products/events for purposes of distribution and recording.  Distribution technologies are generally not production technologies and are not intended to produce meaning.  However, their re-coding often produces unintentional semiotic potential (e.g. noise or film grain).

    Bolter & Grusin, Remediation: Understanding New Media

    Introduction: The Double Logic of Remediation
    In the introduction to this piece, the authors point out that digital technologies and new mediums are becoming increasingly prevalent in modern society, to the point where keeping up with and maintaining these technologies is difficult.  Also in this introduction, Bolter and Grusin define the title of the work: “remediation” is the representation of one medium in another medium.  The logic behind remediation is contradictory: our culture wishes to erase media even as it multiplies media.  This occurs when older forms of media (print and electronic) attempt to compete with newer, digital forms of the same media.  Our desire for immediacy allows both old and new mediums to capture the moment we seek (high-speed car chases, artificial environments on movie sets that recreate historical moments, webcams that create certain perceptions) by successfully eliminating the medium.  This elimination of the medium creates a visual representation that makes viewers feel as if they are within the represented environment.

    This introduction also speaks of how immediacy and hypermediacy are used together, with immediacy depending on the latter.  Bolter and Grusin give examples of how newscasters scroll hypermedia such as graphics, audio, and text across their segments in order to provide the public with the complete, immediate stories they deem necessary.  Similarly, hypermediated forms use various tactics to provide a sense of immediacy, as in hypermediated music videos that appear to be taking place live.  Another good example of the combination of immediacy and hypermediacy is the flight simulator, where a hypermediated computer simulation is used as a real-time instrument that teaches users how to fly and gives immediate feedback, as if they were flying an actual aircraft.

    The Logic of Transparent Immediacy
    Bolter and Grusin write of transparent immediacy, the desire for media without mediation, and begin with examples of virtual reality apparatuses.  Virtual reality relies on the idea that you are to disappear and immerse yourself in an environment other than your own.  The viewer is given an immediate sense of presence: realistic graphic visualization allows the viewer to forget, and to a certain extent deny, that any virtual reality apparatus is making the visualization possible.  Another example of immediacy lies in 2D/3D digital graphics, which replace traditional forms of tangible imagery and the monotony of on-screen text, and which add depth to phone conferences (by providing visuals, via videoconferencing, for otherwise pictureless sounds).  Digital compositing in film allows for the replacement of stunt doubles, and the graphical representations of file folders, recycle bins, and paper on a computer screen not only create a sense of immediacy but also serve as replacements for their tangible forms.  Like virtual reality apparatuses, these mediums seek to be transparent.  By creating environments similar to those that exist in reality, GUIs and other graphics (particularly 3D graphics) seek to blur the line between the producer and the produced, so that what is produced is not seen as coming from a medium.

    How, though, does one create a sense of transparent immediacy?  How do you make a viewer or user not see the media while enjoying the fruits of that media’s labor?  To put it simply: create the most realistic atmosphere possible.  Bolter and Grusin say that digital graphics are created in perspective and are mathematically precise; size, shape, shading, color, and illumination are all calculated to the best of a computer’s capability.  Hence the line between the medium and what it produces is erased: the more realistic the production, the easier it is for an individual to forget that it was produced.

    Bolter and Grusin are not saying that the logic of transparent immediacy requires the viewer to be completely oblivious to the fact that what is produced was produced by a medium, as that is all but impossible.  Rather, the logic is to leave these viewers in awe of what is produced, something only possible when a production is so realistic that it appears not to have come from a medium.

    The Logic of Hypermediacy
    What exactly is hypermediacy?  Bolter and Grusin draw on quotes from William J. Mitchell, Bob Cotton, and Richard Oliver to define hypermediacy as a combination of mixed media and heterogeneous spaces, with random access and no set beginning, middle, or end.  The authors offer examples of European cathedrals, secular furniture such as 16th- and 17th-century cabinets, oil paintings, mechanical reproductive technologies of the 19th century, collages, and photomontages to illustrate hypermediacy.  All were composed of the disposition and interplay of various forms and heterogeneous spaces.

    No matter what form hypermedia takes, the logic remains the same.  Hypermedia acknowledges that a mediated space is mediated; it does not seek transparency.  On the contrary, hypermedia artists strive to make their viewers see the medium as a medium through the repeated representation of visual and conceptual relationships among mediated heterogeneous spaces.

    Bolter and Grusin begin their section on remediation by showing how various literary works have made their way not only into on-screen adaptations but also into paintings, through a process known as repurposing (reusing elements from one medium in another medium).  They explain that nowhere in these films or paintings is it announced that the content was borrowed from another medium, as this would detract from the immediacy and seamlessness that viewers desire.  With the idea of repurposing in mind, Bolter and Grusin argue that Marshall McLuhan’s claim that the content of any medium is always another medium is problematic.  They give examples of Dutch painters who incorporated mirrors, inscriptions, maps, globes, and the like into their paintings.

    The representation of one medium in another medium is remediation, which Bolter and Grusin argue is a defining characteristic of new media.  In other words, new media is usually remediated, as it comprises an older medium.  For example, a DVD of the movie “Batman” contains the same content as a VHS of the same movie, but performs better and has more features.  Although the DVD’s purpose is to deliver the same information as the VHS, it is advertised as having much better quality and special features.  Because it aggressively points out these stark differences, this particular new medium does not wish to be transparent.

    Some remediations, however, do wish to be transparent and to erase the lines between the old and the new.  For example, online photo galleries and websites provide images of actual paintings as a means of easing access to works that are otherwise very hard to reach (you’d have to fly to Italy to see Michelangelo’s painting on the ceiling of the Sistine Chapel…or just walk to the library and flip open a book, though even this is less convenient than the remediated online gallery).

    The last form of remediation comes in the desire to completely eliminate the older medium.  This, of course, is an impossibility, because anything remediated relies on its older form.  For example, games like Doom, Myst, World of Warcraft, and Final Fantasy have very elaborate, cinematic storylines with the added feature of interaction between user and game.  These interactive games seek, in a way, to absorb the older medium of film.  I say “in a way” because the creators of these games know better than to think the film industry will ever be absorbed, but they wish to give it a run for its money through amazing graphics and interactive features.  Similarly, Hollywood is implementing computer graphics in order to fight off the possibility of digital media replacing traditional film.