Desktop. Window. Mouse. Document. Loop. Trash can. Recycle bin. These things all had rather independent meanings before the emergence of the modern computer. They existed in a manner that fulfilled their own purposes.

In 1998, William Stubblefield published an article titled “Patterns of Change in Design Metaphor – A Case Study.” Written during the heart of the dot-com/technology boom, the author sought to uncover the significance of metaphors in the digital world, drawing on ideas such as Black’s interaction theory of metaphor. The case study was the development of the Design for Machinability (DFM) Advisor, a system created to improve the manufacturability of metal parts. The design team included two software engineers, a knowledge engineer, a manufacturing engineer, a project management team, and the customer.

There were three different prototypes:

  1. Prototype 1: A Pure Spelling Checker
  2. Prototype 2: The Dual-Use Approach
  3. Prototype 3: Tools and Critics

The spell-checker prototype was mainly concerned with holes, detecting specific machinability problems with them. Much like a spell checker, the interface included a “Next” button that permitted navigation from one critic to the next. However, the feature recognition of Prototype 1 adhered to no particular sequence, which led to a more flexible order of evaluation in Prototype 2. I did have to look up tolerances as they relate to engineering: essentially, the allowable variability and imperfection in a part, so all machining involves some level of tolerance.

Prototype 2 allowed for more leeway with a “dual-use” approach, where the user had the ability to select different features, as opposed to the one-item-at-a-time model of a spell checker. Prototype 3 used essentially the same foundation as Prototype 2, but split the knowledge base into two components: tools and critics. To counter the issues of the previous two iterations, “critics were used exclusively to evaluate existing features, whereas tools could be used for either evaluation of existing features, or to design features from scratch.” The critic component stayed very much in line with the spell-checker metaphor, while the tool component allowed the user to make more customized inputs.
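To make that split concrete for myself, here is a tiny sketch of how I picture tools and critics in code. To be clear, this is my own illustration rather than anything from the paper, and every name in it (Critic, Tool, next_problems, the hole critic) is hypothetical.

```python
class Critic:
    """Evaluates an existing feature, spell-checker style: it can only flag problems."""
    def __init__(self, name, check):
        self.name = name
        self.check = check  # function: feature -> problem message, or None if OK

    def evaluate(self, feature):
        return self.check(feature)


class Tool:
    """Can evaluate an existing feature OR design a new one from scratch."""
    def __init__(self, name, check, create):
        self.name = name
        self.check = check
        self.create = create  # function: spec -> new feature

    def evaluate(self, feature):
        return self.check(feature)

    def design(self, spec):
        return self.create(spec)


def next_problems(features, critics):
    """Spell-checker-style navigation: yield flagged problems one at a time,
    as if the user were clicking the Next button."""
    for feature in features:
        for critic in critics:
            problem = critic.evaluate(feature)
            if problem:
                yield feature, critic.name, problem


# Hypothetical critic: flag holes whose tolerance is too tight to drill normally.
hole_critic = Critic(
    "tight-tolerance hole",
    lambda f: "tolerance too tight for standard drilling"
    if f.get("kind") == "hole" and f.get("tolerance_mm", 1.0) < 0.05
    else None,
)

features = [{"kind": "hole", "tolerance_mm": 0.01}, {"kind": "slot"}]
for feature, critic_name, problem in next_problems(features, [hole_critic]):
    print(f"{critic_name}: {problem}")
```

The point, as I read it, is that the critic side keeps the comforting spell-checker behavior, while the tool side is free to escape it.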

The 2006 publication by Phoebe Sengers and Bill Gaver, “Staying Open to Interpretation: Engaging Multiple Meanings in Design and Evaluation,” has a somewhat different take on design, while not being completely divergent. The goal of the paper was to demonstrate the possibilities of multiple or co-existing interpretations of the same thing, most notably in Human-Computer Interaction (yep, one of my fields, from back in the day!). Due to the structure of the paper, I felt it was important to outline their work, not unlike the way they did. After all, my very popular blog was created as a means for me to take notes!

In terms of there being a “single authoritative interpretation,” the authors ask what the interpretation might be and who is making it. In terms of what, there are higher (social/cultural meanings), middle (everyday use), and lower (usability) levels of interpretation. In terms of who, they refer to the mental models of users and designers. Since it is the user who will actually be using the design, the design of a system is best led by the user’s interpretation.

The authors explain that systems that allow for multiple interpretations might be “safer,” since users take responsibility for their own actions and can even define their own meanings, which hopefully makes them engaged, active users. One example they offer is the use of “skins” and end-user programming.
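As a toy illustration of why skins and end-user programming hand interpretation to the user, here is a sketch of my own (not from the paper; all names and values are made up): the functional core stays fixed, while the user-chosen skin decides what the same value means.

```python
def battery_status():
    """The fixed, functional part of the system: report battery level as a fraction."""
    return 0.42  # pretend sensor reading


# User-supplied "skins": each one reinterprets the same value in its own terms.
skins = {
    "engineer": lambda level: f"Battery: {level * 100:.0f}%",
    "gardener": lambda level: "The plant is thirsty" if level < 0.5 else "The plant is happy",
    "minimalist": lambda level: "*" * round(level * 10),
}

chosen = "gardener"  # the end user's choice, not the designer's
print(skins[chosen](battery_status()))  # -> "The plant is thirsty"
```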

About half of the paper consists of a “taxonomy.” When designing for “multiple, heterogeneous interpretations,” the authors state that designs can:

  1. Clearly specify usability, while leaving interpretation of use open
    1. Design as a blank canvas to allow for multiple interpretations
    2. Key Table is an example
  2. Support a space of interpretation around a given topic
    1. Suggest possible topics without specifying how users should relate to them
    2. Electronic History Tablecloth is an example, which glows when objects have been on it for some time
    3. Users might ask — should I move the object? Is this good? Bad?
    4. eMoto is an example
  3. Stimulate new interpretations by blocking obvious, expected ones
    1. Drift Table is an example, which had one simple use
    2. Many people lost interest at first, but came to appreciate it for its simplicity
  4. Gradually unfold new opportunities for interpretation
    1. System complexity gradually increases over time
    2. Penny’s Traces is an example, where physical movements in the CAVE leave 3-D traces that interact with the user
  5. Allow for user re-interpretation by downplaying system authority
    1. Users are allowed to make up their own minds about whether or not a system is correct
    2. Seamfulness – a design strategy that represents the limitations and uncertainties in data, allowing users to make up their own minds about how to deal with it (see the sketch after this list)
    3. Alien presence – a design strategy that supports a license to reinterpret systems
    4. Office Plant #1 is an example of alien presence: a robotic sculpture that responded to the emotional/social tenor of an incoming email stream
    5. Voice recognition applications come to mind here
  6. Thwart any consistent interpretation
    1. Ambiguity, designed to support multiple interpretations
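Of all these strategies, seamfulness (mentioned under item 5 above) felt the most code-like to me, so here is a minimal sketch of the contrast as I understand it, with hypothetical names and numbers of my own: the “seamless” version asserts one authoritative reading, while the seamful version surfaces the limitations of the data and leaves the judgement to the user.

```python
from dataclasses import dataclass


@dataclass
class Reading:
    value: float         # e.g. estimated distance along a route, in metres
    error_margin: float  # how far off the sensor might plausibly be
    age_seconds: int     # how stale the reading is


def seamless(r: Reading) -> str:
    """The single-authoritative-interpretation style: one confident number, seams hidden."""
    return f"You are at {r.value:.0f} m."


def seamful(r: Reading) -> str:
    """The seamful style: show the uncertainty and let the user decide what to make of it."""
    return (f"Roughly {r.value:.0f} m (give or take {r.error_margin:.0f} m, "
            f"last updated {r.age_seconds} s ago).")


reading = Reading(value=120.0, error_margin=35.0, age_seconds=90)
print(seamless(reading))  # -> "You are at 120 m."
print(seamful(reading))   # -> "Roughly 120 m (give or take 35 m, last updated 90 s ago)."
```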

However, with this seemingly wishy-washy approach (it’s really not) to design, one might wonder how to go about evaluating such systems. The authors’ approach is that “systems can no longer be effectively evaluated in terms of criteria generated from a single, authoritative approach.” Essentially, there may be several strategies and several different interpretations for evaluation. Furthermore, users are presented with their own information, a notion known as dynamic feedback, and evaluators gather multiple interpretations from multiple groups. This is not unlike the idea that “two sets of eyes are better than one.”

In their concluding statement, the authors state that the goal of their paper was to “allow the rhythms of constraint and openness in interpretation to become part of the design language available to us in HCI.” They also stress that these approaches to multiple interpretation and multiple evaluation should not mean “an anything goes mentality” with respect to design, interpretation, and evaluation.

While I enjoyed reading both articles with their emphasis on design, I found them both to be rather dense with information and written at a language level that even Mozart could not have soothed. It is interesting to ponder the two articles together. The 8-year difference between them is not unlike 60 human years in the technology world. In 1998, few people had mobile phones and analog networks still existed. In 2006, BlackBerry devices (once known as crack-berries) were hot, and the iPhone, which would bring the touch screen to the mobile masses, was just a year away. However, the design concepts in both of these articles are classic, perhaps even timeless. Metaphors can still be seen in many aspects of today’s technology, and allowing for multiple interpretations is becoming more central in design.