Wednesday, April 6, 2016

Stage 1-Part C- i) Research

Brand Extension Info:
There are two categories: line extension and category extension.
The Halo Effect: The established brand promise and brand image of the parent brand carries over to the brand extension automatically
Risks: Poorly executed brand extensions can tarnish the parent brand.
Examples: Bic lighters, pens, and razors were successful, but Bic underwear failed because it was too far removed from the parent brand image

Current Starbucks Brand Extensions:
Delivery
Food
Ice cream
Evenings concept
Mugs, cups, etc.
Seattle’s Best Coffee
Self-service stations


ii) Preliminary Research Question


Starbucks’s identity, formulated as a simulacrum of the Italian cafe, was being lost to a relentless focus on growth and profitability rather than experience. Lines backed up as baristas made a growing list of complicated blended drinks, often incorrectly. The joy of a morning coffee with a smile was lost as automated machines took over for the humans. The air grew rank with the stink of egg sandwiches. And Starbucks stock took a tumble, too.


There are over 16,700 Starbucks locations in more than 50 countries, including Wales, which we're pretty sure isn't a country (update: it is a country). During a particularly heady period in the late 1990s and early aughts, Starbucks was opening a new store every workday.



Starbucks doesn't franchise

iii) Generative Research
Observation

Observation is a research method that is behavioral, qualitative, traditional, and exploratory. Observation can be formal (structured) or informal (semi-structured). It involves ethnographic methods in the exploratory phase of the design process. The observer goes into the often unfamiliar environment with an open mind and takes notes, pictures, and raw video footage to document their experiences. Later, the researcher synthesizes their findings into categories to uncover common themes or patterns.

This method may be useful in this project in the beginning stages of the process. Going to Starbucks, sitting, and watching customers and employees interact will be a great place to start looking for design flaws and where a brand extension could be implemented.

Example:
"With preschoolers a stationary video camera with a wide-angle lens can be put near the ceiling in one corner of the room, without influencing behavior. Sherman (1975) used this technique to study the phenomenon of group glee in preschoolers. Group glee was defined as "joyful screaming, laughing and intense physical acts" which quickly spread in the group. Sherman made video recordings of 596 preschool classes taught by student teachers. He was able to identify what factors set off the group glee (for example, it tended to happen when a teacher asked for volunteers for an activity), and he studied reactions of teachers. Notice that a naturalistic study need not take place "out in nature.""

-http://www.intropsych.com/ch01_psychology_and_science/observational_research.html

Generative Research

Generative research is a broad method of design research. Exercises engage users in creative opportunities to express their feelings, dreams, needs, and desires, resulting in rich information for concept development. Its main goal is to develop empathy for users. There are two main types of generative research: projective and constructive. Projective exercises come in an initial phase, are ambiguously instructed, and help users articulate thoughts, feelings, and desires. Constructive methods occur later and help with concept development. Generative methods combine participatory exercises with verbal discussions. It is best used for generating design concepts and early prototype iterations.

This method of research will be very useful in this project, especially during the design prototyping phase. Participants can be asked their opinions on the developed product, and iterations can be made based on their feedback.

An example of generative research is the creation and further development of the "Studio School", a new type of state school model for young people in the UK. Based on extensive generative research and best practice, Studio Schools offer a pioneering practical approach to learning.

-http://www.studio-school.org.uk

Desirability Testing

The research method of desirability testing is used when there is a disagreement about which design direction to pursue. "It shifts the conversation from which design is "best" to which design elicits the optimal emotional response from users." Desirability testing helps people articulate how a design makes them feel, especially their first response. It requires a series of index cards with adjectives written on them; after looking at the prototype, participants pick 3-5 cards that best describe how they feel about it. It maintains focus on the end user and helps end team debates about design options.

This method could be effective in the prototyping and iteration phase of this project. A series of cards could be made for participants to describe their feelings about the design of the final deliverable. In this way, the design of the final deliverable will reflect the actual users' needs rather than what the designer believes the users to need.
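The analysis step of a desirability test reduces to counting: how often each adjective was picked, and how the picks split between positive and negative cards. A minimal sketch of that tally is below; the adjective deck, sentiment labels, and participant picks are all hypothetical placeholders, not data from any actual study.

```python
from collections import Counter

# Hypothetical reaction-card deck: adjective -> sentiment label
DECK = {
    "friendly": "positive", "trustworthy": "positive",
    "approachable": "positive", "clean": "positive",
    "gimmicky": "negative", "impersonal": "negative",
}

# Each participant picks 3-5 cards that best describe the design (made-up picks)
picks = [
    ["friendly", "trustworthy", "clean"],
    ["approachable", "friendly", "impersonal"],
    ["gimmicky", "impersonal", "friendly", "clean"],
]

# Frequency of each adjective across all participants
freq = Counter(card for p in picks for card in p)

# Totals of positive vs. negative selections
totals = Counter(DECK[card] for p in picks for card in p)

print(freq.most_common(3))  # most frequently chosen adjectives
print(totals)               # positive/negative split
```

The most frequently chosen adjectives, and the overall positive/negative ratio, are exactly the figures the Mad*Pow case study below reports for its two design options.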

One company who frequently utilizes desirability testing is Mad*Pow Media Solutions, a UX business. Here is an account of their experience:

Our Experience

We tried this approach to desirability testing on a recent project to see whether it would help us refine our visual design direction for a public-facing Web site. Once we’d reached the point in our overall design process where we’d finalized the content, messaging, and information hierarchy, we started designing multiple visual concepts for the site.
The goal of the site was to persuade customers to sign up for a discount health plan that could offer them savings on out-of-pocket medical expenses. Our goals for the site’s design and emotional impact were as follows:
  • We wanted to portray a professional and trustworthy image to overcome any objections consumers might have if they weren’t familiar with the brand.
  • We didn’t want a site that would appear gimmicky or overly promotional and discourage customers.
  • We sought to design a site that potential customers would find friendly and genuinely approachable.
  • Given the sensitive nature of healthcare expenditures, we wanted visitors to feel comfortable with the site and let a sense of empathy come through the design.
With these goals in mind, we developed two alternative visual design options. In the first option, shown in Figure 2, we used clean edges and bold colors in an effort to make the site appear conservative and stable. Our assumption was that visitors might find similarities between this site and other well-known brands with which they are familiar. This, in turn, would help them develop a sense of trust in the site. In the second design, shown in Figure 3, we opted for a softer, warmer color palette, with rounded corners and welcoming images to give the site a friendly feel.
Figure 2—Visual design option 1
Figure 3—Visual design option 2
To test which approach would best align with our intended goals, we conducted a desirability test using product reaction cards. Starting with the full Microsoft list of cards, we revised the list to include only the adjectives we felt were important for this brand, after assessing our early user research. We narrowed the final list to 60 adjectives, but kept the 60/40 split between positive and negative terms Benedek and Miner had suggested.
We conducted the study through a survey, dividing participants into three groups. We showed the first group only the first design option, instructing them to select five adjectives from the list that they thought best described the design. We showed the second group only the second design option, giving them the same instructions. Because the designs were static screenshots, participants were not able to interact with either of them. We showed the third group both design options—alternating which design we showed participants first to minimize order bias—and asked which design they preferred. We had hypothesized that data analysis of the results from the third group would be difficult, but our client was keen on our asking the simple preference question, so we decided to do so. Finally, we gave all participants an opportunity to comment on and give their rationale for their adjective choices or preferences. Through our survey, we collected responses from 50 people in each of the three groups.
As we expected, the results from the third group were inconclusive. Participants in that group were evenly divided in their preferences and their rationales for their decisions varied widely. However, tabulating the adjectives the other two groups had selected from the list proved to be very helpful. We identified the adjectives participants selected with the highest frequency and tallied the total numbers of positive and negative adjectives for each design.
Contrary to our assumptions before conducting this research, while participants thought the first option was both understandable and clear, they also described it as sterile, sophisticated, and impersonal. The sense of trustworthiness we had intended did not come through as one of the adjectives for that design. As we had anticipated, participants saw the second option as approachable and friendly, but surprisingly, they also described it as professional and trustworthy. Obviously, all of these adjectives were in line with our intended emotional response. Additionally, the second option received a much higher percentage of positive adjectives than the first option.
Compared to the simple Which design do you like better? question, our survey of product adjectives did a much better job of informing and helping us to achieve consensus on our design decisions. Based on our research findings and a review of participant comments, we developed consensus between designers and business stakeholders, selecting the second design option as the starting point for design refinements. Best of all, when others outside the project team questioned the appropriateness of a design element, because they liked other styles, we were able to provide a research-based rationale that minimized preference disagreements and moved us toward successful completion of the project.
- http://www.uxmatters.com/mt/archives/2010/02/rapid-desirability-testing-a-case-study.php#sthash.CanfJGXQ.dpuf






