February 28, 2017 – by sylver
I’ve been working in innovation for 15 years now. I’ve supported many different organizations (small, medium and large) in defining, developing and executing innovation processes. I’ve learned a few things in that time.
When innovation fails, most people want to point to the product or service itself as the reason for the failure. Yet, too often, this product or service was never set up to succeed simply because the team championing its creation failed to do the appropriate “in between the lines” work around that innovation to sufficiently enable its success.
I don’t want YOU to do that. To that end, here are five reasons why I often see innovation processes fail within organizations.
1. The learning focus is incomplete.
Innovation is, first and foremost, about people. You create innovation to better serve and support people (your current or potential customers) in meaningful and relevant ways. You hope this better service and support will translate to more revenue for your organization.
At this point in time, many organizations get that a customer-centric approach is a HUGE key to their success and, as a result, have practices in place to connect regularly with customers. A variety of tools and disciplines are leveraged to get at this information. Here at Sylver, we use the tools and best practices of market research, design thinking, creative problem solving and user experience to help define the problems to solve, or the “jobs to be done,” by any innovations that come to market for chosen customer targets.
But, this customer-centric learning focus is only one piece of the pie. An innovation’s success or failure is dependent on its execution. Execution is, in turn, dependent on the alignment of internal stakeholders around how to deliver against the values that prove meaningful and relevant to customers.
Too often I find that organizations do not spend sufficient time surfacing the hypotheses, limiting beliefs and hard constraints of internal stakeholders that will inadvertently drive both the willingness and ability to execute on any innovations emerging from any process or program instituted.
Bottom line: for innovation to succeed, you must pay as much attention to surfacing “insight” around your internal stakeholders (the people in charge of execution) as you do around your customers themselves.
2. Stakeholder/Executor buy-in is an afterthought.
Building on the last point, when you fail to involve your execution stakeholders in the journey, you often create seemingly blue-sky solutions (not in a good way!). Yes, those solutions might promise great value back to your customer, but some aspect of those solutions is undoable right now for the organization. They either don’t align to key constraints of the initiative (technology, budget, etc.), or the new project has no prioritized slot in an executor’s calendar. What happens as a result? The solution is shelved for a later date (maybe to see the light of day again, maybe not). Regardless, momentum is stopped short, and if you’ve been in the championing spot for this innovation, you’re likely feeling defeated. That feeling of defeat doesn’t motivate one to stay on the innovation train.
3. There is misalignment between the wishes for and actions around innovation.
I see teams struggle a lot with the spectrum of innovation. Many want to create disruptive or game-changing innovation. Some even have the processes and systems in place to support that. But, too often, where those game-changing innovations get cut off at the knees is in the systems the organization employs to (1) award money to new products and services brought into its portfolio and (2) evaluate which products/services are bringing revenue into the organization vs. not.
Too often, metrics for product innovation are flat. Incremental and game-changing innovations are awarded money and evaluated equally, which essentially puts game-changing innovation at a strong disadvantage. It’s harder to define a market size for a category or product that doesn’t yet exist, for instance. It’s hard to realize established category growth projections when you’re creating a new market or cultivating a new customer target.
Bottom line: if being a game-changing innovator is part of your growth agenda, you need separate processes in place that give those types of innovations their fair shot.
4. There are too many caveats around what makes an innovation a good innovation worth further pursuit.
Most typically, innovation initiatives are evaluated by the immediate expected ROI they will bring to the organization. Too often I’ve seen companies discount initial ideas because they are not immediately expected to return $1M in revenue, for instance. Again, this is particularly problematic for game-changing innovations, which can often benefit from an iterative, test-and-evolve development model—to hone the idea and its value proposition for the market.
5. The company has an urgency to have new innovation in the market … like yesterday.
Urgency (and specifically pressure around time) can be both a blessing and a curse for innovation. It’s a blessing in that it creates momentum to get stuff done, which is great. It’s a curse when the focus becomes more about meeting an in-market deadline than about creating a product that truly resonates—in at least a minimum viable product type of way.
I see the “curse” side of urgency show up most often in the initial exploration or discovery phases of innovation. Essentially, organizations don’t invest the time or money to sufficiently surface meaty “jobs to be done” for their innovations. When this happens, surface-level insight leads to surface-level (and weak) ideas.
The “curse” side of urgency can also poke its head out at the evaluative stage of the innovation process. Closer-in ideas always feel more doable. When unhealthy urgency permeates every innovation project in your company, it can push your innovation agenda toward incrementalism vs. a balance of incremental and game-changing solutions (what I often perceive to be the ideal state).
The reality is that innovation is not a one-size-fits-all process. Yes, templates of best practice can be (and are) used as reference points. But culture plays a HUGE role in how an organization should bring innovation to bear within its walls.
These five “in between the lines” innovation failures are a great way to pressure test your processes as you move forward with your own unique approach to innovation. Should you need support in defining, developing or executing on that innovation process, reach out at firstname.lastname@example.org. We always welcome conversation in this domain.
Tags: Innovation, Innovation process, Innovation failure, Innovation success
September 22, 2016 – by Jean McDonnell
How to Win with Millennials?
Stop Generalizing Them and Quit Asking How “I” Can Win
I’m what you might call an “ambivalent” Millennials expert. I’ve spent an entire career studying and following this particular generational cohort from their early childhood years up through their transition into adolescence, young adulthood, and more recently into parenthood. I’ve researched Millennials at different stages of life across a broad range of topics, categories and industries. From my unique vantage point, I’ve developed a deeply rooted understanding of what makes Millennials “special” (in identity, outlook, and behaviors) compared to other generations before them.
So why do I cringe when I call myself – or am called – a Millennials expert? It’s for the same reason that Millennials themselves are so tired of and repelled by this moniker and by the broad generalizations that go with it.
It’s a fact that the Millennial generation is a formidable force to be reckoned with. They have the population size, the education, and the access to technology that enable them to alter the course of business and commerce (as well as society in general), like no other generation before them. Indeed, now that Millennials have come of age en masse, we’re feeling the impact of their influence more acutely across a wider range of industries and categories than ever before.
So it’s no surprise that many of the requests we get from clients today have a Millennials agenda or focus attached to them. Typically the client need is expressed as, “We need to better understand and connect with Millennials,” or, “We need to find a way to be more relevant to today’s Millennials.” The implicit goal behind this request is, frankly, a self-serving, business-focused one. In other words: “How can we (the business) benefit (win) by gaining a better understanding of this consumer cohort group?”
After all, this is the reason that businesses exist – to make a profit, right?! While true, I would argue that this is the wrong question to ask because, behind it, is a way of thinking (a mental model) that actually prohibits an organization’s ability to “win” with Millennials. The real question client companies need to be asking themselves is this: “How do we make things (products, services, offerings) that are authentically interesting and relevant to Millennials?” Authenticity and “for-me” relevancy are core values of Millennials – as is the need to be recognized as an individual with unique needs, wants and interests that cannot be generalized to an entire generation. Millennials can sniff out false motives with ease and can be pretty unforgiving of marketers who abuse their sense of “fair play”.
To win with Millennials, therefore, requires a shift in mindset from a “what’s in it for me” perspective to a “how can we better serve the needs of this consumer?” mindset. It’s this inverse way of thinking about consumers (users) that is the special domain of the User Experience (UX) discipline. UX was born out of the need to improve the product experience of the user/customer by focusing very specifically on the needs of the individual at the granular (in situ) and personal level.
The diagram below serves to illustrate the inverse approaches that market research (AKA consumer insights) takes to achieve a “win” with Millennial consumers vs. consumer/user experience-focused functions showing up in departments with names such as user-centered design, innovation, user experience, customer experience, design thinking, etc.
Despite their complementary nature, in most organizations today, these two functions operate independently of each other, rarely converging and bringing together the unique perspective territories that each one “owns”. Moreover, as indicated by the arrows in this diagram, neither function tends to go far enough into the realm of the other.
- Market research doesn’t go far enough in terms of understanding the consumer at a deeply personal, situational, and individual level (e.g. beyond group-based segmentation analysis). In the case of Millennials, a typical MR approach tends to get stuck in viewing this particular cohort as a generalizable “target” (just think about that word for a minute) to be studied and dissected for the purpose of finding areas of opportunity where the business can insert itself to its gain. This approach is heavily focused on finding the “win” for the business and only secondarily focused on the offering itself (product/service, etc.) and its value to consumers. By its nature, the MR approach encourages a rather impersonal, detached view of consumers (staying at the high-level view and not delving deep enough to see them as “real” people).
- UX professionals, on the other hand, typically don’t go far enough into understanding the business context that is driving the research and/or the specific action criteria that need to come out of it. In other words, the UX approach is flawed for being too singularly focused on the personal, individual needs of the consumer/user without giving enough consideration to the needs of the business and its core problem to solve.
With these “gaps” comes the opportunity to merge and integrate practices for a more holistic understanding. So, how do you do that? That’s where an “up leveled” practice of “personas” comes in.
Today, the UX answer to authentically understanding and connecting to Millennials is to develop “personas” of key sub-segments of the larger group. The expressed purpose of personas among UX professionals is to dimensionalize and “humanize” each particular sub-segment in such a way that it will create empathy for this type of consumer and will inspire creative solutions for addressing their needs.
The “persona” way of profiling and thinking about consumers is definitely a step in the right direction. Corporate marketers and insight professionals can benefit tremendously from the use of personas, as they provide a perspective-changing view of what connection to a “real life” consumer really means. That said, personas are only as good as the purposeful outcomes and impact they can have on the business. Due to the business context-light mindset of most UX professionals, the majority of personas being done today do not go far enough in the other direction – beyond empathy – toward translating the personal and specific characteristics of each persona into the business outcomes and action steps needed to deliver impact on both the business and the customer.
In sum, I’m ambivalent about calling myself a “Millennial expert” because the title itself tends to unwittingly reinforce a top-down, overly generalized and business-serving perspective of this cohort vs. a bottom-up, individualized and consumer-minded perspective. The reality is that no one is or can be an “expert” on a whole generation of people. What is true is that my subject-matter expertise provides me with a solid foundation for understanding the critical factors of influence affecting this generation and, in turn, for knowing where and how to dig deep into specific topic issues to identify nuances, patterns and insights that others without this foundational understanding may likely miss.
Going even further, my real expertise is in understanding how to design and lead Millennial initiatives that yield a broad and deep understanding of Millennial consumers and where both stakeholder perspectives (the business and the individual consumer) are taken into account and ultimately integrated for maximum gain by everyone involved.
Finding that “sweet spot” between perspectives is our mission here at Sylver Consulting. Sylver operates at the nexus of Market Research (MR), User Experience (UX), and Strategy, giving us the knowledge and the skills needed to bring together these complementary disciplines in a powerful way.
If you want to know more about how to “win” in an authentic and relevant way with Millennials, reach out to us at email@example.com. Let’s set up a call!
August 11, 2016 – by Rob Maihofer
As Sylver’s Senior Quantitative Market Research Analyst, I find myself often in conversations with clients about communities. The trend I see is that clients today are looking at customer communities as a means to increase the speed of insight delivery and to reduce project-fielding costs. What I also see is a lot of confusion around what a “community” is exactly.
In market research, the term “community” is often frustrating, as it is used generically to describe two materially different forms of longer-term customer engagement (i.e., communities lasting from a minimum of 3 months to 1+ years). In this longer-term community category, there are online panels, and then there are online research communities (ORCs).
With this article, it’s my intention to illustrate — through five different dimensions — how online panels and ORCs differ from one another. While both commonly share the goal of establishing an at-your-ready type of engagement with your current and/or potential customers, these important differences should be kept in mind when considering which type of community will best support your market research “community” insight goals.
Dimension #1: Your Market Research Goals — General industry guidelines recommend using online panel communities if you want to have a set of pre-recruited individuals at-your-ready for ad hoc quant or qual projects. An online panel in this case provides a “community” of individuals for you to access as needed. However, it does not support the building of a “community” for those recruited into it. Many industries with “hard to reach” sample targets set up panels for this use, as having an online panel community can reduce recruit times for qual to a few days vs. 2–4 weeks (depending on the segment), and can make the difference between a quant project with a “hard to reach” sample being feasible vs. not.
Think of an online panel as being similar to attending a major league baseball game. You as a ‘member’ participate through cheers, boos, and other forms of group ‘feedback,’ but unless you are a player or coach, no one is going to ask you personally how your team should get the next homer!
ORCs are recommended if you want ongoing, evolving, intimate, trust-based interaction with your participants. ORCs are really committed to creating a sense of community for all involved — the participants, moderator and client. The interaction model associated with ORCs fosters relationships and assumes that frequent and ongoing communication will occur within the community. For example, many pharmaceutical companies use online communities to learn about the journey of individuals living with specific medical conditions. The ORC format in this case allows for the capture of information daily and for a relationship to be developed between the participant, the moderator and other members of that community. It’s the building of this relationship over time that deepens the insight capabilities of this format of “community.”
So, if we think of an online panel community as a baseball game, we can think of an ORC as a small community book club. While baseball fans may respond en masse with boos and cheers, the book club moderator might shoot you a ‘look’ if that is how you answer a question on a character’s motivation. He/she wants you to go deeper in the expression of your emotion. In fact, each club member is encouraged to reveal how they interpret events and motivations through their own experiences and beliefs (and, of course, a good beverage never hurts!). The book club is a smaller, more intimate setting with ongoing, monthly interaction.
Dimension #2: Size — Online panels tend to have tens of thousands, if not millions, of members, whereas ORCs tend to have fewer participant members (25-500 people in total).
Dimension #3: Member participation — Online panel managers intentionally keep participation levels at only a few surveys a week or 1–2 qual projects a quarter in order to prevent participant fatigue. ORC members, on the other hand, are often expected to contribute to discussion groups and activities multiple times during a week. With fewer members, it is considerably easier for an ORC community manager to monitor contributions, log-in activity, and response quality.
In fact, the ongoing communication may actually encourage and motivate MORE participation as people become emotionally involved in the topic and become friends with others in the community.
Dimension #4: Compensation — Compensation is too often an afterthought when setting up “communities” — online panels or ORCs. Yet, incentives are instrumental in sustaining the engagement of your community participants and thus ensuring the long-standing health of your community.
Online panels tend to pay incentives to members of the community as they complete ad hoc projects. ORC members tend to get paid a monthly incentive, aligned to their participation level in that community for the month.
Incentive “payments” can take many forms — from points later redeemable for experiences/goods to gift certificates to an actual check.
Dimension #5: Moderation — For online panels, there is little, if any, ongoing intimate and direct interaction between the administrator and panel members (outside of survey invitations and customer service). Some panels do require a minimal amount of monthly/quarterly interaction, so they can get a read on the “health” of their community at that moment in time. However, this level of engagement is rarely personalized. It is really just meant to gauge whether each panel member has continued interest in being part of that particular online panel community.
ORCs, on the other hand, not only encourage interaction but also depend on it in order to be successful. Thus, from a moderation standpoint, the community manager needs to be available every day to post assignments, review completed assignments, ask probing questions, and generally support the community with any customer service or more tactical questions.
So, there you have it — the essentials of what differentiates an online panel from an ORC. Neither method of building a “community” is more “right” than the other. However, deciding which type of community to create does require a fair amount of consideration and conversation, should you choose to invest in building a community as a means to speed your insights delivery timeline and reduce your project-fielding costs.
Here at Sylver, we have experience with both types of these communities. Should you need support determining which community approach is best for you, reach out to set up a conversation: firstname.lastname@example.org
Tags: Online communities, MROCs, Market Research communities, Online panels, ORCs, Online research communities, Long-term communities, Long-term Market Research communities
June 30, 2016 – by Matthew
Image credit: https://www.amazon.co.uk/Sea-Monkeys-Ocean-Zoo/dp/B00005YWOB
Sylver Consulting has been moderating online communities for seven years now. In that time, we’ve learned a thing or two about what an online community is vs. what it is not. We’ve also gained some practical tips on what it takes to effectively moderate an online community in order to yield rich insights into the members and/or topic of that community.
First things first … you need to appreciate your participants as people, not view them as passive Sea-Monkeys. Successful moderation, in my opinion, is more than asking questions, assigning exercises, or managing people. It is about creating engagement. Specifically, it’s fostering sustained engagement over time that facilitates the rich two-way learning experience possible within an online community. How to foster sustained engagement within a community is what I’d like to cover in this post.
In my opinion, online community moderators must connect with participants on two levels: as a group and one-on-one. When there is a shared sense of experience and learning occurring, participants feel more connected to the moderator. In these cases, participants tend not only to stay engaged for the full duration of the project (usually a few days to months at a time), but they also give more from an insights perspective during their moments of study participation.
Strong connections between moderators and participants breed trust and empathy. Trust and empathy foster more in-depth conversation around the topic at hand, and hence result in deeper insights for Sylver’s clients. When trust and empathy are at the foundation of the participant-moderator relationship, participants demonstrate a proactive effort to sustain their participation in your study. It’s really that simple!
So, how do I develop trust and empathy with participants? I start the online community with a light-hearted introduction exercise. This exercise is designed so that a slice of a participant’s individuality is revealed: a passionate Broncos fan, a dad that never misses his young daughter’s soccer matches, the woman who loves eating sunflower seeds with her feet on the dashboard during road trips, etc. These seemingly small details matter, as empathy and trust begin here.
This introduction exercise is not about being chatty with participants. Rather, this seemingly inconsequential introduction question is the first step to becoming relatable as a moderator to my participants. I am signaling to them, via this question, that I am genuinely interested in learning about them as individuals. The two-way part of the conversation gets sparked when I share a bit about myself and react—with genuine interest—to what they have shared about themselves. (As a Patriots fan, some years are better than others to speak with Broncos fans.)
Introduction questions change, but they are never the topic of the study. Most of our online communities begin on a Monday. Sometimes, as a first question, I will share an anecdote from my weekend and ask participants to describe something they did over the weekend that excited them. This is a simple introduction exercise that establishes a personal connection between the participant and me, and is an opportunity for participants to familiarize themselves with the basics of a particular online platform.
Throughout the study, I seek to be open and curious, and to consciously ask probing questions. Moderating online qualitative research appeals to me on a deeply personal level because it is an opportunity to meet and engage with interesting people that I may not encounter in my everyday personal life. This is especially true when I interact with people that live lives very different from my own. My moderation approach is driven by empathy for individual participants and a desire to broaden my own perspective on the topic overall. This approach is especially helpful with participants that may hold views that are seen, in some cases, as “extreme” or “radical.”
For example, we recently completed a study on the US education system. A participant in that study expressed very strong opinions on standardized tests and the government. To many, her thoughts would have been polarizing. Some less experienced online moderators might have discounted her view as “radical,” and thus less worthy of consideration from an insights perspective. I, on the other hand, was curious to understand the root of her views, so I asked her to help me understand her perspective. I took the time to really listen and hear her explanation. She revealed a wealth of information in that explanation that helped to both ground her responses and provide context for her beliefs. Not only did I, as a moderator, learn more about this particular individual and that category of thought around the US education system, but this knowledge and context also supported me in asking more focused probing questions of the other participants in the study. In essence, this participant helped me to get more insight from the whole study because I was willing to be open and curious.
While the participant with the “radical views” acknowledged openly that we “probably have different opinions” on the topic, she appreciated my genuine and non-judgmental openness to her views. Because of my openness and curiosity, we were two very different people that quickly became relatable. She was also one of the most engaged participants throughout the study because of the connection facilitated through this open and curious dialogue!
In conclusion, selecting a moderator for your online community who has a strong sense of empathy and respect for participants is essential. More engaged participants generate more robust results.
So, the next time that you are inquiring about a project that involves online moderation, ask to speak to one of us. If I am the moderator, I will be happy to discuss past experiences with online communities and my approach to your community and project. Until then, go Pats!
Tags: Online communities, MROCs, Online moderation, Qualitative moderation, Moderation, Online qual, Online qualitative, Participant engagement, Market research, Participant experience
June 21, 2016 – by Brianna
Friends — It is with much excitement that I get to share a project, The UX Careers Handbook, that I have been involved in for the past year.
Back in early 2015, Cory Lebson—the author of The UX Careers Handbook—approached me about writing a chapter for a comprehensive UX career resource that he was compiling. It was Cory’s vision to create an anthology that goes in-depth to explain what it takes to get into and succeed in a UX career, be it as a designer, information architect, strategist, user researcher, or in a variety of other UX career specialties.
Only one word came to mind when this request came in: “Yes!” The User Experience (UX) industry in general is misunderstood, as its shape and form look different within each organization (based on what its offerings, culture, and supporting resources look like). Cory’s book seeks to dimensionalize UX as a profession and the different skill sets that UX teams need to invest in to properly show up and perform as UX professionals.
The UX Careers Handbook is now officially available for purchase — Yay! My personal contribution to this handbook, “UX For Market Researchers,” can be found on page 194. In it, I explore how the skill sets of market researchers can add value to the UX domain.
I feel honored to have shared this journey with so many talented individuals, including contributors such as Tracey Lovejoy, John Payne, Jessica Peterson, Kevin Lee and so many others. This handbook is an excellent resource for:
- Employers and recruiters who want to better understand how to hire and retain UX staff.
- Undergraduate and graduate students who are thinking about their future careers.
- Individuals in other related (or even unrelated) professions who are thinking of starting to do UX work.
You can grab a copy of The UX Careers Handbook on Amazon.
Get updates on the book’s Facebook page.
And while you’re there, feel free to follow Sylver Consulting on Facebook!
Tags: User Experience, UX, Cory Lebson, Market Research, UX Education, Design, Career Readiness, Product Development, Product Experience, Job Seekers, Information Architect, Branding, Networking, and User Research
May 16, 2016 – by Brianna
As many of you know, Sylver Consulting is uniquely positioned in the marketplace at the intersection of Market Research, User Experience and Strategy. I’ve always known this positioning to be valuable to Sylver’s clients. Yet, up until a year ago, I was continually dissatisfied with every articulation I had attempted of the specific added value each of these disciplines brings to my work. In the moment, as the words came out of my mouth, a description of how market research was alike and different from user experience research might sound “smart.” Yet, playbacks of that moment would always produce a litany of caveats in my head: “Not true because of X, Y, Z. What about in this situational context? Does it hold true there?” Bottom line, I was often left wondering … are there really differences between market researchers and user experience researchers, or are we just making this stuff up to substantiate two different insight disciplines and make ourselves feel better in the process?
Short answer—no. These insights disciplines are indeed different, but not in the ways that the industry so wants us to believe. The key defining differences are not hinged on methodologies. Rather, it’s the philosophies behind the work and work process that divide these two insight disciplines of Market Research and User Experience.
Yet, because these insight roles look eerily similar from the outside (especially as User Experience matures in Corporate America), what begins to emerge are territorial wars fraught with faulty assumptions and uninformed biases. Heels get dug in, and pontifications of “I’m better than you” start to surface. It gets fairly ugly rather quickly.
I advocate, “Why do we have to go there? Can’t we all get along?” I think there are lots of reasons for each insights discipline to share and collaborate with one another and, by way of that, produce some incredibly powerful, insightful and strategic shaping work.
So, what does it take to break down the walls between the insights disciplines? And why would you want to make it your mission to do that? Diffusing the territorial wars and creating meaningful collaborations between these disciplines is precisely what I’ve been so fired up about this past year. I’ve now spoken on the topic of integrating Market Research and User Experience in four different venues (to rave reviews each time!), have contributed a chapter to a UX Careers Handbook (coming out soon!) and have recently published a full feature article entitled “Why Now—More than Ever—Market Researchers Should Consider a Transition into UX” in the Spring 2016 QRCA (Qualitative Research Consultants Association) Views magazine (gifted to you in PDF form here).
I feel strongly that there is more to gain by these two insights disciplines working together vs. separately. I have my own thoughts on how to make that happen (and, in fact, will soon be leading a UX/MR cross-disciplinary training for a Fortune 500 company to do just that). I'd also love to hear your thoughts and ideas on this subject.
Read the article, then please share your comments and thoughts below. What actions might you personally take to diffuse the war between Market Researchers and User Experience Researchers?
March 22, 2016 – by Jean McDonnell
In preparation for this month’s edition of Sylver’s newsletter, I’ve been doing a lot of thinking about the nature and evolution of what is commonly referred to in the insights industry as “hybrid” research. As someone who has been trained in both qualitative and quantitative methods and has used each approach individually and in combination for many years, this topic is especially intriguing to me.
For as far back in my career as I can remember, I’ve witnessed the territorial divide between qualitative and quantitative methods – always wishing and hoping that the two sides could “play nice” together and appreciate the unique and complementary perspective that the other provides. And so it was with great enthusiasm that I embraced the hybrid research “movement” that began to emerge, in earnest, roughly 8–10 years ago.
At that time, the concept of a hybrid approach to research was getting a lot of "buzz." It was perceived to be a cutting-edge, best-practice solution for overcoming some critical "pain point" problems plaguing the industry – problems I discussed at length in my article "Better, Faster, Cheaper" in Jan. 2016. Thanks to the evolution of online research and improvements in web 2.0 technology, the practice of combining and mixing different methods into one initiative has steadily gotten easier and more efficient over the years, thus ensuring the continued use and popularity of hybrid research approaches. Looking forward, there's no question that the demand for hybrid-based research will continue to grow and evolve. Indeed, most people in the industry expect (as I do) that the mixing and combining of methods and techniques will become such standard everyday practice that the term itself will eventually become meaningless.
In fact, that's exactly what has happened here at Sylver. Hybrid thinking (like design thinking) is in our DNA and inextricably linked to our method-agnostic, problem-solving approach. We no longer consider the act of doing "hybrid research" as differentiating. Rather, in most cases, it's just what you do to ensure due diligence in your work.
What I find interesting is how universally the term “hybrid research” is used within the Insights Industry today. Yet, surprisingly, the meaning of that phrase — and thus the execution of “hybrid research” — is anything but universal. Let me shed some light on this …
What is hybrid research?
Most people think of “hybrid research” as a catch-all phrase for any “mixed-method” approach that a) combines multiple or mixed methodologies in general and b) specifically involves the mixing of qualitative and quantitative methods into one project. What I’ve come to understand over the years is that this definition and way of thinking about hybrid research is only part of the story.
According to Greenbook, the leading publication and resource for the market research industry, hybrid research is defined as, “The combination of any two or more techniques applied concurrently within the same study, but also applies to new techniques that blend aspects of qualitative and quantitative approaches into a single new technique.”
Notice that there are actually two separate and distinct definitions embedded within the classification of the term “hybrid research.” One is about combining or mixing multiple methods/techniques into one concurrent project (the definition understood by most). The other is about blending together certain aspects of qual. and quant. to create an entirely new technique (the not-yet well recognized or understood application of hybrid research).
Here at Sylver, we operate at the evolutionary stage where the first of these two hybrid applications is standard practice to us. That is, the mixing together of methods is so common and so embedded in our culture and problem-solving process that we rarely distinguish it or think to call it out as a “hybrid” practice. When the situation calls for it – as it often does in the exploratory and opportunity-definition space we play in – we simply select and mix together, from our wide toolbox, the methods and techniques that are most appropriate and most efficient to solving the problem at hand.
Sylver ups the ante on “hybrid research” with its creation of proprietary “blended” methods
It's with the second of these two hybrid applications where, I believe, Sylver really shines and offers the most unique and differentiating value to clients. Specifically, Sylver has developed four proprietary "blended" methods over the years, each one in response to a specific insight need of a client: an insight need that no other known method could optimally address alone or through the use of a mixed-method approach at that time. Each method developed by the team has pushed the boundaries of existing methodologies. We've combined the best-fit elements of qualitative and quantitative techniques into each, thus creating "blended" methods that have proven to yield more complete, focused and richer insight than alternate options available. We've also found that the outputs generated have supported quicker internal decision-making, as the results coming back are more holistic and integrated in nature vs. still a bit piecemeal, as is often the case with other forms of hybrid research.
As someone with a lot of strategic branding and positioning research under my belt, the proprietary blended methodology I am personally most excited about is SymbolicsTM. SymbolicsTM was developed after a number of Fortune 500 clients had come to Sylver seeking greater support around the strategic positioning of existing brands and/or of new products/offerings being brought to market. Existing "qual. only" and other so-called "hybrid" methods were just not cutting it. What these clients needed was a way to identify the deep, hard-to-articulate emotional and functional connections that consumers have to a brand or subject (traditionally a qual. task), and they needed to do it in a way that was reliable, measurable and projectable to a larger audience or segment (a quant. task). SymbolicsTM was created to address that specific "ask."
Unlike other metaphorical and collage-based tools on the market, SymbolicsTM is a true blended qual./quant. hybrid methodology. On the qual. side of the equation, it works by tapping into the emotional and functional unconscious associations that customers have towards a brand or subject matter. It does this by leveraging a curated set of stimuli elements comprised of words and images. On the quant. side, it works by utilizing a robust, Bayesian-based quantitative algorithm that analyzes the associative relationships between the stimuli elements used – and from that – creates a distinctive composite “map” for decoding and parsing out the deeply rich and nuanced meanings of the subject matter. The final result is a deep and reliable positioning and messaging framework that clients have a great deal of confidence in, and which springboards the decision-making process for brand positioning and activation strategy.
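SymbolicsTM itself is proprietary, so its actual algorithm can't be shown here. But the general idea of quantifying associative relationships between stimuli can be illustrated with a toy co-occurrence analysis. Everything below — the respondent data, the stimuli words, the use of "lift" as the association score — is a hypothetical sketch for intuition, not Sylver's method:

```python
from collections import Counter
from itertools import combinations

# Hypothetical respondent data: each person selects the stimuli
# (words/images) they associate with a brand.
responses = [
    {"trusted", "modern", "warm"},
    {"trusted", "warm"},
    {"modern", "bold"},
    {"trusted", "modern", "bold"},
]

n = len(responses)
counts = Counter()   # how often each stimulus is chosen overall
pairs = Counter()    # how often two stimuli are chosen together

for r in responses:
    counts.update(r)
    pairs.update(combinations(sorted(r), 2))

# "Lift" > 1 means two stimuli co-occur more often than chance predicts,
# hinting at an associative link worth mapping.
lift = {
    (a, b): (pairs[(a, b)] / n) / ((counts[a] / n) * (counts[b] / n))
    for (a, b) in pairs
}

for (a, b), score in sorted(lift.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: lift = {score:.2f}")
```

In this toy data, "trusted" and "warm" co-occur more than chance (lift above 1), while "modern" and "warm" co-occur less, which is the kind of relational signal a composite association "map" is built from.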
In past brand strategy and positioning work, I've relied heavily (like most of my colleagues have) on a mixed-method qual./quant. approach. Specifically, the project inquiry typically included a qualitative element for unearthing and interpreting the emotional, unconscious views of consumers on a given subject, and a quantitative survey element for adding the hard numbers and metrics to those soft qualitative insights. The linkage between the two qual. and quant. explorations was never perfect or elegant – but it was all I (and the industry) knew at the time and so it was sufficient. I had never before experienced (or even imagined) a method that allowed me to see the "whole connected picture" all at once from a uniquely integrated perspective. After working on my first SymbolicsTM project at Sylver, I had one of those "you've got to experience it to fully appreciate it" moments of clarity. It changed my way of thinking about hybrid methodologies.
My prior understanding of "hybrid research" was about appreciating the different complementary or composite pieces of the insights puzzle that qualitative and quantitative data could provide. Now, I see how a blended qual./quant. approach produces an integrated and holistic picture of the data and, in the case of SymbolicsTM, an integrated and holistic picture of how your consumer is experiencing your brand or product. The integrative analysis of these two qual. and quant. data sets actually up-levels and transforms the meaning that could be derived by either data set individually.
Our clients’ experience of engaging in a SymbolicsTM project with us is typically just as profound as my own personal experience with this method. Here’s what one of our clients recently had to say following a SymbolicsTM engagement with Sylver:
“I cannot tell you how much value the entire team got not only out of the output, but of the process, and how integral the work has become to our positioning decks. You did a phenomenal job of driving the process, deepening our thinking, and clearly communicating the output. Thank you so much.”
I love talking about methodologies, best practices in research and the direction the insights industry is headed to best support global business trends. If you do too and/or if anything here resonates with you and you’d like to learn more about SymbolicsTM (or any of the other “blended” hybrid methods we offer) please reach out to me via: email@example.com or 312-239-0346.
March 22, 2016 – by Matthew
Sylver Consulting organizes a unique suite of qualitative and/or quantitative methodologies for each project, always allowing the problem or project at hand to determine our choice of methods used. When a problem cannot be solved with available methods, we design new tools — a very exciting opportunity for all of us on the team!
Of all method types, I find that hybrid methods (those generating qualitative and quantitative data sets simultaneously) are the most fun for me to develop, as they require a highly collaborative process between team members.
I perceive the act of designing new methods from scratch as freedom! When this occurs, I — as well as others on the team — am given unconstrained space in which to think about a given problem. I’m challenged to think of that problem from as many perspectives as possible. “What if…” scenarios and pontifications are desired ten-fold. This exercise excites me to the core!
But it is important to note that method development doesn't just happen at Sylver Consulting. Rather, it's purposeful. To ramp up a method's development at Sylver, we have to be truly confident and articulate about why other available methods on the market do not sufficiently address the needs of the problem/project at hand. To do that, we create a laundry list of each existing method's limitations. From that, we identify a new path forward and define what a new method needs to do to best address our needs on the problem/project at hand. We also identify who among the team is best to lead on that particular method's development process — at least for its first iteration.
In fact, it’s the spirit of “iteration,” embedded into Sylver’s new method development process that truly makes my heart sing! Temporal demands of the problem/project at hand force us out of the shadows of perfectionism and into a proactive spirit of prototyping and reflection. I relish these iterative cycles of development, as each exposure of the method to team members with different points of view, experience, and modes of thinking pushes the development of the method into new, constructive territories of discovery. I love seeing what “territories of discovery” emerge throughout that process.
I also find the iterative cycles of our method development to be playful, fun and open. In a freeform style, group members develop a rhythm and flow by sharing their “What if…” ideas — bad or good. Sometimes things resonate, other times they are dismissed; at the very least, they lead to the next thought. Most important is that play is perceived to be key in stoking the team’s creativity.
Once a new method transitions out of its initial “building new” phase and into its “evolution” stage, a new type of excitement begins. At this stage, tweaks and amendments are required to make each subsequent instantiation of the method relevant to a new problem. I find that the ongoing evolution and development of a method furthers the boundaries of the original project team’s method design to include perspectives from all future project teams (and the problems they are trying to solve for). As a result, the method is rendered more robust because of these viewpoints. I enjoy coming back to a method following its transformation by a different project team or two because it allows me to see versions of the method that I did not or could not have imagined prior.
At this moment in time, I'm most personally charged by SymbolicsTM. We recently completed a project that pushed the previous boundaries of SymbolicsTM and revealed new models of its use. This has once again kicked up a cycle of reflection and iteration at Sylver as we prep for future SymbolicsTM projects coming down the pike. Needless to say, I'm excited, as is the rest of the team. And, because of our enthusiasm and excitement for iteration and consequent evolution, our clients can always feel confident that they are getting the best and most current thinking of the Sylver team at that moment.
Interested in hearing more about Sylver's unique and proprietary methods? We'd love to share more!
Reach out to set up a conversation: firstname.lastname@example.org or 312-239-0346
March 22, 2016 – by Adriano
A lot of people talk about the need for establishing more connections between design and other disciplines, such as engineering and business administration. As a result, many schools have put together graduate programs to think about the way in which design can combine the best of creativity, imagination, and alternative approaches to come up with novel ways to make things. Such programs are well received and can be found today at universities such as the Illinois Institute of Technology (IIT), Stanford, Northwestern, and Harvard, to name a few.
When I was doing my doctorate degree, in the early 2000s at the IIT Institute of Design, I was particularly interested in understanding the intersection of design and engineering and how adopting a multidisciplinary mindset might improve a project development process involving each of these core functions. Based on the work I was doing at Motorola and the academic literature I was reviewing at the time, it was clear to me that design and engineering professionals were working on their own islands, with engineers brought in for one piece of the project and designers for another. Only a few companies, like Apple and Google, were leading a visible effort to leverage the duo of design and engineering to produce meaningful results. Thus, it felt like the right time to ask, "Is there a better way to bring both disciplines of engineering and design together? Can collaboration between these disciplines be better facilitated?"
The answer to these questions resides in the Function-Task InteractionTM method (FTI), initially conceived while I was completing my Ph.D. at the IIT Institute of Design. FTI is a three-step approach that combines task analysis and functional modeling to establish common ground and language between designers and engineers. Task analysis, often performed by human factors and ergonomics professionals, is the process of learning about users by observing them in action to understand in detail how they perform their tasks and achieve their intended goals. Its main job is to break down complex (even simple) behavioral sequences into steps so that the meaning and relation between tasks is made evident. Functional modeling, on the other hand, is typically performed by product engineers during the conceptual design phase and its main job is to provide the graphical tools necessary to develop a complete model of a product. When functional modeling is performed, engineers have at their disposal terminology to describe and experiment with technical functions before any money is spent on building prototypes.
The idea of bringing together the step-by-step goal-orientation of task analysis with the experimental building capabilities of functional modeling was well received by design practitioners who tested this approach. But I really knew I was onto something when my academic paper on this approach received the XEROX-ASME Best Paper Award at the Design Theory and Methodology Conference, which is organized by the American Society of Mechanical Engineers. I attended that conference with the intention of sharing my early research findings and ended up the grateful recipient of this prestigious award. My Ph.D. advisor, Keiichi Sato, was proud of our work and I was thankful that I had the opportunity to fully enjoy that moment.
Since that time, my research and writings on this approach have been published in a book. The book, "Design Relationships: Integrating User Information into Product Development," includes three case studies of FTI in use. Also specified in the book is a computer-based tool to link technical functions and users' tasks. Tom MacTavish, who was Vice President of Human Interaction Research at Motorola Labs at the time, said this about the book, "Dr. Galvao has produced a well grounded methodology for making and managing design decisions."
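The book's computer-based tool is not public, but the core idea behind FTI — linking users' tasks (from task analysis) to the technical functions (from functional modeling) that support them — can be sketched in a few lines. The data structures, example tasks, and the `coverage_gaps` helper below are hypothetical illustrations, not the actual FTI tool:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A user task uncovered through task analysis."""
    name: str
    goal: str

@dataclass
class Function:
    """A technical function from an engineer's functional model."""
    name: str
    tasks: list = field(default_factory=list)  # user tasks this function supports

def coverage_gaps(tasks, functions):
    """Return user tasks that no technical function currently supports."""
    supported = {t.name for f in functions for t in f.tasks}
    return [t for t in tasks if t.name not in supported]

# Toy example: a coffee maker
brew = Task("brew coffee", "get a hot drink quickly")
clean = Task("clean machine", "keep the device hygienic")
heat = Function("heat water", tasks=[brew])

# "clean machine" has no supporting function, so it surfaces as a
# shared, concrete talking point between designers and engineers.
print([t.name for t in coverage_gaps([brew, clean], [heat])])
```

Even a mapping this simple gives the two disciplines a common artifact to argue over, which is the spirit of the common-ground goal FTI describes.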
For me, the most exciting part of designing a unique method is to see it being used in the real world to solve real problems. Clients of Sylver Consulting have received great benefit from using the Function-Task Interaction™ method. Specifically, FTI is a method to consider if you need to:
-Assess how new functions or features of a product may impact, and potentially alter, the mental model for how a user may interact with your product.
-Understand how and where to gain task efficiencies for users. (We’ve found this one particularly relevant in B2B contexts where significant cost savings are being sought by the streamlining of product and process workflows).
-Improve the design of a device or product and identify what other non-efficiency benefits (i.e. health, safety, aesthetics, etc.) consumers associate with each improved design function.
-Increase the levels of collaboration between your staff’s designers and engineers.
Curious to explore if FTI is the right fit for you and your project? Email now to schedule a "Clarity Call." In this call we assess if your current need is a best-fit match for Sylver's FTI method.
Contact email@example.com or 312-239-0346 to schedule your consultation.
January 27, 2016 – by Matthew
Sylver Consulting “the business” has existed at the nexus of Market Research (MR), User Experience (UX) and Strategy (STR) for over ten years. Yet, individual team members do not exist at that convergence. How is this possible?
Brianna Sylver, our founder, has purposefully developed a dynamic team of researchers and designers that exist across MR, UX, STR and other related disciplines. Because most client projects exist at an unfixed and inconsistent point among these disciplines, we are able to leverage the unique combinations of our team and our collective skills to meet our clients’ many needs.
The Sylver Consulting staff is composed of professionals from varied disciplines (i.e., market research, sociology, business, design, UX, innovation, film, theater, political science). These different perspectives offer the opportunity for each of us to grow through daily and project-based interactions with one another. Additionally, our project teams are ever-changing, since every project requires a different collection of skill sets to yield the successful outcomes we desire for that project. This intentional integration of varied disciplines within the staff and across each project team forces each team member to think about a problem from another member's experience and view, which naturally changes each of us for the better as a result and renders the team more adaptive and flexible over time.
Now, there is often a concern that the constant shake-up of project teams might result in some inefficient work practices. This is where the use and development of a shared vocabulary comes in. The shared vocabulary used at Sylver enables clear communication across all team members, regardless of discipline. Without it, intent and vision can be confused or lost, leading to friction and reduced productivity.
Bottom line, no matter their skill, individual team members at Sylver are not able to produce the same robust solutions and insights as the collective project team.
Hence, it is Sylver's goal to continue to cultivate a team rooted in difference rather than forge a homogeneous "MRUXSTR" staff. An internal shared vocabulary is a key to our success, allowing us to continually integrate, sustain, and leverage diversity amongst our team so that we can ultimately better serve our clients.
At the time of this publication we are closing a large project that was an overwhelming success. Without revealing too much, the end result would not have been achieved if every member of the team were just like Jean, Brianna, Rob, Jeff, Perry, or me. Personally, my view has been broadened thanks to an intense analysis/synthesis session with Jean and Perry—two people with very different experiences and work styles from my own. Likewise, our post-mortem project debrief session will be an opportunity for us to grow together as a project team one more time before moving on to the next project and hence project team.
Sylver Consulting has existed and continues to exist at the convergence of MR, UX and STR because our team members do not.