Issue 4 \\ Translating Data

Data comes in many shapes and sizes: emails from colleagues and customers, stats from site visits and surveys, observations and insights that accrue over time. All of these hold value, but it's often difficult to translate these myriad data streams into usable intelligence.

In this issue we look at three sets of data from some different angles. Web Analyst Allison Urban explains how she used Google Analytics to rapidly solve a UX problem in the MailChimp app. And UX Intern Fernando Godina covers a couple of topics: making survey data usable, and translating an internship into an opportunity. We wrap up with links of interest and a response to a reader question from Issue 3.

Fixing a Leaky Funnel with Google Analytics

by Allison Urban

As a Web Analyst, I spend a lot of time thinking about how to translate data into design insight. One of my favorite tools for this is Google Analytics' Next Page Paths. This handy little report tells you where users go next after viewing a particular page. In terms of UX, it's one of the quickest ways to identify where things are going wrong in a funnel.

Here's an example from our Account Activation funnel. After a user signs up, they need to complete a Captcha before they can log into their account. The navigation pattern we'd expect to see in Google Analytics goes like this: 

signup > signup/success > signup/confirm > login

However, last week we noticed that 32% of next page paths from signup/success go to the login page, completely bypassing signup/confirm.

If you could log into your account without completing the Captcha first, this wouldn't be an issue; the problem is, you can't!

It turns out you used to be able to log in before completing the Captcha, but the developers patched this loophole for security reasons. So why were some people still trying to log in rather than following our instructions to confirm their account? Well, because we were encouraging them with our shiny Log In button.

Since removing the Log In button from signup/success, the percentage of people going directly to signup/confirm has increased from 63% to 79%. Our support department also reports receiving 25% fewer emails about account activation, putting them on track for their lowest volume of activation-related email all year.

Not bad for a few minutes of research.
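Google Analytics computes these next-page shares for you, but the underlying idea is easy to sketch. Here's a minimal, hypothetical version in Python: given each session's sequence of pageviews, it counts where visitors went immediately after a given page. The session data and URLs are invented for illustration.

```python
from collections import Counter

def next_page_paths(sessions, page):
    """For each session, count the page viewed immediately after `page`."""
    nexts = Counter()
    for path in sessions:
        for i, current in enumerate(path[:-1]):
            if current == page:
                nexts[path[i + 1]] += 1
    total = sum(nexts.values())
    # Share of exits from `page` that went to each next page, most common first.
    return [(p, n / total) for p, n in nexts.most_common()]

# Hypothetical sessions, loosely mirroring the funnel above.
sessions = [
    ["signup", "signup/success", "signup/confirm"],
    ["signup", "signup/success", "signup/confirm"],
    ["signup", "signup/success", "login"],
]

for page, share in next_page_paths(sessions, "signup/success"):
    print(f"{page}: {share:.0%}")
# signup/confirm: 67%
# login: 33%
```

A real implementation would read sessions from your analytics export rather than a hard-coded list, but the counting logic is the same.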

I Sorted 506,000 Data Points and Lived to Tell

by Fernando Godina

Customer feedback is very important here; it helps guide our decisions as we improve MailChimp. To start the year, we tried something new: we sent out a massive, 46-question survey. After one week, we had received responses from over 11,000 customers. Multiply 11,000 completed surveys by 46 questions and you get 506,000 responses!

The responses were a combination of multiple choice and open-ended answers. Multiple choices are fairly easy to analyze. SurveyMonkey has some nifty tools that let us filter the data based on responses to specific questions. For instance, we could chart mobile device usage based on size of company or see which industries reported collaborating in teams the most. 
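Those filters amount to a cross-tabulation: counting how often each pair of answers to two questions occurs together. As a rough sketch of the idea (this is not SurveyMonkey's actual export format; the column names and answers are invented):

```python
from collections import Counter

# Invented rows standing in for exported survey responses.
responses = [
    {"company_size": "1-10", "uses_mobile": "Yes"},
    {"company_size": "1-10", "uses_mobile": "No"},
    {"company_size": "50+", "uses_mobile": "Yes"},
    {"company_size": "50+", "uses_mobile": "Yes"},
]

def crosstab(rows, row_key, col_key):
    """Count how often each pair of answers to two questions occurs together."""
    return Counter((row[row_key], row[col_key]) for row in rows)

table = crosstab(responses, "company_size", "uses_mobile")
print(table[("50+", "Yes")])  # → 2
```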
We also asked our resident Data Scientist, John Foreman, to do some data digging to find valuable insights. He shared with us the magic of Excel pivot tables, but that's a story for another day.

However, thousands of unique answers to open-ended questions are a bit harder to comb through. We approached this feedback like a puzzle: our job was to put these pieces together to find patterns. Instead of thinking about the responses individually, we read through them looking for commonalities and started to categorize them. Now, instead of having 1,966 unique answers to a question, we ended up with 14 buckets that expressed a type of answer.
[Image: open-ended feedback before categorization; patterns are hard to see.]

[Image: all responses bucketed, exposing patterns in the responses.]

Bucketing is a simple idea, but doing it takes time and concentration. Once it was done, though, we were able to rank the buckets and pull answers out of our open-ended questions. This helped us quantify feedback patterns and share that information internally.
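Our categorization was done by hand, but the mechanical part of bucketing can be sketched in code. This hypothetical example assigns each response to the first bucket whose keywords it mentions, then ranks the buckets by size; the buckets, keywords, and answers are all invented for illustration.

```python
from collections import Counter

# Illustrative keyword rules; real buckets come from reading the responses first.
BUCKETS = {
    "pricing": ["price", "cost", "expensive"],
    "templates": ["template", "design", "layout"],
    "reporting": ["report", "stats", "analytics"],
}

def bucket(response):
    """Return the first bucket whose keywords appear in the response."""
    text = response.lower()
    for name, keywords in BUCKETS.items():
        if any(k in text for k in keywords):
            return name
    return "other"

answers = [
    "I wish the templates were easier to customize",
    "Pricing feels expensive for small lists",
    "More detailed reports please",
    "Love it!",
]

# Rank buckets by how many responses landed in each.
ranked = Counter(bucket(a) for a in answers).most_common()
print(ranked)
```

Keyword matching like this is crude, which is why reading the responses and refining the buckets by hand still matters; the code just does the counting.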

Translating an Internship into an Opportunity

I spent the last 5 months as a UX Intern. Here's what I'd like to pass along.

Interning has been one of my favorite college experiences. As an intern, you get the opportunity to work on real projects and to learn new things along the way. The intern experience will be what you make of it, but here are a few guidelines that I think will make it better.

Learn Fast
Internships can vary in length, but it’s not about how much time you spend at a company; it’s about what you learn while you’re there. As you take on projects, tackle as many problems as you can and do it quickly. The quicker you move, the faster you’ll get feedback that will help you learn how to improve.

Immerse Yourself
It’s one thing to show up and do your assigned work, but to get the most out of an internship, you should immerse yourself in your projects. Read books that are related to what you’re working on, watch talks about your company or industry, and talk to your coworkers. Once you've taken in this information, you can tackle a variety of projects with your coworkers. They'll see you're learning quickly, and you'll be trusted with greater responsibilities.

Meet Fellow Chimps
This is the most important and valuable thing I've learned: a great company takes a really talented team of people working to make it awesome every day. I love that every one of my coworkers has a unique story as to why and how they got here, but we all pull together to make a better product. Take the time to talk to your colleagues and really learn from them. It's an opportunity to make yourself a better intern and prospective employee.

[Editor’s Note: Fernando joins the UX team full-time this summer.]

UX Around The Web

Ask Us Anything

After Issue 3, Dylan Blanchard asked, "How much does data from support (support tags, personal observation) drive product iterations? And if it does, what's that process look like?"

Thanks for the question, Dylan. Data from support certainly plays a big role in product iteration. The data we get is both empirical (frequency of specific issues) and anecdotal. The former is easy: if we see the same issue repeatedly in support chats and emails, or a knowledge base article gets a lot of traffic, we know something’s up. We pull transcripts from help emails and chats, then run some scripts to find the terms used most often. This often surfaces emerging issues we need to investigate further.
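Our actual scripts are homegrown, but the core of that term count is simple: tokenize the transcripts, drop common filler words, and count what's left. A minimal sketch, with an invented stopword list and invented transcripts:

```python
import re
from collections import Counter

# A tiny, invented stopword list; real ones are much longer.
STOPWORDS = {"the", "a", "to", "i", "my", "and", "is", "it", "in", "of"}

def top_terms(transcripts, n=5):
    """Count the most frequent non-stopword terms across all transcripts."""
    words = Counter()
    for text in transcripts:
        for w in re.findall(r"[a-z']+", text.lower()):
            if w not in STOPWORDS:
                words[w] += 1
    return words.most_common(n)

transcripts = [
    "I can't activate my account",
    "The activation email never arrived",
    "My account says activation pending",
]
print(top_terms(transcripts, 3))
```

Even on a toy sample like this, "account" and "activation" float to the top, which is exactly the kind of hint that tells us where to look next.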

At the same time, personal observation and interpersonal discussions yield a good amount of ideas. Our Support team chats internally, both online and in person, about potential issues and workarounds, and uses a ticketing system to track issues and make recommendations. Sometimes the Support staff will just see a way to fix things independent of customer feedback; it's just a "hunch" that there's a better way to empower our users.

We'll be talking more about how we collect and analyze data from many departments at MailChimp in future issues of this newsletter. Stay tuned.

We want this newsletter to be a dialogue. If you have any questions for our team about design, data, usability testing, mobile, movies, font sizing, or even Gregg's running shoe preferences, send them in. Seriously: hit reply and ask us anything. We'll try to answer every email and maybe even share our conversation in future newsletters.
© 2001-2013 All Rights Reserved.
MailChimp® is a registered trademark of The Rocket Science Group
