VOLUME 2, ISSUE 2
Charts & Chats
 
Gather the information. Make sense of the information. Share the information. Discuss the information. Four simple steps, when all is said and done, coalesce into a complex dance routine. From planning a product to eliciting product feedback from others—how we present and gather information is no small task.

In this issue, developer Alvaro Sanchez walks us through the visualization of complex data in our new Pro features, followed by researcher Laurissa Wolfram-Hvass's quick and dirty in-house testing protocols. We conclude with links of interest from around the web.

Publishers: Fabio Carneiro & Gregg Bernstein
Editor: Laurissa Wolfram-Hvass
Code: Alex Kelly

Chart-topper

Alvaro Sanchez, Developer

As you may have heard, we recently launched MailChimp Pro. It was a collaborative effort, with multiple teams building interlocking components. I was tasked with Comparative Reporting for Pro, working to ensure it achieved its mission: empowering our users to grow and to discover new insights into their customer base.

Discovery

Imagine a tool that allows you to compare multiple email campaigns, visualize trends across different metrics (like sends, open rates, and click rates), and learn from past performance. It’s a complicated feature, but with the combination of Pro Segmentation, compelling graphs, and some new ideas, I knew we could build it.

During the discovery phase of this project, we realized that some of our Pro customers store huge amounts of data, and that finding a way to visualize it was going to be a crucial part of Reporting. Finding a new scheme for data visualization meant revisiting the current MailChimp charting methods to evaluate whether they could work for a much larger set of data.

Hurdles

MailChimp sends email campaigns, but we also store tons of list subscriber data. This list data can be sliced and diced to uncover segments—specific sets of subscribers based on complex groups of criteria like location, spend amount, and gender—which can be used to send targeted campaign content. We wanted to provide a way for users to visually compare their email campaign data against these custom segments.

As a starting point, we used the existing MailChimp chart library for our main dashboard and regular reports. Right off the bat, one of the first issues we encountered was that some campaigns could have been sent on the same day, down to the same hour. This created points plotted on top of points, because the campaigns were so close together in time.

Campaigns with multiple send times and segments proved difficult to visualize with MailChimp's existing charting library.

We decided to switch gears and try a scatter-plot type of chart. This switch solved the problem of visualizing campaigns sent on the same day, along with segments, but introduced another problem.

A scatter-plot chart presented a lot of information, but it was too cluttered to communicate insights clearly.

If your report included a lot of campaigns, the chart became cluttered, making it difficult to detect trends for a specific segment. After this experimentation, we realized that we would have to create an in-house solution for displaying the information in a way we felt was both clear and compelling.

Drawing Board

After imagining our ideal solution, we realized that we wanted flexibility when creating the charts, and that we needed to improve our rendering performance for large amounts of data. To do that, we decided to create a custom chart library for comparative reports built on D3, a JavaScript library for manipulating documents based on data.

Unlike some other libraries, D3 doesn’t ship with charts out of the box. Instead, D3 helps you bind arbitrary data to the Document Object Model (DOM), and then apply data-driven transformations to the document using HTML, SVG, and CSS.
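
The core idea is small enough to sketch. Here's a hypothetical example (in TypeScript) that binds a couple of invented campaigns to the page and lets the data drive what gets rendered. It isn't our production code, just an illustration of the pattern:

    import * as d3 from "d3";

    // Invented campaign data, purely for illustration.
    const campaigns = [
      { name: "October Update", openRate: 0.41 },
      { name: "Holiday Promo", openRate: 0.37 },
    ];

    // Bind the data to <div> elements inside a hypothetical #report container,
    // then let each datum drive its element's width and label.
    d3.select("#report")
      .selectAll("div")
      .data(campaigns)
      .enter()
      .append("div")
      .style("width", (d) => `${d.openRate * 100}%`)
      .text((d) => `${d.name}: ${Math.round(d.openRate * 100)}% opened`);

Every attribute and label above is a function of the bound datum, which is exactly the kind of control we were after.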

D3 gave us total control over how visualizations were created and allowed us to combine scatter charts and line charts in one visualization, as well as have more control over the transitions/animations and responsive behavior of the charts. This gave us precisely the flexibility we needed.
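
To give a rough sense of what that combination looks like, here's a sketch of a line-plus-scatter chart. The scales, sizes, selectors, and data shapes are assumptions for illustration, not the actual comparative reports code:

    import * as d3 from "d3";

    type Point = { date: Date; rate: number };

    // Draw the average trend as a line and individual campaigns as points,
    // all inside the same SVG so they share the same scales.
    function drawComparison(campaigns: Point[], average: Point[]) {
      const width = 600;
      const height = 240;

      const x = d3.scaleTime()
        .domain(d3.extent(campaigns, (d) => d.date) as [Date, Date])
        .range([0, width]);
      const y = d3.scaleLinear().domain([0, 1]).range([height, 0]);

      const svg = d3.select("#chart").append("svg")
        .attr("width", width)
        .attr("height", height);

      // The average trend is one continuous path...
      const trend = d3.line<Point>().x((d) => x(d.date)).y((d) => y(d.rate));
      svg.append("path")
        .datum(average)
        .attr("fill", "none")
        .attr("stroke", "#999")
        .attr("d", trend);

      // ...and each campaign is a point plotted against it.
      svg.selectAll("circle")
        .data(campaigns)
        .enter()
        .append("circle")
        .attr("cx", (d) => x(d.date))
        .attr("cy", (d) => y(d.rate))
        .attr("r", 4);
    }

Because the line and the points share one pair of scales, a campaign's distance from the trend is immediately readable, which is the whole point of the comparison.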

Well-displayed

During this research stage, we focused on graphs that showed multiple sets of data over time—but in a way that was easy to understand. We looked for inspiration in several places, and one chart finally caught our attention. The chart, used in a story in the New York Times, had all the elements we considered important and allowed us to visualize data in a way we felt made a lot of sense.

With their decades-long experience visualizing complex data, the New York Times was a perfect source of inspiration.

This chart, with its included baseline, gave us a great place to start. Based on the metric users explore, we create an average trend line of all the selected campaigns over time. We then add a scatter plot so users can visualize how individual campaigns compare to that average.
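
Conceptually, that baseline is just a grouping-and-averaging step over the selected campaigns. Here's a rough sketch; the data shape and the day-by-day bucketing are assumptions for illustration, not MailChimp internals:

    // Illustrative data shape; not MailChimp's internal model.
    type CampaignResult = { sentAt: Date; openRate: number };

    // Group results by send date, then average the metric for each day.
    function averageTrend(results: CampaignResult[]): { date: Date; rate: number }[] {
      const byDay = new Map<string, number[]>();
      for (const r of results) {
        const day = r.sentAt.toISOString().slice(0, 10); // bucket by calendar day
        const bucket = byDay.get(day) ?? [];
        bucket.push(r.openRate);
        byDay.set(day, bucket);
      }
      return [...byDay.entries()]
        .map(([day, rates]) => ({
          date: new Date(day),
          rate: rates.reduce((sum, v) => sum + v, 0) / rates.length,
        }))
        .sort((a, b) => a.date.getTime() - b.date.getTime());
    }

Swap openRate for clicks or any other metric and the shape of the computation stays the same: bucket, average, sort.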

We can take this idea further, as well. When adding segments to the report, you can compare the average to the segment-based campaigns, allowing you a clear view of how all this data performs across different metrics and criteria.

The refined chart gives users a clearer view of the story being told by their data.

Charting Ahead

So far, we’ve had great feedback about how this tool has helped customers visualize their data in ways they couldn’t before (and in ways they hadn't considered).

This project has helped us as well, opening the door for us to build the next generation of charts and visualizations for use across MailChimp. Creating an in-house chart library will give us the freedom to tweak interactions when necessary, and to translate complex sets of data into fluid, easy-to-understand “stories.”

By building and providing the right tools, we help our users discover and learn more from their data, ultimately empowering them to better serve their customers.

Further Reading

Interested in reading more about data visualization?

Finding the Right Color Palettes for Data Visualizations [Medium] Samantha Zhang has a great write-up on creating a color palette for illustrating data.

The References section of Visualising Data [Visualising Data] is bursting at the seams with wonderful resources.

We Should [Pro]bably Test This

Laurissa Wolfram-Hvass, Researcher

As researchers, our team takes customer insights and feedback very seriously. Before products or features are built, we do our homework. We filter through customer feedback, visit and talk to our intended audience about their needs, and identify the specific jobs to be done. We communicate closely with the designers and developers who build MailChimp’s products and share our research by every means necessary—through emails, face-to-face conversations, short reports, Basecamp posts, IMs, and more. Once something is built, we recruit for remote or in-person testing to see if the things we built actually fill the needs we identified.

Nice, neat, straightforward—right?

Well, rarely.

Things don’t always go as planned. A design doesn’t work out as first envisioned. Someone gets sick. Something takes a little longer to build than anticipated. It happens, and we’ve all been there!

Most recently, it happened with our newest feature release: Advanced Segmentation for MailChimp Pro. Things got busy and our designers and developers were crunched for time, but they still wanted feedback, and we wanted to help.

MailChimp Pro’s newest feature addition: Advanced Segmentation

Often in product development, we have to change plans—which, in the case of Advanced Segmentation, meant pivoting from external user testing to lightning-fast rounds of internal user testing.

Internal User Testing

We always test our products internally before they’re released, and we have a rock-solid Quality Assurance team that carefully reviews new updates to the app. In this instance, though, peer feedback and QA aren’t the “internal” testing approaches I’m referring to. For this kind of internal testing, we recruit colleagues around the office who are active MailChimp users but are removed from the product development cycle. (In other words, they aren’t involved with planning, designing, or building features.)

With a company of over 400 people, we’re always stumbling across colleagues who use MailChimp in ways similar to our general customer base. Talent Coordinator Merridith Snyder, for example, uses the MailChimp Subscribe tablet app to collect email addresses at recruiting events. Brad Gula handles all newsletters sent to MailChimp users, and he piloted our Pro Multivariate Testing feature on our own lists before it was publicly released.

In the case of Advanced Segmentation testing, we reached out to several colleagues, including Kale Davis, a developer on MailChimp's marketing team who sends out the widely read HackerNewsletter each week using MailChimp.

The Testing Setup: Keepin’ it Lean

I usually start with one round of three internal participants. This gives us enough feedback (without being overwhelming) and leaves room for our Product team to address issues, while giving the researchers more time to recruit externally without slowing down production. Sometimes a round of testing will uncover things that our Product team wants to tackle and immediately re-test (again, leaving us little or no time for external recruiting). By pulling from our internal testing pool, we can run a few additional tests with new participants in under 24 hours.

The actual testing sessions with internal users mirror the sessions with external, non-employee participants. We use a very simple setup: a small table, a laptop with an external monitor, and a few chairs. Although we often have a designer or developer sitting in on the session, we also capture everything with Apple’s stock QuickTime app and the internal mic of our testing laptop, so we can share it with others.

In my opinion, the leaner the setup, the less intimidating the experience. So we try to emulate a more familiar, relaxed environment for testing. (There are certainly benefits to having a formal lab setup, but so far we’ve had great results just keeping it simple.)

When I moderate tests, I usually give participants scenarios and tasks taken straight from specific use cases in customer feedback, or I create tasks that prompt participants to use the product or feature as they would normally with their own MailChimp accounts. I want the experience to be as close to reality as possible—even though the testing session itself is artificial.

While testing Advanced Segmentation, I dug through past customer interviews and our feedback forms to come up with scenarios like these:

  • Scenario 1: Send a special discount to my most engaged subscribers who are interested in a particular type of product (coffee-related products).
  • Scenario 2: Send an email to subscribers in “Hong Kong” or “Singapore.” From that audience, only send to users who have spent more than $0.

Using a test account with dummy list data created by our QA team, our Advanced Segmentation testing participants worked through these scenarios, building out segment types that our customers requested.
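
To give a flavor of the nested logic participants were building, here's how Scenario 2 might be modeled. The types and field names are hypothetical and don't reflect MailChimp's actual data structures:

    // Hypothetical types for nested segment logic; the field names are invented.
    type Condition = { field: string; op: "is" | "greater_than"; value: string | number };
    type SegmentGroup = { match: "all" | "any"; conditions: (Condition | SegmentGroup)[] };

    // Scenario 2: subscribers in Hong Kong or Singapore who have spent more than $0.
    const scenarioTwo: SegmentGroup = {
      match: "all",
      conditions: [
        {
          match: "any", // either location qualifies...
          conditions: [
            { field: "location", op: "is", value: "Hong Kong" },
            { field: "location", op: "is", value: "Singapore" },
          ],
        },
        // ...combined with a minimum-spend requirement.
        { field: "total_spent", op: "greater_than", value: 0 },
      ],
    };

The inner "any" group handles the either/or location, while the outer "all" group layers the spend requirement on top.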

At the end of the day, I upload all testing videos to a private Vimeo account and share them—along with a 1-2 page summary of my notes—with the designers and developers working on the feature. I usually timestamp my notes, so viewers can jump to the most important parts without having to watch the entire video from start to finish.

We use a Vimeo account to share usability videos within the company.

Benefits (and a Caveat)

Internal user testing is especially helpful when deadlines are tight and the projected date of completion is uncertain. Without a firm date, it’s difficult to recruit! And the last thing we want to do is schedule a participant, only to cancel because we weren’t quite ready to test. Colleagues are a bit more forgiving, and since they work in the same building, they’re usually willing to be flexible—especially if we let them know up front that the schedule may change.

Oh, and bonus: internal tests like these give us opportunities to work with MailChimp folks we might not normally see. I’m a big believer in researchers building strong relationships across the company—when else would I get the chance to work with one of the marketing site engineers or with one of our recruiters? It’s relationships like these that often spark interesting conversations about the app or our customers, giving me new perspectives and fresh ideas.

All that said, internal user testing could never replace external testing. Even though our internal participants don’t work closely with the design or development of the product, they’re still a part of the MailChimp company and inherently predisposed to certain biases (like having a familiarity with our internal jargon or an understanding of best practices). We always follow up later by bringing test participants into our office or by testing online.

Ultimately, though, researchers have to work with what we have. Some testing is always better than no testing. And any feedback from an invested user gets us closer to creating better products.

Enjoyed this issue? Share it and start a conversation.

 

Around The Web

2016 Front End Conferences [CSS-Tricks] gives us a month-to-month rundown of next year's conferences (you can bet we'll be at several).

Get in the Van [Medium] and talk to your customers. Every. single. day.

Solid [Buzzfeed] is a style guide that's... well... pretty dang solid.

The Atomic Workflow [Atomic Design], the latest chapter in Brad Frost's in-progress book, has us waiting on the edge of our seats for the rest.

Better SVGs for the Web [Sara Soueidan] is full of great insights from the ever-impressive Sara Soueidan.

The State of UX [UXDESIGN.CC] takes a look at 2015/2016 trends in the world of user experience design.


Ask Us Anything

Email is at its best when it's a dialogue, so if you have questions for the MailChimp UX team about data visualization, usability testing, or that shirt with Fabio's beard on it, send them in! Seriously: hit reply and ask us anything. We'll try to answer every email and maybe even share our conversation in future newsletters.

 
MailChimp® is a registered trademark of The Rocket Science Group, which is located at
675 Ponce de Leon Avenue NE, Suite 5000, Atlanta, GA 30308 USA.

You’re receiving this email because you subscribed at theuxnewsletter.com. If you don’t want to hear from us anymore, you can unsubscribe, though we’ll be sad to see you go.