
eCommerce Optimization Using Voice Of Customer Data

 

Context can be used as a strategy. Learn how to prioritize the voice of customer research findings and create a process for testing and action.

Speaker

Ben Labay

Research Director, Speero

Transcription

Paresh: Hello and welcome to ConvEx, where we are celebrating experiment-driven marketing. My name is Paresh and I am the Director of Marketing at VWO. For those of you who haven’t heard about VWO, VWO is an Experience Optimization and Growth Platform – the only connected platform that enables organizations to optimize their entire end-to-end audience journey. You can check us out at VWO.com. I’m excited to be hosting today’s session with Ben Labay, Research Director at ConversionXL. Welcome, Ben, it’s great to have you here today.

Ben: Yeah, thanks, Paresh. It’s a pleasure to be here, and I’m excited.

Paresh: Great! Alright. So before Ben begins his presentation, I want to inform all of you that you can join the ConvEx official networking group on LinkedIn and ask your questions about this presentation there. That being said, Ben, the stage is all yours.

Ben: Thanks, Paresh. Hi everyone! Today I want to get into voice of customer research – effectively our playbook at the agency, and what I supervise most directly. When we bring in clients to help with their optimization programs, digging into qualitative data, digging into voice of customer, is key to what we do. But before I jump in, I’ll set the stage for this talk. It’s roughly in two parts.

The second part is highly tactical: going through a lot of frameworks and templates and questions, kind of rapid-fire through our voice of customer playbook. But the beginning is strategic, because I don’t want us to rely on tactical growth alone within our testing or research programs. So the beginning goes through some mental models that I use in conducting successful research. You see, my background is not explicitly in marketing; I’ve only been with ConversionXL for four years, and before that I was doing a lot of research [and] actually also a lot of fishing. Here is a fish that I caught pretty recently, with my belly there, but I’m not per se a marketer. My academic background is in evolutionary behavior and conservation science.

I worked in academia for ten years prior to jumping over into marketing, and what I effectively bridged over from that previous career was how to do effective research. There are a lot of pitfalls in doing research, and that’s what I want to get into first: what I carry over into the world of Conversion Rate Optimization for running successful research programs that support effective, hypothesis-driven testing and experimentation. So that’s the transition into these mental models for conducting successful research.

But before I jump in, I want to talk about this fish, actually. It’s pretty common in the US as well as in Europe. It’s known as European redfin or […] river perch, yellow perch – this one was caught in France. It’s known by a lot of names, but weirdly, if I were to try to catch this fish, or to study and conserve it, knowing these names wouldn’t help me at all. The names of this fish really speak to understanding the people that gave the fish its names, rather than the fish itself, and this speaks to my mental model number one: knowing the name of something is not the same as knowing something. It’s a simple statement, but it’s very powerful, in the sense that in marketing we’re familiar with this concept under different language, different nomenclature. For example, System 1 versus System 2 – this is what the reptilian brain versus the mammalian brain speaks to, that whole behavioral psychology part of marketing and how we apply things. But what this really speaks to is that we as marketers really suck at communicating. We’re really good at characterizing our products, but we’re not so good at characterizing customer pain.

So we think that we sell products, but we really don’t – we sell customer success. Kodak thought that they sold film, but they did not, and they went out of business. Nokia thought that they sold phone hardware, but they didn’t. Let’s look at how this applies in a simple case on an e-commerce homepage. This is a brand that we work with, a Procter & Gamble brand called Native Deodorant. They sell natural deodorants […] organic deodorant, if you will. Here’s a landing page where we see a value proposition: “Deodorant to stay fresh and clean.” It’s not very satisfying; it merely characterizes the product, and again it doesn’t speak to that customer pain. Changing that value proposition so that it does speak to customer pain is a massive win.

“Deodorant without the chemistry experiment” speaks to why someone buys natural deodorant – to not pollute their body – and this is a huge win. So again, that mental model: don’t let the name of something distract from what that something is and what it represents, especially to your customer.

Mental model number two, and this is my favorite: knowledge is equal to experience plus sensitivity. This is really powerful. If you ever find yourself saying that you don’t like something, you need to think about this mental model; but it’s in research, and in understanding when tactics apply, that it becomes really powerful.

This mental model is where best practices and guidelines go to die. For example, we just published hundreds of e-commerce best practice guidelines on our website. If I were to give you 12 guidelines for e-commerce homepage design that were successful in such-and-such use case and case study, and you were to go apply them to your products and your site, would that be successful? The answer is: likely not. You need to understand how these tactics apply across – how they’re sensitive to a breadth of – experience for them to really matter.

So let’s take another example. Here is a landing page for a company that we work with – I’ve anonymized this one – but they are a payroll company out of the US, a Fortune 500, and this page is the tip of the spear in terms of bringing in a lot of leads. If you look at it, it’s a “start quote” button with an email field, and you would think that this breaks a lot of common best practices. And it does: we’re breaking the foot-in-the-door technique with the friction that the email field adds.

We’re breaking some of that sunk cost fallacy and how to leverage it, and if we were to run this against a treatment with just the start quote button, removing the email field, you would think the treatment would be the better way to go. But in fact it’s not – it was a massive loser. And the reason, again going against the common best practice way of thinking, is that the email field provides a bit of friction in a sense that stops somebody and makes them engage more: scroll down, engage with the additional content and the value associated with it below the fold, and then come back up, enter their email, and start the quote. So here’s where a best practice doesn’t quite apply, and it would lose massively. So again, when you’re trying to apply a tactic, best practice, [or] guideline, you don’t know that it works until you understand how it applies – how it’s sensitive – across a bunch of different situations. So, back to the fish.

Let’s say that we do have sensitivity – let’s say that we know this fish. We know the name, of course, but we also know what it eats, where it lives, what allows it to breathe and be successful and all that kind of stuff. So the question is: would this help me to catch the fish, and would it help me to study and conserve the fish? The answer is no, and that leads me to my third mental model: it is not enough to know. My grandfather used to say, with regards to fish, that you can’t catch a fish without a line in the water. As marketers, we are very good at collecting data and analyzing data. We’re very good at creating plans with data, but we suck at implementation – the action part, the movement part. And this is really key in transitioning all of the interesting data points that you’re gathering into some action and some results. And this is especially poignant for human communication, right?

If you think about how conversations work, it’s not enough to know – you have to show that you know. You have to be sympathetic to your audience and know when to hit them up and when not to. Think about human communication and the people and situations where it excels worldwide. What does it take to be the best at communicating? Think about comedy: with comedy, timing is thought to be really key. What about art? Historical and cultural perspective is thought to be key. What about advertising? It’s generally thought that it needs to be extremely memorable. All of these cases are beautiful applications of context. You need to hit the customer at the right time. You need to have the right perspective and talk to them in the right place at the right time, and you need to have contextual memory – connect brand value with their own personal value. So again, show that you know, and hit them up at the right time.

So what are some examples of this? Here’s a brand that we work with, Serta […], a consumer mattress-in-a-box type of company. They came to us starting out with this product and this brand, and we helped them quite a bit with understanding the customer, and then with some design and testing. They were sending people directly to the product detail page, which is the screen on the right, really quickly, but they had a ton of great content down below. We changed up the user flow, allowed people to seamlessly wade through the additional content pieces down below, doubled the engagement time, and also doubled revenue per visitor – tested this […]. This was a great success for us. Another example, in terms of the timing type of thing: this is that Procter & Gamble deodorant brand. With deodorant, people are used to buying it – really high conversion rates, too high actually, so our focus is not conversion rate, it’s average order value and lifetime value. So we’re hitting people up at the right time – the right people, with the right offer. As people are on the product page transitioning to the cart, or on the cart transitioning to checkout, or they’ve just checked out, maybe they see a one-click upsell, upgrade, or cross-sell. We spent years testing this, and if we turned off these tests it would lose something like 8K a day – a quarter million a month that this contributes. So there are a lot of different ways that we think about this and personalize these cross-sells and upsells: when to speak to somebody, how to speak to them, and what type of value you’re giving them depending on who they are. That’s what this third mental model is meant to show: that you understand where to add value for them, when they need it.

So this is the transition into the second part of the talk, where I get really tactical and start to speed through a lot of the templates and questions in the voice of customer playbook that we use at the agency. Context for this: the standard rubric for marketing is 1, 2, 3 – the audience, the message, and the action. The perspective here has nothing to do with message and action; it has everything to do with the audience, and with understanding four key pillars of the audience: perceptions, behaviors, motivations, and anxieties. This is what we want to know – the key word being ‘know’, to have knowledge about. Now, the way we get to know these four pillars is to use surgical techniques to get there: surveys and UX benchmarking, for example, to understand users and their perceptions; user testing to understand friction and behavior; etc. So we’ll walk through this real quick with some examples and templates, starting with users and perceptions.

So for user perceptions through a digital buying experience, what we use is UX benchmarking, and for better or for worse, the common one that we all know about is NPS, Net Promoter Score. We see it implemented through little widgets like this – a little popup with a Likert-scale question, ‘How likely are you to recommend?’ We see a lot of great success implementing this through SMS, doubling or tripling the response rate, so try that if you’re not. But okay, this is all NPS. This is a UX benchmark on one dimension of user experience, and that dimension is loyalty. ‘How likely are you to recommend this to a friend?’ is not the only question you can ask here; there’s another one that we like as well: ‘I will likely visit this website in the future.’ Again, this is all loyalty, one dimension of user experience, but there are other dimensions, including credibility or trust; appearance, the aesthetic part of the experience; and usability, pure ease of use. Together, this question set is a standardized survey known as SUPR-Q. It was developed by Jeff Sauro [of] MeasuringU, and it’s a really powerful standardized survey, again, to get at the multidimensionality of user experience.
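To make the NPS arithmetic concrete, here is a minimal sketch in Python. The promoter/detractor cutoffs are the standard Net Promoter Score definition, not anything specific to the widgets or SMS setup described above:

```python
def nps(scores):
    """Net Promoter Score from 0-10 'how likely to recommend' ratings.

    Promoters are 9-10, detractors 0-6; passives (7-8) count toward the
    total but neither add nor subtract. Result ranges from -100 to 100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 3, 6]))  # 30
```

The same tally works whether the responses come from an on-site popup or an SMS survey; only the collection channel changes.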

We worked with Jeff to expand the SUPR-Q. We work at a conversion rate optimization shop, and we work with a lot of clients on their success and value propositions, and these dimensions of user experience didn’t quite address that. So we worked with Jeff to add a fifth dimension: clarity. This speaks to questions like ‘I understand why I should buy from this website instead of its competitors.’ Again, this speaks to the success proposition and whether the customer understands it or not. Really powerful. You can do these types of things similar to NPS, in little popups, or the whole thing at once – which takes, you know, a panel and some time for people to go through a big standardized survey like that.

Typically NPS, for example, is implemented longitudinally in a lot of cases – through time, maybe pre and post a design change or something like that. Where we like to apply it is competitively. Here’s a link to a study that we did and published recently on our blog, where we did a competitive UX benchmark on four beauty and cosmetic shops: Clinique, Lush, Sephora, and Fresh. This is really cool. In this case, we’re able to see how these sites fall relative to each other in these different dimensions of user experience and user perception, and we’re able to adjust the roadmaps and priorities of the tests – especially tests that speak to brand, brand positioning, and changing user perceptions in the testing program.

So yeah, that gives you a taste of that. If you’re doing this type of stuff, you need panels. Amazon MTurk is great for US audiences and US companies and products; Clickworker is a really good one for panels out of Europe. But Amazon MTurk is super powerful – just a quick heads up if you haven’t gone there in a while: a couple of years ago they had maybe two premium qualifications, you could target iPhone users [and] Android users. Now it’s exploded. You can target people by online purchase histories for music and jewelry, financial assets owned, employment industry – all sorts of cool stuff. This is all self-reported, so you’ve got to keep that in mind, do some vetting, and understand that limitation, but it’s a really powerful tool nonetheless.

So, into friction. ‘Behavior’ – to understand customer behavior as they go through these user journeys, we use user testing as well as some passive feedback. For user testing, I’m not going to go into the details of exactly how to conduct it. But if you’re a big shop, if you’re an agency, you for sure want to invest in a moderated, in-person setup. I’ve provide[d] a link to a really cool article from UX Collective that lets you hack one together.

On the right here […] for moderated or unmoderated remote user testing: validately.com, usertesting.com, Userlytics – these types of companies are really great for that. I will go over, real quick, the common mistakes that we see with user testing.

One: the wrong users. You don’t want people that have gone through your journey before. The whole idea behind user testing is to watch people trip, so you want to identify those trip wires; if they’ve already done the journey, they’ll avoid the trip wires. And you of course need to target your demographics – who is your audience? Another mistake: trying to get data on user perceptions. This is a huge one that I see again and again. We’re not asking questions with user testing; we’re observing behavior, and we’re only doing these sessions with five or ten people.

If you’re trying to get data on perceptions, that’s what surveys, UX benchmarking, polls, and things like that are for. User testing is for observing behavior, not getting data on perceptions. Next, leading test questions and tasks: you don’t want to get too specific there. You want to just provide a kernel of a task and see where they go with it; you don’t want to bias your results. Launching and reviewing all of them at once: you don’t want to do that. We do what’s known as a 2×2 approach, where you maybe launch one on desktop and one on mobile, review, adjust, repeat. And then, not accounting for the full range of customer intent: especially on mobile, and just generally these days, user intent comes in four buckets – I want to know, I want to go, I want to do, and I want to buy. Note the top three are all discovery-through-information stage.

The last one is the only one that’s transactional in nature. So you want to keep this in mind when you’re setting up your tasks; your site might not have all four – [it] might not have ‘I want to go’, for example – but you have to account for them nonetheless. For understanding customer friction, passive feedback is really key as well. On the right here (or maybe on your left) we’ve got a feedback button – Usabilla made this really popular, and a lot of voice of customer and customer experience platforms have it now – making it easy for people to provide feedback when they hit friction points within your buying journey. This is really powerful, and with more sophisticated platforms, like our partner platform Usabilla, you can do some fun things. Here’s an example from one of their clients, TUI, a travel shop out of the UK. They identified a massive bug after a code release on their Chrome extension that was hitting revenue to the tune of three or four hundred thousand pounds a week, and it was through this crowdsourced QA feedback button that they were able to take screenshots of the issue, tie it to sessions and even the HTML within those sessions, and send it to the right channel and team to address and fix it really quickly. So it can be really powerful to have these types of feedback mechanisms on your site to address customer friction points. Here’s an example with Virgin and a […] app. You know the path of least resistance: if people have a problem with one of these apps and find it easiest to go to the App Store to provide that feedback while they’re angry, they will. So you need to make it extremely easy to provide feedback within the app itself, good or bad – and if it’s good, maybe send them to the App Store and crank up those app store ratings, right?

So now, transitioning to bucket number three: motivations and goals. Here we’re using active surveys and polls, starting with understanding user goals and user intent. Here’s an example with another UK shop, Money Supermarket, a car insurance comparison site. The team at Money Supermarket had new, aggressive goals around cranking up sales on mobile specifically, but before they started to do anything, they set up this top-task type of poll: ‘So what are you here to do today?’

You can do this in a number of ways, open-ended or close-ended. This one was close-ended, but they wanted to know generally what the flavor of user intent was on mobile, and to compare that to desktop. What they found is known as a Mobile Halo Effect: people on mobile were in that discovery stage, and transitioned later on to desktop to pull the trigger. It was an extreme case of this Mobile Halo Effect – to the tune of 80 percent or so of people on mobile didn’t have a driver’s license, let alone a car, so they really were not at the buying stage yet. This was inverted on desktop, where people really were ready to pull the trigger. This completely changed the strategy for the team at Money Supermarket, in terms of getting a lot more informative and conversational with the content of the mobile experience. And in terms of understanding motivation, surveys are our best friend. First, identify who you want to speak to. The number one audience that we like to identify to understand motivation is first-time customers – people that just recently went through the buying experience. They had the motivation to go through, so we want them to describe that motivation in their own words.

‘VIP’ [customers] – what sets them apart – and maybe something persona-specific are other options. You can conduct the survey in a lot of ways: you can hit them up on a thank-you page, over SMS, or with a pop-up that links out to a separate survey. We like email; it […] and generally works in a lot of ways. I won’t go through the template here, but I will point out one special thing about the subject line specifically, as I see a lot of bad subject lines: you will increase your response rate if you’re really explicit. The formula, or format, is ‘what you get for what’ – for example, ‘Get $5 of credit for 5 minutes of your time’, something like that.

So, questions in the survey. Typically we’re asking 4, 6, up to 8 questions – you need to think about response rate there, so not too many, and also not overloading a survey with a ton of goals. This survey is aimed specifically at motivation, and with the types of questions we ask, we want to get really specific about translating the answers into test hypotheses to improve our testing program. We’ve got a couple of questions that speak to understanding the customer’s perspective on the benefits of the product, the brand, the site, and what they’re getting. One: what matters to you most when buying this product online? Two: please list the top three things that make you purchase from this site. Then a couple of questions that provide more data points for testing the value proposition.

What made you choose this site over other online stores, and what did you like most about your shopping experience with the site? Those are both really powerful for helping improve value propositions. Lastly, customer anxiety – fears, uncertainties, and doubts. For this we use intercept polls and thank-you-page polls. Here is one of our clients, Dermalogica; we do a lot of work in the beauty and cosmetics space. Here is an example, in this case implemented on the left: ‘Is there anything holding you back from making a purchase today?’ It’s kind of an aggressive question, but it just has a yes/no answer.

It’s just as easy to click yes or no as it is to close out the survey, but if they hit yes, you see there on the right, we can bring up an open-ended box to allow them to elaborate. We’re usually implementing this on the product page or cart – somewhere we’re seeing an abnormal drop in the funnel. Response rates here run from 2 to 5 to 7%, depending on the product, the audience, and the brand. If you want to improve your response rate, here’s an example from Ticketmaster, implemented through our partner Usabilla: a full-screen takeover, on a tablet in this case. This shows when a customer in checkout hits the back button and goes back to the cart – the opposite way from where we want them to go. We hit them up with a real quick survey: ‘Why didn’t you complete your purchase today?’ It’s a bit more aggressive, in the sense of a full-screen takeover and that action, but the response rate goes up – you’re looking at 5 to 10 to 15%. The third example here is also implemented with Usabilla, our voice of customer partner that we work with a lot, this one on the website […]. Customers had just checked in, and they were asked ‘Is there any feedback?’ [or] ‘How easy was it?’

A few questions in this regard. So this is post-purchase, getting some information. The response rate on this one, believe it or not, was 70% when it was implemented – so much so that […] thought there was a bug in the Usabilla system. So that’s the quick win for this section: post-purchase surveys. There’s little friction here – the transaction is already made – and there are a lot of benefits to asking a few questions. One that we like a lot: ‘What almost prevented you from completing your purchase today?’ There are some other questions for you to play with as well. So, we went through a lot of examples really quickly, and in a lot of cases we’re collecting a lot of open-ended data, so I wanted to provide a slide on how we process it – we get a lot of questions there. This is codification: bringing in a lot of qualitative, open-ended data.

We set up a spreadsheet with the responses in a column, start adding issues as we come across them, and start tallying responses against those issues – you can see a rubric for that over here on the right. On the left here, I mentioned a little bit of the stats. Generally, with open-ended questions, you’re going to want around 200 to 250 responses in order for the confidence interval around the strength of a signal for an issue to be useful. So if 20% of people say they’re having problems with shipping, there’s a confidence interval around that 20% that lets you be more confident that your population of customers actually has that same strength of signal for that issue. And there [are] links to dive more into these stats if you’d like. Wrapping up, four key takeaways here.
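As a quick aside before the takeaways, the confidence-interval point can be made concrete. This is a minimal sketch using the standard Wilson score interval for a proportion – not any particular tool the talk describes – applied to the shipping example, where 20% of 200 coded responses mention an issue:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion (z = 1.96)."""
    if n == 0:
        raise ValueError("need at least one response")
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 40 of 200 coded responses mention shipping problems (20%):
lo, hi = wilson_interval(40, 200)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 15.0% to 26.1%
```

With only 50 responses the same 20% signal would span roughly 11% to 33%, which is why a couple hundred responses are needed before the tally is worth acting on.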

One – you need to be strategic about your research. You can’t let the nomenclature, the jargon, the best practices distract from what you’re looking to get, how you’re looking to fold that data in, process it, and push it out to your customers to solve their problems effectively. Two – you need to start, or expand, how you collect voice of customer data. I just showed these four different buckets of areas where you can collect voice of customer data; this is effectively our playbook for how we go about it. I guarantee that almost no one does all of them, so start to expand how you collect the data. Three – don’t survey for survey’s sake, and don’t collect data for data’s sake. You have to have explicit goals in mind, and don’t cross-pollinate – don’t have multiple goals for one technique. Be really surgical there.

Four – you’ve got to act, you’ve got to experiment. You can’t just collect data and create a plan; you’ve got to push out the experiments and tests that all of this data supports. So that’s the final takeaway. And lastly, my final point: if you’re not collecting a lot of this data, then someone else is. Here are a bunch of screenshots from Google. In the last year they have been incredibly aggressive at collecting voice of customer feedback. What does this mean? Think about it: if there were any company in the world with so much data on customers that it would maybe not need to collect voice of customer data, that would maybe not have a customer blind spot, it would be Google – and look how aggressive they are at collecting voice of customer data.

They are incredibly aggressive about it. What this implies is that voice of customer data is going from a nice-to-have to an absolute must, because digital experience trumps everything today. It’s more important than price; it’s more important than quality. If you make things frictionless and easy for your customers, they will come back again and again. We’re having these watershed moments, especially with regard to mobile payment gateways and how they are changing the game. And what that speaks to as well is that who you think your competitors are, are actually not your competitors. If you’re selling jewelry online, your competitors are not other jewelry makers; your competitors are Google, Uber, Netflix. They are the companies that are listening and speaking to their customers and creating these frictionless experiences. They are raising the bar for our customers’ expectations these days, and if you are not following suit – if you are not also creating these frictionless experiences – then you’re going to fall behind regardless of what your direct competitors are doing.

So with that, I’ve got a bunch of resource links here at the bottom, and I will stop the presentation here and hand it back over to Paresh.

Paresh: All right, thank you, Ben! That was a great presentation, really insightful. I was recently reading a report from Gartner which mentioned that by 2020 experiences are going to overtake price, product, pretty much everything, and companies are going to be built on experience. I hope everyone learnt a lot from your session today. With that, I do have a couple of questions, if I could quickly ask them. The first one: you spoke about the SUPR-Q, the standardized survey. Could you talk a little bit about the process to implement it? How long does it take, and how many people do you really need? And once somebody has collected this data for the first time, what do they do with it?

Typically, how do they connect that data to the larger goal? One of your slides also mentioned that you need to turn that data into action. How do you typically go about that, from a process perspective?

Ben: The SUPR-Q, or these types of standardized surveys – think of it like user testing, but a bit at scale, except here we’re not looking at behavior but at perceptions. You get people – at least a hundred, ideally maybe a hundred fifty to two hundred people – to go through that digital experience and then answer a survey after. So you need a panel; just that panel might be five hundred to a thousand dollars to run, and then you’re setting up your survey tool and things like that.

So it’s not out-of-this-world expensive to execute, but there’s also time to process all of that data. How do you bring this in? Understanding where you are in terms of that benchmark number means nothing on its own – you have to compare it to something else. So again, longitudinally, you’re collecting these types of benchmarks over time and seeing where things are shifting, or in that competitive sense that I showed in the presentation, you’re putting yourself against another website. If you see that you fall behind in a particular area, or there’s an indicator lagging through time, that provides a point of priority for that area. So for example, if some data point speaks to people not having as much trust – that credibility benchmark is not so good – then the tests and actions that might help increase trust and credibility on the site, you might bump those up in your prioritization framework, your test plan. So that’s how it would fold into your test plan, effectively.

Paresh: Sure, that sounds great. And strategically, what bucket or area of qualitative research helps you most in building quality hypotheses? What's your thought on that?

Ben: Yeah, so we went through that playbook, and it's got those four buckets, those four areas that we speak to. I would say motivation is the biggest one. As a conversion heuristic, so to speak, customer motivation is the strongest dial to turn when we're thinking about what changes customer behavior the most. So understanding, from voice of customer data, what's motivating them, how they describe it, how they put it in their own words, that's usually the area where we put the strongest emphasis. In fact, in our prioritization framework, the PXL model, we have a metric that specifically asks: does this test address customer motivation? If it does, there's a point score that contributes to the ranking, so the test ideas that speak to motivation are going to be ranked higher.
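[Editor's note: the PXL framework Ben describes scores each test idea against a set of criteria and ranks ideas by total points. The sketch below illustrates that idea only; the criterion names and point values are simplified assumptions for illustration, not the official PXL spreadsheet.]

```python
# Illustrative sketch of a PXL-style prioritization score: each test
# idea gets points per criterion (hypothetical names/weights here),
# and the sum of those points ranks the ideas in the test plan.

def pxl_score(idea: dict) -> int:
    """Sum the per-criterion points for one test idea."""
    return sum(idea["criteria"].values())

ideas = [
    {"name": "Rewrite hero copy around customer motivation",
     "criteria": {"above_fold": 1, "noticeable_change": 1,
                  "addresses_motivation": 2, "easy_to_implement": 1}},
    {"name": "Change button color",
     "criteria": {"above_fold": 1, "noticeable_change": 0,
                  "addresses_motivation": 0, "easy_to_implement": 1}},
]

# Ideas that address motivation earn extra points and rank higher.
ranked = sorted(ideas, key=pxl_score, reverse=True)
for idea in ranked:
    print(pxl_score(idea), idea["name"])
```

Note how the `addresses_motivation` criterion carries more weight, which is what pushes motivation-driven test ideas to the top of the plan, exactly the effect Ben describes.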

Paresh: Great. So I think that brings us to the end of this session. Typically, we like to end with a little question about you: we would love to know what books you are currently reading. Is there any book you've liked recently that you would recommend to the audience? And how can people connect with you if they have any questions or anything they want to chat with you about?

Ben: Sure. Yeah. So, books. I'm just reaching the end of a book called The Road Less Stupid by Keith Cunningham. It's a leadership book, especially with regard to management and business operations, about asking the right questions instead of jumping straight to solutions.

It's all about being smarter strategically, I would say. The other book, more relevant to this talk, that I finished maybe within the last six months is a great one called The Culture Map by Erin Meyer. It speaks to the pitfalls of working across cultures, with colleagues, partners, or clients. It lays out scales of how cultures' perceptions differ on communication, time, and feedback, like how in one culture criticism can be constructive while in another the same criticism can be destructive. It's really powerful if you think about voice of customer: if you're trying to localize a program and you're working with customers in one market versus another, it becomes really, really powerful.

And then in terms of how to get in touch with me, LinkedIn is a good place. Twitter, I'm on there as BenJLabay, and through the agency you can email me directly.

Paresh: Great. Thank you so much for your time today, Ben. We really appreciate it, and have a great day ahead.

Ben: Yeah. Thanks Paresh. Have a good one. Bye.

