CHRIS ANDERSON: Thanks, Michael.
So I'm going to stand down here. I feel really gigantic when I stand up there. So I'm tall as it is. And when I'm two feet up, I feel like I'm a monster. So we'll just stay down here and try and make it a little more informal.
So welcome. As Dean Johnson indicated, I'm in the hotel school with a focus on pricing, marketing, and how that relates to management of revenue for firms in the service space-- so not just hotels, but airlines, rental cars. I also do work with performing arts and theater. So basically, anyone in the service space who wants to drive revenue, and how that relates to operations and marketing-- that's the space that I play in.
And today, I'm going to talk about how technology is really coming to bear on where those places converge. So how is technology converging, and how is it leading to performance? In this case, from the hotel standpoint, we're really focusing on average daily rate, occupancy, and the product of those two, revenue per available room. So basically, we'll look at performance across that space.
So when we look at travel-- you probably can't see all the numbers in the pie chart, but just look at the slices of the pie to get a sense. I'm really focusing on the online reservation space. And so right now, most of us are very technologically enabled.
But interestingly enough, when we think about online reservations in the hotel space, it's somewhere right around a quarter of all transactions-- so 26%, 27%. And that's roughly split: 1/3 of that is at third parties, the online travel agents, and 2/3 of that is at the brand.com-- Marriott.com, Hilton.com, et cetera. There's another 16 or so percent of reservations which are technology enabled, those being through the global distribution system.
So roughly somewhere around 35% or 40% of transactions are really hardcore technology, but there's still a lot of reservations which are done with the property directly or over the phone. We're going to focus here on looking at measuring that traffic on those online spaces, and at the same time, how that impacts total performance, not just the performance of that online space.
AUDIENCE: Excuse me. Are we getting a flash drive afterwards with this stuff on it or not?
CHRIS ANDERSON: I'm happy to email to you if that helps. So I can email you the slides. Probably on Friday, we will have-- the full report will be released by the Center for Hospitality Research. So you'll be able to go to our website and download a full copy of the report. That should be towards the end of the week. So if you need the slides in the interim, I can send you those. Otherwise, everything is, in more detail, encapsulated in the report.
And so I'm going to focus on two pieces here. The first one is research prior to that reservation. So what's happening with the consumer, this online consumer, prior to that transaction, and what's the role of social media in that process? And then lastly, what's the impact, specifically, of social media at the point of purchase-- so when you're trying to choose Hotel A over Hotel B-- and then ultimately, how that drives performance.
So this is really three studies that I'm talking about. I'm going to blur them all into one and give you a sense of those impacts together. And so for the first part of this, my data source comes from publicly available data through comScore. comScore is today's version of the consumer panel-- it's an online consumer panel.
comScore roughly has 2 million consumers that they track everywhere they go online-- so at the URL level, where people are going. So this is a non-paid panel, but they get some sort of benefits-- coupons, access to promotions, that kind of stuff. But basically, it's a subset of your colleagues who are being tracked online.
And so my focus here is all domain-level internet traffic for a subset of these 2 million people, specifically about 2,000 reservations that were made with InterContinental Hotels Group during the months of July and August in 2008, '09, and '10. And so basically, we take those 2,000 reservations, and we track everywhere they went prior to that reservation with the brand directly.
So did they do a search at Google? Did they visit Expedia? Did they go to TripAdvisor? How long did they spend on these sites? What sort of activity did they perform? And that's for a 60-day window prior to that reservation. So it's not the whole space of reservations, but a sample of roughly 2,000 consumers from this panel of 2 million. And then we generalize that behavior upwards.
And so this research was-- part of this was in earlier work that Dean Johnson talked about where I focused on the billboard effect. So some of this we've talked about before. But basically, roughly 3/4 of consumers are visiting a third party prior to booking with the property directly at their website. So prior to booking with Marriott.com, roughly 3/4 of those people have visited a third party intermediary or an online travel agent.
83% of customers are performing a search-- going to Google, Yahoo, or Bing. So prior to that reservation, they're searching one of those search engines. And of those searches, 2/3 are branded.
So not all this research is in the blind. A lot of it is I really want to stay at Marriott, so I'm focused in. Or in this case, I want to stay at the Westin, so I'm really focused in on the Westin. So a lot of that search is targeted research. It's not just cheap New York hotels. It's very focused.
And then roughly 2/3 of consumers-- here it's 65%-- do both of those actions. They're searching Google, Yahoo, or Bing and are also visiting an online travel agent. But the real interesting part here is that only 10% of consumers don't do any of that. They go right to Marriott.com to make that reservation.
So the other 90% are trying to make a decision between one supplier and another. Only 1 in 10 are really captive to a brand and go to that brand directly. So that tells us there's lots of marketing opportunity, because 90% of those consumers are still trying to decide between one brand and another.
And if we look at not just these fractions, but their intensity, it's actually quite astounding. So this graph here is the number of visits to travel-related sites. This could be Expedia. It could be TripAdvisor. It could be LasVegas.com-- so visits to travel-related sites prior to booking directly with the brand.
We see here that roughly 30% of people are visiting five or fewer sites, and then the fractions decrease. But over here in the tail, we see almost 5% of consumers visiting 150 or more travel sites prior to that reservation. So a lot of behavior is going into this research process. The distribution decays here, and the average number of visits is right around 15. So you get a sense of what people are doing as a distribution as well as on average.
And then if we look at searches-- so Google, Yahoo, or Bing-- then it's a little more compact. So I kept the scales here the same so you can compare these bars to the previous bars. So basically, we see more people performing more searches, but fewer people performing an extreme number of searches.
So here, the bulk of the searches are in the 10 to 20 range, for over 53% of people. So a lot of people are searching, but they're not spending hours and hours at Google. They're just narrowing down, and then they're doing a lot of that specific research at these other travel-related websites. So the search engine becomes a launchpad for some of these other travel-related sites.
If we look at TripAdvisor-- so with our social theme here, let's focus on just one of those travel-related sites, and that's basically TripAdvisor. And what's interesting here-- we have behavior across these three years, so July and August of 2008, '09, and '10, and what fraction of the people who booked with the brand.com directly visited TripAdvisor.
And what we see here is that these percentages are roughly increasing over this three-year period. So more people are visiting TripAdvisor. And what we see here is they're visiting more often. But they're looking at fewer pages and spending less time.
So part of this is perhaps that technology is better. Part of this is perhaps that we're more efficient shoppers. And part of this is perhaps that TripAdvisor's content has basically become more efficient and a little more streamlined. So more people are going there, but also, they're spending a little less time and are a little more focused in their efforts. So these are people who, again, booked with InterContinental Hotels Group directly, and what time and effort they spent at TripAdvisor.
So if we look at TripAdvisor as the main source for user-generated content, then what does this indicate about the impact of that user-generated content upon which firm we choose as consumers? When we look at this distribution-- so again, I'm a math and statistics guy, so I always talk about averages. But then I also have to show distributions just so we all get a sense that we're not talking just about averages.
We see a lot of this behavior is in the last five days prior to the reservation. So the bulk of that TripAdvisor information is really in those last few days before I choose Firm A over Firm B. So some of it is early on research. So this is the investigation phase. But this is really the decision phase where now I've narrowed it down. I'm really going to choose between these two properties.
And so we get a sense here that that user-generated content is really important when I've narrowed down my subset to these two or three properties. And so we've really gotten rid of price. Now I'm just focused on these last two or three firms. And so we get the bulk of that behavior really close in to that purchase decision. So it gives us a sense that this is a pretty important piece in the hotel selection process when I'm trying to choose one firm over the other.
And so now let's look specifically at that with a second data set. This data set is provided by Travelocity, and it includes basically the last page of results that consumers booked from. So you've done some research at Travelocity. You've gotten to that last sorted page. Consumers are going to purchase one of those 25 hotels.
So we capture the information on that last page. So which are those 25 hotels, what are the prices, what's their location, and what are their smiley faces. In Travelocity's world, user-generated content-- user reviews-- is the smiley faces. We also have things like number of stars, et cetera.
So we have data from roughly 14,000 booked hotels, and then just over 400,000 not-booked hotels. So you can think of this as I've looked at that page. We have 14,000 reservations. And then we have a much larger space of hotels that weren't transacted with for whatever reason.
And we focus on a series of attributes here-- price, stars, location, position on the page. So obviously, the further down the page hotels appear, the less apt you are to look at them. So again, that is important in trying to understand what drives conversion. And then lastly, user reviews-- so not just the number of reviews, but their score and how that leads to consumers choosing one hotel over the other.
And so what we look at-- so I'll leave the technical things for the full report. What we build is a type of regression model. It's referred to as logistic regression, where we're modeling purchase versus not purchase. And we can look at the probability I purchase this hotel versus I don't purchase it.
So the probability I purchase it divided by the probability I don't purchase it, which we refer to as the odds ratio. For all our gamblers at the horse track, you're talking about the odds on this horse-- that would be the same thing. So we have gambling on hotel rooms here, the odds that this hotel transacts.
And what we have here is a series of parameters that come from our modeling efforts. And you can think of these as the increase in odds given an increment of 1 in the attribute. So if we focus on the user review score, 1.142, what that means is, if I was to change my review score from 3 to 4 as a hotel, then my odds of being purchased would go up by a factor of 1.142-- i.e., you'd have 14.2% better odds of being purchased with the higher review score. So it's the odds of being selected if your review score increases.
The 1.002 means basically there's a 0.2% increase in odds for every incremental review you add. So I could have a great review score from 10 reviews. I'm probably going to be more likely to be selected if I have 200 reviews and a slightly lower review score, just because we as consumers tend to believe more versus less information.
And then the last one is price. And so for price, because we have this data across 10 cities in the US and across different chain scales, my price variable is a relative price-- my price divided by that of the similar properties on that page. So think about if I'm a four star in Times Square. Then this attribute is my price divided by the average price of four-star Times Square hotels that are co-displayed with me. So it's a relative price, to control for city, location, and star class.
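The relative-price attribute described above can be sketched in a few lines. This is a hedged illustration, not the study's actual pipeline: the hotel names, star classes, and prices below are made up, and the grouping is done per results page as the talk describes.

```python
# Hedged sketch of the relative-price attribute: each hotel's price
# divided by the average price of same-star hotels co-displayed on the
# same results page. All data values below are hypothetical.
from collections import defaultdict

page = [  # (hotel, stars, price) for one results page
    ("A", 4, 220.0), ("B", 4, 180.0), ("C", 4, 200.0), ("D", 3, 120.0),
]

# Average price per star class on this page
totals = defaultdict(lambda: [0.0, 0])
for _, stars, price in page:
    totals[stars][0] += price
    totals[stars][1] += 1
avg = {s: total / n for s, (total, n) in totals.items()}

# Relative price: 1.0 means priced at the peer average for that star class
rel_price = {h: price / avg[stars] for h, stars, price in page}
print(round(rel_price["A"], 3))  # 1.1 -> 10% above its 4-star peers
```

So a value above 1.0 means the hotel is priced above its co-displayed peers of the same star class, which is the form the price coefficient acts on.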
And so if we put the price variable together with the score variable, then what that means is if I was to increase my score, say, from 3.8 to 4.8, I would have a 14.2% better chance of being selected. At the same time, I could increase my price by 11.2% and maintain the same market share, the same occupancy, or the same probability of being purchased.
So when we put these two coefficients together, that tells us the impact of reviews and price together. So we have substantive pricing power through user reviews. So if I go from a 3 to a 4, that supports roughly an 11% increase in price.
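The odds arithmetic above can be sketched as follows. This is a hedged illustration rather than the study's fitted model: the log-odds coefficients are back-derived from the multipliers quoted in the talk (1.142 per review-score point, 1.002 per review), and the relative-price coefficient is the value implied if a one-point score gain exactly offsets an 11.2% price increase, which is an assumption for illustration.

```python
import math

# Hypothetical log-odds coefficients, back-derived from the quoted
# odds multipliers (illustration only, not the study's fitted model):
beta_score = math.log(1.142)   # +1 review-score point -> odds x 1.142
beta_count = math.log(1.002)   # +1 review            -> odds x 1.002
# Implied relative-price coefficient IF +1 score point exactly offsets
# an 11.2% relative-price increase (assumed for illustration):
beta_price = -math.log(1.142) / 0.112

def odds_multiplier(d_score=0.0, d_reviews=0, d_rel_price=0.0):
    """Multiplicative change in purchase odds for changes in attributes."""
    return math.exp(beta_score * d_score
                    + beta_count * d_reviews
                    + beta_price * d_rel_price)

print(round(odds_multiplier(d_score=1), 3))       # 1.142: score 3 -> 4
print(round(odds_multiplier(d_reviews=100), 3))   # ~1.221: 100 extra reviews
# Raise the score by 1 AND relative price by 11.2%: odds are unchanged.
print(round(odds_multiplier(d_score=1, d_rel_price=0.112), 3))  # 1.0
```

Note how the 1.002 per-review effect compounds: 100 additional reviews multiply the odds by about 1.002^100 ≈ 1.22, which is why a hotel with many reviews can beat one with a slightly higher score but few reviews.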
Anytime you have questions, just fire away. That's why we're here. I'll pontificate for a few more minutes, and we'll have lots of questions. But if you have questions along the way, we'll talk about those.
AUDIENCE: The 1.14 is 14%?
CHRIS ANDERSON: Yes. So the 1.142 basically means we just get rid of the 1. We talk about a 14.2% increase in that review.
So this is at the time of purchase. So I've got this display. I've narrowed it down. What's the likelihood I choose Firm A over Firm B?
But as we talked about, online travel agents are roughly 8% of total transactions at a firm in the industry as a whole. So yes, it's impactful, but that's only a small piece of my reservation volume. So can we generalize this to my overall performance, not just this little 8% of our transactions?
And so to do that, we look at the third study here. And here, we combine data from two different sources. So Smith Travel Research-- I'm assuming most of you are familiar with this. It's an aggregator of hotel performance-- ADR, occupancy, and RevPAR for both the hotel and its competitors. So we have a good sense of how I'm doing relative to my competitors.
And then ReviewPro, a company out of Barcelona, is basically an aggregator of user-generated content as well as social media across the whole space-- so not just TripAdvisor reviews or just Travelocity reviews, but putting it all together so we get a sense of a hotel's overall online reputation. So it's an aggregate. Just like Smith Travel Research aggregates performance, ReviewPro aggregates online reputation.
So we put those two together. So now I have my online reputation and I have my performance. And we do that as a matched sample. So we'll look at both the firm's online reputation as well as its performance relative to its competitors' performance and online reputation.
So we have basically 2 and 1/2 years of monthly data-- so monthly hotel performance and ReviewPro's GRI, which is what they call their Global Review Index. And so we have that across 11 cities, which is roughly 50,000-plus observations. So these are monthly performance observations.
We're going to focus on ADR, occupancy, RevPAR, and then ReviewPro's GRI, and basically focus on demand. What's the impact of demand given my online reputation? What's the impact of my pricing power or ADR given my online reputation? And ultimately, what's the impact of RevPAR given my online reputation? So how does that-- not just the purchase incidence, but this overall look change?
And what we see here is what I refer to as online reputation elasticity. So this is how the percent change in, say, ADR-- so how does my ADR change on a percent basis given I improve my online reputation by a percent? So if I was to increase my reviews scores by 1%, what would happen to my pricing power, occupancy, or RevPAR?
And so the first line here is when we look at all hotels as one basket. And we'll see that if I improve my online reputation, I gain some pricing power. I gain a little bit of demand. And then these two together lead to roughly a percent change in RevPAR. So if I was to increase my online reputation by 8%, then my RevPAR will increase by 1%.
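The elasticity reading above can be sketched as a simple first-order projection. This is a hedged illustration: the 0.125 pooled elasticity is just the "8% reputation gain for 1% RevPAR" figure quoted in the talk restated, the 1.5 figure is the midscale elasticity mentioned later, and the hotel, GRI scores, and RevPAR dollars are made up.

```python
def revpar_lift_pct(pct_reputation_change, elasticity):
    """First-order percent change in RevPAR for a percent change in GRI."""
    return elasticity * pct_reputation_change

# All hotels pooled: ~8% better reputation -> ~1% better RevPAR
print(revpar_lift_pct(8.0, 0.125))   # 1.0

def projected_revpar(revpar_now, gri_now, gri_new, elasticity):
    """Project RevPAR after a reputation change (first-order only)."""
    pct_gri = (gri_new - gri_now) / gri_now * 100.0
    return revpar_now * (1.0 + revpar_lift_pct(pct_gri, elasticity) / 100.0)

# A hypothetical midscale hotel at $80 RevPAR lifting its GRI from 80
# to 84 (a 5% gain), with the ~1.5 elasticity quoted for midscale:
print(round(projected_revpar(80.0, 80.0, 84.0, 1.5), 2))  # 86.0
```

The projection is only valid for small changes, since an elasticity is a local percentage relationship, not a guarantee over large reputation swings.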
Interestingly, when we take the properties and separate them out into their chain scales, we start to see some really interesting results.
So here, we move down in quality from luxury to midscale. And let's just focus on RevPAR. So if I'm a luxury hotel and I increase my online reputation by 1%, then basically, my performance goes up by half a percent. So it's not insignificant, but maybe it's not worth all your marketing effort to improve that online reputation.
But as I decrease chain scale, typically, we increase the uncertainty in the product. With a luxury product, there's not a lot of uncertainty, so we're not necessarily using what other people say to weigh one firm against another.
But as I decrease chain scale, there's a lot of variance in the product. And hence, what prior guests have to say, what sort of sentiment is available online, starts to impact that hotel selection decision. And as a result of that, we see much more dramatic changes in hotel performance here. So we go from half a percent to almost 1.5%. So there's a substantive impact as we move down the chain scales.
And then as we look at the other two columns, what we see here is there's not the same consistent story. Because at an individual level, you can think of one hotel choosing to drive rate given its improved TripAdvisor score and another hotel choosing to drive demand given its improved TripAdvisor score. So how a hotel capitalizes on its improved reputation is an internal strategy decision. And regardless of whether they choose to do it with ADR or occupancy, the net comes down to RevPAR.
Anecdotally, I've left economy off the graph, because at the early stage of the study, we decided not to look at economy-- we didn't think it would have the same impact. But interestingly enough, a few economy hotels snuck their way into the data set. And the story continues as we move down to economy and actually becomes even more dramatic, with economy hotels seeing roughly a 1.7% change in RevPAR given a percent change in their online reputation. So that story continues as we decrease the chain scale.
Interestingly, the story is also consistent both in the domestic US as well as internationally. So our sample here is 11 cities. Five of those are US. Six of those are international. The story is pretty consistent also across independent versus branded. Yes.
AUDIENCE: Were there any scenarios where increased reputation had a negative impact on performance or vice versa?
CHRIS ANDERSON: So without a doubt, at the individual property level, probably yes. If we look at the individual property level, for sure, there are probably examples of that. But when we start to aggregate things, that's never the case-- in aggregate, we don't see that.
Keep in mind that firms are doing lots of things to drive demand and drive rate. And so this is just one of those drivers. And so we're only explaining part of the variation in performance. There are a lot of other marketing levers that we're pulling. There are a lot of other factors that are going on.
So at the individual level, yes, we could see some of that. But once we aggregate things and basically control for all those other things-- so if all those other things we don't measure are random across all our sample, then we get to this story. But at the individual, yes, you might see some of that.
AUDIENCE: Are you talking here strictly about recreational travel, or do you break out business travel separately, where people may have less choice because of the location of the convention or whatever?
CHRIS ANDERSON: Right. So interestingly enough, this is total performance. So this is total revenue at the hotel, total rooms sold. So it's all those pieces of business together. So yes, a subset of that may be driven by online content. Another subset may be driven by contracted rates. But we're putting that all together so we have this total picture.
So the Travelocity study is really just this leisure or mercenary traveler who has full flexibility in how they choose one firm over the other. So when we look at this, this is that one guy who's really making that individual purchase decision.
When we look at this study, now we're looking at total travel. So that's groups. That's contracts. That's negotiated. That's leisure. That's business. So that's everything together.
Obviously, some of those segments, this matters more. Others, it matters less. But when we put that all together, we still get some pretty substantive impacts upon performance as a function of how you manage your online reputation.
AUDIENCE: Online reputation just means social buzz and reviews, right?
CHRIS ANDERSON: So we're predominantly looking at reviews here-- so user-generated content, the aggregation of user reviews, say, across Priceline, Expedia, Travelocity, Booking.com, TripAdvisor-- all those sites, putting that together into one macro score. ReviewPro also captures information on sentiment as well as these reviews. But we're predominantly focused on that user-generated content right here.
AUDIENCE: Even though the study didn't look at this, why do you think traffic laws influence choice? You mentioned reviews on sites like TripAdvisor. Also, whether demographics have any role in this. Anecdotally, [INAUDIBLE] I wonder [INAUDIBLE] to agencies.
CHRIS ANDERSON: So we see-- the reason I included the first study is to show that increasing traction of TripAdvisor in the purchase phase. So I think over time, these social sites are becoming increasingly more important. Even if we think they're targeting more active users, I think all users are becoming more active, more technology savvy.
And so we see that with the TripAdvisor information, that regardless of demographics, there's a shift to being more secondary sources to drive that decision. So we see that increasing over time. Again, so we're not controlling at the individual consumer level as much as we're looking at the macro level.
I think as-- to go back to your first question about travel-- more aggregators and how that other stuff comes together, I think that this content is getting connected all over the space. So as we start to see things like TripAdvisor reviews on Travelocity or showing up at Kayak, or showing up at Priceline-- so this content is becoming universally spread across all these different intermediaries, and it's really hard to avoid it. Even if you're not looking for it, it's going to be in front of you.
You guys are fast. Any other questions?
AUDIENCE: What's RevPAR?
CHRIS ANDERSON: Revenue per available room. So that's total hotel revenue divided by total rooms available. So it's basically your performance on a per room basis. So ADR is Average Daily Rate. So that's just your total revenue divided by how many rooms you actually sold.
So I could have a high ADR if I only sold one room for $1,000. I would have a very low RevPAR if I sold one room for $1,000 and I had 100 rooms. So now I'm down to $10.
So basically, it's-- and they're all part of the performance drivers. Some firms will try and drive rate from a positioning standpoint. We're going to focus on rate. Others will drive occupancy to focus on ancillaries. If I look at food and beverage or other revenue streams, I want to make sure we're occupied.
And then RevPAR is the product of these two. So the percent of rooms I've sold times their average rate is basically my average rate per room, not per sold room.
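The definitions above, including the one-room-at-$1,000 example, can be sketched as a short calculation. This is a minimal illustration of the standard formulas, with the hotel's numbers taken from the example in the talk.

```python
def adr(room_revenue, rooms_sold):
    """Average daily rate: revenue per *sold* room."""
    return room_revenue / rooms_sold

def revpar(room_revenue, rooms_available):
    """Revenue per *available* room."""
    return room_revenue / rooms_available

# The example from the talk: one room sold at $1,000 in a 100-room hotel.
revenue, sold, available = 1000.0, 1, 100
print(adr(revenue, sold))              # 1000.0 -> high ADR
print(revpar(revenue, available))      # 10.0   -> very low RevPAR
# Equivalently, RevPAR = occupancy x ADR:
occupancy = sold / available           # 0.01
print(occupancy * adr(revenue, sold))  # 10.0
```

The identity RevPAR = occupancy × ADR is why a hotel can trade rate against occupancy and still land on the same RevPAR, which is the strategy point made earlier about the chain-scale results.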
AUDIENCE: So [INAUDIBLE] perform better? The performance would be better if they were more flexible about their rates based on their online reputation?
CHRIS ANDERSON: No, that's not what I'm trying to say. Really what I'm saying here is that online reputation is not as critical at the luxury level to sway a consumer from Four Seasons over to the Peninsula. So if I'm going to make that tradeoff as a luxury traveler, for starters, I probably really don't care what somebody else has to say. So that's part of that.
So you have different-- you value the opinions of travelers en masse very differently as you move up chain scales. And at the same time, that product is relatively standardized at that luxury level. And so as a luxury firm, I know I have to have superior product, and I'm not going to deviate from that too much. So there's not a lot of variance across that.
Whereas if we move down the chain scale, then-- not to pick any firm out, but as we look at, say, one Holiday Inn versus another or one Courtyard versus another, there's a lot of variance from location to location within that product. Even as much as the brand tries to standardize that, there's still going to be some variation. Here, we're trying to-- reviews are helping to reduce some of that variation.
AUDIENCE: Did you see in the study any particular subjects or operational areas that hotels could focus on that would improve their review score?
CHRIS ANDERSON: So interestingly enough, that's what we're working on now. So basically, ReviewPro's Global Review Index-- I use it at a macro level, the total hotel performance. But they also break it out into departments. So you can look at user reviews associated with the front desk or housekeeping or service. And so we can subdivide that online reputation into 10 departments and then look at, OK, of those 10 departments, which is the most impactful. So yes, the overall view matters, but maybe I need to focus in.
Now, that's very much more tactical. So if I'm an individual hotel and I'm trying to manage my online reputation, I might look more at departmental scores to figure out, OK, where can I move the lever the best. What are people complaining about? They're complaining about my breakfast. So therefore, I can put effort to improve that and therefore improve my overall performance.
So I think that's a very tactical approach. We're going to try and see, though, if we can put that tactical information together at a higher performance level. So that's--
AUDIENCE: That will be the next piece?
CHRIS ANDERSON: Yeah, it's the next piece.
AUDIENCE: Chris, can we think of the data you've shown as supporting a general diminishing returns to quality?
CHRIS ANDERSON: Yeah. Basically, it is. I attribute that to variance. So better quality, reduced uncertainty around the product. Therefore, I don't need this secondary source to reduce that uncertainty.
AUDIENCE: You can equate quality with-- higher quality, the more expensive.
CHRIS ANDERSON: Right. So that would be our major measurement of quality here. Obviously, there's variance within a chain scale. But the variance within a chain scale is much less than the variance across chain scales.
AUDIENCE: I think that reputation is a proxy for quality.
CHRIS ANDERSON: Yeah.
AUDIENCE: So you were talking about luxury brands, hotels, and you were saying that people don't really care what other people think.
CHRIS ANDERSON: Maybe that's my own opinion.
AUDIENCE: I thought that was good. What about within the luxury brands, though? Would they want a luxury brand website that they're going to, or do they care about jumping ship from the Four Seasons-- when you were mentioning Four Seasons to the Peninsula. Can you talk about that?
CHRIS ANDERSON: So I would expect most of that jumping brands at the luxury segment is probably based upon experience, own personal experience versus the experience of others. And that's why we would see this score as being lower.
So I might change one individual supplier from another as a function of a poor service experience, but that's probably a service experience that I experienced versus one that somebody else experienced and told me about. Whereas down here, I'm more apt to choose one firm over another. That's why we see this.
AUDIENCE: And you also mentioned that it's going to be a different brand, whereas with the lower end ones, they might go from a Courtyard to something else you mentioned. However, we know that Ritz Carlton is not uniform, because they're just doing the management of the hotels and are dictated quite often by the owners. If the owners don't want to put a lot of money into it, then the management is not going to. So it does vary from place to place.
CHRIS ANDERSON: It does. Without a doubt, there is variance in product within an individual brand. What I'm alluding to here is that that variance relative to the quality is typically smaller as we go up the chain scale. So yes, there's variance across Ritz Carlton from one location to another, but probably not to the same degree as there's variance, say, in one Holiday Inn versus another one that hasn't been refurbished in 10 years in a different city. So there's, I think, more variance as we go down that chain scale.
AUDIENCE: This is probably something you couldn't quantify, but does this perhaps suggest that luxury consumers are more gullible or less gullible than those non-luxury consumers?
CHRIS ANDERSON: Not necessarily more or less gullible, but probably more or less opinionated would be what-- so the luxury consumer is probably more focused on their own opinion versus that of others is one of these things we might generalize.
AUDIENCE: Have you been able to factor in the role of traditional travel agencies into your studies in any way?
CHRIS ANDERSON: So basically, they're in this. This is basically, in a day, how much money I made and how many rooms were occupied across all these different channels. So that's part of the puzzle.
So going back to the very first slide where I showed the pie-- so given the size of the pie-- traditional, travel agents, GDS-- somewhere around 16% or 17% of total room nights comes through a travel agent. So those 17% are weighing into this in their proportion. So it's all part of that.
AUDIENCE: I wanted to speak to this gentleman. I agree with you on the luxury level, though. You feel like-- when I get booked for business trips, if they say it's a 5-star, luxury hotel, I'm not as likely to check it out as if they say it's a Holiday Inn or something else. You're going to be more likely to look at this and see what it looks like because of the reputation of luxury. So the luxury customer doesn't worry as much, because pretty much across the board, those hotels are nicer.
CHRIS ANDERSON: So it's good to be great, and there may be variance around great. But--
AUDIENCE: It's still going to be great.
AUDIENCE: Do any of these numbers take into account frequent stay cards, like frequent flyer membership cards?
CHRIS ANDERSON: So yes, to the level that they're part of the overall performance. So that's the nice part about this study, where we combine ReviewPro and Smith Travel Research-- we put all those worms into the same can.
Now, we don't control for it. So we don't look at what people came through rewards programs at one hotel versus another. But they're just all part of that.
Now, I did attempt to control for independents versus branded. And there was really no impact for independents. So you would expect that if loyalty programs were a major driver, then once we control for independents, that would take some of that impact out. But when I attempt to control for independents, the results are basically the same, and being independent doesn't really matter.
So we expected to see more impact for independents. I also expected to see more impact at international locations versus domestic US, because internationally, we're 70% independent, 30% branded. And we're the opposite in the United States, where we're mostly branded and 20% or 30% independent. But again, the results were consistent across independents as well as Europe versus US once we control for chain scale.
At the macro level, yes, there's a difference for independents in Europe. But once we control for chain scale, all those other effects get dwarfed. So chain scale is clearly the main driver here.
AUDIENCE: These results are-- you said US and Europe. Does it also include Asia?
CHRIS ANDERSON: Right now, just Europe. So we've got six-- Madrid, London. All the cities are listed in the report. So good motivation to download the report. Yes.
AUDIENCE: What about how some online sites, TripAdvisor and stuff, they say that the results are skewed. So if [INAUDIBLE] employees and my friends give me a good review, or I go on all my competitor's sites and give them a bad review. Could that skew the results a little?
CHRIS ANDERSON: So the nice part of looking at ReviewPro is that we put all those together. So we have reviews from TripAdvisor, which are not necessarily confirmed stays. They could be anybody. Whereas we have typically reviews from online travel agents-- Expedia, Priceline, Booking, Travelocity-- they're from confirmed guests.
So this is somebody who booked with me, and then I sent them an email. Then they did the review. So we put all those together in creating the global review index. So that's all blurred into there. So we kind of control for it in that it's not just-- it's validated opinions as well as unvalidated ones, I guess, is the way to look at that.
CHRIS ANDERSON: So there should be equal doubt, yes.
AUDIENCE: So one of the things that ReviewPro didn't have is word of mouth, which is big time [INAUDIBLE]. Do you find maybe that's a limitation to your study, then? Is that something for future research, maybe?
CHRIS ANDERSON: So we focus on ReviewPro's Global Review Index, which is predominantly user-generated content, review scores. They also have sentiment information. And we're going to basically look at some of the correlation for that. But they're pretty strongly related.
Yes, they are different measures. But they tend to be strongly correlated, so we see some of that. So RJ is here from ReviewPro. I don't know if you want to comment on--
AUDIENCE: That's interesting what you're saying that there's different [INAUDIBLE]. And what we found in working with thousands of clients around the world is that [INAUDIBLE] content, that's what's really influencing the decision if I go to Hotel A or B and I'm only going to spend $100 or $150. And social media-- [INAUDIBLE], Twitter, and Facebook and Pinterest-- this is clearly interesting and relevant content, but it's very much different. It's not influencing my decision.
So what we're finding is that, for example, Twitter and Facebook, [INAUDIBLE] by many hotels, both big and small, for service recovery. So it's relevant and interesting [INAUDIBLE] social, but the impact is not nearly the same when it's time to make a decision on Hotel A or B or the impact of price, what you're willing to pay.
AUDIENCE: Could you talk about service recovery? What is it, and how are they using those things?
AUDIENCE: Actually, probably the best way is to give a couple examples.
Service recovery-- what I mean by that is an extension of their client service or customer service. A couple really good examples-- the Citizen M hotel, which is a chain-- it's one of the hottest chains on the planet right now. They've got five or six properties. They're growing very rapidly. And they're a very customer-centric organization.
And so they're using social media-- primarily Twitter-- as a way to engage with clients prior to stay and while on stay, and to interact with guests' experiences. And so when I say service recovery-- because today, oftentimes what clients are doing, if they're on property and there's an experience that they're not happy with-- they're also sharing positive experiences, but even more so negative experiences-- they're tweeting those.
And so hotels like Citizen M, and chains like the Corinthia Hotels, are using Twitter as a way to monitor feedback about the experience in real time, and they're responding quickly. So when I say service recovery-- if someone says, hey, I'm in a great presentation at the Westin Hotel, but I'm freezing my backside off, it's really cold in the room-- people are tweeting these kinds of things. Hotels are listening to that and actually responding in real time.
So when I say service recovery, it's a great way to listen, engage in real time, and respond in real time. And that's having a huge impact.
CHRIS ANDERSON: In driving your review score. So it feeds back to the creation of better review scores. And so it impacts at that level as it influences--
AUDIENCE: Absolutely. Because in the end, when someone goes to-- imagine they booked their stay on Booking or on Priceline or Expedia. In the end, what they're doing is sharing their experience and giving very detailed quantitative and qualitative feedback about the degree to which the hotel met their expectations.
So as Chris says, when hotels are using Twitter and Facebook as-- as I said, a service recovery system, or to be in real-time communication-- to the extent that they're doing that, it has a direct and positive impact when it comes time to write the review. So these things are related.
They have different roles in the process. They influence the consumer in different ways. But again, what we're finding is that the clear influencing factor, when it comes to choosing a hotel and determining price sensitivity, is what's happening in the reviews.
AUDIENCE: Do you guys have a roster of hotels-- a roster of those that are managing their online reputations best?
AUDIENCE: Not really, per se. We've got 4,000 clients around the world, from very small-- our smallest client has 12 rooms-- to our largest client, the Louvre Hotel Group, which has 1,100 properties. Our client with the lowest average daily rate is at 38 euros.
It was interesting. When they signed up with us, I thought, this is going to be a difficult client to keep. They've renewed for three years. Because what they're doing is they're listening and modifying their products.
So anecdotally and internally, we see the use, the engagement, the renewal rates. But we haven't published that in some sort of detailed list.
AUDIENCE: I was just wondering, are there certain websites or sources or [INAUDIBLE] that are weighted more than others [INAUDIBLE]
AUDIENCE: So basically-- so the question is how do we calculate this global review index. So when we developed the GRI, we got feedback from industry experts, from mathematicians, from clients. And it's an algorithm, so it's not an average. And so what we're doing is-- this is being taken into account.
AUDIENCE: But you said maybe-- [INAUDIBLE] specifically [INAUDIBLE]?
AUDIENCE: Well, the details of exactly how we calculate this--
CHRIS ANDERSON: That's how he makes money.
AUDIENCE: This is the secret sauce. But to give you an idea, today there are over 2,000 hotels in the world that use the GRI as a means of setting property-level goals. To give a concrete example-- Melia Hotels, which is one of the largest chains in Europe, has nearly 400 GMs and another close to 100 individuals in sales and marketing who have a monthly goal for the GRI score for their property.
And in that case, 20% of these individuals-- their bonus is tied to the performance. So it's something that was developed over a period of time with lots of feedback. And what we're seeing is a very high level of industry acceptance with that.
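ReviewPro's actual GRI algorithm is proprietary, so the sketch below is purely illustrative: a generic aggregate review index that weights each source by its review volume, so that sites with more reviews carry more influence. None of the weights, sites, or scores come from ReviewPro:

```python
# NOT ReviewPro's GRI -- that algorithm is proprietary. This is a
# generic, purely illustrative aggregate review index that weights
# each review source by its review volume.

def review_index(sources):
    """sources: list of (site, avg_score_0_to_100, n_reviews)."""
    total = sum(n for _, _, n in sources)
    if total == 0:
        return None
    return sum(score * n for _, score, n in sources) / total

# Hypothetical scores for one hotel:
sources = [
    ("TripAdvisor", 84.0, 500),  # reviewers not necessarily confirmed guests
    ("Booking", 88.0, 300),      # confirmed guests
    ("Expedia", 80.0, 200),      # confirmed guests
]
print(round(review_index(sources), 1))  # volume-weighted index
```

A real index would presumably also account for factors like recency and source credibility; this simple volume weighting is just one plausible ingredient.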
AUDIENCE: In the current economy, I would think that price sensitivity might be a [INAUDIBLE]. If you go to something like Travelocity, they'll ask if you want the hotels broken down by economic patterns. Someone may not do that, but that may wind up being the ultimate issue that has to be a decision factor for them. I'm wondering how you determine in your studies what people actually make their decision based on, as opposed to the order in which they look at something-- location, price, quality of the hotel, other people's experiences, et cetera.
CHRIS ANDERSON: OK. So the data source that I use for the study is the last screen that they look at. So they've done all their-- we'll call it their pre-sort behavior. They've either sorted by distance from city center, sorted by chain scale, or sorted by price. So the purpose of looking at just the last 25 hotels, versus all 500 that they looked at, is exactly what you're talking about.
Given what you thought was important-- so controlling for that-- what is the impact of price and user reviews? We specifically focus on the last 25 properties, just because people have their own preferences. They might sort by location or price. So we condition on that sort, and you're down to those last few.
So in some instances, the sample is not 25, but it's four. I've narrowed it down to four hotels, and then, given that constrained set, what's the impact of reviews, price, location, all that kind of stuff? So we control for that by only using that data source.
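The "last screen" idea can be sketched as follows: within each shopping session, keep only the hotels on the final results page the user saw, then compare the booked hotel's price and review score against the alternatives in that same consideration set. The session data, field names, and numbers are all invented for illustration:

```python
# Illustrative sketch of conditioning on the final consideration set.
# All session data, field names, and numbers below are made up.

sessions = [
    {
        "last_screen": [  # the final, pre-sorted list the user saw
            {"hotel": "A", "price": 120, "review": 4.2, "booked": False},
            {"hotel": "B", "price": 135, "review": 4.6, "booked": True},
            {"hotel": "C", "price": 110, "review": 3.9, "booked": False},
            {"hotel": "D", "price": 150, "review": 4.4, "booked": False},
        ]
    },
]

for s in sessions:
    cset = s["last_screen"]
    chosen = next(h for h in cset if h["booked"])
    others = [h for h in cset if not h["booked"]]
    avg_price = sum(h["price"] for h in others) / len(others)
    avg_review = sum(h["review"] for h in others) / len(others)
    print(f"booked {chosen['hotel']}: "
          f"price {chosen['price']} vs {avg_price:.0f} avg, "
          f"review {chosen['review']} vs {avg_review:.2f} avg")
```

Because the user's own sort already encodes their preferences, comparisons within this last set isolate the marginal pull of price and reviews.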
AUDIENCE: Do you look at-- is there more data available for the luxury and higher-end hotels? Is there a way to gauge how much content there was for the different chain scales? And wouldn't you think that if fewer people cared at the higher end, there would be fewer reviews at the higher end-- which doesn't seem to be the case anecdotally? I don't know if there's a correlation. I don't know what I'm really asking.
CHRIS ANDERSON: Yeah, that's interesting. So here, I look at the number of reviews matters. But what you're asking is--
AUDIENCE: Is there more data.
CHRIS ANDERSON: Is there more reviews by chain scale. So that's a note to self. I'll look at that and see if we can-- are consumers more apt to share content as they think it's more important.
And so yes, there's going to be more total reviews by chain scale, just because there's more stay nights. But controlled as a percentage of stay nights or reviews, that's--
AUDIENCE: You didn't really look at--
CHRIS ANDERSON: I didn't look at that, but that's a very good question. Thanks.
AUDIENCE: One thing we found is it also has to do with the behavior of the chains themselves. So if you go back two years, it would have been very, very rare to find a luxury chain that was even encouraging its guests to write reviews. Because the demographics of those visitors a couple years back probably weren't heavy users of TripAdvisor and the other sites.
So what we've seen over the last couple years is a radical shift in the perception of the importance of gathering this guest satisfaction data. So the numbers would probably-- this comment is more anecdotal than based on exact numbers-- but I think clearly the luxury properties are putting more emphasis on that, encouraging their guests. So it's creating this positive dynamic where the number of reviews across the luxury segment is growing.
CHRIS ANDERSON: Yeah. So in the early days, a lot of brands thought the only people who were going to leave comments were the people who were really annoyed, and then at the other extreme, the people who were extremely satisfied. So it's really the outliers that review.
But the evidence is the exact opposite to that, that there's a whole distribution of responses and a whole spectrum of guests are replying. So just as I value that content, I want to share my experience regardless of the spectrum of experience that I had. And properties are realizing that and encouraging that feedback, because that's becoming impactful.
AUDIENCE: What's the ratio between positive and negative?
CHRIS ANDERSON: So that varies by chain scale. And so interestingly, it's not so much the ratios that matter. There's different factors that go into that. And so we don't necessarily focus on the numbers.
There's clearly cliffs. So once I get below a certain level, then the water dries up. So it's not so much the number of those negatives, but their aggregate. So it's really their proportion. Because the review scores are definitely chunky in how consumers react to those.
And that's part of the reason-- when you look at a Travelocity score, for the most part, it's chunky. It's smileys or half smileys, not third or three-quarter smileys. For most of us, we make decisions not on whether it was exactly 3.3 versus 3.5-- we do things in steps versus a continuous measurement. And that's why, on a lot of the review sites, in order to get the actual number, you have to click through to the hotel. Otherwise, we just get an aggregate score by looking at it.
AUDIENCE: I do read a lot, however, about-- in travel sections in places like the New York Times where they talk about people seeding these review sites, like TripAdvisor. Let's have everybody on our staff write in, and pretend that they're a guest, and how wonderful it is. And that hotel across the street that's stealing all our business, let's write in and talk about the bed bugs we found in the mattresses, et cetera. So there's a tremendous amount of distrust in this particular area right at the moment. And I'm wondering how you can factor that.
CHRIS ANDERSON: So I wouldn't say distrust, but there's definitely some questions around how biased or unbiased-- yeah, suspicion about the quality of reviews, especially from TripAdvisor, which again, as I indicated earlier, is not a confirmed stay. It could be anybody who writes that report.
And so as I tell most people, we have to be educated in how we use that information. So you look at things en masse versus individually. That's why, when we go to reviews at places like TripAdvisor, review volume matters. Because it's not so much the one-offs, but were there 50 people who said that versus five people. So at some level, it comes down to volume.
And that's part of the reason we focused in on the GRI as a metric versus user reviews from Travelocity, to look at things on the whole versus just one of those data gatherers.
AUDIENCE: Either you or anyone else, are you ranking any attempt to [INAUDIBLE] who's giving the opinions [INAUDIBLE]. Are all of the opinions coming from any particular [INAUDIBLE]?
CHRIS ANDERSON: So we've seen some of that anecdotally, but not globally presented. So TripAdvisor has talked a little bit about the demographics of respondents, as do most of the OTAs. And for the most part, if we look at-- the only people who are in the position to really disclose that fully are the online travel agents, because they know who you were when you booked it, as far as your credit card, et cetera, and then what you've said.
And for the most part, the anecdotal evidence from those is that the reviews are the same distribution as the reservations. So if I look at who has reserved with me and then I look at the distribution of who has responded, there are very similar distributions across those. So if 20% from this demographic, 20% of the reviews are from that demographic. So pretty consistent across. There's no bias in who gives those opinions.
AUDIENCE: You mentioned a little bit earlier that you [INAUDIBLE] content, you're more likely to respond to these. Is there a trend of certain individuals consistently commenting or providing most of the feedback on these hotels, or is it different? And are you more likely if you've commented once to comment multiple times on your experiences?
CHRIS ANDERSON: So TripAdvisor is really big into that. So TripAdvisor creates badges for how many reviews you give. The one thing about TripAdvisor is they're also connected to Facebook in that if you're using your same Gmail for TripAdvisor as Facebook, or if you're using your Facebook to login to TripAdvisor, then they will share information about people you know who have provided reviews.
So there is this incentive. TripAdvisor is not just potentially showing anonymous reviews, but from people that you know. So it's almost like word of mouth and talking to somebody across the street in that there is some sort of mentality to this. So it's both of those. And to encourage more reviews, they're trying to create this system of badges where you get different levels as a function of how many reviews you've created.
So is your opinion-- so the question becomes, if you stay more and have more opinions, does that mean your opinion is worth more? That remains to be seen. Just because you're opinionated doesn't mean your opinion is valued. So that's a different question, I guess.
AUDIENCE: What do you think is the biggest takeaway for the hotel industry from this data? How should they react?
CHRIS ANDERSON: So the big one is I would say it definitely matters. Reviews matter. Review volume matters. Review content matters. And it's worth the investment.
So we're talking in the neighborhood of 1% to 1.5% RevPAR impact. That's a substantial performance improvement. And think of it-- it's all incremental. So a 1% change in RevPAR is probably a 10% change in bottom line. So this is not just trivial.
And for the most part, it's engaging consumers, seeing what they didn't like and liked, and reacting to it. So the industry wants to make guests happy. And user reviews are a great way to find out what they liked and didn't like. So it's crazy not to gather them and respond to them, because you're basically improving that customer experience. So that would be the takeaway, is that if you listen, you will be rewarded.
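The flow-through arithmetic behind "a 1% change in RevPAR is probably a 10% change in bottom line" can be checked with made-up figures: because most hotel costs are fixed, incremental room revenue falls almost entirely to profit. The cost structure below is invented purely to illustrate the leverage:

```python
# Why a ~1% RevPAR lift can mean ~10% on the bottom line: hotel costs
# are mostly fixed, so incremental room revenue flows through to profit
# almost entirely. All figures below are invented for illustration.

revenue = 1_000_000.0    # annual room revenue
fixed_costs = 850_000.0  # staff, property, etc. (do not vary with occupancy)
variable_rate = 0.05     # assume 5% of revenue varies with occupancy

def profit(rev):
    return rev - fixed_costs - variable_rate * rev

base = profit(revenue)            # baseline bottom line
lifted = profit(revenue * 1.01)   # after a 1% RevPAR (revenue) lift
pct_change = (lifted - base) / base
print(f"profit change: {pct_change:.1%}")
```

With these illustrative numbers, a 1% revenue lift raises profit by about 9.5%, in line with the order of magnitude quoted above; the exact multiple depends on the property's fixed-cost share.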
One last question.
AUDIENCE: I guess it's kind of obvious, but do you think that this is going anywhere anytime soon, or do you think that we'll keep using reviews for the long term?
CHRIS ANDERSON: I think people are going to use reviews. We've been using reviews for decades-- the old Consumer Reports, professional reviews from professional people about my toaster before I bought my toaster.
So it's not like it's new. But how we collect that information, and how many opinions we collect it from, is growing. And the more engaged consumers are-- as the millennial demographic grows-- the more important reviews are going to be.
So this study, as well as all the studies we produce through the Center for Hospitality Research, you can download at CHR.cornell.edu. Or you can email me. The study should be wrapped up by the end of the week. But I would encourage you to reach out to the Center for Hospitality Research, and to Cornell in general, for thoughts and discussions of other things going forward. And I'll try to address some of your questions in subsequent reports, given your great questions. But thanks for your time.
Chris Anderson, Cornell associate professor in the School of Hotel Administration, offers fresh data on how travel review web sites and hotel-industry elasticity are forming a tidal wave of change in 2013.
Anderson, along with Hotel Administration Dean Michael Johnson, presented this new research Nov. 12, 2012 at the Westin New York at Times Square in Manhattan, as part of the Inside Cornell series.
Inside Cornell is a monthly series featuring researchers and experts working at Cornell University's centers in Ithaca, Manhattan and around the world.