Too many people in the hotel industry who purport to be revenue management experts don’t understand what forecasting error is or how to accurately calculate it. Or if they do, they’re ignoring it. That’s the only conclusion I can draw when I read about people expecting, or claiming to get, an unconstrained demand forecast that is on average 98% accurate six months out. That’s practically impossible.
And if someone is consistently doing that, I’d suggest they head to Vegas and play roulette or head to Wall Street and play the stock market. Forecasting isn’t an exact science, but it’s an important function of a successful hotel. It may be the most challenging job a revenue director has, and one that is critical not only for pricing the hotel, but also for the general manager and operations, the accounting and finance departments, and even the owners and investors. (If these other functions are not using the same forecast, that’s an all-too-common problem, and one for another blog post.)
Just as important as the actual forecasting is the evaluation of it after the fact. Testing is the only way to know the accuracy of your forecast; consistent errors must be addressed, and they can often be corrected.
Forecasting is too important not to learn from your mistakes. Most hotels do not calculate forecast error at all, and too many that do are calculating it by averaging the daily margin of error. For example, a hotel may forecast 100 rooms one day and end up with 80 filled, hence a negative (-) 20 margin of error. The next day the forecast may be for 100 again, and the result is 120, for a plus (+) 20 margin. Average the two, and voilà, the net error is zero.
Wait, what? Yes, that’s how forecasting error is often calculated, and it’s completely wrong, self-serving, and it doesn’t make you any better. Be honest and look at the big picture. The further out you’re forecasting, the harder it gets. I wouldn’t be appalled to see 15% or even 20% error for six months out. If I saw 2%, I’d be almost certain the margin of error was calculated incorrectly. There’s no right answer to what margin of error is acceptable. It all depends on the amount and quality of data available to the revenue director and, most importantly, the volatility of that data. Closer-in dates obviously should, and will, have a smaller margin of error than those six, eight and 10 months out.
There is another factor that confounds a forecast and is often overlooked. Even if the demand data is stable and robust, and the math is perfect, a forecast can be invalidated by changes in strategy by the hotel management team or changes in the market that deviate from historical patterns. For instance, if an asset manager calls up a revenue director and demands they raise price by $50 for the last five days of the month so that the property will hit its ADR budget (often irrespective of whether that will drive RevPAR growth), then within a couple of days the forecast will be wrong as consumer behavior changes. The forecast can catch up to the new reality quickly, but there will almost always be an adjustment period.
In my short example above on margin of error, using the absolute value of the variance, 20 in both cases, leads to an average error of 20%. For six months out, in most cases at most hotels, that’s a perfectly acceptable and believable number.
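The two-day example above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the text (forecast 100 both days, actuals of 80 and 120); the variable names are mine, not from any particular RMS:

```python
forecasts = [100, 100]
actuals = [80, 120]

# Averaging signed errors lets the -20 and +20 cancel out,
# producing a misleading "zero" average error.
signed_errors = [a - f for a, f in zip(actuals, forecasts)]
avg_signed = sum(signed_errors) / len(signed_errors)   # 0.0

# Taking absolute values first tells the real story.
abs_errors = [abs(e) for e in signed_errors]
mad = sum(abs_errors) / len(abs_errors)                # 20.0 rooms

# Expressed as a percentage of average actuals: 20%.
mape = mad / (sum(actuals) / len(actuals))
print(avg_signed, mad, mape)
```

The signed average reports a perfect forecast; the absolute version reports the 20% error the forecaster actually made.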
The other way forecasters cheat the system and themselves is by benchmarking the results against actual occupancy. The forecast is for unconstrained demand—which is how many people are willing to stay at your hotel if you had an unlimited number of rooms—so comparing it to occupancy is comparing apples to oranges. If the unconstrained demand forecast calls for 140 rooms in a 100-room hotel, and all 100 rooms end up booked, the margin of error isn’t zero. What is it? That depends on how much business could have been booked that day, and to understand that, you must know how much business was lost.
Lost business occurs when a customer searches for a hotel room but does not complete the purchase. It is either a regret, when the customer opts not to book, or a denial, when the customer is told the hotel or requested room type is sold out. Unless you’ve got a revenue management system that can accurately show what the true unconstrained demand ended up being on that date, it’s almost impossible to know. Wouldn’t it be great if there was a revenue management system out there that could calculate those regrets and denials using web-shopping data? (Check out Duetto Edge)
By including lost business with reservation data, revenue directors can get a better sense of the total demand for a particular day, and a better assessment of their forecasting error.
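Picking up the 140-room example above, here is a minimal sketch of that assessment. The regret and denial counts are hypothetical, invented for illustration; in practice they would come from web-shopping data:

```python
# Hypothetical sold-out day at a 100-room hotel.
forecast_unconstrained = 140   # forecasted unconstrained demand
rooms_sold = 100               # every room booked

# Lost business, estimated from shopping data (assumed figures):
regrets = 15   # shoppers who saw a rate and chose not to book
denials = 20   # shoppers turned away by the sell-out

# Total demand is what sold plus what was lost.
estimated_demand = rooms_sold + regrets + denials        # 135

# The error is measured against demand, not occupancy.
error_rooms = abs(forecast_unconstrained - estimated_demand)  # 5
pct_error = error_rooms / estimated_demand               # ~3.7%
print(estimated_demand, error_rooms, round(pct_error, 3))
```

Measured against occupancy, this forecast looks 40 rooms too high; measured against estimated total demand, it was off by only five rooms.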
There are two basic metrics that should be used to evaluate forecast accuracy. The first is Mean Absolute Deviation (MAD) and the other is Mean Absolute Percentage Error (MAPE). MAD measures the average error in terms of room nights and MAPE expresses it as a percentage.
Here is the procedure for calculating MAD and MAPE:
1. Subtract the forecast values from the actual values to find the variance for each period.
2. Take the absolute value of the variance for each period so that the values are positive numbers.
3. Average the absolute values to calculate the MAD.
4. Divide the MAD by the average of the actuals to calculate the MAPE.
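The four steps above can be written as a short Python function. The function name and the week of sample data are mine, for illustration only:

```python
def mad_and_mape(actuals, forecasts):
    """Mean Absolute Deviation (in room nights) and Mean Absolute
    Percentage Error, following the four steps above."""
    # Steps 1 and 2: variance per period, then its absolute value
    abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    # Step 3: average the absolute values to get the MAD
    mad = sum(abs_errors) / len(abs_errors)
    # Step 4: divide the MAD by the average of the actuals
    mape = mad / (sum(actuals) / len(actuals))
    return mad, mape

# Hypothetical week of daily forecasts vs. actual unconstrained demand
actuals = [80, 120, 95, 110, 100, 90, 105]
forecasts = [100, 100, 100, 100, 100, 100, 100]
mad, mape = mad_and_mape(actuals, forecasts)
print(mad, mape)   # MAD of 10 room nights, MAPE of 10%
```

Note that the daily errors here would net out to zero if averaged with their signs, which is exactly the trap described earlier.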
Duetto Edge, the RMS I alluded to above, calculates and shows our users the actual forecasting error using these methods and more. Sometimes the result is more than 2%. Sometimes it’s 10% or even 20%. We show the true error, not because we like being wrong, but because we want to get better and want you to get better. How can you optimize your revenue and manage your staff if you don’t understand what the forecast is and what kind of error can be expected?
If anyone is producing a forecast for you and they are unable or unwilling to show you the MAD and MAPE calculated correctly at the day level, that should be a red flag. They either don’t know, or they don’t want you to know, how things work in their black box. A common lament is that “the customer” (aka you) would not understand how to interpret it and that it would be a huge hassle to explain, but that’s rubbish. When you dive into a forecast, you should be able to figure out what went wrong: often it’s a change in strategy, a new marketing campaign, or a hotel down the street that sold out too early and shifted demand toward you, and these are things anyone in the business can understand. A quarterly, monthly or weekly rollup of the error is not good enough, because it conceals the detail necessary for diagnosis.
Forecasts are a fact of life for the revenue director. Every hotel and manager needs forecasts to put a stake in the ground and start making decisions. But it must be done knowing the forecast can (and will) be off by as much as 20% or more, depending on how far out you’re looking. That uncertainty is also a fact of life, and managers must be prepared to hedge their bets accordingly.
The goal shouldn’t be to come up with the lowest possible forecast error and fudge the numbers to make it happen. The goal should be an honest and realistic assessment, using that information to make better forecasts going forward. If you really are precise enough to accurately forecast hotel demand six months out, give me a buzz in Vegas. Let’s hit the casino.