Big data and big analytics are a big opportunity for hotels

By Kelly McGuire, 22nd Nov 2014

By infusing analytics through every phase of the guest journey, hotel managers can strike the complicated balance between the guest experience and their revenue and profit responsibilities – delivering memorable, personalized guest experiences while maximizing revenue and profits. To accomplish this, hotels need to be able to collect, store and analyze the volumes of data generated by their guest interactions, their operations and the broader market. As the volume and complexity of that data increase, getting your head around what is available and how it can be useful becomes a real challenge.

“Big Data” is a challenge for organizations not just because the volume of data has increased, but because the variety has increased as well – it has gone beyond traditional transactional data into unstructured formats like text, video, email, call logs, images and clickstreams. And it is coming at us fast: data like tweets or location updates can be stale almost the minute it is created.

Big data is a “big deal” because the volume and complexity of the data put pressure on traditional technology infrastructures, which are set up to handle primarily structured data (and not that much of it). In these environments, it is difficult, or even impossible, for organizations to access, store and analyze “big data” for accurate and timely decision making. This problem has driven innovations in data storage and processing, such that it is now possible to access more data, and different kinds of it.

To a certain extent, big data is forcing business leaders (like our analytic hospitality executives) to get more involved in technology decisions than ever before. To help with this, I’ll first talk about how technology has evolved to handle big data and give some examples of how companies are innovating with it; then I’ll do the same for big analytics. This is not intended to make everyone a technology expert, but rather to provide some basic information that can arm hotel managers to start conversations with their IT counterparts.

This influx of large amounts of complex data has necessitated changes in the way data is captured and stored. To handle the volumes of unstructured data, databases need to be faster, cheaper, more scalable and, most importantly, more flexible. This is why some have been talking about Hadoop as an emerging platform for storing and accessing big data. Hadoop is not a database in the traditional sense, but a distributed framework for storing and processing large volumes of unstructured data across clusters of machines. Hadoop works because it is cheap, scalable, flexible and fast:

  1. Cheap & Scalable – Hadoop is built on commodity hardware, which is exactly what it sounds like: really cheap, “generic” hardware. It is designed as a “cluster”, tying together groups of inexpensive servers. This means it’s relatively inexpensive to get started, and easy to add more storage space as your data, inevitably, expands. (It also has built-in redundancy – data is stored in multiple places – so if any of the servers happens to go down, you don’t lose data.)
  2. Flexible – The Hadoop data storage platform does not require a pre-defined data structure, or data schema. I use the analogy of the silverware drawer in your kitchen. The insert that sorts place settings is like a traditional relational database. You had to purchase it ahead of time, planning in advance for the size of the drawer and the kinds of silverware you wanted to put in it. It makes it easy for you to grab the four sets of forks and knives you need for a place setting. However, the pre-defined schema makes it difficult to add pieces should you decide to buy iced tea spoons or butter knives, or if you are looking for a place to store serving utensils. Hadoop, on the other hand, is more like an empty drawer with no insert – it has no pre-defined schema. You can put any silverware you want in there without planning ahead of time. You can see the advantage of this approach with unstructured data: there is no need to “translate” it into a pre-defined schema; you can just “throw it in there” and figure out the relationships later.
  3. Fast – Hadoop is fast in two ways. First, it uses massively parallel processing to comb through the data and extract information. Data is stored in a series of smaller containers, and there are many helpers available to reach in and pull out what you are looking for (extending the drawer metaphor: picture four drawers of silverware, with a family member retrieving one place setting from each at the same time). The work is split up, as opposed to being done in sequence. Second, because the data store is not constrained by a pre-defined schema, the process of loading data is much faster (picture the time it takes to sort the silverware from the dishwasher rack into the insert, as opposed to dumping the rack contents straight into the drawer). A brief sketch of these two ideas follows this list.
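
To make the “no schema” and “split up the work” ideas concrete, here is a minimal sketch in Python – not Hadoop itself, and every record, field and topic in it is hypothetical. Raw guest feedback sits in free-form JSON lines with no pre-defined schema (the empty drawer), and a pool of workers each scans its own chunk and counts topic mentions, with the partial counts merged at the end – the same split-apply-combine pattern that Hadoop’s MapReduce applies at far larger scale.

```python
import json
from collections import Counter
from multiprocessing import Pool

# Hypothetical raw feedback: JSON lines with no pre-defined schema.
# Records can carry any fields, like silverware tossed into an empty drawer.
RAW_RECORDS = [
    '{"source": "survey", "text": "The spa was wonderful but checkin was slow"}',
    '{"source": "tweet", "text": "Great pool, slow checkin"}',
    '{"source": "call_log", "text": "Guest said the spa booking failed"}',
    '{"source": "survey", "text": "Loved the pool and the spa"}',
]

TOPICS = ("spa", "pool", "checkin")

def count_topics(chunk):
    """Map step: scan one chunk of raw lines and count topic mentions."""
    counts = Counter()
    for line in chunk:
        record = json.loads(line)  # interpret the "schema" at read time
        text = record.get("text", "").lower()
        for topic in TOPICS:
            if topic in text:
                counts[topic] += 1
    return counts

if __name__ == "__main__":
    # Split the data into chunks and hand one to each worker, so the
    # scan happens in parallel rather than in sequence.
    chunks = [RAW_RECORDS[0:2], RAW_RECORDS[2:4]]
    with Pool(processes=2) as pool:
        partials = pool.map(count_topics, chunks)

    # Reduce step: merge the partial counts from every worker.
    print(sum(partials, Counter()))  # Counter({'spa': 3, 'pool': 2, 'checkin': 2})
```

In a real Hadoop cluster the chunks live spread across many inexpensive servers, and the counting runs on the machines that already hold the data, so huge data sets never have to move to the computation.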

Many companies have put a lot of effort into organizing their structured data over the years, and some data sets still make sense to store the traditional way (like the silverware you use every day). Because of this, most companies see Hadoop as an addition to their existing technology infrastructure, rather than a replacement for their relational, structured databases.

Next, let’s look at innovations in the execution of analytics that speed up time to results, allowing organizations to take full advantage of all of that big data.

Big data is of no use unless you can turn it into information and insight. For that you need big analytics. Every piece of the analytics cycle has been impacted by big data, from reporting, which now has to quickly render reports from billions of rows of data, through advanced analytics like forecasting and optimization, which require complex math executed in multiple passes through the data set.

Without changes to the technology infrastructure, analytic processes on big data sets will take longer and longer to execute. It is no longer acceptable to push a button and wait hours or days for an answer. Today’s advanced analytics need to be fast, and they need to be accessible. This means more changes to the technology infrastructure to support these new processes.

Analytics companies like SAS have been developing new methods for executing analytics more quickly. Below is a high-level description of some of these new methodologies, including why they provide an advantage. Once again, the intention is to provide enough detail to start conversations with IT counterparts (or to understand what they are talking about), certainly not to make you an expert. There is a ton of information out there if you want more detail!

  1. Grid computing and parallel processing – Calculations are split across multiple CPUs to solve many smaller problems in parallel, as opposed to one big problem in sequence. Think about the difference between adding a series of eight numbers one after another versus splitting them into four pairs and handing the pairs out to four of your friends. To accomplish this, multiple CPUs are tied together so that the algorithms can access the resources of the entire bank of CPUs (first sketch below).
  2. In-database processing – Most analytic programs lift data sets out of the database, execute the “math” and then dump the data sets back into the database. The larger the data sets, the more time-consuming it is to move them around. In-database analytics bring the math to the data: the analytics run in the database, alongside the data, reducing the amount of time-consuming data movement (second sketch below).
  3. In-memory processing – This capability is a bit harder for non-technical people to grasp, but it provides a crucial advantage for both reporting and analytics. Large data sets are typically stored on the hard drive – the physical disk inside the computer (or server). It takes time to read data off the physical disk, and every pass through the data adds more time. It is much faster to conduct analysis and build reports from the computer’s memory. Memory is becoming cheaper, so it is now possible to add enough of it to hold “big data” sets for significantly faster reporting and analytics (third sketch below).
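
Here is the first sketch, the eight-number example in Python: the list is split into four pairs, each pair goes to its own worker, and the partial sums are combined at the end. (True grid computing spreads the work across many machines; a local process pool stands in for the grid here, purely for illustration.)

```python
from multiprocessing import Pool

NUMBERS = [3, 1, 4, 1, 5, 9, 2, 6]  # the eight numbers to add

def add_pair(pair):
    """One worker's share of the problem: add a single pair."""
    return pair[0] + pair[1]

if __name__ == "__main__":
    # Split the problem into four pairs and hand each to a "friend",
    # instead of adding all eight numbers one after another.
    pairs = [NUMBERS[i:i + 2] for i in range(0, len(NUMBERS), 2)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(add_pair, pairs)  # pairs added in parallel
    print(sum(partial_sums))  # 31: same answer, work done side by side
```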
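
The second sketch contrasts moving the data to the math with moving the math to the data, using Python’s built-in sqlite3 module as a stand-in for an enterprise database (the table and figures are made up). In the first approach, every row travels to the application before any arithmetic happens; in the second, the database runs the aggregation itself and only the summary comes back.

```python
import sqlite3

# Stand-in database holding some hypothetical nightly revenue figures.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (hotel TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO bookings VALUES (?, ?)",
    [("North", 120.0), ("North", 95.5), ("South", 210.0), ("South", 80.0)],
)

# Approach 1: lift the data out of the database, then do the math here.
# Every row is moved before any arithmetic happens.
totals = {}
for hotel, revenue in conn.execute("SELECT hotel, revenue FROM bookings"):
    totals[hotel] = totals.get(hotel, 0.0) + revenue
print(totals)  # {'North': 215.5, 'South': 290.0}

# Approach 2, in-database: bring the math to the data.
# The GROUP BY runs inside the database; only the summary comes back.
summary = conn.execute(
    "SELECT hotel, SUM(revenue) FROM bookings GROUP BY hotel"
).fetchall()
print(dict(summary))  # same answer, far less data movement at scale
```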
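
The third sketch illustrates in-memory processing with the same stand-in database (the file name is hypothetical, and Connection.backup requires Python 3.7+): the data is read off the physical disk once into an in-memory copy, and every report or analysis after that runs against RAM rather than the disk.

```python
import sqlite3

# Hypothetical disk-backed database that has accumulated booking data.
disk = sqlite3.connect("bookings.db")  # lives on the physical disk
disk.execute("CREATE TABLE IF NOT EXISTS bookings (hotel TEXT, revenue REAL)")

# Read the data off the disk once, into an in-memory copy of the database.
mem = sqlite3.connect(":memory:")
disk.backup(mem)  # a single pass over the physical disk

# Every report or analysis from here on runs against memory;
# repeated passes through the data never touch the disk again.
for _ in range(100):
    mem.execute("SELECT hotel, SUM(revenue) FROM bookings GROUP BY hotel").fetchall()
```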

To give you an idea of the scale of the impact: applying these methodologies, we have been able to render a summary report (with drill-down capability) from a billion rows of data in seconds. Large-scale optimizations, like risk calculations for major banks or price optimization for thousands of retail products across hundreds of stores, have gone from taking hours or days to minutes or even seconds. As you can tell, the advantages are tremendous. Organizations can now run analytics on their entire data set, rather than on a sample. It is possible to run more analyses more frequently, testing scenarios and refining results.

Here are some examples of how innovative companies are applying big analytics to get value from their big data:

  • Airlines are incorporating the voice of the customer into their analyses by mining all of the internal and external unstructured text data collected across channels like social media, forums, passenger surveys, call center logs and maintenance records for passenger sentiment and common topics. With big text analytics, these organizations are able to analyze all of their text data, as opposed to small samples, to better understand the passenger experience and improve their service and product offerings.
  • A major retailer is keeping labor costs down while maintaining service levels by using customer traffic patterns, detected by security video, to predict in advance when lines will form at the register. Staff can be deployed to stocking tasks around the store when there are no lines, yet given enough notice to open a register as demand increases – before lines start to form.
  • A major hotel company has deployed a “what if” analysis in its revenue management system that allows users to immediately see the impact of price changes or forecast overrides on demand, by re-optimizing around the user’s changes. Revenue managers no longer have to make a change and wait for the overnight optimization to run.

Unlocking the insights in big data with big analytics will require making some investments in modernizing technology environments. The rewards for the investment are great. Organizations that are able to use all that big data to improve the guest experience while maximizing revenue and profits will be the ones that get ahead and stay ahead in this highly competitive environment!

About Kelly McGuire

Kelly McGuire is an analytics evangelist, passionate about helping the hospitality and travel industries realize the value of data-driven decision making. She focuses on connecting the dots between strategy, business process, technology and execution. She has a background in revenue management, and has also worked extensively in marketing analytics and hospitality operations. She is adept at facilitating integration across these functions.
