Without getting into the deep details of the genesis of the big data space, let's just say that it really started with a few of the big consumer-focused internet companies in Silicon Valley: Yahoo, Google, Facebook and the like. They had so much data on users that it became increasingly difficult to make use of it without new ways of managing it. One of the first real applications of big data was using this new technology called Hadoop to run search engines more effectively. From there, the technology has grown, and new use cases for more traditional enterprises have become the focus.
The first broad area of focus in deploying new big data technology has been making it easier and faster to do data discovery and advanced analytics. For years, people have relied on traditional Business Intelligence tools to understand what is happening in the data they gather. Those tools have been great at what they do, but as the amount of data explodes and the time frame for using that data shrinks, the big data ecosystem has become the standard way to explore data and look for formerly hidden patterns that impact the business. As an example, many manufacturing companies are starting to put all of their production data into a big data system so they can do more advanced analytics on defect rates or factory yields. Telcos, the original big data companies, are using the same technologies to determine which areas of their networks are overloaded and how they should spend capital to upgrade them and keep customers happy.
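To make the defect-rate example a little more concrete, here is a minimal sketch of the kind of exploratory analytics described above. It assumes production records already land in the Hadoop cluster as Parquet files, and the path and column names (plant, line, units_built, units_defective) are hypothetical placeholders, not from any particular system.

```python
# Minimal PySpark sketch: defect-rate rollup over production data stored in Hadoop.
# The file path and column names below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("defect-rate-discovery").getOrCreate()

# Assumed location of raw production records landed in the cluster.
production = spark.read.parquet("hdfs:///data/manufacturing/production/")

defect_rates = (
    production
    .groupBy("plant", "line")
    .agg(
        F.sum("units_built").alias("units_built"),
        F.sum("units_defective").alias("units_defective"),
    )
    .withColumn("defect_rate", F.col("units_defective") / F.col("units_built"))
    .orderBy(F.desc("defect_rate"))
)

# Surface the worst-performing lines for the analysts to dig into.
defect_rates.show(20)
```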
The second area of focus revolves around the infamous Customer 360 that we have been chasing for years. It seems like every five years or so, a new technology or platform hits the market and promises to finally deliver a single view of your customers to the business. Well, big data is that next technology. The idea behind something like Hadoop is that it is a distributed storage system that lets a company keep any kind of data, from anywhere, in any format, in one place and bring that data together quickly to gain a single, full view of a customer. It becomes the storage locker for all data collected about or for a customer, which can then be used in multiple ways to add value. One use case is simply delivering that single view to contact center agents for customer interactions. Another is using the centralized data to make real-time decisions about the next best action or offer for customers on a website. Yet another is using this consolidated view to predict when customers are likely to churn, picking up on the events that are the best predictors of a customer leaving and using them to alert the business before they go.
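As a rough illustration of that churn use case, here is a minimal sketch of scoring customers from a consolidated Customer 360 table. The file name, feature columns and the use of a simple logistic regression are all assumptions for illustration, not a description of any specific product or the "right" model.

```python
# Minimal sketch: churn prediction over a hypothetical consolidated customer table.
# Column names are illustrative; the churned flag marks customers who already left.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

customers = pd.read_csv("customer_360.csv")  # assumed export from the central store

features = ["support_tickets_90d", "days_since_last_login",
            "monthly_spend", "contract_months_left"]
X = customers[features]
y = customers["churned"]  # 1 if the customer left, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score the holdout set; in practice these risk scores would feed alerts
# so the business can reach out before the customer goes.
churn_risk = model.predict_proba(X_test)[:, 1]
print("holdout AUC:", roc_auc_score(y_test, churn_risk))
```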
The third area of use cases falls into a bucket focused on making predictions about what is going to happen in the future: taking all of the disparate data a company may have, centralizing it into something like Hadoop, and then using that data to predict better outcomes for the company or its customers. Retail organizations use the mountains of data they have to plan inventory availability in their stores and keep customers happy. Many organizations are also jumping on board with big data to gather sensor data and predict machine outages. GE is one of the big players in this space these days, talking about jet engines and wind turbines, but there are many others closer to consumers using sensor data to predict when something is about to go wrong. Think of car manufacturers, HVAC service providers and the oil and gas market as a few of the other industries using machine data in a variety of ways to minimize downtime and outages in their facilities or products.
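To show the flavor of that sensor-driven prediction, here is a minimal sketch that flags readings drifting away from a machine's own baseline. The file, the vibration column and the rolling z-score rule are all hypothetical stand-ins for whatever model a real predictive maintenance program would use.

```python
# Minimal sketch: flag sensor readings that drift ahead of a possible machine failure.
# Assumes a hypothetical CSV of time-stamped vibration readings per machine.
import pandas as pd

readings = pd.read_csv("machine_sensors.csv", parse_dates=["timestamp"])
readings = readings.sort_values(["machine_id", "timestamp"])

grouped = readings.groupby("machine_id")["vibration_mm_s"]

# Rolling baseline per machine over the last 24 readings.
rolling_mean = grouped.transform(lambda s: s.rolling(24, min_periods=8).mean())
rolling_std = grouped.transform(lambda s: s.rolling(24, min_periods=8).std())

readings["z_score"] = (readings["vibration_mm_s"] - rolling_mean) / rolling_std

# Readings far above the machine's own baseline become maintenance alerts.
alerts = readings[readings["z_score"] > 3]
print(alerts[["machine_id", "timestamp", "vibration_mm_s", "z_score"]].head())
```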
The fourth and final area I will throw out in this post is focused on the plumbing layer of a company and how it can be optimized or overhauled to better serve the business or customers. I won't go into much detail here, as it gets technical fast, but the idea is that many data management and storage systems in the market today are getting long in the tooth, and with that age comes incredible expense and risk that organizations are looking to mitigate. One example is what is called a "Data Warehouse Offload". For years, a few companies have dominated the traditional data warehousing space, and as a result it has become ever more expensive to hold the huge amounts of data companies are producing. Many organizations have begun to offload this data from those expensive, older, less flexible systems onto newer, more agile, more innovation-friendly and cheaper systems like Hadoop.
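For the technically curious, here is a minimal sketch of what one slice of a warehouse offload can look like: copying a cold, rarely queried table out of the warehouse into cheap Parquet files on Hadoop. The JDBC URL, table name, partition column and paths are placeholders, not a real deployment.

```python
# Minimal sketch of a "Data Warehouse Offload": move a cold table out of an
# expensive warehouse into Parquet files on the Hadoop cluster.
# Connection details, table and column names below are assumed placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warehouse-offload").getOrCreate()

cold_history = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://warehouse.example.com:5432/edw")  # assumed source
    .option("dbtable", "sales.order_history_archive")                   # assumed cold table
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land the data in Hadoop, partitioned by year so analysts can still query it cheaply.
(
    cold_history
    .write.mode("overwrite")
    .partitionBy("order_year")
    .parquet("hdfs:///archive/edw_offload/order_history/")
)
```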
So, we will leave it at that today. These are some of the ways companies are starting to use big data to bring added value to their businesses and customers. Of course, there are a number of other use cases not listed here that are adding great value to large enterprises across all industries. The key is finding the use cases that will drive the most value for you and making a plan to see them through.
Next time, we will talk about the value of some of these use cases and who in a company typically should be thinking about these things...