Wednesday, December 7, 2016

Big Data Technology Explained - Part 3 (Other Big Data Tech)

In the previous posts on Big Data, we talked about some of the base technology tools in use today by companies all over the world to drive their Big Data programs.  We talked about the Hadoop ecosystem and a few of the projects that have become common in commercial use cases: tools like HBase, Hive, Impala, Storm, and Spark.

But we can't limit the big data world to just the world of Apache Hadoop.  There are dozens and dozens of other technology tools, frameworks, platforms and applications, many developed in just the past 5 years, that drive real value for organizations.  Take a look at the chart below and you can see that there are a ton of players.  But I will only dig into a few of them that I have seen generate real value for the folks I work with every day.


As I said, there are a TON of companies making a play in the Big Data arena.  I still think there is plenty of opportunity to build more useful apps, but that is for another post down the road.

Out of the many dozens of companies on this graphic, I want to call out a few:

Elastic:
Formerly known as ElasticSearch, Elastic is an open source indexing and search tool.  Similar to Apache Solr, Elastic is used by companies to take documents, chunks of data, or even individual log events, index them, make them searchable, and then purge them when space is needed.  The beauty of the Elastic platform, to me, is not just the search mechanism but also the other tools that have been built to enhance the user experience.  In particular, I call out Kibana as a great tool that sits on top of Elastic and makes it very easy to find the data you are looking for.
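
To make this concrete, here is a minimal sketch of indexing and searching a log event with the elasticsearch Python client.  The host, index name, field names and document contents are all hypothetical, and call signatures vary a bit across client versions, so treat this as an illustration of the pattern rather than copy-paste code.

```python
# Hedged sketch: index a log event, then search for it.
# Host, index, and fields are made up for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local node

# Index a single log event as a JSON document
es.index(index="weblogs", body={
    "timestamp": "2016-12-07T10:15:00",
    "level": "ERROR",
    "message": "upstream request timeout",
})

# Full-text search for matching events -- the part Kibana wraps in a UI
results = es.search(index="weblogs", body={
    "query": {"match": {"message": "timeout"}}
})
for hit in results["hits"]["hits"]:
    print(hit["_source"]["message"])
```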

NiFi:
When it comes to the world of big data, almost nothing is more important than actually being able to easily and quickly move data from one place to another.  And not just move the data, but move it securely, at scale, with the ability to recover if something happens in transit.  This is where Apache NiFi shines.  NiFi has been picked up with incredible speed by some of the largest companies in the world to fill a gap they all share: moving data around their organizations more effectively.
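
Since NiFi flows are built in its web UI rather than in code, the way most programs touch NiFi is simply by handing data to a flow.  Below is a sketch of that hand-off, assuming (hypothetically) a flow whose entry point is a ListenHTTP processor on port 8081 with its default "contentListener" path; your flow's host, port and path will differ.

```python
# Hedged sketch: push one JSON event into an assumed NiFi ListenHTTP
# endpoint. The URL is hypothetical -- NiFi itself is configured in
# its UI, not in this code.
import json
import requests

event = {"order_id": 12345, "status": "shipped"}

resp = requests.post(
    "http://nifi-host:8081/contentListener",  # assumed endpoint
    data=json.dumps(event),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()  # a 200 means NiFi accepted the data into the flow
```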

Neo4j:
As we move to a world based more and more on relationships and networks, graph databases become more and more important, and that is what Neo4j is all about.  Every company, whether they like it or not, will need to start connecting the dots that exist about their customers, partners, suppliers and so on over the next few years.  Doing this with traditional databases is almost impossible, and besides Spark, there are not really any great tools within the common frameworks that make graph databases a possibility.  So my view is that we will begin to see real growth in this area, and it should be one to keep an eye on for new, more user-friendly kinds of solutions.
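
For a feel of what "connecting the dots" looks like in practice, here is a minimal sketch using Neo4j's Python driver and its Cypher query language.  The connection details and the data model (Customer and Supplier nodes joined by a BUYS_FROM relationship) are hypothetical.

```python
# Hedged sketch: store a relationship, then query it back.
# URI, credentials, labels, and property names are all made up.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))  # assumed creds

with driver.session() as session:
    # Create two nodes and the relationship between them
    session.run(
        "MERGE (c:Customer {name: $customer}) "
        "MERGE (s:Supplier {name: $supplier}) "
        "MERGE (c)-[:BUYS_FROM]->(s)",
        customer="Acme Corp", supplier="Globex",
    )
    # The kind of relationship question that is painful in a
    # traditional relational database
    result = session.run(
        "MATCH (c:Customer)-[:BUYS_FROM]->(s:Supplier {name: $supplier}) "
        "RETURN c.name AS customer",
        supplier="Globex",
    )
    for record in result:
        print(record["customer"])

driver.close()
```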

Ok, this is the end of this part of the series on Big Data, focused on the technology.   Again, my goal was not to go toe to toe with all of the architects of the world on what big data technology really is or how it works.  My goal was to help business teams get just enough of the detail about this technology that it helps them make more informed decisions with their internal and external technology partners for their big data programs.


Wednesday, November 23, 2016

Big Data Technology Explained - Part 2 (The Hadoop Ecosystem)

As I talked about in the last post, Hadoop was really the engine that started to drive the application of big data 10 years ago now.  But since then, there has been incredible growth within the Hadoop ecosystem to address many of the challenges that were identified by commercial organizations as they started to adopt Hadoop.

Of the 20+ Apache projects that have come to life as part of the ecosystem over the years, I will just mention a few of the larger ones and their significance in the grand scheme of things.  Again, this is not meant to be "developer 101", but instead, business person 101.  My goal here is to help business people know enough to ask the right kinds of questions of both their internal and external technology partners, before they embark on a big data adventure.

HBase:
The first ecosystem component to talk about is Apache HBase.  HBase is what is called a NoSQL columnar database, which, to distill it all down, really just means you can read and write data really fast for time-sensitive operations, without needing SQL queries to get the results.  One of the areas in the commercial space where HBase is commonly used is as the "database" for lookups on websites.  So, if I am on eBay searching for specific kinds of products, the search may take me behind the scenes to something like HBase for a fast lookup to see if that product is available.  Or if I am on Etsy, all the clicks I make during my visit could be captured, tracked, and stored in something like HBase for immediate access by operations teams.  So when you hear the term HBase, think super fast reading and writing to a data store that sits right on top of Hadoop (HDFS).
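
Here is a sketch of that super fast read/write pattern using happybase, a Python client that talks to HBase through its Thrift gateway.  The host, table name, column family and row key are all hypothetical; it assumes a table like this already exists.

```python
# Hedged sketch: one put and one single-row lookup against HBase.
# Host, table, and column names are made up for illustration.
import happybase

connection = happybase.Connection("hbase-thrift-host")  # assumed host
table = connection.table("user_clicks")                 # assumed table

# Write: row key is the user id, columns live in a "click" family
table.put(b"user-42", {b"click:page": b"/products/123",
                       b"click:ts": b"2016-11-23T09:30:00"})

# Read: a single-row lookup by key -- the fast access pattern
# an eBay- or Etsy-style site would lean on
row = table.row(b"user-42")
print(row[b"click:page"])
```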

Hive (LLAP) and Impala:
Two other major projects that were developed to help speed the adoption of Hadoop in general were Apache Hive and Impala.  Both are what are referred to as "SQL engines" on top of HDFS.  As is the case with just about every new technology, in order to drive adoption, making it easy for users to interact in a way they are familiar with is critical.  SQL happens to be a query language that is pretty much universally accepted and used by millions of analysts around the world with relational databases.  So in order to really drive the adoption of Hadoop, it made sense to build a tool that could leverage those skill sets but still get the value, power, speed and low cost of the Hadoop backend.  That is where Hive and Impala come in.  They both allow folks with traditional SQL skills to continue doing what they do well, while leveraging the goodness of Hadoop behind the scenes.  Organizations may use Hive or Impala to run dynamic queries or large scale summarizations across multiple, very large sets of data.
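
To show how familiar this feels to a SQL analyst, here is a sketch that runs an ordinary query against Hive from Python using the PyHive client.  The host, table and columns are hypothetical; Impala exposes a very similar interface.

```python
# Hedged sketch: plain SQL over Hadoop data via HiveServer2.
# Host, table, and column names are made up for illustration.
from pyhive import hive

conn = hive.connect(host="hive-server-host", port=10000)  # assumed host
cursor = conn.cursor()

# The analyst writes ordinary SQL; Hive translates it into
# distributed work across the cluster behind the scenes
cursor.execute("""
    SELECT region, SUM(sale_amount) AS total_sales
    FROM sales_2016
    GROUP BY region
""")
for region, total in cursor.fetchall():
    print(region, total)
```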

Spark:
Over the years, all of these projects started to solve the problems that companies had with the original Hadoop components of HDFS and MapReduce.  But one question kept coming up over and over again as computing, and specifically memory, became cheaper: how can I make my processing go faster?  This is where Spark came in and became the new hot tool and darling of the big data community.  Spark was developed at UC Berkeley's AMPLab and has become one of the fastest growing open source projects in history.  The big reason is that it was able to speed up processing on data by orders of magnitude by keeping data in memory for fast and easy access.  And for many organizations using data at the heart of their business, they needed this speed to make faster decisions.  Spark has also become popular because it is much more friendly for developers and puts the same core engine to multiple uses: Spark for batch processing, Spark Streaming for real-time data movement, or GraphX for graph processing.  So it is a tool that is incredibly flexible for a myriad of use cases.
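
As a taste of how developer-friendly that is, here is the classic PySpark word count: load a big file, transform it in parallel, aggregate.  Only the input path is hypothetical.

```python
# Hedged sketch: the canonical Spark batch job in a few lines.
# The HDFS path is made up for illustration.
from pyspark import SparkContext

sc = SparkContext("local[*]", "word-count")

counts = (sc.textFile("hdfs:///data/server_logs.txt")  # assumed path
            .flatMap(lambda line: line.split())        # split lines into words
            .map(lambda word: (word, 1))               # pair each word with 1
            .reduceByKey(lambda a, b: a + b))          # sum per word, in memory

for word, count in counts.take(10):
    print(word, count)

sc.stop()
```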

Storm, Flink, Apex:
With the original processing framework of MapReduce, and even with the addition of other tools like Hive and Impala, there was always a missing piece to the puzzle: real-time streaming analysis over large sets of data.  While those other tools did a great job running batch analysis on large sets of data, they were not built to analyze real-time, streaming data.  A great example of real-time streaming data would be something like Twitter: tons of data, coming in real time and needing to be analyzed in real time to make decisions.  This is just one of many examples; there are dozens of others, in every industry, where real-time streaming analytics is becoming more and more popular and valuable.  Apache Storm, created at BackType and open sourced after Twitter acquired the company, was the first real project built to address this need.  Apache Apex and Apache Flink are two other real-time streaming projects that have gained steam lately, along with Spark Streaming.
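
Storm and Flink jobs are typically written in Java, so to stay in one language this sketch uses Spark Streaming, mentioned above, to count words arriving over a socket in one-second batches.  The host and port are hypothetical; while testing, something like `nc -lk 9999` can stand in for a real feed such as the Twitter example.

```python
# Hedged sketch: micro-batch streaming word counts with Spark Streaming.
# The socket source is a stand-in for a real stream like Twitter.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "streaming-counts")
ssc = StreamingContext(sc, batchDuration=1)  # one-second batches

lines = ssc.socketTextStream("localhost", 9999)  # assumed host/port
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each batch's counts as they arrive

ssc.start()
ssc.awaitTermination()
```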


YARN:
As many of these projects became core parts of large scale big data deployments, a need arose to better manage the resources executing all of the queries and processes people wanted to run.  That is where YARN comes in.  It was developed to be the layer on top of HDFS that manages cluster resources effectively, kind of like the operating system for a Hadoop cluster.  Not something that business people are really going to care about, but still good to know what YARN is and where it fits in the picture.

Ranger and Knox:
No list of tools would be complete without talking about the security of the Hadoop stack.  With all of the concern with regard to data privacy and security, over the last few years the open source community has ramped up the work on tools that make it easier to lock down the data held within Hadoop.  That is what Ranger and Knox were developed to be: Knox as the gateway controlling access to the cluster, and Ranger as the tool managing authorization, ensuring that only the right people or systems, with the right kinds of privileges, are able to access data in Hadoop.  For many commercial organizations, this has been the hurdle that needed to be cleared in order to adopt Hadoop and start deriving real business value.  It just flat out needed to be more secure.

Alright, so that is a good review of some of the core technologies that make up the Hadoop ecosystem and where they fit.  As I mentioned in a previous post, Hadoop is not the only ecosystem of technologies that is used in the big data space.  There are numerous other open source and proprietary frameworks and tools that are being used to augment the use of Hadoop tools.  We will talk about a few of those other tools next time.


Monday, November 7, 2016

Big Data Technology Explained - Part 1 (Hadoop)

I am going to break this topic into a few posts, simply because it could get quite long.  But my goal here is not to go into exhaustive detail on the technology in the big data ecosystem or go toe to toe with the big data architects of the world on what tools are better or faster etc....  Instead, my goal is to help business teams get just enough of the detail about this technology that it helps them make more informed decisions with their internal and external technology partners as they begin a big data program.

After being around big data now for a number of years, I continue to be amazed at how complex the technology really is.  There IS a reason why there is such a shortage of skills in the big data space, not only in data science but also with just pure big data architects and programmers.  It is just flat out hard to get it all to work together.

But let's just start with the basics and keep it at a high level for now.

Let's not get too deep in the weeds on when Big Data really started; that's just a waste of time.  Let's just start with the idea that Big Data really got moving in the Enterprise and Consumer spaces with the advent of what is widely known as Hadoop.

I am sure that many of you have heard this word or read about it somewhere in the last few years.  Hadoop started out as two simple components of a technology stack, born from techniques Google and Yahoo used many years ago to help make search faster and more relevant for users.  The two parts were HDFS and MapReduce.  HDFS is the Hadoop Distributed File System, and MapReduce is a distributed processing framework that runs on top of this file system, executing jobs that spit out sets of results that can be used.

What is unique and special about HDFS, as opposed to other file systems or databases, is that it can store mass quantities of data, in any format and any size, for an incredibly small amount of money.  So, as you can imagine, when you are trying to index "all" of the world's information, like Google or Yahoo were trying to do, it is really useful to have a data store that is incredibly flexible but also incredibly cheap.  So think of HDFS as the "Oracle" of this new world of big data.
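
For a sense of how simple working with HDFS can look from the outside, here is a sketch using the `hdfs` Python package's WebHDFS client to drop a file in and list it back.  The NameNode URL, user and paths are hypothetical.

```python
# Hedged sketch: put a local file into HDFS and list the directory.
# NameNode URL, user, and paths are made up for illustration.
from hdfs import InsecureClient

client = InsecureClient("http://namenode-host:50070", user="analyst")

# Store a local file of any format -- HDFS does not care what is inside
client.upload("/data/raw/clicks.csv", "clicks.csv")

# List what landed in the directory
print(client.list("/data/raw"))
```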

Alongside the uniqueness of HDFS, MapReduce was also unique in the world of processing.  As you can imagine, with all of that data now sitting in HDFS, Yahoo and Google needed really fast ways of processing it and making sense of it.  That is what MapReduce was all about.  The idea was a framework that could distribute processing over many, many different machines to get answers to questions faster.  The concept is sometimes referred to as massively parallel computing.  It just means that you can spread the work from one computer out to many, so that you get answers much, much faster.
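
The idea is easier to see in miniature.  Below is a toy, single-machine illustration of the pattern: a "map" step that turns each line into (word, 1) pairs, a shuffle that groups pairs by key, and a "reduce" step that sums each group.  On a real cluster, the same two steps run spread across many machines, which is where the speed comes from.

```python
# Toy illustration of MapReduce on one machine -- not Hadoop code,
# just the concept. The input lines are made up.
from collections import defaultdict

lines = ["big data is big", "data moves fast"]

# Map: emit a (key, value) pair for every word
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group values by key (the framework does this between steps)
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: collapse each group of values down to one result
counts = {word: sum(values) for word, values in groups.items()}
print(counts)  # {'big': 2, 'data': 2, 'is': 1, 'moves': 1, 'fast': 1}
```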

So, when we talk about Big Data, these two components really were the engine that got the big data world moving over the last 10 years.  Early on, these software frameworks were donated to the Apache Software Foundation as open source tools, which allowed the technology to be used by literally anyone.  So when you hear the term Hadoop now, it is typically coupled with the word Apache (Apache Hadoop) for this reason.

Since that time, there have been dozens of new projects developed by teams of people from all over the world that layer on top of these original components of HDFS and MapReduce.  It is literally an entire ecosystem of technology tools, each leveraging or built right on top of Hadoop to accomplish some specific task.  See below for an "architecture" picture of the different tools in the ecosystem today, 10 years later.



And as processing power grows, memory costs shrink, storage costs continue to decline and use cases for Big Data continue to evolve, companies and individuals are creating new tools every day to deal with specific challenges in their organizations.

In the next post, I will break down, at a high level, some of these tools in the graphic above that have been added to the Hadoop ecosystem over the last 5-7 years and why they are important in the context of business users and their needs.

Thursday, November 3, 2016

Business Value for Big Data Part 2

Ok, lets pick right up from where we left off in our last post and dig into buckets three and four from our business value use cases for big data.

Our third big bucket was focused on driving real business value by more effectively predicting outcomes.  Again, the key to this bucket, like the others, is that today companies have a much different and more effective way of bringing disparate data sources together and extracting some kind of signal from all of the noise.  I mentioned GE in my previous post as a good example of driving value in this space.  They have committed billions of dollars over the last few years to become a software company because they see the business opportunity in front of them in being able to predict when machines will fail.  Much of their marketing has gone towards talking about airplane maintenance or optimizing monstrous wind farms in the middle of the ocean.  I think the common marketing pitch they give these days is that by better diagnosing problems with machines up front and eliminating downtime, there are trillions of dollars to be realized across the industrial machine space.  Yes, you read that right, Trillions.

But predicting outcomes doesn't need to be focused on such a large class of assets or within a set industry to be valuable.  Using data to better predict outcomes cuts across all industries and all lines of business within an enterprise.  IT organizations are using predictive analytics to determine how best to optimize their hardware and software to save costs in their data centers.  Security teams are using predictive analytics to find Advanced Persistent Threats within a network and cut off hackers before they even get started stealing data.  Sales organizations are using predictions across diverse data sources to more effectively target prospects with a higher likelihood to buy.  Marketers have been using predictions for years to make more personalized offers to customers when they are checking out online, think "People who bought this item also bought...".  As we move into the future, marketers are getting ever more creative with big data and using predictions to make even more personalized offers across multiple platforms.  And Customer Service leaders are using predictive analytics to more effectively match call center agents with customers who call in, based on personality type, class of problem, or recent activity.

Finally, big data has real value in a category I described last time as "plumbing".  While nowhere near as eye catching or interesting as the other kinds of use cases, updating the "plumbing" can be of tremendous value to many organizations.  In fact, a solid place to start a big data program can be a use case as mundane as moving away from a traditional Data Warehouse approach to storing and operating on data.  These traditional approaches can be incredibly expensive and incredibly frustrating to use and maintain for generating reports across the business.  The big data alternatives are able to leverage a lot of the same user-facing tools, but put in place a much more innovative back end infrastructure that allows companies to significantly reduce their data warehouse technology costs and speed up processing and report creation by orders of magnitude.

I know that was a lot to ingest and consume all at once across these two posts.  But I think it is important for business people to understand that all of the hype you may hear about Big Data or Hadoop etc... has real legs and real value behind it.  The dirty little secret, well not so much a secret anymore, is that the real challenge with big data programs is less about determining outcomes to focus on and more about the complexity of the technology itself.

But we will get to that more in depth in one of the upcoming posts.

As always, please feel free to comment and share your experience with big data programs and their associated value for your company.

Thursday, October 27, 2016

Business Value for Big Data Part 1

In the last post of this series, I talked about the different types of use cases that are bubbling up for using big data technologies to drive value.  I talked about four buckets that I see these use cases falling in to:
  • Faster and more advanced analytics
  • Customer 360
  • Predictive Analytics
  • Optimizing the Plumbing
One of the common themes in blog posts over the last few months within the big data community, and tech blogs/sites in general, is the lack of value that companies seem to be getting from their investments in big data programs.  It is quite common to read one analyst or another writing about the "science projects" going on in the market and the adoption of big data technologies being nowhere near the forecasts.  I even read a Twitter post from a well known analyst the other day calling for big data companies to focus on "outcomes" versus the technology.

While I do agree with this particular analyst in focusing technology projects on outcomes, I will say that I don't think this is really rocket science or anything new.  Focusing on outcomes should be what every company is doing, whether they are doing the investing or providing the technology.  Without focused outcomes, the project will be doomed from the beginning.

So what are some of those outcomes we should be focusing on for big data projects?
Well, they all come down to the same two big buckets we have seen for many years now:
  • Saving Money
  • Making Money
Now, one could argue that there are subcategories to these two outcomes, but by and large, these are what business leaders are looking at when investing in projects.

So then, how do the four use cases I laid out in the previous post connect to these two outcomes?
Let's focus on buckets one and two in this post, and three and four in our next post.

Let's start with number one.  When looking at the "Faster and more advanced analytics" bucket, value starts to be realized by finding patterns that were never uncovered in the past.  As an example, consider a retailer that was able to optimize its logistics truck routes because it had a more advanced way of looking at its data, saving huge dollars on fuel costs.  Or a Telco that was able to cross-reference data from multiple silos to reveal broader patterns related to network outages and capacity, directly impacting both customer acquisition and maintenance costs to the tune of multiple millions of dollars a year.

When we think about Customer 360, the associated business value no doubt straddles both saving and making money.  As we talked about in the last post, Customer 360 has been the panacea for marketers and customer service leaders for years.  For marketers, the Customer 360 represents the best opportunity they have at truly understanding their customers' wants and needs and then being able to offer products or services that match most closely to those wants and needs.  A great example of this would be insurance companies.  They are one of the "OGs" (originals) in the big data space (along with Telcos), collecting more data on their customers in one day than some companies capture in a year.  Now, by bringing all of this data together in new ways, marketers can offer more granular tiers of car insurance, thus broadening their prospect base.  Or marketers can much more easily identify customer life events that may trigger offers for new types of insurance to long time customers, thus driving new forms of revenue capture.

We cannot forget, though, that the Customer 360 is not only a win for marketers, but also a huge win for customer service leaders.  By giving them and their teams access to the full view of the customer, they are empowered to create a set of processes and experiences for customers that ultimately drive real business value.  Whether through providing an authentic customer experience (soft value), solving problems faster (hard value), or even getting proactive about problems that might be coming (hard value), the Customer 360 drives real value for both customers and companies via the customer service teams.

In our next post, we will tackle buckets three and four: using big data to more effectively predict outcomes and fixing the "plumbing".


Thursday, October 20, 2016

How are Companies Using Big Data

So, if we distill Big Data down to this simple concept of just getting more value out of your data, then what kinds of use cases are companies hitting first that are driving value?  Let's use this post to explore some of these use cases and start to make Big Data more tangible for everyone.

Without getting into the deep details of the genesis of the big data space, let's just say that it really all started with a few of the really big consumer focused internet companies in Silicon Valley: Yahoo, Google, Facebook etc.  Because they had so much data on users, it became increasingly difficult to use this data without new ways of managing it for their customers.  One of the first real applications of big data was using this new tech called Hadoop to help run search engines more effectively.  But from there, the technology has grown and new use cases for more traditional enterprises have become the focus.

The first broad area of focus in deploying new big data technology has been making it easier and faster to do data discovery and advanced analytics.  For years people have relied on traditional Business Intelligence tools to understand what is happening with the data they are gathering.  These tools have been great at what they do, but as the amount of data explodes and the time frames for using that data shorten, the new big data ecosystem has become more of the standard way to explore data and look for formerly hidden patterns that are business impacting.  As an example, many manufacturing companies are starting to put all of their production data into a big data system so they can do more advanced analytics on defect detection rates or factory yields.  Or Telcos, the original big data companies, are using big data technologies to determine which areas of their networks are overloaded and how they should plan capital spending to upgrade them, to keep customers happy.

The second area of focus has revolved around the infamous Customer 360 that we have been chasing for years.  It seems like every 5 years or so, a new technology or platform hits the market and promises to finally deliver a single view of your customers to the business.  Well, big data is that next technology.  The idea behind something like Hadoop is that it is a distributed file storage system that lets a company store any kind of data, from anywhere, in any format, in the same place, and bring that data together quickly to gain a single, full view of a customer.  It really becomes the single storage locker for all data being collected about or for a customer, which can then be used in multiple ways to add value.  One use case is simply delivering a single view of the customer to contact center agents for customer interactions.  Another is using this centralized data to make real-time decisions about the next best action or offer for customers on a website.  Yet another might be using this single consolidated view to help automate and predict when customers will likely churn, picking up on the events that are good predictors of a customer leaving and alerting the business before they go.

The third area of use cases falls into a bucket that is highly focused on making predictions about what is going to happen in the future: taking all of the disparate data a company may have, centralizing it into something like Hadoop, and then using that centralized data to predict better future outcomes for the company or for customers.  You might have retail organizations that use the mountains of data they have to more effectively plan inventory availability in their stores to ensure customers are happy.  There are also many organizations jumping on board with big data to gather data from sensors to help predict outages of machines.  GE is one of the big boys in this space these days, talking about jet engines and wind turbines.  But there are many others, closer to consumers, also using sensor data to help predict when something is about to go wrong.  Think of car manufacturers, HVAC service providers and the oil and gas market as a few of the other industries using machine data in a variety of ways to help minimize downtime or outages in their facilities or products.

The fourth and final area I will throw out today is focused on the plumbing layer of a company and how it can be optimized or overhauled to better serve the business or customers.  I won't go into much detail here, as it can get quite technical fast, but the idea is that many data management and storage systems in the market today are getting long in the tooth.  And with that age comes incredible expense and risk that many organizations are looking to mitigate.  One example of this would be what is called a "Data Warehouse Offload".  For years, a few companies have dominated the traditional data warehousing space, and as such it has gotten ever more expensive to hold the huge amounts of data that companies are producing.  Many of these companies have begun to offload this data from these expensive, older and less flexible systems onto newer, more agile, more innovation friendly, cheaper systems like Hadoop.

So, we will leave it at that today.  These are some of the ways that companies are starting to use big data to bring added value to their companies and customers.  Of course, there are a number of other use cases not listed here, that are adding great value to large enterprises across all industries.  The key is finding the use cases that are going to drive the most value for you and making a plan to see it through.

Next time, we will talk about the value of some of these use cases and who in a company typically should be thinking about these things...


Monday, October 17, 2016

What Really Does Big Data Mean to People?

It seems that I go through these times in life (which I assume most people do) when life hits you upside the head and you feel like you are just a small little boat being tossed around in the middle of a hurricane.  That is what life has been like for me in the last 9 months or so, but I am back writing, which is something I truly love to do.

So, let's pick right back up where we left off back in March with some discussion around big data and the value it brings to an organization.  I know some of this may be a bit elementary now, 9 months later, but it still is worth discussing.

Big data, to most people, is a really nebulous term that means very little to them.  Almost everyone in the business world has heard or talked about the concept and what it means for many months now, but I am unsure if business folks really understand it.  I would go as far as to say that most still do not.

The way that I look at big data is quite simplistic.  I believe the world of big data is really all about working to get more value out of data, in order to better acquire and serve customers.  Yes, you will have people writing articles talking about how it is game changing, how it is revolutionary, how it is not about big data but small data, and the like.  All of this is noise in my mind.  The real focus of any business with data is: how can I get more value out of it?

For most, I still think big data is a bit intimidating as it can be incredibly complex.  Within the big data "ecosystem" you have a whole bunch of new or growing concepts that fuse together to help companies derive value for their organization or customers.  There is distributed computing, dataflow, sensors, algorithms, models, machine learning, artificial intelligence and so on.  And then there are names like Hadoop, HBase, NoSQL or Spark that get thrown around to make things even more confusing.

But lets just keep it simple.  When people talk about Big Data, it is really just a bunch of really interesting concepts and technology that are coming together to help drive value in new ways for companies.  

Again, I know that this is not a game changing post or super insightful for many, but I think keeping the world of big data simple helps many non technical folks begin to wrap their heads around where technology is enabling us to go with our customers.

In the next post, we will take this very simple way of looking at Big Data and apply it to use cases that are beginning to add value for companies.

Tuesday, March 29, 2016

Big Data and Its Implications For Customer Experience

Now that I have a bit of a new focus for this blog around big data and how that impacts the customer experience, I thought I would write a series of posts that, at the core, lay out what people are doing with big data today and how it has been useful for driving customer experience in organizations.  I know there are a ton of blogs or sites that may discuss this, but I am going to try to keep it simple, short and understandable for anyone to consume.

Over the course of the next few blog posts, I am going to try to lay out:

  • What big data really means to most people
  • Why it is important
  • What kinds of use cases are being deployed currently
  • The associated business value of those use cases
  • The technology behind the big data ecosystem
  • The journey data takes to get from being produced to being used


By doing this, I hope to help pull back the veil of complexity that seems to trip most non developer/non technical folks up when they are learning about big data.  It surprises me how poor of a job most big data companies are doing in communicating the capabilities of big data technology and how those capabilities relate to true business value.  And this is not just a problem for executives or what most people refer to as the "business" people.  I am still seeing many people that are considered technical in their job roles who don't fully understand the technology ecosystem in big data and how it can enable business value.

So these first few posts will try to break this complexity and confusion down into small, digestible nuggets that anyone can consume and begin to use in their role.

First up, we will focus on what big data really means to most people and why it is important.

Tuesday, March 1, 2016

New Beginnings

This is a new beginning for me, not only with my blog but also with work changes over the last few years.
Over the last 4-5 years, I have been writing fairly regularly about technology in a space that was near and dear to my heart for many years, customer service.  This blog was all about creating authentic customer experiences, which I believed, and continue to believe, are at the heart of any company that is going to survive in the ever changing business climate we live in.
About 18 months ago now, I got a phone call from a good friend of mine from college who had some questions for me about technology.  He knew I worked at Salesforce and wanted to ask me about a company in the Salesforce ecosystem.  I helped answer his questions about this technology and then we wrapped up the call.  He then called me a few more times, and thus started an unforgettable journey with him and a few other guys to start a company called Onyara.
Onyara was really the true dream startup experience that literally none of us expected.  It had the meager beginnings of starting in a basement to talk about how we were going to do this thing.  It had all of the anxiety of leaving steady jobs with steady paychecks which were providing for guys with young families.  It had the ups and downs of interpersonal relationships, raising money, finding the first customers etc…. And we were blessed to have a successful exit of the business, being sold off to a larger company.
I consider myself incredibly blessed to have been a part of this journey with Onyara and specifically with the guys that I got a chance to work with for 18 or so months.  It was a great adventure and one that I will never forget.
So now I am on to the next chapter of work life, again working for a much bigger company that acquired our team back in late August of 2015.  Integrating into a new company is an interesting challenge all its own.  But it is one that I am happy to take on and help our new parent be ridiculously successful.
Which means that I will begin to morph this blog to cover other, newer, fresh topics that are top of mind for people in the Enterprise Business space and specifically, I will focus on how technology is impacting these areas.
For now, some of the new topics will focus on data, as I believe that data really is the great equalizer.  Producing it, Moving it, Processing it, Mining it, Enriching it, Transforming it, Analyzing it, Storing it and most importantly, creating some specific and meaningful action to take because of it. This last bit about taking action being the most interesting part for me in the future.
I will then just let it flow and see where I take things.
I hope you enjoy the journey along with me as I give you my perspective and what I am seeing in the market.
Rob Sader