In 2017, Big Data will begin to cross the chasm into the mainstream, driven in large part by the popularity of Hadoop and Spark. Companies will rely on Big Data for mission-critical workloads in their data stacks. Many of these same companies once hesitated because of the security concerns that surrounded Hadoop and Spark; those worries are now largely in the past.
We have only scratched the surface of what Hadoop and Spark can offer when running mission-critical jobs on a high-performance Big Data platform.
We will also see more Big Data workloads moving to the cloud, while many customers who have traditionally run their operations on-premises will shift to a hybrid cloud/on-premises model. Companies will use the cloud not just for data storage but for data processing as well. This mainstream adoption will give organizations the confidence to run their Big Data clusters in the cloud rather than only on-premises.
As Hadoop and Spark enter the mainstream, we can expect customers to demand comprehensive Big Data solutions, not just piecemeal components. Even in 2016, many companies found platforms running only Hadoop and Spark to be unstable. Yet those platforms will be tasked with running a multitude of applications and will be expected to become the cornerstone of companies' Big Data initiatives. On the supplier side, we can expect more companies to sell prebuilt Big Data solutions that address a variety of needs while delivering stable, high performance, along with the ability to "foresee" and head off performance issues before they arise.