If it seems your IT team has more data to manage than ever before, you’re not mistaken. Just about every enterprise is trying to figure out how to handle relentless data growth without a matching increase in budget or staff.
451 Research analyst Henry Baltazar emphasized this trend in a recent report, noting, “The increasing relevancy of data management is in parallel with the ongoing growth of the sheer volume of data that enterprises must deal with.” The good news is that there are many approaches IT can take to ease the challenges of data growth. Let’s take a look at four steps IT can use to make a big impact.
It might seem obvious, but you can’t fix problems you don’t know about. This makes it critical to gain visibility into what’s really going on with your data. If you’re not sure which data is hot and truly needs pricey all-flash performance, or which data is cold and can move to a less costly storage resource, that blind spot may be costing your enterprise more than you realize. When application owners ask for more performance or capacity, you can often serve their needs far more efficiently by first making sure the right data is on the right resource.
To do this, you’ll need insight into activity across your storage resources. Software can now deliver this insight, using metadata to determine when files were last opened, by whom, when they were last changed, and so on. Before you can fix any of your other data management problems, it is critical to get a unified view of what’s going on with your data. Look for solutions with dashboards that give you a clear picture of aggregated data activity across your storage ecosystem rather than just one system at a time; few IT departments have the time to monitor and collect information across multiple different systems.
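To make the idea concrete, here is a minimal sketch of metadata-driven classification, assuming a simple policy where files untouched for 30 days count as cold. The threshold and the directory-walk approach are illustrative; commercial metadata engines gather this telemetry across systems rather than one tree at a time.

```python
import time
from pathlib import Path

HOT_THRESHOLD_DAYS = 30  # assumption: untouched for 30+ days counts as cold


def classify_by_last_access(root):
    """Walk a directory tree and bucket files as hot or cold
    using the last-access time from filesystem metadata."""
    now = time.time()
    hot, cold = [], []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        idle_days = (now - path.stat().st_atime) / 86400
        (hot if idle_days < HOT_THRESHOLD_DAYS else cold).append(str(path))
    return hot, cold
```

Note that on Linux, atime accuracy depends on mount options such as `relatime`, so production tools typically combine several metadata signals rather than relying on access time alone.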
Most petabyte-scale enterprises have significant storage sprawl: according to a 2016 survey, over half manage ten or more different storage systems. As the business ages, storage sprawls out even further, and IT soon ends up managing a substantial infrastructure investment. That infrastructure is valuable, but because moving data is difficult, over time much of it ends up on the wrong resource for current business needs.
By virtualizing data with software, enterprises can create a global namespace that makes different storage resources simultaneously available to applications. Once the control path is separated from the data path through virtualization, control can span storage silos. This makes it possible to easily move data without interrupting applications. That way, high performance storage can serve hot data, and budgets can be better utilized by moving colder data to a lower-cost storage tier. As an added bonus, painful storage migrations become obsolete, as data moves as needed throughout its life cycle.
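The core idea of separating the control path from the data path can be sketched in a few lines. This is a toy illustration only, not any vendor's implementation: applications keep a stable logical path, while the namespace maps it to whichever tier currently holds the data.

```python
class GlobalNamespace:
    """Toy sketch of control-path/data-path separation: applications use a
    stable logical path, while the namespace maps it to whichever storage
    tier currently holds the data."""

    def __init__(self):
        self._map = {}  # logical path -> (tier name, physical location)

    def register(self, logical, tier, physical):
        self._map[logical] = (tier, physical)

    def resolve(self, logical):
        tier, physical = self._map[logical]
        return physical

    def move(self, logical, new_tier, new_physical):
        # Only the mapping changes; applications keep the same logical path.
        self._map[logical] = (new_tier, new_physical)
```

For example, a file registered on a flash array can later be moved to object storage, and an application resolving the same logical path simply lands at the new physical location, which is why migrations stop being disruptive events.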
There are few better options for saving budget today than adding on-premises object storage or cloud storage. The challenge is integrating the cloud as a storage tier and moving the right data off other storage. Data virtualization, metadata management and machine learning can combine to make this a simple, automated process in which IT-defined objectives determine how much performance each application truly needs, or how much IT wants to spend to store a given data set. As data goes cold (whether “cold” means a month or a year of inactivity for your enterprise), IT can move it off high-performance storage while keeping it accessible in case it is needed again.
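An objective-driven tiering pass might look like the following sketch, assuming a hypothetical 90-day inactivity objective and a symlink left behind so the data stays reachable at its old path. Real platforms do this transparently inside the filesystem; this just shows the policy logic.

```python
import shutil
import time
from pathlib import Path


def tier_cold_files(hot_dir, cold_dir, max_idle_days=90):
    """Move files idle longer than max_idle_days from hot_dir to cold_dir,
    leaving a symlink behind so the data stays accessible at its old path.
    (Assumption: a 90-day inactivity objective; tune to your business.)"""
    now = time.time()
    moved = []
    for path in sorted(Path(hot_dir).iterdir()):
        if path.is_symlink() or not path.is_file():
            continue  # skip already-tiered entries and directories
        idle_days = (now - path.stat().st_atime) / 86400
        if idle_days >= max_idle_days:
            dest = Path(cold_dir) / path.name
            shutil.move(str(path), str(dest))
            path.symlink_to(dest)  # old path still resolves to the data
            moved.append(path.name)
    return moved
```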
When adding the cloud, it’s important to make sure data moved off-premises can seamlessly move back at the file level. If you are forced to rehydrate an entire volume from the cloud, you could end up paying much more than you bargained for. This is because it is generally inexpensive to move data to the cloud, but costly to bring it back again. Making sure you can pull back data at file-level granularity will help you keep costs low while enjoying the flexibility and agility that is driving rapid cloud adoption in the enterprise.
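The arithmetic makes the point starkly. Using an illustrative (not vendor-quoted) egress rate, compare recalling only the files you need against rehydrating a whole volume:

```python
EGRESS_PER_GB = 0.09  # assumption: illustrative per-GB egress rate in USD


def recall_cost(gb_needed, volume_gb, file_level=True):
    """Estimated egress cost of bringing data back from the cloud.
    File-level recall bills only the files you need; forced volume
    rehydration bills the entire volume."""
    billed_gb = gb_needed if file_level else volume_gb
    return round(billed_gb * EGRESS_PER_GB, 2)
```

At this assumed rate, recalling 5 GB of files from a 10 TB (10,240 GB) volume costs about $0.45 with file-level granularity, versus roughly $921.60 if the whole volume must come back.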
Once you’ve gained insight into your data and given your applications awareness of your diverse storage resources, the final step is automating management. Some storage systems can provide these capabilities within a single system or vendor ecosystem, but metadata engine software can automate management according to IT-defined objectives even across different vendors. Storage Switzerland lead analyst George Crump calls this “end-to-end data management.”
With machine learning on the rise, it is no surprise that this kind of intelligence is coming to data management as well. Over time, smart software can learn patterns, such as internal business data getting hot at the end of each quarter, and, if IT’s data management objectives allow, move that data back to performance storage before it is time to prepare reports.
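Even a simple heuristic captures the spirit of this. The sketch below, with an assumed 2x spike threshold, flags datasets whose access counts jump in quarter-end months so they can be pre-staged on performance storage; production systems would use richer models and more signals.

```python
from statistics import mean

QUARTER_END_MONTHS = {3, 6, 9, 12}
SPIKE_FACTOR = 2.0  # assumption: quarter-end access 2x baseline flags a pattern


def prestage_candidates(access_history):
    """access_history maps dataset name -> {month: access count} for the
    past year. Datasets whose quarter-end activity spikes well above
    their baseline are candidates to move back to performance storage
    before reporting season."""
    candidates = []
    for name, by_month in access_history.items():
        quarter_end = [c for m, c in by_month.items() if m in QUARTER_END_MONTHS]
        baseline = [c for m, c in by_month.items() if m not in QUARTER_END_MONTHS]
        if quarter_end and baseline and mean(quarter_end) > SPIKE_FACTOR * mean(baseline):
            candidates.append(name)
    return candidates
```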
The old adage says the only things certain in life are death and taxes, but today we might add data growth to the list. Given that most IT departments aren’t getting more budget or headcount to deal with that growth, visibility, integration, cloud adoption, and automation are critical to freeing staff for strategic projects instead of spending their days as storage traffic cops. Determining how to add these capabilities to your enterprise today is essential to scaling for the challenges facing every business that wants to turn its data into a competitive edge.