The big data market continues to expand, enabling new types of analyses, new business models and new revenue streams for organizations that implement these capabilities. Following our previous research into big data and information optimization, we’ll investigate the technology trends affecting both of these domains as part of our 2016 research agenda.
A key tool for deriving value from big data is in-memory computing. As data is generated, organizations can use the speed of in-memory computing to accelerate analytics on that data. Nearly two-thirds (65%) of participants in our big data analytics benchmark research identified real-time analytics as an important aspect of in-memory computing. Real-time analytics enables organizations to respond to events quickly, for instance, minimizing or avoiding the cost of downtime in manufacturing processes, or rerouting deliveries in transit to cover delays in other shipments to preferred customers. Several big data vendors offer in-memory computing in their platforms.
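To make the idea concrete, here is a minimal sketch of in-memory analytics using PySpark, one of several engines that keep working data in memory. The file name, columns and the downtime query are illustrative assumptions, not a reference to any particular vendor's platform.

```python
# Minimal sketch: cache a dataset in memory so repeated analytics avoid disk I/O.
# "events.parquet" and the column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("in_memory_analytics").getOrCreate()

events = spark.read.parquet("events.parquet").cache()  # pin the data in memory
events.count()  # force materialization of the cache

# Subsequent queries, such as totaling downtime by production line, run
# against the in-memory copy rather than rereading from storage.
downtime = (
    events.filter(F.col("status") == "down")
          .groupBy("production_line")
          .agg(F.sum("duration_minutes").alias("downtime_minutes"))
)
downtime.show()
```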
Predictive analytics and machine learning also contribute to information optimization. These analytic techniques can automate some decision-making to improve and accelerate business processes that deal with large amounts of data. Our new big data benchmark research will investigate the use of predictive analytics with big data, among other topics. In combination with our upcoming data preparation benchmark research, we’ll explore the unification of big data technologies and its impact on the resources and tools needed to use big data successfully. In our previous research, three-quarters of participants said they use business intelligence tools to work with big data analytics. We will look for similar unification of other technologies with big data.
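As a hedged illustration of how predictive analytics can automate a decision, the sketch below trains a simple scikit-learn classifier on synthetic shipment data and flags deliveries whose predicted risk of delay exceeds a threshold. The features, data and 70% cutoff are assumptions made for illustration only.

```python
# Illustrative sketch: automate a routing decision with a predictive model.
# All data is synthetic; features and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic features: distance (km), carrier load factor, weather severity score.
X = rng.random((1000, 3)) * [500.0, 1.0, 10.0]
# Synthetic label: 1 if the shipment was late, derived from a noisy linear rule.
y = (0.002 * X[:, 0] + 0.5 * X[:, 1] + 0.08 * X[:, 2]
     + rng.normal(0, 0.2, 1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Automate the decision: reroute any shipment whose predicted delay risk
# exceeds 70% (an assumed business threshold).
risk = model.predict_proba(X_test)[:, 1]
reroute = risk > 0.7
print(f"Flagged {reroute.sum()} of {len(reroute)} shipments for rerouting")
```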
Another key trend we will explore is the use of data preparation and information management tools to simplify access to data. Data preparation is a key step in this process, yet our data and analytics in the cloud benchmark research reveals that it requires too much time: More than half (55%) of participants said they spend the most time in their analytic process preparing data for analysis. Virtualizing data access can accelerate access to data and enable data exploration with less investment than is required to consolidate data into a single repository. We will track adoption of cloud-based and virtualized integration capabilities and the increasing use of Hadoop as a data source and store for processing big data. In addition, our research will examine the role of search, natural language and text processing.
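A small example of the preparation work that consumes that time: the pandas sketch below deduplicates records, coerces types and fills gaps before handing off an analysis-ready file. The file and column names are hypothetical.

```python
# Minimal data preparation sketch with pandas; inputs are hypothetical.
import pandas as pd

raw = pd.read_csv("shipments_raw.csv")

prepared = (
    raw.drop_duplicates(subset="shipment_id")            # remove duplicate records
       .assign(
           ship_date=lambda df: pd.to_datetime(df["ship_date"], errors="coerce"),
           weight_kg=lambda df: pd.to_numeric(df["weight_kg"], errors="coerce"),
       )                                                  # coerce types, invalid -> NaN
       .dropna(subset=["ship_date"])                      # drop rows with unusable dates
       .fillna({"carrier": "unknown"})                    # fill gaps in categorical fields
)

prepared.to_parquet("shipments_clean.parquet")  # hand off in an analysis-ready format
```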
We suggest organizations develop their big data competencies for continuous analytics – collecting and analyzing data as it is generated. That effort should start with establishing appropriate data preparation processes for information responsiveness. Data models and analyses should support machine learning and cognitive computing to automate portions of the analytic process. Much of this data will have to be processed in real time as it is generated. All of these capabilities will require mature methods for big data governance and master data management. We look forward to reporting on developments in these areas throughout 2016 in our Big Data and Information Optimization Research Agenda.
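To show what continuous analytics can look like in practice, here is a sketch using Spark Structured Streaming that aggregates events in one-minute windows as they arrive. The built-in rate source stands in for whatever stream an organization actually generates.

```python
# Sketch of continuous analytics: windowed counts over a live stream.
# The "rate" source (which emits rows continuously) is a stand-in data stream.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("continuous_analytics").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Count events per one-minute window, updating results as new data arrives;
# the watermark bounds how long late-arriving data is accepted.
counts = (
    stream.withWatermark("timestamp", "2 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```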
Regards,
David Menninger
SVP & Research Director