If you enjoyed my previous blog, “Hadoop Is the Elephant in the Room,” perhaps you’d be interested in what your organization might do with Hadoop. As I mentioned, the Hadoop World event this week showcased some of the biggest and most mature Hadoop implementations, such as those of eBay, Facebook, Twitter and Yahoo. Those of you who, like eBay, need 8,500 processors and 16 petabytes of storage likely already know about Hadoop. But is Hadoop relevant to organizations whose data volumes are smaller, yet still substantial?
For those not yet familiar with Hadoop, it is an open source software project with two key components: the Hadoop Distributed File System (HDFS) and a data processing and job scheduling technique called MapReduce. Depending on which distribution you use, there can be as many as nine other components, along with complementary tools and products from proprietary and open source software companies. In this post I’ll concentrate on why you might be interested in learning more about Hadoop and its components rather than explaining what each of the components does.
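To make the division of labor concrete, here is a minimal sketch of the canonical word-count job written against Hadoop’s Java MapReduce API: HDFS supplies the input splits, the mappers emit (word, 1) pairs, and the reducers sum them. The input and output paths are hypothetical HDFS directories passed on the command line; nothing here comes from the implementations presented at the event.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Mapper: emits (word, 1) for every word in each line read from HDFS.
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  // Reducer: sums the counts for each word across all mappers.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A jar containing this class would typically be submitted with something like hadoop jar wordcount.jar WordCount /input /output, and the results land back in HDFS as part files.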
I see three common use cases for Hadoop:
1) To store and analyze large amounts of data without having to load the data into an RDBMS
2) To convert large amounts of unstructured or semistructured data (such as log files) into structured data so it can be loaded into an RDBMS (a sketch of this appears after the list)
3) To perform complex analytics that are hard to express in SQL, such as graph analysis and data mining
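As an illustration of the second use case, here is a hedged sketch of a map-only job that turns Apache-style access log lines into tab-delimited records ready for a bulk load into an RDBMS. The regular expression and the choice of fields are my own assumptions about the log format, not details taken from any implementation mentioned in this post.

```java
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Parses Apache-style access log lines into tab-delimited records
// (ip, timestamp, request, status, bytes) suitable for bulk loading into an
// RDBMS. Run map-only by calling job.setNumReduceTasks(0) in the driver;
// lines that do not match the pattern are simply skipped.
public class LogParseMapper extends Mapper<Object, Text, Text, NullWritable> {
  private static final Pattern LOG_LINE = Pattern.compile(
      "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"([^\"]*)\" (\\d{3}) (\\S+)");
  private final Text record = new Text();

  @Override
  protected void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    Matcher m = LOG_LINE.matcher(value.toString());
    if (m.find()) {
      record.set(m.group(1) + "\t" + m.group(2) + "\t" + m.group(3)
          + "\t" + m.group(4) + "\t" + m.group(5));
      context.write(record, NullWritable.get());
    }
  }
}
```

The resulting files can then be loaded into a database with whatever bulk-load facility that database provides.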
Generally the factor that prompts organizations to consider Hadoop is data volume. Hadoop is designed to process large batches of data quickly. Several presenters at the conference said it enables them to do analyses that they couldn’t do previously. Often there is no real alternative to Hadoop to complete such analyses in a reasonable timeframe. The other initial attraction is cost savings derived from Hadoop being an open source technology, which holds down or eliminates software licensing and upgrade fees.
Hadoop World offered 45 breakout sessions. By far the largest market segment represented was Web-related businesses such as AOL, eBay, Facebook, Mozilla, StumbleUpon, Twitter and Yahoo. These organizations have to deal with large volumes of log files, search strings and social network data. Other market segments represented included media and advertising, financial services, healthcare and government intelligence.
In the media and advertising space, organizations are using Hadoop to perform best-ad-offer analysis and to analyze the performance of online videos to determine, for example, the factors behind viewer abandonment. I was surprised that only a handful of the 900+ attendees identified themselves as being part of the financial services industry. Bank of America gave a presentation, but it didn’t go into much detail on how the bank is using Hadoop. Chicago Mercantile Exchange speakers talked about how they analyze daily streams of transaction data. As well, I know of at least two firms (not part of the event) that are analyzing trade data with Hadoop to back-test trading algorithms. One chose Hadoop because it can express complex algorithms more easily than SQL can; the other chose Hadoop to replace an RDBMS because of its cost advantages.
In the healthcare space, one presentation discussed analyzing the intersection of mountains of electronic health records, treatment protocols and clinical outcomes. I also know of pharmaceutical organizations using Hadoop in the drug discovery process. And while I also know of Hadoop being used in the intelligence community, if I told you about it I’d have to kill you. However, it is easy to imagine that the intelligence community would be interested in social network analysis, digital image analysis and other analyses involving large amounts of data and/or complex algorithms that would be difficult to express in SQL.
For more use cases and examples of the popularity of Hadoop, see http://wiki.apache.org/hadoop/PoweredBy, where close to 200 organizations have voluntarily listed information about how they use it.
Having discussed the virtues of the technology, I also want to point out some caveats. First, Hadoop is not a real-time processing environment but a batch processing environment, with response times measured in minutes or hours depending on data volumes. I heard several times at the event that just starting up a Hadoop job takes around 30 seconds. Your mileage may vary, but the point is that it doesn’t provide subsecond, or even few-second, response times.
As well, Hadoop is not a database environment in the traditional sense. However, it can be used to store large amounts of data, such as source files or detailed data that is not accessed frequently. Shifting some of this type of data to Hadoop can help reduce the licensing costs of a traditional RDBMS. Frequently accessed data (typically the results of a Hadoop job) would be stored in an RDBMS for ad-hoc or frequent query and analysis.
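To show what that division of labor might look like in practice, here is a minimal sketch that reads the tab-delimited output of a Hadoop job (for example, a part-r-00000 file copied out of HDFS with hadoop fs -get) and batch-loads it into a relational table over JDBC. The connection string, table and column names are hypothetical placeholders, and in production you would more likely use a purpose-built transfer tool; the point is simply the pattern of keeping cold, detailed data in Hadoop and loading only the results into the RDBMS.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical example: loads the tab-delimited output of a Hadoop job
// (e.g., word \t count) into an RDBMS table so it can be queried interactively.
// The JDBC URL, credentials, table and column names are placeholders.
public class LoadResultsIntoRdbms {
  public static void main(String[] args) throws Exception {
    String resultsFile = args[0]; // e.g., a local copy of part-r-00000
    try (Connection conn = DriverManager.getConnection(
             "jdbc:postgresql://localhost:5432/analytics", "user", "password");
         BufferedReader reader = new BufferedReader(new FileReader(resultsFile));
         PreparedStatement insert = conn.prepareStatement(
             "INSERT INTO word_counts (word, total) VALUES (?, ?)")) {
      String line;
      int batched = 0;
      while ((line = reader.readLine()) != null) {
        String[] fields = line.split("\t");
        if (fields.length != 2) {
          continue; // skip malformed lines
        }
        insert.setString(1, fields[0]);
        insert.setLong(2, Long.parseLong(fields[1]));
        insert.addBatch();
        if (++batched % 1000 == 0) {
          insert.executeBatch(); // flush in chunks to keep memory use bounded
        }
      }
      insert.executeBatch(); // flush any remaining rows
    }
  }
}
```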
Whether your motivation is to achieve scalability, cost savings or complex analytics, Hadoop is a technology worth considering. At this point there are plenty of examples of its use you can draw upon to understand how it could be relevant to your organization.
Let me know your thoughts or come and collaborate with me on Facebook, LinkedIn and Twitter.
Regards,
David Menninger – VP & Research Director
David Menninger leads technology software research and advisory for Ventana Research, now part of ISG. Building on over three decades of enterprise software leadership experience, he guides the team responsible for a wide range of technology-focused data and analytics topics, including AI for IT and AI-infused software.