For many, Hadoop is synonymous with big data. Hadoop is not a single application but a set of open-source tools whose ultimate goal is to analyze large volumes of structured and unstructured data.
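To make that concrete, the canonical illustration of Hadoop's processing model is a MapReduce word count: a mapper tokenizes raw text and a reducer sums the counts for each word, with the work distributed across the cluster. The sketch below uses Hadoop's standard MapReduce API (org.apache.hadoop.mapreduce); the input and output paths are supplied on the command line and are illustrative.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in the unstructured input text.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts for each word across all mapper outputs.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each node
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output are HDFS paths passed as command-line arguments.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The same pattern scales from a laptop to thousands of nodes, which is the point: the framework handles distribution, scheduling, and fault tolerance while the developer writes only the map and reduce logic.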

The past five years have seen remarkable development of Hadoop, from a small Apache project to the "next big thing." Of course, as with any next big thing, much of the activity to date has been experimentation. Some organizations already have Hadoop deployments, but the vast majority of enterprises are still looking at this newfangled open-source project and weighing their options.

Nearly four out of ten managers surveyed indicated that they use big data technologies to innovate in products and services, modelling data to test scenarios. Less frequent uses of Hadoop include deployments that work in conjunction with SQL technologies, as sketched below. A significant proportion of organizations use Hadoop to replace traditional data warehouse technologies. Finally, enterprises use Hadoop to analyze the large volumes of data generated by the Web.
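As a rough sketch of the "Hadoop alongside SQL" pattern: tools such as Apache Hive expose data stored in HDFS through a SQL interface that can be queried over JDBC, so existing SQL skills and tooling carry over. The host, port, credentials, and the web_logs table below are illustrative assumptions, not details from the report.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    // Requires the Hive JDBC driver on the classpath.
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    // HiveServer2 JDBC URL; host, port, and database are assumptions for illustration.
    String url = "jdbc:hive2://hive-server.example.com:10000/default";

    try (Connection conn = DriverManager.getConnection(url, "analyst", "");
         Statement stmt = conn.createStatement()) {
      // Standard SQL over data that physically lives in HDFS.
      // The web_logs table and its columns are hypothetical.
      ResultSet rs = stmt.executeQuery(
          "SELECT page, COUNT(*) AS hits "
              + "FROM web_logs GROUP BY page ORDER BY hits DESC LIMIT 10");
      while (rs.next()) {
        System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
      }
    }
  }
}
```

Behind the scenes, Hive translates the query into distributed jobs over the cluster, which is what makes it practical to point familiar SQL at web-scale log data.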

The three main benefits of using Hadoop cited by the report are improved customer satisfaction, reduced development time, and lower operating costs. The difficulties encountered in implementing Hadoop include cost, a lack of available skills, and the challenge of choosing among the technologies.

Hadoop is an increasingly important technology, and many organisations are already storing vast amounts of data in it.