Four Ways to Regroup After Implementing Hadoop

Posted on September 24, 2015

“Thru 2018, 70 percent of Hadoop deployments will not meet cost savings and revenue generation objectives due to skills and integration challenges.” 

– Nick Heudecker, Gartner Analyst

A Gartner research report on Apache Hadoop found that 46 percent of companies surveyed plan Hadoop investments, indicating slow but steady adoption. However, questions remain about achieving optimal value with Hadoop. Additionally, critical enterprise features – such as ACID compliance, reliability and disaster recovery – take decades to mature and are not found in the Hadoop stack.

Fortunately, MarkLogic enhances the value of the data in Hadoop by making it accessible and operational. With a scale-out architecture and the only database that combines the flexibility of NoSQL with enterprise-hardened features such as ACID transactions and government-grade security, MarkLogic is an ideal database for Hadoop.

Use it to:

  1. Run Real-time Enterprise Applications with Hadoop Data – The Hadoop Distributed File System (HDFS) is a cost-effective file system, but it lacks enterprise-grade capabilities such as indexing. A database is required to handle this workload, and MarkLogic is well suited because it uniquely combines the proven capabilities of relational databases (indexes, transactions, security and enterprise operations) with the flexibility and scalability of open source NoSQL databases.
  2. Manage Data with Cost-Effective Tiered Storage – MarkLogic helps Hadoop users segregate data, store it according to its value and make it available as needed. HDFS stores data cost-effectively, but it is not designed for real-time access that requires indexes and interactive queries. MarkLogic leverages HDFS within a tiered storage model, seamlessly moving data between any combination of HDFS, S3, SSD, SAN, NAS, or local disk – all without code changes, sparing companies costly and complex ETL and data duplication. ACID compliance is maintained as data moves during online operations, and the model also eases data governance and provisioning challenges.
  3. Make Hadoop a Compute Layer – The MarkLogic Connector for Hadoop gives MapReduce jobs direct, parallel access to the data stored in a MarkLogic database. Jobs see consistent views of the data and can complete in minutes rather than hours.
  4. Complement the Hortonworks Data Platform (HDP) and Cloudera – MarkLogic is certified on both the Hortonworks Data Platform (HDP) and the Cloudera Distribution for Hadoop (CDH).
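The tiered-storage idea in point 2 can be illustrated with a small, generic sketch. To be clear, this is not MarkLogic's actual API; the tier names, age thresholds, and `Document` type below are assumptions chosen only to show how value-based routing between storage tiers might be modeled:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical tiers, ordered from fastest/most expensive to cheapest.
TIERS = ["ssd", "local-disk", "hdfs", "s3"]

@dataclass
class Document:
    uri: str
    last_accessed: datetime

def choose_tier(doc: Document, now: datetime) -> str:
    """Route a document to a storage tier by access recency (assumed policy)."""
    age = now - doc.last_accessed
    if age < timedelta(days=7):
        return "ssd"          # hot: interactive, indexed queries
    if age < timedelta(days=90):
        return "local-disk"   # warm: regular operational access
    if age < timedelta(days=365):
        return "hdfs"         # cold: batch analytics
    return "s3"               # archive: rarely touched
```

In a real deployment the database applies a policy like this internally, so applications query the same data regardless of which tier currently holds it.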
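To make point 3 concrete, here is a minimal in-memory sketch of the map/shuffle/reduce pattern that a Hadoop job applies to the documents it reads. With the MarkLogic Connector for Hadoop the input would come from the database; the `map_reduce` helper and the word-count example are illustrative assumptions, not the connector's API:

```python
from collections import defaultdict

def map_reduce(docs, mapper, reducer):
    """Apply the classic map/shuffle/reduce pattern to an iterable of documents."""
    # Map phase: each document yields zero or more (key, value) pairs.
    pairs = [kv for doc in docs for kv in mapper(doc)]
    # Shuffle phase: group values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce phase: fold each group of values down to one result per key.
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count over a few toy documents.
docs = ["hadoop stores data", "marklogic indexes data"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(word, 1) for word in doc.split()],
    reducer=lambda word, ones: sum(ones),
)
# counts["data"] == 2
```

A real job distributes the map and reduce phases across the cluster; the connector's role is to split the database's contents so each mapper reads its share directly and in parallel.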

MarkLogic helps customers build real-time enterprise applications, leverage infrastructure investments to save time and money, improve data governance, and affordably run a mix of analytical and operational workloads based on Hadoop data.

Learn how to maximize the value of your Hadoop investment – see us at Strata+Hadoop World in Booth #732.

For more information, be sure to watch the Video: Making Hadoop Better with MarkLogic.

Sources:

– Quote from Nick Heudecker via Twitter

Jim Clark

View all posts from Jim Clark on the Progress blog.
