Software frameworks like Apache Hadoop can help you process large data sets by distributing the data and processing across many computers. But deploying, configuring, and managing these distributed clusters can be difficult, time-consuming, and expensive.
Amazon EMR is a managed Hadoop framework that uses the elastic infrastructure of Amazon EC2 and Amazon S3 to make it easy, fast, and cost-effective to distribute the processing of your data across multiple, dynamically scalable EC2 instances.
You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.
Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.
EMR manages the clusters so you can focus on analyzing the data.
With Amazon EMR, you can provision one, hundreds, or even thousands of compute instances to process data at any scale. You can easily increase or decrease the number of instances manually or with Auto Scaling, and you pay only for what you use.
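As a sketch of manual resizing, the instance count of a running cluster can be changed with a single AWS CLI call. The cluster ID and instance-group ID below are placeholders, not values from the Quick Start:

```shell
# Resize the core instance group of a running EMR cluster.
# Cluster ID and instance-group ID are placeholders; look yours up with
# `aws emr list-clusters` and `aws emr describe-cluster`.
CLUSTER_ID="j-XXXXXXXXXXXXX"
INSTANCE_GROUP_ID="ig-XXXXXXXXXXXXX"

CMD="aws emr modify-instance-groups --cluster-id $CLUSTER_ID \
--instance-groups InstanceGroupId=$INSTANCE_GROUP_ID,InstanceCount=10"

# Print the command for review; remove the echo to run it for real.
echo "$CMD"
```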
You can spend less time tuning and monitoring your cluster. Amazon EMR has tuned Hadoop for the cloud; it also monitors your cluster, retrying failed tasks and automatically replacing poorly performing instances.
Amazon EMR automatically configures Amazon EC2 firewall settings that control network access to instances, and you can launch clusters in an Amazon Virtual Private Cloud (VPC), a logically isolated network you define. For objects stored in Amazon S3, you can use Amazon S3 server-side encryption or Amazon S3 client-side encryption with EMRFS, using keys managed by AWS Key Management Service (KMS) or keys you manage yourself. You can also easily enable other encryption options and authentication with Kerberos.
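As a minimal sketch of the EMRFS encryption option, the snippet below creates an EMR security configuration that enables SSE-KMS for data at rest in S3. The configuration name and KMS key ARN are placeholder assumptions:

```shell
# Sketch: enable SSE-KMS for EMRFS via an EMR security configuration.
# The configuration name and the KMS key ARN are placeholders.
cat > secconfig.json <<'EOF'
{
  "EncryptionConfiguration": {
    "EnableInTransitEncryption": false,
    "EnableAtRestEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": {
        "EncryptionMode": "SSE-KMS",
        "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/your-key-id"
      }
    }
  }
}
EOF

CMD="aws emr create-security-configuration --name demo-sse-kms \
--security-configuration file://secconfig.json"

# Print the command for review; remove the echo to run it for real.
echo "$CMD"
```

The security configuration is created once and then referenced by name when launching clusters.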
You have complete control over your cluster. You have root access to every instance, you can easily install additional applications, and you can customize every cluster with bootstrap actions. You can also launch Amazon EMR clusters with custom Amazon Linux AMIs.
You can launch an Amazon EMR cluster in minutes. You don’t need to worry about node provisioning, cluster setup, Hadoop configuration, or cluster tuning. Amazon EMR takes care of these tasks so you can focus on analysis.
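To make the "minutes, not days" claim concrete, here is a hedged sketch of launching a small Spark cluster from the AWS CLI, including a bootstrap action as described above. Every name below (cluster name, key pair, bucket, bootstrap script) is a placeholder, not part of the Quick Start:

```shell
# Sketch: launch a small Spark cluster with one CLI call.
# All names (key pair, buckets, bootstrap script) are placeholders.
CMD="aws emr create-cluster --name demo-cluster \
--release-label emr-5.20.0 \
--applications Name=Spark \
--instance-type m5.xlarge --instance-count 3 \
--use-default-roles \
--ec2-attributes KeyName=my-key-pair \
--bootstrap-actions Path=s3://my-bucket/bootstrap/install-tools.sh \
--log-uri s3://my-bucket/emr-logs/"

# Print the command for review; remove the echo to run it for real.
echo "$CMD"
```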
At nClouds, we wanted to make it fast and easy to get started with Amazon EMR, so we created a Quick Start for Amazon EMR. You can get up and running quickly with your use cases, and we've made it easy to take advantage of Spot and Dedicated Instance discounts.
Go faster & reduce costs:
Provisioned resources summary
1. Enter the parameters specific to your AWS account and submit the CloudFormation (CF) stack.
2. CF stack creation in progress.
3. The CF stack creates the EMR cluster.
4. Nodes are created in the new VPC provisioned by the CF stack.
5. Stack created successfully.
6. The EMR cluster is ready to take jobs.
7. The CF stack executes the Spark job successfully.
8. The output of the Spark job is pushed to the S3 bucket.
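The walkthrough above uses the console, but the same stack can also be submitted from the command line. The stack name, template URL, and key-pair parameter below are placeholders, not the Quick Start's actual values:

```shell
# Sketch: submit a CloudFormation stack that provisions an EMR cluster.
# Stack name, template URL, and parameter values are placeholders.
CMD="aws cloudformation create-stack --stack-name nclouds-emr-demo \
--template-url https://s3.amazonaws.com/my-bucket/emr-quickstart.template \
--parameters ParameterKey=KeyName,ParameterValue=my-key-pair \
--capabilities CAPABILITY_IAM"

# Print the command for review; remove the echo to run it for real.
echo "$CMD"
```

`CAPABILITY_IAM` is required whenever a template creates IAM resources, as a stack that provisions an EMR cluster typically does.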
To access the cluster via SSH, replace the default key, nclouds-emr-demo, with the name of a key pair that already exists in your AWS account.
In addition, you will probably want to set up your own workload by updating the sample Spark step.
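For example, a custom Spark step can be added to a running cluster from the CLI. The cluster ID, step name, and application JAR path below are placeholders for your own values:

```shell
# Sketch: add a custom Spark step to a running EMR cluster.
# Cluster ID, step name, class, and JAR path are placeholders.
CMD="aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
--steps Type=Spark,Name=MyWorkload,ActionOnFailure=CONTINUE,\
Args=[--class,com.example.Main,s3://my-bucket/jars/my-app.jar]"

# Print the command for review; remove the echo to run it for real.
echo "$CMD"
```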
You can also email us directly at email@example.com with your inquiries, or use the form below.