Posted in Virtualization and Cloud Computing by Linux Bot on Tuesday 25th of November 2008 01:08:53 PM
CEP as a Service (CEPaaS) with MapReduce on Amazon EC2 and Amazon S3

Tim Bass
11-25-2008 01:02 PM
Just as I was starting to worry that the complex event processing community had been captured by RDBMS pirates off the coast of Somalia, I rediscovered a core blackboard architecture component: Hadoop.

Hadoop is a framework for building applications on large commodity clusters that transparently provides applications with both reliability and data motion. Hadoop implements MapReduce, in which an application is divided into many small units of work, each of which may be executed or re-executed on any node in the cluster.
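To make the programming model concrete, here is a minimal single-process sketch of the MapReduce word-count pattern in Python. This is an illustration of the model only, not Hadoop itself: the real framework distributes the map and reduce tasks across cluster nodes and handles the shuffle over the network, and all function names below are illustrative.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for each word in one input line.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle phase: group intermediate values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: sum the counts for one word.
    return key, sum(values)

def map_reduce(lines):
    intermediate = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(intermediate).items())

counts = map_reduce(["hadoop implements map reduce", "map reduce on hadoop"])
# counts["hadoop"] == 2, counts["implements"] == 1
```

Because each mapper call depends only on its own input line, any line can be mapped on any node, and a failed task can simply be re-executed elsewhere, which is the source of the reliability mentioned above.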

There are a number of good articles on running Hadoop in the Amazon Elastic Compute Cloud (EC2), including this one, Running Hadoop MapReduce on Amazon EC2 and Amazon S3. Hadoop provides the core component that permits a distributed agent-based architecture to become a manageable, simple-to-use service. This, in turn, provides a framework, as a service, for solving complex distributed computing problems.

Another good article to read is Taking Massive Distributed Computing to the Common Man - Hadoop on Amazon EC2/S3. There is also a nice article on Amazon EC2 on the Hadoop Wiki.

It is interesting to note that if you Google around, you will find that the same RDBMS folks who have pirated the term “complex event processing” are among the most vocal Hadoop critics. On further reading, however, you will see that most of the critical comments from the RDBMS crowd have been answered. It is very interesting to see the same debate playing out in the MapReduce community as in the CEP community; the difference, of course, is that the MapReduce community is much larger.

However, there should be no doubt in anyone’s mind that MapReduce, and the Hadoop implementation of it, provide a way to accomplish CEP. It is very refreshing to see this emerging CEP architecture on the rise.
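As a toy illustration of that claim (and not a description of any particular CEP product's architecture), a simple CEP pattern such as "raise an alert when the same primitive event repeats N times" maps naturally onto the two phases: the map phase keys each primitive event, and the reduce phase aggregates per key and emits derived complex events. The event schema, field names, and threshold below are all invented for the example.

```python
from collections import defaultdict

# A batch of primitive events, e.g. collected from log files stored in S3.
events = [
    {"type": "login_failure", "host": "a"},
    {"type": "login_failure", "host": "a"},
    {"type": "login_success", "host": "b"},
    {"type": "login_failure", "host": "a"},
]

def map_event(event):
    # Map phase: key each primitive event by (event type, host).
    return ((event["type"], event["host"]), 1)

def reduce_counts(pairs, threshold=3):
    # Reduce phase: aggregate per key and emit a derived "complex event"
    # for any key whose count reaches the threshold.
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return [
        {"complex_event": "repeated_" + etype, "host": host, "count": n}
        for (etype, host), n in counts.items()
        if n >= threshold
    ]

alerts = reduce_counts(map(map_event, events))
# One derived event: three login failures on host "a"
```

On a Hadoop cluster, the same two functions would run over event logs partitioned across nodes, which is what makes this a batch-oriented, massively parallel complement to stream-based CEP engines.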

Stay tuned for much more information related to MapReduce and CEP.



Source...
 

3 More Discussions You Might Find Interesting

1. Virtualization and Cloud Computing

Running MySQL on Amazon EC2 with Elastic Block Store

Here is an excellent article on Running MySQL on Amazon EC2 with Elastic Block Store. Amazon Web Services Developer Connection : Running MySQL on Amazon EC2 with Elastic Block Store (0 Replies)
Discussion started by: Neo

2. Virtualization and Cloud Computing

Securing code in Amazon EC2

Hi All, I am facing a problem, regarding code security on EC2. We have created an AMI which contains our code in it, and need to bind the code to the AMI so that no one can take the code out of the AMI. Are there some ways to achieve this ??? (2 Replies)
Discussion started by: akshay61286

3. Virtualization and Cloud Computing

anyone running SELinux on amazon EC2?

Hi, Has anyone enabled SELinux on Amazon EC2? I tried to enable SELinux using a CentOS image, and the steps in the following post, but it didn't work!! Amazon Web Services Developer Community : Has anyone successfully enabled SELinux ... The steps i took: 1)I started with CentOS 5.3 base... (5 Replies)
Discussion started by: fun_indra
S3LS(1p)						User Contributed Perl Documentation						  S3LS(1p)

NAME
       s3ls - List S3 buckets and bucket contents

SYNOPSIS
       s3ls [options]
       s3ls [options] [ [ bucket | bucket/item ] ...]

       Options:
         --access-key    AWS Access Key ID
         --secret-key    AWS Secret Access Key
         --long

       Environment:
         AWS_ACCESS_KEY_ID
         AWS_ACCESS_KEY_SECRET

OPTIONS
       --help  Print a brief help message and exit.

       --man   Print the manual page and exit.

       --verbose
               Output what is being done as it is done.

       --access-key and --secret-key
               Specify the "AWS Access Key Identifiers" for the AWS account. --access-key is the "Access Key ID", and --secret-key is
               the "Secret Access Key". These are effectively the "username" and "password" to the AWS account, and should be kept
               confidential. The access keys MUST be specified, either via these command line parameters, or via the AWS_ACCESS_KEY_ID
               and AWS_ACCESS_KEY_SECRET environment variables. Specifying them on the command line overrides the environment
               variables.

       --secure
               Use SSL/TLS HTTPS to communicate with the AWS service, instead of HTTP.

       --long  Output additional information about each bucket or item (see DESCRIPTION).

ENVIRONMENT VARIABLES
       AWS_ACCESS_KEY_ID and AWS_ACCESS_KEY_SECRET
               Specify the "AWS Access Key Identifiers" for the AWS account. AWS_ACCESS_KEY_ID contains the "Access Key ID", and
               AWS_ACCESS_KEY_SECRET contains the "Secret Access Key". These are effectively the "username" and "password" to the AWS
               service, and should be kept confidential. The access keys MUST be specified, either via these environment variables, or
               via the --access-key and --secret-key command line parameters. If the command line parameters are set, they override
               these environment variables.

CONFIGURATION FILE
       The configuration options will be read from the file "~/.s3-tools" if it exists. The format is the same as the command line
       options, with one option per line. For example, the file could contain:

           --access-key <AWS access key>
           --secret-key <AWS secret key>
           --secure

       This example configuration file specifies the AWS access keys and that a secure connection using HTTPS should be used for all
       communications.

DESCRIPTION
       Lists the buckets owned by the user, all the item keys in a given bucket, or the attributes associated with a given item.

       If no bucket or bucket/itemkey is specified on the command line, all the buckets owned by the user are listed. If the "--long"
       option is specified, the creation date of each bucket is also output.

       If a bucket name is specified on the command line, all the item keys in that bucket are listed. If the "--long" option is
       specified, the ID and display string of the item owner, the creation date, the MD5, and the size of the item are also output.

       If a bucket name and an item key, separated by a slash character, are specified on the command line, then the bucket name and
       the item key are output. This is useful to check that the item actually exists. If the "--long" option is specified, all the
       HTTP attributes of the item are also output. These will include Content-Length, Content-Type, ETag (which is the MD5 of the
       item contents), and Last-Modified. They may also include the HTTP attributes Content-Language, Expires, Cache-Control,
       Content-Disposition, and Content-Encoding, as well as any x-amz- metadata headers.

BUGS
       Report bugs to Mark Atwood mark@fallenpegasus.com.

       Occasionally the S3 service will randomly fail for no externally apparent reason. When that happens, this tool should retry,
       with a delay and a backoff.

       Access to the S3 service can be authenticated with an X.509 certificate, instead of via the "AWS Access Key Identifiers". This
       tool should support that.

       It might be useful to be able to specify the "AWS Access Key Identifiers" in the user's "~/.netrc" file. This tool should
       support that.

       Errors and warnings are very "Perl-ish", and can be confusing. Trying to access a bucket or item that does not exist or is not
       accessible by the user generates less than helpful error messages.

       This tool does not efficiently handle listing huge buckets, as it downloads and parses the entire bucket listing before it
       outputs anything. It also does not take advantage of the prefix, delimiter, and hierarchy features of the AWS S3 key listing
       API.

AUTHOR
       Written by Mark Atwood mark@fallenpegasus.com.

       Many thanks to Wotan LLC <http://wotanllc.com> for supporting the development of these S3 tools.

       Many thanks to the Amazon AWS engineers for developing S3.

SEE ALSO
       These tools use the Net::Amazon::S3 Perl module.

       The Amazon Simple Storage Service (S3) is documented at <http://aws.amazon.com/s3>.

perl v5.10.0                                                   2009-03-08                                                      S3LS(1p)
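The "~/.s3-tools" configuration format described above (one command-line-style option per line, with flags that either take a value or stand alone) is simple enough to sketch a parser for. This is an illustrative reimplementation in Python, not the s3ls tool's own parsing code; the function name is invented, and how the real tool resolves conflicts between the file and the command line may differ.

```python
def parse_s3_tools_config(text):
    # Parse the one-option-per-line format of ~/.s3-tools.
    # An option with a value ("--access-key <key>") maps to that value;
    # a bare flag ("--secure") maps to True.
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("--"):
            continue  # skip blank lines and anything unrecognized
        name, _, value = line.partition(" ")
        options[name] = value.strip() or True
    return options

config = parse_s3_tools_config(
    "--access-key AKEXAMPLE\n--secret-key s3cr3t\n--secure\n"
)
```

A tool consuming this dict would then apply the precedence rule stated in the man page: explicit command-line parameters override environment variables, and both override the file.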
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.