Amazon CloudFront / S3 Small Object Test Results
Posted by Neo - 08-26-2009, 05:32 AM
I've not posted for a while; I'm happy with CF/S3 performance - a happy camper, as they say.....

However, something has always "bugged me" about CF/S3: the need to build an origin-server-side sync method to update S3/CF whenever dynamic content is added on the origin server (or existing content expires in the CDN, since CloudFront pulls only from S3, etc.).

For example, if you have a site where users upload their avatars, you need to sync with S3, either in near real-time (best) or in batches, to keep S3 current. The same is true for any site with dynamic content. It is a lot of unnecessary extra work to initiate this on the origin-server side, when it could be initiated from the client-request side.
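(For the avatar case, here is a minimal sketch of the origin-side sync I mean - not any particular product's API. It assumes Python with the boto3 library, a hypothetical "example-avatars" bucket, and AWS credentials already configured in the environment; adapt to your own stack.)

# Minimal sketch: push a freshly uploaded avatar to S3 so the CDN origin stays current.
# Assumptions: boto3 installed, bucket "example-avatars" exists, credentials in the environment.
import boto3

s3 = boto3.client("s3")

def push_avatar_to_s3(local_path, user_id):
    """Copy a just-uploaded avatar from the web server's disk into S3."""
    key = f"avatars/{user_id}.png"              # hypothetical key layout
    s3.upload_file(local_path, "example-avatars", key)
    return key

# Usage: call right after the web application writes the upload to disk.
# push_avatar_to_s3("/var/www/uploads/1234.png", 1234)

The point is that every site with dynamic content ends up writing (and scheduling) something like this itself.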

In fact, I just noticed that SimpleCDN offers this capability; it came up during a discussion at the link below, titled "/clientscripts over to Amazon S3/Cloudfront?":

/clientscripts over to Amazon S3/Cloudfront? - vBulletin.org Forum

And in particular, this reply to someone who introduced the SimpleCDN feature in the discussion thread:

vBulletin.org Forum - View Single Post - /clientscripts over to Amazon S3/Cloudfront?

I have been informed that SimpleCDN does have a feature where, if a file is required from the CDN and it is not yet uploaded, SimpleCDN will pull it from the origin server. This is a very good feature and we are considering trying it now because of this. (!!)

Also, I note that S3/CF do not offer this functionality at present, and it is not currently on the AWS roadmap.

I think this feature is critical, especially the more I think about it. In fact, it is a "game changer": folks can move to the CDN and then populate it (effortlessly) on demand by having the CDN (or S3, in the AWS case) pull from the origin server. Dynamic changes to any site then mean almost "zero configuration" and no required sync/upload to the CDN... and yes, this is goodness for all!

As a fan of Amazon CF/S3, I would like to see this feature soon. More than likely, we will move some user-dynamic content over to SimpleCDN because of this feature, which, in effect, works like "cache miss, pull from origin server". Very user friendly... and it also aligns with "the spirit of the Internet" (like DNS, etc., where content is cached, pulled from the origin when not available, subject to cache timeouts, and so on). Having S3 as "the final frontier" is sub-optimal.

Right now, the flow is: (1) try CF; if CF does not have the file, (2) pull from S3; if S3 has no file, return "file not found" (a 404, I guess...). Better would be:

(1) Try CF; if CF does not have the file, (2) pull from S3; if S3 has no file, (3) try the origin server; if the origin server has no file, then return "file not found".

Or, if you really want to serve AWS customers, offer this option..... :-)

(1) Try CF; if CF does not have the file, (2) pull from the origin server; if the origin server has no file, then return "file not found" (or try S3... :-)
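(For illustration, here is a minimal sketch of that "cache miss, pull from origin" idea, emulated today on the application side since the CDN won't do it for us. The bucket name, origin URL, and helper function are hypothetical; it assumes Python with boto3 and plain HTTP access to the origin.)

# Minimal sketch: if an object is missing from S3, pull it from the origin server
# and store it, so the CDN can then serve it. Bucket and origin are hypothetical.
import urllib.request
import urllib.error
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-bucket"
ORIGIN = "https://www.example.org"

def ensure_in_s3(key):
    """Return True once the object exists in S3, pulling it from the origin on a miss."""
    try:
        s3.head_object(Bucket=BUCKET, Key=key)    # (2) already in S3?
        return True
    except ClientError:
        pass                                       # miss: fall through to the origin
    try:
        with urllib.request.urlopen(f"{ORIGIN}/{key}") as resp:   # (3) try origin
            s3.put_object(Bucket=BUCKET, Key=key, Body=resp.read())
        return True
    except urllib.error.HTTPError:
        return False                               # origin has no file -> "file not found"

The whole point is that the CDN/storage layer could do this check-and-pull itself; right now it has to live in our own code or cron jobs.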

I assume that AWS wants to require people to store in S3 and will not permit CF to pull from the origin server..... OK, business is business, but please permit, at least, S3 to pull from the origin server.

I will update some stats from our SimpleCDN tests soon.
 
