Custom ACL for S3 buckets and keys.
Posted by linuxpenguin, 12-23-2009

Recently I realized that s3cmd (the Ubuntu package) does not let you set custom ACLs on S3 objects, so I wrote the following Ruby script and thought I would share it with you all. Using S3Fox, S3Hub, etc. was really painful.

Code:
#!/usr/bin/env ruby
require 'rubygems'
require 'aws/s3'

key = ARGV[0] ## the key you want to add the ACL to, e.g. foo.txt
bucket = "<your_bucket_name>"
user_id = "<canonical_user_id_of_the_grantee>" ## It is a little tricky to get this one (see the helper sketch below).
name = "<display_name_of_the_user>"
perms = "<permission you want in the ACL>" ## e.g. READ

AWS::S3::Base.establish_connection!(
  :access_key_id     => "<key_id>",
  :secret_access_key => "<access_key>"
)

## Fetch the object's current ACL policy, append a new grant, and write it back.
policy = AWS::S3::S3Object.acl( key, bucket )

grantee = AWS::S3::ACL::Grantee.new
grantee.type = "CanonicalUser"
grantee.id = user_id
grantee.display_name = name

grant = AWS::S3::ACL::Grant.new
grant.grantee = grantee
grant.permission = perms

policy.grants << grant
AWS::S3::S3Object.acl( key, bucket, policy )

## And that's it. Run s3cmd info s3://<BUCKET>/<key> to see the new ACL.
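
If you are wondering how to find the canonical user ID (the "tricky" part above), one approach with the same aws/s3 gem is to fetch the ACL of an object the target user already owns and print its grantees; their FULL_CONTROL grant carries the id/display_name pair. This is only a rough sketch, and it assumes the grantee attributes used as writers in the script above are also readable on a fetched policy; the bucket, key and credentials are placeholders.

Code:
#!/usr/bin/env ruby
## Hypothetical helper: list the grantees on an existing object so the
## CanonicalUser id and display_name can be copied into the script above.
require 'rubygems'
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => "<key_id>",
  :secret_access_key => "<access_key>"
)

policy = AWS::S3::S3Object.acl( "<some_key_the_user_owns>", "<their_bucket>" )
policy.grants.each do |grant|
  g = grant.grantee
  puts "#{g.type}  #{g.id}  #{g.display_name}"
end

Once the placeholders are filled in, the main script above is run with the object key as its only argument, e.g. ruby set_acl.rb foo.txt (set_acl.rb being whatever name you saved it under).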


Last edited by Scott; 12-23-2009 at 11:08 PM.. Reason: sharing code tags :)
 

S3MKBUCKET(1p)                User Contributed Perl Documentation                S3MKBUCKET(1p)

NAME
       s3mkbucket - Create Amazon AWS S3 buckets

SYNOPSIS
       s3mkbucket [options] [bucket ...]

       Options:
         --access-key    AWS Access Key ID
         --secret-key    AWS Secret Access Key
         --acl-short     private|public-read|public-read-write|authenticated-read

       Environment:
         AWS_ACCESS_KEY_ID
         AWS_ACCESS_KEY_SECRET

OPTIONS
       --help
           Prints a brief help message and exits.

       --man
           Prints the manual page and exits.

       --verbose
           Prints a message for each created bucket.

       --access-key and --secret-key
           Specify the "AWS Access Key Identifiers" for the AWS account. --access-key is the "Access Key ID", and --secret-key is the "Secret Access Key". These are effectively the "username" and "password" to the AWS account, and should be kept confidential. The access keys MUST be specified, either via these command line parameters, or via the AWS_ACCESS_KEY_ID and AWS_ACCESS_KEY_SECRET environment variables. Specifying them on the command line overrides the environment variables.

       --secure
           Uses SSL/TLS HTTPS to communicate with the AWS service, instead of HTTP.

       --acl-short
           Apply a "canned ACL" to the bucket when it is created. To set a more complex ACL, use the "s3acl" tool after the bucket is created. The following canned ACLs are currently defined by S3:

           private
               Owner gets "FULL_CONTROL". No one else has any access rights. This is the default.

           public-read
               Owner gets "FULL_CONTROL". The anonymous principal is granted "READ" access.

           public-read-write
               Owner gets "FULL_CONTROL". The anonymous principal is granted "READ" and "WRITE" access. This is a useful policy to apply to a bucket if you intend for any anonymous user to PUT objects into the bucket.

           authenticated-read
               Owner gets "FULL_CONTROL". Any principal authenticated as a registered Amazon S3 user is granted "READ" access.

       bucket
           One or more bucket names. As many as possible will be created. A user may have no more than 100 buckets. Bucket names must be between 3 and 255 characters long, and can only contain alphanumeric characters, underscore, period, and dash. Bucket names are case sensitive. Buckets with names containing uppercase characters or underscores are not accessible using the virtual hosting method. Buckets are unique in a global namespace: if someone has created a bucket with a given name, someone else cannot create another bucket with the same name. If a bucket name begins with one or more dashes, it might be mistaken for a command line option. If this is the case, separate the command line options from the bucket names with two dashes, like so:

               s3mkbucket --verbose -- --bucketname

ENVIRONMENT VARIABLES
       AWS_ACCESS_KEY_ID and AWS_ACCESS_KEY_SECRET
           Specify the "AWS Access Key Identifiers" for the AWS account. AWS_ACCESS_KEY_ID contains the "Access Key ID", and AWS_ACCESS_KEY_SECRET contains the "Secret Access Key". These are effectively the "username" and "password" to the AWS service, and should be kept confidential. The access keys MUST be specified, either via these environment variables, or via the --access-key and --secret-key command line parameters. If the command line parameters are set, they override these environment variables.

CONFIGURATION FILE
       The configuration options will be read from the file "~/.s3-tools" if it exists. The format is the same as the command line options, with one option per line. For example, the file could contain:

           --access-key <AWS access key>
           --secret-key <AWS secret key>
           --secure

       This example configuration file specifies the AWS access keys and that a secure connection using HTTPS should be used for all communications.

DESCRIPTION
       Create buckets in the Amazon Simple Storage Service (S3).

BUGS
       Report bugs to Mark Atwood mark@fallenpegasus.com.

       Making a bucket that already exists and is owned by the user does not fail. It is unclear whether this is a bug or not.

       Occasionally the S3 service will randomly fail for no externally apparent reason. When that happens, this tool should retry, with a delay and a backoff.

       Access to the S3 service can be authenticated with an X.509 certificate, instead of via the "AWS Access Key Identifiers". This tool should support that.

       It might be useful to be able to specify the "AWS Access Key Identifiers" in the user's "~/.netrc" file. This tool should support that.

       Errors and warnings are very "Perl-ish", and can be confusing.

AUTHOR
       Written by Mark Atwood mark@fallenpegasus.com.

       Many thanks to Wotan LLC <http://wotanllc.com>, for supporting the development of these S3 tools.

       Many thanks to the Amazon AWS engineers for developing S3.

SEE ALSO
       These tools use the Net::Amazon::S3 Perl module.

       The Amazon Simple Storage Service (S3) is documented at <http://aws.amazon.com/s3>.

perl v5.10.0                          2009-03-08                          S3MKBUCKET(1p)
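
For reference, a hypothetical invocation combining the options documented above (the bucket name is a placeholder, and the keys are taken from the environment variables described in ENVIRONMENT VARIABLES):

Code:
# Assumed usage based on the OPTIONS section: create a publicly readable
# bucket over HTTPS, with the access keys read from the environment.
export AWS_ACCESS_KEY_ID=<key_id>
export AWS_ACCESS_KEY_SECRET=<access_key>
s3mkbucket --verbose --secure --acl-short public-read my-example-bucket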