Amazon Simple Storage Service

This guide focuses on the NIFTY Cloud SDK for PHP client for Amazon Simple Storage Service. It assumes that you have already downloaded and installed the NIFTY Cloud SDK for PHP. See Installation for more information on getting started.

Available operations

For detailed information about all of the available methods and their inputs and outputs, see the NIFTY Cloud API Reference or the Amazon Simple Storage Service Client API reference.

AbortMultipartUpload         CompleteMultipartUpload
CopyObject                   CreateBucket
CreateMultipartUpload        DeleteBucket
DeleteBucketCors             DeleteBucketLifecycle
DeleteBucketPolicy           DeleteBucketTagging
DeleteBucketWebsite          DeleteObject
DeleteObjects                GetBucketAcl
GetBucketCors                GetBucketLifecycle
GetBucketLocation            GetBucketLogging
GetBucketNotification        GetBucketPolicy
GetBucketRequestPayment      GetBucketTagging
GetBucketVersioning          GetBucketWebsite
GetObject                    GetObjectAcl
GetObjectTorrent             HeadBucket
HeadObject                   ListBuckets
ListMultipartUploads         ListObjectVersions
ListObjects                  ListParts
PutBucketAcl                 PutBucketCors
PutBucketLifecycle           PutBucketLogging
PutBucketNotification        PutBucketPolicy
PutBucketRequestPayment      PutBucketTagging
PutBucketVersioning          PutBucketWebsite
PutObject                    PutObjectAcl
RestoreObject                UploadPart
UploadPartCopy

Creating a client

First, you must create a client object using one of the following techniques.

Factory method

The easiest way to get up and running quickly is to use the Aws\S3\S3Client::factory() method and provide your credentials (key and secret).

use Aws\S3\S3Client;

$client = S3Client::factory(array(
    'key'    => '<aws access key>',
    'secret' => '<aws secret key>'
));

You can set your access keys as shown in the preceding example. Alternatively, you can omit these settings if you are using AWS Identity and Access Management (IAM) roles for EC2 instances, or if your credentials are available from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
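
If you rely on one of those credential sources, a minimal sketch looks like the following (it assumes the environment variables or an instance profile are already configured):

use Aws\S3\S3Client;

// No key/secret supplied; credentials are resolved from the
// environment variables or the IAM instance profile
$client = S3Client::factory();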

Service locator

A more robust way to connect to Amazon Simple Storage Service is through the service locator. This allows you to specify credentials and other configuration settings in a configuration file. These settings are shared across all clients, so you only have to specify them once.

use Aws\Common\Aws;

// Create a service builder using a configuration file
$aws = Aws::factory('/path/to/my_config.json');

// Get the client from the builder by namespace
$client = $aws->get('S3');

Creating a bucket

Now that we've created a client object, let's create a bucket. This bucket will be used throughout the remainder of this guide.

$client->createBucket(array('Bucket' => 'mybucket'));

If you run the above code example unaltered, you'll probably trigger the following exception:

PHP Fatal error:  Uncaught Aws\S3\Exception\BucketAlreadyExistsException: AWS Error
Code: BucketAlreadyExists, Status Code: 409, AWS Request ID: D94E6394791E98A4,
AWS Error Type: client, AWS Error Message: The requested bucket name is not
available. The bucket namespace is shared by all users of the system. Please select
a different name and try again.

This is because bucket names in Amazon S3 reside in a global namespace. You'll need to change the actual name of the bucket used in the examples of this tutorial in order for them to work correctly.

Creating a bucket in another region

The above example creates a bucket in the standard US-EAST-1 region. You can change the bucket location by passing a LocationConstraint value.

// Create a valid bucket and use a LocationConstraint
$result = $client->createBucket(array(
    'Bucket'             => $bucket,
    'LocationConstraint' => \Aws\Common\Enum\Region::US_WEST_2
));

// Get the Location header of the response
echo $result['Location'] . "\n";

// Get the request ID
echo $result['RequestId'] . "\n";

You'll notice in the above example that we are using the Aws\Common\Enum\Region object to provide the US_WEST_2 constant. The SDK provides various Enum classes under the Aws\Common\Enum namespace that can be useful for remembering available values and ensuring you do not enter a typo.

Note

Using the enum classes is not required. You could just pass 'us-west-2' in the LocationConstraint key.
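
For example, the following is equivalent to the enum-based example above:

// Pass the region as a plain string instead of using the enum
$result = $client->createBucket(array(
    'Bucket'             => $bucket,
    'LocationConstraint' => 'us-west-2'
));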

Waiting until the bucket exists

Now that we've created a bucket, let's force our application to wait until the bucket exists. This can be done easily using a waiter. The following snippet of code will poll the bucket until it exists or the maximum number of polling attempts is reached.

// Poll the bucket until it is accessible
$client->waitUntilBucketExists(array('Bucket' => $bucket));

Uploading objects

Now that you've created a bucket, let's put some data in it. The following example creates an object in your bucket called data.txt that contains 'Hello!'.

// Upload an object to Amazon S3
$result = $client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
    'Body'   => 'Hello!'
));

// Access parts of the result object
echo $result['Expiration'] . "\n";
echo $result['ServerSideEncryption'] . "\n";
echo $result['ETag'] . "\n";
echo $result['VersionId'] . "\n";
echo $result['RequestId'] . "\n";

// Get the URL the object can be downloaded from
echo $result['ObjectURL'] . "\n";

The AWS SDK for PHP will attempt to automatically determine the most appropriate Content-Type header used to store the object. If you are using a less common file extension and your Content-Type header is not added automatically, you can add a Content-Type header by passing a ContentType option to the operation.
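
For example, a minimal sketch that sets the header explicitly (the key name and Content-Type value are only illustrative):

// Explicitly set the Content-Type header stored with the object
$client->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => 'data.csv',
    'Body'        => "a,b,c\n",
    'ContentType' => 'text/csv'
));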

Uploading a file

The above example uploaded text data to your object. You can alternatively upload the contents of a file by passing the SourceFile option. Let's also put some metadata on the object.

// Upload an object by streaming the contents of a file
// $pathToFile should be absolute path to a file on disk
$result = $client->putObject(array(
    'Bucket'     => $bucket,
    'Key'        => 'data_from_file.txt',
    'SourceFile' => $pathToFile,
    'Metadata'   => array(
        'Foo' => 'abc',
        'Baz' => '123'
    )
));

// We can poll the object until it is accessible
$client->waitUntilObjectExists(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_file.txt'
));

Uploading from a stream

Alternatively, you can pass a resource returned from an fopen call to the Body parameter.

// Upload an object by streaming the contents of a PHP stream.
// Note: You must supply a "ContentLength" parameter to an
// operation if the stream does not respond to fstat() or if the
// fstat() of the stream does not provide a valid 'size' attribute.
// For example, the "http" stream wrapper will require a ContentLength
// parameter because it does not respond to fstat().
$client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_stream.txt',
    'Body'   => fopen($pathToFile, 'r+')
));

Because the AWS SDK for PHP is built around Guzzle, you can also pass an EntityBody object.

// Be sure to add a use statement at the beginning of your script:
// use Guzzle\Http\EntityBody;

// Upload an object by streaming the contents of an EntityBody object
$client->putObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data_from_entity_body.txt',
    'Body'   => EntityBody::factory(fopen($pathToFile, 'r+'))
));

Listing your buckets

You can list all of the buckets owned by your account using the listBuckets method.

$result = $client->listBuckets();

foreach ($result['Buckets'] as $bucket) {
    // Each Bucket value will contain a Name and CreationDate
    echo "{$bucket['Name']} - {$bucket['CreationDate']}\n";
}

All service operation calls using the AWS SDK for PHP return a Guzzle\Service\Resource\Model object. This object contains all of the data returned from the service in a normalized, array-like object. The object also contains a get() method used to retrieve values from the model by name, and a getPath() method that can be used to retrieve nested values.

// Grab the nested Owner/ID value from the result model using getPath()
$result = $client->listBuckets();
echo $result->getPath('Owner/ID') . "\n";

Listing objects in your buckets

Listing objects is a lot easier in the new SDK thanks to iterators. You can list all of the objects in a bucket using the ListObjectsIterator.

$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket
));

foreach ($iterator as $object) {
    echo $object['Key'] . "\n";
}

Iterators will handle sending any required subsequent requests when a response is truncated. The ListObjects iterator works with other parameters too.

$iterator = $client->getIterator('ListObjects', array(
    'Bucket' => $bucket,
    'Prefix' => 'foo'
));

foreach ($iterator as $object) {
    echo $object['Key'] . "\n";
}

You can convert any iterator to an array using the toArray() method of the iterator.

Note

Converting an iterator to an array will load the entire contents of the iterator into memory.
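
For example, a short sketch that materializes a listing, keeping the memory caveat above in mind:

$iterator = $client->getIterator('ListObjects', array('Bucket' => $bucket));

// Load every object description into memory at once
$objects = $iterator->toArray();
echo count($objects) . " objects in the bucket\n";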

Downloading objects

You can use the GetObject operation to download an object.

// Get an object using the getObject operation
$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt'
));

// The 'Body' value of the result is an EntityBody object
echo get_class($result['Body']) . "\n";
// > Guzzle\Http\EntityBody

// The 'Body' value can be cast to a string
echo $result['Body'] . "\n";
// > Hello!

The contents of the object are stored in the Body parameter of the model object. Other parameters are stored in the model, including ContentType, ContentLength, VersionId, ETag, etc.

The Body parameter stores a reference to a Guzzle\Http\EntityBody object. The SDK will store the data in a temporary PHP stream by default. This will work for most use-cases and will automatically protect your application from attempting to download extremely large files into memory.

The EntityBody object has other nice features that allow you to read data using streams.

// Seek to the beginning of the stream
$result['Body']->rewind();

// Read the body off of the underlying stream in chunks
while ($data = $result['Body']->read(1024)) {
    echo $data;
}

// Cast the body to a primitive string
// Warning: This loads the entire contents into memory!
$bodyAsString = (string) $result['Body'];

Saving objects to a file

You can save the contents of an object to a file by setting the SaveAs parameter.

$result = $client->getObject(array(
    'Bucket' => $bucket,
    'Key'    => 'data.txt',
    'SaveAs' => '/tmp/data.txt'
));

// Contains an EntityBody that wraps a file resource of /tmp/data.txt
echo $result['Body']->getUri() . "\n";
// > /tmp/data.txt

Uploading large files using multipart uploads

Amazon S3 allows you to upload large files in pieces. The AWS SDK for PHP provides an abstraction layer that makes it easier to upload large files using multipart upload.

use Aws\Common\Enum\Size;
use Aws\Common\Exception\MultipartUploadException;
use Aws\S3\Model\MultipartUpload\UploadBuilder;

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.mov')
    ->setBucket('mybucket')
    ->setKey('my-object-key')
    ->setOption('Metadata', array('Foo' => 'Bar'))
    ->setOption('CacheControl', 'max-age=3600')
    ->build();

// Perform the upload. Abort the upload if something goes wrong
try {
    $uploader->upload();
    echo "Upload complete.\n";
} catch (MultipartUploadException $e) {
    $uploader->abort();
    echo "Upload failed.\n";
}

You can attempt to upload parts in parallel by specifying the concurrency option on the UploadBuilder object. The following example will create a transfer object that will attempt to upload three parts in parallel until the entire object has been uploaded.

$uploader = UploadBuilder::newInstance()
    ->setClient($client)
    ->setSource('/path/to/large/file.mov')
    ->setBucket('mybucket')
    ->setKey('my-object-key')
    ->setConcurrency(3)
    ->build();

You can use the Aws\S3\S3Client::upload() method if you just want to upload files and not worry if they are too large to send in a single PutObject operation or require a multipart upload.

$client->upload('bucket', 'key', 'object body', 'public-read');

Setting ACLs and Access Control Policies

You can specify a canned ACL on an object when uploading:

$client->putObject(array(
    'Bucket'     => 'mybucket',
    'Key'        => 'data.txt',
    'SourceFile' => '/path/to/data.txt',
    'ACL'        => 'public-read'
));

You can use the Aws\S3\Enum\CannedAcl object to provide canned ACL constants:

use Aws\S3\Enum\CannedAcl;

$client->putObject(array(
    'Bucket'     => 'mybucket',
    'Key'        => 'data.txt',
    'SourceFile' => '/path/to/data.txt',
    'ACL'        => CannedAcl::PUBLIC_READ
));

You can specify more complex ACLs using the ACP parameter when sending PutObject, CopyObject, CreateBucket, CreateMultipartUpload, PutBucketAcl, PutObjectAcl, and other operations that accept a canned ACL. Using the ACP parameter allows you to specify more granular access control policies using an Aws\S3\Model\Acp object. The easiest way to create an Acp object is through the Aws\S3\Model\AcpBuilder.

use Aws\S3\Enum\Permission;
use Aws\S3\Enum\Group;
use Aws\S3\Model\AcpBuilder;

$acp = AcpBuilder::newInstance()
    ->setOwner($myOwnerId)
    ->addGrantForEmail(Permission::READ, 'test@example.com')
    ->addGrantForUser(Permission::FULL_CONTROL, 'user-id')
    ->addGrantForGroup(Permission::READ, Group::AUTHENTICATED_USERS)
    ->build();

$client->putObject(array(
    'Bucket'     => 'mybucket',
    'Key'        => 'data.txt',
    'SourceFile' => '/path/to/data.txt',
    'ACP'        => $acp
));

Creating a pre-signed URL

You can authenticate certain types of requests by passing the required information as query-string parameters instead of using the Authorization HTTP header. This is useful for enabling direct third-party browser access to your private Amazon S3 data, without proxying the request. The idea is to construct a "pre-signed" request and encode it as a URL that an end-user's browser can retrieve. Additionally, you can limit a pre-signed request by specifying an expiration time.

The most common scenario is creating a pre-signed URL to GET an object. The easiest way to do this is to use the getObjectUrl method of the Amazon S3 client. This same method can also be used to get an unsigned URL of a public S3 object.

// Get a plain URL for an Amazon S3 object
$plainUrl = $client->getObjectUrl($bucket, 'data.txt');
// > https://my-bucket.s3.amazonaws.com/data.txt

// Get a pre-signed URL for an Amazon S3 object
$signedUrl = $client->getObjectUrl($bucket, 'data.txt', '+10 minutes');
// > https://my-bucket.s3.amazonaws.com/data.txt?AWSAccessKeyId=[...]&Expires=[...]&Signature=[...]

// Create a vanilla Guzzle HTTP client for accessing the URLs
$http = new \Guzzle\Http\Client;

// Try to get the plain URL. This should result in a 403 since the object is private
try {
    $response = $http->get($plainUrl)->send();
} catch (\Guzzle\Http\Exception\ClientErrorResponseException $e) {
    $response = $e->getResponse();
}
echo $response->getStatusCode();
// > 403

// Get the contents of the object using the pre-signed URL
$response = $http->get($signedUrl)->send();
echo $response->getBody();
// > Hello!

You can also create pre-signed URLs for any Amazon S3 operation using the getCommand method for creating a Guzzle command object and then calling the createPresignedUrl() method on the command.

// Get a command object from the client and pass in any options
// available in the GetObject command (e.g. ResponseContentDisposition)
$command = $client->getCommand('GetObject', array(
    'Bucket' => $bucket,
    'Key' => 'data.txt',
    'ResponseContentDisposition' => 'attachment; filename="data.txt"'
));

// Create a signed URL from the command object that will last for
// 10 minutes from the current time
$signedUrl = $command->createPresignedUrl('+10 minutes');

echo file_get_contents($signedUrl);
// > Hello!

If you need more flexibility in creating your pre-signed URL, then you can create a pre-signed URL for a completely custom Guzzle\Http\Message\RequestInterface object. You can use the get(), post(), head(), put(), and delete() methods of a client object to easily create a Guzzle request object.

$key = 'data.txt';
$url = "{$bucket}/{$key}";

// get() returns a Guzzle\Http\Message\Request object
$request = $client->get($url);

// Create a signed URL from a completely custom HTTP request that
// will last for 10 minutes from the current time
$signedUrl = $client->createPresignedUrl($request, '+10 minutes');

echo file_get_contents($signedUrl);
// > Hello!

Amazon S3 stream wrapper

The Amazon S3 stream wrapper allows you to store and retrieve data from Amazon S3 using built-in PHP functions like file_get_contents, fopen, copy, rename, unlink, mkdir, rmdir, etc.

You need to register the Amazon S3 stream wrapper in order to use it:

// Register the stream wrapper from an S3Client object
$client->registerStreamWrapper();

This allows you to access buckets and objects stored in Amazon S3 using the s3:// protocol. The "s3" stream wrapper accepts strings that contain a bucket name followed by a forward slash and an optional object key or prefix: s3://<bucket>[/<key-or-prefix>].

Downloading data

You can grab the contents of an object using file_get_contents. Be careful with this function though; it loads the entire contents of the object into memory.

// Download the body of the "key" object in the "bucket" bucket
$data = file_get_contents('s3://bucket/key');

Use fopen() when working with larger files or if you need to stream data from Amazon S3.

// Open a stream in read-only mode
if ($stream = fopen('s3://bucket/key', 'r')) {
    // While the stream is still open
    while (!feof($stream)) {
        // Read 1024 bytes from the stream
        echo fread($stream, 1024);
    }
    // Be sure to close the stream resource when you're done with it
    fclose($stream);
}

Opening Seekable streams

Streams opened in "r" mode only allow data to be read from the stream, and are not seekable by default. This is so that data can be downloaded from Amazon S3 in a truly streaming manner where previously read bytes do not need to be buffered into memory. If you need a stream to be seekable, you can pass seekable into the stream context options of a function.

$context = stream_context_create(array(
    's3' => array(
        'seekable' => true
    )
));

if ($stream = fopen('s3://bucket/key', 'r', false, $context)) {
    // Read bytes from the stream
    fread($stream, 1024);
    // Seek back to the beginning of the stream
    fseek($stream, 0);
    // Read the same bytes that were previously read
    fread($stream, 1024);
    fclose($stream);
}

Opening seekable streams allows you to seek only to bytes that were previously read. You cannot skip ahead to bytes that have not yet been read from the remote server. In order to allow previously read data to be recalled, data is buffered in a PHP temp stream using Guzzle's CachingEntityBody decorator. When the amount of cached data exceeds 2 MB, the data in the temp stream will transfer from memory to disk. Keep this in mind when downloading large files from Amazon S3 using the seekable stream context setting.

Uploading data

Data can be uploaded to Amazon S3 using file_put_contents().

file_put_contents('s3://bucket/key', 'Hello!');

You can upload larger files by streaming data using fopen() and a "w", "x", or "a" stream access mode. The Amazon S3 stream wrapper does not support simultaneous read and write streams (e.g. "r+", "w+", etc). This is because the HTTP protocol does not allow simultaneous reading and writing.

$stream = fopen('s3://bucket/key', 'w');
fwrite($stream, 'Hello!');
fclose($stream);

Note

Because Amazon S3 requires a Content-Length header to be specified before the payload of a request is sent, the data to be uploaded in a PutObject operation is internally buffered using a PHP temp stream until the stream is flushed or closed.

fopen modes

PHP's fopen() function requires that a $mode option is specified. The mode option specifies whether or not data can be read or written to a stream and if the file must exist when opening a stream. The Amazon S3 stream wrapper supports the following modes:

r A read only stream where the file must already exist.
w A write only stream. If the file already exists it will be overwritten.
a A write only stream. If the file already exists, it will be downloaded to a temporary stream and any writes to the stream will be appended to any previously uploaded data.
x A write only stream. An error is raised if the file already exists.
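
As a short sketch of the append behavior described above (the bucket and key are placeholders):

// Append to an existing object; its current contents are first
// downloaded to a temporary stream, new writes are appended, and
// the combined data is uploaded when the stream is closed
$stream = fopen('s3://bucket/key', 'a');
fwrite($stream, "appended data\n");
fclose($stream);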

Other object functions

Stream wrappers allow many different built-in PHP functions to work with a custom system like Amazon S3. Here are some of the functions that the Amazon S3 stream wrapper allows you to perform with objects stored in Amazon S3.

unlink()

Delete an object from a bucket.

// Delete an object from a bucket
unlink('s3://bucket/key');

You can pass in any options available to the DeleteObject operation to modify how the object is deleted (e.g. specifying a specific object version).

// Delete a specific version of an object from a bucket
unlink('s3://bucket/key', stream_context_create(array(
    's3' => array('VersionId' => '123')
)));

filesize()

Get the size of an object.

// Get the Content-Length of an object
$size = filesize('s3://bucket/key');

is_file()

Checks if a URL is a file.

if (is_file('s3://bucket/key')) {
    echo 'It is a file!';
}

file_exists()

Checks if an object exists.

if (file_exists('s3://bucket/key')) {
    echo 'It exists!';
}

filetype()

Checks if a URL maps to a file or bucket (dir).

file()

Load the contents of an object into an array of lines. You can pass in any options available to the GetObject operation to modify how the file is downloaded.

filemtime()

Get the last modified date of an object.

rename()

Rename an object by copying the object then deleting the original. You can pass in options available to the CopyObject and DeleteObject operations to the stream context parameters to modify how the object is copied and deleted.
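
A combined sketch of these four functions; the bucket and key names are placeholders:

// "dir" for a bucket or key prefix, "file" for an object
echo filetype('s3://bucket') . "\n";
echo filetype('s3://bucket/key') . "\n";

// Load an object into an array of lines
$lines = file('s3://bucket/key');

// Get the last modified date of an object as a Unix timestamp
$mtime = filemtime('s3://bucket/key');

// Rename an object (performed as a copy followed by a delete)
rename('s3://bucket/key', 's3://bucket/new_key');
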
copy()

Copy an object from one location to another. You can pass options available to the CopyObject operation into the stream context options to modify how the object is copied.

// Copy a file on Amazon S3 to another bucket
copy('s3://bucket/key', 's3://other_bucket/key');

Working with buckets

You can modify and browse Amazon S3 buckets similar to how PHP allows the modification and traversal of directories on your filesystem.

Here's an example of creating a bucket:

mkdir('s3://bucket');

You can pass in stream context options to the mkdir() method to modify how the bucket is created using the parameters available to the CreateBucket operation.

// Create a bucket in the EU region
mkdir('s3://bucket', stream_context_create(array(
    's3' => array(
        'LocationConstraint' => 'eu-west-1'
    )
)));

You can delete buckets using the rmdir() function.

// Delete a bucket
rmdir('s3://bucket');

Note

A bucket can only be deleted if it is empty.

Listing the contents of a bucket

The opendir(), readdir(), rewinddir(), and closedir() PHP functions can be used with the Amazon S3 stream wrapper to traverse the contents of a bucket. You can pass in parameters available to the ListObjects operation as custom stream context options to the opendir() function to modify how objects are listed.

$dir = "s3://bucket/";

if (is_dir($dir) && ($dh = opendir($dir))) {
    while (($file = readdir($dh)) !== false) {
        echo "filename: {$file} : filetype: " . filetype($dir . $file) . "\n";
    }
    closedir($dh);
}

You can recursively list each object and prefix in a bucket using PHP's RecursiveDirectoryIterator.

$dir = 's3://bucket';
$iterator = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));

foreach ($iterator as $file) {
    echo $file->getType() . ': ' . $file . "\n";
}

Another easy way to list the contents of the bucket is to use the Symfony2 Finder component.

<?php

require 'vendor/autoload.php';

use Symfony\Component\Finder\Finder;

$aws = Aws\Common\Aws::factory('/path/to/config.json');
$s3 = $aws->get('s3')->registerStreamWrapper();

$finder = new Finder();

// Get all files and folders (key prefixes) from "bucket" that are less than 100k
// and have been updated in the last year
$finder->in('s3://bucket')
    ->size('< 100K')
    ->date('since 1 year ago');

foreach ($finder as $file) {
    echo $file->getType() . ": {$file}\n";
}

Syncing data with Amazon S3

Uploading a directory to a bucket

Uploading a local directory to an Amazon S3 bucket is rather simple:

$client->uploadDirectory('/local/directory', 'my-bucket');

The uploadDirectory() method of a client will compare the contents of the local directory to the contents in the Amazon S3 bucket and only transfer files that have changed. While iterating over the keys in the bucket and comparing them against the names of local files using a customizable filename-to-key converter, the changed files are added to an in-memory queue and uploaded concurrently. When the size of a file exceeds a customizable multipart_upload_size parameter, the uploader will automatically upload the file using a multipart upload.
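
For example, a minimal sketch that raises the multipart threshold; the multipart_upload_size option corresponds to the parameter mentioned above, and the 16 MB value is only illustrative:

use Aws\Common\Enum\Size;

// Use a multipart upload only for files larger than 16 MB
$client->uploadDirectory('/local/directory', 'my-bucket', null, array(
    'multipart_upload_size' => 16 * Size::MB
));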

Customizing the upload sync

The method signature of the uploadDirectory() method allows for the following arguments:

public function uploadDirectory($directory, $bucket, $keyPrefix = null, array $options = array())

By specifying $keyPrefix, you can cause the uploaded objects to be placed under a virtual folder in the Amazon S3 bucket. For example, if the $bucket name is my-bucket and the $keyPrefix is 'testing/', then your files will be uploaded to my-bucket under the testing/ virtual folder: https://my-bucket.s3.amazonaws.com/testing/filename.txt
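
For example:

// Upload the local directory into my-bucket under the "testing/" virtual folder
$client->uploadDirectory('/local/directory', 'my-bucket', 'testing/');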

The uploadDirectory() method also accepts an optional associative array of $options that can be used to further control the transfer.

params Array of parameters to use with each PutObject or CreateMultipartUpload operation performed during the transfer. For example, you can specify an ACL key to change the ACL of each uploaded object. See PutObject for a list of available options.
base_dir Base directory to remove from each object key. By default, the $directory passed into the uploadDirectory() method will be removed from each object key.
force Set to true to upload every file, even if the file is already in Amazon S3 and has not changed.
concurrency Maximum number of parallel uploads (defaults to 5)
debug Set to true to enable debug mode to print information about each upload. Setting this value to an fopen resource will write the debug output to a stream rather than to STDOUT.

In the following example, a local directory is uploaded with each object stored in the bucket using a public-read ACL, 20 requests are sent in parallel, and debug information is printed to standard output as each request is transferred.

$dir = '/local/directory';
$bucket = 'my-bucket';
$keyPrefix = '';

$client->uploadDirectory($dir, $bucket, $keyPrefix, array(
    'params'      => array('ACL' => 'public-read'),
    'concurrency' => 20,
    'debug'       => true
));

More control with the UploadSyncBuilder

The uploadDirectory() method is an abstraction layer over the much more powerful Aws\S3\Sync\UploadSyncBuilder. You can use an UploadSyncBuilder object directly if you need more control over the transfer. Using an UploadSyncBuilder allows for the following advanced features:

  • Can upload only files that match a glob expression
  • Can upload only files that match a regular expression
  • Can specify a custom \Iterator object to use to yield files to an UploadSync object. This can be used, for example, to filter out which files are transferred even further using something like the Symfony 2 Finder component.
  • Can specify the Aws\S3\Sync\FilenameConverterInterface objects used to convert Amazon S3 object names to local filenames and vice versa. This can be useful if you require files to be renamed in a specific way.

use Aws\S3\Sync\UploadSyncBuilder;

UploadSyncBuilder::getInstance()
    ->setClient($client)
    ->setBucket('my-bucket')
    ->setAcl('public-read')
    ->uploadFromGlob('/path/to/file/*.php')
    ->build()
    ->transfer();

Downloading a bucket to a directory

You can download the objects stored in an Amazon S3 bucket using features similar to the uploadDirectory() method and the UploadSyncBuilder. You can download the entire contents of a bucket using the Aws\S3\S3Client::downloadBucket() method.

The following example will download all of the objects from my-bucket and store them in /local/directory. Object keys that are under virtual subfolders are converted into a nested directory structure when downloading the objects. Any directories missing on the local filesystem will be created automatically.

$client->downloadBucket('/local/directory', 'my-bucket');

Customizing the download sync

The method signature of the downloadBucket() method allows for the following arguments:

public function downloadBucket($directory, $bucket, $keyPrefix = null, array $options = array())

By specifying $keyPrefix, you can limit the downloaded objects to only keys that begin with the specified $keyPrefix. This, for example, can be useful for downloading objects under a specific virtual directory.

The downloadBucket() method also accepts an optional associative array of $options that can be used to further control the transfer.

params Array of parameters to use with each GetObject operation performed during the transfer. See GetObject for a list of available options.
base_dir Base directory to remove from each object key when downloading. By default, the entire object key is used to determine the path to the file on the local filesystem.
force Set to true to download every file, even if the file is already on the local filesystem and has not changed.
concurrency Maximum number of parallel downloads (defaults to 10)
debug Set to true to enable debug mode to print information about each download. Setting this value to an fopen resource will write the debug output to a stream rather than to STDOUT.
allow_resumable Set to true to allow previously interrupted downloads to be resumed using a Range GET
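
For example, a short sketch that combines a key prefix with some of these options (the prefix and values are only illustrative):

$client->downloadBucket('/local/directory', 'my-bucket', 'testing/', array(
    'concurrency'     => 20,
    'debug'           => true,
    'allow_resumable' => true
));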

More control with the DownloadSyncBuilder

The downloadBucket() method is an abstraction layer over the much more powerful Aws\S3\Sync\DownloadSyncBuilder. You can use a DownloadSyncBuilder object directly if you need more control over the transfer. Using the DownloadSyncBuilder allows for the following advanced features:

  • Can download only files that match a regular expression
  • Just like the UploadSyncBuilder, you can specify a custom \Iterator object to use to yield files to a DownloadSync object.
  • Can specify the Aws\S3\Sync\FilenameConverterInterface objects used to convert Amazon S3 object names to local filenames and vice versa.

use Aws\S3\Sync\DownloadSyncBuilder;

DownloadSyncBuilder::getInstance()
    ->setClient($client)
    ->setDirectory('/path/to/directory')
    ->setBucket('my-bucket')
    ->setKeyPrefix('/under-prefix')
    ->allowResumableDownloads()
    ->build()
    ->transfer();

Cleaning up

Now that we've taken a tour of how you can use the Amazon S3 client, let's clean up any resources we may have created.

// Be sure to add a use statement at the beginning of your script:
// use Aws\S3\Model\ClearBucket;

// Delete the objects in the bucket before attempting to delete
// the bucket
$clear = new ClearBucket($client, $bucket);
$clear->clear();

// Delete the bucket
$client->deleteBucket(array('Bucket' => $bucket));

// Wait until the bucket is not accessible
$client->waitUntilBucketNotExists(array('Bucket' => $bucket));