Provides S3Source & S3Cache.
S3Source maps a URL identifier to a Simple Storage Service (S3) object. It works with both AWS and non-AWS S3 endpoints.
In AWS, the following permissions are required: s3:GetObject, s3:PutObject, s3:DeleteObject, and s3:ListBucket.
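For reference, a minimal IAM policy granting these permissions might look like the following sketch (my-bucket is a placeholder; adapt the resources to your own bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}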
BasicLookupStrategy locates images by concatenating an identifier with a pre-defined path prefix and/or suffix. For example, with the following configuration options set:
# Note trailing slash!
source.S3Source.BasicLookupStrategy.path_prefix: path/prefix/
source.S3Source.BasicLookupStrategy.path_suffix:
An identifier of image.jpg in the URL will resolve to path/prefix/image.jpg within the bucket.
It's also possible to include a partial path in the identifier using URL-encoded slashes (%2F) as path separators. subpath%2Fimage.jpg in the URL would then resolve to path/prefix/subpath/image.jpg. If encoded slashes can't be used, an alternative path separator can be set via the slash_substitute configuration key.
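For illustration, a hedged sketch assuming a comma as the substitute (the value is an arbitrary choice; any string that doesn't otherwise appear in your identifiers should work):

slash_substitute: ,

subpath,image.jpg in the URL would then resolve to path/prefix/subpath/image.jpg.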
When your URL identifiers don't match your S3 object keys, ScriptLookupStrategy can tell S3Source to obtain the object key from a method in your delegate class. The s3source_object_info() method should return a hash containing bucket and key keys if an object is available, or nil if not. See the FilesystemSource section of the Guide for examples of similar methods.
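As a minimal sketch (the context hash access and the bucket and prefix names are illustrative assumptions modeled on the FilesystemSource examples, not documented API):

def s3source_object_info
  # Assumption: the identifier is available from the delegate context,
  # as in the FilesystemSource delegate examples.
  identifier = context['identifier']
  if identifier.end_with?('.jpg')
    # Hypothetical mapping: JPEG identifiers live under images/ in my-bucket.
    { 'bucket' => 'my-bucket', 'key' => "images/#{identifier}" }
  else
    nil # No object is available for this identifier.
  end
end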
Like all sources, S3Source needs to be able to figure out the format of a source image before it can be served. It uses the following strategy to do this:

1. A GET request is sent with a Range header specifying a small range of data from the beginning of the resource.
2. If a Content-Type header is present in the response, and is specific enough (i.e. not application/octet-stream), a format is inferred from that.
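For illustration, such an exchange might look like the following (the byte range, bucket, and key are assumptions, not plugin internals):

GET /path/prefix/image.jpg HTTP/1.1
Host: my-bucket.s3.amazonaws.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Type: image/jpeg
Content-Range: bytes 0-1023/524288

Here, image/jpeg is specific enough, so a JPEG format would be inferred.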
S3Cache caches variant images and metadata in a Simple Storage Service (S3) bucket. It supports both AWS and non-AWS endpoints.
In AWS, the following permissions are required: s3:GetObject, s3:PutObject, s3:DeleteObject, and s3:ListBucket.
When cache.S3Cache.multipart_uploads is enabled, multipart uploads are used to work around the size limit of single-part uploads. If using this cache in AWS, it is recommended to enable the AbortIncompleteMultipartUpload lifecycle rule on your bucket.
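For example, a rule like the following could be applied with the AWS CLI's aws s3api put-bucket-lifecycle-configuration command (the seven-day window is an arbitrary illustrative choice):

{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}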
Credentials are obtained from the following sources in order of priority:

1. The aws.accessKeyId and aws.secretKey system properties
2. The AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and/or AWS_SESSION_TOKEN environment variables
3. The source.S3Source.access_key_id and source.S3Source.secret_access_key keys in the application configuration

This is similar to the behavior of the AWS SDK's DefaultCredentialsProvider, except that the application configuration is consulted after the environment.
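For example, to supply credentials via the environment (source 2 above):

export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...

The same credentials could instead be set with the configuration keys in source 3, which are consulted only if neither higher-priority source provides them.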
Use the plugin installer:
bin/install_plugin.sh galia-plugin-s3
Alternatively, download the plugin directly and extract it into Galia's plugins directory.
Copy the keys from config.yml.sample into your application configuration file and fill them in.
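For example, a hedged sketch using only keys that appear elsewhere in this section (config.yml.sample lists the complete set; the values are placeholders):

source.S3Source.access_key_id: AKIA...
source.S3Source.secret_access_key: ...
source.S3Source.BasicLookupStrategy.path_prefix: path/prefix/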
Copy the s3source_object_info() method from delegates.rb.sample into your delegate script file, and implement it there.
If you would like to use S3Cache, you must tell Galia to use it as the variant cache and/or info cache by editing the following keys in your application configuration file:
cache.server.variant.enabled: true
cache.server.variant.implementation: S3Cache
cache.server.info.enabled: true
cache.server.info.implementation: S3Cache