Amazon announces storage cloud for 5TB objects
December 13, 2010

For Amazon Web Services (AWS), an object is a piece of data, a file or a group of files. AWS assigns each object an identification key and stores the object across a number of datacentres in its S3 storage cloud.
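For readers unfamiliar with the model, here is a minimal sketch of storing and retrieving an object by its key using the present-day Python boto3 SDK, which postdates this announcement; the bucket name and key are hypothetical:

```python
import boto3

# S3 client; credentials are read from the environment or AWS config files
s3 = boto3.client("s3")

# Store a small object under a key of our choosing (hypothetical bucket and key)
s3.put_object(Bucket="example-bucket", Key="reports/2010/summary.txt",
              Body=b"object contents")

# Retrieve the same object by its key
response = s3.get_object(Bucket="example-bucket", Key="reports/2010/summary.txt")
data = response["Body"].read()
print(len(data), "bytes retrieved")
```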
On Thursday, AWS announced in a blog post that it had raised the maximum size of an S3 object.
"A number of our customers want to store very large files in Amazon S3 — scientific or medical data, high-resolution video content, backup files, and so forth," Amazon Web Services wrote. "We’ve raised the limit by three orders of magnitude. Individual Amazon S3 objects can now range in size from one byte all the way to five terabytes (TB)."
To store such large objects, users will need AWS's multipart upload feature, which transfers them in individual chunks, AWS wrote.
The multipart upload feature, announced in November, originally allowed people to split objects of between 100MB and 5GB into up to 1,024 individual chunks for distributed, parallelised uploading. Splitting data into chunks makes uploads more resilient, AWS said at the time, because the risk of a failed upload is spread across multiple parts rather than concentrated in one single file. Files can now be split into up to 10,000 parts, according to the AWS S3 developer guide.
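As an illustration of how such a chunked upload works, here is a minimal sketch using the low-level multipart calls in the Python boto3 SDK, which postdates this announcement; the bucket name, key, local file path and 100MB part size are assumptions for the example:

```python
import boto3

s3 = boto3.client("s3")

bucket = "example-bucket"        # hypothetical bucket
key = "backups/archive.bin"      # hypothetical object key
path = "archive.bin"             # hypothetical local file
part_size = 100 * 1024 * 1024    # 100MB parts, assumed for this sketch

# Start the multipart upload and remember the upload ID
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
upload_id = upload["UploadId"]

parts = []
try:
    with open(path, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            # Upload each chunk as a numbered part; S3 returns an ETag per part
            resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=upload_id,
                                  PartNumber=part_number, Body=chunk)
            parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
            part_number += 1

    # Ask S3 to stitch the uploaded parts together into a single object
    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})
except Exception:
    # If anything fails, abort so partially uploaded parts are not left behind
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise
```

Because each part is uploaded and acknowledged separately, a failure only requires re-sending that part rather than the whole file, which is the resilience benefit AWS describes; parts can also be sent in parallel to speed up the transfer.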
The increased object size in S3 will help Amazon support various "big data use cases" such as genome sequencing, Amazon Web Services chief technology officer Werner Vogels wrote in a blog post on Thursday.
AWS on Thursday also launched software development kits for the Android and iOS mobile operating systems, which make it easier for applications built on those platforms to upload and download data stored in S3.