.. _s3_tut:

======================================
An Introduction to boto's S3 interface
======================================

This tutorial focuses on the boto interface to the Simple Storage Service
from Amazon Web Services. This tutorial assumes that you have already
downloaded and installed boto.

Creating a Connection
---------------------

The first step in accessing S3 is to create a connection to the service.
There are two ways to do this in boto. The first is:

>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an S3Connection object. In
this example, the AWS access key and AWS secret key are passed in to the
method explicitly. Alternatively, you can set the environment variables:

AWS_ACCESS_KEY_ID - Your AWS Access Key ID
AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = S3Connection()

There is also a shortcut function in the boto package, called connect_s3,
that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_s3()

In either case, conn will point to an S3Connection object which we will
use throughout the remainder of this tutorial.
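
As a quick sketch of the environment-variable route, you can set the
variables from within the interpreter before creating the connection (in a
real deployment you would normally export them in your shell or environment
instead):

>>> import os
>>> os.environ['AWS_ACCESS_KEY_ID'] = '<aws access key>'
>>> os.environ['AWS_SECRET_ACCESS_KEY'] = '<aws secret key>'
>>> import boto
>>> conn = boto.connect_s3()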

Creating a Bucket
-----------------

Once you have a connection established with S3, you will probably want to
create a bucket. A bucket is a container used to store key/value pairs
in S3. A bucket can hold an unlimited amount of data so you could potentially
have just one bucket in S3 for all of your information. Or, you could create
separate buckets for different types of data. You can figure all of that out
later; first, let's just create a bucket. That can be accomplished like this:

>>> bucket = conn.create_bucket('mybucket')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "boto/connection.py", line 285, in create_bucket
    raise S3CreateError(response.status, response.reason)
boto.exception.S3CreateError: S3Error[409]: Conflict

Whoa. What happened there? Well, the thing you have to know about
buckets is that they are kind of like domain names. It's one flat name
space that everyone who uses S3 shares. So, someone has already created
a bucket called "mybucket" in S3 and that means no one else can grab that
bucket name. So, you have to come up with a name that hasn't been taken yet.
For example, something that uses a unique string as a prefix. Your
AWS_ACCESS_KEY (NOT YOUR SECRET KEY!) could work but I'll leave it to
your imagination to come up with something. I'll just assume that you
found an acceptable name.

The create_bucket method will create the requested bucket if it does not
exist or will return the existing bucket if it does exist.
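
One simple way to come up with a name that is unlikely to collide is to tack
a random suffix onto a prefix of your choosing. The uuid-based example below
is just an illustration, not part of boto; keeping the name lowercase helps
it stay within S3's bucket naming rules:

>>> import uuid
>>> bucket_name = 'mybucket-' + str(uuid.uuid4())
>>> bucket = conn.create_bucket(bucket_name)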

Creating a Bucket In Another Location
-------------------------------------

The example above assumes that you want to create a bucket in the
standard US region. However, it is possible to create buckets in
other locations. To do so, first import the Location object from the
boto.s3.connection module, like this:

>>> from boto.s3.connection import Location
>>> dir(Location)
['DEFAULT', 'EU', 'USWest', 'APSoutheast', '__doc__', '__module__']
>>>

As you can see, the Location object defines several possible locations:
DEFAULT, EU, USWest, and APSoutheast. By default, the location is the
empty string which is interpreted as the US Classic Region, the
original S3 region. However, by specifying another location at the
time the bucket is created, you can instruct S3 to create the bucket
in that location. For example:

>>> conn.create_bucket('mybucket', location=Location.EU)

will create the bucket in the EU region (assuming the name is available).
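
If you later need to check where an existing bucket lives, the Bucket object
provides a get_location method, which returns the bucket's location
constraint (an empty string for the US Classic Region). For example:

>>> bucket = conn.create_bucket('mybucket-eu', location=Location.EU)  # substitute an available name
>>> bucket.get_location()
'EU'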

Storing Data
------------

Once you have a bucket, presumably you will want to store some data
in it. S3 doesn't care what kind of information you store in your objects
or what format you use to store it. All you need is a key that is unique
within your bucket.

The Key object is used in boto to keep track of data stored in S3. To store
new data in S3, start by creating a new Key object:

>>> from boto.s3.key import Key
>>> k = Key(bucket)
>>> k.key = 'foobar'
>>> k.set_contents_from_string('This is a test of S3')

The net effect of these statements is to create a new object in S3 with a
key of "foobar" and a value of "This is a test of S3". To validate that
this worked, quit out of the interpreter and start it up again. Then:

>>> import boto
>>> c = boto.connect_s3()
>>> b = c.create_bucket('mybucket') # substitute your bucket name here
>>> from boto.s3.key import Key
>>> k = Key(b)
>>> k.key = 'foobar'
>>> k.get_contents_as_string()
'This is a test of S3'

So, we can definitely store and retrieve strings. A more interesting
example may be to store the contents of a local file in S3 and then retrieve
the contents to another local file.

>>> k = Key(b)
>>> k.key = 'myfile'
>>> k.set_contents_from_filename('foo.jpg')
>>> k.get_contents_to_filename('bar.jpg')

There are a couple of things to note about this. When you send data to
S3 from a file or filename, boto will attempt to determine the correct
mime type for that file and send it as a Content-Type header. The boto
package uses the standard mimetypes package in Python to do the mime type
guessing. The other thing to note is that boto does stream the content
to and from S3 so you should be able to send and receive large files without
any problem.
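
For large uploads it can be handy to watch the transfer as it happens. The
set_contents_from_* methods accept a progress callback that boto calls
periodically with the number of bytes transferred so far and the total size.
A minimal sketch (the percent_cb name is just an illustration):

>>> def percent_cb(complete, total):
...     print '%d bytes transferred out of %d' % (complete, total)
...
>>> k = Key(b)
>>> k.key = 'myfile'
>>> k.set_contents_from_filename('foo.jpg', cb=percent_cb, num_cb=10)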

Listing All Available Buckets
-----------------------------

In addition to accessing specific buckets via the create_bucket method,
you can also get a list of all available buckets that you have created.

>>> rs = conn.get_all_buckets()

This returns a ResultSet object (see the SQS Tutorial for more info on
ResultSet objects). The ResultSet can be used as a sequence or list type
object to retrieve Bucket objects.

>>> len(rs)
11
>>> for b in rs:
...     print b.name
...
<listing of available buckets>
>>> b = rs[0]
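
If you already know the name of the bucket you want, you don't have to list
them all; conn.get_bucket will fetch a single existing bucket by name and
raise an error if it doesn't exist:

>>> b = conn.get_bucket('mybucket') # substitute your bucket name here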

Setting / Getting the Access Control List for Buckets and Keys
---------------------------------------------------------------

The S3 service provides the ability to control access to buckets and keys
within S3 via the Access Control List (ACL) associated with each object in
S3. There are two ways to set the ACL for an object:

1. Create a custom ACL that grants specific rights to specific users. At the
   moment, the users that are specified within grants have to be registered
   users of Amazon Web Services so this isn't as useful or as general as it
   could be.

2. Use a "canned" access control policy. There are four canned policies
   defined:

   a. private: Owner gets FULL_CONTROL. No one else has any access rights.
   b. public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
   c. public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
   d. authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.

To set a canned ACL for a bucket, use the set_acl method of the Bucket object.
The argument passed to this method must be one of the four permissible
canned policies named in the list CannedACLStrings contained in acl.py.
For example, to make a bucket readable by anyone:

>>> b.set_acl('public-read')

You can also set the ACL for Key objects, either by passing an additional
argument to the above method:

>>> b.set_acl('public-read', 'foobar')

where 'foobar' is the key of some object within the bucket b, or you can
call the set_acl method of the Key object:

>>> k.set_acl('public-read')
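
As a quick illustration (just a sketch, relying on the fact that a Bucket's
list method iterates over the keys it contains), you could mark every
existing key in a bucket public-read like this:

>>> for key in b.list():
...     key.set_acl('public-read')
...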

You can also retrieve the current ACL for a Bucket or Key object using the
get_acl method. This method parses the AccessControlPolicy response sent
by S3 and creates a set of Python objects that represent the ACL.

>>> acp = b.get_acl()
>>> acp
<boto.acl.Policy instance at 0x2e6940>
>>> acp.acl
<boto.acl.ACL instance at 0x2e69e0>
>>> acp.acl.grants
[<boto.acl.Grant instance at 0x2e6a08>]
>>> for grant in acp.acl.grants:
...     print grant.permission, grant.display_name, grant.email_address, grant.id
...
FULL_CONTROL <boto.user.User instance at 0x2e6a30>

The Python objects representing the ACL can be found in the acl.py module
of boto.

Both the Bucket object and the Key object also provide shortcut
methods to simplify the process of granting individuals specific
access. For example, if you want to grant an individual user READ
access to a particular object in S3 you could do the following:

>>> key = b.lookup('mykeytoshare')
>>> key.add_email_grant('READ', 'foo@bar.com')

The email address provided should be the one associated with the user's
AWS account. There is a similar method called add_user_grant that accepts the
canonical id of the user rather than the email address.
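
For example (the canonical user id below is just a placeholder; substitute
the canonical id of the AWS account you want to grant access to):

>>> key.add_user_grant('READ', '<canonical-user-id>')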

Setting/Getting Metadata Values on Key Objects
----------------------------------------------

S3 allows arbitrary user metadata to be assigned to objects within a bucket.
To take advantage of this S3 feature, you should use the set_metadata and
get_metadata methods of the Key object to set and retrieve metadata associated
with an S3 object. For example:

>>> k = Key(b)
>>> k.key = 'has_metadata'
>>> k.set_metadata('meta1', 'This is the first metadata value')
>>> k.set_metadata('meta2', 'This is the second metadata value')
>>> k.set_contents_from_filename('foo.txt')

This code associates two metadata key/value pairs with the Key k. To retrieve
those values later:

>>> k = b.get_key('has_metadata')
>>> k.get_metadata('meta1')
'This is the first metadata value'
>>> k.get_metadata('meta2')
'This is the second metadata value'
>>>
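
Note that the metadata is sent to S3 along with the object's data, which is
why set_metadata is called before set_contents_from_filename in the example
above. As a sketch (the exact dictionary formatting may differ), the Key
object also exposes all of its user metadata at once through its metadata
attribute:

>>> k = b.get_key('has_metadata')
>>> k.metadata
{'meta1': 'This is the first metadata value', 'meta2': 'This is the second metadata value'}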