## S3cmd tool for Amazon Simple Storage Service (S3)

[![Build Status](https://github.com/s3tools/s3cmd/actions/workflows/test.yml/badge.svg)](https://github.com/s3tools/s3cmd/actions/workflows/test.yml)

* Author: Michal Ludvig, michal@logix.cz
* [Project homepage](http://s3tools.org)
* (c) [TGRMN Software](http://www.tgrmn.com) and contributors

S3tools / S3cmd mailing lists:

* Announcements of new releases: s3tools-announce@lists.sourceforge.net
* General questions and discussion: s3tools-general@lists.sourceforge.net
* Bug reports: s3tools-bugs@lists.sourceforge.net

S3cmd requires Python 2.6 or newer.
Python 3+ is also supported starting with S3cmd version 2.

See [installation instructions](https://github.com/s3tools/s3cmd/blob/master/INSTALL.md).

### What is S3cmd

S3cmd (`s3cmd`) is a free command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol, such as Google Cloud Storage or DreamHost DreamObjects. It is best suited for power users who are familiar with command line programs. It is also ideal for batch scripts and automated backup to S3, triggered from cron, etc.

S3cmd is written in Python. It's an open source project available under the GNU General Public License v2 (GPLv2) and is free for both commercial and private use. You will only have to pay Amazon for using their storage.

Many features and options have been added to S3cmd since its first release in 2008. At last count there were more than 60 command line options, covering multipart uploads, encryption, incremental backup, s3 sync, ACL and metadata management, bucket size reporting, bucket policies, and more.
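
As a quick taster, here are a few representative invocations (the bucket `s3://my-bucket` and the local paths are placeholders, not part of the walkthrough below):

```
# keep a remote copy of a local tree in sync (add --dry-run to preview)
$ s3cmd sync ~/photos/ s3://my-bucket/photos/

# report how much space a bucket uses
$ s3cmd du s3://my-bucket

# make a single object publicly readable over HTTP
$ s3cmd setacl --acl-public s3://my-bucket/photos/cover.jpg
```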

### What is Amazon S3

Amazon S3 provides a managed internet-accessible storage service where anyone can store any amount of data and retrieve it later.

S3 is a paid service operated by Amazon. Before storing anything in S3 you must sign up for an "AWS" account (where AWS = Amazon Web Services) to obtain a pair of identifiers: Access Key and Secret Key. You will need to give these keys to S3cmd. Think of them as if they were a username and password for your S3 account.

### Amazon S3 pricing explained

At the time of this writing the costs of using S3 are (in USD):

* $0.026 per GB per month of storage space used

plus

* $0.00 per GB - all data uploaded

plus

* $0.000 per GB - first 1 GB / month data downloaded
* $0.090 per GB - up to 10 TB / month data downloaded
* $0.085 per GB - next 40 TB / month data downloaded
* $0.070 per GB - next 100 TB / month data downloaded
* $0.050 per GB - data downloaded / month over 150 TB

plus

* $0.005 per 1,000 PUT, COPY or LIST requests
* $0.004 per 10,000 GET and all other requests

If for instance on the 1st of January you upload 2 GB of photos in JPEG from your holiday in New Zealand, at the end of January you will be charged about $0.05 for using 2 GB of storage space for a month (2 GB × $0.026), $0.00 for uploading 2 GB of data, and a few cents for requests. That comes to slightly over $0.05 for a complete backup of your precious holiday pictures.

In February you don't touch it. Your data are still on S3 servers, so you pay about $0.05 for those two gigabytes, but not a single cent will be charged for any transfer. That is the ongoing cost of your backup. Not too bad.

In March you allow anonymous read access to some of your pictures and your friends download, say, 1500 MB of them. As the files are owned by you, you are responsible for the costs incurred. That means at the end of March you'll be charged about $0.05 for storage plus $0.045 for the download traffic generated by your friends (the first 1 GB is free, the remaining 0.5 GB at $0.090 per GB).
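
To double-check that arithmetic with the rates above, a quick one-liner (awk is used here purely as a calculator):

```
$ awk 'BEGIN { printf "storage: $%.3f  download: $%.3f\n", 2 * 0.026, (1.5 - 1) * 0.090 }'
storage: $0.052  download: $0.045
```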

There is no minimum monthly contract or a setup fee. What you use is what you pay for. In the beginning my bill used to be US$0.03 or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check the [Amazon S3 pricing page](http://aws.amazon.com/s3/pricing/) for more details.

Needless to say, all this money is charged by Amazon itself; there is obviously no charge for using S3cmd :-)

### Amazon S3 basics

Files stored in S3 are called "objects" and their names are officially called "keys". Since this is sometimes confusing for users, we often refer to the objects as "files" or "remote files". Each object belongs to exactly one "bucket".

To describe objects in S3 storage we invented a URI-like schema in the following form:

```
s3://BUCKET
```

or

```
s3://BUCKET/OBJECT
```

### Buckets

Buckets are sort of like directories or folders, with some restrictions:

1. each user can only have 100 buckets at the most,
2. bucket names must be unique amongst all users of S3,
3. buckets cannot be nested into a deeper hierarchy and
4. a name of a bucket can only consist of basic alphanumeric
   characters plus dot (.) and dash (-). No spaces, no accented
   or UTF-8 letters, etc.

It is a good idea to use DNS-compatible bucket names. That for instance means you should not use upper case characters. While DNS compliance is not strictly required, some features described below are not available for buckets with DNS-incompatible names. A step further is using a fully qualified domain name (FQDN) for a bucket - that has even more benefits.

* For example "s3://--My-Bucket--" is not DNS compatible.
* On the other hand "s3://my-bucket" is DNS compatible but
  is not FQDN.
* Finally "s3://my-bucket.s3tools.org" is DNS compatible
  and FQDN, provided you own the s3tools.org domain and can
  create the domain record for "my-bucket.s3tools.org".

Look for "Virtual Hosts" later in this text for more details regarding FQDN-named buckets.

### Objects (files stored in Amazon S3)

Unlike for buckets, there are almost no restrictions on object names. They can be any UTF-8 string up to 1024 bytes long. Interestingly enough, the object name can contain the forward slash character (/), so `my/funny/picture.jpg` is a valid object name. Note that there are no directories or buckets called `my` and `funny` - it is really a single object named `my/funny/picture.jpg` and S3 does not care at all that it _looks_ like a directory structure.

The full URI of such an image could be, for example:

```
s3://my-bucket/my/funny/picture.jpg
```

### Public vs Private files

The files stored in S3 can be either Private or Public. The Private ones are readable only by the user who uploaded them, while the Public ones can be read by anyone. Additionally, the Public files can be accessed using the HTTP protocol, not only using `s3cmd` or a similar tool.

The ACL (Access Control List) of a file can be set at the time of upload using the `--acl-public` or `--acl-private` option with the `s3cmd put` or `s3cmd sync` commands (see below).

Alternatively the ACL can be altered for existing remote files with the `s3cmd setacl --acl-public` (or `--acl-private`) command.
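
For instance, with a hypothetical bucket and file:

```
# upload as world-readable; the object is then also reachable over plain HTTP
$ s3cmd put --acl-public image.jpg s3://my-bucket/image.jpg

# later, make the same object private again
$ s3cmd setacl --acl-private s3://my-bucket/image.jpg
```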

### Simple s3cmd HowTo

1) Register for Amazon AWS / S3

Go to http://aws.amazon.com/s3, click the "Sign up for web service" button in the right column and work through the registration. You will have to supply your credit card details in order to allow Amazon to charge you for S3 usage. At the end you should have your Access and Secret Keys.

If you set up a separate IAM user, that user's access key must have at least the following permissions to do anything (a minimal example policy is sketched below this list):
-  s3:ListAllMyBuckets
-  s3:GetBucketLocation
-  s3:ListBucket

Other example policies can be found at https://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html
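
A minimal IAM policy granting just those three actions might look like this sketch (in practice you would usually scope `Resource` down to the specific buckets you use):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
```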

2) Run `s3cmd --configure`

You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar.

Remember to add the s3:ListAllMyBuckets permission to the keys, or you will get an AccessDenied error while testing access.

3) Run `s3cmd ls` to list all your buckets.

As you have just started using S3, there are no buckets owned by you yet. So the output will be empty.

4) Make a bucket with `s3cmd mb s3://my-new-bucket-name`

As mentioned above, bucket names must be unique amongst _all_ users of S3. That means simple names like "test" or "asdf" are already taken and you must make up something more original. To demonstrate as many features as possible, let's create a FQDN-named bucket `s3://public.s3tools.org`:

```
$ s3cmd mb s3://public.s3tools.org

Bucket 's3://public.s3tools.org' created
```

5) List your buckets again with `s3cmd ls`

Now you should see your freshly created bucket:

```
$ s3cmd ls

2009-01-28 12:34  s3://public.s3tools.org
```

6) List the contents of the bucket:

```
$ s3cmd ls s3://public.s3tools.org
$
```

It's empty, indeed.

7) Upload a single file into the bucket:

```
$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml

some-file.xml -> s3://public.s3tools.org/somefile.xml  [1 of 1]
 123456 of 123456   100% in    2s    51.75 kB/s  done
```

Upload a two-directory tree into the bucket's virtual 'directory':

```
$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/

File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
```

As you can see, we didn't have to create the `/somewhere` 'directory'. In fact it's only a filename prefix, not a real directory, and it doesn't have to be created in any way beforehand.

Instead of using `put` with the `--recursive` option, you could also use the `sync` command:

```
$ s3cmd sync dir1 dir2 s3://public.s3tools.org/somewhere/
```

8) Now list the bucket's contents again:

```
$ s3cmd ls s3://public.s3tools.org

                       DIR   s3://public.s3tools.org/somewhere/
2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
```

Use `--recursive` (or `-r`) to list all the remote files:

```
$ s3cmd ls --recursive s3://public.s3tools.org

2009-02-10 05:10    123456   s3://public.s3tools.org/somefile.xml
2009-02-10 05:13        18   s3://public.s3tools.org/somewhere/dir1/file1-1.txt
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir1/file1-2.txt
2009-02-10 05:13        16   s3://public.s3tools.org/somewhere/dir1/file1-3.log
2009-02-10 05:13        11   s3://public.s3tools.org/somewhere/dir2/file2-1.bin
2009-02-10 05:13         8   s3://public.s3tools.org/somewhere/dir2/file2-2.txt
```

9) Retrieve one of the files back and verify that it hasn't been corrupted:

```
$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml

s3://public.s3tools.org/somefile.xml -> some-file-2.xml  [1 of 1]
 123456 of 123456   100% in    3s    35.75 kB/s  done
```

```
$ md5sum some-file.xml some-file-2.xml

39bcb6992e461b269b95b3bda303addf  some-file.xml
39bcb6992e461b269b95b3bda303addf  some-file-2.xml
```

The checksum of the original file matches that of the retrieved one. Looks like it worked :-)

To retrieve a whole 'directory tree' from S3 use a recursive get:

```
$ s3cmd get --recursive s3://public.s3tools.org/somewhere

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
```

Since the destination directory wasn't specified, `s3cmd` saved the directory structure in the current working directory ('.').

There is an important difference between:

```
get s3://public.s3tools.org/somewhere
```

and

```
get s3://public.s3tools.org/somewhere/
```

(note the trailing slash)

`s3cmd` always uses the last path part, i.e. the part after the last slash, for naming files.

In the case of `s3://.../somewhere` the last path part is 'somewhere' and therefore the recursive get names the local files as somewhere/dir1, somewhere/dir2, etc.

On the other hand, in `s3://.../somewhere/` the last path part is empty and s3cmd will only create 'dir1' and 'dir2' without the 'somewhere/' prefix:

```
$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ ~/

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '~/dir1/file1-1.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '~/dir1/file1-2.txt'
File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '~/dir1/file1-3.log'
File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '~/dir2/file2-1.bin'
```

See? It's `~/dir1` and not `~/somewhere/dir1` as it was in the previous example.

10) Clean up - delete the remote files and remove the bucket:

Remove everything under s3://public.s3tools.org/somewhere/

```
$ s3cmd del --recursive s3://public.s3tools.org/somewhere/

File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
...
```

Now try to remove the bucket:

```
$ s3cmd rb s3://public.s3tools.org

ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
```

Ouch, we forgot about `s3://public.s3tools.org/somefile.xml`. We can force the bucket removal anyway:

```
$ s3cmd rb --force s3://public.s3tools.org/

WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
File s3://public.s3tools.org/somefile.xml deleted
Bucket 's3://public.s3tools.org/' removed
```

### Hints

The basic usage is as simple as described in the previous section.

You can increase the level of verbosity with the `-v` option, and if you're really keen to know what the program does under its bonnet, run it with `-d` to see all 'debugging' output.

After configuring it with `--configure`, all available options are written to your `~/.s3cfg` file. It's a text file, ready to be modified in your favourite text editor.
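
The file uses a simple INI-style layout. A short, illustrative excerpt (the key values are placeholders; `host_base`/`host_bucket` only need changing for non-Amazon, S3-compatible endpoints):

```
[default]
access_key = YOUR-ACCESS-KEY
secret_key = YOUR-SECRET-KEY
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
use_https = True
```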

The transfer commands (put, get, cp, mv, and sync) continue transferring even if an object fails. If a failure occurs, it is reported to stderr and the exit status will be EX_PARTIAL (2). If the option `--stop-on-error` is specified, or the config option `stop_on_error` is true, the transfers stop and an appropriate error code is returned.
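
A backup script can act on that exit status, for example (bucket and paths are placeholders):

```
$ s3cmd sync ~/data/ s3://my-bucket/data/
$ if [ $? -eq 2 ]; then echo "some files failed to transfer"; fi
```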

For more information refer to the [S3cmd / S3tools homepage](http://s3tools.org).

### License

Copyright (C) 2007-2020 TGRMN Software - http://www.tgrmn.com - and contributors

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.