Getting started with the Amazon S3 SDK for Java

Add the SDK dependency to the pom.xml of your Maven project:
  <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
      <version>1.9.13</version>
  </dependency>

Before running, make sure you have set up your API credentials properly. Follow the instructions in the "Setting up AWS S3 API Java Client credentials" section below (http://mohiplanet.blogspot.com/2014/12/setting-up-aws-s3-api-java-client.html).
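Once the dependency and credentials are in place, here is a minimal sketch (the class name is mine) that creates a client from the default profile and lists your buckets, just to verify the setup:

  import com.amazonaws.auth.profile.ProfileCredentialsProvider;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.AmazonS3Client;
  import com.amazonaws.services.s3.model.Bucket;

  public class S3QuickCheck {
      public static void main(String[] args) {
          // Uses the [default] profile from ~/.aws/credentials
          AmazonS3 s3 = new AmazonS3Client(new ProfileCredentialsProvider());
          for (Bucket bucket : s3.listBuckets()) {
              System.out.println(bucket.getName());
          }
      }
  }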

Creating a directory:-
http://voidweb.com/2013/03/how-to-create-a-folder-programmatically-in-s3-using-amazon-aws-java-sdk/ has good examples for creating a directory
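As a quick sketch of the same idea: S3 has no real directories, so you create a zero-byte object whose key ends with a slash. The helper below is illustrative (the method name is mine) and assumes an initialized AmazonS3 client:

  import java.io.ByteArrayInputStream;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.model.ObjectMetadata;
  import com.amazonaws.services.s3.model.PutObjectRequest;

  // folderName should end with "/", e.g. "photos/", so S3 browsers display it as a folder
  public static void createFolder(AmazonS3 s3Client, String bucketName, String folderName) {
      ObjectMetadata metadata = new ObjectMetadata();
      metadata.setContentLength(0); // zero-byte placeholder object
      PutObjectRequest request = new PutObjectRequest(
          bucketName, folderName, new ByteArrayInputStream(new byte[0]), metadata);
      s3Client.putObject(request);
  }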
Uploading a file:-
Here is an example for uploading a file that will be publicly accessible:
http://mohiplanet.blogspot.com/2014/11/aws-s3-multipart-file-upload-with.html
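That post uses a multipart upload (reproduced in full further down this page). For small files, a simpler sketch with putObject and a public-read ACL also works; s3Client is assumed to be an initialized AmazonS3 client:

  import java.io.File;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.model.CannedAccessControlList;
  import com.amazonaws.services.s3.model.PutObjectRequest;

  public static void uploadPublicFile(AmazonS3 s3Client, String bucketName, String keyName, String filePath) {
      PutObjectRequest request = new PutObjectRequest(bucketName, keyName, new File(filePath))
          .withCannedAcl(CannedAccessControlList.PublicRead); // anyone can read the uploaded object
      s3Client.putObject(request);
  }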
About content type:-
Note:- You have to set an appropriate content type for each of your files; the S3 SDK will not do it automatically for you.
Here is a sample you can use while uploading to set the correct content type:-
  public ObjectMetadata generateCorrectContentTypeObjMetaFromKeyName(String key) throws InvalidContentTypeException {
      ObjectMetadata metadata = new ObjectMetadata();
      String contentType = getFileContentTypeFromFileName(key);
      if (contentType.length() == 0) {
          throw new InvalidContentTypeException();
      }
      LOG.log(Level.INFO, "Setting content type : " + contentType);
      metadata.setContentType(contentType);
      return metadata;
  }

Set the content type according to your key name with:

  initRequest.setObjectMetadata(generateCorrectContentTypeObjMetaFromKeyName(keyName));

The details and definition of getFileContentTypeFromFileName(key) are here:-
 http://mohiplanet.blogspot.com/2014/12/amazon-web-services-s3-api-java-client.html

More examples:-
https://github.com/aws/aws-sdk-java/tree/master/src/samples/AmazonS3

Command-line tools for working faster:-
http://s3tools.org/s3cmd

Deploy Play Framework 2.1.x app script (.sh)

Go to the project directory and run the following:
  activator clean compile stage
  # remove the stale PID file left over from a previous run
  rm /usr/local/development/workspace/Vudy/target/universal/stage/RUNNING_PID
  activator clean stage
  # start the app in the background; the start script is named after your app, not RUNNING_PID
  nohup target/universal/stage/bin/<your-app-name> &

References :
https://www.playframework.com/documentation/2.1.x/Production

Amazon Web Services S3 API Java Client : update file object key with correct content type

By default, the uploaded file's content type is:
application/x-www-form-urlencoded; charset=utf-8

To change this, you can do something like this:

  import javax.activation.MimetypesFileTypeMap;
  import java.util.logging.Level;

  import com.amazonaws.auth.profile.ProfileCredentialsProvider;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.AmazonS3Client;
  import com.amazonaws.services.s3.model.CannedAccessControlList;
  import com.amazonaws.services.s3.model.CopyObjectRequest;
  import com.amazonaws.services.s3.model.ObjectMetadata;

  // These methods are assumed to live in a class that defines LOG (a java.util.logging.Logger)
  // and a "bucket" field holding your bucket name.
  static AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

  public void validateKeyContentType(String key) {
      try {
          ObjectMetadata metadata = generateCorrectContentTypeObjMetaFromKeyName(key);

          // Copy the object onto itself with the new metadata and a public-read ACL
          final CopyObjectRequest request = new CopyObjectRequest(bucket, key, bucket, key)
              .withNewObjectMetadata(metadata)
              .withCannedAccessControlList(CannedAccessControlList.PublicRead);
          s3Client.copyObject(request);
      } catch (Exception exception) {
          LOG.info(exception.getMessage());
      }
  }

  public ObjectMetadata generateCorrectContentTypeObjMetaFromKeyName(String key) {
      ObjectMetadata metadata = new ObjectMetadata();
      String contentType = getFileContentTypeFromFileName(key);

      LOG.log(Level.INFO, "Setting content type : " + contentType);
      metadata.setContentType(contentType);
      return metadata;
  }

  /*
   * This method takes an argument like "bucket1uploads/file100.mp4" and returns its
   * content type ("video/mp4") by parsing the file name (using javax.activation).
   */
  public static String getFileContentTypeFromFileName(String filePath) {
      // javax.activation's default map is old and misses these content types
      MimetypesFileTypeMap ftmp = new MimetypesFileTypeMap();
      ftmp.addMimeTypes("audio/mp3 mp3 MP3");
      ftmp.addMimeTypes("video/mp4 mp4 MP4");
      ftmp.addMimeTypes("video/mkv mkv MKV");
      ftmp.addMimeTypes("video/webm webm WEBM");
      ftmp.addMimeTypes("video/flv flv FLV");
      ftmp.addMimeTypes("video/x-flv x-flv X-FLV");
      return ftmp.getContentType(filePath);
  }
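A hypothetical usage sketch, assuming the methods above sit in a class named ContentTypeFixer whose bucket field is set:

  ContentTypeFixer fixer = new ContentTypeFixer();
  fixer.validateKeyContentType("bucket1uploads/file100.mp4"); // copies the object onto itself with the corrected content type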

Setting up AWS S3 API Java Client credentials

You may get this exception on the first run of the Amazon S3 Java API client:
Exception in thread "main" java.lang.IllegalArgumentException: AWS credential profiles file not found in the given path: /root/.aws/credentials
        at com.amazonaws.auth.profile.internal.ProfilesConfigFileLoader.loadProfiles(ProfilesConfigFileLoader.java:45)
        at com.amazonaws.auth.profile.ProfilesConfigFile.loadProfiles(ProfilesConfigFile.java:173)
        at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:109)
        at com.amazonaws.auth.profile.ProfilesConfigFile.<init>(ProfilesConfigFile.java:89)
        at com.amazonaws.auth.profile.ProfileCredentialsProvider.getCredentials(ProfileCredentialsProvider.java:117)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3691)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3647)
        at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:626)
        at com.vantage.photowall.toolset.s3client.PhotowallAWSS3Client.listKeys(PhotowallAWSS3Client.java:179)
        at com.vantage.photowall.toolset.s3client.AWSS3BucketAPIClientCLI.main(AWSS3BucketAPIClientCLI.java:55)

To solve this, simply:
Download the credentials file from https://github.com/aws/aws-sdk-java/blob/master/src/samples/AmazonS3/credentials

  wget --no-check-certificate https://github.com/aws/aws-sdk-java/raw/master/src/samples/AmazonS3/credentials

Put it in the ~/.aws/ directory:

  mkdir ~/.aws/
  mv credentials ~/.aws/credentials

Open the credentials file and fill in your access key ID and secret access key, like this:

  # Move this credentials file to (~/.aws/credentials)
  # after you fill in your access and secret keys in the default profile
  # WARNING: To avoid accidental leakage of your credentials,
  # DO NOT keep this file in your source directory.
  [default]
  aws_access_key_id=YOUR_ACCESS_KEY_ID
  aws_secret_access_key=YOUR_SECRET_ACCESS_KEY

Save and close the file.
Run the API client program again :)
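Alternatively, if you would rather not rely on the profile file at all, here is a minimal sketch that passes the keys directly via BasicAWSCredentials (reading them from environment variables, which are assumed to be set):

  import com.amazonaws.auth.BasicAWSCredentials;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.AmazonS3Client;

  // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to be exported in the environment
  BasicAWSCredentials credentials = new BasicAWSCredentials(
      System.getenv("AWS_ACCESS_KEY_ID"),
      System.getenv("AWS_SECRET_ACCESS_KEY"));
  AmazonS3 s3Client = new AmazonS3Client(credentials);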

AWS S3 multipart file upload with public access

  import java.io.File;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  import com.amazonaws.auth.profile.ProfileCredentialsProvider;
  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.AmazonS3Client;
  import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;
  import com.amazonaws.services.s3.model.CannedAccessControlList;
  import com.amazonaws.services.s3.model.CompleteMultipartUploadRequest;
  import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
  import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;
  import com.amazonaws.services.s3.model.PartETag;
  import com.amazonaws.services.s3.model.UploadPartRequest;

  public class MultipartPublicFileUpload {

      static AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

      public static boolean multiPartFileUploadWithPublicAccess(String existingBucketName, String keyName, String filePath) {
          // Create a list of UploadPartResponse objects. You get one of these
          // for each part upload.
          List<PartETag> partETags = new ArrayList<PartETag>();

          // Step 1: Initialize.
          InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(existingBucketName, keyName);
          initRequest.setCannedACL(CannedAccessControlList.PublicRead); // This is where we are enabling this file for public access
          InitiateMultipartUploadResult initResponse = s3Client.initiateMultipartUpload(initRequest);

          File file = new File(filePath);
          long contentLength = file.length();
          long partSize = 5242880; // Set part size to 5 MB.

          try {
              // Step 2: Upload parts.
              long filePosition = 0;
              for (int i = 1; filePosition < contentLength; i++) {
                  // Last part can be less than 5 MB. Adjust part size.
                  partSize = Math.min(partSize, (contentLength - filePosition));

                  // Create request to upload a part.
                  UploadPartRequest uploadRequest = new UploadPartRequest()
                      .withBucketName(existingBucketName).withKey(keyName)
                      .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                      .withFileOffset(filePosition)
                      .withFile(file)
                      .withPartSize(partSize);

                  // Upload part and add response to our list.
                  partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());

                  filePosition += partSize;
              }

              // Step 3: Complete.
              CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
                  existingBucketName,
                  keyName,
                  initResponse.getUploadId(),
                  partETags);
              s3Client.completeMultipartUpload(compRequest);
              return true;
          } catch (Exception e) {
              s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
                  existingBucketName, keyName, initResponse.getUploadId()));
              return false;
          }
      }

      public static void main(String[] args) throws IOException {
          String existingBucketName = "myphotowall";
          String keyName = "photo/myuploadedtestfile3.txt";
          String filePath = "/tmp/mytestfile.txt";
          multiPartFileUploadWithPublicAccess(existingBucketName, keyName, filePath);
      }
  }
Before you run, make sure you have set up the API client credentials properly. See http://mohiplanet.blogspot.com/2014/12/setting-up-aws-s3-api-java-client.html

Note:- This will upload the file with content type application/x-www-form-urlencoded; charset=utf-8.

Please refer to http://mohiplanet.blogspot.com/2014/12/amazon-web-services-s3-api-java-client.html for updating existing keys with the correct content type, or for setting the correct content type with object metadata while uploading (see the sketch below).
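If you prefer to set the content type at upload time rather than fixing it afterwards, a minimal sketch (reusing getFileContentTypeFromFileName from the post above, and assuming it is available on the classpath) is to attach ObjectMetadata to the multipart init request before Step 1:

  // Inside multiPartFileUploadWithPublicAccess, before calling s3Client.initiateMultipartUpload(initRequest):
  ObjectMetadata metadata = new ObjectMetadata();
  metadata.setContentType(getFileContentTypeFromFileName(keyName)); // e.g. "video/mp4"
  initRequest.setObjectMetadata(metadata);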

References :
http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html

svn : remove mistakenly added project files and directories within a directory

  # Remove unnecessary files and directories with:
  svn rm file_or_directories
  # e.g.
  svn rm .settings   # directory
  svn rm .project    # file
  svn rm .git        # directory
  svn rm .idea       # directory
  # commit all changes
  svn commit -m "Your remove message."

Install PyCharm 4 EAP on CentOS 6 script (.sh)

    cd /usr/local/
    # download PyCharm 4 EAP
    wget http://download.jetbrains.com/python/pycharm-professional-139.113.tar.gz
    tar xf pycharm-professional-139.113.tar.gz
    rm pycharm-professional-139.113.tar.gz
    # make sure that root has permissions all the way through the unpacked directory
    chown -R root:root pycharm-professional-139.113/
    cd pycharm-professional-139.113/bin/
    # script to start the IDE conveniently
    echo "sh /usr/local/pycharm-professional-139.113/bin/pycharm.sh &" > /usr/local/pycharm-professional-139.113/bin/pycharm1.sh
    chmod 777 /usr/local/pycharm-professional-139.113/bin/pycharm1.sh
    ln -s /usr/local/pycharm-professional-139.113/bin/pycharm1.sh /usr/bin/pycharm
    # let's run
    echo "Just type 'pycharm' and run :)"
    pycharm