We have a requirement to upload files/attachments on Case records (using a custom drag-and-drop section) from Service Cloud to AWS S3. We ran into a file size limit with files bigger than 12 MB. Since the upload is an asynchronous process, it goes through for files up to 12 MB, and we are OK with that. But we are unable to download files bigger than 6 MB (a synchronous call). The reason for downloading the files instead of using the full context URL to AWS is that it is a client/browser call and exposes the secret key. Please advise if there is a way to download files up to 12 MB (at least) via a server call from Apex. All we know and have is the AWS host URL, bucket name, secret key, and access key. Any help is greatly appreciated!
[Salesforce] Upload/Download files to and from AWS using Apex
Related Solutions
I found an answer to your question a couple of years back, and it works well up to about 5 megabytes total across all files. I do not remember the source but will look, as I want to give due credit. I will add the controller, component, JavaScript, and CSS files to this post. If you have additional questions, please let me know.
I added my own test code for the controller along with the custom objects I use this feature with, so you will have to write some of your own test code. This example was old enough that I was allowed to add the test code to the same class, something we cannot do anymore. I used another utility class called TestConfiguration, in which I call static methods to create objects for testing. There are many good sources on using this concept to manage the creation of test objects.
I will find the source and add it here, as much of this work is not my own and I need to give due credit.
Controller
global with sharing class FileUploadController {
@RemoteAction
global static String attachBlob(String parentId, String attachmentId, String fileName, String contentType, String base64BlobValue){
/*
parentId: The sfdc object Id this file will be attached to
attachmentId: The record of the current Attachment file being processed
fileName: Name of the attachment
contentType: Content type of the file being attached
base64BlobValue: Base64 encoded string of the file piece currently processing
*/
//If attachmentId is blank, this is the first piece of a multi-piece upload
if(attachmentId == '' || attachmentId == null){
Attachment att = new Attachment(
ParentId = parentId,
Body = EncodingUtil.Base64Decode(base64BlobValue),
Name = fileName,
ContentType = contentType
);
insert att;
//Return the new attachment Id
return att.Id;
}else{
for(Attachment atm : [select Id, Body from Attachment where Id = :attachmentId]){
//Take the body of the current attachment, convert to base64 string, append base64 value sent from page, then convert back to binary for the body
update new Attachment(Id = attachmentId, Body = EncodingUtil.Base64Decode(EncodingUtil.Base64Encode(atm.Body) + base64BlobValue));
}
//Return the Id of the attachment we are currently processing
return attachmentId;
}
}
@isTest
private static void testFileUploads(){
//to add additional information to the account record iterate through the returned list before inserting
List<Account> accts = TestConfiguration.createAccounts('Account', 1);
insert accts;
//to add additional information to the course record iterate through the returned list before inserting
List<Course_Detail__c> courses = TestConfiguration.createCourses('Course', 1, accts);
insert courses;
List<Course_Inspection__c> inspections = TestConfiguration.createInspections('Inspection', 1, courses);
insert inspections;
//Exercise both branches of attachBlob
Blob bodyBlob = Blob.valueOf('Unit Test Attachment Body');
//attachBlob expects a base64-encoded string, so encode the blob before passing it
String base64Body = EncodingUtil.base64Encode(bodyBlob);
String result = FileUploadController.attachBlob(inspections.get(0).Id, '', 'test.js', 'javascript', base64Body);
List<Attachment> a = [select Id, ContentType, Body, ParentId from Attachment where Id = :result];
System.assertNotEquals(a.get(0), null);
String result2 = FileUploadController.attachBlob(inspections.get(0).Id, a.get(0).Id, '', '', base64Body);
System.assertNotEquals(result2, null);
}
}
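The append trick in attachBlob (decoding the concatenation of the existing body's base64 and the new chunk) only works because the JavaScript CHUNK_SIZE is divisible by 3: base64 encodes 3 bytes as 4 characters and emits `=` padding only when the input length is not a multiple of 3, so every chunk except possibly the last encodes padding-free and the pieces concatenate cleanly. A minimal Node.js demonstration with sample data (not from the original code):

```javascript
// base64 of 3-divisible chunks concatenates into the base64 of the whole:
// only the final chunk may carry '=' padding, and it sits at the end.
const data = Buffer.from('abcdefghij'); // 10 sample bytes
const CHUNK = 6; // divisible by 3, like the real 180000
let joined = '';
for (let i = 0; i < data.length; i += CHUNK) {
  joined += data.slice(i, i + CHUNK).toString('base64');
}
console.log(joined === data.toString('base64')); // true
```

If CHUNK were not a multiple of 3, every intermediate chunk would end in padding characters and the concatenated string would no longer decode to the original bytes.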
Component (to be added to the page)
<apex:component controller="FileUploadController">
<apex:attribute name="parentId" description="The ID of the record uploaded documents will be attached to." type="String" required="true"/>
<link rel="stylesheet" type="text/css" href="{!$Resource.FileUploadCSS}"/>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript" src="{!$Resource.FileUploadJS}"></script>
<script type="text/javascript">
var parentId = '{!parentId}'; //Will be used by FileUploadJS.js but must be declared here. Static resources don't support dynamic values.
</script>
<div class="uploadBox">
<table cellpadding="0" cellspacing="0" class="uploadTable">
<tr>
<td><input type="file" multiple="true" id="filesInput" name="file" /></td>
<td class="buttonTD">
<input id="uploadButton" type="button" title="Upload" class="btn" value=" Upload "/>
<input id="clear" type="button" title="Clear" class="btn" value=" Clear "/>
</td>
</tr>
</table>
</div>
</apex:component>
JavaScript
var j$ = jQuery.noConflict();
j$(document).ready(function() {
//Event listener for click of Upload button
j$("#uploadButton").click(function(){
prepareFileUploads();
});
//Event listener to clear upload details/status bars once upload is complete
j$("#clear").on('click',function(){
j$(".upload").remove();
});
});
var byteChunkArray;
var files;
var currentFile;
var $upload;
var CHUNK_SIZE = 180000; //Must be evenly divisible by 3, if not, data corruption will occur
var VIEW_URL = '/servlet/servlet.FileDownload?file=';
//var parentId, you will see this variable used below but it is set in the component as this is a dynamic value passed in by component attribute
//Executes when start Upload button is selected
function prepareFileUploads(){
//Get the file(s) from the input field
files = document.getElementById('filesInput').files;
//Only proceed if there are files selected
if(files.length == 0){
alert('Please select a file!');
return; //end function
}
//Disable inputs and buttons during the upload process
j$(".uploadBox input").attr("disabled", "disabled");
j$(".uploadBox button").attr({
disabled: "disabled",
class: "btnDisabled"
});
//Build out the upload divs for each file selected
var uploadMarkup = '';
for(i = 0; i < files.length; i++){
//Determine file display size
if(files[i].size < 1000000){
var displaySize = Math.floor(files[i].size/1000) + 'K';
}else{
var displaySize = Math.round((files[i].size / 1000000)*10)/10 + 'MB';
}
//For each file being uploaded create a div to represent that file, includes file size, status bar, etc. data-Status tracks status of upload
uploadMarkup += '<div class="upload" data-status="pending" data-index="'+i+'">'; //index used to correspond these upload boxes to records in the files array
uploadMarkup += '<div class="fileName"><span class="name">'+ files[i].name + '</span> - '+ displaySize+ '</div>';
uploadMarkup += '<div class="percentComplete">0%</div>'
uploadMarkup += '<div class="clear"/>';
uploadMarkup += '<div class="statusBar">';
uploadMarkup += '<div class="statusBarPercent"/>';
uploadMarkup += '</div>';
uploadMarkup += '</div>';
}
//Add markup to the upload box
j$('.uploadBox').append(uploadMarkup);
//Once elements have been added to the page representing the uploads, start the actual upload process
checkForUploads();
}
function checkForUploads(){
//Get div of the first matching upload element that is 'pending', if none, all uploads are complete
$upload = j$(".upload:first[data-status='pending']");
if($upload.length != 0){
//Based on index of the div, get correct file from files array
currentFile = files[$upload.attr('data-index')];
/*Build the byteChunkArray array for the current file we are processing. This array is formatted as:
['0-179999','180000-359999',etc] and represents the chunks of bytes that will be uploaded individually.*/
byteChunkArray = new Array();
//First check to see if file size is less than the chunk size, if so first and only chunk is entire size of file
if(currentFile.size <= CHUNK_SIZE){
byteChunkArray[0] = '0-' + (currentFile.size - 1);
}else{
//Determine how many whole byte chunks make up the file,
var numOfFullChunks = Math.floor(currentFile.size / CHUNK_SIZE); //e.g. a 1.2MB file: floor(1200000 / 180000) = 6 full chunks
var remainderBytes = currentFile.size % CHUNK_SIZE; //the remaining 120000 bytes that do not fill a full chunk
var startByte = 0;
var endByte = CHUNK_SIZE - 1;
//Loop through the number of full chunks and build the byteChunkArray array
for(i = 0; i < numOfFullChunks; i++){
byteChunkArray[i] = startByte+'-'+endByte;
//Set new start and stop bytes for next iteration of loop
startByte = endByte + 1;
endByte += CHUNK_SIZE;
}
//Add the last chunk of remaining bytes to the byteChunkArray
startByte = currentFile.size - remainderBytes;
endByte = currentFile.size;
byteChunkArray.push(startByte+'-'+endByte);
}
//Start processing the byteChunkArray for the current file, parameter is '' because this is the first chunk being uploaded and there is no attachment Id
processByteChunkArray('');
}else{
//All uploads completed, enable the input and buttons
j$(".uploadBox input").removeAttr("disabled");
j$(".uploadBox button").removeAttr("disabled").attr("class","btn");
/*Remove the browse input element and replace it, this essentially removes
the selected files and helps prevent duplicate uploads*/
j$("#filesInput").replaceWith('<input type="file" name="file" multiple="true" id="filesInput">');
}
}
//Uploads a chunk of bytes, if attachmentId is passed in it will attach the bytes to an existing attachment record
function processByteChunkArray(attachmentId){
//Proceed if there are still values in the byteChunkArray, if none, all piece of the file have been uploaded
if(byteChunkArray.length > 0){
//Determine the byte range that needs to uploaded, if byteChunkArray is like... ['0-179999','180000-359999']
var indexes = byteChunkArray[0].split('-'); //... get the first index range '0-179999' -> ['0','179999']
var startByte = parseInt(indexes[0]); //0
var stopByte = parseInt(indexes[1]); //179999
//Slice the part of the file we want to upload, currentFile variable is set in checkForUploads() method that is called before this method
var blobChunk;
if(currentFile.slice){ //standard File API
blobChunk = currentFile.slice(startByte, stopByte + 1);
}else if(currentFile.webkitSlice){ //older WebKit browsers
blobChunk = currentFile.webkitSlice(startByte, stopByte + 1);
}else if(currentFile.mozSlice){ //older Firefox
blobChunk = currentFile.mozSlice(startByte, stopByte + 1);
}
//Create a new reader object, part of HTML5 File API
var reader = new FileReader();
//Read the blobChunk as a binary string, reader.onloadend function below is automatically called after this line
reader.readAsBinaryString(blobChunk);
//Assign a reader.onloadend handler; it executes once the readAsBinaryString() call above completes
reader.onloadend = function(evt){
if(evt.target.readyState == FileReader.DONE){ //Make sure read was successful, DONE == 2
//Base64 encode the data for transmission to the server with JS remoting; window.btoa is currently only supported by some browsers
var base64value = window.btoa(evt.target.result);
//Use js remoting to send the base64 encoded chunk for uploading
FileUploadController.attachBlob(parentId,attachmentId,currentFile.name,currentFile.type,base64value,function(result,event){
//Proceed if there were no errors with the remoting call
if(event.status == true){
//Update the percent of the status bar and percent, first determine percent complete
var percentComplete = Math.round((stopByte / currentFile.size) * 100);
$upload.find(".percentComplete").text(percentComplete + '%');
$upload.find(".statusBarPercent").css('width',percentComplete + '%');
//Remove the index information from the byteChunkArray array for the piece just uploaded.
byteChunkArray.shift(); //removes 0 index
//Set the attachmentId of the file we are now processing
attachmentId = result;
//Call process byteChunkArray to upload the next piece of the file
processByteChunkArray(attachmentId);
}else{
//If the script is here, something broke on the JavaScript remoting call
//Add classes to reflect error
$upload.attr('data-status','complete');
$upload.addClass('uploadError');
$upload.find(".statusBarPercent").addClass('statusBarPercentError');
$upload.attr('title',event.message);
//Check and continue the next file to upload
checkForUploads();
}
});
}else{
//Error handling for bad read
alert('Could not read file');
}
};
}else{
//This file has completed, all byte chunks have been uploaded, set status on the div to complete
$upload.attr('data-status','complete');
//Change name of file to link of uploaded attachment
$upload.find(".name").html('<a href="' + VIEW_URL + attachmentId + '" target="_blank">'+currentFile.name+'</a>');
//Call checkForUploads to find the next upload div that has data-status="pending" and start its upload process.
checkForUploads();
}
}
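For reference, the byte-range construction in checkForUploads() can be factored into a standalone function that is easy to unit test outside the browser. This sketch ends the final range at fileSize - 1 (inclusive), whereas the original pushes currentFile.size itself; the later slice() call tolerates either:

```javascript
// Standalone version of the byte-range builder used during uploads.
// Returns ranges like ['0-179999', '180000-359999', ...], inclusive on both ends.
function buildByteChunkArray(fileSize, chunkSize) {
  // Small file: one chunk covering the whole thing
  if (fileSize <= chunkSize) {
    return ['0-' + (fileSize - 1)];
  }
  const ranges = [];
  const numOfFullChunks = Math.floor(fileSize / chunkSize);
  let start = 0;
  for (let i = 0; i < numOfFullChunks; i++) {
    ranges.push(start + '-' + (start + chunkSize - 1));
    start += chunkSize;
  }
  // Trailing partial chunk, if any
  if (fileSize % chunkSize > 0) {
    ranges.push(start + '-' + (fileSize - 1));
  }
  return ranges;
}

console.log(buildByteChunkArray(400000, 180000));
// → [ '0-179999', '180000-359999', '360000-399999' ]
```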
CSS File
.buttonTD{
padding-left: 6px;
}
.clear{
clear:both;
}
.fileName{
float: left;
max-width: 235px;
overflow: hidden;
position: absolute;
text-overflow: ellipsis;
white-space: nowrap;
}
.percentComplete{
float: right;
}
.statusBar{
background: none repeat scroll 0 0 #FFFFFF;
border: 1px solid #EAEAEA;
height: 11px;
padding: 0 2px 0 0;
}
.statusBarPercent{
background-color: #1797C0;
float: left;
height: 9px;
margin: 1px;
max-width: 100%;
}
.statusBarPercentError{
background-color: #CE0000;
}
.upload{
background-color: white;
border: 1px solid #CACACA;
border-radius: 3px 3px 3px 3px;
margin-top: 6px;
padding: 4px;
}
.uploadBox{
background-color: #F8F8F8;
border: 1px solid #EAEAEA;
border-radius: 4px 4px 4px 4px;
color: #333333;
font-size: 12px;
padding: 6px;
width: 350px;
}
.uploadError{
border-color: #CE0000;
}
.uploadTable{
margin-left: auto;
margin-right: auto;
}
Starting from my prior gist, I then built a core class that is designed to work with S3 calls. Here it goes:
abstract class Core extends AWS {
Core() {
super();
}
protected S3User getChildNodeUser(Dom.XmlNode node, String ns, String name) {
S3User result = new S3User();
Dom.XmlNode ownerNode = node.getChildElement(name, ns);
if(ownerNode != null) {
result.Id = getChildNodeText(ownerNode, ns, 'ID');
result.DisplayName = getChildNodeText(ownerNode, ns, 'DisplayName');
}
return result;
}
protected virtual override void init() {
AmazonS3__c configSettings = AmazonS3__c.getOrgDefaults();
endpoint = new Url('https://'+configSettings.Endpoint__c);
accessKey = configSettings.AccessKey__c;
region = configSettings.Region__c;
service = 's3';
setHeader('date', requestTime.formatGmt('E, dd MMM yyyy HH:mm:ss z'));
// Prevent leaking the secret key by only exposing the signing key
createSigningKey(configSettings.SecretKey__c);
}
}
From there, I needed a place to hold the results from the call:
public class BucketGetListObjectsResult {
public String name, prefix, marker, delimiter;
public Integer maxKeys;
public Boolean isTruncated;
public File[] files = new File[0];
public String[] commonPrefixes = new String[0];
}
And a class to represent a File:
public class File {
File() {
}
File(String bucketName) {
bucket = bucketName;
}
String bucket;
public String name, eTag;
public S3User owner;
public DateTime lastModified;
public Integer size;
public Blob contents;
}
And a class that represents an S3 user:
public class S3User {
public String ID, DisplayName;
}
Finally, I built a class that lets me specify the various actions and get the results:
public class BucketGetListObjects extends Core {
String bucket;
BucketGetListObjects(String bucketName) {
super();
bucket = bucketName;
}
public BucketGetListObjects delimiter(String delimiter) {
setQueryParam('delimiter', delimiter);
return this;
}
public BucketGetListObjects marker(String marker) {
setQueryParam('marker', marker);
return this;
}
public BucketGetListObjects maxKeys(Integer maxKeys) {
setQueryParam('max-keys', String.valueOf(maxKeys));
return this;
}
public BucketGetListObjects prefix(String prefix) {
setQueryParam('prefix', prefix);
return this;
}
public override void init() {
super.init();
host = bucket+'.'+endpoint.getHost();
resource = '/';
}
public BucketGetListObjectsResult execute() {
method = HttpMethod.XGET;
BucketGetListObjectsResult result = new BucketGetListObjectsResult();
HttpResponse response = sendRequest();
Dom.XmlNode rootNode = response.getBodyDocument().getRootElement();
String ns = rootNode.getNamespace();
result.Name = getChildNodeText(rootNode, ns, 'Name');
result.Prefix = getChildNodeText(rootNode, ns, 'Prefix');
result.Marker = getChildNodeText(rootNode, ns, 'Marker');
result.maxKeys = getChildNodeInteger(rootNode, ns, 'MaxKeys');
result.Delimiter = getChildNodeText(rootNode, ns, 'Delimiter');
result.isTruncated = getChildNodeBoolean(rootNode, ns, 'IsTruncated');
Dom.XmlNode child;
while((child = rootNode.getChildElement('Contents', ns)) != null) {
File file = new File();
file.bucket = bucket;
file.name = getChildNodeText(child, ns, 'Key');
file.lastModified = getChildNodeDateTime(child, ns, 'LastModified');
file.eTag = getChildNodeText(child, ns, 'ETag');
file.size = getChildNodeInteger(child, ns, 'Size');
file.owner = getChildNodeUser(child, ns, 'Owner');
result.files.add(file);
rootNode.removeChild(child);
}
while((child = rootNode.getChildElement('CommonPrefixes', ns)) != null) {
result.commonPrefixes.add(getChildNodeText(child, ns, 'Prefix'));
rootNode.removeChild(child);
}
return result;
}
}
The actual class is used like this (these classes are all in a class called AWSS3):
AWSS3.Service service = new AWSS3.Service();
AWSS3.Bucket bucket = service.getBucketByName(bucketName);
AWSS3.BucketGetListObjects listObjects = bucket.bucketGetListObjects();
AWSS3.BucketGetListObjectsResult results = listObjects.execute();
The remaining parameters let you specify things like a "marker" (necessary for iterating over pages), a prefix (which lets you find related files), and so on. You'll find more details at Bucket GET.
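Marker-based pagination boils down to re-issuing the list call with the last marker seen until IsTruncated comes back false. A Node.js sketch against a hypothetical listPage(marker) function (the page shape here is illustrative, not the AWSS3 class's actual return type):

```javascript
// Collect every key by following markers until the service reports no
// further truncation. listPage stands in for something like
// listObjects.marker(m).execute() in the Apex classes above.
function listAll(listPage) {
  const all = [];
  let marker = '';
  let page;
  do {
    page = listPage(marker);
    all.push(...page.files);
    marker = page.nextMarker;
  } while (page.isTruncated);
  return all;
}

// Fake two-page service for demonstration:
const fakePages = [
  { files: ['a.txt', 'b.txt'], isTruncated: true, nextMarker: 'b.txt' },
  { files: ['c.txt'], isTruncated: false, nextMarker: '' }
];
let call = 0;
const allFiles = listAll(() => fakePages[call++]);
console.log(allFiles); // → [ 'a.txt', 'b.txt', 'c.txt' ]
```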
You are also allowed to daisy-chain the calls in my design:
AWSS3.BucketGetListObjectsResult results =
new AWSS3.Service()
.getBucketByName(bucketName)
.bucketGetListObjects()
.execute();
I can see that you're using v2 of this API, which is what the "list-type=2" parameter indicates. If you stay on v2, you'll need to modify my example code above to support the continuation-token, one of the primary differences between v2 and v1 (which is what my code demonstrates). Most of the remaining code should work verbatim.
However, to get to the final point, your request really should look something like "https://s3.amazonaws.com/?list-type=2&prefix=/ti-use1-da-data-dropzone/ti-int-boss-pos/", which is clearly not the same as what you attempted to do.
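For illustration, the query string for a v2 listing is just sorted, URL-encoded key/value pairs appended to the endpoint. A Node.js sketch (host and prefix are placeholders, and a real request still needs Signature Version 4 authentication headers):

```javascript
// Assemble a ListObjectsV2-style URL from a parameter map.
function buildListObjectsV2Url(host, params) {
  const query = Object.keys(params)
    .sort() // Signature V4 canonicalization requires sorted query keys
    .map(k => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('&');
  return 'https://' + host + '/?' + query;
}

const url = buildListObjectsV2Url('s3.amazonaws.com', {
  'list-type': '2',
  prefix: 'some-folder/'
});
console.log(url); // → https://s3.amazonaws.com/?list-type=2&prefix=some-folder%2F
```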
I realize that I've also left out some code, but this post was already getting pretty long as it is. Bucket and Service are just more uninteresting wrappers that perform various actions: Service can list all of an account's buckets, Bucket can list and delete file contents, and File also handles file uploads. A lot of framework went into this class, so I tried to keep it relevant.
Best Answer
This is demonstrated in the Amazon Toolkit. AWS allows you to create a pre-signed URL that does not expose your secret key. You can leverage this to upload and download files from S3 without exposing your secrets client-side; see the documentation on query-string authentication for the specifics.
You can also choose to upload file parts, which basically goes like this: StartMultipartUpload, UploadParts, FinishMultipartUpload. I haven't gotten this working in Apex, as it is pretty tricky to get just right, but it's all in the documentation.
Finally, you can use the Range header on GET Object to download parts of the file at a time and recombine them later. This is also tricky without running into heap limits, but possible with some client-side effort.
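Computing the Range headers for such a chunked download is straightforward; a Node.js sketch (the part size is an arbitrary example chosen to stay under synchronous callout limits):

```javascript
// Produce the inclusive HTTP Range header values needed to pull an object
// of totalSize bytes down in partSize pieces via repeated GET Object calls.
function rangeHeaders(totalSize, partSize) {
  const headers = [];
  for (let start = 0; start < totalSize; start += partSize) {
    const end = Math.min(start + partSize - 1, totalSize - 1);
    headers.push('bytes=' + start + '-' + end);
  }
  return headers;
}

console.log(rangeHeaders(10000000, 4000000));
// → [ 'bytes=0-3999999', 'bytes=4000000-7999999', 'bytes=8000000-9999999' ]
```

Each request then returns an HTTP 206 Partial Content response whose bodies concatenate, in order, back into the original file.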