Author: Rick Root
So, we store a lot of data on S3 that our users upload. For example, 155
of our sites have over 1000 user profile images.
Currently, we keep a local copy of all these images, even though they are
primarily stored and served from S3. The main reason we do this is so that
we can allow the admins to easily click a button to "Download Profile
Images" for all of their users. We use <cfzip> to package them up and give
them the file.
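For anyone curious, the local version is basically this (paths and names simplified for the example; our real code reads the file list from the database):

<cfset localImageRoot = expandPath("./profileImages")>
<cfset tempDir = getTempDirectory() & "profiles_" & createUUID()>
<cfset zipPath = getTempDirectory() & "profiles_" & createUUID() & ".zip">

<!--- copy everything into a temp folder, zip the folder, clean up --->
<cfdirectory action="create" directory="#tempDir#">
<cfdirectory action="list" directory="#localImageRoot#" name="qFiles" type="file">
<cfloop query="qFiles">
    <cffile action="copy"
            source="#localImageRoot#/#qFiles.name#"
            destination="#tempDir#/#qFiles.name#">
</cfloop>
<cfzip action="zip" source="#tempDir#" file="#zipPath#">
<cfdirectory action="delete" directory="#tempDir#" recurse="true">

<!--- hand the zip to the browser and delete it afterwards --->
<cfheader name="Content-Disposition" value="attachment; filename=profileImages.zip">
<cfcontent type="application/zip" file="#zipPath#" deleteFile="true">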
We are considering NOT keeping local copies of these images... but we still
want to provide this same functionality. I modified the script and tested
it with my own site, downloading the images from S3 via cfhttp. I suppose I
could also use <cffile> with the s3:// path (not sure whether that would be
faster, but ultimately cffile is still making an HTTP call, so probably
not). This is way slower, of course: under 1 second for my class to zip up
the local files (copy them to a temp folder, zip the folder, return the
zip, delete the temp folder), versus 30 seconds to do the same thing
getting the source files from S3.
And I only have 59 profile photos uploaded to my site by my users.
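For reference, the S3 version just swaps the local copy step for an HTTP GET per file, something like this (qImages and bucketName are placeholders, and I'm assuming public-read objects; private objects would need a signed URL, which I'm glossing over):

<cfloop query="qImages">
    <!--- one full HTTP round trip per image, which is where the 30 seconds goes --->
    <cfhttp url="http://#bucketName#.s3.amazonaws.com/#qImages.fileName#"
            method="get"
            path="#tempDir#"
            file="#qImages.fileName#">
</cfloop>
<cfzip action="zip" source="#tempDir#" file="#zipPath#">

(The <cffile> variant would just be <cffile action="copy" source="s3://#bucketName#/#qImages.fileName#" destination="#tempDir#/#qImages.fileName#"> - same idea, same round trips.)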
Obviously, I can make this a background process of some kind for users with
large numbers of profile photos... but I'm interested in opinions here on
whether there's a better way to "bulk retrieve" files from S3 storage.
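(If I did go the background route, it'd probably just be a cfthread that builds the zip and notifies the admin when it's done - names made up here:

<cfthread name="buildZip_#siteID#" action="run" siteID="#siteID#">
    <!--- do the slow S3 fetch + cfzip work here, same as above --->
    <!--- then flag the zip as ready, e.g. update the DB and
          email the admin a download link --->
</cfthread>

The request returns immediately and the admin gets the file when it's ready instead of staring at a spinner.)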
One thing I was thinking about doing is setting up an EC2 server with CF
installed (or Railo or whatever)... and having IT do the work of retrieving
the images from the cloud and then returning the zip. Seems likely that it
would be able to get the data much faster, since EC2 talks to S3 over
Amazon's internal network. But that also seems like more work than I want
to do :)
*The beatings will continue until morale improves.*