S3 buckets, nice technology.
I have a few Redmine instances that require quite a lot of storage, as users upload massive files into them. At first I wrote a small plugin to transfer files uploaded to issues directly into S3, so that the Redmine instance stores only the structured data while S3 holds the big files. This worked fine for some time, but it has its issues. The biggest one is the lack of previews, as the files are not actually within the same file system as the Redmine instance. Files are just links to the S3 bucket.
This got me thinking: I needed a better solution. Either I create a dynamic disk file on the network storage for the VM running Redmine, or I get the S3 bucket mounted directly into the file system. S3FS to the rescue!
Just as a disclaimer, I use Wasabi S3, not Amazon AWS. That said, all of this works with any S3-compatible service; just change the URL when mounting the bucket.
How do I get S3FS mounted on a Linux (Debian) system?
We need to start with some updates and the installation of dependencies. Open up your terminal and run these commands line by line:
sudo apt-get update
sudo apt-get install fuse
sudo apt-get install build-essential git automake autotools-dev pkg-config libfuse-dev libcurl4-openssl-dev libxml2-dev libssl-dev mime-support
Next, we need to get the latest copy of s3fs and compile it from source, again in the terminal:
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install
This is actually it! It's that simple to get s3fs onto your system. Now we need to create a credentials file, and then mount the bucket that we want to have available on our OS.
First, we need our S3 access key ID and the S3 secret access key. Type into the command line (replace S3_ACCESS_KEY_ID with your access key ID, and S3_SECRET_ACCESS_KEY with your secret access key):
echo S3_ACCESS_KEY_ID:S3_SECRET_ACCESS_KEY > ~/.pwd-s3fs
The .pwd-s3fs file can be given whatever name and location you want; I chose the home directory of my user.
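One thing to watch out for: s3fs checks the permissions of the credentials file and will refuse to use one that other users can read, so lock it down to your own user:

```shell
# s3fs rejects a credentials file that is readable by group/others,
# so restrict it to the owner only.
chmod 600 ~/.pwd-s3fs
```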
Here we go, let's mount our S3 bucket:
s3fs test-bucket /s3mnt -o passwd_file=~/.pwd-s3fs -o url=https://s3.wasabisys.com
/s3mnt is the location where the bucket will be mounted; create your folder in the right place and paste its path here.
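If the mount point doesn't exist yet, create it first. The path /s3mnt is just the example used above; any empty directory will do:

```shell
# create the mount point for the bucket (example path from above)
sudo mkdir -p /s3mnt
```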
It's a good idea to use the URL of your bucket's region, as Wasabi and Amazon both have multiple geographic locations; in the example I used the generic Wasabi endpoint.
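If you want the bucket to come back automatically after a reboot, an /etc/fstab entry along these lines should work. This is a sketch: the bucket name, mount point, and URL are the example values from above, YOUR_USER is a placeholder, and passwd_file must be an absolute path, since ~ is not expanded in fstab:

```
test-bucket /s3mnt fuse.s3fs _netdev,allow_other,url=https://s3.wasabisys.com,passwd_file=/home/YOUR_USER/.pwd-s3fs 0 0
```

Note that the allow_other option (which lets other users, such as the one running Redmine, see the files) may require uncommenting user_allow_other in /etc/fuse.conf.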
Now I have all my Redmine instances with the /files folder backed by an S3 Wasabi bucket! I have practically unlimited storage and can preview any file inside the Redmine instance!
Hope this was useful for you! Let me know in the comments.