Unlimited Plex Storage via Google Drive And Rclone

Important Edit – 2022/05/22

The fun may be over. Google has started forcing older users like myself into Workspace and has not been clear about what is going to happen with storage. Whilst their plans still claim ‘as much as you need’, various reports suggest this will no longer be unlimited.

My account is now showing as being 3000% over utilised and support are telling me different things. Will we be able to ‘request more storage’ as their support pages state? Or is that a veiled way of putting in hard limits? We don’t know.

I will update this post once there is more solid information. If this service is no longer feasible, I do have a good backup which is slightly more expensive depending on how you go about it, but it’s a solid solution. Keep an eye out for that here.

Should you carry on with this now? Maybe. At the time of writing, you won’t lose too much if the limits come into effect and we are in fact hard limited.

Either way, it’s been a good run boys.

End of edit.

What’s this now?

This post is going to walk through how to create a Plex server with a Google Drive backend for ‘unlimited’ storage. There are a few caveats explained below, recurring costs and a variety of ways to achieve this, but this will be the Muffin way.

I go into a lot of detail on some of these steps and the applications used; this is not a ‘wham bam thank you ma’am’ blog post where you copy and paste some stuff and have it running (although that may well work). If you don’t mind rambling and want to understand what is happening and why, this is the post for you.

Some things to note.

This is conditional on Google continuing this service.
I can already hear the screeching from so many people right now, “but gOOGLe wIll jUSt end UnLiMiTED stOrAGe And tHeN yOU’LL SeEE”. And, yes, they might do, and probably will at some point, but that’s a problem for another time. As of the writing of this post, Google currently does allow unlimited storage for a small price per month which can be used for what we’re going to be doing.
If you feel like this is too risky then this isn’t for you. I’m more than happy to run 100s of TBs locally if I need to, but right now I don’t, so I won’t.
Always be aware that this service can be pulled from you at the drop of a hat. This doesn’t necessarily mean you shouldn’t do it, or flex this fact to people, just be aware of the situation.

You need a monthly Google apps/workspace subscription.
I’ll go through this below, but this storage will cost you around $20 per month. If you feel like this is worth it then continue reading; if not, dropping insane amounts on drives and managing storage yourself is still the best option. Remember, unlimited storage space is useful for much more than streaming Linux ISOs. Let’s not forget that you also get your own mailbox and all that jazz too. I leverage my Workspace storage heavily for Google Photos and cold backups, for example.

You need a good internet connection.
Files will be pushed to and pulled from the internet regularly; the better your connection, the better your experience is likely to be. I would say 100Mb/100Mb is the lowest I would do this on. Now, if you don’t have as good an internet connection as that (I don’t!) there are other options.

You need a decent server.
This is true of most installations in any case, but if you’re going to be transcoding stuff then you need to be able to actually do that reasonably, and the services we will be running alongside need to be able to run comfortably too.
Your server will still need a reasonable amount of local storage, for cache purposes.

None of your files are accessible via Google Drive without rclone.
The way we are doing this setup is to fully encrypt both the files and file names to keep our data and account safe. This means that you will not be able to access your media files directly from Google Drive on the web, for example, but this is how we want it.

Why are you doing this Muffin?

Good question. I have about 250TB of storage in my old lab, but I downsized a lot. I needed something that required less work, wasn’t in my backyard, didn’t cost me an arm and a leg in electricity and let me dump in data like there was no tomorrow.

I told myself if this worked well, there was really no need for the headache of managing such large arrays at home, and here we are.

Methodology.

I will be doing this differently to how I have it set up at home, simply for ease of explanation and following along. My setup involves more moving parts than is probably required, with things on different VMs with shares between them and all that jazz, but this can all be done on one host/VM, and we will be doing it on one.

You are, of course, free to do this however you feel if all you need is an example of the setup.

Below is how I have things setup at a high level:

And here is how we will be setting this up here:

Firstly, please excuse my awful diagram but it kind of gets the point across. Don’t worry if you don’t know what some of these things are just yet, we will go over them.

What I was trying to illustrate here is simply that this tutorial will be deploying everything on one machine/VM.

The server.

You’ll need some kind of server for this, whether you run this all on bare metal or in a VM; colo/VPS/dedicated/home server is up to you. Here is a reiteration of some of the points previously made regarding the server required.

  • Decent server. So, this is completely up to you how you go about this. You’ll need to size up what kind of server spec you need for your workload, if you’re planning on transcoding 4K for instance, you need to take this into account. You’re not going to get a good experience on cheap VPS machines unless you’re certain you can direct stream everything. Below you will see what I use for my setup.
  • Good internet connection. You’ll need to upload to Google drive at a reasonable speed as well as pull-down content that you’ll be streaming. If your server is not in the same location that you’re watching, you’ll need to pull down content and then upload to your client. I would say 100Mb/100Mb is the bare minimum to consider.
  • Local storage. Before being uploaded to Google drive your content needs to sit locally for a little bit, depending on how much you’re uploading this will need to be reasonably large. Google Drive has a 750GB a day upload limit, so if you ended up downloading terabytes of data at once, only 750GB of that can be uploaded per day, so the rest of it will need to be stored locally until your quota frees up.
    Not only this, but the way we’re going to do this means anything streamed will be downloaded locally and kept there for a certain amount of time as a sort of cache, so if you end up watching a 50GB BD-RIP, you’ll need 50GB locally to watch it.
    I would say a few TB will have you sitting comfortably, you can manage with much less but you’ll have to ensure your cache time is short, and you don’t download too much at once. Configuring the intricacies of this is on you, and I will point out in the configs where you can make such changes to suit your setup.

If you’re lucky enough to have a 1Gb/1Gb connection at your house and already have something you can use or don’t mind building out a server for this, then that will be the best bet. If you’re like me and have to deal with abysmal internet connections and expensive power, then you’re better off trying to co-locate a server as I have, or sucking up the cost for a dedicated server.

I myself do everything in VMs as you saw above, with a co-located server with a decent uplink. This server has an Nvidia P2000 passed through to the Plex VM so that I can leverage hardware transcodes and I am not bound by CPU horsepower; this has worked really well.

This R720 sits in a datacenter and does all the work; the P2000 really comes into its own as I’m able to do multiple very strenuous transcoding tasks with ease.

For the purposes of this tutorial, I will be using a dedicated server I rent from Hetzner to do a variety of tasks. I can see this being a viable route if you want to rent a dedicated server, the prices are pretty good if you can nab something decent, and the speeds to Google are about 80MB/s in my testing, with unlimited bandwidth. Here is what I rent for £23 per month:

I can see that this has gone up at the time of writing, but this seems to be a trend globally; I’m sure prices will settle down once the world does too.

This is connected back to my home network and I can access it like any other server on my network.

Getting a workspace account.

Here I will walk you through acquiring a Workspace account with unlimited storage. If you look at the tiers at first glance it would appear as if you cannot get this option without contacting Google, but at the time of writing there is an easy workaround for this.

The first thing you’re going to need is a domain, which won’t be covered here. Once you have a domain it’s time to purchase a Google Workspace account.

Head over to this link and follow the steps to get your workspace account setup.

For this tutorial, I will be using a spare domain I have. Once you get to the following screen, simply continue.

You will need to enter payment information after this; note that Workspace has a 14-day free trial, so cancelling at any time within this period will not incur any charges.

Follow the steps to verify your domain and add the correct records into DNS. This will be different with every registrar/DNS provider.

Once this is done, you should be thrown into the Google Workspace admin console over at admin.google.com. Click on ‘Billing.’

Inside billing you should see your current license, “Google Workspace Business Standard”, and above this a link to “Add or upgrade a subscription”, click on this.

Now you will have the ability to upgrade to Google Workspace Enterprise Plus. Go ahead and do this.

Select your payment plan and go through to checkout, you will see now that our new plan has unlimited storage for the new monthly cost.

Note: You do not need Enterprise Plus; regular Enterprise will give you unlimited storage. I have no idea why I chose Plus for this tutorial, but Enterprise is cheaper and Plus will have no benefit for 99.9% of people.

Here is the costing at the time of writing:

The OS, and a few notes.

For the purposes of this tutorial I will be using Ubuntu Server as my OS of choice. I love Debian, and I love Ubuntu Server; if you would rather use another distro then by all means do so, but this tutorial won’t necessarily translate 1:1.

I don’t usually use a root account for my work on Linux servers; however, the Hetzner install script defaults to a root account and I’ve just not bothered to set up a less privileged account with sudo access, so if you’re following along please take this into account. I will still prepend sudo to my commands for easy pasteability.

This also means that my permissions will ‘just work’ for things such as Docker, you will need to ensure that you are setting the correct permissions for your setup.

Another thing to note is that I will be using vim to edit configurations because vim is awesome. If you do not wish to use vim and prefer, say, nano instead, please replace all instances of vi with nano.

Once we’re in our server we will make sure the OS is up to date and ready for our applications, so run the following:

sudo apt update; sudo apt upgrade -y; sudo apt autoremove -y

I have created some template files that we will be using for this tutorial, so we will grab these now. This will create a folder rclone-gmedia in your home directory.

cd ~
git clone https://github.com/MonsterMuffin/rclone-gmedia/

Installing and Configuring Rclone.

Now we will install rclone, this is done with one simple command that downloads and runs an installation script.

curl https://rclone.org/install.sh | sudo bash

Rclone is a tool we will be using to interface with Google Drive via its API. It supports many other backend storage services and has a lot of cool built-in tools. You can find out about rclone here, as well as read its very well-written documentation. If you do not wish to use the install script for security reasons, you can follow the manual install instructions.

After this you should be able to run the following and get back the version installed.

rclone --version

rclone v1.53.4
- os/arch: linux/amd64
- go version: go1.15.6

Before going any further, follow the instructions here to obtain a client-id. You need to do this to use rclone properly.

After this head over to Google Drive in your web browser and create a folder where all rclone data will live and enter the folder.

Note the highlighted section of the URL once you are inside the folder; this is your folder ID. You will need this later.
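
If it isn’t obvious from the screenshot, the folder ID is simply the last part of the address bar once you’re inside the folder. Drive folder URLs take this shape, with everything after /folders/ being the ID (the placeholder below is not a real ID):

https://drive.google.com/drive/folders/<your-folder-id>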

Now we will setup the rclone config required for connecting rclone to our Google drive. Run the following command to bring up the rclone config.

Note: I have been informed that some of the options have changed their ordering now, so please disregard the numbers I selected and instead ensure you select the correct option corresponding to the action description.

rclone config
n - Creates a new remote
gdrive-media - This is what we will call this remote, you can call this whatever you like.
13 - Selects Google Drive
client_id - Enter the client ID you obtained previously.
client_secret - Enter the client secret you obtained previously.
1 - Enables access to the entire drive.
root_folder_id - Enter your folder id from the created folder before.
service_account_file - Leave blank and continue.
y - Edit advanced config.

Once you get here, keep pressing enter to accept the defaults until you get to the following:

2022/04/26 – Please note, the authentication section below has now changed. Do the following:

Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No

Type n and the setup will be waiting for a code.

As part of the new authentication setup, you are required to get this code on a machine that is not headless. This is explained in this link that is given to you at this stage.

The best way to get your auth code is by running rclone on the machine you’re using to do all of this. You can get rclone from Chocolatey on Windows or Homebrew on macOS using the following:

Windows Chocolatey Rclone Install:
choco install rclone
MacOS brew Rclone Install:
brew install rclone

If you do not know what Chocolatey or Brew is, I suggest looking into whichever is applicable to the OS you’re running. They are extremely useful, easy to install tools that you should be using!

If you’re doing this on Linux I will presume you know how to get rclone locally on your distro.

You should now be able to run rclone authorize "drive" "<your very long generated ID>" on your local machine and get an authorisation code to finish off the rclone setup with.

This is the end of this edit. This tutorial will now continue.

Once you do this and allow access, you will be given a code, copy that code and paste it into the terminal when rclone config is asking for your verification code.

n - Do not configure as team drive.
y - Accept the config.

You will now be thrown back to the start of the rclone config page with your config sitting there; we can now create the encrypted config.

What we will do now is create another remote that uses our previous config and adds an encryption layer on top.

n - Creates a new remote
gdrive-media-crypt - This is what we will call this remote, you can call this whatever you like.
10 - Encrypt a remote
remote - Enter the name of the previous remote, in my case gdrive-media, followed by a colon and a path. For me this is gdrive-media:/media
1 - Encrypt file names.
1 - Encrypt directory names.
g - Generate password.
1024 - Generate 1024 bit password.
y - Accept password.
g - Generate password.
1024 - Generate 1024 bit password.
y - Accept password.
n - Do not edit advanced config. 
y - Accept config.

You should now have the 2 configurations as so:

Name                 Type
====                 ====
gdrive-media         drive
gdrive-media-crypt   crypt

Note: You will need to keep the 2 generated passwords safe. Like, really safe. Without them, your data is as good as gone.
If you ever need to set up rclone again for this mount, you will need these keys; you have been warned.
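
For reference, your rclone config (~/.config/rclone/rclone.conf, or /root/.config/rclone/rclone.conf if you’re root like me) should now contain two sections roughly like the sketch below. The values are placeholders and the passwords are stored in rclone’s obscured form, so don’t expect to recognise them:

[gdrive-media]
type = drive
client_id = <your-client-id>
client_secret = <your-client-secret>
scope = drive
root_folder_id = <your-folder-id>
token = {"access_token":"...","refresh_token":"...","expiry":"..."}

[gdrive-media-crypt]
type = crypt
remote = gdrive-media:/media
filename_encryption = standard
directory_name_encryption = true
password = <obscured>
password2 = <obscured>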

Now we can test what we just did by running the following commands:

touch test
rclone copy test gdrive-media-crypt:

Now if you head over to the rclone folder inside Google Drive in your web browser, you will see a new folder called ‘media’ and our encrypted file with the obfuscated file name.

But if we run the following command inside our terminal we can see the file unobfuscated.

rclone ls gdrive-media-crypt:
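
If everything is wired up correctly you’ll see the file listed under its real name; touch created an empty file, so the size column reads 0:

        0 test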

Rclone is now set up to read and write from the cloud at our whim, but currently we can only send and receive files using the rclone command.
Our next step will be to create a systemd service file that tells rclone to use its rclone mount command to actually mount our storage onto the filesystem, so we can read and write to it like a normal directory.

We do this as a systemd service file so we can have systemd start the mount at boot, restart it on failure and allow us to stop/start it whenever we please.

I have already done this for you, but I have left variables to allow for your own paths. We will use sed to change these variables in the files.

First however, we will need to decide a few things:

  • Where do you want your log files to live?
  • Where are your mountpoints going to be for rclone?
  • Where do you want your media to be downloaded to temporarily when something is watched?

I have chosen /opt/rclone/logs/ for my log directory, /mnt/gmedia-cloud for rclone to mount GD and /opt/rclone/cache/gmedia for my cache directory.
Once you have done that, we need to create the directories.
We will also install fuse here so we are able to use rclone mount. This may be installed already on your system but was not on mine.

sudo mkdir /mnt/gmedia-cloud
sudo mkdir /opt/rclone/logs -p
sudo mkdir /opt/rclone/cache/gmedia -p
sudo apt install fuse -y

Now, cd into the directory we pulled earlier, and then into the services folder. We will use sed to change the values in the service files to what you have decided on; please ensure you replace the variables specified in square brackets with your chosen options, without the brackets. I have included my variables if you are using the same paths as me.

Here are the variables you are changing and what they should be changed to:

  • <youruser> – The username that holds your rclone config, this is most likely your user and not root like me.
  • <rclone-log-dir> – The directory you will be using for logs, do not add a trailing / as I have included this in my config.
  • <useragent> – Specify a unique user agent for rclone to use to connect to GD.
  • <cache-dir> – Where will locally downloaded files be temporarily stored?
  • <cache-size> – How big should your cache be? If this is filled up then older files will be removed before the cache time is up (see below).
  • <cache-time> – How long will files you watch stay on the server locally before being deleted? If you have the local space available this can be set somewhat high as it allows files that will be watched multiple times in a short timeframe, new TV shows for example, to not have to be pulled from the cloud again.
  • <cloud-mountpoint> – Where will rclone mount GD?

With your own variables:

cd ~/rclone-gmedia/services/
sed -i 's,<youruser>,[YOUR USERNAME HERE],g' *
sed -i 's,<rclone-log-dir>,[YOUR LOG DIR HERE !!NO TRAILING /!!],g' *
sed -i 's,<useragent>,[YOUR USERAGENT HERE],g' *
sed -i 's,<cache-dir>,[YOUR CACHE DIR HERE],g' *
sed -i 's,<cache-size>,[YOUR CACHE SIZE HERE],g' *
sed -i 's,<cache-time>,[YOUR CACHE TIME HERE],g' *
sed -i 's,<cloud-mountpoint>,[YOUR RCLONE MOUNTPOINT],g' *

With my variables:

cd ~/rclone-gmedia/services/
sed -i 's,<youruser>,root,g' *
sed -i 's,<rclone-log-dir>,/opt/rclone/logs,g' *
sed -i 's,<useragent>,muffinsdrive,g' *
sed -i 's,<cache-dir>,/opt/rclone/cache/gmedia,g' *
sed -i 's,<cache-size>,150G,g' *
sed -i 's,<cache-time>,12h,g' *
sed -i 's,<cloud-mountpoint>,/mnt/gmedia-cloud,g' *

If you used another name for your rclone crypt configuration, you will need to replace that too in the same way (keeping the trailing colon); if not, ignore this.

sed -i 's,gdrive-media-crypt:,[YOUR CRYPT CONFIG NAME HERE],g' gmedia.service

You will now have a completed service file that will look something like the below (or exactly like mine). These commands will have also changed the relevant variables in the other service file too, which we will come to.

[Unit]
Description=Gmedia RClone Mount Service
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount \
  --config=/root/.config/rclone/rclone.conf \
  --log-level=INFO \
  --log-file=/opt/rclone/logs/rclone-mount.log \
  --user-agent=muffinsdrive \
  --umask=002 \
  --gid=1002 \
  --uid=1000 \
  --allow-other \
  --timeout=1h \
  --poll-interval=15s \
  --dir-cache-time=1000h \
  --cache-dir=/opt/rclone/cache/gmedia \
  --vfs-cache-mode=full \
  --vfs-cache-max-size=150G \
  --vfs-cache-max-age=12h \
  gdrive-media-crypt: /mnt/gmedia-cloud
ExecStop=/bin/fusermount -uz /mnt/gmedia-cloud
Restart=on-abort
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

Notice here how my --config= path is not under /home/ but instead /root/, since that is root’s home directory. If you’re running as root you will need to make this change in the service file yourself, as the sed above will have produced /home/root.

You may also need to change the uid and gid settings to match your system/needs.
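
If you’re not sure what numbers to use for --uid and --gid, the id command will print them for whichever user you want to own the mount (substitute your own username):

id <youruser>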

Let’s go over some of these flags. You can see all the rclone mount flags in their documentation here, but we will go over a few I feel are noteworthy.

Here is where you can change some of the settings to better suit yourself.

  • --poll-interval – The interval at which rclone polls the remote for changes. I set this pretty low as I like to keep things up to date. This is not an API-heavy task and it is safe to have this low.
  • --dir-cache-time – How long rclone caches the directory listing. This is set very high to keep the listing in cache, as it stays pretty static and any changes are picked up by the polling above anyway.
  • --vfs-cache-mode – Sets how the VFS cache functions. In the setup we are deploying, each file is downloaded in its entirety and served from local storage. I had very bad experiences watching media on anything but full; reading directly from the cloud led to many stutters and buffering, so change this at your own risk, but full gives the best experience.
  • --vfs-cache-max-size and --vfs-cache-max-age – As explained above, this is how large your cache is for files being downloaded for viewing, and how long they will stay cached. If you are playing with minimal storage, set this as big as your expected file size to stream. If you expect to be downloading 50GB MKVs, then ensure this is at least 51GB. Similarly, you can set the max age very low to remove anything as soon as it’s been played if you need the storage. This will not remove files that are currently being read.

If you feel like you need to tweak the settings, especially the VFS settings, then by all means play with this yourself. You will see the changes once everything is set up and you can start testing with your media.
In my experience --vfs-cache-mode=full was the best and only way to go, but if you want to play around with the other modes then please read about them here.

We can copy the service file, enable it and start it. Enabling it will ensure that it starts at bootup, and starting it will start the mount.

cd ~/rclone-gmedia/services/
sudo cp gmedia.service /etc/systemd/system
sudo systemctl enable gmedia.service
sudo systemctl start gmedia.service

Now if we check the service, we should get an output that looks something like the following.

sudo systemctl status gmedia.service

If the service doesn’t start, consult your log file in the directory we told rclone to use to figure out what the issue is.
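
If you want to dig into a failed start, something like the following will show you systemd’s view of the unit as well as the tail of rclone’s own log (adjust the log path if you chose a different directory):

sudo journalctl -u gmedia.service --no-pager -n 50
tail -n 50 /opt/rclone/logs/rclone-mount.log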

It’s quite easy to think that we’re almost done here, and that we can now start reading and writing our files to GD via rclone, but this is not the case.

Installing and Configuring MergerFS.

In this setup we are using MergerFS. You can read about it here. MergerFS can do some really complex and cool things, but we will be using it very simply here.

What we are doing is creating a local directory on our server and telling MergerFS to ‘merge’ the cloud mount and the local one. We do this so that we can point all of our apps at this ‘merged’ share, which allows us to have a mix of files both in the cloud and locally, but in the same location as far as applications are concerned.

What this enables us to do is have all new files placed locally and moved to GD at a timeframe that we set. The applications reading these files will always be pointed to the one ‘merged’ directory however.

We do this for several reasons:

  • Plex will analyse files when it finds them; this is much faster if the files are local, whilst also avoiding unnecessary IO on GD eating into our API quota.
  • We never want to directly write to GD. If we were to push files directly to GD via our mountpoint, we would incur very heavy performance penalties.
  • We want to be able to point Plex to our local directory and cloud at the same time without having to actually point it at two directories, to Plex this will be the same.

To install mergerfs, do the following:

sudo apt install mergerfs

Now, running mergerfs with the version flag, we should get something akin to the following:

mergerfs --version

If the output of mergerfs --version gives you a version below 2.28 then this will not work. At the time of writing the version shipping with Debian is 2.24. If your version is below 2.28, then follow the instructions here to install the latest version of mergerfs without using the outdated apt source.
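
As a rough sketch of what the manual install looks like, you can grab a release .deb from the mergerfs GitHub releases page (github.com/trapexit/mergerfs) and install it with dpkg. The filename below is only an example, so check the releases page for the current version and the package matching your distro:

# example only - substitute the current release and your distro's package
wget https://github.com/trapexit/mergerfs/releases/download/2.33.5/mergerfs_2.33.5.ubuntu-focal_amd64.deb
sudo dpkg -i mergerfs_2.33.5.ubuntu-focal_amd64.deb
mergerfs --version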

Configuring mergerfs how we would like is very easy and I have done the work for you; we will need to substitute some variables in the service file like we did last time. There are two variables left to configure, and again it is up to you if you wish to change the paths.

  • <local-mountpoint> – Where do you want your data to live before being shipped off to GD? This can be anywhere. I will be using /mnt/gmedia-local.
  • <mergerfs-mountpoint> – Where will the mergerfs mountpoint live? This will be the combined cloud and local mount points. I will be using /mnt/gmedia.

Create the directories, substituting your own if you’re using different ones.

sudo mkdir /mnt/gmedia
sudo mkdir /mnt/gmedia-local

With your own variables:

cd ~/rclone-gmedia/services/
sed -i 's,<local-mountpoint>,[YOUR LOCAL MOUNT],g' *
sed -i 's,<mergerfs-mountpoint>,[YOUR MERGERFS MOUNT],g' *

With my variables:

cd ~/rclone-gmedia/services/
sed -i 's,<local-mountpoint>,/mnt/gmedia-local,g' *
sed -i 's,<mergerfs-mountpoint>,/mnt/gmedia,g' *

Your gmedia-mergerfs.service file should look something like the below now:

[Unit]
Description=Gmedia MergerFS Mount
Requires=gmedia.service
After=gmedia.service

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud /mnt/gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full
ExecStop=/bin/fusermount -uz /mnt/gmedia
KillMode=none
Restart=on-failure

[Install]
WantedBy=multi-user.target

As before we will copy the service file into systemd and then enable it.

cd ~/rclone-gmedia/services/
sudo cp gmedia-mergerfs.service /etc/systemd/system
sudo systemctl enable gmedia-mergerfs.service
sudo systemctl start gmedia-mergerfs.service

If everything has gone correctly, you should be able to ls that directory and find that test file we uploaded earlier.
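
For example, with my mountpoints it looks like this, with the test file from earlier visible through the merged mount:

ls /mnt/gmedia
test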

Note:

I have been informed that the above settings can result in extremely slow IO in some conditions; these seem to occur when the mergerfs mount is being accessed over the network. If this is the case in your setup, it is advisable to test with both the above config and with the following to see which works better in your use case:

cache.files=off
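
In practice that just means swapping the last option in the ExecStart line of gmedia-mergerfs.service, so the line would read something like:

ExecStart=/usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud /mnt/gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=off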

Configuring rclone move.

Getting your local data into GD from your local mount is easy enough. I’m doing this via a simple crontab job that runs every 6 hours; there are many scripts out there that do fancier stuff, but with the rclone move flags I see no need for them.

I move my data every 6 hours as Plex scans every 2 hours. This allows Plex time to do its thing before it’s uploaded to the cloud. You will see here that I also have a flag set to ensure files must be older than 6 hours, stopping a scenario where a file is added just before the next run and Plex does not pick this up.

cat the crontab file from my git repo:

cd ~/rclone-gmedia/snippets/
cat crontab

And we’ll see my rclone move command with the variables still to be filled in.

0 */6 * * * /usr/bin/rclone move <local-mountpoint> gdrive-media-crypt: --config /home/<youruser>/.config/rclone/rclone.conf --log-file <rclone-log-dir>/upload.log --log-level INFO --delete-empty-src-dirs --fast-list --min-age <min-age>
  • 0 */6 * * * – How often we want this to run, you can use a site like crontab guru to help you if you need help.
  • /usr/bin/rclone move <local-mountpoint> gdrive-media-crypt: – The actual move command. This is telling rclone move to move files from the local mount to our GD rclone config.
  • --config – Directory of the rclone config.
  • --log-file – Directory and file for our upload logs.
  • --log-level – Sets the logging level.
  • --delete-empty-src-dirs – Delete empty folders on the source (local mount).
  • --fast-list – Recursive file listing, supposed to vastly improve IO.
  • --min-age – Minimum age a file must be to be moved.

For more information on these flags and many others, as previous, see the global flags.

Substitutions need to be made as before. Again, use your own or copy mine. Make sure your user is correct, and as before you will need to manually edit the file or the sed command if you’re root to reflect the proper directory.

With your own variables:

cd ~/rclone-gmedia/snippets/
sed -i 's,<local-mountpoint>,[YOUR LOCAL MOUNT],g' *
sed -i 's,<youruser>,[YOUR USERNAME HERE],g' *
sed -i 's,<rclone-log-dir>,[YOUR LOG DIR HERE !!NO TRAILING /!!],g' *
sed -i 's,<min-age>,[YOUR MIN AGE HERE],g' *

My variables:

cd ~/rclone-gmedia/snippets/
sed -i 's,<local-mountpoint>,/mnt/gmedia-local,g' *
sed -i 's,<youruser>,root,g' *
sed -i 's,<rclone-log-dir>,/opt/rclone/logs,g' *
sed -i 's,<min-age>,6h,g' *

Running cat crontab should result in a file that looks something like this:

0 */6 * * * /usr/bin/rclone move /mnt/gmedia-local gdrive-media-crypt: --config /home/root/.config/rclone/rclone.conf --log-file /opt/rclone/logs/upload.log --log-level INFO --delete-empty-src-dirs --fast-list --min-age 6h

Copy this line and enter it into crontab by running:

sudo crontab -e

Paste the line as shown below, then exit and save the file.

Please note there is a mistake in this screenshot with the leading 0 missing!

Cron will now execute this every 6 hours, or however often you set. We can always run this manually too of course, and with a progress output; this is useful if you need to free up space quickly. Of course, you will need to remove the cron schedule portion and possibly prefix the command with sudo.

By adding a --progress to the end of the command and running it in your shell, you will be able to see the progress of your manually triggered move.
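
With my paths, a manual run would look something like this; the cron schedule is dropped, --progress is added, and since I’m running as root the config lives under /root rather than /home:

/usr/bin/rclone move /mnt/gmedia-local gdrive-media-crypt: --config /root/.config/rclone/rclone.conf --log-file /opt/rclone/logs/upload.log --log-level INFO --delete-empty-src-dirs --fast-list --min-age 6h --progress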

Here is an example of a complete run on my production machine. Whilst the move is occurring it will also show which files are currently being moved, and the status of each file.

The Docker bit.

As I said before I will not be going into great detail in this step as you can find out everything you need about setting up the individual apps themselves and running them in Docker on the internet.

If you want, you can use my included docker compose file which will create and setup all the apps with the config we are about to set, from there you can do your own configurations inside the apps themselves.

My compose file leverages a .env file where the variables are stored. Once the .env is configured to your settings, the docker compose file will install the apps to that configuration. 

Here are the apps that this file will install along with their GitHub/product pages:

Here is how I structure my downloads folder. This can, of course, be located on a network location; simply mount the share via SMB/NFS and then set your .env to reflect this, ensuring permissions are correctly set.

/opt/downloads/
├── nzb
│   ├── complete
│   │   ├── movies
│   │   └── tv
│   └── incomplete
└── torrent
    ├── complete
    │   ├── movies
    │   └── tv
    ├── incomplete
    └── watch

Your media directory should include 2 folders like so:

/mnt/gmedia
├── movies
└── tv
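
If you want to create both trees in one go, something along these lines will do it (bash brace expansion; adjust the paths if yours differ):

sudo mkdir -p /opt/downloads/nzb/complete/{movies,tv} /opt/downloads/nzb/incomplete
sudo mkdir -p /opt/downloads/torrent/complete/{movies,tv} /opt/downloads/torrent/incomplete /opt/downloads/torrent/watch
sudo mkdir -p /mnt/gmedia/{movies,tv}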

Once you have docker installed along with docker-compose, edit the .env file inside the docker folder to make sure the variables suit your setup.

#Global
TZ=Europe/London  - Your timezone.
PUID=1000  - The UID the container will run as.
PGID=1000  - The GID the container will run as.

#Directories
APPDATA=/opt/dockerdata/  - Docker data directory.
GMEDIA=/mnt/gmedia/:/gmedia  - gmedia directory.
DOWNLOAD=/opt/downloads/:/download  - Download directory.
WATCH=/opt/downloads/torrent/watch/:/watch  - Torrent watch directory.
GMEDIA_TV=/mnt/gmedia/tv:/tv  - TV Show directory.
GMEDIA_MOVIE=/mnt/gmedia/movies:/movies  - Movie Directory.

Your UID and GID should be those of a user created for running containers, usually docker. You must ensure this user has access to the download and gmedia shares.

Once you are happy with that, you can simply run the following to deploy the apps:

cd ~/rclone-gmedia/docker/
docker-compose -f compose.yml up -d

After this is done, running docker ps will show that all of our containers are now running and accessible from your web browser. You can also use Portainer now by heading over to port 9000 on your host and manage your containers from there.

When setting up your apps, you need to point Sonarr to /tv, Radarr to /movies and Plex libraries can be added from /gmedia.

Your downloads will go to /download inside the downloader container, which has the same mapping in Sonarr and Radarr, so they will move completed files to the mergerfs gmedia mount.

Fin.

That’s it! This has taken me a stupid long time to write up as I wanted to be as detailed as possible, so I sincerely hope this has helped you!

I have been using this setup for a good few months now with amazing results; I finally have storage for an extensive 4K collection and can stream this directly to my clients.

If you have any questions, just leave a comment.

Muffin.

Buy me a beer with Bitcoin: 1KL2vvXFnaSYCYP6pLgnz7PVPmHdbvx3SA

Copy the address into your wallet to send some Bitcoin to fund something I probably don't need.
145 thoughts on “Unlimited Plex Storage via Google Drive And Rclone”

  1. Hi – I set this up in Nov 21 following your guide. (Thanks by the way it was very helpful!)
    I see that there’s an email from Google about OAUTH. Does this affect how my rclone is currently set up, or does this affect new rclone setups?
    This is all well outside of my area of expertise so not sure if everything will break in October..

  2. Hey,

    Still no limitations from google? Want to use it now. Do you guys think they will limit me or can i overextend?

    Thanks!

      • I dont think this is true.

        I started 2 Weeks ago and hit the 5 TB Limit.

        Whenever i try to upload something to Google I get this error:
        googleapi: Error 403: The user’s Drive storage quota has been exceeded., storageQuotaExceeded

        :((

  3. First i want to thank you for this Detailed Guide.
    A lot of work went into this tutorial.

    I have a question regarding the Google Workspace Thing.

    Does this still work for new Users or will i be capped at the 5TB per User Limit?

    Can anyone confirm , that you could exceed this limit without problems?

    Also would love to know whats the Alternative you are talking about in the Edit from May.

    • Yes, one user is unlimited for now. Your dashboard may state 5TB but it will allow you to go over, what this actually means no one really knows yet.

  4. hello thanks for this detailed guide, it was working very well for me but now I get stutters very often, and reinstalled and followed all the steps again, but I still have constant stutters, has this happened to anyone else?

    • Depending on the type of file I try to stream, sometimes I get a couple of stutters early, but usually everything has buffered and is fine from the first 30 seconds on to the end of the movie/show. But those are huge files, like 35-50GB per hour.

  5. I thought I would give a quick update about using rclone union to handle a bit of the Google Drive quota limitations. I ended up using this configuration guide initially to get things set up and they worked really well. However, I started to hit Google Drive’s download and upload limits on a fairly regular basis. I am doing this on a shared hosting server that has plex as one of the dashboard “one-click” installs.

    Since some of the quota limits are per user, such as the 750gb per 24 hours upload limit, I created a number of team drives, set up encrypted rclone remotes on to each one, each with its own service account. Then I unioned them together with rclone union, and set the create policy to rand (random) and the action and search policies to ff (first found). Then using mergerfs, I merged the local, short term storage with the union, and continued to script uploads to the remote union.

    That “load balances” the uploads and downloads across all the remotes and I no longer run into the upload/download limits.

    • Can you go into more detail about this? Basically I was using one Drive account and have to switch to another drive account. The first wasn’t “admined” by me but the second one is. I’m trying to do a server side copy with rclone (by sharing the 65TB folder to my new Drive) and I’m running the following command;

      sudo rclone copy (name of new drive):/rclone/media --drive-shared-with-me (name of new drive): -v -P

      you can add additional flags to this but this seemed to work for all of 10min. until I ran up to the upload limit. I left it running but i don’t believe it picked up after 24hrs…plus 65TB its going to take like 45 days to transfer all of that. Personally Google should recognize that its a server side and there should be no limit, but alas here were are (one more thing that Google is stupid about).

      So, I have a Workplace account and don’t want to add more users to this ($20/mo per user) so how are you doing the Team Drive thing? I’m quickly coming to the resolution that I’m going to have to re-download everything and really want to avoid that, help me change my mind.

      • There’s a couple of things to keep in mind about this process. There are multiple Google Drive quotas and you end up having to deal with whatever the most restrictive quotas are. The simple version of quotas for Drive are:

        750GB uploads per day per account
        10TB downloads per day per drive
        Any individual file can only be accessed a limited (unknown number) times per day
        Only 10 transactions per second per drive

        So, best case scenario for you is that you will still only be able to transfer 10TB per day from the original Drive to the new Drive, and that’s only if you set up service accounts for the new Drive.

        Setting up service accounts is a little bit arcane, so I don’t know how much background you have. Basically you set up a project tied to your Google Workspace domain, and then you use IAM administrator to create as many service accounts for that project as you want. You can then add those service accounts to your team drive and use the JSON authentication file for each service account as the access accounts for your new Drive.

        Service accounts are not interactive login accounts so they don’t cost anything extra to add. I’ve got dozens of them.

        For most rclone commands, like copy, sync, move, etc. there’s an option called --drive-service-account-file that lets you decide which service account JSON authentication file to use, which matches up to a 750GB upload for each service account. By using different JSON files as you hit each 750GB limit you can keep uploading.

        Basically you can create a script to iterate through your JSON files and keep uploading because when rclone fails due to a Drive upload quota, it throws a specific error, and you can handle that error by changing to the next JSON file.

        My script looks like this:

        # Delete the existing log file

        if [ -f ~/finished.txt ]; then
        rm finished.txt
        fi

        if [ -f ~/rclone.log ]; then
        rm ~/rclone.log
        fi

        # Transfer up to the max 750G limit and then switch to the next service account

        for i in {11..20..1}
        do
        ~/bin/sbin/rclone -P -v move your_local_directory your_remote:/ \
        --drive-service-account-file ~/json/srv-acct-$i.json \
        --fast-list \
        --min-age 2d \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs \
        --log-file ~/rclone.log

        if [[ $? -eq 0 ]]
        then
        echo -e "Finished: `date`\n\n""`tail --lines=6 ~/rclone.log`" >> finished.txt
        exit 0
        fi

        done

          • Yes, you can. I did go a bit further and I set up the union so I wouldn’t have to mess with iterating through service accounts. I have 8 team drives set up so each is in the rclone.conf file with its own service account. The union just puts all 8 together as a single remote, and files are randomly assigned to one of the 8 drives, but it is transparent because you just access everything through the union.

            it basically makes it so that all the quotas are increased by a factor of 8. I know technically since it is random there is some weird chance that the same one drive out of eight is chosen 50 times in a row and could end up being hampered by a quota, but I guess I’ll live with those odds.

        • Well I think I bypassed the 750GB quota…by simply switching ‘copy’ for ‘move’…because you’re moving existing data from one Google server to another and not creating ‘new’ data and because its a server side transfer…there was no quota cap. I used this command and it moved 65TB worth of data in less than 2hrs. (swap out “name of new/old drive for your own drive names with no parenthesis)

          sudo rclone move (name of old drive): --delete-empty-src-dirs (name of new drive): --drive-server-side-across-configs=true --fast-list -v -P

          The -P readout doesn’t actually show you the transfer rate and it shows the entire move as “checking.” Further, the data total it shows isn’t correct but it works. The data readout I got would go up as it “checked” files but sometimes it would go down until it finished the total “scan” then the total data started to drop as well as the # of files. I hope that makes sense. All-in-all moving a total of 65TB in less than 2hrs…not bad.

  6. First, let me thank you for this amazing guide, without it I wouldn’t be up and running. However, with that being said, I have run into an issue. I have to move everything I have (65TB) to another GD instance. It’s going to be a pain in the butt and I’m hoping you can help. I’m trying to keep the same encryption so I don’t have to mess with that, however I’m going to have to download from my current GD instance and then upload to the new GD instance as I don’t have enough local storage. I’m hoping you can help with how I can do that.

  7. I have had this running for quite a while. However, my Google token timed out. I renewed it without issue, but now my Plex library is missing. I have the actual files on my gmedia-cloud drive, but no content showing in Plex. How do I get the content catalogued again without copying all of the files to the local server again.

    • The reason everything disappeared was because you have auto trash empty enabled; this is a bad setting that I always disable for this exact reason. You shouldn’t need to download anything locally for Plex to detect it, just make sure the mount is showing the files and rescan in Plex. It may take a while though as Plex may need to rebuild its database.

  8. First – fantastic article. I am completely impressed with the level of detail in this documentation. I was able to fill in the gaps of my knowledge with some Google searches – but that was only a few times. I do have a question though that doesn’t appear to be addressed here. I’ve followed all of the naming, config settings, etc as you have laid out for the folder structures, mount points, etc. My NAS drive has 8 TB of storage on it, but the machine I am using for this setup has much less storage – really just enough for the cache drive and that’s about it. How would you recommend I migrate the media from the old NAS to this solution? I can do NFS mounts, etc and thought I might mount my NAS to /mnt/gmedia/ so that the plex library has time to add all of the movies to its library before the rclone jobs start uploading the files to gdrive, but in general adding a mount point will clear out or make hidden anything that already exists in the folder /mnt/gmedia. Another idea is to copy the plex database files over and then just setup a separate rclone job to copy direct from the NAS to the google drive… I’d love to hear from anyone who has done this.

    If the best answer is just go buy a big enough drive for the migration – that’s an acceptable answer too but thought I’d ask first. 🙂

    • Hey there and thanks for the kind words, as for your question:

      I would mount your NAS via NFS (or SMB, whatever is easiest) and then simply do an rclone copy. So, if you mounted your NAS media directory to /mnt/nas then you could simply run the following /usr/bin/rclone copy /mnt/nas gdrive-media-crypt: --config /home/<youruser>/.config/rclone/rclone.conf --log-file <rclone-log-dir>/upload.log --log-level INFO --fast-list --progress, obviously filling in the required arguments on your end. This will copy everything on your NAS to the cloud. You shouldn’t need to mess with the Plex database as it will find the new files and you can then remove the old path from the library and run a cleanup task in Plex.

      If you have a lot of media and a fast internet connection you can also add the bwlimit flag to make sure you don’t go over the 750GB daily upload limit.

      Hope this makes sense 🙂

      • Great article dude, how I wished I stumbled upon this earlier would have been a godsend 🙂

        I believe many of us are in the same boat as you in terms of “unlimited” usage. Hoping you could share your alternative method soon as our time is ticking out soon.

        BTW, you probably already do know about bypassing the 750GB daily upload limit via service accounts. That has been a life saver too, for creating multiple backups should your “edu” accounts go out the window.

  9. Hello,

    I had this set up on another server but am moving over to a self-hosted setup. When I use the rclone CLI I can list all files in the gdrive-media mount, but /mnt/gmedia, /mnt/gmedia-local and /mnt/gmedia-cloud are all empty. Both log files are clean of errors and warnings. Any ideas?

    • Something is up with your mount, did you check the logs of the services that do the mounts for both rclone mount and mergerfs?

  10. everytime i reboot i get this error !!

    ● gmedia-mergerfs.service – Gmedia MergerFS Mount
    Loaded: loaded (/etc/systemd/system/gmedia-mergerfs.service; enabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Tue 2022-05-24 16:05:44 UTC; 7s ago
    Process: 8556 ExecStart=/usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud /mnt/gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full (code=exited, status=1/FAILURE)

    May 24 16:05:44 ltvps systemd[1]: Failed to start Gmedia MergerFS Mount.
    May 24 16:05:44 ltvps systemd[1]: gmedia-mergerfs.service: Scheduled restart job, restart counter is at 5.
    May 24 16:05:44 ltvps systemd[1]: Stopped Gmedia MergerFS Mount.
    May 24 16:05:44 ltvps systemd[1]: gmedia-mergerfs.service: Start request repeated too quickly.
    May 24 16:05:44 ltvps systemd[1]: gmedia-mergerfs.service: Failed with result ‘exit-code’.
    May 24 16:05:44 ltvps systemd[1]: Failed to start Gmedia MergerFS Mount.

    • — the configured Restart= setting for the unit.
      May 24 16:17:13 ltvps systemd[1]: Stopped Gmedia MergerFS Mount.
      — Subject: A stop job for unit gmedia-mergerfs.service has finished
      — Defined-By: systemd

      — Support: http://www.ubuntu.com/support

      — A stop job for unit gmedia-mergerfs.service has finished.

      — The job identifier is 3675 and the job result is done.
      May 24 16:17:13 ltvps systemd[1]: gmedia-mergerfs.service: Start request repeated too quickly.
      May 24 16:17:13 ltvps systemd[1]: gmedia-mergerfs.service: Failed with result ‘exit-code’.
      — Subject: Unit failed
      — Defined-By: systemd

      — Support: http://www.ubuntu.com/support

      — The unit gmedia-mergerfs.service has entered the ‘failed’ state with result ‘exit-code’.
      May 24 16:17:13 ltvps systemd[1]: Failed to start Gmedia MergerFS Mount.
      — Subject: A start job for unit gmedia-mergerfs.service has failed
      — Defined-By: systemd

      — Support: http://www.ubuntu.com/support

      — A start job for unit gmedia-mergerfs.service has finished with a failure.

      — The job identifier is 3675 and the job result is failed.
      May 24 16:17:15 ltvps sshd[17184]: Invalid user andy from 66.249.155.244 port 35150
      May 24 16:17:15 ltvps sshd[17184]: pam_unix(sshd:auth): check pass; user unknown
      May 24 16:17:15 ltvps sshd[17184]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=66.249.155.244
      May 24 16:17:15 ltvps sshd[17183]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=210.114.1.46 user=root
      May 24 16:17:16 ltvps sshd[17184]: Failed password for invalid user andy from 66.249.155.244 port 35150 ssh2
      May 24 16:17:17 ltvps sshd[17183]: Failed password for root from 210.114.1.46 port 54746 ssh2
      May 24 16:17:18 ltvps sshd[17183]: Received disconnect from 210.114.1.46 port 54746:11: Bye Bye [preauth]
      May 24 16:17:18 ltvps sshd[17183]: Disconnected from authenticating user root 210.114.1.46 port 54746 [preauth]
      May 24 16:17:18 ltvps sshd[17184]: Received disconnect from 66.249.155.244 port 35150:11: Bye Bye [preauth]
      May 24 16:17:18 ltvps sshd[17184]: Disconnected from invalid user andy 66.249.155.244 port 35150 [preauth]
      lines 2391-2424/2424 (END)

  11. Hello Muffin,

    Thank you for the tutorial, works really well!

    I have a question (albeit possibly a bit stupid), but how do I migrate my current collection to GD? I currently have an NFS with a couple of mounted folders on my ubuntu server (where I’ve set all of this up). Do I need to copy it into the gmedia-local folder then run the cron command or is there a simpler way for me to do it? Can I use the cron command and just point to my NFS folders?

    Thanking you in advance!

    • Hey,

      I would run this manually so you can keep an eye on the progress and potentially rate limit it if your internet is fast enough, as there is a 750GB daily upload limit.

      I would open a new tmux session via SSH so transfers will continue when you disconnect from SSH, and in your tmux session run something like there following:

      /usr/bin/rclone copy <local-mountpoint> gdrive-media-crypt: --config /home/<youruser>/.config/rclone/rclone.conf --log-file <rclone-log-dir>/upload.log --log-level INFO --delete-empty-src-dirs --fast-list --progress

      the --progress flag will show you the status of the copy. I would use copy and not move to make sure everything is nice and safe in the cloud before deleting your local copy. You can also add the bwlimit flag to the command if you need to limit the throughput.

      Hope this helps 🙂

  12. Hi MonsterMuffin,
    First of all… well done man. Your tutorials are really gold and gave me tips to improve my actual setup.

    FYI I tried to get unlimited storage from Google, going for Google Enterprise as you suggested.
    Turns out, it has now a special condition:
    ** Google will provide an initial 5 TB of pooled storage for each user. Customers who want additional storage can request it as needed by contacting Google Support.

    I only have access to 5TB, any suggestions on how to get it unlimited?
    After some digging, it seems that we can only solicitate support for additional storage once every 90 days…
    Any help welcome. Thanks in advance.
    V

    • Hey NupCakes,

      Thanks for the kind words. As for your question…

      This is all up in the air right now as for what’s happening. Google are transitioning everyone to Workspace from the old plans, myself included and it’s not clear what kind of limits they will enforce.

      I am actually putting a notice at the top of this post until we know more, but it appears that June/July 1st (not sure which) is when changes will be made, so we will have to see how they handle this. I am also seeing the same as you, I am only ‘allowed’ 5TB whilst using much, much more.

      I have contacted support to ‘request’ more storage as Google say, however, the support agent told me I don’t need to do anything and that warning can be ignored. Do I believe him? No. I don’t think he has any idea what he was talking about.

      So, there may or may not be roadblocks to this going forward, if you already have an account, I don’t think it will be an issue putting over 5TB in the account as most users doing this will have done so already. What happens after June/July 1st is really anyones guess as Google have not been clear at all.

      There are other solutions to doing this however without Google drive, a bit more complicated and pricer but can work out much better in the long run, keep an eye out for any updates on this post in that case…

      Best of luck, Muffin.

  13. Hi MonsterMuffin,

    This guide has been supremely helpful in moving to a google drive setup. May I ask, how do you handle when Sonarr/Radarr updates an existing file? Testing with the rclone move and no other scripts, it looks like you would just end up with multiple copies of the same file in the directory?

    Not a massive issue, but I’m curious if you have another automation or any ideas on handling that bit.

    • I’m glad it’s been helpful 😊.

      As for your query, new files will overwrite old ones with the same name with the configurations used in mergerfs and rclone.

      I’ve been using this setup for a while now and never had any issues with upgrades, it’s all seamless. The only place where some manual intervention has to be done in my setup is empty trash in Plex as I do not do this automatically.

      Hope that helps, let me know if not.

  14. Just have to say – for anyone contemplating … having a crappy Xfinity 30mbps upload speed makes this painful, although I’m sure it will be worth it in the end. It has been 2 months and my library is still uploading. The uploading cripples my network. I put a bwlimit flag on the crontab and while it has helped alleviate it a little, it’s still pretty crippling trying to work from home, upload, stream videos, etc. Muffin’s comments about making sure you have a good upload speed cannot be stressed enough, that’s for sure!

  15. Thank you for this guide! It’s been some time since I have setup rclone for plex but this really made the process of getting everything setup again a breeze. I will definitely be passing this along to anybody who is looking to create a rclone plex setup.

  16. Hello, I thought you can only get unlimited storage if you have more than 5 users? Please clarify for me, thank you.

  17. Hey muffin,

    Excellent guide!
    Just one question, if I want to throttle the bwlimit to 8MB/s and set unrestricted for downloads I can use --bwlimit=8M:off if I have read correctly, but in which of the files should that be placed? gmedia.service, crontab?

    • Thanks :).

      For this you simply need to add that argument to the command that is in the crontab. So you should simply be able to add --bwlimit 8M to that line; this will ensure that your uploads are capped, but everything else will continue at line speed as the mount configuration is completely separate.
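
      For illustration (your schedule and existing flags stay whatever you already have; this just shows where the flag goes), the crontab entry ends up looking something like:

      # illustrative only: keep your existing flags and simply append --bwlimit
      0 */6 * * * /usr/bin/rclone move /mnt/gmedia-local gdrive-media-crypt: <your existing flags> --bwlimit 8M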

      I hope this helps, if not let me know.

      • Thanks for the quick reply, I will look at it!

        Totally different (stupid) question:

        When copying files locally to the mounts, is there any difference between putting files directly in /mnt/gmedia/ and/or /mnt/gmedia-local? Any downsides or something?

        • No real difference for manual moves, no, but you should really be using the mergerfs mount /mnt/gmedia. The only thing that should be using the backend directories is mergerfs; it makes things a lot easier if you use the abstracted /mnt/gmedia for all operations.

  18. Hello! This is a super good write up! Great job.

    I do have a quick question though: downloads on my system are currently going to /opt/downloads/torrents/. How do I get them from this folder to the gmedia mounted FS? I feel like I’m either missing something or am misunderstanding a part.

    Thank you!

    • If you followed my docker setup then this is all done via the volumes defined in the .env file.

      The download folder is mounted on both the downloader VM and Sonarr/Radarr. Sonarr/Radarr also have the gmedia folders, so they will pick up the files from the download folder and then move them to the gdrive folder. Sonarr/Radarr should be doing all of this in the backend.

      Let me know if you need some more help.

  19. Hi,
    I have some bugs with your “rclone mv” command:
    – after moving, each file disappears for ~10 seconds, and if I am watching a movie with Plex I get an error because the file does not exist during this time.
    – if the 750GB quota is reached during the move, I get problems.

    Do you have a solution for that?

    Thx.

    • This is not sufficient information to troubleshoot anything. You need to provide detailed logs and setup information if you need help.

  20. A few things have changed since the tutorial was written, but it was easy enough to work through them… any other way to tip ya, Muffin? Gas prices are super expensive. Give me a Buy Me a Coffee link or something!

    • Could you please tell me what has changed since I posted this? I would like to keep this guide as up-to-date as possible :).

      As for tips, I only do crypto for now, if you can’t do that then no worries whatsoever! Thank you for the thought regardless.

  21. Heya. I’ve been following along on macOS and have hit a snag on the systemd portion; I don’t think macOS supports it. Based on my research, it uses launchd. Is there a way for me to convert your systemd unit to launchd/launchctl?

  22. PS – I forgot to include this in my last comment. rclone v1.58 has since been updated to no longer give you a website link to copy and paste into a browser to get the authentication code. You have to have the same version of rclone on a separate machine with a browser. Just a heads up.

    • Great guide, many thanks for posting this. It’s been a right eye-opener in moving to the cloud from a local server/NAS, and I really appreciate the amount of effort you put into making an all-in-one guide that’s easy to follow! I had a load of bother running through the steps the first time round as I accidentally ran sudo su at some point, which caused a lot of weird permission issues (rclone not playing ball on reboots as it didn’t know what config to use). All my fault though, but a learning experience!

      As BMeissen said, the main change from your guide, which has only recently happened in the latest version of rclone, is the lack of a link being provided to authenticate with Google Drive. The exact output is below (I’ve removed the auth string it generates). I just installed rclone on the machine I was SSH’ing from (Windows based), ran the command below, accepted the authorisation request on the popup browser window (I believe this is the same as the one from before), then rclone on the Windows machine spat out another long string which I pasted into the “config_token>” prompt below.

      Use auto config?
      * Say Y if not sure
      * Say N if you are working on a remote or headless machine

      y) Yes (default)
      n) No
      y/n> n
      Option config_token.
      For this to work, you will need rclone available on a machine that has
      a web browser available.
      For more help and alternate methods see: https://rclone.org/remote_setup/
      Execute the following on the machine with the web browser (same rclone
      version recommended):
      rclone authorize "drive" "very_long_token_ID_generated_here"
      Then paste the result.
      Enter a value.
      config_token>

      • Thanks for this, I was not aware that these changes had been made.

        I have now amended that bit of this tutorial to include instructions as to how to do this.

        Thanks again and thank you for the kind words, I’m glad I could help!

  23. Thank you for this excellent tutorial! I had everything running for a few days, but I think after I edited the file to add bandwidth limits, something, somewhere got messed up. This morning I realized that the sync wasn’t working, so I started checking logs and discovered a whole slew of weird issues, from error messages that rclone config was getting permission denied when trying to open my config file, to then realizing I had to refresh my token for my drive. I have finally gotten things sorted out there, and when I use the equivalent of “rclone ls gdrive-media” it shows me the long list of all of my content that has been uploaded to my GDrive. But if I use the equivalent “rclone ls gdrive-media-crypt” command, it comes back empty. I’m at a loss as to what I need to do next. Should I re-run the config for the crypt and enter my two previously saved passwords to try to re-establish that connection?

    • Your token should get continually renewed when you use rclone; I’ve not come across a scenario where this hasn’t worked, so that’s odd.

      If rclone ls gdrive-media is listing your files in a readable format then you have a problem, as that remote does not decrypt and should be listing obfuscated file names. You should make sure you are using the proper config in your mount.

      You can re-run the config without needing to worry about your data, as long as the crypt is created with the same keys, as you say. I would just nuke the rclone config and recreate it, and if you want to add bandwidth limits, add them as options to the rclone move command.
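
      A quick way to sanity-check which remote is which (remote names as used in this guide):

      rclone lsd gdrive-media:                # raw remote: directory names should look obfuscated
      rclone lsd gdrive-media-crypt:          # crypt remote: names should be readable
      rclone config show gdrive-media-crypt   # confirm the crypt wraps the right remote and path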

      • I’m hoping the reason why it did not automatically renew the token is because the OAuth Consent was still in testing and not published — I couldn’t publish because once the consent screen was published the headless authorization no longer worked — thankfully that’s been fixed with rclone 1.58. My oauth consent screen is now published and I was able to successfully authorize so hopefully that should be smooth sailing.

        rclone ls gdrive-media was giving me the encrypted names, so based on your reply I went through rclone config and edited my crypt leaving everything the same as it was. Amazingly just re-running that rclone config and leaving everything the same fixed my issue – now rclone ls gdrive-media-crypt gives me my folders in readable format. Huge thank you again for the tutorial and for the quick reply today! Sending a beer or two your way via bitcoin. 🙂

  24. You’ve mentioned that this should be a VM/server with some powerful specs.
    I already run Plex on a Raspberry Pi 4 (8GB RAM, 1TB HDD, 32GB SD card for the OS); can it be used for this setup? Or is encrypting/decrypting just too intensive for it to handle?
    Please advise, I’m seriously considering changing my software setup according to your blog post.

  25. Question about Plex & gmedia-local: I am adding files to my gmedia-local folder and I have Plex pointed to the merged gmedia folder. Plex is set to scan every hour and the job to sync the files to the gdrive is set to run every 6 hours. The movies I copy into the gmedia-local folder all have the naming pattern Movie Name (1998).mkv, and after they get scanned by Plex I am starting to notice that sometimes I will see two box arts for the same movie, one with ‘Movie Name / 1 Movie’ below it and a second one with ‘Movie Name / 1998’ below it. It does not happen for every movie, and I can manually delete the one that says ‘1 Movie’ below the box art, but I’m not sure why it is creating two entries in Plex. I double-checked and I do not have any other folders set to scan.

    • I think I figured out what is happening… I turned on ‘add collections’, and for some of the movies I only have one movie in the collection, but it still generates the collection… Go figure, I would not have thought that it would create the collection box art unless it had more than one movie.

  26. Been using this guide for a while with no issues. I was wondering if it can get updated to use rclone union instead of mergerfs. I’ve been trying to change it but struggling a lot getting the config right.

  27. Hello Muffin.
    Thank you so much for this awesome article. I can’t imagine how much work it must have been to write everything down this perfectly. I also downloaded your accompanying repo and just finished following along with all of your steps, except for the last.
    You might have guessed it: I have a question 😀
    In my understanding, your gmedia-local folder is the location where the Arrs dump all their data so that Plex has an easier time inspecting it, and for better upload performance. However, looking into your docker-compose YAML, I only see the merged gmedia folder being referenced. Where does local come into play? I mean, after the data gets moved to the cloud it gets wiped by rclone anyway, so it would always be an empty mount with no folder structure inside of it. Can you please help me understand this? Thanks again!

    • Hey and thanks for the kind words! As for your question, this is necessary for the apps to function properly. Apps should never use either gmedia-local or gmedia-cloud directly, as this would result in them only seeing part of the data and having to configure two directories in each app. All apps (the Arrs and Plex) should use the mergerfs mount, as the ‘cloudness’ or ‘localness’ of the files themselves is abstracted away from the apps; they will never know stuff was once local and now lives in the cloud.

      Local is referenced in mergerfs in such a way that all files hitting that share get written to -local; the rclone move job will then move files from -local to -cloud, again transparently to the apps.

      This is why all apps must use the mergerfs mount and not -local or -cloud. I hope this has helped your understanding, if not let me know 🙂

      • Hey Muffin.

        Yeah, this helped me. Everything is up and running now and I’m amazed by it, all thanks to you 🙂
        By the way, I discovered something about gmedia-mergerfs.service: when the system reboots, FUSE will complain because the directory that you want to mount is not empty. So I added “nonempty” to the command like so:

        /usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud /mnt/gmedia -o rw,nonempty,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full

        works like a charm. Just thought you might want to know.

        Have a nice day!

        • Never mind, don’t do that! It led to a lot of confusion for Docker. I just deleted gmedia/, created the folder again (this time all empty) and started the service again. That’s probably not the most elegant way, but it works.

  28. Just found this site and wanted to check two things before moving forward… 1. Are people still able to get the Business Standard tier from Google, upgrade to Enterprise and still get unlimited storage / greater than 1TB? 2. If you set all this up and the server that is doing the encryption/sync fails, will you still be able to bring up a new server and have the files still work? Thank you, great information.

    • Yes you can still upgrade to Enterprise and get unlimited storage, I followed this tutorial a few weeks ago.
      As long as you have your secret keys you should be able to load up another server with rclone.

      • Thank you for the responses! I saw someone online mid last year saying they contacted support and were told for a 1 person team it was a 1 TB limit …

        • The answer you received is correct, you can still do this. If anything ever changes I will be updating this blog post. Please make sure you keep your keys extremely safe. As long as you have your keys you can mount your encrypted data on anything that runs rclone.

          Good luck and if you need any more information please contact me on the links provided on my about me page :).

  29. Learned a lot from this, got it working great even on a shared media server. I did have to do a non-superuser dpkg -x for mergerfs because it wasn’t preloaded on the system (or I needed root to get to it) but other than that I followed these steps and it worked great. The local/remote transparency is wonderful for Plex, I’m blown away on how well that works. Sent along some bitcoin as thanks.

  30. Hello.

    Just want to say I think this guide is out of date. I came to the install of rclone, tried to get past that part but did not manage it.

    • Not sure what you did but this is not out of date. Please post some output or logs of the issue you are having.

    • It’s in the GitHub repo, you can find it here.

      You will get this when you clone the repo; you may not be seeing it since it is a dot file. If you run ls -la in that folder, you should see it.

  31. Complete and total noob to all of this. I got as far as installing and setting up rclone and GDrive. I’m completely lost on how to mount GDrive on Windows 10 so I can add media. Also, how do I get Plex to play from it? Any tutorials for doing that on Windows 10 (that a complete noob would understand)?

  32. I wanted to post my experience of late. Unfortunately I have terrible upload with my ISP (45Mbps), and yes, I have the fastest available to me. I was running into an issue where rclone move was eating all my upload and not allowing remote streams to connect and stream. What I have done is start my rclone move with a bwlimit that leaves about 2-3Mbps to spare, and with the remote control (rc) enabled. This allows PMS enough bandwidth to respond to stream requests and rclone to upload as fast as possible. I then have a script that Tautulli will call when a stream is started/paused/stopped/modified etc., which sends the total WAN bandwidth being requested. I then subtract the requested WAN bandwidth from my 45 to allow the stream to use what it needs and send the rest to rclone move. This is done by using the rclone rc command to change the bwlimit accordingly. This has been working great for a few days now and allows for a stable stream and constant upload. Yes, this could be remedied by a dedicated cloud server with a good connection; however, I would need a graphics card for HW transcoding, which is not cost effective. I already have a local server fully capable of transcoding, albeit behind terrible upload limits.
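
    As a rough sketch of that approach (values are illustrative, and depending on your rc auth settings you may also need --rc-no-auth or credentials):

    # start the mover with the remote control API listening
    rclone move /mnt/gmedia-local gdrive-media-crypt: --bwlimit 2M --rc
    # later, from the Tautulli-triggered script, change the cap on the fly without restarting the transfer
    rclone rc core/bwlimit rate=1.5M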

  33. Okay, I got this all set up and working how I think it should. But looking at your setup, it seems different from how I set it up.
    Basically there are 3 directories: gdrive-local, gdrive-cloud, and gdrive. Your setup doesn’t appear to use gdrive-local, as your docker apps all point to gdrive for downloads. Nothing seems to reference gdrive-local except for mergerfs, which takes gdrive-cloud and gdrive-local to create gdrive, which I have Plex pointing to. If I point all downloaders and PMS to gdrive, when and how is gdrive-local used?

    • Also, should I see content in gmedia-cloud? It is empty. I have stuff in gmedia-local and gmedia reflects what is in gmedia-local. I am concerned that once it’s moved from local to cloud, gmedia will reflect the empty cloud directory?

    • Okay, disregard all I had said before.

      tldr; I am all set! Thanks for the awesome write up!

      long version: In short, gmedia and gmedia-local become the same. The rclone move takes care of items missing from the ‘true’ cloud (gdrive-media-crypt). Items also show up in my gdrive-cloud as expected; these items were moved from gdrive-local to the ‘true’ cloud as expected. And since gdrive-cloud reflects gdrive-media-crypt, files don’t really disappear from a downloader or PMS perspective, as mergerfs takes files from both local and cloud.

      • Hey, glad to see you got this working! As you say, mergerfs exists to abstract the local and cloud gmedias into one which all the apps use. This allows us to store stuff locally and in the cloud without the apps knowing or caring.

  34. Can you tell me what rclone’s role is here?
    Can rclone stay connected to the Google Drive servers forever?
    Please let me know.
    How can I use Plex like RaiDrive?
    Please reply ASAP.

  35. Hi there

    What do you do with the downloads from Sonarr and Radarr? Since they normally hardlink to the file in the download directory?

    • I’ve never used hardlinking so I wouldn’t know; it doesn’t sound like it would work well. Just move the files upon completion into the merged directory and let rclone manage the move.

      • I was thinking of changing it to that but the problem I would run into is that I can’t seed the torrents I downloaded. I unfortunately don’t think this is going to work for me.

        Do you have any ideas possibly? Seeding from GDrive sounds like a terrible idea.

        • Hi, I’m going to hop into this conversation since I had a similar problem.

          For hardlinks to work, you would need to download directly to the merged directory (media-merged/downloads). To prevent the downloads from going to the cloud, add =NC (no-create) to the cloud branch in your mergerfs command, like this: /path/to/local:/path/to/cloud=NC.

          Now, for the move command, I have an if statement that checks whether there is less than 100GB left on the local disk, and only then does it move the files to the cloud (this is to maximise seed time):

          declare -i minsize=$((10**11)) # 100 GB
          # only move to the cloud when the local disk has less than 100 GB free (to maximise seed time)
          if (( $(df -B1 --output=avail /your/local/disk | tail -n1) <= minsize ))
          then
              echo 'There is less than 100 GB left'
              sudo /usr/bin/rclone move /path/to/local/media your-cloud-remote:path \
                  --config /path/to/rclone.conf \
                  --log-file /path/to/rclone-move.log \
                  --log-level INFO \
                  --delete-empty-src-dirs \
                  --max-transfer 120G \
                  --cutoff-mode=cautious \
                  --min-age 50h \
                  --order-by 'modtime,asc' \
                  -P
              # --max-transfer 120G    : moves only up to 120GB per run, again to maximise seed time
              # --cutoff-mode=cautious : picks files so that the 120GB limit isn't exceeded
              # --order-by 'modtime,asc' : moves the oldest files first
          fi

          Now, the problem is that due to the way hardlinks work, there are files left over in /downloads.
          To fix this I have the following script:

          # only pick .mkv files
          find "/path/to/local/torrents/" -type f -name "*.mkv" -print0 | while IFS= read -r -d '' file
          do
              # if the file has no other hardlinks (!!! verify that hardlinks work properly before using this !!!)
              if [[ $(stat -c %h "$file") == 1 ]]
              then
                  # This is a weird part. The problem is that sometimes a file is still downloading and the script would pick it up and delete it.
                  # To fix this, the script checks whether a file with the same name exists in the merged directory.
                  # If it is there and the local copy has no hardlinks, it means it is stored in the cloud.
                  file_mod=$(basename "$file")
                  file_find=$(find "/path/to/merge/media" -type f -name "$file_mod" -print)
                  if [[ -n "$file_find" ]]
                  then
                      # rm "$file" # Don't uncomment this unless you are 100% sure that it works
                      echo "remove $file"
                  fi
              fi
              file_find=
          done

          That’s it. I have these scripts running in a crontab, rclone at 4 and the deletion at 6, and I don’t have any problems.

          DISCLAIMER:
          I have little to no experience in bash and scripting; most of these scripts are compiled from Stack Overflow answers, so use at your own risk.
          Also, for whatever reason, the move command doesn’t pick the files chronologically, it is pretty random. Maybe someone more experienced can look at it and figure out what is wrong.

  36. On my VPS I am getting a lot of inbound traffic but normal outbound traffic. For example 250GB per day inbound and 20GB outbound. From what I have found, this is Plex, not rClone.

    Do you have any recommendations on what to turn off in Plex? I am guessing that Plex is downloading every TV show/movie to run analysis on it.

    • IMPORTANT:
      High inbound traffic on the VPS?

      In Plex go to Settings > Scheduled Tasks > and untick the option Perform extensive media analysis during maintenance.

      If this is enabled, Plex will hammer Google Drive during the 3-hour (default) maintenance window, causing about 250GB of inbound traffic to the VPS.

    • I see another commenter has replied to you with the correct answer.

      If you want to do media analysis you need to do it whilst the media is stored locally and not when it’s in Google Drive. You can just disable this entirely which will save you a lot of traffic and API calls.

      Settings > Scheduled Tasks > uncheck ‘Perform extensive media analysis during maintenance.’

      I still have intro detection enabled on mine, but this is all done when media is scanned locally, before being sent to Google; media analysis / thumbnail creation is disabled.

  37. Hey, I know this is an old post, but I have based my setup mainly on yours and it has worked great! However, I can’t seem to figure out how to move the files from local to cloud. I use a torrent client that downloads to /data/torrents and after that Radarr hardlinks it to /data/media. When I move the files from /data/media to the cloud, the same files remain in /data/torrents. Is there any way I can delete the remaining files in /data/torrents?

    Normally I could just erase the contents of the folder, but I have configured my move command to only move movies older than 10 days. So if I move MOVIE1 and MOVIE2 from /data/media, there are still MOVIE1, MOVIE2, MOVIE3… in /data/torrents. How can I delete only the movies that have been moved?

  38. Keep getting this error when trying to start gmedia-mergerfs.service: "Job for gmedia-mergerfs.service failed because the control process exited with error code.
    See "systemctl status gmedia-mergerfs.service" and "journalctl -xe" for details".

  39. Hello!

    Thank you so much for putting this together. It was extremely helpful. I’m a bit confused about the use of mergerfs with Sonarr and Radarr though. One of your diagrams shows Radarr, Sonarr and Plex all pointing to the ‘gmedia’ folder. Shouldn’t Radarr and Sonarr have ‘gmedia-local’ set up as their root folder, though, rather than ‘gmedia’? I understand that Plex should use the merged share gmedia as its media location, but it seems like the other two should use gmedia-local as their root so that the cron job will be able to move downloaded files to the cloud.

    Could you explain this a bit more? I may be misunderstanding how mergerfs works.

    • The way mergerfs is set up in the service file tells mergerfs to write all new files in that merged directory onto the first specified branch, which is gmedia-local. It’s best to use the merged mountpoint for all files and to keep -local and -cloud abstracted away.
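
      For reference, this behaviour comes from the branch order and the create policy on the mount (the same command shown in the comments above, without the commenter's added nonempty option):

      /usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud /mnt/gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full
      # category.create=ff ('first found'): new files always land on the first branch, /mnt/gmedia-local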

      Hope that helps :).

  40. Hi there, thank you for the very thorough tutorial. I’m just stuck now when I type

    sed -i 's,,MY USERNAME,g'
    I get this error
    sed: no input files

    Do you have any idea how I can fix it?

    Thank You!

    • Ensure you are in the correct directory. You can also edit the files manually to match the outputs that I detail in the post :).

    • It’s just how I decided to do it. Plex has a GPU, so that has its own VM with the GPU passed through; rclone is accessed by more than just Plex, so it acts as a dedicated VM for accessing GD from various apps via SMB.

      It’s just how I’ve chosen to do it in my setup.

  41. Hello, thanks for the contribution, it is helping me a lot. My query is: how many simultaneous connections can I have watching videos through Plex without being blocked by Google?
    Thanks

    • A lot. You’ll most likely be limited by your hardware and internet connection. If you’re planning on using this to sell services, I cannot comment on how well this would work, but I wouldn’t.

  42. Hey Muffin,

    Great job.
    I was wondering if the setup above would be different when using a service like AllDebrid to pull content instead of from the local drive?

    Any details will be appreciated 😜.

  43. WOW, firstly thank you for this very detailed post. I really appreciate it, and can see it’s helped so many people, but I have no idea what a lot of this means 🙁

    Like, I can see images which show 255GB RAM. You talk about “a co-located server with a decent uplink. This server has an Nvidia P2000 passed through to the Plex VM”, then an R720, and lastly a Hetzner server. I have no idea what any of those mean lol. I’ve read it twice now but am still SO LOST!

    I want to create my own Plex server using Google Drive as you have. I’m not an Ubuntu person, never used it, but looking at this it seems like you have 3 different computers doing the work; wouldn’t it be easier to just build your own single machine? Would that also be cheaper? I’m not sure; as I said, I’m still lost on a lot of things.

  44. Hi Muffin. Thanks a lot for the tutorial. Can I setup a server with encrypted GDrive when having already unencrypted files in the Cloud? I am currently running my server on Windows Home Server and already have started pushing files to my Workspace Google Drive because I have reached local storage limits. The files are not encrypted, neither on local drives nor on GDrive. I would like to change OS because Ubuntu allows Sonarr and Radarr to read the mounted storage which MergerFS unfortunately doesn’t.

    • So it depends how you want to go about this. You can totally mount the unencrypted stuff in rclone as another remote and add that to the mergerfs pool without adding additional files to it. In that case you can leave it as is, or slowly move the files from unencrypted to encrypted. It all depends on how you want to handle it, but it’s possible to continue using what you have now without any disruption.
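
      As a rough sketch (the ‘plain’ remote name and mount path here are made up, adjust to your own), the unencrypted remote gets its own mount and is then added to the pool as a no-create branch so nothing new is ever written to it:

      # hypothetical remote and mount point for the existing unencrypted files
      rclone mount gdrive-media-plain: /mnt/gmedia-unencrypted --allow-other --daemon
      # add it as an =NC (no-create) branch so new files still go to -local only
      /usr/bin/mergerfs /mnt/gmedia-local:/mnt/gmedia-cloud:/mnt/gmedia-unencrypted=NC /mnt/gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full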

  45. This is an incredible write-up, thank you. I am wondering if any of the products included have a VPN for the Usenet and BitTorrent use, or if it is not necessary. Or perhaps you have that configured on your router.

    • This is indeed done on pfSense (I have a guide for this on my blog too; albeit old now, it’s still relevant).

  46. Hey man,

    Thank you for all of your hard work here! Your post has been a HUUUUUUUGE help! What I’ve done is merge some of what you have here with the work from the DockSTARTer to make something work on a Ubuntu desktop VM on my Windows host (a.k.a standing on the shoulders of giants imho). My goal currently was to set up something like in the first picture that you have towards the beginning of the article/post here. Do you happen to already have a more in-depth write up explaining the set-up/process entailed for that “high-level” set-up? I would imagine that it’s not terribly different from the setup discussed in this post, just broken out into different, specialized VMs with the SMB shares. I am currently trying to figure out how to do it myself but I feel that I am in too deep for it. If I can figure out how the SMB shares would work between VMs and the Windows host, I think I’ll be set, but it’s not looking too good as of right now. I would be totally willing to talk to you about on the discord server but I can’t seem to find a link to where that would be. The “about me” page on your blog has a logo for discord but no corresponding link. Is there a way I could reach you for some input, either via reddit or discord? I would love to pick your brain for like 15-20 minutes or so to see if I’m on the right path. I hope to hear from you soon. Keep up the amazing work!

    Cheers,
    Oshura3

  47. This was extremely helpful. It was my first endeavor into playing with variables and a “.env” file. Unfortunately, I was not able to get it to work all the way. I verified that all of my values matched what you have, and I think I may have missed something at some point. For a while there it kept giving me this error: [Invalid interpolation format for “volumes” option in service “sonarr”: “${GMEDIA-TV}”], which somehow I managed to fix (but I honestly have no idea how). After I ran the docker-compose up command, I got this last error and I was hoping for some insight into it.

    WARNING: The USVC variable is not set. Defaulting to a blank string.
    WARNING: The GSVC variable is not set. Defaulting to a blank string.

    Did I have a slip of the finger somewhere here? Great work with what you did. This is not a lost cause by any means!

    • Hey there,

      I just realised I failed to define those variables in the .env file! I have no idea why I named those variables differently to the others, but I have now fixed the docker-compose.yml file so this is now the same variable name as the others. You can now repull the repo and it should deploy as expected.

      You can fix this manually by editing the docker-compose.yml and changing the following under the Plex service:

      - PUID=${USVC}
      - PGID=${GSVC}

      becomes

      - PUID=${PUID}
      - PGID=${PGID}

      Let me know if you have any issues with this.

      Remember, with these variables you are ensuring the docker containers are running as a non-root user, so make sure the user with ID=1000 has access to the shares we created. If you just want everything to ‘just work’ you can remove the ‘PUID=${PUID}’ and ‘PGID=${PGID}’ lines from the services in docker-compose.yml. This will make all containers run as root and permissions won’t be an issue, but obviously this is not recommended!
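
      For example (the path here is just the local share from this guide), a quick way to check and, if needed, fix ownership:

      getent passwd 1000                           # shows which user has UID 1000
      sudo chown -R 1000:1000 /mnt/gmedia-local    # example only: give that user/group ownership of the local share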

      Cheers, Muffin.

  48. Great tutorial! With regard to the cron job running every 6 hours… what if the media that is transferring from gmedia-local to the cloud takes more than 6 hours to transfer? Would there be a messy/lossy overlap of cron jobs?

    • Hey there and thanks.

      As for your question, no: rclone is aware of files that are currently in use and these will be ignored by the subsequent job. In this scenario, new files that meet the criteria to be moved that did not previously will be actioned by this new job. 🙂
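
      If you want to be extra cautious, one option (not something this guide relies on) is to wrap the crontab entry in flock, so a second run exits immediately while the first is still going:

      # hypothetical crontab entry: flock -n skips the run if the previous one still holds the lock
      0 */6 * * * /usr/bin/flock -n /tmp/gmedia-move.lock <your existing rclone move command>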

  49. Thanks for your awesome tutorial!

    Two questions:
    – After the first cronjob run, my files are in GDrive and located in gmedia-cloud and not in gmedia-local anymore, but the displayed free storage in SABnzbd doesn’t free up. Does it take some days until everything is locally deleted when no one is using the Plex server?
    – Why does it take so long when I “mv” folders into gmedia? Or should I move them into gmedia-local?

    best regards

    • Hey there and thanks :).

      > I’m not sure I understand your first point: files are supposed to be moved from gmedia-local to gmedia-cloud via the cron job, so nothing older than 6 hours (or whatever flag you have set for this) should be in gmedia-local anymore.

      As for downloads, sab should really have no insight into the gmedia-local/cloud directories. In this guide, sab is downloading to a local folder on the server and then Radarr/Sonarr should be moving this to gmedia-local; the storage sab sees should be freed up as soon as Radarr/Sonarr moves the files from that directory. You should check permissions and ensure that the files are actually being moved.

      > Yes, doing any file operations directly on the gmedia-cloud directories will be slow; you will only get your full speed to GD when using the rclone API. So you can either use rclone move commands to move stuff directly into the cloud remote, or, as you say, move files into gmedia-local and let the cron job move the files for you.
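
      For example, something like this goes over the rclone API directly rather than through the mount (destination path illustrative):

      rclone move /path/to/new/stuff gdrive-media-crypt:movies --progress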

      Let me know if you have any more questions 🙂

  50. Keep getting 403 quota limit reached, but I haven’t been able to play any videos through Plex yet. I’m only using the rclone side of things since the uploaders like Sonarr are on a different server. I have partial scan turned on and a full scan set to daily. Any suggestions on how to fix this, or is it because it’s a new server and needs a few days to cool off before Google will unblock everything? Thank you for the help and for the great article.

    • Have you ensured Plex is not doing thumbnail generation on your media in the cloud? This will almost certainly make you hit your API limit. I would avoid partial scans and just set Plex to do a full scan every few hours, as this should use the VFS cache.

  51. Kept getting this error!
    crontab: installing new crontab
    "/tmp/crontab.dmtQml/crontab":23: bad day-of-week
    errors in crontab file, can't install.
    Do you want to retry the same edit? (y/n)

    • You can simply ignore the 2nd rclone setup phase when applying encryption onto the remote; everything else will remain the same.

  52. Hey Monster, I have dropped you a few messages on /r/homelab but you may have missed them. I am seriously looking at a colo and would like to get a further understanding of yours, outside of the blog posts. Would it be possible to mention (in private and never shared) who your provider is at all? The ones I’ve been looking at in London are horrendously expensive and you get hardly anything 🙁

    • Sorry for not getting back to you sooner, completely missed this. Please try and reach out again on Discord and I’ll look out for you!

    • Thanks, it seems like Debian’s apt is indeed still shipping 2.24 at the time of posting. I have now included instructions on how to install the latest version from GitHub if your apt version is too old.

  53. It’s working now! I had pointed to the wrong mount points. In my case /mnt/gmedia-local is /mnt/Plex. Thanks again for the great tutorial!

  54. First, let me say properly, thank you so much for this tutorial because I’m learning a lot!

    Second, let me clarify: mergerfs is running on the same VM as rclone.

    the output of “sudo systemctl status gmedia-mergerfs.service” is:

    systemctl status gmedia-mergerfs.service
    ● gmedia-mergerfs.service – Gmedia MergerFS Mount
    Loaded: loaded (/etc/systemd/system/gmedia-mergerfs.service; enabled; vendor preset: enabled)
    Active: failed (Result: exit-code) since Fri 2021-01-29 19:14:42 UTC; 7s ago
    Process: 21766 ExecStart=/usr/bin/mergerfs /mnt/temp-local:/mnt/gmedia-cloud /mnt/Plex -o rw,use_ino,allow_other,func.getattr=new>

    Jan 29 19:14:42 gdrive systemd[1]: gmedia-mergerfs.service: Scheduled restart job, restart counter is at 5.
    Jan 29 19:14:42 gdrive systemd[1]: Stopped Gmedia MergerFS Mount.
    Jan 29 19:14:42 gdrive systemd[1]: gmedia-mergerfs.service: Start request repeated too quickly.
    Jan 29 19:14:42 gdrive systemd[1]: gmedia-mergerfs.service: Failed with result 'exit-code'.
    Jan 29 19:14:42 gdrive systemd[1]: Failed to start Gmedia MergerFS Mount.

    let me break it down:

    – /mnt/temp-local is on freenas connected to VM via SMB
    – /mnt/Plex is on freenas connected to VM via SMB
    – /mnt/gmedia-cloud is on VM

    So my problem is that gmedia-mergerfs doesn’t start for some reason.

  55. Hi!

    I saw your post on reddit but I can’t find it anymore to comment there (probably erased?).

    I have a problem when I want to run: sudo systemctl start gmedia-mergerfs.service

    It fails to start, but gmedia.service runs normally. What I have done differently is that I have gmedia-cloud and gmedia-local mounted via SMB on my other server.

    Could that be a reason?

    • Hey,

      Why are you running mergerfs on a different box to rclone? I’m assuming you’ve done this because /mnt/gmedia-cloud is on the network. What you’re trying to do should work regardless, though; what’s the output of "sudo journalctl -xe"? There should be an error about the service explaining why it couldn’t start. systemctl status will also give you some information.

      If you are running mergerfs and rclone, I would probably move the /mnt/gmedia-cloud mount locally; it doesn’t use any storage, so I can’t see any reason not to.
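
      For anyone hitting this, the quickest way to see the actual error is usually:

      sudo systemctl status gmedia-mergerfs.service
      sudo journalctl -u gmedia-mergerfs.service -n 50 --no-pager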

      • Hi, I’m new to this and need some help. I’m getting this error: “Job for gmedia-mergerfs.service failed because the control process exited with error code.” How can I solve this issue?
