
Set Up Disaster Recovery With Docker

Your LND node is running, and I'm sure you're ready to start opening some channels and sending payments around.

But first, we need to be sure that if our computer suddenly dies or gets stolen, and we can't get back into the LND installation, we are still able to rescue our funds.

Comparison with web applications

I said earlier in this tutorial that it would be good to have some experience running web applications before running a lightning node.

Well, if you have that experience, you might also know that the current standard for data persistence and backup for web applications is extremely high.

People have been using databases with web applications since around 1994, and it's a very mature ecosystem.

A typical Django, Ruby on Rails, or WordPress installation, these days, will use a hosted database provider such as Amazon RDS or ElephantSQL.

These hosted database providers are VERY FUCKING RELIABLE: even if your web application blows up, and one of the provider's database servers goes down, and both of these events happen at the exact same millisecond, you STILL won't lose any data!

You won't lose data because enterprise-grade databases like Postgres and MySQL have extremely well-designed, battle-tested systems for replicating data across multiple data center regions and keeping all that data perfectly "in sync".

In my 10 years of running web applications, I have never lost data due to a database or hardware problem. Never!

Lightning nodes are more sketchy

Lightning nodes have only been around for about five years, and some things are still a work in progress. Unfortunately, backup and recovery is one of "those things".

Currently, Lightning nodes are not easy to set up with any kind of backup/persistence system that matches what is achievable with Postgres and MySQL.

Yes, you will find tutorials from people setting up their LND node with a Postgres backend, but it seems nobody is willing to recommend using a "hosted" database provider like Amazon RDS or ElephantSQL. There is a lot of concern about what could happen if too much latency were introduced between the running LND node and a Postgres instance in the cloud. "There be dragons."

So, right now the recommendation is that "persistence" in an LND installation -- wallet data, data about the channels, and the other files that the node keeps -- really should live on a LOCAL disk.

Yes, this is exactly why we set up a ZFS pool, and yes, that will help protect us against drive failure. But data that lives only on a LOCAL disk -- that's still sketchy.

But don't lose all hope

You'll be happy to learn that LND does have a system in place to recover your funds in case of disaster: Every time you open or close a channel, LND will produce something called a "Static Channel Backup" (SCB).

This little data file is what you will need to recover the funds in your channels in case of disaster. Without the SCB, you are fucked.

Also: The SCB changes every time a channel opens or closes... and to make a recovery, you absolutely MUST have the MOST RECENT version of the SCB that LND has produced.

This means that you absolutely must be running a script, at all times, which uploads this SCB to secure cloud storage, IMMEDIATELY after every channel "open" or "close" on your node.

Strangely, I was unable to find a great script to do this, so I wrote my own, and I'm now going to share it with you.

Prerequisites

You need the following things to set up this script:

  1. An AWS account protected by two-factor authentication
  2. An S3 bucket
  3. 100% certainty that this S3 bucket has been set to "Block All Public Access"
  4. An AWS user with enough privileges to write to this bucket

If you don't know how to set this up, it's some basic development knowledge that is really useful to learn, and there are plenty of tutorials online to walk you through it.

And you don't have to use S3... you should be able to easily modify the script to upload to your preferred backup destination. But be sure the backup destination is secure. Don't put this file anywhere that others could find it and download it!

So, we're going to proceed assuming that you'll be using S3.
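
If you want to double-check item 3 in that list from code instead of the console, here is a minimal sketch using boto3. (The bucket name below is an assumption -- substitute your own, and make sure boto3 can find your AWS credentials, for example via environment variables or ~/.aws/credentials.)

# check_public_access.py -- a minimal sketch, not part of pworker.
# Assumes boto3 is installed and can find your AWS credentials.
import boto3

BUCKET_NAME = "my-lnd-backups"  # hypothetical -- use your own bucket name

s3 = boto3.client("s3")
config = s3.get_public_access_block(Bucket=BUCKET_NAME)["PublicAccessBlockConfiguration"]

# All four flags are True when "Block All Public Access" is enabled in the console.
if all(config.values()):
    print(f"{BUCKET_NAME}: all public access is blocked. Good.")
else:
    print(f"{BUCKET_NAME}: WARNING -- public access is NOT fully blocked: {config}")

Note that if the bucket has never had a public access block configured at all, this call raises an error -- which, for our purposes, is also a sign you should go fix the setting.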

Review the pworker directory

You will find a folder at the LND-With-Docker/pworker path.

I've named it pworker to suggest "Python Worker"... There are a bunch of Python scripts in here. You'll just be using one of them for now.

Save information about your AWS account in the PRIVATE directory

Run these commands:

cd pworker/PRIVATE
touch secrets.env

Open pworker/PRIVATE/secrets.env in your editor, and paste in these environment variables:

AWS_ACCESS_KEY_ID=PUT-YOUR-AWS-KEY-HERE
AWS_SECRET_ACCESS_KEY=PUT-YOUR-AWS-SECRET-KEY-HERE
S3_BUCKET_NAME=PUT-THE-NAME-OF-YOUR-BUCKET-HERE
S3_BUCKET_REGION=PUT-THE-S3-REGION-HERE

You'll need to modify each variable by pasting in your actual access key, secret access key, S3 bucket name, and S3 bucket region. (Note: the region looks like us-east-1 or similar.)

OK, we've written secrets.env.
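
If you'd like to confirm those credentials actually work before starting anything, here is a minimal sketch (not part of pworker) that reads secrets.env and attempts a tiny test upload with boto3. The file path and test key below are assumptions.

# test_s3_write.py -- minimal sketch, run from the pworker directory.
# Reads the KEY=VALUE lines we just wrote and tries one small test upload.
import boto3

env = {}
with open("PRIVATE/secrets.env") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            env[key] = value

s3 = boto3.client(
    "s3",
    aws_access_key_id=env["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=env["AWS_SECRET_ACCESS_KEY"],
    region_name=env["S3_BUCKET_REGION"],
)

# A throwaway object: if this succeeds, the backup uploads should succeed too.
s3.put_object(Bucket=env["S3_BUCKET_NAME"], Key="test/can-i-write.txt", Body=b"hello")
print("Test upload succeeded -- credentials and bucket look good.")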

Start pworker

./start-watch-backups.sh

This script first uploads the most recent channel.backup file, then keeps running continuously and re-uploads channel.backup any time it changes.

You should see log output like this:

Attaching to pworker-pworker-1
pworker-pworker-1 | starting watch_backups.py
pworker-pworker-1 | current working directory /workspace/lightning/pworker
pworker-pworker-1 | monitoring this file /workspace/lightning/lnd/lnd-data/data/chain/bitcoin/mainnet/channel.backup
pworker-pworker-1 | Uploaded /workspace/lightning/lnd/lnd-data/data/chain/bitcoin/mainnet/channel.backup to S3 as lnd-channel-backups/9a6a4c-channel.backup

This means your file has uploaded to S3!
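
If you're curious what's happening under the hood: conceptually, the worker boils down to a loop like the sketch below. This is illustrative only -- the real code is watch_backups.py in the pworker directory, and the bucket name, key naming, and polling details here are assumptions.

# Conceptual sketch of a watch-and-upload loop -- NOT the actual watch_backups.py.
# Assumes boto3, and that the path below is where LND writes channel.backup.
import hashlib
import time
import boto3

BACKUP_FILE = "/workspace/lightning/lnd/lnd-data/data/chain/bitcoin/mainnet/channel.backup"
BUCKET = "my-lnd-backups"            # hypothetical -- use your own bucket name
KEY_PREFIX = "lnd-channel-backups/"

s3 = boto3.client("s3")

def short_digest(path):
    # Short hash of the file contents: used to detect changes and to name the upload.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:6]

last_seen = None
while True:
    try:
        digest = short_digest(BACKUP_FILE)
    except FileNotFoundError:
        digest = None  # LND has not produced a backup yet
    if digest is not None and digest != last_seen:
        key = f"{KEY_PREFIX}{digest}-channel.backup"
        s3.upload_file(BACKUP_FILE, BUCKET, key)
        print(f"Uploaded {BACKUP_FILE} to S3 as {key}")
        last_seen = digest
    time.sleep(10)  # simple polling; a real implementation might use file-system events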

You should now look in your S3 bucket to see the uploaded file. You can easily browse the contents of S3 buckets right from the AWS console. If you are logged into AWS, s3.console.aws.amazon.com/s3/buckets should get you to a list of your buckets.
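
If you'd rather verify from code instead of the console, a minimal boto3 sketch like this one will list what's in the bucket. (The bucket name is an assumption; the prefix matches the log output above.)

# list_backups.py -- minimal sketch: list the uploaded channel backups.
import boto3

BUCKET = "my-lnd-backups"          # hypothetical -- use your own bucket name
PREFIX = "lnd-channel-backups/"

s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)

for obj in response.get("Contents", []):
    print(obj["Key"], obj["LastModified"], obj["Size"], "bytes")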

OK! We now have automated channel backups going to cloud storage!