How to use Hetzner S3 Object Storage as OpenTofu backend
OpenTofu (just like Terraform) supports multiple backends for storing your state. For quite a while I kept my state as files on my desktop machine, because running a dedicated database server seemed a bit excessive to me.
When Hetzner announced the beta test phase for their S3 Object Storage offering, I just had to try it out and make my OpenTofu state a bit more resilient and easier to recover. 😬
Adding a bucket to your project
Adding a bucket to one of your projects is done in less than a minute.
First, navigate to the cloud project that you want your bucket to belong to. Then choose Object Storage from the menu on the left.
Next, click Bucket erstellen (this should read “Create Bucket” in the English interface).
This opens a dialog, prompting you to input some information about the new bucket, such as a location, name, and access type.
During the beta phase, only Falkenstein was available. Since this post is being released with the official end of the beta test, Nuremberg and Helsinki should now be available as well. Your bucket name is required to be unique, may contain only lowercase letters, digits, and hyphens, and must start and end with a letter or digit.
Finally, choose whether you want the bucket to be private or publicly available (public buckets can be used for static pages, download storage, etc.).
Your bucket URL will then have the format `https://[bucket_name].[location].your-objectstorage.com`, just as shown in the dialog.
For the bucket name, I strongly recommend adding a random string as a prefix or suffix. This is not a requirement, nor is it a replacement for access permissions, but it helps at least a bit against folks who iterate over dictionaries of common bucket names to find possible data leaks caused by wrong bucket permissions. In the screenshot you can see I appended `-x4lk7x` to the bucket name `my-hetzner-bucket`.
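If you want a quick sanity check outside the Cloud Console, you can probe the URL format directly. A minimal sketch, assuming the example bucket name from the screenshot and the Falkenstein location code (fsn1):

```bash
# Probe the bucket endpoint; a private bucket will typically answer with an
# access-denied style response instead of listing any content.
curl -I https://my-hetzner-bucket-x4lk7x.fsn1.your-objectstorage.com
```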
Generate credentials
In order to access a private bucket, you will need credentials. If you just created your bucket, you should now be presented with the list view of all your buckets. Click on the one you just created (or the one you want to configure, if there is more than one).
This will show you a detail page of the selected bucket. Under “S3-Zugangsdaten” in the lower left corner, click “Zugangsdaten verwalten” (manage credentials).
This will forward you to the “security” section of your project, with the “S3-Zugangsdaten” tab selected.
Click on “Zugangsdaten generieren” (generate credentials).
After that, a new dialog prompts you to add a description for the generated credentials. It also informs you that the credentials are valid for every bucket in this project, so keep that in mind!
Enter a description (`opentofu-state-credentials`, for example) and click “Zugangsdaten generieren” (generate credentials).
Now, finally, your credentials will be displayed, but only once! After that, the secret key will not be shown again.
Make sure to save them first, before you use ’em 😉
Configuring OpenTofu to use the S3 backend
To use your shiny new bucket, you now need to tell OpenTofu how to use it.
In `/my-infra-project/providers.tofu`:
```hcl
terraform {
  backend "s3" {
    bucket   = "my-hetzner-bucket-x4lk7x"            # Name of your S3 bucket
    endpoint = "https://fsn1.your-objectstorage.com" # Hetzner's endpoint
    key      = "my-infra.tfstate"                    # Name of the tfstate file
    region   = "main"                                # this is required, but will be skipped!

    skip_credentials_validation = true # this will skip AWS related validation
    skip_metadata_api_check     = true
    skip_region_validation      = true
  }

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.66.3"
    }
    [... shortened! ...]
  }
}
```
As you can see in the example above, the `region` variable is set to a nonsensical value. Since it is required, we need to set it anyway, but it will be disregarded thanks to `skip_region_validation = true`.
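Depending on your OpenTofu version, the top-level `endpoint` argument may trigger a deprecation warning. If it does, the nested `endpoints` block is the newer spelling. The following is only a sketch under the assumption that you are on a release that supports it; the rest of the block stays the same:

```hcl
backend "s3" {
  bucket = "my-hetzner-bucket-x4lk7x"
  key    = "my-infra.tfstate"
  region = "main"

  # Newer releases prefer this nested block over the deprecated
  # top-level "endpoint" argument shown above.
  endpoints = {
    s3 = "https://fsn1.your-objectstorage.com"
  }

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_region_validation      = true
}
```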
After that, there’s only one more thing to do: “storing” the credentials. This is always a hassle, but we’ll take care of that now.
Handling credentials
In general, there are multiple ways you can use the credentials:
- add them with the `access_key` and `secret_key` variables directly in your `providers.tofu` file (not recommended!)
- add them to a separate `hetzner_s3_credentials.tfvars` file and run `tofu init -backend-config=hetzner_s3_credentials.tfvars` (not recommended! a sketch of such a file follows below)
- add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to your environment
- add an aws provider and load `~/.aws/credentials`
`TF_VAR_`-prefixed environment variables (which OpenTofu recognizes automatically) don't work in this case, because the backend configuration does not accept variables. Trying to use something like `access_key = var.hetzner_s3_access_key` will result in an error.
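For completeness, the partial backend configuration file from the second (not recommended) option would contain nothing more than the two backend arguments, roughly like this (placeholder values, not real keys):

```hcl
# hetzner_s3_credentials.tfvars -- passed via `tofu init -backend-config=...`
# Do NOT commit this file; see the caveats below.
access_key = "YOUR_ACCESS_KEY"
secret_key = "YOUR_SECRET_KEY"
```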
Adding the credentials to a `.tfvars` file seems like an easy solution, and it sure is. The problem with this approach is that your credentials will leak into `.terraform/terraform.tfstate` and into plans that you save, either on your local machine or when running automated pipelines.
This “might” not seem like a problem, but data ends up in public git repos quicker than you think and we don’t really want that, do we?
Adding an aws provider seems a bit excessive. Also, if you already have it configured, you’ll probably be using AWS S3 and have no reason to read this text. 😂
So we’re left with environment variables.
In `s3.env`, set:
```bash
export AWS_ACCESS_KEY_ID=GYJTKTEGSZQH0NMYOPIX
export AWS_SECRET_ACCESS_KEY=iKLz5codZqtq5Pyqtjf6wOtp5izBeg5tAJwzlgFY
```
Now, the file name here is only an example to show one possible way of doing it. You can just `source s3.env` and be done with it. However, this is still a little dangerous if you keep the file in your infrastructure repository, even if you remember to add its name to your `.gitignore`.
You can also set these variables in your `.bashrc`, `.zshrc`, your `.profile`, or in a separate file that gets sourced by you or a task runner just before planning or applying.
Either way, the credentials will not end up in the `.terraform` subdirectory or in a plan file. Yay!
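A minimal workflow sketch, assuming you keep the env file outside the repository (the path below is just an example):

```bash
# Load the credentials into the current shell before running any tofu commands.
# ~/secrets/s3.env is an assumed location outside the infrastructure repo.
source ~/secrets/s3.env
```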
Now you can run `tofu init` and should see something like this:
```
Initializing the backend...

Successfully configured the backend "s3"! OpenTofu will automatically
use this backend unless the backend configuration changes.
```
Once you run OpenTofu, it will create a state file and push it directly to the S3 bucket.
Congrats!
You can now head back to the Hetzner Cloud UI and check that everything worked. Navigate to [your project] - Object Storage, select your bucket, and click on the Dateien (files) tab in the top navigation of your bucket.
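If you prefer the command line, a listing with an S3-compatible client should show the state file as well. A sketch with the AWS CLI, assuming it is installed and the same credentials are exported as `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`:

```bash
# List the bucket contents against Hetzner's endpoint instead of AWS.
# You may also need a region for request signing, e.g. via AWS_DEFAULT_REGION.
aws s3 ls s3://my-hetzner-bucket-x4lk7x \
  --endpoint-url https://fsn1.your-objectstorage.com
```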
And yes, I made sure all data related to this post was deleted once it was written 😉
Bonus: Enable versioning on your bucket
Having your state saved in object storage is good for easy access from multiple workstations or by multiple people. It's even better if you enable versioning on your bucket, so you can still retrieve your state, should it be deleted by accident.
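If you'd rather not click through the UI, bucket versioning is part of the standard S3 API, so something along these lines should work with the AWS CLI as well (a sketch, assuming Hetzner's S3-compatible endpoint accepts the call):

```bash
# Enable versioning on the bucket via the S3 API.
# Assumption: Hetzner's S3-compatible endpoint supports put-bucket-versioning.
aws s3api put-bucket-versioning \
  --bucket my-hetzner-bucket-x4lk7x \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://fsn1.your-objectstorage.com
```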