Hetzner has now released an S3-compatible object storage solution for their cloud offering. This article covers how to set it up and how to securely push artifacts from GitHub Actions to a folder inside a non-public bucket using s3cmd.

Setting up Hetzner bucket

First things first: log into your Hetzner account, navigate to the Cloud Console, and open Object Storage.
Here you create a new bucket. Note that the name has to be unique across all of Hetzner, and you also pick a region.
For this guide the bucket can be private or public; that is up to you. For writing from GitHub Actions we need credentials either way.

For this example, we have a bucket called “my-test-bucket”.

Setting up GitHub repository secrets

Once you have a bucket, you can create new credentials under Security. These credentials apply to all buckets in the current project. Add them as secrets to your GitHub repository under Settings → Secrets and variables → Actions; at minimum add the key and the secret, and optionally the endpoint URL as well.

In this example we will save the following as secrets:

bucket_key_id = <The credentials you created>
bucket_key_secret = <The credentials you created>
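Before wiring the credentials into CI, it can be worth testing them locally with the same kind of s3cmd config the workflow below generates. This is a sketch: the key values are placeholders for the credentials you created, and “fsn1” is an assumed region; use the one you chose for your bucket.

```shell
#!/bin/sh
# Sketch: build an s3cmd config locally to test the credentials before
# adding them to GitHub. ACCESS_KEY/SECRET_KEY are placeholders, and
# "fsn1" is an assumed region.
ACCESS_KEY="YOUR_ACCESS_KEY"
SECRET_KEY="YOUR_SECRET_KEY"
REGION="fsn1"

cat > ./.s3cfg <<EOF
[default]
access_key = ${ACCESS_KEY}
secret_key = ${SECRET_KEY}
host_base = ${REGION}.your-objectstorage.com
host_bucket = %(bucket)s.${REGION}.your-objectstorage.com
use_https = True
signature_v2 = False
EOF

# With real credentials you could then list the bucket:
# s3cmd -c ./.s3cfg ls s3://my-test-bucket/
```

If the listing works locally, the same key and secret should work once stored as the repository secrets above.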

Creating the GitHub action

Now we have all the parts needed to create a GitHub Action that builds and sends an artifact to our new bucket. This will not cover building an application; instead we create a dummy file to simulate a build output, and then upload it to the bucket.

name: ci_app

on: [push]

jobs:
# #########################################
# Build for linux
# #########################################
  Build_Linux-x64:
    env:
      PATH_BASE: ${{github.workspace}}
      PATH_ARTIFACTS: ${{github.workspace}}/artifacts

    runs-on: ubuntu-latest
    steps:
      # #########################################
      # Setup
      # #########################################
      - uses: actions/checkout@v4

      # #########################################
      # Create dummy artifact
      # #########################################
      - name: Create artifact
        run: |
          echo "IT WORKED!" > my-artifact.txt
          mkdir -p ${{env.PATH_ARTIFACTS}}
          mv my-artifact.txt ${{env.PATH_ARTIFACTS}}/my-artifact.txt
        working-directory: ${{env.PATH_BASE}}
        
      # #########################################
      # Upload to Hetzner S3 storage
      # #########################################
      - name: Install s3cmd
        run: |
          sudo apt-get update
          sudo apt-get install -y s3cmd

      - name: Configure s3cmd for Hetzner
        run: |
          cat > ~/.s3cfg <<EOF
          [default]
          access_key = ${{ secrets.bucket_key_id }}
          secret_key = ${{ secrets.bucket_key_secret }}
          host_base = fsn1.your-objectstorage.com
          host_bucket = %(bucket)s.fsn1.your-objectstorage.com
          use_https = True
          signature_v2 = False
          EOF

      - name: Upload artifact to Hetzner
        run: |
          s3cmd put ${{env.PATH_ARTIFACTS}}/my-artifact.txt s3://my-test-bucket/builds/

Please note that the above is incomplete and only meant as an example.
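Since the goal is a folder inside the bucket, it is worth knowing that s3cmd can also push a whole directory to a prefix in one call. A sketch, where “builds/” is a hypothetical prefix and my-test-bucket is the example bucket from this article:

```shell
#!/bin/sh
# Sketch: upload a whole artifacts directory to a folder (prefix) inside
# the bucket. "builds/" is a hypothetical prefix.
mkdir -p artifacts
echo "IT WORKED!" > artifacts/my-artifact.txt

# --recursive walks the directory; the trailing slashes keep the files
# under the builds/ prefix instead of replacing it. Requires a
# configured ~/.s3cfg as shown in the workflow above:
# s3cmd put --recursive artifacts/ s3://my-test-bucket/builds/
```

This is handy once a real build produces more than one file, since the workflow step stays a single command.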

Why not use latest AWS CLI?

If you use AWS CLI 2.23.0 or later, uploads will always fail, because the CLI now adds a new checksum on all PUT calls as part of its default integrity protections, which Hetzner's S3-compatible API does not support. You can read more in “Using S3 compatible API tools” in the Hetzner Docs.
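If you still want to use the AWS CLI, newer versions expose settings to only compute these checksums when a request strictly requires one. A sketch of that workaround, based on the AWS SDK checksum environment variables (verify against the current AWS and Hetzner docs before relying on it):

```shell
#!/bin/sh
# Sketch: relax the AWS CLI's default integrity checksums so a checksum
# is only sent when a request strictly requires one.
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required

# my-artifact.txt, the bucket and the fsn1 endpoint are the example
# values from this article:
# aws s3 cp my-artifact.txt s3://my-test-bucket/ \
#   --endpoint-url https://fsn1.your-objectstorage.com
```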

Final thoughts

And that is all we need to install s3cmd, configure it for Hetzner, and upload an artifact to our new bucket.
