Setting up my secrets database


With my private git server set up, my next thoughts turned to backing up my GitHub repos and making some of them public, but that added a lot more responsibility on my end to keep my repositories safe and secure.

Luckily, so far I’ve been dealing with - mostly - static sites, so I’ve been able to take my time deciding which secrets manager I’d like for my current architecture.

Journey

Vault

I started out where we all end up: HashiCorp’s Vault. I already had previous experience using it, and it is extremely extensible, offering things like secret rotation, KV stores, audit logs, and dynamic secrets generation. It was everything I needed and then some. And then more, a whole lot more.

I mean, it’s cool, and I’d love to play around with having an ephemeral agent smuggle itself into my container to deliver my .env file before meeting its gracious end, but I have one VM, one user, and a load balancer already taking care of my TLS certificates. It just wasn’t worth it.

Infisical

Continuing on to Infisical, it just wasn’t it either.

I immediately ruled it out because it had a UI, and I’m just a guy who hasn’t figured out Windows Server hosting on Azure and knows better than to expose the port serving their secrets to the internet.

But it led me to a realization, and the end of my journey. Infisical was simple and friendly; it felt right, and in any situation where I needed Vault, it would be a strong contender.

So what simple, lightweight, intuitive way could I store my secrets that, at this point, amounted to a glorious .env file?

A KV store

That’s right: no encryption, no UI, no fancy agent. Just a password-protected, persistent KV store for my VM and its repos, and I’ve been dying to use Redis.

Setting up

So I got to work and got it set up in record time.

Very anticlimactic, I know, but at this point I already had my repo up and hooks in place; all I needed was a compose file, and my shoddy pipeline took care of the rest.

Here’s the compose file

services:
  secrets-db:
    image: redis:latest
    restart: always
    container_name: secrets-db
    command: ["redis-server", "--appendonly", "yes", "--requirepass", "${REDIS_PASSWORD}"]
    volumes:
      - redis_data:/data
    ports:
      - "127.0.0.1:6379:6379" # bind to localhost only; the pipeline runs on the same VM, so this port never needs to face the internet

volumes:
  redis_data:

I also messed around with the /etc/environment file on my git server/VM and added this line before pushing

REDIS_PASSWORD=<your-password-here>

Integrating secrets to pipeline

With Redis set up, I created some secrets using redis-cli and updated my post-receive hook - don’t worry, I’ll be writing a blog about my hooks soon.
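For reference, creating a secret looks something like this. The key name and value here are made up for illustration; the real mechanism is just redis-cli with the password from /etc/environment:

```shell
# Store a secret, authenticating with the same password from /etc/environment.
# --no-auth-warning keeps redis-cli from complaining about -a on the command line.
redis-cli --no-auth-warning -a "$REDIS_PASSWORD" SET LETSENCRYPT_EMAIL "admin@example.com"

# Read it back to confirm it landed.
redis-cli --no-auth-warning -a "$REDIS_PASSWORD" GET LETSENCRYPT_EMAIL
```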

TL;DR: The post-receive hook looks for a /scripts/production/pre-build bash script and runs it before docker compose if it finds one.

That way I can create my .env file locally for development, but also conditionally generate my .env file in production using Redis and a little extra code.
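A minimal sketch of what that pre-build step could look like, assuming the secrets are stored in Redis under the same names as the environment variables. The key list and the fetch_secret helper are mine for illustration, not necessarily the real script:

```shell
#!/usr/bin/env bash
set -euo pipefail

# fetch_secret KEY: look up one secret; redis-cli is the assumed mechanism,
# with REDIS_PASSWORD coming from /etc/environment.
fetch_secret() {
  redis-cli --no-auth-warning -a "$REDIS_PASSWORD" GET "$1"
}

# generate_env_file OUTFILE KEY...: write one KEY=value line per secret.
generate_env_file() {
  local out="$1"; shift
  : > "$out"  # truncate/create the .env file
  local key
  for key in "$@"; do
    printf '%s=%s\n' "$key" "$(fetch_secret "$key")" >> "$out"
  done
}

# Against the live instance, the hook would run something like:
# generate_env_file .env NGINX_PROXY_CONTAINER NGINX_DOCKER_GEN_CONTAINER LETSENCRYPT_EMAIL
```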

Lastly, I updated my docker compose file to use the .env file. By the end of the update, my compose file’s environment attribute looked something like this

environment:
  NGINX_PROXY_CONTAINER: ${NGINX_PROXY_CONTAINER}
  NGINX_DOCKER_GEN_CONTAINER: ${NGINX_DOCKER_GEN_CONTAINER}
  DEFAULT_EMAIL: ${LETSENCRYPT_EMAIL}

Author’s Notes

My VM is backed up, but I’m also looking for ways to back up my Redis volume to a third party as it slowly stores more and more important information.
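One direction I’m considering is a nightly job along these lines. The backup-host name and destination path are placeholders; BGSAVE and docker volume inspect are the real commands:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Ask Redis to write a fresh snapshot to its data directory.
redis-cli --no-auth-warning -a "$REDIS_PASSWORD" BGSAVE

# Resolve where docker keeps the named volume from the compose file,
# then copy its contents (RDB snapshot plus AOF) off-box.
VOLUME_PATH="$(docker volume inspect --format '{{ .Mountpoint }}' redis_data)"
rsync -az "$VOLUME_PATH/" backup-host:/backups/secrets-db/
```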

I’m currently debating using a raspberry pi I own, but let me know if there are better options!