
Local S3 with Django and MinIO


Minions are cool, but have you ever heard about MinIO? It’s also cool.

The why

One of the most helpful yet easy-to-grasp guides for becoming a better web developer is The Twelve-Factor App. I say guide because while it’s good to follow these principles, they are by no means required.

There are two rules that are related to this post.

  • III. Config - Store config in the environment
  • X. Dev/prod parity - Keep development, staging, and production as similar as possible

Chances are you are here because you want to apply these rules in your project. Some people confuse following them with having a single settings file that is completely configurable using env vars. It sounds great, but is it remotely possible? I haven’t seen a single real-world project that uses a single settings file, and even if it were possible, it’s definitely not a good idea. You can still follow these rules and have different settings per environment. Because you disabled DEBUG and adjusted CORS for production, right?
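For instance, a dev settings module can simply layer overrides on top of a shared, env-var-driven base (a sketch; splitting settings into a package is my assumption, not something this post requires):

# settings/dev.py - dev-only overrides; settings/base.py holds the
# shared, env-var-driven configuration
from .base import *  # noqa: F401,F403

DEBUG = True  # disabled in base, enabled only for local development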

Using MinIO makes following Dev/prod parity easier, as you can simulate S3 on your local machine. The biggest advantages of this solution are:

  • you can use the same settings for your static files across all envs
  • you can test AWS policies locally
  • if you use Django with S3, chances are you are using django-storages. And if you are implementing your own storage, you can test it before going to the staging env

And all of that without any 3rd party dependencies for your Django project! You use boto3 / django-storages just like you do in other environments.
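To make that concrete, here is a minimal sketch of talking to the local “S3” with plain boto3, once the MinIO container from the next section is running. The endpoint, credentials and bucket name below match the compose file shown later in this post; the file names are made up for illustration.

import boto3

# Plain boto3, just pointed at the local MinIO container.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minio",
    aws_secret_access_key="minio123",
)

# "local_file.txt" and "remote-key.txt" are made-up names.
s3.upload_file("local_file.txt", "yourbucketname", "remote-key.txt")
for obj in s3.list_objects_v2(Bucket="yourbucketname").get("Contents", []):
    print(obj["Key"])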

The how

New services in docker-compose

You need two new services in your dev docker-compose file.

  • minio/minio:latest - the MinIO service, which will be simulating S3
  • minio/mc:latest - the MinIO Client. It allows you to access your “S3” directly with a convenient API. Of course, instead of using it you could run another service, even another Django container, and achieve more or less the same thing, but it’d be more costly in terms of computing resources (and your time to implement it). This client has a simple yet powerful API that allows you to create buckets, policies and whatever else it is that you need. It creates buckets with a default policy of none, which allows neither reads nor writes unless you are authenticated. The equivalent from boto/storages is private. Other policies that are available out of the box are writeonly, readonly, readwrite and public. But of course you can create your own if you need to.

Putting it all together:
version: '3.7'

services:
  django_backend:
    depends_on:
      - minio
    environment:
      AWS_ACCESS_KEY: minio
      AWS_SECRET_KEY: minio123
      AWS_BUCKET_NAME: yourbucketname

  minio:
    image: minio/minio:latest
    volumes:
      - s3-volume:/data
    ports:
      - "9000:9000"
    expose:
      - "9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  create_buckets:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c '
      mc config host add s3 http://minio:9000 minio minio123 --api S3v4;
      [ -n "$(mc ls s3 | grep yourbucketname)" ] || mc mb s3/yourbucketname;
      exit 0;
      '

volumes:
  s3-volume:

Here the client simply creates a bucket named yourbucketname if it doesn’t exist yet.
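If you prefer to keep that bootstrap in Python, or want to test bucket policies locally (one of the advantages mentioned earlier), here is a rough boto3 equivalent. This is a sketch: the policy document mimics mc’s readonly and is purely illustrative.

import json

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minio",
    aws_secret_access_key="minio123",
)

# Create the bucket only if it doesn't exist yet, like the mc one-liner.
try:
    s3.head_bucket(Bucket="yourbucketname")
except ClientError:
    s3.create_bucket(Bucket="yourbucketname")

# Roughly what mc's "readonly" policy grants: anonymous reads, no writes.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["*"]},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::yourbucketname/*"],
    }],
}
s3.put_bucket_policy(Bucket="yourbucketname", Policy=json.dumps(policy))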

Changes in your Django settings

Now you can move your stage/prod storage settings into your dev settings as well:

import os

DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_ACCESS_KEY_ID = os.environ['AWS_ACCESS_KEY']
AWS_SECRET_ACCESS_KEY = os.environ['AWS_SECRET_KEY']
AWS_STORAGE_BUCKET_NAME = os.environ['AWS_BUCKET_NAME']

By now you have probably noticed that you can access the MinIO web panel at http://localhost:9000 with the credentials from your compose file. Test that upload/download works via the web panel. If by now you managed to:

  • add these two services to your compose file
  • change Django settings
  • access MinIO via the web panel

then you need to do one final thing: add more settings! Why haven’t I mentioned them earlier? Because these settings need to be there ONLY for your dev Django settings. These two settings are:

AWS_S3_ENDPOINT_URL = "http://minio:9000/"
AWS_S3_USE_SSL = False

Of course these settings come from django-storages. The first one, AWS_S3_ENDPOINT_URL, tells storages to look for the files there instead of at the real S3 URI. The latter one, well. You know what it does.
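You can see the effect immediately in ./manage.py shell (a sketch; the key name is made up):

from django.core.files.storage import default_storage

# With AWS_S3_ENDPOINT_URL set, generated URLs point at the container
# instead of amazonaws.com. "some-key.txt" is a made-up key.
print(default_storage.url("some-key.txt"))
# e.g. http://minio:9000/yourbucketname/some-key.txt?X-Amz-Signature=...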

Beware! You will be able to access http://minio only if your Django service is connected to the same Docker network as your MinIO container. If you don’t want to use the default network, create one yourself.
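Once the networking is right, you can smoke-test the whole chain from ./manage.py shell (again a sketch; the file name and contents are invented):

from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

# Save a file through django-storages, then read it back from MinIO.
path = default_storage.save("healthcheck.txt", ContentFile(b"hello minio"))
print(default_storage.exists(path))  # True if the upload reached the bucket
with default_storage.open(path) as f:
    print(f.read())  # b"hello minio"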

Things to consider

To sum up, if you followed along step by step, you probably have it working. If you have just started a new project and haven’t set up django-storages yet, things might not work out of the box; setting up django-storages is not the point of this post. Also, remember that you now have a separate container that sets up your bucket. Even if you managed to run everything locally, it doesn’t mean it will “just run” on staging, as you might not have created your bucket on S3 yet, or the policies might differ.