
Connect to Minio Object Storage Bucket Remotely


Ok, so this article is slightly different from most I post up here. Today I'm jotting down an issue I ran into while connecting to a Minio bucket remotely within my LAN.

I'm playing around with an open source file upload service. I'd like to host the site and store the files it serves on my Minio instance. Minio mimics an AWS S3 storage bucket, which makes it perfect for a site like Zipline.

Getting a Docker Compose instance up and running was super easy, and the reverse proxy setup was straightforward. I decided to host the data on an object storage server like Minio.

Minio itself was easy to get up and running, at least in the basic single-instance setup I decided to test and roll out. I highly recommend it if you need an object storage server in your environment. The main reason I want it in my setup is the expandability of a Minio cluster: growing the storage capacity is dead simple.
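For reference, a single-instance Minio deploy can be sketched in Compose like this. This is a minimal example, not my exact setup; the service name, volume name, and credentials are placeholders. Port 9000 serves the S3 API (which matters later), and 9001 serves the web console.

```yaml
version: '3'
services:
  minio:
    image: minio/minio
    # Serve /data over the S3 API; pin the web console to 9001
    command: server /data --console-address ":9001"
    ports:
      - '9000:9000'   # S3 API
      - '9001:9001'   # web console
    environment:
      - MINIO_ROOT_USER=changeme
      - MINIO_ROOT_PASSWORD=changeme
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```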

On to the guts of my issue and the reason for this article. In setting up Zipline, I was unable to get it to connect properly to the Minio bucket I was trying to use.

I was getting an obscure error that I could not find much information about, other than that it was a connection issue, most likely due to credentials.

zipline-zipline-1   | 2022-12-30 05:22:37,371 PM info  [server] started production [email protected] server
zipline-zipline-1   | node:events:491
zipline-zipline-1   |       throw er; // Unhandled 'error' event
zipline-zipline-1   |       ^
zipline-zipline-1   |
zipline-zipline-1   | Error: connect ECONNREFUSED 192.168.10.90:80
zipline-zipline-1   |     at __node_internal_captureLargerStackTrace (node:internal/errors:491:5)
zipline-zipline-1   |     at __node_internal_exceptionWithHostPort (node:internal/errors:669:12)
zipline-zipline-1   |     at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1471:16)
zipline-zipline-1   | Emitted 'error' event on Readable instance at:
zipline-zipline-1   |     at DestroyableTransform.<anonymous> (/zipline/node_modules/minio/dist/main/minio.js:1831:127)
zipline-zipline-1   |     at DestroyableTransform.emit (node:events:513:28)
zipline-zipline-1   |     at /zipline/node_modules/minio/dist/main/minio.js:1766:33
zipline-zipline-1   |     at ClientRequest.<anonymous> (/zipline/node_modules/minio/dist/main/minio.js:574:9)
zipline-zipline-1   |     at ClientRequest.emit (node:events:525:35)
zipline-zipline-1   |     at Socket.socketErrorListener (node:_http_client:490:9)
zipline-zipline-1   |     at Socket.emit (node:events:513:28)
zipline-zipline-1   |     at emitErrorNT (node:internal/streams/destroy:151:8)
zipline-zipline-1   |     at emitErrorCloseNT (node:internal/streams/destroy:116:3)
zipline-zipline-1   |     at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
zipline-zipline-1   |   errno: -111,
zipline-zipline-1   |   code: 'ECONNREFUSED',
zipline-zipline-1   |   syscall: 'connect',
zipline-zipline-1   |   address: '192.168.10.90',
zipline-zipline-1   |   port: 80
zipline-zipline-1   | }
zipline-zipline-1   |
zipline-zipline-1   | Node.js v19.3.0
zipline-zipline-1 exited with code 1

All the Zipline documentation regarding connecting to an S3 instance explained that you need to add the following DATASOURCE options:

DATASOURCE_TYPE=s3 
DATASOURCE_S3_ACCESS_KEY_ID=key 
DATASOURCE_S3_SECRET_ACCESS_KEY=secret 
DATASOURCE_S3_BUCKET=bucket 
DATASOURCE_S3_ENDPOINT=s3.amazonaws.com 
DATASOURCE_S3_REGION=us-west-2 
DATASOURCE_S3_FORCE_S3_PATH=false 
DATASOURCE_S3_USE_SSL=false 

You can find the details here: https://github.com/diced/zipline/blob/trunk/.env.local.example

This looked easy enough to fill in, until I got the above error over and over. No matter what I did, it ended in the same failure. I asked myself why it was failing when I was giving it the right IP. Notice the error shows it trying port 80. If I appended :9000 to the ENDPOINT address instead, it failed outright with a straight rejection, so embedding the port in the endpoint is not the syntax it expects.
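A quick way to confirm which port the S3 API actually answers on is Minio's documented liveness endpoint. The IP below is the Minio host from my error output; substitute your own.

```shell
# The S3 API port: Minio's health check should return HTTP 200 here
curl -i http://192.168.10.90:9000/minio/health/live

# Port 80, which Zipline was defaulting to: connection refused,
# matching the ECONNREFUSED in the log above
curl -i http://192.168.10.90/minio/health/live
```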

My Minio instance listens on port 9000, as mentioned. Alongside the ENDPOINT entry, I decided to try a separate PORT entry, assuming it would look like this:

DATASOURCE_S3_PORT=9000

Started Docker back up and, boom, it worked. The final docker-compose.yml ended up as follows.

version: '3'
services:
  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_USER=changeme
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=changeme
    volumes:
      - pg_data02:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 10s
      timeout: 5s
      retries: 5

  zipline:
    image: ghcr.io/diced/zipline
    ports:
      - '3000:3000'
    restart: always
    environment:
      - CORE_RETURN_HTTPS=true
      - CORE_SECRET=changeme
      - CORE_HOST=0.0.0.0
      - CORE_PORT=3000
      - CORE_DATABASE_URL=postgres://changeme:changeme@postgres/changeme
      - CORE_LOGGER=true
      - DATASOURCE_TYPE=s3
      - DATASOURCE_S3_ACCESS_KEY_ID=changeme
      - DATASOURCE_S3_SECRET_ACCESS_KEY=changeme
      - DATASOURCE_S3_BUCKET=changeme
      - DATASOURCE_S3_ENDPOINT=192.168.1.25
      - DATASOURCE_S3_PORT=9000
      - DATASOURCE_S3_REGION=us-west-eug01
      - DATASOURCE_S3_FORCE_S3_PATH=true
      - DATASOURCE_S3_USE_SSL=false
    volumes:
      - '$PWD/public:/zipline/public'
    depends_on:
      - 'postgres'

volumes:
  pg_data02:

It's pretty cool to watch your first uploaded file hit the bucket on your very own Minio instance.

Enjoy!