howtos

To help me remember, and perhaps to help you :)

Why

Because we want to use our own object storage system, on-premises.

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate
  • CloudFlare
    • DNS Manager

Where can I find out more about the project?

Project

Docker installation

Single Node Multi Drive Arch

Hardware Requirements

Virtual Machine

vcpu: 8
memory: 8 GB RAM
network: 1 Gbit
disk: 350 GB

Disk layout

root (30 GB)
/var/lib/docker (30 GB)
/opt/minio (300 GB)

Network requirements

These are all the necessary ports to open

22/tcp (SSH)
80/tcp (MinIO API)
8080/tcp (MinIO console)

Any other port should be closed.
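
If you use ufw, a minimal sketch to allow only these ports could look like this (assuming ufw is available on your node; enable it only after confirming the SSH rule is in place):

# default deny, then open only what we need
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 8080/tcp
ufw enable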

DNS requirements

We'll use two DNS records

minio-admin.domain.tld (console)
minio.domain.tld (api)
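
Once the records are created, you can confirm they resolve to your server (the names are placeholders for your real domain):

dig +short minio.domain.tld
dig +short minio-admin.domain.tld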

How to install it?

updating your node

apt-get update
apt-get upgrade -y

installing utilities

apt install screen htop net-tools ccze git

Docker

Docker Install

curl -fsSL https://get.docker.com | bash

Docker Configuration

Let's create the configuration file.

vim /etc/docker/daemon.json

Content

{
  "default-address-pools": [
    {
      "base": "10.20.30.0/24",
      "size": 24
    },
    {
      "base": "10.20.31.0/24",
      "size": 24
    }
  ]
}

Here we're defining uncommon networks to avoid conflicts with your provider's or organization's networks. You need to restart Docker afterwards.

systemctl restart docker
systemctl enable docker
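
To confirm the address pools took effect, you can create a throwaway network and check which subnet Docker assigns to it; it should come from the 10.20.30.0/24 pool defined above:

docker network create pool-test
docker network inspect pool-test --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
docker network rm pool-test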

Docker-compose

Docker-compose install

Download

curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url  | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -

Adjusting permissions

chmod +x docker-compose-linux-x86_64

Moving the binary to the /usr/local/bin directory

mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
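
You can verify the binary works:

docker-compose version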

Minio

Creating directories

mkdir -p /opt/minio/{docker,storage}
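
The compose file below bind-mounts one host directory per simulated drive; Docker creates missing bind-mount sources on first run, but you can pre-create them explicitly (bash brace expansion generates data1-1 through data4-2):

mkdir -p /opt/minio/storage/data{1..4}-{1..2}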

Creating docker-compose config

vim /opt/minio/docker/docker-compose.yaml

Content

version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minio
    MINIO_ROOT_PASSWORD: your_password_here
    MINIO_SERVER_URL: https://minio.domain.tld
    MINIO_DOMAIN: minio.domain.tld
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# the API through port 80 and the console through port 8080.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    restart: always
    volumes:
      - /opt/minio/storage/data1-1:/data1
      - /opt/minio/storage/data1-2:/data2

  minio2:
    <<: *minio-common
    hostname: minio2
    restart: always
    volumes:
      - /opt/minio/storage/data2-1:/data1
      - /opt/minio/storage/data2-2:/data2

  minio3:
    <<: *minio-common
    hostname: minio3
    restart: always
    volumes:
      - /opt/minio/storage/data3-1:/data1
      - /opt/minio/storage/data3-2:/data2

  minio4:
    <<: *minio-common
    hostname: minio4
    restart: always
    volumes:
      - /opt/minio/storage/data4-1:/data1
      - /opt/minio/storage/data4-2:/data2

  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    restart: always
    volumes:
      - /opt/minio/docker/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "8080:8080"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## We bind-mount host directories in each service above,
## so no named volume definitions are needed here.

Creating nginx config

vim /opt/minio/docker/nginx.conf 

Content

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;
    keepalive_timeout  65;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 80;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }

    server {
        listen       8080;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
            real_ip_header X-Real-IP;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

starting containers

cd /opt/minio/docker
docker-compose up -d

checking services

docker-compose ps

Expected output

NAME                IMAGE                                              COMMAND                  SERVICE             CREATED             STATUS                   PORTS
docker-minio1-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio1              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio2-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio2              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio3-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio3              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio4-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio4              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-nginx-1      nginx:1.19.2-alpine                                "/docker-entrypoint.…"   nginx               11 minutes ago      Up 9 minutes             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp

Check the ports 80 and 8080

netstat -ntpl | grep docker

Expected Output

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2116141/docker-prox
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      2116110/docker-prox
tcp6       0      0 :::80                   :::*                    LISTEN      2116149/docker-prox
tcp6       0      0 :::8080                 :::*                    LISTEN      2116123/docker-prox

You can now validate the console

curl http://localhost:8080

Expected Output

<!doctype html><html lang="en"><head><meta charset="utf-8"/><base href="/"/><meta content="width=device-width,initial-scale=1" name="viewport"/><meta content="#081C42" media="(prefers-color-scheme: light)" name="theme-color"/><meta content="#081C42" media="(prefers-color-scheme: dark)" name="theme-color"/><meta content="MinIO Console" name="description"/><meta name="minio-license" content="agpl" /><link href="./styles/root-styles.css" rel="stylesheet"/><link href="./apple-icon-180x180.png" rel="apple-touch-icon" sizes="180x180"/><link href="./favicon-32x32.png" rel="icon" sizes="32x32" type="image/png"/><link href="./favicon-96x96.png" rel="icon" sizes="96x96" type="image/png"/><link href="./favicon-16x16.png" rel="icon" sizes="16x16" type="image/png"/><link href="./manifest.json" rel="manifest"/><link color="#3a4e54" href="./safari-pinned-tab.svg" rel="mask-icon"/><title>MinIO Console</title><script defer="defer" src="./static/js/main.92fa0385.js"></script><link href="./static/css/main.02c1b6fd.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"><div id="preload"><img src="./images/background.svg"/> <img src="./images/background-wave-orig2.svg"/></div><div id="loader-block"><img src="./Loader.svg"/></div></div></body></html>

You can now validate if the API is running

curl http://localhost:80

Expected output

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>177E5BC14618C529</RequestId><HostId>e0c385c033c4356721cc9121d3109c9b9bfdefb22fd2747078acd22328799e36</HostId></Error>

Validate if the API is healthy

curl -si http://localhost/minio/health/live

Expected output

HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Thu, 24 Aug 2023 15:38:38 GMT
Content-Length: 0
Connection: keep-alive
Accept-Ranges: bytes
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
X-Amz-Id-2: 46efbbb7efbd81c7d995bde03cc6fabf60c12f80d4e074c1c972dbc4d583c3d4
X-Amz-Request-Id: 177E5BDDF79EDEF8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
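
To exercise the API end to end, you can use the MinIO client (mc). A minimal sketch, assuming you fetch the official mc binary and reuse the root credentials from the compose file:

curl -o /usr/local/bin/mc https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x /usr/local/bin/mc
mc alias set local http://localhost:80 minio your_password_here
mc mb local/test-bucket
mc ls local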

Reverse Proxy

You can now configure your reverse proxy

minio-admin.domain.tld => ip-of-the-vm, port 8080.
minio.domain.tld => ip-of-the-vm, port 80.

We won't cover the full reverse proxy config here; maybe in the future.
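
That said, here's a minimal sketch of an external vhost for the API, following the same pattern as the other howtos in this collection (IPs, names, and certificate paths are placeholders; adapt before using):

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name minio.domain.tld;

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

    # uploads can be large, so don't cap the request body size
    client_max_body_size 0;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://ip-of-the-vm:80;
    }
}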

Accessing Minio

After the configuration you can visit the admin console

https://minio-admin.domain.tld

Viewing logs

You can follow the container logs while using MinIO.

cd /opt/minio/docker
docker-compose logs -f --tail=10

Cheers [s]



Why?

Because it's important to run the latest version, with the latest bug fixes and features.

What do I need to update?

You need a working Mastodon, and we expect that you followed our howto

How to upgrade

The Docker side

stop the containers

cd /opt/mastodon/docker
docker-compose down

edit the versions.env file

vim /opt/mastodon/docker/versions.env

and change the version to the latest

MASTODON_VERSION=v4.1.2

to

MASTODON_VERSION=v4.1.4
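
Before clearing anything, if you customized files under /opt/mastodon/data/web, consider a quick backup first (a simple sketch; adjust the destination to taste):

tar czf /opt/mastodon/web-backup-$(date +%F).tar.gz /opt/mastodon/data/web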

clear the web directories

rm -rf /opt/mastodon/data/web/public/*
rm -rf /opt/mastodon/data/web/config/*
rm -rf /opt/mastodon/data/web/app/*
rm -rf /opt/mastodon/data/web/system/*

and start all containers again

cd /opt/mastodon/docker
docker-compose up -d

and run the migration

cd /opt/mastodon/docker
docker-compose run --rm shell bundle exec rake db:migrate

customizations

you need to re-apply your customizations to the files in these directories if you had modified anything before.

/opt/mastodon/data/web/public/*
/opt/mastodon/data/web/config/*
/opt/mastodon/data/web/app/*
/opt/mastodon/data/web/system/*

The External Nginx side

Now we need to update the static files cache on our nginx reverse proxy.

nginx cache config

Edit your mastodon vhost file

vim /etc/nginx/conf.d/mastodon.conf

find the cache line

proxy_cache_path /var/cache/mastodon/public/4.1.2 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

change the cache directory

proxy_cache_path /var/cache/mastodon/public/4.1.4 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

create the new directory

mkdir -p /var/cache/mastodon/public/4.1.4

root directory

find the root directory line

  root /var/www/mastodon/dev.bolha.us/public/4.1.2;

change it

  root /var/www/mastodon/dev.bolha.us/public/4.1.4;

create the new directory

mkdir -p /var/www/mastodon/dev.bolha.us/public/4.1.4

creating a docker volume to copy the new static files

docker volume create --opt type=none --opt device=/var/www/mastodon/dev.bolha.us/public/4.1.4 --opt o=bind mastodon_public_4.1.4

copying the new static files from the new version to the volume

docker run --rm -v "mastodon_public_4.1.4:/static" tootsuite/mastodon:v4.1.4 bash -c "cp -r /opt/mastodon/public/* /static/"

checking the files

ls /var/www/mastodon/dev.bolha.us/public/4.1.4

remove the temporary volume

docker volume rm mastodon_public_4.1.4

now verify your nginx config

# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

now restart your nginx

systemctl restart nginx && systemctl status nginx

That's it!



Last update: 2023-07-07

Why?

Because we want to use a federated forum and link aggregator :)

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

creating directories

mkdir -p /opt/lemmy
mkdir -p /opt/lemmy/{docker,data,config}
mkdir -p /opt/lemmy/data/{postgresql,pictrs,themes}
mkdir -p /opt/lemmy/config/{lemmy,postgresql,nginx}

defining permissions

chown -R 991:991 /opt/lemmy/data/pictrs

nginx config

creating the nginx.conf file

vim /opt/lemmy/config/nginx/nginx.conf

content

worker_processes auto;

events {
    worker_connections 1024;
}

http {

    map "$request_method:$http_accept" $proxpass {
        default "http://lemmy-ui";
        "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy";
        "~^(?!(GET|HEAD)).*:" "http://lemmy";
    }

    upstream lemmy {
        server "lemmy:8536";
    }

    upstream lemmy-ui {
        server "lemmy-ui:1234";
    }

    server {
        listen 1236;
        listen 8536;

        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        location / {
            proxy_pass $proxpass;
            rewrite ^(.+)/+$ $1 permanent;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

lemmy backend config

creating the lemmy config file

vim /opt/lemmy/config/lemmy/config.hjson

content

{
  database: {
    host: postgres
    password: "your_postgresql_password_here"
  }
  hostname: "bolha.forum"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "your_pictrs_api_key_here"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "noreply@bolha.forum"
    tls_type: "none"
  }
}
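
Replace the placeholder passwords and API keys above with real secrets; an easy way to generate them:

openssl rand -hex 15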

docker config

creating the docker-compose.yaml

vim /opt/lemmy/docker/docker-compose.yml

content

version: "3.7"

services:

  proxy:
    image: nginx:1-alpine
    container_name: lemmy_proxy
    ports:
      - "8000:8536"
    volumes:
      - /opt/lemmy/config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
    image: dessalines/lemmy:0.18.1
    container_name: lemmy_backend
    hostname: lemmy
    restart: always
    environment:
      - RUST_LOG="warn"
    volumes:
      - lemmy_config:/config
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1
    container_name: lemmy_frontend
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=bolha.forum
      - LEMMY_UI_HTTPS=true
    volumes:
      - extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-rc.7
    container_name: lemmy_images_backend
    hostname: pictrs
    environment:
      - PICTRS__API_KEY=your_pictrs_api_key_here
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - pictrs:/mnt:Z
    restart: always
    deploy:
      resources:
        limits:
          memory: 690m

  postgres:
    image: postgres:15-alpine
    container_name: lemmy_database
    hostname: postgres
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=your_postgresql_password_here
      - POSTGRES_DB=lemmy
    volumes:
      - postgresql:/var/lib/postgresql/data:Z
      - /opt/lemmy/config/postgresql/postgresql.conf:/etc/postgresql.conf
    restart: always

  postfix:
    image: mwader/postfix-relay
    container_name: lemmy_smtp_relay
    environment:
      - POSTFIX_myhostname=bolha.forum
      - POSTFIX_smtp_sasl_auth_enable=yes
      - POSTFIX_smtp_sasl_password_maps=static:user@domain.tld:user_password_here
      - POSTFIX_smtp_sasl_security_options=noanonymous
      - POSTFIX_relayhost=smtp.domain.tld:587
    restart: "always"

volumes:
  lemmy_config:
    driver_opts:
      type: none
      device: /opt/lemmy/config/lemmy
      o: bind
  extra_themes:
    driver_opts:
      type: none
      device: /opt/lemmy/data/themes
      o: bind
  pictrs:
    driver_opts:
      type: none
      device: /opt/lemmy/data/pictrs
      o: bind
  postgresql:
    driver_opts:
      type: none
      device: /opt/lemmy/data/postgresql
      o: bind

spinning up the lemmy instance

$ cd /opt/lemmy/docker
$ docker-compose up -d

checking

$ docker-compose ps

NAME                   IMAGE                        COMMAND                  SERVICE             CREATED             STATUS              PORTS
lemmy_backend          dessalines/lemmy:0.18.1      "/app/lemmy"             lemmy               34 minutes ago      Up 34 minutes
lemmy_database         postgres:15-alpine           "docker-entrypoint.s…"   postgres            34 minutes ago      Up 34 minutes       5432/tcp
lemmy_frontend         dessalines/lemmy-ui:0.18.1   "docker-entrypoint.s…"   lemmy-ui            34 minutes ago      Up 34 minutes       1234/tcp
lemmy_images_backend   asonix/pictrs:0.4.0-rc.7     "/sbin/tini -- /usr/…"   pictrs              34 minutes ago      Up 34 minutes       6669/tcp, 8080/tcp
lemmy_proxy            nginx:1-alpine               "/docker-entrypoint.…"   proxy               34 minutes ago      Up 34 minutes       80/tcp, 0.0.0.0:8000->8536/tcp, :::8000->8536/tcp
lemmy_smtp_relay       mwader/postfix-relay         "/root/run"              postfix             34 minutes ago      Up 34 minutes       25/tcp

You can see that our lemmy_proxy (nginx) is listening on port 8000.
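
You can also confirm the backend answers through the proxy before wiring up TLS; nodeinfo is a standard fediverse endpoint, so it makes a reasonable smoke test:

curl -s http://localhost:8000/nodeinfo/2.0.json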

Now let's configure the external reverse proxy.

external reverse-proxy config

certbot + letsencrypt

we're using the Cloudflare plugin with certbot; you need to have the credentials file ready, like this example

# cat /etc/letsencrypt/cloudflare/bolha-forum.conf
dns_cloudflare_email = dns@bolha.forum
dns_cloudflare_api_key = your_token_here

then you can generate the certificate

# certbot certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare/bolha-forum.conf -d "*.bolha.forum,bolha.forum"

now we can configure our nginx!

nginx config

external reverse proxy

server {
    listen your_listen_ip_here:80;
    server_name bolha.forum;
    location / {
        return 301 https://bolha.forum$request_uri;
    }
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name bolha.forum;

    ssl_certificate /etc/letsencrypt/live/bolha.forum/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bolha.forum/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_dhparam /etc/letsencrypt/dh-param.pem;

    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

    # Specifies a curve for ECDHE ciphers.
    ssl_ecdh_curve prime256v1;

    # Server should determine the ciphers, not the client
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Enable compression for JS/CSS/HTML bundle, for improved client load times.
    # It might be nice to compress JSON, but leaving that out to protect against potential
    # compression+encryption information leak attacks like BREACH.
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_vary on;

    # Only connect to this site via HTTPS for the two years
    add_header Strict-Transport-Security "max-age=63072000";

    # Various content security headers
    add_header Referrer-Policy "same-origin";
    add_header X-Content-Type-Options "nosniff";
    add_header X-Frame-Options "DENY";
    add_header X-XSS-Protection "1; mode=block";

    # Upload limit for pictrs
    client_max_body_size 25M;


    location / {
      proxy_pass http://your_docker_host_ip_here:your_port_here;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

check the config

nginx -t

expected output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

and reload the configuration

# nginx -s reload

that's it!

Go to your lemmy!

Now you can access your lemmy instance

Enjoy!



Why?

We want to offer translations inside Mastodon using libretranslate as our backend.

What do I need to install?

You need a working Mastodon; we recommend this howto

You need a working LibreTranslate; we recommend this howto

How to integrate them?

You need to add these two variables to the application.env file if you are following our mastodon howto.

LIBRE_TRANSLATE_ENDPOINT=https://libretranslate.bolha.tools
LIBRE_TRANSLATE_API_KEY=ecae7db0-bolha-us-is-cool-c84c14d2117a

Then restart it

cd /opt/mastodon/docker
docker-compose restart

After that you can check the logs

docker-compose logs -f | grep TranslationsController

Expected output with status code 200

website            | [01fa1ece-5ab3-411d-bd6b-4b5131096735] method=POST path=/api/v1/statuses/110658724777490930/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=200 duration=2988.25 view=0.77 db=2.32

Sometimes you will get a status code 503; it happens, it's not perfect, but it works well most of the time.

website            | [752a45c9-a94a-408a-8262-7b71cc1528e9] method=POST path=/api/v1/statuses/110658727361133356/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=503 duration=10117.47 view=0.49 db=2.19
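
If 503s become frequent, it's worth checking that the LibreTranslate endpoint is reachable from the Mastodon host at all; the /languages endpoint is public and needs no API key:

curl -s https://libretranslate.bolha.tools/languages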

Enjoy it!

:)



Why?

The main use of this LibreTranslate instance is to translate Mastodon toots.

What do I need to install?

You need a Linux Server with Docker, and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

first, let's create the directories

mkdir -p /opt/libretranslate/docker
mkdir -p /opt/libretranslate/data/{keys,local}

now let's configure the permissions

chown 1032:1032 /opt/libretranslate/data
chown 1032:1032 /opt/libretranslate/data/keys
chown 1032:1032 /opt/libretranslate/data/local

then, let's create the docker-compose file

cd /opt/libretranslate/docker
vim docker-compose.yaml

here follows the content; change the parameters for your setup

version: "3"

services:
  libretranslate:
    container_name: libretranslate
    image: libretranslate/libretranslate:v1.3.11
    restart: unless-stopped
    dns:
      - 1.1.1.1
      - 8.8.8.8
    ports:
      - "5000:5000"
    healthcheck:
      test: ['CMD-SHELL', './venv/bin/python scripts/healthcheck.py']
    env_file:
      - libretranslate.env
    volumes:
     - libretranslate_api_keys:/app/db
     - libretranslate_local:/home/libretranslate/.local

volumes:
  libretranslate_api_keys:
    driver_opts:
      type: none
      device: /opt/libretranslate/data/keys
      o: bind
  libretranslate_local:
    driver_opts:
      type: none
      device: /opt/libretranslate/data/local
      o: bind

then, let's create the libretranslate env file

vim libretranslate.env

here follows the content; change the parameters for your setup

LT_DEBUG=true
LT_UPDATE_MODELS=true
LT_SSL=true
LT_SUGGESTIONS=false
LT_METRICS=true

LT_API_KEYS=true

LT_THREADS=12
LT_FRONTEND_TIMEOUT=2000

#LT_REQ_LIMIT=400
#LT_CHAR_LIMIT=1200

LT_API_KEYS_DB_PATH=/app/db/api_keys.db

all right, let's spin up the libretranslate

docker-compose up -d

installing the model files

you should enter the container

docker exec -it libretranslate bash

and run the command to install all languages

for i in `/app/venv/bin/argospm list`;do /app/venv/bin/argospm install $i;done

it will take some time to install; go drink a coffee, then check the directory on your host

$ exit
$ ls -1 /opt/libretranslate/data/local/share/argos-translate/packages/

ar_en
de_en
en_ar
en_de
en_es
en_fi
en_fr
en_ga
en_hi
en_hu
en_id
en_it
en_ja
en_ko
en_nl
en_pl
en_pt
en_sv
en_uk
es_en
fi_en
fr_en
ga_en
hi_en
hu_en
id_en
it_en
ja_en
ko_en
nl_en
pl_en
pt_en
ru_en
sv_en
translate-az_en-1_5
translate-ca_en-1_7
translate-cs_en-1_5
translate-da_en-1_3
translate-el_en-1_5
translate-en_az-1_5
translate-en_ca-1_7
translate-en_cs-1_5
translate-en_da-1_3
translate-en_el-1_5
translate-en_eo-1_5
translate-en_fa-1_5
translate-en_he-1_5
translate-en_ru-1_7
translate-en_sk-1_5
translate-en_th-1_0
translate-en_tr-1_5
translate-en_zh-1_7
translate-eo_en-1_5
translate-fa_en-1_5
translate-he_en-1_5
translate-sk_en-1_5
translate-th_en-1_0
translate-tr_en-1_5
translate-zh_en-1_7
uk_en

Awesome, it's all there.

creating the api key

Since we're using "LT_API_KEYS=true", we need to create an API key to be able to use LibreTranslate via the API. Let's go into the container again

docker exec -it libretranslate bash

let's create a key with permission to run 120 requests per minute.

/app/venv/bin/ltmanage keys add 120

example of the expected output (listing the existing keys)

libretranslate@ba7f705d97b9:/app$ /app/venv/bin/ltmanage keys
ecae7db0-bolha-us-is-cool-c84c14d2117a: 1200

nice, everything is ready to be used, now let's configure your nginx!

testing the api

curl -XPOST -H "Content-type: application/json" -d '{
"q": "Bolha.io is the most cool project in the fediverso",
"source": "en",
"target": "pt"
}' 'http://localhost:5000/translate'

expected output

{"translatedText":"Bolha.io é o projeto mais legal no fediverso"}

external nginx

In our case, libretranslate is behind an External NGINX Reverse Proxy.

Here follows the config snippet used

upstream libretranslate {
    server your_docker_host_ip:your_libretranslate_port fail_timeout=0;
}

server {
  listen 144.217.95.91:80;
  server_name libretranslate.domain.tld;
  return 301 https://libretranslate.domain.tld$request_uri;
}

server {

  listen 144.217.95.91:443 ssl http2;
  server_name libretranslate.domain.tld;

  access_log /var/log/nginx/libretranslate-domain-tld.log;
  error_log /var/log/nginx/libretranslate-domain-tld.log;

  ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/letsencrypt/dh-param.pem;

  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  location / {
    proxy_pass http://libretranslate;
  }

}

Now you can access your LibreTranslate via the web

That's it :)

[s]



Why?

We need to organize our docs and information about the gcn and bolha collective, and DokuWiki is the best wiki for this because we don't need a database; it's a flat-file system.

Fast, flexible and easy to backup.

What do I need to install?

You need a Linux Server with Docker, and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

first, let's create the directories

mkdir -p /opt/dokuwiki/docker
mkdir -p /opt/dokuwiki/data

then, let's create the docker-compose file

cd /opt/dokuwiki/docker
vim docker-compose.yaml

here follows the content; change the parameters for your setup

version: '3'
services:
  dokuwiki:
    image: lscr.io/linuxserver/dokuwiki:latest
    container_name: dokuwiki
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - dokuwiki_config:/config
    ports:
      - 8081:80
    restart: unless-stopped

volumes:
  dokuwiki_config:
    driver_opts:
      type: none
      device: /opt/dokuwiki/data
      o: bind

all right, let's spin up the dokuwiki

docker-compose up -d

Now finish the configuration using the web browser

https://dokuwiki.domain.tld/install.php
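
If the installer page doesn't load, first check that the container answers locally on the host port we mapped (8081):

curl -I http://localhost:8081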

external nginx

In our case, Dokuwiki is behind an External NGINX Reverse Proxy.

Here follows the config snippet used

upstream dokuwiki {
    server your_docker_host_ip:your_dokuwiki_port fail_timeout=0;
}

server {
  listen 144.217.95.91:80;
  server_name dokuwiki.domain.tld;
  return 301 https://dokuwiki.domain.tld$request_uri;
}

server {

  listen 144.217.95.91:443 ssl http2;
  server_name dokuwiki.domain.tld;

  access_log /var/log/nginx/dokuwiki-domain-tld.log;
  error_log /var/log/nginx/dokuwiki-domain-tld.log;

  ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/letsencrypt/dh-param.pem;

  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  location / {
    proxy_pass http://dokuwiki;
  }

}

That's it :)

[s]



Why?

I like to have my own self-hosted password manager; I trust only myself :)

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

first, let's create the directories

mkdir -p /opt/passbolt/docker
mkdir -p /opt/passbolt/data/{database,gpg,jwt}

then, let's create the docker-compose file

cd /opt/passbolt/docker
vim docker-compose.yaml

here follows the content; change the parameters for your setup

version: "3.9"
services:
  db:
    image: mariadb:10.11
    restart: unless-stopped
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: "true"
      MYSQL_DATABASE: "passbolt"
      MYSQL_USER: "passbolt"
      MYSQL_PASSWORD: "your_mysql_password_here"
    volumes:
      - database_volume:/var/lib/mysql

  passbolt:
    image: passbolt/passbolt:latest-ce
    #Alternatively you can use rootless:
    #image: passbolt/passbolt:latest-ce-non-root
    restart: unless-stopped
    depends_on:
      - db
    environment:
      DATASOURCES_DEFAULT_HOST: "db"
      DATASOURCES_DEFAULT_USERNAME: "passbolt"
      DATASOURCES_DEFAULT_PASSWORD: "your_mysql_password_here"
      DATASOURCES_DEFAULT_DATABASE: "passbolt"
      APP_FULL_BASE_URL: https://passbolt.domain.tld
      EMAIL_DEFAULT_FROM: passbolt@domain.tld
      EMAIL_TRANSPORT_DEFAULT_HOST: mail.domain.tld
      EMAIL_TRANSPORT_DEFAULT_PORT: 587
      EMAIL_TRANSPORT_DEFAULT_USERNAME: user@domain.tld
      EMAIL_TRANSPORT_DEFAULT_PASSWORD: user_password_here
      EMAIL_TRANSPORT_DEFAULT_TLS: true
      PASSBOLT_KEY_EMAIL: passbolt@domain.tld

    volumes:
      - gpg_volume:/etc/passbolt/gpg
      - jwt_volume:/etc/passbolt/jwt
    command:
      [
        "/usr/bin/wait-for.sh",
        "-t",
        "0",
        "db:3306",
        "--",
        "/docker-entrypoint.sh",
      ]
    ports:
      - 80:80

volumes:
  database_volume:
    driver_opts:
      type: none
      device: /opt/passbolt/data/database
      o: bind
  gpg_volume:
    driver_opts:
      type: none
      device: /opt/passbolt/data/gpg
      o: bind
  jwt_volume:
    driver_opts:
      type: none
      device: /opt/passbolt/data/jwt
      o: bind

all right, let's spin up the passbolt

docker-compose up -d

now let's test the e-mail configuration; we cannot create our user without a working e-mail relay.

docker-compose exec passbolt su -m -c "bin/cake passbolt send_test_email -r user@domain.tld"

if you got the e-mail, it's time to create the first admin user

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u user@domain.tld -f Guto -l Carvalho -r admin" -s /bin/sh www-data

expected output

     ____                  __          ____
    / __ \____  _____ ____/ /_  ____  / / /_
   / /_/ / __ `/ ___/ ___/ __ \/ __ \/ / __/
  / ____/ /_/ (__  |__  ) /_/ / /_/ / / /
 /_/    \__,_/____/____/_.___/\____/_/\__/

 Open source password manager for teams
-------------------------------------------------------------------------------
User saved successfully.
To start registration follow the link provided in your mailbox or here:
https://passbolt.domain.tld/setup/install/1111111-8d5c-43a7-8fc2-301403b93766/efd71548-bcb4-4d58-b98d-a6877799d548

Now you can access your Passbolt and finish the configuration!
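
Passbolt also ships a healthcheck task, which is handy at this point (same exec pattern as above):

docker-compose exec passbolt su -m -c "bin/cake passbolt healthcheck" -s /bin/sh www-data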

external nginx

In our case, Passbolt is behind an External NGINX Reverse Proxy.

Here follows the config snippet used

upstream passbolt {
    server your_passbolt_docker_server_ip_here:your_port_here fail_timeout=0;
}

server {
  listen your_nginx_listen_ip_here:80;
  server_name passbolt.domain.tld;
  return 301 https://passbolt.domain.tld$request_uri;
}

server {

  listen your_nginx_listen_ip_here:443 ssl http2;
  server_name passbolt.domain.tld;

  access_log /var/log/nginx/passbolt-domain-tld.log;
  error_log /var/log/nginx/passbolt-domain-tld.log;

  ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/letsencrypt/dh-param.pem;

  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  location / {
    proxy_pass http://passbolt;
  }

}

That's it :)

[s]



Installing Unifi Network Controller Using Docker and Ubuntu

Why?

Usually the Unifi Network Controller is installed on your personal computer, but if you want to run it in a virtual machine, you can do that.

Reqs?

  • Linux
  • Docker
  • Docker-compose

Here we're using an Ubuntu 20.04 virtual machine for this, running inside a ProxMox 7.4 hypervisor.

How?

creating the directory

mkdir -p /opt/unifi/controller/docker
mkdir -p /opt/unifi/controller/data/sites/default
cd /opt/unifi/controller/docker

create a docker-compose file

vim docker-compose.yml

insert the content below

---
version: "2.1"
services:
  unifi-controller:
    image: lscr.io/linuxserver/unifi-controller:7.3.83
    container_name: unifi-controller
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /opt/unifi/controller:/config
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
    restart: unless-stopped

in my case, since I'm using a USG router with two WAN connections and I want to redirect ports to WAN2, I'll create the file /opt/unifi/controller/data/sites/default/config.gateway.json with the content below

{
	"port-forward": {
		"wan-interface": "eth2"
	}
}
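
A syntax error in config.gateway.json can leave the USG stuck provisioning, so it's worth validating the JSON before restarting anything (python3 -m json.tool ships with Ubuntu):

python3 -m json.tool /opt/unifi/controller/data/sites/default/config.gateway.json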

This will set eth2 as the default interface for port forwarding (usually it's eth0). Now let's spin up the container.

docker-compose up -d

and now you can access the dashboard from your browser

https://your_ip_here:8443
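
The first start takes a couple of minutes; you can follow the logs until the web UI is up:

docker-compose logs -f unifi-controller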

[s] Guto



This post is a continuation of this post

It depends on the first one.

1. Wasabi Config

Since I'm running my Baremetal in OVH Canada, I'll create my bucket in the same country.

Wasabi CA Central 1 (Toronto)	s3.ca-central-1.wasabisys.com

1.1 create a bucket mastodon-prod-media

Add this policy to the bucket policy configuration

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mastodon-prod-media/*"
    }
  ]
}

1.2 create a user mastodon-prod-user

  1. Create a user with a programmatic API key.
  2. Save the key in your password manager.

1.3 create a policy mastodon-prod-policy

Create a policy with the content below

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mastodon-prod-media"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mastodon-prod-media/*"
    }
  ]
}

1.4 edit the user and add the policy

Go to the url

Configure the policy

  1. select the policies tab
  2. select the mastodon-prod-policy

Done!

2. Mastodon Config

2.1 Sync static files

install the aws-cli

apt install python3-pip
pip install awscli

configure your aws-credentials

$ aws configure
AWS Access Key ID [None]: XXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXX
Default region name [None]:
Default output format [None]:

copy the existing files to the object storage

cd /opt/mastodon/data
screen
aws s3 sync system/ s3://mastodon-prod-media/ --endpoint-url=https://s3.ca-central-1.wasabisys.com
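
Once the sync finishes, you can sanity-check that the objects landed in the bucket:

aws s3 ls s3://mastodon-prod-media/ --endpoint-url=https://s3.ca-central-1.wasabisys.com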

2.2 Mastodon Configuration

Edit the application.env file

vim /etc/mastodon/docker/application.env

You will change this on the application.env

# rails will serve static files?

RAILS_SERVE_STATIC_FILES=false

And you will add this section to the application.env

# File storage (optional)
# -----------------------
S3_ENABLED=true
S3_BUCKET=mastodon-prod-media
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_HERE
AWS_SECRET_ACCESS_KEY=YOUR_ACCESS_SECRET_KEY_HERE
S3_HOSTNAME=media.gcn.sh
S3_PROTOCOL=https
S3_ENDPOINT=https://s3.ca-central-1.wasabisys.com
S3_FORCE_SINGLE_REQUEST=true

Restart your mastodon stack.

docker-compose down
docker-compose up -d

Done!

3. NGINX Config

We'll use an NGINX in front of our Mastodon to have a friendly URL for our media files and a cache layer to improve performance.

proxy_cache_path /var/cache/mastodon-prod-media levels=1:2 keys_zone=mastodon_media:100m max_size=2g inactive=12h;

server {
    listen your_listen_ip_here:80;
    server_name media.gcn.sh;
    return 301 https://media.gcn.sh$request_uri;

    access_log /dev/null;
    error_log /dev/null;
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name media.gcn.sh;

    access_log /var/log/nginx/media-mastodon-access.log;
    error_log /var/log/nginx/media-mastodon-error.log;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    ssl_certificate /etc/letsencrypt/live/gcn.sh/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gcn.sh/privkey.pem;

    location /mastodon-prod-media/ {
        proxy_cache mastodon_media;
        proxy_cache_revalidate on;
        proxy_buffering on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_cache_valid 1d;
        proxy_cache_valid 404 1h;
        proxy_ignore_headers Cache-Control;
        add_header X-Cached $upstream_cache_status;
        proxy_pass https://s3.ca-central-1.wasabisys.com/mastodon-prod-media/;
    }
}

It'll cache the requests for about 12h and we're limiting the disk usage to 2 GB.

3.1 NGINX cache validation

Go to your account, copy an image URL, and use curl against the URL.

curl -I https://media.gcn.sh/mastodon-prod-media/media_attachments/cache/media_attachments/files/110/220/330/440/550/660/small/filename.jpeg

On the first try you should get a MISS in the x-cached response header

x-cached: MISS

On the second try, you should get a HIT in the x-cached response header

x-cached: HIT

If you get a MISS and then a HIT, everything is fine.

If you get a HIT right away, that's OK too :)

4. Cleanup

  1. Run the sync one more time, just to be sure
  2. Remove the contents of the directory /opt/mastodon/data/web/system
rm -rf /opt/mastodon/data/web/system/*

4.1 Why am I removing this?

Your media files are now on the Wasabi object storage; you don't need the local copies anymore.




about the project

Invidious is an open-source alternative front-end to YouTube.

  • Privacy focused
  • Ethically designed
  • No Ads
  • Developer API
  • Multilingual
  • No need for a YouTube account

Visit the project

installation steps

generate a password for your PostgreSQL.

openssl rand -hex 15

create the directory

mkdir -p /opt/invidious/data/postgresql
cd /opt/invidious/

clone the project

git clone https://github.com/iv-org/invidious.git docker
cd docker

configure your installation

vim docker-compose.yml

content

version: "3"
services:

  invidious:
    image: quay.io/invidious/invidious:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: your_postgresql_password_here
          host: invidious-db
          port: 5432
        check_tables: true
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: your_postgresql_password_here
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:
    driver_opts:
      type: none
      device: /opt/invidious/data/postgresql
      o: bind

start your invidious

docker-compose up -d

visit your instance

http://your_docker_host_ip:3000
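
You can also check it from the shell; once the containers are up, the stats endpoint should answer with JSON:

curl -s http://localhost:3000/api/v1/stats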

nginx configuration example

server {
    listen your_listen_ip_here:80;
    server_name tube.bolha.tools;
    location / {
        return 301 https://tube.bolha.tools$request_uri;
    }
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name tube.bolha.tools;

    ssl_certificate /etc/letsencrypt/live/bolha.tools/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bolha.tools/privkey.pem;

    access_log /var/log/nginx/tube-bolha-tools-access.log;
    error_log /var/log/nginx/tube-bolha-tools-error.log;

    location / {
        proxy_pass http://your_invidious_ip_here:your_port_here;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;    # so Invidious knows domain
        proxy_http_version 1.1;     # to keep alive
        proxy_set_header Connection ""; # to keep alive
    }

}

Enjoy!

