blog.gcn.sh

Reader

Read the latest posts from blog.gcn.sh.

from mindnotes

on VMware / Settings / Sharing

Here I'm creating a share “gutocarvalho” pointing to

/Users/gutocarvalho

on Guest (Ubuntu 22.04)

install open-vm-tools

apt update && apt install open-vm-tools -y
reboot

create a dir to mount the shared folder

cd ~/Desktop
mkdir MacOs

now mount it

vmhgfs-fuse .host:/gutocarvalho /home/gutocarvalho/Desktop/MacOs -o subtype=vmhgfs-fuse 

fstab config

.host:/gutocarvalho    /home/gutocarvalho/Desktop/MacOs       fuse.vmhgfs-fuse    defaults   0    0
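If the share should survive boots where VMware isn't available, and be readable by your regular user, a slightly more defensive fstab entry can help. allow_other is a standard FUSE option and nofail is a standard fstab option; adjust the paths to your own user:

```text
.host:/gutocarvalho    /home/gutocarvalho/Desktop/MacOs    fuse.vmhgfs-fuse    defaults,allow_other,nofail    0    0
```

After editing fstab you can run mount -a to apply the entry without a reboot.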

It's done!


from mindnotes

In this example we'll expand the partition sdb1, which is mounted on the /opt directory.

on the proxmox side

The first thing to do is expand the disk using the Proxmox UI. For that, you'll need to turn off the KVM instance, expand the disk, and turn it on again.

on the linux side

now with the os running you can

umount /opt
parted /dev/sdb
print
fix
resizepart 1 100%
quit
xfs_repair /dev/sdb1
mount /opt
xfs_growfs /opt
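If the partition is the last one on the disk, the whole resize can usually be done online as well, without unmounting /opt. This sketch assumes the growpart tool from the cloud-guest-utils package, which is not part of the steps above:

```shell
apt install cloud-guest-utils -y   # provides growpart
growpart /dev/sdb 1                # grow partition 1 to fill the disk
xfs_growfs /opt                    # grow the mounted XFS filesystem
df -h /opt                         # confirm the new size
```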

that's it!

 

from howtos

Why

Because we want to use our own object storage system, on-premises.

What do I need to install?

You need a Linux Server with Docker, and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate
  • CloudFlare
    • DNS Manager

Where can I find out more about the project?

Project

Docker installation

Single Node Multi Drive Arch

Hardware Requirements

Virtual Machine

vcpu: 8
memory: 8 GB RAM
network: 1 Gbit
disk: 350 GB

Disk layout

root (30g)
/var/lib/docker (30g)
/opt/minio (300g)

Network requirements

These are all the necessary ports to open

22 TCP (ssh)
80 TCP (minio api)
8080 TCP (minio console)

Any other port should be closed.
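One way to enforce that, assuming ufw is the firewall on your node (ufw itself is an assumption, it's not part of the setup above):

```shell
ufw default deny incoming
ufw allow 22/tcp    # ssh
ufw allow 80/tcp    # minio api
ufw allow 8080/tcp  # minio console
ufw enable
```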

DNS requirements

We'll use 2 DNS Records

minio-admin.domain.tld (console)
minio.domain.tld (api)
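In zone-file form, both names can simply point at the VM (or at the external reverse proxy, if you terminate TLS there). The IP below is a placeholder:

```text
minio.domain.tld.        300  IN  A  203.0.113.10
minio-admin.domain.tld.  300  IN  A  203.0.113.10
```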

How to install it?

updating your node

apt-get update
apt-get upgrade -y

installing utilities

apt install screen htop net-tools ccze git

Docker

Docker Install

curl https://get.docker.com|bash

Docker Configuration

Let's create the configuration file.

vim /etc/docker/daemon.json

Content

{
  "default-address-pools": [
    {
      "base": "10.20.30.0/24",
      "size": 24
    },
    {
      "base": "10.20.31.0/24",
      "size": 24
    }
  ]
}

Here we're defining uncommon networks to avoid conflicts with your provider's or organization's networks. You need to restart Docker afterwards.

systemctl restart docker
systemctl enable docker
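A malformed daemon.json prevents the daemon from starting, so it's worth checking that the configuration is valid JSON before restarting. Any JSON parser will do; here using python3:

```shell
# Validate the daemon.json content; prints "daemon.json OK" if it parses.
cat <<'EOF' | python3 -m json.tool > /dev/null && echo "daemon.json OK"
{
  "default-address-pools": [
    { "base": "10.20.30.0/24", "size": 24 },
    { "base": "10.20.31.0/24", "size": 24 }
  ]
}
EOF
```

On the real server you can point python3 -m json.tool at /etc/docker/daemon.json directly.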

Docker-compose

Docker-compose install

Download

curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url  | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -

Adjusting permissions

chmod +x docker-compose-linux-x86_64

Moving the binary to /usr/local/bin

mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
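A quick sanity check that the binary is in place and executable:

```shell
docker-compose version
```

Recent Docker packages also ship Compose as a plugin, so docker compose version (without the hyphen) may work as well.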

Minio

Creating directories

mkdir -p /opt/minio/{docker,storage}

Creating docker-compose config

vim /opt/minio/docker/docker-compose.yaml

Content

version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minio
    MINIO_ROOT_PASSWORD: your_password_here
    MINIO_SERVER_URL: https://minio.domain.tld
    MINIO_DOMAIN: minio.domain.tld
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    restart: always
    volumes:
      - /opt/minio/storage/data1-1:/data1
      - /opt/minio/storage/data1-2:/data2

  minio2:
    <<: *minio-common
    hostname: minio2
    restart: always
    volumes:
      - /opt/minio/storage/data2-1:/data1
      - /opt/minio/storage/data2-2:/data2

  minio3:
    <<: *minio-common
    hostname: minio3
    restart: always
    volumes:
      - /opt/minio/storage/data3-1:/data1
      - /opt/minio/storage/data3-2:/data2

  minio4:
    <<: *minio-common
    hostname: minio4
    restart: always
    volumes:
      - /opt/minio/storage/data4-1:/data1
      - /opt/minio/storage/data4-2:/data2

  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    restart: always
    volumes:
      - /opt/minio/docker/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "8080:8080"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
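The services bind-mount one host directory per MinIO drive. Creating them up front, under the /opt/minio/storage directory created earlier, keeps the layout explicit:

```shell
# one directory per minio node/drive pair (4 nodes x 2 drives each)
mkdir -p /opt/minio/storage/data{1..4}-{1,2}
```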

Creating nginx config

vim /opt/minio/docker/nginx.conf 

Content

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;
    keepalive_timeout  65;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 80;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }

    server {
        listen       8080;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
            real_ip_header X-Real-IP;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

starting containers

cd /opt/minio/docker
docker-compose up -d

checking services

docker-compose ps

Expected output

NAME                IMAGE                                              COMMAND                  SERVICE             CREATED             STATUS                   PORTS
docker-minio1-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio1              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio2-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio2              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio3-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio3              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio4-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio4              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-nginx-1      nginx:1.19.2-alpine                                "/docker-entrypoint.…"   nginx               11 minutes ago      Up 9 minutes             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp

Check that ports 80 and 8080 are listening

netstat -ntpl|grep docker

Expected Output

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2116141/docker-prox
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      2116110/docker-prox
tcp6       0      0 :::80                   :::*                    LISTEN      2116149/docker-prox
tcp6       0      0 :::8080                 :::*                    LISTEN      2116123/docker-prox

You can now validate the console

curl http://localhost:8080

Expected Output

<!doctype html><html lang="en"><head><meta charset="utf-8"/><base href="/"/><meta content="width=device-width,initial-scale=1" name="viewport"/><meta content="#081C42" media="(prefers-color-scheme: light)" name="theme-color"/><meta content="#081C42" media="(prefers-color-scheme: dark)" name="theme-color"/><meta content="MinIO Console" name="description"/><meta name="minio-license" content="agpl" /><link href="./styles/root-styles.css" rel="stylesheet"/><link href="./apple-icon-180x180.png" rel="apple-touch-icon" sizes="180x180"/><link href="./favicon-32x32.png" rel="icon" sizes="32x32" type="image/png"/><link href="./favicon-96x96.png" rel="icon" sizes="96x96" type="image/png"/><link href="./favicon-16x16.png" rel="icon" sizes="16x16" type="image/png"/><link href="./manifest.json" rel="manifest"/><link color="#3a4e54" href="./safari-pinned-tab.svg" rel="mask-icon"/><title>MinIO Console</title><script defer="defer" src="./static/js/main.92fa0385.js"></script><link href="./static/css/main.02c1b6fd.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"><div id="preload"><img src="./images/background.svg"/> <img src="./images/background-wave-orig2.svg"/></div><div id="loader-block"><img src="./Loader.svg"/></div></div></body></html>

You can now validate that the API is running

curl http://localhost:80

Expected output

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>177E5BC14618C529</RequestId><HostId>e0c385c033c4356721cc9121d3109c9b9bfdefb22fd2747078acd22328799e36</HostId></Error>

Validate that the API is healthy

curl -si http://localhost/minio/health/live

Expected output

HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Thu, 24 Aug 2023 15:38:38 GMT
Content-Length: 0
Connection: keep-alive
Accept-Ranges: bytes
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
X-Amz-Id-2: 46efbbb7efbd81c7d995bde03cc6fabf60c12f80d4e074c1c972dbc4d583c3d4
X-Amz-Request-Id: 177E5BDDF79EDEF8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block

Reverse Proxy

You can now configure your reverse proxy

minio-admin.domain.tld => ip-of-the-vm, port 8080
minio.domain.tld => ip-of-the-vm, port 80

We won't cover the reverse proxy config yet; maybe in the future.

Accessing Minio

After the configuration you can visit the admin console

https://minio-admin.domain.tld

Viewing logs

You can follow the container logs while using MinIO.

cd /opt/minio/docker
docker-compose logs -f --tail=10

Cheers [s]

 

from howtos

Why?

Because it's important to run the latest version, with the latest bug fixes and features.

What do I need to update?

You need a working Mastodon, and we're assuming you have followed our howto

How to upgrade

The Docker side

stop the containers

cd /opt/mastodon/docker
docker-compose down

edit the versions.env file

vim /opt/mastodon/docker/versions.env

and change the version to the latest

MASTODON_VERSION=v4.1.2

to

MASTODON_VERSION=v4.1.4

clear the web directories

rm -rf /opt/mastodon/data/web/public/*
rm -rf /opt/mastodon/data/web/config/*
rm -rf /opt/mastodon/data/web/app/*
rm -rf /opt/mastodon/data/web/system/*

and start all containers again

cd /opt/mastodon/docker
docker-compose up -d

and run the migration

cd /opt/mastodon/docker
docker-compose run --rm shell bundle exec rake db:migrate
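Once the containers are back up, you can confirm the instance reports the new version; /api/v1/instance is Mastodon's standard instance endpoint, and the domain below is a placeholder for your own:

```shell
curl -s https://your-instance.domain.tld/api/v1/instance | grep -o '"version":"[^"]*"'
```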

customizations

If you modified anything before, you need to reapply your customizations to the files in these directories.

/opt/mastodon/data/web/public/*
/opt/mastodon/data/web/config/*
/opt/mastodon/data/web/app/*
/opt/mastodon/data/web/system/*

The External Nginx side

Now we need to update the static files cache on our nginx reverse proxy.

nginx cache config

Edit your mastodon vhost file

vim /etc/nginx/conf.d/mastodon.conf

find the cache line

proxy_cache_path /var/cache/mastodon/public/4.1.2 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

change the cache directory

proxy_cache_path /var/cache/mastodon/public/4.1.4 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

create the new directory

mkdir -p /var/cache/mastodon/public/4.1.4

root directory

find the root directory line

  root /var/www/mastodon/dev.bolha.us/public/4.1.2;

change it

  root /var/www/mastodon/dev.bolha.us/public/4.1.4;

create the new directory

mkdir -p /var/www/mastodon/dev.bolha.us/public/4.1.4

creating a docker volume to copy the new static files

docker volume create --opt type=none --opt device=/var/www/mastodon/dev.bolha.us/public/4.1.4 --opt o=bind mastodon_public_4.1.4

copying the new static files from the new version to the volume

docker run --rm -v "mastodon_public_4.1.4:/static" tootsuite/mastodon:v4.1.4 bash -c "cp -r /opt/mastodon/public/* /static/"

checking the files

ls /var/www/mastodon/dev.bolha.us/public/4.1.4

remove the temporary volume

docker volume rm mastodon_public_4.1.4

now verify your nginx config

# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

now restart your nginx

systemctl restart nginx && systemctl status nginx

That's it!

 

from howtos

Last update: 2023-07-07

Why?

Because we want to use a federated forum and link aggregator :)

What do I need to install?

You need a Linux server with Docker and docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

creating directories

mkdir -p /opt/lemmy/{docker,data,config}
mkdir -p /opt/lemmy/data/{postgresql,pictrs,themes}
mkdir -p /opt/lemmy/config/{lemmy,postgresql,nginx}

defining permissions

chown -R 991:991 /opt/lemmy/data/pictrs

nginx config

creating the nginx.conf file

vim /opt/lemmy/config/nginx/nginx.conf

content

worker_processes auto;

events {
    worker_connections 1024;
}

http {

    map "$request_method:$http_accept" $proxpass {
        default "http://lemmy-ui";
        "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy";
        "~^(?!(GET|HEAD)).*:" "http://lemmy";
    }

    upstream lemmy {
        server "lemmy:8536";
    }

    upstream lemmy-ui {
        server "lemmy-ui:1234";
    }

    server {
        listen 1236;
        listen 8536;

        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        location / {
            proxy_pass $proxpass;
            rewrite ^(.+)/+$ $1 permanent;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

lemmy backend config

creating the lemmy config file

vim /opt/lemmy/config/lemmy/config.hjson

content

{
  database: {
    host: postgres
    password: "your_postgresql_password_here"
  }
  hostname: "bolha.forum"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "your_postgresql_password_here"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "noreply@bolha.forum"
    tls_type: "none"
  }
}

docker config

creating the docker-compose.yml

vim /opt/lemmy/docker/docker-compose.yml

content

version: "3.7"

services:

  proxy:
    image: nginx:1-alpine
    container_name: lemmy_proxy
    ports:
      - "8000:8536"
    volumes:
      - /opt/lemmy/config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
    image: dessalines/lemmy:0.18.1
    container_name: lemmy_backend
    hostname: lemmy
    restart: always
    environment:
      - RUST_LOG="warn"
    volumes:
      - lemmy_config:/config
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1
    container_name: lemmy_frontend
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=bolha.forum
      - LEMMY_UI_HTTPS=true
    volumes:
      - extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-rc.7
    container_name: lemmy_images_backend
    hostname: pictrs
    environment:
      - PICTRS__API_KEY=your_postgresql_password_here
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - pictrs:/mnt:Z
    restart: always
    deploy:
      resources:
        limits:
          memory: 690m

  postgres:
    image: postgres:15-alpine
    container_name: lemmy_database
    hostname: postgres
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=your_postgresql_password_here
      - POSTGRES_DB=lemmy
    volumes:
      - postgresql:/var/lib/postgresql/data:Z
      - /opt/lemmy/config/postgresql/postgresql.conf:/etc/postgresql.conf
    restart: always

  postfix:
    image: mwader/postfix-relay
    container_name: lemmy_smtp_relay
    environment:
      - POSTFIX_myhostname=bolha.forum
      - POSTFIX_smtp_sasl_auth_enable=yes
      - POSTFIX_smtp_sasl_password_maps=static:user@domain.tld:user_password_here
      - POSTFIX_smtp_sasl_security_options=noanonymous
      - POSTFIX_relayhost=smtp.domain.tld:587
    restart: "always"

volumes:
  lemmy_config:
    driver_opts:
      type: none
      device: /opt/lemmy/config/lemmy
      o: bind
  extra_themes:
    driver_opts:
      type: none
      device: /opt/lemmy/data/themes
      o: bind
  pictrs:
    driver_opts:
      type: none
      device: /opt/lemmy/data/pictrs
      o: bind
  postgresql:
    driver_opts:
      type: none
      device: /opt/lemmy/data/postgresql
      o: bind

spinning up the lemmy instance

$ cd /opt/lemmy/docker
$ docker-compose up -d

checking

$ docker-compose ps

NAME                   IMAGE                        COMMAND                  SERVICE             CREATED             STATUS              PORTS
lemmy_backend          dessalines/lemmy:0.18.1      "/app/lemmy"             lemmy               34 minutes ago      Up 34 minutes
lemmy_database         postgres:15-alpine           "docker-entrypoint.s…"   postgres            34 minutes ago      Up 34 minutes       5432/tcp
lemmy_frontend         dessalines/lemmy-ui:0.18.1   "docker-entrypoint.s…"   lemmy-ui            34 minutes ago      Up 34 minutes       1234/tcp
lemmy_images_backend   asonix/pictrs:0.4.0-rc.7     "/sbin/tini -- /usr/…"   pictrs              34 minutes ago      Up 34 minutes       6669/tcp, 8080/tcp
lemmy_proxy            nginx:1-alpine               "/docker-entrypoint.…"   proxy               34 minutes ago      Up 34 minutes       80/tcp, 0.0.0.0:8000->8536/tcp, :::8000->8536/tcp
lemmy_smtp_relay       mwader/postfix-relay         "/root/run"              postfix             34 minutes ago      Up 34 minutes       25/tcp

You can see that our lemmy_proxy (nginx) is listening on port 8000.

Now let's configure the external reverse proxy.

external reverse-proxy config

certbot + letsencrypt

We're using the Cloudflare plugin with certbot; you need to have the credentials file ready, like this example

# cat /etc/letsencrypt/cloudflare/bolha-forum.conf
dns_cloudflare_email = dns@bolha.forum
dns_cloudflare_api_key = your_token_here

then you can generate the certificate

# certbot certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare/bolha-forum.conf -d "*.bolha.forum,bolha.forum"
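Let's Encrypt certificates are valid for 90 days, so it's worth confirming that automatic renewal will work while everything is fresh:

```shell
certbot renew --dry-run
```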

now we can configure our nginx!

nginx config

external reverse proxy

server {
    listen your_listen_ip_here:80;
    server_name bolha.forum;
    location / {
        return 301 https://bolha.forum$request_uri;
    }
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name bolha.forum;

    ssl_certificate /etc/letsencrypt/live/bolha.forum/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bolha.forum/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_dhparam /etc/letsencrypt/dh-param.pem;

    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

    # Specifies a curve for ECDHE ciphers.
    ssl_ecdh_curve prime256v1;

    # Server should determine the ciphers, not the client
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Enable compression for JS/CSS/HTML bundle, for improved client load times.
    # It might be nice to compress JSON, but leaving that out to protect against potential
    # compression+encryption information leak attacks like BREACH.
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_vary on;

    # Only connect to this site via HTTPS for the two years
    add_header Strict-Transport-Security "max-age=63072000";

    # Various content security headers
    add_header Referrer-Policy "same-origin";
    add_header X-Content-Type-Options "nosniff";
    add_header X-Frame-Options "DENY";
    add_header X-XSS-Protection "1; mode=block";

    # Upload limit for pictrs
    client_max_body_size 25M;


    location / {
      proxy_pass http://your_docker_host_ip_here:your_port_here;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

check the config

nginx -t

expected output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

and reload the configuration

# nginx -s reload

that's it!

Go to your lemmy!

Now you can access your lemmy instance

Enjoy!

 

from howtos

Why?

We want to offer translations inside Mastodon using libretranslate as our backend.

What do I need to install?

You need a working Mastodon; we recommend this howto

You need a working LibreTranslate; we recommend this howto

How to integrate them?

You need to add these two variables on the application.env file if you are following our mastodon howto.

LIBRE_TRANSLATE_ENDPOINT=https://libretranslate.bolha.tools
LIBRE_TRANSLATE_API_KEY=ecae7db0-bolha-us-is-cool-c84c14d2117a

Then restart it

cd /opt/mastodon/docker
docker-compose restart

After that you can check the logs

docker-compose logs -f|grep TranslationsController

Expected output with status code 200

website            | [01fa1ece-5ab3-411d-bd6b-4b5131096735] method=POST path=/api/v1/statuses/110658724777490930/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=200 duration=2988.25 view=0.77 db=2.32

Sometimes you'll get a status code 503. Yes, it will happen; it's not perfect, but it works well most of the time.

website            | [752a45c9-a94a-408a-8262-7b71cc1528e9] method=POST path=/api/v1/statuses/110658727361133356/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=503 duration=10117.47 view=0.49 db=2.19

Enjoy it!

:)

 

from howtos

Why?

The main use of this LibreTranslate instance is to translate Mastodon toots.

What do I need to install?

You need a Linux Server with Docker, and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

first, let's create the directories

mkdir -p /opt/libretranslate/docker
mkdir -p /opt/libretranslate/data/{keys,local}

now let's configure the permissions

chown 1032:1032 /opt/libretranslate/data
chown 1032:1032 /opt/libretranslate/data/keys
chown 1032:1032 /opt/libretranslate/data/local

then, let's create the docker-compose file

cd /opt/libretranslate/docker
vim docker-compose.yaml

here follows the content, change the parameters for your setup

version: "3"

services:
  libretranslate:
    container_name: libretranslate
    image: libretranslate/libretranslate:v1.3.11
    restart: unless-stopped
    dns:
      - 1.1.1.1
      - 8.8.8.8
    ports:
      - "5000:5000"
    healthcheck:
      test: ['CMD-SHELL', './venv/bin/python scripts/healthcheck.py']
    env_file:
      - libretranslate.env
    volumes:
     - libretranslate_api_keys:/app/db
     - libretranslate_local:/home/libretranslate/.local

volumes:
  libretranslate_api_keys:
    driver_opts:
      type: none
      device: /opt/libretranslate/data/keys
      o: bind
  libretranslate_local:
    driver_opts:
      type: none
      device: /opt/libretranslate/data/local
      o: bind

then, let's create the libretranslate env file

vim libretranslate.env

here follows the content, change the parameters for your setup

LT_DEBUG=true
LT_UPDATE_MODELS=true
LT_SSL=true
LT_SUGGESTIONS=false
LT_METRICS=true

LT_API_KEYS=true

LT_THREADS=12
LT_FRONTEND_TIMEOUT=2000

#LT_REQ_LIMIT=400
#LT_CHAR_LIMIT=1200

LT_API_KEYS_DB_PATH=/app/db/api_keys.db

all right, let's spin up the libretranslate

docker-compose up -d
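Before wiring up the reverse proxy, you can confirm the service answers locally; /languages is a standard LibreTranslate endpoint and should return a JSON list once the models are loaded:

```shell
curl -s http://localhost:5000/languages
```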

installing the model files

you should enter the container

docker exec -it libretranslate bash

and run the command to install all languages

for i in `/app/venv/bin/argospm list`;do /app/venv/bin/argospm install $i;done

it will take some time to install, go drink a coffee, then check the directory on your host

$ exit
$ ls -1 /opt/libretranslate/data/local/share/argos-translate/packages/

ar_en
de_en
en_ar
en_de
en_es
en_fi
en_fr
en_ga
en_hi
en_hu
en_id
en_it
en_ja
en_ko
en_nl
en_pl
en_pt
en_sv
en_uk
es_en
fi_en
fr_en
ga_en
hi_en
hu_en
id_en
it_en
ja_en
ko_en
nl_en
pl_en
pt_en
ru_en
sv_en
translate-az_en-1_5
translate-ca_en-1_7
translate-cs_en-1_5
translate-da_en-1_3
translate-el_en-1_5
translate-en_az-1_5
translate-en_ca-1_7
translate-en_cs-1_5
translate-en_da-1_3
translate-en_el-1_5
translate-en_eo-1_5
translate-en_fa-1_5
translate-en_he-1_5
translate-en_ru-1_7
translate-en_sk-1_5
translate-en_th-1_0
translate-en_tr-1_5
translate-en_zh-1_7
translate-eo_en-1_5
translate-fa_en-1_5
translate-he_en-1_5
translate-sk_en-1_5
translate-th_en-1_0
translate-tr_en-1_5
translate-zh_en-1_7
uk_en

Awesome, it's all there.

creating the api key

Since we're using “LT_API_KEYS=true”, we need to create an API key to be able to use libretranslate via the API. Let's go into the container again

docker exec -it libretranslate bash

let's create a key with permission to run 120 requests per minute.

/app/venv/bin/ltmanage keys add 120

example of the expected output

libretranslate@ba7f705d97b9:/app$ /app/venv/bin/ltmanage keys
ecae7db0-bolha-us-is-cool-c84c14d2117a: 120

nice, everything is ready to use, now let's configure your nginx!

testing the api

curl -XPOST -H "Content-type: application/json" -d '{
"q": "Bolha.io is the most cool project in the fediverso",
"source": "en",
"target": "pt"
}' 'http://localhost:5000/translate'

expected output

{"translatedText":"Bolha.io é o projeto mais legal no fediverso"}

external nginx

In our case, libretranslate is behind an External NGINX Reverse Proxy.

Here follows the config snippet used

upstream libretranslate {
    server your_docker_host_ip:your_libretranslate_port fail_timeout=0;
}

server {
  listen your_listen_ip_here:80;
  server_name libretranslate.domain.tld;
  return 301 https://libretranslate.domain.tld$request_uri;
}

server {

  listen your_listen_ip_here:443 ssl http2;
  server_name libretranslate.domain.tld;

  access_log /var/log/nginx/libretranslate-domain-tld.log;
  error_log /var/log/nginx/libretranslate-domain-tld.log;

  ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/letsencrypt/dh-param.pem;

  ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;

  location / {
    proxy_pass http://libretranslate;
  }

}

Now you can access your libretranslate via the web

That's it :)

[s]

 

from fediverse

This is a list of mastodon-related projects and apps.

Last update: 22/Jun/23

official project site

official git repo

official docker repo

official instances

recommended instances

Brazil

mastodon relevant forks

mastodon web-clients

mastodon frontends

mastodon desktop clients

beta

mastodon ios clients

official app

tapbots

others

beta

mastodon android clients

official client

others

mastodon terminal clients

mastodon utils

do you want to help with this list?

Please send your suggestions to:

mastodon – @gutocarvalho@gcn.sh

matrix – @gutocarvalho@bolha.chat

 
