blog.gcn.sh


from mindnotes

In this example we'll expand the partition /dev/sdb1, mounted on /opt.

on the proxmox side

The first thing to do is expand the disk in the Proxmox UI: turn off the KVM instance, expand the disk, and turn it on again.

on the linux side

now, with the OS running, you can run

umount /opt
parted /dev/sdb
print
fix
resizepart 1 100%
quit
xfs_repair /dev/sdb1
mount /opt
xfs_growfs /opt
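Once the filesystem is grown you can confirm the new capacity. A minimal sketch using Python's stdlib, assuming /opt is the mount point as in this example (`df -h /opt` gives the same numbers from the shell):

```python
import shutil

def mount_capacity_gib(path="/opt"):
    """Report total and free space of the filesystem behind *path* in GiB."""
    usage = shutil.disk_usage(path)
    return round(usage.total / 2**30, 1), round(usage.free / 2**30, 1)
```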

that's it!

 
Read more...

from howtos

Why?

Because we want to use our own object storage system, on-premises.

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate
  • CloudFlare
    • DNS Manager

Where can I find out more about the project?

Project

Docker installation

Single Node Multi Drive Arch

Hardware Requirements

Virtual Machine

vcpu: 8
memory: 8 GB RAM
network: 1 Gbit
disk: 350 GB

Disk layout

root (30 GB)
/var/lib/docker (30 GB)
/opt/minio (300 GB)

Network requirements

These are the only ports that need to be open:

22 TCP (ssh)
80 TCP (minio api)
8080 TCP (minio console)

Any other port should be closed.

DNS requirements

We'll use 2 DNS Records

minio-admin.domain.tld (console)
minio.domain.tld (api)

How to install it?

updating your node

apt-get update
apt-get upgrade -y

installing utilities

apt install screen htop net-tools ccze git

Docker

Docker Install

curl -fsSL https://get.docker.com | bash

Docker Configuration

Let's create the configuration file.

vim /etc/docker/daemon.json

Content

{
  "default-address-pools": [
    {
      "base": "10.20.30.0/24",
      "size": 24
    },
    {
      "base": "10.20.31.0/24",
      "size": 24
    }
  ]
}

Here we're defining uncommon networks to avoid conflicts with your provider's or organization's networks. You need to restart Docker afterwards.

systemctl restart docker
systemctl enable docker
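A quick way to see what those pool settings mean: with a base of /24 and a size of 24, each pool yields exactly one /24 network. A sketch using Python's ipaddress module (illustration only, not part of the install):

```python
import ipaddress

def pool_networks(base, size):
    """List the subnets Docker can carve out of an address pool."""
    return [str(n) for n in ipaddress.ip_network(base).subnets(new_prefix=size)]

print(pool_networks("10.20.30.0/24", 24))  # ['10.20.30.0/24']
print(pool_networks("10.20.31.0/24", 24))  # ['10.20.31.0/24']
```

With a larger base (say 10.20.0.0/16) and size 24, Docker could hand out 256 separate /24 networks instead.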

Docker-compose

Docker-compose install

Download

curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url  | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -

Adjusting permissions

chmod +x docker-compose-linux-x86_64

Moving the binary to the /usr/local/bin directory

mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
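The download pipeline above just greps the latest-release JSON from the GitHub API for the right asset URL. The same selection expressed in Python (the sample payload below is made up for illustration):

```python
import json

def compose_asset_url(release_json, asset_name="docker-compose-linux-x86_64"):
    """Find the browser_download_url of the named asset in a GitHub release payload."""
    release = json.loads(release_json)
    for asset in release.get("assets", []):
        if asset["name"] == asset_name:
            return asset["browser_download_url"]
    return None

# Hypothetical sample payload, trimmed to the fields the function reads.
sample = json.dumps({"assets": [
    {"name": "docker-compose-linux-x86_64",
     "browser_download_url": "https://example.invalid/docker-compose-linux-x86_64"},
]})
print(compose_asset_url(sample))
```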

Minio

Creating directories

mkdir -p /opt/minio/{docker,storage}

Creating docker-compose config

vim /opt/minio/docker/docker-compose.yaml

Content

version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minio
    MINIO_ROOT_PASSWORD: your_password_here
    MINIO_SERVER_URL: https://minio.domain.tld
    MINIO_DOMAIN: minio.domain.tld
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    restart: always
    volumes:
      - /opt/minio/storage/data1-1:/data1
      - /opt/minio/storage/data1-2:/data2

  minio2:
    <<: *minio-common
    hostname: minio2
    restart: always
    volumes:
      - /opt/minio/storage/data2-1:/data1
      - /opt/minio/storage/data2-2:/data2

  minio3:
    <<: *minio-common
    hostname: minio3
    restart: always
    volumes:
      - /opt/minio/storage/data3-1:/data1
      - /opt/minio/storage/data3-2:/data2

  minio4:
    <<: *minio-common
    hostname: minio4
    restart: always
    volumes:
      - /opt/minio/storage/data4-1:/data1
      - /opt/minio/storage/data4-2:/data2

  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    restart: always
    volumes:
      - /opt/minio/docker/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "8080:8080"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
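The `server --console-address ":9001" http://minio{1...4}/data{1...2}` command uses MinIO's ellipsis notation: it expands to four nodes with two drives each, eight endpoints in total, which is what the erasure-coded pool is built from. A quick sketch of that expansion:

```python
from itertools import product

def expand_endpoints(hosts=4, drives=2):
    """Expand http://minio{1...4}/data{1...2} the way MinIO's ellipsis notation does."""
    return [f"http://minio{h}/data{d}"
            for h, d in product(range(1, hosts + 1), range(1, drives + 1))]

endpoints = expand_endpoints()
print(len(endpoints))   # 8
print(endpoints[0])     # http://minio1/data1
```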

Creating nginx config

vim /opt/minio/docker/nginx.conf 

Content

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile        on;
    keepalive_timeout  65;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    upstream console {
        ip_hash;
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen 80;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;
            proxy_pass http://minio;
        }
    }

    server {
        listen       8080;
        ignore_invalid_headers off;
        client_max_body_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
            real_ip_header X-Real-IP;
            proxy_connect_timeout 300;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

starting containers

cd /opt/minio/docker
docker-compose up -d

checking services

docker-compose ps

Expected output

NAME                IMAGE                                              COMMAND                  SERVICE             CREATED             STATUS                   PORTS
docker-minio1-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio1              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio2-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio2              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio3-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio3              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-minio4-1     quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z   "/usr/bin/docker-ent…"   minio4              11 minutes ago      Up 9 minutes (healthy)   9000-9001/tcp
docker-nginx-1      nginx:1.19.2-alpine                                "/docker-entrypoint.…"   nginx               11 minutes ago      Up 9 minutes             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp

Check that ports 80 and 8080 are listening

netstat -ntpl|grep docker

Expected Output

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2116141/docker-prox
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      2116110/docker-prox
tcp6       0      0 :::80                   :::*                    LISTEN      2116149/docker-prox
tcp6       0      0 :::8080                 :::*                    LISTEN      2116123/docker-prox

You can now validate the console

curl http://localhost:8080

Expected Output

<!doctype html><html lang="en"><head><meta charset="utf-8"/><base href="/"/><meta content="width=device-width,initial-scale=1" name="viewport"/><meta content="#081C42" media="(prefers-color-scheme: light)" name="theme-color"/><meta content="#081C42" media="(prefers-color-scheme: dark)" name="theme-color"/><meta content="MinIO Console" name="description"/><meta name="minio-license" content="agpl" /><link href="./styles/root-styles.css" rel="stylesheet"/><link href="./apple-icon-180x180.png" rel="apple-touch-icon" sizes="180x180"/><link href="./favicon-32x32.png" rel="icon" sizes="32x32" type="image/png"/><link href="./favicon-96x96.png" rel="icon" sizes="96x96" type="image/png"/><link href="./favicon-16x16.png" rel="icon" sizes="16x16" type="image/png"/><link href="./manifest.json" rel="manifest"/><link color="#3a4e54" href="./safari-pinned-tab.svg" rel="mask-icon"/><title>MinIO Console</title><script defer="defer" src="./static/js/main.92fa0385.js"></script><link href="./static/css/main.02c1b6fd.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"><div id="preload"><img src="./images/background.svg"/> <img src="./images/background-wave-orig2.svg"/></div><div id="loader-block"><img src="./Loader.svg"/></div></div></body></html>

You can now validate that the API is running

curl http://localhost:80

Expected output

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>177E5BC14618C529</RequestId><HostId>e0c385c033c4356721cc9121d3109c9b9bfdefb22fd2747078acd22328799e36</HostId></Error>

Validate that the API is healthy

curl -si http://localhost/minio/health/live

Expected output

HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Thu, 24 Aug 2023 15:38:38 GMT
Content-Length: 0
Connection: keep-alive
Accept-Ranges: bytes
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
X-Amz-Id-2: 46efbbb7efbd81c7d995bde03cc6fabf60c12f80d4e074c1c972dbc4d583c3d4
X-Amz-Request-Id: 177E5BDDF79EDEF8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
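The same liveness probe can be scripted, e.g. for monitoring. A minimal sketch with Python's urllib, assuming the stack above is listening on localhost:

```python
import urllib.request

def minio_is_live(base_url="http://localhost"):
    """Return True when MinIO's liveness endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/minio/health/live", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # DNS failures, refused connections and timeouts all mean "not live".
        return False
```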

Reverse Proxy

You can now configure your reverse proxy:

minio-admin.domain.tld => ip-of-the-vm, port 8080 (console)
minio.domain.tld => ip-of-the-vm, port 80 (api)

We won't cover the reverse proxy config here; maybe in a future post.

Accessing Minio

After the configuration you can visit the admin console

https://minio-admin.domain.tld

Viewing logs

You can follow the container logs while MinIO is in use.

cd /opt/minio/docker
docker-compose logs -f --tail=10

Cheers [s]

 
Read more...

from howtos

Why?

Because we want to use a federated book review system :)

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate
  • CloudFlare
    • DNS Manager

Where can I find out more about the project?

How can I install it?

Let's start with it

creating directories

mkdir -p /opt/bookwyrm
mkdir -p /opt/bookwyrm/data/nginx/conf
mkdir -p /opt/bookwyrm/data/pgsql/data
mkdir -p /opt/bookwyrm/data/pgsql/backup
mkdir -p /opt/bookwyrm/data/app/static
mkdir -p /opt/bookwyrm/data/app/media
mkdir -p /opt/bookwyrm/data/redis/config
mkdir -p /opt/bookwyrm/data/redis/activity_data
mkdir -p /opt/bookwyrm/data/redis/broker_data

cloning the project

cd /opt/bookwyrm
git clone https://github.com/bookwyrm-social/bookwyrm.git source
cd source
git checkout production

creating the redis config

copying redis.conf

cd /opt/bookwyrm/source
cp redis.conf /opt/bookwyrm/data/redis/config

creating the nginx config

creating production.conf

cd /opt/bookwyrm/data/nginx/conf
vim production.conf

content

include /etc/nginx/conf.d/server_config;

upstream web {
    server web:8000;
}

server {
    access_log /var/log/nginx/access.log cache_log;

    listen 80;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    #include /etc/nginx/mime.types;
    #default_type application/octet-stream;

    gzip on;
    gzip_disable "msie6";

    proxy_read_timeout 1800s;
    chunked_transfer_encoding on;

    # store responses to anonymous users for up to 1 minute
    proxy_cache bookwyrm_cache;
    proxy_cache_valid any 1m;
    add_header X-Cache-Status $upstream_cache_status;

    # ignore the set cookie header when deciding to
    # store a response in the cache
    proxy_ignore_headers Cache-Control Set-Cookie Expires;

    # PUT requests always bypass the cache
    # logged in sessions also do not populate the cache
    # to avoid serving personal data to anonymous users
    proxy_cache_methods GET HEAD;
    proxy_no_cache      $cookie_sessionid;
    proxy_cache_bypass  $cookie_sessionid;

    # tell the web container the address of the outside client
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;

    # rate limit the login or password reset pages
    location ~ ^/(login[^-/]|password-reset|resend-link|2fa-check) {
        limit_req zone=loginlimit;
        proxy_pass http://web;
    }

    # do not log periodic polling requests from logged in users
    location /api/updates/ {
        access_log off;
        proxy_pass http://web;
    }

    # forward any cache misses or bypass to the web container
    location / {
        proxy_pass http://web;
    }

    # directly serve images and static files from the
    # bookwyrm filesystem using sendfile.
    # make the logs quieter by not reporting these requests
    location ~ ^/(images|static)/ {
        root /app/;
        try_files $uri =404;
        add_header X-Cache-Status STATIC;
        access_log off;
    }

    # monitor the celery queues with flower, no caching enabled
    #location /flower/ {
    #   proxy_pass http://flower:8888;
    #   proxy_cache_bypass 1;
    #}
}

creating the server_config file

cd /opt/bookwyrm/data/nginx/conf
vim server_config

content

client_max_body_size 10m;
limit_req_zone $binary_remote_addr zone=loginlimit:10m rate=1r/s;

# include the cache status in the log message
log_format cache_log '$upstream_cache_status - '
    '$remote_addr [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '$upstream_response_time $request_time';

# Create a cache for responses from the web app
proxy_cache_path
    /var/cache/nginx/bookwyrm_cache
    keys_zone=bookwyrm_cache:20m
    loader_threshold=400
    loader_files=400
    max_size=400m;

# use the accept header as part of the cache key
# since activitypub endpoints have both HTML and JSON
# on the same URI.
proxy_cache_key $scheme$proxy_host$uri$is_args$args$http_accept;
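Because `$http_accept` is part of that key, the HTML and ActivityPub JSON representations of the same URI are cached as separate entries. A small sketch of how the key is assembled (names mirror the nginx variables):

```python
def cache_key(scheme, proxy_host, uri, args, http_accept):
    """Mirror nginx's $scheme$proxy_host$uri$is_args$args$http_accept cache key."""
    is_args = "?" if args else ""  # $is_args is "?" when the request has a query string
    return f"{scheme}{proxy_host}{uri}{is_args}{args}{http_accept}"

html = cache_key("https", "web", "/user/mouse", "", "text/html")
ap   = cache_key("https", "web", "/user/mouse", "", "application/activity+json")
print(html != ap)  # True: same URI, two cache entries
```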

docker env config

creating the env config

cd /opt/bookwyrm/source
cp .env.example .env
vim .env

content

SECRET_KEY="a-very-good-secret-here-25-chars-letter-numbers-symbols"

DEBUG=false

USE_HTTPS=true

DOMAIN=domain.tld
EMAIL=help@domain.tld

LANGUAGE_CODE="en-us"
DEFAULT_LANGUAGE="English"

ALLOWED_HOSTS="localhost,127.0.0.1,[::1],domain.tld"

MEDIA_ROOT=images/

# PostgreSQL

PGPORT=5432
POSTGRES_PASSWORD=a-very-good-password-here
POSTGRES_USER=bookwyrm
POSTGRES_DB=bookwyrm
POSTGRES_HOST=db

# Redis activity stream manager

MAX_STREAM_LENGTH=200
REDIS_ACTIVITY_HOST=redis_activity
REDIS_ACTIVITY_PORT=6379
REDIS_ACTIVITY_PASSWORD=a-very-good-password-here

# Redis as celery broker

REDIS_BROKER_HOST=redis_broker
REDIS_BROKER_PORT=6379
REDIS_BROKER_PASSWORD=a-very-good-password-here

# Monitoring for celery

FLOWER_PORT=8888
FLOWER_USER=admin
FLOWER_PASSWORD=a-very-good-password-here

# Email config

EMAIL_HOST=mail.domain.tld
EMAIL_PORT=587
EMAIL_HOST_USER=user@domain.tld
EMAIL_HOST_PASSWORD=a-very-good-password-here
EMAIL_USE_TLS=true
EMAIL_USE_SSL=false
EMAIL_SENDER_NAME=no-reply
EMAIL_SENDER_DOMAIN=domain.tld

# Query timeouts

SEARCH_TIMEOUT=5
QUERY_TIMEOUT=5

# Thumbnails Generation

ENABLE_THUMBNAIL_GENERATION=true

# S3 configuration

USE_S3=false
AWS_ACCESS_KEY_ID=your-access-key-here
AWS_SECRET_ACCESS_KEY=your-secret-access-key-here
AWS_STORAGE_BUCKET_NAME=your-bucket-name-here
AWS_S3_REGION_NAME=your-bucket-region-here
AWS_S3_CUSTOM_DOMAIN=https://[your-bucket-name].[your-endpoint_url]
AWS_S3_ENDPOINT_URL=https://your-endpoint-url

# Preview image generation can be computing and storage intensive

ENABLE_PREVIEW_IMAGES=true

# Specify RGB tuple or RGB hex strings,

PREVIEW_TEXT_COLOR=#363636
PREVIEW_IMG_WIDTH=1200
PREVIEW_IMG_HEIGHT=630
PREVIEW_DEFAULT_COVER_COLOR=#002549

# Set HTTP_X_FORWARDED_PROTO ONLY to true if you know what you are doing.
# Only use it if your proxy is "swallowing" if the original request was made
# via https. Please refer to the Django-Documentation and assess the risks
# for your instance:
# https://docs.djangoproject.com/en/3.2/ref/settings/#secure-proxy-ssl-header

HTTP_X_FORWARDED_PROTO=false

# TOTP settings

TWO_FACTOR_LOGIN_VALIDITY_WINDOW=2
TWO_FACTOR_LOGIN_MAX_SECONDS=60

# Additional hosts to allow in the Content-Security-Policy, "self" (should be DOMAIN)
# and AWS_S3_CUSTOM_DOMAIN (if used) are added by default.
# Value should be a comma-separated list of host names.
#CSP_ADDITIONAL_HOSTS=

docker-compose config

creating a new docker-compose file

cd /opt/bookwyrm/source
mv docker-compose.yml docker-compose.yml.original
vim docker-compose.yml

content

version: '3'

services:

  nginx:
    image: nginx:latest
    container_name: bookwyrm_nginx
    restart: unless-stopped
    ports:
      - "8001:80"
    depends_on:
      - web
    networks:
      - main
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
      - nginx_conf:/etc/nginx/conf.d

  db:
    build: postgres-docker
    env_file: .env
    container_name: bookwyrm_pgsql
    entrypoint: /bookwyrm-entrypoint.sh
    command: cron postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
      - pgbackup:/backups
    networks:
      - main

  web:
    build: .
    container_name: bookwyrm_web
    env_file: .env
    command: gunicorn bookwyrm.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - db
      - celery_worker
      - redis_activity
    networks:
      - main
    ports:
      - "8000:8000"

  redis_activity:
    image: redis
    container_name: bookwyrm_redis_activity
    command: redis-server --requirepass ${REDIS_ACTIVITY_PASSWORD} --appendonly yes --port ${REDIS_ACTIVITY_PORT}
    volumes:
      - /opt/bookwyrm/data/redis/config/redis.conf:/etc/redis/redis.conf
      - redis_activity_data:/data
    env_file: .env
    networks:
      - main
    restart: on-failure

  redis_broker:
    container_name: bookwyrm_redis_broker
    image: redis
    command: redis-server --requirepass ${REDIS_BROKER_PASSWORD} --appendonly yes --port ${REDIS_BROKER_PORT}
    volumes:
      - /opt/bookwyrm/data/redis/config/redis.conf:/etc/redis/redis.conf
      - redis_broker_data:/data
    env_file: .env
    networks:
      - main
    restart: on-failure

  celery_worker:
    container_name: bookwyrm_celery_worker
    env_file: .env
    build: .
    networks:
      - main
    command: celery -A celerywyrm worker -l info -Q high_priority,medium_priority,low_priority,imports,broadcast
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - db
      - redis_broker
    restart: on-failure

  celery_beat:
    container_name: bookwyrm_celery_beat
    env_file: .env
    build: .
    networks:
      - main
    command: celery -A celerywyrm beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - celery_worker
    restart: on-failure

  flower:
    container_name: bookwyrm_flower
    build: .
    command: celery -A celerywyrm flower --basic_auth=${FLOWER_USER}:${FLOWER_PASSWORD} --url_prefix=flower
    env_file: .env
    volumes:
      - .:/app
    networks:
      - main
    depends_on:
      - db
      - redis_broker
    restart: on-failure

  dev-tools:
    container_name: bookwyrm_devtools
    build: dev-tools
    env_file: .env
    volumes:
      - .:/app

volumes:
 nginx_conf:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/nginx/conf
      o: bind
 pgdata:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/pgsql/data
      o: bind
 pgbackup:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/pgsql/backup
      o: bind
 app_static:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/app/static
      o: bind
 app_media:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/app/media
      o: bind
 redis_activity_data:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/redis/activity_data
      o: bind
 redis_broker_data:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/redis/broker_data
      o: bind
networks:
  main:
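docker-compose substitutes the `${...}` references in this file from the `.env` created earlier. A sketch of that interpolation with Python's string.Template (the values are placeholders):

```python
from string import Template

# In a real deployment these values come from the .env file; placeholders here.
env = {"REDIS_ACTIVITY_PASSWORD": "a-very-good-password-here",
       "REDIS_ACTIVITY_PORT": "6379"}

cmd = Template("redis-server --requirepass ${REDIS_ACTIVITY_PASSWORD} "
               "--appendonly yes --port ${REDIS_ACTIVITY_PORT}")
print(cmd.substitute(env))
```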

initializing bookwyrm

database

cd /opt/bookwyrm/source
./bw-dev migrate

containers

cd /opt/bookwyrm/source
docker-compose up -d

initial setup

cd /opt/bookwyrm/source
./bw-dev setup

expected output

...
...
...
*******************************************
Use this code to create your admin account:
c6c35779-BOLHA-IS-COOL-c026610920d6
*******************************************

reverse proxy config

here we're using an external nginx as our reverse proxy. The config below is just an example

server {
    listen your_listen_ip_here:80;
    server_name domain.tld;
    location / {
        return 301 https://domain.tld$request_uri;
    }
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name domain.tld;

    ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_dhparam /etc/letsencrypt/dh-param.pem;

    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

    ssl_ecdh_curve prime256v1;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_vary on;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://your_docker_instance_ip_here:8000;
        proxy_set_header Host $host;
    }

    location /images/ {
        proxy_pass http://your_docker_instance_ip_here:8001;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

    location /static/ {
        proxy_pass http://your_docker_instance_ip_here:8001;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

}

restart your nginx

finish the bookwyrm config via UI

now, go to your site and finish the configuration using the admin code

That's it, we're done, enjoy your Bookwyrm instance!

enabling object storage (s3)

If you want to enable the Object Storage, edit your env config file

cd /opt/bookwyrm/source
vim .env

Adjust the object storage configuration

USE_S3=true
AWS_ACCESS_KEY_ID=your-access-key-here
AWS_SECRET_ACCESS_KEY=your-secret-access-key-here
AWS_STORAGE_BUCKET_NAME=your-bucket-name-here
AWS_S3_REGION_NAME=your-bucket-region-here
AWS_S3_CUSTOM_DOMAIN=https://[your-bucket-name].[your-endpoint_url]
AWS_S3_ENDPOINT_URL=https://your-endpoint-url

Sync the files

./bw-dev copy_media_to_s3

Recreate all containers

cd /opt/bookwyrm/source
docker-compose up -d

That's it!

:)

 
Read more...

from howtos

Why?

Because it's important to run the latest version, with the latest bug fixes and features.

What do I need to update?

You need a working Mastodon instance, and we assume you have followed our howto

How to upgrade

The Docker side

stop the containers

cd /opt/mastodon/docker
docker-compose down

edit the versions.env file

vim /opt/mastodon/docker/versions.env

and change the version to the latest

MASTODON_VERSION=v4.1.2

to

MASTODON_VERSION=v4.1.4

clear the web directories

rm -rf /opt/mastodon/data/web/public/*
rm -rf /opt/mastodon/data/web/config/*
rm -rf /opt/mastodon/data/web/app/*
rm -rf /opt/mastodon/data/web/system/*

and start all containers again

cd /opt/mastodon/docker
docker-compose up -d

and run the migration

cd /opt/mastodon/docker
docker-compose run --rm shell bundle exec rake db:migrate

customizations

if you had modified anything before, you need to reapply your customizations to the files in these directories.

/opt/mastodon/data/web/public/*
/opt/mastodon/data/web/config/*
/opt/mastodon/data/web/app/*
/opt/mastodon/data/web/system/*

The External Nginx side

Now we need to update the static files cache on our nginx reverse proxy.

nginx cache config

Edit your mastodon vhost file

vim /etc/nginx/conf.d/mastodon.conf

find the cache line

proxy_cache_path /var/cache/mastodon/public/4.1.2 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

change the cache directory

proxy_cache_path /var/cache/mastodon/public/4.1.4 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;

create the new directory

mkdir -p /var/cache/mastodon/public/4.1.4

root directory

find the root directory line

  root /var/www/mastodon/dev.bolha.us/public/4.1.2;

change it

  root /var/www/mastodon/dev.bolha.us/public/4.1.4;

create the new directory

mkdir -p /var/www/mastodon/dev.bolha.us/public/4.1.4

creating a docker volume to copy the new static files

docker volume create --opt type=none --opt device=/var/www/mastodon/dev.bolha.us/public/4.1.4 --opt o=bind mastodon_public_4.1.4

copying the new static files from the new version to the volume

docker run --rm -v "mastodon_public_4.1.4:/static" tootsuite/mastodon:v4.1.4 bash -c "cp -r /opt/mastodon/public/* /static/"

checking the files

ls /var/www/mastodon/dev.bolha.us/public/4.1.4

remove the temporary volume

docker volume rm mastodon_public_4.1.4

now verify your nginx config

# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

now restart your nginx

systemctl restart nginx && systemctl status nginx

That's it!

 
Read more...

from howtos

Last update: 2023-07-07

Why?

Because we want to use a federated forum and link aggregator :)

What do I need to install?

You need a Linux server with Docker and Docker-compose installed.

What's my setup?

  • ProxMox
    • KVM
    • Ubuntu 20.04
      • Docker
      • Docker-compose
  • External Nginx
    • Reverse Proxy Configuration
    • LetsEncrypt Certificate

Where can I find out more about the project?

How can I install it?

creating directories

mkdir -p /opt/lemmy
mkdir -p /opt/lemmy/{docker,data,config}
mkdir -p /opt/lemmy/data/{postgresql,pictrs,themes}
mkdir -p /opt/lemmy/config/{lemmy,postgresql,nginx}

defining permissions

chown -R 991:991 /opt/lemmy/data/pictrs

nginx config

creating the nginx.conf file

vim /opt/lemmy/config/nginx/nginx.conf

content

worker_processes auto;

events {
    worker_connections 1024;
}

http {

    map "$request_method:$http_accept" $proxpass {
        default "http://lemmy-ui";
        "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy";
        "~^(?!(GET|HEAD)).*:" "http://lemmy";
    }

    upstream lemmy {
        server "lemmy:8536";
    }

    upstream lemmy-ui {
        server "lemmy-ui:1234";
    }

    server {
        listen 1236;
        listen 8536;

        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        location / {
            proxy_pass $proxpass;
            rewrite ^(.+)/+$ $1 permanent;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
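The `map` block above routes requests by method and Accept header: GET/HEAD requests that negotiate ActivityPub/LD JSON, and all non-GET/HEAD requests, go to the backend, while everything else goes to the UI. The same decision expressed in Python (regexes copied from the map, for illustration only):

```python
import re

# GET/HEAD requests asking for ActivityPub/LD JSON go to the backend...
AP_JSON = re.compile(r"(?:GET|HEAD):.*?application/(?:activity|ld)\+json")
# ...as does any request whose method is not GET or HEAD.
NON_GET = re.compile(r"(?!(?:GET|HEAD)).*:")

def upstream(method, accept):
    """Pick the upstream the way nginx evaluates $request_method:$http_accept."""
    key = f"{method}:{accept}"
    if AP_JSON.match(key) or NON_GET.match(key):
        return "http://lemmy"
    return "http://lemmy-ui"

print(upstream("GET", "text/html"))                  # http://lemmy-ui
print(upstream("GET", "application/activity+json"))  # http://lemmy
print(upstream("POST", "*/*"))                       # http://lemmy
```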

lemmy backend config

creating the lemmy config file

vim /opt/lemmy/config/lemmy/config.hjson

content

{
  database: {
    host: postgres
    password: "your_postgresql_password_here"
  }
  hostname: "bolha.forum"
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "your_postgresql_password_here"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "noreply@bolha.forum"
    tls_type: "none"
  }
}

docker config

creating the docker-compose.yaml

vim /opt/lemmy/docker/docker-compose.yml

content

version: "3.7"

services:

  proxy:
    image: nginx:1-alpine
    container_name: lemmy_proxy
    ports:
      - "8000:8536"
    volumes:
      - /opt/lemmy/config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
    image: dessalines/lemmy:0.18.1
    container_name: lemmy_backend
    hostname: lemmy
    restart: always
    environment:
      - RUST_LOG="warn"
    volumes:
      - lemmy_config:/config
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1
    container_name: lemmy_frontend
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=bolha.forum
      - LEMMY_UI_HTTPS=true
    volumes:
      - extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-rc.7
    container_name: lemmy_images_backend
    hostname: pictrs
    environment:
      - PICTRS__API_KEY=your_postgresql_password_here
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - pictrs:/mnt:Z
    restart: always
    deploy:
      resources:
        limits:
          memory: 690m

  postgres:
    image: postgres:15-alpine
    container_name: lemmy_database
    hostname: postgres
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=your_postgresql_password_here
      - POSTGRES_DB=lemmy
    volumes:
      - postgresql:/var/lib/postgresql/data:Z
      - /opt/lemmy/config/postgresql/postgresql.conf:/etc/postgresql.conf
    restart: always

  postfix:
    image: mwader/postfix-relay
    container_name: lemmy_smtp_relay
    environment:
      - POSTFIX_myhostname=bolha.forum
      - POSTFIX_smtp_sasl_auth_enable=yes
      - POSTFIX_smtp_sasl_password_maps=static:user@domain.tld:user_password_here
      - POSTFIX_smtp_sasl_security_options=noanonymous
      - POSTFIX_relayhost=smtp.domain.tld:587
    restart: "always"

volumes:
  lemmy_config:
    driver_opts:
      type: none
      device: /opt/lemmy/config/lemmy
      o: bind
  extra_themes:
    driver_opts:
      type: none
      device: /opt/lemmy/data/themes
      o: bind
  pictrs:
    driver_opts:
      type: none
      device: /opt/lemmy/data/pictrs
      o: bind
  postgresql:
    driver_opts:
      type: none
      device: /opt/lemmy/data/postgresql
      o: bind
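One gotcha worth flagging: the named volumes above are bind mounts (type: none, o: bind), and Docker refuses to start the stack if the host paths do not exist. A minimal prep step, using the paths from the compose file:

```shell
# the named volumes use bind mounts; docker will not create these
# host paths for you, and `docker-compose up` fails if they are missing
mkdir -p /opt/lemmy/config/lemmy \
         /opt/lemmy/config/nginx \
         /opt/lemmy/config/postgresql \
         /opt/lemmy/data/themes \
         /opt/lemmy/data/pictrs \
         /opt/lemmy/data/postgresql

# pictrs runs as uid/gid 991 (see the compose file), so it must own its data dir
chown -R 991:991 /opt/lemmy/data/pictrs
```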

spinning up the lemmy instance

$ cd /opt/lemmy/docker
$ docker-compose up -d

checking

$ docker-compose ps

NAME                   IMAGE                        COMMAND                  SERVICE             CREATED             STATUS              PORTS
lemmy_backend          dessalines/lemmy:0.18.1      "/app/lemmy"             lemmy               34 minutes ago      Up 34 minutes
lemmy_database         postgres:15-alpine           "docker-entrypoint.s…"   postgres            34 minutes ago      Up 34 minutes       5432/tcp
lemmy_frontend         dessalines/lemmy-ui:0.18.1   "docker-entrypoint.s…"   lemmy-ui            34 minutes ago      Up 34 minutes       1234/tcp
lemmy_images_backend   asonix/pictrs:0.4.0-rc.7     "/sbin/tini -- /usr/…"   pictrs              34 minutes ago      Up 34 minutes       6669/tcp, 8080/tcp
lemmy_proxy            nginx:1-alpine               "/docker-entrypoint.…"   proxy               34 minutes ago      Up 34 minutes       80/tcp, 0.0.0.0:8000->8536/tcp, :::8000->8536/tcp
lemmy_smtp_relay       mwader/postfix-relay         "/root/run"              postfix             34 minutes ago      Up 34 minutes       25/tcp

You can see that our lemmy_proxy (nginx) is listening on port 8000.

Now let's configure the external reverse proxy.

external reverse-proxy config

certbot + letsencrypt

we're using the Cloudflare DNS plugin with certbot; you need the credentials file ready, like this example

# cat /etc/letsencrypt/cloudflare/bolha-forum.conf
dns_cloudflare_email = dns@bolha.forum
dns_cloudflare_api_key = your_global_api_key_here

then you can generate the certificate

# certbot certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare/bolha-forum.conf -d "*.bolha.forum,bolha.forum"
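If you want to confirm what certbot issued, openssl can print the subject and expiry. A sketch, demonstrated here on a throwaway self-signed certificate; on the real host, point -in at /etc/letsencrypt/live/bolha.forum/fullchain.pem instead:

```shell
# generate a throwaway self-signed cert just to illustrate the check
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=bolha.forum" \
  -keyout /tmp/key.pem -out /tmp/cert.pem

# print subject and expiry; run this against the live fullchain.pem in production
openssl x509 -noout -subject -enddate -in /tmp/cert.pem
```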

now we can configure our nginx!

nginx config

external reverse proxy

server {
    listen your_listen_ip_here:80;
    server_name bolha.forum;
    location / {
        return 301 https://bolha.forum$request_uri;
    }
}

server {
    listen your_listen_ip_here:443 ssl http2;
    server_name bolha.forum;

    ssl_certificate /etc/letsencrypt/live/bolha.forum/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bolha.forum/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;

    ssl_dhparam /etc/letsencrypt/dh-param.pem;

    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';

    # Specifies a curve for ECDHE ciphers.
    ssl_ecdh_curve prime256v1;

    # Server should determine the ciphers, not the client
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Enable compression for JS/CSS/HTML bundle, for improved client load times.
    # It might be nice to compress JSON, but leaving that out to protect against potential
    # compression+encryption information leak attacks like BREACH.
    gzip on;
    gzip_types text/css application/javascript image/svg+xml;
    gzip_vary on;

    # Only connect to this site via HTTPS for the two years
    add_header Strict-Transport-Security "max-age=63072000";

    # Various content security headers
    add_header Referrer-Policy "same-origin";
    add_header X-Content-Type-Options "nosniff";
    add_header X-Frame-Options "DENY";
    add_header X-XSS-Protection "1; mode=block";

    # Upload limit for pictrs
    client_max_body_size 25M;


    location / {
      proxy_pass http://your_docker_host_ip_here:your_port_here;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

check the config

nginx -t

expected output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

and reload the configuration

# nginx -s reload

that's it!

Go to your lemmy!

Now you can access your lemmy instance

Enjoy!

 

from howtos

Why?

We want to offer translations inside Mastodon using libretranslate as our backend.

What do I need to install?

You need a working Mastodon; we recommend this howto.

You need a working LibreTranslate; we recommend this howto.

How to integrate them?

You need to add these two variables to the application.env file if you are following our mastodon howto.

LIBRE_TRANSLATE_ENDPOINT=https://libretranslate.bolha.tools
LIBRE_TRANSLATE_API_KEY=ecae7db0-bolha-us-is-cool-c84c14d2117a

Then restart it

cd /opt/mastodon/docker
docker-compose restart

After that you can check the logs

docker-compose logs -f|grep TranslationsController

Expected output with status code 200

website            | [01fa1ece-5ab3-411d-bd6b-4b5131096735] method=POST path=/api/v1/statuses/110658724777490930/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=200 duration=2988.25 view=0.77 db=2.32

Sometimes you'll get a status code 503. Yes, it will happen; it isn't perfect, but it works well most of the time.

website            | [752a45c9-a94a-408a-8262-7b71cc1528e9] method=POST path=/api/v1/statuses/110658727361133356/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=503 duration=10117.47 view=0.49 db=2.19
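If you want a rough failure rate instead of scrolling through logs, you can tally the status codes. A minimal sketch, using sample lines that stand in for real `docker-compose logs` output:

```shell
# sample log lines standing in for captured `docker-compose logs` output
cat > /tmp/translate.log <<'EOF'
website | method=POST path=/api/v1/statuses/1/translate controller=Api::V1::Statuses::TranslationsController status=200
website | method=POST path=/api/v1/statuses/2/translate controller=Api::V1::Statuses::TranslationsController status=503
website | method=POST path=/api/v1/statuses/3/translate controller=Api::V1::Statuses::TranslationsController status=200
EOF

# count translation responses per status code
grep -o 'status=[0-9]*' /tmp/translate.log | sort | uniq -c
```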

Enjoy it!

:)

 

from mindnotes

Just a mind note, as always.

Host *
  User gutocarvalho
  # keepalive
  TCPKeepAlive yes
  ServerAliveInterval 10800
  # network config
  AddressFamily inet
  Compression yes
  Protocol 2
  # log config
  LogLevel INFO
  # GSSAPI config
  GSSAPIAuthentication no
  GSSAPIDelegateCredentials no
  # checks
  VerifyHostKeyDNS no
  StrictHostKeyChecking no
  # hosts obfuscation
  #HashKnownHosts yes
  # ciphers
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  # connection control
  ControlPath ~/.ssh/controlmasters/%r@%h:%p
  ControlMaster auto
  ControlPersist yes
  # algorithms
  HostKeyAlgorithms ssh-ed25519-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,ssh-rsa
  KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
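Note that the ControlPath above points at a directory ssh will not create for you; without it, multiplexed connections fail. Create it once before using this config:

```shell
# ControlPath ~/.ssh/controlmasters/%r@%h:%p requires the directory to exist
mkdir -p ~/.ssh/controlmasters
chmod 700 ~/.ssh/controlmasters
```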

## special domain

Host *.domain.ai *.domain.sh *.domain.io
  User gcarvalho
  Port 2222
  IdentityFile /path/to/your/ssh/key

## internal network password

Host 192.168.1.*
  User ubnt
  Port 22
  PreferredAuthentications password
  PubkeyAuthentication no
  ControlMaster no

## internal network sshkey

Host 192.168.222.*
  User ansible
  Port 8820
  IdentityFile /path/to/your/ssh/key
  ControlMaster no

## git services

Host github github.com
  HostName github.com
  PreferredAuthentications publickey
  IdentityFile /path/to/your/ssh/key
  User gutocarvalho

Host bitbucket bitbucket.org
  HostName bitbucket.org
  PreferredAuthentications publickey
  IdentityFile /path/to/your/ssh/key
  User gutocarvalho

Host gitlab gitlab.com
  HostName gitlab.com
  PreferredAuthentications publickey
  IdentityFile /path/to/your/ssh/key
  User gutocarvalho

## other services

Host mastodon-prod
  Hostname host.domain.tld
  User gutocarvalho
  Port 4430
  IdentityFile /path/to/your/ssh/key

Host mastodon-dev
  Hostname host.domain.tld
  User gutocarvalho
  Port 4431
  IdentityFile /path/to/your/ssh/key

 


from mindnotes

For more show options

show command [TAB] [TAB]

show

general usg information

version

show version

configuration

show configuration all

logs with tail-like view

show log tail

network

summary

show interfaces

detailed

show interfaces detail

arp table

show arp

debugging

show debugging

load balancer

status

show load-balance status

watchdog status

show load-balance watchdog

dns

statistics

show dns forwarding statistics

dhcp

leases

show dhcp leases

statistics

show dhcp statistics

ntp

configuration

show ntp

system information

disk usage

show system storage

memory usage

show system memory

processes

show system processes

uptime

show system uptime

Connections information

show system connections

users

show system login users

defining the network controller

Informing the Controller

set-inform http://unifi_network_controller_ip_here:8080/inform

reset & restore

Resetting to the default config

syswrapper.sh restore=default

refs

 

from mindnotes

why?

The Ubuntu 20.04 certbot package is ancient: it ships version 0.40.0, while the current version is 2.6.x.

I need a feature like --preferred-chain that only exists in recent versions.

let's install it

apt remove certbot -f

installing dependencies

apt install python3 python3-venv libaugeas0

creating a venv

python3 -m venv /opt/certbot/

upgrading pip

/opt/certbot/bin/pip install --upgrade pip

installing the plugins

/opt/certbot/bin/pip install certbot certbot-apache certbot-nginx certbot-dns-cloudflare

creating the symbolic link

ln -s /opt/certbot/bin/certbot /usr/bin/certbot

creating a certificate for my zimbra

certbot certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare/nativetrail.conf -d '*.nativetrail.io,nativetrail.io' -n --force-renewal --preferred-chain "ISRG Root X1"
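Since this certbot lives outside apt, nothing schedules renewals for you; the upstream pip-install instructions recommend adding a cron job or systemd timer yourself. A sketch of a cron entry (the path matches the venv install above; adjust the schedule to taste):

```shell
# hypothetical renewal job for the venv-installed certbot
mkdir -p /etc/cron.d
cat > /etc/cron.d/certbot-renew <<'EOF'
0 3 * * * root /opt/certbot/bin/certbot renew -q
EOF
```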

That's it ;)

 

from fediverse

This is a list of mastodon-related projects and apps.

Last update: 22/Jun/23

official project site

official git repo

official docker repo

official instances

recommended instances

Brazil

mastodon relevant forks

mastodon web-clients

mastodon frontends?

mastodon desktop clients

beta

mastodon ios clients

official app

tapbots

others

beta

mastodon android clients

official client

others

mastodon terminal clients

mastodon utils

do you want to help with this list?

Please send your suggestions to:

mastodon – @gutocarvalho@gcn.sh

matrix – @gutocarvalho@bolha.chat

 
