Read the latest posts from blog.gcn.sh.
In this example we'll expand an XFS partition, sdb1, mounted on the /opt directory.
First, expand the disk using the Proxmox UI: you'll need to shut down the KVM instance, expand the disk, and start it again.
Now, with the OS running, you can:
umount /opt
parted /dev/sdb
print
fix
resizepart 1 100%
quit
xfs_repair /dev/sdb1
mount /opt
xfs_growfs /dev/sdb1
that's it!
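The steps above can be collected into one guarded sketch. This is an illustration, not a drop-in script: the device, partition, and mount point are the example values from this post, and nothing runs until you call the function yourself.

```shell
#!/usr/bin/env bash
# Sketch of the resize steps as a single function; nothing executes
# until you uncomment the call at the bottom with your own values.
set -u

# parted's resizepart wants the partition *number*, not the device name;
# this simple helper extracts the digits (e.g. /dev/sdb1 -> 1)
part_num() { printf '%s' "${1//[!0-9]/}"; }

expand_xfs() {
  local disk=$1 part=$2 mnt=$3
  umount "$mnt"
  parted --script "$disk" resizepart "$(part_num "$part")" 100%
  xfs_repair "$part"
  mount "$mnt"          # assumes an /etc/fstab entry for the mount point
  xfs_growfs "$part"
}

# expand_xfs /dev/sdb /dev/sdb1 /opt   # uncomment after checking the values
```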
from howtos
Because we want to run our own object storage system, on-premises.
You need a Linux server with Docker and docker-compose installed.
Project
Docker installation
Single Node Multi Drive Arch
Virtual Machine
vcpu: 8
memory: 8 GB RAM
network: 1 Gbit
disk: 350 GB
Disk layout
root (30g)
/var/lib/docker (30g)
/opt/minio (300g)
These are all the necessary ports to open
22 TCP (ssh)
80 TCP (minio api)
8080 TCP (minio console)
Any other port should be closed.
We'll use 2 DNS Records
minio-admin.domain.tld (console)
minio.domain.tld (api)
apt-get update
apt-get upgrade -y
apt install screen htop net-tools ccze git
curl -fsSL https://get.docker.com | bash
Let's create the configuration file.
vim /etc/docker/daemon.json
Content
{
  "default-address-pools": [
    {
      "base": "10.20.30.0/24",
      "size": 24
    },
    {
      "base": "10.20.31.0/24",
      "size": 24
    }
  ]
}
Here we're defining uncommon address pools to avoid conflicts with your provider's or your organization's networks. Note that a /24 base with a subnet size of 24 yields exactly one network per pool. You need to restart Docker afterwards.
systemctl restart docker
systemctl enable docker
Download
curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -
Adjusting permissions
chmod +x docker-compose-linux-x86_64
Moving the binary to the /usr/local/bin directory
mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
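A quick sanity check before moving on — the helper below is a small function of our own, not a docker-compose feature; it just confirms the binary landed in place and is executable:

```shell
# check_bin: report whether a given path exists and is executable
check_bin() { [ -x "$1" ] && echo "ok: $1" || echo "missing: $1"; }

check_bin /usr/local/bin/docker-compose
```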
mkdir -p /opt/minio/{docker,storage}
vim /opt/minio/docker/docker-compose.yaml
Content
version: '3.7'

# Settings and configurations that are common for all containers
x-minio-common: &minio-common
  image: quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z
  command: server --console-address ":9001" http://minio{1...4}/data{1...2}
  expose:
    - "9000"
    - "9001"
  environment:
    MINIO_ROOT_USER: minio
    MINIO_ROOT_PASSWORD: your_password_here
    MINIO_SERVER_URL: https://minio.domain.tld
    MINIO_DOMAIN: minio.domain.tld
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3

# starts 4 docker containers running minio server instances.
# using nginx reverse proxy, load balancing, you can access
# it through port 9000.
services:
  minio1:
    <<: *minio-common
    hostname: minio1
    restart: always
    volumes:
      - /opt/minio/storage/data1-1:/data1
      - /opt/minio/storage/data1-2:/data2
  minio2:
    <<: *minio-common
    hostname: minio2
    restart: always
    volumes:
      - /opt/minio/storage/data2-1:/data1
      - /opt/minio/storage/data2-2:/data2
  minio3:
    <<: *minio-common
    hostname: minio3
    restart: always
    volumes:
      - /opt/minio/storage/data3-1:/data1
      - /opt/minio/storage/data3-2:/data2
  minio4:
    <<: *minio-common
    hostname: minio4
    restart: always
    volumes:
      - /opt/minio/storage/data4-1:/data1
      - /opt/minio/storage/data4-2:/data2
  nginx:
    image: nginx:1.19.2-alpine
    hostname: nginx
    restart: always
    volumes:
      - /opt/minio/docker/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
      - "8080:8080"
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

## These named volumes are not referenced above (the services bind-mount
## host paths instead); they are kept from the upstream MinIO example.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
vim /opt/minio/docker/nginx.conf
Content
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 4096;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
keepalive_timeout 65;
upstream minio {
server minio1:9000;
server minio2:9000;
server minio3:9000;
server minio4:9000;
}
upstream console {
ip_hash;
server minio1:9001;
server minio2:9001;
server minio3:9001;
server minio4:9001;
}
server {
listen 80;
ignore_invalid_headers off;
client_max_body_size 0;
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";
chunked_transfer_encoding off;
proxy_pass http://minio;
}
}
server {
listen 8080;
ignore_invalid_headers off;
client_max_body_size 0;
proxy_buffering off;
proxy_request_buffering off;
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-NginX-Proxy true;
real_ip_header X-Real-IP;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
chunked_transfer_encoding off;
proxy_pass http://console;
}
}
}
cd /opt/minio/docker
docker-compose up -d
docker-compose ps
Expected output
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-minio1-1 quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z "/usr/bin/docker-ent…" minio1 11 minutes ago Up 9 minutes (healthy) 9000-9001/tcp
docker-minio2-1 quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z "/usr/bin/docker-ent…" minio2 11 minutes ago Up 9 minutes (healthy) 9000-9001/tcp
docker-minio3-1 quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z "/usr/bin/docker-ent…" minio3 11 minutes ago Up 9 minutes (healthy) 9000-9001/tcp
docker-minio4-1 quay.io/minio/minio:RELEASE.2023-08-04T17-40-21Z "/usr/bin/docker-ent…" minio4 11 minutes ago Up 9 minutes (healthy) 9000-9001/tcp
docker-nginx-1 nginx:1.19.2-alpine "/docker-entrypoint.…" nginx 11 minutes ago Up 9 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
Check that ports 80 and 8080 are listening
netstat -ntpl|grep docker
Expected output
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2116141/docker-prox
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 2116110/docker-prox
tcp6 0 0 :::80 :::* LISTEN 2116149/docker-prox
tcp6 0 0 :::8080 :::* LISTEN 2116123/docker-prox
You can now validate the console
curl http://localhost:8080
Expected Output
<!doctype html><html lang="en"><head><meta charset="utf-8"/><base href="/"/><meta content="width=device-width,initial-scale=1" name="viewport"/><meta content="#081C42" media="(prefers-color-scheme: light)" name="theme-color"/><meta content="#081C42" media="(prefers-color-scheme: dark)" name="theme-color"/><meta content="MinIO Console" name="description"/><meta name="minio-license" content="agpl" /><link href="./styles/root-styles.css" rel="stylesheet"/><link href="./apple-icon-180x180.png" rel="apple-touch-icon" sizes="180x180"/><link href="./favicon-32x32.png" rel="icon" sizes="32x32" type="image/png"/><link href="./favicon-96x96.png" rel="icon" sizes="96x96" type="image/png"/><link href="./favicon-16x16.png" rel="icon" sizes="16x16" type="image/png"/><link href="./manifest.json" rel="manifest"/><link color="#3a4e54" href="./safari-pinned-tab.svg" rel="mask-icon"/><title>MinIO Console</title><script defer="defer" src="./static/js/main.92fa0385.js"></script><link href="./static/css/main.02c1b6fd.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"><div id="preload"><img src="./images/background.svg"/> <img src="./images/background-wave-orig2.svg"/></div><div id="loader-block"><img src="./Loader.svg"/></div></div></body></html>
You can now validate if the API is running
curl http://localhost:80
Expected output
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><Resource>/</Resource><RequestId>177E5BC14618C529</RequestId><HostId>e0c385c033c4356721cc9121d3109c9b9bfdefb22fd2747078acd22328799e36</HostId></Error>root@bolha.io:/MinIO/docker#
Validate if the API is healthy
curl -si http://localhost/minio/health/live
Expected output
HTTP/1.1 200 OK
Server: nginx/1.19.2
Date: Thu, 24 Aug 2023 15:38:38 GMT
Content-Length: 0
Connection: keep-alive
Accept-Ranges: bytes
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
X-Amz-Id-2: 46efbbb7efbd81c7d995bde03cc6fabf60c12f80d4e074c1c972dbc4d583c3d4
X-Amz-Request-Id: 177E5BDDF79EDEF8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
You can now configure your reverse proxy
minio-admin.domain.tld => the ip-of-the-vm, port 8080 (console)
minio.domain.tld => the ip-of-the-vm, port 80 (api)
We'll not cover the reverse proxy config yet, maybe in the future.
After the configuration you can visit the admin console
https://minio-admin.domain.tld
You can follow the container logs while using MinIO.
cd /opt/minio/docker
docker-compose logs -f --tail=10
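Optionally, you can exercise the deployment end to end with the MinIO client (mc). The alias name `local` and the credentials below are the example values from this howto, and the commands are wrapped in a function so nothing runs until you call it with mc installed:

```shell
# Smoke test the deployment with the MinIO client (mc):
# register an alias, create a bucket, upload and list a file.
mc_smoke_test() {
  mc alias set local https://minio.domain.tld minio your_password_here
  mc mb local/test-bucket
  echo "hello" > /tmp/hello.txt
  mc cp /tmp/hello.txt local/test-bucket/
  mc ls local/test-bucket
}

# mc_smoke_test   # run once mc is installed and DNS/TLS are in place
```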
Cheers [s]
from mindnotes
Fast and simple!
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved
echo nameserver 1.1.1.1 > /etc/resolv.conf
echo nameserver 8.8.8.8 >> /etc/resolv.conf
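The same change can be written as a single heredoc, which avoids ending up with a one-line resolv.conf if you forget the append (`>>`) on the second echo. A small sketch that stages the file in a temp path first:

```shell
# Stage the new resolv.conf in a temp file, verify it, then move it
# into place (the final mv is left commented out on purpose).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
grep -c '^nameserver' "$tmp"   # prints 2
# mv "$tmp" /etc/resolv.conf
```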
from howtos
Because we want to use a federated book review system :)
You need a Linux server with Docker and docker-compose installed.
Let's start by creating the directories
mkdir -p /opt/bookwyrm
mkdir -p /opt/bookwyrm/data/nginx/conf
mkdir -p /opt/bookwyrm/data/pgsql/data
mkdir -p /opt/bookwyrm/data/pgsql/backup
mkdir -p /opt/bookwyrm/data/app/static
mkdir -p /opt/bookwyrm/data/app/media
mkdir -p /opt/bookwyrm/data/redis/config
mkdir -p /opt/bookwyrm/data/redis/activity_data
mkdir -p /opt/bookwyrm/data/redis/broker_data
cd /opt/bookwyrm
git clone https://github.com/bookwyrm-social/bookwyrm.git source
cd source
git checkout production
copying redis.conf
cd /opt/bookwyrm/source
cp redis.conf /opt/bookwyrm/data/redis/config
creating production.conf
cd /opt/bookwyrm/data/nginx/conf
vim production.conf
content
include /etc/nginx/conf.d/server_config;
upstream web {
server web:8000;
}
server {
access_log /var/log/nginx/access.log cache_log;
listen 80;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
#include /etc/nginx/mime.types;
#default_type application/octet-stream;
gzip on;
gzip_disable "msie6";
proxy_read_timeout 1800s;
chunked_transfer_encoding on;
# store responses to anonymous users for up to 1 minute
proxy_cache bookwyrm_cache;
proxy_cache_valid any 1m;
add_header X-Cache-Status $upstream_cache_status;
# ignore the set cookie header when deciding to
# store a response in the cache
proxy_ignore_headers Cache-Control Set-Cookie Expires;
# PUT requests always bypass the cache
# logged in sessions also do not populate the cache
# to avoid serving personal data to anonymous users
proxy_cache_methods GET HEAD;
proxy_no_cache $cookie_sessionid;
proxy_cache_bypass $cookie_sessionid;
# tell the web container the address of the outside client
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_redirect off;
# rate limit the login or password reset pages
location ~ ^/(login[^-/]|password-reset|resend-link|2fa-check) {
limit_req zone=loginlimit;
proxy_pass http://web;
}
# do not log periodic polling requests from logged in users
location /api/updates/ {
access_log off;
proxy_pass http://web;
}
# forward any cache misses or bypass to the web container
location / {
proxy_pass http://web;
}
# directly serve images and static files from the
# bookwyrm filesystem using sendfile.
# make the logs quieter by not reporting these requests
location ~ ^/(images|static)/ {
root /app/;
try_files $uri =404;
add_header X-Cache-Status STATIC;
access_log off;
}
# monitor the celery queues with flower, no caching enabled
#location /flower/ {
# proxy_pass http://flower:8888;
# proxy_cache_bypass 1;
#}
}
creating the server_config file
cd /opt/bookwyrm/data/nginx/conf
vim server_config
content
client_max_body_size 10m;
limit_req_zone $binary_remote_addr zone=loginlimit:10m rate=1r/s;
# include the cache status in the log message
log_format cache_log '$upstream_cache_status - '
'$remote_addr [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$upstream_response_time $request_time';
# Create a cache for responses from the web app
proxy_cache_path
/var/cache/nginx/bookwyrm_cache
keys_zone=bookwyrm_cache:20m
loader_threshold=400
loader_files=400
max_size=400m;
# use the accept header as part of the cache key
# since activitypub endpoints have both HTML and JSON
# on the same URI.
proxy_cache_key $scheme$proxy_host$uri$is_args$args$http_accept;
creating the env config
cd /opt/bookwyrm/source
cp .env.example .env
vim .env
content
SECRET_KEY="a-very-good-secret-here-25-chars-letter-numbers-symbols"
DEBUG=false
USE_HTTPS=true
DOMAIN=domain.tld
EMAIL=help@domain.tld
LANGUAGE_CODE="en-us"
DEFAULT_LANGUAGE="English"
ALLOWED_HOSTS="localhost,127.0.0.1,[::1],domain.tld"
MEDIA_ROOT=images/
# PostgreSQL
PGPORT=5432
POSTGRES_PASSWORD=a-very-good-password-here
POSTGRES_USER=bookwyrm
POSTGRES_DB=bookwyrm
POSTGRES_HOST=db
# Redis activity stream manager
MAX_STREAM_LENGTH=200
REDIS_ACTIVITY_HOST=redis_activity
REDIS_ACTIVITY_PORT=6379
REDIS_ACTIVITY_PASSWORD=a-very-good-password-here
# Redis as celery broker
REDIS_BROKER_HOST=redis_broker
REDIS_BROKER_PORT=6379
REDIS_BROKER_PASSWORD=a-very-good-password-here
# Monitoring for celery
FLOWER_PORT=8888
FLOWER_USER=admin
FLOWER_PASSWORD=a-very-good-password-here
# Email config
EMAIL_HOST=mail.domain.tld
EMAIL_PORT=587
EMAIL_HOST_USER=user@domain.tld
EMAIL_HOST_PASSWORD=a-very-good-password-here
EMAIL_USE_TLS=true
EMAIL_USE_SSL=false
EMAIL_SENDER_NAME=no-reply
EMAIL_SENDER_DOMAIN=domain.tld
# Query timeouts
SEARCH_TIMEOUT=5
QUERY_TIMEOUT=5
# Thumbnails Generation
ENABLE_THUMBNAIL_GENERATION=true
# S3 configuration
USE_S3=false
AWS_ACCESS_KEY_ID=your-access-key-here
AWS_SECRET_ACCESS_KEY=your-secret-access-key-here
AWS_STORAGE_BUCKET_NAME=your-bucket-name-here
AWS_S3_REGION_NAME=your-bucket-region-here
AWS_S3_CUSTOM_DOMAIN=https://[your-bucket-name].[your-endpoint_url]
AWS_S3_ENDPOINT_URL=https://your-endpoint-url
# Preview image generation can be computing and storage intensive
ENABLE_PREVIEW_IMAGES=true
# Specify RGB tuple or RGB hex strings,
PREVIEW_TEXT_COLOR=#363636
PREVIEW_IMG_WIDTH=1200
PREVIEW_IMG_HEIGHT=630
PREVIEW_DEFAULT_COVER_COLOR=#002549
# Set HTTP_X_FORWARDED_PROTO ONLY to true if you know what you are doing.
# Only use it if your proxy is "swallowing" if the original request was made
# via https. Please refer to the Django-Documentation and assess the risks
# for your instance:
# https://docs.djangoproject.com/en/3.2/ref/settings/#secure-proxy-ssl-header
HTTP_X_FORWARDED_PROTO=false
# TOTP settings
TWO_FACTOR_LOGIN_VALIDITY_WINDOW=2
TWO_FACTOR_LOGIN_MAX_SECONDS=60
# Additional hosts to allow in the Content-Security-Policy, "self" (should be DOMAIN)
# and AWS_S3_CUSTOM_DOMAIN (if used) are added by default.
# Value should be a comma-separated list of host names.
#CSP_ADDITIONAL_HOSTS=
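The SECRET_KEY above is just a placeholder; any long random string works. One possible way to generate a 50-character alphanumeric value:

```shell
# Sample 50 alphanumeric characters from the kernel's random source.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 50; echo
```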
creating a new docker-compose file
cd /opt/bookwyrm/source
mv docker-compose.yml docker-compose.yml.original
vim docker-compose.yml
content
version: '3'
services:
  nginx:
    image: nginx:latest
    container_name: bookwyrm_nginx
    restart: unless-stopped
    ports:
      - "8001:80"
    depends_on:
      - web
    networks:
      - main
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
      - nginx_conf:/etc/nginx/conf.d
  db:
    build: postgres-docker
    env_file: .env
    container_name: bookwyrm_pgsql
    entrypoint: /bookwyrm-entrypoint.sh
    command: cron postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
      - pgbackup:/backups
    networks:
      - main
  web:
    build: .
    container_name: bookwyrm_web
    env_file: .env
    command: gunicorn bookwyrm.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - db
      - celery_worker
      - redis_activity
    networks:
      - main
    ports:
      - "8000:8000"
  redis_activity:
    image: redis
    container_name: bookwyrm_redis_activity
    command: redis-server --requirepass ${REDIS_ACTIVITY_PASSWORD} --appendonly yes --port ${REDIS_ACTIVITY_PORT}
    volumes:
      - /opt/bookwyrm/data/redis/config/redis.conf:/etc/redis/redis.conf
      - redis_activity_data:/data
    env_file: .env
    networks:
      - main
    restart: on-failure
  redis_broker:
    container_name: bookwyrm_redis_broker
    image: redis
    command: redis-server --requirepass ${REDIS_BROKER_PASSWORD} --appendonly yes --port ${REDIS_BROKER_PORT}
    volumes:
      - /opt/bookwyrm/data/redis/config/redis.conf:/etc/redis/redis.conf
      - redis_broker_data:/data
    env_file: .env
    networks:
      - main
    restart: on-failure
  celery_worker:
    container_name: bookwyrm_celery_worker
    env_file: .env
    build: .
    networks:
      - main
    command: celery -A celerywyrm worker -l info -Q high_priority,medium_priority,low_priority,imports,broadcast
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - db
      - redis_broker
    restart: on-failure
  celery_beat:
    container_name: bookwyrm_celery_beat
    env_file: .env
    build: .
    networks:
      - main
    command: celery -A celerywyrm beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
    volumes:
      - .:/app
      - app_static:/app/static
      - app_media:/app/images
    depends_on:
      - celery_worker
    restart: on-failure
  flower:
    container_name: bookwyrm_flower
    build: .
    command: celery -A celerywyrm flower --basic_auth=${FLOWER_USER}:${FLOWER_PASSWORD} --url_prefix=flower
    env_file: .env
    volumes:
      - .:/app
    networks:
      - main
    depends_on:
      - db
      - redis_broker
    restart: on-failure
  dev-tools:
    container_name: bookwyrm_devtools
    build: dev-tools
    env_file: .env
    volumes:
      - .:/app
volumes:
  nginx_conf:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/nginx/conf
      o: bind
  pgdata:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/pgsql/data
      o: bind
  pgbackup:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/pgsql/backup
      o: bind
  app_static:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/app/static
      o: bind
  app_media:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/app/media
      o: bind
  redis_activity_data:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/redis/activity_data
      o: bind
  redis_broker_data:
    driver_opts:
      type: none
      device: /opt/bookwyrm/data/redis/broker_data
      o: bind
networks:
  main:
cd /opt/bookwyrm/source
./bw-dev migrate
cd /opt/bookwyrm/source
docker-compose up -d
cd /opt/bookwyrm/source
./bw-dev setup
Expected output
...
...
...
*******************************************
Use this code to create your admin account:
c6c35779-BOLHA-IS-COOL-c026610920d6
*******************************************
Here we are using an external nginx as our reverse proxy; the config below is just an example.
server {
listen your_listen_ip_here:80;
server_name domain.tld;
location / {
return 301 https://domain.tld$request_uri;
}
}
server {
listen your_listen_ip_here:443 ssl http2;
server_name domain.tld;
ssl_certificate /etc/letsencrypt/live/domain.tld/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain.tld/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_dhparam /etc/letsencrypt/dh-param.pem;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_ecdh_curve prime256v1;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_vary on;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://your_docker_instance_ip_here:8000;
proxy_set_header Host $host;
}
location /images/ {
proxy_pass http://your_docker_instance_ip_here:8001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
}
location /static/ {
proxy_pass http://your_docker_instance_ip_here:8001;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
}
}
restart your nginx
now, go to your site and finish the configuration using the admin CODE
That's it, we're done, enjoy your Bookwyrm instance!
If you want to enable the Object Storage, edit your env config file
cd /opt/bookwyrm/source
vim .env
Adjust the object storage configuration
USE_S3=true
AWS_ACCESS_KEY_ID=your-access-key-here
AWS_SECRET_ACCESS_KEY=your-secret-access-key-here
AWS_STORAGE_BUCKET_NAME=your-bucket-name-here
AWS_S3_REGION_NAME=your-bucket-region-here
AWS_S3_CUSTOM_DOMAIN=https://[your-bucket-name].[your-endpoint_url]
AWS_S3_ENDPOINT_URL=https://your-endpoint-url
Sync the files
./bw-dev copy_media_to_s3
Recreate all containers
cd /opt/bookwyrm/source
docker-compose up -d
That's it!
:)
from howtos
Because it's important to run the latest version, with the latest bug fixes and features.
You need a working Mastodon; we expect that you have followed our howto.
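Before touching anything, it is prudent to dump the database first. This is a sketch, not part of the original howto: the service name `db` and the postgres user/database `mastodon` are assumptions, so check them against your compose file before using it.

```shell
# Optional pre-upgrade backup; service name "db" and user/db "mastodon"
# are assumptions -- adjust to your docker-compose.yml.
backup_mastodon_db() {
  local stamp
  stamp=$(date +%F)   # e.g. 2023-08-24
  cd /opt/mastodon/docker
  docker-compose exec -T db pg_dump -U mastodon -Fc mastodon \
    > "/opt/mastodon/backup-${stamp}.dump"
}

# backup_mastodon_db   # run manually before the upgrade
```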
stop the containers
cd /opt/mastodon/docker
docker-compose down
edit the versions.env file
vim /opt/mastodon/docker/versions.env
and change the version to the latest
MASTODON_VERSION=v4.1.2
to
MASTODON_VERSION=v4.1.4
clear the web directories
rm -rf /opt/mastodon/data/web/public/*
rm -rf /opt/mastodon/data/web/config/*
rm -rf /opt/mastodon/data/web/app/*
rm -rf /opt/mastodon/data/web/system/*
and start all containers again
cd /opt/mastodon/docker
docker-compose up -d
and run the migration
cd /opt/mastodon/docker
docker-compose run --rm shell bundle exec rake db:migrate
If you had modified anything in these directories before, you need to apply your customizations again:
/opt/mastodon/data/web/public/*
/opt/mastodon/data/web/config/*
/opt/mastodon/data/web/app/*
/opt/mastodon/data/web/system/*
Now we need to update the static files cache on our nginx reverse proxy.
Edit your mastodon vhost file
vim /etc/nginx/conf.d/mastodon.conf
find the cache line
proxy_cache_path /var/cache/mastodon/public/4.1.2 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;
change the cache directory
proxy_cache_path /var/cache/mastodon/public/4.1.4 levels=1:2 keys_zone=MASTODON_CACHE_v412:10m inactive=7d max_size=3g;
create the new directory
mkdir -p /var/cache/mastodon/public/4.1.4
find the root directory line
root /var/www/mastodon/dev.bolha.us/public/4.1.2;
change it
root /var/www/mastodon/dev.bolha.us/public/4.1.4;
create the new directory
mkdir -p /var/www/mastodon/dev.bolha.us/public/4.1.4
creating a docker volume to copy the new static files
docker volume create --opt type=none --opt device=/var/www/mastodon/dev.bolha.us/public/4.1.4 --opt o=bind mastodon_public_4.1.4
copying the new static files from the new version to the volume
docker run --rm -v "mastodon_public_4.1.4:/static" tootsuite/mastodon:v4.1.4 bash -c "cp -r /opt/mastodon/public/* /static/"
checking the files
ls /var/www/mastodon/dev.bolha.us/public/4.1.4
remove the temporary volume
docker volume rm mastodon_public_4.1.4
now verify your nginx config
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
now restart your nginx
systemctl restart nginx && systemctl status nginx
That's it!
from howtos
Last update: 2023-07-07
Because we want to use a federated forum and link aggregator :)
You need a Linux server with Docker and docker-compose installed.
creating directories
mkdir -p /opt/lemmy
mkdir -p /opt/lemmy/{docker,data,config}
mkdir -p /opt/lemmy/data/{postgresql,pictrs,themes}
mkdir -p /opt/lemmy/config/{lemmy,postgresql,nginx}
defining permissions
chown -R 991:991 /opt/lemmy/data/pictrs
creating the nginx.conf file
vim /opt/lemmy/config/nginx/nginx.conf
content
worker_processes auto;
events {
worker_connections 1024;
}
http {
map "$request_method:$http_accept" $proxpass {
default "http://lemmy-ui";
"~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy";
"~^(?!(GET|HEAD)).*:" "http://lemmy";
}
upstream lemmy {
server "lemmy:8536";
}
upstream lemmy-ui {
server "lemmy-ui:1234";
}
server {
listen 1236;
listen 8536;
server_name localhost;
server_tokens off;
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_vary on;
client_max_body_size 20M;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
location / {
proxy_pass $proxpass;
rewrite ^(.+)/+$ $1 permanent;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
proxy_pass "http://lemmy";
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
creating the lemmy config file
vim /opt/lemmy/config/lemmy/config.hjson
content
{
  database: {
    host: postgres
    password: "your_postgresql_password_here"
  }
  hostname: "bolha.forum"
  pictrs: {
    url: "http://pictrs:8080/"
    # must match PICTRS__API_KEY in the docker-compose file
    api_key: "your_pictrs_api_key_here"
  }
  email: {
    smtp_server: "postfix:25"
    smtp_from_address: "noreply@bolha.forum"
    tls_type: "none"
  }
}
creating the docker-compose.yaml
vim /opt/lemmy/docker/docker-compose.yml
content
version: "3.7"
services:
  proxy:
    image: nginx:1-alpine
    container_name: lemmy_proxy
    ports:
      - "8000:8536"
    volumes:
      - /opt/lemmy/config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui
  lemmy:
    image: dessalines/lemmy:0.18.1
    container_name: lemmy_backend
    hostname: lemmy
    restart: always
    environment:
      - RUST_LOG="warn"
    volumes:
      - lemmy_config:/config
    depends_on:
      - postgres
      - pictrs
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1
    container_name: lemmy_frontend
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=bolha.forum
      - LEMMY_UI_HTTPS=true
    volumes:
      - extra_themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always
  pictrs:
    image: asonix/pictrs:0.4.0-rc.7
    container_name: lemmy_images_backend
    hostname: pictrs
    environment:
      # must match pictrs.api_key in config.hjson
      - PICTRS__API_KEY=your_pictrs_api_key_here
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - pictrs:/mnt:Z
    restart: always
    deploy:
      resources:
        limits:
          memory: 690m
  postgres:
    image: postgres:15-alpine
    container_name: lemmy_database
    hostname: postgres
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_PASSWORD=your_postgresql_password_here
      - POSTGRES_DB=lemmy
    volumes:
      - postgresql:/var/lib/postgresql/data:Z
      - /opt/lemmy/config/postgresql/postgresql.conf:/etc/postgresql.conf
    restart: always
  postfix:
    image: mwader/postfix-relay
    container_name: lemmy_smtp_relay
    environment:
      - POSTFIX_myhostname=bolha.forum
      - POSTFIX_smtp_sasl_auth_enable=yes
      - POSTFIX_smtp_sasl_password_maps=static:user@domain.tld:user_password_here
      - POSTFIX_smtp_sasl_security_options=noanonymous
      - POSTFIX_relayhost=smtp.domain.tld:587
    restart: always
volumes:
  lemmy_config:
    driver_opts:
      type: none
      device: /opt/lemmy/config/lemmy
      o: bind
  extra_themes:
    driver_opts:
      type: none
      device: /opt/lemmy/data/themes
      o: bind
  pictrs:
    driver_opts:
      type: none
      device: /opt/lemmy/data/pictrs
      o: bind
  postgresql:
    driver_opts:
      type: none
      device: /opt/lemmy/data/postgresql
      o: bind
cd /opt/lemmy/docker
docker-compose up -d
docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
lemmy_backend dessalines/lemmy:0.18.1 "/app/lemmy" lemmy 34 minutes ago Up 34 minutes
lemmy_database postgres:15-alpine "docker-entrypoint.s…" postgres 34 minutes ago Up 34 minutes 5432/tcp
lemmy_frontend dessalines/lemmy-ui:0.18.1 "docker-entrypoint.s…" lemmy-ui 34 minutes ago Up 34 minutes 1234/tcp
lemmy_images_backend asonix/pictrs:0.4.0-rc.7 "/sbin/tini -- /usr/…" pictrs 34 minutes ago Up 34 minutes 6669/tcp, 8080/tcp
lemmy_proxy nginx:1-alpine "/docker-entrypoint.…" proxy 34 minutes ago Up 34 minutes 80/tcp, 0.0.0.0:8000->8536/tcp, :::8000->8536/tcp
lemmy_smtp_relay mwader/postfix-relay "/root/run" postfix 34 minutes ago Up 34 minutes 25/tcp
You can see that our lemmy_proxy (nginx) is running on port 8000.
Now let's configure the external reverse proxy.
We're using the Cloudflare DNS plugin with certbot; you need to have the credentials file ready, like this example
# cat /etc/letsencrypt/cloudflare/bolha-forum.conf
dns_cloudflare_email = dns@bolha.forum
dns_cloudflare_api_key = your_token_here
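Since this file contains a live API token, it is worth restricting its permissions; certbot's DNS plugins warn about world-readable credentials files. A small sketch (the helper name is our own):

```shell
# Restrict a credentials file to its owner and echo the resulting mode.
secure_creds() {
  chmod 600 "$1"
  stat -c '%a' "$1"   # prints 600
}

# secure_creds /etc/letsencrypt/cloudflare/bolha-forum.conf
```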
then you can generate the certificate
# certbot certonly --dns-cloudflare --dns-cloudflare-credentials /etc/letsencrypt/cloudflare/bolha-forum.conf -d "*.bolha.forum,bolha.forum"
now we can configure our nginx!
external reverse proxy
server {
listen your_listen_ip_here:80;
server_name bolha.forum;
location / {
return 301 https://bolha.forum$request_uri;
}
}
server {
listen your_listen_ip_here:443 ssl http2;
server_name bolha.forum;
ssl_certificate /etc/letsencrypt/live/bolha.forum/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/bolha.forum/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_dhparam /etc/letsencrypt/dh-param.pem;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
# Specifies a curve for ECDHE ciphers.
ssl_ecdh_curve prime256v1;
# Server should determine the ciphers, not the client
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Enable compression for JS/CSS/HTML bundle, for improved client load times.
# It might be nice to compress JSON, but leaving that out to protect against potential
# compression+encryption information leak attacks like BREACH.
gzip on;
gzip_types text/css application/javascript image/svg+xml;
gzip_vary on;
# Only connect to this site via HTTPS for the two years
add_header Strict-Transport-Security "max-age=63072000";
# Various content security headers
add_header Referrer-Policy "same-origin";
add_header X-Content-Type-Options "nosniff";
add_header X-Frame-Options "DENY";
add_header X-XSS-Protection "1; mode=block";
# Upload limit for pictrs
client_max_body_size 25M;
location / {
proxy_pass http://your_docker_host_ip_here:your_port_here;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
check the config
nginx -t
expected output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and reload the configuration
# nginx -s reload
that's it!
Now you can access your lemmy instance
Enjoy!
from howtos
We want to offer translations inside Mastodon using LibreTranslate as our backend.
You need a working Mastodon; we recommend this howto
You need a working LibreTranslate; we recommend this howto
You need to add these two variables on the application.env file if you are following our mastodon howto.
LIBRE_TRANSLATE_ENDPOINT=https://libretranslate.bolha.tools
LIBRE_TRANSLATE_API_KEY=ecae7db0-bolha-us-is-cool-c84c14d2117a
Then restart it
cd /opt/mastodon/docker
docker-compose restart
After that you can check the logs
docker-compose logs -f|grep TranslationsController
Expected output with status code 200
website | [01fa1ece-5ab3-411d-bd6b-4b5131096735] method=POST path=/api/v1/statuses/110658724777490930/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=200 duration=2988.25 view=0.77 db=2.32
Sometimes you will get a status code 503. Yes, it happens; it's not perfect, but it works well most of the time.
website | [752a45c9-a94a-408a-8262-7b71cc1528e9] method=POST path=/api/v1/statuses/110658727361133356/translate format=html controller=Api::V1::Statuses::TranslationsController action=create status=503 duration=10117.47 view=0.49 db=2.19
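When you hit 503s, it helps to check the LibreTranslate backend directly, outside Mastodon. The endpoint and API key below are the example values from this howto, and the helper functions are our own sketch, not part of Mastodon or LibreTranslate:

```shell
# Build the JSON payload for LibreTranslate's /translate endpoint
lt_payload() {
  printf '{"q":"%s","source":"%s","target":"%s","api_key":"%s"}' \
    "$1" "$2" "$3" "$4"
}

# POST a tiny translation request straight to the backend
check_libretranslate() {
  curl -s -X POST "$1/translate" \
    -H 'Content-Type: application/json' \
    -d "$(lt_payload "ola" "pt" "en" "$2")"
}

# check_libretranslate https://libretranslate.bolha.tools ecae7db0-bolha-us-is-cool-c84c14d2117a
```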
Enjoy it!
:)
from fediverse
This is a list of mastodon-related projects and apps.
Last update: 22/Jun/23
official app
tapbots
others
official client
others
Please send your suggestions to:
mastodon – @gutocarvalho@gcn.sh
matrix – @gutocarvalho@bolha.chat
from fediverse