Yesterday — 24 November 2024
IT And Programming

A small self-hosted library organizing web app I made

(I am the author of this application. It is free and open-source. You can get it here: https://github.com/seanboyce/ubiblio)

I wanted a tool to organize my personal library -- something that scales to hundreds or low thousands of books. I found a few tools out there, but all had a bunch of flashy features I didn't need. I wanted something I could host on a potato and would run really fast (so, text-only).

So I threw together something fast over a weekend. Memory requirements are ~100MB until your database gets huge. It has some basic DDoS protection, HTTPS support, and is indeed pretty fast. It's written in Python (FastAPI), so it's very easy to extend / adapt.

I don't imagine that many of you specifically need this thing, but I figured I would publish it just in case.

Besides the basic features of adding / removing / updating / searching books, it supports:

  1. Book wishlists for the library (for when I encounter random book sales and need to know what I want / already have)

  2. Reading lists by user

  3. Best-effort auto-add of a new book by ISBN, so you can add books quickly -- e.g. when you're in a bookstore and want to add something to the library wishlist for when it goes on sale later, or when you just have a big stack of books to enter.

  4. Withdraw / return books

  5. A glorious two user types -- Admin and not-Admin

  6. Reasonable security via JWT. User passwords are properly hashed.

  7. Content discovery by book genre -- currently a bit cumbersome if you have thousands of books
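The ISBN auto-add in item 3 is typically built on a public metadata API. A sketch against Open Library's free ISBN endpoint -- not necessarily how ubiblio implements it, and the helper names are mine:

```shell
# Build the Open Library lookup URL for an ISBN (no API key needed).
isbn_url() {
  printf 'https://openlibrary.org/isbn/%s.json' "$1"
}

# Fetch the JSON and pull out the "title" field.
# (sed keeps the example dependency-free; jq would be more robust.)
isbn_title() {
  curl -fsSL "$(isbn_url "$1")" \
    | sed -n 's/.*"title": *"\([^"]*\)".*/\1/p'
}

# Usage: isbn_title 9780261103573
```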

Missing features you might expect it to have:

  1. No user-friendly password recovery (because I don't want to depend on e.g. Gmail). Manual password reset is possible though.

  2. No user management. You add a few users at setup. There are exactly two people that use my library so I didn't bother. I'm open to adding it in though.

submitted by /u/No-Economist3977
[link] [comments]

Docker Compose: Splitting one big yml file, carefully, and what about these extra thoughts

Hi Everyone,

I'm geek enough to have a few things set up in Docker Compose. I'm not geek enough to really know how it works. I'm just glad that it does. If/when things go wrong, it takes me a long time to fix them because I'm editing things blindly with best guesses.

For this reason, I'm a firm believer in "if it's not broken, don't fix it".

So, I've kept to my one docker-compose.yml so far.

Now, I've got a few different docker apps that I'd like to install, so I'm wondering if now is the time to split things up.

Or, maybe I just keep everything that's working in one file, and add new things from here?

The file contains gluetun and cloudflare, so however it's split, I need some things to run through those.

My biggest concerns in splitting are:

  • I'd need a really simple guide on how to split it up
  • I'm worried about losing all of my setup from all of my current apps.
  • Updating things.

In terms of updating things, it currently takes about 1 minute.

I run:

docker compose down

then

docker compose pull

then

docker compose up -d

What will that process be like if I split everything up?
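If you do split, the common layout is one directory per stack, and the update process stays almost the same, just wrapped in a loop. A sketch (the ~/stacks layout is an assumption -- use whatever path you like):

```shell
# One directory per stack, e.g.:
#   ~/stacks/gluetun/docker-compose.yml
#   ~/stacks/jellyfin/docker-compose.yml
STACKS_DIR="$HOME/stacks"

update_all() {
  for dir in "$STACKS_DIR"/*/; do
    [ -f "$dir/docker-compose.yml" ] || continue
    echo "Updating $dir"
    # 'up -d' only recreates containers whose image changed,
    # so a separate 'down' is not needed
    (cd "$dir" && docker compose pull && docker compose up -d)
  done
}

# Usage: update_all
```

Note that services routed through gluetun generally need to stay in the same Compose project/file as gluetun itself, since that wiring references another service by name. Recent Compose releases also offer an `include:` directive so one top-level file can pull in per-app files while keeping a single entry point.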

I'm only running about 15 apps at the moment, and want to add 2 or 3 more.

Thanks for any help that you can offer.

submitted by /u/damskibobs
[link] [comments]

I created a Raspberry Pi Backup Tool/Script

I was really annoyed that I couldn't find a proper backup tool to reliably back up my Raspberry Pis.
Most tools were just Rsync backups, and others didn’t support ARM clients.

However, I want an image backup so that in case of a damaged SD card, I can simply flash a new one and then optionally restore incremental data from Rsync backups. This way, the Pi can be up and running again quickly.

So, I wrote my own script to plan and schedule such jobs.
I thought this might be useful for some others in this community as well.

https://github.com/Multigestern/Raspi-Backup
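For readers who just want the core idea, a one-shot image backup over SSH looks roughly like this; the linked script adds the planning and scheduling on top. Assumptions: root SSH access to the Pi, and /dev/mmcblk0 is the SD card (check with lsblk first):

```shell
# Stream a full image of the Pi's SD card to the backup host, compressing
# locally. Imaging a running system can produce a slightly inconsistent
# image; quiescing busy services first helps.
backup_pi() {
  pi_host=$1
  out_file=$2
  ssh "root@$pi_host" "dd if=/dev/mmcblk0 bs=4M" | gzip -c > "$out_file"
}

# Usage: backup_pi pi.local pi-backup.img.gz
```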

submitted by /u/Multigestern
[link] [comments]

Rube Goldberg, but it works!

I'm very proud of this.

I have an Asterisk server running on a Raspberry Pi 4. I wanted it to transcribe voice messages and email me the transcriptions. Unfortunately, Whisper on the Pi is very, very slow... as in 15 minutes to transcribe 30 seconds of audio.

But, I do have a VPS. Even though it's a KVM instance, it turns out Whisper on the VPS is a lot faster than on the Pi 4 - about 0.5x real-time... which is good enough.

So now, when someone leaves me a voice message, it gets copied up to the VPS, converted to text, and then the transcription is emailed to me. (There's no actual file copying; it's all done on stdin/stdout over SSH.)

I get the transcription within a couple of minutes of the voicemail being left.
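The stdin/stdout-over-SSH pipeline can be sketched in a few lines. Here "transcribe.sh" is a hypothetical wrapper on the VPS that reads audio from stdin and prints the transcription; the host and recipient are placeholders:

```shell
# Pipe a voicemail to the VPS, run Whisper there, and mail back the text --
# no temp files on either end.
transcribe_and_mail() {
  wav_file=$1
  rcpt=$2
  ssh vps.example.com ./transcribe.sh < "$wav_file" \
    | mail -s "Voicemail transcription" "$rcpt"
}

# Usage (e.g. from an Asterisk voicemail hook):
#   transcribe_and_mail msg0001.wav me@example.com
```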

submitted by /u/DFS_0019287
[link] [comments]

HashiCorp

On a different platform (it might have been on Reddit) I saw someone propose using HashiCorp's Nomad instead of Dockge or Docker Swarm. That got me wondering: what's your experience with HashiCorp's offerings? Also, has anyone used Nomad?

PS: apologies for the scattered thoughts

submitted by /u/mo_with_the_floof
[link] [comments]

Alternatives to oznu/cloudflare-ddns for Cloudflare Dynamic DNS Updates

I've been using the oznu/cloudflare-ddns Docker container to manage dynamic DNS updates with Cloudflare, which has been great for automatically updating my IP address. However, I recently noticed that the project on GitHub (https://github.com/oznu/docker-cloudflare-ddns) has been archived.

Could anyone recommend a reliable alternative for dynamic DNS updates with Cloudflare that works well in a Docker environment? I'm looking for something to handle changes in my external IP and update my DNS records accordingly. Ideally, it would be great if it also supports environment configurations similar to the oznu/cloudflare-ddns setup, such as API keys, zone settings, proxy settings, and user/group IDs.
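Actively maintained, Docker-friendly alternatives exist (qdm12/ddns-updater and favonia/cloudflare-ddns are commonly recommended), but the underlying API call is small enough to sketch. Variable names here are mine, not any specific tool's:

```shell
# Build the JSON body for a Cloudflare v4 DNS record update.
cf_payload() {
  printf '{"type":"A","name":"%s","content":"%s","proxied":true}' "$1" "$2"
}

# Look up the current external IP and PUT it to Cloudflare.
# ZONE_ID, RECORD_ID and CF_API_TOKEN come from your Cloudflare dashboard;
# home.example.com is a placeholder record name.
update_record() {
  ip=$(curl -fsS https://api.ipify.org)
  curl -fsS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data "$(cf_payload home.example.com "$ip")"
}

# Run update_record from cron every few minutes and you have DDNS.
```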

Thank you for your suggestions!

submitted by /u/slayerlob
[link] [comments]

Trouble with Ikea ZigBee devices

I am tearing my hair out; am I the only one having issues with Ikea Zigbee devices? I have different models: Fyrtur blinds, Tretakt plugs, Rodret switches. And I have multiple of each model. Some devices pair without problems, some don't. The thing that gets me is that each device acts differently.

I've tried a lot of things: pairing near the Zigbee dongle, changing my 2.4 GHz Wi-Fi channel, resetting devices, etc.

Devices from other manufacturers work fine. I am using ZHA with a Sonoff ZBDongle-P.

Does someone else have this kind of issue? Should I switch to Z2M?

submitted by /u/billboq
[link] [comments]

Reduce SSD writes on Proxmox nodes w/o HA

This post follows up on the excessive idle writes that the Proxmox HA stack makes to the underlying cluster file system, and highlights what changed in the recently released Proxmox VE 8.3.

If you do not use the High Availability features, there's a simple tip for reducing unnecessary idle writes and completely disabling the auto-reboot watchdog that is otherwise always active, which is also useful for eliminating potential non-hardware-related reboots.

NOTE: If you e.g. disable HA with the popular tteck scripts, you are already doing this, albeit with a slightly different method.

submitted by /u/esiy0676
[link] [comments]

Notification system for non-technical users

I'm looking for some software that would work as a solution to send mobile notifications to users. There are plenty of ways I could notify myself, but I want a system where a user just has to install an app, maybe type in a code and have everything else 'just work'. I'd want to be able to send notifications both manually and automatically with API calls/webhooks.

Does something like this exist?
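ntfy (self-hostable, with iOS/Android apps) fits this model closely: users install the app and subscribe to a topic string, and anything that can make an HTTP request can publish. A sketch -- the server URL and topic name below are examples:

```shell
# Publish a notification to an ntfy topic.
notify() {
  title=$1
  body=$2
  curl -fsS -H "Title: $title" -d "$body" \
    "https://ntfy.example.com/family-alerts"
}

# Usage: notify "Dinner" "Food is ready"
# The same HTTP call works from cron, scripts, or any webhook source.
```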

submitted by /u/th-crt
[link] [comments]

Migrating to rootless docker

I've been running my Docker containers as root because I didn't know better. I now do, and I wish to migrate my infrastructure to rootless Docker. Since I want minimal downtime and headache: what issues am I likely to run into, and how can I best avoid them? (I do, for instance, have some services like Homepage and Portainer that need access to the Docker socket, although I've already proxied that using docker-socket-proxy.)
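Not an answer to every gotcha (binding ports below 1024, host networking, and bind-mount ownership are the usual ones), but the mechanics of the switch are short. A sketch, assuming Debian/Ubuntu package names -- verify for your distro:

```shell
# Typical migration steps (run as your regular user unless noted):
#   sudo apt-get install docker-ce-rootless-extras uidmap
#   dockerd-rootless-setuptool.sh install
#   systemctl --user enable --now docker

# After the switch, every client (CLI, compose, socket-proxy mounts) must
# use the per-user socket instead of /var/run/docker.sock:
rootless_socket() {
  printf 'unix:///run/user/%s/docker.sock' "$(id -u)"
}

# export DOCKER_HOST="$(rootless_socket)"
```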

submitted by /u/Routine_Librarian330
[link] [comments]

Which photo solution do you recommend for “on-this-day”?

I want to have a seamless mobile experience for automatic on-this-day kind of albums that surface old memories in a serendipitous way (widgets?, notifications?). Obviously I’d like to see meaningful photos with relevant faces/objects.

Google Photos is my current benchmark. Among self-hosted alternatives (PhotoPrism, Immich, Synology Photos, etc.), which one comes closest to it?

I have a Synology NAS and a 4 GB Raspberry Pi 4 at home. “Users” have iPhones for the most part.

submitted by /u/stat-insig-005
[link] [comments]

Personal Finance App for Americans

I've been using Quicken Simplifi (not self-hosted), but they don't support the Robinhood Gold credit card, so I'll have to move elsewhere since that is the credit card I mainly use. I'd like to self-host since I built a server recently. I've heard people say good things about Firefly III on this sub, but that's EU/UK-only AFAIK.

I want to have something that does the following:

  1. Connects to US banks, credit cards, and brokerages

  2. Gives me a notification every time money comes in or out and asks me to put it in a category (bonus points if it auto-categorizes)

  3. Gives charts showing my spending per category per given time scale

  4. Has a dark mode

submitted by /u/E_coli42
[link] [comments]

Eebeepeebee: The Extremely Basic PasteBin for a small number of users

I was on my phone and my friend was on his computer. We were working on finding a bunch of links and sharing them between the phone and the computer was just a little bit too irritating. I made this to solve that problem. Enjoy!
https://github.com/inkplayart/eebeepeebee/

submitted by /u/crono760
[link] [comments]

Migrating Docker to Podman: things to be aware of?

podman.io

I've been wanting to try the rootless mode of Docker, but then I found Podman.

Is it really as easy as changing the command to podman?

Are there other things that need to be considered in terms of functionality, or the lack thereof?
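For plain `docker run`-style usage the rename mostly works; the bigger differences are rootless-by-default behavior, no central daemon, and ecosystem tools that expect the Docker socket. For the last point, Podman ships a Docker-compatible API socket. A sketch, assuming a systemd user session:

```shell
# Docker-compatible API socket path for rootless Podman.
podman_socket() {
  printf 'unix:///run/user/%s/podman/podman.sock' "$(id -u)"
}

# Enable the socket and point Docker-ecosystem tools at it:
#   systemctl --user enable --now podman.socket
#   export DOCKER_HOST="$(podman_socket)"
```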

submitted by /u/J6j6
[link] [comments]

I made xplex.me — Self-hosted, Open Source, Multi-Streaming Server

I wanted to multi-stream but never found a multi-streaming service that I really liked. One that I can self-host; one that's open source. So I made one.

Introducing xplex v1.0.0 — a self-hosted, containerized, multi-streaming server with a user-friendly web dashboard. It gives you full control to:

  • host anywhere you like
  • manage cost with instance uptime
  • stream to as many platforms as you want

To make it even easier, I've put up xplex as a 1-click app on the DigitalOcean Marketplace. This is what I use now for convenience: spin up a server when I go live, then delete the instance when done streaming, to keep costs minimal.

xplex is for anyone who wants to multi-stream, and it doesn't need advanced technical wizardry. It's designed to be accessible, but I'm actively looking for feedback to make it even simpler.

Relevant Links

I'll also be multi-streaming at 15:00 UTC on Twitch and YouTube; so drop by with your questions or suggestions to improve xplex!

submitted by /u/Debloper
[link] [comments]

Ingress Nginx not setting "X-Forwarded-For" header

Hi

I've deployed ingress-nginx as my IngressController to my k3s cluster using ArgoCD. I have a backend that needs to read the X-Forwarded-For header to count unique visits. When I test from my laptop using Postman, or from the main node using curl, the X-Forwarded-For header is missing.

I've tried different configurations for Ingress Nginx, from none at all to setting `use-forwarded-headers`, `enable-real-ip` and `compute-full-forwarded-for`, even though that doesn't make much sense after reading the docs.
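Worth noting: X-Forwarded-For is a header nginx adds to the request it proxies upstream, so `curl -I` against the app only shows response headers; the place to verify it is the backend's own request logging. On the chart side, the two values that usually matter can be sketched as a plain Helm command (release and namespace names are yours; with ArgoCD the same values go in the Application manifest):

```shell
# Enable forwarded headers and preserve the real client IP at the Service.
# (externalTrafficPolicy: Local avoids SNAT of the client address.)
fix_xff() {
  helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
    -n ingress-nginx-system --reuse-values \
    --set controller.config.use-forwarded-headers=true \
    --set controller.service.externalTrafficPolicy=Local
}

# Usage: fix_xff
```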

Ingress Nginx Application manifest

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://kubernetes.github.io/ingress-nginx
    targetRevision: 4.11.3
    chart: ingress-nginx
    helm:
      values: |
        controller:
          ingressClassResource:
            # -- Name of the ingressClass
            name: nginx-internal
            # -- Is this ingressClass enabled or not
            enabled: true
            # -- Is this the default ingressClass for the cluster
            default: true
            # -- Controller-value of the controller that is processing this ingressClass
            controllerValue: "k8s.io/ingress-nginx"
            # -- Parameters is a link to a custom resource containing additional
            # configuration for the controller. This is optional if the controller
            # does not require extra parameters.
            parameters: {}
          ingressClass: nginx-internal
          config:
            use-forwarded-headers: true
            allow-snippet-annotations: true
  destination:
    namespace: ingress-nginx-system
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true

This is the ingress configuration for my app:

ingress:
  enabled: true
  className: "nginx-internal"  # Use NGINX as the ingress class
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"  # Example of setting proxy limits
  hosts:
    - host: teleworker.local
      paths:
        # Forward /api and subpaths to the backend
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: postter-back-dev
              port:
                number: 80
        # Forward /actuator and subpaths to the backend
        - path: /actuator
          pathType: Prefix
          backend:
            service:
              name: postter-back-dev
              port:
                number: 80

curl output

curl -I http://app.local/api/test

HTTP/1.1 400
Date: Sun, 24 Nov 2024 13:58:14 GMT
Content-Type: application/json
Connection: keep-alive
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY

submitted by /u/rozularen
[link] [comments]

How to remove the need to remember ports?

I have a Portainer instance within a VM on Proxmox. From there I'm serving a number of different services via Docker. Is there a way, perhaps with something like a URL-shortening service, to avoid having to remember all the ports?

Say I have Apache on port 80; the next thing that needs port 80 I change to 3127 because another service is already using 8080; then another conflicting service gets port 81, and so it goes on. It's getting mighty confusing to remember so many port numbers to access service front ends, APIs, etc.

How do people abstract ports to meaningful text?
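The usual answer is a reverse proxy: one entry point on ports 80/443, with each service reached by hostname instead of port. A minimal sketch with Caddy; service names and the .lab.lan domain are examples, and local DNS (Pi-hole, your router, or /etc/hosts) must point those names at the Docker host:

```shell
# Generate a minimal Caddyfile mapping hostnames to container ports.
make_caddyfile() {
  cat <<'EOF'
jellyfin.lab.lan {
    reverse_proxy jellyfin:8096
}
portainer.lab.lan {
    reverse_proxy portainer:9000
}
EOF
}

# Usage:
#   make_caddyfile > Caddyfile
#   docker run -d -p 80:80 \
#     -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" caddy
```

Nginx Proxy Manager and Traefik are popular alternatives that do the same job with a web UI or container labels, respectively.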

submitted by /u/Dookanooka
[link] [comments]

Steam Headless - two Docker containers, one host - trouble with mouse and keyboard

I use two Docker containers with separate GPUs. The GPU part works very well so far. But the problem is mouse & keyboard. If I move the mouse in Docker 1, the mouse moves in Docker 2 as well. The interesting part is that if I move the mouse in Docker 2, the mouse in Docker 1 doesn't move. Same with the keyboard. If I disable evdev in the containers, the mouse doesn't move at all in Moonlight. Over the WebUI it works without problems.

To be honest, I've tried different things, but in the end I don't know where to start debugging. The host is Debian 11 with an NVIDIA 4070 Ti Super and a 3090. For streaming I use Sunshine/Moonlight.

xorg.conf on the host:

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0" "CorePointer"
    Option "Xinerama" "0"
EndSection

Section "ServerLayout"
    Identifier "Layout1"
    Screen 0 "Screen1"
    InputDevice "Keyboard1" "CoreKeyboard"
    InputDevice "Mouse1" "CorePointer"
    Option "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mouse0"
    Option "Emulate3Buttons" "no"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Mouse1"
    Driver "mouse"
    Option "Device" "/dev/input/mouse2"
    Option "Protocol" "auto"
    Option "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "InputDevice"
    # generated from default
    Identifier "Keyboard1"
    Driver "kbd"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Unknown"
    ModelName "Unknown"
    Option "DPMS"
EndSection

Section "Monitor"
    Identifier "Monitor1"
    VendorName "Unknown"
    ModelName "Unknown"
    Option "DPMS"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:1:0:0"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    BusID "PCI:9:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "Device1"
    Monitor "Monitor1"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

The compose file for Docker 1:

services:
  steam-headless:
    image: josh5/steam-headless:latest
    restart: unless-stopped
    shm_size: ${SHM_SIZE}
    #privileged: true
    ipc: host # Could also be set to 'shareable'
    ulimits:
      nofile:
        soft: 1024
        hard: 524288
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_NICE
    security_opt:
      - seccomp:unconfined
      - apparmor:unconfined
    # NETWORK:
    ## NOTE: With this configuration, if we do not use the host network, then physical device input
    ## is not possible and your USB connected controllers will not work in steam games.
    network_mode: host
    hostname: ${NAME}
    extra_hosts:
      - "${NAME}:127.0.0.1"
    # ENVIRONMENT:
    ## Read all config variables from the .env file
    environment:
      # System
      - TZ=${TZ}
      - USER_LOCALES=${USER_LOCALES}
      - DISPLAY=${DISPLAY}
      # User
      - PUID=${PUID}
      - PGID=${PGID}
      - UMASK=${UMASK}
      - USER_PASSWORD=${USER_PASSWORD}
      # Mode
      - MODE=${MODE}
      # Web UI
      - WEB_UI_MODE=${WEB_UI_MODE}
      - ENABLE_VNC_AUDIO=${ENABLE_VNC_AUDIO}
      - PORT_NOVNC_WEB=${PORT_NOVNC_WEB}
      - NEKO_NAT1TO1=${NEKO_NAT1TO1}
      # Steam
      - ENABLE_STEAM=${ENABLE_STEAM}
      - STEAM_ARGS=${STEAM_ARGS}
      # Sunshine
      - ENABLE_SUNSHINE=${ENABLE_SUNSHINE}
      - SUNSHINE_USER=${SUNSHINE_USER}
      - SUNSHINE_PASS=${SUNSHINE_PASS}
      # Xorg
      - ENABLE_EVDEV_INPUTS=${ENABLE_EVDEV_INPUTS}
      - FORCE_X11_DUMMY_CONFIG=${FORCE_X11_DUMMY_CONFIG}
      # Nvidia specific config
      - NVIDIA_DRIVER_CAPABILITIES=${NVIDIA_DRIVER_CAPABILITIES}
      - NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES}
      - NVIDIA_DRIVER_VERSION=${NVIDIA_DRIVER_VERSION}
    runtime: nvidia
    # DEVICES:
    devices:
      # Use the host fuse device [REQUIRED].
      - /dev/fuse
      # Add the host uinput device [REQUIRED].
      - /dev/uinput
      # Add NVIDIA HW accelerated devices [OPTIONAL].
      # NOTE: If you use the nvidia container toolkit, this is not needed.
      # Installing the nvidia container toolkit is the recommended method for running this container
      #- /dev/nvidia3
      #- /dev/nvidia0
      #- /dev/nvidia1
      #- /dev/nvidia2
      #- /dev/nvidiactl
      #- /dev/nvidia-modeset
      #- /dev/nvidia-uvm
      #- /dev/nvidia-uvm-tools
      #- /dev/nvidia-caps/nvidia-cap1
      #- /dev/nvidia-caps/nvidia-cap2
      #- /dev/dri/
    # Ensure container access to devices 13:*
    device_cgroup_rules:
      - 'c 13:* rmw'
    # GPU PASSTHROUGH
    deploy:
      resources:
        reservations:
          # Enable support for NVIDIA GPUs.
          # Ref: https://docs.docker.com/compose/gpu-support/#enabling-gpu-access-to-service-containers
          devices:
            - capabilities: [gpu]
              device_ids: ["${NVIDIA_VISIBLE_DEVICES}"]
    # VOLUMES:
    volumes:
      # The location of your home directory.
      - ${HOME_DIR}/:/home/default/:rw
      # The location where all games should be installed.
      # This path needs to be set as a library path in Steam after logging in.
      # Otherwise, Steam will store games in the home directory above.
      - ${GAMES_DIR}/:/mnt/games/:rw
      # The Xorg socket.
      - ${SHARED_SOCKETS_DIR}/.X11-unix/:/tmp/.X11-unix/:rw
      # Pulse audio socket.
      - ${SHARED_SOCKETS_DIR}/pulse/:/tmp/pulse/:rw

The .env file for Docker 1:

NAME=SteamHeadless
TZ=Pacific/Auckland
USER_LOCALES=en_US.UTF-8 UTF-8
DISPLAY=:1
SHM_SIZE=2G
HOME_DIR=/srv/dev-disk-by-uuid-2ba384f1-3b00-4b0d-8e09-007772e6f1bf/Docker/steamos1/home
SHARED_SOCKETS_DIR=/opt/container-data/steam-headless/sockets
GAMES_DIR=/srv/dev-disk-by-uuid-2ba384f1-3b00-4b0d-8e09-007772e6f1bf/Docker/steamos/games
PUID=1000
PGID=100
UMASK=000
USER_PASSWORD=password
MODE=primary
WEB_UI_MODE=vnc
ENABLE_VNC_AUDIO=false
PORT_NOVNC_WEB=8084
NEKO_NAT1TO1=
ENABLE_STEAM=true
STEAM_ARGS=-silent
ENABLE_SUNSHINE=true
SUNSHINE_USER=admin
SUNSHINE_PASS=admin
ENABLE_EVDEV_INPUTS=true
FORCE_X11_DUMMY_CONFIG=false
NVIDIA_DRIVER_CAPABILITIES=all
NVIDIA_VISIBLE_DEVICES=GPU-4d3a6b72-e370-87f2-721f-428302c1a40d
NVIDIA_DRIVER_VERSION=565.57.01

The docker-compose file for Docker 2:

(Identical to the compose file for Docker 1 above.)

The .env file for Docker 2:

NAME=SteamHeadless2
TZ=Pacific/Auckland
USER_LOCALES=en_US.UTF-8 UTF-8
DISPLAY=:1
SHM_SIZE=2G
HOME_DIR=/srv/dev-disk-by-uuid-2ba384f1-3b00-4b0d-8e09-007772e6f1bf/Docker/steamos1/home
SHARED_SOCKETS_DIR=/opt/container-data/steam-headless/sockets
GAMES_DIR=/srv/dev-disk-by-uuid-2ba384f1-3b00-4b0d-8e09-007772e6f1bf/Docker/steamos/games
PUID=1000
PGID=100
UMASK=000
USER_PASSWORD=password
MODE=primary
WEB_UI_MODE=vnc
ENABLE_VNC_AUDIO=false
PORT_NOVNC_WEB=8084
NEKO_NAT1TO1=
STEAM_ARGS=-silent
ENABLE_SUNSHINE=true
SUNSHINE_USER=admin
SUNSHINE_PASS=admin
ENABLE_EVDEV_INPUTS=true
FORCE_X11_DUMMY_CONFIG=false
NVIDIA_DRIVER_CAPABILITIES=all
NVIDIA_VISIBLE_DEVICES=GPU-4d3a6b72-e370-87f2-721f-428302c1a40d
NVIDIA_DRIVER_VERSION=565.57.01

Thanks for helping!
submitted by /u/Sumsiro
[link] [comments]

Any $12/year VPS offers this Black Friday?

I’m looking for a VPS to deploy Headscale, and maybe to set up a WireGuard connection to my home to expose some services.

My friends access Jellyfin via Tailscale from Ireland, Sweden, the UK and India, so I'm looking for a VPS somewhere roughly equidistant from all these locations (Dubai, maybe?) or in India.

This is my first time buying a VPS; I'm not even sure how much compute power I need for this.

submitted by /u/brightestsummer
[link] [comments]