Leantime 3.3 released! Open Source Project Management for Non-Project Managers
For something like 20+ services, are you already using something like k3s? Docker Compose? Portainer? Proxmox VMs? What is the reasoning behind your choice? Cheers!
There was a recent post asking for guidance on this topic and I wanted to share my experience, so that it might help those who are lost on this topic.
If you are self-hosting an application, such as AdGuard Home, then you will undoubtedly encounter a browser warning that the connection is untrusted, requiring you to bypass the warning before continuing. This is particularly noticeable when you want to access your application via HTTPS instead of HTTP. The point is that anything with access to traffic on your LAN's subnet is able to read unencrypted traffic. To avoid this issue and secure your self-hosted applications, you ultimately want a trusted certificate presented to your browser when navigating to each application.
Depending on how you have implemented your applications, you may want to use a reverse proxy, such as Traefik or Nginx Proxy Manager, as the initial point of entry to your applications. For example, if you are running your applications via Docker on a single host machine, then this may be the best solution, as you can then link your applications to Traefik directly.
As an example, here is a Docker Compose file for running Traefik with an nginx-hello test application:
```yaml
name: traefik-nginx-hello

secrets:
  CLOUDFLARE_EMAIL:
    file: ./secrets/CLOUDFLARE_EMAIL
  CLOUDFLARE_DNS_API_TOKEN:
    file: ./secrets/CLOUDFLARE_DNS_API_TOKEN

networks:
  proxy:
    external: true

services:
  nginx:
    image: nginxdemos/nginx-hello
    labels:
      - traefik.enable=true
      - traefik.http.routers.nginx.rule=Host(`nginx.example.com`)
      - traefik.http.routers.nginx.entrypoints=https
      - traefik.http.routers.nginx.tls=true
      - traefik.http.services.nginx.loadbalancer.server.port=8080
    networks:
      - proxy

  traefik:
    image: traefik:v3.1.4
    restart: unless-stopped
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.entrypoints=http
      - traefik.http.routers.traefik.rule=Host(`traefik-dashboard.example.com`)
      - traefik.http.routers.traefik.middlewares=traefik-https-redirect
      - traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https
      - traefik.http.routers.traefik-secure.entrypoints=https
      - traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.example.com`)
      - traefik.http.routers.traefik-secure.service=api@internal
      - traefik.http.routers.traefik-secure.tls=true
      - traefik.http.routers.traefik-secure.tls.certresolver=cloudflare
      - traefik.http.routers.traefik-secure.tls.domains[0].main=example.com
      - traefik.http.routers.traefik-secure.tls.domains[0].sans=*.example.com
    ports:
      - 80:80
      - 443:443
    environment:
      - CLOUDFLARE_EMAIL_FILE=/run/secrets/CLOUDFLARE_EMAIL
      - CLOUDFLARE_DNS_API_TOKEN_FILE=/run/secrets/CLOUDFLARE_DNS_API_TOKEN
    secrets:
      - CLOUDFLARE_EMAIL
      - CLOUDFLARE_DNS_API_TOKEN
    security_opt:
      - no-new-privileges:true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./data/configs:/etc/traefik/configs:ro
      - ./data/certs/acme.json:/acme.json
```
Note that this expects several files:
```yaml
# ./data/traefik.yml
api:
  dashboard: true
  debug: true

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"

serversTransport:
  insecureSkipVerify: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: /etc/traefik/configs/
    watch: true

certificatesResolvers:
  cloudflare:
    acme:
      storage: acme.json
      # Production
      caServer: https://acme-v02.api.letsencrypt.org/directory
      # Staging
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      dnsChallenge:
        provider: cloudflare
        # disablePropagationCheck: true
        # delayBeforeCheck: 60s
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"
```

```
# ./secrets/CLOUDFLARE_DNS_API_TOKEN
your long and super secret api token

# ./secrets/CLOUDFLARE_EMAIL
Your Cloudflare account email
```
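Before the first `docker compose up`, the mounted paths need to exist. As a quick sketch, they can be bootstrapped like this (the values are placeholders; note that Traefik expects restrictive permissions on acme.json):

```shell
# Create the directories the compose file mounts
mkdir -p data/configs data/certs secrets

# acme.json must exist before Traefik starts, with restrictive permissions
touch data/certs/acme.json
chmod 600 data/certs/acme.json

# Secrets are plain files whose contents Traefik reads at startup (placeholder values)
printf 'you@example.com' > secrets/CLOUDFLARE_EMAIL
printf 'yoursupersecretapitoken' > secrets/CLOUDFLARE_DNS_API_TOKEN
chmod 600 secrets/CLOUDFLARE_EMAIL secrets/CLOUDFLARE_DNS_API_TOKEN
```

Add your traefik.yml to ./data/ as shown above and you're ready to start the stack.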
You will also note that I included the option to load additional dynamic configuration files from './data/configs/'. This is particularly handy if you wish to manually add routes for services, such as Proxmox, that you can't set up via Docker service labels.
```yaml
# ./data/configs/proxmox.yml
http:
  routers:
    proxmox:
      entryPoints:
        - "https"
      rule: "Host(`proxmox.nickfedor.dev`)"
      middlewares:
        - secured
      tls:
        certresolver: cloudflare
      service: proxmox
  services:
    proxmox:
      loadBalancer:
        servers:
          # - url: "https://192.168.50.51:8006"
          # - url: "https://192.168.50.52:8006"
          # - url: "https://192.168.50.53:8006"
          - url: "https://192.168.50.5:8006"
        passHostHeader: true
```
Or middlewares:
```yaml
# ./data/configs/middleware-chain-secured.yml
http:
  middlewares:
    https-redirectscheme:
      redirectScheme:
        scheme: https
        permanent: true
    default-headers:
      headers:
        frameDeny: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https
    default-whitelist:
      ipAllowList:
        sourceRange:
          - "10.0.0.0/8"
          - "192.168.0.0/16"
          - "172.16.0.0/12"
    secured:
      chain:
        middlewares:
          - https-redirectscheme
          - default-whitelist
          - default-headers
```
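To attach the `secured` chain to a Docker-defined service, reference it from the router's middlewares label; since the chain comes from the file provider, it is addressed as `secured@file`. A hypothetical sketch using the nginx-hello test service:

```yaml
services:
  nginx:
    image: nginxdemos/nginx-hello
    labels:
      - traefik.enable=true
      - traefik.http.routers.nginx.rule=Host(`nginx.example.com`)
      - traefik.http.routers.nginx.entrypoints=https
      - traefik.http.routers.nginx.tls=true
      # Middlewares defined via the file provider are referenced as <name>@file
      - traefik.http.routers.nginx.middlewares=secured@file
    networks:
      - proxy
```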
Alternatively, if you are running your services as individual Proxmox LXC containers or VMs, you may find yourself needing to request SSL certificates and point each application at its respective certificate file paths.
In the case of AdGuard Home running as a VM or LXC container, for example, I have found the easiest method is to request SSL certificates with Certbot and then point AdGuard Home at the resulting certificate files.
In other cases, such as running an apt mirror, you may find yourself needing to run Nginx in front of the application as a web server and/or reverse proxy for that single application.
The easiest method of setting up and running Certbot that I've found is as follows:
```sh
apt install -y certbot python3-certbot-dns-cloudflare
sudo mkdir -p ~/.secrets/certbot
```

Create a Cloudflare API token with Zone > Zone > Read and Zone > DNS > Edit permissions, then store it and request the certificate:

```sh
echo 'dns_cloudflare_api_token = [yoursupersecretapitoken]' > ~/.secrets/certbot/cloudflare.ini
sudo chmod 600 ~/.secrets/certbot/cloudflare.ini
sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d service.example.com
```
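Certbot renews certificates automatically (via a systemd timer or cron), so the remaining chore is getting the renewed files to the application. Here is a hedged sketch of a deploy hook, assuming the application (e.g. AdGuard Home) reads its certificate from /opt/adguardhome/ssl; both the hook filename and that destination path are assumptions:

```shell
#!/bin/sh
# Hypothetical deploy hook: /etc/letsencrypt/renewal-hooks/deploy/install-cert.sh
# certbot sets RENEWED_LINEAGE to the renewed certificate's live directory.

install_cert() {
    lineage="$1"  # e.g. /etc/letsencrypt/live/service.example.com
    dest="$2"     # directory the application reads its cert/key from
    mkdir -p "$dest"
    cp "$lineage/fullchain.pem" "$dest/fullchain.pem"
    cp "$lineage/privkey.pem" "$dest/privkey.pem"
    chmod 640 "$dest/privkey.pem"
}

# Only run when invoked by certbot (RENEWED_LINEAGE is set)
if [ -n "${RENEWED_LINEAGE:-}" ]; then
    install_cert "$RENEWED_LINEAGE" /opt/adguardhome/ssl
fi
```

After copying, most applications need a restart or reload to pick up the new files, which can be appended to the hook.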
If you're using Nginx, then do the following instead:

```sh
sudo apt install -y nginx
sudo apt install -y python3-certbot-nginx
sudo certbot run -i nginx -a dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d service.example.com
```
If you are using Plex, as an example, then it is possible to use Certbot to generate a certificate and then run a script to generate the PFX cert file.
```sh
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/post/create_pfs_file.sh
openssl pkcs12 -export \
  -inkey /etc/letsencrypt/live/service.example.com/privkey.pem \
  -in /etc/letsencrypt/live/service.example.com/cert.pem \
  -out /var/lib/service/service_certificate.pfx \
  -passout pass:PASSWORD

chmod 755 /var/lib/service/service_certificate.pfx
```
Note: The output file /var/lib/service/service_certificate.pfx will need to be renamed for the respective service, e.g. /var/lib/radarr/radarr_certificate.pfx
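The service-specific renaming can also be folded into the hook itself. A sketch generalizing the script above (the function name and the optional base-directory parameter are my own additions for illustration):

```shell
#!/bin/sh
# Build the PFX for a named service from a Let's Encrypt lineage directory,
# writing to <base>/<service>/<service>_certificate.pfx.
make_pfx() {
    service="$1"           # e.g. radarr
    lineage="$2"           # e.g. /etc/letsencrypt/live/service.example.com
    base="${3:-/var/lib}"  # destination base directory, defaults to /var/lib
    out="$base/$service/${service}_certificate.pfx"
    mkdir -p "$base/$service"
    openssl pkcs12 -export \
        -inkey "$lineage/privkey.pem" \
        -in "$lineage/cert.pem" \
        -out "$out" \
        -passout pass:PASSWORD
    chmod 755 "$out"
}
```

Called as `make_pfx radarr /etc/letsencrypt/live/service.example.com`, it produces /var/lib/radarr/radarr_certificate.pfx.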
Then, you can reference the file and password in the application.
For personal use, this implementation is fine; however, a dedicated reverse proxy is recommended and preferable.
As mentioned before, Nginx Proxy Manager is another viable option, particularly for those who want a GUI to help manage their services. Its usage is self-explanatory: you enter the details of whatever service you wish to forward traffic to, and a simple menu walks you through requesting SSL certificates.
The key thing to recall is that some applications, such as Proxmox, TrueNAS, Portainer, etc, may have their own built-in SSL certificate management. In the case of Proxmox, as an example, it's possible to use its built-in SSL management to request a certificate and then install and configure Nginx to forward the default management port from 8006 to 443:
```nginx
# /etc/nginx/conf.d/proxmox.conf
upstream proxmox {
    server "pve.example.com";
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name _;

    ssl_certificate /etc/pve/local/pveproxy-ssl.pem;
    ssl_certificate_key /etc/pve/local/pveproxy-ssl.key;

    proxy_redirect off;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass https://localhost:8006;
        proxy_buffering off;
        client_max_body_size 0;
        proxy_connect_timeout 3600s;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
        send_timeout 3600s;
    }
}
```
Once all is said and done, the last step will always be pointing your DNS to your services.
If you're using a single reverse proxy, then use a wildcard entry, i.e. *.example.com, to point to your reverse proxy's IP address, which will then forward traffic to the respective service.
Example: Nginx Proxy Manager > 192.168.1.2 and Pihole > 192.168.1.10
Point DNS entry for pihole.example.com to 192.168.1.2 and configure Nginx Proxy Manager to forward to 192.168.1.10 .
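If your local DNS server is dnsmasq-based (as Pi-hole is under the hood), the wildcard can be a single line. A sketch assuming the reverse proxy sits at 192.168.1.2 (the filename is arbitrary):

```
# /etc/dnsmasq.d/02-wildcard.conf
# Resolve every *.example.com name (and example.com itself) to the reverse proxy
address=/example.com/192.168.1.2
```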
If you're not using a reverse proxy in front of the service, then simply point the service's domain name to the server's IP address, i.e. pihole.example.com > 192.168.1.10 .
tl;dr - If you're self-hosting and want to secure your services with SSL so you can use HTTPS on port 443, you'll want a domain you can use to request a trusted Let's Encrypt certificate. From there, you can either rely on a service's built-in SSL management (as in Proxmox) or set up a single point of entry that forwards traffic to the respective service.
There are several different reverse proxy solutions available that have SSL management features, such as Nginx Proxy Manager and Traefik. Depending on your implementation, i.e. using Docker, Kubernetes, etc, there's a variety of ways to implement TLS encryption for your services, especially when considering limited use-cases, such as personal homelabs.
If you need to publicly expose your homelab services, then I would highly recommend considering using something like Cloudflare Tunnels. Depending on use case, you might also want to just simply use Tailscale or Wireguard instead.
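For the Cloudflare Tunnels route, the tunnel's config file maps public hostnames to local services. A minimal sketch (the tunnel ID, hostnames, and backend addresses are all placeholders):

```yaml
# ~/.cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  # Route each public hostname to a service reachable from the tunnel host
  - hostname: service1.example.com
    service: http://192.168.1.10:8080
  - hostname: service2.example.com
    service: http://192.168.1.11:80
  # A catch-all rule is required and must come last
  - service: http_status:404
```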
This is by no means a comprehensive or production-level/best-practices guide, but hopefully it provides some ideas on several ways to implement this in your homelab.
Hey all, I'm here to update everyone on Retrom's most recent major release! Since last time there are two major changes to note:
Learn more about Retrom on the GitHub repo, or join the budding Discord community.

To get ahead of the questions that always pop up in these threads, here is a quick FAQ:
I came across a video online showing a live dashboard in GitHub's HQ building that displayed all pushes/pulls on GitHub in real time. Has anyone tried such a thing? It could show local/external traffic of our servers, and it looks super cool.
I understand that having a functional email setup is complex and many people advise against self-hosting this part, but I still want to give it a try before giving up.
The main motive is to get rid of google as much as possible, regain control of my privacy and my data as much as possible.
I rarely send email at all, I'd say fewer than 100 a month, and I'm not using email for business communication anyway; it's mostly for receiving account info, receipts, etc. I surely don't send any sketchy email either; when I do need to send one, it's mostly to inquire about something.
So with that usage, I'm thinking I could get by using an SMTP relay to handle outgoing email and handling incoming email on my own: probably just a cheap VPS running mailcow or Mail-in-a-Box, with a cheap relay like Amazon SES.
Is this a workable idea or am I missing out something?
This is broken down into two parts: how I go about identifying what needs to be hidden, and how to actually hide it. I'll use GitLab as an example. At the time, I chose the Enterprise edition instead of Community (serves me right), thinking I might want some premium feature way ahead in the future and not wanting potential migration headaches; but because it kept nagging me again and again to start a trial of the Ultimate version, I decided not to. If you go into your repository settings, you will see a banner (screenshot omitted). Looking at the CSS id for this widget in Inspect Element, I can see the element to target.
Now all we need is a CSS rule to hide these elements. I put it in a file called custom.css. In the Docker Compose config, I add a mount to make my custom CSS file available in the container. Then we need a way to make GitLab actually use this file, which can be configured via the GITLAB_OMNIBUS_CONFIG environment variable in the compose file. And there we have it: without changing anything in the GitLab source or doing any ugly patching, we have our CSS file, and the nagging banners are all gone!

Update #1: Optional script to generate the custom CSS.
Long story short: I have a NAS that acts as a torrent server (Z97-mobo based) and another networked device with a strong GPU that I use as a Proxmox compute server.
but I feel like idling a 3090 is overkill
Is there any sub-$100 GPU that you can recommend that can handle 4K H.264/H.265 streaming for 2-4 clients and is power efficient?
Also, is it a good idea to run that Jellyfin server on an i3-4130 if the GPU does the heavy lifting and there is already a ZFS pool and an nginx instance attached to it?
Hey, r/selfhosted! In light of the recent Omnivore news, it felt like an appropriate time to post a brief overview of the fantastic landscape of self-hosted bookmark and read later applications.
As usual, I'd recommend exploring every option on the list and finding the one best suited to your needs. Feel free to reach out with feedback or if I missed anything!
Self-Hosting Guide to Alternatives - Pocket, Omnivore (selfh.st)
Before you start hating on me, wait. I have a UMAX laptop with an Intel Pentium G4400 (3.3 GHz), 4 GB RAM, and 64 GB of SSD storage, and I'd like to use it for something. It is currently running Ubuntu Server, and I don't know how I could utilize something that I cannot plug into the network by cable (it literally doesn't have an Ethernet port).
Any recommendations what to do with this piece of hardware?
I'd like to use it in my homelab somehow (currently one desktop and one laptop, both running Proxmox). It has sat in my closet for more than a year and I have no other use for it now. Maybe I'd use it just as a client for media streaming (for a TV without WiFi), but that can be done with a Raspberry Pi, or I could just plug HDMI into my daily-driver laptop, which I use mainly for school note-taking and sometimes for development.
I'm looking for a specific self-hosted service or application that allows me to manage a list of YouTube channels with individual configurations. The ideal tool should:
I tried TubeSync but really didn't like it at all. Building a custom solution sounds like a fun weekend project, but before I dive in, I wanted to check if there are any existing self-hosted services apart from TubeSync that can accomplish this.
Does anyone know of a tool that fits these requirements?
Hello everyone!
CALL FOR CONTRIBUTORS
I have been working on a Markdown-based, Git-synced notes app for Android. Skipping any BS, here are the features you can explore right now (albeit without polish):
Git based syncing (clone over https, pull, add (staging and unstaging), commit and push implemented)
Allowing storage of repositories on external storage (fr this time)
Markdown rendering supported, opening files in other apps supported using intent framework
Multiple repos supported by default
MIT license, no hidden subscription/donations... its FOSS (fr this time).
Here's what I have planned for the near future (if there is demand):
Customizing the way markdown looks and feels, from font to its color, size, weight, style, etc.
A polished ui with pretty animations.
Support for sharing, converting and editing files (not just markdown)
SSH support
Using GitHub auth and something similar on GitLab for easy cloning and stuff.
Here are some more ideas that are just ideas (I have no clue how I will implement them or unsure if it will be of any use):
Potentially add support for a pen based input using a tab/drawing pad. (for now onenote files can be used maybe?)
Let each repo have a .{app name} folder with various configuration files, these files could have app settings in them. This means, for example you can have the apps theme change for different repos.
I hear you ask the name of the app?
GitNotes or MarGitDown... I am not sure yet, suggestions are welcome!
Here is the GitHub link if you find this project interesting!
https://github.com/psomani16k/GitNotes
Feel free to ask for any more information.
Hello all,
I've recently set up alerts on Google Scholar for new papers coming out. Google Scholar only covers one search engine (Google Scholar itself), can only notify you by email, and the emails it sends aren't that informative, etc. I can't help but think there must be better self-hosted solutions. No luck finding one so far, though. Do you know of any?
(But actually, how can I hide this from my ISP?) I am hosting a Grav site for me and a few others, as well as Immich for me and a few others, and a small (2-person) Minecraft server. So far, all I have done is use a cloudflared tunnel for the Grav site and the Immich server, using custom subdomains via Cloudflare, and TCPShield for the Minecraft server. I also use ProtonVPN on my devices, but I have the Minecraft server set to split tunneling in ProtonVPN, as I could not get the cloudflared tunnel to work with the server over TCP.
Hello !
I'm happy to publish the first public release of Broadcastarr.
This project aims to provide access to web broadcasts (such as sports streams, for instance) through a Jellyfin server.
It provides a Discord bot to perform basic actions, indexing is also published on Discord and Matrix channels.
JSON descriptions of the indexers are not provided on the repository, but you can ask me for the ones I have already implemented, or ask me for some help if the documentation is not clear enough.
This project has been in development since summer 2023 and took a lot of time to get to this point.
Starting from a simple script to grab URL links, it now works as a full service running in the background.
Don't hesitate to ask questions, report bugs or suggest improvements.
Everything is available here: https://github.com/Billos/Broadcastarr
I just purchased a 1L PC to replace my current Docker server (a Synology NAS). I mostly host my services via Docker, but I do plan on trying out some VMs, more for remote-desktop-type use than for hosting services.
The 1L PC is a HP Elite Mini G9. I will be adding 2 SSDs for redundant OS disks. And I will be utilizing my Synology NAS for storage. So the 1L PC doesn't need crazy amounts of storage and I will not be using it like a NAS.
Which OS should I use as the basis of the 1L PC? I like the idea of the easy nature of TrueNAS as a base OS, so I can set up shares and permissions easily. But do VMs run well under TrueNAS? Is there another OS I should consider as the basis of this mainly Docker server with some VMs?
I have some self-hosted services running in Docker containers. They are all on the same server (with a static IP). I was able to configure Traefik (also in a container, with Docker Compose) as a reverse proxy with a self-signed certificate on my local network. This was surprisingly easy to do. Now I want to expose these self-hosted services to the internet so I can access them from anywhere, but only via a VPN tunnel (WG Easy). What I have done so far:
Then I created another small test service, but no matter what I do, when I navigate to whoami.my-domain.com I get the same error response (screenshot omitted). If I ping that domain, I can see the DNS A record pointing to my personal external IP address...
Is it time for me to move on from my Tesla P4? Everyone seems to be getting Arc cards.