Today — 10 July 2025 · Self-Hosted Alternatives to Popular Services

Introducing PrintGuard - A new open-source 3D print failure detector running 40x faster than Spaghetti Detective whilst requiring less than 1GB of RAM for edge deployability

Hi everyone,

As part of my dissertation for my Computer Science degree at Newcastle University, I investigated how to improve the current state of 3D print failure detection. Current approaches such as Obico’s “Spaghetti Detective” use a vision-based machine learning model trained to detect only spaghetti-related defects, with slow throughput on edge devices (<1 FPS on a 2GB Raspberry Pi 4B), which makes it neither edge-deployable nor real-time, and unable to capture a wide range of defects. Whilst their model can be run locally, it is expensive to compute, so inference typically happens on their paid cloud service, which introduces potential privacy concerns.

My research led to the creation of a new vision-based ML model, focused on edge deployability so that it can be deployed for free on cheap, local hardware. I used a modified ShuffleNetV2 backbone to encode images for a Prototypical Network, ensuring it runs in real time with minimal hardware requirements (averaging 15 FPS on the same 2GB Raspberry Pi, a >40x improvement over Obico’s model). My benchmarks also indicate an average 2x improvement in precision and recall over Spaghetti Detective.
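The few-shot classification step of a Prototypical Network is simple to sketch: each class prototype is the mean of that class's support embeddings, and a query is assigned to the nearest prototype. A minimal NumPy illustration (the 2-D embedding vectors here are toy placeholders, not outputs of the actual ShuffleNetV2 backbone):

```python
import numpy as np

def prototypes(support_embeddings, labels):
    """One prototype per class: the mean of that class's support embeddings."""
    classes = sorted(set(labels))
    mask = np.array(labels)
    return classes, np.array(
        [support_embeddings[mask == c].mean(axis=0) for c in classes]
    )

def classify(query, protos):
    """Assign the query embedding to the nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(protos - query, axis=1)
    return int(np.argmin(dists))

# Toy embeddings: class 0 = "healthy print", class 1 = "defect"
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = [0, 0, 1, 1]
classes, protos = prototypes(support, labels)
print(classes[classify(np.array([0.1, 0.0]), protos)])  # nearest to the class-0 prototype
```

Because classification only needs a nearest-mean comparison over embeddings, the heavy lifting is all in the backbone, which is what makes a lightweight encoder like ShuffleNetV2 a good fit for edge hardware.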

My model is completely free to use, open source, private, deployable anywhere, and outperforms current approaches. To make it easy to use, I have created PrintGuard, an easily installable PyPI package providing a web interface for monitoring multiple printers, real-time defect notifications on mobile and desktop through web push, and the ability to link printers through services like OctoPrint for optional automatic print pausing or cancellation, all while requiring <1GB of RAM to operate. A simple setup process also guides you through configuring the application for local or external access, using free technologies like Cloudflare Tunnels and ngrok reverse proxies for secure remote access during long prints you may not be at home for.

Whilst feature-rich, the package is currently in beta and any feedback would be greatly appreciated. Please use the links below to find out more. Let's keep failure detection open source, local, and accessible for all!

📦 PrintGuard Python Package - https://pypi.org/project/printguard/

🎓 Model Research Paper - https://github.com/oliverbravery/Edge-FDM-Fault-Detection

🛠️ PrintGuard Repository - https://github.com/oliverbravery/PrintGuard

submitted by /u/oliverbravery

Releasing Baserow 1.34: Field indexes for 10x faster filtering, value constraints, custom CSS & JS and more — Open Source Airtable Alternative

We’ve just released Baserow 1.34, and it’s packed with powerful upgrades. Key highlights:

→ Field indexes: Up to 10x faster filtering

→ Field value constraints: Enforce unique values and boost data integrity

→ Multi-row selection: Bulk delete/duplicate in one click

→ Custom CSS & JS: Take your App Builder customization further

→ Application debugging: See misconfigurations directly in the editor

🔗 Try Baserow 1.34: https://baserow.io

📖 Full release notes: https://baserow.io/blog/baserow-1-34-release-notes

📦 GitLab repo: https://gitlab.com/baserow/baserow

💬 Join the community: https://community.baserow.io/

submitted by /u/bram2w

Receipt Wrangler Updates v6.4.0


Hello everyone, Noah here with some updates.

For those of you who are new, welcome! Receipt Wrangler is a self-hosted, AI-powered app that makes managing receipts easy. It can capture receipts from desktop uploads, mobile app scans, or email, or you can enter them manually. Users can itemize, categorize, and split them amongst users in the app. Check out https://receiptwrangler.io/ for more information.

Despite being in maintenance mode for a while, I've still been working on it. Turns out I just like making stuff 🤷 so here we are. Development is a bit slower, but I'm having fun with it. It's out of maintenance mode now and I'm back in the swing of things. Let's go over what got done since last time.

Development Highlights:

Custom Fields (mobile): Now in the mobile app, users can view, add and edit custom fields on forms, similar to desktop.

Split By Percent (desktop and mobile): Users may now split by percent on desktop and mobile, using either preset split percentages (25, 50, 75, 100) or custom percents.
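Splitting an amount by percentages has one subtle detail worth noting: naive per-share rounding can lose or gain a cent. A common fix, sketched below as a general technique (not Receipt Wrangler's actual implementation), is to have the last share absorb the rounding remainder:

```python
def split_by_percent(total_cents, percents):
    """Split an amount (in cents) by percentages; the last share absorbs rounding."""
    if sum(percents) != 100:
        raise ValueError("percentages must sum to 100")
    shares = [total_cents * p // 100 for p in percents[:-1]]
    shares.append(total_cents - sum(shares))  # remainder goes to the last share
    return shares

print(split_by_percent(1001, [25, 25, 50]))  # [250, 250, 501]
```

This guarantees the shares always sum back to the original total, whatever percentages are chosen.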

Receipt Navigation Consolidation (mobile): In the mobile app, the receipt form had tabs for the receipt, images, and comments. This has been consolidated down to just one tab, with pages that pop up to display comments and images instead. This greatly simplifies the code, and in my opinion the UX as well.

Major UI Update (desktop): This time around, there are some major UI updates. The overall UX of the app is more or less the same, with some minor improvements in spots, but the major changes are:

* Updated colors, with better use of color for contrast and accessibility in some spots

* Updated the look and feel of tables to have rounded edges, and fixed some annoying visual bugs with them for a cleaner, smoother look

* Some minor UX improvements, like the receipt filter, the ability to add/hide columns on the receipt table, and improved responsiveness across the app - particularly on the dashboard

Below is a small example of the difference (before/after screenshots are included in the original post).

Coming Up Next:

Add Custom Fields to Export: Custom fields are awesome to capture data, but now those custom fields need to be included in exported data.

Implement Itemization: Itemization hasn't really existed in Receipt Wrangler in a nice way, so coming soon, users will be able to add items to receipts and share items with other users if they'd like.

OIDC SSO Implementation: Coming up, SSO via OIDC will allow logging in and creating users with social logins, or perhaps your own OIDC server (Authentik, Authelia, etc.).

Custom Export: This will allow users to export their data in a customized way that suits them.

Notes:

PikaPods: Drop a vote here if you'd like to see Receipt Wrangler added to PikaPods as an easy one-click install: https://feedback.pikapods.com/posts/707/add-app-receipt-wrangler

Project Status: The project is no longer in maintenance mode and is in active development. Prior to this, I was getting a bit burnt out with the project, and life. Coming back to the project in a different headspace has helped a lot. I am going to take development at my own pace, and above all, have fun.

Thanks for reading and your support!

Cheers,

Noah

submitted by /u/Dramatic_Ad5442

Exactly how (not?) stupid would it be to self-host several low-traffic websites from my home?

I maintain about a half-dozen simple landing pages for businesses of friends and family and I'd like to save them a bunch of money by just moving things to something in the house. At most, across all the landing pages, we're looking at no more than a few hundred visits a day, tops (and that'd be an outlier event).

In my research into this topic, I feel like the common wisdom is "don't do it." But assuming I'm using basic security best practices, what are the drawbacks/dangers of hosting websites from home?

Currently, as a personal project, I'm hosting one website on the ol' World Wide Web. I have just port 443 open, SSH access locked down to key-based authentication (RSA-2048 keys, SHA-256 fingerprints), and Cloudflare's DNS proxy in front of the site.

So far, as near as I can tell, I've had no issues. This has led me to think that I could go ahead and self-host several more websites. Is this a bad idea? A fine idea? Should I use Cloudflare Tunnels? Something else?

I'm in that late beginner stage where I know enough to know I don't know what the hell I'm doing. Any help is appreciated.

edit for extra context: I'm currently working off an old Raspberry Pi 3, though if I go forward with adding websites, I'd probably shell out for one of the new 16GB Raspberry Pi 5s. That is, unless someone has a better suggestion.

submitted by /u/vivianvixxxen

Tinyauth v3.5.0 now with LDAP support!

Hello everyone,

I just released Tinyauth v3.5.0 which finally includes LDAP support. This means that you can now use something like LLDAP (just discovered it and it is AMAZING) to centralize your user management instead of having to rely on environment variables or a users file. It may not seem like a significant update but I am letting you know about it because I have gotten a lot of requests for this specific feature in my previous posts and in GitHub issues.

You may or may not know what Tinyauth is, but if you don't: it's a lightweight authentication middleware (like Authelia/Authentik/Keycloak) that allows you to easily log in to your apps using simple username and password authentication, OAuth with Google, GitHub or any OAuth provider, TOTP and now... LDAP. It requires minimal configuration and can be deployed in less than 5 minutes. It supports all popular proxies like Traefik, Nginx and Caddy.

Check out the new release over on GitHub.

Have fun!

Edit(s): Fix some typos

submitted by /u/steveiliop56

Open source digwebinterface

For work, I often use digwebinterface.com; it's really handy! But I guess a lot of people think that way, so it is quite slow sometimes.

I've created an open-source variant of digwebinterface, with the goal of making it behave as similarly as possible while running completely on your own hardware.
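At its core, a dig-style web interface just runs DNS queries on the server and renders the answers. A minimal stdlib-only sketch of the lookup step (the real project supports many record types and proper dig-style output; this only resolves A/AAAA records):

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses a hostname resolves to, sorted."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically includes 127.0.0.1
```

Self-hosting this kind of lookup also means your queries originate from your own network rather than a shared public service, which is part of the appeal.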

You can find the project here: https://github.com/Lars-/opensource-digwebinterface

Let me know what you think!

submitted by /u/Kmillion2

PlexDL: A Chrome extension to download media directly from Plex Web (for those who want local backups)


Hey fellow selfhosters,

I built a small Chrome extension called PlexDL (yeah, not a great name) to help my dad download stuff directly from my Plex server. He watches a lot via my NAS but likes keeping local copies “just in case.”

I tried solutions like WebDAV or Nextcloud… but honestly, Plex already had the perfect UI. It just didn’t have a “Download” button.

So I built one.

What it does:

  • Adds a Download button directly in Plex Web
  • Lets you download an entire show, a season, or just a single episode/movie
  • 100% local, uses Plex’s internal API, no external calls
  • Keeps original filenames and formats
  • Works without Plex Pass (bypasses the offline download limitation)
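Under the hood, Plex's server API already supports direct downloads: each media item exposes a part key in its library metadata, and appending a download flag plus an auth token to that part URL returns the raw file. A rough sketch of the URL construction such an extension could perform (the endpoint shape and parameters follow commonly documented but unofficial Plex behaviour; treat them as assumptions, and the server address, part key, and token here are placeholders):

```python
from urllib.parse import urlencode

def plex_download_url(server, part_key, token):
    """Build a direct-download URL for a media part.

    `part_key` is the Media/Part key from the library metadata,
    e.g. "/library/parts/12345/1650000000/file.mkv".
    """
    query = urlencode({"download": 1, "X-Plex-Token": token})
    return f"{server}{part_key}?{query}"

print(plex_download_url("http://nas.local:32400",
                        "/library/parts/12345/1650000000/file.mkv",
                        "TOKEN"))
```

Because the request goes straight to your own server with your own token, no external service is involved, which matches the "100% local" design above.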

I built this for a personal use case, but it might help others who selfhost Plex and want a simple way to extract media on demand without setting up an additional interface.

It's not on the Chrome Web Store yet, so you'll need to enable Developer Mode to install it, even if using the .crx file:

  1. Go to chrome://extensions
  2. Toggle Developer Mode
  3. Either click “Load unpacked” (for the source folder) or drag and drop the .crx file into the page.

GitHub: PlexDL

Would love feedback or suggestions!

I hope it’s useful to someone else besides my dad!


submitted by /u/Badraxas

REI3.11 - New feature release of the selfhosted, low-code platform


Hello everyone,

Our second major feature release in 2025 is ready - REI3.11!

The free and open low-code platform REI3 is built to address internal software needs - from replacing Excel lists and Access databases to building complex software solutions like inventory, request handling, or order management.


REI3.11 brings in a slew of new features, such as:

  • Global Search: Find records in all REI3 applications simultaneously; the global search feature offers results from anywhere - optimized to the data structure of each individual app.
  • OpenID Connect authentication: User authentication via identity providers like KeyCloak or Microsoft Entra is now possible. Role mapping is also supported, automatically authorizing new users.
  • File handling on the server: REI3 can now read and write files from disk. This is an often requested feature from our community, allowing for importing of CSV/XML/JSON data directly into REI3. This feature can also interact with the internal file management of REI3, enabling scenarios such as receiving an import file via email attachment, processing it and then writing a corresponding output file to disk.
  • New conditions that can influence form states based on policy access to individual records.
  • More options for configuring columns in lists, calendars, Gantt- & Kanban views.
  • Improved performance in large lists, especially when using access policies.
  • A new display option for workflow-like record states.


Many REI3 applications are publicly available to be downloaded and used at no cost. No subscription needed. Everything selfhosted.


REI3 is fully open source (MIT license) and can run basically anywhere, on servers, in the cloud - even from a USB stick. An online demo version is available here.

We are constantly impressed with what people build with REI3. Thanks to continuous feedback as well as requirements from enterprise projects, REI3 keeps improving and expanding with every release.

submitted by /u/NetrasFent

Introducing swurApp, a simple program to prevent Sonarr from downloading episodes before they’ve aired

Hi r/selfhosted — I’ve built a simple python program ( https://github.com/OwlCaribou/swurApp ) to make sure episodes aren't grabbed until they've aired. This will help prevent things like malicious or fake files being downloaded before the episode is actually out.

It works by connecting to your Sonarr instance’s API and unmonitoring episodes that haven’t aired yet. Then, when the episodes air, swurApp will monitor them again and they should be picked up by Sonarr the next time it grabs episodes.
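That unmonitor-until-aired loop can be sketched with only the standard library. The endpoint paths and payload below follow Sonarr's v3 API as commonly documented (`GET /api/v3/episode?seriesId=…` and `PUT /api/v3/episode/monitor`), but this is an illustrative sketch rather than swurApp's actual code, and the instance URL and API key are placeholders:

```python
import json
import urllib.request
from datetime import datetime, timezone

SONARR = "http://localhost:8989"   # assumed local Sonarr instance
API_KEY = "YOUR_API_KEY"           # placeholder; find yours under Settings > General

def _call(method, path, body=None):
    """Minimal Sonarr v3 API helper using only the stdlib."""
    req = urllib.request.Request(
        f"{SONARR}{path}", method=method,
        data=json.dumps(body).encode() if body else None,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def unaired_ids(episodes, now=None):
    """IDs of monitored episodes whose air date is still in the future."""
    now = now or datetime.now(timezone.utc)
    return [
        ep["id"] for ep in episodes
        if ep.get("monitored") and ep.get("airDateUtc")
        and datetime.fromisoformat(ep["airDateUtc"].replace("Z", "+00:00")) > now
    ]

def unmonitor_unaired(series_id):
    """Fetch a series' episodes and unmonitor the ones that haven't aired yet."""
    ids = unaired_ids(_call("GET", f"/api/v3/episode?seriesId={series_id}"))
    if ids:
        _call("PUT", "/api/v3/episode/monitor",
              {"episodeIds": ids, "monitored": False})
    return ids
```

Running the reverse of `unmonitor_unaired` on a schedule (re-monitoring episodes whose air date has passed) completes the loop the post describes.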

There’s a little bit of setup (you have to get Sonarr’s API key, and you have to tag the shows you don't want to track), but I’ve tried my best to detail the steps in the README file. Python is not my native language (I’m a Java dev by trade), so suggestions, feedback, and code contributions are welcome.

I know this issue has been plaguing some Sonarr users for a while, so I hope this makes a dent in solving the “why do I have Alien Romulus instead of xyz” problem.

(The stupid acronym stands for “Sonarr Wait Until Release” App[lication].)

Edit: This is a workaround for https://github.com/Sonarr/Sonarr/issues/969. You CAN make Sonarr wait before grabbing a file, but it checks only the age of the file itself, not whether the file falls within a valid timespan. So last week someone seeded Alien Romulus as a bunch of TV series, and since it had been seeded for several hours, Sonarr instances grabbed the file even though the episodes hadn't aired.

Check out this thread for an example of why this issue isn't solved with the existing Sonarr settings: https://www.reddit.com/r/sonarr/comments/1lqxfuj/sonarr_grabbing_episodes_before_air_date/

submitted by /u/OwlCaribou

I built Webcap, a self-hosted tool that monitors websites with screenshots and change detection

Hey all,

I wanted to share something I’ve been working on: Webcap, a small self-hosted tool that watches websites for changes by taking full-page screenshots and tracking both visual and HTML diffs over time.

It’s useful for monitoring landing pages, docs, third-party dashboards, or really any site where you want to see when something shifts.

What makes it easy to run is Discode, another side project of mine that lets you install apps on your own server using a single curl command. Discode handles the setup, SSL, firewall, and reverse proxy configuration, so you can focus on the app itself.

You just need a public Linux server and a domain name pointed to it. After that, it's one command to get Webcap running.

If you're curious, here’s the Webcap site with more details and a live demo:
https://rubyup.dev/webcap

And if you’re building self-hosted tools in Rails yourself, I’d love feedback on Discode too:
https://rubyup.dev/discode

Thanks for checking it out.

submitted by /u/roelbondoc

Setting up Vaultwarden with Caddy & Authelia


I got into self-hosting recently, and at the moment I'm trying to set up Vaultwarden with Caddy and Authelia.

I've set up the Caddyfile as follows:

    auth.domain.tld {
        reverse_proxy authelia:9091
    }

    # Vaultwarden
    vault.domain.tld {
        forward_auth authelia:9091 {
            uri /api/authz/forward-auth?authelia_url=https://auth.domain.tld/
            copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
        }
        reverse_proxy vaultwarden:8888
    }

Everything works using my TLD: I get redirected to Authelia, then back to Vaultwarden. However, I'm unable to connect to my self-hosted Vaultwarden via the Bitwarden app.

submitted by /u/aymerci

Has anybody tried Readur as a Paperless-ngx alternative?

I just found this project, which seems to do virtually everything Paperless-ngx does, with a few niceties:

  • Simpler UI (that's not necessarily a positive thing for everybody, but I definitely don't use all the features in Paperless)
  • Built-in Prometheus metrics
  • Supports multi-instance deployments for high availability

On the other hand:

  • It's not entirely clear to me, without deploying it, that it supports multiple users (which is a hard requirement for me)
  • While the documentation really goes in-depth in some aspects, it's not as exhaustive as Paperless
  • It clearly has way, way fewer users (at least for now...)

Has anyone given it a try? What has your experience been?

submitted by /u/kernald31

Struggling with networking as a beginner

Hey

I'm working a lot with Copilot or ChatGPT to try to set up the following:

I want my Raspberry Pi 5 to host Immich, Filebrowser, and maybe other things in the future.
I am using Tailscale to connect my devices so they can reach the Pi.

I want to set up Filebrowser and Immich with decent-looking URLs, without the port.
So far I have been using MagicDNS to reach my Pi.

Since Immich cannot handle base URL changes (so I can't use rbphostname/images), the LLMs directed me to set up a DNS server on the Pi and add a nameserver in the Tailscale settings.

So I did a bunch of dnsmasq things and messed around with the nameservers, but even when the DNS server is reachable I can never get it to work.
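For what it's worth, the split-DNS setup the LLMs pointed at usually comes down to two pieces: a dnsmasq entry on the Pi that answers the friendly hostnames with its Tailscale IP, and a reverse proxy on the Pi that routes each hostname to the right service port. A sketch of what that can look like (the hostnames, the 100.64.0.10 Tailscale address, and the ports are placeholders, not taken from any actual setup):

```
# /etc/dnsmasq.d/homelab.conf
# Answer friendly names with the Pi's Tailscale IP
address=/immich.home.lan/100.64.0.10
address=/files.home.lan/100.64.0.10

# Caddyfile on the Pi
# Route each hostname to the matching service, so no port is needed in the URL
http://immich.home.lan {
    reverse_proxy localhost:2283
}
http://files.home.lan {
    reverse_proxy localhost:8080
}
```

With the Pi's Tailscale IP added as a nameserver (or a split-DNS entry) in the Tailscale admin console, devices on the tailnet resolve those names to the Pi, and the reverse proxy takes care of the ports.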

My first question is: is what I am doing possible?
My second: is it a good option, or would you suggest something else?
And lastly, if both answers are yes, could you give some tips for setting it up, or point me towards some documentation?

submitted by /u/Falld0wn

Telegram or Discord bots for personal productivity

Hi community,

Please forgive me if you think this is not the right sub to ask this question.

After months of going down the rabbit hole of trying to find my productivity suite, I realised that I need a distraction-free, chatbot-like interface and a self-hosted bot backend that integrates with the other tools I use.

Initially I was thinking of developing the UI myself, as that gives me flexibility, but it would be time-consuming: I am primarily a backend developer and do not have a lot of experience with UI. So I thought using Telegram or Discord might save me a lot of time and let me focus on the stuff that matters more to me.

Anyone who has already taken this path: could you give me your two cents on why you chose one tool over the other (Discord vs Telegram)? Also open to any other tools or advice you might have!

Thanks!

submitted by /u/ILoveDart

Running 3 Ubuntu hosts, wondering if there's a better option?

Hi fellow hosters,

I'm running 3x Ubuntu hosts, each running ONLY Docker containers, 25 or so apiece.
I've got Cockpit installed and the TuneD profile set to latency-performance.
I was wondering if there are any better options distro-wise, maybe tailored to Docker containers alone.
Or any distro that outperforms Ubuntu...
Or any performance tweaks I should know of...

submitted by /u/RazzFraggle81

Self-hosted URL based file sharing

Hi, I am looking for advice. I have a OMV NAS where I store all my files.

From time to time my friends ask me to share something with them; I mostly just copy the stuff to a USB stick and give it to them. I am looking for a solution that would allow me to just select files on my NAS, create a share URL with an expiration, and send it to them, similar to how OneDrive links work.

I went through Seafile and Nextcloud, but both feel like overkill. Do you know of any service that would work for me?

Another nice-to-have would be an option for others to upload files to the shared directory as well, but that is not needed at the moment.

submitted by /u/Training_Health_1062

One Pace for Jellyfin - First Release!


Hey guys!

I've posted here before so I'm sorry if this is considered spam.

Opforjellyfin, or One Pace for Jellyfin, is a small CLI program for downloading One Pace episodes and placing them in a folder together with proper metadata.

This combines acquiring the episodes and sorting them into their proper arcs in a neat little package, tailored for Jellyfin use.

I've made some significant improvements to the program over the last few weeks and I believe it is mature enough for its first 'official' release!

Hence, there are now single-file binaries for Linux, macOS, and Windows. No need to build from source!

I'm pretty happy with where the program is right now, but I will of course still accept any criticisms or feature requests!

I will also happily accept any contribution toward the metadata repo! Be it episode .nfo files or suggestions on backdrop images!

See you on the Grand Line!

submitted by /u/tissla-xyz

FINALLY: Recursive archiving of domains, with ArchiveBox 0.8.0+


After trying a number of self-hosted options for archiving websites I settled on Archivebox, with the caveat that I could really only archive one link at a time - whatever the browser extension gave to the archiver.

I looked at Fess and wondered if I could do something similar, on a smaller scale. As it turns out, ArchiveBox 0.8.0+ has a REST API so adding URLs programmatically is now trivial.
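With the REST API, the "add a URL" step reduces to a single authenticated POST. A rough stdlib-only sketch (the endpoint path and payload fields here follow the v0.8 beta API and may shift between release candidates; verify against the /api/ schema on your own instance, and the server address and token are placeholders):

```python
import json
import urllib.request

ARCHIVEBOX = "http://localhost:8000"   # assumed local ArchiveBox instance
API_KEY = "YOUR_API_TOKEN"             # placeholder; generated in the ArchiveBox admin

def build_add_request(url):
    """Build the POST that queues a URL for archiving.

    Endpoint and payload mirror the v0.8 beta API; release
    candidates may differ, so check /api/ on your instance.
    """
    return urllib.request.Request(
        f"{ARCHIVEBOX}/api/v1/cli/add",
        data=json.dumps({"urls": [url], "api_key": API_KEY}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def archive_url(url):
    """Send the request and return the server's JSON response."""
    with urllib.request.urlopen(build_add_request(url), timeout=30) as resp:
        return json.load(resp)
```

Feeding a crawler's discovered links into a loop over `archive_url` is essentially what recursive domain archiving comes down to.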

This little set of Docker containers was my solution to this issue which has been a long-standing problem for ArchiveBox users with way too much storage space available to them.

Enjoy!

Oh, and a small caveat: the primary developer has put ArchiveBox on the back burner for now, though that doesn't mean it won't work. The latest 0.8.5rc51 seems to work perfectly fine. That said, release candidates are use-at-your-own-risk, yada yada.

Github: https://github.com/egg82/archivers
domain_archiver: https://hub.docker.com/r/egg82/domain_archiver
gov_archiver: https://hub.docker.com/r/egg82/gov_archiver

submitted by /u/eggys82

Self-hosted AI setups – curious how people here approach this?

Hey folks,

I'm doing some quiet research into how individuals and small teams are using AI without relying heavily on cloud services like OpenAI, Google, or Azure.

I’m especially interested in:

  • Local LLM setups (Ollama, LM Studio, Jan, etc.)
  • Hardware you’re using (NUC, Pi clusters, small servers?)
  • Challenges you've hit with performance, integration, or privacy

Not trying to promote anything — just exploring current use cases and frustrations.

If you're running anything semi-local or hybrid, I'd love to hear how you're doing it, what works, and what doesn't.

Appreciate any input — especially the weird edge cases.

submitted by /u/ExcellentSector3561