Normal View

New articles are available. Click to refresh the page.
Today — 10 April 2025 · homelab

My Homelab setup so far

10 April 2025 at 03:00

APC 24U Netshelter - picked up from local city hall auction for $150

Top to bottom: Generic 24-port patch panel, Netgear ProSafe JGS524E V2 24-port managed switch

2000s AMD gaming PC with 2GB DDR3 RAM - first NAS server with 2TB of total RAID 5 storage - Not in use

Hyve Zeus V1 (first homelab server) - Dual Xeon (something) - 128GB RAM - 1TB SATA SSD

5x Dell PowerEdge R310 - 32GB RAM - Single Xeon - 4x 1GbE PCIe cards - 1TB SATA SSD - purchased all five for a total of $50 from a local university auction - Clustered Proxmox - Currently not in use

Dell PowerEdge R730 - Dual Xeon (something) - 64GB RAM - 2x 1TB 2.5" SATA SSD - 14x 1TB Dell 2.5" SAS HDD (3x 4-disk ZFS vdevs with two hot spares) - Central Proxmox / NAS server - Runs 24/7 for NAS/GitLab/Bluesky PDS/Factorio/Plex/NGINX reverse proxy

APC 1500 Smart-UPS - old batteries need to be replaced - purchased for $20 from a local university auction

Not pictured (sitting on top of the rack): a Samsung 24" monitor purchased for $45 from a local thrift store, and a 7-node PoE RPi 4 Kubernetes cluster

All the Dell servers are on sliding rails with cable management arms; the Hyve is just on rails.

submitted by /u/maydayM2

Proxmox Backup Server 3.4 released!

10 April 2025 at 13:38

Patch notes copied from https://pbs.proxmox.com/wiki/index.php/Roadmap#Proxmox_Backup_Server_3.4

Proxmox Backup Server 3.4

Released: 10 April 2025
Based on: Debian Bookworm (12.10)
Kernel:
  • Latest 6.8.12-9 Kernel (stable default)
  • Newer 6.14 Kernel (opt-in)
ZFS: 2.2.7 (with compatibility patches for Kernel 6.14)

Highlights

  • Performance improvements for garbage collection.
    • Garbage collection frees up storage space by removing unused chunks from the datastore.
    • The marking phase now uses a cache to avoid redundant marking operations.
    • This increases memory consumption but can significantly decrease the runtime of garbage collection.
  • More fine-grained control over backup snapshot selection for sync jobs.
    • Sync jobs are useful for pushing or pulling backup snapshots to or from remote Proxmox Backup Server instances.
    • Group filters already allow selecting which backup groups should be synchronized.
    • Now, it is possible to only synchronize backup snapshots that are encrypted, or only backup snapshots that are verified.
  • Static build of the Proxmox Backup command-line client.
    • Proxmox Backup Server is tightly integrated with Proxmox VE, but its command-line client can also be used outside Proxmox VE.
    • Packages for the command-line client are already provided for hosts running Debian or Debian derivatives.
    • A new statically linked binary increases the compatibility with Linux hosts running other distributions.
    • This makes it easier to use Proxmox Backup Server to create file-level backups of arbitrary Linux hosts.
  • Latest Linux 6.14 kernel available as opt-in kernel.

Changelog Overview

Enhancements in the web interface (GUI)

  • Allow configuring a default realm which will be pre-selected in the login dialog (issue 5379).
  • The prune simulator now allows specifying schedules with both range and step size (issue 6069).
  • Ensure that the prune simulator shows kept backups in the list of backups.
  • Fix an issue where the GUI would not fully load after navigating to the "Prune & GC Jobs" tab in rare cases.
  • Deleting the comment of an API token is now possible.
  • Various smaller improvements to the GUI.
  • Fix some occurrences where translatable strings were split, which made potentially useful context unavailable for translators.

General backend improvements

  • Performance improvements for garbage collection (issue 5331).
    • Garbage collection frees up storage space by removing unused chunks from the datastore.
    • The marking phase now uses an improved chunk iteration logic and a cache to avoid redundant atime updates.
    • This increases memory consumption but can significantly decrease the runtime of garbage collection.
    • The cache capacity can be configured in the datastore's tuning options (see the example after this list).
  • More fine-grained control over backup snapshot selection for sync jobs.
    • Sync jobs are useful for pushing or pulling backup snapshots to or from remote Proxmox Backup Server instances.
    • Group filters already allow selecting which backup groups should be synchronized.
    • Now, it is possible to only synchronize backup snapshots that are encrypted, or only backup snapshots that are verified (issue 6072).
    • The sync job's transfer-last setting has precedence over the verified-only and encrypted-only filtering.
  • Add a safeguard against filesystems that do not honor atime updates (issue 5982).
    • The first phase of garbage collection marks used chunk files by explicitly updating their atime.
    • If the filesystem backing the chunk store does not honor such atime updates, phase two may delete chunks that are still in use, leading to data loss.
    • Hence, datastore creation and garbage collection now perform an atime update on a test chunk, and report an error if the atime update is not honored.
    • The check is enabled by default and can be disabled in the datastore's tuning options.
  • Allow to customize the atime cutoff for garbage collection in the datastore's tuning options.
    • The atime cutoff defaults to 24 hours and 5 minutes, as a safeguard for filesystems that do not always immediately update the atime.
    • However, on filesystems that do immediately update the atime, this can cause unused chunks to be kept for longer than necessary.
    • Hence, allow advanced users to configure a custom atime cutoff in the datastore's tuning options.
  • Allow to generate a new token secret for an API token via the API and GUI (issue 3887).
  • Revert a check for known but missing chunks when creating a new backup snapshot (reverts fix for issue 5710).
    • This check was introduced in Proxmox Backup Server 3.3 to enable clients to re-send chunks that disappeared.
    • However, the check turned out to not scale well for large setups, as reported by the community.
    • Hence, revert the check and aim for an opt-in or opt-out approach in the future.
  • Ensure proper unmount if the creation of a removable datastore fails.
  • Remove a backup group if its last backup snapshot is removed (issue 3336).
    • Previously, the empty backup group persisted with the previous owner still set.
    • This caused issues when trying to add new snapshots with a different owner to the group.
  • Decouple the locking of backup groups, snapshots, and manifests from the underlying filesystem of the datastore (issue 3935).
    • Lock files are now created on the tmpfs under /run instead of the datastore's backing filesystem.
    • This can also alleviate issues concerning locking on datastores backed by network filesystems.
  • Ensure that permissions of an API token are deleted when the API token is deleted (issue 4382).
  • Ensure that chunk files are inserted with the correct owner if the process is running as root.
  • Fix an issue where prune jobs would not write a task log in some cases, causing the tasks to be displayed with status "Unknown".
  • When listing datastores, parse the configuration and check the mount status after the authorization check.
    • This can lead to performance improvements on large setups.
  • Improve the error reporting by including more details (for example the errno) in the description.
  • Ensure that "Wipe Disk" also wipes the GPT header backup at the end of the disk (issue 5946).
  • Ensure that the task status is reported even if logging is disabled using the PBS_LOG environment variable.
  • Fix an issue where proxmox-backup-manager would write log output twice.
  • Fix an issue where a worker task that failed during start would not be cleaned up.
  • Fix a race condition that could cause an incorrect update of the number of current tasks.
  • Increase the locking timeout for the task index file to alleviate issues due to lock contention.
  • Fix an issue where verify jobs would be too eagerly aborted if the manifest update fails.
  • Fix an issue where file descriptors would not be properly closed on daemon reload.
  • Fix an issue where the version of a remote Proxmox Backup Server instance was checked incorrectly.
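
For illustration, a rough sketch of how the new garbage-collection tuning options might be set from the shell, assuming a datastore named store1; the option names (gc-cache-capacity, gc-atime-safety-check, gc-atime-cutoff) and the values are assumptions inferred from the notes above, so verify them against the PBS 3.4 documentation before use:

  # Assumed option names and illustrative values -- verify with
  # `proxmox-backup-manager datastore update --help` before running.
  # Larger chunk cache for the garbage-collection marking phase:
  proxmox-backup-manager datastore update store1 --tuning 'gc-cache-capacity=8388608'
  # Keep the atime safety check enabled and shorten the atime cutoff (minutes):
  proxmox-backup-manager datastore update store1 --tuning 'gc-atime-safety-check=true,gc-atime-cutoff=120'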

Client improvements

  • Static build of the Proxmox Backup command-line client (issue 4788).
    • Proxmox Backup Server is tightly integrated with Proxmox VE, but its command-line client can also be used outside Proxmox VE.
    • Packages for the command-line client are already provided for hosts running Debian or Debian derivatives.
    • A new statically linked binary increases compatibility with Linux hosts running other distributions.
    • This makes it easier to interact with Proxmox Backup Server on arbitrary Linux hosts, for example to create or manage file-level host backups (see the sketch after this list).
  • Allow to read passwords from credentials passed down by systemd.
    • Examples are the API token secret for the Proxmox Backup Server, or the password needed to unlock the encryption key.
  • Improvements to the vma-to-pbs tool, which allows importing Proxmox Virtual Machine Archives (VMA) into Proxmox Backup Server:
    • Optionally read the repository or passwords from environment variables, similarly to proxmox-backup-client.
    • Add support for the --version command-line option.
    • Avoid leaving behind zstd, lzop or zcat processes as zombies (issue 5994).
    • Clarify the error message in case the VMA file ends unexpectedly.
    • Mention restrictions for archive names in the documentation and manpage (issue 6185).
  • Improvements to the change detection modes for file-based backups introduced in Proxmox Backup Server 3.3:
    • Fix an issue where the file size was not considered for metadata comparison, which could cause subsequent restores to fail.
    • Fix a race condition that could prevent proper error propagation during a container backup to Proxmox Backup Server.
  • File restore from image-based backups: Switch to blockdev options when preparing drives for the file restore VM.
    • In addition, fix a short-lived regression when using namespaces or encryption due to this change.
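
As a rough sketch of what a file-level host backup with the command-line client looks like (the static binary is invoked the same way); backupuser@pbs, pbs.example.com and store1 below are placeholder values:

  # Placeholder repository: user, realm, host and datastore are examples only.
  export PBS_REPOSITORY='backupuser@pbs@pbs.example.com:store1'
  export PBS_PASSWORD='...'            # password or API token secret
  # Back up the host's root filesystem as a pxar archive:
  proxmox-backup-client backup root.pxar:/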

Tape backup

  • Allow to increase the number of worker threads for reading chunks during tape backup.
    • On certain setups, this can significantly increase the throughput of tape backups.
  • Add a section on disaster recovery from tape to the documentation (issue 4408).

Installation ISO

  • Raise the minimum root password length from 5 to 8 characters for all installers.
    • This change is done in accordance with current NIST recommendations.
  • Print more user-visible information about the reasons why the automated installation failed.
  • Allow RAID levels to be set case-insensitively in the answer file for the automated installer.
  • Prevent the automated installer from printing progress messages while there has been no progress.
  • Correctly acknowledge the user's preference whether to reboot on error during automated installation (issue 5984).
  • Allow binary executables (in addition to shell scripts) to be used as the first-boot executable for the automated installer.
  • Allow properties in the answer file of the automated installer to be either in snake_case or kebab-case.
    • The kebab-case variant is preferred to be more consistent with other Proxmox configuration file formats.
    • The snake_case variant will be gradually deprecated and removed in future major version releases.
  • Validate the locale and first-boot-hook settings while preparing the automated installer ISO, instead of failing the installation due to wrong settings.
  • Prevent printing non-critical kernel logging messages, which drew over the TUI installer's interface.
  • Keep the network configuration detected via DHCP in the GUI installer, even when not clicking the Next button first (issue 2502).
  • Add an option to retrieve the fully qualified domain name (FQDN) from the DHCP server with the automated installer (issue 5811).
  • Improve the error handling if no DHCP server is configured on the network or no DHCP lease is received.
    • The GUI installer will pre-select the first found interface if the network was not configured with DHCP.
    • The installer will fall back to more sensible values for the interface address, gateway address, and DNS server if the network was not configured with DHCP.
  • Add an option to power off the machine after the successful installation with the automated installer (issue 5880).
  • Improve the ZFS ARC maximum size settings for systems with a limited amount of memory.
    • On these systems, the ZFS ARC maximum size is clamped in such a way that there is always at least 1 GiB of memory left for the system.
  • Make Btrfs installations use the proxmox-boot-tool to manage the EFI system partitions (issue 5433).
  • Make GRUB install the bootloader to the disk directly to ensure that a system is still bootable even if the EFI variables were corrupted.
  • Fix a bug in the GUI installer's hard disk options, which caused ext4 and xfs to show the wrong options after switching back from Btrfs's advanced options tab.

Improved management of Proxmox Backup Server machines

  • Several vulnerabilities in GRUB that could be used to bypass SecureBoot were discovered and fixed (PSA-2025-00005-1)
    • The documentation for SecureBoot now includes instructions to prevent using vulnerable components for booting via a revocation policy.
  • Improvements to the notification system:
    • Allow overriding templates used for notifications sent as plain text as well as HTML (issue 6143).
    • Streamline notification templates in preparation for user-overridable templates.
    • Clarify the descriptions for notification matcher modes (issue 6088).
    • Fix an error that occurred when creating or updating a notification target.
    • HTTP requests to webhook and gotify targets now set the Content-Length header.
    • Lift the requirement that InfluxDB organization and bucket names need to be at least three characters long.
      • The new minimum length is one character.
  • Improve the accuracy of the "Used Memory" metric by relying on the MemAvailable statistic reported by the kernel.
    • Previously, the metric incorrectly ignored some reclaimable memory allocations and thus overestimated the amount of used memory.
  • Backport a kernel patch that avoids a performance penalty on Raptor Lake CPUs with recent microcode (issue 6065).
  • Backport a kernel patch that fixes Open vSwitch network crashes that would occur with a low probability when exiting ovs-tcpdump.

Known Issues & Breaking Changes

  • None
submitted by /u/HTTP_404_NotFound

Current Homelab; what else could I put in there?

10 April 2025 at 08:02

First of all, the title is a "That's what she said" and second: This is the current shape and form of my homelab.

Years ago I posted this thread: https://www.reddit.com/r/homelab/comments/h8qx2c/small_and_humble_homelab_because_better_doesnt/

Over the years I've made some changes and installed a few new VMs, and I accidentally killed a server by renaming it without remembering it was the Domain Controller; you're not supposed to just rename those because, oopsie-doopsie, it will break.

Before I start listing what it has inside these days, I am more than open to suggestions on what I could possibly get to tidy it up and expand it.

My current ideas:

  • A new Ubiquiti switch with more ports, plus a rack mount
  • A rack mount for the current 5-port switch
  • A small UPS (Open to suggestions!)

Now for those wondering: my Synology DS418 NAS is hosted downstairs in the hallway closet, due to a current lack of ports.

Hosted on the ASRock DeskMini (Intel i5-8400, 32 GB SO-DIMM @ 2400, with 2 x 512 GB SSDs, 1 x 512 GB NVMe SSD and a 64 GB SSD for the OS)

OS: VMware ESXi 7.0 (used to run 8.0 but I couldn't get the SSDs to show up)

VMs & their services:

CITADEL - Windows Server 2022 - Domain Controller services

EDI - Windows Server 2022 - Plex server

LEGION - Linux Ubuntu 24.04 LTS - Pi-hole

SAMARA - Linux Ubuntu 24.04 LTS - Docker containers (Sonarr, Radarr, SABnzbd, Portainer, Overseerr, Tautulli, UnifiAlerts, Vaultwarden, Nginx Proxy Manager, Bazarr, Lidarr and Watchtower)

MORDIN - Linux Ubuntu 24.04 LTS - HomeAssistant

The Fujitsu PC was installed last night with Linux Ubuntu 24.04 and Portainer, so it will most likely host a few "backup" services, like another Plex instance. Open to suggestions.

submitted by /u/GodisanAstronaut

Home Lab Phase 1.5

10 April 2025 at 01:39

* Yes, it's a repost. I deleted the last post after it was pointed out that I forgot to add pictures.

I moved towns almost a year ago for work and have been working on expanding my home lab into a small home cloud. So far I have the mgmt/IPMI network installed (blue copper) and the servers and most switches racked.

Hardware installed (top to bottom):

  1. SuperMicro SYS-5018A-FTN4 (Server 505-2) 1U rackmount, Intel Atom 2.4GHz, 8GB RAM, running pfSense
  2. Edge-Core AS7712-32x 100g switch running SONiC network OS (Core/Spine switch)
  3. Cisco Catalyst 2960 PoE
  4. NetApp SG1000 (not working)

  5. & 6. Supermicro 4-node chassis, currently running Hyper-V but will most likely change OS soon
  7. QCT D51PH-1ULH 12-bay storage server running Ubuntu with ZFS

I'm waiting on a pair of Edge-Core AS5712-54x switches, which will be running SONiC as well and be used as Access/Leaf switches. Also, please don't mind the printer; it's already been moved.

I'm open to questions, comments, and respectful criticism.

submitted by /u/Opheria13

What’s the oldest piece of hardware still running in your homelab — and why won’t you let it die?

9 April 2025 at 16:39

We all have that one piece of gear that’s ancient, loud, maybe even a bit cursed… but still refuses to give up.

Maybe it's a Pentium 4 box still doing backups, or an old Dell server that sounds like a 747 on boot. Share your oldest running hardware and the reason you’re still keeping it alive. Pics welcome!

submitted by /u/LeonOderS0

[WIP] 3D-Printable 1U Disk Shelf (4 bays) With Custom SATA Backplane

9 April 2025 at 17:45

This is an update to my last post where I shared the custom SATA backplane PCB I was working on for my 3D-printable disk shelf. Since then, I've made some updates to the PCB to improve the supply routing and SATA signal integrity, and I also added PWM control for the fans.

The enclosure is fully 3D-printable and is built in two halves. I've just finished one half of the unit, and the next steps are to get a first run of the PCBs for testing, do some trial prints for fit, and play around with the duct length to optimize airflow. Once that's done, I'll add some mounting holes for rack ears and dovetails to connect the two halves together, and it should be all done!

If you'd like to play around with the 3D model, you can take a look here: https://a360.co/3ZuX03F

I've also pushed the PCB KiCad files to GitHub, and would appreciate any feedback from people with high-speed board design experience: https://github.com/kaysond/1U-DiskShelf/tree/main

submitted by /u/kayson

My small set

9 April 2025 at 20:39

Hello, first post here. I want to show you my small set of random computers, stored in an IKEA Kallax, that makes me happy to play with.

From top:

HP Chromebook G2

  • i5-7300U
  • 16 GB DDR4 RAM
  • 256 GB SATA M.2 SSD

For Home Assistant

Dell Optiplex 3080 Mini

  • i5-10500T
  • 32 GB DDR4 RAM
  • 128 GB NVMe M.2 SSD
  • + Intel i225V Ethernet M.2 NIC

For OPNsense

Dell Wyse 3040

Running Ubuntu Server with AdGuard

2 x HP Engage Flex Pro-C

  • i5-9500
  • 32 GB DDR4 RAM
  • 2x 512 GB NVMe M.2 SSD in RAID0

For Proxmox

EliteDesk 800 G3

  • i7-7700
  • 32 GB DDR4 RAM
  • 128 GB SATA for boot
  • 128 GB NVMe M.2 SSD for apps
  • 3x 2 TB HDD for storage

For TrueNAS and Tailscale

Any ideas on what to add or change are highly welcome :D

submitted by /u/papryg

glances iFrame within Homepage tab

10 April 2025 at 10:58

Just finished setting up the Homepage dashboard; the only thing I can't figure out is getting Glances to display inside an iframe. I've looked over all the docs (Homepage, Tailwind, MDN) but can't get it to show at full height. I'm using Chrome by default, and tried Edge as well.

Config in service.yaml:

- TheArk - Glances:
    - Glances:
        widget:
          type: iframe
          name: glances
          classes: h-96 sm:h-96 md:h-[32rem] lg:h-[36rem] xl:h-[40rem] 2xl:h-[48rem]
          src: https://glances-theark.domain.net
          referrerPolicy: same-origin
          allowPolicy: autoplay; fullscreen; gamepad
          allowFullscreen: true
          loadingStrategy: eager
          allowScrolling: yes

By tweaking the classes I managed to get it to half size, but nothing bigger than that. Any clue on how to make it bigger?

TIA

submitted by /u/MoldavianRO

Homelab in China

9 April 2025 at 19:12

I have a hard time connecting my router to a VPN: I tried OpenVPN and WireGuard on my router, but they won't work here, although I am able to connect to the VPN via its applet. So I set up a Windows VM on Proxmox to host my VPN and share the connection to my router, then loop the connection back to a media server on a Linux system that runs on the same Proxmox host. I would love to further improve the storage and stability of my network, so if anyone has an idea for improving this setup, I would love to try it out.

submitted by /u/RoutinePossible5572

Anyone who could provide me an HP BIOS file?

10 April 2025 at 11:50

Hello,

I'm renewing all my homelab hardware, for better storage and less power consumption.

And so, I'm ditching all my beloved Dell stuff for HPE stuff I found for cheap on eBay, on a bulk pallet.

I'm not very happy to leave Dell for HPE server hardware, and that feeling is reinforced by the fact that HPE puts BIOS updates behind a paywall.

Of course, I have an HPE Service Account that I created with a professional email, but without any servicing contract or any active warranty.

I will receive some DL380 Gen9 servers, for which I easily found the updated BIOS file on the Web, but my issue is with my future Apollo 4200 Gen9, for which I found nothing, neither on Reddit nor elsewhere on the Web...

So, could anyone provide me with the BIOS file for my Apollo 4200 Gen9?

I would really appreciate it.

Thank you very much to anyone who can help me 😁

submitted by /u/RoroTitiFR

OS quandary: client OS, server OS, or hypervisor? Linux LTS and containers?

10 April 2025 at 14:43

I built a Ryzen Pro 5650G, micro-ATX based PC with 64GB of ECC DDR4, in a home entertainment chassis, because it sits in the lounge atop a Denon receiver.

I wanted high power efficiency, 24/7 reliable running, and to host the following functions:

  • AgentDVR recording 12 PoE cameras 24/7 to a 3.84 TB SATA enterprise SSD, then archiving to an 8 TB SMR SATA disk.
  • NAS, by hosting two 18 TB disks, either in RAID 1 or just periodically synchronized in software for fault tolerance (see the rsync sketch after this list). This would contain all my music, films, photos and documents.
  • Jellyfin, mainly to stream music to two Denon receivers and a Denon Home speaker.
  • Sonarr, Radarr
  • Home Assistant
  • Directly drive a 60" TV for movie playback via the Denon receiver. (I've always gotten on best using MPC-HC, and now use the clsid2/mpc-hc fork).
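
For the "periodically synchronized in software" option, a minimal sketch using rsync plus cron, assuming the two 18 TB disks are mounted at /mnt/media and /mnt/media-mirror (made-up paths):

  # One-way mirror of the primary disk onto the second disk (archive mode,
  # hard links, ACLs and xattrs preserved; deletions propagated):
  rsync -aHAX --delete /mnt/media/ /mnt/media-mirror/
  # Example crontab entry to run it nightly at 03:30:
  # 30 3 * * * rsync -aHAX --delete /mnt/media/ /mnt/media-mirror/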

I built this PC initially with Windows 11, migrated AgentDVR onto it, got Jellyfin (for Windows) working, and that was about it. I still needed to do some hardware modifications, and then along came a Win 11 update that completely corrupted the OS install. It has been switched off since then while I deliberate over what to do.

So I can run any OS on here I need to, but it needs to have this weird split of both client and server functionality. If running Windows, I'd RDP into it mainly from another PC in the house (I use several), and just occasionally it gets used directly for film playback on the TV it's connected to (its only display).

AgentDVR under Linux is probably more efficient and reliable, as are most of the other services, BUT what do I do for local media playback to equal the functionality of MPC-HC (which has given me huge control over audio and video decoding options, frame-rate detection and automatic resolution switching, wide codec support, etc.)? It's been a godsend for driving my home cinema projectors over the years.

I have a lot of experience with ESXi, though wouldn't go there for this machine, and plenty of experience with HyperV too. Dabbled a little with Proxmox, but not experienced with it.

So what do I run for efficiency/reliability?

If I virtualise then there's an instant overhead to doing that, and I've got the issue of having to share the GPU with multiple VMs for accelerated decoding (I have achieved this in Hyper-V previously with some juggling).

I want OS reliability, without an update screwing the machine again, and the ability to rebuild easily in the future would be great: so containers, maybe? An Ubuntu LTS installation running Docker?
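
For what it's worth, a minimal sketch of that route for one of the services: Jellyfin in Docker with the host GPU passed through for accelerated decoding. The host paths are placeholders:

  # Jellyfin container with /dev/dri passed through for VAAPI hardware decoding.
  # /srv/jellyfin/config and /mnt/media are assumed host paths.
  docker run -d --name jellyfin \
    --device /dev/dri:/dev/dri \
    -v /srv/jellyfin/config:/config \
    -v /mnt/media:/media:ro \
    -p 8096:8096 \
    --restart unless-stopped \
    jellyfin/jellyfin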

Thanks for listening.

submitted by /u/sadanorakman

What is a good quality multiple-capability cable tester that doesn't cost an arm and a leg? (CANADA)

10 April 2025 at 14:21

To make a long story short, I want to make sure some existing runs can handle higher speeds, or else I'll have to run them again.

I would love a Pockethernet, but it is very much out of my price range.

The shop it's coming from must be located in Canada.

Is there a cable tester that is good quality with multiple capabilities that doesn't cost an arm and a leg?

Thanks!

submitted by /u/urbanracer34

Aoostar WTR Pro Ryzen 7 5825U - Proxmox SSD choice

10 April 2025 at 14:16

Hey everyone,

I just picked up an Aoostar WTR Pro (Ryzen 7 5825U) — planning to turn it into a Proxmox-based home lab for hosting a bunch of VMs. It’ll also serve as a media server and photo archive.

I want to get the SSD layout right for best performance and reliability. Here’s what the unit offers:

  • 2× M.2 2280 slots (NVMe supported)
  • 1× M.2 2230 slot (used for Wi-Fi; can be swapped for an SSD via an M.2-key-to-NVMe adapter)
  • 4× 3.5” SATA bays (HDDs)

My current plan:

  • 2× 2280 NVMe SSDs, 1 TB (or 2 TB) each; thinking of striping them for fast VM storage
  • 1× 2230 NVMe SSD, 512 GB, for the Proxmox OS
  • 2× 8TB HDD in ZFS mirror for photo/video archive
  • 1× extra HDD for media storage (no redundancy)

Questions:

  1. Would a striped ZFS pool (RAID 0) with the two NVMe drives be worth it for performance? Or just use them separately? (See the sketch after this list.)
  2. Is using the 2230 SATA SSD for the Proxmox system a good idea, or is it better to put Proxmox on NVMe and use the 2230 for cache/log?
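
For reference, a minimal sketch of both layouts with zpool; the device names are placeholders, so substitute your actual /dev/disk/by-id paths:

  # Mirrored pool for the 2x 8 TB photo/video archive:
  zpool create archive mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2
  # Striped (RAID 0) pool across the two NVMe drives -- fast, but losing either
  # drive loses the whole pool, so only for VM data you can restore from backup:
  zpool create vmdata /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2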

Appreciate any advice from folks running similar setups. Thanks!

submitted by /u/PhotoMot0