I built a NAS

One day, I saw a Jonsbo N1 case on the internet and decided I needed to build a NAS in this beautiful thing!

Meet unicomplex - a TrueNAS server I built myself.

Specs

Motherboard: Asus Prime H610I-PLUS-CSM

CPU: Intel Core i5-13400 (10 cores, 16 threads)

RAM: 64GB DDR5

PSU: FSP 550W SFX Dagger Pro

Storage

The case accommodates up to 6 drives: 5x 3.5" drive bays + 1x 2.5" SSD. But the motherboard had only 4 SATA ports. The solution was to use an HP H240 SAS controller in the PCIe slot to connect additional drives.

The SAS controller had just enough width to fit in the case, but its fixing plate was not low-profile. It was held only by the PCIe slot for a couple of days, which gave me some anxiety, but the replacement plate finally arrived, and the controller was fixed in place.

In the end, I have a RAIDZ1 pool four HDDs wide for data, a two-drive mirrored SSD pool for Apps and Instances, and one NVMe drive for the operating system.
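
For illustration, that layout could be recreated with something like the following — a hedged sketch where the pool names and /dev paths are placeholders, not the OP's actual commands:

```shell
# Hypothetical zpool commands; pool names and device paths are placeholders.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd  # 4-wide RAIDZ1 for data
zpool create fast mirror /dev/sde /dev/sdf                    # 2-way SSD mirror for apps
zpool status                                                  # verify both pools
```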

submitted by /u/estevez__
[link] [comments]

💀 Meet the Dead Canary: My LAN watchdog in a plastic pot that gracefully kills my NAS when the power dies.

The Problem:

My Zimacube (MU/TH/UR) runs off a cheaper dumb UPS, but I still wanted a guaranteed way to detect power outages and shut things down before ZFS could cry.

The Solution:

I built a Dead Canary using an ESP32 stuffed inside a translucent film canister, VHB-taped to the power supply in a proper container.

It sits plugged into the same power strip as MU/TH/UR, but not through the UPS, and serves a local / endpoint that responds with “CHIRP”.

If the canary goes silent for 5+ minutes, a cron-driven watchdog on MU/TH/UR initiates a graceful shutdown.
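
That cron + curl + timestamp-file logic might look something like this minimal sketch — the IP address, file path, and exact commands are assumptions, not the OP's actual script:

```shell
#!/bin/sh
# Hypothetical sketch of the cron-driven canary watchdog.
# The URL, timestamp path, and threshold are assumptions, not the OP's values.
CANARY_URL="http://192.168.1.50/"   # placeholder LAN address of the ESP32
STAMP="/var/tmp/canary.last"        # timestamp of the last successful CHIRP
THRESHOLD=300                       # seconds of silence before shutting down

check_canary() {
    if curl -fsS --max-time 5 "$CANARY_URL" | grep -q CHIRP; then
        date +%s > "$STAMP"         # heard the canary: refresh the timestamp
    else
        last=$(cat "$STAMP" 2>/dev/null || echo 0)
        now=$(date +%s)
        if [ $((now - last)) -ge "$THRESHOLD" ]; then
            shutdown -h now         # silent too long: assume the power is out
        fi
    fi
}

# In the real script, a cron entry such as "* * * * * /usr/local/bin/canary.sh"
# would call check_canary once a minute.
```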

Bonus Layer:

Uptime Kuma monitors the canary’s IP as well, so if I get an alert, MU/TH/UR is still up (she sent it) but the ESP’s power was accidentally cut (hello, Arnold the cat). That starts my 5-minute timer to revive the canary.

Why a film cannister?

I wanted to trap the red LED glow like some kind of techno-pagan shrine. Also, it's all I had to hand, and it fit, sort of.

Final Notes:

Uses cron, curl, and a simple timestamp file for logic

No cloud services, no dependencies

100% autonomous and LAN-contained

🧠✨ 10/10 would let this thing murder my NAS again.

submitted by /u/timotimotimotimotimo
[link] [comments]

Scored some free hardware to start my homelab

A friend of mine's company was shutting down. He asked me if I was interested in any of the hardware before they had to pay to recycle it.

I opted to take anything that I could that was complete and figure out what to do with it later.

I currently run my 10-year-old gaming desktop as a TrueNAS server that serves up my Plex instance and nothing else.

Now that I have the horsepower, what are some fun projects I should delve into?

Hardware left to right, top to bottom:

Machine                      | Processor / Ports / Wattage
3x Dell OptiPlex 3010        | 3rd-gen i3 (i3-3220)
HP EliteDesk 800 G5 Mini     | 9th-gen i5 (i5-9500T)
HP Z2 Mini G4                | 8th-gen i7 (i7-8700T)
HP ProDesk 400 G4 SFF        | 7th-gen i5 (i5-7500T)
HPE OfficeConnect JG926A     | 48 PoE ports
3x APC UPS 650               | 650 W
submitted by /u/RadioSwimmer
[link] [comments]

My main server

Built it once I upgraded my main PC, out of old and spare components.

I use it as a mass-storage and virtualization server, running Proxmox.
It has been great so far.

Part list:
AMD Ryzen 7 5800X
64 GB Crucial DDR4
Nvidia GTX1650
3x Seagate Barracuda 8 TB (RAIDZ2)
2x Generic Seagate for non-important virtual machines
750W Sharkoon PSU

submitted by /u/_ryzeon
[link] [comments]

Scaling up from minipc

Wanted to share my excitement about acquiring hardware for a future enterprise tower server.

Currently I have an Intel N95 mini PC, and I've hit a massive bottleneck with its CPU and RAM. It's just not capable of pushing multiple gigabits at 50k packets per second.

Since I have a pretty dense hyperconverged setup with Proxmox, I plan to hunt for a good workstation tower server with an LGA 2011 / 2066 socket for SR-IOV.

The SAS controller is a Dell PERC H310. From my research it does support disk passthrough, and since it's based on an LSI chipset, there's the option to crossflash it with LSI IT-mode firmware.

The NIC in question is a 10GbE HP 560FLR-SFP+ with an Intel 82599ES controller, which supports the SR-IOV I will use for virtualized guests.
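
For context on how those SR-IOV virtual functions get created, on a Linux host this is typically done through sysfs. A hedged sketch — the interface name and VF count here are placeholders, not details from the post:

```shell
# Hypothetical SR-IOV setup; "enp1s0f0" and the VF count are placeholders.
NIC=enp1s0f0                                      # find your name with `ip link`
cat /sys/class/net/$NIC/device/sriov_totalvfs     # how many VFs the card supports
echo 4 > /sys/class/net/$NIC/device/sriov_numvfs  # create 4 VFs to hand to guests
ip link show "$NIC"                               # VF entries now appear on the PF
```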

No more subpar USB attachments, no more low-quality Realtek garbage. I need rock-solid performance for my data-intensive tests and experiments with multi-tenant on-prem cloud systems.

In this picture you can also see the SFP+ DAC that will interconnect the server to a MikroTik CRS210. It is crucial to have a separate management link (the motherboard NIC) and a dedicated data NIC (the one in the photo).

Now the challenge will be finding a tower server / workstation that can fit these PCIe cards. Any ideas?

submitted by /u/Tinker0079
[link] [comments]

Homelab diagram - how is my setup?

Hey everyone! I wanted to share my current homelab setup and get some advice on two main concerns I have:

  1. Keeping Services Updated with Minimal Maintenance
  2. Securing My Data

1. Updates & Maintenance

All my services run in Docker containers inside a Proxmox VM. I’m currently not using a VPN because some family members access my services, and using domains is much more user-friendly for them.

The trade-off, of course, is that I'm exposing my services to the public. So to minimize risk, keeping everything up to date is crucial.

What are your go-to methods for automating updates in a setup like this? I’d love to hear about tools, workflows, or best practices that help you stay secure with minimal manual intervention.
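
For what it's worth, one minimal pattern is a nightly cron job that pulls and recreates the compose stack. This is only an illustrative sketch — the stack path is a placeholder, and tools like Watchtower or Diun automate the same idea with more polish:

```shell
# Hypothetical nightly update job; /opt/stacks/myservices is a placeholder path.
cd /opt/stacks/myservices || exit 1
docker compose pull                     # fetch newer images for every service
docker compose up -d --remove-orphans   # recreate only containers whose image changed
docker image prune -f                   # clean up the superseded images
```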

2. Data Security & Backup Strategy

Right now, I’m storing everything on two 4TB Seagate IronWolf drives in a mirrored setup. This includes:

  • Proxmox VM backups
  • Data from services like Immich, Jellyfin, and Nextcloud (shared via NFS)

I’m aware of the 3-2-1 backup rule and want to move toward a more redundant and reliable solution without breaking the bank.

Would it make more sense to:

  • Upgrade to larger drives and run something like RAID-Z2?
  • Stick with my current setup and use a cloud backup service for cold storage?

Open to suggestions here—especially ones that are cost-effective and practical for a home setup.

I’m still learning and far from a professional, so if you spot anything in my setup that could be improved, feel free to chime in. I appreciate any input!

Thanks in advance!

submitted by /u/JuliperTuD
[link] [comments]

I have 3 spare machines and am looking for experiments

Sorry if this doesn't quite fit the sub, it's my first post.

I've been running a Perforce Helix Core server for my game studio (now at another programmer's house due to issues with my ISP) and needed another cheap machine for an off-site backup.

I came across these four Dell OptiPlex computers for £70 total and pulled the trigger. Now I have three spare machines for tinkering with.

I was thinking I could run a RustDesk server in Docker, but I'm not sure how well these would handle the video stream.

So I thought I'd ask what kinds of things I should run on these? Proxmox? Ubuntu server with Nix? TrueNAS Scale?

Anyways, I want to know what interesting projects you guys would suggest.

Specs: i3-6100T, 8GB 2400MT/s RAM, no drives (will be buying a bunch soon, probably 256GB M.2 drives; each machine can also take one SATA drive).

Also feel free to ask about the perforce server if you're interested.

submitted by /u/B1naryB0b
[link] [comments]

1 spare U and only just.. time to stop or get a 24U and spread out?

Just finished adding my 4th Proxmox node, debating on adding 2 more above in the final 1U space. I used to use it for the Pis but they've since been relocated to the gap next to the Synology in a custom designed mount to maximize space.

The back of the rack has 4 raceways for all of the power connectors and 2 PDUs: one hooked into the UPS and one direct to the wall, to make my life easy when picking what goes on each.

  • Synology - 8x 14TB HDDs, 2x 4TB SSDs

  • RPi4 - PoE - Home Assistant

  • RPi4 - PoE - Docker playground (dockge and portainer to compare, various other containers to test out what I want to keep, dashboards, monitoring, PiHole, etc)

  • 4x Lenovo P360 - Clustered in Proxmox, currently running a self-hosted site, Nginx, a game server, and a Mealie instance for the wife and me. HA enabled by storing VM disks via NFS on the Synology. (grossly underused currently)

  • APC Smart-UPS 1500 (currently only running backup power on network equipment to extend our WiFi time in power outages)

  • Black Box OPNSense - still learning/messing with it hence the strange connection order

  • Juniper EX3400 PoE+ - still learning how to manage/program it, free is free

I am fully aware it's all overkill but free is free so what's a guy to do?

submitted by /u/gregpxc
[link] [comments]

Upgrade to Ubiquiti

Today I finished my upgrade to Ubiquiti hardware (at least for now 😅)

  • UNAS Pro
  • Cloud Gateway Fiber (ISP DIGI connects directly to the fiber)
  • USW Pro XG 8 PoE
  • USW Pro Max 16 PoE

  • Aqara hub
  • Philips Hue hub
  • Eufy HomeBase
  • QNAP NAS

The UNAS is running just one 4 TB drive (Samsung 870 EVO); tomorrow Amazon is delivering two more. It's going to be used for work, photography / video.

The next upgrade is going to be the QNAP; I need something for Plex/torrents running 24/7, with a 10-gig link and SSDs.

This rack is wife approved 😆

Temperature inside is normally 29°C; on hot days I just leave the door open. I use a sensor inside: if it reaches 32°C, some fans turn on until it drops to 27°C.

submitted by /u/Elohim_JLTC
[link] [comments]

Pangolin/VPS Security

Like many of you, I have recently been testing Pangolin on a newly set up VPS. My question to /r/homelab is: how do you guys secure your VPS/Pangolin instance?

Given that the VPS is open to the internet, I have:

  • Created a non-root user

  • SSH: changed default port, disabled root login, disabled passwords, required public key authentication

  • Firewall: enabled UFW and blocked everything except for 80/443/51820

  • Set up fail2ban for SSH

  • Ensured all software is up to date and enabled unattended security updates
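
As a concrete illustration of the firewall bullet, a minimal UFW sketch might look like this — 2222 is a placeholder for whatever non-default SSH port was chosen; 80/443/51820 come from the list above:

```shell
# Hypothetical UFW rules matching the list above; run as root.
ufw default deny incoming
ufw default allow outgoing
ufw allow 2222/tcp     # SSH on the non-default port (placeholder number)
ufw allow 80/tcp       # HTTP
ufw allow 443/tcp      # HTTPS
ufw allow 51820/udp    # WireGuard
ufw --force enable
```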

Is there anything else you'd suggest that makes sense to secure the VPS or Pangolin specifically? What does everyone here do to secure their setup?

submitted by /u/viperrrr
[link] [comments]

Several cheap x86’s or 1 large one to rule them all?

Hi-

I know on some level that the answer is “it depends on your workloads” but I’m trying to figure out if it’s better (more cost effective, power efficient, resilient, etc) to get a bunch of older generation 8i7s / 8i5s / 10i5s with 16gb ram & 256gb / 512gb ssds or a more up-to-date 13i9 with 64gb ddr5 and several tb of ssd? I’m running proxmox (not HA) and I need to run a couple pi-holes/unbounds, immich, plex, Roon, HQPlayer (for PCM upsampling), uptime kuma, icpd, etc. Nothing super burly, but when plex is running audio analysis on a 2TB flac store, that’s no joke, nor when Immich is analyzing 10 years of photos. But both are over eventually.

More generally - when does it make sense to have one burlier machine, when does it make sense to have several less burly machines?

submitted by /u/SparhawkBlather
[link] [comments]

Advice on my setup

Hello! I'm upgrading my setup from a single OptiPlex Micro 3080 and a NAS to the following:

Hardware

  • Three Dell OptiPlex Micro PCs (3080 with i5-10500T & 64 GB RAM (the existing one); 7080 with i7-10700T & 32 GB RAM; plus a possible third 3080)
  • Home NAS: Ryzen 3400G, 16 GB RAM, 4×4 TB WD Red Pro in RAID 5
  • Spare RTX 2080 GPU for NAS (for Jellyfin/Tdarr transcoding)
  • Future additions: extra 1–2 TB SSDs in each OptiPlex, upgrade all OptiPlex to 64 GB RAM

Operating Systems & Virtualization

  • All three OptiPlex nodes will run Proxmox VE in a High-Availability (HA) cluster
  • NAS will run TrueNAS Scale

Storage for the OptiPlex Micros

  • Local boot on NVMe; plan to add 1 or 2 TB SATA SSDs per node for distributed storage
  • I'm considering Ceph on my Proxmox cluster, but I'm not experienced at all with distributed storage.

Workloads

VMs:

  • Networking labs
  • Guacamole (it would be cool to hide it behind Tailscale or Headscale, to have an entry point to my VMs)

Two Kubernetes clusters:
  • Production: 3 control-plane nodes + 3 workers spread across the three Proxmox hosts
  • Development: 1 control-plane + 1 worker

Kubernetes Storage

For now I'm using democratic-csi with my TrueNAS as a StorageClass, but I'm looking into Longhorn.

Kubernetes Stack

│   ├── apps
│   │   ├── management (apps that will manage deployed apps)
│   │   │   └── argocd
│   │   └── services (deployed apps)
│   │       ├── argocd
│   │       ├── authentik
│   │       ├── firefly-iii
│   │       ├── forgejo            # Gitea fork, SCM, would be cool to use Drone CI with it
│   │       ├── homarr
│   │       ├── mealie
│   │       ├── media              # arr stack, NFS PVC pointing to the library on the NAS
│   │       ├── nextcloud          # NFS PVC pointing on the NAS
│   │       ├── pterodactyl-panel
│   │       └── uptime-kuma
│   └── cluster (cluster infrastructure)
│       ├── backups
│       │   └── velero
│       ├── network
│       │   ├── cilium
│       │   ├── ingresscontroller
│       │   │   └── traefik
│       │   └── loadbalancer
│       │       └── kubevip
│       ├── security
│       │   ├── cert-manager
│       │   └── metrics-server
│       └── storage
│           └── democratic-csi

Backups

  • One MinIO container on my NAS, with Velero backing up Kubernetes PVCs into it
  • Maybe a Proxmox Backup Server VM on my NAS (just an idea, I don't really know how it works yet)
  • A remote Wasabi S3 bucket

Networking & Infrastructure

  • Plan to deploy a managed switch with at least 2.5 GbE (ideally 10 GbE) for separating management, VM/storage replication, and Kubernetes traffic
  • Future 1 U rack router running OPNsense for firewalling, VLANs, and network segmentation
  • Future UPS
  • Future rack to put everything in

My question is: am I on the right road, or is there a better way to accomplish what I want?

submitted by /u/Haitoshura
[link] [comments]