Set up my cabinet's lighting to respond to the battery backup status.
Decided it was time, after an extremely (not) long wait since my first build, and upgraded my networking to Ubiquiti, with the UDM Pro as my router, the USW Pro Max 24 PoE for switching, and the U7 Lite as my AP. I feel like this was definitely the right move, especially since I was coming from in-modem routing and a 10-year-old gigabit switch. Everything else in the rack is the same as in my first setup (link to post in comments). Let me know what you think! Have a great day.
My old rack was a 12U metal cabinet from Lande, but I grew out of it (second image). I needed to fit a 3U media server in somehow, but it did not work. The 18U cabinets were going for quite a price, and they were ugly at the same time. I decided to build my own after someone gifted me an old rack case. It was really run down, so I gave it a space-themed paint job. The setup is relatively simple: I have the UniFi stack with a USG, a USW, and 4 APs.
What did you all start with?
1st-gen Threadripper 1920X, 64 GB RAM, 4060 GPU, 4 NVMe drives, 2 SATA SSDs, 8 SAS drives, Unraid as the OS. The array will hold 28 TB, the pool 2.5 TB. Primary use is Frigate, with GPU processing via Ollama. Secondary: NAS and media server. In many ways it's overkill, and in others it's got a lot of gravity. Learned a lot over the last couple of weeks; I started with no knowledge, and I'm still a noob. Feel free to give feedback (positive or roasting).
What can I do with it? I wanna put these in my homelab. Minecraft.
Hey everyone, I currently have a home server running Ubuntu Server with a ZFS pool and two NVMe drives: one for the host OS and the other for Docker volumes. I run Nginx, Emby, and a few other Docker containers, and I handle everything via the CLI.
I’ve been thinking about making the switch to TrueNAS since it now supports Docker Compose, but I'm wondering if there are any real benefits to the move. I'm pretty used to managing everything on Ubuntu via the CLI, so I'm not sure it's worth taking the time to learn the TrueNAS way and migrate everything over.
Does anyone here have experience with both? Are there any clear advantages to using TrueNAS, or should I stick with what I know on Ubuntu?
Looking forward to hearing your thoughts!
Hi,
I just saw that Sipeed somewhat recently released a PCIe KVM based on their NanoKVM, and I was in the market for that kind of product.
But I also remember a lot of discussion and videos about the whole backdoor/security problem with that company, and why their products are so cheap.
Where do things stand on that? Any more news or discoveries?
Because I found another solution (PoE-compatible, even), but between the PCB and the required CM4 it comes to around €160, versus €60 for the Sipeed NanoKVM-PCIe.
Thanks!
My friend told me that Discord is looking to go public, which may mean that you'd need to start paying for it, or worse... you'd get ads. Are there any services one can host on a home server that serve a similar purpose: a chat and voice server for friends?
I recently bought 2 of these hard drives for $50 each. The plan is to use them in my computer for the time being and later move them into a NAS (I don't have one yet). Sadly, up until I opened the first package, I did not know that there are more connector types than SATA, and now I'm stuck, unable to use them. From some basic research, I found out these might be SAS connectors; however, the pictures I see online show shorter connectors, and I don't think they will fit these drives. What do I still need to buy, in terms of boards and cables, to use them in a regular home PC?
After researching minimal, fanless NAS hardware with a small footprint, I chose the Fujitsu Futro S940 for my first DIY NAS project. This is my latest setup after trying different cables, connectors, and SSD holders. I managed to install two 2.5-inch SSDs and upgraded the system to 2x 16 GB of RAM. I was hoping to fit more 2.5" SSDs, but it seems not doable. I'm curious to read your comments or suggestions for improvements, specifically on cable management or better ways to mount the SSDs. Has anyone else worked with the Futro S940 for similar projects?
So last week my server glitched during a RAID array volume expansion, but the controller recovered everything. Which is great. But it got me looking at a replacement. The current controller was PCIe 2.0 and my motherboard is PCIe 3.0. Areca make the ARC-1883iX-24 which is PCIe 3.0 and still a supported product even though they now have a PCIe 4.0 controller. So I bought one. It arrived today.
I've upgraded my Areca controllers over the years so I know that I can swap the old one out and the new one will mount the array without any special effort. Like backing up all 140TB of data first. Because after all, it's RAID, it's a great backup method. Right?
So I swapped over the card, connected a spare 6-pin power lead that's part of the dual 6-pin power connector for the GPU, installed all the drives, and powered up the server. Nada.
Black screen. No wait, it flickered. Black. flicker. Black
THIS IS A POWER PROBLEM. I've seen this before with this display (Wisecoco 14" ultrawide 4K touchscreen that's only 3U high). I fiddled with the USB-C power connector and the screen lit up again. Back to the array.
The Areca controller did its startup scan but timed out after 300 seconds instead of completing in the usual 40, finding nothing. I unplugged all the drives and rebooted. The card completed the scan this time in 10 seconds, but of course there were no drives installed. So I installed all the drives again, rebooted, and watched it time out again.
When I installed the card, it required a 6-pin power connector, so I used the spare one from a PSU lead that has 2 6-pin connectors. The other connector was to the GPU. The power-hungry GPU. You can see where this is going.
So I found a spare dedicated PSU power cable to supply the Areca card with its own juice and rebooted. No drives. So I pulled them all out again, rebooted, then used the out-of-band CAT5 connection to view the card config (the OOB connection allows you to configure the card even when the server is not running).
It showed all 17 or 18 drives as failed, with capacity of 0.
OH FOR FUCK SAKE
I've been here before in that this is not the time to make hasty or frustration-based decisions, or to start trying anything that comes to mind. I know the 17 drives are fine. I know I can swap the old card back in and get it all back. But will I? Yeah right. (And how many of you are poised to write a response of "RAID ISN'T BACKUP". Shut the fuck up child. WE KNOW)
So I checked the firmware version, 1.52, same as the old card. I checked online and there's a 1.70 version available. But do I want to take a chance of making things worse by introducing a newer firmware that may need or expect to do something on first boot and will fail because the drives are in this state?
So I left the server powered up with no array, just sitting there. For about 2 hours.
Then just before I was heading to bed, I plugged in one of the drives. The drive light lit up for a moment. So I plugged in all the others. They all lit up too. I checked the array config and it now shows the array as Normal and running fine. I mounted the drive. It works. I rebooted. It works.
Long story short, it seems that if you're swapping controllers, you have to give it each drive one at a time after it's powered up in order for it to accept them. If all the drives are already installed during power-on, it doesn't recognize them and simply says "yeah no".
I had done extensive IO tests on the old controller and have now done them on the new one. The fio results are:
📊 PCIe 2.0 vs PCIe 3.0 RAID Controller Comparison (Areca ARC-1880 vs ARC-1883)
| Test Type | PCIe 2.0 (Old) | PCIe 3.0 (New) | Improvement |
|---|---|---|---|
| Seq Write | ~120 MiB/s | 437 MiB/s | ✅ +3.6× |
| Seq Read | ~150–250 MiB/s | 1527 MiB/s | ✅ +6–10× |
| Rand Read | ~74–96 MiB/s | 58 MiB/s | ❌ Slight drop |
| Rand Write | ~2.7 MiB/s | 2.7 MiB/s | ➖ No change |
Note: Write-back caching is disabled due to the missing BBU, so random write performance is limited by mechanical disk latency. Sequential IO benefits the most from the PCIe 3.0 bandwidth increase. I'm ordering a BBU and will re-run the tests after. I expect the random reads and writes will then be similar to the older card, which had a BBU and write-back enabled.
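For anyone wanting to reproduce numbers like these, an fio job file along these lines covers the four rows in the table. This is only a sketch: the target directory, file size, runtime, and block sizes are my assumptions, since the original fio flags weren't posted.

```ini
; bench.fio -- hypothetical job file; run with: fio bench.fio
; /mnt/array is an assumed mount point for the RAID volume
[global]
directory=/mnt/array
size=4G
direct=1
ioengine=libaio
runtime=60
time_based

[seq-write]
rw=write
bs=1M

[seq-read]
stonewall
rw=read
bs=1M

[rand-read]
stonewall
rw=randread
bs=4k

[rand-write]
stonewall
rw=randwrite
bs=4k
```

The `stonewall` options serialize the jobs so each test runs alone rather than all four competing for the spindles at once.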
The array is all media files, so they're only accessed as long sequential reads and written as long sequential writes. All my random IO is done on SSDs, then finalized and sent to the array. That way I minimize disk writes, which reduces the risk of catastrophic failure during a write (e.g. a journal cache flush).
Hi everyone, I'm trying to set up Caddy as a reverse proxy to access different services (Home Assistant, ActualBudget, etc.) on my LAN using domain names. No external access. Currently Caddy is installed on Proxmox in an unprivileged LXC (community plugin) with the extra Cloudflare module. My other services are also on the same Proxmox host #1 and on another Proxmox host #2 in the same LAN. The Cloudflare account is set up; the domain was bought from Namecheap but configured to use Cloudflare DNS. SSL/TLS encryption mode: Full. Here are the DNS records pointing to Caddy's IP: Here is the Caddyfile: When I access those handles, it takes me to a blank page. I don't see any obvious error in the logs. Do you see any error in the Caddyfile?
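The Caddyfile screenshot didn't survive the crosspost, but for reference, a minimal LAN-only Caddyfile using the Cloudflare DNS challenge typically looks like the sketch below. The hostnames, IPs, and ports here are placeholders, not the poster's actual config. A blank page with no log errors often means a site block matched the request but had no `reverse_proxy` directive, or proxied to the wrong backend port.

```caddyfile
# Hypothetical sketch; replace hostnames, IPs, and ports with your own
ha.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 192.168.1.10:8123
}

budget.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 192.168.1.11:5006
}
```

The DNS challenge matters here because LAN-only hosts can't answer an HTTP challenge from Let's Encrypt; with `dns cloudflare`, certificates are issued without any inbound connectivity.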
I recently bought one of these disk arrays, and I am having difficulty finding rails for it. Is there a method for figuring out which rails would work on the side of these arrays? Would any L-type rails work?
Photos
Hi guys. I currently have the above setup in my home lab and am loving it. The only issue is that I am running out of storage. I have the onboard M.2 slot filled, a PCIe M.2 carrier card which gives me 2 more slots, and 1 SATA SSD plugged into the board. Anyone familiar with this box knows the cable that comes with it has one SATA power plug and one mini-SATA for the disk drive. Is there a cable that provides 2 SATA power connectors instead? I have no need for the disk drive. In HP's wisdom, they made the 6-pin connector on the board proprietary, meaning a normal splitter that comes with every new PSU does not fit.
I was planning to purchase this lot to use some items myself and resell the rest.
I am in the process of building up my home lab. The first step is to set up a NAS. I have installed UnRAID 7.0.1 (chose it over TrueNAS) on a fanless HP T638 Thin Client PC (J4125, 24 GB RAM, 128 GB SSD), which I have been testing for a while. I have a whole bunch of media on fifteen 2.5" external drives (ten 1 TB and one each of 500 GB and 2 TB). I plan to set them up under UnRAID, using my Lenovo ThinkPad USB 3.0 Dock to connect the five storage drives and the one parity drive. The cache will be on the internal M.2 SATA SSD, set up as a striped ZFS pool. The NAS will be shared with my main PC through a Gigabit connection, and new media will also be added to the NAS from my main PC over that link. I have a Lenovo M710s planned as my Proxmox host, which is now serving as my main machine while my actual main PC is under service. I plan to get started on my homelab journey and learn along the way with your advice. The topics I'd like input on:
1️⃣ Planned UnRAID storage configuration
2️⃣ Performance & reliability considerations
3️⃣ Migration to Lenovo ThinkCentre M710s
4️⃣ Network bottleneck vs. storage upgrades
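On the network-bottleneck question, a quick back-of-envelope calculation suggests the Gigabit link, not the USB dock or the drives, is the first ceiling to hit. The figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: what can a Gigabit link actually deliver to the NAS?
link_bits_per_s = 1_000_000_000   # Gigabit Ethernet line rate
efficiency = 0.94                 # rough TCP/IP + Ethernet framing overhead
net_mbps = link_bits_per_s * efficiency / 8 / 1_000_000
hdd_mbps = 130                    # assumed sequential speed of one 2.5" drive

print(f"network ceiling ~ {net_mbps:.0f} MB/s")   # about 118 MB/s
print(f"single drive    ~ {hdd_mbps} MB/s")

# Even one drive can roughly saturate the link, so faster storage won't
# speed up transfers until the network itself is upgraded (2.5GbE+).
```

In other words, on a single Gigabit NIC, storage upgrades mostly buy capacity and reliability rather than transfer speed.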