.. the old Reddit is also open-source, and you can even get your own instance up and running in less than a day?
Link: https://github.com/reddit/reddit Install guide: https://github.com/reddit/reddit/wiki/Install-guide
[link] [comments]
Added the UNAS and RGB, plus some silver paint; looking purty now. [link] [comments]
Hello, and thanks in advance. I've built my own computers for 20+ years now, but my knowledge ebbs and flows depending on how often I need it. Home networking and servers are new to me, so please forgive my ignorance and any misuse of terminology.

I'm currently using a Synology NAS for Plex. I've maxed out the drive sizes, and the fourth bay is nonfunctional. I would rather not spend $1k on a new 8-bay Synology NAS, but instead spend $2k on building my own expandable NAS /s. 😅

My needs from this NAS are about 90% Plex, 5% photo backup (RAW files), and 1% data backup. I do not need transcoding, as I stream Plex at full quality through an Nvidia Shield, if I understand correctly. I would like to support 8+ HDDs, as I like 4K remux videos, and I would like a 10Gb SFP+ NIC; I already have 10Gb from my personal computer to the switch.

I'm leaning towards Unraid or TrueNAS. It seems that hardware RAID is no longer needed for a home server. Do either support the use of PowerTOP (or something similar) for checking C-states? Any good recommendations on HBAs and NICs that support ASPM? Should I be looking for ECC memory?

It would be most excellent if I can keep my power usage at or better than what the Synology provides. It would be nice to have idle, with all drives spun down, at 20W or less. Thanks again! [link] [comments]
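On the C-state question: Unraid and TrueNAS SCALE are Linux underneath (TrueNAS CORE is FreeBSD, where this does not apply), so even without PowerTOP you can read C-state residency straight from the kernel's cpuidle sysfs interface, which is the same data PowerTOP displays. A minimal sketch, assuming a Linux host that exposes the standard cpuidle paths:

```python
# Minimal sketch: read per-core C-state residency from the Linux cpuidle
# sysfs interface (the same data PowerTOP reports). Assumes a Linux host
# exposing /sys/devices/system/cpu/cpu*/cpuidle; these paths are standard
# kernel interfaces, not Unraid- or TrueNAS-specific.
from pathlib import Path

def cstate_residency(cpu: int = 0) -> dict[str, int]:
    """Return {C-state name: cumulative microseconds spent in that state}."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    states = {}
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        usec = int((state / "time").read_text().strip())
        states[name] = usec
    return states

if __name__ == "__main__":
    # Deep package states (C6 and below) accumulating time while idle is a
    # good sign that ASPM and power management are actually working.
    for name, usec in cstate_residency(0).items():
        print(f"{name:>6}: {usec / 1_000_000:.1f} s")
```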
I have a rack-mounted Dell R720 with dual 1100W power supplies and a PDU; that's basically all I have running in my rack.
I would like a UPS that can keep it up when the power flickers, so I don't lose all of my lab config on things like Windows boxes or other things that I cannot back up easily through EVE-NG.
Thirty minutes of runtime would be nice, but anything over 2 minutes would do.
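For sizing, a rough back-of-the-envelope formula is: runtime ≈ usable battery energy (Wh) × inverter efficiency ÷ actual load (W). Note that dual 1100W PSUs say nothing about real draw; an idle R720 typically pulls far less. A minimal sketch where every number is an illustrative assumption, not a measurement of this machine:

```python
# Rough UPS runtime estimate: minutes = battery_Wh * efficiency / load_W * 60.
# All figures below are illustrative assumptions; measure your real idle and
# loaded draw (e.g., with a power meter) before buying.

def ups_runtime_minutes(battery_wh: float, load_w: float,
                        efficiency: float = 0.9) -> float:
    """Estimate minutes of runtime for a given steady load in watts."""
    return battery_wh * efficiency / load_w * 60

# Example: a consumer 1500VA/900W UPS often carries very roughly 150-200 Wh
# of battery (assumption; check the spec sheet), and an R720 might idle
# around 150-250 W depending on configuration (assumption).
for wh in (150, 200):
    for load in (150, 250):
        print(f"{wh} Wh battery @ {load} W load: "
              f"~{ups_runtime_minutes(wh, load):.0f} min")
```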
Hi, I recently got an R640 server, and I've already installed two 6254 processors.
Now I'm deciding on the memory configuration. I've found that if all 24 memory slots are populated with 2933 MT/s modules, they will run at 2666 MT/s.
Can anyone confirm this? Is it true that the speed is reduced when all 24 slots are populated?
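One way to confirm empirically once the modules are in: dmidecode reports both the rated speed and the speed the memory controller is actually running each DIMM at. A minimal sketch (needs root; field names are the standard SMBIOS ones, though older dmidecode versions label the second field "Configured Clock Speed"):

```python
# Minimal sketch: compare rated vs. actual DIMM speed via dmidecode (root
# required). "Speed" is the module's rated speed; "Configured Memory Speed"
# (or "Configured Clock Speed" on older dmidecode versions) is what the
# memory controller is actually clocking it at.
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Speed:", "Configured Memory Speed:",
                        "Configured Clock Speed:")):
        print(line)
```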
How convoluted would it be to set up a trio of VMs to accomplish the following?
VM1 - *sense - takes the internet connection from the host and handles the routing for the virtual lab
VM2 - Pi-hole or similar (I know VM1 could handle this, but it's a learning experience)
VM3 - Linux box to test the above (if my assumption is correct, I wouldn't be able to access this if VM1 was not functioning correctly, and I would see ads/intentionally blocked domains if VM2 was not functioning properly; a quick scripted check for this is sketched below)
Eventually I'll move this all to real hardware, but I want to work out some software knowledge before I take it live.
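A minimal sketch of the check VM3 could run against VM2: in its default blocking mode, Pi-hole answers blocked names with 0.0.0.0 (an assumption here; other blocking modes return NXDOMAIN instead). The domain names below are placeholders:

```python
# Minimal sketch for VM3: verify that VM2 (the Pi-hole) is actually
# filtering. Assumes Pi-hole's default blocking mode, which answers blocked
# names with 0.0.0.0; NXDOMAIN-style blocking is also treated as blocked.
# Domain names below are illustrative placeholders.
import socket

KNOWN_GOOD = "example.com"         # should resolve normally
KNOWN_BLOCKED = "ads.example.net"  # placeholder: pick a domain on your blocklist

def is_blocked(hostname: str) -> bool:
    """True if the resolver returns Pi-hole's 'blocked' sentinel or NXDOMAIN."""
    try:
        addr = socket.gethostbyname(hostname)
    except socket.gaierror:
        return True  # name did not resolve at all
    return addr == "0.0.0.0"

if __name__ == "__main__":
    print("good domain blocked?", is_blocked(KNOWN_GOOD))     # expect False
    print("ad domain blocked?  ", is_blocked(KNOWN_BLOCKED))  # expect True
```

If VM1 is down, both lookups should fail outright (no route to the resolver), which separates a routing problem from a filtering problem.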
For those looking for a great case for a server, NAS, or desktop: B&H has the Fractal Design Define R5 case for $85 with 2-day shipping included.
Anyone know where I can find a PDU I can mount in a 10 inch rack? (I am in the US.)
I found one which is like $300 and is a “power conditioner”. I do not need the power conditioned, nor am I worried about a UPS. (Although a UPS would be nice, depending on the price…)
I just need a simple rack-mounted power strip with 3-4 outlets on it. (Possibly a fan, cable modem, and UniFi gateway and switch.)
Anyone have a link they can share?
Hi - I want to isolate some servers from the rest of my home LAN. The servers sit behind a switch and Wi-Fi point 2, separate from the rest of the home LAN, which has Wi-Fi point 1, where all other users connect (Android, iPhones, laptops). Do I need extra hardware to isolate the servers, or can I configure isolation via VLAN settings?
```
WAN ---- Broadband-Router ---- Wifi1 ---- Wifi2 ---- Switch ---- Server, Linux
                                                            ---- Server, Linux
                                                            ---- Server, Linux
Wifi1 ----> access for Android, iPhones, Laptops
```
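Whether extra hardware is needed mostly depends on the switch: if it and the router support 802.1Q VLAN tagging, the servers can live on their own tagged VLAN with inter-VLAN traffic blocked at the router, and no new hardware is required. A minimal sketch of the Linux-side configuration on one server, where the VLAN ID and subnet are illustrative assumptions, not values from the post:

```python
# Minimal sketch: put a Linux server on a tagged VLAN using iproute2 via
# subprocess. Assumes the upstream switch port is an 802.1Q trunk; VLAN ID
# 20 and the 192.168.20.0/24 subnet are illustrative placeholders.
import subprocess

PARENT_IF = "eth0"            # assumption: the server's physical NIC name
VLAN_ID = 20                  # assumption: the isolated "servers" VLAN
ADDR = "192.168.20.10/24"     # assumption: server address on that VLAN

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the 802.1Q subinterface, assign an address, and bring it up.
run(["ip", "link", "add", "link", PARENT_IF,
     "name", f"{PARENT_IF}.{VLAN_ID}", "type", "vlan", "id", str(VLAN_ID)])
run(["ip", "addr", "add", ADDR, "dev", f"{PARENT_IF}.{VLAN_ID}"])
run(["ip", "link", "set", f"{PARENT_IF}.{VLAN_ID}", "up"])
```

The isolation itself then lives in the router's firewall rules between the two VLANs; the server-side config only tags the traffic.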
Last night I had another one of those home lab qualifying moments with the missus, who, after Pi-hole stopped working, was VERY annoyed by all the ads that were flooding into her games, web pages, and shopping sites, and wanted it fixed. I found a hung service, and after re-enabling it, everything started to trickle back. Yay!
It did make me reflect on having a death file: a file that explains what each server does, what the passwords are, how to maintain and update services, etc. A lot of that was acquired through hours of grueling coding and CLI work that makes her eyes glaze over. However, last night I felt that if I gave her some basic instructions, she would do it for her own sanity and that of the kids. No, I am not dying.
I’ve seen many posts on here where people throw up their parent’s server rack saying, “Help, what do I do with this?”
How are you all keeping/documenting a ‘death file’ for your family to keep things going/passwords/UI, etc.?
Found this subreddit a few months ago and decided I would try and make my own! I had bigger plans initially but had to downscale due to the realities of having a homelab in the same room as my bed (and everything else I own) in an apartment. Nearly every part was found on Facebook Marketplace, including an old OptiPlex for my OPNsense box, an HP ProCurve 2520G switch, and the rack-mount case for my old gaming PC, which I upgraded to an i7-7700, 64GB of RAM, and two 6TB hard drives. Currently I have Proxmox running on the server, functioning as a NAS and a game-hosting server. Any suggestions about technologies to try and learn are appreciated! [link] [comments]
All criticisms welcome. [link] [comments]
This is my home lab. To start: at work I'm part of a small team of fewer than 50 engineers that runs a cluster of over 10k physical servers. We process over half a trillion requests a day and ship well over 250TB of compressed logs a day. I'm used to "big infrastructures". Yet this is my home lab. It's:

- 2 Beelink S12 Pros (each is an N100 proc, 16 gigs of RAM, and a 500-gig PCIe NVMe)
- 2 Raspberry Pis, which honestly fit in this dinky little desktop rack, but I hardly use them. I'm putting them off-site for backups.
- A rinky-dink 5-port home switch.

That's it. On them I run:

- Proxmox. Honestly, I barely use any of its features. I use it as an enabler to easily spin up VMs as I need them with cloud-init. I can have a new VM in about 10 seconds.
- Inside Proxmox, each S12 has a VM for general-purpose Linux tomfoolery and a VM that runs MicroK8s and exclusively k8s apps.

I interact with it via SSH and configure it with Ansible. This lab has all of the guts I need to learn how new software works or to play with things from work in a safe way. AND it's all reasonably performant.

I'm not saying this is the ONLY way to run a home lab. I AM saying, when you decide you want a home lab, first and foremost: know why you want a home lab! Do you want to learn how hardware works? Do you want to learn how software works? Do you want to host services for yourself and family members? They are all 100% valid approaches, and all wildly more valid than spamming r/homelab and r/homeserver with "WILL THIS RUN PLEX" or "WHAT DO I NEED FOR A HOME SERVER" -- because honestly, that's so repetitive and uncreative, and it brings down the entire quality of these subs. Do some of your own research. Present what you've looked at, and why you are on the path you are. Try things. Experiment. It's a LAB. [link] [comments]
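For anyone curious what the "10-second VM" cloud-init flow can look like when scripted: a minimal sketch using the proxmoxer Python client to clone a cloud-init template and boot it. The host, node name, VM IDs, and credentials are all placeholders, not values from this lab:

```python
# Minimal sketch: clone a Proxmox cloud-init template into a fresh VM and
# start it, via the proxmoxer client (pip install proxmoxer requests).
# Host, node, IDs, and credentials are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

NODE = "pve1"          # assumption: Proxmox node name
TEMPLATE_ID = 9000     # assumption: ID of an existing cloud-init template
NEW_ID = 123           # ID for the new scratch VM

# Clone the template (templates default to a fast linked clone), inject
# cloud-init settings, then boot it.
proxmox.nodes(NODE).qemu(TEMPLATE_ID).clone.post(newid=NEW_ID, name="scratch-vm")
proxmox.nodes(NODE).qemu(NEW_ID).config.post(ciuser="lab", ipconfig0="ip=dhcp")
proxmox.nodes(NODE).qemu(NEW_ID).status.start.post()
```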
It took me around three months to build, and finally, it works perfectly. I run a Windows VM through GPU passthrough as my main Windows operating system. Sorry about the cable management; I'm still finalizing that. The small Dell beside the rack is an OPNsense router; I'm going to replace that with a Ubiquiti Dream Machine Switch. I have 54TB of JBOD storage for all my media, which is full, so I will have to upgrade it soon.

How everything works: the top three servers are Proxmox servers. They are in a cluster, so Plex can still work even if one goes down, as can the other VMs. The Dell PowerEdge runs TrueNAS and is connected to the JBOD with a PCIe SAS HBA (the amount of mistakes and research it took to finally get that working!). The Plex VM uses iSCSI to access the TrueNAS JBOD storage; up until now, it has never caused any issues and has been stable. And if anyone is wondering: yes, Ceph has its own VLAN, called Ceph. I feel someone is going to ask this question. Each Proxmox server has two network cards: one for Proxmox and the other for Ceph on its own VLAN.

Future upgrade: I will be adding the 10G Cisco module to get faster speeds between the Proxmox servers and TrueNAS.

As for cable management, I need help with this. If anyone has any ideas, please let me know. I want something easy and not permanent, so that if I need to change the cable locations or add more stuff down the line, I can do so easily.

My previous setup was unRAID, but I outgrew it because Proxmox has features that unRAID does not, such as clustering. I also enjoy the new challenges that Proxmox keeps presenting to me. [link] [comments]
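For anyone wanting the same Plex-over-iSCSI arrangement, the client side boils down to two open-iscsi commands: discovery against the portal, then login to the target. A minimal sketch wrapping them from Python; the portal IP and target IQN below are placeholders, not values from this build (check your TrueNAS iSCSI target configuration for the real IQN):

```python
# Minimal sketch: attach a TrueNAS iSCSI extent from a Linux VM using
# open-iscsi's iscsiadm (apt install open-iscsi). Portal and IQN are
# placeholders; use the values from your TrueNAS iSCSI configuration.
import subprocess

PORTAL = "192.168.1.50:3260"                 # assumption: TrueNAS portal
TARGET = "iqn.2005-10.org.freenas.ctl:plex"  # assumption: target IQN

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Ask the portal which targets it offers.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
# 2. Log in to the target; the LUN then appears as a regular block device
#    (check `lsblk`), ready to be formatted and mounted for Plex.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```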