
How to design a data center

Designing a data center might sound daunting, but it's all about nailing down the essentials. First off, think about scalability: you want a setup that can grow with your needs, so planning for future expansions is key. Cooling is another biggie; those servers can heat up faster than a gaming PC running Cyberpunk. Personally, I swear by a good airflow design to keep things chill. Security? Absolutely crucial. It's not just about digital defenses but also physical access controls. Trust me, you don't want any Tom, Dick, or Harry waltzing in and messing with your racks. Cable management is a sneaky detail that often gets overlooked but can turn into a nightmare if not done right. Lastly, consider energy efficiency. Green is the new black, after all. Opt for energy-efficient hardware and explore renewable energy sources where possible. It's not just about saving the planet; it could shave a few bucks off your bills too.
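On the energy-efficiency point, the standard yardstick is PUE (Power Usage Effectiveness): total facility power divided by power delivered to IT equipment. A minimal sketch of the arithmetic (the wattage figures are made-up examples, not from any real facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal;
    well-run facilities typically land somewhere around 1.2-1.5."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1200 kW total draw, 800 kW of it reaching the servers.
# The remaining 400 kW is cooling, lighting, power-conversion losses, etc.
print(pue(1200, 800))  # 1.5
```

Lowering PUE is mostly a cooling and power-distribution exercise, which is why airflow design pays off twice: on temperatures and on the bill.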

Anyone else geeking out over data centers? Let's swap tips and horror stories!

submitted by /u/Tale_Giant412

Open air Server Rack Mount

I bought a network rack way back in the day.

I currently have a Jonsbo N1 inside it, which works perfectly; however, my needs are exceeding its size and I'd like to utilize the entire rack.

Currently the 12U rack holds a Netgear modem, a Dream Machine Pro, a 24-port PoE UniFi switch, and the Jonsbo.

The rack is super shallow: less than 15 inches deep from the back of the rack case to the front rack mounts.

I've tried to find cases without much success, so I'm considering an open-air setup.

Just a shelf with a motherboard tray, and then possibly a rack-mounted hard drive bay, maybe 3D printed.

I don't mind the hardware being exposed; it's a pretty clean area and rarely gets dusty.

Anything else I should consider?

submitted by /u/Cor4eyh

Data center efficiency and sustainability

I've got to say, the innovations happening in this space are mind-blowing. It's not just about saving energy anymore; it's about how we can revolutionize technology while being kinder to the planet. From liquid cooling systems that reduce electricity usage to renewable-energy-powered centers, the future looks promising. But here's the kicker: these advancements aren't just good for the environment; they're also cutting costs and improving reliability. Imagine a world where our digital footprint isn't at odds with our ecological footprint.

I'm curious: what are your thoughts on this? Are there any cool projects or technologies you've come across that are making waves? And how do you think we can push this agenda forward even more? Let's geek out together and discuss how we can make data centers not just more efficient but also more sustainable. Let's hear your insights!

submitted by /u/miserablyelitecivili

Recommendations on how to configure my homelab (cross-post from learningml)

I am looking for some recommendations on how to set up my homelab, specifically around software and technologies.

I have:

3x R630s with 512GB each and 44c/88t

1x R730 with 384GB, 36c/72t, a 42x 16TB drive JBOD DAS array attached, a 4x 2TB NVMe PCIe card, and a GTX 1660 (currently running Unraid, but I might change that)

1x R420 with 96GB RAM and 32c/64t cpus (I think)

1x C4140 with 16c/32t, 256GB RAM, and 4x P100 GPUs (just bought V100s to replace them)

All servers have ConnectX-3 cards in them (40G/56G), plus an SX6036 switch. I just got these and have no idea what I am doing yet. All servers also have dual 10G SFP+ NICs connected to a switch for regular Ethernet.

And my workstation has a Threadripper 5995WX, 1TB of RAM, and 4x 3090s (to be upgraded to 5090s when they drop). It runs Windows with WSL (also dual-booted to Ubuntu 22.04 due to a bug with WSL and 4 GPUs).

I have a large dataset from Common Crawl taking up 70% of the 500TB. I was thinking K8s with the R420 as the control-plane node and the R630s as workers; I might throw the C4140 and the R730 into the cluster too.

I currently have MinIO in a Docker container on the R730, but I think it is slow for what I am trying to do, so I was going to move it to the K8s cluster; the catch is that I only have one chassis for the drives. I see all this other technology out there (Hadoop, Spark, MinIO, etc.).

I am doing this primarily to learn, and the only way I really learn is hands-on. My goal is to replicate what the big players do, at a much smaller scale, while learning the technologies I will need if I want to shift into this field.

So given this layout, wanting to build models and use the hardware as efficiently as possible (meaning if I am preprocessing, all CPUs are at full tilt until it's done; if I am training, all GPUs are at full tilt until it's done) with storage access as fast as I can make it, how would you configure this?
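On the "all CPUs at full tilt while preprocessing" goal: the usual pattern is an embarrassingly parallel worker pool sized to the logical core count, which Kubernetes jobs, Spark, or plain Python can all express. A minimal single-node sketch (the tokenize function is just a stand-in for whatever you actually run over the Common Crawl records):

```python
from multiprocessing import Pool, cpu_count

def preprocess(doc: str) -> list[str]:
    # Stand-in for real work: lowercase and split into tokens.
    return doc.lower().split()

def preprocess_corpus(docs: list[str]) -> list[list[str]]:
    # One worker per logical core keeps every CPU busy until the batch is done;
    # chunksize batches documents to amortize inter-process overhead.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(preprocess, docs, chunksize=64)

if __name__ == "__main__":
    sample = ["WARC records from Common Crawl", "another document"]
    print(preprocess_corpus(sample))
```

The same shape scales out as one K8s Job per shard of the dataset; the per-node code stays identical, only the work distribution changes.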

Also, if there is something inexpensive I could buy to make this much better, I am open to suggestions.

edit:

I also need the dataset to be externally accessible (that is why I am using MinIO).

tl;dr:

Given this equipment and the workload (this also being a home lab), how would you configure it? Do I bring the R730 into the cluster, set it up as a TrueNAS/Unraid box, or do something else, since I have 56GbE and InfiniBand (RDMA, RoCE)?

submitted by /u/Professional_Lychee9

APC Rack Air Removal Unit compatibility with APC AR3300 Rack

Hello all

Does anybody know if the APC "Rack Air Removal Unit", model ACF102BLK, is compatible with the APC AR3300?

I was able to find the datasheet for the ACF102BLK on the official APC website, but there is nothing written about whether it fits the AR3300 rack model.

I have a strong feeling it should, given the dimensions, but I just want to be sure before I spend any money.

https://www.apc.com/ch/de/product/AR3300/netshelter-sx-geh%C3%A4use-42-he-600-mm-b-x-1200-mm-t-mit-schwarzen-seitenteilen/

https://www.apc.com/ch/de/product/ACF102BLK/apc-air-removal-unit-208-230-50-60hz/

Thank you

submitted by /u/SuperbValue4505

Dell PowerEdge R720 and GY1TD NVMe PCIe card

I recently made some necessary updates to our lab by upgrading some of our older servers to handle storage.

I currently have 3 PowerEdge R720s in my rack, and I want to use them specifically for Ceph storage.

I have installed the GY1TD card, which has a PEX 8734 switch internally and can handle x4x4x4x4 bifurcation. I also replaced the SAS backplane with the necessary one to allow U.2 drives to work. All these parts are Dell parts, and the drives light up and look like they connect.

The problem is the following:

If I have the drives connected at boot, the boot process gets stuck at "initializing firmware".

If I remove the drives from the caddies but leave the backplane and PCIe card connected, the server boots fine. But if I put the drives back in, the drive caddy lights up green and looks like it's doing something, yet I can't see the drives at all on the host: fdisk, blkid, and lsblk show nothing.
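For what it's worth, a quick way to tell whether the kernel bound the NVMe driver to any controller at all, independent of partitioning tools (a minimal sketch, assuming a Linux host with sysfs mounted):

```python
from pathlib import Path

def list_nvme_controllers() -> list[str]:
    """Return NVMe controller names (nvme0, nvme1, ...) the kernel has bound,
    or an empty list if none are visible in sysfs."""
    sysfs = Path("/sys/class/nvme")
    if not sysfs.is_dir():
        return []
    return sorted(p.name for p in sysfs.iterdir())

if __name__ == "__main__":
    controllers = list_nvme_controllers()
    if controllers:
        print("kernel sees:", ", ".join(controllers))
    else:
        print("no NVMe controllers bound; points at a PCIe link/bifurcation issue")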

I do not want to boot from these drives; I want to use them strictly for Ceph storage, as the PowerEdge servers have all been updated with 100Gb fiber links between the cluster nodes.

I have also removed the PERC card that was originally in the servers.

What can I do to make this card work? I want to create an all-flash Ceph cluster and I'm having a really hard time with it.

lspci output below

04:00.0 Ethernet controller [0200]: Mellanox Technologies MT27520 Family [ConnectX-3 Pro] [15b3:1007]
    Subsystem: Mellanox Technologies MT27520 Family [ConnectX-3 Pro] [15b3:0007]
    Kernel driver in use: mlx4_core
    Kernel modules: mlx4_core
05:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [10b5:8734] (rev ab)
    Subsystem: Dell PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [1028:1f84]
    Kernel driver in use: pcieport
06:04.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [10b5:8734] (rev ab)
    Subsystem: Dell PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [1028:1f84]
    Kernel driver in use: pcieport
06:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [10b5:8734] (rev ab)
    Subsystem: Dell PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [1028:1f84]
    Kernel driver in use: pcieport
06:06.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [10b5:8734] (rev ab)
    Subsystem: Dell PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [1028:1f84]
    Kernel driver in use: pcieport
06:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [10b5:8734] (rev ab)
    Subsystem: Dell PEX 8734 32-lane, 8-Port PCI Express Gen 3 (8.0GT/s) Switch [1028:1f84]
    Kernel driver in use: pcieport
0d:00.0 PCI bridge [0604]: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Subsystem: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Kernel driver in use: pcieport
0e:00.0 PCI bridge [0604]: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Subsystem: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Kernel driver in use: pcieport
0e:01.0 PCI bridge [0604]: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Subsystem: Renesas Technology Corp. SH7757 PCIe Switch [PS] [1912:0013]
    Kernel driver in use: pcieport
0f:00.0 PCI bridge [0604]: Renesas Technology Corp. SH7757 PCIe-PCI Bridge [PPB] [1912:0012]
    Subsystem: Renesas Technology Corp. SH7757 PCIe-PCI Bridge [PPB] [1912:0012]
10:00.0 VGA compatible controller [0300]: Matrox Electronics Systems Ltd. G200eR2 [102b:0534]
    DeviceName: Embedded Video
    Subsystem: Dell G200eR2 [1028:048c]
    Kernel driver in use: mgag200
    Kernel modules: mgag200
submitted by /u/mtheimpaler

DIY TNSR hardware for 10k+ requests per second?

I download about 500TB of data per month using dual 1Gbps connections and pfSense running on an old i7-3770K. I'm typically making 1k+ connections per second: 80% outbound GET requests, 20% inbound through Tailscale tunnels from 10 budget VPSes.

I just upgraded my residential connection to an 8Gbps connection and am about two weeks out from adding another 8Gbps connection. I have a combination of 10Gb and 40Gb connections between my servers.

Based on some Reddit research, I figured out that pfSense doesn't work well for 10Gb L3 switching and that I need to migrate to TNSR or maybe VyOS (less preferred, as I like having a GUI).

I'm trying to figure out what a decent setup would be for my workload. I'm assuming something like a Xeon D-1541 or any LGA 3647 chip would be fine. I'm just not sure what the best route is: a DIY 2U build or some Dell/HPE setup that's hopefully cheap (less than $500). Any thoughts or suggestions?
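For rough sizing it helps to separate sustained bandwidth from packet rate: 500TB/month is only about 1.5Gbps average, so the hard part at these connection rates is per-packet forwarding, not raw throughput. A quick sketch of the arithmetic (the 30-day month and ~800-byte average packet size are assumptions, not measurements):

```python
def monthly_tb_to_gbps(terabytes: float, days: float = 30) -> float:
    """Average line rate needed to move `terabytes` (decimal TB) in `days`."""
    bits = terabytes * 1e12 * 8
    seconds = days * 24 * 3600
    return bits / seconds / 1e9

def required_pps(gbps: float, avg_packet_bytes: int = 800) -> float:
    """Packets per second the router must forward at a given line rate."""
    return gbps * 1e9 / 8 / avg_packet_bytes

avg = monthly_tb_to_gbps(500)   # ~1.54 Gbps sustained average
peak = required_pps(8.0)        # ~1.25M pps to saturate one 8 Gbps link
print(round(avg, 2), round(peak))
```

Saturating both 8Gbps links doubles that packet budget, which is the regime where kernel-bypass routers like TNSR (VPP underneath) start to matter over pfSense.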

P.S. Before anyone says anything: I have been downloading these large amounts of data out of my house for years and have never gotten a single warning message from an ISP. This server will be going into a sound-deadening cabinet, which I picked up cheap and is where my 1.5PB of HDD and flash live, so ideally a 1U or 2U build to conserve space.

submitted by /u/9302462

Huawei Server BIOS Password Reset

Hello,

I have a Huawei RH2285 V2 rack server that I got from a friend. I added a BIOS password, which I have since forgotten, and I didn't set up my access to Huawei's management portal. How can I reset the BIOS? I've tried removing the CMOS battery, jumping the BIOS-RCV pins, and contacting Huawei, who said I can't get support unless I renew the device's warranty. I can't find any service manuals online. Any help would be greatly appreciated.

Thanks in advance

submitted by /u/CircuitMan8897

Server security

EDIT: I ditched Traefik and Authentik. I am now using Cloudflare Zero Trust tunnels and closed all ports on my router, and the attacks have completely stopped.

I recently posted about my server getting hundreds of requests and attacks, and I followed through on some of the recommendations.

I ditched TrueNAS and went back to my Unraid Pro installation.

I've added JavaScript challenges through Cloudflare, which has helped drop my traffic from 20k down to 200 per 24 hours. I set up Authelia, as well as CA certs instead of self-signed, HSTS, and a few other firewall rules for trusted IPs.

I'm in the process of learning how to use CrowdSec as another layer of protection, and I'm looking for more recommendations. I don't really like the feel of Authelia, as the UI is rather huge lol for a login form.

The number of attacks my router has detected since these changes is down to 2 in the past day or two, both blocked.

submitted by /u/SpoofedXEX