A step up from a home lab

My Current Homelab

13 November 2024 at 17:48

My little lab

  • 2x APC 3000VA UPSs
  • 1x Cisco 5108 Blade Chassis w/ 3 M5 blades with 384 GB RAM
  • 1x NetApp A300 AFF w/ 48x 4TB SAS drives
  • 1x Cisco ASA 5512
  • 1x Cisco Nexus 9332 40Gb switch
  • 1x Cisco Nexus 2248TP
  • 2x Cisco 6332-16UP FIs
  • 1x Digi CM48 serial console server
  • 2x Meraki access points

All the major backhauls are 40Gb.

I love my lab, but I might get another 9332 and set up vPC so I can do core switch upgrades fully online. I have an upgrade to do, but I'm out of the country, and if something goes wrong I don't have a backup. The Nexus 9332 probably won't get much more firmware because it's EOL; I was sort of surprised I got the one I did.

All of that runs my hypervisors and VMs; the NetApp is a development platform for all the scripts and such I code at work.

Love having a FlexPod in my house.

submitted by /u/__teebee__
[link] [comments]

I love racks! 😁

13 November 2024 at 18:50

From top to bottom…

Cisco 8861-K9 IP phone.

Dell PowerEdge 17FP 17" 1U KMM Server Rack Console. (Collapsible Monitor/Keyboard)

Cisco ASA 5555-X (IPS - 3DES/AES Encryption) Adaptive Security Appliance. 16GB memory, 4 Gbps Stateful inspection throughput. Also running Redundant Hot Plug Power Supplies.

Cisco ASA 5515-X (IPS - 3DES/AES Encryption) Adaptive Security Appliance. 16GB memory, 1.2 Gbps Stateful inspection throughput.

Cisco ISR4451-X-VSEC/K9 ISR 4451 VSEC Bundle Router w/ PVDM4-64, 16 GB memory, NIM-SSD module (400 GB SSD).

1U 48-port Cat6A shielded keystone patch panel with cable management.

Cisco Catalyst C9300-48P-E 48x Gigabit Ethernet PoE+ L3 1U managed switch. Dual power supplies.

Dell PowerEdge R640, 2x Xeon Gold 6140 2.3 GHz (2 CPUs = 36 cores), 128 GB DDR4 RAM, PERC H730 RAID controller, Broadcom 5720 NDC. (Proxmox: Cisco Unity Connection VM)

Dell PowerEdge R740. 16 bay. 2x Intel Xeon Platinum 8168 - 2.7GHz (2 CPUs = 48 cores), 256GB DDR4. 2TB RAID 10 (OS) / 4TB RAID 0 (storage) on a PERC H730P custom RAID Controller, iDRAC 9 Remote Management Card, Intel X550 4x Gigabit Ethernet ports, and Redundant Hot Plug Power Supplies. (Web/Email/Database Server | Storage)

Dell PowerEdge R620, 2x Xeon E5-2620 @ 2GHz (2 CPUs = 12 cores / 24 threads), 128 GB RAM. (Abandoned in place)

Dell PowerVault MD1220. 1TB RAID 1 & 500GB RAID 1 on a PERC H810 for backups. Also running Redundant Hot Plug Power Supplies. (Abandoned in place)

Dell PowerEdge R910. 4x Intel Xeon X7560 - 2.26GHz (4 CPUs = 32 cores), 128 GB RAM, 2TB RAID 10 (OS) / 4TB RAID 0 (storage) on a PERC H700 RAID Controller, iDRAC 6 Remote Management Card, Broadcom 5709 4x Gigabit Ethernet ports, and Redundant Hot Plug Power Supplies. (Abandoned in place)

2x APC SMT1500RM2U Smart UPS Backup.

Category 8 SSTP wiring. Digi PortServer TS MEI for management.

3x Cisco 8861 IP Phones.

submitted by /u/Stray_Bullet78
[link] [comments]

Hardware for VPS hosting?

After doing some homelabbing, I started looking into the idea of micro datacenters, and somehow that led me to thinking about VPS hosting. I have several mid-to-high-tier desktops, and I contemplated starting by selling capacity off of them using Proxmox and a dedicated fiber line. Is this an OK way to go about this venture until I can invest in proper server hardware? Or should I jump right in and get a rack unit? I've done some research on CPUs and parts; everything seems very expensive for anything current, and it's hard for me to tell how viable older-gen parts are for what I'm trying to do.
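
For what it's worth, provisioning on Proxmox can be scripted against its REST API from day one, which makes a desktop-based start easier to migrate to real server hardware later. Below is a minimal sketch using the third-party proxmoxer Python client; the host, credentials, node name, template ID, and VM parameters are all hypothetical:

```python
# Minimal sketch: clone a template into a "customer" VPS via the
# Proxmox VE API using the third-party proxmoxer client.
# Host, credentials, node name, and VM IDs below are hypothetical.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve1.example.com", user="root@pam",
                     password="secret", verify_ssl=False)
node = proxmox.nodes("pve1")

# Full clone of a prepared template (vmid 9000) into a new VM.
node.qemu(9000).clone.post(newid=101, name="customer-vps-101", full=1)

# Cap resources to the plan tier; net rate is in MB/s (~1 Gbit/s here).
node.qemu(101).config.post(cores=2, memory=4096,
                           net0="virtio,bridge=vmbr0,rate=125")

node.qemu(101).status.start.post()
```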

submitted by /u/MarsupialLopsided737
[link] [comments]

Selling some stuff, 200G Active Optical Cables, QSFP56, QSFP28-DD SR8 to 2xQSFP28 SR4

8 November 2024 at 19:53

Hi all!

Got some excess inventory, selling the items below. Shipping via FedEx within the US, or pickup @ 78665. New, and comes in a bag. I am willing to discuss the price for all of these; it is way cheaper if you buy in bulk.

Img: Timestamps

Item | Specs | Quantity | Price | Shipping
QSFP56-QSFP56 AOC 200G | 200Gb/s, IB, HDR, QSFP56-QSFP56, Active Optical Cable, Mellanox/NVIDIA MFS1S00-H020V, 20M, New | 62 | $285 ea | $20
QSFP28-DD to 2x 100G QSFP28 SR4 breakout AOC 200G | 200Gb/s, QSFP28-DD SR8 to 2x 100G QSFP28 SR4 breakout, Active Optical Cable, MFS1S50-H010V, 10M, New | 50 | $270 ea | $50
QSFP-DD to QSFP-DD 800G | 800Gb/s, QSFP-DD to QSFP-DD, Active Optical Cable | 1 | $1400 | $20
QSFP28 to QSFP28 100G 0.5m | 103.125Gbps, QSFP28, Twinax Cable | 85 | $22 ea | $20
QSFP28 to QSFP28 100G 1m | 103.125Gbps, QSFP28, Twinax Cable | 13 | $22 ea | $20
submitted by /u/BILLYCcraft1234
[link] [comments]

LTO Tape Drive Questions: Sanity Check My Idea

3 November 2024 at 06:44

I usually hang out on r/homelab and r/selfhosted, but I am looking into a project that seems to fit better here on r/HomeDataCenter. I want to see if I can get some LTO tape backup going without completely breaking the bank.

I am looking on eBay for used LTO tape drives. Current generations are far above my price range, so I have been looking at LTO6 or maybe LTO7. I know these are usually used in a large library with auto-loaders, but for my use case I want to keep costs down, so I am OK with manually loading tapes. However, self-contained external LTO tape drives generally seem to be much more expensive on eBay than drives that are meant to go in a library. That leads me to my idea, and I'm hoping some of you have experience with these drives and can help sanity-check it.

I came across this post about how HP LTO tape drives seem to "just work" as standalone units with just a jumper-pin setting, whereas IBM LTO drives can be set to standalone with some hex code sent to them. I looked into the GitHub tutorial-style page linked in that Reddit post, and it gave some details about the fiber channel HBA card used for that project.

For reference, I'm in the USA, so my price list here is in USD and using the US eBay.

  • A 2-port fiber channel (FC) HBA card seems to be around $30, like this one
  • An IBM LTO6 tape drive can be as low as around $150 with shipping, like this one
  • While LTO7 would be great with its increased storage size, the price jumps by almost an order of magnitude, with an inexpensive used drive costing at least $1400, like this one
  • I could get 20 LTO6 tapes, for a raw total of 50TB, for about $180, like this listing

Assuming I have a computer around with at least one free PCI-e slot and an SSD with at least 2.5 TB of free space to use as staging, where I get the files zipped up and ready to copy (which I certainly do), my cost would be something like $180 for the drive and HBA and another $180 for 20 LTO6 tapes, bringing my total to $360 for 50 TB of storage. Now I might be able to get some great refurbished hard drives that could offer similar price per TB, but my focus here is on immutable backups that can be easily kept off site. That is what draws me to trying out tape backup. I want that extra protection against some sort of ransomware or other attack messing up not only my main copy, but also my backup copy. (And I know that an offsite backup with some system that uses versioning would also help prevent loss from ransomware attacks, and that is a fair option to consider. That is why I'm posting in this subreddit: I know this idea is overkill, and I'm here looking for people who appreciate overkill.)
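
As a quick sanity check on the arithmetic (LTO6 native capacity is 2.5 TB per tape, uncompressed; prices from the listings above):

```python
# Back-of-the-envelope cost for the LTO6 plan (native capacity, no compression).
drive = 150              # used IBM LTO6 drive, USD
hba = 30                 # 2-port FC HBA, USD
tapes = 180              # 20 used LTO6 tapes, USD
capacity_tb = 20 * 2.5   # LTO6 holds 2.5 TB native per tape

total = drive + hba + tapes
print(f"total ${total} for {capacity_tb:.0f} TB")         # total $360 for 50 TB
print(f"${total / capacity_tb:.2f}/TB incl. drive+HBA")   # $7.20/TB
print(f"${tapes / capacity_tb:.2f}/TB marginal (tapes)")  # $3.60/TB
```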

I know people tend to say that LTO tape backup is just too expensive to be practical until you have close to half a PB of data, but LTO6 seems to be a sweet spot right now, assuming I'm not missing something crucial in my plan here.

Please take a look at my parts list and let me know what I'm missing. Or if you have experience using LTO tape drives as standalone drives, please share your experience.

submitted by /u/ResearchTLDR
[link] [comments]

Incredibly confused about network VFs in switchdev mode

3 November 2024 at 03:34

So I recently got my hands on a Mellanox SN2700 switch and a few ConnectX-6 DX cards.

I have played with creating VFs on my CX3-Pro cards before, but I was using the mlx4 driver, which cannot put the card into switchdev mode.

What I have been doing on this new card so far is the following:

I create a VF on the card, set it up using the ip command to give the VF a VLAN, and then add a static IP address on the VF. I know maybe this isn't what it's meant to be used for, but I like using it this way. I can also set up more VFs with different VLANs and use them as uplink OVN networks for my LXD setup.

So I understand that I have been using the legacy mode of my card.

Now I would like to switch to using switchdev (because I want to understand it better), but I'm running into trouble, and I'm not sure I can even achieve what I'm trying to do.

I know that when I create my VFs, I then unbind them from the card, switch the card to switchdev mode, add any offloading capabilities, and then rebind the VFs to the card.

I now have a physical NIC, a virtual function for that NIC, and then (I guess it's called) a representor of that virtual function (i.e. physical NIC: eno1, virtual function: eno1v0, representor: eth0).
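
For reference, here is that unbind / switchdev / rebind sequence sketched as a small script; a minimal sketch assuming the mlx5 driver, with hypothetical PCI addresses and interface names, run as root:

```python
# Sketch of the unbind -> switchdev -> rebind flow for an mlx5 card.
# PCI addresses and interface names are hypothetical; run as root.
import subprocess

PF_PCI = "0000:03:00.0"       # the physical function
VF_PCIS = ["0000:03:00.2"]    # VFs created earlier via sriov_numvfs
UPLINK = "eno1"

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Unbind the VFs so the eswitch mode can be changed.
for vf in VF_PCIS:
    run(f"echo {vf} > /sys/bus/pci/drivers/mlx5_core/unbind")

# 2. Flip the PF eswitch to switchdev and enable TC offload on the uplink.
run(f"devlink dev eswitch set pci/{PF_PCI} mode switchdev")
run(f"ethtool -K {UPLINK} hw-tc-offload on")

# 3. Rebind the VFs; their representors (e.g. eth0) should now appear.
for vf in VF_PCIS:
    run(f"echo {vf} > /sys/bus/pci/drivers/mlx5_core/bind")
```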

I would like to set up one of my virtual functions with a static IP and a VLAN while in switchdev mode. I want to do this because I am using NVMe over RDMA on one of my nodes, and that seems to be the best reason to use my CX6-DX card.

I am unsure how to go about this. I've tried following quite a few guides, like this one from Intel (link) or this one from Nvidia that talks about VF-LAG (link), but have had no success.

I have found a way to attach an IP address to eth0 (the representor of the virtual function eno1v0) after putting the card in switchdev mode, but I can only ping the address I statically set on it, and no other addresses on that same subnet.

My OVN setup is pretty simple, and I only have a default br-int interface. So far I have no ports added to br-int.

How can I achieve what I want to do, which is to make a usable virtual function on my host OS with a VLAN attached to it, using switchdev mode?

submitted by /u/mtheimpaler
[link] [comments]

2U 2N server options (with shared front plane?)

25 October 2024 at 23:12

As the title implies, I'm looking for a server that is 2U and has two "canisters" in it. Specifically, I'm looking for something with a shared front plane, so if one canister goes down the other can pick up its resources. I want to use it for a pair of BeeGFS storage nodes and would prefer to avoid buddy groups if I can help it.

I know something like the Viking Enterprises VSSEP1EC exists (I use them at work), but they're extremely overpowered for what I need and super expensive. I know something like the SuperMicro 6028TP-DNCR exists, but the front plane isn't shared (maybe it could be?). Does anyone know if there are older-generation Vikings I could buy, or some other solution with a shared front plane?

submitted by /u/p00penstein
[link] [comments]

Need help with who can help best: building an educational cluster for myself and eventually my students

21 October 2024 at 17:51

Hi All,

TL;DR at end.

I was manic a while back and had a great idea to build a home datacenter (this was before I met y'all) so I could better understand how the cloud works. I am an instructor at a technical college, but I've always focused on the analysis/presentation side of data work. Perhaps unsurprisingly, a data scientist can do cool stuff, but not this. I was/am hoping to develop curriculum for a new course for those interested in either datacenter work or using the cloud in general.

To that end, I'm hoping to talk to experts in basically every aspect of the datacenter (InfiniBand, RDMA, RDMA over PCIe, PCIe networking in general, orchestration, defining workflows, security, etc.) at a scale that would fit on a benchtop, or at least where I have control over the components and switch configurations as necessary. I have a bunch of small x86, Jetson (ARM), and BlueField (ARM+NIC) systems, a Broadcom PCIe switch, and InfiniBand router systems I was hoping to play with, bought mostly secondhand.

I'm hoping that if I occasionally post questions about my goals in spinning this thing up, I can get some feedback, suggestions, and critiques toward getting the construction of the physical layer stable. I know I'm doing it "wrong," because peak functionality is normally the goal, and this is more about demonstrating the various technologies involved than solving an optimization problem (that would require me to circle back to my current class, and I am not ready to introduce them to this yet, not while I still have no idea what I'm doing!).

I need guidance on what a reasonable entry point looks like, given what I have and my thoughts versus the reality of what the datacenter is like today (which I have no vision into). Please, I don't think I'm asking for forbidden knowledge, but it sure feels that way.

TL;DR: may I ask dumb questions and hope for smart answers?

submitted by /u/Flying_Madlad
[link] [comments]

How to hide these pipes?

21 October 2024 at 13:02

Looking for some recommendations on a "clean" and "simple" way to hide these pipes/cables. In that same spot, I'm going to put an 18U rack.

I'm looking at some panel boxes, but they're thick (200mm), and that thickness will eat into the depth available for the rack's spot.

submitted by /u/elvinguitar
[link] [comments]

Dell 1000w UPS Compatible Rails?

14 October 2024 at 21:59

Evening all

Finally got myself a rack (woooooooo) and am trying to mount my Dell UPS J718N 1000. It came with the ears and the rear supports, but no rails.

Are there other compatible rails I can use or do I need to find the matching set?

Thanks in advance x

submitted by /u/SpadgeFox
[link] [comments]

Grounding my racks

11 October 2024 at 23:25

I'm in the process of building out my new racks in my new home, and the question came up: what is the best way to ground the rack? Currently, my gear is in a colo (we moved it there for a year while we were doing work on the new house). At my colo, the doors have grounding connections that bond them to the frame, and the whole frame has #6 ground wires that run along the whole row.

My question is: do I need to run a grounding wire to the racks? If so, what size wire? They are going in a utility room that is 10 feet from the water line coming into the house and the main panel, so running the wire is no problem. Or is this overkill, and is the ground from the outlet more than fine?

Note: I'm going to be using 2 x 42U Sysracks (I got a terrific deal on them)

submitted by /u/cube8021
[link] [comments]

RoCE v2 switch at home

28 September 2024 at 02:27

I've posted this in r/homelab and r/HomeNetworking and have only gotten two recommendations, which were functionally the same (Mellanox SX6036 and SX6012; I don't know how to enable what's necessary on these). Perhaps y'all have answers.

I'm looking to eventually deploy RoCEv2 in my home lab, but am not 100% sure which of the switches I've seen can support it, nor which have noob-friendly interfaces (I have very little switch UI exposure). I know ECN, PFC, DCBx, and ETS are the required features, but I've read you can get away with just the former two. Do you need all 4, or can just the 2 get you what you need?
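
For context, the host-side counterpart of those features usually looks something like the sketch below; a rough sketch assuming NVIDIA's mlnx_qos and cma_roce_mode utilities from MLNX-OFED, with hypothetical interface/device names:

```python
# Rough sketch of host-side lossless-class setup for RoCEv2 on a
# Mellanox/NVIDIA NIC. Assumes MLNX-OFED's mlnx_qos and cma_roce_mode
# tools are installed; interface/device names are hypothetical. Run as root.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# PFC on priority 3 only (a common RoCE convention), trusting DSCP
# so markings survive L3 hops.
run(["mlnx_qos", "-i", "ens2f0", "--pfc", "0,0,0,1,0,0,0,0",
     "--trust", "dscp"])

# Default RDMA-CM connections to RoCE v2 on this device/port.
run(["cma_roce_mode", "-d", "mlx5_0", "-p", "1", "-m", "2"])
```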

For switches, I've found a small selection. Am I correct in my analyses of them?

Arista DCS-7050QX-32S: p. 4, under "Quality of Service (QoS) Features", lists all 4. This will work.

Brocade BR-VDX6940-36Q-AC: p. 8, under "DCB features", lists PFC, ETS, and DCBx by name, and I think "Manual config of lossless queues" would cover the other. This may work.

Edge-corE AS77[12,16]-32X: I thought I read that the NOS (or whatever OS this thing uses) has the 4 things I need. This may work.

Dell S6010-ON: the last bullet on p. 1 says "ROCE is also supported on S6010", but is that v2 or not? I see PFC, ETS, and "Flow Control", so I'm not 100% sure.

Cisco Nexus N3K-C3132Q-XL: this has ECN and PFC, but neither of the other 2 features by name. This may work.

I would get at least CX3s for this, as they're the cheapest, and meaningfully utilizing 50/100G is a long way off for me. The goal is to enhance my planned storage (a pair of ? nodes hooked into at least one DDN shelf running BeeGFS w/ ZFS backing) and compute (multiple Dell C6300/Precision 7820-type machines running suites like QuantumESPRESSO) systems.

edit 1 (17 Oct): the above Arista and CX314As have arrived at my pad, and I'll be spinning them up for very boilerplate testing. Hopefully I can get RoCEv2 working with these NICs on Debian 12.
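
A minimal smoke test for that, assuming the perftest suite is installed (device name and server address are hypothetical; ConnectX-3 cards typically enumerate as mlx4_0):

```python
# Minimal RoCE bandwidth smoke test with perftest's ib_send_bw.
# Device name and server IP are hypothetical.
# On the server host, first run:  ib_send_bw -d mlx4_0 --report_gbits
import subprocess

# Client side: connects to the waiting server and reports Gbit/s.
subprocess.run(["ib_send_bw", "-d", "mlx4_0", "--report_gbits",
                "192.0.2.10"],   # the waiting server's IP
               check=True)
```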

submitted by /u/p00penstein
[link] [comments]

Tesla P40 in Dell R720xd woes

21 September 2024 at 12:17

I bought a couple of Dell R720xd servers a while back. One for Proxmox and one for TrueNAS. They work great for my needs and I’d like to upgrade them for some basic local LLM and other GPU workloads.

I’ve seen a number of folks post on YouTube with working Tesla P40s in their 720xd servers. So I buy a couple along with the wiring one of the posters linked.

I also picked up 1100W PSUs and threw those in there. iDRAC and the BIOS are updated to latest.

However, when I try to boot with the GPU installed, the server won't boot, the PSU blinks orange, and there are zero logs in iDRAC as to what the issue might be. This happens even on a dedicated 20A circuit with no other load.

Anyone out there have any ideas?

ETA: I got them working. I'd tried two different cables and neither worked for me, but this cable from Amazon did: GinTai 8(pin) to 8(pin) Power Cable Replacement for DELL R730 and Nvidia K80/M40/M60/P40/P100 PCIE GPU.

submitted by /u/Trustworthy_Fartzzz
[link] [comments]