This homelab server is absurd and I love it!

15,183 views

2GuysTek • 1 day ago

Comments: 92
@Antitux • 1 month ago
I haven’t finished the video yet, but 2 minutes in all I can think is “oh, that’s gonna be loud.”
@2GuysTek • 1 month ago
And you’re not entirely wrong! I address the fan noise a bit later in the video.
@Antitux • 1 month ago
@@2GuysTek Yep, I just finished and saw that. I'm still waiting for you to give Harvester HCI a shot, and this would be a good use case, with its backing storage being based on Longhorn.
@VelislavVarbanov • 1 month ago
4-node Proxmox server with CephFS across the SSDs. Really, with that hardware this should work great!
@gabrielpi314 • 1 month ago
Would love to see some GPUs in those PCIe slots to demonstrate passthrough and vGPU support in the various platforms.
@djstraussp • 1 month ago
Clustering Proxmox with Ceph is tha bomb
@michaelrichardson8467 • 1 month ago
Nice work on these videos, man! That build montage was great! The music was a good choice. I have Scalable CPUs in my homelab, but this is a CRAZY build!
@2GuysTek • 1 month ago
Glad you enjoyed the montage! What are you running on your Scalables?
@RubberDuckDebugger • 1 month ago
It sounds like you have reservations about Hyper-V, but Hyper-V does have an interesting advantage when using a quad-node system like you have. As far as I'm aware, VMware, Proxmox, and Nutanix all need a minimum of 5 nodes if you want the cluster to be able to take any two storage failures (such as a failed disk, a failed host, or even a host being rebooted). VMware, for example, will make 3 copies of the VM and place them on 3 different hosts, and then a "witness" component on 2 more hosts (a small file used to vote for control of the VM). This way, if the cluster is split and, say, 3 nodes can talk to each other but not the other 2, they can use the fact that they own 3 of the 5 parts of each VM to know they can safely bring up the VMs and that the other two nodes are not elsewhere doing the same. It probably sounds a bit complicated in a 5-node setup, but in a larger setup, where a split cluster could have whole VMs in each half, it allows each half to operate independently, starting only the VMs they have majority control over.

Hyper-V handles this a bit differently by electing a master node based on a cluster vote, and that node decides who can run what. If the master node fails, the remaining 3 of 4 nodes will vote to elect a new one. The tricky bit is the 2nd failure. If 2 nodes fail, the remaining two nodes have no way to know if the failed nodes are actually offline or if they just lost communication with them. In this situation, 2 of 4 nodes can't obtain a majority vote to run the cluster, as that would allow a 2-2 split cluster to each start a copy of all the VMs. To avoid this, we need a 5th vote. Rather than a 5th node, which is kind of a pain when running with a quad-node chassis, Hyper-V can use an external witness. VMware has an external witness as well, but in their case the witness is a whole VM that has to be run off-cluster. Hyper-V just needs an SMB2-compliant share the hosts can read and write to. In fact, the requirements are so low that in Microsoft's own documentation they advise that even a flash drive shared out from a SOHO router is enough. No need to worry about router reboots either, as this share is only needed when no more than 50% of the cluster hosts are available.

I'm not 100% sure about Proxmox/Ceph yet, but both VMware and Nutanix are limited to one host fault at 4 nodes. This is likely fine for a homelab, but I run clusters at work and at home and it's something to keep in mind.
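A rough sketch of the quorum math described above (simplified: one vote per node plus an optional file-share witness vote; real failover clusters add refinements like dynamic quorum):

```python
# Minimal quorum sketch: a partition may run VMs only if it holds a strict
# majority of all votes (node votes plus an optional file-share witness vote).

def has_quorum(nodes_held: int, total_nodes: int,
               witness_exists: bool = False, witness_held: bool = False) -> bool:
    total_votes = total_nodes + (1 if witness_exists else 0)
    votes_held = nodes_held + (1 if (witness_exists and witness_held) else 0)
    return votes_held > total_votes / 2  # strict majority required

# 4-node cluster, no witness: a 2-2 split leaves neither half with quorum.
print(has_quorum(2, 4))                                            # False
# Add an SMB file-share witness: the half that claims it holds 3 of 5 votes.
print(has_quorum(2, 4, witness_exists=True, witness_held=True))    # True
print(has_quorum(2, 4, witness_exists=True, witness_held=False))   # False
```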
@blevenzon • 1 month ago
Awesome video, thank you for sharing and for all the work.
@danielmcclintock1135 • 1 month ago
Nice job, Rich. I can't wait to see Proxmox clustering with that bad boy!
@dagamore • 1 month ago
A really nice thing about having 4 identical builds in one system, a 2U in this case, is that you can do true A/B testing of different setups. I know you are thinking VMs right now, but showing how fast/complicated a TrueNAS vs. Unraid setup is would be one idea, and then testing default speeds of network shares on those same systems, as one simple example. Going with the VM theme, have one node running Hyper-V, one TrueNAS Scale, one XCP-ng, and one ESXi, all at the same time with the same VM loads, and see what the CPU usage is on the different hypervisors and how responsive the VMs are. I know at work we just moved all of our VMs from an NFS share on 10K RPM SAS drives to SAS SSDs and they are so much snappier, same hardware otherwise, and I know this is spinning rust vs. SSD, but I wonder if the same sort of testing could be done for ESXi vs. everything else.
@johnharrison712 • 1 month ago
I'm interested in seeing a video showcasing Netdata, where agents are installed on servers and communicate with a central dashboard. It would be great to see the agents' communication with the dashboard and how they can be turned off from their own web interface. Kinda like how Zabbix works
@2GuysTek • 1 month ago
This is a great idea! I wonder if @Netdata is interested in this as well? 😉
@realandrewhatfield • 1 month ago
How does it compare to Nagios or Zabbix? Why would I pay for this kind of functionality?
@johnharrison712 • 1 month ago
@@realandrewhatfield I would assume Netdata does the heavy lifting on getting everything configured, whereas Zabbix requires more configuration.
@Netdata • 1 month ago
Yes, Netdata is fully automated, so besides the 1-2 minutes it takes to install, you don't really have to spend any time setting stuff up. You get all the metrics, dashboards, alerts, and anomaly detection out of the box. You should try it out; there's a 30-day free trial, and if the homelab plan is still not affordable, we also have a community plan (for < 5 nodes) which you'll automatically enter post-trial.
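For anyone curious what "out of the box" looks like, here is a small sketch of pulling metrics from a locally installed agent over its REST API (this assumes the default agent port 19999 and the v1 endpoints from memory of the Netdata docs; verify against your installed version):

```python
import json
import urllib.request

AGENT = "http://localhost:19999"  # default Netdata agent port (assumption)

def get(path: str) -> dict:
    with urllib.request.urlopen(f"{AGENT}{path}", timeout=5) as resp:
        return json.load(resp)

info = get("/api/v1/info")                 # agent and host metadata
print("Netdata version:", info.get("version"))

# Last ~60 seconds of CPU utilisation, downsampled to 5 points.
cpu = get("/api/v1/data?chart=system.cpu&after=-60&points=5")
print(cpu.get("labels"))
for row in cpu.get("data", []):
    print(row)
```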
@johnharrison712 • 1 month ago
@@Netdata I would love to have all the agents report to one server, and then disable the web interface on each agent so you can only access it by going to the main dashboard server. Maybe 2GuysTek can get a sponsor to do that.
@TheOnlyEpsilonAlpha • 1 month ago
Okay, before I get to my actual comment: I would like to see a massive Kubernetes cluster on those 4 nodes! The number of containers would be absurd. You can stack them in namespaces, and the auto-heal capabilities let you easily pull nodes out live while the Kubernetes cluster brings the containers up elsewhere. Containers are the god-king of effectively using every last bit of resources on servers at massive scale. And the core concepts of Kubernetes are fantastically made, because it's declarative: say via YAML what you want and let the automation take care of the rest. It would be nice to have such a big server, so much more power and resources, hosted on-prem. But the energy costs 😅 lower my excitement, not gonna lie.

Back to my initial comment: I'm at 8:56 and what surprises me the most is the price of the CPUs; I was sitting here like, DOE?! The chassis is a very good deal; the RAM, yeah, ECC always costs a ton of money, but the upgrade capability means you won't run out of RAM in the next 50 years. What gives me a headache, though, are the WD Blues. I had issues with the SATA HDD versions of those until I grabbed some more cash and went to the WD Red SATA HDDs, and what I can say is: my initial stupidity was thinking "well, I'll exchange them in my NAS when they go down," but FFS, they don't go down. At the current level of degradation they will survive me and my dad 😅. Lately, on my server trio, after the NVMe SSD began to fail, I also went to a WD Red NVMe SSD, currently on Amazon for 87 bucks each, plus a PCIe-to-M.2 card for 17 bucks each. I will test it out on my first server, putting them in RAID 1 as the boot disk for my Proxmox instance after migrating the existing installation off the failing NVMe SSD. If they degrade as slowly as the SATA HDDs, then I will have a lot of time with them 😊, and I will begin to switch out the main NVMe SSD on the other 2 servers to the same setup, and add a RAID 5 for data to each server in the long run. Currently my HA is degraded but still functional; that was the initial reason to buy 3 servers for HA.

Subscribed and liked already, from a fellow server geek 😇. But one constructive tip: please reduce the bass on the "building music"; that hurts a bit for headphone users.
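The declarative/auto-heal idea mentioned above, reduced to a toy example (an illustration of the concept only, not real Kubernetes controller code): you declare a desired replica count and a control loop keeps converging the observed state toward it.

```python
from dataclasses import dataclass, field

@dataclass
class DesiredState:
    replicas: int                       # what you would declare in YAML (spec.replicas)

@dataclass
class Cluster:
    running: list = field(default_factory=list)   # observed pods

def reconcile(desired: DesiredState, cluster: Cluster) -> None:
    """One pass of a control loop: compare desired vs. observed state and act."""
    diff = desired.replicas - len(cluster.running)
    for _ in range(diff):               # too few pods -> start replacements
        cluster.running.append(f"pod-{len(cluster.running)}")
    for _ in range(-diff):              # too many pods -> scale down
        cluster.running.pop()

cluster = Cluster(running=["pod-0"])    # e.g. a node just died and took pods with it
reconcile(DesiredState(replicas=3), cluster)
print(cluster.running)                  # ['pod-0', 'pod-1', 'pod-2']
```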
@K0gashuk0 • 1 month ago
I am building a second R730 I got a couple of years ago but never put together, as a NAS. I am almost wondering if I should take my 4-node cluster and set it up with a combination of Proxmox running TrueNAS on each node, combining the storage using Ceph, and then use the R730 as a backup-only unit that stays powered off except when needed. I get two NVMe drives per node and four SATAs. I already have eight 4TB SATA SSDs that were going to go into the R730 but could easily slap two in each node, then buy four 8TB NVMes and slap them in. That way expansion would be available by getting eight more 4TB SATAs and four more 8TB NVMes. I am wondering about the speed vs. just having a dedicated R730. I would just load spinning rust into the R730 and use a 3.5" JBOD for backup purposes.
@mamdouh-Tawadros • 1 month ago
Hi, thank you for this great video. I would like to see how you cluster these nodes, for failover or for enhanced functionality.
@KombiGnome • 1 month ago
That beast has afterburners!
@stefsmurf • 1 month ago
One thing I think would be interesting is setting up OpenStack. It's complex enough that it should warrant a couple of videos getting it set up. Another thing you could do is set up a huge virtual network with this new server. Something with EVE-NG or GNS3 could stress even this machine if the network is big enough.
@LampJustin • 1 month ago
Yeah, I'd love to see that, but I'm sure that's too big of an ask. I was thinking about asking the same, but that's just too much work right now. Hopefully MicroStack will at some point evolve to a usable state, but thus far it's full of bugs. If that becomes stable, OpenStack will be far more approachable for many. Kolla Ansible works quite well, too, but it needs experience or time. If you're experienced with Kubernetes, Atmosphere would be a good solution, too. But again, you'll need a lot of knowledge.
@K0gashuk0 • 1 month ago
Just got a similar setup on eBay that runs on EPYCs. I got an awesome deal with CPUs and RAM. Seems really cool, and I am going to use the 4th node for a working Win11 RDP system.
@FranckEhret-ip4hu • 1 month ago
After your video series on the hypervisors, I'd love one comparing the full management stack of the VMware alternatives (vCenter vs... x) and their potential administration overhead. Have a good time with this beast! 😉
@nicholasndb2745 • 1 month ago
I'd be interested in seeing clustering/failover/live migration with XCP-ng; currently using VMware but looking at XCP-ng. Also passing through hardware such as GPUs with XCP-ng on these nodes. What is the power draw like running the 4 nodes?
@2GuysTek • 1 month ago
When I get some real workloads on the unit I'll make sure to update with power draw numbers.
@markx6288 • 1 month ago
Having gone through this on a very similar Supermicro, trying Hyper-V, XCP-ng, and PVE… I recommend this config for max performance: PVE with a 192GB ZFS boot mirror on the two NVMe drives, then a zpool with 2x RAIDZ1 vdevs (3x mirror vdevs would be a bit faster but less storage). Use the 10Gb NICs for replication/migration and backup. Adding PBS to one of the PVE nodes is also fun to do. With the remaining space on the NVMe drives you can play with testing a mirrored SLOG, a small mirror vdev, a special vdev, or L2ARC. All fun stuff. You could also do a Ceph configuration and it would be usable for sure, but not max performance. If you had 40 or 100Gbps NICs then Ceph could really give some great performance.
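To put rough numbers on the 2x RAIDZ1 vs. 3x mirror trade-off suggested above (assuming six SATA SSDs per node at 1 TB each; the drive size is an assumption for illustration only):

```python
def raidz1_usable(vdevs: int, disks_per_vdev: int, disk_tb: float) -> float:
    # RAIDZ1 gives up one disk of capacity per vdev to parity.
    return vdevs * (disks_per_vdev - 1) * disk_tb

def mirror_usable(vdevs: int, disk_tb: float) -> float:
    # A mirror vdev stores one disk's worth of data, however wide it is.
    return vdevs * disk_tb

disk_tb = 1.0  # per-disk capacity, assumption
print("2 x 3-wide RAIDZ1 :", raidz1_usable(2, 3, disk_tb), "TB usable")  # 4.0 TB
print("3 x 2-way mirrors :", mirror_usable(3, disk_tb), "TB usable")     # 3.0 TB
# More vdevs generally means more IOPS, so the mirror layout trades roughly
# a quarter of the capacity for better small-block performance.
```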
@levifig • 1 month ago
Holy crap, those CPUs are dirt cheap now!! I need some!! 😅
@nadtz • 1 month ago
Craft Computing has done a couple of videos with blade/multi-node servers, so you are not alone.
@DavidVincentSSM • 1 month ago
How about looking into some of the SDN features of Proxmox?
@doomboots • 1 month ago
So what's the draw of Xeon Golds compared to, say, 7000-series EPYCs, which can overtake them in core count very easily and have gobs of PCIe lanes? I'm newish to the homelab game, but you can get a 64-core EPYC 7000-series, a board, and the same RAM and still spend less than the price of that entire chassis (or two for something like $2,500). I agree that the Xeon Scalable CPUs are very frustrating to navigate compared to the previous generations. On a one-to-one basis in a different chassis, Xeons are crazy cheap compared to EPYCs. Gotta say, I love Supermicro's IPMI compared to the iDRAC 6 and 7 series. The desktop app is really cool too.
@Patric-Kole • 1 month ago
10:20 The NVMe daughterboard has the two plastic hold-downs for tool-less install. You can see them on the right side of the PCB.
@tomklein6540 • 1 month ago
Nutanix! 🎉 You can even do that with Terraform 😅 Not sure about CE though.
@xigideht • 1 month ago
I installed a couple hundred of these nodes in Nutanix trim - the rails weren't your standard ball-bearing sliders; instead they had a little shelf the bottom of the node sat on, with retention once the node was fully inserted. I haven't been able to find a picture of what I'm looking for. I think what you will find when you get to testing Nutanix CE is that 128GB of memory might be a little light when all-SSD, as I think the minimum CVM memory for all-SSD was 32-48GB, which won't leave a lot of space for your workload. But in that configuration you will have some very crazy IOPS/disk performance. I would also recommend balance-slb or balance-tcp network bonds instead of physically separating the VM vs. storage networks.
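A back-of-the-envelope check of that memory point, taking the commenter's 32-48 GB CVM figure at face value (the hypervisor overhead below is my own rough assumption, not a Nutanix-documented number):

```python
node_ram_gb = 128
ahv_overhead_gb = 8          # rough allowance for the hypervisor itself (assumption)

for cvm_gb in (32, 48):
    left = node_ram_gb - cvm_gb - ahv_overhead_gb
    print(f"CVM {cvm_gb} GB -> about {left} GB per node left for guest VMs")
# CVM 32 GB -> about 88 GB per node left for guest VMs
# CVM 48 GB -> about 72 GB per node left for guest VMs
```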
@xigideht • 1 month ago
Found the style of rails I was looking for, if not the actual part number: MCP-290-41803-0N. Looks like they're documented for 4U chassis, but this was the style that shipped with the 2U 4-node chassis that I installed.
@2GuysTek • 1 month ago
This right here is why I love doing this! You rock! Thank you for the info!
@xigideht • 1 month ago
@@2GuysTek You're welcome - Nutanix has some publicly available best practices documentation on their support page. Searching "AHV Networking best practices" should get you to the public side of their documentation. If you have any questions I can try to help - Happy Labbing.
@kerbmaster2000 • 1 month ago
Cluster Plex! Would love to see the scalability. Just not sure that those CPUs have Intel Quick Sync.
@nicholasl.4330 • 1 month ago
Welcome to the blade family! There's (probably less than) dozens of us! I won't ask you to check out Hyper-V, but I will ask you to check out its bigger, buffer brother, Azure Stack HCI. Can't beat a 60-day free trial.
@terryjohnson3100 • 1 month ago
I was thinking Hyper-V failover cluster... Come to the dark side!
@2GuysTek • 1 month ago
I'll give it a test!
@markarca6360 • 1 month ago
Other options are XCP-ng or Proxmox VE.
@Fred_vBrain • 1 month ago
I wonder how much power this monster draws?
@tad2021 • 1 month ago
Getting the wrong rails from eBay sellers is super common. I have several boxes full of useless rails I've gotten. Only twice have I gotten a server that either didn't have rails or had the wrong rails, and the box-o-rails delivered. I don't know what it is with Supermicro, but their rail kits seem to be hyper-specific to single models.
@poddo974 • 1 month ago
It seems good; do you know if it is possible to install Nutanix CE?
@2GuysTek • 1 month ago
100% and we'll be testing it!
@BillLambert • 1 month ago
I used to run fat stacks of Xeons in my lab, but I've been replacing them with single-socket AMD EPYC systems, and in some cases mini PCs, because newer CPU architectures are dramatically faster and more power-efficient than old Xeons, and having only one socket means far fewer SMP/NUMA speedbumps. And the noise... just one of my old Xeon Supermicros, even with the optimized fan setting, is louder than my other 5 nodes and all the switches combined. I don't wanna go back to the jet engines.
@MohamedBelgaiedHassine • 1 month ago
It would really be great if you tried out Harvester HCI from SUSE. I am a huge fan of this open-source project, and this kind of sizing is perfect for a Harvester use case.
@zachradabaugh4925 • 1 month ago
I haven't seen it mentioned yet, but what about OpenStack?
@LampJustin • 1 month ago
Yeah, I'd love to see that, but I'm sure that's too big of an ask. I was thinking about asking the same, but that's just too much work right now. Hopefully MicroStack will at some point evolve to a usable state, but thus far it's full of bugs. If that becomes stable, OpenStack will be far more approachable for many. Kolla Ansible works quite well, too, but it needs experience or time. If you're experienced with Kubernetes, Atmosphere would be a good solution, too. But again, you'll need a lot of knowledge.
@logicprohacks • 27 days ago
Run a Minecraft Server on each node!
@davysprocket • 1 month ago
I'd love to see a Proxmox cluster with Ceph and a bunch of VDI VMs running off of it with GPU passthrough. It would make 16 or even 32 very high-performance workstations if you can slice up the GPUs enough. Proxmox templates would help build all of those instances.
@justwhyamerica • 1 month ago
Since you have a big enough cluster, you should try Rancher HCI with Longhorn.
@truckerallikatuk • 1 month ago
I would dearly love more 3647 kit in my homelab, and this machine isn't it... but the boards are NOT inexpensive by a long way, unlike the chips.
@johndoughto • 6 days ago
Next videos on Nutanix and Proxmox + Ceph!!! This was a great video!
@2GuysTek • 6 days ago
I’m workin on the Nutanix install as we speak!
@amosgiture • 1 month ago
Everything was impressive apart from the 2133MHz RAM. $27? What a deal!
@PatrickWang-d8i • 27 days ago
The video, the video description, and the link have conflicting info: "SuperMicro 2029BT-HNC0R" vs "Supermicro 2029BT-JNC04" vs "2029BT-HNC04" :-?
@JoeVSvolcano • 1 month ago
nice work! Now build an "absurd" home AI server 😀
@jtiptit • 1 month ago
I’d want to see XCP-ng, but specifically with their newly officially supported XOSTOR. Their equivalent to vSAN.
@MarkRouleau • 1 month ago
Hyper-V! GO GO!
@nonametrackz7887 • 1 month ago
Why is it staring at me? 11:40
@TVJAY • 1 month ago
XCP-ng and Nutanix CE
@2GuysTek • 1 month ago
It's on my list!
@renderwood • 1 month ago
You're crazy ;) I was considering similar old server HW, but maybe I'll just enjoy your suffering while buying new regular HW instead.
@CalvinHenderson • 1 month ago
Incus and Kubernetes clustering. Incus is the relatively new player on the block in containerization, BUT it has a solid foundation.
@Wayne_Mather • 1 month ago
OpenShift and other HCI offerings might be interesting too.
@xust • 1 month ago
Maybe Apache CloudStack?
@smittyjones8755 • 1 month ago
"Don't offer $50 below asking." Dude, if you're asking $500, I'm going to offer $350, and you counter with your bottom dollar. If you get offended, hey, I'll take my money elsewhere. If you select the option to accept offers, you'd better be prepared for that.
@yaronilan2317 • 1 month ago
What is the difference between this "4-node server" and the old blade server technology that is hardly used anymore? Don't you actually have here a blade enclosure with 4 server blades that, instead of calling it 'blade technology', you call a '4-node server'?
@2GuysTek • 1 month ago
Great question, and I can understand the confusion since they're very similar. Typically speaking, a multi-node server like the one I'm showcasing has independent nodes with their own IPMI management, storage, RAM, etc. A blade chassis will typically have a single centralized management controller to manage the blades. Blade chassis can also have shared storage and networking built into the backplane of the chassis, where a multi-node server does not. Blade servers are also seriously dense, requiring a lot more power and cooling because there's a lot more compute in them, and they're a lot more expensive compared to a multi-node server. In short, a multi-node chassis is basically a simpler version of a blade chassis, with the difference being that each node is fully independent.
@CineTechGeek • 1 month ago
Power consumption??? The only issue you missed covering.
@2GuysTek • 1 month ago
I plan on getting some real-world numbers out there once I get the host under load! Thanks for the comment!
@Michaelmaertzdorf • 1 month ago
Azure Stack HCI
@markkoops2611 • 1 month ago
Hyper-V, Hyper-V!
@incandescentwithrage • 1 month ago
Monthly reboots, monthly reboots
@markkoops2611 • 1 month ago
Your WAF quota might require consolidation and selling older gear to recover some $ if she sees this😂
@2GuysTek • 1 month ago
😂
@napalmsteak • 1 month ago
It may not help but you should tell your wife you got a bargain. I bought a Dell T640 18-Bay new in 2016 and it was $3,500. It came with one single 1TB HDD, a 10-core Xeon Silver 4114, and 32GB of DDR4 PC2666.
@druxpack8531 • 1 month ago
You are killing me with only 10GbE of bandwidth between them. If you've spent this much already, get 4 100GbE cards and a MikroTik CRS504-4XQ-IN... or at least 4 more 10/25GbE cards and cross-connect everything for storage and/or replication.
@2GuysTek • 1 month ago
I agree! And I expect to move them over to 25GbE when I can move some workloads off the UniFi Agg Pro switch.
@VivaldiJean • 1 month ago
Unraid or Proxmox
@pittfallgames • 1 month ago
First!
@OGParzoval • 1 month ago
HarvesterHCI