I really like your deep dives into these topics. You're one of the few YouTubers I've seen who actually knows the material being presented...
@dominick253 7 months ago
Apalard's Adventures is really knowledgeable as well.
@andrewjohnston359 5 months ago
@@dominick253 True, and Wendell from Level1Techs.
@wecharg 8 months ago
Thanks for taking my request, that was really cool to see! I ended up going with Ceph, but this is interesting and I might use it in the future! -Josef K
@zyghom 8 months ago
Imagine: I only use mirrors and stripes, but I'm still watching ;-)
@makouille495 8 months ago
How on earth do you manage to make everything so crystal clear for noobs like me, haha. As always, quality content and quality explanations! Thanks a lot for sharing your knowledge with us! Keep it up! 👍
@FredFredTheBurger 8 months ago
Fantastic video. I really appreciate the RAIDZ3 9-disk + spare rebuild times, and the mirror rebuild times. Right now I have data striped across mirrors (two mirrors, 8TB disks) that is starting to fill up, and I've been trying to figure out the next progression. Maybe a 15-bay server: 10 bays for a new Z3 + 1 array leaves enough space to migrate my current data to the new array.
@TheExard3k 8 months ago
If I had like 24 drives, I'd certainly use dRAID. Sequential resilver... just great, especially with today's drive capacities.
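For anyone wanting to try this, a dRAID layout is spelled out directly in the vdev type when the pool is created. A sketch, assuming 24 disks with placeholder device names and a hypothetical pool name of `tank`:

```shell
# Create a pool with one dRAID2 vdev: 2 parity, 8 data disks per
# redundancy group, 24 children total, 2 distributed spares.
# /dev/sd{a..x} expands to 24 placeholder device names.
zpool create tank draid2:8d:24c:2s /dev/sd{a..x}

# Verify the layout; after a disk failure, the same command shows
# the sequential rebuild onto a distributed spare.
zpool status tank
```

The distributed spares are what make the sequential resilver fast: every surviving disk contributes reads and writes to the rebuild instead of a single hot spare being the bottleneck.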
@boneappletee6416 7 months ago
This was a very interesting video, thank you for the explanation! :) Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊
@awesomearizona-dino 8 months ago
Upside down construction picture?
@ElectronicsWizardry 8 months ago
I didn't realize the picture looks odd in the video. The part of the picture that is visible in the video is a reflection, and the right-side-up part of the picture is hidden.
@Spoolingturbo6 4 months ago
@2:15 Can you explain how to set that up, or give a search term to look it up? When I installed Proxmox, I split my 256GB NVMe drive into the following GB sizes (120/40/40/16/16/1/.5) (main, cache, unused, metadata, unused, EFI, BIOS). I knew about this, but I'm only now at the stage where I need to use metadata and small files.
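For what it's worth, the feature at that timestamp is a ZFS "special" allocation class vdev; good search terms are "ZFS special vdev" and "special_small_blocks". A sketch, assuming a hypothetical pool named `tank` and placeholder partition names:

```shell
# Add a special allocation class vdev, which holds pool metadata
# (and optionally small file blocks). Mirror it: if the special
# vdev is lost, the whole pool is lost.
zpool add tank special mirror /dev/nvme0n1p4 /dev/nvme1n1p4

# Optionally route file blocks up to 64K to the special vdev too.
# The default of 0 means metadata only.
zfs set special_small_blocks=64K tank
```

Note that only newly written metadata and small blocks land on the special vdev; existing data has to be rewritten to migrate.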
@Mikesco3 8 months ago
I'm curious if you've looked into Ceph.
@ElectronicsWizardry 8 months ago
I did a video on a 3-node cluster a while ago and used Ceph for it. I want to do more Ceph videos in the future when I have hardware to show Ceph and other distributed filesystems in a proper environment.
@andrewjohnston359 5 months ago
@@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox + Ceph cluster that aren't homelabbers running either nested VMs or very underpowered hardware as a "proof of concept", and once it's set up, the video finishes! I have in the past built a reasonably specced 3-node Proxmox cluster with 10Gb NICs and a mix of SSDs and spinners to run VMs at work. It was really cool, but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and emulating a production environment with a decent handful of VMs running would be amazing to see!
@severgun 7 months ago
Why are the data sizes so weird? 7, 5, 9? None of them is divisible by 2. Why not 8d20c2s? Because of the fixed stripe width, I thought it would be better to comply with the 2^n rule. Or am I missing something? And how does compression work here?
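One way to see the 2^n concern: a record is split across the data disks of a stripe, and because dRAID allocates in whole fixed-width stripes, a data width that doesn't divide the record evenly leaves padding (which also eats into compression savings, since compressed records are padded up to a full stripe). A rough back-of-the-envelope illustration, my own arithmetic rather than anything from the video, assuming a 128 KiB recordsize and 4 KiB sectors:

```python
# Rough illustration: padding when a 128 KiB record is laid out
# across dRAID data widths with 4 KiB sectors (ashift=12).
# dRAID allocates whole stripes of data_width sectors each, so
# widths that don't divide the record evenly waste space.
import math

RECORD = 128 * 1024   # 128 KiB recordsize
SECTOR = 4096         # 4 KiB physical sectors

def stripe_overhead(data_width, record=RECORD, sector=SECTOR):
    sectors_needed = math.ceil(record / sector)
    stripes = math.ceil(sectors_needed / data_width)
    allocated = stripes * data_width * sector
    return allocated - record  # padding bytes

for d in (5, 7, 8, 9):
    print(f"{d}d data width: {stripe_overhead(d)} bytes padding")
```

With these assumptions only the power-of-two width (8d) comes out with zero padding, which is the gist of the 2^n intuition; widths like 5, 7, or 9 each strand a stripe's worth of partial space per record.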