The Perfect Sorting Algorithm?? Block Sort Explained (Wiki Sort, Grail Sort)

  37,610 views

Kuvina Saydaki

1 day ago

In this video, I explain the sorting algorithms sqrt sort, block sort, wiki sort, and grail sort. I used arrayV for the visuals for wiki sort and grail sort. If you want to learn more about them, then here are some resources:
Sqrt sort:
My version is basically the original block sort without the buffers, but the version on arrayV seems to be grail sort without the buffers.
Original block sort:
It comes from this 2008 paper
itbe.hanyang.ac.kr/ak/papers/t...
Wiki sort:
The Wikipedia article on block sort describes wiki sort, but only ever calls it block sort
en.wikipedia.org/wiki/Block_sort
And here's the github project of wiki sort
github.com/BonzaiThePenguin/W...
Grail sort:
Here's the github project of grail sort rewritten, which is part of the larger holy grail sort project
github.com/HolyGrailSortProje...
And I figured out grail sort from this google translated article (the original is in Russian)
habr-com.translate.goog/en/ar...
Chapters:
0:00 Intro
4:48 Outline
7:20 Sqrt sort
11:52 Original block sort
18:18 Wiki sort
23:44 Grail sort
30:47 Outro
#math #computerScience #sorting #algorithm

Comments: 139
@Kuvina 6 months ago
CORRECTIONS: none so far, but please tell me if there are any mistakes. I apologize for the long wait on this one. For most of October, I was working on a different video (hint: it's chemistry related), but I eventually realized it would take way too long to finish that, so I switched to block sort. And then I've been sick for most of November so far. My views are a bit down, so anything you can do to help the channel is very much appreciated. Anyway, thank you for your patience and continued support!
@ghostchris519 6 months ago
Try making more videos on simple things expanded upon, I'd say, for more views. Your colors, first sorting video, and Platonic solids video were hits, and they all follow the "here's a basic topic you already know about, I'm about to explain something complicated you didn't know about it" formula.
@Kuvina 6 months ago
Thank you! That is some really smart advice!
@pokemonjourneysfan5925 6 months ago
I have a question: can we create, say, a joke sorting algorithm that runs in prime number time? That means the number of operations is O(p_n), where p_n is the nth prime. It is proven that p_n ~ n*(log(n)+log(log(n))-1) for n>=6. Note that here log is the natural log, not base 2.
@Kuvina 6 months ago
@@pokemonjourneysfan5925 that's really interesting! I've never seen a sorting algorithm that is O(n log log n), but I've heard it's theoretically possible if the input is only integers.
@pokemonjourneysfan5925 6 months ago
@@Kuvina More specifically I said, it runs in Prime number time. p_n is the nth prime.
@DeadCatX2 6 months ago
I think the perfect sort would exploit CPU architecture, and therefore have optimal temporal and spatial locality prioritized over time and space complexity. I also think it would benefit from considering registers and cache as space that doesn't technically count against the "in-place" constraint. An nlogn algorithm that makes better use of locality will easily outperform one with more unpredictable memory access
@DFPercush 6 months ago
I don't think local variables count against space complexity, as long as they're not arrays of a size dependent on n. You have to be able to swap stuff, after all. All the pointers and partitions, etc are part of that.
@Lossanaght 6 months ago
​@@DFPercush en.wikipedia.org/wiki/XOR_swap_algorithm :P
@justineberlein5916 6 months ago
I mean, they technically do. But if it's just "And a temporary variable for swapping things", that's O(1) extra space. Contrast with a normal merge sort that needs a buffer the size of one of the blocks to be merged, which is half the size of the array or O(n)
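The XOR swap linked in this thread really does avoid even the one temporary variable. A minimal Python sketch (the function name `xor_swap` is mine; the trick only applies to integers, and in a language like C it also silently fails when both operands are the same memory location):

```python
def xor_swap(a, b):
    """Swap two integers using XOR, with no temporary variable."""
    a ^= b  # a now holds a XOR b
    b ^= a  # b = b XOR (a XOR b) = original a
    a ^= b  # a = (a XOR b) XOR original a = original b
    return a, b
```

In practice Python's `a, b = b, a` (or a plain temporary) is clearer, and either way it's still only O(1) extra space.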
@johaquila 6 months ago
What you propose would obviously lead to enormous savings when sorting, for example, a huge number of integers on a specific machine. But I think what you usually have is at most a few thousand elements sorted on a more or less random CPU (little more than the architecture being fixed) with somewhat random cache sizes and the actual comparisons delegated to a function or macro of variable complexity. In that case you don't know what to optimize for, and whatever happens during the comparison calculations may well completely destroy your locality. The usual metric of number of comparisons is exactly right in this case.
@Musicombo 1 month ago
Very, very good explanation of Grailsort. I still plan on posting my own explanations in the future, but it's great that yours is available on the net for now! I'll be sure to give you credit. We also will continue developing Holy Grailsort sometime in the near future!
@StefanReich 6 months ago
I thought this was a project out of academic curiosity, but apparently Wikisort is extremely competitive in real world applications. Very cool
@dead-eyedarrel3878 6 months ago
I swear Kuvina follow-up videos are my most highly anticipated YouTube content.
@darqed 6 months ago
best sorting series on youtube
@gfasterOS 6 months ago
I kinda want to see how this performs in terms of cache efficiency. It seems like this uses more operations than Quicksort, but if it is way better with cache efficiency then it might be useful outside of embedded cases.
@henriquefinger935 6 months ago
My beloved HeapSort shall be avenged.
@kered13 6 months ago
From benchmarks I recall seeing (a while ago, no links, sorry), they're surprisingly competitive given the complexity involved. I suspect that a good quicksort will be faster on random data, but given the drawbacks of quicksort (unstable, hard to pick a good pivot for non-random data), they are probably worth considering. I believe wikisort is now used in some standard libraries.
@peceed 6 months ago
quicksort is unbeatable considering cache efficiency.
@batlin 6 months ago
Yes, a good set of benchmarks would help. Performance will depend on the size of the input and blocks, and which variant of block sort of course. There are some benchmarks for block sorts like quadsort, octosort and wikisort vs glibc qsort and std::stable_sort. Quadsort wasn't quite a fair comparison since it seems to use extra memory -- it is very fast but the author acknowledges "Quicksort has a slight advantage on random data as the array size increases beyond the L1 cache". Octosort seemed comparable to qsort for fully random order but much faster for a few scenarios (esp. already sorted ascending/descending). Wikisort is similar, but may not be as useful in embedded / low-memory environments because it slightly bends the "rules" by using a fixed-size 512-item cache -- this is still technically O(1) space because it's a constant size cache, but may not be what you want.
@simonwillover4175 6 months ago
This is very hard for me to understand. However, I think this video does an amazing job teaching it. I am just having a hard time because this is my first time ever encountering most of these ideas. I will definitely want to watch this multiple times.
@ghostchris519 6 months ago
Long live the piano intro
@Tarou9000 6 months ago
I think it's from a video called something like "the song of the world", but I'm sure it's from a channel called "midi music"
@cowcat8124 6 months ago
07
@Tarou9000 6 months ago
kzfaq.info/get/bejne/qrFhgJlom8eUh40.htmlsi=7BXe6TAdIIGSdQmX
@Z_Inspector 6 months ago
RIP
@haipingcao2212_. 1 month ago
-yes😂bo-
@immigrantmammoth5162 6 months ago
finally, the return of enby math person
@michawhite7613 6 months ago
I found that selection sort can be surprisingly useful as a lazy sort. I don't know how many values I'll need, but I usually don't need to sort the entire list. I usually just need the two smallest values.
@Milan_Openfeint 6 months ago
That task is usually solved by quicksort, you just don't sort both halves recursively, only the half containing the k-th element to find k lowest elements.
@AubreyBarnard 6 months ago
Which is known as "quick select".
@HansLemurson 6 months ago
Alternately, a Heap automatically maintains the smallest value at it's head, and has log(n) time for getting the next smallest item.
@michawhite7613 6 months ago
@@Milan_Openfeint I haven't benchmarked it, but the worst case time complexity for this algorithm is O(n²). My worst case time complexity is O(n). The only benefit I can think of is that it would make the second selection O(1). But that seems overkill for just grabbing two elements. In all honesty, I did think about trying quicksort, but the thing that stopped me was that the one library I found for it used a lot of memory allocations, which was already taking most of my time.
@yuantan9292 6 months ago
If you just need to find the k-th largest/smallest value (in your case k=2), consider using quick select. Quick select is O(n), in-place, though not stable. If you write C++, it is called "std::nth_element" The resulting list's first k elements will be the k smallest elements, and the k-th element in the list will be exactly the k-th smallest element. If you then need the first k elements sorted, you only need to sort the first k-1 elements however you want, or if k
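As a sketch of the quickselect idea this thread describes (hypothetical Python, not the actual `std::nth_element` implementation — and note this version uses O(n) extra space for clarity, whereas the real thing partitions in place):

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-indexed) in expected O(n) time:
    partition around a random pivot, then recurse into only one side."""
    pivot = random.choice(items)
    lows = [x for x in items if x < pivot]
    highs = [x for x in items if x > pivot]
    n_pivots = len(items) - len(lows) - len(highs)
    if k < len(lows):
        return quickselect(lows, k)
    if k < len(lows) + n_pivots:
        return pivot
    return quickselect(highs, k - len(lows) - n_pivots)
```

For the original use case in this thread (just the two smallest values), Python's `heapq.nsmallest(2, items)` does the same job in O(n log k) time with no recursion at all.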
@itellyouforfree7238 6 months ago
Wow, what an extremely well made video! You summarized the key ideas very well! Thanks!
@giuseppefusco1189 5 months ago
Loved the Technetium 99m joke
@yaksher 6 months ago
I would say that quick sort is an "in place" sorting algorithm. I've never heard of in place as being defined as specifically O(1) and log n space complexity is... well, if you want your sort to ever actually terminate, your logarithm better be less than, say, 64 lol.
@paradiseexpress3639 6 months ago
LETS GOOOOOOOOOO! A NEW Kuvina Saydaki video!!!!!!!!!!!!
@lilyyy411 6 months ago
It's strange that most of the enbies that i've seen are all obsessed with sorting algorithms
@m_affiliates 1 day ago
as an enby i can confirm
@Lucas-pj9ns 6 months ago
Thanks for the cool video! How do you get the distinct elements in place?
@Kuvina 6 months ago
I think it's basically like rotation based merging, but in reverse. So in the algorithms where the left sublist is already sorted, you just start at the beginning and repeatedly find the first piece that is greater than the last one you used, and then rotate it into the next spot in the buffer. As for grail sort, you can use basically binary insertion sort, but if you find that the piece is exactly equal to one of the ones already in the buffer, you simply skip it and don't insert that one.
@jimiwills 6 months ago
It's like a brain hug for 30 mins ❤❤❤
@General12th 6 months ago
This is incredible.
@nebularzz 6 months ago
YESS I WAS WAITING FOR THIS VIDEO!!!!
@George70220 6 months ago
Amazing video ❤
@mlvluu9836 1 month ago
9:16 Could this be done with the individual items rather than with blocks?
@peterreichg 6 months ago
In wikisort, how does it find the A block with the smallest tag value? Does it kind of selection sort it (check the second value of every A block, then block swap the first block with the one with the smallest tag value) or does it use a better method?
@Kuvina 6 months ago
Honestly I'm not sure. I don't know how you could do it faster, but I wouldn't be surprised if they found a way.
@xcoder1122 6 months ago
Note that there is a simple variant of mergesort that is in-place; it's just not stable. Still, it has advantages over quicksort: it requires less memory and it guarantees O(nlogn), which quicksort does not. Quicksort is only typically O(nlogn), and unless you use a special variant of it, there are cases where it clearly is not, whereas mergesort is always O(nlogn) no matter the input. Of course, you could as well use heapsort instead of that mergesort variant if all you wanted was in-place and O(nlogn). But at some point it also boils down to the fact that two algorithms aren't equally fast just because they are both O(nlogn); speed is often the most important factor of all, and it's not covered at all by the introduction table (complexity is not speed). Last but not least: "in-place" is a very stretched term. If an algorithm requires auxiliary structures, e.g. an array of O(log n) bits, that is not truly in-place; it just swaps stack memory for a bit array, which is a reduction of extra space, but it is still extra space.
@Kuvina 6 months ago
Thanks that's a good point! I should have mentioned just because it checks the boxes doesn't mean it's necessarily the most practical algorithm for every purpose!
@xcoder1122 6 months ago
@@Kuvina Well, big-O notation is always very deceiving. I usually tell people the story of how I required a lookup table for a network protocol implementation in a system kernel. What did I choose? A hashtable, of course, as it is O(1). Years later, just out of curiosity, I wanted to see what happens if I replaced the hashtable with a sorted array instead, where lookups are O(logn), of course (binary search). And the result was: the sorted array gave significantly better performance. Of course, there is a turning point. The array will get slower the more elements it has, whereas the hashtable will roughly stay the same. I was able to calculate the turning point, and later benchmarks confirmed that calculation to be correct. But that turning point was irrelevant, as it was way beyond 256 entries, and the lookup table was bound to at most 256 entries (the table entry count was stored in a byte) and would typically have anything between 4 and 64 entries in real-world scenarios. So just because two algorithms are both O(nlogn) and both have O(1) space complexity doesn't mean it's totally irrelevant which one you pick. If there were no advantages and disadvantages, then there would be no reason why different variants even exist, right?
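The sorted-array alternative described above can be sketched with Python's `bisect` module (the helper names here are illustrative, not the original kernel code):

```python
import bisect

def make_table(pairs):
    """Build a lookup table as two parallel arrays sorted by key."""
    pairs = sorted(pairs)
    return [k for k, _ in pairs], [v for _, v in pairs]

def lookup(keys, values, key):
    """O(log n) binary-search lookup; for small tables this can beat a
    hashtable thanks to cache locality and no hashing overhead."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    return None
```

The trade-off is exactly the turning point described above: lookups get slower as the array grows, while a hashtable stays roughly flat.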
@killerbee.13 6 months ago
@@xcoder1122 When it will have at most a small bounded number of elements, like 256, it's also worth considering just using a bitmap + the data (if the data has some kind of null state, you can also avoid the bitmap) and get lookup in a single pointer offset. It uses a bit more memory, but if this is a persistent structure that multiple copies won't exist of that's not much of an issue in most cases. Unless you're using a 16-bit microcontroller or something, anyway.
@ppantnt 6 months ago
At 3:03 I don't quite understand why rotating sublists of length sqrt(n) has a time complexity of O(sqrt(n)). Like, if you try to rotate the left sublist into place, wouldn't it take time to shift the elements to the left?
@vgtcross 6 months ago
Each time you rotate, you need to move O(sqrt n) elements right and possibly O(n) elements left. This is done O(sqrt n) times in total, so it seems like it will take O(n sqrt n) time in total. But notice that each element is shifted left only once. This means that the time complexity of shifting elements left is O(n) in total. We shift O(sqrt n) elements right a total of O(sqrt n) times, so the time complexity of that is O(n) in total. Thus, the full merge process is linear.
@ppantnt 6 months ago
@@vgtcross But how should this algorithm be implemented? Like, keep shifting the next element after the sublist until reaching the destination? But then wouldn't it get shuffled and need to be re-sorted every time it shifts?
@Kuvina 6 months ago
So a rotation of 2 sublists can be done in linear time (compared to the length of the 2 sublists) if you simply flip each one then flip them together. This moves each piece twice, and there are faster ways that move each piece once, but the same time complexity.
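The flip-based rotation described here can be sketched as follows (a hypothetical helper, not ArrayV's code): reversing each block and then the whole span swaps two adjacent blocks in time linear in their combined length, with O(1) extra space.

```python
def _reverse(a, i, j):
    """Reverse a[i:j] in place."""
    j -= 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

def rotate(a, lo, mid, hi):
    """Swap the adjacent blocks a[lo:mid] and a[mid:hi] in place
    by reversing each block, then reversing the whole span."""
    _reverse(a, lo, mid)
    _reverse(a, mid, hi)
    _reverse(a, lo, hi)
```

Each element is touched exactly twice, matching the "same time complexity" point above.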
@haipingcao2212_. 3 months ago
Welcome back
@etooamill9528 6 months ago
I thought quicksort (choose a pivot p, move elements smaller than p to the left and bigger to the right, split the array at p, and do the same on the two halves recursively) was in place? Am I confusing it with another algorithm?
@canaDavid1 6 months ago
It requires O(logn) stack space for the recursion
@animowany111 6 months ago
@@canaDavid1 And it's not guaranteed to be O(logn) time (nor space) complexity, that depends on choosing "good enough" pivots
@sonicwaveinfinitymiddwelle8555 6 months ago
Now get ready for... *EVOLUTIONARY SORT!!!*
> This sort trains an artificial intelligence to learn how to sort stuff like humans do.
> The AI will learn its own way to sort.
> After the sort is completed, the AI will be destroyed.
Yes, I made this thing completely up and I have no intention of trying to replicate it as a real sorting algorithm.
@simonwillover4175 6 months ago
In the future, we will probably have 1 quadrillion parameter AIs that are actually useful for this. Maybe we will have them sorting something that can not be sorted easily, like quatum entangled particles.
@sonicwaveinfinitymiddwelle8555 6 months ago
If we are going to have AIs that strong, I don't think sorting anything will be a problem @@simonwillover4175
@user-qm4ev6jb7d 5 months ago
That's basically the "Universal Search" algorithm. Ironically, it's the "absolute fastest" of all algorithms from the perspective of complexity theory, even though it's horribly slow. That's because the time to build "the AI", or whatever you call it, is CONSTANT, and thus negligible compared to O(n).
@simonwillover4175 5 months ago
@@user-qm4ev6jb7d so if we had to sort like a list of 10^1000 items, the evolution sort would probably be pretty effective!
@alonamaloh 5 months ago
I didn't follow every detail of the video but, don't all these algorithms still require O(log(n)) space on a call stack, due to their recursive nature?
EDIT: Never mind. I think these are not recursive in nature. I'm going to have to implement these ideas to fully understand them. Awesome video!
@kelema2097 6 months ago
Welcome back Kuvina!
@CharlesVanNoland 6 months ago
"Auxiliary" is the WOTD for me :P
@JamesSjaalman 6 months ago
The blocking assumes a fixed record size. For uneven record lengths (CSV files ...) the rotate/swap just doesn't work. [Or you would need a separate layer of "pointers", which would result in terrible locality.]
@paulstelian97 6 months ago
Most sorting algorithms don't work well if you can't arbitrarily swap elements.
@andytroo 6 months ago
for uneven record lengths you can store a fixed length header followed by a pointer to the rest - most of the comparisons are decided by the header.
@andytroo 6 months ago
alternatively you can treat the header ties as their own sub-sorting problem - rows with the same header get sorted together, then you can do a second pass to 'tie-break' grabbing a second header chunk out of each group of ties.
@StefanReich 6 months ago
Is space complexity O(n) actually a big deal? Sounds pretty okay to me. It's just temporary storage.
@paulstelian97 6 months ago
It can be with hyper large lists, like more than would fit into RAM (external sorting). External sorting is another can of worms anyway. But it would likely be something like doing a sort on blocks of a certain size where a standard algorithm would be able to do it in RAM, then use extra space and in a streaming fashion perform a (potentially n-way) merge sort.
@iwersonsch5131 6 months ago
Does binary search insertion sort count as O(nlogn)? It only has O(nlogn) comparisons and O(n) writes, but each write is of size O(n) which I've been told may mean that each write takes O(n) time
@Kuvina 6 months ago
so the thing about insertion sort is that even though you can find a piece's destination within the sorted section in O(logn) time, to actually get it there, you have to shift everything over. On average, you have to shift over half the sorted section, which is on average half the list, so inserting 1 piece, even with a binary search is O(n) per piece, therefore in total it's O(n^2), the same time complexity as regular insertion sort. (It does have fewer operations though, just not a time complexity difference)
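A small Python sketch of what's described here — the binary search cuts comparisons to O(log n) per element, but the shift keeps each insertion O(n), so the sort stays O(n^2) overall:

```python
import bisect

def binary_insertion_sort(a):
    """Stable insertion sort using binary search to find each destination."""
    for i in range(1, len(a)):
        x = a[i]
        # O(log i) comparisons to find where x belongs among a[0:i]
        # (bisect_right keeps equal elements in order, preserving stability)
        pos = bisect.bisect_right(a, x, 0, i)
        # ...but shifting everything over is still O(i) moves per insertion
        a[pos + 1:i + 1] = a[pos:i]
        a[pos] = x
    return a
```

So the comparison count improves, but the move count, and therefore the time complexity, does not.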
@nullplan01 6 months ago
According to Andrei Alexandrescu, binary search insertion sort is also worse experimentally, since the branches are less predictable. In the linear insertion sort, the CPU can learn that most of the time, the inner loop is continued, but in binary search insertion sort, you can take the upper or lower branch seemingly at random. Branch prediction matters!
@iwersonsch5131 6 months ago
@@nullplan01 That's weird. How would e.g. 15 unpredicted outcomes be worse than 8000 loops?
@nullplan01 6 months ago
@@iwersonsch5131 if the branch is mispredicted, the effect is a pipeline flush, which can be as bad as doing nothing for hundreds of cycles. Plus Kuvina already explained that you still need the linear move operation.
@iwersonsch5131 6 months ago
@@nullplan01 So linear insertion sort has 1 pipeline flush per entry, while binary insertion sort has about lb(n)/2 pipeline flushes per entry. A middle ground could be to break up the sorted list into linearly searched blocks of, say, sqrt(n) (or for very large n, n^(1/3) and n^(2/3)), resulting in 2 pipeline flushes per entry, or 3 for very large n, while keeping the comparisons for each entry below 1000*log_1000(n)
@dylanherrera5395 6 months ago
one question, isnt O(nlogn) slower than O(n) at higher sizes of n?
@killerbee.13 6 months ago
O(n log n) is always higher than O(n), but it doesn't matter because it's optimal for comparison sorting; O(n) comparison sorting doesn't exist. Also, log n grows so slowly that the difference barely matters anyway. By the time log n is appreciably large, you're hopefully going to be using a parallelized sort instead.
@pandaqwanda 6 months ago
nice video !
@FishSticker 6 months ago
Could you do a proof of why nlogn is optimal, and not… say, loglogn
@Kuvina 6 months ago
I don't know the proof, but that's a good idea!
@FishSticker 6 months ago
@@Kuvina awesome!
@canaDavid1 6 months ago
There are n! possibilities that the array can be in. Any comparison only gives 1 bit of information, but one needs log(n!) bits in total. n! ≈ n^n, so log(n^n) = nlog(n) is a lower bound on the number of comparisons needed to figure out what the permutation is.
@jetison333 6 months ago
@@canaDavid1 n! isn't about equal to n^n though; in fact n^n > n!.
@YourAverageLink 6 months ago
@@jetison333 log(n!) / log(n^n) asymptotically approaches 1 though, so they both evaluate to n log n in O notation
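The decision-tree bound discussed in this thread can be checked numerically (a small sketch; the function name is mine): any comparison sort needs at least ceil(log2(n!)) comparisons in the worst case, since a binary decision tree distinguishing n! permutations must have at least n! leaves.

```python
import math

def comparison_lower_bound(n):
    """Minimum worst-case comparisons to sort n distinct items:
    the decision tree must have at least n! leaves, so its depth
    is at least ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))
```

For n = 5 this gives 7, and five elements can indeed be sorted in seven comparisons (merge-insertion sort), so the bound is tight there; Stirling's approximation makes log2(n!) grow like n log2 n.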
@put4558350 6 months ago
If you have some idea of "possible" new sorting that you think really good and want to show to the world. What to do ?
@Milan_Openfeint 6 months ago
Implement, benchmark, give up.
@put4558350 6 months ago
@@Milan_Openfeint Well, today's sorting is the result of years of optimization; can't beat that easily. My idea sorts everything in two parts: read from the main array once and get multiple sorted lists, then combine those lists back into the main array. Part 2 is multi-threadable, and the result is stable. ... With a big memory usage problem.
@halneufmille 4 months ago
I thought Quick sort was in place too. Is it not in place because it uses recursion?
@Kuvina 4 months ago
Quick sort is O(logn) space due to recursion, which is sometimes considered in place, but usually only O(1) is considered in place
@flameofthephoenix8395 6 months ago
1:39 Hm, shouldn't it be as simple as iterating through each item in the list once to make it stable again?
@EchoHeo 6 months ago
cool video
@ladyravendale1 6 months ago
Banger vid
@knotsuchan 6 months ago
i enjoyed this
@Kuvina 6 months ago
Thank you knots and Himari!
@iamsushi1056 6 months ago
Radix sorts are a type of block sort, aren’t they?
@Kuvina 6 months ago
Personally I wouldn't say that. Radix sort doesn't really have any type of block rearrangement or local merging. It does categorize pieces into groups, but I would say radix sort is a type of bucket sort or vice versa. I would instead classify block sort as a type of merge sort.
@pocarski 6 months ago
When does stability even matter? Why would the order of entries matter if they have the same value?
@platinummyrr 6 months ago
Because you might be sorting complex objects which have multiple keys. If you sort a list of people by age, you don't want the list to change relative order. If it did, it wouldn't be possible to guarantee a sort on two different keys, since sorting by the second key could break the ordering of the first key.
@StefanReich 6 months ago
@@platinummyrr Yes, stability matters
@RiedlerMusics 6 months ago
this looks lovely, but honestly, I stopped understanding the second you started talking specifics
@Flowery0 6 months ago
I (probably) will fucking write that one when I have the time
@redstowen 6 months ago
Physic ep 6 when
@rewrose2838 6 months ago
Please use a dark background, we have vampire children at this school and would like them to be able to enjoy the videos as well
@SkyboxMonster 6 months ago
I came up with a proof-of-concept electronic sorting technique, but I can't find anything remotely similar to it in the existing sorting algorithm ecosystem. I don't know how to appraise it to see if it's worth investing more time into developing it. It's stable. It's not technically in-place, but its impact on hardware is very limited. Its time complexity is very hard to define... it is a multi-step process, but each step is measured in clock cycles.
@Speed001 6 months ago
Found you
@wWvwvV 6 months ago
I guess it has the same problems as a C++ hashmap: it's O(1), but it uses a linked list in its implementation. How to work in place, how to stay compact in memory (no pointers). I watched the video only halfway so far, sorry. Go also has hashmaps, O(1). Most of the time I avoid using them in a public API; just hand over arrays/slices and search them with a binary search. It is a much simpler API, only using arrays, and it is still faster than a map lookup because of CPU cache locality. I already know that bubble sort can be faster than, let's say, quicksort for small inputs. It's evident how the mathematical function curves behave; the extremes are in the big numbers though.
@wildblack1 6 months ago
nlogn = log(n^n)
@scalibbdp9249 6 months ago
Test ALL your sorting algorithms on a Data Set with 1,073,741,824 elements ( 1 Billion! )...
@scalibbdp9249 6 months ago
// Data Set size: 32768 x 32768 - RTfloat
/////////////////////////////////
Application - MgwTestApp - NOS64_MGW ( 64-bit ) - Release
Tests: Start
Command Line arguments - User Count : 6
Values: 32768 3 1 4 7
* Test0001 Start *
Microsoft Visual Studio version Not Defined
**********************************************
Configuration - NOS64_MGW ( 64-bit ) - Release
CTestSet::InitTestEnv - Passed
* CSortSet Start *
* TSortSet Methods *
* CSortSet Methods *
GetSortOrder - Passed
Data Set of RANDOM Values
Data Set Size : 1073741824 elements
Number of Iterations: 3
Number of Tests : 1
Number of Threads : 4
* CSortSet Algorithms *
Data Set: RTfloat - in RANDOM Order
GetSortOrder - Passed
HeapSort - Sorting...
HeapSort - RTfloat - 425415.00000 ms
HeapSort - Passed
QuickSort - Sorting...
QuickSort - RTfloat - 10495076.00000 ms
QuickSort - Passed
MergeSort - Sorting...
MergeSort - RTfloat - 146345.00000 ms
MergeSort - Passed
Data Set: RTfloat - in ASCENDING Order
GetSortOrder - Passed
GetSortOrder - RTfloat - 0.00000 ms
GetSortOrder - Passed
* CSortSet Algorithms - VALUESET *
Converted to RTdouble type
Converted to RTubyte type
VALUESET:RTubyte - 0.00000 ms
VALUESET:RTubyte - Passed
* CSortSet Generic *
* CSortSet End *
* Test0001 End *
Tests: Completed
@scalibbdp9249 6 months ago
You really need to re-think everything when there is a data set of 1 billion elements. Your "toy test cases" with n=300 are absolutely small!
@AgainPsychoX 6 months ago
Sorting porn, I love it.
@567secret 6 months ago
Doesn't spaghetti sort beat nlog(n)?
@Kuvina 6 months ago
yeah I should've specified, nlogn is optimal for comparison based sorting algorithms if you don't use parallel processors.
@ImNetheN 6 months ago
:3
@RichConnerGMN 6 months ago
wow it's the nb math/compsci nerd (buzz lightyear gif) /pos
@animowany111 6 months ago
I can't tell if you're being rude or if you're comedically signing the comment as if you're a "PoS"...
@noelletheradcat 6 months ago
​@@animowany111/pos means positive connotation
@animowany111 6 months ago
@@noelletheradcat Huh, never heard of it. The acronym for "Piece of Shit" (or "Point of Sale", but those are basically the same thing lmao) is too ingrained in my head
@bobbsurname3140 6 months ago
Sheesh, what a hassle
@Zicrus 6 months ago
If it can't sort in linear time, then I wouldn't call it perfect, since some non-comparison based algorithms can do that.
@schwingedeshaehers 6 months ago
How does it work for arbitrary size of each element?
@Musicombo 1 month ago
You can only sort an arbitrarily-ordered array in linear time if you know the *distribution* of the data beforehand, otherwise the minimum complexity will be O(n log n).
@Zicrus 1 month ago
@@Musicombo In almost all practical cases, values are stored in 32, maybe 64 bit integers, which can always be sorted in linear time. If you don't make that assumption, then yes, you are right.
@rodrigoqteixeira 6 months ago
LETS GOOOO FINALLY 🎉