coffeeaddict1 1 day ago
dang 1 day ago
We'll put that link in the top text too. Thanks!
m-schuetz 1 day ago
That and https://github.com/b0nes164/GPUSorting have been a tremendous help for me, since CUB does not work nicely with the CUDA Driver API. The author is doing amazing work.
mfabbri77 24 hours ago
At what order of magnitude in the number of elements to be sorted (I'm thinking of the overhead of GPU setup costs) is the break-even point reached, compared to a pure CPU sort?
m-schuetz 21 hours ago
No idea, unfortunately. For me it's mandatory to sort on the GPU because the data already resides on the GPU, and copying it to the CPU (and the results back to the GPU) would be too costly.
luizfelberti 1 day ago
This looks amazing! I've been shopping for an implementation of this that I could play around with for a while now.

They mention promising results on Apple Silicon GPUs and even cite the contributions from Vello, but I don't see a Metal implementation in there and the benchmark only shows results from an RTX 2080. Is it safe to assume that they're referring to the WGPU version when talking about M-series chips?

mooman219 19 hours ago
Oh! I have a SIMD prefix sum in Rust lying around; I use it for bitmap rasterization for fonts. Looking at the comments, I guess this isn't a popular use case, but it's useful nonetheless. Doing it on the GPU looks really fun.

https://github.com/mooman219/fontdue/blob/master/src/platfor...
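For readers unfamiliar with the primitive being discussed, here is a minimal scalar sketch of both scan variants (this is not the SIMD code from the fontdue link above; function names are illustrative):

```python
def inclusive_scan(xs):
    """Inclusive prefix sum: out[i] = xs[0] + ... + xs[i]."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def exclusive_scan(xs):
    """Exclusive prefix sum: out[i] = xs[0] + ... + xs[i-1], out[0] = 0."""
    out, total = [], 0
    for x in xs:
        out.append(total)
        total += x
    return out

print(inclusive_scan([3, 1, 4, 1, 5]))  # [3, 4, 8, 9, 14]
print(exclusive_scan([3, 1, 4, 1, 5]))  # [0, 3, 4, 8, 9]
```

The exclusive variant is the one most GPU building blocks (compaction, radix sort, binning) are phrased in terms of, since it directly yields destination offsets.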

genpfault 2 days ago
almostgotcaught 2 days ago
This is missing the most important one (in today's world): extracting non-zero elements from a sparse vector/matrix.

https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-co...
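A CPU sketch of that idea, usually called stream compaction: an exclusive prefix sum over a 0/1 predicate array gives each surviving element its destination index, which is what makes the extraction parallelizable on a GPU (the function name here is illustrative):

```python
def compact_nonzero(xs):
    """Extract non-zero elements using the scan-based compaction pattern."""
    flags = [1 if x != 0 else 0 for x in xs]
    # Exclusive scan of the flags = destination index of each kept element.
    offsets, total = [], 0
    for f in flags:
        offsets.append(total)
        total += f
    # Scatter: every flagged element already knows where it goes,
    # so on a GPU these writes can all happen in parallel.
    out = [0] * total
    for x, f, o in zip(xs, flags, offsets):
        if f:
            out[o] = x
    return out

print(compact_nonzero([0, 7, 0, 0, 3, 5, 0]))  # [7, 3, 5]
```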

merope14 1 day ago
Not even close. The most important application (in today's world) is radix sort.
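For context on why radix sort is a prefix-sum application at all: each digit pass counts keys per bucket and exclusive-scans the counts into write offsets. A minimal CPU sketch of LSD radix sort in that style (not any particular GPU implementation; parameters are illustrative):

```python
def radix_sort(xs, bits=8, total_bits=32):
    """LSD radix sort of unsigned ints, one byte per pass."""
    mask = (1 << bits) - 1
    for shift in range(0, total_bits, bits):
        # Histogram: how many keys fall in each of the 256 buckets.
        counts = [0] * (mask + 1)
        for x in xs:
            counts[(x >> shift) & mask] += 1
        # Exclusive prefix sum turns bucket counts into start offsets.
        offsets, running = [], 0
        for c in counts:
            offsets.append(running)
            running += c
        # Stable scatter into the output using those offsets.
        out = [0] * len(xs)
        for x in xs:
            d = (x >> shift) & mask
            out[offsets[d]] = x
            offsets[d] += 1
        xs = out
    return xs

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```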
animal531 16 hours ago
I'm working on a game that has a lot of units, and I used to use the old Sebastian Lague + NVIDIA approach where you use 2D binning -> cells/keys -> sort -> being able to search for neighbours efficiently (along with some modifications I added over time, such as Morton encoding).

But then during a break the other day I read up on radix sort, and right afterwards implemented a prefix sum for spatial partitioning that also incorporates a bit table, CAS operations for multithreaded modifications, etc. After learning the core radix concept, I sort of came up with the idea of using it that way myself, which was quite pleasing.

Props to the author, I'll definitely be spending some time scanning the collection to find some alternate options.
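A single-threaded sketch of that counting + prefix-sum binning pattern (grid and cell sizes are made-up values, and this omits the bit table and CAS details mentioned above): count units per grid cell, exclusive-scan the counts into per-cell offsets, then scatter unit indices into one flat array so each cell's neighbours sit in a contiguous slice.

```python
GRID = 4    # assumed 4x4 grid of cells, for illustration only
CELL = 1.0  # assumed cell edge length

def build_bins(points):
    """Return (cell start offsets, cell counts, flat array of unit indices)."""
    def cell_of(p):
        x, y = p
        return int(y // CELL) * GRID + int(x // CELL)

    # Pass 1: histogram of units per cell.
    counts = [0] * (GRID * GRID)
    for p in points:
        counts[cell_of(p)] += 1

    # Exclusive prefix sum: start offset of each cell in the flat array.
    starts, running = [], 0
    for c in counts:
        starts.append(running)
        running += c

    # Pass 2: scatter unit indices; cell c occupies
    # flat[starts[c] : starts[c] + counts[c]].
    flat = [0] * len(points)
    cursor = list(starts)
    for i, p in enumerate(points):
        c = cell_of(p)
        flat[cursor[c]] = i
        cursor[c] += 1
    return starts, counts, flat

starts, counts, flat = build_bins([(0.5, 0.5), (0.6, 0.4), (3.2, 3.8)])
# Cell 0 holds units 0 and 1; cell 15 holds unit 2.
```

In a multithreaded version, the histogram increments and the scatter cursors are exactly where the CAS operations come in.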

WJW 1 day ago
What specific application do you have in mind where radix sort is more important than matrix multiplication?
m-schuetz 1 day ago
Is that relevant for 4x4 multiplications? Because at least for me, radix sort is way more important than multiplying matrices beyond 4x4. E.g. for Gaussian Splatting.
otherjason 1 day ago
I think they were trying to say “radix sort is a more important application of prefix sum than extraction of values from a sparse matrix/vector is.”
WJW 1 day ago
I understand what GP meant, but extracting values from a sparse matrix is an essential operation in multiplying two sparse matrices. Sparse matmult, in turn, is an absolutely fundamental operation in everything from weather forecasting to logistics planning to electric grid control to training LLMs. Radix sort, on the other hand, is very nice but (as far as I know) not used nearly as widely. Matrix multiplication is just super fundamental to the modern world.

I would love to be enlightened about some real-world applications of radix sort I may have missed though, since it's a cool algorithm. Hence my question above.
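One place the two camps actually meet: the CSR format underlying sparse matmul stores its row-pointer array as an exclusive prefix sum of per-row non-zero counts. A minimal sparse matrix-times-vector sketch (function name and data are illustrative):

```python
def csr_matvec(row_ptr, col_idx, vals, x):
    """y = A @ x with A stored in CSR form.

    row_ptr is the exclusive prefix sum of per-row non-zero counts,
    so row i's non-zeros live in vals[row_ptr[i]:row_ptr[i+1]].
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# A = [[1, 0, 2],
#      [0, 0, 3],
#      [4, 5, 0]]
row_ptr = [0, 2, 3, 5]  # exclusive scan of the row counts [2, 1, 2]
col_idx = [0, 2, 2, 0, 1]
vals = [1.0, 2.0, 3.0, 4.0, 5.0]
print(csr_matvec(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```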

littlestymaar 1 day ago
> to training LLMs

LLMs are made from dense matrices, aren't they?

WJW 1 day ago
Not always, or rather not exclusively. For example, some types of distillation benefit from sparsifying the dense-ish matrices the original was made of [1]. There's also a lot of benefit to be had from sparsity in finetuning [2]. LLMs were merely one of the examples though; don't focus too much on them. The point was that sparse matmul makes up the bulk of scientific computation and a huge amount of industrial computation too. It's probably second only to the FFT in importance, so it would be wild if radix sort managed to eclipse it somehow.

[1] https://developer.nvidia.com/blog/mastering-llm-techniques-i...

[2] https://arxiv.org/html/2405.15525v1

almostgotcaught 1 day ago
Almost all performant kernels employ structured sparsity.
woadwarrior01 1 day ago
Top-k sampling comes to mind, although it's nowhere near as important as matmult.
almostgotcaught 1 day ago
Ranking models benefit from GPU implementations of sort, but yup, they're not nearly as common/important as SpMM/SpMV.