At least I'm not (I'm in second place currently); I'm using a single-threaded Rust program that ChatGPT wrote for me. I'm getting 10-15 million hashes per second.
But seletskiy is, as you can see from his nonce, which tells us he uses an RTX 4090 and gets 18 gigahashes per second. I'm really curious how he wrote that program, as I have an RX 7900 XTX and would love to use it for this competition haha
Hey. It's nothing very fancy. About 150 lines of C/CUDA code with no deps, including args parsing and logging.
The code runs at a steady rate of 18.00-18.40 GH/s on a cloud GPU. Strictly speaking it's not hashes per second, but messages checked per second.
It launches 64⁶ kernels in a loop, where each launch checks the first two bytes of the SHA of the message concatenated with a unique 6-byte nonce per kernel plus a 6-byte incremental nonce per launch. There is only one block, so the SHA algorithm is heavily trimmed. Also, most of the message is hard-coded, so a pre-calculated SHA state is used; that's 10 fewer loop iterations than encoding the whole block would need. Since we only need 2 bytes of the hash for the check, the last two loop iterations are also unrolled by hand to exclude the parts we don't need. All the code also works in big-endian, since SHA does, so the message is hard-coded in big-endian as well.
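(For illustration, a minimal CUDA sketch of that launch structure; this is not his actual code, the names and message prefix are made up, and the trimmed SHA-256 itself is omitted. Each thread expands its global index into a unique 6-character suffix, while the host's launch counter supplies the other 6 characters.)

    // Sketch only: per-thread nonce derivation for a brute-force launch loop.
    #include <cstdio>
    #include <cstdint>

    __constant__ char ALPHABET[64];   // 64 printable chars, uploaded by the host

    // Expand a counter into 6 characters from the 64-char alphabet.
    __device__ void expand6(uint64_t v, char *out)
    {
        for (int i = 0; i < 6; ++i) { out[i] = ALPHABET[v & 63]; v >>= 6; }
    }

    __global__ void make_nonces(uint64_t launch_nonce, char *out)
    {
        uint64_t tid = blockIdx.x * (uint64_t)blockDim.x + threadIdx.x;
        char *nonce = out + tid * 12;
        expand6(launch_nonce, nonce);   // same for every thread in this launch
        expand6(tid, nonce + 6);        // unique per thread
        // A real miner would append `nonce` to the fixed message here, run the
        // trimmed SHA-256, and compare only the first two bytes of the digest.
    }

    int main()
    {
        const char alphabet[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        cudaMemcpyToSymbol(ALPHABET, alphabet, 64);

        const int blocks = 4, threads = 256;         // tiny demo grid
        char *d_out;
        cudaMalloc(&d_out, blocks * threads * 12);
        make_nonces<<<blocks, threads>>>(0 /* launch counter */, d_out);

        char h_nonce[13] = {0};
        cudaMemcpy(h_nonce, d_out + 5 * 12, 12, cudaMemcpyDeviceToHost);
        printf("nonce of thread 5: %s\n", h_nonce);  // "AAAAAAFAAAAA"
        cudaFree(d_out);
        return 0;
    }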
Base64 encoding is pretty time-consuming, so I've optimized it a bit by reordering the alphabet to be in ASCII-ascending order. I've gotten to the point where a single binary-op optimization can earn a 100-500 MH/s speed-up, and I don't really know what else is left.
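(Not necessarily the optimization he landed on, but one way an ASCII-sorted alphabet helps: the characters '+', '/', '0'-'9', 'A'-'Z', 'a'-'z' form contiguous runs, so a 6-bit value can be mapped to its character with a few adds and compares instead of a table lookup; on a GPU the compares compile to selects, not branches.)

    // Sketch: branchless 6-bit index -> char for an ASCII-ascending alphabet
    // (0 -> '+', 1 -> '/', 2..11 -> '0'..'9', 12..37 -> 'A'..'Z', 38..63 -> 'a'..'z').
    __device__ __forceinline__ char b64_sorted(unsigned int i)   // i in 0..63
    {
        return (char)(i + 0x2B + 3 * (i > 0) + 7 * (i >= 12) + 6 * (i >= 38));
    }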
I don't have an RTX 4090, so instead I just rented a 4090 GPU to run the code on. It's less than $0.30 per hour.
I've tried GPT-4 to get some directions for optimization, but every single proposal was useless or straight-up wrong.
I'm by no means a C/GPU programmer, so it can probably be optimized much more by someone more knowledgeable about CUDA.
GPUs are ridiculously fast. It freaks me out that I can compute >18,000,000,000 non-trivial function calls per second.
Anyways, if you want to chat, my e-mail is in the profile.
I think we've done something similar in our kernels, because I've likewise struggled to squeeze more than ~18GH/sec from a rented 4090. I think the design of SHA256 makes it hard to avoid many of the loop iterations. It's possible there are some fun GPU specific instructions to parallelize some of the adds or memory accesses to make the kernel go faster.
If you limit your variable portion to a base16 alphabet like A-P, radix encoding would just be `nibble + 0x41`. Sure you're encoding 4x as many values, but with full branch predictability. You'd have to experimentally determine if that performs any better.
You could also do something theoretically clever with bitmasking the 4 nibbles then adding a constant like 0x4141 or 0x00410041 or something (I'm too tired to think this through) and then you can encode twice as many per op assuming 32-bit bitwise ops are similar in speed to 16-bit bitwise ops.
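(A sketch of that packed idea, under the assumption that the nonce characters come from the 16-value 'A'..'P' alphabet suggested above: spread the four nibbles of a 16-bit counter into separate byte lanes, then a single 32-bit add of 0x41414141 turns all four into ASCII at once.)

    // Sketch: encode 4 nibbles as 4 chars in 'A'..'P' with one add.
    __host__ __device__ inline unsigned int nibbles_to_AP(unsigned short v)
    {
        unsigned int x = v;
        x = (x | (x << 8)) & 0x00FF00FFu;   // nibbles dcba -> bytes [0,dc,0,ba]
        x = (x | (x << 4)) & 0x0F0F0F0Fu;   // -> bytes [d,c,b,a], one nibble each
        return x + 0x41414141u;             // add 'A' to every byte lane
    }

(On a little-endian host, storing the result writes the low nibble's character first; for a nonce, the character order doesn't matter.)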
Anyways this has been a cool challenge. You might also enjoy hunting for amulets - short poems whose sha256 hash contains a substring that's all 8s: https://news.ycombinator.com/item?id=26960729
SHA256 is designed such that the maximum amount of data that can be contained within a single block is 440 bits (55 bytes).
If you carefully organize the nonce at the end and use all 55 bytes, you can pre-hash the first ~20/64 rounds of state and the first several rounds of W generation and just base further iterations off of that static value (this is known as a "midstate optimization").
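(To make the numbers concrete, here is the single-block layout as a small sketch, not his code: 55 message bytes + 1 byte of 0x80 padding + 8 bytes of big-endian bit length = 64 bytes. Everything to the left of the trailing nonce is constant, which is what the midstate precomputation exploits.)

    // Sketch: build the one 64-byte SHA-256 block for a 55-byte message.
    #include <string.h>
    #include <stdint.h>

    static void build_block(const uint8_t msg[55], uint8_t block[64])
    {
        memcpy(block, msg, 55);
        block[55] = 0x80;            // mandatory '1' padding bit
        memset(block + 56, 0, 6);    // upper bytes of the 64-bit length are zero
        block[62] = 0x01;            // bit length 55 * 8 = 440 = 0x01B8,
        block[63] = 0xB8;            // stored big-endian in the last 8 bytes
    }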
> If you limit your variable portion to a base16 alphabet like A-P
The more nonce bits you decide to use, the less you can statically pre-hash.
In the FPGA, I am using 64-deep, 8-bit-wide memories to do the alphabet expansion. I am guessing that in CUDA you could do something similar with `LOP3.LUT`.
I've also been spending today building a miner. Started in JS, then JS with worker threads (~2.2 MH/s), then C++ with OpenSSL (~3.7 MH/s), and now I'm attempting CUDA.
I've also been using ChatGPT to do the code translations.
I'm currently stuck yak shaving: I can't compile CUDA programs here yet, as on Fedora 40 I need an earlier GCC (13.2), which I'm now also having to compile from source.
FWIW I got it working with Rust and OpenCL; most of it was written by ChatGPT, as I have no clue about OpenCL.
GPU usage is only 50-60% and I get 100 MH/s.
With hashcat and OpenCL I could get 12 GH/s, but I couldn't find a way to use hashcat for this use case.
Wow, 24 MH/s on an i7 with 8 cores sounds really good!
I don't know how I got it working, but I'm now at 3 GH/s with my OpenCL implementation. I basically converted 90% of my Rust logic to OpenCL; now my GPU is at 100% usage, and I also needed to switch to a TTY, as my window manager became unresponsive haha
I'm kind of glad about this HN post, as I had absolutely no clue how SHA256 and OpenCL worked before this challenge.
I'm glad you had some fun! This experiment went about as well as I could hope!
If anyone's curious, I'm getting 4.5 MH/s single-threaded and 12.2 MH/s multi-threaded on a slightly old i7 with 4 cores.
It's my own C++ implementation, which I've made about 20% faster than the fastest one I found online (Zig/stdlib, also tried Go/stdlib, C++/cgminer, rust/ring, C++/VanitySearch and Python/stdlib).
I think it might be faster just because I was able to skip some steps, since all inputs are short and of the same length.
I've just finished testing 10^12 inputs. I think I'll stop with 10 zeroes, which is very likely to happen in the next couple of days, according to my calculations. I might revisit it later to learn some GPU programming.
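(A rough cross-check of that estimate, assuming "10 zeroes" means 10 leading hex zeros, i.e. a 1-in-16^10 chance per input, and using the 12.2 MH/s figure from above:)

    /* Expected effort for 10 leading hex zeros at 12.2 MH/s. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double expected = pow(16.0, 10.0);    /* ~1.1e12 attempts on average */
        double rate = 12.2e6;                 /* hashes per second */
        printf("expected attempts: %.3g\n", expected);
        printf("expected time: %.1f hours\n", expected / rate / 3600.0);  /* ~25 h */
        return 0;
    }

(That works out to roughly a day per expected hit, which lines up with the "next couple of days" estimate.)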
I'm using a janky CUDA miner that's getting around 6 GH/s on a 3090. My code is kind of bad though, since I hacked it together in a couple of hours last night, so I wouldn't be surprised if a proper/competent implementation got 2-3x the hash rate mine does.