I experimented with the proposed parallel data type extensions to the C++ standard library. I got impressive performance gains calculating APFS Fletcher checksums without resorting to compiler intrinsics or inline assembly.
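For reference, here is a minimal sketch of what the Parallelism TS v2 `std::experimental::simd` API looks like. It only does a trivially parallel sum of 32-bit words, not the actual Fletcher recurrence, and the function and variable names are made up for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <experimental/simd>

namespace stdx = std::experimental;

// Minimal sketch of the Parallelism TS v2 simd API: a trivially parallel sum
// of 32-bit words.  (The real Fletcher checksum carries a dependency between
// its two running sums, so vectorising it takes more care than shown here.)
uint64_t word_sum(const uint32_t* data, std::size_t n) {
    using vec = stdx::native_simd<uint64_t>;    // one register's worth of 64-bit lanes
    uint64_t total = 0;

    std::size_t i = 0;
    for (; i + vec::size() <= n; i += vec::size()) {
        vec v(&data[i], stdx::element_aligned); // widening load: 32-bit words into 64-bit lanes
        total += stdx::reduce(v);               // horizontal add across the lanes
    }
    for (; i < n; ++i) {                        // scalar tail for the leftover words
        total += data[i];
    }
    return total;
}
```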
Farm subsidies are largely a myth, a misconstruction of "externalities" as subsidies. If you knew how to get the alleged subsidies the media likes to trot out to disparage farmers into the hands of actual farmers, while charging a small percentage as a consulting fee, you'd be unimaginably wealthy.
https://data.ers.usda.gov/reports.aspx?ID=17833
For the past decade it has been only about $10 billion a year, with fixed direct payments largely eliminated in 2014. 2020 will obviously be an exception due to Covid-19. Most of the recent payments come from the Market Facilitation Program, meant to offset exports being wrecked by retaliatory tariffs under Trump.
https://www.farmers.gov/manage/mfp
That's very much a means-tested program to prevent farmers from being driven into bankruptcy by tariffs on things they've already produced. It's not a magic money fountain.
This doesn't really answer my question. I knew the money was given for reasons. The question is who is really getting it.
Note that the same report says that excluding subsidies, farmer net income increased in 2020 over 2019, so I'm not convinced the Covid pandemic is a good reason.
Hard to say. Farm subsidies have limits to discourage this. There are a ton of loopholes, and not all crops qualify for subsidies. There is also debate about what even counts as a subsidy.
It is hard to explain what `beta8` and `curve.P` are specifically, but they are arbitrary-precision integers, so you can see what went wrong with some appropriate pseudocode:
Essentially we want to compute `(alpha * alpha - beta * 8) % curve.P`, so to speak. The modulo is expensive though, so for typical cases we can instead repeatedly add `curve.P` a few times to bring the value into range. This is indeed a valid optimization when we are sure of the range of `alpha` and `beta`, but `beta` can be controlled from outside. So a very large `beta` from an attacker will cause the while loop to run practically forever: a denial-of-service attack.
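To make the failure mode concrete, here is a minimal fixed-width C++ analogue of that pattern. The real code uses arbitrary-precision integers; `alpha`, `beta`, and `P` are placeholder names here:

```cpp
#include <cstdint>

// Minimal fixed-width analogue of the pattern (the real code uses
// arbitrary-precision integers; alpha, beta and P are placeholder names,
// and the products are assumed to fit in 64 bits).
int64_t reduce_by_repeated_add(int64_t alpha, int64_t beta, int64_t P) {
    int64_t t = alpha * alpha - beta * 8;

    // Cheap "reduction": nudge t back into range by adding P.  Fine when
    // alpha and beta are known to keep t within a few multiples of P, but
    // if beta is attacker-controlled the loop runs on the order of |t| / P
    // iterations, which is the denial-of-service described above.
    while (t < 0) {
        t += P;
    }
    return t % P;
}

// The robust fix is to pay for the modulo once instead:
//   ((alpha * alpha - beta * 8) % P + P) % P
```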
I think you are right. Go `big.Mod` is Euclidean (the result is always non-negative, in `[0, |y|)`), so the extra code is redundant. It doesn't seem to be required to run in constant time (if it were, we wouldn't have the `if` at all); probably the committer wanted a minimal change?
I'm not sure when you last tried, but in the past year VSCode has come a long way towards "just working" out of the box for C++. They've specifically focused on it. If you've got some time, I'd recommend checking out Rong Lu's CppCon 2018 talk. https://www.youtube.com/watch?v=JME1i3vCRR8
Believe it or not, I wasn't trying to be snarky. I honestly didn't know if I misunderstood your argument. It was possible that you were making a point about a flawed premise, or something else that I wasn't understanding.
This is analogous to adding uint8_t when we already had unsigned char. In C these would be exactly the same; in C++ they are different types. Same with uint8_t vs. byte: the former is an integer type, the latter is not. (Thus, a better question would be: why introduce byte when we already had unsigned char? I think the answer lies in a general tendency of moving away from the C way of looking at types, making code better reflect its intent, and doing so in a more type-safe manner.)
Overloading and templates. I can now use unsigned char, uint8_t and byte as distinct types, meaning they can be separately overloaded and used as separate template specialisations.
That's not a purely hypothetical point; I already create custom types to do this. Not every 8-bit type is a character, nor is it necessarily an integer. I always found it frustrating that the default stream output was a character when using numerical quantities; now we can specialise raw output accordingly.
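As a rough illustration of the overloading point, here is a small sketch with hypothetical `print` helpers. (Note that on most implementations `uint8_t` is an alias for `unsigned char`, so only `std::byte` is guaranteed to be a genuinely distinct type.)

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical print helpers: std::byte is a distinct type, so it gets its
// own overload.  (On most implementations uint8_t is an alias for
// unsigned char, so it cannot be overloaded separately from it.)
void print(unsigned char c) { std::cout << "char: " << c << '\n'; }
void print(std::byte b)     { std::cout << "byte: " << std::to_integer<int>(b) << '\n'; }

int main() {
    print(static_cast<unsigned char>('A')); // prints the character 'A'
    print(std::byte{0x41});                 // prints the numeric value 65
}
```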
Many programs require byte-oriented access to memory. Today, such programs must use either the char, signed char, or unsigned char types for this purpose. However, these types perform a “triple duty”. Not only are they used for byte addressing, but also as arithmetic types, and as character types. This multiplicity of roles opens the door for programmer error – such as accidentally performing arithmetic on memory that should be treated as a byte value – and confusion for both programmers and tools.

Having a distinct byte type improves type-safety, by distinguishing byte-oriented access to memory from accessing memory as a character or integral value. It improves readability. Having the type would also make the intent of code clearer to readers (as well as tooling for understanding and transforming programs). It increases type-safety by removing ambiguities in expression of programmer’s intent, thereby increasing the accuracy of analysis tools.
> such as accidentally performing arithmetic on memory that should be treated as a byte value
My reaction to that can be summed up succinctly as "WTF!?" The whole point of uint8_t or (signed/unsigned) char is to have an 8-bit quantity that you can do arithmetic and bitwise operations on. To put it more bluntly, "have C++ programmers forgotten how computers work?"
The proposed solution is to add yet another same-yet-subtly-different type, with its own set of same-yet-subtly-different rules? If anything, that would cause even more confusion, given the complexity it adds to interactions with all the other parts of the language.
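Concretely, as far as I can tell, the "subtly different rules" amount to roughly this sketch: `std::byte` gets the bitwise operators and an explicit conversion function, but no arithmetic and no implicit conversions.

```cpp
#include <cstddef>

int main() {
    std::byte b{0x2A};

    b <<= 1;                          // bitwise shifts and &, |, ^ are provided
    b |= std::byte{0x01};

    int n = std::to_integer<int>(b);  // conversions must be explicit
    // int bad = b + 1;               // error: no arithmetic on std::byte
    // char c  = b;                   // error: no implicit conversions
    (void)n;
}
```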
IMHO this "let's do everything we can to stop people from even the very slightest chance of possibly doing something wrong" line of thinking is ultimately unproductive... and actually rather dystopian. The end result is quite scary to contemplate.
(The fact that an 11-page, text-only PDF somehow turns out to be over 800KB is somewhat less disturbing, but still notable.)
Yes. I would personally define an astroturfer, aka a shill, as someone portraying themselves as a neutral observer/participant when in reality they are paid/sponsored/incentivized by a state, company, or organization to push a specific agenda. They are intentionally planted to deceive, distort, and sway those unsuspecting of such activity. There is an excellent TED talk [0] on the subject.
The problem is rampant and while not new, the recent election cycle has highlighted it greatly.
Propaganda does not spread just from 'fake news' or a heavily biased corporate media. It is co-ordinated & perpetuated online. The links and comments on Reddit and HN are prime mediums to infiltrate and carry this out.
> The links and comments on Reddit and HN are prime mediums to infiltrate and carry this out.
I don't know about Reddit but we see comparatively little on HN that appears suspicious once we look at the data (e.g. which users have voted on a post). Anyone who suspects astroturfing on HN is welcome to email us so we can look into the matter—which we always do—but not to accuse other users directly, because both the odds and the cost of an unfair accusation are much higher than people realize when they do that.
I listened to that talk you linked to. At the end the speaker mentions four "hallmarks of astroturfing": (1) use of inflammatory language; (2) use of charged language to "debunk myths"; (3) attacking an issue by controversializing the people around it rather than addressing the facts; (4) reserving all public skepticism for those exposing wrongdoing rather than wrongdoers. It seems to me HN is in pretty good shape here: the first three violate the site guidelines, and the fourth seems rather rare and is not received well by the community.
Maybe there are astroturfers getting away with it on HN. If they exist, though, they're being clever about it, so we'd be interested in anything the community can figure out. Just please don't accuse each other directly without real evidence.
Gains were even more impressive when adding some simple loop unrolling: https://jtsylve.blog/post/2022/12/24/Blazingly-Fast-er-SIMD-...
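For the curious, here is a rough sketch of the kind of unrolling involved (keeping several independent accumulators so the adds can overlap), applied to the same illustrative word-sum as above rather than the actual Fletcher code from the post:

```cpp
#include <cstddef>
#include <cstdint>
#include <experimental/simd>

namespace stdx = std::experimental;

// Same illustrative word-sum as before, unrolled by four: independent
// accumulators let the per-iteration adds overlap instead of forming one
// long dependency chain.
uint64_t word_sum_unrolled(const uint32_t* data, std::size_t n) {
    using vec = stdx::native_simd<uint64_t>;
    constexpr std::size_t W = vec::size();

    vec acc0(0u), acc1(0u), acc2(0u), acc3(0u);
    std::size_t i = 0;
    for (; i + 4 * W <= n; i += 4 * W) {
        acc0 += vec(&data[i + 0 * W], stdx::element_aligned);
        acc1 += vec(&data[i + 1 * W], stdx::element_aligned);
        acc2 += vec(&data[i + 2 * W], stdx::element_aligned);
        acc3 += vec(&data[i + 3 * W], stdx::element_aligned);
    }
    uint64_t total = stdx::reduce(acc0 + acc1 + acc2 + acc3);
    for (; i < n; ++i) {
        total += data[i];             // scalar tail
    }
    return total;
}
```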