saywatnow's comments

I find these modern artifacts fascinating, and a great example of how poorly we communicate {in,about} code. Not that either Wikipedia or RC is a reliable source of quality, but there's plenty a critical eye could object to in both pages :-).

Also a great example of how pseudocode can muddy the waters: that given by WP (and quoted by RC) strongly suggests that the list must be reversed as a distinct step, which seems to have coloured some of the RC examples. Of course, in some languages doing so would be idiomatic .. while in others traversing the list in reverse would be, and in others constructing it tail-first. Same with mapping the function and then filtering for overflow, which seems at odds with the pseudocode.

Also, specifics of the output are entirely lost along the way.


Does Site Reliability include serving assets from no fewer than 7 domains and requiring JavaScript to present a few paragraphs of text?


Presumably their blog is very low on the list of things they care about the reliability of.



Nice to see a fairly thorough set of examples, but my only conclusion is that C++ really has become a parody of itself.

The language describing each policy is awful, but okay.

"so concise code" for 10 lines consisting mostly of ceremony to get the results of one std function into a vector?

Does the first example with std::transform_reduce really need `std::uintmax_t{ 0 }` twice instead of `0`?

A 2x-3x speed-up from specifying a parallel execution policy on reasonably large inputs seems deeply disappointing. The author didn't specify how many cores the examples were run on, but ouch.
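For what it's worth, the `std::uintmax_t{ 0 }` in the init position is not pure ceremony: the type of the init argument is the type the reduction accumulates in, so a bare `0` would deduce `int` and can overflow. A minimal sketch of the point (an invented example, not the article's code):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <numeric>   // std::transform_reduce (C++17)
#include <vector>

// Sum of squares. The init value's type fixes the accumulator type,
// which is why it's spelled std::uintmax_t{0} rather than 0: with a
// plain 0, the reduction would run in int and overflow here.
std::uintmax_t sum_of_squares(const std::vector<std::uint32_t>& v) {
    return std::transform_reduce(
        v.begin(), v.end(),
        std::uintmax_t{0},    // init: accumulate in uintmax_t
        std::plus<>{},        // reduce step
        [](std::uint32_t x) { // transform step: widen before multiplying
            return std::uintmax_t{x} * x;
        });
}
```

Passing an execution policy (e.g. `std::execution::par` from `<execution>`) as the first argument is what the article benchmarks; it is omitted here only because parallel policies need extra link-time support (TBB on GCC) and don't change the typing issue.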

Best of all is the volume of comments here predicting that these annotations will eventually be deprecated ..

I really want to like C++.


This is nice .. but it could be nicer with (pun intended) use of the context manager protocol, fewer empty lines and a bit of minor refactoring for readability.

Is execution_loop really meant to start and end holding the GIL?


Two empty lines before function definitions is part of the PEP8 style standard, which is widely used in the Python community. Most Python autoformatters and linters default to PEP8.

As for empty lines within code, I personally find it helpful to visually separate logical segments.


> As for empty lines within code, I personally find it helpful to visually separate logical segments.

I see it as: if a function is like a paragraph, then the whitespace between lines delimits sentences. I very much dislike run-on sentences that attempt to explain complex rules/topics, in English and in code. Spatial grouping is very natural for the eyes and mind.


My opinion is almost the opposite: I find Granger's ambitious ideas and presentations stimulating, and I like that they're out there to inspire people to imagine more and experiment more aggressively.

I wish his audiences were a bit less credulous though. Experiments like Light Table and Eve are just that: interesting and potentially useful explorations of design space, but as unlikely as anything else to radically change the discipline of programming overnight.


> Sounds like an ideal job for a 3d printer.

Or, you know, a piece of timber and some sandpaper.

The consumer-grade 3d printers I have exposure to would have trouble with some combination of piece length, tensile consistency, strength and durability you'd want in a book press. Printing alone would take longer than sanding and drilling. And I suspect a printed part wouldn't be so forgiving of adjustments made after manufacture.

Using the 3d printer because it's fun and a learning experience is fine, but calling it "ideal" when it doesn't offer any real improvements over the cheaply and easily made "traditional" material is going a little bit too far :-).


I absolutely disagree. Printing might take longer than sanding and drilling, but: I can model it in code, I can make sub-scale proofs of concept, and when it comes to actually making the real thing, I can throw it on the printer and go do something else. While your wife is yelling that you're still in the garage making noise and sawdust at 8:00 PM, my wife and I will be on the deck eating barbecue and sipping a margarita. :)


I think you need to buy some quieter sandpaper.


> This is without considering the false sense of security that address masquerading provides; I cannot recall how many times I’ve heard people say that (gasp!) NAT was fundamental piece in the security of their internal networks (it’s not).

It bugs me when this dogma gets repeated without further explanation, /particularly in the case of IPv6/. No, NAT probably doesn't provide as much security as you think it does, but it does provide benefits. A NAT network is a default-deny-incoming network that cannot fail open, protecting against common boundary firewall configuration errors. A small (but once very pervasive) class of firewall bypass attacks (fragmentation) is eliminated. Obscuring information about the number of devices, and especially (IPv6) their vendors is beneficial. When (inevitably) a bug in your firewall is discovered by bad guys, the presence of NAT limits the kinds of attacks they can make. In the world of IoT, These Things Matter.

It's commonly phrased "NAT is not a security feature, firewalls are", which is mildly nonsensical as NAT is a firewall feature .. one which often improves the security posture of the network. Of course there are places you absolutely don't want NAT, but I think it still belongs between the internet and most networks made entirely of desktop, IoT & personal devices.


> I think it still belongs between the internet and most networks made entirely of desktop, IoT & personal devices.

I think your belief has been shaped by the fact that adoption of P2P protocols was hampered by NAT for over a decade, and that developers often write software that trusts the local network. Default deny policies help protect insecure servers for the time being, but I'd like to see servers that utilize encryption and authentication instead of relying on simple allow all/disallow all firewall policies at the connection level.


> developers often write software that trusts the local network

Yes, this is still a source of problems - DNS rebinding allowing websites to attack random sockets on LAN and localhost makes my skin crawl. That the protections are being implemented in the browser makes me sad.

> I'd like to see servers that utilize encryption and authentication

Me, I'd prefer architectural solutions further down the stack than /every single service/ that happens to benefit from a TCP control socket having to duplicate the work of encryption + authentication, with the attendant myriad opportunities for it to go horribly wrong. I already mentioned IoT and we know exactly what that's like when it comes to protecting itself.

Yes, I know, pipe dream .. and going off topic .. but I can wish.


Can you briefly describe how the msevector + ipointer works? I tried to look at the code but dense C++ is not my forte.


You mean how it's implemented? Umm, well it's been a while, but basically an ipointer is a proxy for an iterator that is stored internally by the msevector. These "internally stored" iterators are updated when necessary. For example, when insert() or erase() is called.

One nice thing about it is that it roughly conforms to the principle of "only pay for what you use". That is, the run-time cost is roughly proportional to the number of ipointers you have and the frequency of operations that modify the size of the vector.
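The idea can be sketched with a toy container (invented names, nothing like the actual SaferCPlusPlus implementation): the container stores a position on behalf of each registered "ipointer" and fixes those positions up inside insert()/erase(), so the cost scales with the number of registered pointers and the frequency of size-changing operations.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy sketch of the "internally stored iterator" idea. All names here
// (TrackedVector, Handle, register_pointer) are hypothetical.
class TrackedVector {
public:
    using Handle = std::size_t;  // identifies one registered "ipointer"

    // Register a pointer to the element currently at `index`.
    Handle register_pointer(std::size_t index) {
        positions_.push_back(index);
        return positions_.size() - 1;
    }
    int& deref(Handle h) { return data_[positions_[h]]; }
    std::size_t position(Handle h) const { return positions_[h]; }

    void push_back(int v) { data_.push_back(v); }

    void insert(std::size_t at, int v) {
        data_.insert(data_.begin() + static_cast<std::ptrdiff_t>(at), v);
        for (auto& p : positions_)   // fix up registered positions at/after the gap
            if (p >= at) ++p;
    }
    void erase(std::size_t at) {
        data_.erase(data_.begin() + static_cast<std::ptrdiff_t>(at));
        for (auto& p : positions_)   // fix up registered positions after the gap
            if (p > at) --p;
    }

private:
    std::vector<int> data_;
    std::vector<std::size_t> positions_;  // one slot per registered pointer
};
```

So a registered pointer keeps tracking "its" element across insert/erase, at a per-operation cost proportional to the number of registered pointers, which is the "only pay for what you use" property described above.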

One caveat is that this mechanism is not thread safe. But whenever you need to share the vector among threads, you can swap it with a vector that is safe to share[1].

And for those that are into memory safety, there is also a memory-safe vector[2] that supports ipointers.

Is this the sort of explanation you're looking for?

[1] https://github.com/duneroadrunner/SaferCPlusPlus#nii_vector

[2] https://github.com/duneroadrunner/SaferCPlusPlus#ivector


Thanks, that's clear enough :-). In hindsight I can't imagine what alternative I was thinking of .. I had some idea you might have put the additional cost in the iterator by maintaining only an epoch counter in the vector, but that's obviously not enough to do the right thing in the presence of insert and erase.

Your library looks like a good toolset. While I still find the code pretty impenetrable, the number of tests I can see give me confidence. Bookmarked for reference when I'm using C++ again.


It's not cmake's job to limit the behaviour of programs written with it.


OTOH, allocating & using memory correctly so that a maliciously-crafted Makefile can't get elevated permissions is.


A makefile can call whatever it wants, so if you run a malicious one you're already hacked. There's nothing you can do with a cmake buffer overrun that you can't also do just by writing a normal cmake file that calls out to whichever malicious commands you want.


>> It's not cmake's job to limit the behaviour of programs written with it.

> OTOH, allocating & using memory correctly so that a maliciously-crafted Makefile can't get elevated permissions is.

https://en.wikipedia.org/wiki/Not_even_wrong


You are technically not wrong, of course, but if the attacker has already gotten as far as running Makefiles on your system, you should probably focus your efforts on tightening security elsewhere.

