Hacker News | mananaysiempre's comments

> No one would be surprised if you showed that you could cut a hole in pretty much any normal door

The definition of “normal” varies by region. In European cities, it means a pretty heavy door of multiple layers of steel (and pretty unpleasant stuff in the middle) that would probably take 15 minutes of deafeningly loud cutting with a circular saw. I understand the standard for US suburbs is much lower (as it might as well be, given windows exist and the walls aren’t all that sturdy either).


A very long time ago I worked in an office building that had several suites of offices. One of them was a biotechnics company that did things like genetic analysis of farmed fish for selective breeding, massively commercially sensitive stuff. They had a "secure document store" built within their suite, with a thick door made of 19mm ply layers either side of a 6mm steel plate, welded to a full-length hinge, which was in turn welded to a 25mm steel tubing frame, with big long brackets bolted into the brick work of the exterior wall on one side and a steel beam on the other. One key in the possession of the CIO, one in the possession of the CEO. CEO was at a fish farm in Norway. CIO was in the office, getting paperwork out of the safe in the secure room, got a phone call, stepped out of the room to get a better signal, slam <CLICK> <KACHUNK> as six spring-loaded bolts about as thick as your thumb pegged the door shut.

Rude words.

Can't get a locksmith that can pick that particular Ingersoll lock. Can't get a replacement key because the certificate is in the room, and you'd have to drive down to England to get it. Can't jemmy the door open, it's too strong.

Wait.

There's a guy who parks an old Citroën in the car park, I bet he has tools, doesn't he work for that video company downstairs? Let's ask him.

So yeah it took about ten seconds to get in to the secure room. I cut a hatch through the plasterboard with a Stanley knife, recovered the keys, taped the plasterboard back in place, and - the time-consuming bit - positioned their office fridge so no-one could see it.

A swift appointment with an interior decorator was made by a certain C-level exec, and a day or two later there was a cooler with about 25kg of assorted kinds of salmon and a bottle of whisky left in my edit suite.


I know it's OT but I wanna know what your old Citroën was. My first car was an S1 BX. Plasticky 80s goodness. I know it's not everybody's idea of a classic (at least in Australia where Citroëns aren't particularly common) but I loved it.

Our uncle had a CX when we were kids. When he would visit we loved waiting in the driveway for him to start it so we could watch the air suspension engage and lift the car a good foot up.

Not OP but my dad drove a CX for a while, but the real treat was our friend's DS.

If you hadn't been there to fish them out of the situation, they would have been boned to a scale they weren't prepared to deal with. You deserved the reward for getting them off the hook.

Hah, I love this sort of story. Recently I was on site and we needed some electrical as-built drawings. They’d been stashed in a tool box, which was locked (and pretty well designed to protect the padlock from bolt cutters / angle grinders). Unfortunately one of the guys had taken the key with him and it was now a two hour plane flight away. They already tried and failed to cut the lock, and were getting an angle grinder to just cut in through the lid (it was ~3mm steel sheet, so hardly impenetrable, but destroying the toolbox would not have been ideal) when I pulled the pin out of the hinge and recovered the drawings that way.

Turns out watching Pirates of the Caribbean wasn’t a waste of time after all. ;)


> They had a "secure document store" built within their suite, with a thick door made of 19mm ply layers either side of a 6mm steel plate, welded to a full-length hinge, which was in turn welded to a 25mm steel tubing frame, with big long brackets bolted into the brick work of the exterior wall on one side and a steel beam on the other.

Wow, that sounds like a pretty secure entry! I wonder how they secured the walls, that’s a lot of steel plate, enough to require structural reinforc—

> So yeah it took about ten seconds to get in to the secure room. I cut a hatch through the plasterboard with a Stanley knife, recovered the keys, taped the plasterboard back in place, and - the time-consuming bit - positioned their office fridge so no-one could see it.

Haha, that was my guess. This is like constructing a safe with a super heavy reinforced steel door on the front and construction paper on the sides and top! He could’ve kicked his way through 5/8” (prolly 16mm to you lot) drywall ;) Your solution was a lot cleaner and you earned that tasty reward!


Ahh, the classic Kool-Aid Man attack.

Right - the quality of your locks matters a lot less if your average 5-year-old tee-baller can throw a brick through the window and climb in. One always needs to consider their threat model when deciding what security to invest in.

In my experience, the hard part is getting everybody else to do that. And then also getting them to actually include the timezone in their communication with you.

> Context switching is virtually free, comparable to a function call.

If you’re counting that low, then you need to count carefully.

A coroutine switch, however well implemented, inevitably breaks the branch predictor’s idea of your return stack, but the effect of mispredicted returns will be smeared over the target coroutine’s execution rather than concentrated at the point of the switch. (Similar issues exist with e.g. measuring the effect of blowing the cache on a CPU migration.) I’m actually not sure if Zig’s async design even uses hardware call/return pairs when a (monomorphized-as-)async function calls another one, or if every return just gets translated to an indirect jump. (This option affords what I think is a cleaner design for coroutines with compact frames, but it is much less friendly to the CPU.)

So a foolproof benchmark would require one to compare the total execution time of a (compute-bound) program that constantly switches between (say) two tasks to that of an equivalent program that not only does not switch but (given what little I know about Zig’s “colorless” async) does not run under an async executor(?) at all. Those tasks would also need to yield on a non-trivial call stack each time. Seems quite tricky all in all.


If you constantly switch between two tasks from the bottom of their call stack (as for stackless coroutines) and your stack switching code is inlined, then you can mostly avoid the mispaired call/ret penalty.

Also, if you control the compiler, an option is to compile all call/rets in and out of "io" code in terms of explicit jumps. A ret implemented as pop+indirect jump will be less predictable than a paired ret, but has more chances to be predicted than an unpaired one.

My hope is that, if stackful coroutines become more mainstream, CPU microarchitectures will start using a meta-predictor to choose between the return stack predictor and the indirect predictor.


> I’m actually not sure if Zig’s async design even uses hardware call/return pairs

Zig no longer has async in the language (and hasn't for quite some time). The OP implemented task switching in user-space.


Even so. You're talking about storing and loading at least ~16 8-byte registers, including the instruction pointer which is essentially a jump. Even to L1 that takes some time; more than a simple function call (jump + pushed return address).

Only the stack and instruction pointer are explicitly restored. The rest is handled by the compiler: instead of depending on the C calling convention, it can avoid having things in registers during a yield.

See this for more details on how stackful coroutines can be made much faster:

https://photonlibos.github.io/blog/stackful-coroutine-made-f...


> The rest is handled by the compiler, instead of depending on the C calling convention, it can avoid having things in registers during yield.

Yep, the frame pointer as well if you're using it. This is exactly how its implemented in user-space in Zig's WIP std.Io branch green-threading implementation: https://github.com/ziglang/zig/blob/ce704963037fed60a30fd9d4...

On ARM64, only fp, sp and pc are explicitly restored; and on x86_64 only rbp, rsp, and rip. For everything else, the compiler is just informed that the registers will be clobbered by the call, so it can optimize allocation to avoid having to save/restore them from the stack when it can.


Is this just buttering the cost of switches by crippling the optimization options the compiler has?

If this was done the classical C way, you would always have to stack-save a number of registers, even if they are not really needed. The only difference here is that the compiler will do the save for you, in whatever way fits the context best. Sometimes it will stack-save, sometimes it will decide to use a different option. It's always strictly better than explicitly saving/restoring N registers unaware of the context. Keep in mind, that in Zig, the compiler always knows the entire code base. It does not work on object/function boundaries. That leads to better optimizations.

It's amazing to me that you can do this in Zig code directly, as opposed to messing with the compiler.

See https://github.com/alibaba/PhotonLibOS/blob/2fb4e979a4913e68... for GNU C++ example. It's a tiny bit more limited, because of how the compilation works, but the concept is the same.

To be fair, this can be done in GNU C as well. Like the Zig implementation, you'd still have to use inline assembly.

> If this was done the classical C way, you would always have to stack-save a number of registers

I see, so you're saying that GCC can be coaxed into gathering only the relevant registers to stack and unstack, rather than blindly doing all of them?


Yes, you write inline assembly that saves the frame pointer, stack pointer, and instruction pointer to the stack, and list every other register as a clobber. GCC will know which ones it's using at the call-site (assuming the function gets inlined; this is more likely in Zig due to its single unit of compilation model), and save those to the stack. If it doesn't get inlined, it'll be treated as any other C function and only save the ones needed to be preserved by the target ABI.

I wonder how you see it: stackful coroutines switch context on a syscall in the top stack frame, and the deeper frames are regular optimized code — but syscall/sysret is already a big context switch. A read/epoll loop has exactly the same structure; the point of async programming isn't optimization of computation, but optimization of memory consumption. Performance is determined by features and design (and Electron).

What do you mean by "buttering the cost of switches", can you elaborate? (I am trying to learn about this topic)

I think it is

> buttering the cost of switches [over the whole execution time]

The switches get cheaper but the rest of the code gets slower (because it has less flexibility in register allocation) so the cost of the switches is "buttered" (i.e. smeared) over the rest of the execution time.

But I don't think this argument holds water. The surrounding code can use whatever registers it wants. In the worst case it saves and restores all of them, which is what a standard context switch does anyway. In other words, this can be better and is never worse.


Which, with store forwarding, can be shockingly cheap. You may not actually be hitting L1, and if you are, you're probably not hitting it synchronously.

https://easyperf.net/blog/2018/03/09/Store-forwarding

and, section 15.10 of https://www.agner.org/optimize/microarchitecture.pdf


Are you talking about context switching every handful of cycles? This is going to be extremely inefficient even with store forwarding.

You are right that the statement was overblown, however when I was testing with "trivial" load between yields (synchronized ping-pong between coroutines), I was getting numbers that I had trouble believing, when comparing them to other solutions.

In my test of a similar setup in C++ (IIRC about 10 years ago!), I was able to do a context switch every other cycle. The bottleneck was literally the cycles per taken jump of the microarchitecture I was testing against. As in your case it was a trivial test with two coroutines doing nothing except context switching, so the compiler had no need to save any registers at all, and I carefully defined the ABI to be able to keep stack and instruction pointers in registers even across switches.

Semi-unrelated, but async is coming soon to Zig. I'm sorta holding off getting deep into Zig until it lands. https://kristoff.it/blog/zig-new-async-io/

The point of all this io stuff is that you'll be able to start playing with Zig before async comes, and when async comes it will either be drop-in (if you choose an async io for main()) or a line or two of code (if you pick an event loop manually).

> [Y]ou are getting over 100bn CPM right now. The reason it doesn't matter is that this is neutrinos and they're not interacting with you.

I mean, if you actually had a neutrino detector that produced 10e10 CPM over your cross-section, then it would matter for you, because particle physicists would kidnap you to learn the secret :)


Honestly, the military would probably come after you first. Or maybe an oil company? Frankly, if you could detect neutrinos at that resolution you would be able to produce a really good mapping of... just about anything. From the inside of the Earth to the inside of a secret military facility on the opposite side of the planet. Not to mention you've also invented a communication device that is essentially unjammable[0].

Suffice it to say that you'd be very popular, but in probably the least fun way possible.

[0] https://arxiv.org/abs/1203.2847


It would be a very up-close-and-personal variation of the resource curse.

This is exactly why no one can know my alter-ego is… Neutrino Man.

The problem with USB-C connectors for hobby projects is that they are ass to solder by hand—I’m still looking for one that would use a larger pitch by shorting the four USB pin pairs for either orientation. If you’re shipping something to a customer, I think it’s fair to assume that you don’t really have that problem :)

They're also ass to make PCBs for. The second you need 2oz or higher you start to really push the limits of what most prototype shops can do.

This is a pretty standard 2.0 receptacle, you've only got 0.2mm between pads if you follow their footprint (literally the limit for soldermask bridges on 2oz at JLCPCB): https://gct.co/download?type=PDFDrawing&name=USB4105.pdf


Get a hot air gun: it'll make your life way easier. You can tin the pads with a soldering iron, put the connector on and squirt some flux on the leads, and then just blow hot air until it reflows into place.

What do you do if the structural through-holes already have solder in them, that wick doesn’t seem to get? I’ve been trying to put a new USB C port onto my switch for quite a while now. (Now that I think about it, I can probably just shorten the prongs on the port and add solder after for structural strength).

The answer to almost every question in soldering is 'more flux'. Solder wick has flux in the center of the braid, but it's hard to get it into tight places like structural through-holes. Adding your own liquid/paste flux will make the wick much more effective.

Melt the solder and thwack the board on something hard? So the board stops but the molten solder doesn't.

Sometimes though you just have to pile on solder and flux, because the via is small enough that surface tension and heat dissipation mean it's never coming out.


Doesn't a pump make quick work of this?

Frequently not. It's always handy to know about extra techniques in soldering.

You can also scale this up in a solder oven and remove almost every single component. Used this for reversing a PCB a few times.


A desoldering pump (manual model, $10 or so for a decent one) is very suitable for removing solder from through-holes, if that is the main issue.

I often add solder to make it easier for the wick to get everything. If the original assy was Lead-Free, using low temp solder (I can has lead? As a treat?) may make a difference here as well. Flux pen on the solder wick also seems to help especially if your wick is kinda crusty.

How would tinning those tiny pads not create a massive bridge between them? Does the bridge somehow go away in the reflow phase? (Not familiar with reflow at all)

Yes, the surface tension of melted solder pulls the solder to just the pad areas (assuming you don’t have far too much)

Make sure there is soldermask between the pads. This makes soldering much easier!

(If your foundry can't fabricate it, then make the pads thinner until they can fabricate the soldermask.)


Using a little flux while tinning usually prevents the pads from bridging.

To add to the sister comments, you can quite easily remove such bridges by adding flux and then touching each individual pad with a fine tipped soldering iron. It sometimes takes a few tries, but eventually the solder that’s touching the solder mask will either be wicked onto the iron or move onto one of the neighboring pads. (The trick is to touch just the pads with the iron, and not to try to attack the solder bridge itself.)

Do you find the 6-pin charge-only Type-C connectors too small? Or the 16-pin 2.0-only ones? They seem reasonably hand-solder-friendly but I admit I've been fortunate enough to have the factory handling them for me.

Yeah, I find the 16-pin ones a little beyond my skill. They also feel silly—why can’t I have one with just six pins for D±, VBUS, GND, and CC1/2? I guess I could have a factory make a bunch of modules like that for me, but it definitely feels like a thing that should already exist.

(There are passive A-to-C adapters, so I see no reason why I couldn’t short pin pairs like that.)


I have soldered the 12-pin, power-only USB-C connectors. The real breakthrough came though when I tried a hotplate rather than soldering iron for the USB-C connector.

You cannot do that because of how the connector flips over.

(Believe me, I have tried to make it work.)


Could you clarify? As far as I can tell, GND is A1/B1 and A12/B12, VBUS is A4/B4 and A9/B9, D+ is A6/B6, and D− is A7/B7, and each A pin swaps with its B counterpart when I flip the connector.

Only one side of the cable is going to be lit, but you don't know which one it is: it depends on whatever happened upstream. So you have to be able to handle either side being lit up. You can't easily do that with a single set of contacts because of how D+/D- is handled (it would be a literal X-shaped crossover), so now you're kind of stuck.

It ends up just not being worth the trouble if you need the USB 2.0 pair. But power-only is much easier and, guess what, pretty available in the market.

The 6-pin Type-C provides 2 pins each for power, ground, and CC. (DO NOT LEAVE CC OUT. THIS IS WHY A LOT OF RECENT USB STUFF MISBEHAVES. GET CC1/2 RIGHT PLEASE.)

The 16-pin adds 10 more: 4 for D+/D-, 2 more each for power and ground, and then they add 2 more for SBU as well. I'm not entirely sure why SBU is important enough but I'd guess it's because it's physically vertically next to CC so probably helps the mechanicals to leave it in.

There actually do exist 8-pin guys like this https://www.lcsc.com/product-detail/C47326494.html (among others) that add D+/D- only to the 6-pin connectors. I can't imagine they work terribly well most of the time but they must have some use? They do seem to be from Asian vendors only, which might mean something.

(Side note: the way Type-C handles D+ and D- has caused me so much pain. I get that it was a difficult problem to solve... but there had to be a better way than this, right? Probably not, but I can still whine.)


> There actually do exist 8-pin guys like this https://www.lcsc.com/product-detail/C47326494.html

I was glad to see this at first (because I did page through Mouser and LCSC a bit before I came back here to continue my bitching and found nothing). Then I actually looked at the drawing, and— Excuse me, is that really a USB-C socket that only works in one orientation?.. The drawing shows that the socket has both CC1 (A5) and CC2 (B5) but only one of the two copies of D+ (A6 but not B6) and D- (A7 but not B7). Seriously? Even I don’t hate my users that much.


Yep. I couldn't believe it existed either. I was even curious enough to ask an LLM about it and got the same response: it doesn't know of a use beyond creating frustration.

I guess past-me was smart when drawing USB-C connector symbols in my library and this one doesn't exist there for a reason!


Part of it is the Glibc loader’s carnal knowledge of Glibc proper; there’s essentially no module boundary there. (That’s not completely unjustified, but Glibc is especially hostile there, like in its many other architectural choices.) Musl outright merges the two into a single binary. So if you want to do a loader then you’re also doing a libc.

Part of it for desktop Linux specifically is that a lot of the graphics stack is very unfriendly to alternative libcs or loaders. For example, Wayland is nominally a protocol admitting multiple implementations, but if you want to not be dumb[1] and do GPU-accelerated graphics, then the ABI ties you to libwayland.so specifically (event-loop opinions and all) in order to load vendor-specific userspace drivers, which entails your distro’s preferred libc (probably Glibc).

[1] There can of course be good engineering reasons to be dumb.


Why do you need "vendor-specific userspace drivers"? I thought graphic acceleration uses OpenGL/Vulkan, and non-accelerated graphics uses DRM? And there are no "drivers" for Wayland compositors?

OpenGL and Vulkan are implemented as libraries in user space by the Mesa project.

If you use a non-Latin alphabet, Microsoft Word’s RTF output is a horrific mess of encoding switches everywhere that makes manual text extraction pretty much untenable (and while RTF can use both UCS-2 and Windows codepages, Word seems to stick to—potentially multiple—codepages if it can, presumably for compatibility). That said, Microsoft always intended RTF to be Word’s exchange and archival format (unlike DOC, which was a mess they did not want to document), so it has enough of an official spec that extracting text, at least, is very possible.

RTF uses UTF-16, not UCS-2; you can in fact use two \u____ commands in a row using surrogate pairs.

Anyway, I wonder if this would work for you.

https://github.com/torstenvl/rtfproc


If you know your data is UTF-8, then bytes 0xFE and 0xFF are guaranteed to be free. Strictly speaking, 0xC0, 0xC1, and 0xF5 through 0xFD also are, but the two top values are free even if you are very lax and allow overlong encodings as well as codepoints up to 2³² − 1.

I think it would probably be better to invest in a proper framing design than trying to poke holes in UTF-8.

(This is true regardless of UTF-8 -- in-band encodings are almost always brittle!)


A small nitpick: I don’t think your intersection example does what you want it to do. Perhaps there’s some obscure difference in “PER-visibility” or whatnot, but at least set-theoretically,

  LegacyFlags2 ::= INTEGER (0 | 2 ^ 4..8) -- as in the article
is exactly equivalent to

  LegacyFlags2 ::= INTEGER (0) -- only a single value allowed
as (using standard mathematical notation and making precedence explicit) {0} ∪ ({2} ∩ {4,5,6,7,8}) = {0} ∪ ∅ = {0}.

Cryptonector[1] maintains an ASN.1 implementation[2] and usually has good things to say about the language and its specs. (Kind of surprised he's not in the comments here already :) )

[1] https://news.ycombinator.com/user?id=cryptonector

[2] https://github.com/heimdal/heimdal/tree/master/lib/asn1


Thanks for the shout-out! Yes, I do have nice things to say about ASN.1. It's all the others that mostly suck, with a few exceptions like XDR and DCE/Microsoft RPC's IDL.

Derail accepted! Is your approval of DCE based only on the serialization not being TLV or on something else too? I have to say, while I do think its IDL is tasteful, its only real distinguishing feature is AFAICT the array-passing/returning stuff, and that feels much too specialized to make sense of in anything but C (or largely-isomorphic low-level languages like vernacular varieties of Pascal).

Well, I do disapprove of the RPC-centric nature of both, XDR and DCE RPC, and I disapprove of the emphasis on "pointers" and -in the case of DCE- support for circular data structures and such. The 1980s penchant for "look ma'! I can have local things that are remote and you can't tell because I'm pretending that latency isn't part of the API hahahaha" research really shows in these. But yeah, at least they ain't TLV encodings, and the syntax is alright.

I especially like XDR, though maybe that's because I worked at Sun Microsystems :)

"Pointers" in XDR are really just `OPTIONAL` in ASN.1. Seems so silly to call them pointers. The reason they called them "pointers" is that that's how they represented optionality in the generated structures and code: if the field was present on the wire then the pointer is not null, and if it was absent the then pointer is null. And that's exactly what one does in ASN.1 tooling, though maybe with a host language Optional<> type rather than with pointers and null values. Whereas in hand-coded ASN.1 codecs one does sometimes see special values used as if the member had been `DEFAULT` rather than `OPTIONAL`.

