Am I the only one who generally finds those directories getting in the way? I have very few videos or music files, or even images worth storing as images rather than as parts of other documents. Downloads and Documents might be useful, but then Documents ends up holding almost everything that isn't online, so why not put it straight in $HOME? And I don't like capitalized folder names, but that's just me.
I 'hated' this too for a long time, but I finally gave in. It now also matches my backup workflow perfectly. And now I don't have to tweak that much after installing a new distro. Sometimes my tools have to adapt to me, sometimes I adapt a little to my tools.
Sure, let’s send private medical data to a cloud server somewhere for processing, because a medical professional in 2025 can’t be expected to know how to use a keyboard. That’s absurd.
I can type quite well. I can also troubleshoot minor IT issues. Neither is a better use of my time than seeing patients.
I’m in an unusual situation as an anesthesiologist; I don’t have a clinic to worry about, so my rate-limiting factor isn’t me, it’s the surgeon. EMR is extremely helpful for me because 90% of my preop workup is based on documentation, and EMR not only makes that easy but lets me do it while I still have the previous patient under anesthesia. I actually need to talk to 95% of patients for about 30 seconds, no more.
But my wife is primarily a thinking rather than doing doctor, and while she can type well, why in the hell do we want doctors being typists for dictation of their exams? Yes, back in the old days, doctors did it by hand, but they also wrote things like “exam normal” for a well-baby visit. You can’t get paid for that today; you have to generate a couple of paragraphs that say “exam normal”.
Incidentally, as for cloud service, if your hospital uses Epic, your patients’ info is already shared, so security is already out of your hands.
Macs have pretty decent on-device transcription these days. That’s what I set up for my wife and her practice’s owner for dictation because a whole lot of privacy issues disappear with that setup.
The absurdity is that doctors have to enter a metric shit ton of information after every single visit even when there’s no clearly compelling need for it for simple office visits beyond “insurance and/or Medicare” requires it. If you’re being seen for the first time because of chest pain, sure. If you’re returning for a follow up for a laceration you had sewn closed, “patient is in similar condition as last time, but the wound has healed and left a small scar” would be medically sufficient. Alas, no, the doctor still has to dictate “Crime and Punishment” to get paid.
This has been happening for years, long pre-dating LLMs or the current AI hype. There are a huge number of companies in the medical transcription space.
Some are software companies that ingest data to the cloud as you say. Some are remote/phone transcription services, which pass voice data to humans to transcribe it. Those humans then store it in the cloud when it is returned to a doctor's office. Some are EMR-integrated transcription services which are either cloud-based with the rest of the EMR or, for on-premise EMRs, ship data to/from the cloud for transcription.
Medical companies could self-host their speech-to-text transcription. In the end, the medical data ends up stored on some servers anyway, so doing speech-to-text seems efficient and not too worrying if done properly.
So you think the better solution to doctors not being able to type is for them to self-host speech-to-text systems, rather than teaching doctors to type faster?
Their healthcare/IT provider like Epic would do it. And in fact some have already done it, from what I can see.
Furthermore, preparing/capturing docs is just one type of task specialization and isn’t that crazy: stenographers in courtrooms or historically secretaries taking dictation come to mind. Should we throw away an otherwise perfectly good doctor just for typing skills?
I imagine a setup where the speech-to-text listens to the final diagnosis (or even the whole consultation) and summarizes everything into a PDF. Privacy-aware, of course (maybe some locally hosted model).
And then the doctor double-checks and signs everything.
I feel like when you go to the doctor, 80% of the time they stare at the screen and type something. If that could be automated and more time spent on the patient, great!
Who is responsible when the speech-to-text model (which often works well, but isn’t trained on the thousands of similar-sounding drug names) prescribes Klonopin instead of Clonidine and the patient ends up in a coma?
This isn't a speech recognition problem per se. The attending physician is legally accountable regardless of who does the transcription. Human transcriptionists also make mistakes. That's why physicians are required to sign the report before it becomes a final part of the patient chart.
In a lot of provider organizations, certain doctors are chronically late about reviewing and signing their reports. This slows down the revenue cycle because bills can't be sent out without final documentation so the administrative staff have to nag the doctors to clear their backlogs.
SEQUENCE {
  SEQUENCE {
    OBJECT IDENTIFIER '1 2 840 113549 1 1 1'
    NULL
  }
  BIT STRING 0 unused bits, encapsulates {
    SEQUENCE {
      INTEGER
        00 EB 11 E7 B4 46 2E 09 BB 3F 90 7E 25 98 BA 2F
        C4 F5 41 92 5D AB BF D8 FF 0B 8E 74 C3 F1 5E 14
        9E 7F B6 14 06 55 18 4D E4 2F 6D DB CD EA 14 2D
        8B F8 3D E9 5E 07 78 1F 98 98 83 24 E2 94 DC DB
        39 2F 82 89 01 45 07 8C 5C 03 79 BB 74 34 FF AC
        04 AD 15 29 E4 C0 4C BD 98 AF F4 B7 6D 3F F1 87
        2F B5 C6 D8 F8 46 47 55 ED F5 71 4E 7E 7A 2D BE
        2E 75 49 F0 BB 12 B8 57 96 F9 3D D3 8A 8F FF 97
        73
      INTEGER 65537
    }
  }
}
or this:
(public-key
  (rsa
    (e 65537)
    (n 165071726774300746220448927123206364028774814791758998398858897954156302007761692873754545479643969345816518330759318956949640997453881810518810470402537189804357876129675511237354284731082047260695951082386841026898616038200651610616199959087780217655249147161066729973643243611871694748249209548180369151859)))
I know that I’d prefer the latter. Yes, we could debate whether the big integer should be a Base64-encoded binary integer or not, but regardless, writing a parser for the former is significantly more work.
And let’s not even get started with DER/BER/PEM and all that insanity. Just give me text!
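To make the "more work" claim concrete: a reader for the s-expression form above fits in a dozen lines. A minimal sketch in Python (hypothetical `tokenize`/`parse` helper names; bare whitespace-separated atoms and parens only, no quoted or binary atoms as in full canonical s-expressions):

```python
# Minimal s-expression reader: atoms are ints or symbols,
# lists are parenthesized sequences.

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # consume the closing ")"
        return lst
    return int(tok) if tok.isdigit() else tok

key = parse(tokenize("(public-key (rsa (e 65537) (n 12345)))"))
# key == ["public-key", ["rsa", ["e", 65537], ["n", 12345]]]
```

A DER parser, by contrast, needs tag/length/value decoding, long-form lengths, and knowledge of the encapsulated BIT STRING convention before it can even reach the integers.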
The ASN.1 notation wasn't meant for parsing. Then people started writing parser generators for it, so they adapted. However, you're abusing a text format meant for human reading and pretending it's a serialization format.
The BER/PER are binary formats and great where binary formats are needed. You also have XER (XML) and JER (JSON) if you want text. You can create an s-expr encoding if you want.
Separate ASN.1 the data model from ASN.1 the abstract syntax notation (what you wrote), and both of those from ASN.1's encoding formats.
> However, you're abusing a text format for human reading and pretending it's a serialization format.
They should be the same, in order to facilitate human debugging. And we were discussing ASN.1, not its serialisations. Frankly, I thought that it was fairer to compare the S-expression to ASN.1, because both are human-readable, rather than to an opaque blob like:
Sure, that blob is far more space-efficient, but it’s also completely opaque without tooling. Think how many XPKI errors over the years have been due to folks being unable to know at a glance what certificates and keys actually say.
No worries! ASN.1 is a bit weird because there’s a textual version and then a bunch of serialisations.
I think that human-readable is more important than space-efficient. At some point an engineer is going to be looking at bytes in a dump or debugger, and it sure is nice to be able to quickly know what they are.
That is a text format; DER is a binary format that encodes the data represented there as text. I think they should not have made a BIT STRING (or OCTET STRING) encapsulate another ASN.1 structure; it would be better to include it directly, but it can work nevertheless. The actual data to be parsed will be binary, not a text format like that.
DER is a more restricted variant of BER, and I think DER is better than BER. PEM is the same DER data, but base64-encoded with a header line indicating what type of data is stored, rather than the raw binary directly.
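The PEM-to-DER relationship really is just that thin. A minimal sketch in Python, stripping the header/footer lines and base64-decoding the body (the sample key body here is an assumption for illustration; the first decoded byte 0x30 is the DER SEQUENCE tag, as in the dump above):

```python
import base64

pem = """-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAGb9ECWmEzf6FQbrBZ9w7lshQhqowtrbLDFw4rXAxZuE=
-----END PUBLIC KEY-----"""

# PEM = base64-wrapped DER between "-----BEGIN/END ...-----" lines.
body = "".join(
    line for line in pem.splitlines()
    if not line.startswith("-----")
)
der = base64.b64decode(body)
print(hex(der[0]))  # 0x30: the DER tag for SEQUENCE
```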
Wasting hundreds, well maybe a thousand, bytes in the process! (I understand it's better to cache, and things can get out of hand fast. But we're talking about small animations here.)
I don't disagree at all. There's way bigger fish to fry when it comes to maximizing network efficiency in websites.
I'm just pointing out the rationale of OP since someone asked, and I've worked with people who've made this and similar arguments before. It's been a while, though; the ones I'm thinking of would treat code golf as a best practice for CSS (only slightly exaggerating).
Believe me, a few kb on every icon on every pageview adds up to a lot of wasted bandwidth and not everyone is on a gigabit connection at all times, even fancy iPhones are on shitty connections a lot more than you think. Inside parking garages, on deprioritized LTE, etc.
I very much like these optimization articles. What could be interesting is for benchmarks to report not only wall time but other metrics too:
- CPU time (better CPU utilization can mean shorter wall time but higher total CPU time)
- memory usage
- but also, and maybe more interestingly, the complexity of the code (not an absolute metric, but very complex/non-portable code for a 5% speedup may or may not be worth it)
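The wall-time vs CPU-time distinction above is easy to measure directly. A minimal Python sketch (the `busy` workload is a hypothetical stand-in for the code being benchmarked); a parallel version of the same workload could shrink wall time while total CPU time grows:

```python
import time

def busy(n):
    # Stand-in CPU-bound workload.
    s = 0
    for i in range(n):
        s += i * i
    return s

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time of this process
busy(2_000_000)
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

print(f"wall: {wall:.3f}s, cpu: {cpu:.3f}s")
```

For a single-threaded run the two numbers are close; a speedup that comes from more threads shows up as wall < cpu.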
Maybe limit contracts that a person (who is not a professional in the subject of the contract) can sign to 200 words. Anything past the 200th word doesn't exist, even if you sign it.
Don't containers imply a Linux kernel interface? Hence you can only have truly native containers on Linux, or else use containers in a VM or some kind of Wine-like translation layer.