
I don’t think productivity was ever the goal of this software. It was to have a record that is standard, digital, transferable, etc. Doctors fought it as long as they could because they knew what it meant for them.

I remember pretty early demos in the early/mid 2000s when I was doing some clinical grunt work in college. I had written some software to make my department’s life easier, so I was offered up as the hospital’s liaison for the software evaluation. This is when I formed my “never replace a terminal based app, with a GUI based app and expect productivity gains” theory. Everyone working in the hospital knew the terminal app: they'd type in some random 3-letter code and a screen would pop up. Then they would memorize how many tabs apart each field was. Without a mouse, people could just hum along inputting data at blazing speed once some muscle memory was in place. Everyone had little cheat sheets printed out for the less frequently used commands/codes. When you replace this with a browser/desktop GUI with selectors and drop-downs and reactive components, it tends to 1) require mouse usage for most people and 2) lose the ability to do the quick data entry I described. The pretty interface becomes a steady stream of speed bumps that reduce productivity. Since then I’ve witnessed it in banking and other industries too.




IMHO, this is because the people writing GUIs these days are mostly incompetent, or hamstrung by "web" technologies.

Early GUIs didn't have the problem you describe because they were designed as discovery mechanisms for the underlying functions. AKA, the idea was that after clicking File->Save a dozen times you would remember the keyboard accelerator displayed on the right-hand side of the menu. Or, if nothing else, remember that the F in File was underlined along with the "S" in Save (or whatever). Which would lead people to just press Ctrl-S, or Alt-F, S. Then part of testing was making sure that the tab key moved appropriately from field to field, etc.

I remember in the 1990s spending a fair amount of time doing keyboard optimization in a "reporting" application I wrote (which also had an early touchscreen) for use by people whose main job wasn't using a computer. Then we would have "training" classes and watch how they learned to use it.

So, much of this has been lost with modern "GUIs"; even the OS vendors, which should have been keeping their human interface guidelines updated, did stupid things like _HIDE_ the accelerator keys in Windows if the user wasn't pressing the Alt key. Which destroys discoverability, because now users don't have the shortcut in their face. Never mind the recent crazy nonsense where links and buttons are basically the same thing, sometimes triggering crazy behaviors like context menus and the like. Or just designing UIs where it's impossible to know if something is actually a button because the link text is the same color as the rest of the text on the screen.


In my experience, with the rise of GUIs over TUIs we lost command buffering. If you knew what you were doing with a well-designed TUI, you could hit a sequence of keys that would be buffered and "replayed" as the next screen(s) loaded. Hit a sequence of commands in a GUI and they'll just get lost after the first one as the app/website loads.


What you describe is the natural outcome of having a single message loop with synchronous handlers, and describes e.g. Win32 just as well - a sequence of keys would simply end up as the corresponding sequence of window messages in the queue, and be processed in order.

Where we partially lost that is when UX started to get async. It's not fundamentally incompatible with well-ordered input messages, but in practice, people who write async code all too often forget that it's async all the way (and not just for their favorite scenario). And so you get travesties such as textboxes that let you type text into them, and then erase all that when the app or the page is "fully loaded".
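
To make the point concrete, here's a minimal sketch of keeping that ordering in a web app, assuming a hypothetical "#search" box and a stand-in async loadAppData() bootstrap step: early keystrokes get buffered and replayed instead of being clobbered when initialization finishes.

```ts
// Hypothetical sketch: queue printable keys that arrive before async init
// finishes, then append them to the field instead of resetting it to "".
const pending: string[] = [];
let ready = false;

document.addEventListener("keydown", (e) => {
  if (!ready && e.key.length === 1) {
    pending.push(e.key); // buffer, in order, like an old message queue would
  }
});

async function loadAppData(): Promise<void> {
  // stand-in for whatever slow bootstrap work the real app does
  await new Promise((resolve) => setTimeout(resolve, 1000));
}

async function init(): Promise<void> {
  await loadAppData();
  const box = document.querySelector<HTMLInputElement>("#search");
  if (box) {
    box.value += pending.join(""); // replay buffered input, don't erase it
    box.focus();
  }
  pending.length = 0;
  ready = true;
}

init();
```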


Amen. I talked to some guys in their 20s in the oil and gas industry in Houston and they preferred using the old TUI systems because of this exact reason.


On a related note: the same is true for keyboard macros in Emacs, Vim, etc. I often send a bunch of related messages, each slightly different from the others, using Emacs keyboard macros and Gnus. Felt great the first time I discovered it; saved perhaps 3 hours I would have spent writing and debugging a script.


The App Library feature is underrated. A clean screen and search is just so much better than having to scroll through page after page of icons.


Excuse me, what is TUI?


Terminal user interface


Thank you


I agree with you. Although I don't think it's incompetence so much as laziness. Not just "too lazy to make a good UI" but "too lazy to find out what makes a UI good." I've seen so many coworkers happy to slap some basic form together and expect that to be good enough.

I'm constantly writing UI for sports teams who do not at all like to waste time with these kinds of fiddly UI elements and flows. Most of them would likely stick to Excel if our solutions are more cumbersome (which is a high bar to meet/beat, but rightfully so). They need to be able to easily get to data and relevant, connected pieces of data, quickly enter data into relatively complex forms, and have it all be clear, reliable, and fault-tolerant. This means making some tradeoffs, particularly around what is considered modern UI aesthetics, and doing things most UI developers don't need to do, such as automating little things, adding hotkeys, etc.


> Most of them would likely stick to Excel if our solutions are more cumbersome (which is a high bar to meet/beat, but rightfully so)

Tying this back to the top-level comment: at some point I've realized that so many SaaS would be better off as an Excel sheet, because they're fundamentally just a slower, buggier, much less productive and uglier version of one. But the point isn't efficiency and empowerment. The point is control: the SaaS forces the user to a specific workflow. One that makes it easier for developers to develop (by constraining the problem space), or for company to monetize, or for corporate to retain legibility - but rarely to actually help the end-user.

As an end user from a generation that was taught Excel and MS Access at school, I'd go as far as saying that 50+% of web applications I use would be an order of magnitude more useful if the UX was that of Excel or Access.


> Tying this back to the top-level comment: at some point I've realized that so many SaaS would be better off as an Excel sheet

I preach this all the time. Anyone who has to enter data really wants Excel and not the fancy wizardized stepped workflow that seems to be the norm nowadays.


You have to (at least!) account for concurrent edit & access. I remember the hellscape that was emailing files like ProjectPlan_v6_beambotEdits_latest.xls to team members.


This is what MS Access was developed for.


Nowadays with team shared storage like Box, Dropbox, OneDrive etc, it's "HEY EVERYONE ELSE CLOSE Project_Costs_and_Totals_11.22(15).xlsx I NEED TO ADD LAST WEEK'S DATA"


> for company to monetize

I suspect an overwhelming majority of use cases is for this reason alone.


So what you're saying is HTML5 and server side rendering should be the go to before any client side junk.


I dunno if they are saying that, but I wholeheartedly endorse this idea. Add the "client side junk" (fancy Javascript stuff, etc.) as enhancement on top for those who want it after the required functionality is being properly and reliably served by the "core technologies". Serve the need properly first, then make it "nice".


The problem is that people make the decision about what to use as inexpert users and that pretty GUI with all the space and Next buttons looks so easy to use to them. By the time they realize they didn’t actually want it, it’s too late. This is the reason people stick with things like Excel, it’s easy to transition up the sophistication ladder.


I don't have much to add, but you really nailed the sentiment I was going for exactly. I have been lucky to both be serving a very small user group that I can work closely with, and one whose existing workflows still exist and they can go back to if ours suck, so we _have_ to be better.


I had a coworker tell me that JavaScript should be added like color: stuff that makes it nicer to use for those that have it turned on, but not necessary for those that don't have it at all (or can't see color).

Kind of a hardcore position in my opinion, considering it's ignoring the occasional advantages/use cases for client-side code (like complicated SPAs where the user is explicitly buying into the idea of running lots of code in their browser, Figma comes to mind), but I still like to think about it here and there.


In the same spirit as "99% of companies are not Google and do not need Google's infrastructure":

99% of web applications are not Figma and probably do not need any client side JavaScript. The amount of JavaScript that's actually just wrong is incredible.

1. I don't care what your client side validation thinks. That is my email address.

2. Why are you serving people all of the news story if you are going to then hide it? We can block the JavaScript and read the news.


Well, both of your cases are things I happen to agree just shouldn't be necessary anyway, lol. Like most of the time I'm giving up an email, it's not because I want to get emails from these jerkwads that are trading some small amount of service for the right to blast me with marketing emails (and they ALWAYS ignore my choice on the "please don't send marketing emails" checkbox, if they even have one). And news stories that aren't served over RSS are ones I don't want to read... I hate the modern state of the internet-as-capitalist-entity.

So I don't disagree on merit alone, but it seems if we wanna do a good capitalism, we need to make sure people are actually giving us real emails apparently


But you don't need client side JavaScript for that.

In fact, it makes the experience generally worse.


I feel like the push to make software accessible (in a new-user, not disability, context) and intuitive has made complexity the enemy. Instead of having software that grows with the user's capability, features are hidden from the top layer of interactivity or just cut entirely.

I was at the post office here in Australia a few years ago and saw the screen. It was one of those DOS-era full-screen red and blue text interfaces. The clerk was flying through hotkeys and getting things done. People can learn, yet so much software treats them as infants.

And you know there's definitely someone looking at replacing that software with a modern GUI.


I did an internship there like 10 years ago at the end of my uni course, where one of my projects was to do a proof of concept of re-implementing their point-of-sale application using web technologies, maintaining the DOS-y look, feel, keyboard shortcuts etc.

At the time I had no idea why. In hindsight, it's hilarious. I'm guessing it was the result of a clash between someone that wanted to use modern tech for the sake of it and someone representing the users that told them to get fucked, probably went through 15 meetings before eventually getting palmed off on the intern so it didn't pick up too much steam (probably the only good decision that was made in the whole process).


For future reference (and anyone following along later), that is an "ncurses" terminal application.

You should see the customized JBHIFI terminal + keyboard.


I don't seem to have a shot of the terminal app, but I do for the keyboard:

https://photos.app.goo.gl/fLeDNp8rU7wN7siXA


What are those? Searching for "JBHIFI terminal" only brings up a store in Australia of that name selling square POS terminals.

Bloomberg terminals also have a custom keyboard and are essentially a terminal program.


Yeah JB Hi-Fi is a local retailer in Aus (guessing it's like a Best Buy but a strong focus on music CDs originally), it's probably a reference to the software the staff uses. Haven't had a chance to look over their shoulder at that one.


So should we be doing demos in Bash, with an ncurses or gum workflow before going back to write the proper system in C?


I don't see the problem with this. What is a "gum workflow" in this context?


Neither do I.

It was supposed to be read:

A [ncurses or gum] workflow.

Gum being an alternative method of making a Bash UI.

https://github.com/charmbracelet/gum


Thanks, that's a neat tool!


I remember that being a key difference between the graphical/modelling software SoftImage and Maya, with the former a TUI and the latter a GUI. The Maya approach won out because getting new people productive quickly was more important than their longer-term productivity, it seems. Maybe it has something to do with turnover, or maybe there are people who can do good creative work but can't adapt to using a TUI, or at least are frightened of it.


I think you’re onto something with this. SAP power users can input data at blazing speeds, because they remember so many of the codes. So this definitely isn’t just a GUI vs TUI paradigm.


It's also because it's enterprise software. Which actually isn't software, it's more of a platform. You have to do so much implementation detail that the GUI is just the result of some form-builder type module. Everything I've ever encountered that was enterprise software felt like its GUI was not made by humans at all. I don't actually know how they get built, but they're almost never optimized for humans or the usage they're meant to benefit.


Nobody wants to pay for better GUIs in enterprise software, so no vendor puts any attention into them. An Enterprise Architect explicitly explained to me (when I was raising a point of choosing a software package that had much better UI) that good UX is a small factor and company (a bank in that case) would rather buy cheaper software and just have its workers suffer more, because it's deemed more cost-effective.


The definition of enterprise software is “the customer is not the user”. You don’t have to make the user happy, just the CxO.


And this is why the most polished part of most enterprise software packages is the dashboard/reporting function, the only part the C-levels might actually touch themselves.


Wouldn't productivity from better UI be something measurable that can be then advertised by vendors and extrapolated into savings for the customer? I feel like that could be a pretty compelling selling point to the CxOs.


How do you measure how much MS Teams sucks compared to Slack? All the CTO knows is that they have an enterprise license for Office along with Teams and the salesperson they talked to showed them how great it integrates with the rest of the suite.

In the EHR space, all the CMO (chief medical officer) cares about is it helps keep them in compliance. Why should they care if it’s hard for their office staff to use?


My company used UX as a competitive differentiator for Enterprise Software in the healthcare sector. It's so neglected by most software development companies, we were able to get the doctors as our champions to convince the bean counters to consider our products.


We did the same thing in the healthcare sector, but for safety event reporting! It's not like we even did anything groundbreaking with UX. One was just making it so it's easy to fill out a form quickly. You don't even have to touch the mouse if you don't want to. Users love it and it's increased event reporting by a good percentage.


This is true, look at time collection software or just about anything written with SAP.


I've not done that kind of SW in a long time, and never with someone else's platform. But that said, the reporting application I was describing above was a platform in the same sense. It was largely an engine for generating the forms being filled out by the end users. Which is why there was so much effort spent on usability stuff: the underlying form descriptions had to have tons of optimization flags for doing things like sorting common items to the top N items of drop-downs, or moving fields around in the form to match the ways the users thought about filling out the forms.

So there were two sides: the engine optimizations to assure things like tab order on a form, and the actual writing of the form descriptions. In the first couple of organizations that adopted it I wrote the forms and the engine in parallel, adding feature flags/controls as needed to support the desired UI outcomes. Later, after I quit, the lady who wrote much of the RFP responses started writing the actual form descriptions, because it was just as easy as drawing them out in the (Visio?) plugin she was using with MS Word and doing screen captures for the RFP. Then, I guess because she knew how to do it, she started doing the "tuning" as well.


> IMHO, this is because the people writing GUIs these days are mostly incompetent, or hamstrung by "web" technologies.

The latter is definitely not the problem. Even the Twitter re-design from a couple years back still supports all the old hotkeys.

All it takes to at least support a tab-based workflow is using the "tabindex" property if your form isn't logically laid out already, and the rest can be done by capturing hotkeys.

Even multimedia content can be operated using hotkeys. Youtube is a good example. There's no excuse but laziness and incompetence IMO.

[1] https://developer.mozilla.org/en-US/docs/Web/HTML/Global_att...
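
As a hedged illustration (the element ids here are invented, not from any real product), the combination described above is only a few lines: an explicit tab order via tabIndex, plus one captured hotkey for the most common action.

```ts
// Explicit, logical tab order for a form whose DOM order isn't ideal.
// The #sku/#qty/#notes/#save ids are hypothetical.
const order = ["#sku", "#qty", "#notes", "#save"];
order.forEach((selector, i) => {
  const el = document.querySelector<HTMLElement>(selector);
  if (el) el.tabIndex = i + 1;
});

// Ctrl+S submits without the mouse; preventDefault stops the browser's
// own "save page" dialog from appearing.
document.addEventListener("keydown", (e) => {
  if (e.ctrlKey && e.key.toLowerCase() === "s") {
    e.preventDefault();
    document.querySelector<HTMLButtonElement>("#save")?.click();
  }
});
```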


It's not always just about tabindex. For example, it might be about understanding that there are multiple ways to fill out a form that make sense. Then hiding/showing pieces as needed and/or providing hotkeys to jump from field 1 to field 5 because the user doesn't want to fill out 2, 3, 4, because they are optional. It's about keystroke optimization. Sure, they can press tab 3 times, or they can just press ctrl-5 (or whatever) to get there.

If you watch people use the Sabre command line interface (the one from the 1970s?), you can see some of what I'm talking about when people are just filling out the forms with the submission line: it's less using the GUI and more just knowing some sequence of keystrokes that results in an action being taken.

AKA it's possible to do both, without having the user wear out the tab key or grab the mouse all the time.
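
A small sketch of that "press ctrl-5 to get there" idea, with hypothetical field ids; the point is that skipping optional fields costs one chord instead of several tab presses.

```ts
// Ctrl+1..Ctrl+5 focus a field directly, so optional fields 2-4 can be
// skipped with a single chord. Field ids are invented for illustration.
const fields = ["#field1", "#field2", "#field3", "#field4", "#field5"];

document.addEventListener("keydown", (e) => {
  const n = Number(e.key);
  if (e.ctrlKey && Number.isInteger(n) && n >= 1 && n <= fields.length) {
    e.preventDefault();
    document.querySelector<HTMLInputElement>(fields[n - 1])?.focus();
  }
});
```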


I think it depends a lot on how it is used. In the case of Sabre, that is somebody's job. They use that interface every day, so the user needs to be able to use the interface efficiently and is willing to learn the shortcuts.

A sign up form that a given user will only use once in their lifetime is another matter. It needs to be simple lest the user abandons the process. Most users in that case are not willing to learn keyboard shortcuts, tabbing over optional fields probably makes more sense here.


Modern web culture is very much the problem. The tech can be keyboard-friendly, sure... but in practice, even the most basic stuff, like Enter to submit forms, is often not working because the "Submit" button is not a proper button, but a bunch of JS.


It is a problem because with a TUI the keyboard is the first-class input device, whereas with a GUI, and especially HTML, it's an afterthought most of the time. Yes, there are exceptions like Twitter and Gmail, and then there are millions of other interfaces where the mouse is the only way to navigate.


I think it’s less about the design being bad and more about simplicity as a goal overriding everything else.

So learning an interface is frowned upon. Every interface must be designed as if the user has never seen it before.

Which is ok, as long as there’s also an alternative path which may take time and effort to learn but leads to increased efficiency and productivity.

Unfortunately, that alternate path takes a lot of effort. And worse, it leads to very few extra sales. The company which puts in the effort to build a complex but efficient workflow in addition to the initial easy workflow will get beat in the market by competitors who only focus on the easy workflow that looks great in demos leading to the C-suiters to buy it.

The best example is the pioneer of both these trends: Apple. It used to be that Apple insisted on a single-button mouse so the primary interface was extremely easy to use. Yet it spent a lot of effort including keyboard shortcuts, which were prominently displayed, everywhere. Unlike Windows, which basically only had Ctrl/Alt modifier keys, Apple has had the Ctrl/Cmd/Opt modifier keys as integral parts for a very long time, encouraging those shortcuts. Apple also put a lot of effort into making their, and 3rd party, applications easily scriptable (the choice of AppleScript, however, always held this back). The most MS did was VBA for Office.

But once Apple entered the iEra, it realized it didn’t need any of this to sell products anymore. The massive lag in supporting basic keyboard shortcuts on the iPad when using a keyboard is one of the strongest pieces of evidence for this.


For Microsoft stuff in that time period, the scripting was meant to be done via OLE Automation (VBA is really just a scripting language with OLE Automation as the underlying object model). And it was much more pervasive than Office - remember the time when Microsoft products were all "Active ..."? Third-party apps were encouraged to do that as well, although few did.


> IMHO, this is because the people writing GUIs these days are mostly incompetent, or hamstrung by "web" technologies.

Completely agree with all of this.

Adding that it's not even just web technologies. My work Mac had an issue with a Bluetooth mouse, and removing and re-adding a Bluetooth device with the keyboard is basically impossible: using Tab, nothing gets highlighted to show you've switched to a control, and some of the crucial parts can't be tabbed to at all.

Modern GUIs are absolutely not fit for purpose.


Having worked on enterprise systems, it's not that the developers are always incompetent, it's more that everything is top down designed by committee. Nothing can be harmonized as everyone wants their workflow in there, each one slightly different, there's political infighting, jobsworths, people scared to lose their jobs, all under a strict budget and deadlines to meet.

No one knows what they really want till the software starts getting written and someone finds out they can't do their job, then come change requests (made a lot of money off of them). You kind of just get numb to it all.


GUIs have really profoundly regressed. Go read any UI design book from the 80s or 90s.

As you say the web is a culprit but so is attempting to shoehorn mobile designs into desktop.


This is nostalgia; you're remembering things as better than they were. Back then there were so many bad UIs in software: http://hallofshame.gp.co.at/shame.htm


Yes, there were plenty of bad UIs and they got called out on sites like that because of it. If you read some of those "bloopers" you realize that the functionality the guy is complaining about is pretty much the default broken behavior these days. For example: applications that don't honor system colors.

With some OSes you can't even have fine-grained control over those defaults anymore. Dark modes have brought some of it back, so it's better now than 5 years ago, but still worse than 20 years ago. But people pretend that having a dark/light switch is the same thing as being able to customize the color of just about every layer of the UI and have the vast majority of applications honor it.


I suppose one way to save the situation would be to build libraries that allowed you to easily build TUIs/efficient GUIs that interact with OpenAPI or GraphQL endpoints? If only there were a way to encode the workflow in addition to just the APIs, it could almost be generated.
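
Something like the following rough sketch, assuming a hypothetical endpoint and an already-extracted request schema (a real library would still have to resolve $ref, nesting, auth, and the workflow ordering mentioned above):

```ts
// Generate a keyboard-only prompt sequence from an OpenAPI-style property
// map, then POST the assembled object. Endpoint and schema are invented.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

type Prop = { type: "string" | "number"; description?: string };

async function promptAndPost(url: string, props: Record<string, Prop>) {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const body: Record<string, unknown> = {};
  for (const [name, p] of Object.entries(props)) {
    const raw = await rl.question(`${name} (${p.type}): `);
    body[name] = p.type === "number" ? Number(raw) : raw;
  }
  rl.close();
  await fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
}

// e.g. promptAndPost("https://example.test/api/orders",
//   { sku: { type: "string" }, qty: { type: "number" } });
```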


Could you recommend some well acclaimed older UI design books? I’m interested to learn what we’ve lost!


The Humane Interface by Jef Raskin


One other thing about the 90s data entry TUIs - we had very good tools that were specifically optimized for that, both in terms of ease of use for the developer, and in terms of efficiency for the end user of the resulting app. Remember dBase, Clipper, FoxPro etc?


Think how bad it's going to get when UX designers have only used phones and tablets.


They also fought it because they didn't go to medical school and survive residency to fill out forms all damn day—and they didn't used to have to, they had staff for that.

Then the computerized systems "replaced" that staff but all that really means is they cut the human time needed low enough that full-time workers weren't needed, but didn't eliminate it, so now that's another thing doctors have to do themselves.

AFAI can tell, the effect of tech overall is to cut some jobs while making the remaining ones harder and more stressful, while increasing so-called context switching.


> they cut the human time needed low enough that full-time workers weren't needed

No, that's not it at all. What GP is saying is that they cut the human expertise low enough that full-time workers weren't needed. The manpower savings never materialized because an app built for experts is faster than one built for casual users, and also because those experts, even with the high training cost, were ultimately cheaper per hour than the highly compensated people who now have to do the job because we 'made it easy'. First you devalue those experts by making their job harder, then you get rid of the job and make it someone else's, split between entry level staff and your most expensive employees.

> AFAI can tell, the effect of tech overall is to cut some jobs while making the remaining ones harder and more stressful, while increasing so-called context switching.

You still got there in the end.


> AFAI can tell, the effect of tech overall is to cut some jobs while making the remaining ones harder and more stressful, while increasing so-called context switching.

100% agree.

The Office suite is, in some ways, the worst thing that happened to corporations. Thanks to computers, everyone can now easily write a report, fill in a spreadsheet, or manage their meetings. Which means that everyone now has to write reports, fill in spreadsheets and manage their calendars. This used to be a separate job. You had people specializing in those tasks, and they were focused and efficient at it. Now, everyone does it on their own, and not only we suck at this, it's also distracting us from the "main" job we're supposed to be doing.

The older I get, the more I feel computer revolution was in big part a bait-and-switch.


Great point(s). There's no longer the barrier to entry. No longer a belief that just because you can doesn't mean you should. That behaviour gets reinforced because doing X or Y is easy; that lack of friction builds a bias that the end product/output is 10x better than it really is.

You're not the only one with that bait and switch feeling.


But the doctors almost always were taking notes anyway. My stepfather (a doctor) fought a losing battle against electronic records because he had _decades_ of paper records stored in the "records" room of his office. It was largely the responsibility of the front desk to pull the patients' records and have them ready for him to read/check before seeing the patient, then clean them up and refile them. Long-term patients had pages and pages of handwritten notes, prescription histories, etc.

So a part of the job has always been the record keeping. OTOH, as one of the other users mentioned, I've seen enough doctors using their computer records systems to know that the software is mostly garbage. The doctors spend 2-4x the time dealing with the shitty UI as actually typing in the notes now.

(In the end he basically retired instead of convert to electronic records).


Sounds about right

For those that seemed to transfer successfully, I noted that at Mayo Clinic, the doctors use live dictation software and dictate at least some of their notes into the system while the patient is present near the end. This immediate review sometimes brings up a few new questions (from either Dr or patient), and a bit more notetaking. So, it looks like a very efficient system. They also have no apparent shortage of staff organizing things.

That said, I doubt every medical organization and office has the same quality setup as a top world-class institution. At some level of degradation, the system becomes more of a hindrance than a help, and that point is likely fairly near the top levels (so most of it is a hindrance).


Dictating while the patient is there is brilliant, because it double-checks both the doctor and the patient's understanding of what happened.


Forms give you validation errors or warnings instantly tho


Huh?

What form would give validation error or warning that a note the Dr is making either raises a new question by the patient or is in conflict with something the patient knows?

(& yes, from what I could see, there were also fields for patient data, date, etc, that are presumably validation-checked)


Yes, I thought so too

Edit: Also, I was generally quite impressed with their notes


I've never seen the same doctor twice in a row.

What I've noticed is that EMR has greatly reduced the amounts of screw-ups or delays caused by not having the right information at hand, or having to repeat tests. Also, since there's now a terminal in every examining room, I can see what amount of effort is required to use the EMR tool (Epic in the case of my provider), and it doesn't seem all that onerous. I can guesstimate the additional amount of time that they spend outside of clinic hours, completing their records for the day, and again, it doesn't seem onerous.

For a few years I had to fill out a lengthy medical history form, every time I visited a clinic, but that's pretty much gone today. My primary care doctor just retired, and her replacement took up the baton without skipping a beat. She can also easily delegate to her physician's assistant or nurse practitioner, so they can all work as a team, with instant access to the same information.

Now I have noticed something interesting. The urgent and primary care clinics all have a terminal in every examining room, and the clinicians perform their examinations while seated at the terminal, except when they actually have to poke around. That's where it seems quite efficient.

In the hospital wards, they still don't have a terminal in each room, meaning that each clinician has to look things up at centralized terminals, remember them (or not), and otherwise has no access to information. If they need some information, they will come back with it the next time they make their rounds, which might be the next day. And they screw up. My dad had an episode that took him through an ER, to a regular hospital bed for a few days, then to a rehab ward. I had all of his records at my fingertips thanks to MyChart on my laptop. The doctors and nurses were lost; they completely overlooked the documented diagnosis that was at the root of his condition, and didn't believe me about it.

Some of the nurses in the hospitals now have a terminal on a wheeled cart, that they bring on their rounds.

What I'm guessing is that in the days of handwritten records, the doctors were mostly winging it.


They’re taking notes with cheap pencils and paper! We should sell them a complex, messy-to-build fleet of machines!

It’s like cutting off a head when stitching a cut on the leg was the problem.


Some healthcare provider organizations now employ medical scribes who follow physicians around and do all their EHR data entry. This is expensive, but can be cost effective because then the physicians have more time to perform billable procedures.


The least capable doctor's time is worth $300/hr. The scribe is paid what, $25/hr?

This is so much like hearing of engineers that will not hire a $20/hr maid due to egalitarian reasons so they squat in filth or waste all their free hours cleaning, all while capable and willing cleaners starve. Insane.


Any competent engineer should at least have a maid, driver, nanny, servant, chef, gardener, pool cleaner, a mistress, dog walker and personal assistant /s


You are overestimating how much the least capable doctor makes (more like 100-150k) and underestimating how much somebody who can type medical information makes (more like $30-$35).


I live below the poverty level. I receive Section 8 funds for housing. The Section 8 inspections and quality standards are so important to me that I hire a cleaning service about 6 times a year to make sure everything in here is spotless, because I'm really not that psychologically or physically capable of cleaning everything, even if I had limitless free time to do it. The maids cost about $130 a visit and they're worth their weight in gold, just so I have peace of mind and a consistently clean place to live. The City thanks me for it, too.


Section 8 inspects your house to make sure you're cleaning your room? that seems... strange


Personal hours are not fungible. You can't replace a bit of cleaning time throughout the week with an additional $600.

Also there are plenty of costs to employing others besides the hourly cost. Large organizations have huge fixed costs to cover them. Consider:

- liability

- maintaining of knowledge and training to hire help

- skill to source and hire people


If so, that's hilarious, because that's precisely one of the jobs all these expensive, painful-to-use computer systems were supposed to replace. You'd take a year or two course at junior college, to learn shorthand and drill some medical terminology so you'd be less likely to make a bunch of simple transcription mistakes, then go to work.


It's doubly hilarious when you realize it's the inverse of what the office productivity software did to everyone.

Remind me again why do I have to manage my own meetings and prepare so many powerpoints and fill in so many forms as a... well, my position doesn't matter, because everyone is doing the same, no matter their role?


There's still a fairly large job market for medical transcriptionists, but that's a different job than being a medical scribe. Transcriptionists don't use shorthand any more, they mostly work from digital voice recordings. And they're typically not transcribing from scratch; now usually a voice recognition system does the first pass and then the human edits it to fix the ~2% errors. Transcriptionists don't usually work directly in EHRs, but their documents are fed into EHRs.


Microsoft recently bought Nuance for this very reason.


Ah, this is great to hear. I've been thinking about this approach for a while. Great to hear it’s a Thing.


Sounds like modern startup devops-without-devops culture


Shift left amirite? Same with DBAs.


Don't need DBAs if you're hiring 10x full stack developers.


Seems an artefact of doctors not being employees.

If their employment status was the same as everyone else’s, there wouldn’t be any effort to replace admin staff with someone getting paid 10x as much unless there was actually a 90% reduction in work (doubtful).


You can reduce 90% of the work - but if the remaining 10% is shifted to someone that gets paid 50x as much, it's still a loss.


Uh, yes, that was their point. You're just saying the same thing with different numbers.


Laughs from academia....


Trick is to be hourly!


That and the staff actually doubled over the same period


As someone who worked in the electronic health records industry, closely with design teams, and has thought more deeply about certain aspects of this problem than anyone else in history (not exaggerating), I think you're missing major factors.

First, yes, productivity was one of the goals of the forced move to electronic health records systems. The federal government passed the HITECH Act in 2009 creating economic incentives for doctors to switch to EHRs because it would be better for public health, Medicare billing, and also because it would supposedly unlock doctors to spend more time with patients and be more productive, via the use of technology.

The reason that third thing has failed, to my mind, is largely because the government, in the same act, required a HUGE list of requirements be met by the software designers making the EHRs. This list, by law, needs to be prioritized in scrums over customer requests and design thinking. Sometimes it makes good UX impossible.

Ironically, the government, hearing this feedback, actually added a new requirement to the list: "Safety-Enhanced Design" [0].

Go read that regulatory requirement and see if it makes any sense to you. That's why design sucks in EHRs.

[0] https://www.healthit.gov/test-method/safety-enhanced-design


What do you see as the way out of this mess?


Hmm. There are many layers of problems in the healthcare system, EHRs with bad design being only one.

I will say that small, private clinics who only accept private insurance don't necessarily need to use a federally certified EHR. They are legally allowed to build their own system. One example is OneMedical. When you ask the doctors/PAs/nurses at OneMedical about their EHR, they actually love it.

I think the first step would be validating my anecdotal experience here by polling doctors using non-certified EHRs to see if they like them more. If they do by an overwhelming amount, I'd first take that evidence to the Office of the National Coordinator (the government agency created by the aforementioned HITECH Act who makes the certification criteria). I'd tell them that if they vastly simplify the criteria and add more flexibility, they'd make doctors happier.

I guess the problem is that happy doctors, unfortunately, isn't necessarily aligned with the existing federal laws. Probably you'd need to pass a new statute at Congress, this time around with the benefit of talking to a lot of people with experience making EHRs.


Do you think this is feasible and could happen within the next decade given moderate effort?

I considered targeting standardization in health tech for a while and uncovered a similar layer of red tape as you describe. We can't let our medical system continue to lag behind the rest of the world's, and good EHRs are a part of that.


> I don’t think productivity was ever the goal of this software. It was to have a record that is standard, digital, transferable, etc.

Going a little further, this was appealing in part to avoid simple medical errors & oversights. Losing the record, mixing up records, incomplete history, and so on. Eliminating medical error is incredibly valuable but doesn't show up as "productivity".


This is amusing, as 10 years ago my wife (a decade-plus under 60, even now) showed up to a consultation with a doctor who remarked that she looked very good for someone over 60 suffering from a series of conditions that she did not have but that showed up in "her" medical records.


My wife is in her 30s but has had a lot of women's health stuff going on over the last decade. We stay completely within the same "healthcare network" of hospitals precisely because they actually use the same system and all doctors can access it (obviously we like the providers as well!). But even for basic procedures we could save a little on, like lab work or imaging, by going out of this network, we've learned it doesn't really work as promised. It's still hard to get your records to stay together unless they're in the same company's database, is what we've learned.


Yes. All the systems are set up this way.

The problem is: how do you allow departments to retain their fiefdoms in a world of centralised data? The answer is to spend a fortune on management consulting.


Or nationalize the documentation infrastructure. This is not a problem in the developed world. I give my doctor my tax ID and they can see my entire patient history, all of my medication, and relevant notes from other providers.


That won't happen in a federation like US or Canada, sadly. The states (provinces) would have to either agree on an infrastructure (hah!) or agree to give that power to the federal government (OMG lol).


I work in EHR as of this year. About a week into the new job, one of the older software engineers made a comment about how what we do should really just be a function of the federal government. I was like, "but wouldn't we be out of a job if they did?" And he said, "sure, but we'll probably be dead by the time they figure it out anyway".


If you're in the US, the federal government actually created one of the most widely used EHRs and sets of clinical applications in the world, called VistA. It was made for the VA but is used outside of it.

https://en.m.wikipedia.org/wiki/VistA

It's actually public domain and open-source. Unfortunately the VA is in an ongoing project to replace it with Cerner Millennium.


A project that's not going well. The VA system might be MUMPS-based, but it's very good, as you say.

The big issue from my EHR-adjacent vantage point is that all the money is in making giant, entrenched EHRs, whereas the future should be in lighter-weight services that conform to standards, and competition is between those services. That way you could gradually slice off VA functionality piece by piece, and have a successful, more efficient, gradual migration.


The UK has the NHS and does not have this in the slightest : - )


We do the same. My wife sees tons of suboptimal healthcare delivery due to doctors lacking the necessary information. In our current area, it is easy to find doctors that use MyChart and interface with the local hospital, so if we were to end up in the hospital, our medical history is immediately available.


The GUI apps have the benefit of being easier for onboarding. We've redesigned the workplace to deal with constant employee turnover.

I guess they also make more sense to management since it looks like something they could do themselves, or at least understand.


You can have both. GUIs were a breakthrough because they enabled much better discoverability, allowed images in the UI and so on. But they were also designed to be fully keyboardable and low latency.

Web tech broke all that:

- UI was/still is very high latency. Keystrokes input whilst the browser is waiting do not buffer, unlike in classical mainframe/terminal designs. They're just lost or worse might randomly interrupt your current transaction.

- HTML has no concept of keyboard shortcuts, accelerator keys, menus, context menus, command lines and other power user features that allow regular users to go fast.

We adopted web tech even for productivity/crud apps, because browsers solved distribution at a time when Microsoft was badly dropping the ball on it. That solved problems for developers and allowed more rapid iteration, but ended up yielding lower productivity than older generations of apps for people who became highly skilled.


Well browsers solved multiple other issues too: cross platform apps, updating all clients in a single place, sharing data between devices, and the most important for many developers - switching software from an ownership to a rental model, killing piracy, and easy access to user metrics and data.

All of these (except logging on to the same data from all my devices, which is nice) benefit the developer at the expense of the user.


> All of these (except logging on to the same data from all my devices, which is nice) benefit the developer at the expense of the user.

Glad you pointed that out. And, in the most prevalent application of Conway's law[0], those changes enabled and are entrenched by the "agile" practices in software development. Incremental work, continuous deployment, endless bugfixing and webapps fit each other like a glove (the latex kind that's used for deep examination of users' behavior).

It also enables data siloes and prevents any app from becoming a commodity - making software one of the strongest supplier-driven markets out there, which is why the frequent dismissal of legitimate complaints, "vote with your feet/wallet", does not work.

----

[0] - https://en.wikipedia.org/wiki/Conway%27s_law


Yes, "updating all clients in one place" is what I meant by distribution. Windows distribution suffered for many years from problems like:

- Very high latency

- No support for online updates

- Impossible to easily administer

Cross platform was much less of a big deal when web apps started to get big. Windows just dominated in that time. Not many people cared about macOS Classic back then and desktop UNIX didn't matter at all. Browsers were nonetheless way easier to deal with than Windows itself.

Agree that killing piracy was a really big part of it. Of course, you can implement core logic and shared databases with non-web apps too, and the web has a semi-equivalent problem in the form of ad blockers.


You missed privacy. The user lost privacy with webapps.


I figured that came under "easy access to user metrics and data", but I did consider some kind of rhyme linking piracy to privacy but it was a little early in the day to commit that sin. It's probably worth mentioning twice anyway.


> HTML has no concept of keyboard shortcuts, accelerator keys, menus, context menus, command lines and other power user features that allow regular users to go fast.

HTML has had a limited concept of accelerator keys for years, but it's not pretty:

https://developer.mozilla.org/en-US/docs/Web/HTML/Global_att...


This is a good observation. Constant employee turnover also reduces worker productivity, as it means most current employees are juniors in their role (regardless of what their title says).


Problem is the GUI could have shortcuts for everything, but usually won’t.

It doesn’t help that the evaluators for a new system will also approach from the perspective of a new user, even though none of them will be a new user in some months.

I’ve so wanted to create auto-hotkeys for many tasks, but end up having to use (x,y) clicks, where I get boned by every design touch-up (deliberate or side-effect of another change).


> never replace a terminal based app, with a GUI based app and expect productivity gains

I can imagine this being true. It seems that almost the whole software industry has failed to grasp the distinction between an appliance and a tool. An appliance you expect almost anyone to be able to use without training. A tool, well you are expected to learn how to use it, and after that, you are much more productive than before. And most software seems to be moving towards appliance.


I like this dichotomy. I'd want to add a third: the product. The product has ego; it needs to look nice, it needs to demo well, it is marketing. This is what the auto industry has become since the Model T, and it's what software has become since it was a tool. The problem is that with software, things like productivity typically take a hit as it moves further from tool to product. More so when the domain is something like EHR or ERP or E-anything.


>This is when I formed my “never replace a terminal based app, with a GUI based app and expect productivity gains” theory.

Not in medicine (I run a small e-commerce business selling mostly used video games), but I've definitely noticed the same thing for us.

We have some terminal-based Python scripts I wrote to automate a lot of the data entry tasks like listing and shipping (entering tracking numbers, printing labels).

Everyone that uses the scripts is initially apprehensive, but then after maybe a day of getting used to the terminal turns into a powerful data entry God and they love it. Even had an employee gush about our shipping tool to a random supplier.


Back in the late '80s the government department I worked at had a dedicated data entry team with their own system (hardware and all).

The greenscreen data entry was highly optimised to not even require tabbing to different fields, but just run the values together in a large single field and the software would split them into fields and validate them.

I assume it was very fast and efficient for the experienced operators.
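
A hedged sketch of that "run the values together" pattern: fixed column widths split one keyed line into fields, which are then validated, so the operator never tabs at all. The field layout below is invented, not the department's actual format.

```ts
// Split a single run-together entry line into fixed-width fields and
// validate them. Field names, widths, and rules are illustrative only.
interface Entry { branch: string; sex: string; dob: string; amount: string }

const LAYOUT: Array<[keyof Entry, number]> = [
  ["branch", 3], ["sex", 1], ["dob", 8], ["amount", 6],
];

function parseEntry(line: string): Entry {
  const entry = {} as Entry;
  let pos = 0;
  for (const [field, width] of LAYOUT) {
    entry[field] = line.slice(pos, pos + width);
    pos += width;
  }
  if (!/^\d{8}$/.test(entry.dob)) throw new Error(`bad date: ${entry.dob}`);
  if (!/^[MF]$/.test(entry.sex)) throw new Error(`bad sex code: ${entry.sex}`);
  if (!/^\d{6}$/.test(entry.amount)) throw new Error(`bad amount: ${entry.amount}`);
  return entry;
}

// parseEntry("047M19751102000150") -> { branch: "047", sex: "M", ... }
```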


"Fun" fact: the Therac-25 tragedy was in part caused by this type of usage - folks who know it so well they just blast through the screens from memory. But the software in question wasn't resilient to this use-case, and apparently resulted in an inconsistent state.


Good example.

---

The system distinguished between errors that halted the machine, requiring a restart, and errors which merely paused the machine (which allowed operators to continue with the same settings using a keypress). However, some errors which endangered the patient merely paused the machine, and the frequent occurrence of minor errors caused operators to become accustomed to habitually unpausing the machine.

One failure occurred when a particular sequence of keystrokes was entered on the VT-100 terminal which controlled the PDP-11 computer: if the operator were to press "X" to (erroneously) select 25 MeV photon mode, then use "cursor up" to edit the input to "E" to (correctly) select 25 MeV Electron mode, then "Enter", all within eight seconds of the first keypress, well within the capability of an experienced user of the machine. These edits weren't noticed as it would take 8 seconds for startup, so it would go with the default setup.[3]

---

... which allowed the electron beam to be set for X-ray mode without the X-ray target being in place. A second fault allowed the electron beam to activate during field-light mode, during which no beam scanner was active or target was in place.

Previous models had hardware interlocks to prevent such faults, but the Therac-25 had removed them, depending instead on software checks for safety.

The high-current electron beam struck the patients with approximately 100 times the intended dose of radiation, and over a narrower area, delivering a potentially lethal dose of beta radiation. The feeling was described by patient Ray Cox as "an intense electric shock", causing him to scream and run out of the treatment room.[4] Several days later, radiation burns appeared, and the patients showed the symptoms of radiation poisoning; in three cases, the injured patients later died as a result of the overdose.[5]

---

In response to incidents like those associated with Therac-25, the IEC 62304 standard was created, which introduces development life cycle standards for medical device software and specific guidance on using software of unknown pedigree.[7]

https://en.wikipedia.org/wiki/Therac-25


This sounds like poor consideration for edge cases - not really a problem with the UI or people clicking through it too fast. Anything that could be interpreted as remotely fatal should've shut the machine down.


The control software should not be physically able to command the hardware to enter an invalid state. You can do that by only exposing the 3 valid modes to the software or only enabling power to the emitter if every piece of hardware is in the correct place when the software request arrives.

You also have a hardware lock on the power - this can be as simple as a hardware timer (an RC circuit suffices) which limits how long the emitter can be on within a given window to be safe.

Never trust the software. If you must trust some software, create a minimal set you CAN trust which isolates the rest of the software from the hardware.

You are correct, the discussion about how to exercise this bug (fast UI, blah blah) is interesting to hear but totally irrelevant to the lesson (don't trust software).
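
For what it's worth, the "only expose the valid modes" point above maps nicely onto making illegal states unrepresentable in the control software's types. A sketch (names, energies, and modes are illustrative, not the Therac-25's actual interface):

```ts
// Model the legal beam configurations as a closed union so an X-ray beam
// without the target in place simply cannot be expressed, let alone commanded.
type BeamMode =
  | { kind: "electron"; energyMeV: 5 | 25 }             // low-current beam, no target needed
  | { kind: "xray"; energyMeV: 25; target: "in-place" } // high current requires the target
  | { kind: "field-light" };                            // alignment light, beam off

function commandBeam(mode: BeamMode): void {
  // The hardware command is derived from an already-valid mode; there is no
  // separate API for setting beam current and target position independently.
  console.log("commanding:", JSON.stringify(mode));
}

commandBeam({ kind: "xray", energyMeV: 25, target: "in-place" }); // ok
// commandBeam({ kind: "xray", energyMeV: 25 });                  // type error: target missing
```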


They basically did no testing at all on that machine, and reused the previous software which relied on hardware safety interlocks which had been removed from the newer model. It's literally a textbook case of how not to do mission-critical software.


It sounds like we should consider and test the possibility that users of the software will become extremely familiar and want to use it much more quickly than we anticipate.


At Uni as a summer job I worked processing Corporate Actions for a large custodial bank. We used exactly the same kind of system where every action was 4 characters. I can still remember some of them despite it being 10 years since I did that job. Even more importantly, the screens were trivially scriptable so lots of the grunt work could be handled by writing export scripts, pulling a bunch of data into excel, processing it and occasionally posting the results back the same way.

Absolutely no way a modern system could be half as efficient, short of completely automating the whole job (which involved a lot of communication with other parties and basically freeform restrictions).


You could provide a terminal UI from within the GUI (it's not unheard of and can work quite well)
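
One hedged sketch of that in a browser GUI: a single command box that accepts short codes like the old terminal screens and jumps straight to the matching form (the codes and element ids are invented):

```ts
// A command box inside a GUI: the operator types a short code and Enter,
// and focus jumps to the matching screen/form.
const screens: Record<string, string> = {
  adm: "#admit-form",
  rxo: "#prescription-form",
  lab: "#lab-order-form",
};

const cmd = document.querySelector<HTMLInputElement>("#command");
if (cmd) {
  cmd.addEventListener("keydown", (e) => {
    if (e.key !== "Enter") return;
    cmd.setCustomValidity("");                        // clear any previous error
    const code = cmd.value.trim().toLowerCase();
    const target = screens[code];
    if (target) {
      document.querySelector<HTMLElement>(target)?.focus();
      cmd.value = "";
    } else {
      cmd.setCustomValidity(`unknown code: ${code}`); // surface typos inline
      cmd.reportValidity();
    }
  });
}
```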


Alas, most modern software doesn't come with the option of a GUI. It's an HTML document pressed into service as a GUI with a greater or lesser degree of success.


I see. Well... piss.


Lol!


This memorization and strict adherence to past ways of doing things killed me as a developer. I was tasked with maintaining and customizing an Enterprise Resource Planning system. It was a terminal-based app. Sometimes system upgrades would add a field to a screen. For example, perhaps country code was added, split from the phone number in the customer screen. I would frequently be requested to suppress the upgrade, or move the new fields so they didn't 'ruin' people's memorized routines.

As the company customized more and more of the code base, upgrades to the system became more and more difficult. Every upgrade required a manual comparison of custom code to be merged with the baseline code. This led to skipped upgrades, and eventually a cessation of upgrades. Of course, after no upgrades over a long period of time it was eventually decided to move to a new ERP system.

I am still appalled at the things sacrificed to prevent disruption of a small group of people's workflow.


But CLI apps should be extensible in the same way that .bashrc is.

If it can't be upgraded, it's not because it's a terminal app, it's because it's poorly designed.


> It was to have a record that is standard, digital, transferable, etc.

Considering how often I have to fill out the same goddamn forms (sometimes literally down the hall in the same building as another doctor), I think that goal failed miserably.


No practice or healthcare system accepts anyone else's records. There is supposedly a way for the patient to release records and have them sent around, but none of the doctors I've asked will accept that sort of nonsense. It's "NIH" for healthcare - if they didn't generate the record, they don't want it.

While in the hospital, the phlebotomists came after me for routine lab work at hospital prices. I declined and then I had a conversation with the nurse about releasing my labs from June to them. On script, she said "Mmm, 4 months is kinda old! We'd rather do our own!" so I filled out the ROI anyway, and curiously nobody offered to draw my blood again.


> I don’t think productivity was ever the goal of this software.

Thing to remember: finance/economists/rentiers have a different definition of efficiency and productivity than you do. In this case the productivity has to do with billing, not the uninteresting things that doctors do. By reducing the cost of billing and forcing doctors to document more things to be billed, more money can be extracted.


>I don’t think productivity was ever the goal of this software.

I'm not entirely sure about this. During the early digitization era productivity was a big driver. Modern word processors are a godsend if you've ever tried to typewrite a document for publication (forget anything with complex formulae) or dealt with actual physical spreadsheets. Office itself, and its now many clones, is a fantastic set of tools for productivity.

My opinion is that we've created a world in technology now that drives technology for the sake of tech and financial drivers. I regularly deal with people who think it would be a great idea to build a system to automate some aspect of a business that's already well optimized, or to generalize something they think is general but is really quite niche.

There are certainly cases where it makes sense to develop a system around something, but you need to consider the full cost/benefit tradeoffs, not just the benefits, which is what industry tends to do.


> [...] they'd type in some random 3-letter code and a screen would pop up. Then they would memorize how many tabs apart each field was. Without a mouse, people could just hum along inputting data at blazing speed once some muscle memory was in place. Everyone had little cheat sheets printed out for the less frequently used commands/codes.

This comports with my most recent experience using SAP in 2018. I know, I know, SAP has GUIs and such now. This well known and profitable corporation under the Blackstone umbrella, though? Nope. It was exactly as you describe.

Those who had the time-in-service or the mentality to accept it excelled at their job, but uniformly skewed older (late 40s and up) or younger (under 25). At the time, almost everyone aged in between was entirely befuddled by it all.

Context: supply chain, procurement, purchasing, logistics, maintenance, work orders, inventory


> It was to have a record that is standard, digital, transferable, etc.

Which translates into productivity. If something is standard, digital and transferable it means you can increase the rate of output in relation to its input (which is the definition of productivity).


Right, but it's the records that are standard, digital and transferrable; not the work. So what you end up optimizing for is producing paperwork.


huh? if the records are "standard, digital and transferrable", it means all of the work associated with those records is sped up.

- Need to retrieve past doctor visits about a patient? person at front desk no longer needs to walk to the folder closet, then scan the whole thing to find your name and then read through all of the documents to find the relevant visits. just click a button.

- How about getting the prescriptions provided to you from a previous doctor? Reduction in time to phone / fax the previous doctor. just click a button.

- Want to check if your insurance covers your procedure? Receptionist calls the carrier, sits on a 6.5 minute customer service wait queue, then gets the info versus 1-click.

- and, and...

It was always about productivity.


The problem is that you've optimized time savings for the cheapest people a hospital employs at the cost of time spent by the most expensive people it employs, eliminating a handful of cheap jobs while making the expensive jobs both less efficient and less pleasant.


It is more productive if the person just knows if the procedure is covered, because the insurance companies have stable standards and trust the medical providers, rather than having it all be JIT decisions based on rules that either constantly shift or are so vague/low trust as to be "you, the medical person, can't decide yourself if this procedure is covered; you have to call us."

And back in the paper days, the staff would pull up the records for the days appointments. ER visits would have less data but normal medical care would be fine.


> if the person just knows if the procedure is covered, because the insurance companies have stable standards and trust the medical providers, rather than having it all be JIT decisions based on rules that either constantly shift or are so vague/low trust

None of those are related to use or lack of use of technology. Those are purely bureaucratic rules set up by insurance carriers.

> And back in the paper days, the staff would pull up the records for the days appointments.

And sometimes those papers would get lost, or maybe they're still sitting in the folder on a door because someone forgot to clean them up, or they were in the wrong order so it took longer to find the person's name, appointments would shift, etc. etc.

I can't believe I'm having to explain the productivity advantages of a system of record to a technology-focused crowd...


> None of those are related to use or lack of use of technology. Those are purely bureaucratic rules set up by insurance carriers.

They are very much related to use of technology, because they are enabled by technology. The degree of bullshit paperwork every white-collar worker has to deal with nowadays is a direct consequence of computers making it possible to make us do that work, and for the recipients to process it.

The benefits of those processes are whatever they are designed to be, but this creates a false image of net productivity, because the costs are now hidden, smeared across everyone's workload, adding to a vague sense of dissatisfaction and low productivity. In contrast, if you tried the same processes few decades ago, it would mean hiring dedicated people on both ends, and the costs - as measured by their salaries - would be clearly visible.

> I can't believe I'm having to explain to someone the productivity advantages of a system of record to a technology focused crowd...

You don't have to. But you're missing the disadvantages of the situation when maintenance of that system of records becomes a job distributed across everyone. It's not the digital recording per se that's the problem, but the fact that everyone is now also their own secretary.


> The degree of bullshit paperwork every white-collar worker has to deal with nowadays is a direct consequence of computers making it possible to make us do that work, and for the recipients to process it.

These are bold claims backed up by little to no data other than your anecdotal observations. Productivity has generally been on a steady upward trend in the US since it was first measured in 1947. My own professional service business, which does require a decent amount of "bullshit paperwork", would not have been possible at the scale it achieved without technology.

> In contrast, if you tried the same processes few decades ago, it would mean hiring dedicated people on both ends, and the costs - as measured by their salaries - would be clearly visible.

Ever seen Mad Men? There was literally a full floor full of human beings typing out bullshit letters on typewriters because computers didn't exist in the era.


> Ever seen Mad Men? There was literally a full floor full of human beings typing out bullshit letters on typewriters because computers didn't exist in the era.

That's my point: those people received salaries for typing out those letters, making the cost of it clearly visible to the business.


So your point is "we visibly saw the cost before, but no longer see the cost now with tech. Therefore we can conclude that the invisible cost now outweighs the visible cost...because ummmm it's no longer visible?"

Explain that one to me.


> and trust the medical providers

$68 Billion in medical fraud in the US

> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6139931/

Part of the opioid crisis caused by basically bribing doctors

Yes, I’m well aware that when drug abuse was happening in the “inner cities” (where the government looked the other way because it was more concerned with propping up countries during the Cold War), the same people who now want to treat drug addiction like a “disease” when it’s happening in “rural America” blamed “single mothers” and a “lack of morals”.


That will only be a speed-up if the time saved from easier information retrieval is larger than the time spent on increased paperwork, which it may or may not be, but that is an assertion that needs justification.

In general, I'll note producing documentation is fairly slow and tedious. It takes something like an order of magnitude longer to write a sentence than to read it. So this optimization is only going to be a productivity boost if this paperwork is accessed repeatedly, dozens of times in the course of treatment (the productive thing).


> easier information retrieval is larger than the time spent on increased paperwork

What paperwork creation increased as a result of digital record use?

I'm beginning to think y'all are conflating the increase in documentation with the use of digitalization. The two aren't the same thing.


> I'm beginning to think y'all are conflating the increase in documentation with the use of digitalization. The two aren't the same thing.

I think what we're talking about is a clear (if atypical) example of the Jevons paradox[0]. Digitalization makes creating and processing paperwork more efficient, allowing the overall organization as a system to support/afford more of it. As a result, the amount of documentation and form-filling increases.

----

[0] - https://en.wikipedia.org/wiki/Jevons_paradox
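
A back-of-the-envelope illustration of that effect, with entirely made-up numbers: even when digitization cuts the time per form sharply, total documentation time per visit can still go up once cheaper forms invite more of them.

    # Jevons-style toy example; every number below is invented for illustration
    minutes_per_form_paper, forms_per_visit_paper = 10, 3
    minutes_per_form_ehr, forms_per_visit_ehr = 3, 15

    print("paper:", minutes_per_form_paper * forms_per_visit_paper, "min/visit")  # 30
    print("EHR:  ", minutes_per_form_ehr * forms_per_visit_ehr, "min/visit")      # 45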


Hilariously, that Wikipedia page mentions nothing about productivity. In fact, an article that talks about the Jevons paradox says this:

According to the Ford Motor Company, its fuel economy ranged between thirteen and twenty-one miles per gallon. There are vehicles on the road today that do worse than that; have we really made so little progress in more than a hundred years? But focussing on miles per gallon is the wrong way to assess the environmental impact of cars. Far more revealing is to consider the productivity of driving. Today, in contrast to the early nineteen-hundreds, any American with a license can cheaply travel almost anywhere, in almost any weather, in extraordinary comfort; can drive for thousands of miles with no maintenance other than refuelling; can easily find gas, food, lodging, and just about anything else within a short distance of almost any road; and can order and eat meals without undoing a seat belt or turning off the ceiling-mounted DVD player.

A modern driver, in other words, gets vastly more benefit from a gallon of gasoline—makes far more economical use of fuel—than any Model T owner ever did. Yet motorists’ energy consumption has grown by mind-boggling amounts, and, as the productivity of driving has increased and the cost of getting around has fallen, the global market for cars has surged. (Two of the biggest road-building efforts in the history of the world are currently under way in India and China.) And developing small, inexpensive vehicles that get a hundred miles to the gallon would only exacerbate that trend. The problem with efficiency gains is that we inevitably reinvest them in additional consumption.[0]

In other words, you're too narrowly focusing on "miles per gallon", not the fact that "any American with a license can cheaply travel almost anywhere, in almost any weather, in extraordinary comfort; can drive for thousands of miles with no maintenance other than refuelling; can easily find gas, food, lodging, and just about anything else within a short distance of almost any road; and can order and eat meals without undoing a seat belt or turning off the ceiling-mounted DVD player."

For example, in the show Mad Men, you'd see a whole floor full of human beings whose sole job was to type out letters on typewriters. Now those jobs are obsolete and the white-collar worker is responsible for them. Still bullshit paperwork, but the white-collar worker now has to do it.

Productivity has been on an overall upward trend since it was first tracked in 1947, with its sharpest first-half downward trend ever coming this past year. Are you seriously suggesting white-collar workers magically got more bullshit paperwork just these past 2 quarters?

[0] - https://www.newyorker.com/magazine/2010/12/20/the-efficiency...


That also never happened, did it?

Is there a "standard" medical record, or does each system implement its own proprietary format? Are the records transferrable? If so, why am I asked to fill out a complete medical history form on paper every time I visit the a doctor, as if I'm a new patient, when all the doctors I see are in the same network and presumably use the same EHR system.


You are optimizing for the downstream consumers of the records, not _necessarily_ care, which is what you probably _want_ to optimize.


I used to work in Healthcare software (Not Epic).

Productivity is indeed a selling point.

I will also tell you that EHR software is universally hated by doctors. Does not matter who makes it. The company that cracks that will make billions.

One interesting idea was a voice assistant wired up to take inputs as doctors did their work. I don't think it went anywhere (yet).


I work in it too. And the US govt is not approving or even looking to approve new EHRs. The bureaucratic hurdles (and regulatory capture) are such that it is no longer feasible in this country. I would write one in a heartbeat if it wasn't a doomed venture.


There are lots of new EHRs being approved. Just go look on CHPL and you can see that there are 100 new EHRs that have made it through the 2015 Cures Update certification. It's hard and expensive, but it can be done; I can attest to that, having gotten our product certified just this year.


> I don’t think productivity was ever the goal of this software.

Well, EHR is a glorified billing platform.


We can use an all-HTML, JavaScript-free interface that people can still memorize and quickly tab through.
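
As a sketch of what that could look like (a plain-HTML form served by the Python stdlib, no script anywhere; the field names, port, and accesskeys are made up): tabindex pins the tab order and accesskey gives each field a keyboard shortcut, which is roughly the "count your tabs" workflow the old terminal screens supported.

    # a JavaScript-free, keyboard-first form served with only the Python stdlib
    # (hypothetical fields; tabindex/accesskey are standard HTML attributes)
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<!doctype html>
    <form method="post" action="/patient">
      <label>Name <input name="name" tabindex="1" accesskey="n"></label>
      <label>Phone <input name="phone" tabindex="2" accesskey="p"></label>
      <label>Country code <input name="cc" tabindex="3" accesskey="c"></label>
      <button tabindex="4" accesskey="s">Save</button>
    </form>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # every request gets the same static, script-free form
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()

Browsers differ on which modifier they bind accesskey to (Alt, Alt+Shift, Ctrl+Alt), but the shortcuts are discoverable and nothing here depends on a mouse.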


But that's not what anyone was selling at the time. I'm sure complexity has only increased since then.

It was pre-AJAX and pre-"JavaScript being useful"; I think it was even pre-Firefox and was IE6-only. So it was loading Java applets and stuff just to get some basic functionality.


Another reason is to satisfy insurers' increasing demands for documentation to back up billing.


Mouse moves are crack for ML algorithms if the interface is maintained somehow.


Was your hospital by any chance using Meditech as the terminal based application?


Sounds like Bloomberg.



