AI Nursing Ethics: Viability of Robots and Artificial Intelligence in Nursing (tus.ac.jp)
29 points by rustoo on July 16, 2023 | 16 comments



There's a question with these kinds of things that is often difficult to answer, because it isn't as straightforward as it seems. Using AI workers looks like a big benefit in some areas because there is huge demand for workers and not enough of them, so even a very low quality worker (i.e. AI) appears beneficial. But once AI enters a job, it's entirely possible that this discourages future people from pursuing it, meaning actual people get replaced through an entirely implicit process, not because the machines are better, which lowers the quality of a market that was already suffering. But we don't know if this is going to happen or not, and it can be quite hard to predict. So do you fill the gap with anything, even something subpar, to benefit people now, or do you hold off because doing so could prevent a lot more harm down the line? And that's assuming all good intentions. Obviously the situation can be worse when people are willing to trade a lot of quality for cost savings (which I think we've all seen to be quite common practice).

I really do think a lot of these ethics questions are much harder than public discourse (or even savvier crowds like HN) gives them credit for. I don't think a strong stance can be taken unless you study these specific areas in depth, and even then there's a hell of a lot of noise. One of the best things we can do is at least acknowledge the level of noise in our opinions/stances. It would be quite naive to take a hard stance on a solution method when such uncertainty exists in our models.


> But once AI enters a job, it's entirely possible that this discourages future people from pursuing it, meaning actual people get replaced through an entirely implicit process, not because the machines are better, which lowers the quality of a market that was already suffering.

The opposite is also possible - AI entering a field would produce many new capabilities that require humans to be involved in their development, deployment, or supervision, or to work in supporting roles.

AI is probably the field that will generate the most demand induction. See "lump of labour" and "Jevons paradox"; both relate to this question of demand growth vs. automation growth.

- demand depends on supply

https://en.wikipedia.org/wiki/Induced_demand

- work depends on demand

https://en.wikipedia.org/wiki/Lump_of_labour_fallacy

- increased supply leads to increased demand

https://en.wikipedia.org/wiki/Jevons_paradox


There are also motivational aspects to it: do you want to supervise a bunch of AIs? Do you want a bunch of "AI" systems supervising you?

As much as we love AI, and as much as the tech overlords are shoving it into our lives whether we like it or not, I'm not sure everyone wants to be living in an AI world by default.


"I'm not sure everyone wants to be living in an AI world by default."

I sure don't.

The problem is that AI will be integrated into society far before people realize they don't really want it, and by then it will be too late to turn back...


>>But once AI enters a job, it's entirely possible that this discourages future people from pursuing it, meaning actual people get replaced... which lowers the quality of a market that was already suffering.

>The opposite is also possible - AI entering a field would produce many new capabilities that require humans to be involved in their development, deployment, or supervision, or to work in supporting roles.

A lot of humans entering a slightly different field may lower total unemployment, but the parent was worried about quality, not unemployment per se.


> The opposite is also possible

Of course! That's what makes this hard. I apologize, I assumed we were already working from this prior, but I think that was a bad assumption. It's just hard to tell whether things are cotton gins or power looms.


In the case of Japan, this focus on a robotics and AI workforce comes at the expense of working on the problem of proper support and integration for nurses from overseas.


My sister is a translator, a field where AI has already had a huge impact. She can still get high-end translation work (she has 25 years of experience, a Master's in translation, and is an extremely good writer in English) - some people do need really good translations. However, it would be a daft field for anybody new to go into, which makes me wonder who will do that type of work in the future. Of course, there is a possibility that AI translations keep improving until human translation isn't needed at all, but AI translation isn't that great at the moment, especially in less widely spoken languages.


It seems that with every study investigating whether AI and robots can replace humans in a specific task, the answer is always "Maybe, eventually..."


Isn't that what you'd expect? After all:

- the set of things which AI robots won't ever be able to do is tiny or nonexistent, and

- the stuff that we already know AI can do is in the realm of startups and VC, not academic papers


If we're going to have an evidence-based discussion:

The answer is always measurable for a given task.

For some tasks the answer is measurably, demonstrably: yes, today.


> The answer is always measurable for a given task

How do you measure a given task? (I think this is a much harder question than you let on.)


I’d rather have an AI doctor than an AI nurse


I agree. Pathologists and radiologists are especially vulnerable to being replaced by AI, since they essentially use algorithms for diagnosis already and don't usually interact directly with patients: if the test result or the image shows this, the diagnosis is that.

Nurses, PAs, and doctors who interact directly with patients will likely remain human for the foreseeable future, I think. I know I'd rather receive a potentially devastating diagnosis from a human being who is able to show empathy than from a cold machine.


I wonder about the possibilities for AI to provide better bedside manner and to support difficult conversations for clinicians. Obviously there are risks with increased reliance on automation, but it seems like there are also possibilities for improving communication and reducing emotional labor while still letting humans drive the knowledge-work part of medicine.


I always assumed it would be the opposite: knowledge work would be done by AI with close to no human input, with doctors becoming pretty much psychologists with machine-operator training.

Emotional labor is a lot less taxing when you're not understaffed and overworked and you actually get to sleep. Especially if you have access to better tech that actually lets you help more people, so you don't have to watch as much suffering.



