
> I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT

if I found out my lawyer/accountant/tax advisor/broker had fed my privileged information into ChatGPT, I'd be reporting them to their regulator and they'd be struck off



I work in healthcare but have to do a lot of regulatory things and ChatGPT is great for brainstorming arguments and methods and particularly organizing responses. It's like having a mini debate about what things should be discussed and addressed. Often it's been "oh!!! I never thought of that angle!" Basically, it gives me first, very rough drafts to get started and into flow when I'm overthinking things. It's never more detailed than things I would be googling. It's sort of like having an inner monologue that's connected to all the knowledge of the internet. Yes, you often get completely insane, factually wrong responses... but I'm not looking for perfection. It just accelerates the early phases of research and thinking and finding flaws and oversights/blindspots in my ideas.


You can use ChatGPT without feeding a client's personal data to it. You can ask hypothetical questions about a certain situation. As in coding, you don't have to paste in source code to get useful feedback on how to proceed.


This requires care, as it may take only a few details, or other metadata like when/where one asked, to narrow down who they're talking about. Considering ChatGPT leaks and hallucinations, I would hope folks in law and medicine are being very careful.


That seems overly risk-averse. Legal privilege does not require such extreme vigilance in opsec. One does not breach privilege by googling keywords relevant to a case and then the client's address, for example.


the firm's compliance officers will be demanding that the services are completely blocked and that any use whatsoever is a disciplinary offence

(in their view: removing the risk almost entirely)


In my firm, it is the opposite. We are advised to remove any personal data from our queries and to double-check the output. We are encouraged to use ChatGPT/Bing on a daily basis to learn it. Our firm is also planning to purchase an LLM that will run locally.


in my (very large) Western firm: it's exactly what I said


The market will determine the winner: a few high profile cases where AI enhanced arguments dominate people who are tying their hands behind their back will turn the tide.


we don't live in pure market economies

legislation, not the market will determine the outcome of the artificial "intelligence" battle

and large traditional industries are rather good at lobbying against their upstart complements

(the fact the RIAA and MPAA even still exist is proof of this point)


I mean, LexisNexis just made their own LLM and trained it with their database. This isn't even about OpenAI anymore. LexisNexis is well known to legal firms, I'd wager, and they have well-established guidelines for what client information can be processed.

https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page


I work at big law. Law firms plan to purchase specialized closed LLMs. The OpenAI-backed Harvey aims at exactly this.


I know an associate using it to generate VBA scripts to automate document updates, searches, and email generation. ChatGPT never learns anything about the matter, but it's really good at writing VBA fast.


Do you see the writing on the wall?


Big AI firms will offer their own legal (and more) LLM services?


No, big law firms will eventually cease to exist once OpenAI's equivalent of TurboTax for law lets everyone get self-service legal advice.


You can license the model and run it on your own hardware. I guess a lot of LLM businesses see this as the profitable end goal (the free version is free but not secure; the paid version is expensive and secure!).


I know sellers' realtors are using it today, with their clients' permission. (And presumably some are doing it without permission.)


I read a post from someone who got as far as showing up to view an apartment by reservation, only to find there was no reservation or available apartment. "Oh, that's our chatbot!"



