
This has existed for a long time, it's called "RPA" or Robotic Process Automation. The biggest incumbent in this space is UiPath, but there are a host of startups and large companies alike that are tackling it.

Most of the things that RPA is used for can be easily scripted, e.g. download a form from one website, open up Adobe. There are a lot of startups that are trying to build agentic versions of RPA, I'm glad to see Anthropic is investing in it now too.




RPA has been a huge pain to work with.

It's almost always a framework around existing tools like Selenium that you constantly have to fight against to get good results from. I was always left with the feeling that I could build something better myself just handrolling the scripts rather than using their frameworks.

Getting Claude integrated into the space is going to be a game changer.


Most RPA work is in dealing with errors and exceptions, not the "happy path". I don't see how Claude's Screen Agent is going to work out there - what do you do when an error pops up and you need to implement specific business logic for how to respond? What about consistency over many executions, and across enterprise accounts? You want a centralized way to control agent behavior. Scripting-based RPA is also much faster and cheaper to run, and more consistent.

Maybe Anthropic should focus on building a flexible RPA primitive we could use to make RPA workflows with, like for example extracting values from components that need scrolling, selecting values from long drop-down menus, or handling error messages under form fields.


I agree with your post.

    > Most RPA work is in dealing with errors and exceptions, not the "happy path".
Isn't this most programming? I always chuckle when a junior hire looks at my code and says: "It is mostly error checking."


100% this. I am using the open source Ui.vision to automate some business tasks. It works well, but only 10% of the work is for automating the main workflow; 90% of the work goes into error and edge case handling (e.g. internet down, the website to scrape data from down, some input data having typos or the wrong date format, etc.).

A human can work around all these error cases once she encounters them. Current RPA tools like UiPath or Ui.vision need explicit programming for every potential situation. And I see no indication that Claude is doing any better than this.
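In practice, that explicit programming for every potential situation tends to look something like this: a minimal Python sketch (all names hypothetical, not from any real RPA tool) that wraps each workflow step with retries for transient failures and an explicit fallback such as alerting a human:

```python
import time

class StepFailed(Exception):
    """Raised when a workflow step exhausts its retries with no fallback."""

def run_step(step, retries=3, backoff_s=1.0, fallback=None):
    """Run one automation step, retrying transient failures
    (internet down, page not yet loaded, ...) with exponential backoff.
    On final failure, hand off to a fallback, e.g. paging a human."""
    for attempt in range(retries):
        try:
            return step()
        except Exception as exc:
            if attempt == retries - 1:
                if fallback is not None:
                    return fallback()
                raise StepFailed(f"{getattr(step, '__name__', 'step')}: {exc}")
            time.sleep(backoff_s * 2 ** attempt)
```

Multiply that by every step in the workflow and every failure mode you've seen so far, and you get the 90/10 split described above.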

For starters, for visual automation to work reliably the OCR quality needs to improve further and be 100% reliable. Even in that very basic "AI" area, Claude, ChatGPT, Gemini are good, but not good enough yet.


I can see it now, Claude generating expect scripts. 1994 and 2024 will be fully joined.


The big thing I expect at the next level is in using Claude to first generate UI-based automation based on an end user's instructions, then automatically defining a suite of end-to-end tests, confirming with the user "is this how it should work?", and then finally using this suite to reimplement the flow from first principles.

I know we're still a bit far from there, but I don't see a particular hurdle that strikes me as requiring novel research.


But does it do any better at eliciting the surprise requirements from the user, who, after confirming that everything works, reports a production bug two months later because the software isn't correctly performing the different requirements that apply on the first Tuesday of each quarter, which you never knew about?


I once had an executive ask to start an incident because he was showing a client the app and a feature that he wanted that had never been spec’d didn’t exist.


So basically, Tog's Paradox in action?


I was going to comment about this. Worked at a place that had a “Robotics Department”, wow I thought. Only to find out it was automating arcane software.

UI is now much more accessible as API. I hope we don’t start seeing captcha like behaviour in desktop or web software.


Wow, that's a grim potential future. I can already see software producers saying that e.g. the default license only allows operation of our CAD designer software by a human operator. If you want to make your bot use it in an automated way, you must buy the bot license, which costs 10x more.


Exactly. I have been wondering for a while how GenAI might upend RPA providers. I guess this might be the answer.


I've been wondering the same and started exploring building a startup around this idea. My analysis led me to the conclusion that if AI gets even just 2 orders of magnitude better over the next two years, this will be "easy" and considered table stakes, like connecting to the internet, syncing with the cloud, or using printer drivers.

I don't think there will be a very big place for standalone next-gen RPA pure plays. It makes sense that companies that are trying to deliver value would implement capabilities like this themselves. Over time, I expect some conventions/specs will emerge. Either Apple/Google or Anthropic/OpenAI are likely to come up with an implementation that everyone aligns on.

In other words, I agree.


> if AI gets even just 2 orders of magnitude better over the next two years

You realize this means '100 times better', right?


Yes, thanks for pointing out the assumption here. I'm not sure how to quantify AI improvements and tbh I'm not really up to speed on the quantifiable rate of improvement from 4 to 4o to o1.

100 times better seems to me in line with the bet that's justifying $250B per annum in capex (just among hyperscalers), but I'm curious how you might project a few years out.

Having said that, my use of "100x better" here means 100x more effective at navigating use cases not in the training set, for example, as opposed to doing things that are 100x more awesome or doing them 100x more efficiently (though cost, context window, and tokens per unit of electricity all seem to continue to improve quickly).


I would think that such an increase in AI capability would basically be AGI...

Just to give a few comparisons, the following things are two orders of magnitude apart:

1. The force felt by a mosquito landing on your arm and getting punched by Mike Tyson in his prime

2. A firecracker exploding and a stick of dynamite exploding

3. The heat from a candle and the heat from a blowtorch

4. The sound from a whisper and the sound from a jet engine


UiPath hasn't been able to figure out how to make a profitable business since 2005, and we are nearing the end of this hype cycle. I am not so sure this will lead anywhere. I am a former investor in UiPath.


Attempts at commercialization in technology seem to often happen twice. First we get the much-hyped failure, and only later we get the actual thing that was promised.

So many examples come to mind… RealVideo -> YouTube, Myspace -> Facebook, Laserdisc -> DVD, MP3 players -> iPod…

UiPath may end up being the burned pancake, but the underlying problem they’re trying to address is immensely lucrative and possibly solvable (hey if we got the Turing test solved so quickly, I’m willing to believe anything is possible).


I love the “burned pancake” euphemism. Totally going to borrow this.


It didn’t help that UiPath forced a subscription model and “cloud orchestrator” on all users, many of whom needed neither. They got greedy. We ditched it.


My impression is that actually solving this classic RPA problem with AI is exactly the raison d'être of AI21 Labs with their task-specific models[1]. They don't have the biggest or best general-purpose LLM, but they have an excellent model that's been pre-trained on specific types of business data and also made available to developers via simple APIs and "RPA-style" interfaces.

[1] https://www.ai21.com/use-cases


Honestly, this is going to be huge for healthcare. There's an incredible amount of waste due to incumbent tech making interoperability difficult.


Hopefully.

I’ve implemented quite a few RPA apps, and the struggle is the request/response turnaround time for realtime transactions. For batch data extraction or input, RPA is great since there’s no expectation about process duration. However, when a client requests data in realtime that can only be retrieved from an app using RPA, the response time is abysmal. Just picture it: start the app; log into the app if it requires authentication (and hope that the MFA is email based rather than token based, then access the mailbox using an in-place configuration with MS Graph/Google Workspace/etc.); navigate to the app’s view that has the data, or worse, bring up a search interface since the exact data isn’t known, and try to find the requested data. So brittle...
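To make the turnaround problem concrete, here's a back-of-the-envelope latency budget in Python. Every duration is an illustrative guess for a legacy desktop app, not a measurement:

```python
# Hypothetical worst-case timings for one realtime RPA request
# against a legacy app; all numbers are illustrative guesses.
steps_s = {
    "launch_app": 10,            # cold-start the legacy application
    "authenticate": 5,           # enter credentials
    "wait_for_mfa_email": 30,    # poll the mailbox via MS Graph etc.
    "navigate_to_view": 8,       # click through to the right screen
    "search_for_record": 12,     # exact record isn't known up front
    "scrape_and_return": 3,      # extract and serialize the data
}
total_s = sum(steps_s.values())
print(f"worst-case response: ~{total_s}s vs a ~2s interactive budget")
```

Even if each guess is off by half, the total is an order of magnitude over what a user waiting on a realtime response will tolerate, which is why batch is the comfortable case.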


It is.

CTO of healthcare org here.

I just put a hold on a new RPA project to keep an eye on this and see how it develops.

According to their docs, Anthropic will sign a BAA.


Out of curiosity, how are high-risk liability environments like yours coming to terms with the non-deterministic nature of models like these? E.g. the non-zero chance that it might click a button it *really* shouldn't, as demonstrated in the failure demo.


Technical director at another company here: We have humans double-check everything, because we're required by law to. We use automation to make response times faster, or to do the bulk of the work and then just have humans double-check the AI. To do otherwise would be classed as "a software medical device", which needs documentation out the wazoo, and for good reason. I'm not sure you could even have a medical device where most of your design doc is "well I just hope it does the right thing, I guess?".

Sometimes, the AI is more accurate or safer than humans, but it still reads better to say "we always have humans in the loop". In those cases, we reap the benefits of both: Use the AI for safety, but still have a human fallback.


I'm curious, what does your human verification process look like? Does it involve a separate interface or a generated report of some kind? I'm currently working on a tool for personal use that records actions and triggers them at a later stage when a specified event occurs. For verification, I generate a CSV report after the process is complete and back it up with screen recordings.
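A minimal sketch of that kind of CSV audit report, using only the Python standard library (the `ActionLog` class and field names are hypothetical, not from any real tool):

```python
import csv, io
from datetime import datetime, timezone

class ActionLog:
    """Record each automated action, then dump a CSV for human review."""
    FIELDS = ["timestamp", "action", "target", "outcome"]

    def __init__(self):
        self.rows = []

    def record(self, action, target, outcome):
        self.rows.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "target": target,
            "outcome": outcome,
        })

    def to_csv(self):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=self.FIELDS)
        writer.writeheader()
        writer.writerows(self.rows)
        return buf.getvalue()
```

Pairing each row's timestamp with the screen recording's timeline is what makes the review tractable: a reviewer can jump straight to the frames around any suspicious action.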


It's a separate interface where the output of the LLM is rated for safety, and anything unsafe opens a ticket to be acted upon by the medical professionals.


I don't know yet. We may not do it.

We haven't deployed a model like this, it's new.

I've done a ton of various RPAs over the years, using all the normal techniques, and they're always brittle and sensitive to minor updates.

For this, I'm taking a "wait and see" approach. I want to see and test how well it performs in the real world before I deploy it, and wait for it to come out of beta so Anthropic will sign a BAA.

The demo is impressive enough that I want to give the tech a chance to mature before my team and I invest a ton of time into a more traditional RPA.

At a minimum, if we do end up using it, we'll have solid guard rails in place - it'll run on an isolated VM, all of its user access will be restricted to "read only" for external systems, and any content that comes from it will go through review by our nurses.


AWS Bedrock deployed models, which include Anthropic Claude models, claim HIPAA compliance eligibility.


What is a BAA?


https://www.techtarget.com/healthtechsecurity/feature/What-I... - a Business Associate Agreement, a contract that lets a business associate handle HIPAA-protected data.


Healthcare has the extra complication of HIPAA / equivalent local laws, and institutions being extremely unwilling to process patient data on devices they don't directly control.

I don't think this is going to work in that industry until local models get good enough to do it, and small enough to be affordable to hospitals.


Hospitals use O365, there are HIPAA-compliant editions of any prominent cloud service.


That industry only thinks it controls its devices. Crowdstrike showed there are many bridges over that moat.


Their concern is compliance, not security.


Based on Tog's paradox (https://news.ycombinator.com/item?id=41913437) the moment this becomes easy, it will become hard again with extra regulation and oversight and documentation etc.

Similarly I expect that once processing/searching laws/legal records becomes easy through LLMs, we'll compensate by having orders of magnitude more laws, perhaps themselves generated in part by LLMs.


> There's an incredible amount of waste due to incumbent tech making interoperability difficult.

So the solution to that is to add another layer of complex AI tech on top of it?


Well nothing else we've tried has worked.


I work with healthcare in the UK. There’s a promising approach called CSV files which is revolutionising some of my workflows :)


We’ll see. Having worked in this space in the past, I'd say the technical challenges can be overcome today with no new technology: it's a business sales and regulation challenge more than a tech one.


Sometimes.

In my case I have a bunch of nurses that waste a huge amount of time dealing with clerical work and tech hoops, rather than operating at the top of their license.

Traditional RPAs are tough when you're dealing with VPNs, 2fa, remote desktop (in multiple ways), a variety of EHRs and scraping clinical documentation from poorly structured clinical notes or PDFs.

This technology looks like it could be a game changer for our organization.


True, 2FA and all these little details that exist now have made this automation insanely complicated. 2FA etc. are of course necessary, but I believe there is huge potential in solving this.


From a security standpoint, what's considered the "proper" way of assigning a bot access based on a person's 2FA? Would that be some sort of limited scope expiring token like GitHub's fine-grained personal access tokens?


Security isn't the only issue here. There are more and less "proper" ways of giving bots access to a system. But the whole field of RPA exists in large part because the vendors don't want you to access the system this way. They aren't going to give you a "proper" way of assigning bot access in a secure way, because they explicitly don't want you to do it in the first place.


I don't know, I feel like it has to be some sort of near-field identity proof. E.g. as long as you are wearing a piece of equipment near a physical computer, it can run all those automations for you, or something similar. I haven't fully thought through what the best solution could be, or whether someone is already working on it, but I feel like there has to be something like that, which would allow you better UX in terms of access, but security at the same time.

So maybe like an automated YubiKey that you can opt in to a nearby computer to have all the access. Especially if working from home, you can set it to a state where, if you are within a 15m radius of your laptop, it is able to sign all access.

Because right now, considering the amount of tools and everything I use, with single sign-on, VPN, Okta, etc., and how slow they all seem to be, it's an extremely frustrating process constantly logging in everywhere, and it almost makes me procrastinate my work, because I can't be bothered. Everything about those weird little things is an absolutely terrible experience, including things like cookie banners as well.

And it is ridiculous, because I'm working from home, but frustratingly high amount of time is spent on this bs.

A Bluetooth wearable or something similar to prove that I'm nearby, essentially: to me that seems like it could alleviate a lot of safety concerns, while providing an amazing dev/user experience.


That's a really cool idea.

The main attack vector would then probably be some man-in-the-middle intercepting the signal from your wearable, which leads me to wonder whether you could protect yourself by having the responses valid for only an extremely short duration, e.g. ~1ms, such that there's no way for an attacker to do anything with the token unless they gain control over compute inside your house.
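A rough sketch of that short-validity idea in Python: the wearable signs the current timestamp with an HMAC, and the workstation rejects anything outside a tiny window. Everything here (the secret, the TTL, the function names) is hypothetical, the window is wider than the ~1ms suggested to allow for clock skew, and a real protocol would also want challenge/response rather than a bare timestamp to prevent replay:

```python
import hashlib, hmac, time

SECRET_KEY = b"shared-secret-provisioned-on-the-wearable"  # hypothetical
TTL_MS = 50  # validity window in milliseconds (illustrative)

def issue_token(now_ms):
    """Wearable side: sign the current timestamp."""
    msg = str(now_ms).encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{now_ms}:{sig}"

def verify_token(token, now_ms):
    """Workstation side: check the signature and that it hasn't expired."""
    ts_str, sig = token.split(":")
    expected = hmac.new(SECRET_KEY, ts_str.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return False
    return 0 <= now_ms - int(ts_str) <= TTL_MS

now = int(time.time() * 1000)
tok = issue_token(now)
```

With a window that tight, an intercepted token is useless unless the attacker can replay it from compute that's effectively co-located with you, which is the property you're after.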


Maybe we could build an authenticator as part of the RPA tool or bot client itself. This way, the bot could generate time-based one-time passwords (TOTPs).
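Generating TOTPs doesn't need a dedicated authenticator app; the RFC 6238 algorithm is small enough to embed directly in a bot client. A sketch using only the Python standard library:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    if for_time is None:
        for_time = int(time.time())
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This produces the same codes as Google Authenticator-style apps for the same enrolled secret, so a bot holding that secret can answer the 2FA prompt itself. Of course, storing the secret alongside the bot's password collapses 2FA back toward a single factor, which is exactly the trade-off being debated here.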


Precisely why I built therapedia.io


I agree that at the business contract level, it's more about sales and regulations than tech. But in my experience working close to minimum wage white-collar jobs, about 1 in 4 of my coworkers had automated most of their job with some unholy combination of VBScript, Excel wizardry, AutoHotKey, Selenium, and just a bit of basic Python sprinkled in; IT, security, and privacy concerns notwithstanding. Some were even dedicated enough to pay small amounts out-of-pocket for certain tools.

I'd bet that until we get the risks whittled down enough for larger organizations to adopt this on a wide scale, the biggest user group for AI automation tools will be at the level of individual workers who are eager to streamline their own tasks and aren't paid enough to care about those same risks.


Or you'll start getting a captcha while trying to pump insulin


(Shrug) AI is now better at CAPTCHAs than I am, so bring it on I guess.


Is "AI SaaS bro discovers not everything has a JSON API" the new "startup bro just reinvented a bus"?


Good one.



