Have you inspected or thought through the security of your open source library?
You are using debugging tools such as CDP (the Chrome DevTools Protocol), launching Playwright without a sandbox, and guiding users to launch Chrome in debugging mode so browser-use can connect to their main browser.
The debugging tools you use have active exploits that Google doesn't fix, because they are meant for debugging rather than production/general use. Combined with your other two design choices, this lets an exploit escalate and infect the user's main machine.
Have you considered productionizing your service without all of these debugging permissions?
Thank you! It's constructive, helps the people who are building things while giving them the benefit of the doubt, keeps users safe, and educates those who have enough technical understanding of the topic.
Could you go into a bit more detail about this? Why is exposing devtools to the agent a problem? What's the attack vector? That the agent might do something malicious to exfil saved passwords?
Their key offering is an open source solution that you can run on your own laptop and Chrome browser, but their approach to doing this presents a huge security risk.
They do have a cloud offering that should not carry these risks, but then you have to enter your passwords into their cloud browser environment, which presents a different set of risks. Their cloud offering is basically similar to Skyvern, or to a higher-cost subscription tier we have at rtrvr.ai.
How would that work? Can you control the browser without debug mode? Especially since in production the browsers are running on single-instance Docker containers anyway, so the file system is not accessible... are there exploits that can do harm from inside a virtual machine?
Yes, I was able to figure out a secure way to control the browser with AI agents at rtrvr.ai without using debugger permissions/tools, so it is most definitely possible.
I meant "in production" in the sense of how you are advising your users to set up the local installation. Even if you launch browser-use locally within a container, you're restarting the user's Chrome in debug mode and controlling it with CDP from within the container, so the door is wide open to exploits and the container doesn't do anything?!
Your claim is analogous to saying that Apple's App Store is not secure. We had to go through stringent vetting and testing by Google to list in the Chrome Web Store. Any basis or reasoning you can provide for your claim?
Regardless, it's a wild leap to claim a Chrome Web Store extension is less secure than this arbitrary binary?
Yeah, sorta feels like Docker on a new instance is safer than connecting to actual browsers and injecting JS code there… would love to skip the CDP protocol though, it's quite restrictive
Are you making a straw man argument? I am not injecting JS code; we solved this problem in a secure way, with minimal permissions taken by our Chrome extension, which runs in a safe and secure sandbox within the browser.
Perhaps we are talking past each other; you're literally giving instructions to your users to connect to their actual browsers:
https://docs.browser-use.com/customize/real-browser
Where, under the hood, you're launching Chrome in debugging mode but with the user's credentials and passwords. This browser is then controlled via CDP by a highly insecure browser-use binary running in a container. Your users are bound to get pwned with this setup!
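For concreteness, the documented setup reduces to something like the following sketch. This is a hypothetical reconstruction, not code from the docs: `--remote-debugging-port` and `--user-data-dir` are standard Chrome switches, and the profile path is an example.

```python
import shlex

# Hypothetical reconstruction of a "connect to your real browser" launch;
# the flags are standard Chrome command-line switches.
launch_cmd = [
    "google-chrome",
    "--remote-debugging-port=9222",  # opens an unauthenticated CDP port
    "--user-data-dir=/home/user/.config/google-chrome",  # the REAL profile:
]                                    # cookies, sessions, saved passwords

print(shlex.join(launch_cmd))
# Once Chrome is running like this, a controller (browser-use drives the
# browser through Playwright, whose chromium.connect_over_cdp() attaches to
# exactly such a port) has the same power as the logged-in user -- whether
# the controller runs inside a container or not.
```

The container boundary is irrelevant here because the sensitive asset, the user's real profile, lives outside the container, behind a port that accepts commands from anyone who can reach it.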
https://github.com/browser-use/browser-use/blob/70ae758a3bfa...