I don't get it. The point of OpenClaw is it's supposed to be an assistant, helping you with whatever random tasks you happen to have, in natural language. But for that to work, it has to have access to your personal data, your calendar, your emails, your credit card, etc., no?
Are there other tasks that people commonly want to run, that don't require this, that I'm not aware of? If so I'd love to hear about them.
The ClawBert thing makes a lot more sense to me, but implementing this with just a Claude Code instance again seems like a really easy way to get pwned. Without a human in the loop and heavy sandboxing, an agent can just get prompt injected by some user-controlled log or database entry and leak your entire database and whatever else it has access to.
Yes and even now if you tell the LLM any private information inside the sandbox it can now leak that if it gets misdirected/prompt injected.
So there isn't really a way to avoid this trade-off: you can either have a useless agent with no info and no access, or a useful agent that is incredibly risky to use because it might go rogue at any moment.
Sure, you can choose roughly where on that scale you want to be, but any usefulness inherently means risk if you run LLMs async without supervision.
The only absolutely safe way to give access and info to an agent is with manual approvals for anything it does. Which gives you review fatigue in minutes.
A user could leave malicious instructions in their instance, but Clawbert only has access to that user's info in the database, so you only pwned yourself.
A user could leave malicious instructions in someone else's instance and then rely on Clawbert to execute them. But Clawbert seems like a worse attack vector than just getting OpenClaw itself to execute the malicious instructions. OpenClaw already has root access.
Re other use cases that don't rely on personal data: we have users doing research and sending reports from an AgentMail account to the personal account, maintaining sandboxing. Another user set up this diving conditions website, which requires no personal data: https://www.diveprosd.com/
> But Clawbert seems like a worse attack vector than just getting OpenClaw itself to execute the malicious instructions. OpenClaw already has root access.
Well, the assumption was that you could secure OpenClaw, or at least limit the damage it can do. I was also thinking more about the general use case of an AI SRE, so not necessarily tied to OpenClaw but for general self-hosting. But yeah, it probably doesn't make much of a difference in your case then.
You can solve that by requiring confirmation for anything except reading information from trusted sites. Web visits can be done without confirmation by reading a cached copy and not executing any JavaScript with network access on it (otherwise visiting arbitrary sites can leak information via the URLs sent to arbitrary servers).
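As a sketch of that policy (the tool names and allowlist here are made up for illustration, not from any real agent framework), an agent harness could gate every tool call behind a check like this:

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this would be user-configured.
TRUSTED_READ_DOMAINS = {"docs.python.org", "en.wikipedia.org"}

def requires_confirmation(tool: str, url: str = "") -> bool:
    """Return True unless the call is a plain read from a trusted site.

    Everything else (writes, shell commands, fetches of untrusted URLs,
    which can exfiltrate data through the URL itself) goes to a human
    for manual approval before the agent may proceed.
    """
    if tool != "read_page" or not url:
        return True
    host = urlparse(url).hostname or ""
    return host not in TRUSTED_READ_DOMAINS
```

The actual fetch would then still go through a cache with JavaScript disabled, so even a trusted-but-compromised page can't phone home with your data.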
> Tweaking user-hostile OSes into user-friendly ones is impressive, but not sustainable. Even worse, it's slowing us down from leaving Android entirely.
Not sustainable as opposed to what, exactly? Developing and maintaining a completely different mobile operating system? Focusing on truly open platforms sounds nice in theory, but completely falls apart the moment you consider what people want to do with their phones compared to the development resources available.
> Every single chrome-fork has shut down MV2 extensions, even Brave is about to do it
That's just wrong; there are other forks that still support MV2 extensions right now, and at least Brave has no plans of shutting down MV2 extensions even after Google removes MV2 from upstream completely. It will certainly add maintenance effort on Brave's side, but they already patch a million other things that upstream doesn't support.
Brave said they'll try to maintain temporarily limited MV2 support for only 4 specific extensions, but recommend Brave Shields as the go-to adblocker for the future. Google is about to remove most of the MV2 code from the codebase, which will soon make maintaining that support far more complex.
The word "temporarily" isn't mentioned anywhere on that page, and that's already a very different claim to "Brave is about to shut down MV2". And the MV2 support is not specific to those 4 extensions, the hosting on Brave's servers is (though for other extensions not that much changes with MV3 anyway).
MV2 is behind a flag for now, but it is about to be removed from the Chrome codebase entirely. Which is why Brave recommends using Brave Shields as the long-term solution, which does not depend on it.
> Not sustainable as opposed to what, exactly? Developing and maintaining a completely different mobile operating system? Focusing on truly open platforms sounds nice in theory, but completely falls apart the moment you consider what people want to do with their phones compared to the development resources available.
Multiple open source desktop/laptop operating systems are maintained.
> Developing and maintaining a completely different mobile operating system?
The cost of writing code has fallen 100x in the past 3 years, and will likely fall 100x further. So actually, yes, thanks to AI it probably actually is reasonable to launch a fully new stack from scratch.
>The cost of writing code has fallen 100x in the past 3 years
Maybe, but the cost of actually shipping a product has fallen by maybe 10%. I don't see dozens of production-ready mainstream OSes and web browsers popping up because an LLM can dump tens of lines of code per second.
As a startup founder shipping product, I strongly disagree with that.
Give it 12 months, you will see dozens of from-scratch large scale software projects shipping. New web browsers, new operating systems, new gaming engines, new productivity software, we are at the threshold of having an abundance of software that was previously only available from large corporations.
> Seniors come from juniors. If you want seniors, you must let the juniors write the code.
Companies know this as well, but it's a prisoner's-dilemma-type situation for them. A company can skip out on juniors and instead offer to pay seniors a bit better to poach them from other companies, saving money. If everyone starts doing this, everyone obviously loses: there just won't be enough new seniors to satisfy demand. Avoiding this requires that most companies play by the rules, so to speak, which is not easily achieved.
And the higher the cost of training juniors relative to their economic output, the greater the incentive to break the rules becomes.
One alternative might just be more strict non-competes and the like, to make it harder for employees to switch companies in the first place. But this is legally challenging and obviously not a great thing for employees in general.
There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot. If you want to disallow certain apps then you have to make decisions about what apps (stores) are "blessed" and what criteria are used to make those distinctions, necessarily restricting what users can do with their own devices.
You can go the softer route of requiring some complicated mechanism of "unlocking" your phone before you can install unverified apps, but by definition that mechanism needs to be more complicated than even a normal non-technical user guided by a scammer can manage. So you've essentially made it impossible for normies to install non-Play-Store apps, and thus also made all other app stores irrelevant for the most part.
The scamming issue is real, but the proposed solutions seem worse than the disease, at least to me.
The solution would be a "noob mode" that disables sideloading and other security-critical features, which can be chosen when the device is first turned on and requires a factory reset to deactivate. People who still choose expert mode even though they are beginners would then only have themselves to blame.
This is just a variant of the "complicated unlocking mechanism" I was talking about. It still screws over everything not coming from the Play Store, because the installation process essentially becomes a huge hassle (one that even involves factory resetting the device) that most people won't want to deal with.
> There simply isn't a known solution to this problem. If you give users the ability to install unverified apps, then bad actors can trick them into installing bad ones that steal their auth codes and whatnot.
This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.
> This is also true if they can only install verified apps, because no company on earth has the resources to have an actually functional verification process and stuff gets through every day.
This is true, but if this goes through, I imagine that the next step for safety fascists will be to require developer licensing and insurance like general contractors have. And after that, expensive audits, etc, until independent developers are shut out completely.
I never mentioned building critical software like medical diagnosis software, software for industrial equipment, etc.
If I write a trash library for a random project and someone else starts using it to run their nuke plant, that isn’t my fault. Read the license. NO WARRANTY.
I'm going to assume you're referring to auth codes, especially the ones sent via SMS? In which case yes, banks should definitely stop using those but that alone doesn't solve the overarching issue.
The next step is simply that the scammer modifies the official bank app, adds a backdoor to it, and convinces the victim to install that app and log in with it. No hardware-bound credentials are going to help you with that; the only fix is attestation, which brings you back to the aforementioned issue of blessed apps.
I'm not sure if you understand what makes passkeys phishing-resistant?
The backdoored version of the app would need to have a different app ID, since the attacker does not have the legitimate publisher's signing keys. So the OS shouldn't let it access the legitimate app's credentials.
I understand how passkeys work. You don't need the legitimate app's credentials; we're talking about phishing attacks, where you're trying to get the victim to give you access/control of their account without them realizing that's what is happening.
A simple scenario adapted from the one given in the Android blog post: the attacker calls the victim and convinces them that their banking account is compromised and they need to act now to secure it. The scammer tells the victim that their account got compromised because they're using an outdated version of the banking app that's no longer supported. He then walks them through "updating" their app, effectively going through the "new device" workflow, except the new device is the same as the old one, just with the backdoored app.
You can prevent this with attestation of course, essentially giving the bank's backend the ability to verify that the credentials are actually tied to their app, and not some backdoored version. But now you have a "blessed" key that's in the hands of Google or Apple or whomever, and everyone who wants to run other operating systems or even just patched versions of official apps is out of luck.
> He then walks them through "updating" their app, effectively going through the "new device" workflow - except the new device is the same as the old one, just with the backdoored app.
This is where the scheme breaks down: the new passkey credential can never be associated with the legitimate RP. The attacker will not be able to use the credential to sign in to the legitimate app/site and steal money.
The attacker controls the fake/backdoored app, but they do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association. You don't even need attestation to prevent this scenario.
> do not control the signing key which is ultimately used to associate app <-> domain <-> passkey, and they do not control the system credentials service which checks this association.
You're assuming the attacker must go through the credential manager and the backing hardware, but that is only the case with attestation. Without it, the attacker can simply generate their own passkey in software, because the bank's backend would have no way of telling where the passkey came from.
With banks, typically a combination of your account number, PIN, and some confirmation code sent via email or SMS. And of course unregistering your previous device. Not sure where you're going with this though?
I never said that passkeys can be phished, I said they don't solve this problem, but yeah. Locking the front door while leaving the back door wide open, as they say. But unless you can convince people to go into the bank counter every time they change their phone, that's life.
> I understand how passkeys work. You don't need the legitimate app's credentials; we're talking about phishing attacks, where you're trying to get the victim to give you access/control of their account without them realizing that's what is happening.
That doesn't work, because the scammer's app will be signed with a different key, so the relying party ID is different and the secure element (or whatever hardware backing you use) refuses to do the challenge-response.
Correction: nothing prevents the attacker from using the app's legit package ID, other than having to get the user to uninstall the existing app first.
The spoofed app can't request passkeys for the legit app because the legit app's domain is associated with the legit app's signing key fingerprint via .well-known/assetlinks.json, and the CredentialManager service checks that association.
If the sideloaded app does not have permission to use the passkeys and cannot somehow get the user to approve passkey access for the new app, that would be a good alternative to still allow custom apps.
I don't think you understand. This exists _today_, regardless of how you install apps, because attackers can't spoof app signatures. If I don't have Bank of America's private signing key, I cannot make an app that requests passkeys for bankofamerica.com, because bankofamerica.com publishes a file [0] that says "only apps signed with this key fingerprint are allowed to request passkeys for bankofamerica.com" and Android's credential service checks that file.
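For the curious, that association lives in a Digital Asset Links file served at `https://<domain>/.well-known/assetlinks.json`. It looks roughly like this (the package name and certificate fingerprint below are placeholders, not Bank of America's real values):

```json
[
  {
    "relation": ["delegate_permission/common.get_login_creds"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.bankapp",
      "sha256_cert_fingerprints": [
        "14:6D:E9:83:C5:73:06:50:D8:EE:B9:95:2F:34:FC:64:16:A0:83:42:E6:1D:BE:A8:8A:04:96:B2:3F:CF:44:E5"
      ]
    }
  }
]
```

A repackaged app is necessarily signed with a different key, so its certificate fingerprint won't match this file and the credential request is refused.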
No need for locking down the app ecosystem, no need to verify developers. Just don't use phishable credentials and you are not vulnerable to malware trying to phish credentials.
It's a question of what tradeoffs you're willing to make. If you're making a professional product then sure, but I've checked the chips you suggested, and the cheapest one available on JLCPCB seems to be the LPC1820FB at $6.50. If you fuck up a revision or two at 5 pieces each, that gets expensive rather quickly for a hobby project.
It's a shame really that ULPI is such a complicated interface, at least compared to RMII or SDIO; otherwise you could just buy a high-speed USB ULPI PHY and use it with the RP2350 via PIO. Eben even mentioned at some point that someone was working on a PIO ULPI implementation, but I'm assuming that went nowhere because they couldn't make it work reliably.
An alternative approach I've been considering is using a CH32V305 as a "smart" USB-HS bridge and connecting the RP2350 to it via SDIO. The problem with that, of course, is that you have to implement most of the USB stack on the CH32V305, and the documentation on those chips isn't great, to put it mildly.
Who else is going to maintain and develop it? It's the same issue as with Chrome, even if you force Google to give it to some other company, they're all just as bad. And it's too big and too costly to maintain for anyone else but tech giants.
The only other options would be convincing users to pay 5 bucks a month for their software, or have some Government fork over the tens of millions required to pay open source developers. And good luck with that.
I'm thinking with ever increasing seriousness: let's split any company that grows past a certain size. Each side gets a copy of the codebase and half the assets, no one who's been on the board on one side can be on the other side's board, and neither side can buy off the other. They can use the existing branding for a limited time and with a qualifier (say Google Turnip vs Google Potato) but after that it's on the strength of the new brand which they're each building and for which they're competing against each other and the rest of the market.
This is not happening in my lifetime, of course it isn't. But by god does it need to happen.
Right? We need a "You won capitalism!" award where everybody in the org gets a huge bonus and then the company is split into small pieces and then they start over. On top of it we do what you describe and enforce the split so they can't collude.
Historical meaning is pretty worthless though. It's like saying CPUs are going backwards because the 386 was a bigger jump. Technology matures eventually, and that's not a bad thing.
Android doesn't really handle the hardware side anyway: AOSP doesn't run as-is on a single phone on earth, not even on the emulators; that part is the manufacturers' job.
As for features, you can read, for example, what Android 16 changed:
No, my friend, I mean objectively, not through some rose-tinted nostalgia glasses. Android 2.3 allowed so much better control for people who weren't installing all sorts of crap apps that the usability was something unimaginable to an Android 15 user. Not to mention we had devices with trackballs, keyboards, etc.
Sadly, given that both manufacturers in this duopoly are highly incentivised to push malicious apps, everything must be thrown out for a cat-and-mouse game of sandboxing.
I don't think you understand what that word means.
Regardless, your opinion (and mine) is irrelevant. People want at least some of the features of modern android, and any alternative lacking those is not going to be adopted by most people. Just look at how many people try GrapheneOS and find the minor things to be dealbreakers for them.
And as long as that's the case you can't expect people to vote for a scenario where they'll end up with a, in their eyes, worse product.
Name one feature Android has today that was missing from community ROMs built on top of Android 2.3. Only sandbox kernel features, but that's a circular argument.
Only things nobody wanted were missing then. Like fake AI photo enhancement.
Fully hardware-based disk encryption with key management. Or live captions. Or a far better java runtime with superior memory management. Or a million other things.
But even your own example works: just because you dislike camera filters doesn't mean everybody else agrees. There are probably more women with smartphones that use those than there are that don't.
> and when Google pushed manifest v3 changes to block ad-blockers every single one of them was affected.
That's just objectively wrong, both Brave and Opera still support manifest v2 and are committed to continue doing so for the foreseeable future. Even Edge apparently still has it, funnily enough.
Nope, actually "both Brave and Opera still support manifest v2" is objectively wrong.
Brave does NOT support manifest v2. They have instead hand picked exactly 4 manifest v2 extensions (AdGuard, NoScript, uBlock Origin, and uMatrix) and have hard-coded special support for them. They quite literally say in https://brave.com/blog/brave-shields-manifest-v3/ that all other v2 extensions will go away from Brave once Google fully removes support for them (which may have happened already, since it was posted a while ago).
> They have instead hand picked exactly 4 manifest v2 extensions (AdGuard, NoScript, uBlock Origin, and uMatrix) and have hard-coded special support for them. They quite literally say in https://brave.com/blog/brave-shields-manifest-v3/
You're misreading that page: they have special-cased the hosting of those 4 extensions, because they do not have their own extension store and rely on Chrome's instead. You can still install any Manifest V2 extension manually; not that there are going to be many outside of those 4 that care about V2.
As for Opera:
"Today, we reiterate what we said back in October 2024: MV2 extensions are still available to use on Opera, and we are actively working to keep it that way for as long as it’s technically reasonable."
Which begs the question: why isn't uBlock Origin native in every browser yet?
Add-ons for Firefox were at first a way to test features. We only have devtools because one person wrote an add-on copying IE6's dev tools; by the next Firefox release it was part of the core browser.
> * Chip design pays better than software in many cases and many places (US and UK included;
Where are these companies? All you ever hear from the hardware side of things is that the tools suck, that everyone makes you sign NDAs for everything, and that the pay is around 30% less. You can come up with counterexamples like Nvidia, I suppose, but that's a bit like saying "just work for a startup that becomes a billion-dollar unicorn".
If these well paying jobs truly exist (which I'm going to be honest I doubt quite a bit) the companies offering them seem to be doing a horrendous job advertising that fact.
The same seems to apply to software jobs in the embedded world as well, which seem to be consistently paid less than web developers despite arguably having a more difficult job.
Oh by the way, I agree, NDAs all the time, and many of the tools are super user-unfriendly. There's quite a bit of money being made in developing better tools.
As for a list of companies, in the UK or with a UK presence, the following come to mind: Graphcore, Fractile, Olix, Axelera, Codasip, Secqai, PQShield, Vaire, SCI Semiconductor and probably also look at Imagination Tech, AMD and Arm. There are many other companies of different sizes in the UK, these are just the ones that popped into my head in the moment tonight.
[Please note: I am not commenting on actual salaries paid by any of these companies, but if you went looking, I think you'd find roles that offer competitive compensation. My other comments mentioning salaries are based on salary guides I read at the end of last year, as well as my own experience paying people in my previous hardware startup up to May 2025 (VyperCore).]
Depends if you're looking at startups/scaleups or the big companies. Arm, Imagination Tech, etc. for a very long time did not pay anything like as well (even if you were doing software work for them). That's shifted a lot in the UK in recent years (can't speak for the rest of the world). Even so, I hear Intel and AMD still pay lower base salary than you might get at a rival startup.
As for startups/scaleups, I can testify from experience that you'll get the following kind of base salaries in the UK outside of hardware-for-finance companies (not including options/benefits/etc.). Note that my experience is around CPU, GPU, AI accelerators, etc. - novel stuff, not just incrementing the version number of a microcontroller design:
* Staff engineering salaries (software, hardware, computer architecture): £100k - £130k and beyond
* Principal, director, VP, etc. engineering salaries: £130k+ (and £200k to £250k is not an unreasonable expectation for people with 10+ years of experience).
If you happen to be in physical design with experience on a cutting edge node: £250k - £350k (except at very early stage ventures)
Can you find software roles that pay more? Sure, of course you can. AI and Data Science roles can sometimes pay incredible salaries. But are there that many of those kinds of roles? I don't know - I think demand in hardware design outstrips availability in top-end AI roles, but maybe I'm wrong.
From personal experience, I've been paid double-digits percentage more being a computer architect in hardware startups than I have in senior software engineering roles in (complex) SaaS startups (across virtual conferencing, carbon accounting, and autonomous vehicle simulations). That's very much a personal journey and experience, so I appreciate it's not a reflection of the general market (unlike the figures I quoted above) so of course others will have found the opposite.
To get a sense of the UK markets for a wide range of roles across sectors and company sizes, I recommend looking at salary guides from the likes of:
* IC Resources
* SoCode
* Microtech
* Client-Server
It feels like software jobs will be moving more to Europe generally.
It is just the work-ethic divide that is at issue. The staying power of San Francisco was that people were willing to work long hours for stock, but these days even stock isn't worth enough.
> Not in Spain. I can access my bank's website but I can't do anything without their bank app.
I don't know about Spain specifically, but as far as I understand it no bank in the European Economic Area + UK should allow banking via just the website alone anymore, because of the "Revised Payment Services Directive" (PSD2) regulation.
Essentially, banks are required to implement "strong customer authentication", which in essence is just multi-factor authentication with a password + either biometrics or a security device of some sort.
And in practice that means a banking app, because most people do not want a separate token they have to buy and can lose. Though a lot of banks do offer those as well.
In Estonia you can easily do banking via the website on all the banks (LHV, Swedbank, SEB). That said, we do have it all integrated with our digital-ID (which every ID card has private keys encoded into with a PIN you know) so it's not like you can access it with a simple password (our online voting works the same way).
Voting, much like all other things in Estonia such as getting married/divorced, doing taxes, signing documents, starting/closing companies, notary dealings, bank dealings, selling/buying vehicles, and many more things I can't even think of right now, is done entirely via the digital ID that every citizen has. This means that you authorize/sign actions with it, including voting, because only you have your private keys (either in your personal ID card, in your phone's SIM card, etc.), which you yourself know the PIN for, which then authenticates you as being you. I think we're now at a point where there isn't a single government or business dealing you can not do entirely online (https://e-estonia.com/solutions/).
Phones and SIM cards are a lot more temporary than ID cards. I don't know of a lot of thieves that target ID cards for their authorization uses. Phones... people will steal those.
You can close your Mobile-ID when your phone gets stolen so the security keys on it will be useless, and even if you don't close it, nobody can use your security keys without your PIN, which is in your head.
There’s also Digi-ID (a similar e-signature certificate on a card, but without any ID features), Mobiil-ID (e-signature on a SIM card, no idea how it works), Smart-ID (in-app, tied to secure storage in Android/iOS, cross-signed by the server, which is supposed to check the device somehow), and probably something else I don’t remember. All of these are independent options, so you can, for example, revoke your Mobiil-ID if you lose your phone and still use your main ID card to sign things.
Wow, that is definitely more sophisticated than we have in the states. It seems like you can use it for things that one would otherwise need a notary for, that is such a timesaver.
It costs as much as your ID card costs from the government, and lasts as long as well; they are one and the same. Applying for a new ID card / national ID document in Estonia costs 35€, and the document is valid for 5 years. If you forget your PIN code, you can reset it with your PUK codes, but if you also lose your PUK codes you need to apply for a new ID card. The process for getting a new ID card from the moment you apply takes no more than 30 days. You can also have it fast-tracked for 250€ and get it in 2 days.
But, like the parent said, you have many options other than the physical ID card as well. Most people these days use Mobiil-ID or Smart-ID, which work on your phone and even your smartwatch. Smart-ID is completely free, and Mobiil-ID is tied to your phone's carrier, so the cost varies, but it's a one-time set-up fee of around 5€. The Mobiil-ID certificate also lasts 5 years.
(When will people learn that biometrics are not another factor: they're entirely public and irrevocable. It's not just security theater, but Apple & Google know that this forces you into their ecosystem, which should be illegal. Of course, Brussels is full of rubes anyway.)
The question is what generated that TOTP code. The banks must ensure that they "are independent, in that the breach of one does not compromise the reliability of the others," as article 4(30) states. That text is vague as hell, but published opinion of the European Banking Authority on the matter[0] is:
"a device could be used as evidence of possession, provided that there is a ‘reliable means to confirm possession through the generation or receipt of a dynamic validation element on the device’"
So in essence the TOTP has to be bound to the device in a way that prevents users from just extracting the secret and putting it in their password manager. Hypothetically that would still allow YubiKeys and other security keys that provide attestation from the factory, but in practice banks probably don't want to deal with the support headache and just provide their own, like the TAN generator mentioned by other commenters.
Two other highlights from the interpretation of the EBA:
"App installed on the device" -> not sufficient/compliant
"In the case of an SMS, and as highlighted in Q&A 4039, the possession element ‘would not be the SMS itself, but rather, typically, the SIM-card associated with the respective mobile number’."
"SIM-card associated with the mobile number": is that even technically possible? Do mobile carriers provide an API for banks to verify that a number still corresponds to the same SIM card? If so, I've never heard of it.
Some UK banks (Nationwide and Barclays I know for certain) have had mini card-reader PIN devices since around 2010 that they've given customers, that basically generate on an LCD screen an 8-digit code for authentication.
When confirming a large transfer, you also need to enter the payment amount in the device, and I assume this gets hashed into the number as well.
More recently (last 3/4 years), you can also use their mobile app to do this instead / as well as.
Moved from the UK to Germany. My German card reader is even better, no manually entering the transaction details, I just scan a QR code from my laptop, and the card reader display shows the IBAN and amounts, before I confirm to get the code.
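The "amount gets hashed into the number" guess is plausibly how this works: it's what PSD2 calls "dynamic linking". As a sketch (I haven't seen the actual card-reader specs, so the construction, field names, and code length here are illustrative guesses, not the real protocol):

```python
import hashlib
import hmac

def transaction_code(card_key: bytes, challenge: str,
                     iban: str, amount_cents: int) -> str:
    """Derive an 8-digit confirmation code bound to one specific transaction.

    Changing the amount or the payee IBAN changes the code, so a
    man-in-the-browser can't reuse a code for a different transfer.
    """
    message = f"{challenge}|{iban}|{amount_cents}".encode()
    digest = hmac.new(card_key, message, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)
```

The key point is that the secret key never leaves the card, and the bank's backend recomputes the same code from the transaction it actually received, so any tampering in between makes the codes disagree.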
> And in practice that means a banking app, because most people do not want a separate token they have to buy and can lose.
It can be SMS. As said in another comment, the main banks in Spain offer this authentication method while being PSD2 compliant. Some also offer a card with coordinates. So it's not mandatory in any way to use a banking app.
Probably not for much longer though. Several countries, including mine, have already banned SMS 2FA for banking, and it's likely that that will be implemented for all of Europe in the near future, possibly with PSD3. Not that SMS 2FA was ever a good idea in the first place.
But yes banking apps are not mandatory, and likely won't be in the near future either, though the alternatives are treated a bit like second class citizens.
> a zero-day in the closed source firmware from Qualcomm will probably screw you anyway.
All the devices that GrapheneOS supports implement a clear separation between the baseband and the CPU in the form of an SMMU, ARM's version of an IOMMU. So a zero-day in the baseband does not immediately screw you, unless the code on the CPU side also contains vulnerabilities or there is a major flaw in the SMMU implementation that somehow breaks isolation.
Thanks for the clarification (and to the others that answered as well).
I probably explained myself in a shitty manner. I wasn't trying to downplay GrapheneOS's efforts, and I should have kept my initial statement that "the next best thing can create a false sense of completeness" as a generic remark, not one specific to GrapheneOS, which I don't know well enough to say whether it applies.