So what I really want to know is: what happens at Cloudflare--which uses v8 to implement Cloudflare Workers in shared memory space--when this kind of stuff happens? (Their use is in some sense way more "out on a limb" than a web browser, where you would have to wait for someone to come to your likely-niche page rather than just push the attack to get run everywhere.)
Within an hour of V8 pushing the fix for this, our build automation alerted me that it had picked up the patch and built a new release of the Workers Runtime for us. I clicked a button to start rolling it out. After quick one-click approvals from EM and SRE, the release went to canary. After running there for a short time to verify no problems, I clicked to roll it out world-wide, which is in progress now. It will be everywhere within a couple hours. Rolling out an update like this causes no visible impact to customers.
In comparison, when a zero-day is dropped in a VM implementation used by a cloud service, it generally takes much longer to roll out a fix, and often requires rebooting all customer VMs which can be disruptive.
> Their use is in some sense way worse than a web browser, where you would have to wait for someone to come to your likely-niche page rather than just push the attack to get run everywhere.
I may be biased but I think this is debatable. If you want to target a specific victim, it's much easier to get that person to click a link than it is to randomly land on the same machine as them in a cloud service.
> Within an hour of V8 pushing the fix for this, our build automation alerted me that it had picked up the patch and built a new release of the Workers Runtime for us. I clicked a button to start rolling it out. After quick one-click approvals from EM and SRE, the release went to canary. After running there for a short time to verify no problems, I clicked to roll it out world-wide, which is in progress now. It will be everywhere within a couple hours. Rolling out an update like this causes no visible impact to customers.
Great workflow! I long for the day when I can work for a company that actually has its automation as efficient as this.
A few questions: do you have a way of differentiating critical patches like this one? If so, does that trigger an alert for the on-call person? Or do you still wait until working hours before such a change is pushed?
Look for a company whose business model includes uptime, security, and scalability. And is big enough to not outsource those parts. And in a mature market where customers can tell the difference.
I once worked for a company that tried to set up a new service; they asked for 99.99999% uptime. This worked really well for the 'ops' team, which focused on the AWS setup and automation, but meanwhile the developers (of which I was one, though I didn't have any say in things because I was 'just' a front-ender) fucked about with microservices, first built in NodeJS (with a Postgres database storing mostly JSON blobs behind them), then in Scala. Not because it was the best solution (neither microservices nor Scala), but because the developers wanted to, and the guys responsible for hiring were afraid that they'd get mediocre developers if they went for 'just' Java.
I'm just so tired of the whole microservices and prima donna developer bullshit.
No. In some sense it doesn't matter though. There are plenty of services that have less than their claimed reliability:
* They set an easy measurement that doesn't match customer experience, so they say they're in-SLO when common sense suggests otherwise.
* They require customers jump through hoops to get a credit after a major incident.
* The credits are often not total and/or are tiered by reliability (so you could promise 100% uptime and still not owe a 100% credit if you serve some errors). At the very most, they give the customer a free month. It's not as if they make the customer whole on their lost revenue.
With a standard industry SLA, you can have a profitable business claiming uptime you never ever achieve.
Also look at their job ads. If they are looking to hire a DevOps engineer to own their CI/CD pipeline, that means they don't have one (and, with that approach, never will).
My guess is that the main feature which enables this kind of automation is that they can take down any node without consequences. So they can just install an update on all the machines, and then reboot/restart the software on the machines sequentially. If you have implemented redundancy correctly, then software updating becomes simple.
We actually update each machine while it is serving live traffic, with no downtime.
We start a new instance of the server, warm it up (pre-load popular Workers), then move all new requests over to the new instance, while allowing the old instance to complete any requests that are in-flight.
Fewer moving parts makes it really easy to push an update at any time. :)
Specifically you can do two things: 1) planned incremental improvements, 2) simpler designs.
For 1), write down the entire manual workflow. Start automating pieces that are easy to automate, even if someone has to run the automation manually. Continue to automate the in-between/manual pieces. For this you can use autonomation to fall back to manual work if complete automation is too difficult/risky.
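As a sketch of that first approach (all names and commands here are hypothetical stand-ins, not anything from the thread): script the mechanical steps first, and leave the judgment call manual for now.

```shell
# Incremental automation sketch: the mechanical steps are scripted,
# the canary verification stays a manual step a human performs.
# build_release and deploy_canary are placeholders for whatever
# commands the team currently runs by hand.
set -eu
version="1.2.3"
build_release() { echo "built release $1"; }
deploy_canary() { echo "canary is running $1"; }

build_release "$version"
deploy_canary "$version"
echo "now verify the canary by hand, then run the rollout step yourself"
```

Later, the manual verification line is itself replaced by an automated health check, and the workflow shrinks to one button.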
For 2), look at your system's design. See where the design/tools/implementation/etc limit the ability to easily automate. To replace a given workflow section, you can a) replace some part of your system with a functionally-equivalent but easier to automate solution, or b) embed some new functionality/logic into that section of the system that extends and slightly abstracts the functionality, so that you can later easily replace the old system with a simpler one.
To get extra time/resources to spend on the automation, you can do a cost-benefit analysis. Record the manual processes' impact for a month, and compare this to an automated solution scaled out to 12-36 months (and the cost to automate it). Also include "costs" like time to market for deliverables and quality improvements. Business people really like charts, graphs, and cost saving estimates.
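A toy version of that cost-benefit arithmetic (every number below is made up purely for illustration):

```shell
# Hypothetical inputs: 6 hours/month of manual toil at $100/hour,
# versus a one-time 80-hour automation effort, evaluated over 24 months.
hours_per_month=6
hourly_rate=100
horizon_months=24
automation_hours=80

manual_cost=$((hours_per_month * hourly_rate * horizon_months))
automation_cost=$((automation_hours * hourly_rate))
echo "manual: \$${manual_cost}  automated: \$${automation_cost}  saved: \$$((manual_cost - automation_cost))"
```

Even with modest inputs like these, the automation pays for itself well inside the horizon, which is the kind of chart business people like.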
Thanks for the response! This didn't really answer what I was curious about, though: like, you answered what happens during the minutes after the fix being pushed, but I am curious about the minutes after the exploit being released, as the mention of "zero day" made me think that this bug could only have been fixed in the past few hours (and so there were likely hours of Cloudflare going "omg what now?" with engineers trying to help come up with a patch, etc.).
However... for this comment I then wanted to see how long ago that patch did drop, and it turns out "a week ago" :/... and the real issue is that neither Chrome nor Edge have merged the patch?!
> Agarwal said he responsibly reported the V8 security issue to the Chromium team, which patched the bug in the V8 code last week; however, the patch has not yet been integrated into official releases of downstream Chromium-based browsers such as Chrome, Edge, and others.
So uhh... damn ;P.
> I may be biased but I think this is debatable. If you want to target a specific victim, ...
FWIW, I had meant Cloudflare as the victim, not one of Cloudflare's users: I can push code to Cloudflare's servers and directly run it, but I can't do the same thing to a user (as they have to click a link). I appreciate your point, though (though I would also then want to look at "number of people I can quickly affect"). (I am curious about this because I want to better understand the mitigations in place by a service such as Cloudflare, as I am interested in the security ramifications of doing similar v8 work in distributed systems.)
(It was then rapidly cherry-picked into release branches, after which our automation picked it up.)
> I am curious about this because I want to better understand the mitigations in place by a service such as Cloudflare, as I am interested in the security ramifications of doing similar v8 work in distributed systems.
Thanks; FWIW, I'd definitely read that blog post, and watched the talk you gave a while back (paying careful attention to the Q&A, etc. ;P). (I had had a back/forth with you a while back, actually, surrounding how you limit the memory usage of workers, and in the end I am still unsure what strategy you went with.)
BTW: if there is any hope you can help put me in touch with people at Cloudflare who work on the Ethereum Gateway, I would be super grateful (I wanted to use it a lot--as I had an "all in on Cloudflare" strategy to help circumvent censorship--but then ran into a lot of issues and am not at all sure how to file them... a new one just cropped up yesterday, wherein it is incorrectly parsing JSON-RPC id fields). On the off chance you are interested in helping me with such a contact (and I understand if you aren't; no need to even respond or apologize ;P): I am saurik@saurik.com and I am in charge of technology for Orchid.
I would be interested to hear a response to your V8 memory limit question. Years before Cloudflare workers we isolated Parse Cloud Code workers in exactly the same way, at least at the beginning (multiple V8 isolates in the same process). One of the big issues was not really being able to set a true memory limit in V8. There was a flag, but it was pretty advisory--there were still codepaths that just tried to GC multiple times and then abort if not enough space was freed up. Not ideal when running multiple tenants in the same process.
Dynamic worker isolation is something I have dabbled with. I've been trying to figure out whether, once a misbehaving isolate is... isolated, it is possible to scrutinize its behavior to catch it in the act. What do you think? Would something like that even be useful? It seems to me that maybe if an isolate is confirmed malicious you can backtrack and identify data leaks.
I'd say for the 0-day exploit to affect services like CloudFlare someone would need to run the exploit first on their V8 infrastructure instances.
This would require advance knowledge of the vulnerability and someone either within CloudFlare or at one of its code dependencies to plant malicious code. Since a rolling upgrade seems to be fully automated at CloudFlare and can be done within a few hours for the complete infrastructure, I don't see CF being at high risk here.
You don’t need to know anyone at Cloudflare to run the exploit on their v8 infrastructure... you just need to sign up here: https://workers.cloudflare.com/
There is something so pleasing about this process being so fully automated that it can be managed in a few clicks. Kudos, I love reading things like this.
What checks do you perform on that upstream code before building and running it? I imagine a supply-chain attack would be devastating, even if the code only made it to canaries. Your build infrastructure at least could easily be compromised by a nefarious makefile addition.
Someone would have to get a malicious patch merged into Google's V8 release branch first. I also personally do a sanity-check review of the patches before rolling out the update.
That's impressive, in fact I would argue that this is as close to best-practice as you can get. I would love to read a blog post with details how you set this up!
All code executed on Workers has to have been uploaded through our API -- we don't allow eval() or dynamically compiling Wasm at runtime. This ensures that we have a paper trail in case of an exploit.
I think you are right — it is debatable [1]. I would argue that it is easier to find ways to exploit determinism of scheduling/allocation than finding ways to exploit humans.
> Each isolate's memory is completely isolated, so each piece of code is protected from other untrusted or user-written code on the runtime.
But they don't quite specify if it's isolated at system level (separate threads with unshared memory) or something simpler ("you can't use native code, so v8 isolates your objects").
What kind of exploit is it (it doesn't say)? Is it possible that further sandboxing levels would isolate the affected instance from other customers pending additional exploits (processes, VMs, etc)? This isn't really my area so apologies if what I'm saying is way off base :P
People in this thread seem very confused about a lot of stuff so I’m going to try and make some clarifications:
1) This is an RCE, so what it does is achieve code execution in the browser; i.e. it can run arbitrary code from the attacker, literally like running a compiled program inside the target's browser. This doesn't bypass Chrome's sandbox, so a lot of OS operations are not reachable from within the browser (for example, a lot of syscalls can't be called).
This is the first step in an exploit chain; the second would be a sandbox escape to expand the attack surface and do nasty stuff, or maybe a kernel exploit to achieve even higher privileges (such as root).
2) WASM being RWX is really not a security flaw. W^X protections in modern times (even in JIT-like memory mappings) make sense only when coupled with a strong CFI (control-flow integrity) model, which basically tries to mitigate code-reuse attacks (such as JOP or ROP). Without that kind of protection, which needs to be a system-wide effort as iOS has shown, W^X makes literally zero sense, since any attacker can easily bypass it by executing an incredibly small JOP/ROP chain that changes memory protections in the target area to run the shellcode, or can even mmap RWX memory directly.
See also this interesting series by Google Project Zero https://googleprojectzero.blogspot.com/2021/01/introducing-i... about some recent Chrome 0-days that seem to have been exploited by “Western government operatives actively conducting a counterterrorism operation”.
This is to say that with enough time, a sufficiently sophisticated and motivated actor can always find 0-days and achieve their goals.
It'd be good to raise the priority of process-wide W^X and design out RCEs of this type once and for all. I am disappointed that Wasm is on the exploit chain for a bug like this, as I still feel responsible in some way. I know team priorities change, but this one I pushed hard for commitment on before I left.
Yeah. And it's not even bugs in the Wasm engine that are the problem; the RWX memory for Wasm JIT code makes all other bugs into potential RCE bugs. It must be banished! :)
Having worked in other spaces ensuring W^X on the basis that there is no good reason for it, the only exception is usually "because I need to code generate".
How do you even go about getting rid of RWX in a JIT? Do you generate the code and then remove W?
In V8, JITed JavaScript code is never writable and executable at the same time; the JIT writes into a buffer, then quiesces JS execution, then copies/flips permissions, then starts running JS code again.
In the Wasm engine inside of V8, the code is writable and executable at the same time because the JIT uses multiple concurrent threads in the background during execution and incrementally commits new JITed code as it is finished. (And, funnily enough, this performance optimization is mostly for asm.js code, which is verified and internally translated to Wasm, to be compiled and executed by the Wasm engine).
The long-term holy grail is to move the JIT compiler entirely to another process, so only that process has write permissions while the renderer process (where Wasm and JS execute) has only read and execute permissions, and they use shared memory underneath.
You pretty much can't, because once memory has been written to at runtime it is assumed to be untrusted. JIT in and of itself is a W^X violation, so the only real solution is to not use it when security over performance is preferred.
Okay, so we remove strings. Good thing the in-memory object format isn't known by the atta– wait. Okay, never mind; we can get rid of objects too. And bignums, while we're at it; that leaves us just with bog-standard floating-point number primitives. Which are stored in a JavaScript call frame. Oops.
ArrayBuffers aren't meant to be executable code either.
JS strings are not UTF-16, they are 16-bit chunks of (potentially) nonsense, and enforcing valid UTF-16 would break quite a few existing uses. For example, anything that stores encrypted data in a string. Which "shouldn't" be done, that should be a Uint8Array, but existing APIs basically force you to do it. And there's such a thing as backwards compatibility.
Your 3rd point is much more feasible. I doubt any "real" mangling would be good enough from a performance standpoint while still being too difficult for attackers to use. But I could imagine eg breaking any invalid UTF-16/UTF-8 string up into separate rope nodes, maybe even ensuring the nodes don't get allocated too close to each other and/or injecting disrupting hardcoded bytes in between them. (I work on SpiderMonkey, the Firefox JS engine, and we do at least make sure to allocate string data in a separate part of the heap from everything else.)
Valid UTF16 is already being sporadically enforced.
People who hack JS to store arbitrary data in strings are already fighting a losing battle, and I see no point in helping them.
But my point is that we have moved from JS as a scripting language which did not allow for arbitrary binary data to one which does, without much thought given to that.
Half of the existing problems with zero-click, zero-day, and zero-browse exploits running in the wild, and with Chrome becoming the ActiveX 2.0, come down to that.
There is really no reason for a web browser to do computing on the web, and thus no need for binary manipulations in Javascript on the web.
I'm not saying to axe it from JS, but browsers could restrict the browser use case to a limited subset of the JS standard.
https://therecord.media/security-researcher-drops-chrome-and... says this isn't a fully weaponizable exploit because you still need to escape the Chrome sandbox after using this. But, the researcher shows a screenshot of having started calc.exe which seems like something that'd happen outside the sandbox?
The PR adding this says that you need to run Chrome with `--no-sandbox` to get the exploit chain (since they don't have a sandbox buster right now). Kinda feel like the PR to Metasploit is the more interesting link.
I would imagine the researcher showed a screenshot of the exploit being run on a copy of Chrom{e,ium} where he had disabled parts of the sandbox (that, or he has a more complex exploit with another maybe-undisclosed sandbox escape).
The bug was originally submitted to Google as part of Pwn2Own by two other people.
The GitHub POC was built using the patch and regression test by a third person (who you just replied to). While it's been patched in the v8 Javascript engine, that patch has not made it to Chrome (unless you're compiling Chromium from scratch) as part of Chrome's 2 week release process.
We've had multiple exploit chains so far thanks to wasm in v8. IIRC someone developed a full exploit chain to get persistent root on chromebooks with a bug in v8's wasm implementation as the starting point, got a big bug bounty out of that one. That particular exploit also involved some holes in the chrome extensions security model (I believe they addressed them)
Broadly speaking, the wasm stuff is only there as a method of getting the browser to execute shellcode, its a pretty standard lump of code for turning a memory bug into code execution in v8. What this shellcode does is open calculator when the browser's sandbox is disabled (`--no-sandbox`). In general in v8 exploitation, once you've reached a point where you can read and write arbitrary memory, you find that v8 will only create either RW or RX pages for you when the JIT compilation happens. WASM is a neat little trick for getting a handle to a RWX page.
At first glance to me, the core bug is actually in abusing an array enough to get an unsigned int into a function that expects them all to be signed, causing an off-by-one error and leveraging that into a memory leak (to get the pointer to a FixedArray for floats and a pointer to a FixedArray of objects) and then replacing one with another to create a type confusion and read/write arbitrary memory through that. r4j will probably correct me on the subtlety here though!
Source: extremely similar to HackTheBox RopeTwo, which I spent more time than I am prepared to admit solving.
Disclaimer: am noob at v8 exploitation, but have done enough of it to know some of the tricks.
> In general in v8 exploitation, once you've reached a point where you can read and write arbitrary memory, you find that v8 will only create either RW or RX pages for you when the JIT compilation happens. WASM is a neat little trick for getting a handle to a RWX page.
It's not a neat trick, but a grave problem of WASM model.
WASM memory (in)security will be a big problem until all the memory-security mitigations from native code are migrated to the WASM world, and at that point there will not be much use for WASM anymore.
You understand that having W^X protections on any JIT area is fairly useless without a strong CFI model in place right? Any attacker could easily execute a ROP/JOP chain to switch JIT protections to RX or even more simply allocate an RWX area where the shellcode can be copied and executed.
Yes, and this is the part of the problem of the general direction of JS ecosystem development.
JS promoters want so hard for JS to supplant other major languages, while not noticing that they are ignoring the decades-long path those languages took toward robustness and security.
I ran both of the exploits[0][1] on Chrome (89.0.4389.114) on Windows and got "Aw, Snap! Access violation" on both.
I then straced the tab processes on Debian (as best as I could guess which Chrome child processes were tabs, via top activity as I loaded stuff in them) and ran the exploit. Nothing seemed unusual; all the system calls just stopped after the exploit was run, and the tab crashed. I guess it would be good to attach gdb to the actual JS VM process and stepi through the instructions, but I don't know how to find the process that runs the JS VM for a Chrome tab.
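For what it's worth, on Linux the processes hosting V8 for tabs are the Chrome children launched with `--type=renderer`, so something like this narrows down candidates to attach to (mapping a PID to a specific tab still takes guessing, or Chrome's own task manager via Shift+Esc):

```shell
# List Chrome/Chromium renderer processes; each tab (or group of
# same-site tabs) gets one, and that's where the page's JS runs.
pgrep -af -- '--type=renderer' || echo "no renderer processes found"
# Then attach a debugger to a candidate PID, e.g.:
#   gdb -p <pid>
```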
I forked the repo so I could easily run it from a GitHub pages site. I don't know what it does but I don't think it does anything:
I converted it to binary with a pack("L", ...) loop and loaded it into Ghidra as a raw x86_64 binary. I don't fully understand it, but I think it's searching the address space for some kind of function entry point and then calling it with "calc.exe".
The program is shellcode, so the first thing it does is set itself up with the state it needs. This includes clearing the direction flag, aligning the stack to 16 bytes, and figuring out where it is in memory (the call to 0xca both sticks an address on the stack and avoids some code described below). It then calls into a little subroutine (at 0xa) to read through the PEB → LoaderData → InMemoryOrderModuleList. The first entry is the entry for the main binary, from which it grabs the "FullDllName", does some sort of trivial hash on it, and compares it to a known value (presumably a check to see that it's running in a sane environment?) Then it largely skips the next entry (ntdll.dll) and goes to kernel32.dll, where it looks into the exports table to find what is presumably CreateProcessA (this is done using a similar hashing scheme, so the string is not directly present in the shellcode). There's a "calc.exe" string at the end of the code (perhaps you can spot it) and with that it has enough to pop a calc.
I guess the newline that echo without -n would add doesn't matter that much.
OTOH -e, depending on your shell, is either necessary (e.g. bash), or unnecessary but harmless (e.g. zsh) or insufficient (e.g. dash). If you want to print binary stuff portably, you need to use printf(1).
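Concretely, octal escapes in printf's format string are specified by POSIX, while `echo -e` and `\xHH` hex escapes are not:

```shell
# 'H' is octal 110, 'i' is octal 151; both lines print "Hi" portably.
printf '\110\151\n'
printf '%b\n' '\0110\0151'   # %b interprets \0ddd escapes in an argument
```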
Here is an article with a lot more detail (high level, not technical) about this... on a random website none of us have ever heard of and so which might be a phishing attack for people like me who immediately went to Google to find more information (and with a few "shockers" in--such as Chrome and Edge not having patched it yet, despite the patch dropping a week ago--to maybe get some viral sharing, such as what I am doing now ;P).
The screenshot suggests that all you need to do for the exploit to work is to open the html/js file, but doing this on windows just results in "Aw, Snap!" page. Others in this thread have confirmed this.
The article says the exploit code released to GitHub doesn't include a sandbox escape, so that seems about like what I would expect: the renderer crashed. (The screenshot is from the researcher, who either has a more complete implementation with another maybe-undisclosed bug or has simply disabled some sandbox restrictions to help test the exploit in isolation or something.)
To add some further context that isn't written here: the exploit developer has been tinkering around with v8 for a while. Last year he published a vulnerable VM to HackTheBox (a CTF platform) called RopeTwo[1], where the initial entry point looked extremely similar to this. It's largely regarded as one of the most difficult challenges to solve to date.
For those looking to learn the basics of v8 exploitation, there have been a handful of CTF problems on it, including the following recent ones (many with writeups!):
- PicoCTF 2021: Kit Engine, Download Horsepower and Turboflan
Using WASM is the method du jour for turning an arbitrary read/write/object leak into RCE. Without WASM, the exploit writer would have to use ROP/etc., but the primitives are still there.
Check "I am an advanced user (required reading)", enter advanced-settings.html, last line is where you define your own uBlock scriptlets. Then in My Filters define
*##+js(myscript)
this will inject into every(1) executable context of every page. You still need to hotpatch workers/sharedworkers/webworkers(I disable the last one entirely).
This way you get some of the control over Chrome browser back. Not so fun fact - Google is very against users having the ability to execute arbitrary user defined code, afair Gorhill had a problem with google concerning injectable scriptlets.
(1) Of course there are issues; you can't inject into `<iframe src="data:...">` (https://github.com/whatwg/html/issues/1753) in Chrome :( so you would have to manipulate CSP to disable those.
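For reference, a user-resources entry roughly looks like the following (I'm assuming the format of uBO's resources.txt convention: a `name mime-type` header line followed by the code, entries separated by blank lines; `myscript` is just a placeholder name matching the `+js(myscript)` filter above, and the stubbed-out call is purely illustrative):

```
myscript.js application/javascript
(function() {
    'use strict';
    // runs before page scripts in each injectable context;
    // hotpatch whatever you want here, e.g. stub out an API
    console.log('scriptlet injected');
})();
```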
I have uBlock disable all javascript and make permanent and temporary exceptions when I need to. It's really made browsing the web cumbersome. The more I'm aware of new 0-days the more I appreciate the tar I've laid before myself.
ublock is already executing in context of every loaded page, this is just one more line of js. The only website I use that could benefit from webassembly is twitch, but thankfully they detect webassembly capability and their fallback js webworker is working just fine.
"Particularly safe" is a bit alarmist; one could make the same argument that literally anything connected to a public network isn't particularly safe.
It isn't particularly safe compared to having JS disabled. RCE exploits, stealthy CSRFs, etc. that work with JS disabled are exceedingly rare compared to their JS counterparts.
Yeah, today's browsers remind me of Flash and Acrobat: they add features faster than they can fix security bugs. And those features are on by default for everyone to exploit. Yeah, I know the comparison is hyperbole.
By that logic I could go ahead and say: disable the internet if you want to be particularly safe.
I mean there are CPU level zero days.
But on a serious note, how would one protect oneself from apps other than the browser that talk to the internet?
I mean we can't ask regular users to learn and turn off stuff.
What I do, while using OpenBSD, is limit which users access which apps, dividing my activities by user according to risk level and which apps a given user uses and sites that user browses to (some user accounts I use regularly do not use a browser). Also obsd has pledge and unveil built in, which are kernel API calls they put in apps which limit which syscalls and directories an app can access. Those combined give me some increase of confidence.
(Edit: On Debian I could do this with multiple simultaneous X sessions, moving data between them via a shared text file. On obsd, one could use SSH and some scripts, so they can share a desktop if/when desired.)
Maybe there is a maturity and effort level continuum, where we can help people along as appropriate, per their desires, interest, and situation.
I forgot to mention: some of this can be done on Linux also, but another nice thing about openbsd is its ~ "only 2 remote holes in the default install", since ~ 1996. I've had to learn some things to use it effectively, but everything has tradeoffs.
Kind of a dishonest characterization - Flash was sandboxed too. WASM is simply easier to sandbox and has additional constraints that make it easier to keep code from trying to break out.
The parent was talking about how the process the WASM runtime is in is sandboxed. I could be wrong, but I don't think flash did this. I think the sandbox was more of just some limitations on the APIs you were provided.
Most modern browsers started running plugins (including Flash) in a sandbox long before WASM or asm.js existed. Firefox's was named 'plugin-container' and for a while its multiprocess support actually piggybacked on that process to do non-plugin things.
As with everything, it is only as secure as its implementation. I do take some small pleasure in this though, having had to listen to so many people saying how much Flash had to die and WASM totally wasn't going to cause similar problems.
The fix exists in the v8 source, but hasn't reached normal Chrome yet. Unless an org is compiling Chromium from scratch, there isn't an actual useful fix available.
Do not casually browse the web with JavaScript enabled. The idea of trustless secure computing is compelling but it’s ultimately not reality. There is a new browser engine rce vulnerability on a regular basis, whether it’s chrome or mobilesafari
You don’t have to use your notepad. You can just disable JavaScript. Vast majority of text content sites work fine. You always have the option to use JavaScript for sites you explicitly trust.
Chromium (or iridium), while having other downsides, makes it somewhat convenient in the settings to specify per site whether JS, cookies, or images are allowed or blocked, and also lets one leave separate config tabs open to toggle images, javascript, and cookies on quickly when needed, browsing with them off the rest of the time.
Not really. While the bug technically exists on Node, generally people do not use Node to run untrusted JS/Wasm code. Node isn't designed to be a sandbox.
Yes but unlike the browser, Node.js modules all run in a trusted environment anyway (you should never install modules you don't trust) because there are many ways that a malicious module could hijack the main process to read memory or user input so this vulnerability adds nothing.
"It depends". But potentially, yes. Depending on the basis of the flaw. It could potentially affect Electron apps too, as they contain both chromium and node - though the content that can be thrown at their internal instances is likely a lot less varied than that seen in the wider internet that your desktop web browser is routinely exposed to.
It's a V8 exploit, so Edge (as with all other Chromium-based browsers) is presumably vulnerable, although I do not know if the PoC above works on Edge or not.
The dependency of a whole industry on Chrome, and the fact that an 18-year-old can create a WASM exploit for it with ease (which just might be the tip of the iceberg), is really, really scary.
Not really. I was much more focused and arguably a better coder at 18, albeit a somewhat less knowledgeable one.
At 26 I have twice the experience but only a third of the motivation and focus. When you're younger you do stuff just to find out whether you can; when you're older you already know you could, and decide not to.
It's exceedingly rare that I'll find something interesting enough to keep me coding for 15 hours straight; at 18 that was the norm on pretty much every free day, of which there are a lot more when you're younger and not already working as a software dev.
I'm just curious about thoughts on sharing these on Twitter. Shouldn't they be shared directly with Google? Since it's on the web, people can create exploits pretty quickly.
It is so irresponsible to disclose vulnerabilities this way. There is a process that many people have worked hard to create whereby vulnerabilities can be disclosed, patched (you can even be rewarded!), and both fix and bug are eventually made public in due time.