Interesting project. Is the main value to "self-host your own ngrok", or to actually compete with ngrok as an open-source project? If so, how do you intend to monetize it?
About monetization, that is definitely very important. For now, and I'm not sure for how long (it could be days or weeks), I am offering API keys at no charge, because I want feedback from real users. But obviously that isn't sustainable in the long term: servers cost money, and monetization also helps keep the project alive.
At a second stage, I am planning to offer monthly or yearly plans with different limits, or unlimited as well. But I am also planning to offer pay-per-use with short-lived API keys, targeting agentic workflows for example, or people who don't really want to pay a monthly fee but would instead pay just cents for a short-lived tunnel.
For the monthly plans, I am planning to keep the cost as low as possible: just enough to maintain the infrastructure and development, plus some overhead.
The code style seems to be targeting some old version of PHP... but I'm absolutely sure typed properties require a newer version of PHP than existed in 2008: they were only introduced in PHP 7.4, in 2019. (If I recall correctly, 2008 was around the days of PHP 5.2 or so.)
Let me explain why I feel emotional about this. Humans have already proven how much harm can be done via online harassment. This seems to be the first documented case (that I am aware of) of online harassment orchestrated and executed by AI.
Automated and personalized harassment seems pretty terrifying to me.
This game gave me a real-life déjà vu. A few months ago, three friends and I spent a long weekend trying to build a Game Boy emulator from scratch in Rust. None of us had ever worked on emulators before; we basically gave ourselves three days to read the docs, figure things out, and ship something. It was chaotic but also educational and an absolute blast. I'd encourage anyone who wants to learn a bit more about simple computers and assembly to try it! If anyone's curious about what came out of it: https://github.com/chalune-dev/gameboy
This isn't a direct answer to your question, because I am not OP and I do not know what docs they read, but there is a book called "Game Boy Coding Adventure: Learn Assembly and Master the Original 8-Bit Handheld" that came out last year.
Awesome, I've been getting more into messing with the nuts and bolts of my childhood Game Boy Color. One project I eventually want to do is to recreate it with modern hardware, then take something similar to GB Studio and embed it into the hardware, so I can read cartridges straight on a custom-built clone. I've seen some impressive clones already, like the FPGBC, but I would love to build my own. It's a slow-burn project, but I am also fascinated by emulators for the platform.
I haven't played the game so I can't answer for sure, but my guess is: if you are writing an emulator throughout the game, it's very likely you are guided to write one using OOP.
That is correct. The emulator is implemented in JavaScript using OOP, and the tests that the game runs to validate your progress have certain expectations about what you export and what methods are available.
> My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily
Then why are half of the big tech companies using Microsoft Teams and sending emails with .docx files embedded in them?
Of course marketing matters.
And of course the hard facts also matter, and I don't think anybody is saying that AI agents are purely marketing hype. But regardless, it is still interesting to take a step back and observe what marketing pressures we are subject to.
> * Programmers resistance to AI assisted programming has lowered considerably. Even if LLMs make mistakes, the ability of LLMs to deliver useful code and hints improved to the point most skeptics started to use LLMs anyway: now the return on the investment is acceptable for many more folks.
Could not agree more. I myself started 2025 very skeptical, and finished it very convinced of the usefulness of LLMs for programming. I have also seen multiple colleagues and friends go through the same shift in appreciation.
I have noticed that for certain tasks, our productivity can be multiplied by 2 to 4. Hence my doubts: are there going to be too many developers and software engineers? What will happen to the rest of us?
I assume that fields other than software should also benefit from the same productivity boosts. I wonder if our society is ready to accept that people should work less. I think the more likely continuation is that companies will either hire fewer people or lay more off, instead of accepting to pay the same for fewer hours of human work.
I don't think that will happen, because it hasn't for other technological improvements. In the end people pay for "good enough", and that's that. If "good enough" is now cheaper to implement, that's all they will do. I've seen it in other technologies: for example, many manufacturers have used more precise manufacturing to cheapen things like cars and electronics, building them just well enough to mostly outlast the warranty; in the old days they had to "overbuild" to reach that point, putting more quality into the product.
Quality is a risk-mitigation strategy; if software is disposable, just like cheap manufactured goods, most people won't pay for it, thinking they can just "build another one". What we don't realise is that, due to the sheer cost of building software, we've wanted quality because it's too expensive to fix later; AI could change that.
The hope that we'll invest in quality, in more software (which has a mostly price-inelastic demand curve due to scale/high ROI), etc., is, I'm starting to think, just false hope from people in the tech industry who want to be optimistic, which is generally in our nature. Tech people understand very little about economics most of the time, and about how people outside tech (your customers) generally operate. My conclusion is mostly that I need to pivot out of software; it will be commoditized.
Yes, I certainly agree. A few days ago there was a blog post here claiming that formal verification will become much more widely used with AI, the author arguing that AI will help us past the difficulty barrier of writing formal proofs.
I'm not sure it will scale to fields other than coding and math. The RLVR approach makes it more amenable to STEM fields in general, and most jobs, believe it or not, aren't that. The wealth of open-source software with good test suites effectively gave them all the training material they needed; most professions won't provide that, knowing they would be giving their moat away. From my understanding, LLMs applied to other fields still exhibit the same hallucination rates, only mildly improved, especially where there isn't public internet material for that field.
We have to accept in the end that coding/SWE is one of the fields most disrupted by this breed of AI. Disruption unfortunately probably means fewer jobs overall. The profession is on track to disrupt and automate itself, I think; plan accordingly. I've seen so many articles claiming it's great that we don't need to learn to code now; that's what the AIs have done.
I think it's very likely the next iPhone will have some form of authenticity proof too; I just hope Apple doesn't go with its own standard again, incompatible with everything else.
Samsung were also the ones who demonstrated a fatal flaw in C2PA: device manufacturers are explicitly trusted in implementation.
C2PA requires trusting that manufacturers are not materially modifying the scene, e.g. by using convolutional neural networks to detect objects and add/remove details[1].
That's tricky, because it needs to store and verify metadata that the user cannot edit and that allows one to distinguish a "normal" photo from a professional photograph of a photo. The only place this could come from is the camera settings, but these are limited on smartphones and it's not easy to discern the two cases. I'm sure someone would print a 10x10 metre fake image, put it at just the right distance, and wait for the best indirect light to prove that the Yeti exists.
Just include a depth sensor, lidar, etc. I'm sure over time that will become increasingly easy to defeat too, but then we can just keep improving the sensors too.
This is great! Congratulations. I really like your project, especially how easy it is to peek into.
Do you plan on moving forward with this project? I understand that all the training is done on the CPU, and that you have next steps planned for optimizing that. Are you considering GPU acceleration?
Also, do you have any benchmarks on known hardware? E.g., how long would it take to train on a latest-gen MacBook, or on your own computer?
Note that the diagram of a Mutex is not how a Mutex works today, at least on reasonable platforms.
First, Mara changed it to be something much less silly: on Linux and similar it's a Futex, while on Windows it was an SRWLock. However, more recently (last year, IIRC) the Windows Mutex also became basically a Futex, albeit Microsoft doesn't call their analogous feature a Futex.
In either case, this futex-based design means there's no "inner" pointer and nothing for it to point to; instead there's some sort of atomic integer type, and when we're contended we go to sleep waiting on that integer via futex or a similar OS feature.
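To make the "just an atomic integer" point concrete, here's a minimal sketch (names are mine, not std's; the spin loop stands in for the futex_wait/futex_wake calls a real implementation makes, and real std mutexes also track a third "locked with waiters" state):

```rust
use std::hint;
use std::sync::atomic::{AtomicU32, Ordering};

/// Minimal futex-style mutex: the entire lock is one atomic u32,
/// 0 = unlocked, 1 = locked. No heap allocation, no inner pointer.
pub struct RawMutex {
    state: AtomicU32,
}

impl RawMutex {
    pub const fn new() -> Self {
        RawMutex { state: AtomicU32::new(0) }
    }

    pub fn lock(&self) {
        // Fast path: an uncontended acquire is a single CAS.
        while self
            .state
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            // Contended path: a real implementation would call
            // futex_wait (Linux) or WaitOnAddress (Windows) to sleep
            // until woken; we just spin to keep the sketch portable.
            hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        // A real implementation would futex_wake one waiter here
        // when the state said there were sleepers.
        self.state.store(0, Ordering::Release);
    }
}

fn main() {
    let m = RawMutex::new();
    m.lock();
    assert_eq!(m.state.load(Ordering::Relaxed), 1);
    m.unlock();
    assert_eq!(m.state.load(Ordering::Relaxed), 0);
    println!("lock state lives entirely in the atomic integer");
}
```

The whole diagram collapses into that one integer; the OS wait queue keyed on its address replaces any pointer-to-inner-state.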
Edited to add: Also, the niche trick in the bottom right is somewhat broader than this suggests. If the type doesn't need every bit pattern, then Rust may, and in some cases is guaranteed to, use a niche it sees for this memory optimisation.
Option<&T> is the same size as &T but also Option<NonZeroU16> is the same size as u16, Option<OwnedFd> is the same size as OwnedFd (and thus same size as a C integer by definition), Option<Ordering> is the same size as the Ordering (either of them) and Option<bool> is of course the same size as a bool.
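All of these are easy to check with std::mem::size_of; a quick sketch (the cfg(unix) block covers OwnedFd, which only exists on Unix):

```rust
use std::mem::size_of;
use std::num::NonZeroU16;

fn main() {
    // &T can never be null, so None is encoded as the all-zeroes pattern.
    assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());
    // NonZeroU16 leaves the zero pattern free for None (guaranteed).
    assert_eq!(size_of::<Option<NonZeroU16>>(), size_of::<u16>());
    // cmp::Ordering has only three variants, so plenty of spare patterns.
    assert_eq!(
        size_of::<Option<std::cmp::Ordering>>(),
        size_of::<std::cmp::Ordering>()
    );
    // bool only uses the patterns 0 and 1; None fits in one of the other 254.
    assert_eq!(size_of::<Option<bool>>(), size_of::<bool>());
    // OwnedFd: -1 is never a valid file descriptor, so it's the niche.
    #[cfg(unix)]
    assert_eq!(
        size_of::<Option<std::os::fd::OwnedFd>>(),
        size_of::<std::os::fd::OwnedFd>()
    );
    println!("all niche optimisations hold");
}
```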