
Isn't this already possible with -XX:+UseEpsilonGC? (a no-op collector: it allocates but never reclaims) And then you manage memory manually in big (primitive) arrays.
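For anyone trying this: Epsilon ships behind an experimental flag, and "manual memory" then means carving allocations out of arrays you created up front. A minimal sketch, assuming that approach (the Arena class and its names are mine, not any library API):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx1g Main

    // Bump allocator over one big primitive array. The slab itself is the
    // only heap allocation; since Epsilon never reclaims, nothing else in
    // the program should allocate after startup.
    final class Arena {
        private final long[] slab;
        private int top = 0;                 // next free index

        Arena(int capacityLongs) {
            slab = new long[capacityLongs];  // one up-front allocation
        }

        // Returns the base index of a block of n longs, or -1 if full.
        int alloc(int n) {
            if (top + n > slab.length) return -1;
            int base = top;
            top += n;
            return base;
        }

        long get(int i)         { return slab[i]; }
        void set(int i, long v) { slab[i] = v; }
        void reset()            { top = 0; }   // frees everything at once
    }

If the heap fills up, a JVM running Epsilon exits with an OutOfMemoryError instead of collecting, which is exactly the failure mode the next comment mentions.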



Beat me to this answer ;) - I presume you can still allocate objects, but you have to keep those allocations in check to prevent the JVM from terminating when it runs out of memory.

Maybe object pooling (which helped performance in old JVMs in the 90s) will make a comeback? ;)
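In case anyone hasn't seen one, the 90s-style pool is tiny. A sketch (the Pool class and its names are made up for illustration):

    import java.util.ArrayDeque;
    import java.util.function.Supplier;

    // Minimal object pool: recycled objects go back on the free list
    // instead of being left for the GC.
    final class Pool<T> {
        private final ArrayDeque<T> free = new ArrayDeque<>();
        private final Supplier<T> factory;

        Pool(Supplier<T> factory) { this.factory = factory; }

        T borrow()        { T t = free.poll(); return t != null ? t : factory.get(); }
        void release(T t) { free.push(t); }   // caller must reset the object's state
    }

Usage would be e.g. new Pool<>(StringBuilder::new), borrowing and releasing around each use.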


Object pooling, mutable objects, flyweight encoding[0], and being allocation-free in the steady state are all alive and well in latency-sensitive areas like financial trading, plenty of which is written in Java.
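For readers unfamiliar with flyweights in this context: the SBE codecs linked above work roughly like the hand-rolled sketch below, where one long-lived view object is re-pointed at each message so decoding allocates nothing. The field layout here is invented for illustration, not SBE's actual wire format:

    import java.nio.ByteBuffer;

    // Flyweight decoder: wrap() re-points the same view at each message
    // in the buffer, so the steady state allocates zero objects.
    final class OrderFlyweight {
        private ByteBuffer buf;
        private int offset;

        OrderFlyweight wrap(ByteBuffer buf, int offset) {
            this.buf = buf;
            this.offset = offset;
            return this;
        }

        long price() { return buf.getLong(offset); }      // hypothetical layout
        int qty()    { return buf.getInt(offset + 8); }
    }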

[0] https://github.com/real-logic/simple-binary-encoding/wiki/De...


I'm going to try to prevent heap allocation at runtime. But since I'm going to use javac, it's going to be ugly.

Basically, only static atomic arrays (int/float) in classes will be allowed, for cache friendliness and parallelism, and AoS entries of up to 64 bytes (one cache line) are encouraged to avoid false sharing between cores.
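Concretely, the 64-byte idea amounts to giving each thread its own cache line inside one static array. A sketch of that technique (the stride and worker count are my assumptions, not part of the plan above):

    // Pad each worker's slot to 64 bytes (a typical cache line) so two
    // threads never write into the same line and invalidate each other.
    final class Counters {
        static final int STRIDE = 16;                     // 16 ints = 64 bytes
        static final int[] slots = new int[8 * STRIDE];   // 8 workers, hypothetical

        static void bump(int worker) {
            slots[worker * STRIDE]++;   // each worker's writes stay on its own line
        }
    }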

And I'm even considering dropping float and only having integer fixed point... but then I'll need to convert those in shaders, as GPUs are hardwired for float.
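The fixed-point part could be as simple as Q16.16 over plain int, converting to float only at the GPU boundary. A sketch (the 16-bit fractional split is an assumed format, not a decision from the comment above):

    // Q16.16 fixed point: 16 integer bits, 16 fractional bits, stored in int.
    final class Fix {
        static final int ONE = 1 << 16;

        static int fromInt(int x)    { return x << 16; }
        static int mul(int a, int b) { return (int) (((long) a * b) >> 16); }
        static int div(int a, int b) { return (int) (((long) a << 16) / b); }
        static float toFloat(int x)  { return x / 65536.0f; }  // e.g. when handing off to a shader
    }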


Not to be too negative here, but... why? You'd be creating something syntax-compatible with Java where you couldn't use any existing Java code and would lose most of the interesting features of Java (even string concatenation allocates: javac compiles it to a StringBuilder chain, or on modern JDKs an invokedynamic call, on your behalf). Aren't you just reinventing a worse C at that point?
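To make the concatenation point concrete, here's the classic translation javac performs (the pre-JDK-9 form; newer javac emits invokedynamic via StringConcatFactory instead, but the result still allocates at runtime):

    class ConcatDemo {
        String greet(String name) {
            return "Hello, " + name;          // source as written
        }

        // Roughly what the compiled bytecode of greet() does:
        String greetDesugared(String name) {
            return new StringBuilder()
                    .append("Hello, ")
                    .append(name)
                    .toString();              // fresh builder + String on the heap
        }
    }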


See my other two responses.

I'm on mobile here; I'd add references if I were on a PC.


What would be the point of that thing's relationship with Java, though? Is it somehow important that the bytecode, run by a tiny VM that is apparently distributed alongside it, could also run on a proper JVM?


Javac: I don't want to write a compiler.

The bytecode is only meant to run in my VM.



