
No offenses. :)

At first glance, FLTK does not look much better than the old Windows widgets.

In addition, I strongly want Fresh IDE to be portable to MenuetOS, KolibriOS and other assembly-written OSes. As a rule, they are all written in FASM, and a good IDE that can be ported in days (not years) would be a great tool for the OS developers.

That is why I started the development of a special GUI toolkit.




RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear[1] (cross-platform ANSI C89) but it might save you some time.

I don't have much HLL asm/demoscene experience personally so I'm not sure what's "impressive" as engineering feats these days but this looks cool. As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level with a decent layer of POSIX compatibility and the ability to run AVX512 instructions, I like the idea that tools like this are out there. Cheers, mate

[1] https://github.com/vurtun/nuklear


> RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear (cross-platform ANSI C89) but it might save you some time.

The big problem with using "gcc -S" is that the result is still an HLL program, simply written out as an assembly language listing.

Humans write assembly code very differently than HLL code. Even translated to asm notation, this difference will persist. An asm programmer will choose different algorithms, different data structures, a different architecture for the program.

Actually, this is why on real-world tasks, regardless of how good the compiler is, an assembly programmer will always write a faster program than an HLL programmer.

Another effect is that in most cases, a deeply optimized asm program is still more readable and maintainable than a deeply optimized HLL program.

In this regard, some early optimization in assembly programming is acceptable and even good for code quality.


> As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level

That's an interesting pile of keywords you've got there.

I don't know about Smalltalk (I find Squeak, Pharo, etc utterly incomprehensible - I have no idea what to do with them), but for some time I've been fascinated with the idea of a fundamentally mutable and even self-modifying environment. My favorite optimization would be that, in the case of tight loops with tons of if()s and other types of conditional logic, the language could JIT-_rearrange_ the code to nop the if()s and other logic just before the tight loop was entered - or even better, gather up the parts of code that will be executed and dump all of it somewhere contiguous.
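FWIW, the nop-the-if()s idea can be sketched in plain JavaScript by generating a specialized hot loop with `new Function` (the names `makeKernel` and its flags are invented for illustration; a real JIT would do this on machine code, not source):

```javascript
// Hypothetical sketch: "rearrange then run" in plain JavaScript.
// The point is that the generated hot loop contains none of the
// conditional logic - each if() is decided once, up front.
function makeKernel(flags) {
  let body = "let sum = 0;\n" +
             "for (let i = 0; i < data.length; i++) {\n";
  body += flags.square ? "  sum += data[i] * data[i];\n"
                       : "  sum += data[i];\n";
  if (flags.clampNegative) body += "  if (sum < 0) sum = 0;\n";
  body += "}\nreturn sum;";
  return new Function("data", body); // compile the specialized loop
}

const kernel = makeKernel({ square: true, clampNegative: false });
console.log(kernel([1, 2, 3])); // 14
```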

C compilers could probably be made to do this too, but that would break things like W^X and also squarely violate lots of expectations as well.


This is sort of implemented in various forms.

For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].

At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].

At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does it there[4].

At the processor level, you have branch-prediction heuristics as a critical part of any pipeline[1]. (I think modern Intel processors as of the Haswell era assign each control-flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. Don't quote me on that.)
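The textbook scheme, for what it's worth, is a 2-bit saturating counter per branch. Here's a toy JavaScript simulation of that classic predictor (real Intel predictors are far more elaborate; this is only the teaching version):

```javascript
// Toy 2-bit saturating-counter predictor: states 0..3, predict
// "taken" when the counter is >= 2. Classic textbook scheme only,
// not what current hardware actually implements.
function makePredictor() {
  const table = new Map(); // branch address -> 2-bit counter
  return {
    predict(pc) { return (table.get(pc) ?? 1) >= 2; },
    update(pc, taken) {
      const c = table.get(pc) ?? 1;
      table.set(pc, taken ? Math.min(3, c + 1) : Math.max(0, c - 1));
    },
  };
}

// A loop branch taken 9 times, then falling through on exit: the
// predictor misses only the first iteration and the loop exit.
const p = makePredictor();
let correct = 0;
for (let i = 0; i < 10; i++) {
  const taken = i < 9;
  if (p.predict(0x400123) === taken) correct++;
  p.update(0x400123, taken);
}
console.log(correct); // 8
```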

RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom. When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved). If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #method_missing to create your own DSLs". This lends a lot of flexibility to the language (at the expense of performance and typing guarantees). In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in the fashion you want.
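A rough JavaScript analogue of that method_missing trick, using a Proxy get trap (the findBy* names are invented here, not a real API):

```javascript
// Any findByXxx "method" is synthesized on first access, Ruby
// method_missing style - the methods never exist until asked for.
const finder = new Proxy({}, {
  get: (_, name) => {
    if (typeof name !== "string") return undefined;
    const m = /^findBy(\w+)$/.exec(name);
    if (!m) return undefined;
    const field = m[1].toLowerCase();
    return (records, value) => records.filter(r => r[field] === value);
  },
});

const people = [{ name: "ada" }, { name: "alan" }];
console.log(finder.findByName(people, "ada")); // one record: ada
```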

Imagine an environment[5] that has all of that intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.

=====

[1] See ISL-TAGE from CBP3 and other, more modern results from the "Championship Branch Prediction" contest (if it's still being run).

[2] https://stackoverflow.com/a/8874314 Here's how it's done with the CLR. The JVM is crazy good so I'd imagine the analogue exists there as well.

[3] https://en.wikipedia.org/wiki/Polymorphic_code

[4] http://wiki.c2.com/?TheKenThompsonHack

[5] Use some micro-kernel OS architecture so process $foo can't alter $critical-driver-talking-to-SATA-devices or modify malloc. I'd probably co-opt QNX's Neutrino design since it's tried and true. Plus that sort of architecture has the design benefit of intrinsically safe high availability integrated into the network stack.


> This is sort of implemented in various forms.

> For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].

You mean Dynamic Code Evolution?

Regarding [2], branch prediction hinting being unnecessary (as well as statically storing n.length in `for (...; n.length; ...)`) is very neat. I like that. :D

> At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].

Right. The only problem is people's expectation for C to remain static. Early implementations of such a system may cause glitches due to these expectations being shattered, and result in people a) thinking it won't work or b) thinking the implementation is incompetent. I strongly suspect that the collective masses would probably refuse to use it citing "it's not Rust, it's not safe." Hmph.

> At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does it there[4].

That c2 article very strongly reminded me of https://www.teamten.com/lawrence/writings/coding-machines/ - particularly how theoretical the idea is.

For example,

> And it is "almost" impossible to detect because TheKenThompsonHack easily propagates into the binaries of all the inspectors, debuggers, disassemblers, and dumpers a programmer would use to try to detect it. And defeats them. Unless you're coding in binary, or you're using tools compiled before the KTH was installed, you simply have no access to an uncompromised tool.

...Nn..n-no, I don't quite think it can actually work in practice like that. What Coding Machines made me realize was that for such an attack to be possible, the hack would need to have local intelligence.

> There are no C compilers out there that don't use yacc and lex. But again, the really frightening thing is via linkers and below this hack can propagate transparently across languages and language generations. In the case of cross compilers it can leap across whole architectures. It may be that the paranoiac rapacity of the hack is the reason KT didn't put any finer point on such implications in his speech ...

Again, with the intelligence thing. The amount of logic needed to be able to dance around like that would be REALLY, REALLY HARD to hide.

Reflections on Trusting Trust didn't provide concrete code to alter /usr/bin/cc or /bin/login, only abstract theory, discussion and philosophy. It would have been interesting to be able to observe how the code was written.

I don't think it's possible to make a program that can truly propagate to the extent that it can traverse hardware and even (in the case of Coding Machines) affect routers, etc.

> At the processor level, you have branch-prediction heuristics as a critical part of any pipeline[1]. (I think modern Intel processors as of the Haswell era assign each control-flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. Don't quote me on that.)

Oh ok.

> RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom.

Okay, I just clicked my way through to get the ISO and MSI (must say the way the site offers the downloads is very nice). Haven't tested whether Wine likes them yet, hopefully it does.

> When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved).

Right.

> If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #method_missing to create your own DSLs".

Ruby is (heh) also on my todo list, but I did recently play with the new JavaScript Proxy object, which basically makes it easy to do things like

  var curcfg = {}, defcfg = { ... };
  var cfg = new Proxy({}, {
    get: (_, key) => {
      return (key in curcfg) ? curcfg[key] : defcfg[key];
    },
    set: (_, key, val) => {
      curcfg[key] = val;
      return true;
    }
  });
implementing default parameters, overlays, etc.
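For what it's worth, a self-contained variant of that overlay idea (the host/port keys are invented for illustration):

```javascript
// Reads fall back to the defaults until a key is explicitly set;
// writes land in the overlay, leaving the defaults untouched.
const defcfg = { host: "localhost", port: 8080 };
const curcfg = {};
const cfg = new Proxy({}, {
  get: (_, key) => (key in curcfg ? curcfg[key] : defcfg[key]),
  set: (_, key, val) => { curcfg[key] = val; return true; },
});

console.log(cfg.port); // 8080, the default
cfg.port = 9000;
console.log(cfg.port); // 9000, the overlay
console.log(cfg.host); // still "localhost"
```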

> This lends a lot of flexibility to the language (at the expense of performance and typing guarantees).

Mmm. More work for JITs...

> In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.

Very interesting, particularly fault recovery.

> Imagine an environment[5] that has all of that intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.

oooo :)

Okay, okay, I'll be looking at Cincom ST pretty soon, heh.

FWIW, while Smalltalk is a bit over my head (it's mostly the semantic-browser UI, which is specifically what completely throws me), I strongly resonate with a lot of the ideas in it, particularly message passing, about which I have some Big Ideas™ I hope to play with at some point. I keep QNX 4.5 and 6.5.0 (the ones with Photon!) running in QEMU and VNC to them when I'm bored.

Oh, also - searching for DCE found me Dynamic Code Evolution, a fork of the HotSpot VM that allows runtime code re-evaluation - i.e. live reload without a JVM restart. If only that were mainstream and open source. It's awesome.



