johnfound's comments

What is wrong if the debugging of an experimental branch gets pushed to the repository? Someone might find that code useful for something else later. And after all, the experiments are part of the project history, which Fossil strives to keep unchanged, as it actually happened.

The same goes for rebase: the difference is that while Git keeps the history "as the developers want it to be", Fossil keeps it "as it actually happened".


Fossil is chauvinistic if it asserts that everything I save locally "is part of the project history" and must be pushed. Don't tell me what's important for me to push.

Git preserves history as it happened on the remote, which is what matters for collaboration. Why foul up the canonical history with what amounts to scratch paper? Typically, people only amend local history if they feel it would be easier to follow later. Why take that option out of their hands?

The way I do work and the way I submit it are two separate problems with overlap. I use git for both and it works beautifully. Even for personal non-code projects I never intend to collaborate on with others, I still use git because it promotes a workflow I'm comfortable with.

On the team I'm on, the convention is to include code changes in the same commit as any test changes. There are advantages and disadvantages, but it's the convention, and it's best we follow it or else it might lead to CI problems. But in my personal workflow, I tend to change the tests, commit, change the code, commit, and then iterate. With git, this is as simple as doing it whichever way I want and then squashing the commits together before pushing. What do I do with Fossil?

I'm trying to figure out what problem Fossil is trying to solve with this, and it just comes off as hollowly idealistic.


According to Wiktionary[1] it is "rhyming slang" from:

    apples and spice = nice
[1] https://en.wiktionary.org/wiki/she%27ll_be_apples


Well, as I said several posts above, I am working on a new, portable, assembly language GUI toolkit to be used for the v3.x series of Fresh IDE.

Unfortunately it will add another 50..100 KB to the code, but portability has its price. :(


I didn't mean it as a criticism of you or your work, and Fresh certainly looks decent; it's more that I found the name ironic given the "classic" look of the GUI. That was my first comment, at least; the second one was commenting on how many low-level tools seem to look dated, and that maybe it's the norm. Again, I didn't really mean it as criticism, even though I suppose it sounds like it. :/

Could you instead call into an existing portable UI library? Perhaps FLTK, or something more lightweight than the popular ones. It just seems to me that creating a new GUI toolkit from scratch is a massive undertaking, and I'm unsure what value you will get over spending the time improving Fresh itself. I guess there's some appeal to having a self-contained assembly system through and through.


No offense taken. :)

First, FLTK does not look much better than the old Windows widgets.

In addition, I strongly want Fresh IDE to be portable to MenuetOS, KolibriOS and other OSes written in assembly. As a rule, they are all written with FASM, and a good IDE that can be ported in days (not years) can be a great tool for the OS developers.

That is why I started developing a special GUI toolkit.


RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear[1] (cross-platform ANSI C89) but it might save you some time.

I don't have much HLL asm/demoscene experience personally, so I'm not sure what counts as "impressive" engineering these days, but this looks cool. As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level with a decent layer of POSIX compatibility and the ability to run AVX512 instructions, I like the idea that tools like this are out there. Cheers, mate

[1] https://github.com/vurtun/nuklear


> RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear (cross-platform ANSI C89) but it might save you some time.

The big problem with using "gcc -S" is that the result is an HLL program, simply written out as an assembly language listing.

Humans write assembly code very differently than HLL code. Even translated to asm notation, this difference persists. An asm programmer will choose different algorithms, different data structures, a different architecture for the program.

Actually, this is why on real-world tasks, regardless of how good the compiler is, an assembly programmer will always write a faster program than an HLL programmer.

Another effect is that in most cases a deeply optimized asm program is still more readable and maintainable than a deeply optimized HLL program.

In this regard, some early optimization in assembly programming is acceptable and even good for code quality.


> As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level

That's an interesting pile of keywords you've got there.

I don't know about Smalltalk (I find Squeak, Pharo, etc. utterly incomprehensible - I have no idea what to do with them), but for some time I've been fascinated with the idea of a fundamentally mutable and even self-modifying environment. My favorite optimization would be, in the case of tight loops with tons of if()s and other conditional logic, for the language to JIT-_rearrange_ the code to nop out the if()s and other logic just before the tight loop is entered - or, even better, gather up the parts of the code that will be executed and dump all of them somewhere contiguous.

C compilers could probably be made to do this too, but that would break things like W^X and also squarely violate lots of expectations as well.
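(To make that concrete, here's a static C sketch of the transformation - hand-performed loop unswitching, with made-up names; a JIT could do the same thing at runtime by patching out the dead branch:)

  #include <stddef.h>

  /* Naive version: a loop-invariant flag is re-tested on every iteration. */
  void scale_naive(double *v, size_t n, double k, int do_scale) {
      for (size_t i = 0; i < n; i++) {
          if (do_scale)           /* invariant test inside the hot loop */
              v[i] *= k;
      }
  }

  /* "Unswitched" version: test once, then run a branch-free loop body.
     A JIT that specializes at loop entry effectively nops the dead path. */
  void scale_unswitched(double *v, size_t n, double k, int do_scale) {
      if (do_scale) {
          for (size_t i = 0; i < n; i++)
              v[i] *= k;
      }
      /* else: the whole loop disappears */
  }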


This is sort of implemented in various forms.

For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly; the CLR does this (and lots more)[2].

At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].

At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does it at the compiler level[4].

At the processor level, you heuristically have branch prediction as a critical part of any pipeline[1]. (I think modern Intel processors as of the Haswell era assign each control-flow point a total of 4 bits which just LSL/LSR to count branch taken/not taken. Don't quote me on that.)
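(For the curious, here's a toy model in C of the textbook 2-bit saturating counter - the classic scheme, not necessarily what Haswell actually does:)

  #include <stdio.h>

  /* 2-bit saturating counter: 0,1 predict not-taken; 2,3 predict taken.
     It saturates at 0 and 3, so one anomalous outcome can't flip a
     well-established prediction. */
  static int counter = 2;  /* start at "weakly taken" */

  static int predict(void) { return counter >= 2; }

  static void train(int taken) {
      if (taken && counter < 3) counter++;
      if (!taken && counter > 0) counter--;
  }

  int main(void) {
      int outcomes[] = {1, 1, 1, 0, 1, 1, 1, 1};  /* mostly-taken loop branch */
      int n = sizeof outcomes / sizeof *outcomes, hits = 0;
      for (int i = 0; i < n; i++) {
          hits += (predict() == outcomes[i]);
          train(outcomes[i]);
      }
      printf("predicted %d/%d correctly\n", hits, n);
      return 0;
  }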

RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom. When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved). If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like method_missing to create your own DSLs". This lends a lot of flexibility to the language (at the expense of performance, typing guarantees). In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.

Imagine an environment[5] that has that structure intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.

=====

[1] See ISL-TAGE from CBP3 and other, more modern reports from "Championship Branch Prediction" (if it's still being run).

[2] https://stackoverflow.com/a/8874314 Here's how it's done with the CLR. The JVM is crazy good so I'd imagine the analogue exists there as well.

[3] https://en.wikipedia.org/wiki/Polymorphic_code

[4] http://wiki.c2.com/?TheKenThompsonHack

[5] Use some micro-kernel OS architecture so process $foo can't alter $critical-driver-talking-to-SATA-devices or modify malloc. I'd probably co-opt QNX's Neutrino design since it's tried and true. Plus that sort of architecture has the design benefit of intrinsically safe high availability integrated into the network stack.


> This is sort of implemented in various forms.

> For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly; the CLR does this (and lots more)[2].

You mean Dynamic Code Evolution?

Regarding [2], branch prediction hinting being unnecessary (as well as statically storing n.length in `for (...; n.length; ...)`) is very neat. I like that. :D
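(The C analogue of that, hand-done - the JIT proves the bound is loop-invariant and hoists it out; function names here are made up:)

  #include <string.h>

  /* Naive: strlen() is (naively) re-evaluated on every iteration. */
  void upcase_slow(char *s) {
      for (size_t i = 0; i < strlen(s); i++)
          if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
  }

  /* Hoisted: the invariant length is computed once, as a JIT would arrange. */
  void upcase_fast(char *s) {
      size_t len = strlen(s);
      for (size_t i = 0; i < len; i++)
          if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
  }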

> At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].

Right. The only problem is people's expectation for C to remain static. Early implementations of such a system may cause glitches due to these expectations being shattered, and result in people a) thinking it won't work or b) thinking the implementation is incompetent. I strongly suspect that the collective masses would probably refuse to use it citing "it's not Rust, it's not safe." Hmph.

> At the compiler level, you've got Ken Thompson's seminal Turing Award lecture, which does it at the compiler level[4].

That c2 article very strongly reminded me of https://www.teamten.com/lawrence/writings/coding-machines/ - particularly in how theoretical the idea is.

For example,

> And it is "almost" impossible to detect because TheKenThompsonHack easily propagates into the binaries of all the inspectors, debuggers, disassemblers, and dumpers a programmer would use to try to detect it. And defeats them. Unless you're coding in binary, or you're using tools compiled before the KTH was installed, you simply have no access to an uncompromised tool.

...Nn..n-no, I don't quite think it can actually work in practice like that. What Coding Machines made me realize was that for such an attack to be possible, the hack would need to have local intelligence.

> There are no C compilers out there that don't use yacc and lex. But again, the really frightening thing is via linkers and below this hack can propagate transparently across languages and language generations. In the case of cross compilers it can leap across whole architectures. It may be that the paranoiac rapacity of the hack is the reason KT didn't put any finer point on such implications in his speech ...

Again, with the intelligence thing. The amount of logic needed to be able to dance around like that would be REALLY, REALLY HARD to hide.

Reflections on Trusting Trust didn't provide concrete code to alter /usr/bin/cc or /bin/login, only abstract theory, discussion and philosophy. It would have been interesting to be able to observe how the code was written.

I don't think it's possible to make a program that can truly propagate to the extent that it can traverse hardware and even (in the case of Coding Machines) affect routers, etc.

> At the processor level, you heuristically have branch prediction as a critical part of any pipeline[1]. (I think modern Intel processors as of the Haswell era assign each control-flow point a total of 4 bits which just LSL/LSR to count branch taken/not taken. Don't quote me on that.)

Oh ok.

> RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom.

Okay, I just clicked my way through to get the ISO and MSI (must say the way the site offers the downloads is very nice). Haven't tested whether Wine likes them yet, hopefully it does.

> When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved).

Right.

> If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like method_missing to create your own DSLs".

Ruby is (heh) also on my todo list, but I did recently play with the new JavaScript Proxy object, which basically makes it easy to do things like

  var curcfg = {}, defcfg = { ... };
  var cfg = new Proxy({}, {
    // fall through to the defaults for any key that hasn't been set
    get: (_, key) => {
      return (curcfg[key] !== undefined) ? curcfg[key] : defcfg[key];
    },
    set: (_, key, val) => {
      curcfg[key] = val;
      return true;
    }
  });
implementing default parameters, overlays, etc.

> This lends a lot of flexibility to the language (at the expense of performance, typing guarantees).

Mmm. More work for JITs...

> In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.

Very interesting, particularly fault recovery.

> Imagine an environment[5] that has that structure intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE-type debugger. Turtles all the way down, baby.

oooo :)

Okay, okay, I'll be looking at Cincom ST pretty soon, heh.

FWIW, while Smalltalk is a bit over my head (it's mostly the semantic-browser UI, which is specifically what completely throws me), I strongly resonate with a lot of the ideas in it, particularly message passing, about which I have some Big Ideas™ I hope to play with at some point. I keep QNX 4.5 and 6.5.0 (the ones with Photon!) running in QEMU and VNC to them when I'm bored.

Oh, also - searching for DCE turned up Dynamic Code Evolution, a fork of the HotSpot VM that allows runtime code re-evaluation - i.e., live reload without a JVM restart. If only that were mainstream and open source. It's awesome.


Well, no, there is no "book" in the real meaning of the word.

But some documentation is available in the "Documentation" section of the web site and inside the "Help|Help file" (Ctrl+F1) menu in the IDE itself.

There are small example and template projects as well.

Also the FASM forum is a good place to ask: https://board.flatassembler.net


You can always reduce the usage of macros or make them more assembly-centric. Check the Fresh IDE sources for an example of moderate macro use.

On the other hand, defining complex data structures is much easier with a powerful macro engine.
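(For a rough C-preprocessor analogue of the idea - not FASM, and far weaker than its macro engine, but it shows how one macro table can generate both a data structure and metadata about it; all names here are made up:)

  #include <stdio.h>

  /* One table describes the structure; it is expanded twice below. */
  #define POINT_FIELDS \
      X(int,    x)     \
      X(int,    y)     \
      X(double, weight)

  /* Expansion 1: the struct definition itself. */
  typedef struct {
  #define X(type, name) type name;
      POINT_FIELDS
  #undef X
  } Point;

  /* Expansion 2: the field names as strings, kept in sync automatically. */
  static const char *point_fields[] = {
  #define X(type, name) #name,
      POINT_FIELDS
  #undef X
  };

  int main(void) {
      for (size_t i = 0; i < sizeof point_fields / sizeof *point_fields; i++)
          printf("field: %s\n", point_fields[i]);
      return 0;
  }

FASM's macro engine goes well beyond what the C preprocessor can do (loops, conditionals, and so on), which is the point being made here.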


I am using assembly language for all my programming tasks.

And most of them are application programming (Fresh IDE itself and many closed-source projects in my work) or even web programming (https://board.asm32.info).

That is why I needed a powerful IDE, suitable for rapid development of relatively big projects (500 Kloc or more).


This is interesting. Do you not find the dev process significantly slower than using a higher level language?


About twice as slow as in an HLL, with code reuse of course.

But the code is more reliable and the debugging process is much easier. After a short debugging stage, most of the projects run for years without a single bug report or other support issues.

And I am not even talking about the significantly higher speed, lower memory footprint and better UX (the response time of the UI in particular is really much faster).

On the whole, the advantages outweigh the disadvantages IMHO.


Interesting. Are you sure the increase in code reliability comes down to the language and not to your skills? It feels quite contrary to my experience that a lower-level language would be more reliable.


Well, I am a pretty average programmer. Not the worst, not the best.

The code reliability of assembly programs is better because, by programming the algorithms at a low level, the programmer controls every aspect of the execution. Notice that excessive use of code-generation macros will cancel this advantage.

Another advantage is that bugs in assembly programs usually cause an immediate crash, and this makes fixing them easy.

Deferred crashes and strange/random/undefined behavior from bugs are rare in assembly programs. IMO, this is because of the reduced number of abstraction layers.


What about code maintainability and readability - I'm guessing those must be worse compared to an HLL? Also, what made you get into writing complex programs in assembly - was it just the extra control? I've used assembly when I needed to optimise my C code, but it was a slow and difficult process! I would not really choose it for complex stuff, but I'm really interested to hear your point of view.


Code maintainability and readability depend only on the programmer's knowledge of the language/framework/libraries used.

For example, I don't know Lisp, so for me it is much harder to read/maintain a Lisp project than an assembly language project.


Thanks for sharing! This is really interesting, especially the part about the reduced count of abstraction layers. Do you think the abstraction layers are the problem, or the fact that the overwhelming majority of "abstractions" that materialize in modern high-level software are leaky?


Why does adding abstraction layers make programming easier?

Because it allows the programmer to not think (or even know) about some things, leaving them to the layers/libraries.

But every layer also adds a level of obscurity. The interaction between multiple layers is even more undefined and random.

It is OK while everything goes as expected. But when there are problems, the obscurity can make debugging hell.

In addition, the behavior of bugs hidden deep in the layers (or in the way the layers interact with each other and with the application) can be really weird.

That is why, IMHO, the programmer should keep the abstraction layers to the minimal number that allows solving the programming task with minimal effort, counting not only coding time, but debugging and support time as well.

In my practice, I have decided that using FASM with Fresh IDE and a set of assembly libraries gives me the needed code quality.


The screenshots were taken in Linux: XFCE+Wine. Fresh IDE is actually a kind of hybrid application. It works better, and with more features, in Linux than in Windows. :)


I know Windows is a lost cause, but Unix apps don't have to be ugly like that ;-)

And even Windows can use GTK.


I am working on v3.0 that will use its own portable GUI toolkit (in assembly language) with much prettier UI. On this page you can see some preliminary experimental screenshots: https://fresh.flatassembler.net/index.cgi?page=content/artic...

Still not GTK, though. It is too heavy for assembly language programming and would not allow portability to, for example, the assembly-written OSes MenuetOS and KolibriOS.


Nice improvement! The screenshots have much nicer font rendering (and a better font, for that matter). The fact that it uses more than the 16 colours that were available in Windows 3.1 helps a lot too.


> in assembly language

That's pretty impressive.


That looks cool as hell!


Sorry, bad links on the front page. Fixed now.

Also, there is a popup menu at the left with the navigation links, although the repository interface is not very mobile-friendly.

Thanks for the report!


Thanks. I found the interface kind of hard to navigate on my 24" screen too. Mainly, the location of the "Menu" is the hardest thing to figure out. The link to the source tree is there.

If anyone else is wondering, here's the direct link to repo browser: https://fresh.flatassembler.net/fossil/repo/fresh/dir?ci=tip...


It is a work in progress. I needed a good transactional database in order to handle multiple connections simultaneously, and SQLite fits this goal pretty well. But later, if I find (or write) an assembly language database, I can always change it. Combined with a good assembly-written web server (already available: https://2ton.com.au/rwasa/) and an assembly-written OS (like MenuetOS or KolibriOS), we will have the full stack for assembly language based web hosting. :)
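(For illustration, a minimal C sketch of the kind of setup that makes SQLite handle concurrent connections well - WAL mode plus a busy timeout. The file name, timeout value, and SQL here are made up, not what the board actually uses:)

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void) {
      sqlite3 *db;
      if (sqlite3_open("board.db", &db) != SQLITE_OK) {
          fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
          return 1;
      }
      /* WAL mode lets readers proceed while a writer is active. */
      sqlite3_exec(db, "PRAGMA journal_mode=WAL;", NULL, NULL, NULL);
      /* On a locked database, retry for up to 2 seconds instead of failing. */
      sqlite3_busy_timeout(db, 2000);

      /* A transactional write; concurrent readers are not blocked. */
      sqlite3_exec(db,
          "BEGIN;"
          "CREATE TABLE IF NOT EXISTS posts(id INTEGER PRIMARY KEY, body TEXT);"
          "INSERT INTO posts(body) VALUES('hello');"
          "COMMIT;",
          NULL, NULL, NULL);

      sqlite3_close(db);
      return 0;
  }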


"gcc -S sqlite3.c" will give you an assembly language database engine. :-)

OK, maybe you meant "hand-written" assembly language. But on the other hand, SQLite claims to be written in C, yet a fair amount of that C code is automatically generated using other scripts and programs. Does that mean SQLite is not really written in C?

FWIW, we actually use the assembly language sqlite3.s file (as generated above) during testing. We have scripts that go through and punch out individual opcodes, then assemble the result and verify that the test suite detects the error. This is a test of the SQLite test suite more than a test of SQLite, but a strong test suite makes for a strong product, so it still helps.
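(Not the actual SQLite test scripts, but a toy C sketch of the punching step: overwrite one byte of a built artifact, then re-run the tests and expect a failure. The NOP value and the usage are illustrative only.)

  #include <stdio.h>
  #include <stdlib.h>

  /* Usage: punch <file> <byte-offset>
     Overwrites one byte with 0x90 (x86 NOP); the test suite should
     then detect the mutated binary as broken. */
  int main(int argc, char **argv) {
      if (argc != 3) {
          fprintf(stderr, "usage: %s file offset\n", argv[0]);
          return 1;
      }
      FILE *f = fopen(argv[1], "r+b");
      if (!f) { perror("fopen"); return 1; }
      if (fseek(f, strtol(argv[2], NULL, 0), SEEK_SET) != 0) {
          perror("fseek");
          fclose(f);
          return 1;
      }
      unsigned char nop = 0x90;
      fwrite(&nop, 1, 1, f);
      fclose(f);
      return 0;
  }

If the suite still passes after a punch, that opcode was never exercised, which is exactly the coverage signal the technique is after.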


"gcc -S sqlite3.c" will give me the result of the C code compilation. The fact it is in form of "assembly code" changes nothing - it still is HLL code.

Humans write assembly language in a different way, because they are not limited by the HLL rules, only by the hardware resources.


I don't agree. A human writes assembly language in a very different way compared to an HLL compiler. Taking the compiler as a reference, you will get all the disadvantages and none of the advantages it has.

I would suggest simply taking some good quality, human-written assembly language source and trying to modify it to fit your needs. Or start a small assembly language program from scratch and ask on the asm forums for help.

