hermanhermitage's comments

Every time Walter posts, it reminds me that my dream language would simply be C with https://www.digitalmars.com/articles/C-biggest-mistake.html fixed, and probably Go-style interfaces. Maybe a little less UB and some extensions for memory-safety proofs.
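To illustrate the Go-style-interfaces half of that: plain C can already approximate one as a (vtable, data) pair, it's just boilerplate you have to hand-roll every time. A minimal sketch (all names hypothetical):

    #include <stdio.h>

    /* An "interface" is a vtable pointer plus an untyped self pointer. */
    typedef struct {
        void (*write)(void *self, const char *msg);
    } WriterVtbl;

    typedef struct {
        const WriterVtbl *vtbl;
        void *self;
    } Writer;

    /* One concrete implementation: anything backed by a FILE*. */
    static void file_write(void *self, const char *msg) {
        fputs(msg, (FILE *)self);
    }
    static const WriterVtbl file_writer_vtbl = { file_write };

    static Writer writer_from_file(FILE *fp) {
        return (Writer){ &file_writer_vtbl, fp };
    }

    /* Code that only knows the interface. */
    static void greet(Writer w) {
        w.vtbl->write(w.self, "hello\n");
    }

    int main(void) {
        greet(writer_from_file(stdout));
        return 0;
    }

What Go adds (and what I'd want) is the compiler building that pair implicitly for any type with the right method set.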


That's why DasBetterC has done very well! You could call it C with array bounds checking.

I occasionally look at statistics on the sources of bugs and security problems in released software. Array bounds overflows far and away are the top cause.

Why aren't people just sick of array overflows? In the latest C and C++ versions, all kinds of new features are trumpeted, but again no progress on array overflows.

I can confidently say that in the 2 decades of D in production use, the incidence of array overflows has dropped to essentially zero. (To trigger a runtime array overflow, you have to write @system code and throw a compiler switch.)

The solution for C I proposed is backwards compatible, and does not make existing code slower.

It would be the greatest feature added to C, singularly worth more than all the other stuff in C23.


I don't even understand how the flip ended up happening: we went from C++ collection frameworks being bounds-checked by default (Turbo Vision, BIDS, OWL, MFC, PowerPlant, ...) to C++98 getting a standard library that does exactly the opposite by default, with strong cultural resistance on WG21 to changing it until governments started talking about security liabilities and which programming languages to accept in public projects.

As for WG14, I have no hope. They ignored several proposals, and seem keen on having C be as safe as hand-written Assembly code. Even then, Assembly tends to be safer, as UB only happens when you do something the CPU did not expect, and macro assemblers don't do clever optimizations.


i think what happened was that turbo vision, owl, mfc, etc., were mostly for line of business applications: work order tracking, mail merge databases, hotel reservations, inventory management, whatever. but since the late 90s those have moved to visual basic, perl, java, microsoft java, python, and js. only the people who really needed c++'s performance (and predictable memory footprint) kept using c++, and similarly for c

maybe as the center of gravity moves from people writing game engines and kernels to people keeping legacy code running we will get more of a constituency for bounds checking

agreed about asm being safer


Wow, you mentioned Turbo Vision, OWL, etc. What a blast from the past! I had fun developing applications using them.


> The solution for C I proposed is backwards compatible, and does not make existing code slower.

Where can I read about it? The only way I can think of to also make pointers to array elements safe is to replace them with triples: (base, element ptr, limit).
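For illustration, a minimal sketch of such a triple in today's C (hypothetical names, and not necessarily how Walter's actual proposal works):

    #include <assert.h>
    #include <stddef.h>

    typedef struct {
        int *base;   /* start of the array        */
        int *ptr;    /* current element           */
        int *limit;  /* one past the last element */
    } int_fat_ptr;

    /* Pointer arithmetic stays cheap; only dereferences are checked. */
    static int_fat_ptr fat_advance(int_fat_ptr p, ptrdiff_t n) {
        p.ptr += n;
        return p;
    }

    static int fat_deref(int_fat_ptr p) {
        assert(p.base <= p.ptr && p.ptr < p.limit);  /* bounds check */
        return *p.ptr;
    }

    int main(void) {
        int a[4] = {1, 2, 3, 4};
        int_fat_ptr p = { a, a, a + 4 };
        p = fat_advance(p, 3);
        return fat_deref(p);  /* ok; one more fat_advance would trip the assert */
    }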



Thanks. I got interested in this topic as people are talking about writing OS kernel code in Rust, but a) it only helps new code, and b) it is very hard to justify rewriting millions of lines of C code in Rust (plus rewrites are never 100% faithful feature-wise). If, on the other hand, C can be made safer, maybe through a stepwise process where the code is rewritten incrementally to pass through C->C0->C1->Cn compilers, each making incremental language changes, much more of the code can be made safer. It will never be as good as Rust, but I do think this space is worth exploring.


I would much prefer a safe C to Rust.


I don’t always agree but I’ll join you on this particular hill!


When writing software I almost never find myself in a situation where UB is a design concern or needs to be factored around in the structure.

I almost always find myself struggling to name and namespace things correctly for long-term durability. Almost all compiled languages get this wrong: they generally force you to consider this before you start writing code, rather than letting you explore the shape of the solution first.

I think lisp is the only language I've used where this wasn't a burden, but in reality, lisp then forces you to deeply ponder your data structures and access ideology first, so I didn't find it to be that rewarding in the long run.

I love that Go lets you bang simple "method like functions" straight onto the type. This solves the first layer of namespace problems. It does nothing for the second though, and in fact makes it worse, by applying "style guidelines" to the names of the containing types. I am constantly let down by this when writing Go code and I find it hard to write "good looking" code in the language which is all the more frustrating because this was what the guidelines were supposed to solve in the first place.

I really just want C, but with the ability to namespace the functions that act on structs into the structs themselves. Then I could name things however I want and wouldn't have to prefix_every_single_function() just so the assembler and I can fully agree on the unmangled symbol-table name, which I will almost certainly never care about in 99% of what I compile.
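To make the prefix pain concrete, here's the usual pattern (hypothetical vec2 names), where every function repeats the type's name because C has one flat namespace:

    typedef struct { float x, y; } vec2;

    vec2  vec2_add(vec2 a, vec2 b) { return (vec2){ a.x + b.x, a.y + b.y }; }
    float vec2_dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }
    /* ...and vec2_scale, vec2_len, vec2_norm, and so on, forever. */

All I want is to write v.add(w) and have it resolve to a plain unmangled symbol.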

There's a real joy to the fast initial development and easy refactoring you can find in scripting languages. Too bad they all have wacky C interfaces and are slower than molasses.


If you haven't already, I'd check out Zig. It does what you're describing, if I'm understanding correctly. There are some choices in that language I find annoying, but maybe you'll still enjoy it.


Perhaps C3? It has interfaces [1], slices, and array iterators, and bills itself as "an evolution of C (not revolution) with full ABI compatibility".

--

1: https://c3-lang.org/generic-programming/anyinterfaces/


I'm with you on this. I tried Defold for prototyping a game concept and was less than impressed. It got in the way in a framework sense, and didn't seem to provide much bang for the buck (i.e. it consumed more hours of my time learning its peculiar ways of doing things than it would probably have taken to roll my own). Of course, people's mileage may vary. I was expecting things like auto-atlasing, batch processing on import, and proper layer z-order (versus using a 3D-style z value). I recall the UI did weird stuff like sorting alphabetically instead of by creation order or z-order. The message-passing thing was annoying. I guess it was a bad choice for me. I'm probably more productive with a library-and-tools approach rather than an "opinionated" engine. It felt like someone was caught up in an architecture more suited to creating very large games and applied it here. It felt like it was created by too many programmers working to a feature list, rather than creating a useful, cohesive tool.


Defold is definitely not for everyone and not for all kinds of games. Defold is better at 2D than 3D, for instance. But it is a 3D engine, so it will sort everything on z-value.

> It felt like someone was caught up in an architecture more suited to creating very large games and applied it here.

Defold is supposed to be the exact opposite. The two initial creators of Defold worked at a big AAA developer and with Defold they wanted something different from their day-to-day experience working on huge console games. They wanted a small and performant engine with quick turnaround time on changes.

This is why it is somewhat opinionated about certain things: to avoid some of the costly baggage that usually comes with huge productions and big, complex engines.

It is also why you can usually build and bundle your game in seconds to any platform without any setup.

It is the reason why you can hot reload your changes into a running game to further reduce iteration time.


Did you try with the speaker notes on? That gives a bit more detail.


I recommend the old bitsquid blog as well - it gives a good snapshot of the thinking that goes into creating game engines these days.

http://bitsquid.blogspot.com


I recommend the newer "Our Machinery" blog [1], where they write about developing a new engine. Bitsquid was sold to Autodesk, rebranded as Stingray, and as of now seems to be discontinued [2].

[1] https://ourmachinery.com/post/ [2] https://www.autodesk.com/products/stingray/overview


Excellent, I'll check that out. As someone who (at the risk of exposing my geriatric age) did 3D engine and driver work back in the mid '90s to early '00s, I found the Bitsquid blog really useful for warping my brain to the current era and the tradeoffs that matter today. Great stuff.


If you are into no-nonsense software design, Molecular Musings [1], written by Stefan Reinalter, is also a goldmine. As is anything written or said on the topic by Mike Acton, whose ramblings de facto brought data-oriented design (DOD) into the mainstream [2][3]. Regarding the good old days, remember Flipcode? ;) [4]

[1] https://blog.molecular-matters.com/ [2] https://macton.smugmug.com/Other/2008-07-15-by-Eye-Fi/n-xmKD... [3] https://www.youtube.com/watch?v=rX0ItVEVjHc [4] https://flipcode.com/archives/articles.shtml


Excellent, thank you for those references.

I also recently picked up "Data-Oriented Design: Software Engineering for Limited Resources and Short Schedules" by Richard Fabian, which I've not had a chance to read properly yet but which looks like it covers things in detail.


I'm not sure it's preposterous. Based on what I've read, she and her team focused on teasing out the abstraction and modularity features of CLU from their experience building previous large- and medium-scale software systems.

That doesn't preclude others inventing them before, after, or at the same time. Nor the same ideas being explored in other fields earlier, with different terminology or a different focus.

I'm sure there is some field of human endeavour that has explored and documented the same ideas in a different context much earlier.


Fascinating - thanks for sharing. Another request for dosage levels here.


The native disk format was actually broken into sectors; see:

http://amigadev.elowar.com/read/ADCD_2.1/Devices_Manual_guid...

It read full tracks at a time, but you can see from the doc that there were 11 sectors per track (22 on high-density disks) with no inter-sector gaps; they were still separate 512-byte sectors, and the file-system structures address them individually.
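For a rough idea, the decoded (post-MFM) per-sector layout looks something like the following. This is a from-memory sketch of what the linked RKM chapter describes, so treat the field order and sizes as approximate and check the doc:

    #include <stdint.h>

    /* One Amiga trackdisk sector after MFM decoding; on disk each is
       preceded by a short $AAAA gap, and everything except the raw
       sync words is odd/even MFM-encoded. */
    typedef struct {
        uint16_t sync[2];       /* two raw $4489 sync words              */
        uint8_t  format;        /* $FF for the standard Amiga format     */
        uint8_t  track;         /* cylinder * 2 + head                   */
        uint8_t  sector;        /* 0..10 (0..21 on high density)         */
        uint8_t  to_gap;        /* sectors remaining until the track gap */
        uint8_t  label[16];     /* per-sector label area                 */
        uint32_t header_chksum;
        uint32_t data_chksum;
        uint8_t  data[512];     /* the block the file system addresses   */
    } AmigaSector;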


That's just a clash of nomenclature between the logical and the physical layout. It had logical sectors, but on disk each track was written as one contiguous, gapless stretch.


Yes, it read the whole track starting from wherever the head happened to be, and then the floppy device handler figured out where each logical block was, based on the magic sync word $4489 (which is not supposed to be output by the default MFM encoding) and the track/sector IDs embedded in the track's bitstream.


Not even "Not supposed to be" -- it can't be. $4489 decoded is a valid byte, but with one of the clock transitions 'blanked'.

If you decode $4489 via MFM then re-encode that byte, you'll get a 1-bit difference. This is why it works as a sync marker: even if you wrote that byte in the data area of the sector, it wouldn't encode the same way because of the missing clock :)
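A quick way to convince yourself, sketched in C ($4489 decodes to the byte $A1; re-encoding $A1 legally gives $44A9 instead):

    #include <stdio.h>

    /* MFM: each data bit is preceded by a clock bit, and the clock bit
       is 1 only between two zero data bits. */
    static unsigned mfm_encode(unsigned char data, int prev_bit) {
        unsigned out = 0;
        for (int i = 7; i >= 0; i--) {
            int bit   = (data >> i) & 1;
            int clock = !prev_bit && !bit;
            out = (out << 2) | (unsigned)(clock << 1) | (unsigned)bit;
            prev_bit = bit;
        }
        return out;
    }

    int main(void) {
        unsigned legal = mfm_encode(0xA1, 0);  /* first data bit is 1, so
                                                  prev_bit doesn't matter */
        printf("MFM(0xA1) = $%04X, sync = $4489, diff = $%04X\n",
               legal, legal ^ 0x4489u);
        /* Prints $44A9 with a diff of $0020: exactly one blanked clock
           bit, so $4489 can never appear in normally encoded data. */
        return 0;
    }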

It's a common trick on radio systems too -- a short burst of data which can't be obtained through normal encoding processes (invalid FEC bits, flipped parity, etc).


For another CP/M take on the 6502, see DOS/65 at http://www.z80.eu/dos65.html


Nomenclature varies, but I'm guessing the author of the comment is using 'union' as the sum analog of tuples (unnamed fields) and 'sum' as the sum analog of structs/records (i.e. named fields).


"monoglot" might be a more accurate term, but "isomorphic" seems to have the mind share.

"isomorphic" would seem to imply having the same structural composition - eg. big ball of mud on the front end and back end, or broken into services implemented as objects (sequential) or actors (concurrent).

