Exactly. My only gripe with modern machines is how narrow their interface is; I wish they exposed more of their internals.

e.g., I'm too young to remember the Commodore computers, but it's my understanding that you could change the display colors and sprites by poking a memory address. I'm not advocating for specifically that; modern computers are connected to the Internet and it would be a security disaster. But that kind of interaction with the machine is something that's missing.




People are forever saying they want this*, and bemoaning that modern computers are so wrapped in layers of abstraction, but the fact is you absolutely don't want this on the same machine you use to browse the web and run untrusted javascript.

Meanwhile there's Raspberry Pi and a whole host of retro-clones out there for cheap, and emulators out there for free. We live in a golden age where all the old stuff is still there if you want it, but you can also download incredibly sophisticated enterprise level development tools at zero cost, and hundreds of pages of tutorials and guides just a Google search away. It's never been easier for kids to get into programming.

* Example from here yesterday https://news.ycombinator.com/item?id=28441563


> absolutely don't want this on the same machine you use to browse the web and run untrusted javascript

100% agree, I said it in my comment :)

There's no chance you can get away with unrestricted memory access on a machine that ever talks to any other unknown machine ever, on any protocol and for any reason.

> It's never been easier for kids to get into programming.

I honestly don't know about that. Some things are easier, others are harder. There's no question the amount of quality information out there makes it much easier for someone who wants to learn, but even setting up a development environment is a huge blocker for a complete beginner. Hello world is the hardest program you're ever going to code; it's all downhill from there. And that's where the layers of abstraction come back to bite you.

Of course, everyone who wants to learn how to program is going to overcome that. Most people are just not interested in the first place, and it's kind of dumb to assume they'd learn if only the environment were different.

OTOH I wonder if the layers of crap aren't just making it harder for those who are interested.


You don't even need to install anything; there are plenty of programming environments online. My kids wrote their first programs using Scratch, and there are dozens of sites that will let you type in Python and run it right then and there. There are programming apps for mobile phones and tablets. You can even develop applications in Swift on the iPad using Playgrounds, and soon you will be able to upload them directly to the App Store.


I agree that the actual barriers are probably lower than ever, e.g. with Scratch, Playgrounds, replit or even the browser JS console. I think the bigger issue is that the competition for (child) attention and interest is much fiercer nowadays. In the 1980s we had a few channels of fixed TV programs, maybe a handful of expensive computer/video games, LEGO and some plastic toys. Even back then, interest in programming was only for a select few.

Let's not fool ourselves: programming has a steep effort-reward curve, maybe even steeper than chess or musical instruments. Nowadays it's competing against an infinite supply of deliberately tuned, shallow-curved attention seekers like YouTube/TikTok videos and app store quasi-games.


> I agree that the actual barriers are probably lower than ever

The barrier to programming something that others find useful is way higher today.


Ergonomics were much, much worse before. I had my Commodore 64 hooked up to the TV in the living room. It was not easy to program using an old tube TV displaying text with 320x200 resolution graphics.


Which is why the first thing 1980s game developers would do when getting out of their bedrooms into proper offices was to migrate to development systems based on VMS/UNIX or the like, and then upload the games into the Speccy and C64 via the expansion port.


If you want to have a modern day bare metal experience, playing with microcontrollers is probably your best bet. Buying a board and attaching it to a PC for development has never been easier. Device drivers are usually rather thin wrappers over memory mapped IO. A simple scheduler for multithreading isn't that hard to write and those in FreeRTOS or ThreadX are quite readable despite their production quality feature sets.
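
To make "thin wrapper over memory-mapped IO" concrete, here is a minimal bare-metal sketch in C. The base address and register offsets below are invented for illustration; the real ones come from your chip's reference manual, but the volatile-pointer pattern is the same everywhere.

    #include <stdint.h>

    /* Hypothetical register layout, for illustration only; real
     * addresses and bit meanings come from the chip's manual. */
    #define GPIO_BASE   0x40020000u
    #define GPIO_MODER  (*(volatile uint32_t *)(GPIO_BASE + 0x00)) /* pin modes   */
    #define GPIO_ODR    (*(volatile uint32_t *)(GPIO_BASE + 0x14)) /* output data */

    int main(void)
    {
        /* Configure pin 5 as a general-purpose output (mode bits 01). */
        GPIO_MODER = (GPIO_MODER & ~(3u << (5 * 2))) | (1u << (5 * 2));

        for (;;) {
            GPIO_ODR ^= (1u << 5);                      /* toggle the LED pin */
            for (volatile int i = 0; i < 100000; i++);  /* crude busy-wait    */
        }
    }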

However, moving up from there to more powerful systems adds more and more (mostly necessary) complexity: address space isolation for robust multitasking, more complicated buses and attached devices, more asynchronous operations, etc. There is no way to have both true simplicity and the comfort/performance of a modern personal computer.


> it's my understanding that you could change the display colors and sprites by poking a memory address

As far as I know FreeBSD still lets you poke around in /dev/mem; last time I tried, you could make the poor OS have a seizure with "sudo dd if=/dev/random of=/dev/mem". Linux is more restrictive with /dev/mem by default (the CONFIG_STRICT_DEVMEM option), but it can be configured to be more lax if you compile the kernel yourself.
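
For the curious, here is roughly what poking it from C looks like, as a minimal sketch: it needs root, and on a kernel built with CONFIG_STRICT_DEVMEM the mapping of ordinary RAM is simply refused.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map one page of physical memory through /dev/mem and read it.
     * Needs root; on most stock kernels (CONFIG_STRICT_DEVMEM=y) the
     * mmap of ordinary RAM fails outright. */
    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0 /* physical offset 0 */);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        printf("first word of physical memory: %#lx\n",
               *(volatile unsigned long *)p);

        munmap(p, 4096);
        close(fd);
        return 0;
    }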


You could do similar things across basically all home computers up to the 16-bit days.

You can still do that on PCs by booting into real mode; OSDev has plenty of examples.
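
e.g. the classic first exercise: in real mode (or any freestanding environment where the card was left in VGA text mode), the text buffer sits at physical address 0xB8000, one 16-bit cell per character. A minimal freestanding C sketch:

    #include <stdint.h>

    /* Freestanding sketch: assumes VGA text mode and flat access to
     * physical memory (a toy kernel, not a userland Linux process).
     * Each cell: low byte = character, high byte = attribute
     * (0x0F = white on black). The screen is 80 columns wide. */
    static volatile uint16_t *const vga = (volatile uint16_t *)0xB8000;

    static void putc_at(int row, int col, char c)
    {
        vga[row * 80 + col] = (uint16_t)(0x0F00 | (uint8_t)c);
    }

    void kmain(void)
    {
        putc_at(0, 0, 'H');
        putc_at(0, 1, 'I');
        for (;;) { }  /* hang; there's no OS to return to */
    }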


Install FreeBSD; tabs(1) is still a thing! It's interesting that the FreeBSD man page says "A tabs utility appeared in PWB UNIX," while Wikipedia lists PWB/UNIX as having its initial release in 1977.


PWB Unix was the earliest source I could find. FreeBSD lists history items for releases going back to the earliest Unix release because BSD Unix is derived from AT&T Research Unix and FreeBSD is derived from 4.4BSD...

The tabs file does predate this, and I'll make a note of that (which is what this article is about).


> My only gripe with modern machines is how narrow their interface is; I wish they exposed more of their internals.

Modern machines are quite the opposite. It's the current crop of mainstream OSes and their associated APIs that are to blame, not the hardware designs.

> e.g., I'm too young to remember the Commodore computers, but it's my understanding that you could change the display colors and sprites by poking a memory address.

The Commodore 64 was a very simple machine with little memory. Only one program ran at a time, so no OS was needed. And the memory registers you poked at were the API, as things were very simple back then.

However, you can certainly do the same with modern GPUs[1], but they are so massively complex that the manual for the Intel graphics controllers runs well over 1000 pages, with some manuals exceeding 2000 pages[2]. For comparison, the manual for the Voodoo2 is 132 pages[3].

> I'm not advocating for specifically that; modern computers are connected to the Internet and it would be a security disaster. But that kind of interaction with the machine is something that's missing.

That's why we have an OS to control access to those bits of hardware. The internet has nothing to do with it.

The problem you face is that most modern operating systems are massive, bloated even, to the point where the interfaces are buried underneath miles of code. How does one approach a simple hardware project, like poking at bitmaps and pixels stored in the Intel GPU, from a "modern" operating system?

The Linux kernel is something like 10% AMD GPU driver code, mostly auto-generated by massive build tools which are as complex as the kernel itself. So no wonder you see the system as narrow; it's so massive it blurs into one indistinguishable monolithic blob, which gives it that narrow feel.

[1] https://wiki.osdev.org/Accelerated_Graphic_Cards

[2] https://www.x.org/docs/intel/

[3] http://darwin-3dfx.sourceforge.net/


> Only one program ran at a time, so no OS was needed.

The Commodore 64 had a simple OS (the KERNAL), and it does have all the elements to be called a primitive one. The features include:

- Devices (0 was the keyboard, 1 was cassette, 2 was RS-232, 3 was the screen, 4-30 were the serial bus where printers and disks lived)

- Uniform I/O calls across those devices (you must OPEN a device, and then can use CHRIN, GETIN, CHROUT, LOAD, SAVE calls to move data, and then you have to CLOSE it).

- Handles (called "logical file numbers", up to 10 open at once supported); a bit more sophisticated than CP/M, really.

- A rudimentary notion of standard input/output (called the "default" input and output device)

But this primitive OS definitely depended on what could be considered a single background task to read the keyboard (SCNKEY) and update the timer (UDTIM), triggered by an IRQ set to fire 60 times a second (50 for PAL). Tape I/O took over this IRQ and messed up the timer, though.

However, nothing in this primitive OS except for the reset routines even acknowledged the presence of the SID chip or the features of the VIC beyond the text display, so you were definitely on your own there.

> How does one approach a simple hardware project of poke at bitmaps and pixels stored in the Intel GPU from a "modern" Operating system.

Linux exposes a `/dev/fb0` device, doesn't it? Can't you `mmap()` this device and peek/poke to your heart's content, assuming something else isn't trying to write to `/dev/fb0`?
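
Something like this, as a rough sketch: it assumes a 32-bpp framebuffer (real code should honor the ioctl results in full) and a bare console, since X11/Wayland will immediately draw over whatever you poke.

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Fill the top-left corner of /dev/fb0 with white pixels. */
    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        struct fb_var_screeninfo var;
        struct fb_fix_screeninfo fix;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
            ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
            perror("ioctl"); close(fd); return 1;
        }

        uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* Poke pixels directly: the modern moral equivalent of POKE. */
        for (uint32_t y = 0; y < 100 && y < var.yres; y++)
            for (uint32_t x = 0; x < 100 && x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00FFFFFF;

        munmap(fb, fix.smem_len);
        close(fd);
        return 0;
    }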


I'm still infatuated with the likes of Smalltalk and Genera, where the entire system seems somewhat advanced and hackable.



