1980s computer controls Grand Rapids Public School heat and AC (woodtv.com)
278 points by ScottWRobinson on June 12, 2015 | 264 comments



I remember a conversation I had with a friend's father, an HVAC engineer. He brought home a control board one time and showed me the sensor inputs and control outputs. I don't remember how many there were, but let's say a dozen of each.

He described how a sensor would connect to this spot and feed the system temperatures from some remote part of the building. The control output would then go to the appropriate HVAC equipment and turn the A/C, heat, or whatever on or off to bring the temperature to where it needed to be.

The system would spider out around the building with these sensors and regulate the building's temperature.

At the heart of the board (which looked like a PC motherboard) was the CPU, a Z80, already hilariously ancient when he showed this to me. So I asked him, "why not use a more modern CPU?"

He responded, "Why? This Z80 can control an office building's entire HVAC system and poll each sensor 200 times a second. How many times per second do you need? Temperature in a zone doesn't change that fast."

It was my introduction to the concept of "Lateral Thinking with Withered Technology" https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_...
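
For a sense of scale, here is a minimal sketch of that kind of control loop - Python standing in for what would have been Z80 assembly, with hypothetical stub functions for the board's sensor inputs and control outputs:

    import random
    import time

    SETPOINT = 21.0   # target temperature per zone, degrees C
    DEADBAND = 0.5    # hysteresis so relays don't chatter around the setpoint

    def read_sensor(zone):
        # Stand-in for reading one remote temperature input on the board.
        return 20.0 + random.random() * 2.0

    def set_heat(zone, on):
        # Stand-in for driving one control output to that zone's equipment.
        pass

    while True:
        for zone in range(12):             # a dozen sensors, as in the story
            temp = read_sensor(zone)
            if temp < SETPOINT - DEADBAND:
                set_heat(zone, True)
            elif temp > SETPOINT + DEADBAND:
                set_heat(zone, False)
        time.sleep(1 / 200)                # ~200 sweeps a second, mostly idle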


This is so true, and even today with new designs you end up with overkill. A Cortex M0 (a 32-bit ARM system) with 128K of flash and 64K of RAM is 75 cents in quantity. That means the processor complex is essentially "free" with respect to the cost of the other bits (sensors, actuators, communication over distance). The risk is that with all that "extra power" the programmer decides to use it creatively for something like "a built-in web server to show you the status of everything"; that "feature" requires you to connect it to the wider network, the "web server" never gets patched, and now you have an HVAC system that becomes the exploit vector into a much bigger facility/network.

All because designing in a "limited" computer didn't make economic sense, and programmers couldn't help but use the extra CPU capacity that was available.

That is what makes IoT a challenge / bad idea to a lot of people.


Spot on. Programmers in general are not well-versed in security. I don't mean you, the reader of this comment. But as a collective. The other people who write code through which you can pass buses full of black hats. Not you. Web applications with huge budgets get owned by common mistakes. And those web applications don't have access to anything but data. Imagine having all those same, eh, security issues, in devices that can interact with the real world. I really don't want my microwave to suddenly turn on and keep going for hours at a time while I'm out of the house.

This comment is purely fictional. IoT is perfect.


Fnord.


I've noticed the opposite thing. Most of the hardware around is built on the weakest specs that still let the thing run. Those various 10 cent savings on flash and μC tend to add up quickly when you go into mass production.

But the primary problem, which is not limited to but obviously visible in IoT, is that companies ask themselves "what sells?" instead of "what is good and useful?". All that crap that is being created, with useless "features" that introduce security holes and foster the fragmentation of the ecosystem, is pushed because someone out there figures out that people will buy it. But almost no one understands the implications of all these "features", so the buying decision they make is usually wrong and stupid.

I wish someone cut out sales people from the design process. You should be able to get designers and engineers together, having them ask themselves what would be an optimal, actually useful smartwatch/smartfridge/smarttoilet/whatever and how to build it, and then tell sales people to figure out how to sell it. But no optimizing for better sellability.


I too have seen the intense penny pinching. Here in California the soda bottler removed one thread from the tops of plastic bottles; it saves probably a fraction of a cent in plastic, but it makes the detached retaining ring for the cap rub on your lips when drinking, which makes it uncomfortable to sip from those bottles. Such a huge price to pay in user dissatisfaction for such a small savings.

Can't go this far though:

   > I wish someone cut out sales people from the design 
   > process. ... no optimizing for better sellability.
In my experience, actually doing things this way leads to less economic success for the product, and eventually it gets outsold by a competitor without those restraints. At FreeGate I told sales people "you have to sell what we have, not what we don't have" and still had them come back with complaints about how the competitor could install their box in a data center etc etc. Not a productive conversation (or fun, for that matter).

There does seem to be a minimally required feature set for selling things these days. "High Quality" isn't the compelling feature it once was.


There's some penny pinching, for sure - I had a coworker whose brother is on the iPhone hardware team and they have a lot of trouble with samples coming back from manufacturing with the wrong resistor here or a missing capacitor there to save a few bucks, because the factory sees it as overengineering, but doesn't understand the purpose it's built for.

That said, relying on an older processor may actually not save money. Sure, there's a premium on the absolute newest processor, but in general what's cheapest is what is most mass produced Right Now(tm).

I think a Z80 ended up in something like this for reasons similar to why NASA control systems typically use the most reliable hardware they can, which means something that has been in use for many years.

For HVAC, maybe a little of each, but also the software may have been written to the Z80, and if you change that out, you have to do all the testing you'd have to do if you built a new machine.

I often think back on this old chat I had with my grandfather, where he kind of tilted his head at something I was explaining about 90s tech and said something like:

   "Interesting.  In my day, we programmed the software to the hardware, it kind of seems like now you all are programming the hardware to the software."


> I had a coworker whose brother is on the iPhone hardware team and they have a lot of trouble with samples coming back from manufacturing with the wrong resistor here or a missing capacitor there to save a few bucks, because the factory sees it as overengineering, but doesn't understand the purpose it's built for.

I find that story utterly implausible.

The day Foxconn makes unapproved changes to Apple designs is the day that...well, never.


I think you have an interesting point, but it ignores humanity. People want what they want for different reasons. As a marketer, it's probably easier to give people what they want than to change people's minds to accept what they need. I blame neither person in this situation; I'd only try to change the system which surrounds them.


I know that salespeople can often be the source of bad decisions, but determining market fit is still vitally important. Who wants to build (or, more importantly, fund) something that no one wants?


But then won't you be outsold by the products that have focused on sellability and features?


Totally off the top of my head, but:

It seems like a service discovery system for IoT devices might be a good idea, where the service discovery system tracks what is actually allowed to run a particular service - like an HVAC system running an embedded webserver.

For example, imagine if the industry had it so that the HVAC system announced that it had the capability to be an embedded webserver for status - but first checked to see whether there is a different host to which it should send its metrics instead. This way you could control which host is the core host for said website - and have all systems in the community basically ask for direction on self-hosting or publishing...
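
A rough sketch of that handshake, with the registry URL, field names, and reply format all invented for illustration:

    import json
    import urllib.request

    REGISTRY_URL = "http://discovery.local/capabilities"  # hypothetical registry

    def announce(device_id, capability):
        # Announce a capability and ask the registry how it should be exposed.
        payload = json.dumps({"device": device_id,
                              "capability": capability}).encode()
        req = urllib.request.Request(REGISTRY_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    decision = announce("hvac-01", "status-webserver")
    if decision.get("mode") == "self-host":
        pass  # start the embedded status server locally
    else:
        pass  # push metrics to decision["host"] instead of serving a site itself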


You have to not do that. Cycles you don't use cost absolutely nothing. Indeed, they may actually reduce jitter if you're careful (they should not, but that's another story).

This is the ultimate YAGNI.


It seems insane to think that a programmer would just "decide" to build a feature like that. It would have to be decided by the product people, who probably think it's a good idea.


That is exactly right. A product person who wants to sell this system wants as many bells and whistles as possible because, hey who knows what the one thing is that will push the customer over the edge into the "buy zone" right? So you end up with all sorts of stuff in there. In my experience it is rare to have a programmer who will push back on that request.


[deleted]


Constrained resources encourage simplicity by design. Use only what's necessary, no more, no less.


Well, "no less" is not always true. How many controllers exposing vulnerable interfaces are out there because encryption added too much overhead?


I'd consider security, and the resources to enable it, necessary.


Yes, and the constraints encouraged those developers to use less than necessary.


Z80s are still available new and in IP core form for integration into SoCs... as are the 6502, 8051, and several other "classic" MCUs designed in the late 70s/early 80s.

As I'm typing this my keyboard's controller is an 8051 variant, the touchscreen of my phone also uses an 8051, the mouse has a 6502, and the monitors in front of me have an 80186 core in them.

They are fast enough for the task, cheap, and development tools are widely available, so they really don't need to be replaced.


Interesting, considering the article: it was a defining part of Commodore hardware design that they compensated for slow CPUs by using co-processors all over the place, including putting a 6502-compatible CPU in the keyboard controller for the A500 and A2000...

But you'd also find this spilling over into 3rd party hardware: my HD controller for my Amiga 2000 back in the day had a Z80 on it.

That machine was totally schizophrenic: in addition to the 6502 core on the keyboard, the M68k main CPU, and the Z80 on the HD controller, it also had an x86 on a bridge board - the A2000 had both Amiga-style Zorro slots and ISA slots, including one slot where both connectors were in line, so you could slot in a "bridge board" with an 8086 that effectively gave you a PC inside your Amiga, with the display available in a window on the Amiga desktop.


It's kind of mind blowing that our cheap peripherals are driven by what used to be top-of-the-line processors only a few decades ago. I guess all that firmware has to run on something.


As I mentioned in another comment, the ca. 1987 Amiga 2000 in this article already had a 6502-compatible core on the keyboard controller, and some same-era HD controllers had Z80s on them - they were cheap even then.


Do you think it's better to teach uni students on new processors and tools (e.g. Freescale, ARM, etc.) or on older Z80 or 80186 CPUs?


I think university students should definitely start with older processors, and then gradually move up through the levels. I agree there is an architectural change in the newer processors, plus the additional cores. But working with an older processor with limited memory and processing ensures the programmer realizes how important each line of code is, and appreciates the comfort provided by newer processors and thus their complexity.


The first computer I programmed was a Z80 micro-controller connected to some basic peripherals (LED readout, sensors, actuators, stepper motors, potentiometers, etc...). There was no compiler, no assembler; nothing but a keypad to enter the instructions into memory and a button to start execution.

The CPU was less powerful than any of the 32-bit x86 chips that were widely available at the time, but as a kid it still really gave me the idea that whatever I could think of, I could make a computer do.

I'd agree, understanding things at a really basic level first helped me to better understand things at a higher level later on. It probably helps me to keep in mind what a computer actually needs to do to run code as well. I think it's probably one of the reasons Knuth uses MIX in TAOCP.


Kind of a "which students" sort of question.

I'd say with the older ones. With those, you can put a logic analyzer on the memory bus and see what's going on - if the pins aren't on a BGA under the chip and the board has no vias.


Working on the older CPUs is more approachable for understanding all the low-level details, plus it makes you appreciate all that the newer CPUs offer. However, when actually working, I don't think one should work with an older CPU unless it really makes sense (sufficient compute power, low power requirements, etc.). Working with a powerful CPU lets you focus on the job at hand instead of the idiosyncrasies.


I don't think this is true at all; older CPUs are not a "more purified" and "cleaner" version of today's. They have the same and often considerably more cruft and craziness.

To work with them is to teach bad habits and useless skills.


Some older CPUs, maybe, but you can't seriously look at e.g. the 68000 next to an x86 CPU and tell me the 68000 is not cleaner.

It's not that they don't have craziness, it's that the functionality that mere mortals need to use to write efficient code is simpler.

The M68k's 8 general-purpose data registers and 8 general-purpose address registers alone are enough to make a huge difference.

For me, moving to an x86 machine was what made me give up assembler - in disgust - and it is something I've heard many times over the years: it takes a special kind of masochist to program x86 assembly; for a lot of people who grew up with other architectures, it's one step too far into insanity.


I have the pleasure of working with PowerPC in my day job. Also a relatively clean architecture. I really do wish that Apple had been more successful with it, that Microsoft would have continued supporting it in NT, that Motorola / IBM had kept up with Intel in raw performance, and that it had a larger user base than it does today.


Not to mention the m68k flat address space. A clean architecture for clean code.


Just look at the 6502. No two instructions follow the same pattern - every one is a moss-covered three-handled family credenza, to quote the good Doctor.


The 6502's instruction set is pretty regular, with most instructions of the form aaabbbcc. For instance, if cc==01, aaa specifies the arithmetic operation and bbb specifies the addressing mode. Likewise with cc==10, aaa specifies the operation and bbb the address mode. See http://www.llx.com/~nparker/a2/opcodes.html

The regularity of the 6502's instruction set is partially a consequence of using a PLA for instruction decoding. If you can decode a bunch of instructions with a simple bit pattern, it saves space.
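
You can see the fields just by slicing up an opcode byte - a sketch following that write-up, showing only the cc == 01 group's mnemonic table:

    def decode_6502(opcode):
        # Split an opcode byte into the aaabbbcc fields described above.
        aaa = (opcode >> 5) & 0b111   # operation
        bbb = (opcode >> 2) & 0b111   # addressing mode
        cc = opcode & 0b11            # instruction group
        return aaa, bbb, cc

    # Group cc == 01: the ALU instructions, one per aaa value.
    GROUP_01_OPS = ["ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"]

    aaa, bbb, cc = decode_6502(0xA9)  # LDA #immediate
    assert cc == 0b01 and GROUP_01_OPS[aaa] == "LDA"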


After arithmetic, instructions have little or no regularity: they omit addressing modes and swap codings for modes. There are internal hardware reasons for this, but for the programmer it's chaotic.


Not that it's in the least bit relevant to the discussion, but the moss-covered three-handled family credenza is not a Dr. Seuss quote found anywhere in his books; it came from the '70s-era 'Cat in the Hat' TV adaptation, authored by Chuck Jones.


Cool! I never knew. I guess it shouldn't be considered 'canon' then.


That's just not true. It has irregularities, but most of the instructions fit into a small set of groups that follow very simple patterns.

But secondly, where the 6502 deviates from a tiny set of regular patterns it is largely by omitting specific forms of instructions, either because the variation would make no sense, or to save space - the beauty of the 6502 is how simple it is:

You can fit the 6502 instruction set on a single sheet of paper with sufficient detail that someone with some asm exposure could understand most of it.


The x86 family is the same.


Oh there is quite a lot of consistency in the structure of instructions across the basic set - register numbering, many instructions allow full register and addressing modes. The 6502 had pretty much no two instructions the same.


What mouse uses a 6502?


This one:

http://www.mcuic.com/bookpic/200811516244620817.pdf

(Look at page 9. This IC is found in a lot of generic mice.)


TI runs all its low-level calculator stuff on Z80 emulators which are then helpfully run by whatever actual chip they are putting in the calculators these days.


Nope; with one exception, the Z80-family calculators are still run by real, bona-fide Z80s (or, in the case of the new TI-84 Plus CE, an eZ80).

(That "one exception" was the TI-84 Keyboard for the original Nspire, which did run the 84's firmware in a Z80 emulator on the Nspire's ARM processor.)


From time to time, I am greeted with looks of shocked disbelief when a younger employee finds out how much of my employer's business gets done on OpenVMS Alphas and IBM Mainframes. They think it's stupid that we're not running it all on HP servers in the data center.

The thought never occurs to them that it's rock solid, only needs quarterly patching (at most), and has had 20+ years of tweaks that make it fit our needs. We don't need to replace it, yet.


When people really need something that's reliable, there's really no limit to how much effort can be put into producing a system with unfailing integrity and availability.

Take, for example, the lockstep facility on certain IBM processors:

https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/29...

You can take two or more of them, run identical software on them, and compare their output on a cycle-by-cycle basis.
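
In software terms the idea reduces to redundant execution plus comparison - a toy model only, since the real hardware compares signals every clock cycle in silicon:

    def run_lockstep(step_fn, make_state, inputs):
        # Run two identical replicas and fail-stop the moment they diverge.
        state_a, state_b = make_state(), make_state()
        for x in inputs:
            state_a, out_a = step_fn(state_a, x)
            state_b, out_b = step_fn(state_b, x)
            if out_a != out_b:   # a flipped bit in either replica shows up here
                raise RuntimeError("lockstep divergence - halt the system")
            yield out_a

    # Two replicas of a running-total "program" agree on every step.
    step = lambda total, x: (total + x, total + x)
    print(list(run_lockstep(step, lambda: 0, [1, 2, 3])))  # [1, 3, 6]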

Now, the 750GX series may be a bit out of date in the modern era ... but good luck achieving that level of paranoid system integrity with just about any truly modern system.

One thing that I think they don't teach so well in most colleges is that a system's compute performance is not always the most important measurement of the system's capability.


Not to mention there are VMS clusters with 20+ years of uptime. The systems, especially Alphas, were so reliable that at least one sysadmin forgot how to restart them and had to consult the manual, lol. I wish they made a good desktop. I'd have used it and probably lost less work. ;)

A link for you: http://h71000.www7.hp.com/openvms/brochures/commerzbank/comm...

Notice how "all systems crashed" from heat except the AlphaServer. That's great engineering, right there. It's why I wish they were still around outside eBay. That plus PALmode: most benefits of microprogramming without knowing microprogramming. :)


>It was my introduction to the concept of "Lateral Thinking with Withered Technology" https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_....

Thanks for sharing this, I love finding creative new ways to take advantage of 'tried & true' technology and it's something that regularly feeds into how I build software–sometimes to the displeasure of colleagues who are most interested in the shiniest new tools. It's interesting to read about how this sort of thinking worked for Nintendo.


Why write your own 10-line function to do it when you could use this library and do it with 3 (not including the 3k-line lib)?


Why? So you can use modern, friendly UIs to control it instead of scary there-be-dragons only-one-guy-in-facilities-is-allowed-to-touch-it Win32 apps.


The counter to lateral thinking nowadays is power efficiency. Imagine underclocking the iPhone 6 processor to drive a first-gen iPhone screen. I imagine you'd get quite a bang for your buck on that one.

Though I am a big proponent of lateral thinking in general, for battery powered devices the optics change a little I think.


At some point it may become cost-prohibitive (or unwieldy in some other way) to continue to manufacture such chip designs, even though many applications may not require additional raw horsepower.


At best they can replace it with a Raspberry Pi instead of a full-on PC.


If your first thought is that $2M to replace an Amiga is ridiculous, stop.

They wouldn't be replacing the Amiga for $2M. They'll be replacing a 19 site remote HVAC control system, certainly including new radio systems, local controllers, and the central controller. They will want something with a warranty and a service contract for the next 20-30 years.


The radio system made sense back in the 80s before cheap, reliable internet access was widely available, but I feel like IP would be a much more practical communications solution today, especially considering the fact that the 2-way radios used by the maintenance staff interfere with the existing system.

Though your point is still totally valid, I'm just thinking out loud.


> before cheap, reliable internet access was widely available

Cheap and reliable Internet access is still not widely available. You don't want to have your HVAC stop working because the ISP botched something up. Frankly, IMO there should be zero reason for any of the HVAC control loop to extend beyond the building. Sending data from your controller to your device next to it through half the world and back is insane, and yet it is what's happening with IoT right now.


You couldn't be more right. To this day, I encourage the use of radio, dial-up, closed circuits, cheap leased lines... anything but the Internet... for mission-critical transmission. Just way too many issues, from reliability to security. Using simple lines and protocols with little to no routing lets one use equally simple hardware. If resources are left over, one can use them for boosting manageability, reliability, or security.


This is so true, speaking as someone who worked at a company that did power monitoring and wanted to (and did) expand into thermostat control.

1) Reliable internet isn't everywhere, it cuts out a bit.

2) A lot of internet-enabled thermostats are just not that great at reconnecting when they eventually lose their connection.

Though to be fair, the thermostats generally will just continue on the normal schedule if they lose connectivity.

The company ended up using a different wireless protocol to talk from the base station to the thermostats (Z-Wave or ZigBee, fwiw).

Ironically, we had an 80s-era programmable thermostat in the office, which worked pretty well.


Well, as far as the cheap part goes, my thinking was that these buildings definitely already have internet service in the first place, so they may as well just utilize that. I suppose you make a point about reliability, but then again they're putting up with the whole 2-way radio interference, so it seems like the bar is already set low :P

I don't know, again I'm just thinking out loud, I'm no HVAC professional. We do use the internet to manage off site devices in my company and we tend to have more issues with power outages than we do with our ISP uptime, it's been a good solution for us.


If it is a really critical system, the proper way would be a T1 circuit between the two buildings. The circuit would only go from one building to the local telephone office, then back to the other building. They tend to be pretty reliable, but expensive. You could then run a LAN over this.


Funnily enough it likely is already using TCP/IP: https://en.wikipedia.org/wiki/AMPRNet


Man I really hope someone out there has streamed Spotify or NPR over this. Recurs-inception.


Whoa that is funny! Ha I had no idea


What's wrong with just using thermostats in each building? This seems way over-engineered.


The same reason you don't put a space heater and window AC in every room. Because it's inefficient, ineffective, and archaic.


Yep, you want the whole thing to act like a system, not a bunch of parts. Energy savings and a comfortable environment really depend on this. A poorly set-up system can be a nightmare for the maintenance staff.


And so this is why, in our office with a thermostat and ducts in every room, it's typically freezing in a third of the conference rooms, another third are way too hot, and the in-room thermostats give zero meaningful control over any of it?

Not saying it's not a good idea, just saying that in reality I've seen these systems work very poorly together in most office environments I've been in.


Well, if you have thermostats in every room, I would imagine that it would be impossible to get a consistent temperature. All those commands going to one system that has to sort them out and move air cannot be good.

If I ever teach CompSci again, I would think about making a climate controller a problem set. My first choice is a storage building controller, but this would be interesting.


That sounds like bad air balancing, but that the control system is "working".


A big reason is cost-savings. This prevents each building occupant from setting the AC too cold, or forgetting to turn it up/off when they leave at the end of the day and for breaks.


The best way to set the temperature at a comfortable level is guessing every 30 years what you want each building to feel like.


Do you expect that over the next 30 years our idea of what a comfortable temperature is will change?


Temperatures are set based on things like the use of the room and the season. Both of these change, and usually only one is somewhat predictable.


Are we sure that the current system doesn't have that?

Even if they do install local controls, a central monitoring system makes some sense for buildings that go unoccupied for days at a time.


I dunno. You could find plenty of people that would make it their full time job to just turn on and off systems for 2M over 20 or maybe 30 years. :)


At 20 years, $2m means you can hire 3 people for $33k a year to do the job. At 30 years, you can only hire 2 people for $33k/year. That figure does not include benefits. That may or may not be viable for 19 schools (depending on how far apart they are, etc).


The $2 million doesn't have to sit around for 20 years in a non-interest bearing checking account. Put $1.5million in a 5 year CD and get another ~$50k/year. Give them other maintenance work to do while they are at the school turning on and off the heat, and it becomes more viable.


It would mostly be all the contractor and subcontractor costs. I'd say they need at least a couple of electricians, which are usually unionized and charge plenty (I have no issues with unions, it just costs more). The plumbers, because you can't hack into a wall without hitting a water pipe. Then the HVAC people, who have different sub specialties. Then the people who do the cement/wood/boarding. Not including the inspectors...


Here was an $88M project to replace a system of similar vintage: http://www.smh.com.au/technology/technology-news/30yearold-r...


Thanks for the clarification; the article was misleading to the max.


Welcome to West Michigan local media (though I'm sure this phenomenon isn't unique to this area). While it can probably be explained by bad writing, they've also learned that anything that provokes the ire of tea-partiers is golden for driving pageviews.


I don't see how anyone could have thought that it cost $2 million to replace this computer. It became very clear that it was for a revamp of the entire system.


Yeah, as well as coding a new system from scratch. If they wanted to replace just the Amiga, they could go on eBay and spend $400.


And it probably would not function as well as this (even with this setup's many issues).


I see no reason for it not to. Keep in mind that this system has also required maintenance over the years (just like any system). The cost savings from improved efficiency (more granular control, reacting to weather & building occupancy, etc.) will also be a huge improvement.


I tend to be very cynical these days. Far too many things are not designed for longevity. But, I can certainly see where a newer HVAC system would be more efficient.


Modern climate control systems are quite nice if set up correctly and do a very good job. They'd probably save quite a bit of money over the 30+ year lifespan of the new system.


I still don't completely see why it'll be $2M, unless that's the total cost including the service contract for the system's lifetime. I mean, the hardware and installation labor can't be that expensive, right?


The GRPS website says that they operate 67 buildings, which leads me to speculate that the $2 million includes upgrading other parts of the HVAC system at each school.

Not all that much pops out of a few searches, but there are discussions of projects that make me think that $100,000 per building isn't a lot to spend on such a thing.


Oh, weird, the article said 18 buildings. With 67, $2 million seems more appropriate.


The upgrade would apply to only the 18 buildings.

The point I was trying to make was that they likely already have other systems in the remaining buildings.


Yes, it doesn't quite add up. For that money, you could hire the guy to turn things on and off by hand for 50 years.


The record for obsolete computing probably goes to Sparkler Filters, which still uses an IBM 402 tabulator from 1948 for their accounting. This machine reads information from punch cards, adds it up, and prints reports. That wouldn't be too bad, except that this tabulator predates the vacuum tube generation. It is electromechanical, using mechanical counter wheels for addition.

It is programmed by plugging dozens of wires into a control panel; each wire indicates something like this card column goes to this adder and that printer column. To change the program, you pull out the panel and replace it with a different panel. The "software library" consists of shelves of panels wired up for each task.

This system is actually surprisingly sophisticated considering the lack of technology. For instance, there's a mechanical mechanism to suppress leading zeros in numbers. It can print text as well as numbers, and supports three levels of subtotals as well as conditionals. It's also fairly fast, processing 150 cards per minute.
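
In modern terms, each wired panel is just a mapping from card columns to adders and print positions. A sketch, with the field layout invented for illustration:

    # One "panel" as data: card columns 10-17 feed adder 1, printing at column 40.
    PANEL = [
        {"card_cols": (10, 18), "adder": 1, "print_col": 40},
    ]

    def tabulate(cards, panel):
        adders = {}
        for card in cards:                    # the 402 managed 150 cards/minute
            for wire in panel:
                lo, hi = wire["card_cols"]
                adders[wire["adder"]] = adders.get(wire["adder"], 0) + int(card[lo:hi])
        for wire in panel:
            total = str(adders[wire["adder"]]).rjust(8)  # leading zeros suppressed
            print(" " * wire["print_col"] + total)

    tabulate(["0" * 10 + "00001250" + "0" * 62,          # two 80-column "cards"
              "0" * 10 + "00000375" + "0" * 62], PANEL)  # prints 1625 at col 40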

Links: http://www.pcworld.com/article/249951/if_it_aint_broke_dont_... https://en.wikipedia.org/wiki/IBM_402 http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/punched...


This legacy system issue doesn't just happen to remote/low budget operations either. While they've now moved to emulating the system on what are presumably VMs, Columbia University uses what amounts to a 40 year old system to post final grades to the student administration portal. Official transcript grades are updated once daily, at some time in the wee hours of the morning. From a Columbia CS professor:

"Columbia processes grades by using a system that was first released in 1972. In order to have kept it running for more than 40 years, it has consistently run special-purpose emulators that make its otherwise state-of-the-art systems think that they are stuck in the 70s and using an operating system called “CP/CMS”:

http://en.wikipedia.org/wiki/VM_(operating_system)

The grading system is written in a programming language called “Focus,” which in 1975 was one of the very first database languages developed and released:

http://en.wikipedia.org/wiki/FOCUS

But because of this, grades are processed only once per day, in a batch job that runs at about midnight every night. I am not making this up.

The university, recognizing that it is time for it to upgrade, does have plans for replacing the grading system. The upgrade is scheduled for the year 2020. I am not making this up either."

http://bwog.com/2014/01/06/why-your-grades-take-so-long-to-s...

I imagine these schools will eventually move to something similar: an emulated system with specialized hardware to interface with the mechanical equipment.


Many universities have systems like that. Banks too. Probably true of many early adopters: they built systems that worked and didn't rewrite them simply for the sake of having something new.


At my first job, we built I/O cards that did stuff like the Amiga in the article does. The cards used adapters so they could run on anything from TRS-80s to PCs, because at the time the system was conceived, the IBM PC hadn't emerged as the dominant desktop yet.

The company is long out of business, but every so often someone finds some of their products at a garage sale or industrial auction or it comes in a box of other things on eBay. When they search for manuals on the product they often find me because by coincidence someone asked about those products on a forum I frequent and that conversation tends to come up in the first page on google.

It's amazing how many really old systems are still running after decades without change and no one gives it a thought until something breaks. I've often wished I had a list of their old customers so I could contact them and offer my services to upgrade old controls.

[sigh] another business opportunity missed.


Make a webpage with all the info you have on the subject, state what you just stated. Done. Free business.


Yes and please do blog about it regularly if you feel like it - this whole thread on legacy systems is the most interesting read on HN for a long time IMHO.


[smacks self on head] Thank you. For some inexplicable reason the most obvious solution honestly never occurred to me!


Most of my credit cards will not update the balance immediately after I make a payment. It takes a day (sometimes more, depending on timing). Clearly batch processes are involved.

There's nothing wrong with batch processes. They are often easier to reason about than having a system where everything can change in real time.

Sometimes you want real time. Sometimes (many times) a daily batch is just fine.


Credit card processing does seem to be moving towards real-time. Lately I've found more and more merchants where I get a notification on my phone instantly after my Amex is swiped.


Also gives you a chance to change your mind, say to cancel a pending bank transaction in the evening.


You can have delays for that purpose in real-time systems too. And they'd actually be consistent, instead of having variable lengths to cancel the transaction.


When I interned at IBM from 2003-2006, we submitted our timecards using a terminal application accessed over telnet. I don't know exactly what system it was, but the key commands certainly evoked thoughts of early mainframes.


It's not for the sake of having something new. New technologies do often provide obvious benefits. It's the cost of a new version (both in software development and reorganization of processes) that may be too high to justify these benefits.


Probably works a lot better than the crappy peoplesoft system my college bought into...


You have that too. Are you from UCT?


That Peoplesoft system dominates a lot of the schools in the US. It spread like the plague on promises of being tailored to every school's business logic. Unfortunately it did so via vomit-inducing over-abstraction to the point where interfacing with it just to lookup a student record takes hundreds of lines of XML/SOAP wrangling.


> grades are processed only once per day

Playing devil's advocate here...

Do grades change during the day?


Grades on the Columbia equivalent of the Moodle/Blackboard student portal can and do update instantly as soon as teachers hit submit. This is a recently updated, fairly modern system. However, these are not necessarily the official grades that go on your transcript. In my experience, teachers often never post the (possibly curved) final letter grade on this system. I assume that transcript grade system is a separate portal that teachers have to input the final letter grades into. This system, from the POV of the student, only updates once per day. Teachers put their grades in at 5pm? You get your "grade has been posted email" at like 2 am, and practically speaking get your grade whenever you wake up.


Is that really that big of a deal, though? There would be effectively zero benefit to giving the students their new grades on demand instead of after a batch process. It's not a reason to upgrade the system, it just shows that the system was made when batch jobs were considered acceptable.


I do not miss those days of waking up early in the morning after finals to find those emails. I would nervously read them in hopes of not being disappointed by my grade.


My memory of University is that professors are the bottleneck to updating your grades, not the nightly batching system.


Different teachers putting their grades in at different times of the day? Sure.


Why should the computer have to deal with what is obviously a poor policy decision on the part of the teachers and administrators?

Enter the grades when you're working - i.e. during your work day.

Oh .. I know, teachers work way more than they are given credit, and don't make nearly enough for the work they do, given that they're up until 4am doing some paperwork or other in order that a few hundred students get their results .. just playing devils advocate here. This is a 40-year old system, still working. If it ain't broke, why fix it?

Seriously though, my personal nerd-ego says that anyone who can keep a system running in emulation over 4 decades deserves a special kind of award for the high status of their accomplishment. This is something few of us are capable of doing these days, alas ..


It doesn't really have anything to do with policy or working odd hours. Unless everyone presses enter at the same time, grades will be input at different times. (I'd love to see administrators trying to corral senior faculty into doing that, though.)

As to whether a system should process grades more than once a day, that's obviously a judgement call that Columbia has made. And as someone who worked in the administration of similar university, a small part of me is cheering on whoever at Columbia has held fast against the demand for up-to-the-minute GPAs.


Actually, so you're saying you think it is all the result of smart policy, and I agree with you now. I see that up-to-the-minute GPA's are a real burden. Funny though, that they all have to hit enter at the same time, didn't realize it was that brilliant. ;)


Grades would change any time an instructor chooses to input new grades. Some exam is graded, homework is turned in.

The vast majority of them wait til the last minute to do it, right at the end of term.


IBM mainframes incorporate VM technology in both senses of the word: the "abstract ISA" sense and the "simulated hardware" sense. They knew they were building systems that may have to be online for decades, and the exact same object code that ran on the original System/360 runs bit-identical on a brand new z/Architecture mainframe (though obviously faster).


I know what you mean comparing it to Columbia University, but Grand Rapids isn't exactly a remote location. It's the second largest city in Michigan. We're not talking about northern Alaska.


It's not remote, but the school district hasn't exactly been flush with cash over the past few decades. If this occurred in East Grand Rapids, Forest Hills, etc., I would be more surprised.


Hi Chris :)


Even modern software would work like this. Many universities use a product called Appworx that schedules jobs... and your grades are only going to be updated once per day.


'Old code' !== 'Bad Code'

I would say that's an assumption we as developers need to teach society, but we should really work on teaching it to ourselves first. I can't tell you how many times I've heard "We should just start over" rather than try and understand what we have.


Agreed. It ran for 30 years and hasn't had problems (other than the radio interference issue). Anything that has lasted for 30 years has had nearly every bug worked out or worked around. The only things that survive for 30 years are systems that do the job.


Until a single point of failure hits and you're in a mad-scramble to get a replacement system up and running.


Exactly! That's the ultimate problem with these kinds of systems. It's why I advocate incrementally re-engineering a compatible system onto modern hardware and standards. Not even anything fancy necessarily: just easy to maintain, portable, and fast enough. Companies like Semantic Designs even make source-to-source compilers that do bug-for-bug translations. I'm sure we could do dynamic translation of binaries for the dependencies, too, although I'm not sure anyone does that.


This. Nobody today would even consider trying to design something so simple it could last three decades. And certainly nobody would want to support it. Everyone wants shiny and new...


Whether or not they realise it, I think people are building brand new systems right now, using cutting-edge technology, which will survive until 2045.


Some Arduinos with an Ethernet shield and a super simple web backend?


Nah, somebody would say it needs JSON-serialized config files, a flash file system, a Rails-based admin login running on a few Amazon instances for failover...


"Simple" Ha. There's this whole coprocessor in the ethernet shield with its own proprietary implementation of TCP/IP and who knows when someone will find a serious bug in it. Then in 30 years you have a TCP/IP stack with a serious security issue and no way to patch it!


Ethernet is great, but thirty years from now it and/or RJ-45 might sound just as antiquated.


Doubt it; 10Base-T is 26 years old, and still works with today's modern 10Gbase-T NICs.

There will be backwards compatibility options for an awfully long time.


How right you are. On the other hand I'm continually surprised that my paycheck in 2015 comes (more or less) from a descendent of an operating system I learned in the 80's.


Sure. And then it's time to retire it, after a job well done.


The software would only be a small part of this project. The issues they discussed in the clip (risk of sudden failure, radio interference) were hardware issues.


It's not the code that's concerning, it's the hardware. The idea that they never bothered to virtualize or emulate this in 30 years says a lot about how IT is run at that school.

A single developer could emulate whatever IO goes to that radio and have it controlled by a simple computer with redundancy. I wonder what they're paying the original developer to maintain this turkey. I imagine he's charging enough to where the sunk cost fallacy is obvious here.

What I find interesting is all the places where FOSS is lacking. There doesn't seem to be a popular and well managed FOSS climate control system. Is anyone making standardized controllers, thermostats, etc? This stuff should be a commodity by now. I'm guessing it's hard to work with all the proprietary HVAC stuff, and it's also an unsexy problem to solve. I'd love to replace my thermostat with a raspberry pi and have a lot of fun modern features. There's a hackaday article about someone doing this, but it's a pretty primitive project.
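
On the "emulate whatever IO goes to that radio" point: if the link is, say, a 1200 bps serial modem, the shim itself is almost trivial. A sketch using pyserial, where the device path, baud rate, and frame size are all assumptions:

    import serial  # pyserial

    # Sit where the Amiga used to sit: on the serial line feeding the radio modem.
    radio = serial.Serial("/dev/ttyUSB0", baudrate=1200, timeout=1)

    def handle(frame: bytes) -> bytes:
        # The hard part is recovering the Amiga-era reply logic, not the plumbing.
        return b""

    while True:
        frame = radio.read(64)       # whatever a site controller sends
        if frame:
            reply = handle(frame)
            if reply:
                radio.write(reply)   # answer exactly as the old software did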


There are some projects starting to bring FOSS ideas/ideals to building system controls and decision making like this, but I think there are significant difficulties in the variety of proprietary systems and reluctance to change due to capital costs and/or maintenance contracts.

https://github.com/VOLTTRON/volttron from Pacific Northwest National Lab is one example.


> Is anyone making standardized controllers, thermostats, etc?

http://www.bacnet.org/


It is true that old code is not necessarily bad code. However, HVAC system controls have had very large improvements in efficiency over the last 30 years. HVAC systems are the single largest use of energy in a commercial building and the improvements that modern controls can offer in efficiency can greatly offset the costs of installing those new systems.


I think they got very lucky that the original programmer didn't move away or decide he didn't want to maintain this thing anymore, whenever some dingus from the school district decided to bother him. I'm seeing a lot of people ignore that aspect of the situation. There's a reason we talk about the bus number.


I'm mostly a back-end guy... does it really matter if you use != vs !== when you know that both operands have the same type in JavaScript? In this case, you've got string literals, so (flying spaghetti monster, I hope there won't be any type conversions going on).


It matters for anyone who maintains your code. "!=" is a big red flag, better to consistently use "!==".


True. Hardware, though, does get old.


Was hoping it would be an Amiga, and wasn't disappointed. Though it seems like this particular system is ripe for replacement because of the issues mentioned, I have a lot of respect for staying with old soft- and hardware instead of chasing something new and shiny when the old system still gets the job done.

A few years back a seller came to my workplace to ask if we wanted to advertise at our local cinema. We didn't, but I had a chat with the guy and was pleasantly surprised to discover that he was still creating and showing the ads using an A1200 with a genlock card running Scala (25 years old this year, real popular in the nineties with cable companies and broadcasters as big as CNN and BBC). Reason for not upgrading? "It still works great." Put a big smile on my face for the rest of the day.


For those confused by 25 year old Scala: https://en.wikipedia.org/wiki/Scala_(company).


Thank you. This being HN, I immediately was thinking the FP-ish programming language, which would seem to indicate that it predates the JVM...


I'm a bit confused: are you saying that the system was running Scala for 25 years? I thought Scala was just a few years old.

And was the genlock card a Video Toaster by any chance?


The first version of Scala was released in 1990 if I'm not mistaken; he probably used a later version, as he was using an A1200, which had the AGA chipset that improved graphics considerably. He'd been using it "for about twenty years", and as this was two or three years ago and the A1200 was released in 1992, he probably had a 22-year run on the same hardware and software. He didn't specify the card he used, but I think it's safe to say it was the Video Toaster given its popularity at the time.


Scala (the programming language) and Scala (the Amiga video software) are two different things. :)


> A Kentwood High School student programmed it when it was installed in the 1980s. Whenever the district has a problem with it, they go back to the original programmer who still lives in the area.

Now that's job security!


I wonder what the hourly rate is ;)


30 years of service? Sounds like a success story.


Especially in light of the cost of "updating" it. I realize it's not $2M just to replace an Amiga.


Back when I was in high school, I decided it would be cool to try to hack into the school computer system. I grabbed the white pages, figured out the range of numbers belonging to the school, and fired up my demon dialer.

Sadly, the only system that picked up was an HVAC control computer with no authentication. Apparently they figured that no one would find the number. After some experimentation, I was able to change the temperature in specific buildings where I went to class, which was neat for about a day. Anyway, this article makes me wonder if that system is still online.


My company, an ASX30 corporation, still runs its operations through an emulated IBM 3270 terminal sitting on top of a DB2 v8 mainframe with literally thousands of tables in it. 4 character transaction codes and ctrl as 'enter command' for all!

We're currently developing a replacement SAP solution. I'm yet to decide which is more obtuse.

edit: The SAP solution was spurred not because the old system was breaking but rather that when the COO asked for some daily reports and the IT propellerheads replied that the data crunching to generate said 24 hour reports took more than 24 hours to run.


How common are remote radio controllers in historical or modern systems?

I think the most interesting part of this article is the use of radio for a wide-area network, even if it only got a passing mention in the article.

Some questions I'd love to know: What kind of protocol was used? Are computer uses of walkie-talkie radio bands allowed by the FCC? How do the receivers work? If they don't run on their own microprocessor, how were they designed?


Data on radio channels is common! All you need is a "radio" and a "modem," and there are many variants of either.

These kinds of radio channels are already allocated to 'business' or 'government' service by the FCC. The user has licensed the use of one or more in a specific geographical area. These channels allow ±2.5 kHz FM deviation these days, where the old fashion was ±5 kHz. They were allocated with FM audio in mind, which is what the portable radios do. You're normally allowed to run data on these channels.

The radio used at each node is probably a "mobile" radio intended for mounting in a vehicle but configured as a kind of base station by adding a DC power supply. Motorola has been selling microprocessor-controlled, frequency-synthesized radios since the early 1980s, and the earliest one I know of used a 6800 processor (actually a Hitachi 6300-series clone of Motorola's CPU). The channel is probably shared with the maintenance chatter because the county already had the license, a radio fleet, and a shop to maintain the radios. By changing a "CTCSS" or "PL" tone to one that is different from the ones used by the voice radios, the maintenance people don't have to hear squawks and whistles all day and night.

The modem is probably an FSK kind of modem with a rate of something like 1200 bps. This works by representing 0 and 1 with different audio tones, and was very simple to implement with complete integrated silicon available during the 1980s. It's interfaced to the radio's audio input and output, the computer/modem needs a way to cause the radio to transmit, and it is nice to have a signal to show whether the radio channel is busy or idle. Otherwise it is probably a serial interface to the local controller at each site. The modem itself may have a microprocessor to do simple tasks like ensuring the transmitter does not get stuck on by a fault, or preventing transmission when the radio is already receiving a signal.
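
The tone-per-bit idea fits in a few lines. A sketch with Bell-202-style numbers assumed; a real modem adds framing and filtering:

    import math

    BAUD = 1200                    # bits per second
    RATE = 48000                   # audio sample rate
    MARK, SPACE = 1200.0, 2200.0   # tone pair for 1 and 0

    def fsk_samples(bits):
        # One tone per bit, with continuous phase across bit boundaries.
        phase, out = 0.0, []
        for bit in bits:
            freq = MARK if bit == "1" else SPACE
            for _ in range(RATE // BAUD):
                phase += 2 * math.pi * freq / RATE
                out.append(math.sin(phase))  # feed to the radio's audio input
        return out

    audio = fsk_samples("10110010")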

As far as the protocol goes, it can be anything and is probably something extremely simple: My address, length, payload, checksum.
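
That kind of frame fits in a dozen lines; a one-byte additive checksum is assumed here for illustration, where a real system might use a CRC:

    def make_frame(addr: int, payload: bytes) -> bytes:
        # Pack a message as: address, length, payload, one-byte checksum.
        body = bytes([addr, len(payload)]) + payload
        return body + bytes([sum(body) & 0xFF])

    def parse_frame(frame: bytes):
        # Return (addr, payload) if the frame checks out, else None.
        if len(frame) < 3:
            return None
        body, checksum = frame[:-1], frame[-1]
        if sum(body) & 0xFF != checksum or body[1] != len(body) - 2:
            return None              # garbled on the air; drop it
        return body[0], body[2:]

    assert parse_frame(make_frame(7, b"TEMP=68")) == (7, b"TEMP=68")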


Almost certainly Packet Radio: https://en.wikipedia.org/wiki/Packet_radio, probably on the amateur band: https://en.wikipedia.org/wiki/AMPRNet


This application should not be allowed in Amateur Radio service, because it's being used as part of a business or government operation.

The piece of spectrum this system actually uses, since it is shared with their maintenance radio fleet, is definitely not part of any amateur service allocation ("band").

The term 'packet radio' also implies (to me anyway) certain kinds of applications and messages, which don't apply to a system like this. Anyway, if there was some kind of IP or other routable protocol on the air here, that would be overdoing it. The messages probably look more like what you'd find on a multidrop serial bus, with elements including source and destination address, length, payload, checksum. A bunch of FSK modems working on the same radio channel is not much different to a multidrop serial bus.


This is actually surprisingly novel, and especially impressive given the developer was in high school at the time.

Running ethernet to these locations can be challenging, and often an SMS based approach or mesh network is used instead - which is really not that far off from this approach (with some obvious benefits).


Thanks! I knew they used something but nobody ever told me what the old standards were. I'm adding it to my link farm as some call it. Bet I find a use for it, too. :)


Pedantic, but if it's an Amiga 2000 (which it appears to be from the video) then this can't have been running since 1985, as the Amiga 2000 was released in 1987. But the guy does say the computer came from eBay, so perhaps this is a replacement for their original Amiga.


I'm not sure how the RF interface works, but if it uses a Zorro card, the A2000 would be the first computer that could run it. If it used serial, then even an A1000 would be a possible system.


When I worked on the Hubble Space Telescope in the late '90s, I kept hearing the electrical guys talking about the new computer we were going to install. They called it simply "the 486". One day I joked that I had had a 486 a few years ago in college and maybe they could use that and save some bucks. And they told me it was the same thing - literally an Intel 486 processor. They said that the design happened to be very resistant to radiation, and met their purposes very well.

I haven't really kept up on it since, but I believe it's still running a humble 486. Granted, the rest of the machine isn't much like my old Dell, but I always found that interesting.


Yep, that radiation hardened 486 was used quite a bit. I believe it was used for the Space Shuttle as well.

The Mars rovers have moved to a more efficient PowerPC platform, but it's still old tech. Radiation-hardening CPUs isn't cheap, and the processing requirements for what these projects do aren't high (it's more sane to just send the data to NASA and have terrestrial computers do all the heavy lifting), not to mention you don't run a bloated desktop OS on these things. A minimalist VxWorks RTOS is typically used.

Price? The RAD750 on the Curiosity Rover starts at about a quarter million dollars.


The Space Shuttle should be so lucky - 8086 (though I read 80386 for a cockpit upgrade). First flight 1981, predating the 486 (1989). It's interesting: NASA was raiding all sorts of old machinery to keep a stock of 8086s.


If memory serves, older process technologies are intrinsically a little more rad-hard, just because a 1um 6T cell is roughly a hundred times larger than a 14nm 6T cell... (giving it greater drive strength and larger "ballast parasitics" to absorb radiation events)

They probably built them with a lot more margin back then too.


The GRPS director of communications just gave a presentation on the bond proposal last night at our neighborhood association meeting. Like any urban district, GRPS has faced a very challenging few decades with declining enrollment, competition from charters and suburban districts, and a largely impoverished student population. They've made huge strides over the past few years, since the current superintendent took over in 2012. It's been really exciting to watch.

At the presentation, the spokesman stressed that the previous bond was strictly to shore things up—anything that could be delayed, was. I can see that he wasn't kidding.


And here I am reading this article in 2015, with top of the page broken saying "Please install the latest Adobe Flash Player Plugin to watch this content."


Amiga? That had a Motorola 68000 CPU, if I remember correctly.

I knew when I saw the MC68k the first time that it had the Schwartz to rule them all — and decades later it's still true :)


I thought that. Then again, living in the UK, our first 32-bit ARM personal computers turned up in 1987. They even ran rings around high-end Unix kit. And here I am in 2015 typing this on a 32-bit ARM A7...


Polish Railways (state-owned) had an Odra 1305 machine, manufactured in 1973, still operating at one of their stations until 2010. It's fascinating, really.

https://en.wikipedia.org/wiki/Odra_(computer)


A 30-year-old computer that has run day and night for decades...

I know it's a "ha-ha, look at that antique" piece, but seriously, how many computers could you buy at a BigBox store today that will run for 30 years non-stop? The Amiga was just an off-the-shelf consumer PC.


I wonder how much software from the 70s and 80s in not-so-famous languages is lost forever. And is there an effort to preserve such works?


The Computer History Museum has established a Software Preservation Group: http://www.softwarepreservation.org/

They also have artifacts like MacPaint source code in their collection: http://www.computerhistory.org/collections/catalog/102658076


Here is one effort. There may be more of course:

http://www.textfiles.com/bitsavers/


What for?


History, for starters.


And PDP-11s still run nuclear plants.


Ah! PDP-11! I mis-spent my youth on one. I remember seeing one standing out in the rain in Los Alamos at an electronics surplus store. Didn't even rate getting inside the building under cover. Now that's melancholy.


Nuclear launch commands are still stored on 8-inch (yes!) floppy disks!



"A new, more current system would cost between $1.5 and 2 million."

How on earth! And where can I apply?


The proposal wouldn't be for replacing a computer and a 1200 baud modem. It's for replacing a distributed HVAC control system at 19 locations. If you're doing that, you're going to want to bring the system up to current standards, and that includes things like commissioning [1] the system for energy performance and replacing controls and logic at both ends... or maybe moving away from a hub-and-spoke architecture. And it all comes with RFPs for design, public bids, performance bonds, insurance, warranties, and all sorts of things that grandma's Wordpress site doesn't typically require.

[edit] To put it in perspective: energy use dwarfs the cost of getting the controls right. Let's call the cost $1.9 million and the number of sites 19, giving a cost per site of $100,000. Let's assume that the energy cost per site is $100,000/month and that the controls are capable of a 10% efficiency improvement. That puts the payback period at 10 months, so let's call it a year. Even if the efficiency improvement is only 1%, the payback period falls well before the 30-year life cycle of the building (which, not by coincidence, matches the 30-year maturity of typical bonds, and of the bond-financed control system built around an Amiga).
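
Spelling out the arithmetic (a back-of-the-envelope sketch in Python; every input is an assumption from this comment, not a figure from the article):

    cost_per_site = 1_900_000 / 19        # = $100,000 per site
    energy_per_site_month = 100_000       # assumed monthly energy bill per site
    for improvement in (0.10, 0.01):      # optimistic and pessimistic cases
        monthly_savings = energy_per_site_month * improvement
        months = cost_per_site / monthly_savings
        print(f"{improvement:.0%} improvement -> payback in {months:.0f} months")
    # 10% -> 10 months; 1% -> 100 months, still well under a 360-month bond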


$100k per month per site for energy? That number seems high to me, but I can't find a source either way. Is that really what schools pay?


Doesn't seem high to me. I recall that my school district replaced the HVAC system while I was in high school, and the energy savings paid for the new multimillion-dollar system in something like 5 years.

http://www.schoolenergysaving.com/schoolEnergyFacts.php

"A mid-sized school district with 800,000 square feet of space pays more than $1M annually for energy."

"Space heating, cooling, and lighting together account for nearly 70% of school total energy use."


That 800,000 sq ft is for an entire district. With a little searching, I found that Saline High School is the single largest school in Michigan at 480,000 sq ft. Most large schools seem to be in the low hundreds of thousands.

So the $100k/mo figure is probably high by a factor of a few - maybe an order of magnitude at most.


Knock it down an order of magnitude and the payback goes to 100 months versus 360 months on the bonds financing the renovation.


You might have noticed the trend of "de-malling," transforming indoor malls into open-air shopping centers. While there are many drivers for this, the ongoing cost of HVAC is a big one. So it doesn't surprise me that schools would pay so much.


That doesn't seem unreasonable to me. My guess is they'll also be updating some of the controlled equipment at the same time.


2 million for 19 buildings gives you slightly more than 100K per location.

To me, it looks like a very optimistic guess.


I'm guessing they mean to replace all the heating equipment as well.


$2m might cover design fees for swapping out mechanical systems across 19 schools, but probably not, considering that this would be at the high end of the typical public-project fee curve because of how messy HVAC renovations are for the sort of water-distributed systems usually installed in large, 40-year-old public buildings. It's not going to be craning some package units in and out from the rooftop.


From TFA:

Bringing Stocking Elementary out of moth balls, replacing boilers and roofs, and removing asbestos were just some of the projects GRPS put on the Warm, Safe and Dry list before the Commodore computer.

It seems they've already replaced boilers. They could have fixed any leaking radiators while they did that. ISTM they really are just talking about the control system. Perhaps it made sense to build a custom system in 1982, although I really doubt it. Nowadays, Honeywell certainly sells units that can just be dropped in, at each site. Unless they have steam tunnels running between the 19 buildings, centralization of control seems unnecessary. Sure, holidays, snow days, and terms move around slightly, but schools are on the internet now so staff at district HQ can manage each site's schedules when they need to do so. It would make sense to have centralized reporting, but just run that over the internet instead of the walkie-talkie bands.

IANACivilEngineer!


Current best practice [and perhaps state regulations for public schools] requires commissioning the controls. And RFQs, RFPs, sealed bids, etc. The project requirements are discontinuous with a homeowner's thermostat replacement in the same way that database cluster requirements are discontinuous with a Comcast residential gateway.


That seems orthogonal to centralization? Sure, an HVAC tech should regularly inspect each site, and should be on call for any issues. That doesn't require split-second coordination of mechanical operation among separate sites. We're talking about heating some schools, not operating a nuclear enrichment facility.

Neither is there anything about the "commissioning" process that mandates that control systems may only be replaced when mechanicals are also replaced. That would be ridiculous. Please note that I am not quibbling about the $2M cost. I merely observe that TFA discusses the control system only.


Commissioning etc. are orthogonal to the system configuration, and we should just agree to agree that a centralized control point is not necessarily necessary. OTOH, IMPO, the control system requirements should be specified by an HVAC PE.


Yeah we're agreed on that. I have seen bad results from e.g. duct layout design by "experienced technicians".


You could probably virtualise it onto Raspberry Pis.


If it ain't broke, don't fix it.


> The only problem is that the computer operates on the same frequency as some of the walkie-talkies used by the maintenance department.

It's awesome that it's been functional for this long, but with the press coverage I hope they don't receive unwanted attention, given that this system was implemented in the 1980s, before computer security was as large an issue as it has become. [1]

With the amount of publicity this article has generated, I bet someone with an SDR is going to drive over and analyze the RF signals, and then hijack the HVAC systems for the lulz.

[1] https://en.wikipedia.org/wiki/Notable_computer_viruses_and_w...


Except it's in Michigan.


As a Grand Rapids resident who knows quite well how to use an SDR, I'm going to choose a positive interpretation of this comment and assume you're referring to the phenomenon of "West Michigan nice", where hacking a public system for lulz would be absolutely unconscionable.

That being said, I'm actually rather tempted to at least figure out what the air interface is. The article mentions a 1200 baud modem - does a standard UHF/VHF voice channel have enough bandwidth to run PSK/FSK V.22 or V.23? That'd be quite a hack.


Very likely it's Bell 202 AFSK. This is specifically designed to run at audio frequencies with 1200 Hz and 2200 Hz tones. You'd have to figure out how the higher protocol levels are implemented but it's probably going to fall within the realm of what is done with AX.25.
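
For the curious, Bell 202 is simple enough to sketch in a few lines. A minimal transmit-side example in Python (continuous-phase FSK at 1200 baud with the 1200/2200 Hz tones; this just illustrates the modulation, it is not the actual GRPS air interface):

    import numpy as np

    FS = 48000                 # audio sample rate
    BAUD = 1200                # Bell 202 symbol rate
    MARK, SPACE = 1200, 2200   # Hz: tone for a '1' bit, tone for a '0' bit

    def afsk_modulate(bits):
        # Continuous-phase AFSK: integrate frequency so tone changes
        # don't click and the signal stays inside a voice channel.
        samples_per_bit = FS // BAUD
        freqs = np.repeat([MARK if b else SPACE for b in bits], samples_per_bit)
        phase = 2 * np.pi * np.cumsum(freqs) / FS
        return np.sin(phase)

    audio = afsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])  # feed to a radio's mic input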


I meant that it's far the heck away from everywhere and the population is low. Thus, a low incidence of griefers of any kind. But your take is good too.


Two hours from both Detroit and Chicago isn't far. A city of 180,000 isn't small; it's the second-largest city in the state, behind Detroit.


And hey! In a list of US states by population, Michigan is #10! Who knew?!

https://en.wikipedia.org/wiki/List_of_U.S._states_and_territ...


Anyone who has ever been there? I guess unless you flew directly into the UP... b^)


Is there an airport up there?


Hey, I've been there, and remember endless empty forested spaces, with some towns every hour or so.


Yep. The Northwest Territory is now home to over 46 million people. Just because it's less dense doesn't make it empty.


If it's about to fail, have a replacement ready.


It's not like an Amiga 3000 or 4000 is that expensive anyway.

And, probably, they could run their software under an emulator, on any US$200 desktop PC.


It's not just the software. They're controlling these systems via radios that operate on the same frequencies as other radios in use. It's (at this point) clumsy and error-prone. To be fair, they probably saved the taxpayers a lot of money by using this system for as long as they have.


> To be fair, they probably saved the taxpayers a lot of money by using this system for as long as they have.

Indeed. The major annoyance I see is the radios interfering with each other. The centralized control architecture is also something I'd like to change. Each building does not require anything much smarter than an Arduino to be completely standalone and, if you want to be fancy, an RPi to send collected data to a centralized location.
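
To make that concrete, per-building control is basically a hysteresis loop. A toy sketch (read_temp and set_heat are hypothetical stand-ins for whatever sensor and relay bindings the hardware exposes):

    import time

    SETPOINT = 20.0   # target temperature, deg C
    DEADBAND = 1.0    # hysteresis, to avoid short-cycling the equipment

    def control_loop(read_temp, set_heat):
        heating = False
        while True:
            t = read_temp()
            if not heating and t < SETPOINT - DEADBAND:
                heating = True
                set_heat(True)    # close the boiler/furnace relay
            elif heating and t > SETPOINT + DEADBAND:
                heating = False
                set_heat(False)   # open it again
            time.sleep(60)        # a zone's temperature changes slowly

Schedules and reporting can ride on top of that without any central machine in the control path.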


Another viable approach could be to set up Amiga emulators on old/discarded PCs and repurpose the existing software, using wired instead of wireless connections.


The $2M number probably covers a major overhaul of every part of the system. Savings on utility bills alone will probably more than cover the difference.

I suggested a US$200 PC because many of those have no moving parts and more storage space than any Amiga ever dreamed of. They should easily be able to last 30 years or more.


Well, at least planning for the future seems prudent. Otherwise, if such a system breaks permanently, you're scrambling to get something else up while the whole system is down (even ignoring the question of backups and data migration to a new system).


Well, the article says "If the computer stopped working tomorrow, a staff person would have to turn each building’s climate control systems on and off by hand" - I know that's not exactly a tiny job across 19 schools, but it doesn't seem like a hugely tough task.


There can be dozens of air conditioners on the roof of a school, and they have to be cycled on and off all day long as they cool the building, or else it'll turn into winter in there. Definitely a full-time job for somebody. That's why we have thermostats.


You do wonder if those staff people would be ok turning it on and off by hand if they got the "$1.5 to $2 million" needed to replace the system.

That said, I'm pretty sure I could get an Amiga emulator running on a Raspberry Pi for a lot less than $1.5 million.


As has been said multiple times before, it's not $2 million to keep this kid's school project running for another 30 years, it's $2 million for a better HVAC automation system. They've probably pissed away multiples of $2 million in energy costs not having a better system in place the past 30 years.


No kidding, but that's not how the article was written. My second paragraph was a comment on that.

Every budget ever written stuffs things that management thinks can be put off until later into funding for things that absolutely need to be done today.


Often it's about dealing with interfaces. A couple of years ago I worked at a place with a laser cutter, quite an old one. You sent instructions to a program that talked to an interface card, which controlled the cutter. The program ran under DOS. It wasn't possible to replace the OS or the machine easily, because it was an ISA card with custom interface software that relied on having direct hardware access.


If nobody can fix it, break it.


There is at least one Amiga emulator. http://www.winuae.net/


What's the realistic life expectancy of modern hardware?

I ran a Raspberry Pi system for monitoring my sprinklers and such, but after about 9 months it has had some funkiness, which I suspect is power-related, and the flash has had a couple of little glitches. I couldn't see it lasting 5 years. I have an ODROID-C1 that seems more robust in terms of power (it has a dedicated transformer, not some cheap USB supply that merely seems to "work"), but I do worry about the flash crapping out... Although if the policy is to burn a few flash cards and just keep some spares, you should have quick recovery.

Maybe with some good server-grade hardware in a data center you can expect a decent lifespan.


Most embedded system failures are due to power and I/O.

If you have robust power supplies from a reputable manufacturer (I like TDK/Lambda) with good filtering on the input (maybe external powerline filter modules), and you protect your external inputs and outputs, then the next thing to worry about is flash lifetime, which should be measured in decades if you're not writing to it.

The problem with a Raspberry Pi/BeagleBone is that it's running Linux and there are likely disk writes occurring, so your lifetime drops. If you can both limit disk writes and get long-lifetime SSDs, I don't see why it shouldn't last at least 10 years.


The flash survival rate heavily depends on how the system is configured. If you move every writable directory (/tmp, /var/log, /var/run, etc.) to a tmpfs filesystem and make the main filesystem read-only, it'll live much longer (and it will survive hard reboots better).

The Debian Wiki has an overview of what you might need to change to achieve it: https://wiki.debian.org/ReadonlyRoot
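
The core of it is a handful of fstab changes along these lines (a sketch only; the device name is an example and the wiki page covers the distro-specific details):

    # root mounted read-only; volatile state kept in RAM instead
    /dev/mmcblk0p2  /         ext4   ro,noatime     0  1
    tmpfs           /tmp      tmpfs  nosuid,nodev   0  0
    tmpfs           /var/log  tmpfs  nosuid,nodev   0  0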


"What's the realistic life expectancy of modern hardware?"

Lower than it used to be. The design life for the Ford EEC-IV engine control system from the 1980s was 30 years. The program is mask-programmed onto the chip, and the parameter table is in a fuse-blowing type of ROM. Many of those are still on the road.

With newer hardware, lifetimes are shorter. This is a big problem for long-lived military systems.[1] Electromigration becomes more of a problem as IC features get smaller.

There are embedded systems where 30-50 years of operation is needed. Pumping stations, HVAC, railroad signals, etc. have equipment which can run for decades with occasional maintenance. The NYC subway is still using century-old technology for signaling. It's bulky, but works well.

[1] http://mil-embedded.com/articles/obsolete-cycle-costing-you/


I read somewhere, actually I think it was in an interview with Google's Saul Griffith, that no smartphone was designed to run longer than 6 or 7 years in ideal circumstances. Realistically, I imagine 5 years is unusually long regardless.



Amiga Forever is basically WinUAE.


There's FS-UAE as well.


"That's probably because he wasn't sick. He was skipping school. Wake up and smell the coffee, Mrs. Bueller. It's a fool's paradise. He is just leading you down the primrose path.... I've got it right here in front of me. He has missed nine days...." eight, seven, six..


At the end of the TV news bit, the GRPS Maintenance Supervisor said something to the effect of "if we had to replace it tomorrow, we would be looking on eBay... which is where this one came from." This doesn't make sense if the Amiga has been running for 30 years and was purchased in the '80s. Perhaps he meant a part or accessory that had failed and been replaced with one purchased there.


I don't know, Amigas are tough little critters. Part of it is that they're designed to load most of the OS in RAM and keep it there, so there are very few moving parts.


Why am I not surprised that it's an Amiga?

I did a bit of work at NASA Ames, and they were just getting rid of Amiga 1000s used for video compositing.


Wow, an Amiga still running 30 years later, 24/7! That's solid hardware. As a former Amiga owner, I find this impressive.


There are still several forums full of current Amiga users, [1] [2] [3] [4] being the most prominent English-language ones. The most amazing thing is that there is still new expansion hardware being designed and manufactured for the original machines...

[1] http://www.amiga.org

[2] http://www.amigaworld.net

[3] http://www.amigans.net

[4] http://eab.abime.net/


Those kinds of results are why I encourage copying past designs for new hardware projects in the safety-critical space. Especially the NonStop architecture, which is patent-free now. Take one of the older ones, make it 32-64 bit, put modern interfaces on it, put it on an older SOI process node for reliability, add optional lockstep, and the result should run very well for a very long time.


Seattle's University Bridge was raised and lowered by an early-80s Compaq until recently.

http://blogs.seattletimes.com/today/2014/03/seattles-univers...


There is nothing bad about a computer system lasting 30 years, if it's the right kind of system. Desktop computers like this one were never made for this kind of application.


So at least they don't have to worry about viruses.


I thought this was going to be about a C64. You'd have to POKE it to switch the heating on.

It. Just. Works. though - they should never upgrade.


Michigan hipsters


The GRPS headquarters is dangerously close to Eastown.


A successful application of agile methodology, despite the term not yet having been invented.


Hackerman exists after all!


Well, at least they don't have to worry about viruses.


Virii were mostly spread over floppy (boot block or trojan) back then. While I agree that it's very unlikely that this system will get infected, don't think that times were better back then :)

http://www.teyko.com/View.aspx?id=346&name=Saddam+Virus

...which was a rather brilliant little thing, but a personal hatred of mine, as it messed up many a .s file (.s stood for "source code" or "Seka", I guess).


Obligatory nitpick regarding the plural of "virus": http://linuxmafia.com/~rick/faq/plural-of-virus.html


Richard Karsmakers used "virii" to describe a plural of computer virus in the 16-bit era, largely on the basis that it was shorter. It isn't correct Latin, but it is a VX scene thing, so is absolutely correct in this context.


1980s computers control most of manufacturing today.


"Don't fix what's not broken."


Many companies run on SAP R/3 ABAP (1983) code, or even still on SAP R/2 ASM code. The ABAP syntax is somewhat similar to COBOL: https://en.wikipedia.org/wiki/ABAP . And many banks still use PL/1 (1964) or COBOL (1959): https://en.wikipedia.org/wiki/PL/I and https://en.wikipedia.org/wiki/COBOL . And Fortran (1957) still has some of the best optimizing compilers, often surpassing even C compilers: https://en.wikipedia.org/wiki/Fortran



I'll build the replacement for $800k


That it would cost $1.2 million to replace something that was probably done for, oh, $20K in (say) 1980... says something about where we've gone.

For reference, $1.2 million in 2014 dollars is around $418K in 1980 dollars.


I'm pretty sure they could buy a $50 laptop off Craigslist instead of a $1.5-2 million system upgrade. These are the morons teaching children.


They aren't replacing a computer. They're replacing an (at this point) hard-to-maintain system for controlling HVAC equipment and related infrastructure across the district. For 19 schools, this seems cheap to me. More modern HVAC equipment and controls will also reduce energy costs, saving the district money further down the line.


The irony of accusing someone of being a moron without understanding the scope of the project you're criticising them for.



