Yeah, like 'baq said, the data wasn't stored in tabular form, it was actually XML. So sometimes you could just look at the textual diff and it would make perfect sense, although users weren't expected to work with the XML at the source level.
There was also a semantic object-level diff we got for "free" by virtue of building on top of the Eclipse Modeling Framework. It was integrated into the Eclipse git UI and could help resolve merge conflicts without having to touch the XML directly, but merge conflicts were still annoying to deal with so generally engineers coordinated with each other to not touch the same part of the model at the same time.
Normally for review, though, I think users tended to compare reports generated from the model rather than trying to diff the source model files directly. There was a sort of automated build process that took care of that once you pushed your branch to GitHub.
not OP but the usual applies - data is not actually stored as a table in git; tables are a UI thing. git would store standard-issue JSON, XML, or whatever custom git-friendly format the tool uses.
I will just chime in to mention Draftable (https://www.draftable.com/compare). It really works well. It’s not so easy to have a visually comfortable diff of two PDFs.
I cannot agree more with the author's point of view. As an illustration, many people want to use GPS for the safe positioning of trains in the European Train Control System. This makes the space sector happy because it justifies the expenditures incurred in putting things like Galileo in orbit. However, in a pre-war check exercise, one immediately comes to the conclusion that all European trains would crawl to a stop if GPS were jammed or interfered with. We were not much listened to… until Ukraine.
Critical infrastructure should not depend on things that are located in space or on the other side of the planet. This is one of those areas where market logic should be preempted by regulation (we can't wait for the next Titanic). Another point touched on by the article.
Railroads can now outsource train control.
Wabtec's "Wabtec Cloud Positive Train Control Communication Solution" - "A complete turnkey hosted office solution for I-ETMS-based Positive Train Control (PTC) systems"[1] (Wabtec used to be Westinghouse Air Brake.)
Wabtec has had break-ins, but claims they only involved employee info, not control systems.[2]
Lol, is this basically a train SaaS solution? What's wild to me is that SaaS products aren't actually required to issue CVEs, since customers aren't the ones responsible for patching.
This may be the first time I had that "well, that's enough Internet today ..." reaction on HN from a cybersecurity/cyber-physical protection perspective, and not from something gross on Reddit.
The entire Ascension Healthcare system of hospitals (142 hospitals, 2,600 total facilities) has been on divert since 8 May because they had to switch back to paper records. Change Healthcare has lost $872M since it was attacked in February.
Maybe it's more like the pandemic: seems like nothing, unless it affects you.
#2 > Does your stuff need computers working 5,000 kilometers away? [implying that's bad]
What if you live on the Gulf Coast, exposed to hurricanes? You want compute resources warm and ready far away from that region. After Katrina, the Tulane medical school was able to re-form quickly because the noteservice was running a bulletin board forum on a VM in Romania. Everything else was underwater.
#3 > This is the sound-powered phone
Have you used a sound-powered phone? I managed damage control on a ship. Sound-powered phones barely work. And the coordination needed to actually fight a fire requires radios and overhead announcements, which definitely depend on electrical power.
#4 > They tried to sort of renew this emergency telephone network
When the entire San Diego region lost power during rush hour for 4 hours in 2011, the cell phone system still worked. I was able to email documents to Tokyo from a car despite no traffic lights.
#5 > Because if the cable to the US is down
Sure, but there are a lot of disasters where the cables are fine. Graceful degradation is all about having widely distributed options. Lots of people have WhatsApp. Signal is even better for people with more serious responsibilities, IMHO. And, friends, if you think IP networks are vulnerable, get yourself a Starlink terminal and a ham radio license.
> but there are a lot of disasters where the cables are fine
We are talking about war-like situations, where one state actor has an incentive to cause maximum harm to another. Exposing your infrastructure like this invites damage unlike anything a natural disaster can cause - for example, disrupting communications right before an attack. Similar issues (though via lower-tech hacking) happened on the 7th of October during the Hamas attack on Israel, where over-reliance on advanced, complicated technology became a liability.
The stuff you describe makes sense in normal, peaceful situations, where the cost of securing certain infrastructure can be higher than the cost of the occasional power cut. That has nothing to do with what the article really says, which is basically that infrastructure is currently not secure against a potential hostile state attack. Also, in that case, a hostile state actor can combine attacks that together cause more damage than the sum of the attacks independently.
There was a massive DDoS in Israel on 10/7, fake alerts of a nuclear missile launch were sent, and newspapers like the Jerusalem Post were taken down and temporarily defaced[0]
The back of my head is screaming "defense in depth! Redundant systems!"
The whole idea of the internet (and even some of our infra, like suburbs or highways/rail) is that there's no one single point of failure. Like designed-to-survive-nuclear-war redundant.
Definitely incorporate the most advanced tech you can for when things are going smoothly, to get that efficiency gain, but there's a reason all branches of the military (that I'm aware of) still train and test their aptitude with paper maps and trig instead of relying 100% on GPS and electronic devices.
>The whole idea of the internet (and even some of our infra, like suburbs or highways/rail) is that there's no one single point of failure. Like designed-to-survive-nuclear-war redundant.
The reality, of course, is that the internet has turned into a fragile, centralized, complicated system that rests on single points of failure like Cloudflare, AWS, and Chrome. The internet as envisioned by DARPA would have survived to be used by cockroaches; the internet of today would not.
Thinking about this, though, it's really the big tech companies manufacturing "the latest thing" to be tossed in the bin after a year. Dollars over longevity. Then they become "no longer maintained." Could we STILL use a 3G network? Or is there a simpler, slower network that would be good enough, barring our pointless desire for cat videos?
And some folks wonder why companies still use floppy disks on air-gapped infrastructure. Because it fucking works; don't litter it with complexity just to modernize.
Now… the situation with the skills to manage infrastructure? Now that the whole AI thing is happening? The internet is going to be fucked, people. It's time to go analog.
> The whole idea of the internet (and even some of our infra, like suburbs or highways/rail) is that there's no one single point of failure. Like designed-to-survive-nuclear-war redundant.
Sure, the routing algorithms can quickly adapt to changes in network topology, but they assume infinite bandwidth, which hasn't been the case for a long time now.
In other words, if a couple of important pipes between tier-1 peers disappear, the alternate routes will certainly have trouble handling all the new traffic, which would make everything grind to a halt and would only be solved by pissed-off network admins null-routing the additional load.
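To make the bandwidth point concrete, here's a toy sketch in Python (made-up topology and numbers, nothing like a real BGP/OSPF implementation): shortest-path rerouting happily dumps 800 Gbit/s of displaced traffic onto a 100 Gbit/s backup path, because capacity never enters the calculation.

    import heapq

    def shortest_path(graph, src, dst):
        # Plain Dijkstra on link cost; link capacity plays no role at all.
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            for nbr, cost in graph.get(node, {}).items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path))

    # cost ~ latency, capacity in Gbit/s -- all values invented for the example
    links = {
        ("A", "B"): {"cost": 1, "capacity": 1000},  # the big direct pipe
        ("A", "C"): {"cost": 3, "capacity": 100},   # skinny backup route
        ("C", "B"): {"cost": 3, "capacity": 100},
    }
    demand_gbps = 800  # traffic that used to ride the A-B link

    def graph_without(failed):
        # Rebuild the adjacency map with one link removed (the "cut pipe").
        g = {}
        for (u, v), attrs in links.items():
            if (u, v) == failed:
                continue
            g.setdefault(u, {})[v] = attrs["cost"]
            g.setdefault(v, {})[u] = attrs["cost"]
        return g

    path = shortest_path(graph_without(("A", "B")), "A", "B")
    bottleneck = min(
        (links[(u, v)] if (u, v) in links else links[(v, u)])["capacity"]
        for u, v in zip(path, path[1:])
    )
    print(path, f"- bottleneck {bottleneck} Gbit/s for {demand_gbps} Gbit/s of demand")
    # -> ['A', 'C', 'B'] - bottleneck 100 Gbit/s for 800 Gbit/s of demand

The route converges fine; the congestion is what kills you.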
Definitely, we've seen this with fiber cuts before. That said, degraded availability is better than no availability.
I know it's controversial in the context of net neutrality, but personally I'd be okay with traffic shaping/prioritization for critical infrastructure in cases like this. Keep the power plants, emergency services, military, government, and transit running ahead of Instagram and Netflix when it comes down to it.
Does the government not maintain its own dedicated communication infrastructure between important installations? Or has it all been replaced with public connections?
"It depends." Two data points that I know of first hand:
1) There is a dedicated microwave link between Vandenberg Space Force Base and Edwards Air Force Base. Military-owned and operated solely for their own use.
2) The US Federal government decided to build a standardized communications network for government/first responders/etc. This is FirstNet. They contracted the build-out to AT&T and gave them 20 MHz of spectrum (Band 14); it runs over AT&T's standard wireless infrastructure and network, but FirstNet traffic gets prioritized.
It’s been 25 years since I’ve even been remotely exposed to them, but I believe the military currently has an unclassified network (NIPRNet), a Secret-level network (SIPRNet), and a Top Secret network called JWICS.
I think all three are physically separated from the commercial internet and from each other, but don’t quote me on it.
> When the entire San Diego region lost power during rush hour for 4 hours in 2011, the cell phone system still worked. I was able to email documents to Tokyo from a car despite no traffic lights.
Around me, cell towers have 3-5 hours of battery when utility power is out. If your outage had gone on much longer, you would likely have seen cell towers start dropping out.
Of course, my area also has some other nasty SPoFs. A couple of years ago, a telco cable was severed: DSL was out for everyone, and at least some of the cell towers were live but had no service. A few weeks ago, the cableco had its wires severed: cable TV and internet were offline, and so were some cell towers. IIRC, for the telco outage T-Mobile worked and Verizon didn't, and for the cableco outage T-Mobile didn't work and Verizon did. Not sure about AT&T.
Part of this is that nobody has cared about security since the beginning, for basically anything in tech.
It’s an industry-wide issue that permeates every level of the stack. And so yeah, individual companies trying to retrofit security onto a Jenga tower of technology are going to have to spend a ridiculous amount of resources to have any kind of impact.
I don’t know what the answer is, but I too believe things won’t change until the day someone figures out how to push a “kill all humans” OTA update to all the self-driving cars on some random Tuesday afternoon.
> I don’t know what the answer is, but I too believe things won’t change until the day someone figures out how to push a “kill all humans” OTA update to all the self-driving cars on some random Tuesday afternoon.
Even in that case I’m pessimistic that any action will happen. People will go on TV and say grave things, hearings will be held. Fingers will be pointed. Task Forces will kick off. Reports will be written. Bureaucrats will have stern conversations with bureaucrats. Politicians will say: we must this and we shall that. IT companies will sell their “solutions”. But no actual action will happen. It will be all talk and commerce but no actual hands unplugging and plugging in cables. We have completely lost the societal will to actually do anything besides generate words and reports.
You are describing the current world, where politicians dissolve issues. There’s a saying in Europe that no minister of defense was ever nominated. Real ministers of war, when there is war, appoint themselves into position.
When there is a real problem, people act on it (assuming society is functional - otherwise the country simply dies). That’s why there is no better training for war than war itself. Ukraine has already uprooted all of the peace & love & no-armament folklore in France, and even turned a lot of ecologists into pro-nuclear voters.
So yes, I wouldn’t be surprised if guarantees of offline mode (with regular drills) were passed into law for electric cars and everything cloudy, including IntelliJ.
> Part of this is that nobody has cared about security since the beginning, for basically anything in tech.
> It’s an industry-wide issue that permeates every level of the stack.
Can you explain? I don't understand. Here's my take.
Let's start from the bottom of the stack. CPUs have some good security protections. They have ways to ensure that boot code is signed. They have hardware protection for memory. They have memory encryption to isolate VMs. They have many amazing security technologies. I can run a VM that the host can't access.
Let's move to the OS. Well, there's a lot of security stuff in any OS: process isolation, namespace isolation, encrypted storage.
The next level is the container orchestrator, which happens to be Kubernetes these days. Again, there's lots of security stuff there, built-in and as add-ons. Everything is authenticated with cryptography. There are many ways to implement very granular secret sharing. Secrets are encrypted at rest.
The next level is the application framework. I can't speak for every framework, but all the frameworks I've seen so far were quite security-conscious. They try to safeguard against known security issues (like SQL injection), they make it easy to add security layers on top, and so on.
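To illustrate the SQL injection point, here's a minimal sketch using Python's sqlite3 as a stand-in (real frameworks wrap the same idea in ORMs and query builders): the parameterized form keeps user input out of the SQL parser.

    # Parameterized queries vs. string concatenation; sqlite3 is just a stand-in.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

    user_input = "bob' OR '1'='1"   # classic injection attempt

    # Dangerous: concatenation lets the input rewrite the query itself.
    unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())               # -> returns every row

    # Safe: the driver binds the value as data, not as SQL.
    safe = "SELECT name FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # -> [] (no such user)

The framework-level safeguard is basically making the second form the default and the first form hard to write by accident.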
Nobody cared about security in 1984, I guess. That's not the case anymore. Everyone cares about security. Maybe there's still room for improvement.
The only people who don't care about security are end users. They don't even know what security is. They don't care about their passwords. They don't care about sharing their access. They don't bother to check the domain before typing in a password.
Also, some application developers don't care much about security, that I admit. But that's not every level of the stack; that's the last level of the stack.
Zero days capable of nuking the OS are not going to be found in random apps or malware. Anyone with that kind of ability will be using it for nation-state targeted intelligence ops, not wasting it on random individuals.
Security wasn’t really a design consideration, especially in the one-user, one-PC era. We’re still trying to secure hardware and software descended from that era.
One reason is probably that retrofitting security is a freaking nightmare.
In my opinion, security (as well as Quality, and things like error handling, accessibility, and localization) is something that needs to be planned and implemented, from Day One.
Do a better job from the start, and the cost will drop like a stone.
I’ve found that there are quite a few things you can do, from the start, that make implementing security measures later a lot easier.
Think of it as a “pegboard.” It has a bunch of holes to hook things onto. You make sure to brace it well, and use good masonite. That way, you may not know exactly what you’re going to hang on it, but you have a good infrastructure for it.
I find it additionally odd that the author calls this era pre-war. Ukraine is certainly at war right now with a very potent cyber state. Their infrastructure seems to hold up OK. It's not perfect, but it's definitely not the doomsday described in this article.
Tbf their infra holds up because their infrastructure workers put their lives on the line every single day repairing it under horrible conditions of shelling, etc.
On my most recent trip there, I was amazed at how, despite being routinely hit by missiles, their train system's "on time" record is better than that of British or even German trains.
This is only possible because their railway workers have balls of steel and go out to repair damage fast, and sometimes get hit in follow up strikes.
Same with energy workers - they go out and repair stuff during air alarms, in the immediate aftermath of strikes they perform damage control and mitigations.
On the hospital system part, there are actual timelines and goals to harden their systems after seeing what happened to the HSE in 2021. The issue is that some parts of the chain have been slow on the uptake.
That said, paper based redundancies do exist as a massive ransomware attack is similar in impact to a multiweek power outage.
> What if you live on the Gulf Coast, exposed to hurricanes? [...] Sure, but there are a lot of disasters where the cables are fine.
You have to understand that this article was written by a European technologist worrying about a war situation. Sure, you can make a counter-point, but your counter-example is very different in many respects: the nature of the threat, the jurisdictions involved, the orgs involved, etc.
It has been a couple of years since I worked in the area, but back then that wasn’t the plan and would’ve been deemed impossible both for safety and for accuracy reasons. Do you maybe have a source?
> A failsafe on-board multi-sensors localisation unit consisting of a navigation core (IMU, tachometer, etc.) brought in reference using GNSS, track map and a minimal number of reference points
> to complement the existing European Train Control System (ETCS) odometry system by using GNSS to enable absolute safe train positioning whilst also transforming today’s train localisation by demonstrating a GNSS based multi-sensor fusion architecture.
Okay, so as I expected they want to add GNSS as an additional sensor input. That is useful because without it train odometry is purely relative, and the train doesn't know where it is until it reads the first balise. The plan doesn't seem to be to remove all other sensors. Denial of GNSS would then mean that start-of-mission is about as tedious as it is today and odometry accuracy might be reduced. Depending on the number of balises on the track, that lowers the capacity of the track a little, but it is far from catastrophic.
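If it helps, here's a toy 1-D sketch of that fusion idea as I read it. All the figures and weights are invented and it is obviously nothing like real ETCS/ERTMS code; the point is only that losing GNSS degrades accuracy rather than removing positioning.

    # Toy 1-D position estimator: odometry is always there, GNSS corrects drift
    # when available, balises give absolute resets. All numbers are made up.
    def fuse(position, odometer_delta, gnss_fix=None, balise_position=None,
             gnss_weight=0.2):
        # Dead reckoning: always advance by the (slightly drifting) odometer.
        position += odometer_delta
        # Balise: absolute reference point, trust it outright.
        if balise_position is not None:
            return balise_position
        # GNSS available: blend it in to pull the drift back toward truth.
        if gnss_fix is not None:
            position += gnss_weight * (gnss_fix - position)
        # GNSS jammed and no balise: the estimate keeps drifting until the
        # next balise -- degraded accuracy, not loss of positioning.
        return position

    est, truth = 0.0, 0.0
    for step in range(1, 11):
        truth += 100.0                         # train really moves 100 m per step
        odo = 100.0 + 0.5                      # odometer over-reads slightly
        gnss = truth if step <= 5 else None    # GNSS jammed from step 6 onward
        balise = truth if step == 9 else None  # balise read at step 9
        est = fuse(est, odo, gnss, balise)
        print(f"step {step}: error {est - truth:+.1f} m")

With GNSS the error stays bounded; without it the error grows slowly until the next balise snaps it back to zero, which matches the "more tedious, less accurate, not catastrophic" reading above.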
I see a balise antenna even in the "long term" architecture diagram, and I don't have the time to parse ninety pages of technical documentation. Of course I wouldn't be surprised if they went on to reduce the number of balises, but I don't think it's possible to go completely without them.
Railways already have clever systems for locating trains by detecting track circuits shorted by the trains' wheels; there is no need to replace that with GPS. Besides, railroads pass through valleys and tunnels where GPS won't work anyway.
The absolute last resort for trains is semaphores and mutexes based on physical tokens. Those concepts came from there, and they are still used in places to this day. Doesn't sound high-tech, but it works.
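The token staff really does map onto a mutex almost one-to-one; here's a toy sketch (invented names, Python's threading.Lock standing in for the physical token) just to make the analogy explicit.

    # Only the driver holding the token for a single-track section may enter it.
    import threading, time

    section_token = threading.Lock()   # stands in for the physical token staff

    def run_train(name, seconds_in_section):
        with section_token:            # pick up the token at the signal box
            print(f"{name} enters the single-track section")
            time.sleep(seconds_in_section)
            print(f"{name} clears the section and hands the token back")

    t1 = threading.Thread(target=run_train, args=("Train A", 0.2))
    t2 = threading.Thread(target=run_train, args=("Train B", 0.2))
    t1.start(); t2.start(); t1.join(); t2.join()
    # Whatever the timing, the two trains are never in the section at once.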
This is correct. Wear on the overhead lines induced by the pantographs is the limiting factor for TGV top speed.
The engineer that can solve this problem will allow top speeds in the range of 400/500 kph on /existing/ TGV tracks. Much better problem to solve than hacking a lame hyperloop.
>However, on July 1, 2011 in order to save energy and reduce operating costs, the maximum speed of Chinese high-speed trains was reduced to 300 km/h, and the average speed of the fastest trains on the Wuhan-Guangzhou High-Speed Railway was reduced to 272.68 km/h (169 mph).
They're maglev trains, so there's no pantograph involved there.
https://youtu.be/_ZZMViMDjto at 16:40 explains the problem. It's in French, but you can get the gist visually, as it's a documentary aimed at kids.
TL;DR: at high speed a wave is created in the cable, making it impossible for the train to stay in contact with it. The documentary also touches on another top-speed limitation: how aggressive the turn angles are.
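For anyone curious, the wave in question is a transverse wave on a tensioned wire, which travels at roughly c = sqrt(T/mu); the pantograph has to stay well below that speed or it ends up surfing its own wave and losing contact. A back-of-the-envelope sketch with assumed figures (not official SNCF numbers):

    # Transverse wave speed on a tensioned wire: c = sqrt(T / mu).
    # Tension and linear density below are rough, assumed values.
    from math import sqrt

    tension_n = 26_000   # assumed contact-wire tension, N
    mass_per_m = 1.35    # assumed linear density of the copper wire, kg/m

    wave_speed = sqrt(tension_n / mass_per_m)          # m/s
    print(f"wave speed: ~{wave_speed * 3.6:.0f} km/h")
    # Often-quoted rule of thumb: keep the train below ~70% of the wave speed.
    print(f"practical limit: ~{0.7 * wave_speed * 3.6:.0f} km/h")
    # Raising the tension pushes both numbers up, which is (as I understand it)
    # part of how the 2007 TGV record run was made possible.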
Ah true, maybe they have higher operating costs for the pantographs and/or overhead line maintenance?
(I don't fully believe GP's comment that "The engineer that can solve this problem will allow top speeds in the range of 400/500 kph on /existing/ TGV tracks.", as narrow turns are probably a bigger constraint than the pantograph problem. Maybe a few of the straighter lines can benefit, but there are more constraints to solve before taking turns at 400/500 kph!)
There is something very wrong going on in Germany AND France where, despite the nice speeches, investment in railway infrastructure is being run into the ground. If nothing changes, expect disaster-level quality of service within 5 years' time.
You have to put yourself in a world where there is no Netflix and no Disney/Pixar yet, where the standard for mass-media japanimation is Dragon Ball Z. People had to wait days if not weeks for each episode. At that point in time Hideaki Anno was a name known only to relatively few people, but we knew he could deliver stellar stuff. Evangelion started out extremely well animated, and the characters resonated with many viewers. Cool music too. The story arc looked promising, with dark brooding villains and mysteries everywhere. It was really something at the time. Where it ended is another (controversial) story.
Maybe that's the problem. We stopped giving dance lessons to the ruling class a long time ago, but those lessons have always been the only reason why we, the working class, have the few good things we have.