
You mean the sort of conferences that attract charlatans and conspiracy theorists?

Some people wrap their phone in an elastic band or there’s always Opal if you want more fine grained control: https://apps.apple.com/gb/app/opal-screen-time-control/id149...

A physical barrier on the phone is probably the best way to tackle such things, but that's not always available or convenient.

I liked Opal, but with Intenty I tried to create an alternative approach without blockers or limits. For some reason, app blockers and time limits are very frustrating for me and rarely work; that was one of the primary motivations for creating the app. I admit, though, that for most people setting proper limits on certain apps will work.


Opal is incredibly helpful for me. Exactly the right amount of control and annoyance to get me off social media.

Opal's ads were very good actually. I got targeted ones on Instagram Reels and they legitimately made me uninstall everything that wasn't serving me. Ironically I never installed Opal, but their marketing team really did me a solid.

I wonder if we should design infrastructure that is resilient to cables being cut; I'm pretty certain everything would break right now if the Atlantic cables were cut. Does anyone know of an easy way to test this? Would Cloudflare or AWS go down, for example? What about my local bank?

We do have resiliency - the internet reroutes around these cuts. Knock out every Atlantic cable and traffic from NY to London would route via LA and Singapore.

I see this all the time on traffic from the Far East to Europe when a Red Sea cable dies, and I've seen it from India to Europe too (and seen traffic rerouted via South Africa and up the west coast a fair bit).

Latency is higher, but on the whole things continue to work - until there's enough damage. Problems tend to arise when you cut enough that the remaining cables and the routers they are connected to start to bottleneck.


What kind of resiliency are you hoping to achieve? For most routes, there are many dozens of cables starting at many different locations and taking many distinct paths across the ocean. The companies using these cables take great pains to ensure redundancy for critical paths: they'll validate minimum distances between the cables, ensure that they have a variety of landing points, ensure that they have enough spare capacity to handle a certain number of cables all being out for repair simultaneously. Alternatives to cables would be either land-based wireless (radio, point-to-point microwave) or satellite, both of which have much lower throughput capabilities and also are vulnerable to sabotage of transmission/receiver locations.

While the number of cables is not large enough to put it out of the reach of many nations, it's also something that no group with the capability of doing it would really want to do: it's a surefire way to invite retribution from basically the rest of the world, while not really achieving much militarily: armies almost invariably have their own communication systems (satellite, microwave, transoceanic fiber whose location is secret, etc).


It's a great shame that our devices and operating systems don't support "connectionless" and intermediate-infrastructure-less https://en.wikipedia.org/wiki/Wireless_ad_hoc_network by default.

Hypothetically you could test this by forcing your connections to avoid certain data centers... how you would do this, I'm unsure. It's been a while since I've taken a course on it but I swear CompTIA Network+ covered this

Sure, which of the 24 Atlantic cables are you going to cut with their 5-15 day repair time estimates? Or maybe the 20+ routing the other way around the Earth. (See map linked above)

Where do you get these repair time estimates? I couldn't find a good source, but for example [1] says that, yes, one to two weeks to repair, but two to three weeks to get the ship from Europe to West Africa.

[1] https://www.channelstv.com/2024/03/16/internet-disruption-su...


It's the figure quoted by news articles on how long the C-Lion1 repair was estimated to take. However, you likely have a point: the mid-Atlantic probably adds some percentage to the on-site repair time and a lot more to the time to arrive on site. It's much further away than Sweden to Lithuania.

Right, the Baltic Sea has considerably shorter distances on average. I imagine weather conditions might change the equation, though, if you need to send icebreakers first to open up some dozens of kilometers of passage. Right now the sea is still open, except for the coastal areas in the very north.[1]

[1] https://en.ilmatieteenlaitos.fi/ice-conditions/


I know of a way to test it…

But realistically, I would think the US/Americas would be approximately fine. Most, if not practically all, services people on the NA continent use are based in the US from both a corporate and technical perspective. The command+control stuff for distributed systems is probably in the US.

Across the pond(s), yeah, I’d expect more disruption.


When I was in high school, our cable out of the country got cut on the border with Austria. For a few hours we could only access domestic websites, which was a pretty interesting experience.

20 years later I wonder how many of those are hosted on AWS/GCP/Azure and would break anyway. Probably all but the biggest.


An interesting thought experiment: what would happen to each region's internet culture if they were cut off from one another like this? It would be like a speciation event, as when animal populations get separated by continental drift.

EU Internet would be dominated by cookie banners, which I regard as an invasive species they exported to other continents.

I understand the US would be fine! Europe would struggle I think.

So you are saying the EU and the US were working together on a coup? Where is your evidence for that?

Here’s what really happened: https://en.m.wikipedia.org/wiki/Revolution_of_Dignity

I think you’ve been taken in by Russian propaganda.


Adding 2h of walking per day is the recommendation I've seen.

Thanks, 2h is a bit too much for me, so what I do is about 3-4 reps of 10-12 minutes of walking/sprint-walking. Basically half walking (3.5 mph) and half sprint-walking (4.4 mph). I wish I could do more but my joints are not really good.

Just walking is better. You get a steady burn. If you do high intensity you burn calories for a good while afterwards. Mild intensity doesn't do much.

What I've found is that with foods I could usually binge on, like pizza, I'm quite full on GLP-1 agonists and can quite happily stop at half or 2/3 of a pizza. Usually I'd have eaten the whole thing (a 12" thin Neapolitan-style pizza, for the Americans) and wanted more; refined carbs I never feel full from.

Yeah, "my four donuts per day fill me up just fine" or "an extra large milkshake and a burger and I'm done with food for the day" is definitely happening for some people. Let's wait and see - these drugs might prove to be very beneficial, and more testing is definitely needed.

Strongly agree - it has loads of problems, my least favourite being that the schema is not checked in the way you might think; there's not even a checksum to say that this message and this version of the schema match. So when there are old services/clients around and people haven't versioned their schemas safely (there was no mechanism for this apart from manually checking in PRs), you can get gibberish back for fields that should contain data. It's basically just a binary blob with whatever schema the client has overlaid, so debugging is an absolute pain. Unless you are at Google scale, use a text-based format like JSON and save yourself a lot of hassle.

There is an art to having forwards and backwards compatible RPC schemas. It is easy, but it is surprisingly difficult to get people to follow easy rules. The rules are as follows:

  1) Never change the type of a field
  2) Never change the semantic meaning of a field
  3) If you need a different type or semantics, add a new field
Pretty simple if you ask me.
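
To make rule 3 concrete, here's a minimal sketch of two successive versions of the same message in proto3 (message and field names are made up for illustration):

    // v1
    message UserProfile {
      string id = 1;
      int32 age = 2;        // rule 1: never change this field's type later
    }

    // v2: we now want fractional ages. Rather than changing field 2,
    // we add a new field under a new tag number and keep the old one.
    message UserProfile {
      string id = 1;
      int32 age = 2;        // kept as-is so old readers still decode it
      double age_years = 3; // new field; old readers simply ignore it
    }

Old clients reading v2 messages just skip the unknown field, and new clients reading v1 messages see the new field at its default value.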

If I got to choose my colleagues this would be fine; unfortunately I had people who couldn't understand eventual consistency. One of the guys writing Go admitted he didn't understand what a pointer was, etc. etc.

How does JSON protect you from that?

Most people understand JSON, as they can see what is happening in a browser or any other system - what is the equivalent for gRPC if I want to do console.log(json)?

gRPC for most people is a complete black box, with error conditions that are unclear - to me at least. For example, if I have an old schema and I'm not seeing a field, there are loads of things that could be wrong - old services, an old client, even messages not being routed correctly due to networking settings in Docker or k8s.

Are you denying there is absolutely tons to learn here and that it is trickier to debug and maintain?


I'd go with `cerr << request.DebugString() << "..." << response.DebugString();`, preferably with your favourite logger instead of `stderr`. My browser does the equivalent for me just fine, but that required an extension.

I buy the familiarity argument, but I usually don't see the wire format at all. And maintenance-wise, protobufs seem easier to me. But that's because, e.g., someone set up a presubmit for me that yells at me if my change isn't backwards compatible. That's kind of hard to do if you don't have a formal specification of what goes into your protocol.


You can trivially make breaking changes in a JSON blob too. GRPC has well documented ways to make non-breaking changes. If you're working somewhere where breaking schema changes go in with little fanfare and much debugging then I'm not sure JSON will save you.

The only way to know is to dig through CLs? Write a test.

There's also automated tooling to compare protobuf schemas for breaking changes.
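
For example, buf has a `breaking` subcommand; assuming your schemas are already organised as a buf module, a check against the main branch looks roughly like this (a sketch, not a full setup):

    # Fail if the working tree's .proto files are backwards-incompatible
    # with the versions on the main branch.
    buf breaking --against '.git#branch=main'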


JSON contains a description of the structure of the data that is readable by both machines and humans. JSON can certainly go wrong, but it's much simpler to see when it has because of this. gRPC is usually a binary black box that adds loads of developer time to upskill, debug, and figure out error cases, and it introduces whole new classes of potential bugs.

If you are building something that needs the binary performance that gRPC provides, go for it, but it's not true that there is no extra cost over doing the obvious thing.


> JSON contains a description of the structure of the data that is readable by both machines and humans.

No, it by definition does not, because JSON has no schema. Only your application contains and knows the (expected) structure of the data, but you literally cannot know what structure any random blob of JSON objects will have without a separate schema. When you read a random /docs page telling you "the structure of the resulting JSON object from this request is ...", that's just a schema but written in English instead of code. This has big downstream ramifications.

For example, many APIs make the mistake of parsing JSON and only returning some opaque "Object" type, which you then have to map onto your own domain objects, meaning you actually parse every JSON object twice: once into the opaque structure, and once into your actual application type. This has major efficiency ramifications when you are actually dealing with a lot of JSON. The only way to do better than this is to have a schema in some form -- any form at all, even English prose -- so you can go from the JSON text representation directly into your domain type at parse-time. This is part of the reason why so many JSON libraries in every language tend to have some high level way of declaring a JSON object in the host language, typically as some kind of 'struct' or enum, so that they can automatically derive an actually efficient parsing step and skip intermediate objects. There's just no way around it. JSON doesn't have any schema, and that's part of its appeal, but in practice one always exists somewhere.
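
As a rough illustration of that point, here is a Python sketch with made-up field names: the "schema" ends up living in application code, where you validate the opaque parsed object before mapping it onto a domain type.

    import json
    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        age: int

    def parse_user(raw: str) -> User:
        # json.loads only hands back an untyped object; the knowledge of
        # which fields exist and what types they should have lives here,
        # in application code - i.e. the schema in disguise.
        obj = json.loads(raw)
        if not isinstance(obj, dict):
            raise ValueError("unexpected structure")
        if not isinstance(obj.get("name"), str) or not isinstance(obj.get("age"), int):
            raise ValueError("unexpected structure")
        return User(name=obj["name"], age=obj["age"])

    print(parse_user('{"name": "Ada", "age": 36}'))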

You can use protobuf in text-based form too, but from what you said, you're probably screwed anyway if your coworkers are just churning stuff and changing the values of fields and stuff randomly. They're going to change the meaning of JSON fields willy nilly too and there will be nothing to stop you from landing back in step 1.

I will say that the quality of gRPC integrations tends to vary wildly based on language though, which adds debt, you're definitely right about that.


If I gave you a JSON object with name, age, position, gender etc. etc. would you not say it has structure? If I give you a GRPC binary, you need the separate schema and tools to be able to comprehend it. All I'm saying is that the separation of the schema from even some minimal structure makes debugging services more difficult. I would also add the GRPC implementation I used in Javascript (long ago) was not actually checking the types of the field in a lot of cases so rather than being a schema that rejects if some field is not a text field it would just return binary junk. JSON Schema or almost anything else will give you a parsing error instead.

Maybe the tools are fantastic now, but I still think being able to debug messages without them is an advantage in almost all systems; you probably don't need the level of performance GRPC provides.

If you’re using JSON Protobufs why would you add this extra complexity - it will mean messaging is just as slow as using JSON. What are the core advantages of GRPC under these conditions?


> If I gave you a JSON object with name, age, position, gender etc. etc. would you not say it has structure?

That's too easy. What if I give you a 200KiB JSON object with 40+ nested fields that's whitespace-stripped and has base64-encoded values? Its "structure" is a red herring. It is not a matter of text or binary. The net result is I still have to use a tool to inspect it, even if that's only something like gron/jq in order to make it actually human readable. But at the end of the day the structure is a concern of the application; I have to evaluate its structure in the context of that application. I don't just look at JSON objects for fun. I do it mostly to debug stuff. I still need the schematic structure of the object to even know what I need to write.

FWIW, I normally use something like grpcurl in order to do curl-like requests/responses to a gRPC endpoint and you can even have it give you the schema for a given service. This has worked quite well IME for almost all my needs, but I accept with this stuff you often have lots of "one-off" cases that you have to cobble stuff together or just get dirty with printf'ing somewhere inside your middleware, etc.
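
For anyone who hasn't tried it, the grpcurl flow is roughly as follows (the service and method names here are placeholders, and this assumes the server has reflection enabled or that you point grpcurl at the .proto files):

    # List the services the server exposes, then describe one of them
    grpcurl -plaintext localhost:50051 list
    grpcurl -plaintext localhost:50051 describe my.package.UserService

    # Make a call with a JSON-encoded request body
    grpcurl -plaintext -d '{"id": "123"}' localhost:50051 my.package.UserService/GetUser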

> I would also add the GRPC implementation I used in Javascript (long ago) was not actually checking the types of the field in a lot of cases so rather than being a schema that rejects if some field is not a text field it would just return binary junk. JSON Schema or almost anything else will give you a parsing error instead.

Yes, I totally am with you on this. Many of the implementations just totally suck and JSON is common enough nowadays that you kind of have to at least have something that doesn't completely fall over, if you want to be taken remotely seriously. It's hard to write a good JSON library, but it's definitely harder to write a good full gRPC stack. I 100% have your back on this. I would probably dislike gRPC even more but I'm lucky enough to use it with a "good" toolkit (Rust/Prost.)

> If you’re using JSON Protobufs why would you add this extra complexity - it will mean messaging is just as slow as using JSON. What are the core advantages of GRPC under these conditions?

I mean, if your entire complaint is about text vs binary, not efficiency or correctness, JSON Protobuf seems like it fits your needs. You still get the other benefits of gRPC you'd have anywhere (an honest-to-god schema, better transport efficiency over mandated HTTP/2, some amount of schema-generic middleware, first-class streaming, etc etc.)

FWIW, I don't particularly love gRPC. And while I admit I loathe JSON, I'm mainly pushing back on the notion that JSON has some "schema" or structure. No, it doesn't! Your application has and knows structure. A JSON object is just a big bag of stuff. For all its failings, gRPC having a schema is a matter of it actually putting the correct foot first and admitting that your schema is real, it exists, and most importantly can be written down precisely and checked by tools!


Here is some sad news for you: the flexibility of JSON and CBOR cannot be matched by any schema-based system, because adopting a schema is equivalent to giving up that advantage.

Sure, the removal of a field can cause an application level error, but that is probably the most benign form of failure there is. What's worse is when no error occurs and the data is simply reinterpreted to fit the schema. Then your database will slowly fill up with corrupted garbage data and you'll have to restore from a backup.

What you have essentially accomplished in your response is to miss the entire point.

There are also other problems with protobuf, in the sense that the savings aren't actually as big as you'd expect. E.g. there is still costly parsing, and the data transmitted over the wire isn't significantly smaller unless you have data that is a poor fit for JSON.


It's also worth noting CDDL [1], which adds schema-like utility to CBOR (and technically JSON.) We've started to use it in more places where we use CBOR.

[1] https://datatracker.ietf.org/doc/rfc8610/
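
To give a flavour of it, a tiny CDDL fragment might look like this (field names invented; a sketch rather than anything we actually use):

    ; a record with required name and age, and an optional email
    person = {
      name: tstr,
      age: uint,
      ? email: tstr,
    }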


- JSON doesn't have any schema checking either.

- You can encode the protocol buffers as JSON if you want a text based format.
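
A rough Python sketch of the second point, assuming a module user_pb2 generated by protoc from a hypothetical user.proto:

    from google.protobuf import json_format
    import user_pb2  # hypothetical generated code

    msg = user_pb2.User(id="123", name="Ada")

    as_json = json_format.MessageToJson(msg)                     # proto -> JSON text
    round_tripped = json_format.Parse(as_json, user_pb2.User())  # JSON text -> proto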


I'm certain the sea is as mapped as you can possibly imagine; cutting, say, 50% of the cables would lead to a lot of Russian ships sinking and a ban on them entering western waters. Their equipment is absolutely shit compared to ours and we know exactly where it all is. Surely they have been told this would be a declaration of war, which clearly they are scared of too.



There's loads more we can do, but the Russian government might just collapse if they go too far attacking western assets. They know there will be a response "at a time and place of our choosing", and cutting the Internet properly would be extremely expensive for Russia: they would have no banking system at all, and we would give Ukraine weapons to attack their oil infrastructure.

