xavxav's comments | Hacker News

That's unfortunate; I really preferred Whisky to CrossOver purely for the UX. I would happily pay for a CrossOver license if I got to keep the Whisky app itself; CrossOver's UI is archaic and ugly in comparison.


Basically every European country? They've all had much larger data caps than North America for years preceding 5G, and most are quite densely populated.


I’m from Germany and I don’t know a single person with unlimited mobile data. That’s very rare here.


And yet probably everyone you know in the EU has cheaper Internet per GB than folks in the US. I have 2 SIM cards: one provider charges me $10/GB, while the other has a 2 GB package for $6.


In Finland I pay 20€/mo for unlimited data (bandwidth capped at 200 Mbps). With some shopping around it can be cheaper or have more bandwidth. The pricing has been similar at least since 3G, and I recall having a similar deal in the UK five years ago.

There's also 28 GB EU roaming per month included, and 2.23€/GB after that.


Both of those prices are considerably more expensive than what I pay for service in the US. Even the cheaper one is more than twice what I pay per gig, and my plan includes unlimited calls and texts plus roaming in much of North America.


Who's your provider if you don't mind me asking?


Mint. 15GB for $20/mo works out to $1.33/GB while your 2GB plan is $3/GB.

But there are other MVNOs out there like Tello, which also has a 2GB/$6 plan in the US, and others which offer unlimited data for $25-30/mo, like Visible and US Cellular.

Plenty of cheap MVNOs out there these days.


Tello is actually what I use for my secondary data, Fi is my main (mostly because I travel somewhat and the data costs the same in all the destinations I care about without having to juggle SIM cards).

I'm not a good case study because I rarely use more than 2 GB in a month, so Mint would come closer to $10 a gig... :)


£10/month pay-as-you-go SIM for 30 GB here in the UK, and I'm sure there are better offers.


What do you mean? Let-bindings don't interfere with referential transparency: `let x = 1 in let x = 2 in foo` is referentially transparent.


I think you're thinking of shadowing, not re-binding.


Yup, as a Haskeller, it's important to remember that rebinding means something else in other languages.


The example given in the article is:

    counter = 0
    counter = counter + 1
This is very different from shadowing, where the rebinding has a clear scope. In this case, I cannot employ equational reasoning within a scope but must instead trace back through every intervening statement in the scope to check whether the variable is rebound.


It's a straightforward syntactic transformation; the two are equivalent. The scope of the rebound variable begins at the rebinding and ends when the surrounding scope ends. Perfectly clear: the only difference is a `let` keyword.

  counter = 0
  ...
  counter = counter + 1
  ...
vs

  let counter = 0 in
  ...
  let counter = counter + 1 in
  ...
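If it helps to see the claimed transformation executably: in Python (the rough syntax of the article's snippet), `let ... in` can be modeled with a made-up `let` helper, since Python has no such construct. Each binding is just a function parameter, and a "rebinding" is an inner parameter shadowing an outer one:

```python
def let(value, body):
    # Stand-in for `let x = value in body`: binding a name is just
    # passing a value to a function parameter.
    return body(value)

# `let counter = 0 in let counter = counter + 1 in counter`
result = let(0, lambda counter:
         let(counter + 1, lambda counter:
         counter))
print(result)  # 1: the inner `counter` shadows the outer binding
```

Equational reasoning then works scope-by-scope exactly as with explicit `let`s; the disagreement here is only about how visible those scope boundaries are.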


The ellipses in your straightforward transformation are doing some heavy lifting there. Typically the let…in construct has some way to indicate where the scope of the “in” part ends: indentation (Haskell), explicit “end” marker (SML) etc. Even with that, shadowing does make equational reasoning harder (you have to look at more surrounding context) and should generally be avoided.


Sure, scopes are not as easily syntactically visible, but each assignment is creating a new scope; that doesn't change anything for equational reasoning, which has to account for captures/substitutions anyway.


> each assignment is creating a new scope

No, it’s rebinding the variable within the same scope. Even if you view it as implicitly creating a new scope, the implicit part means it can happen on any line so you have to scan them all and mentally keep track of what changes where, (almost) exactly as you would for imperative code.


> * "As provable as Ada/SPARK": I'll let you read the design in [2] and decide for yourself. But Yao will also have contracts.

Without being too self-indulgent, I'm not sure there is that big a gap between the two in provability; there is now a huge array of SAT/SMT-based verifiers for Rust which are being used to verify real code: kernels, memory allocators, etc.


I agree. I think getting provability right would bring reliability along with it.


Not really. Conceptually it probably shares a lot of the same foundations that a useful simulator would have, but it's important to keep in mind that these games aren't actually simulators of cities in a realistic sense.

Games such as Cities inherently embed a view of how the "right" city would be organized, providing tools and incentives to nudge you in that direction. Consider how all social problems can be solved by simply plopping down the relevant class of building nearby. Or simply the absence of parking lots!

There's this old article on the subject: https://www.polygon.com/videos/2021/4/1/22352583/simcity-hid...


1. The article explains how many materials could potentially be sourced on the moon, and proposes potential magnet compositions to address exactly that.

2. The moon weighs ~8.1 × 10^19 tons. If we use their iron-based magnet design and ship everything from Earth (even though iron is present on the moon), it would require ~1 million tons, i.e. 10^6 tons, or approximately 0% of the weight of the moon.

3. See above.
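For what it's worth, the "approximately 0%" in point 2 checks out (figures taken from the comment above):

```python
moon_mass_tons = 8.1e19    # approximate mass of the moon, per the comment
magnet_mass_tons = 1e6     # ~1 million tons of shipped magnet material
fraction = magnet_mass_tons / moon_mass_tons
print(fraction)  # ~1.2e-14, i.e. effectively 0% of the moon's mass
```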


It's only ~400 times the circumference of the LHC so that seems like a pretty good increase.


There is still academic work on Prolog, and more broadly on deductive/logic programming. If you are looking at things with a more industrial bent, I would look to Datalog, which trades Prolog's generality for performance and predictability. Alternatively, you can go the other way and look at lambdaProlog, which adds real abstractions/HOFs to Prolog.

What I've seen in practice is that while Prolog may be good at describing a solution, its performance is often too lackluster and brittle for actual deployment; it probably fits better as a prototyping language, before you do a classic implementation of the solution in a more traditional language.
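As a toy illustration of that trade-off (a Python sketch, purely for exposition): Datalog engines evaluate restricted rules bottom-up to a fixpoint, which is what buys the predictable performance. Transitive closure is the classic example:

```python
# Facts: an edge relation. Rule, in Datalog notation:
#   path(a, d) :- path(a, b), path(b, d).
edges = {(1, 2), (2, 3), (3, 4)}

def transitive_closure(facts):
    # Naive bottom-up evaluation: apply the rule until no new facts appear.
    path = set(facts)
    while True:
        derived = {(a, d) for (a, b) in path for (c, d) in path if b == c}
        if derived <= path:
            return path  # fixpoint reached
        path |= derived

print(sorted(transitive_closure(edges)))  # includes (1, 4)
```

Termination is guaranteed because the rule can only derive facts over a finite domain; Prolog's unrestricted terms and depth-first search give you more expressive power but no such guarantee.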


I did the same thing, with the same limitations, for years, but I've transitioned to using the tiny package `DailyOrganizer`, which creates a note for each day, along with a small custom command to open my note directory in the quickpick (to browse old notes). Having this has meant that I just throw notes down; maybe I forget them, maybe not, but at least they'll be saved properly.


I'm a researcher in formal verification; my thesis was building a tool to do this kind of stuff, and I agree with the grandparent (though I would say it's probably closer to a 5-10x slowdown, not 100x).

Proofs are hard, and often not for interesting reasons; it's stuff like proving that you won't overflow a 2^64 counter which you only increment (i.e. something that won't happen for another couple billion years).
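For scale (the increment rates below are illustrative assumptions, not from any particular system):

```python
SECONDS_PER_YEAR = 3.156e7

def years_to_overflow(increments_per_second):
    # Time for a monotonically increasing counter to reach 2^64.
    return 2**64 / increments_per_second / SECONDS_PER_YEAR

print(years_to_overflow(1e9))  # ~585 years, even incrementing every nanosecond
print(years_to_overflow(300))  # ~2 billion years at a few hundred per second
```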

Current tools are only useful for specific kinds of problems in specific domains; things where a life really depends on correctness. Outside of those cases, lightweight techniques provide much more bang for your buck imo.

