
> doing something with that tax revenue does.

Not really. SF already spends more than $250 million annually on programs for the homeless[0]. That's about $33,333 per homeless person (7,499 counted[1]). And yet, the problem is worse than ever.

[0]: https://www.sfchronicle.com/bayarea/article/29-million-incre...

[1]: https://projects.sfchronicle.com/sf-homeless/2018-state-of-h...


> An introduction to dependent types, demonstrating the most beautiful aspects, one step at a time.

Is there a companion book detailing the ugly downsides of dependent types and how to avoid them, one step at a time?


I feel like people are misreading this comment, which is asking how to avoid the pitfalls of dependent types, not how to avoid dependent types.


This. Thank you. I'm being rate limited, so this is the only post I'll be making on this thread. My question was genuine. I want to learn both the positives and the pitfalls of dependent types. I mimicked the book's description because it amused me from a linguistic perspective. Looking back, I probably should have phrased it differently.


> the ugly downsides of dependent types and how to avoid them

There's this Reddit answer from two years ago, and the subsequent comments https://www.reddit.com/r/haskell/comments/3zc81v/tradeoffs_o...


This is exactly what I'm looking for. Thanks!


Note that many of those problems are being actively worked on in research, and progress is being made, albeit slowly. That said, we can still get some of the benefits of dependent types, even without all the problems being solved right now! Just gotta be aware that it's not all roses yet.
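
Not from that thread, but as a minimal sketch of the kind of benefit already available today: in plain GHC Haskell, DataKinds and GADTs let you track a list's length in its type, so taking the head of an empty list becomes a compile-time error. All names below are illustrative, not from any particular library.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Type-level natural numbers, promoted to the kind level by DataKinds.
    data Nat = Z | S Nat

    -- A list whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Only accepts non-empty vectors; calling it on VNil is rejected
    -- by the type checker instead of crashing at runtime.
    safeHead :: Vec ('S n) a -> a
    safeHead (VCons x _) = x

    main :: IO ()
    main = print (safeHead (VCons (1 :: Int) VNil))  -- prints 1

Full dependent types go further (types can depend on runtime values), but even this fragment rules out a whole class of bugs.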


Can you elaborate? Do you have war stories?


Here is a quick guide to avoiding them: "Unless you are using Coq, Agda, Pie, Idris, or Haskell at a very high level of abstraction: congratulations, you have avoided them."

It's not really clear why you'd want a book about the downsides of a quite recent development in practically usable programming models. Is it just because you are a hater?


My guess is because OP has been burned by the "LEARN THIS NEW THING, IT'S REALLY COOL AND POWERFUL AND ALL OF THE COOL KIDS ARE DOING IT (and oh by the way many of the simple things you do all the time are incredibly inconvenient...)" narrative one too many times.

Adopting something purely on its merits is a bad idea, but nobody ever writes the book about a language/paradigm's downsides. I'm pretty sure that was the joke, but I might be off.


> nobody ever writes the book about a language/paradigm's downsides

Indeed. One can argue that books like "optimizing X" or "secure X" are about ways to easily write slow or insecure code in X, but this is somewhat narrow. Is there never enough demand for a broader book on the downsides of X?


Bertrand Meyer's book on Eiffel was all about the downsides of C++.

And then there's the Unix-Hater's Handbook...

https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook

I wrote a whole chapter about the downsides of X -- do you think there's a need for a broader book? ;)

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...


> I wrote a whole chapter about the downsides of X

I lol'ed 6-8 times... Still, I have to say that I'm loving my distros lately; distro maintainers, you rule!

Some typos in the article:

eents/events, client an/client can, because it the/because it solved? the tires/tries, ore dump/core dump, trwe/tree, piel/pixel, resulution/resolution, ertain/certain, N ot/Not, screens een have/screens have


The one where you demonstrated that you don't know the difference between a client and a server? :-)


How's it "not knowing the difference" to explain that the usage of the terms client and server switched over time, in the sense of which is local and which is remote?

An xterm client running on a VAX mainframe connects to an X11 server running on a Sun workstation: the X11 client is remote, the X11 server is local.

A web browser client running on a phone connects to a web server running in the cloud: the web client is local, the web server is remote.

Right?


Exactly! I, too, used to live in an office next to a machine room full of "web clients"! :-D

(Actually, I'm one of those deluded fools who claim a "server" provides a "service" to one or more "clients", who make "requests". Yeah, I know, but I figure somebody has to keep the joke funny.)


Here's how Jim Fulton (who wrote the early graphics drivers for X6) explains it:

https://www.quora.com/profile/Jim-Fulton

https://www.quora.com/Why-is-the-X-Windows-architecture-clie...

>Given its roots, X naturally used the technically-correct terms to describe its major components: the portion that abstracted display and input hardware into a service that could be used by other programs was called the "display server"; the portion that made use of those services was called the "display client." This later caused endless confusion among people who thought that "server" was a synonym for "big computer" (file server, database server, etc.) and "client" meant "small computer" (diskless client, etc.). Who knows, maybe it would have been easier had they been called "application server" and "application client" but that's revisionist history.

Also interesting:

>Ultimately, the pendulum swung back with the advent of Web 2.0 technology and mobile devices. Now, we take it for granted that applications can run anywhere in the network and be accessed by any type of device. While X is primitive compared to JavaScript and HTML5 (whose ability to push computational tasks over the network into the display device was inspired by Java and NeWS), X did lay the initial groundwork. It also proved that an open source model could work in business environments. Not too shabby for a technology that will soon be hitting its 30th anniversary.

>Jim Fulton, alumnus of Project Athena, the MIT X Consortium, Cognition (first commercial use of X on DOS) and Network Computing Devices (leading X terminal vendor)


Loled. I used to do phone support for X11 servers for PCs. This terminology confused people to no end.


There are tons, but we really need to ask whether any of this is a fair topic for dependent typing, which has only recently gone from pi-calculus musings on paper to compilers that can do more than prove simple structural recursion.

It's like seeing a photo of a baby and asking why there isn't a service to provide police records for babies, because you "just want to be careful, you know?"


It's not even really possible to "adopt" dependent typing today. It has only recently emerged from the realm of academic curiosity, and only two implementations exist that are anywhere near "practical" in the context you're describing.

Both of those implementations are very honest about their shortcomings, and nearly every talk and blogpost for them mentions you can't yet use this in many industrial contexts.

It's very difficult to see this as anything but the usual distaste for PL theory that constantly swirls around this community. If the author didn't intend to associate their post with that, then they've done it by accident.


>only two implementations exist that are anywhere near "practical"

Out of curiosity, why are F* and Idris not practical?


F* and Idris are precisely the ones I had in mind, although I guess you could make an argument for Agda.

What did you think I was imagining? I can only name 5 DT languages off the top of my head.


So, why are they not practical in your opinion?



Unless said language is PHP, in which case there's no end of diatribes about it


> why do folks completely avoid a language for a single relatively bland syntactic feature?

Personal preference isn't a good enough reason?

I don't like whitespace-sensitive languages because I've seen what happens in Python when somebody accidentally adds a couple of lines formatted with spaces into a file formatted with tabs. I've seen git and svn mangle tabs. Long blocks are harder to track. Functions and nested ifs are much harder to keep track of when refactoring. If you somehow lose all of the formatting in a block or a file, it's much more difficult to recreate the code if the only block delimiters are whitespace.

Essentially, whitespace delimiters are just one more thing that can go wrong and ruin my day. I try to keep those to a minimum. That said, Nim is my new go-to for short scripts. I wouldn't write anything large in it for the reasons mentioned above.


Nim disallows tabs entirely, and in Python 3 it's an error to mix the two in the same file. So those errors can't happen anymore.

Out of your list, the only one that seems like a real problem is recreating blocks if the code lost all formatting.


You just described two errors that do actually happen and in the next sentence say those errors can't happen anymore. What am I missing here?


The comment I replied to was talking about errors arising from mixing tabs and spaces and incorrect indentation levels that arise from it.

If a language either disallows tabs entirely or will refuse to run/compile code that mixes tabs and spaces in the same source file, you obviously can't get errors related to mixing tabs and spaces.


Losing parts of your code is bad. The same goes for braces: if you lose them in a big C program, your day is ruined as well.


Nim has multiple garbage collectors you can change at compile time. Here's the --gc option from the documentation.

    --gc:refc|v2|markAndSweep|boehm|go|none|regions
I don't know if this is up to date or not. There's an open issue on GitHub to improve the documentation on the garbage collector.

https://github.com/nim-lang/Nim/issues/8802
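
As a usage sketch (the file name is a placeholder, not from the docs), you pick one of those collectors on the command line when compiling:

    nim c --gc:markAndSweep myprog.nim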


Embedded development.


Here's an insight from a developer with 15+ years of experience: be very, very careful who you take advice from. The so-called "best and brightest" in our industry have led us down the path we're on today. I have a supercomputer in my pocket compared to computing speeds in 2000, but it can't even smoothly scroll down a webpage. Object-oriented programming alone has derailed progress in computing by at least 20 years.


Thank you for this. Can you clarify why OO derailed us? If that explanation would be a long story, haha, then what came before OO, so I can do my own research?


This video does a great job of explaining why modern programming in general has failed: https://www.youtube.com/watch?v=uZgbKrDEzAs

I think OOP specifically derailed programming because of how big it was, how fast it took over and how long it was considered to be the one true programming paradigm. When OOP hit, it hit hard and fast. It wasn't long before colleges were teaching OOP as The One True Way To Program. Every employer required knowledge of OOP before they would interview you. And I mean every employer, from startups to corporate enterprise. And once it took hold, it took people (and me!) 10-20 years to realize the OOP emperor had no clothes on.

I still don't understand why or how it got so big so quickly. All I can figure is that the software industry as a whole pays attention to the wrong people.


Because it led us down a very fulfilling path of inventing interesting abstractions to solve problems, which tickled our mental fancy but led to bloated, sub-optimal solutions.

Unfortunately, it worked ... but eventually the bloat caused projects to hit a wall of unmaintainability. (If it had not worked so well in the medium-term, we would have moved past it much sooner).

Another reason it derailed us is simply the level of buy-in it received from the entire industry. Everyone drank the Kool-Aid, so advancement in more powerful approaches -- such as functional programming, data structure-oriented programming and even data-oriented programming -- was neglected.


This (very long) article explains why OO might not be as good an idea as initially believed.

http://www.smashcompany.com/technology/object-oriented-progr...


Another tidbit ... imagine how science might feel if string theory were shown, via undeniable objective evidence, to be completely wrong. "All that time was wasted...."

But the software world bought in to OOP much more deeply than science has bought into string theory.


HFT? Don't you need a PhD for that?


So the Washington Post is saying that the Tesla is so successful at mimicking human behavior that it now mimics our common mistakes as well?


It was probably distracted while browsing the internet for news on the new iPhone.


T-Mobile still has an unlimited plan, right?


I'm assuming that, like most "unlimited" plans, it includes substantial data caps after the first few GB.


They increased it last year from 32GB to 50GB a month. Frankly, I think even a 50GB cap on an "unlimited" plan should be illegal, but it's better than most carriers.

https://www.theverge.com/2017/9/19/16334690/t-mobile-unlimit...


It's not a cap; you're just throttled. If they advertise unlimited 4G (or whatever) speeds, then it already is illegal. No one does that, though; they just say "unlimited data."


It's not a cap. Once you exceed 50 GB in a month your speed is throttled if and only if the cell is congested.


It's a limit. It's called an unlimited plan. That's deceptive advertising at best and fraud at worst.


I just wish they'd stop selling unlimited. Everyone.

I want to pay for my bytes, and I want you (carriers) to give me the service I pay for, at a guaranteed rate. An SLA, basically.

Consumers get shafted on these deals. We pay for products with no agreement over what we're actually going to get. It gets tiring :/


> Consumers get shafted on these deals. We pay for products with no agreement over what we're actually going to get. It gets tiring :/

Consumers greatly benefit from these deals, because they pay a fraction of what it would cost to get service with an SLA. (Put another way, it costs a fraction as much to build an over-subscribed network as it costs to build a network where each user has guaranteed bandwidth.)

Where I work, we pay Cogent almost $1,000 for sub-gigabit, and that's in a building with several competing providers to choose from. I pay a tenth of that for consumer gigabit from Verizon. That's because I'm sharing a 2.4 gigabit PON node with 16-32 other users, and Verizon can assume that nobody will be using the full gigabit more than a small fraction of the time. If Verizon were only allowed to sell guaranteed bandwidth, they could only sell a 75 Mbps service, which would suck for the consumer when they went to download an iTunes movie or a game on Steam.
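
(That 75 Mbps figure is just the sharing arithmetic: 2.4 gigabits split evenly across 32 users works out to 2400 / 32 = 75 Mbps of guaranteed bandwidth each.)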


Until there is legitimate competition in these spaces, you will keep getting "up to X Mbps" and "for the first 50GB, subject to change" in every provider's terms.

It doesn't help that actually pricing out Internet access is more complex than X cents per byte. For most providers, data between the hours of 0100 and 0600 would be free because of how little network usage there is, while data from 1700 to 2300 would be most expensive, due to that being when everyone is using the network simultaneously. But throughout the years there have been very few time-limited unlimited plans.

And even then, it's not actually "expensive" to use the Internet in the evening. It's just saturated, which pressures the ISP to either throttle everyone or expand capacity. Expanding capacity is expensive, but just promising "up to X Mbps" is easy, and I'm surprised at how eager operators are to adopt data caps over simply throttling heavy data users during peak times.

Well, I'm not really surprised, because the former lets you hit people with surprise bills for ludicrous amounts, while the latter just saturates your useless customer support lines with complaints.


My old university dorm internet was like that. There were 3 or 4 time windows per day, and only the few hours in the middle of the night were unlimited; otherwise it was IIRC 500GB/month, measured with double (and IIRC even triple) counting during prime time and such. For some reason they were rather oversubscribed, offering 1000BASE-T in the dorms (with some L2 crypto auth) but only having a 10Gbit/s fiber uplink in a few dorms. Considering they already had active equipment on both sides, this shouldn't be much of a problem, though.


I know, y'all want unlimited speeds for cheap. I get that. I just want reliability. Currently, every internet connection I've had craps the bed during prime hours. Which, hey, is exactly what you're paying for.


Sounds like you're on some shifty cable ISP or an older ADSL-based ISP. For cable-provided internet, it's truly shared, and one person plugging a VCR in the wrong way means your speeds may drop significantly.

Meanwhile, older ADSL terminals are often backhauled by a couple of T1s, so 6Mbps may be split across more than a few customers, resulting in poor service. CenturyLink calls it exhaustion when that occurs.


The One plan throttles video quality at the base unlimited tier, unless you were grandfathered into Simple Choice.


yup


The world isn't fucked up. The world operates the way it operates. It's your expectations that are fucked up.

Human beings love resources (money), power and control. Anything that gives them that can and will be exploited. Approach the world with that in mind and you can anticipate the vast majority of these issues. Game designers do this all the time without worrying about being "cynical".

