notallama's comments | Hacker News

solution 1: spill over to second card.

solution 2: hash function


if classes didn't have such a terribly verbose syntax in basically every language that has them, i'd be less opposed to using them.

in java, c++, or c#, to add a variable to a class, you have to repeat its name 4 times. once to declare, once in the constructor parameters, and once on each side of the assignment. why am i writing the same thing 4 times for what should be a core part of the language?

in haskell, you write it once (i'm not saying haskell's records are nice, but they got that part right). same with rust.

and with a function, you write it once.
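Spelled out, the four mentions look something like this minimal Java sketch (the class and field names are made up for illustration):

```java
// Hypothetical example of the repetition described above: the field
// name "width" appears four times for one piece of state.
class Rectangle {
    private final int width;            // 1: declaration

    Rectangle(int width) {              // 2: constructor parameter
        this.width = width;             // 3 and 4: both sides of the assignment
    }

    int getWidth() {
        return width;
    }
}
```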


FWIW, with C#'s object initialisation syntax you can cut that down to once too. eg

  public class Foo
  {
     public string Bar { get; set; }
  }

  var foo = new Foo { Bar = "Hey there" };


I've come to love Scala case classes:

  case class Foo(bar: String, foo: String = "hello")
  new Foo(bar = "Hey there")
  new Foo(foo = "Hey there", bar = "Hello")
  new Foo("Hey there")


Case classes, I believe, have a companion object with an apply method built for them, so you can get it down to:

  case class Foo(bar: String, foo: String = "hello")
  val f1 = Foo(bar = "Hey there")
  val f2 = Foo(foo = "Hey there", bar = "Hello")
  val f3 = Foo("Hey there")


You're right!


I get what you're saying, but good OO for me usually suggests nullary constructors, letting a class know when something happens rather than setting its internal state directly, and having the class say "let me do something for you using my state" instead of handing that state out directly.

Providing a getter at least is sometimes practically unavoidable, so there's 1 repetition, but with a good IDE that's just a quick key combination over the variable name.


> good OO for me usually suggests nullary constructors

I'm curious, could you please elaborate on this point? If a class has any dependency, i usually find it better to require that dependency to be passed in the constructor, so the constructed instance can always be in a valid initialized state. Why do you find nullary constructors to be good OO?


I came off too heavy-handed there. My intention was more to just say that variables shouldn't be put in constructors if they don't have to be. So, just railing more against the pattern that Java seems to have popularized where every private variable automatically has a getter/setter and can be set through the constructor. Certainly if an object requires some initial state that can never change, passing the value in through the constructor is usually most appropriate. Even then, I sometimes like to have a nullary constructor with an init() method. Just makes certain design patterns utilizing e.g. reflection or pooling a bit easier to manage in some languages.


Ok, i didn't get your point then. Well, i definitely agree that the pattern/trend of auto-generating getters, setters and constructor parameters for every attribute in a class, without considering whether those attributes should be visible (or, worse, mutable) from the outside, is pretty hideous.

About the nullary constructor + init() method for deserializing purposes, i don't have any strong opinion really, as long as consistency is kept throughout the code base/module that relies on it. Using Java's reflection API, though, you can extract all the necessary type information from non-nullary constructors in order to call them with the required dependencies, which is essentially no more complicated than instantiating the class and then setting its properties through setter methods.
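The reflection route described here can be sketched roughly like so (the `Greeter` class and the hard-coded dependency are made up for illustration; a real container would resolve each parameter type instead):

```java
import java.lang.reflect.Constructor;

class Greeter {
    private final String name;
    Greeter(String name) { this.name = name; }
    String greet() { return "hello, " + name; }
}

class ReflectDemo {
    // Look up the non-nullary constructor, then invoke it directly --
    // no nullary constructor or separate init() step needed.
    static Greeter build() {
        try {
            Constructor<Greeter> ctor =
                Greeter.class.getDeclaredConstructor(String.class);
            // ctor.getParameterTypes() is what a generic container would
            // consult to decide which dependencies to supply here.
            return ctor.newInstance("world");
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```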


I don't think that nullary constructors are so good. If you have dependencies then after constructing your object it'll be in an invalid state because the dependencies aren't set up correctly.


http://news.ycombinator.com/item?id=5207262

Is there a preferred way on HN to address a "duplicate" reply?


I would argue that this repetition is not essential. It happens when you assign a constructor argument to an attribute directly, without any modification. But there are cases where it doesn't appear:

  - the attribute has a different name
  - an attribute is set whose value is a function of more than one argument
  - an attribute is set to a constant value independent of constructor arguments
In languages requiring attribute declarations the minimum number of occurrences is 2 (declaration and initialization in a constructor). When declarations aren't required then it's just initialization.

How often such circumstances occur is another issue. The form of initialization that you mentioned is probably the most common so languages can provide shortcuts in this case.
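The three cases above can be illustrated with a small hypothetical Java class (all names invented for the example):

```java
// Each field below avoids the verbatim argument-to-same-named-field copy:
class Circle {
    private final double r;            // different name than the parameter
    private final double area;         // a function of the argument
    private final String unit = "m";   // constant, independent of arguments

    Circle(double radius) {
        this.r = radius;
        this.area = Math.PI * radius * radius;
    }

    double getArea() { return area; }
    String getUnit() { return unit; }
}
```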


> in java, c++, or c#, to add a variable to a class, you have to repeat its name 4 times. once to declare, once in the constructor parameters, and once on each side of the assignment. why am i writing the same thing 4 times for what should be a core part of the language?

Because Java, C++, and C# suck at this. I'm trying to see the relevance to the OP, which is about Python, which rather decidedly lacks this problem. To add a new instance variable to a class, you need to mention it once, or twice if you're using a __slots__-based class for a compact memory footprint (because then you have to add it to __slots__ and then actually assign it a value somewhere).


Scala and TypeScript do it right, though.


how is this website so ugly?


the linux thing is not universally true.

i have installed a few different distros on my w500 over the years, and it runs hot every time. the ati drivers suck, the os can't seem to control the fans properly, and switchable graphics don't work. tried with opensuse years ago, and ubuntu a few times more recently.

i imagine it's better with a nvidia card, and no switchable graphics.


T400 here with the same results: with Ubuntu (12.04) it runs hotter than with W7, and switchable graphics don't work (only the Intel X4500 mode works).


does this guy not know how to use type declarations?

  [1,2,3,4,5] :: [Int]

problem solved!


He knows, as is clearly visible in his article. His point is that he shouldn't need to.


Actually, you don't need to in most real programs. You usually use the elements of a list with some function that makes it clear what type they have.

In GHCi (the interpreter) this doesn't happen. It has some defaulting rules to alleviate this, but I don't quite understand them, so I can't say why they didn't kick in in this case.


The thing with Haskell is that it doesn't automatically coerce types the way other languages do. It uses Hindley-Milner type inference up front if you don't specify your types, but this is far from foolproof. Once it's decided you're using type Foo, you're stuck. You need explicit typing in many cases.

One of the first things you learn as a Haskell programmer is to treat type inference with suspicion.


Languages like SML and OCaml also use this sort of type inference, but you tend to not need type annotations in those languages except in very special cases. I think this is also fairly true of Haskell -- type inference works in 99% of the cases, and explicit type annotations are just a stylistic thing.


Most of the time you're right. But I've done a fair bit of tinkering with Haskell's OpenGL library. Trying to get GHC to differentiate between a Float and a GLfloat inside a Vector is pretty much impossible without resorting to explicit typing. This definitely qualifies as a special case, but it's cropped up often enough as to make me a little gun shy. You don't need to explicitly type everything, just enough to give the compiler a credible hint as to what you're trying to do -- such as the last term in your vector.

The sad thing about the original article is that the writer is throwing his hands up and raging over what is really a small implementation hiccup. I kind of like that the compiler withholds judgment on whether '5' is a Float, Int, or Integer. It means that I no longer need to type ".0" after every Float.


This is a property of many statically typed languages, and Haskell isn't known for being one of the more forgiving languages in that family.

I appreciate that his goals were more mathematically oriented than not, but surely he would've seen something like this coming?


dropping out to work on something can be a good idea. start while you're in school, and only drop out once you can't handle both, though. maybe take a light course load at first, and if your project really picks up steam, drop the courses.

dropping out to work for someone is just stupid.


the superhero stuff is fun. i have only been programming for ~3 years, though, so i'm probably closer to sidekick than superhero at this point.

architecture is fun too, but i don't do it as much as i probably ought to. and debugging is something i know i'll have to do, but try to avoid.

