cs648's comments | Hacker News

87/100. Very interesting game; it definitely helped me start to understand why some people get so upset about incorrect kerning. It would have been nice to see a running total score throughout.


Very interesting article, especially the t_scope allocator - I never knew you could get GCC to perform that cleanup automagically. One minor grammar point: isn't a lock under contention a contended lock, not a contented lock?


Wording issue fixed.


Is the source for the allocators freely available? Would love to study those.


Unfortunately, not for the moment.


I'd also like to put in a request for either open-sourcing or a more detailed overview of the implementations, they sound really interesting.


I'll consider writing a more detailed article on the subject. Open-sourcing the code will not be possible in the short-term.


In lieu of the implementation, I'd like to know if these allocators are themselves based on malloc or if you have some tricky assembly/kernel code going on somewhere.


These allocators are based on mmap to build the arenas.
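(For the curious, a very rough sketch of what an mmap-backed bump arena can look like; this is my own illustration of the general idea, not the implementation the article describes.)

  #include <stddef.h>
  #include <sys/mman.h>

  typedef struct {
    char  *base;   /* start of the mmap'd region */
    size_t size;   /* total bytes reserved */
    size_t used;   /* bump-allocation offset */
  } arena_t;

  /* Reserve one contiguous region from the kernel for the whole arena.
     MAP_ANONYMOUS is a common extension (Linux, BSDs, macOS). */
  static int arena_init(arena_t *a, size_t size) {
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
      return -1;
    a->base = p;
    a->size = size;
    a->used = 0;
    return 0;
  }

  /* Bump allocator: hand out 16-byte-aligned slices until the arena is full. */
  static void *arena_alloc(arena_t *a, size_t n) {
    size_t off = (a->used + 15) & ~(size_t)15;
    if (off + n > a->size)
      return NULL;
    a->used = off + n;
    return a->base + off;
  }

  /* Releasing the whole arena frees everything allocated from it at once. */
  static void arena_release(arena_t *a) {
    munmap(a->base, a->size);
  }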


+1 for the request for implementation details. I'm really curious about this.


Yes, I wish that cleanup was a portable C feature! Glad to see that GNU is trying something here, as it would serve as a prototype for standardization. Perhaps in a future version of C...
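(For readers who haven't come across it, the GNU extension being discussed is the variable "cleanup" attribute. Below is a minimal sketch of how it can be used; the helper names are my own illustration, not the article's actual t_scope code.)

  #include <stdio.h>
  #include <stdlib.h>

  /* GCC calls this automatically when the annotated variable leaves scope. */
  static void free_buf(char **p) {
    free(*p);
  }

  int main(void) {
    /* GNU C extension: run free_buf(&buf) whenever buf goes out of scope,
       on every exit path, much like a destructor. */
    char *buf __attribute__((cleanup(free_buf))) = malloc(64);
    if (!buf)
      return 1;
    snprintf(buf, 64, "hello");
    puts(buf);
    return 0;  /* free_buf(&buf) runs here */
  }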


There is an extended version of C which has this feature and is nearly as widely ported as C. They aptly named it C++.


C++ has an IMHO worse version of this feature: it requires a custom type, and it only allows a single function to be called for that type.

This is more like Go's defer, and is far more appropriate for my use cases.
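(To make the defer comparison concrete, here's a rough sketch of the same GCC extension used defer-style with more than one resource; again, just an illustration with hypothetical helper names.)

  #include <stdio.h>

  /* One small cleanup helper per resource type. */
  static void close_file(FILE **fp) {
    if (*fp)
      fclose(*fp);
  }

  int copy_header(const char *src_path) {
    FILE *in  __attribute__((cleanup(close_file))) = fopen(src_path, "r");
    FILE *out __attribute__((cleanup(close_file))) = fopen("header.txt", "w");
    if (!in || !out)
      return -1;            /* both cleanups still run on early return */

    char line[256];
    if (fgets(line, sizeof line, in))
      fputs(line, out);
    return 0;               /* cleanups run in reverse declaration order */
  }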


I haven't written much C++ in quite a few years - mostly do Ruby these days, so I'm sure I'm making some terribly embarrassing faux pas or other with the example below. But you don't need more than the C++ functionality to compose your own variations if you want more flexibility.

For example:

  #include <vector>
  #include <iostream>

  class Scope {

  private:
    typedef std::vector<void (*)()> FV;
    FV fv;
  public:
    void on_return(void (* f)()) {
      fv.push_back(f);
    }

    ~Scope() {
      for (FV::iterator it = fv.begin(); it != fv.end(); ++it) {
        (*it)();
      }
    }
  };

  void foo() {
    std::cout << "Hello ";
  }

  void bar() {
    std::cout << "World" << std::endl;
  }

  int main() {
    Scope scope;

    scope.on_return(foo);
    scope.on_return(bar);

    std::cout << "Hi" << std::endl;
  }

With C++11 lambda syntax you can do quite a bit better.

Expanding that into something providing at least most of what Go's "defer" does shouldn't be too hard.


Better would be to just have the Scope class take a single function to run and establish multiple stack entries for them. This would be my version:

  #include <cstdio>

  template <typename tFunc>
  class ScopeFunc {
  private:
    tFunc mFunc;

  public:
    ScopeFunc(tFunc func) : mFunc(func) {}

    ~ScopeFunc() {
      mFunc();
    }
  };

  template <typename tFunc>
  ScopeFunc<tFunc> on_return(tFunc func)
  {
    return ScopeFunc<tFunc>(func);
  }

  int main() {
    auto first = on_return([]() { std::puts("first"); });
    auto second = on_return([]() { std::puts("second"); });

    std::puts("Hello");
  }

Note that this is C++11, using lambdas. Also note that the output will be reversed from what you might expect, since it's a LIFO, but that's probably what you actually want for real deferred behaviour, as opposed to text output.

It also optimizes nicely, which yours may not because the loop unrolling may be complicated and there's a higher chance of aliasing of the function pointers in the vector:

  0000000000400600 <main>:
    400600:	53                   	push   %rbx
    400601:	bf e4 07 40 00       	mov    $0x4007e4,%edi
    400606:	e8 b5 ff ff ff       	callq  4005c0 <puts@plt>
    40060b:	bf ea 07 40 00       	mov    $0x4007ea,%edi
    400610:	e8 ab ff ff ff       	callq  4005c0 <puts@plt>
    400615:	bf f1 07 40 00       	mov    $0x4007f1,%edi
    40061a:	e8 a1 ff ff ff       	callq  4005c0 <puts@plt>
    40061f:	31 c0                	xor    %eax,%eax
    400621:	5b                   	pop    %rbx
    400622:	c3                   	retq


Good idea; this definitely accomplishes the same functionality, but it's done at runtime rather than with the compiler knowing all the functions at compile time. Perhaps a minor difference for most cases, though...

I would still prefer a C extension... Perhaps I should just use Go these days, though ironically all these memory allocation policies are useless in a GC language.


You underestimate the compiler. This formulation might be tricky, but the version I just posted (adjacent to this post) definitely leaves the compiler knowing exactly what's going to be called when the function exits.


Then "they" later discovered the name was no longer apt and it should have been called ++C.


C++ is the correct name: you write a few thousand lines of C code and then write "class LargeObject" on the first line. So, C++.


I'm concerned that would create a temporary variable; I'll try to decipher the ISO spec to verify. Is LargeObject a POD type?


Will C++ ever completely replace C? On top of that, extensions like Unified Parallel C and Cello keep coming up!


No. The C vs C++ question is a simple language vs. a comprehensive language. A simple language might be preferred for various reasons. For example, code is easier to review, which is Linus Torvalds' argument. Bindings to other languages are easier to create. The compiler is simpler, which might be important for embedded and high-reliability jobs.


Microsoft just bought Nokia...


No, they bought their mobile & services businesses unit, and licensed some patents. Nokia still exists.

From Microsoft's press release: "Nokia will continue to own and manage the Nokia brand."

http://www.microsoft.com/en-us/news/press/2013/sep13/09-02An...



Nope! I did it the old-fashioned way, with a Canon 1Ds Mark III and a macro lens.


I find it incredibly annoying that laptop lights blink when they're on standby - I often put my laptop on standby when I sleep, and then I have a light either blinking or "waxing/waning" in my face (I like to sleep in complete darkness). My improvised solution for now is to throw an item of clothing over the laptop.


Black gaffer's tape is great for almost all the annoying "hey, I'm here!" LEDs in the office. The blue ones are especially gruesome.


It was abuse, using a misogynistic word, in a department where there is already a dearth of women due to the hostility they face. Brushing it under the carpet by denying her experience isn't helpful. You don't need to read this story to learn of sexism in CS departments; just ask any woman who does computer science and you will hear plenty of stories.


Right, but it's an anecdote. The title of the piece suggests some sort of comprehensive study, but it fails to deliver anything beyond a single story of friction between two people (one of whom clearly has issues with women).


"Bitch" being a misogynistic term is a stretch...


It's more about what was communicated/interpreted than the actual words.

Take for instance: "I'm not going to let some bitch tell me what to do" vs. "Hey bitches, what's up?"

The first statement could communicate that the speaker has issues with women telling him what to do. The second statement is playful and affectionate.

However, because the word "bitch" is so generic and widely used, the first speaker could easily have issues only with unpleasant people, not with women. It's not clear.

That's why it is better to talk about a person's pattern of behavior, rather than labeling an isolated incident as sexism.


> where there is already a dearth of women due to the hostility they face

Where the hell did you pull that from? I suppose there's a "dearth" of men in nursing too? Can I infer an epidemic of misandry because we don't have 50% male nurses? Ludicrous.

> just ask any woman

Yeah, because the plural of anecdote is data </sarcasm>


This really is the sensible solution.


Can you really use this as an argument against the government protecting workers by ensuring they get a guaranteed amount of holiday a year? I don't think it's valid to say EU workers have their pay downsized to compensate for enforced holidays; what about things like the minimum wage? (After all, we are talking about people at the bottom end of the scale here, who already have minimal protection from employers.)


"But the site acknowledges that they are largely not to blame. It reads: "While giving millennials grief is highly entertaining, we want to acknowledge that the woeful state of the economy is not their fault. These free issues and e-cards are intended to help a generation that could sure use a hand, not to blame them.""

The quote "These free issues and e-cards are intended to help a generation that could sure use a hand, not to blame them." does not seem very genuine to me. It is very difficult for people in my generation to advance themselves in this climate. Youth unemployment has risen globally, and personally I'm horrified at how this government (in the UK) is sabotaging the future of so many children and young people. These kind of glib pot shots were funny 10 years ago, when social mobility was possible but they come across as arrogant and out of touch in today's world.


There seems to be an ever-increasing trend of replacing Python components experiencing high load with ones written in Go. Is this Go's niche? Definitely a language I will be learning in the near future; it doesn't seem like it will be going away any time soon.


I've settled on Go to replace a worn out Java mess (otherwise a Python shop). We need the computational performance, and I do like the general feel of the language. I think this is something you're going to see a lot of going forward. It's the same niche Scala has been filling to an extent, but I personally think Go is a much better option (unless you need the JVM of course).


Is there evidence that Go is better than Java in computational performance? Truly curious: are there links?


You might have misunderstood the parent, I think. It seems like he was saying that they used Java (and then Go) in lieu of Python because of the performance.

But to answer your question, the Language Shootout seems to suggest Java and Go are on the same plane in terms of speed. Take that with however many grains of salt you like.


It's also interesting that, according to the Language Shootout, Go uses a fraction of the memory that Java does - meaning you can save a lot of money by deploying it on cheaper machines or VMs and still get similar performance to Java.


I just checked out the Language Shootout. Go has really pulled ahead from where it used to be. Faster than SBCL, OCaml, or Free Pascal on quad-core 64-bit - that's impressive.

Clojure's done some impressive improving, too.


And be aware that Go is quite young compared to Java; the optimisations will only get better and better.


I wouldn't use Go for high-frequency trading systems yet. But you get 80% of the performance with a lot fewer type declarations.


I can use Scala and not pay that penalty. I'm looking at Go to replace components of my system where I don't want a full-bore JVM but still have to be thoughtful about latency. I prefer C for this (C++ seems to be the standard there). I'd love to move to something like Go once I can.


Well, some people are ready to trust Go even for this: http://andrewwdeane.blogspot.de/2013/05/the-reliability-of-g...


I'd say Go's niche is pretty much any sort of network server. It's the right combination of high performance, easy multithreading, and making it hard to shoot yourself in the foot (no buffer overflows, for instance).


I've heard this several times. But the success stories are mostly concerned with smaller parts under heavy load. Go still is a lot harder to work with than Python if you know Python and its tools well...

With Go you don't have anything comparable to Django, Numpy, Pandas...


> With Go you don't have anything comparable to Django, Numpy, Pandas...

Given the age of the language, I would assume building out its associated toolchain is only a matter of time. Python didn't ship with Django, Numpy, Pandas...


Python is rapidly becoming the lingua franca of data science; but there, the vast majority of your inner loop isn't Python (numpy, scipy, pandas, numba, theano; C, Fortran, assembler, Cython, LLVM, CUDA...).

Basically Python is the glue language data people wanted, it turns out.


The best parts of python... are C.


The most-frequently-executed parts of Python are C. The best part is being able to glue all those together easily.


It will be a matter of years. I, for one, am developing stuff today.


As are most of us, lol. I would say that if you want to use golang for a serious project and need additional libs, you have to be ready to write them yourself, which grants the opportunity to give back to the community and contribute to open source, etc.


This is very true. We're definitely looking at Go for "services", not as something to replace our entire Django app with.


Will this change with time? I know numpy, pandas and matplotlib are relatively large projects, but my assumption is that someone will eventually put together data analysis / matrix math libraries to perform some of these functions.


I think Julia (http://julialang.org/) might be a better alternative to Scientific Python than Go. I'm not sure you can get the same flexibility/expressiveness you get in Numpy/R/Matlab in Go.


I find it rather annoying that everyone seems to be advocating Go. It may be a good language, but I can't help but feel that the only reason it's being used is because Google's behind it. You guys should look into alternatives like Nimrod (http://nimrod-code.org/), which is in fact a lot closer to Python than Go will ever be.


People are advocating Go because it delivers. It has great concurrency primitives (channels), it is a "boring" language syntax-wise, and it is fast.

Beyond that, it deals with a lot of the 'ugly' things around the edges of other languages: dependency management, build management, deployments... all of these are, IMHO, much better thought out in Go.


It delivers? Lol. The GC used to suck for 32-bit systems, and it still sucks for realtime - as opposed to Nimrod's, which pretty much guarantees a maximum pause time of 2 milliseconds, independent of the heap size. And that's only the start: Go also lacks generics and metaprogramming in general. And its memory/concurrency model is just as racy as Java's.


Go was never designed for "realtime". Also, 32-bit wasn't the main compiler focus; 64-bit was. With that problem mainly fixed in the 1.1 release, it is a non-issue now. The memory model seems pretty well defined without being too restrictive, and with the recent addition of the race detector, Go looks well equipped for this kind of problem, and some pretty interesting projects are there to prove it.


You are correct on some points: 32-bit was broken, and realtime is a non-feature. It does (by design) lack generics and metaprogramming (Pike talked about these at length at one point).

I have to disagree on the concurrency model, I think message passing channels are a much more natural primitive to model concurrency in, and goroutines are exceptionally light.

EDIT: When you talk about Nimrod, you might go ahead and mention that you are the designer of Nimrod... it might color your judgement.


"Nobody is smart enough to optimize for everything, nor to anticipate all the uses to which their software might be put." -- Eric Raymond, _The Art of Unix Programming_

Go's primary niche is server software, and in that niche, it is gaining in popularity and has the backing of a large company. For servers, neither support for a 32-bit address space nor real-time support is important.

Does support for generics really matter when the language has built-in support for the most common collection types?

The allowance of shared mutable memory between goroutines does worry me somewhat.


Personally, having it backed by Google makes Go better in my eyes. I feel like a ton of smart (if slightly insane) individuals are working on it, and it can only get better.

Nimrod? Never heard of it or the person backing it.

I'm _not_ a language snob, trust me! I'm just a regular family-guy software developer, and I try to put my proverbial eggs in reliable baskets.

Go doesn't seem to be going away any time soon and it's really really fast.


If you want to put your eggs in reliable baskets, then why not use Java, or C#, or even C++? Those languages are most definitely not going anywhere anytime soon.


Because I already know C#, and why waste time learning Java, which is very similar? What would I gain?

I chose Go over C++, because it looks easier.


What do you gain from learning Go?


Obscene speed and learning a language built with concurrency in mind.


Go's goroutines are superior to threads in C#.

C++ is harder and there's no garbage collection.

I now have to, yet again, close my apps and restart my computer for an IT-forced Java runtime update, for some app I rarely use. For that reason alone I wouldn't consider Java.


From what I can tell Go's goroutines are pretty similar to C#'s await functionality.

But I don't want you to use C#/C++/Java, I want you to use Nimrod.


I forgot that C# has added more asynchronous stuff since I last coded in it. I used to love C#, but it's more verbose than Go, and I'm wary of the obfuscated MS documentation that pushes me straight to blogs. Also, I gave up on using Windows for web development.

This is the first I've heard of Nimrod.


Plus there are Rx (Reactive Extensions), the TPL, and TPL Dataflow.


If I'm not mistaken, that's a fallacious slippery-slope argument. I'm still undecided on how to weigh language popularity against other factors, but surely relative popularity is an important factor in things that have a big effect on real-world software development, such as availability of libraries and tools. For now, at least, Go is much more popular than Nimrod, and also has more people working on the language implementation and surrounding tools.


Perhaps. But in my travels, I have yet to find anyone really pushing Dart just because it's Google. (Perhaps it's still too early.)


I believe Go's purpose is exactly that: replacing more familiar languages with a statically compiled alternative that is palatable. It's fulfilling a niche where one would otherwise resort to C/C++.


If you're CPU-bound, I think it's a good option to at least check out. It can work with real threads, and doesn't contend with the GIL. In our case, our actual CPU usage was low, but due to the high concurrency, any little bit of CPU work required a context switch and contended with everything else, all for very little work. We alleviated that by just using a bunch of processes.

Will it replace all of our Python? Highly doubt it. Some very specific pieces? Probably.


Do you know about GOMAXPROCS? Sounds like tuning it could help.


I was referring to our Python processes being CPU bound. Not Go. :) Yes, I'm familiar with GOMAXPROCS.

