
Any competing product has to absolutely nail tab autocomplete like Cursor has. It's super fast, very smart (even guessing across modules) and very often correct.


I think what you want here is a radar chart, not a 2×2 matrix. I suspect most people embody these different types to varying degrees across time, topics, and circumstances.


If I could trouble you for the benefit of the discussion here, would you mind summarizing why one might prefer Guix over Nix? They seem to be based on the very same ideas, and Guix even acknowledges being inspired by Nix.


Scheme instead of a bespoke programming language, and more focus on shrinking the bootstrap set, the part that compromises the goal of reproducibility shared by both systems.


But the bespoke language has some benefits: it’s concise and specifically designed for the problem. Also, Nix has more contributors, which is very important for having a large collection of recipes to start from.


Scheme, being a Lisp, is exceptionally well-suited to serve as a base for DSLs.
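
As a toy sketch (not Guix's actual machinery, just the general mechanism): a few lines of syntax-rules buy you a declarative surface syntax that expands into plain data at compile time.

    ;; Toy sketch, not Guix's real implementation: a macro that turns a
    ;; declarative package form into an ordinary association list.
    (define-syntax define-package*
      (syntax-rules ()
        ((_ name (key value) ...)
         (define name (list (cons 'key value) ...)))))

    (define-package* tmux
      (version "3.0a")
      (synopsis "Terminal multiplexer"))

    ;; tmux => ((version . "3.0a") (synopsis . "Terminal multiplexer"))

And since the DSL is just more Scheme, the resulting packages can be inspected and transformed with ordinary list operations.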


OK, I’ll give another reason: the Nix community is more relaxed and doesn’t downvote just because they don’t like some comment or somebody disagrees with them.


I bet you the irony of your downvotes is lost on the people downvoting.


For sure they are piling on... for what, I don’t get. Relax, people.


Do Nix and Guix expressions not interoperate? Is that a fundamental limitation of the system (at least as fundamental as, say, dpkg and rpm) or could one write a source-level translator?


It's a good question. They don't interoperate now.


Yes, Scheme is a bit verbose for my taste.


There are obviously some differences between the two, but I'm not really perceiving any real verbosity gap one way or the other.

Here is a guix package declaration for tmux:

[Edit: fixed indentation]

    (define-module (gnu packages tmux)
      #:use-module ((guix licenses) #:prefix license:)
      #:use-module (guix packages)
      #:use-module (guix download)
      #:use-module (guix git-download)
      #:use-module (guix build-system gnu)
      #:use-module (guix build-system trivial)
      #:use-module (gnu packages)
      #:use-module (gnu packages bash)
      #:use-module (gnu packages libevent)
      #:use-module (gnu packages ncurses))
    (define-public tmux
      (package
       (name "tmux")
       (version "3.0a")
       (source (origin
                (method url-fetch)
                (uri (string-append
                      "https://github.com/tmux/tmux/releases/download/"
                      version "/tmux-" version ".tar.gz"))
                (sha256
                 (base32
                  "1fcdbw77nz918f7gqc1ga7zlkp1g112in1h8kkjnkadgnhldzlaa"))))
       (build-system gnu-build-system)
       (inputs
        `(("libevent" ,libevent)
          ("ncurses" ,ncurses)))
       (home-page "https://tmux.github.io/")
       (synopsis "Terminal multiplexer")
       (description
        "tmux is a terminal multiplexer: it enables a number of terminals (or
    windows), each running a separate program, to be created, accessed, and
    controlled from a single screen.  tmux may be detached from a screen and
    continue running in the background, then later reattached.")
       (license license:isc)))

Here is (roughly) the equivalent in Nix:

    { stdenv, fetchFromGitHub, autoreconfHook, ncurses, libevent, pkgconfig, makeWrapper }:
    
    let
    
      bashCompletion = fetchFromGitHub {
        owner = "imomaliev";
        repo = "tmux-bash-completion";
        rev = "fcda450d452f07d36d2f9f27e7e863ba5241200d";
        sha256 = "092jpkhggjqspmknw7h3icm0154rg21mkhbc71j5bxfmfjdxmya8";
      };
    
    in
    
    stdenv.mkDerivation rec {
      pname = "tmux";
      version = "2.9a";
    
      outputs = [ "out" "man" ];
    
      src = fetchFromGitHub {
        owner = pname;
        repo = pname;
        rev = version;
        sha256 = "040plbgxlz14q5p0p3wapr576jbirwripmsjyq3g1nxh76jh1ipg";
      };
    
      nativeBuildInputs = [ pkgconfig autoreconfHook ];
    
      buildInputs = [ ncurses libevent makeWrapper ];
    
      configureFlags = [
        "--sysconfdir=/etc"
        "--localstatedir=/var"
      ];
    
      postInstall = ''
        mkdir -p $out/share/bash-completion/completions
        cp -v ${bashCompletion}/completions/tmux $out/share/bash-completion/completions/tmux
      '';
    
      meta = {
        homepage = http://tmux.github.io/;
        description = "Terminal multiplexer";
    
        longDescription =
          '' tmux is intended to be a modern, BSD-licensed alternative to programs such as GNU screen. Major features include:
              * A powerful, consistent, well-documented and easily scriptable command interface.
              * A window may be split horizontally and vertically into panes.
              * Panes can be freely moved and resized, or arranged into preset layouts.
              * Support for UTF-8 and 256-colour terminals.
              * Copy and paste with multiple buffers.
              * Interactive menus to select windows, sessions or clients.
              * Change the current window by searching for text in the target.
              * Terminal locking, manually or after a timeout.
              * A clean, easily extended, BSD-licensed codebase, under active development.
          '';
    
        license = stdenv.lib.licenses.bsd3;
    
        platforms = stdenv.lib.platforms.unix;
        maintainers = with stdenv.lib.maintainers; [ thammers fpletz ];
      };
    }


I feel like (inputs `(("libevent" ,libevent) ("ncurses" ,ncurses))) is pretty bad compared to buildInputs = [ ncurses libevent makeWrapper ]; even if you go by token count. (I think it's 16 vs. 8.) A different problem is that syntactically the Scheme version looks like a call to a function called “inputs” and I don't think it is; that depends on context. In general in Lisps the interpretation of everything depends on syntactic context, so you have to do a lot of processing consciously that you can do subconsciously in languages that have a syntax.

(There's an indentation error in either your example or my browser that makes that input clause appear to belong to the origin clause rather than the package clause, btw. The extra redundancy of the different kinds of delimiters makes that error harder to make in Nix. I wrote about this more at length in http://www.paulgraham.com/redund.html )

The module imports at the top are a lot more egregious but that's because they're using Guile’s module system naked; it's not really the fault of Scheme's syntax per se and I think you could hack together some kind of macrological solution.
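
Something like this hypothetical one-liner (not an existing Guix form; Guile's define-syntax-rule does the heavy lifting) would collapse most of the preamble:

    ;; Hypothetical sugar, not part of Guix: fold the repeated
    ;; #:use-module boilerplate into a single form.
    (define-syntax-rule (use-packages name ...)
      (use-modules (gnu packages name) ...))

    ;; (use-packages bash libevent ncurses)
    ;; expands to (use-modules (gnu packages bash)
    ;;                         (gnu packages libevent)
    ;;                         (gnu packages ncurses))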

I think Scheme is brilliant and probably a better choice, but I think the syntactic cost is pretty heavy in your example.

When it comes to Nix and Guix, though, these are kind of minor details. More important questions are things like “does it have the software I want in it” and “how reproducible is it” and “how do I figure out what's broken”.


On the other hand you've got 'stdenv' all over the place in the Nix example, e.g. stdenv.lib.licenses.bsd3 vs license:bsd3. Also stdenv.mkDerivation is kind of an eyesore compared to define-public / package.

One is nicer than the other in a few different minor ways, but overall I think it's basically a wash. I'd not consider verbosity a factor if choosing between the two.

>(There's an indentation error in either your example or my browser that makes that input clause appear to belong to the origin clause rather than the package clause, btw.

Sorry about that, I botched the indentation when I pasted that into my scratch buffer, which had unbalanced parens in it. That's on me.


That sounds like a reasonable point of view.


Those two package descriptions don't appear equivalent. The Nix one includes tmux-bash-completion and some extra build configuration compared to the Guix one, as well as a much more verbose description.


I meant both to be examples of the general look and feel of each DSL. They aren't precisely equivalent, but I do think they're illustrative examples of the two DSLs.


I think the differences are small.


After deleting the bash-completion stuff and replacing the verbose description with Guix's, the package went from 63 lines down to 36. Deleting the blank lines cut it further, to 27. For comparison, the Guix package (which had no blank lines to begin with) is 35 lines.

Here's the trimmed Nix derivation:

  { stdenv, fetchFromGitHub, autoreconfHook, ncurses, libevent, pkgconfig, makeWrapper }:
  stdenv.mkDerivation rec {
    pname = "tmux";
    version = "2.9a";
    outputs = [ "out" "man" ];
    src = fetchFromGitHub {
      owner = pname;
      repo = pname;
      rev = version;
      sha256 = "040plbgxlz14q5p0p3wapr576jbirwripmsjyq3g1nxh76jh1ipg";
    };
    nativeBuildInputs = [ pkgconfig autoreconfHook ];
    buildInputs = [ ncurses libevent makeWrapper ];
    meta = {
      homepage = http://tmux.github.io/;
      description = "Terminal multiplexer";
      longDescription =
        '' tmux is a terminal multiplexer: it enables a number of terminals (or
  windows), each running a separate program, to be created, accessed, and
  controlled from a single screen.  tmux may be detached from a screen and
  continue running in the background, then later reattached.
        '';
      license = stdenv.lib.licenses.bsd3;
      platforms = stdenv.lib.platforms.unix;
      maintainers = with stdenv.lib.maintainers; [ thammers fpletz ];
    };
  }


Nice! I find this a lot more readable than the Scheme, and it certainly contains many fewer tokens; what do you think?


That looks almost identical to the Scheme one with the only real difference being foo=bar; vs (foo bar). Hardly enough of a difference to change anything "a lot" either way.


Guix package definitions were unreadable to me initially, as I had never used Scheme/Lisp before.

I've written a couple of them now and the definition above is extremely easy to read. A big part is just formatting and parentheses; I think my eyes just needed a little bit of adjustment time.


It’s absolutely more readable.


Guix is a GNU project, which underpins today’s largest ecosystem of OS, utilities, tools, apps, and language-related work. It has a larger community as well.

So although it’s inspired by Nix, I personally will choose it: it has evolved quickly, and if you look at all three aspects (Guix, Guix System, and the documentation), it’s now better than Nix. Last but not least, I work with Emacs Lisp, so I feel at home with Guile Scheme and will prefer Guix over Nix.

Personally, I would like Nix to flourish as well; being a non-GNU project, it can provide closed-source proprietary packages, which are not part of core Guix. I think healthy competition between the two is good, and whichever gets popular is an overall win for advances in the OS ecosystem. Guix System is a whole new OS, not just a package manager.

And by making the Guix package manager available on other systems, it might move people who see the benefits toward a transactional, predictable, secure OS like NixOS or Guix System.


> Guix is a GNU project, which underpins today’s largest ecosystem of OS, utilities, tools, apps, and language-related work. It has a larger community as well.

To be clear, this is talking about the entire GNU project compared to just Nix? All metrics I can find show Nix's adoption is significantly above that of Guix.

https://news.ycombinator.com/item?id=16490027 seems like a pretty interesting comment on the benefits of Guix, albeit old.


There is a different side to the cost benchmark that's not captured by the description here. If your use case needs a lot of stored data but not a matching degree of peak CPU (even if your query load is otherwise pretty consistent), Redshift will become really expensive really fast and it will feel like a waste. BigQuery, meanwhile, will keep costs (almost) linear in your actual query usage, with very low storage costs.

For example, you may need to provision a 20-node cluster only because you need the 10+ terabytes of storage across several datasets you need to keep "hot" for sporadic use throughout the day/week, while not needing anywhere near that computational capacity around the clock. Unlike BigQuery, Redshift doesn't separate storage from querying. Redshift also doesn't offer a practically acceptable way to scale up/down: resizes at that scale take up to a day, deleting/restoring datasets causes lots of administrative overhead, and even capacity tuning between multiple users is a frequent concern.
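
To put rough numbers on that (illustrative list prices; they vary by region and change over time, so treat this as a sketch):

    Redshift: 20 ds2.xlarge HDD nodes (~2 TB each) at ~$0.85/hr
              => ~$12,000+/month, billed around the clock regardless of load
    BigQuery: 10 TB stored at ~$0.02/GB-month => ~$200/month,
              plus ~$5 per TB actually scanned by your queries

With a sporadic query load, the second bill tracks usage while the first tracks provisioned capacity.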

Making matters worse, it is common for a small number of tables to be the large "source of truth" tables that you need to keep around to re-populate various intermediate tables, even if they themselves don't get queried that often. In Redshift, you will provision a large cluster just to be able to keep them around, even though 99% of your queries will hit one of the smaller tables.

That said, I haven't tried the relatively new "query data on S3" Redshift functionality. It doesn't seem quite the equivalent of what BigQuery does, but may perhaps alleviate this issue.

Sidenote: I have been a huge Redshift fan pretty much since its release under AWS. I do, however, think that it is starting to lose its edge and show its age amid the recent advances in the space; I have been increasingly impressed with the ease of use (including intra-team and even inter-team collaboration) in the BigQuery camp.


Redshift offers hard-disk-based nodes with huge amounts of storage at low cost for precisely the use case you mention. The performance of these is actually very good, especially with a little effort applied to choosing sort keys and dist keys.

Spectrum extends that even further, allowing you to have recent and reference data locally stored and keep archival data in S3 available for query at any time.


No idea why people reacting here so far got fixated on the "cheating" versions; it's clear to me they were included mainly to set a maximal-speed baseline and are not the main point of the article.


I find the "cheating" versions peculiar because I don't see the purpose of it. What's the point, and in what way is it cheating? It's just a different algorithm, and doesn't add any useful information to the subject at hand.


Numerical operations in a loop are often subject to aggressive optimisation by C compilers, which makes them tricky to use in benchmarks: are we measuring the intended loop, or has the work been optimised away? Often comparisons are made of "Blub vs C", where the C result is an order of magnitude smaller, and it's not clear if that's because C is fast or whether it's been optimised away.

Including an "optimised away" version lets us know when this has happened: the "non-cheating" benchmarks take much longer than the "cheating" ones, so we can assume they've not been optimised away.

I assume the author only went into detail about them because they're independently interesting, regardless of the main topic of the post.
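
As a concrete sketch (my own example, not code from the article), this is the kind of loop an optimising compiler can collapse outright:

    /* Sketch, not from the article: with optimisation enabled, a
       compiler such as Clang can replace this loop with the closed
       form n*(n-1)/2, so timing it measures almost nothing. */
    #include <stdio.h>

    int main(void) {
        long long sum = 0;
        for (long long i = 0; i < 1000000000LL; i++)
            sum += i;
        printf("%lld\n", sum);
        return 0;
    }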


The solution to this is to deliver arguments at runtime, rather than baking them into the program as constants. Describe some computation by a data structure that is delivered at runtime, and see which implementation does best. That way there can be no cheating.


> The solution to this is to deliver arguments at runtime, rather than baking them into the program as constants.

Functions already take their arguments at runtime. Except when they don't, due to optimisation.

For a benchmark to be automated, reproducible, etc. those constants have to be baked in somewhere, even if it's in a Haskell program using FFI (as in the article), or a shell script, etc. Whilst optimisers don't (yet) cross the language/process boundary, it still makes sense to include such sanity checks, rather than assuming we know what the optimiser will/won't do.

After all, the whole point of a benchmark is to gather evidence to question/ground the assumptions of our mental model. The less we assume, the better. The more evidence we gather, the better.
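
To sketch the limits of the runtime-argument trick (again my example, not the article's): even with the bound arriving via argv, the loop below isn't safe; Clang, for one, can still rewrite it into a closed-form expression evaluated at runtime.

    /* Sketch: the iteration count arrives at runtime, yet an optimiser
       may still replace the loop with n*(n-1)/2 computed from n, which
       is why an "optimised away" baseline remains a useful sanity check. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        long long n = argc > 1 ? atoll(argv[1]) : 0;
        long long sum = 0;
        for (long long i = 0; i < n; i++)
            sum += i;
        printf("%lld\n", sum);
        return 0;
    }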


Well, the article starts with them and the chart starts with them! But the whole article would be better off without them.


And data management/preparation is where major mistakes are made: get a join wrong and you may easily be missing data or double-counting something just obscure enough to go unnoticed, like "ancillary sales".

There is something potentially harmful, or at least something that needs addressing, about end-user tools growing in expressive power. A good friend who does statistical genetics work once told me: "but I don't want every user running their own regressions and drawing nonsensical conclusions from badly prepared data!"


Yep. Wake me when an AI can tell me "hey, that monthly trending conversion report you asked me to pull... yeah, you're missing two days' worth of data from when tracking broke, so it will just make your numbers look lower when rolled up monthly and be hard to notice."

BTW, that is also the reason I not only set alerts but also review data at a daily level when pulling any rolled-up reports of significance.


Why can't AI tell you you're missing data?


For most use cases, you don't even need AI. Once you reach a certain scale, there are certainly ML-based solutions available: https://medium.com/netflix-techblog/rad-outlier-detection-on...


That ship sailed somewhere around Excel 95...


Well, it's really tempting and financially desirable to try and get on Amazon anyway. There are lots of consumers on Amazon with both an appetite for that kind of an artisan item and the purchasing power to go for it despite the higher lead time. So the demand side is usually a given.

Don't forget that, from the manufacturer's perspective, Amazon is not only the logistical facilitator, but also a sales channel where huge numbers of potential customers are introduced to companies they wouldn't naturally be exposed to.


I don't want to have to pin every single tab I want unthrottled - that's a mixing/piggybacking of otherwise orthogonal concerns. Just make it a separate option I can set per tab.


In addition to automatically inferred signals, it would be great to give the user optional, complete control over throttling at a per-tab level as well: i.e., a switch somewhere that the user can freely toggle for a given tab to mark it throttled vs. unthrottled, whatever the right terminology would be.


Any great chess player (let's define that as near-IM FIDE ratings and up) will tell you that it's a highly iterative process between practice, analysis, and reading that is very much anchored around practice.

They play thousands and thousands of hours and, yes, also spend quite a bit of time reading, thinking, analyzing (both their own and others') games, and learning from mentors/teachers. However, practice is king, and all the reading/analysis would be worthless in its absence: it would have no anchors to grab onto in the brain, no way to really become operational.

A chess "player" that mostly reads, studies and analyzes with a little bit of practice sprinkled in between would indeed be hilariously weak.


Take a beginner player who has been playing at the local chess club once a week (two hours) for 12 weeks. This player enjoys the game and wants to be better. He plays anyone who will give him a game. He occasionally wins against other beginners. He loses to the stronger players when he gets a chance to play them. This player has 24 hours available to work on chess over the next 12 weeks. What is the best way for this player to improve? Suppose the options are: 1) continue attending chess club for the next 12 weeks, or 2) stay at home and study two hours a week for the next 12 weeks. The study material is Logical Chess Move By Move by Irving Chernev (mentioned above by msluyter). The book covers 33 master-level games in two categories: kingside attack and queen's pawn opening. Chernev explains the reasoning behind every move in all 33 games. When the player shows up at chess club on week 25, will he be better off having chosen option 1 or option 2?

