
I had rare occasion to compile a Rust application recently because a prebuilt binary didn’t exist for my platform. It took just over two hours! Coming from primarily Go development recently, I was astonished.

How do you get anything done when the cost of testing an idea is several hours?! I’d feel like the sunk cost fallacy would kick in for even the most lackluster of changes just because you took the time to try it.




I have worked for 3 years on a project where it took a whole week to get the code compiled, signed by an external company and deployed to the device so that I could see the results.

I just learned to work without compiling for a long time. Over time my productivity increased and the number of bugs fell dramatically.

Working this way requires you to really think about what you are doing, which is always a good idea.

This was over a decade ago and now I work mostly on Java backends and I am happy that I typically spend days or even weeks without ever compiling the code and that it usually works the first time I run it.

I can't imagine going back. It looks really strange to me to observe other developers constantly compiling and running their code just to see if it works. It kinda looks as if they did not exactly understand what they are doing because if they did, they would be confident the implementation works.

The only time when I actually run a lot of compile/execute iterations is when I actually don't know how something works. I typically do this to learn, and I typically use a separate toy project for this.


Often you can be certain that you know what to expect from your own code, but not from dependencies or external systems. So checking that your assumptions about them are right is a major reason to run and rerun your code.


That is a good reason to minimize the number of dependencies and only use the ones you know and can reason about. It is also part of what I do to help me reason about code before I execute it.

As I mentioned, if I don't understand a dependency or external system I make a separate toy project where I can rapidly experiment and learn.

Think of it as having fun in an aircraft simulator. You play with it so that you are prepared to fly the actual plane.

Also, checking your assumptions by seeing whether the code works is a problem in itself. A lot of these broken assumptions will not break your code immediately, but maybe sometime later: maybe when your application is under load, or when the clock on the server is moved back by an hour, or when the connection breaks in a certain way.

Base your work on knowing how something works and not assuming.

The best way to limit the risk of your application failing due to a broken assumption is to limit your reliance on assumptions in the first place.


I used to use this approach, but the new heavily typed languages bring a lot of really nice tools that you only get to use at compile time.

Specifically in Rust, you can use the language to guide you through things like refactoring, thread synchronization, correctness verification and a lot of other things. But you have to run the compiler.
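
For anyone curious what that compiler guidance looks like in practice, here is a minimal sketch (my own example, not the parent's code): swap the Arc below for a plain Rc and rustc rejects the program at compile time, because Rc is not Send.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // With Rc<Mutex<i32>> instead of Arc, this fails to compile:
        // error[E0277]: `Rc<Mutex<i32>>` cannot be sent between threads safely
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        println!("total = {}", *counter.lock().unwrap());
    }

The point is that you learn about the data-sharing mistake from the type checker, not from a flaky runtime failure.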


I don't write production code in Rust (though I'm learning it for my embedded hobby).

But you can say the same for Java. IntelliJ IDEA internally does the equivalent of compilation and tells me exactly where my code would fail to compile.

So in a sense I am not strictly practicing my approach, but I also don't see a reason to do so if the tools reliably give me hints when I've made a mistake writing something that will not compile.


> So in a sense I am not strictly practicing my approach

Developing in an IDE that compiles almost continuously is about as far from the development philosophy you're advocating for here as one could get :P


That doesn't make any sense.

This isn't about throwing away tools for some idealized goal. It is about using the tools that are available to achieve the best results without becoming so reliant on the tools that you don't know what your program is going to do without compiling and running it.

An IDE helps catch a lot of stupid simple mistakes, and that saves time. Why would that be bad?


I don't think using an IDE to catch lots of stupid simple mistakes is bad. It's how I prefer to work.

> It looks really strange to me to observe other developers constantly compiling and running their code just to see if it works. It kinda looks as if they did not exactly understand what they are doing because if they did, they would be confident the implementation works.

Explain to me how this statement doesn't apply to your use of an IDE, yet the other engineers you've observed don't understand what they're doing.


If you can't read that sentence with comprehension, none of my explanations are going to help.


It's legitimately surprising that you would double down here instead of realize that your tooling is recompiling your code and showing you the result continuously, making your workflow essentially the same as the people you seem to feel so superior to.


They did read that sentence with comprehension. It is you who can't connect the dots. Your IDE already does typechecking and finds other issues for you. Basically the only thing you are missing is the ability to run and test your program.


How do you like embedded Rust?

I’m looking forward to someone making a legit IDE/suite with support, no indication of it yet but I assume some day!


I mainly work with STM32 Cortex-M3 MCUs (again, these are my personal projects).

Rust, well, "works". But there are still a bunch of issues, so I keep developing in C until I get the kinks ironed out.


It's working really well for us at Oxide. I even do it on Windows :)


My first job was in a mainframe environment. Compiles were pretty quick, but the job queue was so big and developer compiles were low enough priority that it could take hours of wall clock time to turn around.

I don't remember the specifics but while watching the job queue on the terminal screen I discovered that the job priority was editable. So I bumped it up, my compile job ran, and I was happy. I only did this a few times before I got a lecture from the system operations guys that was quite explicit that I should never do this again.

Yes, you figure out how to run code mentally, how to check carefully for typos and syntax errors, etc. No Intellisense then either, that's another modern crutch.


I am one of those devs who is always compiling and checking if their program is working. I mainly work with Java and still do this.

Less so than when working with JavaScript. But please teach me your ways hahha


Start by understanding that this compile/run process is a crutch. Rather than use your knowledge, experience and intelligence to predict whether it works, you resign yourself to just waiting to see.

This is a problem for many reasons. One is that this may help you get something working, but without deep understanding the outcome will likely be subpar, if only because not all of the problems you could have predicted will show themselves on execution.

Another is that it basically shrinks the part of the brain that is necessary for understanding and predicting the behavior of your code (figuratively). Sort of like how driving with GPS makes me helpless without it.

Try to write larger stretches of code without compilation.

Try to focus on modularizing your application so that you can reason about modules separately. This is always a good idea, but it is even more important when you need to be able to predict how something works without trying it.

When you have finally compiled your code and it fails, do not immediately go fix the problem. Try to spend a moment learning from the failure and improving your process so that you minimize the chance of this happening in the future.

Ask yourself: what could you have done to prevent this problem from happening? Could you have specified some function or module better? Could you have simplified your code so you could reason about it more easily? Would it have helped if you had spent a bit more time getting acquainted with this internal or external library before you decided to use it?

In my experience, most of this comes down to the following things:

- defining your modules and APIs correctly -- badly defined modules make it difficult to predict the behavior,

- finding simple solutions to problems -- complex solutions tend to make it difficult to predict behavior,

- using only tools you understand,

- only interacting with existing code after you have understood how it works (I typically at least look over the code that I plan to use),

- thinking hygiene (make sure you base your work on hard facts and not beliefs),

- refactoring, refactoring, refactoring -- the first solution to a problem is rarely optimal. I write something that works and then immediately keep refactoring it, removing any unnecessary complexity until I am satisfied. Don't leave refactoring for later -- a piece of code is easiest to change right after you have written it.

- as much as possible, writing your code in a way that it is not even allowed to produce a wrong result. This is a very large topic so I won't explain it here, but there are a lot of techniques you can research (one small illustration below).
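
To make that last point a bit more concrete, here is a minimal sketch in Rust (my own example, since this thread is about Rust; the same idea applies in Java): model the states as a type so that the wrong combination simply cannot be constructed.

    // A struct with an `is_connected: bool` plus an optional session id can
    // drift into inconsistent combinations; an enum makes those states
    // unrepresentable in the first place.
    enum Connection {
        Disconnected,
        Connected { session_id: u64 },
    }

    fn describe(conn: &Connection) -> String {
        // The compiler forces every variant to be handled; there is no
        // "connected but missing a session id" case to forget about.
        match conn {
            Connection::Disconnected => "offline".to_string(),
            Connection::Connected { session_id } => format!("session {}", session_id),
        }
    }

    fn main() {
        println!("{}", describe(&Connection::Connected { session_id: 42 }));
    }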


Do you think there is a time and place where it is more sensible to just go by trial and error?

For example, when I am interacting with a codebase for the first time and I want to implement something, I just keep bashing away and throwing shit at the wall until something sticks. After that I start working more in line with what you described.


How exactly you arrive at understanding the codebase is not as important.

Just make sure you keep actual development separate from learning the tool, if you care about your results and especially about reliability.

Now, I use various ways to learn the tools and codebase. Running a PoC for my idea or maintaining a separate toy project helps me maintain that hygiene.

For example, for the past year I have been spending a lot of time learning reactive programming with RxJava and Reactor. I have created a dozen small projects illustrating various ideas for making reactive APIs, processing pipelines, separating business logic from infrastructure, composing reactive modules, etc.

I did this with the aim of purposeful learning rather than writing production code, even though some of these in the end migrated into the production codebase.

I am now at a level where I can, again, write large swaths of modules and libraries, but now using the reactive paradigm, with a very good chance of them working correctly, which for me validates that I more or less understand what is going on.


In case anyone else does this and is new to Rust - you can use `cargo check` to type check/borrow check/everything else without doing any codegen


This thread is no longer about Rust or about checking whether the code compiles. It is about how you can work with compilation times that are longer than a coffee break.


My theory for Java is that it was frog boiling turned cargo culting.

Comparatively speaking Java compilation was very fast at the beginning, so for instance Red-Green-Refactor (RGR) works pretty well. There's a parallel to other creative jobs where sometimes shuffling around the things you already have reveals a pattern that leads to a breakthrough.

But there are other feedback loops where, with J2EE in particular, the cycle times started to creep up and up, and at some point, if you haven't stopped and looked at what you're doing, you don't see how crazy things have gotten. RGR still has a place there because you are typically not recompiling everything and you aren't spooling up the application to run unit tests. But making one-line changes to see how a page loads is just bonkers amounts of busy work.

One of the bad dynamics is that people who work like this also tend to memorize the code, which is both bad for new hires (circular logic does not reveal itself when you introduce one assertion at a time, but does when you get hit with all of them at once) and also incentivizes you to push back on refactoring. Because those damned smartasses keep moving things around and they were Just Fine where they were. If that happens you have cemented the entire codebase, and anything that is really wrong with it is going to stay wrong until someone proposes a rewrite. And having learned the wrong lessons the first time, we repeat them again in the second iteration.


Compiling Servo, a web browser engine, takes 5-6 minutes for an optimized build from scratch on my 6C/12T machine. So no, 2-hour build times are not normal for anything but the largest software projects.


Compiling paru, an AUR helper for Arch Linux, on the other hand, takes me around 3 - 4 minutes on a 4C/8T i5 for an optimised build. I think Rust compile times might just fall within a narrow range.


I suspect the machine you were using was swapping itself to death, because I've never experienced anything resembling that in Rust.

I also presume you were compiling in release mode, which is good for producing extremely fast programs but not something I bother with in regular development (in contrast to Go, which has no distinction between debug and release optimization levels).

> How do you get anything done.

The vast majority of my workflow just involves seeing if my code typechecks, for which I don't even need to build the program (so no codegen, no linking). This I do very frequently, as a sanity check on my work. The command for this is `cargo check`. This takes less than a second for small projects, one to five seconds for medium projects, and one to twenty seconds for large projects.


> I suspect the machine you were using was swapping itself to death

Good point. I recently upgraded my machine to 64 GiB RAM, because 16 GiB filled up pretty quickly with parallel compilation, several VS Code/rust-analyzer instances and a couple of browser tabs open.


In my experience, Rust compilation times aren't that different from what I'm used to in the JavaScript world. Initial compilation takes a minute or two (comparable to the runtime of `npm install` on Windows); when you make changes you usually get incremental compilation within a few seconds.

I guess you can have projects that are just huge and/or run into some pathological case that increases compile time a lot (just like with C++), but for any subsequent compilations you should get very fast incremental builds.


I doubt your compile times were due to pattern matching, which TFA demonstrates to be NP-complete, any more than C++'s compile times are due to the undecidability of template metaprogramming.

(Though I guess one might argue that slow compile times on typical code and rarely-used computationally hard features have the same root cause of not treating fast compile times as a top priority.)


Right, and C++ got a lot of its compiler performance for large projects by making this an embarrassingly parallel problem even though the consequences are negative for the main consumers of the code (people, not compilers).

One cost of being embarrassingly parallel is the One Definition Rule. If we can compile N different code units in parallel, but they're all allowed to define things in the same namespace, obviously those definitions might contradict each other and we wouldn't notice. So, the C++ language explicitly forbids this, knowing you'll probably do it anyway, at least by mistake. If (when) you do, that isn't a valid C++ program, but the compiler isn't expected to produce any diagnostic (warning or error). So, you get a binary, but the language doesn't care what that binary does. Maybe it does exactly what you expected. Maybe it does almost exactly what you expected. If not, too bad -- the One Definition Rule means your compiler was fast, and that's what matters.


It usually fails at link time, doesn't it?


No. I have intentionally violated the one definition rule and nothing broke at all. In my case I wrote a second timer, which had hooks my unit test system could use to advance time without having to inject timers.

It will fail at link time if you link everything into the same library. Even here there is an escape: there are ways to mark something as a weak symbol, and then the linker won't complain about more than one definition.

See your OS documentation for how this works on your implementation. (though don't be surprised if the documentation is wrong...)


> No. I have intentionally violated the one definition rule and nothing broke at all.

Did you use LTO? It always catches ODR issues for me. There is also GNU Gold's --detect-odr-violations switch.


No. LTO is something I've been meaning to look at. Though if I can't violate the ODR intentionally I might not be interested.


If you violate ODR intentionally you are just writing code that will break in a future version of GCC / Clang / ..


That is a risk I know I'm taking.

Though I've been tempted to write a paper for C++ to make it defined behavior. I know it works in some form on most implementations even though it isn't legal. Thus there seem to be some other use cases for it that could/should be formalized. If anyone can give me other examples of why you would want to do this and what the rules are on each platform, I'll take a shot at it.


> I know it works in some form on most implementations even though it isn't legal.

It definitely does not; every time I had an ODR issue it caused actual bugs. For instance, dynamic_cast not working because a typeid was defined in two shared objects, etc.

What would be the behaviour you expect if you have

    a.cpp: 
    int constant() { return 123; }

    b.cpp:     
    int constant() { return 456; }

    c.cpp: 
    int constant();
    int main() { return constant(); } 
how could you define this meaningfully, other than as a "hard error"?

For example, here with gcc, if a and b are put into shared libraries, the return value depends on the order in which the libraries are passed to g++ when linking:

    g++ c.cpp a.so b.so
calls the version in a.so, while

    g++ c.cpp b.so a.so
calls the version in b.so


You can do that because the order in which libraries are passed to the linker is something you can control. Of course linkers don't have to do this, and future versions can do something different, but it works and the rules for gcc are currently "the order in which the libraries are passed to g++ when linking", which is defined. Of course gcc has the right to change those rules (I suspect the real rules are a bit more complex)

Gcc also has the concept of weak symbols, which, if used (and the linker supports it), would allow you to make one of the two weaker than the other, and then the whole thing doesn't depend on link order. Visual C++ also seems to have something like this, but I'm sure it is different.

Like I said, I want to write a paper to make it defined - but the paper will be a lot longer than would fit in a response here, and depending on information that I currently don't know.


> I wrote a second timer, which had hooks my unit test system could use to advance time without having to inject timers.

This use case isn't an ODR violation. It's just using the linker to mock an interface.

> It will fail at link time if you link everything into the same library.

That is an ODR violation, although there are variations on this pattern that are not required to be detectable. Template instantiation is an easy way to get a silent ODR violation.


What on earth did you compile? I can't recall ever having compiled a rust project that took more than 10 minutes from scratch.

I do have memories of the days when compiling Chromium or AOSP was a 4 hour battle, though :)


The Rust compiler itself can take as long as 2 hours on a modern laptop if you do a full stage 2 build. I don't think that's what the top poster was talking about, though; it sounded like they already had a Rust toolchain.


(You know this, but for anyone else reading, don't forget that doing this compiles LLVM, which is in and of itself a massive C++ project, as well as compiling the compiler itself multiple times. That's part of why it would be such an outlier.)


Right. Nested C/C++ dependencies are the only case I can think of where drastically-long compile times are common.


Or maybe someone included Boost?

Sometimes it is just that someone drank the Kool-aid and made everything a template. Then split their code across hundreds of tiny files and did the one big massive include at the top of every file that pulls in everything.


> How do you get anything done when the cost of testing an idea is several hours?!

I haven't used rust in production, but usually you can just use `cargo check` to only run the typecheck. Using `cargo build` is also usually much faster than `cargo build --release`.

Having said that, at least in my toy projects it was not uncommon to have to compile using `--release` to run the tests (since otherwise the tests would take forever).


> Having said that, at least in my toy projects it was not uncommon to have to compile using `--release` to run the tests (since otherwise the tests would take forever).

Maybe you're already aware of this, but if the reason your tests are slow is not “your code not being optimized” but “your dependencies not being optimized”, then Cargo profile overrides[1] can save your life. You can have one specific dependency (or all of them) built in release mode even when you're building your own code in debug mode. The first development build is going to be quite slow, but after that, you're going to have the best of both worlds.

[1]: https://doc.rust-lang.org/cargo/reference/profiles.html#over...
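
For reference, the override looks roughly like this in Cargo.toml (a sketch based on the linked docs; `image` below is just a hypothetical stand-in for one slow dependency):

    # Build every dependency with optimizations even in the dev profile
    [profile.dev.package."*"]
    opt-level = 3

    # Or target just one particularly slow dependency (hypothetical name)
    [profile.dev.package.image]
    opt-level = 3

Your own crate stays at the dev profile's default opt-level, so incremental rebuilds of your code remain fast.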


I'm really curious what you were compiling, and if you remember the machine specs (and whether the machine was under other load at the time). Two hours seems beyond ridiculous.

Also, I'm assuming it was a full (as opposed to an incremental) release build? But even then, I've never had to wait for that long.


Two hours is not beyond ridiculous. A lot of projects in the past compiled for way longer than that, days even.

In the 90s and early 2000s, if you worked on a large desktop application (the size of Office or Photoshop), you could expect to spend a business day waiting for compilation results.


Sorry, I should've clarified that I specifically meant recent versions of Rust. I wasn't speaking in the general sense. I've experienced slow compilation times in C++ projects, although not of the same magnitude you describe (i.e. having to wait days)!


Around the turn of the millennium I remember recompiling, for some reason I don't remember, the whole of KDE. That took a bit...


Go is geared towards productivity. If you have a problem you can solve adequately in Go, why not just do it in Go?

Because they gained popularity roughly around the same time, many people think they are competing languages, but they're really not.

Ruby is my main programming language; if I need something to be fast and/or lightweight I switch to Go, sacrificing some comfort. And if I need the absolute lowest possible overhead, I switch to Rust, sacrificing a whole lot more comfort. Rust is more expressive than Go, but if your Go project is so large that it needs a stronger type system, in my opinion you should switch to a language like C# or Scala.


Go is geared towards the productivity of specific people in a specific environment: fresh grads at Google.


The first time I used it, having not written a line of Go before, I rewrote our data ingestion gateway from Ruby to Go. It took me one week, and it had full feature parity. I don't think I had the GPA at university to land a fresh-grad job at Google.

Sure, the tooling around it (besides the compiler itself) was the dumbest 21st-century tooling I've seen (this was ~6 years ago), but it was quick to learn, quick to write, quick to compile and quick to run, so who am I to complain.


How was the tooling dumb?


It had this weird system where all your projects were supposed to be located in one directory structure that mapped to their git namespaces; it made no sense at all and was a pain to work around.


they got rid of that


Incremental compilation. Once you get the first build done, subsequent builds are much faster.

Also a project is likely to be split up over multiple crates, so a lot of the changes you make will require building and testing just that one crate.


Isn't incremental compile disabled currently? I thought they found a significant bug and so turned it off until they could validate a fix.


The grandparent was building a binary utility, which means they were (hopefully!) building in release mode, which doesn't use incremental compilation by default in any version of the compiler.

For debug builds, where incremental is usually on by default, it was disabled for 1.52.1 due to bugs, and then kept off in 1.53 out of an abundance of caution. It should be back on in 1.54.


Okay, that's what I thought (and the fact that it hasn't officially been turned back on yet was what I was trying to hint at with the abundance of caution).


I worked on a Rust project at work for several months. Compile times were not a significant issue for developers, since builds were generally cached. For example, repeatedly running a test only compiled a small amount of code.

It was a bit slow in CI/CD build environments with no cache, but so are the k8s Go projects I work on (go mod pulling in the world).

The only thing approaching 2 hours I've ever seen is building the compiler from scratch.


Were you compiling Servo on a Raspberry Pi or something? 2 hours is ridiculous even for Rust.

> How do you get anything done when the cost of testing an idea is several hours?

Incremental compilation. Depending on the project you can get the edit/compile cycle down to somewhere between 1 second and 5 minutes. It's nowhere near as fast as Go, but it's not like anyone is actually making edits and then waiting 2 hours for them to compile.


Compiling linkerd-proxy also took me north of an hour on an Intel 6850K. Most of the time was spent compiling dependencies, though. I also compiled Envoy proxy, which likewise took more than an hour.

Subsequent Rust builds for linkerd-proxy were quite fast. For Envoy, most of the time was spent in Bazel itself, probably because I just did a plain Bazel build instead of specifying a more specific target.


Out of curiosity I tried linkerd2-proxy here on an i5-1145G7 (I guess similar total CPU? Fewer cores, laptop model, newer). Took 5 minutes but used ~6GB resident. Maybe yours was swapping a lot?


My previous comment didn’t get saved apparently. I can hardly imagine that it was swapping; the machine has 32 GB of RAM. Perhaps compiling on macOS is slower. Most of the time was spent compiling dependencies, though. I can imagine that would be faster if you already have those builds lying around.


I've only had 2 hour build times when I was compiling a rust toolchain on an emulated ARM system running on x86.


I get this question about large scala codebases too - clean build and test cycles for things on underpowered containers in cloud build pipelines are in the tens of minutes sometimes.

One: my local dev machine has 16 gigs of memory and 16 cores. What takes the tiny Docker container with 1 gig and 2 vCPUs 30 minutes takes my computer about 1 minute.

Two: Incremental compilation and testOnly make build/test cycles maybe a second / twenty seconds max, and most of that is test runtime on complex property based tests.

You just get by without a clean compile most of the time after you build the project the first time. And really, a lot of builds spend an inordinate amount of time just pulling in external dependencies (which are also cached on my local machine, but not on the containerized builds a lot of the time).


The short answer is that it doesn’t normally take that long. You also don’t have the problem of cgo being slow, or ever having to wait for the data race checker to run.

The initial compile has to download and compile all dependencies. After that, compiling is incremental. Still slower than Go, though.


> How do you get anything done when the cost of testing an idea is several hours?!

My Rust compile times are usually between 10 and 30 seconds. Some tricks that help:

- Get a good development machine. I prefer a Dell Precision laptop from the last couple of years, with plenty of cores. A 5-year old laptop with 2 cores will be a lot slower.

- Use Visual Studio Code and the rust-analyzer plugin. This will give you feedback in the editor while you're working.

- Rely on the type system to identify most problems before ever generating code.

- Minimize the use of slow-to-compile libraries that rely on complex generic types. Diesel, for example, is a great library but it's slow to compile.

- Install cargo-watch, and use it to automatically re-run the tests.

Also, remember that incrementally re-compiling a single crate in debug mode will be much faster than building an optimized application and all its dependencies from scratch.

TL;DR: Rust compilation times are less than ideal, but with a fast laptop and rust-analyzer, it's possible to spend very little time actually waiting for the compiler.


Just get a new machine for a couple zillion bucks, why didn’t we think of that earlier? :^)


Yes. If you are a professional developer who gets paid to work on large Rust projects all day long, then a US$2500 high-end laptop will pay for itself many times over. (I'm assuming US or European salaries here.)

I've gotten by with much less for personal projects. But Rust will make very efficient use of (say) an 8-core i9 if you have one.


As a developer who uses Rust a bunch (work & personal), this is what I ended up having to do. I'm joking, but this one simple trick can cut compile times in half.


Yeah, can really see that "empowering everyone" motto coming into play.


If I had to guess, it's because all of the dependencies had to be compiled too.

In a normal Rust application you break it up into crates. Crates are the unit of compilation, and are only recompiled when something in the crate changes. In a "normal" developer flow you only touch one or two crates at a time, so normal developer compile time would be a couple of minutes at most. Even for a new build on a machine, most developers would never see the two-hour compile time because they would have gotten precompiled dependencies.


> In a "normal" developer flow you only touch 1 or two crates at a time so normal developer compile time would be a couple of minutes at most.

Obviously this depends on your computer, but incremental Rust builds should typically be on the order of seconds, not minutes.

> Even for a new build on a machine, most developers would never see the two-hour compile time because they would have gotten precompiled dependencies.

Cargo doesn't do precompiled dependencies (even between projects on the same machine).


Compiling for production and compiling for development are two very different processes, with the latter benefiting from fewer optimizations and incremental compilation. It wouldn't take two hours to test an idea.


jesus christ

rust's compilation times are as terrible as c++'s?


I don't know about Him, but at least when working with C++ I feel the compile time is entirely a matter of deliberate tradeoffs. With better planning, more effort put into forward headers, better implementation boundaries with pimpl-like constructs or otherwise relying on linking, and avoiding metaprogramming entirely, I've not found it to be an issue.

Incremental C++ builds can be lightning fast, or ridiculously slow. I've worked on large projects where change + incremental compile + test was less than 5 seconds. And I've worked on projects where no effort was made and the same could take 10 minutes.


Keeping your C++ compilation times reasonable requires eternal vigilance (or a distributed system with aggressive caching).


In my experience Rust is faster for full clean builds, even if you use slow dependencies like Serde.

However, C++'s compilation units are smaller than Rust's, which helps it have faster incremental compilation times for small changes.

So, same order of magnitude basically.


It's the same ballpark


In my experience (C++ dev at work) C++ generally feels slower than Rust.

Edit: I quickly realised this might only be because of Rust's (in my opinion) superior type system and tooling, so maybe it just requires less waiting for compilation than working in C++ does.

Edit 2: I've never actually measured Rust compilation times, but even for the large-ish graphical codebases (wezterm, alacritty, bevy games) I've compiled from scratch, it felt noticeably faster.


I think we must do better

AAA games need fewer resources than those tree walkers


I'm pretty excited for the Linux kernel to take three weeks to build.


Although Rust compile times are rather bad, IME such astronomical compile times are rather a result of terrible code quality: a lot of code doing essentially nothing, just gluing glue, to the point where it's normal to get a 50-60 frame deep stack trace with most functions named in a way that suggests they all do basically the same thing. Then you look at their bodies and... yep... doing the same, aka nothing.

With Rust, procedural macros add a lot of overhead when compiling, so I try to avoid them.

At work, we have a C++ project which, without the distributed builds and distributed ccache we run on powerful cloud computers, takes 3h+. Code quality is adequate. Debugging anything is a nightmare in such a codebase.


While I agree that the compile times are bizarre, I don't think we can jump to conclusions such as "such astronomical compile times are rather a result of terrible code quality" without knowing more about what's being compiled and under what conditions it was being compiled!





