gendoikari's comments | Hacker News

I think the language that wanted "to rule them all" was C++, and we all know how that ended. C++ can be procedural, OOP, functional; it has metaprogramming, generics, everything. Everything: C++ is everything. Every pattern, every design philosophy can be implemented in C++. How long does it take to compile? How many developers actually know every C++ feature and pattern?


I don't think the whole industry chooses its tools randomly. In the long term, if a tool emerges from the dust, it's for actual reasons. If the simplicity of Go wins out over the complexity of Rust, for example, we should think about why. In my opinion the problem isn't Go's limits; it's our perspective as the developers using the tools. Do we really need complexity? Do we really need OOP everywhere? If the answer turns out to be "no", I'll need to change my attitude toward Go.


> I don't think the whole industry chooses its tools randomly.

It may not choose entirely randomly, but that doesn't mean it chooses reasonably. You mentioned OOP everywhere -- a philosophy that has driven several dominant, popular languages and has been seen as a mark of professionalism. If the industry were any guide, Go's break from the norm here would already call that decision into question.

(I happen to think this is one of several areas in which the industry is wrong, but then again I don't see the industry at large as particularly rational.)

> Do we really need complexity?

No software developer wants complexity; every software developer is trying to manage and limit it to the extent of their resources.

The question is whether language simplicity leads to software simplicity.

It seems apparent that Go's designers believe it does: they've offered a simple-ish language on a feature diet, one that avoids much in the way of type expressivity or facilities for abstraction that would rise to the level of augmenting the language itself.

I think this can work for some problem domains, particularly ones closely fitted to the built-in types and libraries. But once your problem domain isn't close to native language facilities any more, you're forced to write more and more code to get around the limits of the language's expressivity. That ends up being more machinery and more surface area whose interactions you have to keep track of... which, in my experience, is where complexity creeps in, more than from having to understand language features.


If rationality were a requirement for progress, progress wouldn't exist. The beautiful fact about evolutionary processes is that rationality is not necessary. But I digress...

Needs change with time. In the current context, if Go is an improvement in some technical areas, I think it will see some degree of success. Otherwise I think it will decline after the initial hype.

By the way, I agree with you about writing more code to overcome language limits. The hope here is that using idiomatic Go you will end up, no matter what, with a reasonably understandable code base, even for libraries and tools. It's a goal; I don't know whether it's reachable.


And this is a clever response if you understand their point: messing with the language is rarely the correct answer. Do you want to remove some code? How about a function? How about hiding boring details behind more elegant objects? How about using a macro in the editor? And so on... The answer is not (from their perspective) "put some shit in the language".

The problem with Go is that it is very idiomatic. If you want to work with Go you need to learn its way, and somehow you need to accept the trade-offs behind its design.

If you don't see any advantage in that, it's probably not the right tool for you, and that's OK. No problem at all.


I think that if you start from the idea that multiple conventions are unavoidable, then a "+" operator doesn't solve the problem: you just have one more convention. Type 1 defines .add(x), type 2 defines .plus(x), type 3 defines operator+, and so on. If you instead assume you can convince people to adopt a single convention, you can use .add(x) and avoid the problem in the first place. Go, for example, tries to always have one obvious way to write things; the Go standard library is the idiomatic Go bible. (A small sketch of that convention is below.)
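To illustrate, a minimal sketch assuming a hypothetical Vec type; the method-name convention (Add) is the one the standard library itself already uses, e.g. time.Time.Add:

    package main

    import (
        "fmt"
        "time"
    )

    // Vec is a hypothetical user type following the same method-name
    // convention the standard library uses: no operator overloading,
    // just one agreed-upon name.
    type Vec struct{ X, Y float64 }

    func (v Vec) Add(w Vec) Vec { return Vec{v.X + w.X, v.Y + w.Y} }

    func main() {
        deadline := time.Now().Add(30 * time.Second) // the stdlib convention
        sum := Vec{1, 2}.Add(Vec{3, 4})              // a user type, same convention
        fmt.Println(deadline, sum)
    }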


About generics: this is highly debatable. It's a design choice, not a decision made by accident. You'll surely end up with boilerplate code in some cases (see the sketch below), but the whole language stays a lot more readable and simple. Simplicity is the most wanted feature of Go, from the designers' perspective, I think. In the long term it's preferable to have explicit, simple code instead of complex magic. Is this the correct view? We'll see. Honestly, I'm starting to appreciate it. They may have a good point.
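A hedged sketch of the boilerplate in question (function names are illustrative): without type parameters, the same loop gets repeated once per element type:

    package main

    import "fmt"

    // Without generics, the same loop is written once per type.
    func SumInts(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func SumFloats(xs []float64) float64 {
        total := 0.0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(SumInts([]int{1, 2, 3}), SumFloats([]float64{1.5, 2.5}))
    }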


How is code with generics more complex than code without them?


For example, the C++ template rules are themselves a Turing-complete language. If by generics you mean some really basic features, I think you can do pretty well with interfaces; they're already in the language (see the sketch below). If you mean the full package, I think you'd end up with something pretty complex all the time.
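For instance, a minimal sketch using the standard library's sort.Interface (the ByLen type is illustrative): one interface covers "sort anything" without type parameters:

    package main

    import (
        "fmt"
        "sort"
    )

    // ByLen adapts a []string to sort.Interface; the sort package can
    // then sort it with no generics involved.
    type ByLen []string

    func (a ByLen) Len() int           { return len(a) }
    func (a ByLen) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
    func (a ByLen) Less(i, j int) bool { return len(a[i]) < len(a[j]) }

    func main() {
        words := ByLen{"ccc", "a", "bb"}
        sort.Sort(words)
        fmt.Println(words) // [a bb ccc]
    }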


Most generics implementations are not Turing-complete, and looking at C++, Turing-completeness doesn't sound like a particularly tempting property anyway.

And while I'm not particularly familiar with Go, I don't see how you can get a feature set equivalent to a simple implementation like Java's out of Go interfaces ("you can just cast" is not a good answer).


I'm probably missing the use case. What problem are you thinking of that you can't solve with a Go interface?


Say, a generic container.
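To make the trade-off concrete, a hedged sketch of what such a container looks like with interfaces alone: the element type is erased to interface{}, and the consumer has to type-assert on the way out, which is exactly the "you can just cast" answer dismissed above:

    package main

    import "fmt"

    // A "generic" stack via interface{}: it works for any element
    // type, but the compiler no longer knows what's inside.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        s := &Stack{}
        s.Push(42)
        n := s.Pop().(int) // assertion required; a wrong type panics at runtime
        fmt.Println(n)
    }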


They're talking about the compiler and the language spec, not Go code.


That's not how I interpret parent's comment.


> the whole language stays a lot more readable and simple. Simplicity is the most wanted feature of Go

This is what Java designers thought as well. See where this has led them.


Without generics, you can never write a type-safe data structure. I can't imagine the hubris that thinks the language already contains all of the data structures it needs.


This isn't true. You can write a type-safe data structure; it's simply for one type, so you can't reuse it. That inability to reuse is the part that generics fix, but saying you can't write type-safe structures is inaccurate. (See the sketch below.)
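A minimal sketch of that single-type version (the IntStack name is illustrative): fully type-checked, but welded to int:

    // IntStack is completely type safe: Push("x") is a compile error.
    // But to hold another element type, you copy-paste the whole thing.
    type IntStack struct {
        items []int
    }

    func (s *IntStack) Push(v int) { s.items = append(s.items, v) }

    func (s *IntStack) Pop() (int, bool) {
        if len(s.items) == 0 {
            return 0, false
        }
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v, true
    }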


[deleted]


> to keep the language spec and compiler simple

How is this not a design choice?


Yeah, you are right; I guess it's too early in the morning for me to be responding to others. =D


What do you think about goroutines and channels?


Using Go for goroutines and channels is a bit like using Perl for regular expressions. The features have been added in a way that makes them easy to use and serve as a nice idiomatic platform, but fundamentally it's functionality other languages can provide via library support.


> but fundamentally it's functionality other languages can provide via library support.

Implementing goroutines and channels requires language and runtime support for green threads that are n:m multiplexed on top of native threads. It cannot be implemented as a library in most languages, at least not efficiently. Any language with thread support can set up threads and put a concurrent queue between them, but that's hardly the same thing.

Languages such as Go, Erlang and Haskell do this. Interestingly, early versions of Rust had green threads (iirc) but later migrated to using native threads only.


Any language that has continuations, or at least thread-safe coroutines, can implement goroutines and channels. This includes Scheme and Lua. Also, any language where the stack can be directly manipulated can implement those thread-safe coroutines, so that opens up C/C++ and Perl and possibly some others.


Yes, all the languages you mention have the necessary language and runtime support required, probably a handful of others too (but by no means every language out there).

C and C++ are a bit different because you need to resort to assembly and know details about the target arch to do stack switching, but that's acceptable.


It's been done in C; check out libmill[0], which even matches the syntax pretty well.

[0]: http://libmill.org/


If "it" includes parallelism then no, libmill has not done it:

"Libmill is intended for writing single-threaded applications." http://libmill.org/documentation.html#multiprocessing


Yeah, you can do this in C if you do stack switching with a little bit of assembly. It's kind of like building a custom runtime environment for C. Not many other languages can do this.


No? Qt, Gtk, ... all do it.


Huh? Care to elaborate on this? As far as I know, GTK (and Qt IIRC) use a single threaded event loop. That's not at all the same thing (albeit can be used for similar things).


Qt's and Gtk's single-threaded event loops are akin to what Go calls its scheduler. That scheduler in Go is also (partially) single-threaded, but event handlers run in other threads.

Having event handlers run in separate threads is very much supported in both Qt and Gtk (they can't be UI event handlers in quite a few cases, but network events and file reading in separate threads scheduled by the central event loop like in Go is not a problem).

I will say that it's much better organised, and with far fewer caveats, in Go.

And a point of personal frustration: both Qt and Gtk support promises through the event loop; Go does not. I find that a much more natural way to work with threads.


The Go scheduler is not single-threaded (whatever that means).


Go's scheduler is a single-threaded event loop, like every other scheduler on the planet. It runs, sequentially, on different threads, which makes the situation confusing, but it's still single-threaded.

It's also cooperative. An infinite loop will effectively kill it (a single infinite loop would kill it before Go 1.2, I believe; now you need enough of them). More importantly, there's a number of simultaneous syscalls that will kill a Go program.

What I like about Go is that it moves the OS into the application. The thing is, Go's OS is not a very good one. It doesn't have the basic isolation that OSes provide. I hope it will improve.


> Go's scheduler is a single-threaded event loop

No, it isn't.

> like every other scheduler on the planet

This is not true either. In fact it doesn't make sense. Schedulers are not single threaded event loops. The scheduler (any scheduler) is entered in various scenarios. Sometimes voluntarily, sometimes not. Sometimes the scheduler code can run concurrently, sometimes not. Sometimes the scheduler code can run in parallel, sometimes not.

The Go scheduler is both concurrent and parallel.

> It runs, sequentially, on different threads, which makes the situation confusing

I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.

> It's also cooperative.

Actually, it's not purely cooperative: it does voluntary preemption, very similar to voluntary preemption in the Linux kernel. The check happens in every function prolog.

> More importantly, there's a number of simultaneous syscalls that will kill a Go program.

There's a self-imposed, user-configurable limit that defaults to 10000 threads for running system calls. The limit has nothing to do with the Go scheduler; it can be set arbitrarily high with no penalty.

> It doesn't have the basic isolation that OSes provide.

The most basic isolation provided by operating systems is virtual memory. Go is a shared-memory execution environment, so this doesn't apply. What other "basic isolation" is provided by operating systems that's missing from Go?


> I don't know what this statement means. The Go scheduler certainly runs on different threads. So what.

It means that some of the data structures the scheduler examines on every run are shared data, with locking. That makes it effectively single-threaded, even if it technically runs on different CPUs (at different times). Put another way, it means that it'll never run faster than a single-threaded scheduler would.

> Actually, it's not purely cooperative: it does voluntary preemption, very similar to voluntary preemption in the Linux kernel.

You mean preemption inside the Linux kernel, in some kernel-space threads? Because that sounds very different from preemption of applications.

> The check happens in every function prolog.

So it's cooperative. The test normally used is simple: does "for {}" crash ("block", if you prefer) some part of the system? On the Linux scheduler the answer is no. In Erlang the answer is no. On the Go scheduler, the answer is yes. (Sketch below.)
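A minimal sketch of that test on the Go of this era, assuming GOMAXPROCS=1 and the function-prolog preemption check mentioned above; the loop body contains no calls, so no check point is ever reached:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        runtime.GOMAXPROCS(1) // the old default

        go func() {
            for {
            } // no function calls, so no preemption check ever runs
        }()

        time.Sleep(time.Millisecond) // yield; the spinning goroutine takes over
        fmt.Println("never printed: main is never scheduled again")
    }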

On the Linux scheduler, with proper ulimits, it's bloody hard to crash the system; fork bombs, memory bombs, etc. won't do it. I hope we'll get a language where you can do that too (the JVM comes quite close to this ideal, and some JVMs actually achieve it).


> It means that some of the data structures the scheduler examines on every run are shared data, with locking.

This is true for every scheduler, not only for the Go scheduler.

> That makes it effectively single-threaded

It would limit parallelism to one if there were a single lock. This used to be the case, but now the locking is more finely grained. And this only matters if there's lock contention anyway, which is not the case for current Go programs.

> single-threaded scheduler

Again, there's no such thing as a single-threaded scheduler, even when talking about systems with a big kernel lock or a global scheduler lock. The scheduler is not "single-threaded" or any other term like that, because the scheduler is not a thread; it's not an independent thing; it only runs in the context of many other things.

> Put another way, it means that it'll never run faster than a single-threaded scheduler would.

As mentioned already, this is not strictly true for Go, but it matters more for thread schedulers in kernels, less so for the Go scheduler, mostly because the number of threads executing Go code is very restricted: a maximum of 32 threads at the moment. It's very likely this will change. For example, the SPARC64 port that I am doing supports 512-way machines, so I'd need to increase the GOMAXPROCS limit. Then maybe we'd have more lock contention (I doubt it).

It's true that the scheduler will probably not scale this well, and will need improvement, but it's unlikely it will be because of lock contention.

> You mean preemption inside the Linux kernel, in some kernel-space threads?

Yes, voluntary preemption inside the Linux kernel, not preemption of user-space threads. The Linux kernel can run in a mode (common and useful on servers) where it might yield only at well-defined points. The name is a misnomer; this is not really preemption, but it's not cooperative scheduling either. It's something in the middle, and it's a very useful mode of operation, nothing wrong with it.

> So it's cooperative.

It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.

> The test normally used is simple: does "for {}" crash ("block", if you prefer) some part of the system?

Not with GOMAXPROCS > 1, which is now the default on multi-way machines (all machines).

> On the Go scheduler, the answer is yes.

Only sometimes. This is fixable while still preserving voluntary preemption, since the voluntary preemption check is so cheap that you can do it on backward branches if you really need to. This wasn't done because it wasn't a big problem in practice, even with the old GOMAXPROCS=1 default, but there's room for improvement.

> On the Linux scheduler, with proper ulimits, it's bloody hard to crash the system; fork bombs, memory bombs, etc. won't do it. I hope we'll get a language where you can do that too.

I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, fork bombs, etc., means that you can configure the system so arbitrary code can't affect the system in those particular ways.

But for a language runtime you don't have arbitrary code usually, you control all the code. So I don't understand how any of these would apply.

Coming back to the scheduler. There's always room for improvement. Until relatively recently, the Go scheduler barely scaled past two threads! (although not because of lock contention). Now it scales really well to (at least) 32 threads. There are still improvements to be made, and I am sure they will be made. I was just addressing the "single-thread" issue.


I see we mostly agree, but one thing here is a glaring error:

> It has the good parts of both cooperative and preemptive scheduling, but yes, it's certainly cooperative.

For me, the best part of cooperative scheduling is that you can work entirely without locking shared data structures, because you get "transactions" for free. This means it's rather difficult to get data races, deadlocks, etc. Go's scheduler certainly does not give you that; it trades it for spreading work over different CPUs.

So it has problems:

1) necessity of locking, using IPC mechanisms, ... (like preemptive schedulers; and let's face facts here: channels aren't enough in real-world apps)

2) everything gets blocked by large calculations (like cooperative schedulers)

3) more generally, easy to crash via a misbehaving thread, due to unrestricted access to shared resources (not just CPU) (like cooperative schedulers)

And advantages:

1) Actually uses multiple CPUs/cores/... (like preemptive schedulers)

2) integrated event loop that scales (like cooperative schedulers)

If you want to see a programming language with a "scheduler" that doesn't have the bad parts of cooperative schedulers, check out Erlang. If you attempt to crash Erlang with infinite loops, bad memory allocation, ... (on a properly configured system), that just won't work: the offending threads/"goroutines" crash, leaving the rest of your program running fine. The offending threads will restart if you configure them to do so (which is really easy).

The same can be achieved, with much more work, on the JVM, or, also with much more work, with Python's "multiprocessing" library, part of the standard library.

> > On the Linux scheduler, with proper ulimits, it's bloody hard to crash the system; fork bombs, memory bombs, etc. won't do it. I hope we'll get a language where you can do that too.

> I don't understand the analogy. It is not clear what "crash" means here, and it is not clear how it would apply to a runtime environment. All that stuff, fork bombs, etc., means that you can configure the system so arbitrary code can't affect the system in those particular ways.

Crash means that the system/"program" doesn't respond (in a useful manner) anymore.


+1 on this. Clojure's core.async[0] is the perfect example of an implementation of CSP as a library.

Even JS can be used to implement such concepts via the use of generators[1].

[0] https://github.com/clojure/core.async

[1] https://github.com/ubolonton/js-csp


I used to feel this way until I realized the limitations of core.async. In Go I don't have to worry about whether the particular functions I'm calling, especially IO-related, are blocking or not, as Go will create new lightweight goroutines as necessary to deal with all that. With core.async, if I use blocking IO inside of a coroutine I risk causing thread starvation. See http://martintrojer.github.io/clojure/2013/07/07/coreasync-a...

Maybe things have changed since 2013, but I feel like this is a fundamental limitation of running on the JVM vs what Go can provide in its runtime.

Edit: Also, it appears to be much easier to simply "run out" of Clojure coroutines than Go goroutines, but perhaps that's also changed. Anyway, my point is that with core.async operating as a macro you still can't overcome limitations of the underlying runtime, whereas Go's runtime was purpose-built to support goroutines.


Small remark: Go's I/O layer is safe to use only with network I/O. Only network I/O plays nice with goroutines.

File I/O, or anything else the Go runtime treats as a syscall, might turn your program into a 10k-OS-threads monster. The scheduler will keep creating new OS threads to replace the ones blocked in syscalls until the thread limit is reached and the whole program crashes. The only way to prevent it is to confine your syscall layer to a fixed-size goroutine pool (see the sketch at the end of this comment).

I had an interesting case recently: my app serves data from tons of files on a NAS, accessed via an NFS mount, and one day the NAS hung completely; every I/O call to it lasted forever. Even 'ls /mount-point-of-nas' just did nothing until Ctrl-C. In my case I applied a power-off/power-on cycle to the NAS, and everything was fine within minutes, as soon as the NAS booted. And afterwards I wondered: what if my server had been written in Go instead of Erlang...

And, BTW, you can never be sure that the libraries underlying your code are safe in this respect.
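A minimal sketch of that fixed-size pool, using a semaphore channel (names and the limit of 64 are illustrative); every blocking file call goes through it, so a hung filesystem ties up a bounded number of OS threads:

    package main

    import (
        "fmt"
        "io/ioutil"
    )

    // ioSlots caps the number of blocking file-I/O calls in flight, so a
    // hung filesystem ties up at most 64 OS threads instead of thousands.
    var ioSlots = make(chan struct{}, 64)

    func readFileBounded(path string) ([]byte, error) {
        ioSlots <- struct{}{}        // acquire a slot; blocks when the pool is full
        defer func() { <-ioSlots }() // release the slot when the syscall returns
        return ioutil.ReadFile(path)
    }

    func main() {
        data, err := readFileBounded("/etc/hostname")
        fmt.Println(len(data), err)
    }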


Correct me if I'm wrong, but describing core.async as "a library" isn't perfect in the context of a golang discussion. Doesn't the `go` macro rewrite the abstract syntax tree / JVM bytecode to make e.g. the `<!` operator cooperate with the channel?

https://github.com/clojure/core.async/blob/master/src/main/c...

That's not something that could be done with golang as far as I know.


I'm not sure I understand your point. Could you clarify?


`hackcasual` was saying that certain language features can be added as a library rather than needing to be integrated into the core language.

> "fundamentally it's functionality other languages can provide via library support"

You were saying that CSP can be added as a library, citing Clojure's core.async.

All I was saying was that the way in which core.async was implemented doesn't feel like a great example of a 'library' in the sense that most people would understand in the context of a discussion about Golang.

Golang is a static, compiled-to-machine-code language without macros (in the LISP or C sense) or homoiconicity. The reason core.async can be implemented as a library in Clojure is that it has these things.

If you're talking about adding CSP to a language just by adding a library and without having to get into the internals of the language, core.async isn't a good example.

Again, happy to be corrected.


Well, core.async is a Clojure library, so it uses features available in Clojure. I don't see how that would affect the fact that it is a library.

I've also linked to js-csp, a JS library obviously not implemented using macros.

I can also find other examples of implementation as libraries, but I have no experience with them:

- Scala: https://github.com/rssh/scala-gopher

- F#: https://github.com/Hopac/Hopac

- C++: http://www.cs.kent.ac.uk/projects/ofa/c++csp/


Wasn't saying it can't be done! Just nit-picking at the particular example with respect to golang.


Ok, I've got it now :)


It's also a perfect example of the limitations of that approach: core.async had to make serious compromises in its interface because it was 'just a library': expressions with <!'s and other calls can't just be pulled into functions or for-comprehensions like normal code. That's not to say it's poorly done, or not useful - it is well done and useful, and those compromises are in line with Clojure's goal of integrating well with host VMs. It's just an example of how builtins can be "simpler" sometimes. https://github.com/clojure/core.async/wiki/Go-Block-Best-Pra...


> +1 on this. Clojure's core.async[0] is the perfect example of an implementation of CSP as a library.

With the added convenience that shared mutability is pretty much nonexistent.


I tried making a CSP library for C. It was not very pleasant. At best you end up with something significantly less safe than POSIX threads, but now with message passing. While C is an extreme example, it's certainly not true that all languages can add CSP/actors/whatever in an appetizing form through a library.

Anyway, there's a reason the phrase "tacked-on" has such negative connotations.


What's your take on libmill?


They're fine concepts, but... On the little project I was tasked with using Go for at Google, I got slapped down by the readability reviewers for using them. I think they're an interesting construct, but I'm not sure that community really knows how to use them well.

Erlang at least is consistent on this -- it has that hammer well tuned and isn't afraid to pound nails with it.


Mmmm, I'm puzzled. I don't think we are consulting the same community... Goroutines are everywhere in all the main Go projects. Goroutines and channels are one of the main reasons Go exists.


"Readability reviews" at Google are somewhat notorious for imposing fairly arbitrary style choices extremely rigidly, for instance 80 characters per line (woe betide you if even one line in a 5000 line patch is 81 characters...). It doesn't sound so surprising to me that the people behind such a process might have decided that one of Go's primary selling points is 'confusing', given that the Go authors appear to believe their colleagues can't handle a brilliant language!


It's quite possible. After my experience with Go code reviews internally at Google, though, I am not eager to go back. I'd work on a project if I was paid to do it, but I wouldn't start one or advocate for it.


He said at Google, not the community. I wouldn't doubt there is a difference.


I'm pretty sure at Google they know why they created Go, why they're using Go, and so on... The assumption that Google reviewers had something against goroutines puzzles me a bit. Maybe the problem was not about goroutines, but about how they were used... I don't know, I'm guessing...


Yeah, not having OTP to create proper structure, and instead having goroutines and channels created and destroyed all over the place, really hurts readability.

But OTP isn't really possible either because Go lacks links and monitors.

Plus there's no asynchrony and no distribution (you wouldn't want to go over the network with synchronous channels anyway)...


I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications. If your target is writing semi-low-level infrastructure software I think Go is a great choice. It's definitely got a lot of fans in the world of people writing software for DevOpsy type applications.

From a personal standpoint, it's missing a lot of the features I like, namely ADTs, list comprehensions, folds, maps, etc. But that's just my personal style, not something wrong with Go. Go programs don't necessarily always look pretty but you can usually understand them after a minimal amount of study because the language is so simple.


> I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications.

Yes, CSP[0] is a very interesting concept. But it's not something unique to the go language.

[0] https://en.wikipedia.org/wiki/Communicating_sequential_proce...


"I think the way Go addresses concurrency is simple and straight-forward. It's trivial to write concurrent applications."

Except when it is not. The message-passing style of concurrency is just a dual of classical blocking concurrency with critical sections, mutexes, monitors and condition variables. An actor is a dual of a critical section. An actor's mailbox is a dual of a mutex. Sending/receiving messages is a dual of wait/notify. In any complex CSP program you can have all the same problems: race conditions, starvation, deadlocks (livelocks), etc. (A sketch of a channel deadlock is below.)
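A minimal sketch of that duality with unbuffered Go channels: two goroutines each blocked sending to the other, the CSP twin of two threads each holding the lock the other wants:

    package main

    func main() {
        a, b := make(chan int), make(chan int)

        go func() {
            a <- 1 // blocks: nobody is receiving on a yet
            <-b
        }()

        b <- 2 // blocks: the goroutine above never reaches <-b
        <-a

        // The runtime reports:
        // "fatal error: all goroutines are asleep - deadlock!"
    }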


Goroutines are for all practical purposes threads. Threaded code is generally thought to be difficult to write correctly, in ways that can't be solved just by making the threads cheaper to spin up.

Queues ("channels") are a good way to limit complexity of threaded code by treating each process as an agent. Besides some syntactical sugar Go doesn't really support this better than most other languages with threading, like Java.

Go has a weird thing going on where channels are sometimes used as a kind of replacement for iterators, which is error-prone, since the "obvious" way to do it doesn't allow the consumer to stop the generator without a side channel. This can lead to buggy code that leaks goroutines, since goroutines cannot be garbage collected. (See the sketch below.)
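A hedged sketch of that pattern (names illustrative): the "obvious" generator leaks if the consumer stops early, and the fix needs exactly the side channel described above:

    // The "obvious" generator: if the consumer stops receiving, the
    // goroutine blocks on the send forever and is never collected.
    func numbers() <-chan int {
        ch := make(chan int)
        go func() {
            for i := 0; ; i++ {
                ch <- i
            }
        }()
        return ch
    }

    // The workaround: a side channel the consumer closes to stop us.
    func numbersUntil(done <-chan struct{}) <-chan int {
        ch := make(chan int)
        go func() {
            defer close(ch)
            for i := 0; ; i++ {
                select {
                case ch <- i:
                case <-done:
                    return // the goroutine can now exit
                }
            }
        }()
        return ch
    }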

One of the best ways to reduce the complexity of threaded code is immutability - preventing race conditions by making sure a structure never changes while being read. Curiously, Go has no way to mark an object as immutable, and does nothing to detect or prevent objects being unsafely accessed from different threads.


> Queues ("channels") are a good way to limit complexity of threaded code by treating each process as an agent. Besides some syntactical sugar Go doesn't really support this better than most other languages with threading, like Java.

The magic of channels comes with select{}. Considering them to be only thread-safe queues is really missing out (see the sketch at the end of this comment).

Supporting select{} in other languages is possible, but difficult and rare.
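A minimal sketch of what select{} buys you (channel names illustrative): blocking on several sources at once, which a plain thread-safe queue can't express:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        jobs := make(chan string)
        quit := make(chan struct{})

        go func() { jobs <- "hello" }()

        // Block until whichever case is ready first: a job, a timeout,
        // or a shutdown signal.
        select {
        case j := <-jobs:
            fmt.Println("got job:", j)
        case <-time.After(time.Second):
            fmt.Println("timed out waiting for work")
        case <-quit:
            fmt.Println("shutting down")
        }
    }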


Also, by default channels are zero-length and block. What is a zero-length queue in other languages? It doesn't even make sense!

Zero-length channels are a key part of coordinating concurrent threads in Go. (Sketch below.)
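A minimal sketch of that rendezvous behaviour: on an unbuffered (zero-length) channel, send and receive must meet, so the channel doubles as a synchronisation point:

    package main

    import "fmt"

    func main() {
        done := make(chan struct{}) // unbuffered: a send blocks until someone receives

        go func() {
            fmt.Println("working...")
            done <- struct{}{} // rendezvous: blocks until main is at <-done
        }()

        <-done // both sides proceed together; no sleeping or polling needed
        fmt.Println("worker finished")
    }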


We'll see in the next few years... We'll see...


I don't think a language's popularity has anything to do with it being a good, well-thought-out language. Just look at Javascript.


I didn't mean that. I meant that over the next few years we'll see whether some design decisions were bad or wise.


I see. I apologize for reading your post incorrectly.


Javascript is a very poor analogy since it's been grandfathered into our workflow, and has coincidentally been shoehorned into being workable, at the very least.


In relation to the blog we're discussing, Javascript is an excellent example, since one of the blog's points is that we shouldn't allow Go to become yet another Javascript.


Your post (and in turn, my reply) had absolutely nothing to do with the article.

> I don't think a language's popularity has anything to do with it being a good, well-thought-out language. Just look at Javascript.

My point is that it's popular because we've been forced into using it, and the web is now much larger than it used to be. I'm explaining why it became popular, despite its downfalls.

No other recent (past decade) poor language has come close to being popular, because no similar situation has arisen in which we were grandfathered into a language.


Furthermore, there are plenty of less-than-stellar languages out there that are popular for a time, so companies flock to them, build applications and services in them, and are then forced to maintain that code for years and years to come, which is one of the points of this article.

You are the only one who inferred Javascript's unique circumstances and then extrapolated an irrelevant argument with me based on your own misunderstandings.


If that's the case, care to name off some instances of a language becoming wildly popular for businesses and then just as quickly dying off outside of the JavaScript ecosystem in the last decade?

You seem to be missing the forest for the trees here. You said:

> I don't think a language's popularity has anything to do with it being a good, well-thought-out language.

Then used JavaScript as an example of this:

> Just look at Javascript.

I have sufficiently explained why your correlation does not equal causation: JavaScript is the only language I'm aware of in the past decade to have become popular while being a poorly-built language, and hence it's a poor analogy for proving your point.

If you'd care to at the very least show some other examples, I'd be overjoyed to see them.

For the record, I completely agree with your point (I also don't believe popularity is directly correlated with the design of the language), but I don't have a single shred of evidence to back it up, only examples from outside the programming ecosystem. That's why I'm inquiring.


My first post was directly related to the comment I replied to. It also relates to the article. I agree that your reply was irrelevant, as is your second.


Javascript is pretty good.


Counterpoint: Javascript is pretty bad.

There is a brilliant language that looks like Javascript that is the language you might be thinking of when you say "Javascript is pretty good". Unfortunately, that language only exists in people's heads.

In practice, Javascript is full of crazy weird edge cases where it is required to behave in an insane manner because the browser vendors have all been juggling the idiot ball amongst themselves when implementing it.

e.g. https://medium.com/@daffl/javascript-the-weird-parts-8ff3da5...


I like Javascript. I don't see any problem with the language itself. I see problems with the browser ecosystem as a whole, but that's another story. I understand the argument "popular != good", but I think that if a tool rises in popularity over the long term, it's not random. There must be reasons for it.


Overall, it's a good thing. Flash is a proprietary technology from the past. Let's move on, guys...


To what? Last I checked, browsers hadn't yet implemented a proper means of doing cross-platform video publishing (video + audio) reliably.


Let's move on to Silverlight!


If only that were the case, but they're abandoning Silverlight. It was actually much more stable for me under Linux than Flash ever was, sadly, but they stopped developing Moonlight as well.


Moreover, the simulation speed would not be related to any "speed" inside the simulation. If the processing speed decreases, the simulation slows down, but nobody inside could possibly notice. If you stop the simulation, everything stops, everybody is "frozen", and there's no way to detect something like that from inside the simulation.


Exactly. Maybe calculating one iteration of our universe normally takes a second for the "outside computer", but because of very heavy load it now takes a year. We'd be none the wiser.


Too late for me. Atom has replaced Sublime in my workflow.


Just tried opening a file larger than 2 MB... nope. End of Atom tryout.


It's like they didn't even read it: http://www.finseth.com/craft/

(I'm kidding; I understand the slowness is due to the WebKit rendering...)


I'm using Atom to code. If you have a source file bigger than 2 MB, trust me, the Atom limit is the last of your problems. :)


I have to open CSV files all the time that can easily be hundreds of MB. I'm not doing anything wrong; it's a dataset.


I see no need to switch to Atom since Sublime is faster, has a more established plugin base, and can open large files. Sublime ain't broke, so why fix it?


Same for me at the moment. I've heard a lot of complaints that it cannot open large files and it can be sluggish; I don't regularly need to open large files (I'd rather process large log files from a terminal for instance) and the performance is fine for me. I like that Atom is open source, plugins are written in JavaScript and the rate at which new plugins are coming out is impressive.


I tried Atom for a week. The little half-second delays start to bite after a few days, and after a week you want to throw your computer out the window.

Think about it. It's 2015. My PC has 8 cores running at unimaginable speeds. It has 16 GB of memory. And my text editor couldn't open files over 2097152 bytes.

Am I living on a different planet from everyone else? How can people accept this as a normal situation?


Your remark about modern computers being really fast and software still managing to be slow doesn't just apply to Atom; lots of software is frustrating in that way. Maybe that's why people accept this as normal: because sadly it is. Try opening an "Open file" dialog on basically any modern desktop...

More and more I tend to use software that is either minimalistic or started in another era (Emacs + command line tools). The one exception is the web browser which has to be modern or a bunch of websites won't work.


I agree the 2MB limit is weird but, like I said, it rarely impacts me. I use Atom for coding and don't think 2MB source code files should be a normal situation.

I don't notice any half second delays though. I used to be an Eclipse user and recall it being much more sluggish.


I agree completely with this. In my job, I need to open up a lot of CSV files ranging from several MB to hundreds.

Sublime Text, whilst not perfect, is far better than anything else out there. It doesn't struggle with these files either.


> whilst not perfect, is far better than anything else out there. It doesn't struggle with these files either

Except on Windows it does struggle, a lot, and others like Notepad++ open large files way faster.

Just checked it again: a 20 MB Matlab m-file opens instantly in NPP; however, it takes ST3 about half a minute. Half a minute, seriously? After renaming it to .txt to get rid of syntax parsing, it takes about a second, which is still longer than NPP and starts to get seriously annoying with files of hundreds of MB.

Still, it's my go-to editor.


Vim will open multi-hundred-megabyte files in seconds, if only you invest in learning its modal editing mechanism (which, to be fair, is fairly arcane, and is proven to be a worse UI than free-form editing).


Proven? Where?


No, it isn't acceptable. That's why I use tools like vi/m, ed, and sam to open gigabyte-sized files. On my Intel dual-core laptop with 3 GB of RAM (yes, 3 GB, it is weird).


3 GB... 4 GB minus 1 GB carved off for shared-memory graphics?


If he's got 4 GB of RAM, it's more likely he's using a 32-bit OS that can't address the extra memory.


I'm a Dynamics AX developer. Our environment takes minutes to compare text files. I would love to have your "half-second" delays :)

