But why have a huge complex browser in the first place? Do people really get much satisfaction from HTML gimmicks? Are they more interested in media (images, audio, video)?
Let's say you want to watch video. Download the video using a program designed for that. Then watch it using mplayer, which works fine with the Linux framebuffer (fb).
Same goes for photos. Download the photos quickly and efficiently using the command line and pipe them into an fb-friendly photo viewer.
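As a rough illustration of that separate-tools workflow, here is a minimal Go sketch: fetch the file with one tool, then hand playback to a dedicated player. The URL and filename are placeholders, and it assumes mplayer is installed with fbdev output available.

    package main

    import (
        "io"
        "net/http"
        "os"
        "os/exec"
    )

    func main() {
        // Fetch the file with a tool meant for fetching (placeholder URL).
        resp, err := http.Get("http://example.com/video.mp4")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        out, err := os.Create("video.mp4")
        if err != nil {
            panic(err)
        }
        if _, err := io.Copy(out, resp.Body); err != nil {
            panic(err)
        }
        out.Close()

        // Then hand playback to a dedicated player on the Linux framebuffer.
        cmd := exec.Command("mplayer", "-vo", "fbdev", "video.mp4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }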
There used to be this idea called MIME: use separate programs to do separate things - small, simple programs that each do one thing well. Mozilla has all but abandoned this concept. We all had to suffer through the hassle of Adobe Flash, only to finally admit it sucks and watch Adobe kill it. (.swf = "small web format" - nice try, guys)
And it now takes hours, if not days, to compile Firefox on most consumer machines. The complexity burden of a "modern web browser" is through the roof. The program is a monstrosity.
(And the codec jungle makes even a simple video player like mplayer far too complex. We need some common sense here, folks.)
I use Gmail as my email manager, calendar, task list, contacts, documents, and general organizer. That's why I haven't migrated to mutt or another console email client.
I agree that not all JavaScript is needed, but I wouldn't consider Gmail a gimmick.
The more abstraction you shove down the programmer's throat, the less he will understand what's really going on behind the scenes. Eventually the abstractions become more of an impediment than a help, e.g. when you try to do simple things. We also end up with "leaky abstractions", since the programmers who create them have developed a love of abstraction (what might contribute to that?) but little understanding of the lower levels.
There was a recent post on HN regarding /usr/local that seemed to suggest some programmers do not understand what a partition is, and should not have to acquaint themselves with such "unimportant" details. I hope I'm wrong in that interpretation, because if it's true, that is just sad.
I think Lua is still the simplest and easiest in this regard. And Lua is embeddable.
Everyone is striving for concurrency as a language feature but sometimes it seems like a solution looking for a problem. What is it that you _cannot_ do now that you must have concurrency in order to do? What specific real world problem is it that you cannot solve? What are the cost-savings you will achieve with concurrency?
Maybe concurrency is a matter of smart programming, not dummy-proof languages with concurrency "built-in"?
Concurrency seems to be Go's main selling point (along with fast compilation). But Go is not small, it's not embeddable, and it can't leverage C functions with the same ease as Lua. With Lua you can extend apps written in C. With Go you are writing the same old C apps again in Go. All the Go libraries I've seen so far just redo, in Go, the same old things libraries in other languages have been doing for years.
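(For the record, here is roughly what that built-in concurrency looks like: goroutines plus a channel, fetching a few pages in parallel. A toy sketch, not anything from the thread; the URLs are placeholders.)

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        urls := []string{"http://example.com/", "http://example.org/"}
        results := make(chan string)
        for _, u := range urls {
            go func(u string) { // one goroutine per URL
                resp, err := http.Get(u)
                if err != nil {
                    results <- u + ": " + err.Error()
                    return
                }
                resp.Body.Close()
                results <- u + ": " + resp.Status
            }(u)
        }
        for range urls {
            fmt.Println(<-results) // collect in completion order
        }
    }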
At least with Lua, creativity and expression are encouraged by letting people write their own libraries. The existing C libraries don't all have to be rewritten as Lua libraries - what a boring exercise that would be. All you need is to understand how to interface with C functions, and then there's little among the "usual things" that existing C libraries do that you cannot reach. (A sketch of that embedding model follows below.)
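(The canonical way to do this is Lua's C API in lua.h, but to keep the examples in this thread in one language, this sketch uses gopher-lua, a pure-Go Lua 5.1 VM - github.com/yuin/gopher-lua. Same idea either way: the host registers a native function, the embedded script calls it. "greet" is an invented name.)

    package main

    import (
        "fmt"

        lua "github.com/yuin/gopher-lua"
    )

    func main() {
        L := lua.NewState()
        defer L.Close()

        // Expose a host function to the script, analogous to lua_register in the C API.
        L.SetGlobal("greet", L.NewFunction(func(L *lua.LState) int {
            name := L.CheckString(1) // first Lua argument
            L.Push(lua.LString("hello, " + name))
            return 1 // number of return values
        }))

        // The embedded script drives the host-provided function.
        if err := L.DoString(`print(greet("world"))`); err != nil {
            fmt.Println("lua error:", err)
        }
    }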
Just my opinion. Lua is an arbitrary example. It's the general concepts of simplicity, small size, extending applications, and leveraging existing C code of which I am a "fanboy". It just happens that Lua meets some of these criteria. Sorry if it offends anyone.
I've evaluated Lua for a number of use cases, and I even use it on my TI-Nspire calculator occasionally, but it has some horrible problems, particularly around metatables and its 1-based indexing. It also doesn't give us any real advantage over other languages. LuaJIT is impressive, I will say, though.
However, the assumption that concurrency is Go's main point is a bad one.
What Go actually solves is simple to list: simplicity, synchronisation, thread scalability, time to market, modularity, testability, consistency, security and memory management.
You really can't get all of those in C quickly and safely.
Go is a fixed C, with a fixed standard library that suits NOW, not 20-30 years ago. That's all it is, and that should be applauded and praised. It also does away with the inevitable mountain of stuff you have to do to get anything significant done in C, and it ignores the absurd complexity of C++ and Java.
Ultimately:
A single person can build something significant in Go in a week.
A single person cannot build something significant in C in a week.
cgo is very easy. I've plugged SDL into it in a few minutes.
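(For anyone who hasn't seen cgo: the comment block immediately above import "C" is compiled as C, and its functions become callable from Go. A trivial sketch of the mechanism - the hypotenuse function is invented for illustration, not the SDL binding described above:)

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>

    static double hypotenuse(double a, double b) {
        return sqrt(a*a + b*b);
    }
    */
    import "C"

    import "fmt"

    func main() {
        h := C.hypotenuse(3, 4) // calls straight into the C function above
        fmt.Println(float64(h)) // 5
    }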
I agree with you mostly. Although you are comparing Go to a language with manual memory management rather than to another GC'd language like Lua (sorry to keep using that example).
A question: What is your idea of "something significant"?
Go seems like an ideal language to quickly build servers. Am I missing something else?
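For what it's worth, quick servers is where the standard library shines; here's about all it takes (port and handler are arbitrary):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "hello from %s\n", r.URL.Path)
        })
        // Each incoming request is handled on its own goroutine.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }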
I never liked SPDY to begin with. It always seemed to be gratuitously promoting a "new" protocol when many of the speed gains can be had from simply paying better attention to existing protocols, e.g. using pipelining for multiple resources from the same domain. A lot of the "slowness" of the web comes from ignorance and laziness, not lack of capable protocols. It was disturbing how much mindshare SPDY seemed to be getting just based on hype.
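(To make the pipelining point concrete: over one raw TCP connection you can write both requests before reading either response. A bare-bones sketch; example.com is a placeholder host.)

    package main

    import (
        "io"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Two requests on the same connection, sent back to back.
        reqs := "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" +
            "GET /favicon.ico HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
        if _, err := io.WriteString(conn, reqs); err != nil {
            panic(err)
        }

        // The server must answer in order; responses stream back on one socket.
        io.Copy(os.Stdout, conn)
    }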
And TLS always seemed like a replacement for SSL, when what's really needed are _alternatives_ to the SSL model, not a GNU clone that proclaims it can do the same or better.
This is just my opinion. I apologise if it offends anyone.
Fortunately there are alternatives. You just have to look beyond the hype.
You're confusing SSL and TLS with OpenSSL and gnutls. TLS is basically a revised version of SSL developed by the same groups and through the same processes as SSL. The GNU project subsequently created gnutls, which is an implementation of both SSL and TLS just like OpenSSL is - the only reason it's named after TLS and the older libraries are named after SSL is because the older ones predate TLS.
Does the SPDY spec actually require TLS compression? IIRC, only TLS itself is required. Hence SPDY + HTTP compression + no TLS compression should be workable, no? It's not ideal, but it would still work...