150 million US Smartphone Users Are Downloading Apps, Data Shows
At some point (if not already) there will be so many smartphone users that even if only 50% of them download apps, smartphone apps will be bigger than almost anything else.
You can believe whatever you want, but most systems of justice recognize things like proportionality and imminence.
While a nearby coal power plant is likely to harm your health, it's unlikely to kill you, so killing someone over it would be disproportionate.
Also, even if it were going to kill you, it's not going to kill you right now, so shooting on sight instead of exhausting other avenues would be avoidable and hence not self-defense or justifiable.
Most people would sympathize with you as a victim of this coal plant, but find you guilty of a crime.
> mistrustful of arguments that the free market will sort things like this out. There's just no good mechanism to stop massive harm to common resources
Most arguments that the free market can handle these problems start out with the recommendation that the resources involved (rivers, lakes, ocean) should be privatized. Economist Walter Block and others have written about ways this could be done. To fault free market arguments for not working when the waterways aren't privatized is to misrepresent the arguments. Most people aren't arguing that the free market is going to solve problems relating to unowned, unownable, or government owned property without first recognizing private property rights in those resources.
Governments often protect polluters by limiting their liability. If that were changed, and assuming these resources (bodies of water) were privatized so that non-governmental parties had standing to sue, then an argument for why a class-action lawsuit or something like it couldn't handle these problems would be an interesting comment. But just saying the free market doesn't solve problems where there are no property rights is rather uninteresting, because free marketers agree with that.
Also it's pretty amusing that a failure of government (which owns the waterways, and most of the sewer systems) to solve this problem sooner somehow gets twisted into a failure of the free market (which doesn't own these resources). Without property rights there's no free market.
I agree that governments, who subsidize animal feed, water, land, and waste, and prosecute activists, do a great deal of harm to the environment by promoting and protecting animal exploitation. I'm fine with banning animal exploitation, even on private property, and I see that as no more anti-free market than banning slavery on private property.
Yes, because a private entity completely owning natural resources like lakes and forests, will completely prevent those resources from being ruthlessly exploited. Those private entities would rationally take a long term vision, and certainly wouldn't exploit those resources until there's nothing left.
There's certainly merit in trying to make people pay for externalities (offsetting carbon pollution for example), but you can't do that for everything. Privatizing everything might be an appealing free-market pipe dream, but unless we can completely/mostly stop externalities from happening, such an experiment would be a disaster.
I'm reminded of when hedge funds bought out old family-owned logging companies via leveraged buyouts. The only way it made business sense was to clear-cut everything and then close the mills down when the trees were gone.
I guess it would work in the magical world of economists. In that world, there would be active competition for the ownership of the rivers (i.e. lots of players) and consumers would have perfect information and be rational (i.e. they would avoid products that directly or indirectly cause pollution).
This doesn't reflect the content of any econ course I've taken, nor the views of any of my econ professors. There's a ton of study in economics about the types of effective roles government can play when dealing with positive and negative externalities, and "tragedy of the commons" situations.
Economists who believe everything can be sorted out by the free market are pretty fringe.
You can't take your wealth with you upon death -- property owners of valuable resources nearly always maximize exploitation in the short term and couldn't give a shit about the long term.
If they took a long term view, we'd still have old growth forest in the US.
> Yes, because a private entity completely owning natural resources like lakes and forests, will completely prevent those resources from being ruthlessly exploited.
I don't think this has anything to do with exploiting resources. If someone owned all the rivers, then they could sue the companies producing microbeads for any damages.
Not caring for the long term is still a massive problem for our civilization. I don't know if that has anything to do with privatization. I'm very skeptical that governments or voters care more about the long term than private markets. At least individuals care about leaving money for retirement. And even if you don't care, it's still senseless to depreciate your assets' value by more than you gain from exploiting them.
> completely owning natural resources like lakes and forests
They wouldn't completely own them, they'd have property rights in them. Depending on how the property rights are structured you could have many owners of say a lake or a river.
> Those private entities would rationally take a long term vision
If there's anything we know about politicians up for re-election in two years or unable to run for another term it's that they take a long-term approach to problems. /s
Or the polluting company with deep pockets would just buy the land and do whatever they wanted.
An unregulated free market is a terrible idea. It ends in massive monopolies and wholesale destruction of natural resources.
It's not a bad thing that that's what a free market moves towards, per se, but that's why it needs to be and is regulated. Any worthwhile conversation isn't about whether or not to regulate it, it's about at what level.
I spent many years trying to work out how that could be made to work, in a world where the people with the most money and ability to buy those resources being destroyed are also the people with the most interest in seeing them destroyed without impediment. I just can't make the math work out. When the oil and gas industry can cause wars involving the world's largest nations, in pursuit of their profits, how can I believe empowering them to buy every river will save those rivers? In a world where they can literally buy armies (even more directly and with even less impediment than they have today), how can I believe there will be less violence over oil?
I've read the same authors you've read on the subject (believe me, I hear you, and I have made the same arguments you're making more times than I can count). I was for many years a libertarian (big L and little l...card-carrying member of the party, worked for ballot access, etc.). I just don't believe in the premises of libertarianism any more. At least, not the free market uber alles part of those premises.
Also, I can't reconcile the idea of unlimited capital accumulation in the hands of a few that spans generations (e.g. land, water access, etc.) in a world of limited resources with my own beliefs about fairness, justice, and human freedom.
How do you feel about the notion of breaking from our feudal tradition and advocating for basic income? What excites me most about the idea is that people would be free to check out of the system and use their own creativity to survive and thrive. Soon, when intellectual capital is understood to be the most valuable kind, this investment will be returned with dividends.
I support a basic income. Honestly, I think it is inevitable, or a lot of people will starve as we move past the need for a lot of unskilled labor. There simply won't be work for everyone...if we adhere to the old notion that everyone has to earn the basic necessities of survival, well, the results will be catastrophic. It's unfortunate that we'll have to wait until everything else has been tried before settling on the one thing that could actually positively change outcomes for huge swaths of people. There's already a lost generation of people who will never escape their school debt.
> I was for many years a libertarian (big L and little l...card-carrying member of the party, worked for ballot access, etc.). I just don't believe in the premises of libertarianism any more.
Limited exposure to some parts of the real world. I read a lot, worked a lot, and didn't make time for travel. I had few friends who were significantly different from me in terms of money/education/class/race/etc. Kinda like most people. A few years traveling full-time, getting to know homeless folks and undocumented folks, and seeing how class and race plays out in our "free market" system changed my mind on a few things.
And, I think dismissing libertarianism out of hand, as though it has no interesting/valuable ideas, is somewhat silly. The LP was literally decades ahead of the curve on LGBTQ rights, ending the war on drugs, and opposition to war (of all sorts). All at a time when those ideas were extremely unpopular in mainstream politics. I disagree with the premises behind their economic policy ideas, but it doesn't mean I don't understand the allure of the non-aggression principle (I just think they're mistaken about capitalism being free of aggression).
So, how about you? What took you so long to come to your views? Why weren't you born with the correct ideas on every issue? Or were you? You reckon you're right on everything now? How embarrassing it'll be when you find out in five years you were wrong about something today.
I think you misread my comment as sarcasm. It was an honest inquiry, if a little facetious. I should have written it differently. Thanks for sharing.
Given that you ask, my views on these things seem to be very similar to yours today, based on what you've written here. I've considered myself a small-l libertarian for most of my life. I think the difference is that I've never found the ideological purity of big-L Libertarianism very attractive.
That's why I asked the question. I really would like to understand what it takes to convince someone who buys into the Libertarian party line to embrace ideas like basic income, and to realize that privatizing everything simply will not result in the outcomes they think it will. I wonder if it's possible to convince them without their having had the kinds of life experiences you have had.
I think that libertarian ideas and libertarian activists could be an effective force for reform in this country--if only the most motivated (people like you who are motivated enough to work on things like ballot access) were willing to make the kinds of ideological compromises and embrace the kinds of ideas (like basic income) that could make libertarianism more broadly appealing.
My inquiry was also sincere, even if the tone seems harsh online. I was joking, on the assumption that most folks here are at least willing to examine their views on occasion.
The answer to what it takes to change minds on any subject?
Not taking a tone of "you're clearly an idiot". I do it all the time (particularly on issues I'm passionate about, like the horror that is animal agriculture), but it doesn't convince anyone, it just puts them on the defensive. And humans have somewhat broken brains such that defending a position makes one believe that position more strongly and more fiercely (even if it is demonstrably ridiculous; e.g. anti-vaccine folks).
Convince them to get outside of their comfort zone. Travel, activism, and volunteering are what did it for me. Activism and volunteering probably need to be with and on behalf of folks unlike oneself to have any impact.
Ask questions rather than arguing. If someone discovers the uncomfortable points of their position on their own, they'll be willing to change their mind. One of the founders of CFAR (Center for Applied Rationality) once asked me a few questions that may have even planted the seeds of my change of heart when we happened to meet in NYC...specifically, she asked about the source of property rights, since I don't believe in gods, so I can't simply handwave it away as a "god-given right". That stuck with me, because it's clear to anyone who is sincere that property is merely a fiction we all agree on, and it is a fiction that can be taken to unhealthy extremes. Asking the right questions is harder than ranting, but it actually works to change opinions, and it keeps the conversation on the level of a friendly chat rather than two ideologues bloviating.
Those arguments are based on the premise that the free market can solve all problems. Thus when something fails it's because there's not enough free market.
The other explanation is that the premise is wrong.
If the free market can't work together with anything that isn't organized as a free market, then that's a flaw of the free market system. There will always be things that aren't a free market.
More specifically, a free market system can solve most problems it is applied to, given:
1. Property rights exist and are enforced.
2. There are minimal barriers to entry.
3. Transaction costs are low.
(#2 is of less interest here.)
The classic market failures all involve a violation of one of these. The tragedy of the commons is a property-rights failure, monopolies are a barrier-to-entry failure, and lots of other miscellaneous exploitation and big-corporate-player centralization issues are related to transaction-cost issues.
In this case, it's quite clear that property rights do not exist and are not enforced on things like the atmosphere or the ocean (good luck doing that internationally), and even if they did, imagine the transaction costs of tracking exactly how much microbead pollution a given person has flushed down the drain. Anyone crying 'free market solution!' is being quite silly.
> What would prevent someone from, say, buying up all the water and then not selling any of it, or selling just a little to a handful of rich people?
First, no one has enough money to buy up all of the water. Second, even if they did, it wouldn't make sense not to sell it. Most people like making a profit and having more money. It wouldn't make sense to sell only to wealthy people, because they don't use enough extra water to exhaust the supply in most places people inhabit, and the seller would be forgoing a lot of profit from selling water to normal people. Also, governments often sell water to politically connected business and agriculture groups at a lower rate than they sell to normal people.
It sounds like you see profit as the sole motivation of people's actions.
If someone owned all the water, they would have not just profit-making potential, but they would have a lot of power. In particular, they would have the power of life and death over virtually everyone on the planet.
In a free market utopia, these people could kill as many people as they liked by simply refusing to sell them water, and believers in a completely free market wouldn't lift a finger to stop them -- because, after all they're just freely doing what they like with their own property.
The world is full of people with malicious motives. The prisons are full of them, and there are plenty more outside of prison. Wars, ethnic cleansings, and genocides have killed people by the millions. Some of this was done for profit, but some was done for other motives.
Don't for a moment think that people like that would hesitate to use the power in their hands to harm those they hated or wanted dead for whatever reasons of their own.
Then there are the sociopaths, who would let others die simply because they don't think giving them water would be worth the bother, or maybe because they're just more interested in other things.
Also, you don't have to buy up all the water in the world to be able to wreak havoc. All the water in a particular water-scarce region might be enough.
Of course, water is just an example and an analogy. The fact is that free market believers have very little to nothing except faith in the free market that would prevent the concentration of wealth and power (in whatever form) and the subsequent abuse of such at the whims of those who wield it.
I appreciate your post and line of thinking, but am not at all convinced that our natural resources and spaces would end up better off being owned by a private company for financial gain. I don't trust private owners to take the required long-term view on those resources and spaces. And in cases where some benefit is derived (e.g., quicker action against microbeads), I think there are likely accompanying outcomes that are even worse (e.g., decreased access to fresh water or natural parks).
Unfortunately it seems the current market for public opinion that is the modern media is too inefficient; we need a high-frequency bid/ask system that can implant ideas directly.
The free market can't prevent short-sighted assholes who don't understand what they're doing.
Smart people with the public good and the long view in mind can, however, do better through public policy that sets limits on reckless actions.
If you want to see what happens when people don't give a shit look at China where Beijing recently had to close schools for several days because of the pollution.
> being owned by a private company for financial gain
This doesn't necessarily follow. Private ownership can be structured as something other than for-profit. Non-profit land trusts are a big thing in many areas of the US, for example. But even industry-motivated groups might be a choice. Think a homeowners association of beach property owners (including resort hotel conglomerates), or fishing rights cooperatives. If either had stronger property rights and standing to remedy damages, they would be motivated by longer-term aims to ensure water quality. You might be right that this could have adverse consequences, but I wanted to raise the possibility of options other than BigWaterCo.
> I don't trust private owners to take the required long-term view
If there's anything we know about politicians up for re-election in two years or unable to run for another term it's that they take a long-term approach to problems. /s
> Governments often protect polluters by limiting their liability.
Exactly, exactly, exactly.
As an environmentalist and libertarian, I can't stress enough how government is more often than not a partner in crime when it comes to polluters. One of the big reasons places like China are so polluted is that the advance of industry is considered to be part of the common good. Therefore, the polluters are given immunity and protected by the political system - giving regular citizens no recourse.
China's pollution is because of coal power plants left over from industrialization, not special favors to industrial leaders or under-the-table deals. The Chinese government is also the largest green energy producer in the world.
What forms of recourse would citizens have against polluters without government intervention?
I hope you're not going to suggest 'they can shop elsewhere' as it's been proven time and time again that most citizens don't choose what they buy based on the greater good.
Sue them in court for damages. Folks sue corporations all the time for various reasons and often win. We should be able to sue polluters as well, but governments often protect them for various reasons (like tax base, campaign contributions, etc). Oil spills are a great example of corporations having immunity thanks to government.
The court system is an arm of the government. Are you saying the government should pass regulations and/or set out citizen rights that limit the potential behaviour of corporations?
No, I'm saying corporations lobby and have laws passed that protect them from liability (as well as from competition), making them immune. Going full-libertarian is not even necessary, just roll back those laws.
I think if you're going to claim that we're not taking one part of a philosophy into account, that part needs to be remotely possible, or you need to show how we can privatize the ocean.
Selling ownership of fishing rights over tracts of the ocean, while it has flaws, would be infinitely better than the free-for-all we have now, where nobody has an incentive to preserve any fish stocks.
What are you talking about? This is the absolute definition of the tragedy of the commons -- when you have a public resource, it is in everyone's interest to exploit it.
The free market does not just mean "anarchy! take everything!" There's no market here!
If you're resorting to arguing the semantic meaning of Free Market, then it is, by definition, "an economic system in which prices are determined by unrestricted competition between privately owned businesses."
With natural resources, the free market is certainly an unrestricted first-come first-served market. Fishing quotas restrict competition and influence prices, which makes it not a free market, despite your attempts to redefine it as such to fit a narrative.
Quotas, seasons, and permits aren't really as good. As a fisherman you have no incentive to catch below your quota, since you aren't rewarded for leaving anything on the table. And the quotas are usually too high for political reasons, to avoid losing votes. Either way, they are centrally planned.
A market solution would allow a fisherman to catch below quota and be rewarded in later years by replenished stocks in the waters he controlled.
> And the quotas are usually too high for political reasons, to avoid losing votes.
This is a problem with some of those quotas - they're not being set up right. That's the failure of democracy though, not of central planning per se.
> A market solution would allow a fisherman to catch below quota and be rewarded in later years by replenished stocks in the waters he controlled.
That may work, in a type of business with big inertia (i.e. where you can't go out of business over a single season), if we could parcel water like that. Sadly, fish colonies don't respect arbitrary lines we draw on maps. I don't know if such a market solution is ecologically possible; fish need space, and they often need to travel.
Adding onto that, I can't imagine the logistical nightmare of privatizing the entire ocean, including international waters, besides the more fundamental issue that it solves nearly no problems.
What if I buy a 1 meter by 1 meter parcel of the ocean and charge bottom of the barrel prices to dump industrial waste in it? What if that square of ocean was 100 meters off shore from a beach in Los Angeles? If you don't want that then we're right back to government regulations.
It's possible to determine negative externalities before a product goes to market, and this 'pre-filter' regulatory framework and barrier to entry is decidedly not 'free market'.
Any solution that does not implement the above is a 'post-filter' approach which can result in permanent, irreversible damage to a system.
Even if we accept your post-filter scenario (which is not rational from a system design perspective), your 'privatize common resources' suggestion relies on the erroneous idea that public property rights are not equivalent to private property rights from a legal perspective.
Either type of right can be reduced to a legal right which gives the rightsholder standing to take legal action. Whether a rightsholder enforces those rights is simply a matter of whether the rightsholder is competently managing those rights, and both public and private entities can be good and bad at this.
In a situation where there are shared resources used by all actors in a system, the simplest and most logical solution is to have one entity, representing all actors, manage those resources.
Your suggestion of transferring ownership / management of shared resources to a narrow subset of beneficiaries seems unavoidably complex, convoluted and illogical to me, having surveyed various arguments online for 'privatize all the things'.
What is the best resource you'd cite online? I've reviewed Walter Block and, to be honest, he seems like a total lunatic, but that won't necessarily stop me from evaluating his arguments.
I just don't want to read a 494-page book on privatizing roads to get the gist of his arguments.
I wasn't actually arguing for water privatization, but rather I was arguing against blaming the free market for the current situation, one where the government owns most bodies of water and allowed this pollution.
I'm no expert on water privatization. I know a few economists have written about it, but it's unrealistic to expect them to come up with a great solution on their own. A good solution would have to evolve over time with decisions by judges etc.
AFAIK most proposals don't have a single entity owning a river, but rather people own rights to a certain amount of water from the river at a certain quality.
I wasn't referring to Walter Block's book on roads, but this one: "Water Capitalism: The Case for Privatizing Oceans, Rivers, Lakes, and Aquifers." I haven't read it.
> AFAIK most proposals don't have a single entity owning a river, but rather people own rights to a certain amount of water from the river at a certain quality.
Isn't that pretty similar to how things work now, minus the quality part?
There is a common idea in economics that explains the problem with what you say: externalities. This is when my behaviour affects you in a way not captured by the market.
For example, if a forest is privatised and cut down the owner might be acting in their best (market) interest, but people living nearby will be negatively affected.
We could try to put a price on these negative effects, but it's very difficult, so the better solution is normally to keep such things public.
Essentially, you're arguing the solution to one set of externalities is to privatise, but that very privatisation will create a whole host of even worse externalities.
We see with climate change and carbon emissions that making companies pay for their externalities (pollution) is very difficult after the fact, because there are strong vested interests against it, and because the people negatively affected are many but spread out, so there is little incentive at the individual level to go to the vast effort of fighting the polluter.
You are conflating privatization with free market economics. The two are not the same thing, even though there is surely a large overlap between the respective advocates of each. Free market economics is more about the absence of state intervention disrupting competitive markets. You're still correct that it has nothing to do with the government failure described in the link.
If the government "owns" a bunch of lakes and rivers, and someone else is polluting those lakes and rivers, then the government should sue them for damages (cost of cleanup) in addition to ordering them to stop doing it. After all, that's exactly what I would demand if I owned a swimming pool and someone kept dumping trash into it. Simply telling them to stop is not enough. They should pay for the cleanup as well.
So yes, I agree that this is a failure of government. The government should be a lot more zealous in protecting the value of all the natural resources that it claims for itself, perhaps even more so than private landowners.
> Most arguments that the free market can handle these problems start out with the recommendation that the resources involved (rivers, lakes, ocean) should be privatized.
Either that, or you need to use the tax system to internalize the externalities. (Think carbon tax or London congestion charge.)
They want to kill XUL for Firefox so they can be all fancy HTML. So they have to kill Thunderbird, a XUL app.
In a few years the all-new HTML Firefox will come out. My bet is that it will suck. It will lack a TON of features that the existing Firefox has, but hey, it's all HTML! And you won't be able to stick with the old one, because within a week or two some critical security flaw will be discovered, and eventually (like six weeks later) they'll stop shipping fixes for the old Firefox.
Initially the HTML Firefox will suck. When you take an app that's been worked on for 15 or so years and then replace its UI, you're going to lose a TON of features. They'll slowly reintroduce some of the most popular features (hamburger menu will be priority #1!) but there will be a TON that they will not reintroduce. Why? Because when they were first introduced a decade ago it was a cool idea someone had, and no one knew how popular it would be, so heck, why not implement it. But now they know that only 10 million or even 1 million people use that feature, and they're only interested in 100-million-user features! If Google Chrome doesn't have it, it must not be important!
As much as people complain about XUL not looking native, wait for HTML Firefox, it will take them forever to get where XUL was years ago.
They can't just kill XUL for Firefox though; they have to burn down the XUL ecosystem first, so that when the new Firefox ships it's just "nothing to see here."
1. They try to kill xulrunner as a project separate from Firefox. They try to move everyone to firefox -app.
2. They stop releasing binaries for xulrunner.
3. They deprecate XUL extensions.
4. They distance themselves from Thunderbird. They say it's better for Thunderbird. Yeah right! Thunderbird is built on XUL; it's not going to be rewritten in HTML any time soon, definitely not by volunteers. They're not going to be able to maintain XUL on their own either, and when Mozilla stops supporting XUL for Firefox a few years after deprecating XUL extensions, Thunderbird will be screwed. But hey, it's not our project! We abandoned it years ago!
So when the crappy HTML Firefox shows up, with way less features than the Firefox of today, remember that this (Thunderbird) was one of the things given up to have it.
But hey, donate to Mozilla! $5, $15, $25, anything helps. Because we already make hundreds of millions of dollars and we do whatever is shiny and new, screw the "community" of existing stuff. We're fighting for an open web! (where you can use Gmail for email)
XUL was a weight on developing for Firefox. It worked like HTML, but was nonstandard. At first, that meant they could quickly implement things like flexbox (<hbox> / <vbox>) and grid layout (<grid>) long before they were a thing in CSS.
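For anyone who never wrote XUL, here's a rough sketch of what that looked like next to the standard CSS that eventually caught up (the XUL element names are real; the snippet itself is made up for illustration):

    <!-- XUL: a horizontal box with a stretchy spacer -->
    <hbox flex="1">
      <button label="Back"/>
      <spacer flex="1"/>
      <button label="Forward"/>
    </hbox>

    <!-- the same layout in standard HTML/CSS flexbox -->
    <div style="display: flex;">
      <button>Back</button>
      <span style="flex: 1;"></span>
      <button>Forward</button>
    </div>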
But as HTML and CSS slowly got more features than XUL, XUL development slowed down, up to the point where writing the Firefox UI in XUL became a pain because of poor tooling and sneaky bugs. More and more pieces of Firefox got written in HTML inside XUL, and factoring code between the pieces in XUL and those in HTML was nightmarish.
Dropping XUL means putting those bugs and issues behind us, and focusing development on a single DOM language. You would probably be surprised by how much of the UI already is in HTML; tab groups is almost all HTML, and the DevTools' editor and DOM inspector are in HTML as well.
As for donating to Mozilla, the distinction between Mozilla Corp and Mozilla Foundation is understandably complex for outsiders, but basically only Mozilla Corp makes money from the partnerships.
I'm glad to see it disappear; it's one of those things the world doesn't need. A failed experiment. And an HTML UI for Firefox makes sense in the long run.
Fun fact of the day: Did you know XUL uses DTD to store translations? That's right, if you have a string you want to translate, you just have to create a new XML element in a localized DTD file. Isn't that just a wonderful idea.
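For those who never saw it, the mechanism looked roughly like this (file and entity names here are hypothetical; the DOCTYPE/external-entity trick is the actual scheme):

    <!-- locale/en-US/myapp.dtd: one entity per translatable string -->
    <!ENTITY saveButton.label "Save">

    <?xml version="1.0"?>
    <!-- myapp.xul: the DOCTYPE pulls the localized DTD in as external entities -->
    <!DOCTYPE window SYSTEM "chrome://myapp/locale/myapp.dtd">
    <window xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
      <button label="&saveButton.label;"/>
    </window>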
I was just going to mention that, but you beat me to it. It's just bizarre to use XML external entities for internationalization.
I worked on TomTom Home, which was implemented in xulrunner, and I developed some internationalization/localization tools that had to deal with XUL DTDs as well as several other different and incompatible file formats for storing translations. I could never for the life of me figure out why they decided to use DTDs with external entities for translations.
XUL wasn't a failed experiment. From what I understand it served as the inspiration for some of the new features found in HTML. If that's the case it was a useful experiment.
It was meant as a pro. Since it was their own language, they could do anything with it. They could make a better HTML.
But HTML caught up. Currently, I believe HTML is better than XUL, and making XUL great again is both a silly reuse of a political slogan and a waste of effort.
I for one would rather see efforts made to allow CSS styling of all input elements in HTML.
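Right now the closest you can get is throwing away the native widget entirely and rebuilding its look by hand, which is exactly the problem. A minimal sketch using the standard appearance property (vendor prefixes included for older engines):

    /* opt out of native rendering, then restyle from scratch */
    input[type="checkbox"] {
      -webkit-appearance: none;
      -moz-appearance: none;
      appearance: none;
      width: 1em;
      height: 1em;
      border: 1px solid #888;
    }
    input[type="checkbox"]:checked {
      background: #3a6ea5;
    }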
Good luck - the main advantage of XUL IMO is that it looks and feels native to the platform. This is completely lacking in any HTML based UI I've ever seen. It's been a major advantage for FF extensions and I can't help but feel like it's a major step backwards to lose it.
> Good luck - the main advantage of XUL IMO is that it looks and feels native to the platform.
That you say that is a testament to how well the meticulous CSS styling of the XUL elements—which applies equally well to HTML (try it in a browser chrome shell!)—worked.
I don't think that has anything to do with XUL directly; you can still find its original look by firing up SeaMonkey. Rather, Firefox developers opted to rework the UI subsystem so that it translated XUL elements into native elements as much as possible. IIRC, this was done in part to speed up the Firefox UI compared to SeaMonkey.
> Being nonstandard is completely irrelevant here.
It's relevant in that much of the work has to be duplicated (documentation but also layout implementation & the like) and none of the new and improved web development tools can be used for XUL.
The problem as I understand is more about the documentation, the hidden quirks and also the barrier to bring new contributors to the source code. It's always easier when they don't have to learn a custom language first.
> Being nonstandard is completely irrelevant here.
No, it's not irrelevant. Standards are also about documentation (it's easier to document standardized things) and familiarity (people are more familiar with standardized things).
I was poking at the non-standard portion. Lowering barrier to development of plugins, patches, etc is fine, but being standard doesn't help or hinder that. Popularity and ease of introduction help that. Javascript (that is, the dialect of ECMAScript that is implemented by Firefox) itself is non-standard and Mozilla isn't throwing that one out for exclusively ES2015.
Actually, tons of nonstandard SpiderMonkey features have been removed. Sharp literals were axed, E4X was removed, "let" is being changed to the ES6 behavior, and so on.
It happened before, with the transition from Mozilla Suite to Firefox. And let's be honest: XUL was just lipstick on the pig that is cross-platform development. HTML/CSS/JS are now fast enough to look like a slightly better pig, so here we go.
Also, there's a generational shift underway. You and I might find it crazy that people would openly choose to use IDEs built on HTML/CSS/JS, but that's what a lot of young folks are doing (Atom, VSCode, etc.). That's their world, that's what they like. An entire generation now exists who learnt to code from web scripting rather than C or BASIC. They have taken over. It's just how it is.
(this said, I agree that donating to Mozilla feels a bit silly, looking at how much money they make from commercial agreements. It's like donating to Ubuntu or RedHat.)
I'm all for writing new apps in HTML, I think Atom and VSCode are awesome, but I'm not for rewriting huge legacy apps as HTML apps for no good reason. The reason given, that XUL requires maintenance that Mozilla engineers don't enjoy doing, is a joke considering the amount of effort to maintain XUL is less than 1% of the amount of effort to move Firefox to HTML.
No one has listed the ten awesome features that we're going to get from HTML Firefox (cause there ain't many) or the 1,000 features (tons of little details) that will be lost. If users listed their 10 biggest problems with Firefox I doubt any of them would be solved by moving to HTML.
Imagine if instead of writing VSCode from scratch and releasing it alongside Visual Studio Microsoft had rewritten the Visual Studio UI in HTML, abandoned all the nonessential features, and abandoned the old native Visual Studio.
One might say that Mozilla will wait to release the new Firefox till it has all the old features of the old Firefox, but that's not been my experience with how teams work. They'll get frustrated with the rewrite and want to get it out the door. "We can add those features later" they will say, and then they'll never get added.
> The reason given, that XUL requires maintenance that Mozilla engineers don't enjoy doing, is a joke considering the amount of effort to maintain XUL is less than 1% of the amount of effort to move Firefox to HTML.
Ah, but maintaining XUL means working on old code (which is boring), but moving Firefox to HTML means working on new shiny code (which is exciting).
Well, or Firefox as a web browser has to render HTML/CSS/JavaScript no matter what, and now that HTML/CSS is at feature parity or better with XUL in the space XUL is meant to occupy, it doesn't make sense for Mozilla to maintain two competing technologies when one receives 95% of their internal developer attention and 99.99999999999999% of external developer attention.
Instead, they plan to render the UI natively, with only "some" parts in HTML.
So, instead of XUL + HTML, we’re going to get GTK + WinForms + Cocoa + HTML. Great, eh?
And we lose the ability to style it with addons – your themes can only change the background image of the header bar, that’s it.
And the remaining addons can’t modify the UI (tree style tabs, bottom tabs, etc) anymore either, instead you can only modify page content.
I’m seriously pissed off now, because Firefox was the last browser where I could actually customize it how I liked it.
I hope the person who made this decision has to use software without any config options and with horrible defaults, like GNOME, for the rest of their life. May their car always have either 60°C+ heat or -20°C AC; may their phone's screen always be either too dark or too bright.
> Part of the decision has already been made. We are moving Firefox addons (themes and extensions) away from a model where you can perform arbitrary styling or scripting of the browser chrome. This is an engineering-driven decision, and it's unavoidable and necessary for the long-term health of Firefox. Not only are we moving Firefox away from XUL, but we are likely going to make significant changes in the way the UI is structured. It is likely that some parts of the UI will be implemented using native widgets, and other parts will be implemented in HTML, but the exact DOM structure may involve independent pieces connected with well-defined API surfaces.
Official statement from the Mozilla post in the discussion regarding removal of support for "heavyweight" themes. Emphasis mine.
That’s a pretty clear statement that it won’t be 100% HTML.
Also, the fact that "arbitrary styling and scripting" won’t be possible is another issue.
Tell me how I am supposed to write an addon that adds tab previews as thumbnails when you hover over a tab, like Vivaldi does: http://i.imgur.com/vqysJs1.png ?
How am I supposed to write an addon that colors the navbar and the current tab in the theme color given by the HTML, or, if not existing, the favicon?
With current addons I can do that, with the new addon system, I’m seriously fucked.
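For reference, this is the kind of trivial chrome styling that's going away; a couple of lines in userChrome.css (or a stylesheet shipped by a legacy theme) recolor the whole browser. #nav-bar and .tabbrowser-tab are Firefox's real chrome id/class; the color is just an example:

    /* recolor the navigation toolbar and the active tab */
    #nav-bar {
      background-color: #902020 !important;
    }
    .tabbrowser-tab[selected="true"] {
      background-color: #902020 !important;
    }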
You're citing jwz's CADT post in a thread discussing Firefox? It's a product for which people regularly complain about open bugs that are 10 or more years old.
He's right. Mozilla, a $200-300 million a year outfit, currently maintains their software including Firefox. It's a huge C++ application. People who code XUL in their spare time, even a bunch of them, aren't likely to make a dent in keeping parity between a Firefox fork and the main release. They'd likely have trouble even porting it.
So, a fork is a rough solution and will have maintenance issues for an app this size.
As an active user of Thunderbird I am very disappointed by this news.
But these rants are just silly. XUL is a technology that needlessly duplicates what HTML/CSS do these days. And if you ever want to have a smooth transition to Servo, which solves real, deep problems, having an HTML UI is going to be dramatically important.
I really want to agree with you that Atom is awesome, and it is, in principle, but in reality a text editor should not be using 300MB of RAM. Sublime Text, which I consider to be a direct competitor, barely uses 20MB of memory on my machine even after hours of use. Heck, does Intellij even use that much memory?
I'm just really discouraged with how more and more desktop apps are being written in HTML, CSS and Javascript and suffer in quality as a result.
Considering my first 16-bit computer had 3MB RAM and ran MS Word alongside a GUI, I am wary of saying "amount X of RAM is preposterous for a given task".
As long as Moore's law provides enough lift under our wings, RAM usage is one of the less important aspects of an application. However, I fear the trend of building application UIs in HTML/CSS, because invariably it will lead to wildly inconsistent look, feel and behavior.
It's worth pointing out that for a couple of years Mozilla has been building a replacement for their rendering engine (Servo), so I imagine that to some extent deprecating XUL now is preparation for not having to re-implement it for Servo. Just speculation.
> One might say that Mozilla will wait to release the new Firefox till it has all the old features of the old Firefox, but that's not been my experience with how teams work.
Realistically, getting to feature parity after rewriting a core part of any application is going to be nearly impossible. You end up with different features, hopefully better ones, but not exactly the ones you had before you started. You can't step twice in the same river.
What bothers me about being older is my first browser was Internet Explorer, then I got to play with Mosaic's slow arse in school, then used Opera/Mozilla, and so on. Got to see where it came from. And the new stuff, especially Firefox, is coming full circle in how friggin' slow they run to serve the lowest common denominator of web pages.
It's annoying. I miss Web 1.0. Add just a bit of dynamic functionality plus broadband and it would be fine for 80-90% of cases. And FAST!
Instead, we get Web 2.0, 3.0 (4?) which makes pages on 40-50Mbps load like my old 28Kbps modem on AOL. Seriously...?
> And the new stuff, especially Firefox, is coming full circle in how friggin' slow they run to serve the lowest common denominator of web pages.
It should be easy to find performance numbers showing how Firefox 42 renders old, pre-CSS pages slower than, say, Netscape 4, then.
Netscape 4 didn't have a JIT, didn't use hardware accelerated layers, trapped into kernel mode for GDI calls, didn't use accelerated SIMD for painting, and didn't have HTTP 2. It barely had any optimizations for dynamic restyling, so tons of stuff would get reflowed when it didn't have to. This is just the tip of the iceberg.
Browsers have gotten more complex, but the complexity is often in the service of making things faster.
Notice my references to Web 1.0, 2.0, etc? That means my comment was talking about not just the browsers but the sites designed for them. The combination of the two have made web sites really slow that could be designed to load up instantly. Instead, they load up as slowly as some sites did on my old Pentium 2 running Opera, etc. You'd think they'd be significantly faster with all the Moore's law iterations and browser improvements. Modern sites make sure that doesn't happen, though.
And I never mentioned Netscape: it was called Netscrape then and hackers despised it. I used Opera and IE mainly.
Back in the Netscape 4 days, the architectures of those engines were broadly the same (in the sense that Linux and FreeBSD have broadly the same architecture). Of course the codebases were different.
In the late '90s, the time frame this thread is about, it was well known that the layout engines at the time were not dynamic: they could not in general reflow only parts of the page. Everything else I mentioned in the post that triggered this subthread is obvious simply based on browser engine and OS history.
I don't even know what we're arguing about anymore. Do you dispute this?
> (Netscape 4/Opera/IE) didn't have a JIT, didn't use hardware accelerated layers, trapped into kernel mode for GDI calls, didn't use accelerated SIMD for painting, and didn't have HTTP 2. It barely had any optimizations for dynamic restyling, so tons of stuff would get reflowed when it didn't have to.
Disabling JS has given me the fastest turnaround on page-load times. It's usually not the browser, but loading 15 different unoptimized JS frameworks, that causes the problem.
I am 29, started coding at 14, I've built UIs based on mIRC Scripting, VB/Winforms, C++/Qt, C#/WPF, Java/Android layouts. Using IDEs such as Visual Studio, Eclipse, Qt Creator and Android Studio.
I must say: Atom Editor is great. Using HTML/CSS/JS to build desktop/mobile apps really makes sense to me, especially given that:
- You only have to support one rendering engine.
- You have access to the latest Web Components/ES6/CSS3 features.
- You can rely on native Node.js modules when needed.
It's good for portability, HTML/CSS became better at UI, JS becomes a better language, current IDEs are great, live debugging tools are great.
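As a sketch of how small the native shell of such an app is, here's a minimal Electron-style main process (index.html is assumed to sit next to it; everything interesting lives in plain HTML/JS):

    // main.js
    const { app, BrowserWindow } = require('electron');

    app.on('ready', () => {
      const win = new BrowserWindow({ width: 800, height: 600 });
      // the whole UI is just a web page
      win.loadURL(`file://${__dirname}/index.html`);
    });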
- There are far fewer native components, leaving accessibility down to the developer of the app, which in practice makes it nonexistent.
- Platform integration is impossible, which means there is no way for the framework, for example, to create widgets differently on OS X, Windows or Linux (these platforms have many different conventions)
- Theming globally becomes impossible. If you want to write a dark theme for your desktop, you go from writing a theme once for GTK and once for Qt to once for each and every app you run. Uuuurgh.
But only supporting one rendering engine, yay! Much better than the 6 different engines we have to support in Qt (huh?).
And having access to all the latest JS additions! ... that were copied from other languages you could develop desktop apps in, because JS is a terrible hack.
And relying on native Node.js modules, yay! As opposed to native modules for literally every other better language out there.
I'm not sure what you're actually comparing this workflow to. Maybe one day writing HTML apps will be great, but today is not that day. Today, writing HTML apps is only beginning to be an idea that doesn't completely suck. But for the user, it does massively suck. Massive apps that ship their own copy of webkit/blink/whathaveyou, with security flaws that won't get patched, disgusting performance on low-end hardware, atrocious battery usage and decades of UX knowledge thrown out of the window just because the app developer doesn't have the knowledge to see it.
I don't look forward to this. And I'm younger than you.
> "Platform integration is impossible, which means there is no way for the framework, for example, to create widgets differently on OS X, Windows or Linux (these platforms have many different conventions)"
So web apps can't have reusable components now?
> "And having access to all the latest JS additions! ... that were copied from other languages you could develop desktop apps in, because JS is a terrible hack."
WebAssembly will take over for web apps eventually.
> "decades of UX knowledge thrown out of the window"
UX knowledge isn't toolkit dependent, UX knowledge is just as applicable on the web. You can make a dog of an app with any toolkit, native toolkits offer no guarantees for good UX, you can only hope that designers choose to follow best practices.
I'm not saying they can't, I'm saying they don't. You go ahead and try to fix that, create widget#772981 that still won't support typed-selection, or will break with large text or what not... I've seen too many of those, most of them bad, and none of them standard. So far, React is the only thing that even comes close to a sane model for a contender to UI development on the desktop, and it still mostly fails the accessibility checkbox.
> WebAssembly will take over for web apps eventually.
More eventualism. Do you have evidence for that? Do you even have evidence that it'll be better than what we have now in other languages if it does take over?
> UX knowledge isn't toolkit dependent
A lot of it is. You'd be surprised just how much UX is crammed into Qt Widgets for example. Years of experience making them more accessible, more usable, more consistent with the platform they're running on, etc.
> "I'm not saying they can't, I'm saying they don't."
They already do. Electron, Web Components, etc...
> "Do you even have evidence that it'll be better than what we have now in other languages if it does take over?"
Compare the performance of vanilla JS vs. asm.js. It is clear from what developers have stated that WebAssembly performance will exceed the performance of asm.js, the threading improvements alone should offer noticeable benefits.
> "Years of experience making them more accessible, more usable, more consistent with the platform they're running on, etc."
So what are we looking at to replicate that? A theme per platform? Some accessibility work? What else?
It should be noted that the web isn't starting from zero with UX either, we've already had 20+ years of refinements to the web user experience.
> They already do. Electron, Web Components, etc...
And nobody can agree on which to use. It's not standard if it's just "some set of components some people reuse". A far cry from standardization.
> Compare the performance of vanilla JS vs. asm.js.
That's not what I'm comparing. I'm comparing the performance of native toolkits vs web toolkits on asm.js. It pales in comparison, and the battery usage is through the roof. ymmv?
> So what are we looking at to replicate that?
1. Standardization of components (developer does not have to build their own scrolling system, context menu, etc)
2. Themability of components at the application level (developer can style components not to clash with the style of the application)
3. Themability of components at the platform level (user can style the application not to clash with the style of their own desktop)
4. Performance needs to shoot way, way up. Apps can't rely on a performant GPU, it's unreasonable to ask that of every device at this point in time. Some day maybe every device will come with their own high performance GPU, but eventualism cannot excuse bad coding practices and unnecessary layering.
The rest should follow. But I still don't see us getting any of those things, any time soon. These are not easy problems to solve.
An app based on React Native isn't a web app anymore. But it's not quite native either; it doesn't use the native button and list view controls on either iOS or Android.
> I'm not sure what you're actually comparing this workflow to.
He is comparing it to what they have now: XUL + CSS + JavaScript. Replacing it with HTML isn't as big a change as it sounds; their UI is already written in XML and rendered by Gecko. All they are doing is moving from a custom XML, like XAML (MS) or FXML (Java), to standard HTML rendered by Gecko.
Unless you know something I don't about Kunix specifically, I don't think he's talking about Mozilla development in particular, but rather development in general.
Unlike the original parent post, I don't think this announcement has anything to do with XUL. Thunderbird just doesn't have the userbase Mozilla expects it to at this point because, surprise, the people who want an email client are a small subset of the people who want a web browser.
<rant>
This is lost in a sea of replies now but I'm sure pissed off Mozilla is completely losing their root mantra of fighting for the free web. Persona, Thunderbird, two critical components of a "free web": free global authentication, free email client. I'm sure next mozfest the same suits as every year will talk about how they're so proud of "keeping the web open". What a crock of crap. Firefox isn't even that good of a browser anymore.
It is easy to forget that freedom and creativity are not a numbers game. There are benefits to a majority when a minority is free to create. An argument could be made that the majority consumption patterns are made possible by the minority creator patterns, hence it makes economic sense to fund the minority out of majority profits.
This is one of the reasons why iPads have stalled - pro developers can't make money, for well documented reasons. The web equivalent will be the starving of open communication, thought and creativity, leading to homogeneous noise as a poor substitute for ground-breaking content.
I don't see why it couldn't; they are doing it for Firefox. On the other hand, Mozilla has been on the path to retiring Thunderbird for some time; this is just the next step.
Mostly I think it is an effort vs. reward thing. Thunderbird doesn't have the user base, and doesn't have a revenue stream the way Firefox sells the search box. I wonder if anyone over there has thought about cleaning it up and selling it as a white-label email client.
> - You only have to support one rendering engine.
> - You have access to the latest Web Components/ES6/CSS3 features.
> - You can rely on native Node.js modules when needed.
So, developer convenience trumps user experience? Who cares about your battery run time, how hot your PC runs, the bandwidth overhead, the massive attack surface from all the useless components shipped, how badly the webapp integrates into your OS, as long as we can ship faster and faster?
Oh, I'm writing web apps myself, but Gods, am I hating myself for it. OS integration is somewhere between a nightmare and impossible – and yes, it is desirable, unless you're on like Gnome 3 –, the resource requirements are abysmal (400 MB for an app that consists of a single input form, a table, and a search field, really Chrome?), performance is actually pretty lacklustre (non-blocking is one thing, actual multithreading another!), the UI isn't actually all that nice if you want to use it, not just look at it (say goodbye to accessibility; hell, even just proper keyboard navigation is black magic for most frameworks); and if you ship the engine yourself, you're now responsible for orchestrating and shipping bi-weekly browser updates to your customers to make sure your browser engine stays patched.
Is all that bullshit really worth… whatever we're saving? (Our company is mainly saving in developer salaries, because we can force kids fresh out of school to work for minimum wage, instead of hiring experienced developers… we'll see how long this keeps working.)
I've also frequently seen complaints to the effect that Atom is much too slow, even on machines only a few years old. Even my 486 laptop back in 1997 could run a text editor with syntax highlighting (specifically, JPad for programming in Java). What are editors doing these days that needs so much CPU power?
This review (6 months ago) is based on an early version of Atom Editor (0.204.0), the latest stable (1.2.4) is way more mature and provides very nice packages [1].
It’s not really JavaScript, I’d argue it’s a different reason.
node-webkit apps are cheaper to develop. Far cheaper.
Everyone can write a webapp, a native application requires professional developers. The payroll looks completely different.
And then the devs who have only worked with node-webkit don’t know how much better they could have it. I know devs who refuse to use map, reduce and filter, because "it’s black magic and we always used for".
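For what it's worth, the "black magic" in question is three one-liners. A minimal sketch, with a made-up orders array:

    const orders = [
      { paid: true,  amount: 40 },
      { paid: false, amount: 25 },
      { paid: true,  amount: 10 },
    ];

    // "we always used for":
    let total = 0;
    for (let i = 0; i < orders.length; i++) {
      if (orders[i].paid) total += orders[i].amount;
    }

    // the same thing, declaratively:
    const total2 = orders
      .filter(o => o.paid)         // keep paid orders
      .map(o => o.amount)          // take their amounts
      .reduce((s, n) => s + n, 0); // sum them: 50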
The fact that you are being (edit: were) downvoted for a constructive opinion shows the unwarranted prejudice of the HN hivemind towards web technologies. HN discussions on this topic are effectively useless; any constructive truth-finding is drowned out.
edit: I'll take my downvotes with pleasure. The fact that I am able to use Atom or Nylas N1 or Nuclide on linux with 0 problems alone is enough to welcome proliferation of web tech on desktop.
Same here, I'm 26. The industry changed a lot once people realized you could make a shitload of money with the web/scripting stuff. Coding schools, open source communities and huge companies love HTML and JavaScript and Python and Ruby for their simplicity. Just take a bunch of people, tell them they can make a lot of money by learning some dead-simple languages, and there you go. It doesn't matter if they write the most disastrous code in the whole universe.
Look at Code Academy, for example. They add new programming languages and technologies every now and then, but basically it's always the same. Just like their audience. They won't add C to that list, because that wouldn't work for this average not-nerdy-enough-for-real-programming audience.
Yes! JS, Python and Ruby are just "dead simple" "scripting" languages that only serve to "make a shitload of money" and aren't "real programming".
Real programmers (like me) use C.
Seriously, no. Just no.
And if I wanted to prevent people from writing "the most disastrous code in the whole universe", teaching C instead of JS would be much, much lower in the list than teaching how to split code into modules/libraries, write testable code, etc.
> And if I wanted to prevent people from writing "the most disastrous code in the whole universe", teaching C instead of JS would be much, much lower in the list than teaching how to split code into modules/libraries, write testable code, etc.
And for me that would be much, much lower than teaching how to keep things simple. The whole modern web has a bad case of over-engineering; everything is modules of modules of modules, with so many tools attached that you get a headache trying to install a simple JavaScript library (what's wrong with a download link so I can drop the lib on my page? Nothing, that's what; see the sketch after this comment).
I'm all for modern approaches, but I can't help but feel many developers have lost touch with what writing clean code is. It's not making modules, libraries or even tests; it's making sure what you are doing is as simple as it can be, and efficient at it. Sadly, most modern web stacks fail at that: everything hidden in a mumbo jumbo of modules and dependencies no-one really needed or asked for, often created by people who never questioned the purpose of what they were doing, or whether the whole internet needed it (just because you are at Google and have found a neat way to deal with your huge JS stack doesn't mean the whole web needed it too, or that you needed to spend a whole lot of effort making people adopt it).
As much as you make fun of C, learning and writing C will teach you to keep your programs simple and efficient, because the language requires it. And that's coming from someone who started programming with Perl, then PHP, and only learned C later on.
"Make things simple, but not simpler" should be the cardinal rule of programming, not "modularize and test everything"; those are situational, while the former applies all the time.
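And to illustrate the "download link" workflow: it really was this simple. A hypothetical page pulling in a hypothetical library that exposes a global (the file and global names are made up):

    <!-- Drop the downloaded file next to your page: no package manager,
         no bundler, no build step. "somelib" is a stand-in name. -->
    <script src="somelib.min.js"></script>
    <script>
      somelib.init(); // the library is just a global now
    </script>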
I didn't make fun of C. I made fun of a comment posted by a C programmer, which is very different. I have absolutely nothing against C.
> C will teach you to keep you programs simple and efficient, because the language requires it
From what I've read, the OpenSSL codebase is definitely not simple, and I'm not sure it's efficient either; that would depend on how you define efficiency. So your assertion seems factually incorrect.
Other than that, I agree with your post. Simplicity is awesome. No point in using Angular to build a landing page if static HTML can do the job just as well. (Edit: let me take that back. There can be a point: the pleasure of experimenting and learning something new.)
You write code that can be broken up finely enough into discrete, stand-alone modules that can each be run through a series of tests.
For instance, you might create a model class, and then you know that model should have a name, shouldn't be able to be saved without a name, and that the name should be x-number of characters.
Then you can write a series of tests that make sure that, regardless of how the model implements that name, all your assumptions about what that name should look and act like don't change without throwing a red flag up to whoever is changing that model.
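A minimal sketch of that idea in plain Node.js; the User model and its exact rules are hypothetical, just standing in for "a model with a required, length-limited name":

    const assert = require('assert');

    // Hypothetical model: name is required and limited to 50 characters.
    class User {
      constructor(name) { this.name = name; }
      isValid() {
        return typeof this.name === 'string'
            && this.name.length > 0
            && this.name.length <= 50;
      }
      save() {
        if (!this.isValid()) throw new Error('cannot save: invalid name');
        return true; // persistence would happen here
      }
    }

    // The tests pin the assumptions down, however the model is implemented.
    assert.ok(new User('Ada').save());
    assert.throws(() => new User('').save());             // no empty names
    assert.throws(() => new User('x'.repeat(51)).save()); // length limit
    console.log('all name assumptions hold');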
Say you want to test how a program fares when strings are malformed, or the disk is full, etc. It can be hard to simulate this if the code refers to variables and results from functions from all over the place.
Making code into reusable modules is generally a good thing, but it's even better if you build them in a way that lets you test those chunks independently as well - that is testable code.
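One common way to get there is to pass the troublesome dependency in as a parameter, so a test can substitute a fake. A sketch in Node.js (writeReport and the fake fs object are made-up names):

    // The function takes its file system as a parameter instead of
    // reaching for require('fs') directly.
    function writeReport(fs, path, text) {
      try {
        fs.writeFileSync(path, text);
        return { ok: true };
      } catch (err) {
        return { ok: false, reason: err.code };
      }
    }

    // In production you'd pass the real module:
    //   writeReport(require('fs'), '/tmp/report.txt', 'hello');

    // In a test, a fake fs simulates ENOSPC ("disk full") on demand,
    // without actually filling a disk.
    const fullDisk = {
      writeFileSync() {
        const err = new Error('disk full');
        err.code = 'ENOSPC';
        throw err;
      }
    };
    console.log(writeReport(fullDisk, '/tmp/report.txt', 'hello'));
    // -> { ok: false, reason: 'ENOSPC' }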
Usually it's about keeping it granular enough that various sub-activities can be tested in isolation. If you have one function with 10000 LOCs, that's not really testable beyond "something doesn't work".
Then of course you need some way to automatically run these tests, but that's usually provided by IDEs or standard libraries these days.
The only good thing (I think) about being a generalist is that one gets used to adapting, but yeah... it gets boring sometimes. Although after a while it is interesting to see how we reinvent wheels and show them off under fancy new names; it's a pattern you see roughly every 10 years.
Sometimes it feels like everything was already invented in the 60's.
>Reading up on the capabilities of early mainframes is eye-opening if you (like me) grew up on Pentiums.
Close enough... I grew up on XTs, i286, i386, and so on :) All of them way weaker than my phone. (Not sure if that was what you were referring to.)
However, in terms of architecture, algorithms, programming language features... it feels like we haven't advanced much. Actually, the opposite: we are now encouraged not to be too clever in our programming, because processing power and memory are close to being commodities and clean code is more important (which is fine, but less fun).
Referring to architecture, algorithms, language features, etc., obviously. They were all better on many machines from the 1960s-1980s. The market kept rejecting anything that wasn't backward compatible with existing garbage or didn't have max performance per dollar. So, dumb CPU's, COBOL, and C it is. :)
Gave examples of what features old ones had in the essay below with the first link mentioning the specific systems for further inspection:
Note: B5500, System/38, and Ten15/FLEX are all especially worth considering. Two were basically HLL machines with type-safety and interface safety enforced at hardware level. System/38 was object-based with HW- and SW-level protections plus portable microcode layer.
I'd say Channel I/O counts as one that kicks modern systems' asses. Servers have been copying it bit by bit over the past ten years, maybe even exceeding it, but server OS's are inherently inferior in usage given that mainframe OS's are designed for I/O offloading at their core. A near interrupt-less architecture with acceleration engines makes many apps scream with performance. And it would only cost $10 per CPU on desktops, but would require Windows & Linux rewrites. (sighs)
The modern equivalent to System/38, IBM's POWER hardware running IBM i still has the same benefits. I actually really like the concept and the way the ILE runtime works, but it's too bad that much of the platform is stuck with legacy design decisions and hasn't been modernized.
I agree with just about everything you said. The System/38 design was one of the best cathedrals of old. Very forward-looking, thorough, consistent, and great for admins of the time. Still the only capability system bringing in revenue. Adapted pretty well to modern stuff, but the main OS's issues & stagnation hurt it as you said. I think the fact that it's pricey and proprietary kept the OSS innovation out, too.
However, the change to the AS/400 and POWER cost it one of its greatest features: hardware-enforced integrity at the object level. That's the feature that would still be giving hackers hell if it were widely deployed. The Intel iAPX 432 and i960MX had a similar property. Interestingly enough, IBM actually has secure CPU's they've prototyped and even sold to select customers. It would be great if they integrated one with IBM i at the microcode, compiler, and OS levels. That plus an optional interface for new customers without the legacy crap would be a huge differentiator that might give it new life.
Yes, especially the MULTICS operating system, which was so advanced and so many decades ahead of its time that we still borrow from its concepts. It had, for example, 16 security rings. Intel CPUs support only 4 rings, and Windows, for example, uses only 2 (for kernel mode and user mode; hypervisor mode uses another ring in recent iterations).
Initial software-based Multics had 64 rings, and as I recall only 8 in the hardware versions. No more than 4 were needed in practice: 0 for root, 1 for mail (e.g. you could delete mail you'd sent to other people from their mailboxes if they'd not read it yet), 4 for normal users, and 5 for some stuff that anyone was allowed to use but that was restricted from touching anything deeper in the system.
AMD dropped rings in their 64 bit architecture which Intel was forced to adopt, so they're becoming a historical curiosity.
> AMD dropped rings in their 64 bit architecture which Intel was forced to adopt, so they're becoming a historical curiosity.
Not quite. Hypervisors are operating in Ring -1, SMM is equivalent to another ring above that, and I can't find anything about AMD64 dropping Ring 1/2? Ring 0/3 at least are still in use.
I tried looking at the latest 4.2 kernel tree, but the assembler/C code that sets up and deals with syscalls has been heavily refactored, so I'm not entirely certain it's ring 0 and ring 3 for both 32-bit and 64-bit - but I think so? (From a quick glance, no sections stand out as calling out ring 3 explicitly when talking about returning to user-land; granted, I didn't do any searching over the code.)
I submit that "supervisor/user" isn't an implementation of "rings", plural (and it's got to predate Multics by a lot, but I can't quickly prove that), and that SMM is something entirely different. Hypervisors and there use really aren't comparable to Multics Rings.
A couple of minutes with Google only found hints that confirm my memory WRT AMD64 and rings, and/or Intel not copying a segmentation feature added to later versions of AMD's chips.
So this generation is rewriting everything that was made and working in the 90s. Was everything written in the 90s also a rehash of stuff from the 70s/80s? Just curious, what was the equivalent of HTML/CSS/JS in the 80s?
In a way, yes. If you look at OS interfaces from the '80s and compare them with modern ones, you'll see a lot of cpu power is now spent on eye-candy but functionally they're not terribly different. Except they're all built on C++, whereas before they were in C or lower-level languages. OO was the HTML/CSS/JS of the '90s.
Could all the large software applications of today have been created without OOP, just on functional programming paradigms? If OOP was beneficial 10 years on, maybe the cross-platform nature of HTML/CSS/JS will also be vital to future applications.
> maybe the cross-platform nature of HTML/CSS/JS will also be vital to future applications.
There are already cross-platform toolkits that can deliver everything a browser engine can, faster and safely.
Face it, "web technologies" are not winning because of any massive technological advancement, just like C++ wasn't this huge advancement over C. They just managed to achieve enough critical mass to make everything else look less popular. In the '90s, OOP did that through academia and commercial push (in what was a much smaller tech sector); html/css/js did it through the accidental monopoly that is the web browser. The end result is basically the same.
As with all design decisions, it's a trade off. The main benefit of web apps is found in their cross-platform nature. If this is desirable then you may choose to sacrifice a little performance to get that.
To give an analogy, it's like programming languages. It's possible to write very fast code with assembly languages, yet their portability to other architectures is practically non-existent. Part of the reason higher level languages like C/C++ are used is because they are much more portable.
LISP, a semi-functional language, was originally invented to solve the biggest, hardest problems. Scheme, Common LISP, Ocaml/ML, and Haskell have all been used in large systems with good performance. Entire OS's were written in LISP's with some benefits that modern machines still don't have:
Note: And some that are laughably obvious and available today lol.
So, yes, what people use today is an accident of history. That includes COBOL, C, C++, OOP languages, HTML/CSS/JS, HTTP-centric everything, and especially whatever crap is being built on them next.
If you're curious, here's the history I put together on the C language and UNIX in numbered-list form. You'll see how IT evolution often works in practice to give us the lowest common denominator. And afterwards people swear it was the product of good design and great achievement. (rolls eyes)
The most remarkable part of "web apps" is probably what they do not improve on:
Smalltalk had (has) messaging and decent object orientation (and you get that especially in the latest JS), but Smalltalk never had one standard VM implementation: different versions had different image formats (RAM saved to disk, source code and byte-code) and different VMs. JavaScript has a common source format, but no common VM/image format.
Office suites had rich documents with smart(ish) widgets, but no security: a macro in Excel had access to all your spreadsheet data. Web apps don't really have any good encapsulation either, so we'll likely repeat the macro-virus era with a web-virus era. (I'm not sure if we already are; there have certainly been a few self-replicating ones, e.g. ones that spread via Facebook updates. Not sure if they generally live in the phone apps or the various web apps. Probably both.)
We already had the future of web applications within reach years ago - but apparently no-one cared: http://lively-kernel.org/
> what was the equivalent of HTML/CSS/JS in the 80s?
Depends on what you're looking at as an equivalent. Browsers didn't exist, but markup languages have been around since the '70s (first mentioned in the late '60s)[1].
Maybe an equivalent for expressing GUIs as resources? Yeah, we called that RAD back in the day, and those tools were quite popular in some circles. Does anybody remember HyperCard? Or even the first versions of Visual Basic?
I still get mad when people talk badly about Fortran and replace the code with some slow solution. The power of CPUs and cheap RAM has caused the world to favor the hand-holding of developers.
No, what almost killed them was the transition from Netscape Navigator 4.0 to Mozilla Suite 1.0; that's where the massive rewrite happened. Looking at that first link, it doesn't point out where Firefox development started: mid-2002. By that point Netscape had already lost almost all of its market share.
> You and me could find crazy that people would openly choose to use IDEs built on HTML/CSS/JS, but that's what a lot of young folks are doing (Atom, VSCode etc etc). That's their world, that's what they like. An entire generation now exists, who learnt to code from web scripting rather than C or BASIC. They have taken over. It's just how it is.
Dude, you're likely typing on the "wrong" keyboard layout and using a "wrong" calendar... convention trumps correctness pretty much all the time. What matters is that enough people are doing it to make it "the way". I fully expect that we will eventually see a Javascript OS, because "it's so much easier to maintain".
> you're likely typing on the "wrong" keyboard layout and use a "wrong" calendar
When someone shows me/I find a better way to meet a requirement I will adopt it (static site generators, Go routines, ...). I won't doggedly stick to the first thing I learned; I don't expect the world to adapt to suit me.
It's still a scripting language, and you still can't write an OS in it. There's no low-level access to the hardware. You could of course write all the low level parts in C, but then you haven't written an OS in JavaScript.
I think the only bit that really needs to be in assembly is the context switch, but even that can be embedded in the C (which you might consider cheating, but isn't even an option in JavaScript).
> "You and me could find crazy that people would openly choose to use IDEs built on HTML/CSS/JS"
Why is that crazy? Makes perfect sense to me, especially as the performance of the programming language element improves. Ultimately these IDEs are sure to make use of WebAssembly, which should take away the remaining performance concerns.
It's crazy because, apart from performance concerns (it looks like we love to make our computers slower and slower every 10 years), you're just discarding all the features of the containing desktop OS. Formatted copypasting, usability features, network features, etc etc... you'll have to reimplement them all, solving all the problems that systems developers solved 10 or 20 years ago. The OS will become little more than a very expensive pixel pipe. But that's what people like, because C++ is hard, native widgets are hard to customise, and everyone loves designing interfaces, so that's where we're going.
If anyone ever starts making javascript-optimised CPUs and GPUs, they're going to make billions. At the moment we only have micropython, but who knows...
Which network features do you need for a coder's editor? In the case of Atom/VS Code, I can't think of a single one that a browser engine doesn't already provide. You're not going to need things like AD integration, etc...
> "The OS will become little more than a very expensive pixel pipe."
If that's what people want, then so be it. I see no problem with simplifying the OS, most of them are already too bloated.
> " But that's what people like, because C++ is hard, native widgets are hard to customise, and everyone loves designing interfaces, so that's where we're going."
The main advantage is the cross platform compatibility. If certain OS vendors didn't make it hard to build apps that utilised a common base then there'd be much less drive to produce web apps. It has very little to do with the complexity of C++.
> "I anyone ever starts making javascript-optimised CPUs and GPUs, they're going to make billions."
As I've said a number of times now, the final target with web apps won't be JS, it'll be WebAssembly.
Not to mention the death of accessibility features - those HTML apps are effectively unusable for disabled users because they don't implement the accessibility features available in native UIs.
Sure, but with the end benefit being a universal UI toolkit.
Also, 'expensively' is debatable, I'm sure it'd be possible to have reusable accessibility components, wouldn't necessarily have to reinvent the wheel for each new web app.
So we finally have a free choice of OS.
It's not the full story to complain about people re-solving the problems that the OS side had already solved decades ago. The big new thing is that they are doing so in a platform-agnostic way: I have nine different virtual machines installed on four different operating systems, and the same code base can run on all of them smoothly.
Except that's not true: WebKit, Gecko, Trident... they are all different "OSs" you're writing for; you just wave them away by shipping the OS with the application. You could do the same by shipping a virtualised image running a stripped-down Linux configured to run only one application. One of these solutions is now socially acceptable, but both manage to completely discard everything the desktop OS achieved in 30 years.
Not really; you still need a browser that implements all this stuff. The only difference is open tech vs proprietary.
Open technologies are obviously a good thing. But writing software as complex as Photoshop, for instance, with the exact same features in HTML/CSS and JavaScript isn't going to fly and be usable for someone who has to work in it 10 hours a day. The performance issues will be significant. It's not a big deal for a text editor, though I still can't open a 5 MB log file in Atom for some reason. No problem with Sublime Text 2 or Vim. Why is that?
My point wasn't that they are a silver bullet. My point was simply to say that while something is obviously lost (performance, native integration, etc...), something else is gained. And that is massive portability.
To be fair, what I have heard (I am not a Mozilla insider so I can't say personally) is that donations go to the Mozilla Foundation and the commercial agreements give money to Mozilla Corporation. So you aren't donating to the same place.
From the same source, apparently the Mozilla Foundation does a lot of important work that the Mozilla Corporation can't/won't do but most of the development is done by the Mozilla Corporation currently (possibly due to not enough donations to the Foundation?).
I haven't really verified that, but thought it was worth pointing out.
When you take an app that's been worked on for 15 or so years and then replace its UI, you're going to lose a TON of features. They'll slowly reintroduce some of the most popular features (the hamburger menu will be priority #1!) but there will be a TON that they will not reintroduce. Why?
Hmm, that's how Firefox (then Phoenix) started in the first place in comparison to the bloated Mozilla Application Suite.
>So when the crappy HTML Firefox shows up, with way less features than the Firefox of today, remember that this (Thunderbird) was one of the things given up to have it.
Please recall that Phoenix/Firebird/Firefox was born as a lightweight, fast alternative to the bloat of Netscape Communicator/Mozilla Suite. Dropping features from an old bloated application is what launched Firefox to fame in the first place.
> In a few years the all new HTML Firefox will come out. My bet is that it will suck. It will lack a TON of features that the existing Firefox has, but hey, it's all HTML!
> As much as people complain about XUL not looking native, wait for HTML Firefox, it will take them forever to get where XUL was years ago.
And isn't Mozilla/Netscape literally the poster child for how to destroy your product with a rewrite? They only regained their market share because Microsoft let IE stagnate to a ridiculous degree.
> But now they know that only 10 million or even 1 million people use that feature, and they're only interested in 100 million user features! If Google Chrome doesn't have it, it must not be important!
I've pretty much only stuck with Firefox because of its extensions and the quirky little features it has. The more they focus on aping Chrome, the more they decrease the friction for switching away to it.
Vivaldi seems neat: it's essentially Chromium with an HTML5 GUI, built by the old Opera folks and intended to restore the best features of old Opera, with a heavy focus on customization.
My hope is that Mozilla will take the direction of using Servo to build something Vivaldi-like (instead of everybody running with WebKit/Blink) and start to restore the old APIs and frameworks from old XUL Firefox that all the best old addons relied on (things like NoScript, Session Manager, Vimperator, uBlock, Tab Mix Plus, etc...) instead of just sticking with a black-box rendering engine and settling for Chrome addon API parity.
Have you tried using Vivaldi? It's awfully slow to respond to user interaction because it's all HTML. Say what you want about XUL, but for something aiming to be like HTML it never felt slow. If Vivaldi's non-existent responsiveness is what the new Firefox will be like, then they should seriously consider rewriting the whole thing like a game (in OpenGL/Vulkan) instead. Oh, and XUL was optimized for memory efficiency, which will have to be redone for the HTML-only interface, but at least that work can benefit the whole web.
Vivaldi is much faster than Firefox on both Linux and Windows, where I'm using it. Of course it's still alpha software, but the UI is blazing fast compared to FF.
It's substantially slower than Chromium and Firefox for me on Linux. Both Chromium and Firefox are very fast in comparison on my machines. I'm glad it's faster for you, I really am.
I do believe Otter is doing a better job of restoring Opera's past glory, feature-wise. Vivaldi, even with its HTML GUI, feels like just another Opera/Chrome/Chromium derivative.
> " When you take an app that's been worked on for 15 or so years and then replace it's UI you're going to lose a TON of features. "
What makes you think they'll drop XUL as soon as the first release of the HTML-based UI? It's pretty obvious they'd want to support both until the HTML UI was close to feature complete. You're finding problems where there aren't any.
Have you ever met a team that wants to support two things instead of one? :) No one wants to support the old stuff, especially when no one's paying for it. If they make staying with the old too convenient, people won't convert to the crappy new, and their adoption graphs will suck! Can't have cannibalism!
I spent a good deal of time dealing with XUL; heck, I even helped with the French community. Designing a UI was a ton more robust with it than with HTML, but XULRunner was (and still is, from what I saw) not the most pleasant beast to run.
I'm not advocating a rewrite for the sake of novelty, but they're part of a product life cycle.
You mentioned that it would mean dropping some features. I think that's a good thing; an opportunity to cut the fat and make sure only what is proven and useful makes its way back into Firefox.
You seem to think volunteers won't pick up the maintenance of Thunderbird; if so then why should Mozilla care about it?
Now, I don't think HTML, CSS and JS are the best technologies. They each suck in their own way. But they are winning, and Mozilla is merely embracing that.
Now that I actually have a career and money I have no problem giving back. I'm glad I can donate to Ubuntu and Wikipedia nowadays. I am grateful for everything Firefox has given me for well over 10 years...
I really don't know what XUL is (intermediate language between HTML and FF UI?), but I guess I do feel sorry for people who have been using Thunderbird. I hope it's significant enough a property that people will want to continue, write it in Rust?
XUL (or rather XULRunner) is a cross-platform UI toolkit, basically a Javascript+XML runtime built with C++. It's what Mozilla programs are built with today, abstracting out a lot of OS-specific details. It's unlikely to ever be rewritten in any language, and pretty much failed to get any traction outside of the Mozilla ecosystem.
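To make that concrete, here's a trivial XUL sketch (the window and button are hypothetical examples, but the namespace URI is the real one): XML tags describe native-looking widgets and script drives behavior, much like HTML plus JS.

    <?xml version="1.0"?>
    <!-- A minimal XUL window: markup for native-looking widgets. -->
    <window title="Hello"
            xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
      <button label="Click me" oncommand="alert('clicked');"/>
    </window>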
Maybe they decided there's too much technical debt and they can't make changes to the browser as fast as they wish they could in the current framework?
Not 'broken' in a traditional sense but still a valid reason to rewrite.
Yeah, but there is a trade-off involved. If you spend 3 years rewriting your toolkit, then the time you will eventually save on further changes has to offset those 3 years, and that's very hard for most projects.
Joel was right and wrong.. Firefox was a huge success (or at least in my mind it was) but some might say the Netscape company was trashed during the process (I don't know if I agree).
Starting over isn't always a bad thing but I agree that is also highly overrated.
It was a failure, as he said, because the goal of the rewrite was for the company to do better. They failed at that. More work was done, including by Mozilla. That worked, but it was too bloated and all-in-one. Eventually, someone trimmed it down to make Firefox and added the customization features. That succeeded.
So, there was a failure, years of struggling changes, several new audiences, and another big change before it made it. Not seeing it as a counterexample as much as good luck for a project that seemed doomed to failure.
I don't disagree that the rewrite was a bad idea; my point is rather that the company (Netscape) was on a downward spiral regardless (the rewrite didn't help, but I don't think it was the sole reason).
Definitely wasn't the sole reason; their server software generally sucked, e.g. a lot of people happily switched to Apache once it was perceived as being sufficiently trustworthy. I could see the latter happening in the 1995-96 period.
One thing I've read about the rewrite is that Netscape acquihired? (back before that was a word) a failed company and put its failed managers in charge, who I guess were good at what really counts in the short term (looking out for themselves). An obvious corollary to "don't do a rewrite" is "if you're going to do one, employ really good people to do it".
It's funny (and ironic to me, given the topic) that you mention Netscape and Apache. One of my first jobs out of college was rewriting an old LiveWire application (yes... the original server-side JavaScript) as a JSP/Tomcat application.
I forgot LiveWire existed. One of the few that finally faded from memory. Of the old, commercial ones, AOLserver plus a Tcl web framework is still getting updated. Opera is getting redone. The product with the coolest name is still scraping by, per the .cfm pages I see. Some ancient tech is still around, but most of it is going bye-bye.
Only thing left is maybe to redo Mosaic in Ruby or Java to see if it's technically feasible to slow its rendering down any further.
Was it really HTML+JS? Because it was fast, many years back (used it just as an RSS reader though). Also, they shipped it as a standalone product[1], but I don't think it ever got updated after that.
From memory, it was C++, much like the rest of the Opera UI. It used Quick — Opera's in-house cross-platform UI toolkit, and it died (along with the rest of the old Opera) because Quick was heavily entwined with Presto.
It's all open source, right? You are not the first person I've seen complaining about losing XUL, so why is no one forking it instead of just making sarcastic comments? I thought that was the whole point of open source: when the original maintainer loses interest, the community takes over. You even say in another comment that the effort is not very big (I don't know, I've never seen the code), so why not?
But hey, donate to Mozilla! $5, $15, $25, anything helps.
They're starving over at Mozilla. They took in a mere $323 million in 2014. How can you expect them to properly fund multiple projects on such a pittance?
Which would be a ton of cash for most applications, but this is a web browser, and they are in direct competition with not just one but several of the biggest companies in the world. Here are their 2014 revenues: $86 billion (Microsoft), $182 billion (Apple), $66 billion (Google). Budgets for the other browsers aren't clearly defined, but we do know it's a big focus for all the manufacturers, and they each have major incentives to keep users inside their own ecosystems.
Even just keeping up with all the standards, and contributing to them, is an enormous undertaking at this point in the web's evolution. Let alone UI design, devtools, porting to mobile OSs, etc.
I'm fairly sure the revenue attributable to Firefox dwarfs what any of the bigger competitors are spending on their browsers, so it's not really for lack of resources if Firefox falls behind. But it's the cash cow that subsidizes pretty much everything Mozilla does, some of which is great and some dubious. Everyone can make their own case for which projects are great and which are iffy.
Personally, I would think an e-mail client is the perfect complement to browser development, and something that could/should be another significant source of revenue instead of a burden.
> I'm fairly sure the revenue attributable to Firefox dwarfs what any of the bigger competitors are spending on their browsers
I suspect you would be wrong at least for the cases of Google and Microsoft. I don't have a good feel for how many people Apple has working on Safari and WebKit, but both the Chrome and IE teams are significantly bigger than the Firefox team from what I can tell, and probably more expensive unless you think Google and Microsoft pay developers less than Mozilla does. Let me ask you this: how many people do you think Mozilla, Microsoft, and Google each have working on their respective browsers? Ballpark figures, of course.
Also, estimates are that the money Google spends annually just _advertising_ Chrome in the last few years (TV ad campaigns with Justin Bieber and Lady Gaga, worldwide ads on public transit, etc) is comparable to the entire annual revenue attributable to Firefox.
Context. If I'm in a bright room (office), and the last page I visited was black on white (the default/norm on the web), and the page I view after this one will be black on white (most likely), then viewing this one in the middle in white on black is jarring, annoying, inconvenient, distracting, hard to read (because my eyes have to adjust) etc.
If one is in a dark room, and has a custom stylesheet to display all pages in white on black, or if the web were predominantly white on black, then sure it's OK.
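For anyone curious, such a user stylesheet can be tiny. A minimal sketch, assuming a browser that supports user styles (Firefox, for instance, reads userContent.css from the profile's chrome directory); the blunt universal selector is deliberate:

    /* Force white-on-black everywhere, overriding page styles. */
    * {
      background-color: black !important;
      color: white !important;
    }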
> Git is a mild pejorative with origins in British English for a silly, incompetent, stupid, annoying, senile, elderly or childish person. It is usually an insult, more severe than twit or idiot but less severe than wanker, arsehole or twat.[0]
Are you sure? The writing implies that it was written recently. (Does the emacs release team consider "any day now" == a year in "There’s a new Emacs minor release due out any day now,"?)
Either way, the article does not mention its publishing date anywhere, which is rather unfortunate for a medium like the internet, where text is forever.
> I'm guessing a lot of new stuff has been added since then.
The article implies it shouldn't have been:
> At the time of this writing Emacs 24.4 is in feature freeze; no major changes will get in, but the list of changes you see below is not set in stone – but it almost never changes much.
I think it's interesting how he never explicitly says that he forked Gosling Emacs (I think that's what he did, but if not please correct me!).
"Therefore, when I wrote my second implementation of Emacs, I followed the same kind of design...Now, this [GNU Emacs] was not the first Emacs that was written in C and ran on Unix. The first was written by James Gosling, ... I discovered that Gosling's Emacs did not have a real Lisp. It had a programming language that was known as ‘mocklisp’, which looks syntactically like Lisp, but didn't have the data structures of Lisp... I concluded I couldn't use it [it here means mocklisp, but can be confused to mean Gosling Emacs] and had to replace it all, the first step of which was to write an actual Lisp interpreter. I gradually adapted every part of the editor based on real Lisp data structures, rather than ad hoc data structures, making the data structures of the internals of the editor exposable and manipulable by the user's Lisp programs...This second Emacs program was ‘free software’ in the modern sense of the term"
Yes, he did. At that time, or a bit later, I worked for UniPress; the owners knew that taking any action about that beyond a polite request would ultimately be harmful. But it was seriously reckless of RMS to put his GNU effort in jeopardy.
Mike (the friend at UniPress who RMS mentioned) and I were wandering around an SF convention and ran into RMS. Mike said, "Richard, I heard a rumor about your house being burnt down. Is that true?" Richard immediately shot back "Yes, but where you work, I'd have thought you'd have heard about it in advance!"
We all had a good laugh. RMS is a funny guy and quick on his feet!