RMS is likely to be judged very kindly by future historians for fighting the good fight. But let's hope that it's not because his worst fears have come to fruition.
Oh yeah, he's one of those guys that nobody wants to talk to when he's alive and in everyone's face, but will be widely praised after he's safely dead.
I agree with a lot of RMS' views, but I would be very interested to know what he thinks about computer games (PC, and consoles). Can anyone shed any light on this?
The way I see it, games are much like an interactive film. It's an experience someone has personally created for us. It's more of an art than a computer program. I'm aware of open source games, but I wonder how an open source model would affect a game like Battlefield 3? Cheating would surely be rampant. I certainly do not expect terabytes of data when I watch a film so I can make the changes I want, nor do I expect sheet music when I buy an album. Games are as much of an art as music & films.
What about consoles? They are viciously protected from custom modifications. I don't agree with this, I believe it's our hardware and we should be allowed to do as we wish (hey Microsoft, blowing fuses in my NVRAM without telling me is really not cool).
Essentially, he thinks that the production quality associated with studio games is not worth the sacrifice in user freedom. Your point about games being much like film is an interesting one. Elsewhere in the AMA, he says that he doesn't consider proprietary software that controls, eg, a microwave, to be unethical as it's functionally indistinct from hardware. Now, with some games (cough BF3) being functionally indistinct from movies, it would be interesting to know whether he believes those, too, are unethical. My guess is yes, as there are many things like DRM and data collection going on behind the scenes that the FSF is opposed to.
Not at all; as it is, anybody can modify the game binaries in the same way they would modify the source code (albeit with more difficulty).
Cheating is prevented by having a central server validate all client actions for consistency, or, in games without a simulation server (StarCraft II), by having clients compare a hash of the game state and disconnect if the hashes differ.
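For what it's worth, the lockstep desync check can be sketched in a few lines (all names here are made up; a real engine hashes far more state than this, deterministically, every simulation tick):

```python
import hashlib


def game_state_hash(state: dict) -> str:
    """Hash a deterministic, sorted serialization of the game state."""
    serialized = repr(sorted(state.items())).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()


def check_sync(client_states: list[dict]) -> bool:
    """Each peer hashes its local simulation; any mismatch means some
    client's state diverged (a bug or a cheat) and the game ends."""
    hashes = {game_state_hash(s) for s in client_states}
    return len(hashes) == 1


honest = {"gold": 50, "units": 12}
cheater = {"gold": 9999, "units": 12}
print(check_sync([honest, honest.copy()]))  # True
print(check_sync([honest, cheater]))        # False
```

Note this only detects that the simulations diverged; it can't tell you which client cheated, which is why such games just drop the offending match.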
Client-side cheating (wallhacks, aimbots) is not preventable without a locked-down client, open source or not.
In opensource code, you would have decentralized servers run by various groups that may or may not accept "cheating" (cheating is just a different set of game rules). You could have peers validate the game rule set for each of their peers as well so that everyone playing on servers is agreeing to the "cheats".
> Client-side cheating (wallhacks, aimbots) is not preventable without a locked-down client, open source or not.
You can easily prevent wallhacks; just don't tell clients information they shouldn't display to the user, such as the positions of players the user can't currently see.
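As a toy illustration of that idea (hypothetical names, with a 1-D "wall" check standing in for the real occlusion/PVS tests engines use), the server-side filter is just:

```python
def visible_players(viewer, others, walls):
    """Return only the players not separated from viewer by a wall.
    The server sends each client this filtered list, so a modified
    client has no hidden positions to reveal."""
    result = []
    for other in others:
        lo, hi = sorted((viewer["x"], other["x"]))
        blocked = any(lo < w < hi for w in walls)
        if not blocked:
            result.append(other)
    return result


viewer = {"name": "a", "x": 0}
others = [{"name": "b", "x": 5}, {"name": "c", "x": 20}]
walls = [10]  # a wall at x=10 hides player c from player a
print([p["name"] for p in visible_players(viewer, others, walls)])  # ['b']
```

The trade-off the replies below get into is exactly this: the filtering is cheap, but the moment a hidden player becomes visible the client has to be told about them with zero perceptible delay.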
I think this is mostly done for performance. A character hidden behind a wall is likely to become visible to the player very soon.
The client then gets a chance to pre-load any necessary resources into memory and download from the server any information it might need about the other character.
If you didn't do this then, for example, the page containing the other player's character model might have been swapped to disk, and the delay in bringing it back in might cause a lag spike that puts one player at a disadvantage.
Also, you may be able to hear the person behind the wall (even though it may only be a very quiet sound from the other character walking slowly). The client application needs a location in order to play the sound correctly, and this information could be used to make a wallhack.
The other alternative would be to prepare all sound on the server and stream it to the client, but this would put extra stress on the server and demand more bandwidth.
Hmm...I don't think you would need that much of a performance drop.
If the data was received encrypted, you could have a relatively small decryption key sent when necessary, thus solving the "prefetch from server" problem.
If the model files were broken up into chunks, XORed with each other in various ways, and stored redundantly, I could imagine a setup with a combinatorially large number of ways to fetch a set of blocks that reconstruct a given model. If you prefetch a few red herrings as well, it would be infeasible to determine which model you are loading based only on the server's commands about which blocks to load. Then, when the time comes, the server sends a very short command telling you which blocks to use and how to reconstruct the model. Hmm... now that I think about it, you'd probably need a large overhead in both directions to make it difficult to just match the patterns of which blocks you used.
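A stripped-down sketch of that masking idea (my own toy construction, essentially a one-time pad over asset blocks, not something any real engine does):

```python
import secrets

model = b"model-geometry-bytes"        # stand-in for a 3D asset
pad = secrets.token_bytes(len(model))  # random block, prefetched early
masked = bytes(a ^ b for a, b in zip(model, pad))  # also prefetched

# Neither prefetched block alone reveals the asset; the late "recipe"
# message just names which two blocks to XOR back together.
recovered = bytes(a ^ b for a, b in zip(masked, pad))
assert recovered == model
```

As noted above, the hard part isn't the masking itself but hiding the access pattern: which blocks you fetched and when still leaks information unless you pad it with lots of decoy traffic.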
Most 3D models in games are loaded whole rather than in chunks (this allows them to be smaller and rendered more efficiently), and there may be three or more separate models at different levels of detail. Some of these models are very detailed in recent games, and preloading a lot of them as red herrings would eat graphics memory and kill performance.
Online games always walk a tightrope between performance and security. The currently popular model seems to be to optimise the game for performance but use a second program to verify that the client code has not been modified.
Of course you then have to make sure the anti-cheat is not modified itself. I have to say though that cheating is not a big apparent problem in most online games that I have played. Most cheating problems are in older games that have not been updated in some time.
You are correct although there are a myriad of reasons why this doesn't work in practice. Perhaps I should have said augmented overlays, task queuing (macros) and so on instead -- they better illustrate the point.
But open source would make cheating far more "accessible". It would be a trivial job to modify the opacity of walls, for example - I imagine such a feat would be substantially harder without access to the source code (though of course still possible).
It's easier to trust a "cheating hack" if it comes as a source code patch or the source code is readily available. If I was so inclined to cheat, I certainly would not trust a strange patch/executable from some shadowy figure I knew nothing about.
That's like worrying that sharper knives will make stabbing people easier. It's true, but stabbing is already easy enough with the knives we have that the difficulty of the act isn't really stopping anyone. Similarly, games get hacked, open-source or not.
Ignoring the security through obscurity mess, the more salient point is that there are classes of problems that cannot be solved at present without a locked down client. If solving such a problem is the entire function of the device I don't know that I'm against this.
Technology will eventually solve such problems (thin clients or fully homomorphic encryption) but these have serious issues of their own.
A similar case is distributed computing projects like SETI@home, which (last I knew) was explicitly NOT open sourced because allowing others to see the code would undermine trust in the computations done remotely.
I haven't seen RMS address that either; I think his views would be interesting.
Isn't the natural solution to something like that just to do some redundant computation? I think that's how people deal with similar problems in services like Mechanical Turk. It would, at worst, make the computation three or four times less efficient but would limit both active meddling and errors. Also, relying on a closed source for security is a bad policy anyhow.
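A minimal sketch of that redundancy scheme (the names are made up, but real volunteer-computing platforms like BOINC validate results with similar quorum checks):

```python
from collections import Counter


def run_redundant(work_unit, workers, replicas=3):
    """Hand the same work unit to several untrusted workers and
    accept the majority answer; with no majority, reissue the unit."""
    results = [w(work_unit) for w in workers[:replicas]]
    answer, votes = Counter(results).most_common(1)[0]
    if votes <= replicas // 2:
        raise ValueError("no majority; reissue the work unit")
    return answer


honest = lambda x: x * x   # a worker that computes correctly
cheater = lambda x: 0      # a worker that fakes its result
print(run_redundant(7, [honest, cheater, honest]))  # 49
```

This only defends against uncoordinated cheaters; if a majority of the replicas collude, the vote accepts the wrong answer, which is why real projects also blacklist hosts with bad track records.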
I really respect many of Stallman's views but I still have trouble reconciling GPL'd software and sustainable commercial innovation. Doesn't the GPL level the playing field to such an extent that the competition can undercut your business model with trivial ease? I would hate to put a bunch of work into something that I planned on making money from only to have perfect copycats overnight. Selling support only scales well for some software.
From what I understood (from one of his talks), Stallman doesn't care about your business model. As simple as that -- if you can't figure out how to make money while writing free software, that's your problem. The software may eventually be written by those who can figure out the business model, by those hired to write such software, or by those who don't care about making money.
As for how this affects commercial innovation, I don't know. [Added:] I guess, the point is that human rights (according to RMS, proprietary software restricts freedom) are more important than innovation.
I meant "more important than any potential innovation gained from writing proprietary software". That is, if there's no more innovation in proprietary software compared to free software (like you said, with GNU/Linux) -- we're good, but if we're missing the innovation from such software, so be it, we're also good.
His views on free software come from a very different idealized reality than what actually exists. From the GNU Manifesto:
In the long run, making programs free is a step toward the postscarcity world, where nobody will have to work very hard just to make a living. People will be free to devote themselves to activities that are fun, such as programming, after spending the necessary ten hours a week on required tasks such as legislation, family counseling, robot repair and asteroid prospecting. There will be no need to be able to make a living from programming.
Which has some truth in it. Our world is already struggling to keep people employed despite continuous productivity gains. As developers, we know the next 20 years will probably be low risk for us, but many professions are endangered. Who will be a truck or taxi driver once the Google car works fine? Who will need secretaries once Siri has evolved? New technologies are just not creating that many jobs.
Note that I am not disagreeing with you, I just suppose that RMS may not be completely wrong.
In 1995, Jeremy Rifkin talked about the End of Work, saying we should plan for how to structure society as it was coming. He was ignored, and now we seem to be living with the results of not planning for it.
The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era is a non-fiction book by American economist Jeremy Rifkin, published in 1995 by Putnam Publishing Group.[1]
In 1995, Rifkin contended that worldwide unemployment would increase as information technology eliminates tens of millions of jobs in the manufacturing, agricultural and service sectors. He traced the devastating impact of automation on blue-collar, retail and wholesale employees. While a small elite of corporate managers and knowledge workers reap the benefits of the high-tech world economy, the American middle class continues to shrink and the workplace becomes ever more stressful.
As the market economy and public sector decline, Rifkin predicted the growth of a third sector—voluntary and community-based service organizations—that will create new jobs with government support to rebuild decaying neighborhoods and provide social services. To finance this enterprise, he advocated scaling down the military budget, enacting a value added tax on nonessential goods and services and redirecting federal and state funds to provide a "social wage" in lieu of welfare payments to third-sector workers.[1]
It's shocking that only a minority of people in technology know about post-scarcity concepts. Un-employing others is our main goal, and when we cannot sustain an income from supporting an application, technology has done exactly that to us. Support has either been crowd-sourced, or enough capable people can add enhancements themselves. We should be the ones most aware of these consequences, and we should simply move on to new technologies and projects instead of artificially limiting our products.
It's not the technology workers that are getting hurt the worst by this change, though. It's all the other people in businesses affected by and attached to the tech industry, who don't have the flexibility to pivot quickly or at all.
Moving on to new technologies just puts more people out of work with nothing to replace their old jobs.
Additionally, technology is neutral regarding human nature. As we grow older, we are less comfortable with shifting around and there are attachments that make it more difficult.
Sure, it's great to be rootless and fancy-free when you're in your 20s, but when you hit 40 or 50, you have to start thinking about how it's all going to end.
So should we just stop developing technology because all those jobs have to be preserved?
You seem to take jobs as something that is inherent to the existence of the individual. The book recommended by the parent (and some others) question that position and explore alternatives and the transition from a society rooted in that belief and one that doesn't require the existence of jobs.
Technology is not about putting a few people out of "work", but about freeing people from it altogether.
Jobs are something inherent to the existence of individuals until we somehow trick the political class into implementing basic income, which is absolutely not happening any time soon. All this talk of "post-scarcity" tomorrow ignores the fact that no jobs causes a lot of problems for people today.
The idea that we no longer need as much labor to sustain ourselves is nothing new. But it appears that we're completely willing to convert the surplus into improvements in quality of living. What we at one point might consider a creature comfort, we gradually elevate to the level of necessity. Cars, air conditioning, 2 day weekends. You name it.
Basically we need socialism. Everyone should be guaranteed a standard of living (including a house and food) at a certain level that will increase as our productivity increases. The freeloader "problem" won't be seen as a problem. We should actually be looking at it as the model for the future: how can we increase productivity such that we can support as many people as possible who want to be lazy or who want to pursue what they love?
Our world is already struggling to keep people employed despite continuous productivity gains.
In a rational world, that should be "because of... continuous productivity gains". I admit that the way you phrase the situation is how most people today view it but just as much it is a statement akin to "despite digging deeper and deeper, we keep getting closer to China"...
Can you elaborate on that? Are you thinking of a special theory, or a book explaining that? I know the work from Rifkin, who is arguing for "despite".
I understand that the real picture may somehow be different... For example, gains in healthcare may keep more people at work who else wouldn't be able to work. And an elaborate society also allows women to work, increasing the workforce. So I see this is not a black/white issue.
Since I'm commenting on what is or is not obvious, it would seem to me that the argument couldn't be strengthened by background material.
The argument is simply that the more things X people can do, the fewer things there are for the remaining Y people to do. When workers are more productive, total societal demand can be met with fewer workers.
Seems intuitively obvious. Factory X produces N widgets. If it can suddenly produce them with half the workers, it will fire the others and pocket the profit.
It has been argued that in these situations, demand will magically double or jobs will magically be found for these laid-off widget makers. Whatever their long pedigree, it seems to me these arguments only seem plausible because we've been trained to believe them. A look at the present world seems to indicate that "what you'd expect without the propaganda" is happening - laid-off widget makers of all sorts are thrown on the scrap heap with crocodile tears from the finance managers running things.
Uh, where was my head? Of course you are right, this is because. I fully agree with you, and my word was obviously wrong. Somehow I still had my original meaning in mind, which was the same as you exposed, and thought you were opposing that. Sorry and thanks for the correction.
I wonder when we will cross the post-scarcity point on housing. Considering that about 50% of my monthly expenses goes to rent, I think reducing the cost of housing is where most of my mental energy should be devoted, instead of yet another social/local CRUD app. My partial solution is Commodity Housing in urban areas, i.e., housing so abundant that what you pay is close to the maintenance costs, which I estimate would be on the order of $50-60/month (excluding electric/gas, water/sewage and garbage collection expenses, which most of us already pay for separately). I can't figure out why, when there is so much land in the U.S. [1], rents are still so crushingly high [2]. I only have a partial answer in zoning laws and (ironically) rent controls [3].
As an expression of "a lot of land in the U.S.", you can buy a habitable dwelling in Nowhere Unincorporated, New Mexico for $100. Would you take this option?
My intuition suggests that the rents being so damn high is yet another case of the 1%, already rich, screwing the rest of us to maintain their cash flow. However, as the late John McCarthy cautioned, "he who refuses to do arithmetic is doomed to talk nonsense" [1]. So, I'll need to do my arithmetic before jumping to this conclusion. Once I'm finished, I'll post an "Ask HN" question on this issue.
[1] Progress and its Sustainability, by John McCarthy
The availability of water is one of the limiting factors to the population density of a city. A lot of land implies large catchment areas over which to collect enough water to support a large city population.
Location still matters, or else there would be no such thing as startup hubs like Silicon Valley. I'm thinking more along the lines of: why is it that outside of downtown SFO, most buildings are at most 2 stories high? Is it just due to prevailing zoning laws and rent controls, or is there something more? There is obviously great demand for housing, as indicated by the extremely high rents in this area, so why not just replicate the downtown high rises further south?
By the way, this is what I found on a plaque next to the Golden Gate: the biggest faction opposed to building the bridge were the landlords in SFO, who feared that once the bridge was built, it would greatly lower the rents they could collect.
The thing about high rises is that they require money to put them up, especially in seismically active areas. Typically the cost is such that an apartment in a high rise takes at least several years to fully pay off when paying off a mortgage with payments of the equivalent of today's monthly market rent.
Relax regulations, clean out zoning, remove market controls, employ people on the black market, do the construction not-for-profit and you might be able to slash the cost by half or maybe two thirds, probably not more. (There is a finite amount of people willing to pay to live in such a high rise, but let's assume demand really is huge.)
In particular, even at that lower construction cost it won't be worth it for anyone to build if the rent is to be comparable to maintenance costs, whether through supply/demand or otherwise. This will be the case until you can put up high rises at a total cost of about $900-1200 per unit (roughly 15-20 times the monthly payment), no matter how many you can put up.
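Plugging in the numbers from upthread (every figure here is the thread's assumption, not market data):

```python
# Rough cost of a high-rise unit implied above: "several years to pay
# off" at today's market rent. The $2000/month rent is a made-up
# Bay Area-ish figure.
market_rent = 2000        # $/month, hypothetical
payoff_years = 5          # "at least several years"
unit_cost = market_rent * 12 * payoff_years
print(unit_cost)          # 120000

# For maintenance-level rent to cover construction, build cost per
# unit would have to fall to the 15-20x range claimed above:
maintenance_rent = 60     # $/month, from the grandparent comment
print(maintenance_rent * 15, maintenance_rent * 20)  # 900 1200
```

The two orders of magnitude between those numbers is the gap the parent is pointing at: no amount of deregulation closes it, so maintenance-level rents don't pencil out for new high-rise construction.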
Of the open source licenses, GPL is (ironically) the friendliest to commercialization because its terms make commercialized forks difficult (because it's difficult to close-source the fork).
One reasonable strategy is to adopt one of the fascist GPL variants (like AGPL3) and then offer a commercial license alongside it.
But then you can't accept any contributions, so you lose much of the benefit of the GPL, right? If there is going to be any community around the source, it will have to be a fork, and to you it will effectively be closed source since you can't relicense their code.
"A community around the source" is not the biggest benefit of commercial open source. Before "whole features implemented by outsiders" factors in, you already have:
* Drastically increased enterprise adoption because influencers w/o purchasing authority can download and do their own pilot deployments.
* Free testing, with drastically decreased expectations of release quality (it's open source, what do you expect? &c).
* The ability to market on the warm-fuzzy of "open source", which warm-fuzzy also includes the pragmatic issue of "what if you go out of business" (you still have the source).
I think the reality of a lot of open source projects, particularly in the enterprise, is that the vast majority of the real code is written by a small affiliated core team.
If you own the copyright over your GPL'd code, you can do a multi-license business model and sell/build proprietary stuff on top of the open core. (Or open stuff on a proprietary core, or a mixed core, etc.) When other people do the same thing, suddenly you have even more open cores out there (besides the ones done without profit motive) to build stuff on that you'd otherwise have to reinvent at a negative cost factor.
Mixing open and proprietary, while perhaps not in line with RMS's views, is still a great way to make money, and you can even do it with the GPL in many circumstances. (Not always though.) In the gaming industry, I find philosophically pleasing the approach of Carmack and some indie companies who make a credible commitment to open sourcing the software (not necessarily the art assets) at a certain date in the future while making money on it at release.
Edit: I also want to add that copycat fears are generally out of proportion. Copycats usually don't win, as in displace the original's dominance. (What usually happens though is the original starts copycatting others to keep a safe, steady income flow, and possibly to crush newcomers if the thing being copycatted isn't sufficiently novel. You're not going to get big by copycatting, though you might get a modest sustaining profit.)
I think the multi-license model does work with RMS's views, it's just that you have to make sure that you're selling the same software; the GPL version cannot be limited or treated as trial software.
Using a proprietary license for businesses means that you can punish them for refusing to use GPL'd software by making them pay more. When they accept the GPL, then you reward them with a lower price (but not free, you can charge for GPL software too ;-) ) hopefully encouraging more GPL'd software to be released into the world.
The point is to reduce how much proprietary software there is and increase how much free software there is and multi-licensing uses a carrot and a stick based on pricing to do that.
I don't think your reasoning applies to most software. I'm guessing most software these days runs on the server. Think of Heroku. If they open sourced their full stack, they still wouldn't have to worry too much about copycats. It still takes a lot of expertise and work to run something like Heroku, and they are always making progress which a copycat would have a hard time keeping up with.
Commercialization isn't necessary for innovation. Stallman sees a world where software is free because it's abundant. There's a whole different system of economics there. Your current business model is based on our free market system where profits rule which means that giving away your software for free is stupid and unsustainable.
Different economics means much less competition and much greater co-operation. Even if there are perfect copycats selling overnight, they're hitting a different market than yours. They may be going after a local market or one based on a particular language.
This is my chief difficulty with the GPL as well. As a profession, we can't self-destruct with an ideology of giving everything away in a society in which we have to pay rent, insurance, etc.
This is a fallacy. It's equivalent to arguing that any increase in productivity (which is precisely what this is: less effort expended to develop software) is bad. But it's wrong. Over time, those freed from obsolete jobs are employed by society to do new stuff (stuff that wasn't practical before because of labor costs) and we all benefit.
And even in the short term this is pretty much just plain wrong. Almost all GPL software is written by people paid to develop GPL software.
There are a number of factors you're not addressing, and are very convenient to overlook. One particular (example) is that it is no longer necessary to pay for software for typical (e.g., email, web browsing, Skype, music playing, spreadsheeting) activities with a computer.
You can. You don't have to unless external forces constrain you to.
Socially, an expectation is being created that useful software is without monetary charge. I would like the readers to extrapolate from this what happens as more users expect gratis software.
If I was running a business, I would use 100% gratis software unless a really compelling reason otherwise was presented. As a software engineer, I usually recommend gratis software for most things... it works great! Why should I pay Microsoft/Oracle/etc when gratis software works so well? This means that non-gratis software companies (and their developers) are simply not receiving my dollars.
Unfortunately, we as software developers have to pay the bills, and gratis software is, well, gratis.
Arguably, we can assume that software developers will continue to invent new and shinier things, useful to all, which people will pay for. But "past performance is no predictor of future performance", as stock prospectuses like saying, and I believe that to be a great statement about our industry. Who could have predicted the fall in 15 years of mainframes in 1970 [1]?
I think that a viable business model for AGPL3 software, besides consulting/SaaS, still needs to come to fruition.
Let's personalize this: I own a Macbook Pro, and have paid $0.00 for software on it; I use it probably 80% of the time I'm at home doing all sorts of geeky things. I feel bad for Mac app developers, but I don't need what they sell: gratis software does it for me. I've only paid for 2 pieces of app software on my personal computers in the last decade (not counting games - maybe I should?):
- Microsoft Office for Mac (prior Mac)
- Quicken (Once)
Plus OSX 10.6, 10.4, and Windows 7.
Until I come into a stiff requirement for a piece of proprietary software, I have no plans to change the 'use gratis/GPL software' model. It's worked well so far!
I've bought about 4 (cheap) apps on the iPod, in contrast - the GPL is effectively banned, iirc.
Of course, the GPL has caused some wonderful shared goods to come into being: gcc, emacs, Linux, etc. But those hard-working very smart developers don't receive money from me in general[2], because it's more financially efficient to use software gratis. I release my small amounts of public software under the AGPL3, because I believe in building a better world together, instead of grabbing each other's resources. But I think that we need to figure out a better way to make our bills get paid.
To summarize: Someone around here referenced the idea of a post-scarcity world. I think common software is becoming a post-scarcity world, and I don't know how common app developers are going to survive and keep developing on those common apps.
[1] probably a few visionaries, but besides them... :-)
[2] modulo donations or swag
"It's equivalent to arguing that any increase in productivity (which is precisely what this is: less effort expended to develop software) is bad."
RMS is not proposing an increase in productivity, he's proposing a massive disincentive to engaging in one of the most productive, highest growth industries in the world's history.
"Over time, those freed from obsolete jobs are employed by society to do new stuff ...and we all benefit."
These jobs aren't obsolete. It's not like the same software will get done if all these people stop working and go do "new stuff". We just wouldn't get those advancements and would suffer greatly not benefit.
"RMS is not proposing an increase in productivity, he's proposing a massive disincentive to engaging in one of the most productive, highest growth industries in the world's history."
You say tomato, I say tomato...
Seriously: if you were explaining to a space alien what the distinction was, how would you do it? The only one I can see is all the subtle connotations of the adjectives and superlatives you chose to throw in.
I mean, the automobile was a "massive disincentive" to the horse breeding industry too. The gun put a ton of blacksmiths out of work. And secretaries and typists were an awfully big part of the world economy in the recent past too... All those people (figuratively -- really the fraction of the workforce they represented) are now doing other stuff like building web apps. And the world is a better place.
"Productivity" is a quantifiable metric. Either you pay programmers to duplicate effort or you don't. In the latter case, productivity is higher by definition, and no amount of adjectives is going to change that fact.
I don't think that analysis is complete. Obviously the point isn't that there's no duplication within free software, just that there is less. Sure, clang competes with gcc. But as recently as the 90s there were dozens of commercial compilers competing in the market. Now, everyone (outside of a handful of legacy platforms like Windows) just uses gcc. Clearly that's a gain in productivity.
And LLVM was bootstrapped by using GCC as the front end before clang was written to replace it.
The freedom to combine with another program _temporarily_ while you bootstrap something better improves productivity significantly.
And the world before GCC had a great many commercial compilers, many of them crap. GCC replaced almost all the crap ones, so that only free compilers, and the few commercial compilers good enough to offer something over GCC, remained.
Not only that, but Gnome, KDE and xfce all build upon each other - experience and code - and explore different ideas. This accelerates their evolution.
Alternately, we as developers can use existing open source software with which to build new software, without having to rewrite things that are already completed and available.
"Either you pay programmers to duplicate effort or you don't."
That's not what I'm talking about. Free software doesn't just kill duplicate effort. Free software kills the original effort that is there to be duplicated. You're saying "well all the unproductive leeches will go do something else". What actually happens is all the productive people go and do something else too.
If you fix the price of automobiles at zero, you don't get an automobile revolution that puts horse breeders out of business. You get an artificially created dark age while you wait a few extra decades for hobbyists to create the model t.
"Free software doesn't just kill duplicate effort. Free software kills the original effort that is there to be duplicated."
Once more you're making a distinction that seems meaningless to me. How does one kill "original effort" and what on earth would it mean to do so? My guess is that you mean that it kills off the profits gotten from the rights to software that has already been written; in which case I can only suggest you google "rent seeking" for the thoughts of smart economists on the issue.
Using free software means that fewer people need to be paid to develop software. And over time the market will put them to work doing something else. (Note carefully that I'm avoiding loaded terms like "leeches" which you seem to be so enthused about using.) That's just macro economics 101. Do you really disagree?
There are companies which don't rely on copyright. One really big example is "on demand": non-software company needs software which does X (and doesn't exist yet), so they contract with a software company (or just hire developers) to write it.
I know some companies like that - they basically get paid to adapt FOSS software to specific needs.
Another example is companies that rely on things other than per-copy licensing; Red Hat is a good example.
The point isn't that zero people will be writing software, the point is that the few people left will not accomplish a fraction of what's currently being done.
If there were no copyright for film, you'd still have films being made for various reasons (hobbyists, educational, propaganda, vanity, advertising, etc.) but there's no way you'd see stuff like Avatar get financed.
Obviously film as a medium is an art form and not everyone values it the same (or at all).
Even reasonable analogies have limits that it pays not to overextend. Software itself has more direct and inarguable benefits than film: the software equivalents of Avatar save lives in our world, and many of them wouldn't exist in RMS's utopia.
I'm not sure what you were referring to, but to be fair, the software equivalents of Avatar are big blockbuster video games. Like Call of Duty, Gears of War, Skyrim... They're awfully similar to big, blockbuster computer animated films.
The obsolete jobs are those which depend on re-inventing the wheel, because some company needs to add a new feature or fix a bug in software X and they can't, because it's proprietary.
If all software were Free, we'd eliminate plenty of redundant work.
> [RMS is] proposing a massive disincentive to engaging in one of the most productive, highest growth industries in the world's history.
OK, you lost me. How exactly has the possibility of rendering your software proprietary enhanced productivity in software in any way at all?
Just a warning: I don't accept metrics based on how much money you extract from others, or how much time people spend behind a keyboard. What matters is how much people's lives have actually improved thanks to proprietary software.
My guess is: not much at all. If I recall correctly, the most crucial innovations were mostly public research work.
I wonder when we will cross the post-scarcity point on housing. Considering that about 50% of my monthly expenses goes to rent, I think reducing the cost of housing is where most of my mental energies should be devoted. My partial solution is "Commodity Housing", i.e., housing so abundant that what you pay is close to the maintenance costs, which I estimate would be on the order of $50-60/month (excluding the electric/gas, water/sewage and garbage disposal expenses which most of us already pay for separately).
I can't seem to figure out why, when there is so much land in the U.S. [1], rents are still so crushingly high [2]. I only have a partial answer in rent controls [3] and zoning laws.
Well, as someone in the rural US, I don't want to see the landscape peppered with suburbs. I think suburbs are a terrifically ugly invention and have a number of nasty externalities (a rant for another time).
So places like where I live (semi-rural northwest) are incentivized to make it difficult to expand. I like urban apartments/condos/flats, I think those are a good solution, but as a culture, outside of the urban areas, people like houses.
You should look up state-planned housing and, in general, socialist, communist and anarchist ideas. I think the communists hit a dead end with their idea of the state controlling everything, while the anarchists have alternative ideas (which are continually borrowed from, to varying degrees).
It's really ludicrous: we all know the business model of record labels and movie studios (selling the right to use and possess something that costs nothing to duplicate) is dead, yet we are still delusional about our ability to sell software licenses...
I have a challenge for you. Make the last Metallica album in your home without making a copy. How long do you think it will take you? How about Photoshop?
If you find it's difficult, it's because the original is what costs money, not the duplicate.
I also find it interesting that you can't see the value in bits that "cost nothing to copy".
Yet, with currency, it's just ink and paper (physically worth considerably less) that represents much more in value. I'm not sure why this concept is so difficult to grasp when we've already been using it for many years before digital content.
"The Anonymous protests for the most part work by having a lot of people send a lot of commands to a website, that it can’t handle so many requests. This is equivalent of a crowd of people going to the door of a building and having a protest on the street. It’s basically legitimate. And when people object to this, let’s look at who they are and what they do. Usually they are people who are doing much worse things"
So he's saying that DDoS attacks are legitimate, and anyone who doesn't think they are is up to no good?
If I understand correctly, he is referring to DDoS attacks where many people voluntarily send traffic to a server from their own ordinary computer setups, not attacks where a small number of people pull off something major using extra resources such as unwilling puppets.
From the article: "Stallman is the man behind the concept that every computer program must be free for users to study and modify as they want."
Does he suggest software must be free?
We know he doesn't say they must be free in cost. Does he say they need to be free as in speech? I've never heard him say that. I don't think of him as telling others what they must do.
I feel like a lot of the polarization around what he says comes from misunderstanding. This feels like one of them.
If you mean must as in, it should be illegal to do otherwise, I'm not sure. I don't think so, but given that he casts the conversation in terms of human rights, I can't entirely tell.
But if you mean must as in, does he really think all software, without exception, should be free? Absolutely. I think that's clear from the linked article alone.
He consistently states that he views this as a moral imperative. He's not exactly telling everyone that they have to, but he is saying it is fundamentally morally wrong not to.
Free air traffic control software doesn't really sound absurd. Software doesn't give one air traffic control centre any advantage over another. That's not how they compete if they compete at all, so they may as well give it away.
Yes, it's not like ATC are suddenly going to start installing every random nightly release on their production systems.
I would imagine they would stay safely (maybe even years) behind the cutting edge. But common availability of the source code would make it easier to share information that may be able to speed up acceptance testing or flag up issues.
You could argue it's actually more dangerous for an airport to rely on software that is opaque: for all they know, there is some edge condition in there by accident (or on purpose) that could cause mayhem.
I am going to assume, however, that ATC software source code is rigorously audited by a trusted partner of the airports using it.
This is not far from reality, actually: most ATC programs run on OSS stacks anyway. It's just that I can't imagine anyone willing to maintain those vast mountains of unmaintainable code gratis.