After looking at the landing page, I really wasn't sure what it was about. I decided to sign up to try it out, but there were clearly too many fields. I'm an asshole, I know. I'd suggest making the font bigger, cutting the text to a strict minimum, providing a Demo page or an example of a diary, and finally making the signup way smaller (or even better, let the user start writing.. and create the account later on).
I downvoted you mostly because I wasn't sure why you were talking about turning Python exercises into a Javascript emulator/learner tool when the thread is about Perl.. but then I wanted to cancel my vote and was unable to.
Wow, I just got nerd chills watching that video. Great idea and awesome execution. Where it will shine is when you can follow a 5-minute tutorial, create a "Quake", and invite your friends to play your game.. while letting them hack their own characters, edit levels and add more original features to the game.
Interesting article. I like the provocative "asshole" angle.. with a nice conclusion at the end pointing out that people are just busy, and that not being direct is simply disrespectful to your audience.
That's a platitude which everybody can and does say about everything.
But I think one can make a case that there is something wrong with OO in general. The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general. So we need to ask: when is OO a win? For simpler problems it doesn't matter, because anything would work. So the real question is: what are the hard problems that OO makes significantly easier? I don't think anyone has answered that.
I suspect it's that OO is good when the problem involves some well-defined system that exists objectively outside the program - for example, a physical system. One can look at that outside thing and ask, "what are its components?" and represent each with a class. The objective reality takes care of the hardest part of OO, which is knowing what the classes should be. (Whereas most of the time that just takes our first hard problem - what should the system do? - and makes it even harder.) As you make mistakes and have to change your classes, you can interrogate the outside system to find out what the mistakes are, and the new classes are likely to be refinements of the old ones rather than wholly incompatible.
This answer boils down to saying that OO's sweet spot is right where it originated: simulation. But that's something of a niche, not the general-purpose complexity-tackling paradigm it was sold as. (There's an interview on YouTube with Steve Jobs from 1995 or so saying that OO means programmers can build software out of pre-existing components, and that this makes for at least one order of magnitude more productivity - that "at least" being a marvelous Jobsian touch.)
The reason OO endures as a general-purpose paradigm is inertia. Several generations of programmers have been taught it as the way to program -- which it is not, except that thinking makes it so. How did it get to become so standard? Story of the software industry to date: software is hard, so we come up with a theory of how we would like it to work, do that, and filter out the conflicting evidence.
> The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.
Wrong. Even the worst Enterprisey mess of Java classes and interfaces that you can find today is probably better than most of the spaghetti, global-state-ridden, wild west code that existed in the golden days of "procedural" programming.
If you consider that software is composed of Code and Data, then OOP was the first programming model that offered a solid, practical and efficient approach to the organization of data, code and the relationship between the two. That resulted in programs that, given their size and amount of features, were generally easier to understand and change.
That doesn't mean OOP was perfect, or that it couldn't be misused; it was never a silver bullet. With the latest generation of software developers trained from the ground up with at least some idea that code and data need to be organized and structured properly, it's time to leave behind many of the practices and patterns of "pure" OOP and evolve into something better. In particular, functional programming has finally become practical in the mainstream, with most languages offering efficient facilities for developing with functional patterns.
You believe this, but you've given no reason to believe it other than the dogma you favor. The old-fashioned procedural systems I've seen with global state and the like were actually easier to understand than the convoluted object systems with which they were often replaced. Your comment is exactly the kind of thing that people who are enthralled with a paradigm say. But the "worst Enterprisey mess of Java" that you blithely invoke is... really bad, actually, as bad as anything out there. You're assuming that paradigm shifts constitute progress. I offer an alternate explanation for why paradigms may shift: because the new generation wants to feel smarter than the old one.
OO enterprisey mess is strictly better than global-state-ridden spaghetti code. The hard part with enterprisey code is that the code performing the action is hidden under layers of abstraction. The bootstrapping cost is much higher here because you have to put more abstractions in your head before you can understand a particular bit of functionality. This is a different sort of complexity than the global state spaghetti.
With the global state code you have to understand the entire codebase to be sure who exactly is modifying a particular bit of state. This is far more fragile because the "interaction space" of a bit of code is much greater. The dangerous part is that while you must understand the whole codebase, the code itself doesn't enforce this. You're free to cowboy-edit a particular function and feel smug that this was much easier than the enterprisey code. But you can't be sure you didn't introduce a subtle bug in doing so.
The enterprisey code is better because it forces you to understand exactly what you need to before you can modify the program. Plus the layered abstractions provide strong type safety to help guard against introducing bugs. Enterprisey code has its flaws, but I think it's a flaw in how the code is organized rather than in the abstractions themselves. It should be clear how to get to the code that is actually performing the action. The language itself or the IDE should provide a mechanism for this. Barring that, you need strong conventions in your project to make implementations clear from the naming scheme.
This sounds like ideology to me. You can never be sure you didn't introduce a subtle bug, you can never force someone to understand what they need to, and interactions between objects can be just as hard to understand as any other interactions.
I agree that a project needs strong conventions, consistently practiced. See Eric Evans on ubiquitous language for the logical extension of that thinking. But this is as true of OO as anywhere else.
First off, apologies for seemingly badgering you in various threads in this post. I don't usually look at usernames when replying so it was completely by accident.
>interactions between objects can be just as hard to understand as any other interactions.
While this is true, there are strictly fewer possible interactions compared to the same functionality written only using primitives. To put it simply, one must understand all code that has a bit of data in its scope to fully understand how that bit of data changes. The smaller the scopes your program operates in, the smaller the "interaction space", and the easier it is to reason about. Class hierarchies do add complexity, but it's usually well contained. It adds to the startup cost of comprehending a codebase, which is why people tend to hate it.
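A minimal sketch of the contrast, with made-up names (nothing from a real codebase):

    # Any function in any module that imports this one can rebind `total`,
    # so understanding how it changes means reading the whole codebase.
    total = 0

    def add_sale(amount):
        global total
        total += amount

    # Here the running total is only reachable through Register's own methods,
    # so the "interaction space" is limited to this class.
    class Register(object):
        def __init__(self):
            self._total = 0

        def add_sale(self, amount):
            self._total += amount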
That's hilarious. Well, you're welcome to "badger" (i.e., discuss) anytime. It's I who should apologize for producing every other damn comment in this thread.
In my experience, classes don't give the kind of scope protection you're talking about. They pretend to, but then you end up having to understand them anyway. True scope protection exists as (1) local variables inside functions, and (2) API calls between truly independent systems. (If the systems aren't truly independent, but just pretending to be, then you need to understand them anyway.)
You're right that the scope protection I'm talking about isn't as clear cut when it comes to classes. Design by contract is an attempt to address this. Client code can only refer to another object through a well-defined interface. Thus your limited scope is enforced by the type system itself. Furthermore, much of this discussion is about perception of complexity rather than actual complexity.
In Python you can access any class's internals if you're persistent enough. However, the difficulty of doing so, and the convention that says it's bad practice, give the perception of a separation, so one does not need to be concerned with its implementation. It lowers the cognitive burden of using a piece of functionality.
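For instance (a toy class, hypothetical names), Python's underscore conventions only signal intent; nothing actually stops a persistent caller:

    class Account(object):
        def __init__(self):
            self._balance = 0       # single underscore: "internal" by convention only
            self.__audit_log = []   # double underscore: name-mangled, but still reachable

    acct = Account()
    print(acct._balance)             # works; the convention isn't enforced
    print(acct._Account__audit_log)  # name mangling is only a speed bump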
Haven't you noticed this yourself? A simpler interface is much easier to use than a more complicated one. If you're familiar with Python, perhaps you've used the library called requests; it's a Pythonic API for HTTP requests. Compare this to the standard library API using urllib2. The mental load to perform the same action is about an order of magnitude smaller with requests than with urllib2, and it's because there are far more moving parts in the latter.
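Roughly the difference, as a sketch with a made-up URL (the urllib2 part is the usual Python 2 basic-auth dance):

    # urllib2 (Python 2 stdlib): build a password manager, a handler, an opener...
    import urllib2

    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, 'https://api.example.com/', 'user', 'secret')
    opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))
    body = opener.open('https://api.example.com/data').read()

    # requests: the same authenticated GET in one call
    import requests

    body = requests.get('https://api.example.com/data', auth=('user', 'secret')).text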
My contention is that if bits of data and code are exposed to you, they are essentially part of the API, even if you never actually use them. You still must comprehend them on some level to be sure you're using the data they can access correctly - the interaction space again (I feel like I'm selling a book here).
When I compare crappy codebases in procedural C vs OO java, all I can say is that it took fewer lines of code to bring a project to the region of paralysis in C. Does that mean the C codebases accomplished less? I don't think so.
Part of the allure of Java: it gives the illusion that more progress was made before you entered that region.
We've all heard that you can shoot yourself in the foot with any language, but I think the implications haven't fully sunk in: There are no bad paradigms, only bad codebases.
>The original arguments for OO were: it's better for managing complexity, and it creates programs that are easier to change. Both of these turned out not to be true in general.
How so? I find OOP code to be much easier to understand and change.
I've tried to explain that in my other comments. But we may not be able to say much to each other beyond the fact that our experiences differ. How much of our experience is real and how much is owed to our assumptions (like "OO is good" or "not") is impossible to disentangle in a general discussion. You know how this stuff really gets hammered out? When people are working together on the same system. Then we can point to specific examples that we both understand, and try to come to an arrangement we both like. Barring that, it's pretty much YMMV.
The thing that is wrong with OO in general is that it is not a silver bullet. I disagree that OO is useful only for simulation - I've done virtually no simulation work and I've found OO useful in many contexts. Maybe if I used a functional style or hypertyped dynamic agile mumbo-jumbo style or whatever the fashion du jour is, I would save some time, but OO worked fine for me and allowed me to do what I needed efficiently. Would I use it everywhere? Definitely not. Does it have a prominent place in my tool belt? Definitely yes. I reject the false dichotomy of "OO is Da Thing, should be used for everything" and "OO is useless except in a very narrow niche". My experience is that OO is very useful for reducing the complexity of projects of considerable size with high reuse potential and defined, though not completely static, interfaces.
Of course you find OO useful in many contexts, since you know it and like it. But the argument is not that OO is useless. The argument is that it provides no compelling advantage and that it comes with costs that people don't count because -- owing to familiarity -- they don't see them. It would be interesting to know how much of the "considerable size" of the systems you refer to is owed to the paradigm to begin with. The cure is often the disease.
No compelling advantage over what? Over random bag of spaghetti code? You bet it does. Over some other methodology? Bring up the example and we'd see. I am sure you can replace OO with something else - so what? I explicitly said OO is not the panacea - you like something else, you do it. So what's your proposal?
The considerable size of the system comes from the considerable complexity of the task at hand. I don't see how you can turn a 1000-page book of requirements into 2 screens of code. And if you need a system that can be customized to satisfy different books with minimal change - you need more complex code.
Another reason it endures is the large collection of mature and easy to use libraries available. Of course this is something of a chicken and egg problem, but a very present one.
Yes, but that's just the same inertia. If programmers no longer believed in those libraries, they'd soon write new ones. Otherwise our systems would all still be integrating with Fortran and COBOL. Some still do, of course - and the extent to which they still do is probably the true measure of the library argument.
I'm not so convinced. There was a time when most people had grown up with procedural code. OO offered enough promise that people moved over to it and developed things like Cocoa or Java.
If people aren't moving on, it's not simply because of 'intertia'. It's because the alternatives aren't offering enough benefit yet.
I think that's nearly entirely untrue. Emotional attachment to what one already knows is the dominant factor - all the more dominant because, in our zeal to appear rational, we deny it.
What will change the paradigm is the desire of a future generation to make its mark by throwing out how the previous generation did things. One can see this trying to happen with FP, though I doubt FP will reach mass appeal.
When you do anything as intensely as writing software demands, your identity gets involved. When identity is involved, emotions are strong. The image of being a rational engineer is mostly a veneer on top of this process.
Edit: 'intertia' - your making fun of an obvious typo rather illustrates the point.
You haven't explained how the inertia was overcome before. Nor have you explained what you think would replace OO if not for the inertia.
So this is basically a contentless argument that says people are stuck on OO because they are irrational.
As for me 'making fun' of a typo - I don't know what you're projecting onto me. I simply mistyped the word inertia. I put it in quotes because I don't think it's a cause of anything.
My apologies! I had typed intertia at one point and thought your quotation marks were quoting that. It seems our fingers work similarly even if our minds do not :)
As for OO, I think our disagreement has probably reached a fixed point.
Edit: nah, I can't resist one more. I believe I explained how the inertia was overcome before: people who were not identified with the dominant theory at the time (structured programming) came up with a new one (object-orientation) and convinced themselves and others that it would solve their problems. Why did they do that? Because the old theory sucked and they wanted to do better. Their error was in identifying with the new theory and thus failing to see that it also sucks. The only reason they could see that the old theory sucked was that it was someone else's theory.
As for "what would replace OO if not for inertia", I know what my replacement is: nothing. I try to just let the problem tell me what to do and change what doesn't work. Turns out you can do that quite easily in an old-fashioned style. But if you mean what paradigm will replace OO (keeping in mind that we ought to give up our addiction to paradigms), who knows? FP is a candidate. One thing we can say for sure is that something new and shiny-looking will come along, until we eventually figure out that the software problem doesn't exist on that level and that all of these paradigms are more or less cults.
Perhaps I should add that I don't claim to know all these things. I'm just extrapolating from experience and observation of others.
PG covered the inertia problem pretty well with The Blub Paradox [1]. OO is now the blub language somewhere in the language power continuum, and those who don't know any other paradigm equally well are more likely to become and stay invested in it, hence the inertia.
I've been as influenced by that essay as anybody here, but I'm not sure I believe in a power continuum anymore. How powerful a language is depends on who is using it. You can't abstract that away, but if you include it the feedback loops make your head explode.
The trouble I have with what you're saying is it suggests that a better paradigm (e.g. FP), higher-level and more powerful, will improve upon and succeed OO. But the greatest breakthroughs in my own programming practice have come from not thinking paradigmatically - to be precise, from seeing what I was assuming and then not assuming it.
Edit: My own experience has been this weird thing of moving back down the power continuum into an old-fashioned imperative style of programming, but still very much in Lisp-land. For me this has been a huge breakthrough. Yet my code isn't FP and it certainly isn't OO, so I guess it must be procedural. How much of this is dependent on the language flexibility that comes with Lisp? vs. just that Lisp happens to be what I like? Hard to say, but I suspect it's not irrelevant. If you can craft the language to fit your problem, you can throw out an awful lot. Like, it's shocking how much you can throw out.
I just read through the ClojureScript One source along with their wiki and tutorial. The mini-framework they've developed borrows ideas from (at least as I gather from their docs) functional reactive programming and dataflow programming. You register handlers that respond to events, and it's mostly just functions calling each other, although obviously there's a huge pile of Objects with a capital O in the browser's DOM. But I think it's a pretty compelling alternative, and certainly interesting, although I'm not sure if it's actually better or not.
I find this to be overkill, but I see the point.. if you're always trying to make "good" commits you kind of lose the "get all the shit done" mentality, as you have to separate everything into smaller ideas.. whereas it's sometimes faster to really hack on lots of things and commit, and then clean everything up at the end.
"The retina display is amazing, everything in the UI feels faster, and the price points remain the same. What’s not to love? It’s that simple."
Sadly, it's not enough for me to switch from the iPad 1 to the iPad 3. If there were a way to get a reduced price by trading in your iPad 1, then maybe.. :) Still, I wish there was something more to it..
It turns out that people aren't all the same. I happily upgraded from my iPad 2 solely because of the display. People upgrade their computer monitors all the time, so why shouldn't I upgrade the display on a device I use for at least an hour every day (with some days topping 3 hours)?
I love Arch. Best distribution by far, in my opinion. Sadly, I had huge problems a week ago when I did a full system upgrade (it took me 2 days to fix everything that was corrupted, and I lost a lot of money because of it), but then it was my fault. (Never use --force with a system upgrade!) To be fair, there should be a warning when you try to execute the command, as it's probably not what a user would want to do.
I have a bunch of painful stories about using Arch but the best one happened most recently.
I had some outdated package that I wanted to update, so I asked in the #arch IRC room how to update just that one package. I was told upgrading a single package is generally a bad idea and it's better to just update the entire system. I have had a server running Gentoo for ~5 years and I frequently upgrade single packages at a time so I saw no problem with this but ok, I'm not an Arch expert so I followed the #arch people's advice.
I invoked the upgrade command and saw it wanted to upgrade the Linux kernel to 3.2, along with a bunch of other stuff. After the upgrade completed I rebooted the machine (or it rebooted itself, I forget). It wasn't able to boot up. I put in a rescue CD but I couldn't figure out what was wrong.
This is exactly why I don't do 'emerge world' in gentoo anymore. It has backfired on me more than 50% of the time (when I used to do it). I simply do not trust these bleeding edge distro people to get everything working all the time and I am annoyed at the zealots who constantly advise to just upgrade as if nothing could possibly go wrong.
Good points here. I currently run Arch but would be interested in seeing a sort of LTS-style Arch that would only ship security updates and other stable package updates. I guess that would require a lot of maintenance though.
> I frequently upgrade single packages at a time so I saw no problem with this but ok, I'm not an Arch expert so I followed the #arch people's advice.
Imagine your window manager relies on libX as a dependency. You update CoolNewApp, which relies on an updated version of libX. So it installs that from your repos and CoolNewApp works great. However, your WM needs an update to be compatible with the newer version of libX, and that update wasn't installed, so the next time you go to log in, bam, broken system.
This is exactly the kind of thing that pacman should take care of. Simply "upgrade everything" is not good practice; it will break for some people, enough to leave them annoyed.
Definitely a valid opinion, but the way I see it when you use Arch you inherently don the hat of one of those 'bleeding edge distro people,' and it's up to you to keep everything working. It's simply the price I pay to have the most up-to-date software and some of the best performance I've ever seen.
More hegemonic distros like Debian trade some of that speed and freshness for a more hands-off experience. Nothing wrong with either!
I think having to use an option like '--force' is considered warning enough. Which is, I think, perfectly reasonable. Of course, I don't know your exact scenario, but that's how things seem to work.
Yeah, well the thing is I never had to use it before. But a package required it, and a quick bit of google-fu said to include --force. What wasn't clear is that you had to use --force for only that package.. while I simply added it to my already-crafted command (i.e. ctrl-p, --force, <enter>). Let's say I've learned that one the hard way ;)
You should be using Pacmatic. The --force thing was a news feed item, and Pacmatic would have shown you the official news before you could have done damage.
This issue is completely orthogonal to whether or not a GUI was in use. The text UI could just as easily have asked for confirmation, and a graphical UI could easily allow the user to shoot themselves in the foot.
Even though I agree with your statement, do you truly believe that if a GUI had presented the options Yes/No/Cancel/Force or whatever, fewer people would have suffered from issues like this? I probably would still have used that --force flag on a non-production system, and a GUI would have made that decision even easier for me...
Well, the thing is that it complained about only one particular file, which I didn't care about and knew was safe to --force. So, basically, you type:
foo -abc
And it tells you, "Can't alter file bar.conf".
After a google search you read, "bar.conf needs the --force switch to be altered".
So you're like, fine..
foo -abc --force
* Everything crashes *
Next reboot,
"Press <enter> to get in the shell"
I press enter.. and even the keyboard isn't working.
The more I use Arch, the more I do things on the command line. That means a lot less software to maintain. I think if you run tons of different software, you are asking for things to break.
I happily run dwm, and a handful of CLI tools. This minimalistic setup is enough for me - and seems to cut down on things breaking.
I think using the 'cutting-edge' distros (Gentoo, Arch) in general pushes one to use more CLI tools rather than GUIs. At least that was the case with me.
> This minimalistic setup is enough for me
I think that's a poor choice of words ;). It implies that a GUI, or a 'less minimalistic' setup, is something 'more', something better. I think it's just something different, for different needs.
Being very careful here: maybe using fewer GUIs and a handful of CLIs is an evolution (in certain areas). You can do more (stuff) with less (commands). But "with great power comes great responsibility" = you can easily shoot yourself in the foot, and so it is not for everyone.
What you might not get is that people used to be just as lazy, but without the framework. So you'd get really ugly websites without any good design touch. Bootstrap gives these people a way to have something that doesn't look like shit, while still letting more design-oriented people improve it and make it stand out.
BUT, I'd say that people seem to only use the SAME Bootstrap page; i.e. it's a big framework with lots of useful snippets.. it's sad that we only see the black top bar thingy everywhere.