Brad Cox: when OOP was about “Software-ICs” and micro-transactions (deprogrammaticaipsum.com)
91 points by kemenaran on Dec 20, 2019 | 76 comments



He references the Byte magazine Smalltalk issue. There was another issue much later that looked at why objects had failed, but actually they hadn't. They succeeded; they just weren't called objects. They were called Visual Basic controls/components, and these were a runaway success. VB, despite its limitations, enabled a massive amount of software to be written quickly and easily, mainly through re-using commercial components.

But the components were completely encapsulated: black box, no source code, no inheritance. You could argue the same for services today. Services are software ICs; call them objects or whatever you like, but they achieve Brad Cox's objective, which is all that matters. (It's just a pity that people think a service/component always has to run in a separate process, unlike VB.)


This was the book that defined components: https://smile.amazon.com/Component-Software-Beyond-Object-Or... At the time, components were going to be the next level above objects - they were the black box that contained a lot of objects and provided a nice programmer interface. Unfortunately the ideas seem to have disappeared. Delphi is an even better example of a component model than VB was, imho.


I think the concept of software "parts" (similar to the auto industry, where many of the components are made by 3rd parties) is only tangentially related to object-oriented programming.

There have always been APIs, but they're still parts with an exposed interface - now just distributed over the Internet "as a service".


I think we have succeeded at software ICs beyond our wildest dreams at the time: standard, third-party components with mostly understood behaviors, documentation, and catalogs to obtain them from.

Almost everything except C has web-loadable modules and online docs (analogous to IC data sheets, in retrospect!) at this point, whether it's Perl's CPAN (going back to 1995) or Gems or Maven or whatever, through the obvious left-pad disasters, and now containers and various Hubs.

edit. What we didn't dream of when the Byte Smalltalk issue came out (should NOT have tossed mine :-) was the implications of networked module repos. Add one or two lines of code and the system goes out and gets a standard module and wires it in for you. This was not on the menu for people shopping for 74xx at Radio Shack and Digikey.


It's a shame that, in the minds of many, OOP has come to mean only:

* chuck everything vaguely related in a class

* share code by using inheritance

IMHO, the way Java and C++ have been taught (in practice, if not by teachers) has done a disservice to OOP. So much so that many programmers now don't consider languages to be OOP unless they have the class keyword and inheritance.


> many programmers now don't consider languages to be OOP unless they have the class keyword and inheritance.

If a language doesn't support inheritance, in what sense is it an object-oriented programming language?

It's true that you can use OOP principles in languages that don't have language-level support for OOP. The canonical example of this is the GObject library for C. That doesn't tell us how we should use the term object-oriented programming language though. I don't see why we'd use a more expansive definition.


Sending a message to an Erlang / Elixir process (a GenServer) to change its internal state is not conceptually different from calling a method on an object.

The big difference I see between Elixir and, for example, Java or Ruby is that in those languages everything is an object, whereas a big Elixir application has only a few GenServers, where keeping state matters. Everything else is passed as arguments, which, by the way, is what a GenServer does internally with an endless tail recursion.
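
To make that parallel concrete, a rough Python sketch (not Elixir, and names like counter_server are made up for illustration): the same counter written once as an ordinary object with a method, and once as a process-style loop that owns its state and is driven only by messages on a queue.

    import queue
    import threading

    # Plain object: callers change internal state by calling a method.
    class Counter:
        def __init__(self):
            self.count = 0

        def increment(self, by):
            self.count += by
            return self.count

    # GenServer-flavoured: a loop owns the state; callers only send messages.
    def counter_server(inbox):
        count = 0                      # state lives only inside this loop
        while True:
            msg, payload, reply_to = inbox.get()
            if msg == "increment":
                count += payload
                reply_to.put(count)    # the "return value" is just another message
            elif msg == "stop":
                return

    obj = Counter()
    print(obj.increment(3))            # method call -> 3

    inbox, reply = queue.Queue(), queue.Queue()
    threading.Thread(target=counter_server, args=(inbox,), daemon=True).start()
    inbox.put(("increment", 3, reply)) # message send -> 3
    print(reply.get())
    inbox.put(("stop", None, None))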


In every sense. Prototype-based OO has been a thing for a long time now.

If you look at the entire breadth of OO languages and see what they have in common, there are really only two things that are truly common: object identity, and some form of dynamic dispatch based on that. Everything else (inheritance, encapsulation, etc.) is optional.
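
A minimal, class-free Python sketch of just those two ingredients (make_object and send are invented names): each object is its own identity plus a table of slots, and dispatch is a lookup on that particular object.

    # An "object" here is nothing but an identity with slots attached.
    def make_object(**slots):
        return dict(slots)

    def send(obj, selector, *args):
        # Dynamic dispatch: which code runs depends on this object's own slots.
        return obj[selector](obj, *args)

    dog = make_object(speak=lambda self: "woof")
    duck = make_object(speak=lambda self: "quack")

    print(send(dog, "speak"))    # woof
    print(send(duck, "speak"))   # quack

    # Behaviour hangs off the individual object, not a class or a parent.
    dog["speak"] = lambda self: "WOOF"
    print(send(dog, "speak"))    # WOOF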


Prototype-based OO is a form of inheritance.

> there are really only two things that are truly common: object identity, and some form of dynamic dispatch based on that. Everything else (inheritance, encapsulation, etc.) is optional.

That's a broader definition than I had in mind, but I can see where you're coming from.


Self doesn’t have classes or inheritance, and it’s as [single-dispatch] OO as they come.

(Certainly moreso than something like C++, which wouldn’t look out of place on a Melanesian island. Never could figure why C++ and all its typed kin would choose to dispatch [dynamically] on only one argument instead of on all. Playing to their weaknesses, no?)


My guess is that by going with single dispatch, you get clear 'ownership' in terms of methods^H^H^H^H^H^H^H member-functions belonging to classes.

Can't say I know a lot about multiple-dispatch though.


Thinking off the top of my head, how about a language that only had interfaces (ala COM)? Implemented by non-inheritable objects, using delegation instead of inheritance to share code.

I imagine some syntax sugar to help with the delegation to make it less cumbersome.
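
Something like this rough Python sketch, maybe (the names are hypothetical, and Python obviously isn't such a language): an interface with no code in it, a reusable implementation object, and a class that shares the code by delegating rather than inheriting, with __getattr__ standing in for the syntax sugar.

    from typing import Protocol

    class Greeter(Protocol):              # the interface: shape only, no code
        def greet(self, name: str) -> str: ...

    class DefaultGreeting:                # reusable implementation, not a base class
        def greet(self, name: str) -> str:
            return f"hello, {name}"

    class Kiosk:
        """Satisfies Greeter by delegating to DefaultGreeting, not inheriting it."""
        def __init__(self):
            self._greeter = DefaultGreeting()

        def __getattr__(self, attr):
            # The "syntax sugar": forward anything Kiosk doesn't define itself.
            return getattr(self._greeter, attr)

    print(Kiosk().greet("world"))         # hello, world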


If you care about this, my advice is to not rail against words taking on new meanings. That's how language works, and it's a losing battle to fight against it.

Just take what you believe is the better definition of OOP and rebrand it.


> That's how language works, and it's a losing battle to fight against it.

The very name of this site is a powerful argument to the contrary. Slashdotters initially dissed it, saying the word "hacker" (as in coder or builder as opposed to criminal) should be retired. Now the older sense of the word is widespread enough that people halfway around the world are aware of hackers in the sense that PG named this forum. Indie Hackers is only further cementing the reversal of the trend that once appeared inevitable.


I've got a linguistics minor. The concept here is prescriptivism versus descriptivism. The linguistics community overall has a huge emphasis on descriptivism. Trying to enforce consistency of language is a Sisyphean task, and in a lot of ways is an element of imperialism. Words change, grow, gain and lose definitions and shades over time. Document the new uses, and let people live their lives.


I don't see things in such absolutist terms as that. For everyday language, sure, I'm more than happy to go with the flow. For technical language we need a relatively stable, agreed upon terminology.

Terms that flow and change too freely make communication very difficult and often lead to pointless arguments where people fundamentally misunderstand what each other is saying. And I do disagree that it's Sisyphean. Raising awareness can and does influence how words are used (especially within a group).


The meaning of a term depends on its usage. That is not a matter of opinion, it's how language, including technical language, works.

Currently "OOP" roughly means [Alan Kay's vision] or [Java-like]. That is an imprecise and mostly useless definition but it's the one it has, whether we like it or not. A stable terminology is desirable and that's why we shouldn't redefine "OOP".

Think of it like an API. What's more backwards-compatible: changing the behavior of oop() or deprecating it in favor of javaLikeProgramming() and kayProgramming() ?


fwiw, a minor (so, ~4 uni classes) is not what I would flex as credentialization in a subject. Especially when you go on to bring up such an elementary part of linguistics.

It's kind of like introducing yourself as a compsci minor and then explaining OOP vs FP. I'd just leave off the creds, there's no need.


> fwiw, a minor (so, ~4 uni classes) is not what I would flex as credentialization in a subject.

It's not a "flex," it's an honest contextualization of an anecdote. They certainly studied more linguistics than I did. I, uh... had a minor... crush on a linguistics major I met through the teaching union, and we had a few casual conversations about their thesis? Now that's a weird flex.

The point of their comment was not to introduce the well-known concept; it was to relay their impression of the culture that they experienced. Please be more charitable with folks' contributions here.


The concept has historically gotten an order of magnitude more pushback without the tiny amount of creds to go along with it. People really latch on to the English-teacher-style prescriptivist view and assume that linguistics in general takes that view too.

And for me it was eight classes, ~1 a semester.


It is futile to rail against other people using words "wrong". But that doesn't mean you can't use words in the "correct way" yourself. If you are influential enough it may even have an effect.

In the case of "hacker" it has a different meaning in the media and the general population, but this is not a problem because the audience of HN understands the intended meaning.


Hackers don't need to convince anyone else of what the word hacker means. It is of no consequence what "regular people" think we mean when we say hacker. We have our forums, and we all know what it means.

But if we were in a situation where it mattered what other people thought the word meant, then we would be fucked. And that is evident just from talking with regular folk and asking them what they think about hackers.

The situation with OOP is different. It is our community itself that uses the word incorrectly (according to some people, including the original commenter).


> The very name of this site is a powerful argument to the contrary.

In our little bubble, perhaps. But I'm pretty sure that to my family and non-techie friends, "hacker" has only the negative meaning. That's why, when I tell them in general terms about what I'm doing during a hackathon at work, I don't use that word.


Bernie Sanders calling himself "socialist" (although he could also use social democrat) is a very similar example.


If you want to really see OOP in action, study Erlang. As the late, great Joe Armstrong said in his introductory book, Erlang is a real OOP system: objects can only communicate through message passing. There is no notion of visibility, friendship, inheritance, etc. Objects can hold state and pass messages, and that's it, which coincidentally makes concurrency dead simple.

What most people think of when they think OOP is C++, Java, and Python, which got OOP completely wrong and ruined our perception of OOP. They also tried to use their misshapen OOP hammer on every problem they could find, to the point that it's an overused meme. These days you see languages actively trying to distance themselves from OOP as a form of enticement (Rust and Go being the two prime examples).


> These days you see languages actively trying to distance themselves from OOP as a form of enticement (Rust and Go being the two prime examples).

They definitely distance themselves from inheritance. However, both Rust traits and Go interfaces look to me like they're there to appease people who like the .method() calling syntax. (I know there's more to the story than that, particularly for Rust and its type system...) I think either could've gone with multimethods or overloaded functions and been better for it, but a lot of people seem to really like the object.method() look.
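
For a taste of the overloaded-functions alternative, here's a sketch using Python's functools.singledispatch (dispatch on a single argument, so closer to overloading than to true multimethods; Circle/Square/area are made-up examples). The call site stays area(x) rather than x.area().

    from functools import singledispatch

    class Circle:
        def __init__(self, r): self.r = r

    class Square:
        def __init__(self, side): self.side = side

    # Plain call syntax, with the implementation chosen by the argument's type.
    @singledispatch
    def area(shape):
        raise TypeError(f"no area() overload for {type(shape).__name__}")

    @area.register
    def _(shape: Circle):
        return 3.14159 * shape.r ** 2

    @area.register
    def _(shape: Square):
        return shape.side ** 2

    print(area(Circle(1.0)))   # 3.14159
    print(area(Square(2)))     # 4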


> a lot of people seem to really like the object.method() look.

There’s a pretty simple reason for that: autocomplete.

In most halfway decent dev environments, typing “object.” will present you with a list of possible operations on that object. OTOH, if you want to do “method(object)” you need to know the method name (in all current scopes, including globals).

I’m not saying this is the only or even best way to write code, but it’s definitely a factor IMHO.


You're almost certainly right, this is pretty compelling. It's not the way I work, but I'm sure lots of people really like their IDEs. Despite that, I recently changed one of my APIs from being:

    result foo(bar b, other stuff)
    result foo(baz b, other stuff)
    result foo(bum b, other stuff)
To:

    result r = bar.foo(other stuff)
    result r = baz.foo(other stuff)
    result r = bum.foo(other stuff)
Not because of the IDE, but because the compiler error messages are horrible for the ones above when you make a mistake. If you pass the type as the first argument, the compiler "helpfully" tells you 3 pages of information about all of the overloads. However, if you use method syntax, it only tells you overloads for the one object.

It's a little frustrating that the hammer is changing the shape of our hands, and not vice versa.


The world has been changing the shapes of our hands since they were fins.

The problem I have with most languages' function-call syntax (except for point-free, stack-based languages like FORTH and PostScript) is that you can have multiple fingers on your "in" hand, but only one finger on your "out" hand. C#'s in/out/ref modifiers and Lisp's multiple-value-bind are hacks.

https://en.wikipedia.org/wiki/Tacit_programming

FORTH's /MOD ( numerator denominator -- remainder quotient ) naturally takes two integer inputs and returns two integer outputs, and it doesn't need any special clumsy syntax to express that.


> The problem I have with most languages' function-call syntax (except for point-free, stack-based languages like FORTH and PostScript) is that you can have multiple fingers on your "in" hand, but only one finger on your "out" hand.

I dunno; in many modern languages that aren't point-free stack-based languages, you can either have multiple "fingers" on each hand or only one on each (and the difference between these is arguably one of perspective more than concrete substance), but the one value each finger touches can be arbitrarily structured and destructured.


That same logic can be used to argue that functions should only support one input argument, too. If you can have multiple inputs, then what's wrong with multiple outputs? And if multiple outputs are so bad, then why have multiple inputs?

There's a big difference between simply and efficiently returning multiple values on the stack without generating intermediate garbage and memory references, and packing multiple values up into a single tuple, polymorphic array, structure, or class, and then destructuring it later, or passing input parameters that are indirect pointers to temporary output locations in linear memory.

Using indirect pointers for output parameters can also cause bugs and performance optimization problems with aliasing.

https://en.wikipedia.org/wiki/Pointer_aliasing

To elaborate what I said: C#'s in/out/ref and Lisp's multiple-value-bind syntax are clumsy, inelegant, and inefficient hacks.

And languages like Java that don't have pointers can't even do that, and just have to proliferate pointless container classes and generate garbage intermediate objects. Quick: How do you implement swap(a, b) in Java? (Without using an IntermediatingObjectSwapperDependencyInjector- ProxyThunkingShadowEnumeratorComponentBeanAdaptor- DecoratorReferenceRepositoryServiceProviderFactoryFactory!)

http://www.javapractices.com/topic/TopicAction.do?Id=37

https://stackoverflow.com/questions/1403921/output-parameter...

https://stackoverflow.com/questions/3624525/how-to-write-a-b...
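
(As a contrast to the swap question above, this is what first-class tuple returns look like in Python; just an illustration, and to be fair CPython does allocate a real tuple here, which is exactly the kind of intermediate object being complained about.)

    def div_mod(n, d):
        return n // d, n % d     # two results, no out-parameters

    def swapped(a, b):
        return b, a              # swap(a, b) answers itself

    q, r = div_mod(100, 3)       # q == 33, r == 1
    x, y = swapped(1, 2)         # x == 2, y == 1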

With WebAssembly, for example, returning multiple values has a significant effect on performance and code size, because it's much more costly to return multiple values indirectly through linear memory than on the stack. (The stack is in a separate address space that you can't point to like linear memory.)

That's why there's an active multiple value return proposal for WebAssembly, which is implemented in Chromium release 80:

https://www.chromestatus.com/feature/5192420329259008

https://hacks.mozilla.org/2019/11/multi-value-all-the-wasm/

>But Why Should I Care?

>Code Size

>There are a few scenarios where compilers are forced to jump through hoops when producing multiple stack values for core Wasm. Workarounds include introducing temporary local variables, and using local.get and local.set instructions, because the arity restrictions on blocks mean that the values cannot be left on the stack.

>Consider a scenario where we are computing two stack values: the pointer to a string in linear memory, and its length. Furthermore, imagine we are choosing between two different strings (which therefore have different pointer-and-length pairs) based on some condition. But whichever string we choose, we’re going to process the string in the same fashion, so we just want to push the pointer-and-length pair for our chosen string onto the stack, and control flow can join afterwards. [...]

>This encoding is also compact: only sixteen bytes!

>When we’re targeting core Wasm, and multi-value isn’t available, we’re forced to pursue alternative, more convoluted forms. We can smuggle the stack values out of each if and else arm via temporary local values: [...]

>This encoding requires 30 bytes, an overhead of fourteen bytes more than the ideal multi-value version. And if we were computing three values instead of two, there would be even more overhead, and the same is true for four values, etc… The additional overhead is proportional to how many values we’re producing in the if and else arms. [...]

>Returning Small Structs More Efficiently

>Returning multiple values from functions will allow us to more efficiently return small structures like Rust’s Results. Without multi-value returns, these relatively small structs that still don’t fit in a single Wasm value type get placed in linear memory temporarily. With multi-value returns, the values don’t escape to linear memory, and instead stay on the stack. This can be more efficient, since Wasm stack values are generally more amenable to optimization than loads and stores from linear memory.


> That same logic can be used to argue that functions should only support one input argument, too.

Some languages do (more, if you count languages where what looks like multiple arguments is really a single data structure in the language containing the complete set of arguments that can be passed to the function).

> There's a big difference between simply and efficiently returning multiple values on the stack without generating intermediate garbage and memory references, and packing multiple values up into a single tuple

Fundamentally, there's not, since you have to represent both the number of items and each item either way. It's true that there are more and less efficient means of performing the task, but the information required is identical, so any mechanism capable of doing what looks like one, from the user's perspective, can implement what looks like the other.

The obvious implementation when conceptualized each way may differ, given other elements of a language or its implementation design, but that's not an inherent difference, and efficiencies in the implementation have no necessary reflection in language-level features (and supporting any particular language-level feature isn't a guarantee of an efficient implementation).


> only one finger on your "out" hand

I think Python's way isn't amazing, but it's not too bad either:

    d, r = divmod(100, 3)
It's a bit uglier in C++, but you could make this work:

    int d, r;
    refs(d, r) = divmod(100, 3);
The standard library has std::tie in <tuple> for exactly this, since C++11. (I wrote my own for C++98 at one point.)

It's clear you like stack based languages, and I certainly admit they can be concise.


Yes, I think the way Forth and PostScript do that is good, compared to the other ways (there are other good things about Forth, too). And then, in assembly language, it may depend on what instruction set is used and how call frames are organized; Glulx allows only one return value, and the same goes for Z-machine code.


There's a proposal to support multiple value returns in WebAssembly!

Multi-Value All the Wasm (hacks.mozilla.org)

https://news.ycombinator.com/item?id=21596965

https://hacks.mozilla.org/2019/11/multi-value-all-the-wasm/


> It's a little frustrating that the hammer is changing the shape of our hands, and not vice versa.

the "hammer" in this case being "doing things in causal order" - hard to find a different tool in the shed.


I don't understand what you mean by "causal" in this context. A.f(B) is just syntax sugar around f(A, B) unless dynamic dispatch comes into play. Then it's usually syntax sugar around something like:

   A._vtable[f](A, B)
It really is just a function of two arguments, even when the privileged first argument looks like it's outside of the parens.
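
In Python the desugaring is directly observable, which makes the point concrete: a.f(42) really is a two-argument function looked up in the type's attribute table.

    class A:
        def f(self, b):
            return ("A.f", b)

    a = A()
    print(a.f(42))                        # ('A.f', 42)  method syntax
    print(type(a).f(a, 42))               # ('A.f', 42)  same function, receiver explicit

    # The "_vtable" is just the type's attribute table:
    print(type(a).__dict__["f"](a, 42))   # ('A.f', 42)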


> I don't understand what you mean by "causal" in this context.

I mean that if you want contextual autocompletion (e.g. having only the relevant functions shown, given the type of an argument), then at some point you have to give your IDE that type information before typing the function. However you look at it, it will always be [context] [function] instead of [function] [context].

> A.f(B) is just syntax sugar around f(A, B) unless dynamic dispatch comes into play.

this is entirely an implementation detail of your operating system ABI and of the way OO languages compile down to it, and in no way a "general truth" (of course there is an equivalence between the two). There were CPUs with hardware instructions for OO method calls in the '80s, for instance; you may want to call that syntax sugar around electrons, but I would respectfully disagree :-)

(even then, for instance Microsoft has the __thiscall calling convention for C++ methods which differs from __cdecl used for C functions - so even there there are fundamental differences between a.f(b) and f(a, b) as arguments will be passed differently to the method.)


I just don't use overloads; I don't even use a language that has overloads.


What language? Unless it's Ocaml, I'll be surprised if addition isn't overloaded for ints and floats at the least. Most people are fine with some overloading... To each their own though.


Yes but that doesn't really apply to your point. Addition is built in. It doesn't affect completion, and barely affects type checking. The "overload" is fixed and generally well understood. It's almost not an overload at all if you view it as an operation on the set of all integers. There is only a clamping in the end that depends on the size of the type (unless it's unlimited precision arithmetic on BigNums)


I’d say infix syntax and somewhat asymmetric method lookup/overloading rules are the bigger factor. Notwithstanding that it matches the implementation of virtual dispatch.


Yeah, both Rust and Go also use the method syntax to provide dynamic dispatch, and it's simpler for them since this dispatch is chosen (presumably vtable style) based on the privileged first operand.

However, I think using methods to provide infix syntax is a real mistake. For instance, if I want to make an infix operator which allows me to subtract mytype and yourtype, I can do the following:

    mytype - yourtype; // mytype.sub(yourtype)
However, the opposite requires me to add methods to yourtype:

    yourtype - mytype; // yourtype.sub(mytype)
Python gets around this by having 'r' versions of functions which get invoked if yourtype refuses to acknowledge my type. However Go and Rust could resolve which binary function to use statically, but they don't want to support overloading unless it's a method on the first object.
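
Concretely, the Python mechanism mentioned above looks roughly like this (MyType is a made-up example): if the left operand's __sub__ returns NotImplemented, Python retries with the right operand's __rsub__, so int - MyType can still be handled from MyType's side.

    class MyType:
        def __init__(self, v):
            self.v = v

        def __sub__(self, other):          # handles  MyType - int
            if isinstance(other, int):
                return MyType(self.v - other)
            return NotImplemented

        def __rsub__(self, other):         # handles  int - MyType, after int gives up
            if isinstance(other, int):
                return MyType(other - self.v)
            return NotImplemented

    print((MyType(10) - 3).v)   # 7, via MyType.__sub__
    print((10 - MyType(3)).v)   # 7, via int.__sub__ -> NotImplemented -> MyType.__rsub__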


Infix operators are out of scope of my comment. I am only referring to method invocation syntax itself, x.a(y, z).b(v), being infix.


I misunderstood then. However, if you just meant that people prefer method syntax because method syntax is infix, then it seems to come full circle to asking why they like infix syntax for methods... Doesn't really matter - it is what it is.


Well, if you imagine some prefix method syntax -- let's say, foo[a](b, c, d) instead of a.foo(b, c, d) -- it misses out on one of the advantages, in that method chaining is easier to read. I mean this is a nice feature for function call syntax in general and the benefits of infix syntax is not specific to methods. This is one reason why some new languages make f(a, b, c) and a.f(b, c) produce the same AST. Anyway, I wanted to separate the effects of infix positioning of the method name from those of having distinguished syntax for method invocations, which with static dispatch still carries other advantages.


It won’t be dynamically dispatched in Rust unless it’s a trait object.


I think your point is to emphasize there is no additional cost to method syntax. I wasn't trying to imply there was, only that when you want dynamic dispatch, you can only get it from method syntax.

Another interesting asymmetry, which I think shows a weird favoritism for method syntax is that Rust allows both of these:

   object_of_type_A.foo()
   object_of_type_B.foo()
But not both of these:

   foo(object_of_type_A)
   foo(object_of_type_B)
Ignoring dynamic/static dispatch for the moment, you can overload if you use method syntax, but not if you use function syntax. For me, I would prefer the latter, but I think I'm clearly in the minority.


In a dynamically-typed message-passing system, inheritance is just automatically delegating messages to another actor and possibly modifying the results. Erlang may not have explicit inheritance, but people almost certainly implement inheritance-like actor structures to achieve extensible message dispatch.


> inheritance is just automatically delegating messages to another actor and possibly modifying the results.

I don't think this is true. Objects can message themselves as part of base-class code; implementation inheritance means that this has to involve a level of indirection and dispatch, not just explicit delegation.


This isn't always implicit, though: python's explicit self is still "object-oriented"


> completely wrong and ruined our perception of OOP.

or maybe Erlang is its own thing, and OOP (as most people think of it when talking about it) is the set of practices followed in languages descended from the cross between C and Smalltalk.

This explains how every introductory OOP course using Java, C#, Python, etc. will list some sort of OOP principles, always including something along the lines of "objects talk to each other through message passing", which is hand-waved away as simply something to do with taking other objects by reference:

  // taking a reference
  bar.someMethod(foo)
 
Or some form of import method

  import Foo;
  
  class Bar {
    someMethod() {
       // init the import
       foo = new Foo() 
       // ... do something with it
       foo.someMethod()
    } 
  }
which is worse, since you need a huge disclaimer to explain that in practice you also need to account for dependency injection and for mocking.


Go's and Rust's OOP view of the world is also possible in C++, Java, Python, and C#.

It is a matter of actually learning about OOP concepts in general, and language features in particular.

Swift's protocol-oriented programming actually goes back to Objective-C protocols (Java's inspiration for interface types) or Smalltalk traits (post-Smalltalk-80).


Veteran Smalltalker-turned-modern-iot-embedded-polyglot here.

OOP is difficult to judge because it became such a big envelope. Alan Kay famously said in 1997, "I invented the term Object Oriented Programming, and this [C++ and Java] is not what I had in mind."

He would later tell the Smalltalk community that even that wasn't spot on, only an early step toward what he was really after in his visionary's quest: to create a DynaBook that would increase world peace (visionaries always reach big).

In my own journeys, my own personal conviction came to be that the "paradigm shift" with the OO movement was to learn to bind behavior to data. This (IMO) fits Kay's ideas about cellular biology and the inspiration he embraced with his ideas. It's all about binding behavior to data.

Everyone should read that Byte magazine issue (because apparently 500+ pages was a magazine), especially Peter Deutsch's section on block closures. Smalltalk did more with closures than any other language I've used since, because closures are objects too.


Superdistribution was a pretty good book. I guess we went in a really different direction after the Visual Basic component years. I keep coming back to Superdistribution, Visual Basic components (VBX/OCX), and the concepts from Mirror Worlds[1] (tuple spaces), and wonder if there are still some paths worth exploring down those roads.

1) Mirror Worlds: or the Day Software Puts the Universe in a Shoebox...How It Will Happen and What It Will Mean


Nice book, but it's too bad David Gelernter's brain went bad.

https://www.washingtonpost.com/news/speaking-of-science/wp/2...


Given the false reporting done by the Washington Post on the Covington kids when they had video of the incident, I place no value in their reporting. Thanks for dragging politics into this discussion.


Except for the fact that the Washington Post's reporting in the vast majority of cases, including Watergate and the Iraq War and Ukrainegate, is historically dead-on accurate and excellent and award-winning, and when they DO make mistakes, they actually admit it and correct it, as they already did in the case you're complaining about:

https://www.washingtonpost.com/nation/2019/03/01/editors-not...

Your attempt to dismiss everything they write out of hand because of one mistake they already admitted and corrected, and instead mindlessly parroting the propaganda of a pathological liar who never admits any of his many mistakes, and has peddled at least 13,435 documented lies during his term, is the very definition of dragging politics into this discussion, and it's intellectually dishonest of you to project like that. You're the one who first dragged a demented political hack-job into this discussion, who was angling for a job in the Trump administration sabotaging science, denying climate change, and promoting Intelligent Design.

https://whyevolutionistrue.wordpress.com/2019/05/17/computer...

>Computer scientist David Gelernter drinks the academic Kool-Aid, buys into intelligent design: I’ve pondered at great length how a man can be apparently as intelligent as Gelernter, yet so susceptible to the blandishments of Intelligent Design—and so ignorant of the evidence that refutes it.

Honestly: Do you also choose to deny anthropogenic climate change (and Darwinian evolution for that matter) and push Intelligent Design, just like Gelernter does, and to gullibly believe Putin's propaganda about Ukraine interfering in the elections instead of Russia, just like Trump does?

https://yaledailynews.com/blog/2017/01/25/gelernter-denies-m...

>Gelernter, potential science advisor to Trump, denies man-made climate change: “For human beings to change the climate of the planet is a monstrously enormous undertaking,” Gelernter said. “I haven’t seen convincing evidence of it.”

https://en.wikipedia.org/wiki/David_Gelernter#Controversial_...

>David Gelernter does not believe in anthropogenic climate change. In July 2019, Gelernter challenged Darwin's theories.

By the way, Gelernter's also a patent troll sell-out, whose ideas are unoriginal:

https://arstechnica.com/tech-policy/2016/07/apple-will-pay-2...

>Apple will pay $25M to patent troll to avoid East Texas trial

>When Mirror Worlds ran out of appeals, it gave up and sold its patent—to another patent troll called Network-1 Security. In 2013, Network-1 created a similarly named LLC, this time called Mirror Worlds Technologies, and filed another lawsuit (PDF) in the Eastern District of Texas. The same patent, No. 6,006,227, was used to sue the same target, Apple.

>When Apple started to come out with features like Cover Flow and Time Machine, Gelernter believed his own ideas being used. "I know my ideas—our ideas—when I see them on a screen,” he told the New York Times in 2011, while his case was on appeal.

To address your question: You may "wonder if there is still some paths to look at down those roads", but if you follow the path that leads to Mirror Worlds, you'll get sued for patent infringement by a troll.

So exactly which facts of that article do you disagree with? And where is your proof that what the Washington Post and Yale Daily News and Wikipedia and many other sources (including himself) say about Gelernter is false, and that he does actually believe in anthropogenic climate change and Darwinian evolution, in spite of his own quoted words? Or that he was the first person to invent the idea of presenting documents in chronological order? Or is it all based on your unsupported uninformed false opinion that everything the Washington Post says is "fake news"?

Don't even bother answering if you don't have any proof.


This article was an excellent read. Like other commenters, it reminded me of Clemens Szyperski's "Component Software" book. While a component model of software production has succeeded in some niches, it has (so far) failed to become a central organising principle for large scale software production. The article suggests that new micropayment mechanisms such as blockchain might enable Brad Cox's vision of pay-as-you-use software components. But I think this misses the point. I would argue that software differs from physical goods in important ways that make it a poor fit for this model. In particular, software is cheap to modify, evolve and customise, but expensive to specify independent of implementation (cf. Refactoring, Agile processes versus ISO Standards Development.)

One issue that Szyperski's book examined was the composition model (i.e. object model: COM, CORBA, JavaBeans, etc) used for defining interfaces between software components and how those interfaces can be composed. The idea of standardizing interfaces and composition mechanisms does not get so much attention today. It seems that things are currently balkanized into language communities, each doing their own thing.

The industrial manufacturing concept of "interchangeable components" comprises two things: (1) standardized specifications, (2) multiple independent manufacturers who are able to independently implement those specifications. We do have this kind of practice in, for example, the specification and implementation of the C++ standard library. But that's not how most software is developed, and it is commonly held that it is not how most software should be developed. On the other hand, we do have "component reuse" in the form of software libraries - but the interfaces are unique and idiosyncratic to each library, not standardised, and there is usually a single "manufacturer".


I don't think this is a failure of OOP itself, nor does it come down to any language's shortcomings.

I think it's mostly a market failure. The unit of trade was the complete software product, and/or services/apps these days. Mostly because that's what the end users actually see on the desktop; the atomic unit of software is huge and perceptible.

Once upon a time, it looked like it could be different. Component-oriented systems arose. COM, CORBA, CommonPoint, Taligent, D'OLE etc. (Heck, one might even argue for Amiga filetypes and embedded X windows to belong in the same category)

But apart from UI widgets (i.e. Windows controls for VS or Delphi), it never seemed like there was a marketplace for it. It was also quite hard to sell to managers, as it's something you can't put in a box or slap a big trademark on.

So we ended up with fast food joints and black-box-systems, no ICs, condiments, recipes etc.

Open Source could've been an answer/alternative, but the OSS world mostly copies stuff from the programmer's day jobs.

(Gnome actually started out with CORBA and KDE had some component structure, too, but that doesn't appear to be anyone's focus, compared to copying notifications and doing flat redesigns)


Two interesting ideas that helped me get insight into OO:

1) An object is a poor man's closure; and a closure is a poor man's object.

2) Most object-oriented programming is done with mutable state, which muddles what OO is. You don't really need state to have objects.

In a closure, the lexical scope is preserved, and the functions defined in it can access it any time. This is very similar to what an object does: https://stackoverflow.com/a/2498010

When objects are immutable, one gets to truly appreciate the elegance of grouping both data and their functions together: https://dev.realworldocaml.org/objects.html#scrollNav-3
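
A small Python illustration of (1), the classic closure/object koan: the captured variable plays the role of private state, and the returned functions play the role of methods.

    # The closure version: `balance` is private state, the two functions are "methods".
    def make_account(balance=0):
        def deposit(amount):
            nonlocal balance
            balance += amount
            return balance
        def current():
            return balance
        return deposit, current

    deposit, current = make_account()
    deposit(50); deposit(25)
    print(current())        # 75

    # The object version of the same thing.
    class Account:
        def __init__(self, balance=0):
            self._balance = balance
        def deposit(self, amount):
            self._balance += amount
            return self._balance
        def current(self):
            return self._balance

    acct = Account()
    acct.deposit(50); acct.deposit(25)
    print(acct.current())   # 75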


RE 2 do you mean you don't need to have mutable state to have objects? Unless I have a huge misunderstanding of what you mean by 'state', it seems like state is necessary for objects to exist..


I think "state" is simply an overloaded term and that causes misunderstandings here. I believe by "state" GP meant specifically mutable state on the level of the programming language abstraction.

"Having state" often implies mutability, i.e. to be in a given state means the same thing can be in different states/configurations as well.

On the other hand people often also use the word "state" to refer simply to a "current state" of something at a given time, and then under this interpretation there is state in immutable objects.

As you already know, "representing (real world) state at some step/time" does not require mutability at the language level. These immutable objects are state snapshots of a hypothetical mutable object. Operations on them cannot mutate them, but instead derive a new snapshot that represents the new state at a given time after the operation. So working with those snapshots does represent state and state changes and models real-world state, but the individual snapshots/immutable objects cannot be mutated.

Whether those immutable objects deserve the moniker "object", I don't really have an opinion on, but I wouldn't outright deny it.

Are they useful in similar ways that mutable objects are? I'd say so.

Immutable objects can still get you polymorphism/subtyping and encapsulation, features often ascribed to object oriented programming.


I think the grandparent comment is saying that you do not need your state to be mutable; that writing software where your state is immutable is useful and elegant.


Yeah, that is what I was trying to clarify. I guess my comment was equally confusing, because I am getting responses trying to fill in words that the OP may or may not have left out accidentally.


You don't. See http://wcook.blogspot.com/2012/07/proposal-for-simplified-mo... by William Cook. However, objects help to tame mutable state.


Can you point me to a language or some code snippets with objects with no state --- in my head this just comes back to some sort of namespace for static functions. At that point, you need no object, so what's the point of having one. If you are saying that having a reference to an object is not 'state' then OK, but that makes no sense to me. I'm not trying to be pedantic, just trying to understand if OP had a typo or there is a whole world out there that I can't conceive of.


Again, in discussions like this state is often used to refer to mutable state (an identical thing itself mutates).

Having a conceptual reference (oh god, another overloaded term) to something does not imply that there is mutable state at the programming language level, as long as the reference is immutable.

    def times(x, y)
      if x.is_one?
        y
      elsif x.is_zero?
        x
      else
        y.add(times(x.decrement, y))
      end
    end
a) x and y are not mutated. b) you can implement the called methods on various immutable object types, and the code would work (assuming we send in the same types, and they obey certain protocols but that is yet another different concern):

    class WeirdString
      def initialize(v)
        @val = v
      end

      def val
        @val
      end

      def is_zero?
        @val.empty?
      end

      def is_one?
        @val.length == 1
      end

      def decrement
        WeirdString.new(@val[0..-2])
      end

      def add(ws)
        WeirdString.new(@val + ws.val)
      end
    end

    # times(WeirdString.new('abc'), WeirdString.new('hi')).val → 'hihihi'
Now, you may say the fact that the object has an instance variable means there is state and you'd be right depending on how you define state, but at any rate the state of a given instance of WeirdString is not mutable.


If that's what you mean by "no state", then I misunderstood, but I don't especially care what you call a namespace for static functions. (A module?) I use immutable objects frequently, and they're a different thing, just as first-class nested functions are a different thing from standard C functions.


IMO that post is specifically calling out the mutable part, not claiming to have objects completely without state.


One book[1] I read explained this in detail, amongst other similar concepts. It was really interesting and helped me think of how functional concepts could work in OOP languages.

1: https://www.elegantobjects.org/


I agree with a lot of those suggestions, but I am still scratching my head at what it means to have "objects with no state"; at that point it seems like we just have static methods in some namespace, which is the object. Having immutability does not get rid of state existing.


Objects do have state, but they can be either mutable or immutable. The link in my original comment describes an immutable stack (written in OCaml, but fairly readable) where both push and pop return a new stack object. The original object is never changed. In a mutable stack, push would return void (or unit in OCaml), which signifies a side effect; pop would return the element, and the object would change in place.
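
A rough Python analogue of that OCaml example (simplified, just a sketch): push and pop never touch the receiver; they hand back a new stack.

    class ImmutableStack:
        def __init__(self, items=()):
            self._items = tuple(items)

        def push(self, x):
            # Returns a NEW stack; self is left untouched.
            return ImmutableStack(self._items + (x,))

        def pop(self):
            # Returns (top element, NEW stack without it); again, no mutation.
            if not self._items:
                raise IndexError("pop from empty stack")
            return self._items[-1], ImmutableStack(self._items[:-1])

        def __len__(self):
            return len(self._items)

    s0 = ImmutableStack()
    s1 = s0.push(1).push(2)
    top, s2 = s1.pop()
    print(top, len(s2), len(s1), len(s0))   # 2 1 2 0 -- s1 and s0 are unchanged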


Yeah, I'm familiar with the pattern, but I wouldn't say that this is a 'stateless' pattern because even having that object is 'state'. I don't think 'stateless' makes sense if objects are involved. I would agree that it's immutable.


> Even having that object is state.

There is mutable state on the conceptual level, i.e. in what is modelled, but not in the language-level abstraction of an object.

There is no mutable data/state at the language level, at least as things are understood in pure functional programming. But of course mutable state in the real world (for lack of a better term) can still be modeled there.


"Objects" in OOP, as in "let's couple data with behavior" necessarily means mutable state, otherwise they are just types. Use only immutable data; write abstract classes only; do polymorphism with interfaces and you'll end up with something very similar to what people mean by 'type-level programming' in Haskell.



