In the introspective human experience, classification comes from observation.

First, we have a unique exemplar: the first thing we see of whatever will become the "type".

Then, we notice a similarity of something else to the exemplar. The exemplar defines the "type", and this new object is classified as similar, usually with some amount of noted variance.

Once we notice a large number of similar observations, we tend to back away from the single exemplar toward a more nebulous classification defined by the mostly-present similarities within that rough group. The "exemplar" of the type is now a vaguer ideal, not any particular instance.

This is the point at which we try to nail down exact types, attempting to again make the exemplar of the type concrete, and fall short in ways similar to those the author brings up. The similarities we observed mattered because, in some particular context, they are what is important about the rough group of type members, while their differences are less relevant. In another context, those differences may become more important, and the similarities binding the classified population together under the defined type are no longer relevant.

The "type" is NOT a feature of the object, but emerges in the combination of a cloud of objects with the context in which the type is useful.

One common example is defining "car". Is a pickup truck a car? If not, what about an El Camino? An 18-wheeler? A van? Minivan? A 3-wheeled vehicle? Enclosed motorcycle variants? Race cars? Go-karts? Remote controlled cars?

Across the contexts of personal transport, convenience, repairs, insurance, speed/handling performance, etc., the similarities between vehicle types vary. Vehicle instances that are of a similar type in one context might not be in another. What exactly is meant by "car" with no additional context?
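To make that concrete, here is a toy sketch (the attributes, vehicles, and contexts are invented purely for illustration): whether two vehicles count as "the same kind of thing" falls out of which attributes the current context bothers to compare.

    # Toy sketch: "same type" depends on which attributes the context compares.
    # All attribute names, vehicles, and contexts below are made up.
    pickup  = {"seats": 3, "cargo_kg": 1000, "wheels": 4, "enclosed": True,  "licensed": True}
    sedan   = {"seats": 5, "cargo_kg": 400,  "wheels": 4, "enclosed": True,  "licensed": True}
    go_kart = {"seats": 1, "cargo_kg": 0,    "wheels": 4, "enclosed": False, "licensed": False}

    # Each context names the attributes it cares about.
    contexts = {
        "insurance": ["licensed", "enclosed", "seats"],
        "repairs":   ["wheels", "enclosed"],
        "racing":    ["wheels", "seats"],
    }

    def similar(a, b, context):
        """Fraction of context-relevant attributes on which a and b agree."""
        keys = contexts[context]
        return sum(a[k] == b[k] for k in keys) / len(keys)

    for ctx in contexts:
        print(ctx, "sedan~pickup:", round(similar(sedan, pickup, ctx), 2),
                   "sedan~go_kart:", round(similar(sedan, go_kart, ctx), 2))

In the repairs context the pickup groups tightly with the sedan and the go-kart half-fits; in the insurance context the go-kart isn't a "car" at all. Change the context and the grouping changes, with no fixed "car" type stored on any of the objects.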

I do a lot of AI work, particularly rational and symbolic. That kind of work needs to know, reason about, and explain particulars, not just return an unexplained impression of relevant associations, as much of the current AI momentum does (successfully, I might add). But the notion of a "concept" or a "type" is certainly contextual, and depends on which characteristics the cloud of potential exemplars offers to the current context. In my work I have discarded explicit programming-language-style "types", as far as the "understanding" part of the AI goes, in favor of a vaguer "concept" existing in a contextual cloud of exemplars, as part of a hybrid classical/symbolic + modern/statistical approach. (The underlying implementation that generates and compiles source code still has distinct data types, though.)
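A rough sketch of the shape this takes (heavily simplified and purely illustrative, not my actual implementation; the class, the similarity measure, and the example data are invented just to show the idea): a "concept" holds nothing but exemplars, and classification asks which concept's cloud best fits an observation under the current context's idea of what matters.

    # Sketch: a "concept" is just a cloud of exemplars; the "type" of an
    # observation emerges from (cloud, context), not from the observation.
    class Concept:
        def __init__(self, name):
            self.name = name
            self.exemplars = []          # concrete observed instances

        def add_exemplar(self, features):
            self.exemplars.append(features)

        def affinity(self, observation, relevant_keys):
            # Best agreement with any exemplar, counted only over the
            # attributes the current context says are relevant.
            def score(ex):
                shared = [k for k in relevant_keys if k in ex and k in observation]
                if not shared:
                    return 0.0
                return sum(ex[k] == observation[k] for k in shared) / len(shared)
            return max((score(ex) for ex in self.exemplars), default=0.0)

    def classify(observation, concepts, relevant_keys):
        return max(concepts, key=lambda c: c.affinity(observation, relevant_keys))

    car = Concept("car")
    car.add_exemplar({"wheels": 4, "enclosed": True, "licensed": True})
    kart = Concept("go-kart")
    kart.add_exemplar({"wheels": 4, "enclosed": False, "licensed": False})

    mystery = {"wheels": 4, "enclosed": False, "licensed": True}
    print(classify(mystery, [car, kart], ["licensed"]).name)  # "car" (insurance context)
    print(classify(mystery, [car, kart], ["enclosed"]).name)  # "go-kart" (shelter context)

The same observation lands in different concepts depending on which attributes the context makes relevant, which is the point: the symbolic layer reasons over these contextual groupings, while the generated source code underneath keeps its ordinary static data types.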



