
> A few dedicated clever people and idealists and dreamers talking about ontologies and building things I don't understand

I was briefly deeply interested in ontologies via OWL and I suspect Prolog has the same issues that I think plague ontologies in general.

They are a fantastic tool for a system complex enough to be nearly useless. Modelling an ontology for a reasonably complex domain is unreasonably difficult. Not because the tools are bad, but because trying to define concrete boundaries around abstract ideas is hard.

What is a camera? A naive attempt would say an item that takes pictures, but that would include X-rays. Are deep-space radio telescopes cameras? Trying to fix those issues then causes second order issues; you can say it’s something that takes images from the visible light spectrum, but then night vision cameras aren’t cameras anymore.
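The definitional whack-a-mole above can be made concrete. Here is a minimal Python sketch (the device table and predicates are invented for illustration, not from any real ontology) showing how each attempted rule for "camera" either over- or under-matches:

```python
# Toy device descriptions; attributes are purely illustrative.
devices = {
    "dslr":            {"takes_images": True, "spectrum": "visible"},
    "xray_machine":    {"takes_images": True, "spectrum": "xray"},
    "radio_telescope": {"takes_images": True, "spectrum": "radio"},
    "night_vision":    {"takes_images": True, "spectrum": "infrared"},
}

def is_camera_v1(d):
    """Naive rule: 'an item that takes pictures'."""
    return d["takes_images"]          # wrongly includes the X-ray machine

def is_camera_v2(d):
    """Second attempt: 'takes images in the visible spectrum'."""
    return d["takes_images"] and d["spectrum"] == "visible"
                                      # wrongly excludes night vision

print(is_camera_v1(devices["xray_machine"]))  # True  -- too broad
print(is_camera_v2(devices["night_vision"]))  # False -- too narrow
```

No refinement of the predicate fixes this, because the boundary itself is contested; that is the modelling problem, not a tooling problem.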

The reasoning systems work well, they just don’t solve the hard part of designing the model.




I had similar discussions with people who wanted to encode published research into ontologies. I would ask researchers what they thought - the answer was always "great idea". I would then follow up with: how would you use it? No response. I finally concluded that it would never happen.

1. No one wanted it enough to pay for it to happen.

2. There is a constant turnover of ideas coming and going, and the ontology could never be updated often enough to stay useful. Again, no one would pay for that anyway.

Tools like LLMs seem to fill that role now. I would like to see Prolog integrated with LLMs in some way (my imagination fails me as to how that would happen).


A theorem prover for the medical literature:

https://github.com/webyrd/mediKanren

http://minikanren.org/workshop/2020/minikanren-2020-paper7.p...

Not Prolog, though. But it gives an idea of the goals behind the classification of science papers.


How would you use it? For searches.

If I want to find something in the brain but not in bone structures. If I want to find something in a kind of cell that has a nucleus.

They are also extremely useful for automated annotation. Your automated system may annotate with an upper (more general) term because it doesn't have enough information to be more precise. That's already a big help for a human who comes along later to assign a more precise term.
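The "in the brain but not in bone" style of search reduces to walking subsumption (is-a) links. A small Python sketch, with an invented mini-ontology (the terms and edges are illustrative, not taken from any real anatomy ontology):

```python
# Hypothetical is-a edges: child term -> parent term.
is_a = {
    "hippocampus":  "brain_region",
    "brain_region": "neural_structure",
    "femur":        "bone",
    "skull":        "bone",
}

def ancestors(term):
    """Collect all terms reachable by following is-a links upward."""
    out = set()
    while term in is_a:
        term = is_a[term]
        out.add(term)
    return out

def search(terms, include, exclude):
    """Keep terms subsumed by `include` but not by `exclude`."""
    return [t for t in terms
            if include in ancestors(t) and exclude not in ancestors(t)]

print(search(["hippocampus", "femur", "skull"],
             include="neural_structure", exclude="bone"))
# -> ['hippocampus']
```

The same ancestor walk is what lets an annotator safely attach an upper term: any query against a parent concept will still find the imprecisely annotated item.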

We are at a convergence of technologies: ontologies, graphs, LLMs and logic programming. A lot of people were too early on this and were discouraged from pursuing it further by people who couldn't grasp why it was so important.


This is why Lenat and CYC settled on micro-theories. They found it impossible to build a useful universal ontology, so they had to fracture it along domain boundaries.


I was just pondering if the Prolog universal quantifier would be applicable to reasoning about Cyc frames. Does your comment imply it's not?


I'm somewhat familiar with Cyc but I'd never heard of this development of "micro-theories". It makes perfect sense though - to generalize hugely, structured ontologies break as soon as the second person tries to use them, or as soon as they are applied to a slightly different domain.

Anyway, Prolog should be suitable for reasoning over them, but it is only grounded in the "micro-theory" part.


Your camera example demonstrates that human knowledge is loosely structured and formalized in general, so you can't create a strict ontology. One way to work around this is to assign a confidence score to statements, so you end up with something like: this Nikon device is likely a camera, and an X-ray machine is unlikely to be a camera, based on the current world model.
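The confidence-score idea can be sketched in a few lines of Python. Statements become weighted triples instead of hard facts; the subjects, scores, and the 0.5 threshold below are all invented for illustration:

```python
# Hypothetical scored statements: (subject, relation, object) -> confidence.
statements = {
    ("nikon_d850",      "is_a", "camera"): 0.98,
    ("xray_machine",    "is_a", "camera"): 0.15,
    ("radio_telescope", "is_a", "camera"): 0.30,
}

def likely(subject, relation, obj, threshold=0.5):
    """True if the world model holds the statement above the threshold."""
    return statements.get((subject, relation, obj), 0.0) >= threshold

print(likely("nikon_d850", "is_a", "camera"))    # True
print(likely("xray_machine", "is_a", "camera"))  # False
```

The hard part, of course, shifts to where the scores come from; probabilistic logic systems and, lately, LLM-derived judgments are both candidates.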


I don’t see an issue with saying “X-ray photography machines, and deep-space radio telescopes, are (or, in the case of the telescope, at least contains-a) cameras”. They just aren’t ordinary cameras of the sort that a typical person might take a picture with.

I think most of the reasoning you would want to do with a concept of “camera” that excludes X-ray machines and telescopes, but includes night-vision, could be handled with “portable camera”?

Hm, I guess you probably want to include security cameras though..

Ok. “Portable cameras or security cameras”.


A universal ontology cannot have any notion of an "ordinary" camera, not because of expressive limitations but because it's subjective.

Is a CAT machine a camera? Maybe only its sensor and the computers that reconstruct images? Maybe just the sensor? It mostly depends on your location in the supply chain.

Is a box with a projection plane and no means to capture images a camera? Before about 1830, definitely (and then making photographs became a simple upgrade for your "camera obscura").


I don’t think the “before 1830” case is really an issue. That’s just an example of the meaning of words changing.

I didn’t mean that “ordinary camera” should be a term in the formal ontology. I meant something more like “If you want to formalize the notion of ‘a camera’, it should include the CAT machine and telescope. If you want to address only the types of cameras that you think of as ordinary cameras, you should add extra qualifiers to get at what you mean.”

(Where, “what you mean” might not get the term “ordinary camera”, but something more clear and descriptive.)


Yes, I think that matches the experience in, for example, what we called (or call) data science: most of the time is spent on ETL rather than on ML methods. In a real company, the difficulty of linking data is not technical; it is time- and resource-consuming.


Ontology: Study of the nature of being, becoming, existence or reality, as well as the basic categories of being and their relations (philosophy)

What does that have to do with this?

Is there some use of "ontology" in logic I have not heard of?


It would be this version of ontology: https://en.wikipedia.org/wiki/Ontology_(information_science)

Loosely speaking, ontologies are categories of objects defined by their attributes and relationships to other things. Where a hierarchy is a branching structure in which each item can appear only once, an ontology does not require everything to stem from a single "root" node, and items can appear in more than one place.

It's a way of working around the fact that hierarchies can't model some things very well. E.g. "bipedal" is an attribute that can apply to both animals and robots; where does it go in a hierarchy so that it can apply to both without also implying that robots are animals, or vice versa?
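The bipedal example is exactly the case a multi-parent graph handles and a strict tree cannot. A minimal Python sketch (concepts and edges invented for illustration):

```python
# Hypothetical concept graph: each concept may have several parents,
# which a strict tree (one parent per node) forbids.
parents = {
    "human":       {"animal", "bipedal"},
    "ostrich":     {"animal", "bipedal"},
    "honda_asimo": {"robot",  "bipedal"},
}

def instances_of(concept):
    """All concepts that list `concept` among their parents."""
    return sorted(c for c, ps in parents.items() if concept in ps)

print(instances_of("bipedal"))  # ['honda_asimo', 'human', 'ostrich']
print(instances_of("animal"))   # ['human', 'ostrich']
```

"bipedal" subsumes both the robot and the animals, yet nothing here implies that robots are animals: the two parent sets stay independent.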


Domain Driven Design - one of those things, like Agile, that triggers all sorts of holy wars - has a lot of overlap with the general concept of ontologies. I've seen some teams formalize all communication between microservices through a shared "ontology", which in reality was essentially a giant XML-based description of the valid nouns and verbs that events could use to communicate between services.

Additionally, there's a good deal of overlap with the "semantic web" concept, which itself had a great deal of hype with very limited (but important) application - even the W3C has published content on how all three fit together: https://www.w3.org/2001/sw/BestPractices/SE/ODA/


Maybe more in philosophy and classic general AI. Basically ontologies are systems of categorizing and classifying knowledge. E.g. if you want to reason about self driving, you would have an ontology that lets you separate traffic signs from billboards.


In this context ontology means common vocabulary/categories.



