Hacker News | lyxsus's comments

  Location: Buenos Aires, Argentina
  Remote: Yes
  Willing to relocate: Depends
  Technologies: JavaScript/TypeScript, Node.js, React, GraphQL, Relay, Apollo, Next.js, Python, PostgreSQL, MongoDB, CouchDB, DynamoDB, etc. AWS/GCP. RDF/OWL/RDFS, LangChain/LangGraph.
  Résumé/CV: https://drive.google.com/file/d/1m4A_XFaPKFqUky2n3WdGsFpXWuALTPd2/view
  Email: lyxsus@gmail.com
Hi, my name is Sergei; 20 years in software development. I like a challenge and enjoy working with interesting new technologies.


This is incredible. I really want it to take off.


tbh, Tailwind feels like the worst thing to happen to frontend dev since CoffeeScript.


Have you seen "The modern way to write TypeScript"? https://github.com/DanielXMoore/Civet

*runs for cover*


omg. ok, adopting some F# features is an interesting idea, but why are they trying to make that syntax happen?


hehe, I guess they loved CoffeeScript and/or minimalism. But if I want F# features I'd rather use F# and compile to JS with Fable, or simply use ReScript, which is more sound than TypeScript anyway. If I have to learn a new syntax, why not get a better type engine in the process?


Arguably, CoffeeScript was a big push that moved the JS spec forward into ES6 and beyond.


There are a lot of wrong takes on the topic in this thread, but this one I like the most. When someone starts talking about "agreeing on a single schema/ontology", it's a solid indicator that they need to go back and RTFM (which, I agree, is a bit too cryptic).

The point here is that in the semantic web there are supposed to be lots and lots of different ontologies/schemas by design, often describing the same data. The SW spec stack has many well-separated layers, and OWL/RDFS exist precisely to address that problem.


"The point here is that in the semantic web there are supposed to be lots and lots of different ontologies/schemas by design, often describing the same data."

Then that is just another reason it will fail. We already have islands of data. The problem with those islands is not that we lack a unified expression of the data; the problem is that the meaning is isolated. The lack of a single input format is little more than an annoyance, the sort of thing that tends to resolve itself over time even without a centralized consortium, because that's the easy part.

Without agreement, there is no there there, and none of the promised virtues can manifest. If the semantic web is what you say it is (which certainly doesn't match what everyone else says it is), then it is failing because it doesn't solve the right problem, though that isn't surprising, because that problem isn't solvable.

If what you describe is the semantic web, the Semantic Web is "JSON", and as solved as it ever will be.

A "knowing wizard correcting the foolish mortals" pose would be a lot more plausible if the "semantic web" had more to show for its decades: actual accomplishments even remotely in line with the promises constantly being made.


so if it tries to have a unified ontology, that's why it's destined to fail, but if it's designed to work with many small ontologies… that's also why it will fail! lol, you can't have it both ways.

In SW, the "semantic" part is subjective to an interpreter. You can have different data sources, partially mapped using OWL to the ontology that an interpreter (your program) understands. That lets you integrate new data sources independently of the program: seamlessly if they use a known ontology, or by creating a mapping from their concepts into a known ontology (which you would have to do in any other approach anyway). So in theory, data consumption (and reasoning) capabilities grow as your data sources evolve.
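The mapping step can be sketched in plain Python (the property names and sources here are invented for illustration; a real SW stack would use an RDF library and owl:equivalentProperty, but the mechanics are the same):

```python
# Two independent sources describe people using different "ontologies"
# (property names). A small mapping lets a consumer that only understands
# ontology A read both, without touching either source.
source_a = [("alice", "a:name", "Alice")]
source_b = [("bob", "b:fullName", "Bob")]

# The mapping the interpreter knows about (like owl:equivalentProperty).
mapping = {"b:fullName": "a:name"}

def integrate(*sources, mapping):
    """Rewrite foreign properties into the ontology our program understands."""
    graph = []
    for triples in sources:
        for s, p, o in triples:
            graph.append((s, mapping.get(p, p), o))
    return graph

graph = integrate(source_a, source_b, mapping=mapping)
names = sorted(o for s, p, o in graph if p == "a:name")
print(names)  # ['Alice', 'Bob']
```

Adding a third source then only requires a new mapping entry, not a change to the consuming program.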

> If what you describe is the semantic web, the Semantic Web is "JSON", and solved.

It has nothing to do with JSON. JSON-LD, XML, Turtle, N3, RDFa, microdata, etc. are serialisation formats, while RDF is the data model. That's another interesting point: half of the people here talk only about formats and not the full stack, which doesn't make for a reasonable discussion.

> which certainly doesn't match what everyone else says it is

oh, I know, and it's upsetting.


> if it tries to have a unified ontology, that's why it's destined to fail, but if it's designed to work with many small ontologies… that's also why it will fail! lol, you can't have it both ways.

You're only supposed to say "you can't have it both ways" about contradictory things. These aren't contradictory: it can both be a hopeless endeavor because it is impossible to agree on ontologies, and a useless endeavor if you don't agree on ontologies.


Oh, I would love to see the look on your face when, just 100-200 years from now, it becomes mature enough for "web scale".


Just 200 years around the corner.


Maybe 300. But no longer, I'm confident! Do you want to be left out in a couple centuries? You better get on the train now.


> The point here is that in the semantic web there are supposed to be lots and lots of different ontologies/schemas by design, often describing the same data.

This is incredibly problematic for many reasons, not least the inevitable promulgation of bad data/schemas. I remember one ontology for scientific instruments in which I, a former chemist, identified multiple catastrophically incorrect classifications (I forget the details, but something like classifying NMR as a kind of chromatography. Clear indicators the OWL author didn't know the domain).

The only thing worse than a bad schema is multiple bad schemas of varying badness and not knowing which to pick, especially if there are disjoint aspects of each that are (in)correct.

There may have been advances in the few years since I left the space, but as of then, any kind of probabilistic/doxastic ontology was unviable.


That's a valid point, but I'm not sure the following problem has a technical solution:

> Clear indicators the OWL author didn't know the domain


It doesn't, which is exactly the problem. Ontologies inevitably have mistakes, and when your reasoning is based on these "strong" graph links, even small mistakes can cascade into absolute garbage. Plus, manual taxonomic classification is super time-consuming (ergo expensive). It also assumes there is very little nebulosity, when in fact you often don't even have a solid grasp of what is correct or incorrect. Then you have perspectives: there is no monopoly on truth.

It's just not a good model of the world. Soft features and belief-based links are a far better way to describe observations.

Basically, every edge needs a weight, ideally a log-likelihood ratio: 0 means "I have no idea whether this relation is true or false", positive indicates the edge is more likely true, and negative means it is more likely false.
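A toy sketch of what that could look like (the edges and probabilities below are made up purely for illustration):

```python
import math

def llr(p_true: float) -> float:
    """Log-likelihood ratio of an edge being true: 0 = no idea,
    positive leans true, negative leans false."""
    return math.log(p_true / (1.0 - p_true))

# Belief-weighted edges instead of hard "strong" links.
edges = {
    ("NMR", "is_a", "chromatography"): llr(0.05),  # almost certainly a misclassification
    ("NMR", "analyses", "chemicals"):  llr(0.99),  # strongly supported by observation
    ("NMR", "related_to", "imaging"):  0.0,        # no evidence either way
}

for (s, p, o), w in edges.items():
    verdict = "likely true" if w > 0 else "likely false" if w < 0 else "unknown"
    print(f"{s} {p} {o}: {w:+.2f} ({verdict})")
```

With weights like these, a bad classification degrades gracefully instead of poisoning every downstream inference.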

Really, the whole graph needs to be learnable. It doesn't really matter whether NMR is a chromatographic method; why do you care what kind of instrument it is? Instead, apply attributes based on behaviors ("it analyses chemicals", "it generates n-dimensional frequency-domain data").


Understood, thank you.

Yes, that's not solvable with OWL alone (though it might help a little) or with any other popular reasoner I know of. There are papers, proposals, and experimental implementations for generating probability-based inferences, but nothing you can just take and use; still, there are tons of interesting ideas on how to represent that kind of data in RDF and reason about it.

I think the correct solution in SW context would be to add a custom reasoner to the stack.
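A custom reasoner in this spirit could do probability-aware forward chaining over weighted triples. A minimal sketch (the rule, facts, and confidences are all invented for illustration): derived facts inherit the product of the confidences of the facts that produced them.

```python
# Weighted facts: (subject, predicate, object) -> confidence in [0, 1].
facts = {
    ("nmr", "instrument_of", "spectroscopy"): 0.9,
    ("spectroscopy", "subclass_of", "analytical_method"): 0.95,
}

def forward_chain(facts):
    """Apply one transitive rule until fixpoint:
    (a instrument_of b) & (b subclass_of c) => (a instrument_of c),
    with confidence = product of the premises' confidences."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b), w1 in list(derived.items()):
            if p1 != "instrument_of":
                continue
            for (b2, p2, c), w2 in list(derived.items()):
                if p2 == "subclass_of" and b2 == b:
                    key = (a, "instrument_of", c)
                    conf = w1 * w2
                    if derived.get(key, 0.0) < conf:
                        derived[key] = conf
                        changed = True
    return derived

result = forward_chain(facts)
print(result[("nmr", "instrument_of", "analytical_method")])  # ~0.855
```

A real reasoner would plug into the triple store's inference hooks rather than loop in memory, but the confidence-propagation idea is the same.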


I've been part of four commercial projects that used the semantic web in one way or another. All of these projects, or at least their semantic web parts, were failures, so I think I have a good idea of where the misunderstandings about the semantic web originate. The author does seem to have a good understanding and is right about the semantic web forcing everything into a single schema. Academia sells the straitjacket of the semantic web as a lifelong free lunch at an all-you-can-eat buffet, but instead you are sentenced to life in prison. Adopting RDF is just too costly, because it is never the way computers or humans structure data in order to work with it. Of course everything can be organised in a hypergraph; there is a reason Stephen Wolfram also uses this structure, they're just so flexible. At the end of the day I don't share the author's opinion that the semantic web has much of a future. I did my best, but it didn't work out; time for other things.


> semantic web forcing everything into a single schema

I don't think "forcing" is the right word here; "expects it to converge under practical incentives" would be closer. That's a gentler statement, and it reflects the fact that convergence doesn't actually have to happen for SW tech to work.

Also, the term "schema" is a bit off, because there's really no such thing there. You can have the same graph described differently, using different ontologies at the same moment, without changing the underlying data model, all accessible via the same interface. It's a very different approach.

> never the way computers or humans structure data in order to work with it

If you hadn't mentioned your experience, I would say you were confusing different layers of the technology, because a graph data model is a natural representation of many complex problems. But since you have, can I ask you to clarify what you mean here?

> Academia sells the straitjacket of the semantic web as a lifelong free lunch at an all-you-can-eat buffet

I disagree, because I in fact think that academia doesn't sell shit, and that's the problem. There's no clear marketing pitch, and I don't think they really bother, or are equipped, to make one. There's a lack of human-readable specs and docs; it's insane how much time you need to invest in this topic even just to estimate whether it's reasonable to consider using SW in the first place. The lack of a conceptual framework, walkthroughs, and tools, plus outdated and incorrect information, cuts the survival chances of an SW-based project by at least 100x. But it can really shine in some use cases, which unfortunately have little to do with the "web" itself.


RDF is just an interoperability format. You aren't supposed to use it as part of your own technology stack; it just allows multiple systems to communicate seamlessly.


Isn’t it similar to linear types in Haskell?


That's basically exactly the same idea, yes.

Haskell didn't invent linear types. E.g. Clean had them for ages to do I/O (Clean didn't use monads).


We’re not, but your point is still valid.


If you open the specification linked from that page ( https://wicg.github.io/sanitizer-api/#sanitizer-api ), you'll see that the Sanitizer constructor accepts a config param that does exactly that, and it will probably be extended before the spec is ready.


Location: Saint-Petersburg, Russia

Remote: Yes

Willing to relocate: Yes

Technologies: JavaScript/TypeScript, React (Relay/Redux/Apollo), PostgreSQL/MySQL/NoSQL, K8s, Python

Résumé/CV: http://lyxsus.github.io/Sergey_Antoninko_CV-2021-04.pdf

Email: sergey.antoninko@gmail.com

Full-stack developer (~15 years).


As a non-US citizen, I have 3 questions:

1. WTF

2. Is it even enforceable?

3. Does anybody know recent examples?


What do you suggest using instead? It's a bit too early to use RDF; people haven't caught up yet.


There is no good reason that the thing that 98% of people use SAML to do should involve XML.

