I found myself cringing watching this talk. Berners-Lee seemed so nervous, stressed out. Certainly under strain, for whatever reason. I hope he is/was okay, you know, generally, and that it was just that big red timer and the VIPs in the audience, or something equally obvious. I watched the video a couple of days after Bill Gates' talk about malaria, and sure, maybe linked data will end up saving the world'n'all, but I did cringe when he invited the audience to shout "Raw Data Now!". Especially compared to Gates letting mosquitoes out into the audience, it just seemed silly. Mmm. Now I'm sounding all holier-than-thou, huh.
IMHO TimBL did a great job at TED. I dunno where you saw him stressed out - he certainly enjoyed it. Btw, linked data is mainly about URIs and HTTP and yes, RDF (for the data model part) and some lightweight vocabularies such as FOAF, SIOC, Dublin Core, etc. - check out http://webofdata.wordpress.com as well, I blog there about it ;)
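For a concrete taste of that stack, here's a minimal sketch using Python's rdflib (the personal URI and details below are made up):

    # Describe a person with FOAF and print it as Turtle, one of the
    # common RDF syntaxes (rdflib ships the FOAF vocabulary).
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    me = URIRef("http://example.org/people/michael#me")  # hypothetical URI

    g.add((me, RDF.type, FOAF.Person))
    g.add((me, FOAF.name, Literal("Michael")))
    g.add((me, FOAF.homepage, URIRef("http://webofdata.wordpress.com")))

    print(g.serialize(format="turtle"))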
Cheers,
Michael
PS: To be honest, I was also sort of surprised to see him calling on the audience to repeat after him. But it was a positive surprise - I really like it when people put enthusiasm into things instead of only giving interesting but boring talks. In that sense I very much support TimBL in continuing this (though maybe not going as far as Steven B. at MS did with his infamous 'monkey dance' ;)
I thought he would be more articulate for a Swatty.
In order for linked data or the semantic web to really take off, someone needs to break through with a proof of concept. Right now it is just extra work.
agreed. I haven't seen TBL do many (any) speeches before to compare - but he didn't seem comfortable at all. I support the idea (actually, it made me think a lot about Wolfram Alpha -- it seems to be a compiled version of what Tim is after) - but I thought the pitch for it was all wrong. The "Raw Data Now!" chant was cringe-worthy - even without the comparison to BillG.
Open data would be a much more sellable concept if it were integrated into the development process in a fairly seamless way. If the application itself could be built around open data standards, without spending a lot of time on things that provide no immediate value, it would be much easier to justify.
My boss/client is not going to pay me to add REST interfaces and metadata that have only vague, hypothetical future usefulness, but he might pay me to build the application's data layer in a way that just happens to also be somewhat open (sketched below), especially if doing so makes the application better or easier to build.
To meet that requirement, the universal-ontology stuff might have to be sacrificed, but just having API access to more web apps, even if they are all proprietary, would be a worthwhile compromise.
Based on the video alone, I thought that was what TBL was proposing but I see from the links that he's still pushing the whole RDF rigamarole. That might catch on among a few large informational projects but the web at large is either going to ignore it or make a mess out of it.
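To make the "happens to be open" idea concrete, here's a hedged sketch of what I mean - one endpoint that serves the same record as HTML for people and JSON for programs, no RDF required (the Flask app, paths, and data are all made up):

    # Plain HTTP content negotiation: browsers get HTML, programs that
    # ask for JSON get structured data from the same URL.
    from flask import Flask, jsonify, render_template_string, request

    app = Flask(__name__)

    ORDERS = {"1234": {"id": "1234", "status": "shipped", "carrier": "ACME"}}

    @app.route("/orders/<order_id>")
    def order(order_id):
        record = ORDERS.get(order_id)
        if record is None:
            return "not found", 404
        if request.accept_mimetypes.best == "application/json":
            return jsonify(record)
        return render_template_string(
            "<h1>Order {{ o.id }}</h1><p>Status: {{ o.status }}</p>", o=record)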
The strength of RDF is in seamless data integration.
Think flexible data mashups (where you don't need to code to the interface of a particular service, because they all use the same linked data principles).
I have seen people build a simple application and then increase its value by automatically pulling in information from other large sources of linked data such as DBpedia and GeoNames.
Fully agree that we need simple and understandable demos that show the value of linked data (assuming for a moment that linked data has business value :).
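For instance, here's roughly what pulling information from DBpedia looks like - a sketch using the SPARQLWrapper library against DBpedia's public endpoint (the specific query is just illustrative):

    # Ask DBpedia for the English abstract of a resource and reuse it
    # in your own application.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?abstract WHERE {
            <http://dbpedia.org/resource/Berlin>
                <http://dbpedia.org/ontology/abstract> ?abstract .
            FILTER (lang(?abstract) = "en")
        }
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["abstract"]["value"])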
That's an example of value derived from consuming generic data, but how do you get value from providing it?
Providing generic third party access is a rare case in business. Usually, data is used by third parties for a specific purpose that requires substantial domain knowledge, in which case an RDF description is probably redundant and useless.
There may be business models based on generic data, but those businesses will face a serious bootstrapping problem: they will likely require a critical mass of both generic data providers and consumers, and the two are mutually dependent.
The bootstrapping will have to be grassroots, as it was for the original "unstructured" web, which succeeded because publishing HTML documents is much simpler than publishing structured data.
"how do you get value from providing it" - how do you get value from publishing anything on the web? The same principles apply. The value comes from making information more readily available, so people can find out new things, make connections, do their jobs better. If you want to make money from it (and of course a lot of data has already been paid for by the taxpayer) then you can do all the usual stuff: charge for access, charge for services built on top of the data, provide it free and use it to build your brand or to gain attention of people that can be sold to advertisers etc.
One important use case is publishing data for re-use within an enterprise or other organisation - it doesn't necessarily need to always be public.
I concur. We have largely solved the publishing issue with linked data. Now we aim to realise the read-write Web of Data - http://vimeo.com/3663028 is a screencast explaining and demoing how this can be done (and it's a grassroots, open community project building on deployed and already-used technologies while offering a generic processing model - consider signing up and contributing!)
I want it to be trivial to write a Greasemonkey script which adds people's small Facebook profile photos to their news.yc profiles.
Or to add a feature that lets me share a news.yc story on anything that implements a share-link interface - say, with my Facebook friends or my Twitter followers - without leaving news.yc.
These things are possible right now, but doing them requires a lot of work, most of which would be duplicated (lots of HTML parsing, for example). How can we make it easier?
Here's another, if you want to get the gist of what I'm thinking:
When I go to news.yc from my work browser, display any blocker bugs assigned to me in our internal bug tracking application on top of the stories, in bold.
Again, this is doable in Greasemonkey, but should require only a trivial amount of work with the right APIs in place.
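Here's a rough sketch of the duplication I mean - today every mashup screen-scrapes HTML, whereas a machine-readable interface would be a single call (the URLs, the CSS class, and the ".json" endpoint are all hypothetical):

    # The status quo vs. what an API could collapse it to.
    import json
    import re
    import urllib.request

    def photo_by_scraping(profile_url):
        """Fetch the page and fish the photo URL out of the HTML."""
        html = urllib.request.urlopen(profile_url).read().decode("utf-8")
        match = re.search(r'class="profile-photo"[^>]*src="([^"]+)"', html)
        if match is None:
            raise ValueError("page layout changed; the scraper breaks again")
        return match.group(1)

    def photo_by_api(profile_url):
        """One call against a structured endpoint, no parsing to maintain."""
        with urllib.request.urlopen(profile_url + ".json") as resp:
            return json.load(resp)["photo"]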
His ideas for the future of the web are vague at best, but the bottom line is that the internet is still in its infancy.
Ten years from now, the use cases and capabilities of the internet are likely to be drastically different. His talk is just hinting at what's yet to come.
I did not find a lot of info about his idea (Linked Data). Does anybody know if there is a standard or anything written down at this point regarding the vision he explains in this video?
Nope, the semantic web was about reasoning. Linked data is about linking data, in effect creating a large distributed database. One way to think about it is to consider building a system that stores a tracking code for a package. Instead of storing XXX-XXXXXX-XXX, store a URI, so that it can simply be dereferenced to get more information. In effect you are linking your store's data with the shipper's data.
This could allow you to write trivial queries to quickly identify bottlenecks in how your packages are shipped, etc.
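A minimal sketch of that in Python with rdflib, assuming a hypothetical shipper URI that serves RDF when dereferenced:

    # Store the URI instead of the bare code, and dereference it
    # for details on demand (the shipper URL is made up).
    from rdflib import Graph, URIRef

    tracking_uri = "http://shipper.example.com/track/XXX-XXXXXX-XXX"

    g = Graph()
    g.parse(tracking_uri)  # HTTP GET with an RDF Accept header

    # Print whatever the shipper publishes about this package.
    for predicate, obj in g.predicate_objects(subject=URIRef(tracking_uri)):
        print(predicate, obj)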
Also, the development of SPARQL was a significant milestone that really changes how one works with RDF.
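And here's a sketch of the kind of bottleneck query from above that SPARQL makes trivial once the data is linked (the graph file, vocabulary, and property names are all made up):

    # Count delayed packages per location over a local dump of
    # store + shipper data.
    from rdflib import Graph

    g = Graph()
    g.parse("shipments.ttl")  # hypothetical merged data

    results = g.query("""
        SELECT ?location (COUNT(?package) AS ?stuck) WHERE {
            ?package <http://example.org/ns#lastLocation> ?location ;
                     <http://example.org/ns#status> "delayed" .
        }
        GROUP BY ?location
        ORDER BY DESC(?stuck)
    """)
    for location, stuck in results:
        print(location, stuck)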
A linked data web could then be used for reasoning on top of it, but it would need to be large, so it needs to be useful before the reasoning can be used. For instance, imagine the original web when people were working on search engines with only a few pages... The pages need to exist first.
Quoting Wikipedia: "Linked Data is a sub-topic of the Semantic Web. The term Linked Data is used to describe a method of exposing, sharing, and connecting data via dereferenceable URIs on the Web."