> We can understand structured data in Web pages about datasets, using either schema.org Dataset markup, or equivalent structures represented in W3C's Data Catalog Vocabulary (DCAT) format. We also are exploring experimental support for structured data based on W3C CSVW, and expect to evolve and adapt our approach as best practices for dataset description emerge. For more information about our approach to dataset discovery, see Making it easier to discover datasets.
It’s funny because Google does not use these standards to validate.
I keep getting errors from Google that some of my datasets' descriptions are over 5,000 characters, even though dcat:description has no size limit.
Of course it’s impossible for me to report a bug in how they index.
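Since DCAT itself imposes no maximum but Google's indexer rejects long descriptions, one workaround is to pre-validate descriptions before publishing. A minimal sketch (the 5,000-character cap is the limit reported above; the 50-character minimum reflects Google's published guidance for dataset descriptions):

```python
# Pre-validate dataset descriptions against Google's indexing limits
# before publishing, since dcat:description itself has no maximum length.
GOOGLE_DESCRIPTION_MAX = 5000  # characters, per the errors reported above
GOOGLE_DESCRIPTION_MIN = 50    # Google's recommended minimum

def check_description(description: str) -> list[str]:
    """Return a list of warnings for a dcat:description value."""
    warnings = []
    if len(description) > GOOGLE_DESCRIPTION_MAX:
        warnings.append(
            f"description is {len(description)} characters; "
            f"Google indexes at most {GOOGLE_DESCRIPTION_MAX}"
        )
    elif len(description) < GOOGLE_DESCRIPTION_MIN:
        warnings.append(
            f"description is shorter than the recommended "
            f"{GOOGLE_DESCRIPTION_MIN} characters"
        )
    return warnings

print(check_description("x" * 6000))
```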
> (How do I indicate that this is a https://schema.org/ScholarlyArticle predicated upon premises including this Dataset and these logical propositions?)
3. [CSVW (Tabular Data Model),] schema.org/Dataset(s) with per-column (per-feature) physical quantity and unit URIs, e.g. via QUDT and/or https://schema.org/StructuredValue metadata, for maximum data reusability.
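As an illustrative sketch of that idea (dataset name and column are hypothetical), schema.org's variableMeasured property can carry per-column PropertyValue entries whose unitCode points at a QUDT unit URI:

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Hourly surface temperature observations",
  "variableMeasured": [
    {
      "@type": "PropertyValue",
      "name": "air_temperature",
      "description": "2 m air temperature per station, per hour",
      "unitText": "degree Celsius",
      "unitCode": "http://qudt.org/vocab/unit/DEG_C"
    }
  ]
}
```

A consumer that understands QUDT can then resolve the unit URI to conversion factors and dimensional information, rather than guessing from a free-text unit label.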
This is a great resource. At Splitgraph, we index ~40k open data sets, and we make sure to include structured metadata for each one, so we show up in these results. (example [0])
One cool aspect of this metadata is that it allows a dataset to have multiple sources. So if two sites index the same dataset, there is no duplicate content penalty like there might be with textual content. If you search for a dataset, it will include links to all its sources (whether canonical or otherwise).
For most of the data we index at Splitgraph, the canonical source is an open government data portal powered by Socrata (e.g. data.cdc.gov). We noticed that Socrata powered a lot of portals, so we wrote a Socrata plugin for Splitgraph, along with a scraper to index the metadata. The plugin basically implements a Postgres FDW so that Splitgraph can translate from SQL to the upstream query language. In this case, the plugin translates to Socrata's bespoke API language. But for private deployments we also have plugins for Snowflake, Postgres, some SaaS services, etc.
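To give a flavor of the kind of translation such a plugin does (this is a simplified sketch, not Splitgraph's actual FDW code): Socrata's SODA API accepts SoQL query parameters like $select, $where, and $limit, so a parsed SQL SELECT can be mapped onto a query string:

```python
from urllib.parse import urlencode

# Illustrative sketch only: maps a simple, already-parsed SQL SELECT onto
# Socrata's SoQL query parameters ($select, $where, $limit). A real FDW
# plugin handles far more (pushdown rules, type mapping, pagination, etc.).
def soql_params(columns, where=None, limit=None):
    params = {"$select": ", ".join(columns)}
    if where:
        params["$where"] = where
    if limit is not None:
        params["$limit"] = str(limit)
    return urlencode(params)

# e.g. SELECT date, cases FROM t WHERE state = 'IL' LIMIT 100
qs = soql_params(["date", "cases"], where="state = 'IL'", limit=100)
print(qs)
```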
If you find some data on Google Dataset Search with Splitgraph listed as a source, please take a look! Our "Data Delivery Network" (DDN) is implemented on top of the Postgres wire protocol, so you can connect with any Postgres client (or use our web editor). All the Postgres query syntax is available to you; you can even JOIN across any of the other 40k+ datasets indexed at Splitgraph. That includes "live data" like Socrata portals, but also versioned snapshots of data called "data images." Here's an example of a point-in-time query across two snapshots (basically a diff) [1], and another query that joins across tables at data.cityofchicago.org and data.cambridgema.gov [2].
This dataset search engine has been around for years! We created DataMarket (https://datamarket.es) inspired by this site (and Auren Hoffman's SafeGraph).
I've actually been on the lookout for model hubs lately. Any that you've seen or recommend?
I've found https://modelzoo.co/ but it seems more like a curated list of models (some incomplete) rather than a community where users share trained models.
I have a lot to read before I get excited but if the team is here: Can we get DCAT for sets that are otherwise only discoverable with OAI-PMH? Seems like a divide between govt and academic repos that hinders harvesting.
Doesn't take a genius to predict, but there ya go! Governments are assembling datasets in a very fragmented way. It'll take a private company to provide a single website for exploring and finding all datasets from around the world, making it easier to look at holistic patterns worldwide or to compare patterns between countries.
Though, I would expect a much better UX from Google nowadays. This site has more in common with Google Scholar than Google Search.
And ultimately I'd like to see them build something where people don't need to download datasets in order to make use of the data.
I compare the state of open data to the state of mapping software before Google Maps. You needed to download map files and open them in special software on your computer to make sense of the data. Then Google Maps came along and flipped that whole model. Open data needs the same leap forward for more people to make greater use of it.
I’ve come across this a few different times over the years... always seems enticing and potentially useful, but I’ve never found a real use for it. I suppose it provides a library of well-prepped datasets to test ML models on? Anyone ever used this for any practical purpose beyond a sandbox-type use case?
The lab I work in has a project that helps annotate datasets with metadata and register their schemas: https://discovery.biothings.io/
A common barrier to making FAIR datasets is that not all data lends itself to be schema.org compliant. The idea is that instead of enforcing one schema to rule them all, we allow people to make their own schemas by extending existing ones, and register them in an API to be easily discoverable.
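As an illustrative sketch of that extension pattern (all names under example.org are hypothetical; the registration step at discovery.biothings.io is not shown), a domain-specific class can declare itself a subclass of schema:Dataset in JSON-LD, following the convention schema.org itself uses for extensions:

```json
{
  "@context": {
    "schema": "http://schema.org/",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "myorg": "https://example.org/schema/"
  },
  "@id": "myorg:SequencingDataset",
  "@type": "rdfs:Class",
  "rdfs:label": "SequencingDataset",
  "rdfs:comment": "A Dataset subclass with sequencing-specific fields.",
  "rdfs:subClassOf": { "@id": "schema:Dataset" }
}
```

Generic schema.org consumers can still treat instances as plain Datasets, while domain-aware tools can use the extra structure.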
Also in case the team is here... the updated date for ERA5 back extension to 1950-1978 (Preliminary version - https://datasetsearch.research.google.com/search?query=ERA5%...) is incorrect as this was only released last year (2020) but is stated as 2011.
It’s OK, but surprisingly feature-poor, since they only index datasets with structured metadata. I kind of wish they would compile all their metadata into a structured mega-catalog and allow searching it via an API. Or just dump it out as a dataset itself.
As far as your SQL client is concerned, data.splitgraph.com:5432 is a giant Postgres database with ~40,000 tables in it. You can query and join across them using your existing tools. Behind the curtain, we'll forward your query to the upstream data source, translating it from SQL to whatever language it expects. (We can also ingest delta-compressed versioned snapshots).
On the public DDN (data.splitgraph.com:5432), we enforce a (currently arbitrary) 10k row limit on responses. You can construct multiple queries using LIMIT and OFFSET, or you can run a local Splitgraph engine without a limit. We also have a private beta program if you want a managed or self-hosted cloud deployment with the full catalog and DDN features. And we are planning to ship some "export to..." type workflows for exporting to CSV and potentially other formats.
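The LIMIT/OFFSET workaround above can be sketched as a small pagination loop. Here `run_query` is a stub standing in for executing SQL over a real Postgres connection to the DDN (e.g. via psycopg2 against data.splitgraph.com:5432); the table name and query are hypothetical:

```python
# Sketch of paging past a 10k-row response cap using LIMIT/OFFSET.
PAGE_SIZE = 10_000

def fetch_all(run_query, base_sql, page_size=PAGE_SIZE):
    """Yield every row by requesting pages until one comes back short.

    base_sql should include an ORDER BY so pages are stable across requests.
    """
    offset = 0
    while True:
        page = run_query(f"{base_sql} LIMIT {page_size} OFFSET {offset}")
        yield from page
        if len(page) < page_size:
            break
        offset += page_size

# Stub that simulates a 25,000-row table instead of a live DDN connection.
table = list(range(25_000))

def run_query(sql):
    # Parse back the LIMIT/OFFSET we just appended (stub only).
    parts = sql.split()
    limit, offset = int(parts[-3]), int(parts[-1])
    return table[offset:offset + limit]

rows = list(fetch_all(run_query, "SELECT * FROM demo ORDER BY 1"))
print(len(rows))
```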
For live/external data, we proxy the query to the data source, so there is no theoretical data size limit except for any defined by the upstream.
For snapshotted data, we store the data as fragments in object storage. Any size limit depends on the machine where Splitgraph's Postgres engine is running, and how you choose to materialize the data when downloading it from object storage. You can "check out" an entire image to materialize it locally, at which point it will be like any other Postgres schema. Or you can use "layered querying" which will return a result set while only materializing the fragments necessary to answer the query.
Regarding ClickHouse, you could watch this presentation [0] my co-founder Artjoms gave at a recent ClickHouse meet-up on the topic of your question. We also have specific documentation for using the ClickHouse ODBC client with the DDN [1], as well as an example reference implementation. [2]
We support both! A Splitgraph repository can "mount" an external data source, and we'll proxy queries to it using a system based on Postgres Foreign Data Wrappers (FDW). But a repository can also contain any number of "data images," which are versioned snapshots of data roughly inspired by Docker images. You can define them with a declarative, Dockerfile-like syntax called a Splitfile, and you can rebuild them against upstream sources with caching semantics similar to "docker build."
Our core philosophy has always been that it makes sense to start with data federation (live data), and then selectively warehouse/ingest only what you need (versioned data). We're shipping some upcoming features to support this workflow. You start by providing us (or your private deployment) a set of read-only credentials to any supported data source, which we then "mount" as a repository, making it discoverable in the catalog, and instantly queryable with all the other data on Splitgraph. If or when you decide that you want to warehouse this data, we'll make it easy for you to schedule a loading job to ingest it as a Splitgraph image. This way, you can query the live or versioned data in any repository, by simply changing the tag you use to address it.
You can do all this stuff locally, btw – a decentralized workflow is fully supported, and you can push data between peers. The public Splitgraph.com happens to be a "super peer" with a data catalog, scalability features, etc. But if you just want to experiment on your own, you can try it in five minutes!
Why the condescension? Do you mean that Google has been offering this service for a while? Or do you mean that similar services have previously been offered by other organizations? In which case, perhaps you could link to them?
I'm not a scientist, so not the most scholarly first lookup, but tried searching for penis data[0]. The first link sent me to a site that requires signup to use [1]. No fun. Won't use again.
> We can understand structured data in Web pages about datasets, using either schema.org Dataset markup, or equivalent structures represented in W3C's Data Catalog Vocabulary (DCAT) format. We also are exploring experimental support for structured data based on W3C CSVW, and expect to evolve and adapt our approach as best practices for dataset description emerge. For more information about our approach to dataset discovery, see Making it easier to discover datasets.
For more info on those:
- W3C's Data Catalog Vocabulary: https://www.w3.org/TR/vocab-dcat-3/
- Schema.org dataset: https://schema.org/Dataset
- CSVW Namespace Vocabulary Terms: https://www.w3.org/ns/csvw
- Generating RDF from Tabular Data on the Web (examples on how to use CSVW): https://www.w3.org/TR/csv2rdf/