- https://public.enigma.com/ -- One of the best collections of U.S. federal data, with good taxonomy and lots of useful options for refining a search, such as filtering by dataset size.
- https://www.data.gov/ -- Not as useful as most people would want: unlike Enigma and Socrata, it's a directory of data sources self-submitted by government agencies, not a platform where the data is stored/provided in a standardized way. But it's a pretty good listing, though I'm not sure it's much better than just using Google.
- https://data.gov.uk/ -- Better than the U.S. version in terms of usability and taxonomy.
@danso thanks for the feedback on data.gov. I'm part of the small (3-person) team that helps to manage it. If you have a moment to chat, I'll reach out to see if you'd be interested in participating in some more in-depth user research in the future. Folks can also always leave feedback via email, GitHub, Twitter, and other means - https://www.data.gov/contact
Data.gov and Federal agencies use the same metadata standard (DCAT) that Google Dataset Search is using so much of our metadata is also being syndicated there.
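(For anyone who wants to pull that metadata programmatically: data.gov runs on CKAN, which exposes a standard search API. A minimal sketch below, building the query URL only -- no request is made, and the search term is just an example.)

```python
# Sketch: querying data.gov's catalog (a CKAN instance) via the standard
# CKAN package_search action. URL construction only; no network call here.
from urllib.parse import urlencode

def ckan_search_url(base, query, rows=10):
    """Build a CKAN package_search URL for a keyword query."""
    params = urlencode({"q": query, "rows": rows})
    return f"{base}/api/3/action/package_search?{params}"

url = ckan_search_url("https://catalog.data.gov", "rainfall", rows=5)
print(url)
```

The response is JSON with a `result.results` list of dataset records, including the DCAT-style metadata fields mentioned above.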
I think the biggest blocker with using publicly available datasets is stale data.
If you, or anyone else who aggregates these datasets could make it EASY to find the FREQUENCY of updates, rather than just the LAST UPDATED timestamp, it'd incentivize people to consume APIs more.
I realize having a snapshot from 2014 is better than what was publicly available before. But I feel no one's really talked about why they would or wouldn't use particular data.
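If a portal at least exposed the revision history, consumers could estimate the cadence themselves rather than guessing from a single timestamp. A rough sketch (the revision dates here are made up for illustration):

```python
# Sketch: estimating a dataset's update cadence from its revision history,
# rather than relying on a single "last updated" timestamp.
from datetime import date
from statistics import median

def estimated_cadence_days(update_dates):
    """Median gap, in days, between successive updates; None if < 2 updates."""
    ordered = sorted(update_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return median(gaps) if gaps else None

# Invented revision dates for a dataset updated roughly quarterly.
revisions = [date(2018, 1, 15), date(2018, 4, 12),
             date(2018, 7, 16), date(2018, 10, 15)]
print(estimated_cadence_days(revisions))  # → 91
```

A median is used rather than a mean so one late or early release doesn't skew the estimate.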
I think this is exactly correct. Frequency of updates (and clear documentation of the lag between when data is reported and the period the data applies to) is often missing or hard to find.
The value of increasing the cadence of updates should also not be underestimated! A lot of public datasets report on annual frequencies with more than a quarter's delay... Although this is a different issue altogether, one that has more to do with the processes of the reporting agency.
Definitely, feel free to email me (in my user bio). Thanks for the info about the upcoming comment period, will have to put a reminder on my calendar for that.
Interesting. There have been a lot of attempts at "meta data portals" that search across portals. Most of them have struggled.
At Open Knowledge we built a really early one called opendatasearch.org in 2011/2012 - now defunct - and were involved in the first version of the pan EU open data portal. We also had the original https://ckan.net/ (and subsites) which is now https://datahub.io/ and has become much more focused on quality data and data deployment. [Disclosure: I was/am involved in many of these projects]
The challenge, as others have mentioned, is that data quality is very variable and searching for datasets is complicated (think of software as an analogy - searching for good code libraries is a bit of an art).
I imagine Google are trying this out before making datasets another "special type" of search result -- after all you can already search google for datasets. In addition, Google are already Google so including datasets will have a level of comprehensiveness and exposure you struggle with elsewhere (part of the power of monopoly in a sense!).
I'm a particular fan of the boundary "dataset" you have that's a low-resolution TIFF file.
(edit to add something more productive: the site is littered -- at least 25%, maybe even a third -- with junk "data", all obviously added to get the number of records as high as possible, with no regard to whether that data is useful to anybody, machine-readable in any way at all, or -- as in the example above -- even qualifies as "data". Data.gov.ie would be moderately interesting if all the shit in it was removed.)
The quality of the datasets varies greatly depending on the source. Some work well, some, less so. There are data sources that are undergoing active development to harvest them more accurately. None of them were added to pump up the numbers.
The biggest numbers bump recently was ca 1600 Met Eireann rainfall records datasets from all around the country, some of them daily rainfall dating back 60 years. (Spoiler, there’s a lot of rain)
This is specifically a catalog of data sets; it doesn't host the data except for previews, and even doing that gets pretty complicated.
I believe that the eircode dataset is one of the most highly requested sources, but it’s a private for profit database.
If you are aware of such a dataset that a public body is hosting, then it would certainly be something to include. Convincing (and helping) the public bodies to publish their data is still a big task.
Nice. I'm glad Google is making it easier to find public data sets. I wish that these could be filtered by format, so that you could narrow them to CSV, XML, JSON, KML, etc.
Another nice resource that I've used in the past is 'toddmotto/public-apis' on Github [0].
In the end I would prefer all public data sets to be available over the DAT protocol [1] instead of being hosted only on government or organization websites. A lot of climate data previously made available by the EPA was taken down, and only saved by efforts of volunteers.[2]
Dat's pretty cool, but it's not the only game out there. The efforts of git-annex/Datalad [0], Academic Torrents [1], Quilt[2], DVC[3], and Pachyderm [4] are also notable in this space. My hopes are broader in the sense that I just hope that dataset versioning happens in the first place.
Yeah, if they had added an attribute for formats available for each data set, and then added a filter by type to the search (e.g. "type:csv") or similar, it would be great.
Sometimes you really want a specific format for a dataset.
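A "type:csv" style filter is simple enough to sketch over catalog metadata. The record structure below is hypothetical -- real portals expose available formats differently (e.g. as DCAT "distribution" entries):

```python
# Sketch of the proposed "type:csv" filter over catalog metadata.
# The "formats" key is a made-up, simplified stand-in for real metadata.
def filter_by_format(records, wanted):
    """Return records offering the wanted format (case-insensitive)."""
    wanted = wanted.lower()
    return [r for r in records
            if any(f.lower() == wanted for f in r.get("formats", []))]

catalog = [
    {"title": "Air quality", "formats": ["CSV", "JSON"]},
    {"title": "Boundaries",  "formats": ["SHP", "KML"]},
    {"title": "Budget",      "formats": ["XLS"]},
]
print([r["title"] for r in filter_by_format(catalog, "csv")])  # → ['Air quality']
```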
Having dabbled a bit in open data, I think "this data is available in too many formats and going through all the options manually is tiresome" counts as a problem you'd love to have.
The state of data sharing seems to be still quite sad.
* Hosting problems. The first link I tried was already broken.
* Format problems. Also the presented data is in all kinds of formats, some "data sets" even require me to read data off images: https://www.ceicdata.com/en/indicator/germany/gdp-per-capita
And even if it's JSON, this is not particularly great either (Unicode support? Large (64bit) integers?).
* Update problems. Many data-sets change over time (e.g. GDP). How can I subscribe to updates? "git pull" would be nice.
* Provenance problems. I want to know who put which record into the dataset, when and why? "git log" would be nice.
* Presentation problems. (This is OK sometimes.) I don't necessarily want to download a 5 GB file before I've looked into it. The first few rows of the dataset should be presented on the page, along with information about it.
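That last point is cheap to solve: reading the first few rows of a CSV doesn't require loading the whole file. A minimal sketch (the sample data is invented):

```python
# Sketch: previewing the first rows of a large CSV without reading the
# whole file -- the kind of preview a portal page could render inline.
import csv
import io
from itertools import islice

def preview(csv_file, n=5):
    """Return the header plus the first n data rows."""
    reader = csv.reader(csv_file)
    return list(islice(reader, n + 1))

sample = io.StringIO("country,gdp\nDE,3.4\nFR,2.6\nIT,1.9\n")
for row in preview(sample, n=2):
    print(row)
```

For a remote 5 GB file the same idea works by streaming only the first chunk (e.g. an HTTP range request, where the server supports it).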
Yes, the state is still mediocre. Back in 2011 I did a Chaos Computer Congress talk on "apt-get for Debian of Data". This itself came out of building the original data portal and open data catalog/hosting site in 2006: https://datahub.io/ (originally ckan.net).
This is an initiative providing a simple way of "packaging" data like software, plus an ecosystem of tools including a package manager - https://frictionlessdata.io/data-packages/.
It aims to be minimal and easy to adopt (e.g. based on CSV), and has gotten significant traction, with integration and adoption into Pandas, OpenRefine etc.
https://datahub.io/ itself is entirely rebuilt around Data Packages and includes a package manager tool "data".
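For the curious, a minimal datapackage.json descriptor looks roughly like this (shown as Python for clarity; key names follow the Frictionless spec, but the dataset itself is invented):

```python
# A minimal Data Package descriptor. The keys (name, resources, path,
# schema, fields) follow the Frictionless Data Package spec; the actual
# dataset described here is made up for illustration.
import json

descriptor = {
    "name": "rainfall-daily",
    "resources": [{
        "name": "rainfall",
        "path": "rainfall.csv",
        "schema": {
            "fields": [
                {"name": "station", "type": "string"},
                {"name": "date", "type": "date"},
                {"name": "rain_mm", "type": "number"},
            ]
        },
    }],
}
print(json.dumps(descriptor, indent=2))
```

The point is that the descriptor travels alongside the plain CSV, so any tool can discover the column names and types without guessing.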
I would like to create a layman-oriented central repository for all public spending data of the world.
* Hosting problems - I make my own copy of the data.
* Format problems - clearing and formatting data from different sources is a real pain. Once it is on my website, I offer CSV download or COPY/PASTE tabulated data.
* Update problems - no versioning or public API yet.
* Provenance problems - there is a link to the source of the data.
* Presentation problems - tailored to displaying budgets. Not cross-browser or full mobile support yet.
Open Data portals powered by OpenDataSoft let you see/sort/filter/visualize data in your browser.
You can access the data via API (including sorting/filtering) or download static files (CSV/JSON/XLS etc.).
I haven't seen many other platforms offer the same kind of functionality.
e.g. this one is a dataset of CCTVs across Leicester. You can easily see all of the columns, sort the data, display a chart of camera types, see their locations on a map, etc.
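The API side is just URL parameters against the records search endpoint. A sketch (URL construction only, no request made; the domain and dataset id below are placeholders, not the real Leicester portal):

```python
# Sketch: building an OpenDataSoft records search (API v1) URL.
# The domain and dataset id are invented examples.
from urllib.parse import urlencode

def ods_search_url(domain, dataset, query="", rows=10):
    """Build a records search URL for an OpenDataSoft portal."""
    params = urlencode({"dataset": dataset, "q": query, "rows": rows})
    return f"https://{domain}/api/records/1.0/search/?{params}"

url = ods_search_url("opendata.example.org", "cctv-cameras", rows=5)
print(url)
```

Sorting and filtering are additional parameters on the same endpoint, which is what makes the browser UI and the API feel consistent.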
Totally agree! At Qri (https://qri.io) we're working on many of these problems together - hosting, formatting (interoperability), provenance and sync. It's an open source project - we'd love to have your feedback as we design it!
This is cool and is a perfect case for IPFS for public datasets. I've not heard of it before, though, and I think naming / branding is something that makes finding these things / building momentum more difficult.
For example, someone else mentioned enigma.com. I would have no idea that it's related to data sources/sets unless I already knew what it was.
Certainly wish you the best of luck though and will keep an eye on Qri! Cool project!
Hey! thanks for checking us out. If you are still having trouble, please head over to our github (http://www.github.com/qri-io/frontend) and I'd be happy to help out.
You would be doing us a huge solid, working through these use cases irl is beyond helpful.
I agree, and I have an idea for one possible solution that I've wanted to implement for years. It could be a business, but I think maybe it would be better for the world if it was open source. I just haven't had the time or support to do it, as it would need full-time effort even to get started.
It's one of those... I know I should just do it kind of things, even just to get it out there, but I haven't found the inertia.
Seeing things like dat, quiltdata, public data sets, etc. made me think what I wanted to do was unnecessary, but I also agree with your comment.
I think a core problem is data democracy / control / politics of data. Too often we still act siloed instead of benefiting from massive data sharing, for a multitude of reasons (especially, but not limited to, $$$).
4) Data scientists work with R/Pandas "DataFrames". If you are familiar with either one, import the data into a data frame and use an export method to do the serialization for you: https://pandas.pydata.org/pandas-docs/stable/api.html#id12
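To illustrate the round-trip: load the records into a pandas DataFrame and let its export methods handle the serialization (the data below is invented):

```python
# Sketch: letting pandas handle serialization. Load records into a
# DataFrame, then export to CSV and JSON via its built-in methods.
import pandas as pd

df = pd.DataFrame({"station": ["A", "B"], "rain_mm": [4.2, 0.0]})

csv_text = df.to_csv(index=False)          # quoting/escaping handled for you
json_text = df.to_json(orient="records")   # list of {column: value} objects

print(csv_text.splitlines()[0])  # → station,rain_mm
```

The same object has exporters for Excel, Parquet, SQL and more, so you pick the format per consumer rather than hand-rolling each one.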
Oh, and don't forget that CSV is a trickier format than it looks, with many edge cases (quoting, embedded commas, newlines, encodings).
Don't just append some text together, separated by commas, and call it CSV.
Instead, use a dedicated lib to create the CSV for you.
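To show why this matters, here's a field set that naive comma-joining would mangle, round-tripped correctly through Python's standard csv module:

```python
# Why a dedicated CSV library matters: fields containing commas, quotes
# or newlines must be quoted and escaped, which naive string joining
# silently gets wrong.
import csv
import io

rows = [["name", "quote"],
        ["Doe, Jane", 'She said "hi"\non two lines']]

buf = io.StringIO()
csv.writer(buf).writerows(rows)   # handles quoting/escaping per the dialect
text = buf.getvalue()

# Round-trip: the reader recovers the original fields exactly.
parsed = list(csv.reader(io.StringIO(text)))
print(parsed == rows)  # → True
```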
CSV is your best option.
CSVW is a CSV+a metadata file to convert the tabular data to graph data (plus it types the nodes and relationships of the graph). You may want to have a look at it.
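A minimal CSVW metadata document looks roughly like this (shown as a Python dict; the `@context` and `tableSchema` keys come from the W3C CSVW spec, while the URLs and columns are invented for illustration):

```python
# A minimal CSVW metadata document mapping CSV rows to graph triples.
# Top-level keys follow the W3C CSVW vocabulary; the property URLs and
# the cameras.csv file are made-up examples.
import json

metadata = {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "cameras.csv",
    "tableSchema": {
        # Each row becomes a subject URI built from its "id" column.
        "aboutUrl": "http://example.org/camera/{id}",
        "columns": [
            {"name": "id", "datatype": "string", "suppressOutput": True},
            {"name": "type", "datatype": "string",
             "propertyUrl": "http://example.org/prop/type"},
        ],
    },
}
print(json.dumps(metadata, indent=2))
```

So the CSV stays plain tabular data, and the sidecar file tells a CSVW processor how to type the nodes and name the relationships.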
It's one of those areas Google has long attempted to be involved in -- e.g. Google Public Data Explorer, which never quite reached its potential, and Freebase, which although flawed was good, and was shut down after Google acquired it.
I like that this is search based! The web is still the best place to publish data - in fact in my view normal Google search is still by far the best way to find datasets, even though it isn't directly designed for that.
There's a link from the about page of Google Dataset Search to this help for webmasters on how to mark up content for it -- although it is a bit odd, mainly showing how to mark up a dataset with a DOI (so certainly good for academics!):
Just metadata about data feels like a very niche thing to search to me - I'm still not convinced anyone will maintain the metadata well enough to help. Possibly will work in particular domains.
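For reference, the markup in question is schema.org/Dataset structured data, typically embedded in a page as a JSON-LD script tag. A minimal sketch (the dataset itself is invented; property names are from schema.org):

```python
# Minimal schema.org/Dataset JSON-LD, the markup Dataset Search crawls.
# Real pages embed this in a <script type="application/ld+json"> tag.
# The dataset described here is invented for illustration.
import json

jsonld = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Daily rainfall observations",
    "description": "Daily rainfall totals per weather station.",
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/rainfall.csv",
    }],
}
print(json.dumps(jsonld, indent=2))
```

Which is the crux of the maintenance worry: someone has to write and keep this blob accurate for every dataset page.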
Does Dataset Search have some way to search column headings, types or content (of CSV, Excel, JSON etc)? I can imagine a load of operators that would make that really powerful for finding badly meta-marked up datasets deep in the web. Would seem like the obvious extra thing a dataset search would do.
Also, previews please!!! Just nicely render the first ten rows of common formats - CSV and Excel to begin with.
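The column-heading search above is also easy to sketch: scan only each file's header row and match a wanted column name (the files here are in-memory examples standing in for crawled CSVs):

```python
# Sketch of the column-heading search idea: read only each CSV's header
# row and report which files contain a wanted column.
import csv
import io

def files_with_column(named_files, column):
    """Return names of files whose CSV header contains the given column."""
    hits = []
    for name, f in named_files:
        header = next(csv.reader(f), [])
        if column in header:
            hits.append(name)
    return hits

files = [("gdp.csv", io.StringIO("country,year,gdp\n")),
         ("rain.csv", io.StringIO("station,date,rain_mm\n"))]
print(files_with_column(files, "gdp"))  # → ['gdp.csv']
```

At crawl scale you'd index the extracted headers rather than re-reading files per query, but the matching logic is the same.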
Looks like academic institutional repositories and figshare are doing the heavy lifting here. It's still neat to see Google aggregate everything, but it's not that different from what they do with other services relying on these sources already, and is largely dependent upon how rich these upstream sources are in the first place.
Then there's this $500K just awarded by the NSF to build a "Google for data sets". I wonder if, before making these sorts of grants, the NSF looks at what Google and other companies are already doing (or are likely to do).
https://www.lehigh.edu/engineering/news/faculty/2018/2018082...
Look at https://knoema.com, which positions itself as a search engine for data, with more than 2.5 billion time series available. They provide both visual data discovery through search and navigation, as well as API access through Python, R, etc.
Maybe I'm missing something, but this strikes me as underwhelming -- to the point of being something I could build myself, as opposed to something only the firm that created Maps and Gmail could do.
- https://www.opendatanetwork.com -- what I would call the "Google, for Socrata datasets"