Google's new pipe syntax in SQL (simonwillison.net)
328 points by heydenberk 4 months ago | 182 comments



Richard Hipp, creator of SQLite, has implemented this in an experimental branch: https://sqlite.org/forum/forumpost/5f218012b6e1a9db

Worth reading the thread; there are some good insights. It looks like he will be waiting for Postgres to take the initiative on implementing this before it makes it into a release.


That comment where he explains why he's not rushing to add new unproven SQL syntax to SQLite is fascinating:

> My goal is to keep SQLite relevant and viable through the year 2050. That's a long time from now. If I knew that standard SQL was not going to change any between now and then, I'd go ahead and make non-standard extensions that allowed for FROM-clause-first queries, as that seems like a useful extension. The problem is that standard SQL will not remain static. Probably some future version of "standard SQL" will support some kind of FROM-clause-first query format. I need to ensure that whatever SQLite supports will be compatible with the standard, whenever it drops. And the only way to do that is to support nothing until after the standard appears.


It's so ambitious in an almost boring way. Exactly the right steward for a project like this.


Dr. Hipp is one of my heroes. He seems to labor quietly in semi-obscurity for decades, and at the end of it he's produced some amazing software. I was tickled by the kerfuffle over his use of a set of guidelines for living in a Christian monastery as SQLite's code of ethics for the purpose of checking a box on an RFQ (part of the fallout of the libsql fork), because he does seem like a sort of programmer monk. (For what it's worth, as an agnostic, I've read them several times and found them unobjectionable. While I think the drama was unnecessary, the libsql people are doing interesting work.)

I choose never to meet this man and be disabused of this notion. Shine on, doctor.


In fairness, I think the complaint over the tongue-in-cheek 'code of conduct' was that it was transparently unsuitable if considered as an actual code of conduct (i.e. a list of rules that SQLite contributors must obey in order to participate in the project). For example, it seems unlikely that Dr. Hipp would wish to exclude contributors who have committed adultery, or who do not pray with sufficient frequency.

(The erstwhile code of conduct is now labeled a 'code of ethics', and AFAIK SQLite has no official CoC currently.)


To me it seemed like they had incompatible visions (SQLite wants to work in 2050 in the contexts it's been traditionally used in, libsql wants to modernize and lean into the more recent use cases) and so a fork was the appropriate and inevitable course of action.

Given that SQLite isn't really open to contribution (one of libsql's frustrations), it doesn't really worry me that they didn't & don't have a clear code of conduct. To me, digging through the repository [ETA: the website, rather] for what amounts to a cringey Easter egg and then linking to it as if it were a serious issue is uncalled for. To be honest, I think the complaints should have stayed out of their announcement entirely - they have a legitimately cool vision for what their fork could be, and the complaints were only a distraction.


Yes, it's an important point that SQLite is not a project with an open contribution model. However, they do presumably accept external contributions in the form of bug reports, suggested patches, etc. etc.

You didn't have to dig through the repository to find the CoC. It was right there on the website at /codeofconduct.html: https://web.archive.org/web/20180315125217/https://www.sqlit...


Another cool project from Dr. Hipp is the Fossil SCM, which SQLite is developed in, and one of its features is that it ships with a web view similar to GitHub. The website is actually the web view of the repo. (Apologies for expressing that in a confusing way; I knew it was on the website, I was referring to the website as the repository.)


The blatant religious discrimination in the document is both not a problem at all if the author is the only contributor (I suppose there must be some form of arms-length way of consuming external support from less beholden entities; I don't know the details of Critical Code of Conduct Theory), and totally unacceptable otherwise.

Going by the document itself, it should be rewritten if it ever intends to include other people, and it should be explicitly clarified that the current form only applies to the author himself.


FROM first would be nothing short of incredible. I can only hope that Postgres and others can find it within themselves to get together and standardize on such an extension!


This syntax looks a lot like PRQL. ClickHouse supports writing queries in PRQL dialect. Moreover, ClickHouse also supports Kusto dialect too.

https://clickhouse.com/docs/en/guides/developer/alternative-...


Yeap I didn't know DuckDB supported it already!

Being able to write SELECT, FROM, and WHERE in any order, allowing multiple WHEREs and AGGREGATEs etc., combined with supporting trailing commas, makes copy-pasting, templating, reusing, and code-generating SQL so much easier.

  FROM table  <-- at this point there is an implicit SELECT *
  SELECT whatever
  WHERE some_filter
  WHERE another_filter <-- this is like AND
  AGGREGATE something
  WHERE a_filter_that_is_after_grouping <-- is like HAVING
  ORDER BY ALL <-- group-by-all is great in engines that support it; want it for ordering too
...


A special keyword like HAVING prevents errors where you type a filter on the wrong line.

How is OR done with these WHEREs?


What’s group-by-all? Sounds like distinct?


It's different from distinct. Distinct just eliminates duplicates but does not group entries.

Suppose...

  SELECT brand, model, revision, SUM(quantity)
   FROM stock
   GROUP BY brand, model, revision
This is not solved by using distinct, as you would not get the correct totals.

GROUP BY ALL allows you to write it a bit more compactly...

  SELECT brand, model, revision, SUM(quantity)
   FROM stock
   GROUP BY ALL


Gotcha. Thanks. That’s actually super useful! Looks like Postgres doesn’t implement it unfortunately.

I revert to “group by 1, 2, 3… “ when I’m just hacking about. Group by all would definitely be an improvement.


Normally the SELECT has a bunch of columns to group by and a bunch of columns that are aggregates. Then, in the GROUP BY clause, you have to list all the columns to group by. The query compiler knows which they are, and polices you, making sure you got it right. All the GROUP BY ALL does is say 'the compiler knows, there's no need to list them all'. Very convenient.

BigQuery supports GROUP BY ALL and it really cleans up lots of queries. E.g.

   SELECT foo, bar, SUM(baz)
   FROM x
   GROUP BY ALL <-- equiv to GROUP BY foo, bar
(eh, except MySQL; my memory of MySQL is that it will silently do ANY_VALUE() on any columns that are neither an explicit aggregate function nor grouped; argh, it was a long time ago)


MySQL doesn't do this anymore; the ONLY_FULL_GROUP_BY mode became default in 5.7 (I think). You can still turn it off and get the old behavior, though.
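
If you want to check or relax it for the current session, a quick sketch (using MySQL's sql_mode system variable):

    -- see which modes are active
    SELECT @@sql_mode;

    -- drop ONLY_FULL_GROUP_BY for this session only
    SET SESSION sql_mode =
      (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));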


What exactly is the history of having FROM be the second clause and not the first? FROM-first seems more intuitive and matches the way you actually write out queries.

Really hope this takes off and gets more widespread adoption because I really want to stop doing:

  SELECT *
  FROM all_the_joins
and then going back to turn it into

  SELECT {my statements here}
  FROM all_the_joins


It's funny how he refers to the new syntax as "from-clause-first", as if it were a very minor, low-value change.


I think that's important, because a lot of concepts are presented as prohibitively complicated; for example, functional programming makes sense in my head, but if you present it as lambda calculus and write it in concise form with new operators, you've lost me.


LINQ, PRQL, and Kusto have all preceded this.

While LINQ is mostly restricted to .NET, PRQL is not. https://prql-lang.org/

It's a welcome change in the industry.

I made this prediction a couple years back: https://x.com/tehlike/status/1517533067497201666


The paper directly references PRQL and Kusto. The main goal here is to take lessons learned from earlier efforts and try and find a syntax that works inside and alongside the existing SQL grammar, rather than as a wholly separate language.


I've been following PRQL for some time now since it first got good traction on HN and I like it a lot, but I'm really hoping this pipe syntax from Google takes off for a couple of reasons:

1. Similar to what you mention, while I think PRQL is pretty easy to learn if you know SQL, it still "feels" like a brand new language. This piped SQL syntax immediately felt awesome to me - it mapped how my brain likes to think about queries (essentially putting data through a chain of sieves and transforms), but all my knowledge of SQL felt like it just transferred over as-is.

2. I feel like I'm old enough now to know that the most critical thing for adoption of new technologies that are incremental improvements over existing technologies is to make the upgrade path as easy as possible. I shouldn't have to overhaul everything at once; I just want to be able to adopt it in small pieces, a chunk at a time. While not 100% the same thing, if you look at the famously abysmal uptake of things like IPv6 and the pain it takes to use ES-module-only distributions from NPM, the biggest pain point was that these technologies made you do "all or nothing" migrations - they didn't have an easy, simple way to get from point A to point B. The thing I like about this piped SQL syntax is that in a large, existing code base I could easily just start adding it in new queries, but I wouldn't really feel the need to overhaul everything at once. With PRQL I'd feel a lot less enthusiastic about using that in existing projects where I'd have a mix of SQL and PRQL.


It's wild that the enterprise and connected world has moved on from forcing COBOL compatibility for modern projects, but still insists on SQL compatibility.


I’m a big kusto user, and it’s wonderful to have pipes in a query language.

If you haven’t tried it, it’s great!


I have not tried it, but I used to be a .net developer and worked a lot with LINQ (and contributed a bit to NHibernate and its Linq provider) and I am a big fan of the approach.

Kusto does seem interesting too, and I think some of the stuff I want to build will find a use for it!


LINQ is so incredibly intuitive. I wonder if this will make creating C# LINQ providers for databases that support this syntax easier.


Indeed. Elastic has also recently released a piped query language called ES|QL. Feels similar to Kusto.

I find piped queries both easier to write, and read.


Not having LINQ is a terrible inconvenience everywhere. Most languages have libs that try to hack something similar, but it usually simply isn't the same.


It's a lot easier to design a good DSL when it doesn't have to be compatible with anything


Well, .NET was already used a lot when LINQ was built into it, nearly two decades ago.


Is "from" keyword originating from .NET (Framework 3.5 in 2007) or is this pre-existing somewhere in research?


> This remains a long-standing pet peeve of mine. PDFs like this are horrible to read on mobile phones, hard to copy-and-paste from ...

I've never understood why copying text from digitally native PDFs (created directly from digital source files, rather than by OCR-ing scanned images) is so often such a poor experience. Even PDFs produced from LaTeX often contain undesirable ligatures in the copied text like fi and fl. Text copied from some Springer journals sometimes lacks space between words or introduces unwanted space between letters in a word ... Is it due to something inherent in PDF technology?


> Is it due to something inherent in PDF technology?

Exactly. PDF doesn't have instructions to say "render this paragraph of text in this box", it has instructions to say "render each of these glyphs at each of these x,y coordinates".

It was never designed to have text extracted from it. So trying to turn it back into text involves a lot of heuristics and guesswork, like where enough separation between characters should be considered a space.

A lot also depends on what software produced the PDF, which can make it easier or harder to extract the text.


My favorite is when they do bold by duplicating and slightly shifting the letters. Bboolldd. PDFs are hell.


That's inherited from the original Portable Document Format for machines - the typewriter instructions.


I've never looked into the PDF format, but, does it not allow for annotations that say, "the glyphs in the rectangle ((x0, y0), (x1, y1)) represent the text 'foobar'")? That's been my mental model for how they are text-searchable.


They do but such annotations are optional.


PDF natively supports selectable/extractable text. Section 9.10 of ISO 32000 is literally “Extraction of Text Content.” I’ve implemented it myself in production software.

There are many good reasons why PDF has a “render glyph” instruction instead of a “render string”. In particular your printer and your PDF viewer should not need to have the same text shaping and layout algorithms in order for the PDF to render the same. Oops, your printer runs a different version of Harfbuzz!

The sibling comment is right that a lot depends on the software that produced the PDF. It’s important to be accurate about where the blame lies. I don’t blame the x86 ISA or the C++ standards committee when an Electron app uses too much memory.


It’s due to poor choices made in the implementation of pdfTeX. For example the TeX engine does not associate the original space characters with the inter-word “glue” that replaces them, so pdfTeX happily omits them. This was fixed a few years back, finally. But there’s millions(?) of papers out there with no spaces.


Ligatures like fi, fl, ffi, ffl, etc. are font-level substitutions specific to rendering correctly on a screen or printer. PDF is intended to be a _rendered_ format, rather than a parseable format.

Well-formatted EPUB and HTML, by contrast, are intended to adapt to end-user needs and better fit the available layout space.


Though it's also a stuck legacy throwback. Modern advice would be to not send ligatures directly to the renderer and instead let the renderer poll OpenType features (and Unicode/ICU algorithms) to build them itself. PDF's baking of some ligatures into its files seems something of a backwards-compatibility legacy mistake, to still support ancient "dumb" PostScript fonts and pre-Unicode font encodings (or at least pre-Unicode Normalization Forms). It's also a bit of the fact that PDF has always been confused about whether it is the final renderer in a stack or not.


That wouldn’t work for PDF’s use case of being an arbitrary paper-like format because the various Unicode and OpenType algorithms don’t provide sufficient functionality for rendering arbitrary text: there are no one-size-fits all rules! The standards are a set of generic “best effort” guidelines for lowest-common-denominator text layout that are constantly being extended.

Even for English the exact tweaking of line breaking and hyphenation is a problem that requires manual intervention from time to time. In mathematics research papers it’s not uncommon to see symbols that haven’t yet made it into Unicode. Look at the state of text on the web and you’ll encounter all these problems; even Google Docs gave in and now renders to a canvas.

PDF’s Unicode handling is indeed a big mess but it does have the ability to associate any glyph with an arbitrary Unicode string, for text extraction purposes, so there’s nothing to stop the program that generates the PDF from mapping the fi ligature glyph to the to-character string “fi”.


I think you are seeing different problems here than I was complaining about. Maybe I can restate the case: baseline PDF 1.0 is something like (but not exactly) rendering to print on a specific PostScript printer that understands embedded PostScript fonts, somewhat like a virtual version of an early Apple LaserWriter. PDF has been extended and upgraded over the years, and the target "printer" that PDF represents has upgraded too: it now also understands Unicode in its PostScript and embedded OpenType fonts (with their many extensions for character mapping, ligatures, contextual alternates, etc.). But because its legacy was a dumber printer, for the easiest "backwards compatibility" a lot of PDF-producing apps still do dumb things like encode ligatures "by hand" in quaint encodings (some of the extended ASCII code pages, or the pre-combined Unicode forms that today we consider obsolete), as if they were printing to a PostScript printer that doesn't understand ligatures directly because it is still only capable of printing older PostScript fonts.

Yes, if you don't embed your fonts (or at least their metrics) in the PDF layout is less deterministic and will shift from font to font. The point is that we can embed modern fonts in PDF, the virtual printer has upgraded support for that, but for all sorts of reasons some of the tools that build PDF are still acting like they are "printing" to the lowest common denominator and using legacy EBCDIC ligature encodings in 2024. (Fun fact: Unicode's embeddings of the ligatures are somewhat closer to some classic EBCDIC report printing code pages than Extended ASCII's equivalents because there was a larger backwards compatibility gulf there.)


That's fine, but a good compiled format should also include a source map for accessibility.


It is a shame that CSS pagination is still a mess. Not that I like CSS, but it would go a long way towards unlocking some layouts from PDF.


Agreed - I used CSS to lay out a book a couple of years ago and it wasn't too bad, but the things that have poor support/don't work at all (like page numbers) are a pain to hack around.


XPS solved a lot of the problems with PDF, but Microsoft couldn't reach a critical level of adoption to let network effects take hold.

However, I don't know if XPS handles the copying of text better.


If a PDF doesn't support text extraction, it's the fault of the software that created it. Most likely the software didn't include the glyph → Unicode character mapping (a ToUnicode CMap) in the PDF.


Previous submissions on the paper itself:

https://news.ycombinator.com/item?id=41321876 (first) https://news.ycombinator.com/item?id=41338877 (plenty of discussions)

I tried this new syntax, and it seems a reasonable proposal for complex analytical queries. It probably does not change most simple transactional queries, though. The syntax matches the execution semantics more closely, which means you are less likely to need to formulate a query in a weird form to make the query planner work as expected; usually users only need to move some pipe operators to more appropriate places.


Kinda looks like a half-assed version of what PRQL does. Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?


> Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?

I think they intentionally kept away from a massive redesign of the language, which has a good chance of becoming a multi-decade frustrating death march. I know a number of such cases from C++ standard proposals, and probably the team wanted to avoid that.


This is addressed in the paper -- it's nice to have something deployable in existing SQL languages, and it also doesn't rule out using PRQL


> Kinda looks like a half-assed version of what PRQL does. Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?

To be honest, this feels exactly like the kind of mistake that IPv6 made. It wasn't just "let's extend the IPv4 address space and provide an upgrade path that's as incremental as possible", it was "IPv4 has all these problems, let's solve the address space issue with a completely new address space, and while we're at it let's fix 20 other things!" Meanwhile, over a quarter century later, IPv4 shows no signs of going away any time soon.

I'd much rather have an incremental improvement that solves 90% of my pain points than to reach for some "Let's throw all the old stuff away for this new nirvana!" And I say this as someone that really likes PRQL.


You can't "just" extend the IPv4 address space while keeping the compatibility.


Extending src/dst in current IPv4 protocol headers is much easier than adopting a completely new suite.


> Extending src/dst in current IPv4 protocol headers is much easier than adopting a completely new suite.

And that's precisely why that was also one of the competing proposals back then, so that tells me that just being easier probably wasn't enough.

You can search for RFC 1475 ("IPv7") and its surrounding history.


Yes, I know. And IPv6 won because it's objectively a superior standard. No politics or committee garbage at all, of course.


There was a second submission of the paper, which attracted more comments: https://news.ycombinator.com/item?id=41338877


Thank you, added it to my comment. I missed all the discussions!


Every time this FROM-first syntax style crops up, it's always the most basic simple query (one table, no projections / subselects / consideration of SPs/Views).

Just for once I want to see complete examples of the syntax on an actual advanced query of any kind right away. Sure, toss out one simple case, but then show me how it looks when I have to join 4-5 reference tables to a fact table and then filter based on those things.

Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

As long as DBs continue to support standard SQL they can add whatever additional syntax support they want but based on history this'll wind up being a whole new generation of emacs vs vi style holy war.


Sounds a bit like "new thing scary" unless you show why having SELECT in front actually avoids problems, and I don't think there's a clear problem it avoids. It does make autocomplete really hard (can you even do it properly?), while something along the lines of "just swap SELECT for FROM" is well defined.


> Sounds a bit like "new thing scary" unless you show why having select in front actually avoids problems

This isn't really fair. BeefWellington gave a reason why SQL is how it is (and how it has been for ~50 years). It's reasonable to ask for a compelling reason to change the clause order. Simon's post says it "has always been confusing", but doesn't really explain why except by linking to a blog post that says that the SQL engine (sort of but not really) executes the clauses in a different order.

I think the onus of proof that SQL clauses are in the wrong order is on the people who claim they're in the wrong order.


But it has been explained many times from many angles.

* SELECT first makes autocomplete hard

* SELECT first is the only out-of-order clause in the SQL statement when you look at it from an execution perspective

* you cannot use aliases defined in SELECT in following clauses (see the sketch below)

* in some places SELECT is pointless but it is still required (to keep things consistent?)

Probably many more.
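
On the alias point, a small sketch (table and column names invented; the piped version uses the paper's EXTEND and WHERE operators):

    -- standard SQL: most engines reject this, the alias isn't visible yet
    SELECT price * quantity AS total
    FROM orders
    WHERE total > 100;

    -- pipe syntax: each stage sees the previous stage's columns
    FROM orders
    |> EXTEND price * quantity AS total
    |> WHERE total > 100;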


> you cannot use aliases defined in SELECT in following clauses

Some DBs allow it, or allow it partially. It's a constant source of friction for me, having to guess across different database systems.


This is a case where stating your opinion and credentials will make you sound really old and conservative so it will be easy to take cheap shots like "you are just afraid of change".

At my previous gig I worked for a decade with an application that meant creating and maintaining large, hairy SQL that was created to offload application logic to the database (_very_ original). And we used to talk about this "wrong order" often, but I never once actually missed it. It was at most a bit annoying when you jumped onto a server to troubleshoot and you knew the two columns you were interested in and could have saved two seconds. But when maintaining those massive queries it always felt good to have the projection up top, because that is the end result and what the query is all about. I would not have liked it if the method signature in e.g. Java were just the parameters, with the return type after the final brace. This analogy falls apart of course, since params are all over the place, but swapping things around wouldn't help.

So just go 'SELECT *...' and go back and expand later; I want my SQL syntax "simple". /old developer


It really isn't. I've been working in this field for ages and did a lot of those years as a DBA and data modeler. I've worked with other syntaxes too, mostly MDX but some others specific to Hadoop/Spark. I'm not afraid of new things. I just want them to improve on what we have. I want them to be honest about situations where their solution isn't great.

SQL has lots of warts, e.g.: the fact that you can write SQL that joins tables without including those tables in a JOIN, which leads to confusion. It's fragmented too -- the other example I posted shows two different syntaxes for TOP N / LIMIT N because different vendors went different ways. The fact that some RDBMSes provide locking hint mechanics and some don't (at least not reliably). The fact that there's no standard set of "library" functions defined anywhere, so porting between databases requires a lot of validation work. It makes portability hard, and some of those features are missing from standards.

You'll note I also mentioned that if they want to add it that's fine but it's gonna wind up being a point of contention in a lot of places. That's because I've seen the same thing happen with the "Big Data" vs "what we have works" crowd.

Having select up front avoids problems in a couple key ways:

1. App devs who are working on their application can immediately see what fields they should expect in their resultset. For CRUD, it's probably usually just whatever fields they selected or `*` because everyone's in the habit of asking for every field they'll never use.

2. Troubleshooting problems is far easier because they almost always stem from a field in the projection. The projected field list (and thus the table aliases those fields come from) is literally the first piece of information you need (what field is it, and where does that field come from) to start troubleshooting. This is why SELECT ... FROM makes the most sense -- it puts the two most crucial pieces of information right up front.

3. Query planners already optimize and essentially compile the entire thing anyways, so legibility trumps other options IME.

Another point I'd make to you and everyone else bringing up autocomplete: If you need it, nothing is stopping you from writing your FROM clause first and then moving a line up to write your SELECT. Kinda like how you might stub out a function definition and later add arguments. This doesn't affect the final form for legibility.


> becomes clear why SELECT first won out originally: legibility and troubleshooting

nothing "becomes clear" just by you claiming so, better elaborate


For examples of larger queries, see here for all TPC-H queries in standard syntax and converted to pipe syntax: https://github.com/google/zetasql/blob/master/zetasql/exampl...

And several more examples with pipe syntax here: https://github.com/google/zetasql/blob/master/zetasql/exampl...
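
And for the "join 4-5 reference tables to a fact table and then filter" case upthread, a hand-rolled sketch in the same style (table and column names invented):

    FROM sales AS f
    |> JOIN dim_product AS p ON f.product_id = p.product_id
    |> JOIN dim_store AS s ON f.store_id = s.store_id
    |> JOIN dim_date AS d ON f.date_id = d.date_id
    |> WHERE p.category = 'Toys' AND s.region = 'EU' AND d.year = 2023
    |> AGGREGATE SUM(f.amount) AS total GROUP BY p.brand
    |> ORDER BY total DESC;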


> Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

SELECT first was as much an accident of "it sounded better as an English sentence" to the early SQL designers as anything. Plus, they were working with early-era parsers with very limited lookahead, and putting the primary "verb" up front was important at the time.

But English is very flexible, especially in "command syntax" and From first is surprisingly common: "From the middle cupboard, grab a plate". SQL trying to sound like English here only shows how inflexible it still is in comparison to actual English.

I've been using C#'s LINQ since it was added to the language in 2007 and the from/where/join/group by/select order feels great, is very legible especially because it gives you great autocomplete support, and troubleshooting is easier than people think.


https://prql-lang.org/ has a bunch of good examples on its home page.

If you engage the syntax with your System 2 thinking (prefrontal cortex, slow, the part of thinking we're naturally lazy to engage) rather than System 1 (automated, instinctual, the optimized brain path for things we're used to), you'll most likely find that it is simpler, makes more logical sense (you're filtering things down naturally, like a sieve), and composes far better than SQL as complexity grows.

After you've internalized that, imagine the kind of developer tooling we can build on top of that logical structure.


> If you engage the syntax with your System 2 thinking (prefrontal cortex, slow, the part of thinking we're naturally lazy to engage) rather than System 1 (automated, instinctual, optimized brain path to things we're used to)

You might not have intended it this way, but your choice of phrasing is very condescending.


Re-reading it I can see how it could be perceived by some people as such; thanks for pointing it out. There's probably better phrasing, or adding more context could make it more palatable:

The goal was to explicitly tell people not to bother "just reading it" as one (and by one I mean myself and most people I know, surely there are exceptions) is naturally inclined to do unless something is particularly piquing our interest.

Without engaging in active, conscious effort, syntax that is different from what we're used to (especially something as established as SQL), where the changes aren't groundbreaking at first glance, can easily make us dismissive without realizing the benefits. And after seeing it too many times with all kinds of technologies that stray away from the familiar, I just want to prepare the reader so that their judgment can be formed with full use of their faculties rather than as a reflex response.


Edit: In my pre-coffee rush this morning I completely missed the grouping by role (which is not that much harder FWIW). This unfortunately invalidates my entire post as it was posted and I don't want to spread misinfo.


I don't think your alternatives actually solve the same problem. Your alternatives would give you the single most recently joined employee. The actual problem being solved is to find the most recently joined employee in each role.

You'd need to do some grouping in there to be able to get one employee per role instead of a single employee out of the whole data set.


Yeah you're correct, I caught that and edited my reply right as you responded.

Time willing I will provide an updated reply with fixed SQL.


As a test, I refactored a 500-line-ish analytical query that joins more than 20 tables with tens of complex CTEs, and I can say that this FROM-first syntax is superior to the legacy syntax in almost every single aspect.


> SELECT first won out originally: legibility and troubleshooting.

It's quite interesting to dive into the history of SQL alternatives in the '70s/'80s.


> Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

Also, tools can trivially tell DQL from DML by the first word they encounter, barring data-modifying functions (o great heavens, no!).


FROM order is, like, the least offensive and least wrong thing about SQL.

Bikeshedding par excellence.


Title should probably be changed, since the article is about using AI to convert a PDF to semantic HTML.


A surprising problem I'm seeing with maintaining a link blog is that articles from it occasionally get submitted to Hacker News, where people inevitably call them out as not being as appropriate as the source they are linking to - which is fair enough! That's why I don't tend to submit them myself.

This particular post quickly turned into a very thinly veiled excuse for me to complain about PDFs, then demonstrate a Gemini Pro trick.

In this case I converted to HTML - I've since tried converting a paper to Markdown and sharing in a Gist, which I think worked even better: https://gist.github.com/simonw/46a33d66e069efe5c10b63625fdab... - notes here https://simonwillison.net/2024/Aug/27/distro/


Have you seen gist.io?

If you replace `gist.github.com/<user>/<id>` -> `https://gist.io/@<user>/<id>`, you get a gist with nice typography.

https://gist.io/@simonw/46a33d66e069efe5c10b63625fdabb4e is the same gist you linked, but nicer to read


That's pretty neat! I like that it's run by a GitHub employee too (presumably as a side-project, but still) - makes me less nervous about the domain name blinking out of existence one day.


This reminds me of .NET's short-lived Linq to SQL;

There was a talk at the time, but I can't find the video: http://jaoo.dk/aarhus2007/presentation/Using+LINQ+to+SQL+to+....

Basically, it was a way to cleanly plug SQL queries into C# code.

It used this sort of ordering (where the constraints come after the thing being constrained); it needed to do so for IntelliSense to work.


"Short-lived"? LINQ is very much alive in the C# ecosystem.

And FROM-first syntax absolutely makes more sense, regardless of autocomplete. You should put the "what I need to select" after the "what I'm selecting from", in general.


LINQ yes, but they killed off the component not long after introducing it.


It was replaced by Entity Framework.


Linq to sql still lives


> This reminds me .NET's short lived Linq to SQL;

"Short lived"? Its still alive, AFAIK, and the more popular newer thing for the same use case, Linq to Enntities, has the same salient features but (because it is tied to Entity Framework and not SQL Server specific) is more broadly usable.


It was in 3.5 only.

If they've replaced it with something else in the last decade and a half, that does not mean that they didn't get rid of it, or that it wasn't short-lived.

https://learn.microsoft.com/en-us/dotnet/framework/data/adon...


Yeh. Linq to sql was a much more lightweight extension than EF, and was killed due to internal warring at MS.

Database people were investing a lot of time and energy on doing things “properly” with EF, and this scrappy little useful tool, linq to sql, was seen as a competitor.


I quite liked it in the 5 minutes it existed - it was just really easy to use.


LINQ is not the same as LINQ-to-SQL. The former is a language feature, the latter a library (one of many) that uses that feature.


Did you reply to the wrong person? Because I'm not the guy that didn't know that.


There is https://github.com/linq2db/linq2db which is LINQ to SQL reincarnated.

Of course there's EF Core too.


And NHibernate.Linq and Dapper.Extensions.Linq… Most ORMs in the ecosystem have at least one Linq support library, even if just a third-party extension.

Also, there are fun things that support Linq syntax for non-ORM uses, too, such as System.Reactive.Linq and LanguageExt: https://github.com/louthy/language-ext/wiki/How-to-deal-with...


The first piped query language I used was Nushell's implementation of wide-column tables. PRQL offers a similar approach, which I have loved dearly. It also maps to different SQL dialects. There is also a proposal to work on a type system: https://github.com/PRQL/prql/issues/381.

Google has now proposed a syntax inspired by these approaches. However, I am unsure how well it will be adopted. As someone new to SQL, nearly every DB seems to provide its own SQL dialect, which becomes cumbersome very quickly.

Whereas PRQL feels something like Apache Arrow which can map to other dialects.


As to the writer's problem with PDFs on the web: they aren't for reactive web app viewing on mobile phones. Not everything has to be. If you reeeeeeeally need to read that research paper, find a screen that's bigger than 3" wide.


I think his point is that Google is a web company. And a mobile phone company. And they publish a lot of stuff in a format that's basically optimized for print and kind of useless for anything else.

I did my PhD more than 20 years ago and it was annoying then to be working with all these postscript and pdf documents. It's still annoying. These days people publish content in PDF form on websites and mostly not in printed media. People might print these or not. Twenty years ago, I definitely did. But it's weird how we stick with this. And PDFs are of course very unstructured and hard to make sense of programmatically as well.

I bet a lot of modern day scientists don't actually print the articles they read anymore and instead read them on screen or maybe on some ipad or e-reader. Print has become an edge case. Reading a pdf on a small e-reader is not ideal. Anything with columns is kind of awkward to deal with. There's a reason why most websites don't use columns: it kind of sucks as a UX. The optimal form to deliver text is in a responsive form that can adapt to any screen size where you can change the font size as well. A lot of scientific paper layouts are optimized to conserve a resource that is no longer relevant: paper real estate. Tiny fonts, multiple columns, etc.

Anyway, I like Simon's solution and how it kind of works. It's kind of funny how some of these LLMs can be so lazy. The thing with the references being omitted is hilarious. I see the same with ChatGPT, where it goes out of its way to never do exactly as you asked and instead just gives you bits and pieces of what you ask for until you beg it to just please FFing do as you're told?! I guess they are trying to save some tokens or GPU time.


Why shouldn’t I read research papers on my phone? That’s where I read almost everything else.


Even when reading on the phone, I do not understand the complaint against the two-column format.

The one-column format is fine on a large monitor, but on a small phone I prefer narrower columns, because a wide column would either make the text too small or it would require horizontal panning while reading.

So I consider the two-column format as better for phones, not worse.


One of the most complex and battle-tested open source projects is essentially a rendering engine for semantic text that has supported reflowing text to fit the screen for decades. And now you’re seriously considering having to zoom in on a column, then scrolling all the way back up and right to the next column, then down to the footnotes at the bottom, then to a random figure, to be a solution?


Yes, I strongly prefer reading PDF documents with fixed layout instead of HTML or any other formats with reflowing text, including on small phone screens.

I frequently read documents with many thousands of pages, which also contain many figures and tables.

A variable layout, at least for me, makes the browsing and the search through such documents much more difficult.

I have never ever seen any advantage in having the text reflow to match whatever window happens to be temporarily used to display the text, except for ephemeral messages that I will never read again.

For anything that I will read multiple times, I want the text to retain the same layout, regardless of what device or window happens to display it. If necessary, I see no problem in adjusting the window to fit the text, instead of allowing changes in the text, which would interfere with my ability of remembering it from the previous readings.

I really hate those who fail to provide their technical documentation as PDF documents, being content to just have some Web pages with it.


I don't want reflowing text to fit the screen. Text has an optimal number of characters per line, and it's between 40 and 60 depending on who you ask. Lines longer than that hinder reading. Lines shorter than that are just inconvenient.

The usual two-column layout is because having 40 to 60 characters per line in a single column is wasteful of paper. That is a real issue. But the solution is to make the PDF page narrower. Almost nobody prints these documents anyways; there's no good reason they need to conform to legacy sizes like A4 or letter paper commonly found in office printers. Just choose A5 as the size. People who really need to print can fit two A5 pages on one A4 page, and people who view these documents on a phone screen will also find A5 more convenient.


I actually work on SQL Server, but I also write a lot of KQL queries which also work this way and I totally agree that the sequential pipe stuff is easier to write. I haven't read through the whole paper, but one aspect that I really like is that I think it's easier to guide the query optimization in this sequential style.


Is there any internal momentum for such changes in SQL Server?


Given how Entity Framework is quite ubiquitous as "the ORM of choice" for SQL Server and its usage of C# Linq, there's certainly external momentum, whether or not SQL Server devs themselves are paying attention to how the majority of their users are writing queries today.


I've been writing SQL for something like 25 years and always thought the columns being SELECTed should have come last, not first. Naming your sources before what you're trying to get from them to me at least makes much more logical sense. Calling aliased table names before I have done the aliasing is weird.

Also it would make autocomplete in intelligent IDEs much more helpful when typing a query out from nothing.


Looks just like writing sql using Ecto in Elixir:

"users" |> where([u], u.age > 18) |> select([u], u.name)

https://hexdocs.pm/ecto/Ecto.Query.html


Thought this too. The example queries look very much like Ecto statements. I miss the ergonomics and flexibility of Ecto when I use database wrappers on other platforms.


The next thing I would like is to define a function / macro that has a bunch of |> terms.

I pointed out that you can do this with shell:

Pipelines Support Vectorized, Point-Free, and Imperative Style https://www.oilshell.org/blog/2017/01/15.html

e.g.

    hist() {
      sort | uniq -c | sort -n -r
    }

    $ { echo a; echo bb; echo a; } | hist
      1 bb
      2 a

    $ foo | hist
    ...
   
Something like that should be possible in SQL!


It is, using table-valued functions (TVFs).

There's an example at the bottom of this file:

https://github.com/google/zetasql/blob/master/zetasql/exampl...
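
Roughly along these lines (a sketch of the idea, not the exact syntax from that file):

    CREATE TABLE FUNCTION BigCustomers(min_orders INT64)
    AS (
      FROM orders
      |> AGGREGATE COUNT(*) AS order_count GROUP BY custkey
      |> WHERE order_count >= min_orders
    );

    FROM BigCustomers(100)
    |> ORDER BY order_count DESC;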


That's cool, thanks!

What about scalar valued functions? :) So I can reuse an expression in a WHERE and so forth

(and I appreciate that HAVING can be generalized/removed)


I didn't see this the first time:

    GROUP AND ORDER BY component_id DESC;
Is this kind of syntax, combining grouping and ordering, really necessary in addition to the pipe operator? My advice would be to add the pipe operator and not get fancy adding other syntax to SQL as well.


It could be a custom ZetaSQL extension that leaked into the paper.


That is basically R with tidyverse.

  flights |>
    filter(
      carrier == "UA",
      dest %in% c("IAH", "HOU"),
      sched_dep_time > 0900,
      sched_arr_time < 2000
      ) |>
    group_by(flight) |>
    summarize(
      delay = mean(arr_delay, na.rm = TRUE),
      cancelled = sum(is.na(arr_delay)),
      n = n()
      ) |>
    filter(n > 10)
If you haven't used R, it has some serious data manipulation legs built into it.


An interesting thing to me about all these dplyr-style syntaxes is that Wickham thinks the group_by operator was a design mistake. In modern dplyr you can often specify a .by on an operation instead. I found switching to this style a pretty easy adjustment, and I think it’s a bit better. Example:

  d |> filter(id==max(id),.by=orderId)
I think PRQL were thinking a bit about ways to avoid a group_by operation and I think what they have is a kind of ‘scoped’ or ‘higher order’ group_by operation which takes your grouping keys and a pipeline and outputs a pipeline step that applies the inner pipeline to each group.


Given 10 more years dplyr syntax might resemble data.table's


My thoughts exactly, it even uses the same pipe syntax, though I do prefer `%>%`. I've been avoiding SQL for a while now as it feels so clunky next to the tidyverse


If anyone is interested in the theoretical background to the thrush combinator, a.k.a. "|>", here is one using Ruby as the implementation language:

https://leanpub.com/combinators/read#leanpub-auto-the-thrush

Being a concept which transcends programming languages, a search for "thrush combinator" will yield examples in several languages.


I found this [1] via this [2]. Seems like a good explanation. It doesn't exist on Wikipedia, though.

[1] https://github.com/raganwald-deprecated/homoiconic/blob/mast...

[2] https://stackoverflow.com/a/285973/88231


A key thing to keep in mind is that the thrush combinator is a fancy name for a simple construct. The semantics it provides is a declarative form of traditional function composition.

For example, given the expression:

  f (g (h (x)))
The same can be expressed in languages which support the "|>" infix operator as:

  h (x) |> g |> f
There are other, equivalent, constructs such as the Cats Arrow[0] type class available in Scala, the same Arrow[1] concept available in Haskell, and the `andThen` method commonly available in many modern programming languages.

0 - https://typelevel.org/cats/typeclasses/arrow.html

1 - https://wiki.haskell.org/Arrow_tutorial


We should really standardize a core language for SQL. Rust has MIR, and Clang is making a CIR for C/C++. Once we have that, we'll be able to communicate much better.

Right now, it's everyone faffing around with different mental models and ugly single-pass compilers (my understanding is that parsing --> query planning is not nearly as well separated in most DBs as parsing --> optimize --> codegen is in most compilers).


> We should really standardize a core language for SQL

Do you mean something other than ISO/IEC 9075:2023 (the 9th edition of the SQL standard)?


It costs 194 CHF to read. There is room for improvement.


A core language is a minimal AST without surface syntax (and thus no bikeshedding of that) that distills the surface language to its essence.


SQL is basically the list monad, with various quotients / refinements:

- Sometimes the order doesn't matter

- Sometimes there are functional dependencies

- Sometimes one knows the length of the list in question is 1 (foreign key constraints)


ANSI SQL is very much a thing, and you should strive to keep your queries as close as possible to standard SQL as your database engine allows, if you want those queries to be portable to other database technology in the future.


You might enjoy https://substrait.io/



I just want trailing commas allowed everywhere. I can't believe it's 2024 and we still have to deal with this crap. Humanity deserves better.

Syntax/DSL designers: if your language uses a separator for anything, please kindly allow trailing versions of that separator anywhere possible.


My big wish for SQL is for single row inserts to have a {key: value} syntax.


In ClickHouse you can do

    INSERT INTO table FORMAT JSONEachRow {"key": 123}
It works with all other formats as well.

Plus, it is designed in a way so you can make an INSERT query and stream the data, e.g.:

    clickhouse-client --query "INSERT INTO table FORMAT Protobuf" < data.protobuf

    curl 'https://example.com/?query=INSERT...' --data-binary @- < data.bson


MySQL has it without the braces.
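
e.g. (made-up table and columns):

    INSERT INTO users SET name = 'Ada', age = 36;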


This would condense lines of code by a lot and prevent a lot of dumb bugs.


I find this particular choice of syntax somewhat amusing, because pipe-notation-based query construction was something I ended up using a year ago when making an SQL library in OCaml:

https://github.com/kiranandcode/petrol

An example query being:

```
let insert_person ~name:n ~age:a db =
  Query.insert
    ~table:example_table
    ~values:Expr.[ name := s n; age := i a ]
  |> Request.make_zero
  |> Petrol.exec db
```


This feels like this should be in the official SQL standard and supported across a bunch of RDBMSes and understood by IDEs, libraries and frameworks.


Yeah, and then we will have two standards, given the popularity of the existing syntax.


Looking at the first example from PDF:

    FROM customer
    |> LEFT OUTER JOIN orders ON c_custkey = o_custkey
    AND o_comment NOT LIKE '%unusual%packages%'
    |> AGGREGATE COUNT(o_orderkey) c_count
    GROUP BY c_custkey
    |> AGGREGATE COUNT(*) AS custdist
    GROUP BY c_count
    |> ORDER BY custdist DESC, c_count DESC;
You could do something similar with Ryelang's spreadsheet datatype:

    customers: load\csv %customers.csv
    orders: load\csv %orders.csv

    orders .where-not-contains 'o_comment "unusual packages" 
    |left-join customers 'o_custkey 'c_custkey
    |group-by 'c_custkey { 'c_custkey count }
    |group-by 'c_custkey_count { 'c_custkey_count count }
    |order-by 'c_custkey_count_count 'descending
Looking at this, maybe we should add an option to name the new aggregate column (now they get named automatically) in the group-by function, because c_custkey_count_count, for example, is not that elegant.


Is there research on what is easier to read when you are sifting through many queries?

I like the syntax for reading what the statement expects to output first, even though I agree that I don’t write them select first. I feel like this might be optimizing the wrong thing.

Although the example is nice, it does not show 20 tables joined first, which will really muddle it.


The select list is meaningless without everything that follows. Knowing that a query selects "id, "date" tells you nothing without knowing the table, the search criteria, etc.


That's one benefit of the SQL naming convention, which would use names like e.g. customer_id, invoice_date, etc. Also, when joining tables (depending on the SQL dialect), that can allow a shortcut syntax, JOIN ... USING (field_name), if the field name in the two tables is the same.
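
For example (illustrative names, following that convention):

    SELECT customer_name, invoice_date, amount
    FROM customer
    JOIN invoice USING (customer_id);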


I really wish SQL used "RETURN" instead of "SELECT" (like in XQuery):

1. Calling it "RETURN" makes the fact of its later order of execution (relative to FROM etc) less surprising.

2. "RETURN RAND()" just reads more naturally than "SELECT RAND()". After all, we're not really "selecting" anything here, are we?

3. Would also eliminate any confusion with the selection operation in relational algebra.
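
A hypothetical sketch of what that could look like (not any existing engine's syntax):

    FROM products
    WHERE price > 10
    RETURN name, price;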


That works if you name fields that way; but even names like accountId and createDate may not be meaningless in the context you are looking at.


There's honeysql library in Clojure, where you define queries as maps, which are then rendered to SQL strings:

    {:select [:name :age]
     :from {:people :p}
     :where [:> :age 10]}
Since maps are unordered, this is equivalent to

    {:from {:people :p}
     :select [:name :age]
     :where [:> :age 10]}
and also

    {:where [:> :age 10]
     :select [:name :age]
     :from {:people :p}}


These can all be rendered to 'SELECT... FROM' or 'FROM .. SELECT'.

Queries as data structures are very versatile, since you can use the language constructs to compose them.

Queries as strings (FROM-first or not) are still strings which are hard to compose without breaking the syntax.


> GROUP AND ORDER BY component_id DESC;

This feels like too much. GROUP BY and ORDER BY are separate clauses, and creating a way to group (heh) them into one clause adds cognitive load, especially when there is an effort to reduce the overall effort to parse the query in your mind (and to provide a way for an IntelliSense-like system to make better suggestions).

    GROUP AND ORDER BY x DESC;
vs

    GROUP BY x;
    ORDER BY x DESC;
This long form is one word longer, but it is easier to parse in your mind, and it doesn't introduce unneeded diffs when changing either the GROUP BY or the ORDER BY column reference.


I love the idea but something in my brain starts to itch when I see that pipe operator

     |>
What IS that thing? A unix pipe that got confused with a redirect? A weird smiley of a bird wearing sunglasses?

It'll take some getting used to, for me...


It's like other "arrow" digraphs in common programming languages today, such as =>. You can picture it as a triangle pointing to the right.

Many Programming Ligature fonts even often draw it that way. For instance it is shown under F# in the Fira Code README: https://github.com/tonsky/FiraCode


They considered ditching `|>` or using `|` but unfortunately there's a bunch of syntactic ambiguity.


> Rationale: We used the same operator name for full-table and grouped aggregation to minimize edit distance between these operations. Unfortunately, this puts the grouping and aggregate columns in different orders in the syntax and output. Putting GROUP BY first would require adding a required keyword before the AGGREGATE list.

I think this is bad rationale. Having the columns in order is much more important than having neat syntax for full-table aggregation.


Why even add the pipe operator?

If the DB engine is executing the statement out of order, why not allow the statement to be written in any order and let itself figure it out?


Aggregations could be non-commutative in general case and order is important. Filters before and after grouping are also tied to a particular place in the pipeline.
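
For example, the same WHERE does different work depending on where it sits in the pipeline (a sketch, names invented):

    FROM sales
    |> WHERE region = 'EU'  -- filters rows, like WHERE
    |> AGGREGATE SUM(amount) AS total GROUP BY product
    |> WHERE total > 1000;  -- filters groups, like HAVING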


> Why even add the pipe operator?

To make it easier for humans to read/write the queries.


I haven't seen it mentioned yet, but it reminds me of PQL (not PRQL): https://pql.dev

It's inspired by Kusto and available as an open-source CLI. I've made it compatible with SQLite in one of my tools, and it's refreshing to use.

An example:

  StormEvents
  | where State startswith "W"
  | summarize Count=count() by State


For autocomplete, FROM first makes a lot of sense. For readability, SELECT first makes more sense because the output is always at the top.


People here are describing many projects that already have something resembling this syntax and concept, so I'll add another query language to the pile too: Influx's now-mostly-abandoned Flux. Uses the same |> token and structures the query descriptions starting with an equivalent of "FROM".


This is why I like tools like DataStation and hex.tech. You write the initial query using SQL, then process the results as a dataframe using Python/pandas. Sure, mixing pandas and SQL like that is not good for data pipelines, but for exploration and analytics I have found this approach to be enjoyable.


Yes, it's very convenient to be able to use SQL with your massively parallel commercial database (Oracle, Snowflake, etc.) and then again with the result sets (Pandas, etc.). Interestingly, it's a concept that was implemented 35 years ago in SAS (link below) but is just now gaining traction in today's "modern" software (e.g., via DuckDB).

USING THE NEW SQL PROCEDURE IN SAS PROGRAMS (1989) https://support.sas.com/resources/papers/proceedings-archive... The SQL procedure uses SQL to create, modify, and retrieve data from SAS data sets and views derived from those data sets. You can also use the SQL procedure to join data sets and views with those from other database management systems through the SAS/ACCESS software interfaces.


Wow, that is really cool. One of my theses is that DuckDB will be bought by GCP (BigQuery), and Polars will be bought by Databricks (or AWS). The thesis is based on the idea that Snowflake bought the Modin platform. The movement in DE seems to be towards data warehouse platforms streaming data (views/results) down to dataframe platforms (Modin, Polars, DuckDB), which then stream down to their BI platforms. Because these database platforms are designed as OLAP platforms, this approach makes sense.


This is like Elixir's pipe operator [1]! I use it daily (with Ecto) and it's epic!

[1] https://elixirschool.com/en/lessons/basics/pipe_operator


That's just LINQ from C#, except Google wants to make it a SQL standard...


Recent and related:

Pipe Syntax in SQL - https://news.ycombinator.com/item?id=41338877 - Aug 2024 (219 comments)


> It's been 50 years. It's time to clean up SQL.

Is it though?

Are we trying to solve the human SQL parsing and generating problem, or is there some underlying implementation detail that benefits from pipes?


Do manually-generated SQL strings have a place outside of interactive use? I use them in my small projects but I wonder if a query builder isn't better for larger systems.


Query building for an analytics database is impossible.

These queries are always hand-rolled because you pay the analysts to optimize them.


Proposing SQL replacements suggests not understanding the magnitude of the success of something so old.

SQL is fine.

SQL has been the state of the art for db queries for 40 years.

And it will continue to be when we all retire.


They’re a bit late to the game; there are at least a dozen such popular query languages. LINQ and KQL come to mind, but there are many others…


Simon: Please keep pushing, and mute nothing.


I like this. Reminds me of pandas.


For the sake of God, please fucking stop inventing new pipe languages.

LINQ: exists

Splunk query language: exists

KQL: exists

MongoDB query language: exists

PRQL: exists


LINQ, Splunk, and KQL are all proprietary. For the purposes of setting new standards, they might as well not exist.

PRQL is the only real entrant in your list when it comes to adding a pipelining syntax to a language for relational queries in a way that others can freely build on.


SQL parsers: exists.

The paper clearly describes the goal: add a pipe syntax into existing systems with minor changes and be compatible with existing SQL queries.

BTW: LINQ is an AST transformer, not a language per se tied to a particular platform. None of the existing DBs allow you to use it directly.


Isn't this the same syntax (or very similar to) Apache Beam?


Is it just me, or does this seem anachronistic? Like, this is a conversation I expected to blow up 20 years ago. Better late than never.


This reads like an article written by someone with ADHD who started writing about a scientific paper but got distracted by some random thing instead of reading it.



int *ptr;

// but let's change it to *int ptr;

// because the pointer symbol is more logical to write first

Please can we solve a real problem instead?


Wait, is this post about SQL or PDF...


Now wondering if there is any relation to "Structural versus Pipeline Composition of Higher-Order Functions (Experience Report)":

https://cs.brown.edu/~sk/Publications/Papers/Published/rk-st...


I have to honestly say that I like PDFs; they always work and don't fail without JS.



