Hacker News new | past | comments | ask | show | jobs | submit login

Appreciate your insights. A few comments/responses:

1. Agree on the introspection point, but it's worth noting that the future of LLMs might involve self-awareness capabilities, which could provide an introspective mechanism similar to SQL's transparency. The aim would be to build upon this middle layer, not blindly supplant it.

2. While 3NF, BCNF, and other normalization forms have served us well, they are essentially tools to manage imperfections in our storage and retrieval systems. LLMs can be trained to understand these nuances intrinsically. Additionally, database theory has evolved, with advancements like distributed databases, graph-based models, and NoSQL. So it's not entirely outside the realm of possibility that we can pivot and adapt to new paradigms.

3. The "translational losses" referred to the semantic disconnect between natural-language queries and their SQL representations. While SQL is unambiguous, the leap from a user's intent to the SQL formulation often introduces errors or inefficiencies. LLMs can be trained to improve over time, constantly refining based on feedback loops.
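The feedback loop in point 3 can be sketched roughly like this. Here `generate_sql` is a stand-in for a hypothetical LLM call (not any real API); the idea is simply that the database's own error message is fed back as the refinement signal on the next attempt:

```python
import sqlite3

def query_with_feedback(conn, generate_sql, question, max_attempts=3):
    """Ask the (hypothetical) generator for SQL; on failure, retry with
    the database error message passed back as feedback."""
    feedback = None
    for _ in range(max_attempts):
        sql = generate_sql(question, feedback)
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as exc:
            feedback = str(exc)  # e.g. "no such column: nam"
    raise RuntimeError(f"no valid SQL after {max_attempts} attempts: {feedback}")
```

This is a minimal sketch of the loop, not a claim about how any production text-to-SQL system works; real systems would also validate the query against the schema and the user's intent, not just against parse/execution errors.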

I'm not arguing that SQL or databases as we know them are obsolete today, just advocating for more imaginative exploration of where the tech is headed.




Your answer to question 2 tells me that you haven’t read Codd’s seminal paper.



