Maybe we should get rid of databases (or limit them to reporting).
Back in the day they were mandatory: memory was expensive, so you had to access data from disk and needed something that made that reasonably efficient. Now memory is so cheap that for quite a lot of business applications it would be feasible to keep everything in memory, and SSDs with GB/s-level read speeds would let you restore it all back to memory in reasonable time if the cluster goes down.
I think the whole database thing also caused major issues on the object-oriented side and actually stopped people from really using OO (or getting the benefits out of it). Instead of building intelligent, smart objects, it is easy to end up with objects that are just containers for data.
There have probably been attempts at this; one old project I remember is Prevayler[1].
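For context, the idea Prevayler popularized is usually called "system prevalence": every mutation is a serializable command appended to a journal, and the in-memory state is rebuilt on restart by replaying the journal (plus periodic snapshots). A toy sketch of that idea in Python, with all names (Journal, AddItem) being illustrative rather than Prevayler's actual Java API:

```python
import pickle

class Journal:
    """Append-only log of commands; replaying it rebuilds the state."""
    def __init__(self):
        self.log = []  # a real system would use an append-only file on disk

    def execute(self, state, command):
        # Persist the command *before* applying it, so a crash mid-apply
        # can still be recovered by replaying the journal.
        self.log.append(pickle.dumps(command))
        return command.apply(state)

    def replay(self, initial_state):
        state = initial_state
        for raw in self.log:
            state = pickle.loads(raw).apply(state)
        return state

class AddItem:
    """A command: a small serializable object describing one mutation."""
    def __init__(self, item):
        self.item = item
    def apply(self, state):
        return state + [self.item]

journal = Journal()
state = []
state = journal.execute(state, AddItem("a"))
state = journal.execute(state, AddItem("b"))

# After a crash, the in-memory state is rebuilt from the journal:
recovered = journal.replay([])
```

The tradeoff is that queries run at RAM speed against plain objects, while durability comes entirely from the command log and snapshots.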
SQL and an ORM are a superb fit for a great many applications, which happens to include a large share of real-world applications. The four-letter abbreviation CRUD comes to mind, as does reporting.
Most kinds of NoSQL remove some benefits of the relational model without really giving you the benefits of object persistence. I never felt a great need to use them; quite a few of these databases also had all sorts of issues that did not exactly inspire trust for serious applications. I'm not working on software where that's an OK thing to have (though for others it might be an acceptable tradeoff).
The real-deal persistent object databases are a completely different beast from both NoSQL and the relational model (only some NoSQL concepts, like explicit application-level indexing, carry over). Two important things to note about them: 1) there are not many of them, and 2) using them properly already requires proper OO technique. Failing 2) will leave you with an unmaintainable mess. Certain kinds of applications benefit greatly from them, and there they also tend to win on both developer experience and application performance compared to forcing a huge impedance mismatch down the throat of an ORM -- which pretty much always means the underlying relational DB is used in very anti-patternish ways, resulting in poor performance regardless of how good the DB actually is.
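"Explicit, application-level indexing" means the application maintains its own lookup structures alongside the objects, rather than declaring indexes for the database to maintain. A minimal sketch of what that looks like, with all names (CustomerStore, by_city) being illustrative:

```python
class CustomerStore:
    """In-memory store where the application maintains its own indexes."""
    def __init__(self):
        self.by_id = {}
        self.by_city = {}  # explicit index, kept up to date by application code

    def add(self, customer_id, name, city):
        record = {"id": customer_id, "name": name, "city": city}
        self.by_id[customer_id] = record
        # Updating the index is the application's responsibility;
        # forgetting to do so here is the classic failure mode.
        self.by_city.setdefault(city, []).append(record)

store = CustomerStore()
store.add(1, "Alice", "Oslo")
store.add(2, "Bob", "Oslo")
store.add(3, "Carol", "Bergen")

# A "query" by city is just a dict lookup, not a query language:
oslo_customers = store.by_city.get("Oslo", [])
```

This is also why point 2) above matters: with no query planner to hide behind, sloppy object design shows up immediately as missing or stale indexes.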
The idea is not to query things from the persistent store. Instead, you keep all the data in memory; the persistent store is only there in case your server goes down. With OO, there would be one big object graph which you navigate using the facilities of your programming language: for example, take a list of things and pass it through a filter function to pick out what you want.
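A minimal sketch of what "the language is the query language" looks like in Python; the Customer/Order classes are invented for illustration, not taken from any library:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    item: str
    total: float

@dataclass
class Customer:
    name: str
    orders: list = field(default_factory=list)

# The "database" is just an ordinary dict of objects held in RAM;
# persistence would happen behind the scenes (journal, snapshot, etc.).
customers = {
    1: Customer("Alice", [Order("widget", 30.0), Order("gadget", 120.0)]),
    2: Customer("Bob", [Order("widget", 15.0)]),
}

# A "query" is a plain filter expression over the object graph:
big_orders = [o for c in customers.values()
                for o in c.orders if o.total > 100]

# Navigation is attribute access: from a customer straight to their orders.
alice_items = [o.item for o in customers[1].orders]
```

No SQL, no mapping layer: filtering, joining and navigating are ordinary language constructs over live objects.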
My feeling is that traditional business apps don't have that much need for random searches over huge amounts of data. It's more about taking hold of one piece of the graph and then navigating to related objects (a dummy example: search for a customer, then start looking at that customer's orders).
I'm not familiar with Smalltalk, but I'm sure there were ideas about persistence other than just an ORM and a relational database (though it could be those ideas have been proven bad over the years). I just find it a bit strange that we have been building these things pretty much the same way for the last 20+ years while the hardware has changed so much. Even on a fairly small corporate hardware budget you are looking at half a terabyte of RAM and tens of cores in a single server.
It's the basis of most Zope apps, including Plone, but also of many other applications; there's not much talk about it because it just works. As in the comment I made below: if it's a good fit, the ZODB is quite literally worth every LoC in gold.
The database itself doesn't do that much (apart from giving you transparent object/application persistence, transactions and MVCC :), as one would expect.
The Zope people did an excellent job of modularising it (in other words, it is extremely modular by design), so there are a bunch of packages commonly used around it (e.g. BTrees, zodburi, often either ZEO or RelStorage, and things like zope.container).
(Historically it's also very interesting: development started in 1997! The revision history is an interesting read too; many familiar names pop up, including GvR.)
[1] http://prevayler.org/