To put it simply:
Like Copilot, it has autocomplete and a chat interface. Unlike Copilot, it focuses on generating a plan for the full coding task, having the autocompletion work according to that plan, and checking the quality of the code.
Like agents, it tries to help you complete a full task, yet it does that in tandem with you, working inside your favorite IDE and writing the code with you. In addition, there is a focus on code quality, testing, and fetching relevant context from your codebase.
I completely agree. I switched to neomutt three or four years ago and there are a few things with text-based emails that really accelerate my workflow.
1. Fewer distractions.
2. Scripting keyboard shortcuts that act on emails - creating a to-do from an email with a single function-key tap, for example, or adding a company to a CRM with another (there's a sketch of this after the list).
3. Being able to delete emails with a Regex filter, which is really important for mailing lists.
4. Much lower latency, which, though it may seem trivial, Google's research has shown is important to great user experiences.
5. Ability to use neovim within the email client.
6. Local search using notmuch, which again has much lower latency than Google, even for very large mailboxes.
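For item 2, the mechanics are roughly: bind a function key to neomutt's `<pipe-message>` so the raw message gets piped into a small script. Below is a minimal sketch of such a script; the script name, file path, and to-do format are invented for illustration and are not the commenter's actual setup.

```python
#!/usr/bin/env python3
"""Append a to-do line for the email piped in on stdin.

Intended to be invoked from a neomutt macro via <pipe-message>;
paths and formats here are placeholders only.
"""
import sys
from email import policy
from email.parser import BytesParser
from pathlib import Path

TODO_FILE = Path.home() / "todo.txt"  # hypothetical to-do file location


def main() -> None:
    # Parse the raw RFC 5322 message that neomutt pipes to stdin.
    msg = BytesParser(policy=policy.default).parsebytes(sys.stdin.buffer.read())
    subject = msg.get("Subject", "(no subject)")
    sender = msg.get("From", "(unknown sender)")
    # Append a single to-do line; a CRM variant would call an API here instead.
    with TODO_FILE.open("a", encoding="utf-8") as fh:
        fh.write(f"TODO: follow up on '{subject}' from {sender}\n")


if __name__ == "__main__":
    main()
```

In a muttrc this might be wired up with something like `macro index <F2> "<pipe-message>python3 ~/bin/mail_to_todo.py<enter>"`, with a second macro piping to a different script for the CRM case.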
Author here: I agree with you on benchmarks. It's hard to compare different databases well.
MotherDuck is an analytics database optimized for reads, with columnar compression. Postgres is more of a transactional/general-purpose database. Tuning it well for analytics would surely improve performance.
Most people won't tune a DB with custom indices, though, because it can be hard, so purpose-built solutions like this offer value in those cases.
The article isn't purely about performance but also ease of use. MD is an in-process database, so getting started with it is very easy (see the sketch below).
Glad we agree on the power within a laptop! It's underappreciated.
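To make "in-process and easy to start" concrete, here is a minimal sketch using plain open-source DuckDB (the engine MotherDuck builds on) from Python; the file name and column names are invented for illustration.

```python
# pip install duckdb  -- there is no server to provision or connect to
import duckdb

# DuckDB runs inside the Python process, so "setup" is just an import.
con = duckdb.connect()  # in-memory database; pass a file path to persist

# File and columns below are illustrative placeholders.
rows = con.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM read_csv_auto('orders.csv')
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""").fetchall()

for customer_id, total_spend in rows:
    print(customer_id, total_spend)
```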
How is this easier than using Snowflake/BigQuery, etc.?
The situation where it seems useful is if I'm using a Jupyter notebook and I want to pull data in, but it's too much to fit into memory and SQLite is too slow (roughly the workflow sketched below)... which seems like a pretty specific situation to be in? And it's not data that's part of some business process that needs frequent updating?
Or am I just missing the point entirely? I see another post about huge Parquet datasets - but how do I get them? Is everyone at my company comfortable with me moving them onto my machine, even if I can? Is the dataset so big it won't fit into memory? Can't I just connect my Jupyter notebook to Snowflake anyway? Then I'm only pulling down the data I want instead of all the data.
I guess maybe the real question is - exactly who is the buyer of this product, how much will they spend on it, and who is the user?
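For the notebook scenario above (data too big for memory, SQLite too slow), the workflow looks roughly like this with DuckDB; the Parquet glob and column names are placeholders, and the point is that only the small aggregate result comes back into pandas.

```python
import duckdb

# DuckDB scans the Parquet files lazily and can spill to disk for
# larger-than-memory workloads, so the full dataset never has to fit
# in notebook memory. Glob pattern and columns are illustrative only.
daily = duckdb.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM read_parquet('data/events/*.parquet')
    GROUP BY event_date
    ORDER BY event_date
""").df()  # only the small aggregate is materialized as a pandas DataFrame

print(daily.head())
```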
Looking into setting up a Lakehouse/Datalake over the coming quarters. I'd say the biggest appeal to us is that Iceberg can handle schema drift/evolution and is a little more open.
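On schema evolution: in Iceberg it is a metadata-only change, so existing data files stay readable. Here is a rough sketch with pyiceberg, assuming a catalog named "default" is already configured and a table "analytics.events" exists; both names are hypothetical.

```python
from pyiceberg.catalog import load_catalog
from pyiceberg.types import StringType

# Catalog and table names below are placeholders, not a real deployment.
catalog = load_catalog("default")  # reads connection details from pyiceberg config
table = catalog.load_table("analytics.events")

# Adding a column only rewrites table metadata; existing data files are
# untouched, and old rows simply read as NULL for the new column.
with table.update_schema() as update:
    update.add_column("device_type", StringType())
```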
I love TUIs and I'm looking for a TUI calendar; being able to use grep or rg on it would be great. But there's one feature they all lack: sending availability to someone. That's where Calendly and Vimcal shine.