
Perfect timing, I just added Lua integration to our product (my bio has details) for an AI agent to run code on.

Cannot wait to see Lua come back in full force. I had used it ages ago to build World of Warcraft plugins, then came back with Roblox, then back again for AI.


> Cannot wait to see Lua come back in full force.

I also recently released a server for Server-Sent Events [0] that is programmable with Lua. Overall it's been a great experience. The Rust mlua-rs crate [1] is trivially easy to integrate.

[0] https://tinysse.com/

[1] https://github.com/mlua-rs/mlua
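
For a flavor of how little glue code the embedding takes, here's a minimal sketch with mlua (the greet function and the feature flags it mentions are purely illustrative, not taken from either project):

    // Cargo.toml needs mlua with a Lua-version feature, e.g. features = ["lua54", "vendored"].
    use mlua::{Lua, Result};

    fn main() -> Result<()> {
        let lua = Lua::new();

        // Expose a Rust function that Lua scripts can call.
        let greet = lua.create_function(|_, name: String| Ok(format!("hello, {name}")))?;
        lua.globals().set("greet", greet)?;

        // Run a Lua snippet and read its result back into Rust.
        let out: String = lua.load(r#"return greet("world")"#).eval()?;
        println!("{out}");
        Ok(())
    }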


Thank you for this; it's amazing to get a glimpse into how they come up with the songs!

Especially back then, in the time of vinyl and cassettes (browsing music wasn't exactly as easy as pressing "play"), it shows the amazingly deep musical culture of these artists. The samples they use are from all over the place, and their songs are often built around a handful of seconds from obscure B-sides.

not about Daft Punk but...

> The samples they use are from all over the place

> built around a handful of seconds

have you seen/heard this?

https://www.youtube.com/results?search_query=mondovision

the original was at www.giovannisample.com, which has since disappeared...


What's the latest and the best so far? Are they using GPGPU? Is quantum computing there yet, or would it help? Heuristics and sampling?


GPGPU is definitely mainstream for large-scale quantum and molecular simulation. Quantum computing might help speed up electronic structure calculations, but my impression is that it's still in its infancy.

To give a sense of the scale of this problem, the largest frontier simulations I'm aware of are around the trillion-atom scale (on tens of thousands of GPUs [0]).

Based on a quick web search, a C. elegans cell is between 3 and 30 microns in diameter, so if we assume we can count atoms using the density of water, an all-atom simulation of a single neuron would need between 5e11 and 5e14 atoms. C. elegans has 302 neurons, so simulating the full neural network would be 2-5 orders of magnitude larger than current frontier simulations. Honestly, that's more doable than I thought it would be, though all-atom simulation of a full organism still seems quite out of reach.
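
If anyone wants to reproduce the envelope math, here's a rough sketch; the spherical-cell shape and the ~1e23 atoms per cm^3 figure for water are my own assumptions, and it lands within a factor of a few of the range quoted above:

    // Back-of-envelope only: treat the cell as a sphere of liquid water,
    // ~3.3e22 molecules/cm^3 * 3 atoms per molecule ~= 1e23 atoms/cm^3.
    fn atoms_in_sphere(diameter_um: f64) -> f64 {
        let r_cm = diameter_um * 1e-4 / 2.0; // microns -> cm
        let volume_cm3 = 4.0 / 3.0 * std::f64::consts::PI * r_cm.powi(3);
        volume_cm3 * 1e23
    }

    fn main() {
        for d in [3.0, 30.0] {
            // prints roughly 1.4e12 and 1.4e15 atoms per neuron
            println!("{d} um neuron: {:.1e} atoms", atoms_in_sphere(d));
        }
    }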

This is all with classical force fields. Doing this simulation at the electronic structure level is much, much harder with our current modeling capabilities.

0: https://www.mrs.org/meetings-events/annual-meetings/archive/...


There are also interesting custom-made machines for molecular simulation that don't rely on GPGPUs and are significantly faster, e.g.:

- https://arxiv.org/abs/2405.07898
- https://www.psc.edu/resources/anton/


is there any reason to believe that, besides highly reductionistic and repetitive systems like crystals, quantum computing can compute quantum properties of molecules?

it seems to me "the quantum computer you seek" is the molecule + the medium (especially the medium) itself


So first I think I might need to apologize for some jargon collision - my background is mostly material simulation, and when I say “quantum simulation” I mostly mean using classical algorithms to solve the quantum mechanical wave equation describing a material or molecule.

I don’t pretend to have any particularly deep insight into quantum algorithms for chemistry, but [0] is a really nice review. It seems like there are a lot of possibilities for simulating general molecular and materials systems on quantum computers. The holy grail would be solving the exact quantum mechanical wave equation in sub-exponential time and space complexity. I don’t know how feasible that is, but it seems like people are making progress using quantum algorithms to accelerate approximate quantum simulation [1].

Back to all-atom C. elegans: I think quantum computing is more about accurate and scalable electronic structure modeling, and simulating enormous systems like this will still require fitting classical (meaning electrons are implicit) force fields and running them at scale for the foreseeable future. A lot of this is space complexity - I'm not sure how a quantum computer could do atomic simulations with sublinear scaling of qubits in the number of atoms being simulated, and we're in the very early days of scaling quantum computers up.

0: https://arxiv.org/abs/1812.09976

1: https://arxiv.org/abs/2307.07067


thanks for the reference.

yes. you nailed my point - that you will have to fit classical or quasi-classical fields, which is liable to require scads of qubits just to get close. qubits are just not "designed" to do that sort of thing.

in any case we ~solved protein folding heuristically and not using fields, so i shouldn't be too pessimistic about quantum compute helping eventually.


Heh, I was actually building one. Haven't considered the battery... Are the web audio APIs bad, or are you forced to use the CPU? I guess with WebGPU it may be easier?


I think on iOS you need access at the CoreAudio level if you want to be efficient, i.e. fill audio buffers on a high-priority thread with some lower-level static language.


There's a way to download all of your Alexa requests. I recommend it to everyone. It was interesting and horrifying to get literally all of them, from day 1. I noticed how tired I sound in the mornings or evenings. I started understanding patterns of my thoughts and needs. The Alexa went to the bin quickly after that session of exploration and insight.


Link for those that are interested: https://www.amazon.com/hz/privacy-central/data-requests/prev...

Be advised it's not instant.


Mine is a whole list of "weather" and "set timer for 4 minutes".


Heh, you can do the same with your Google searches. Equally horrifying, I suppose.


Where does Google offer this?



> The Alexa went to the bin quickly after that session of exploration and insight.

Why? It sounds like it was really interesting and valuable to observe those patterns.


Presumably because it is a privacy hazard to have someone else storing that kind of data about you


Exactly.


Precisely because it _was_ so interesting and valuable to observe those patterns - for the corporation observing them.


Exactly.


There are competitors, even open-source ones.


These are not viable options for the vast majority of users. Most people don't have a clue how to set up open-source options, let alone set them up with usable hardware.

The average consumer wants out of the box solutions that don't require a degree in Computer Science to use.


Home Assistant is getting far easier to set up than you might expect, especially because they now do in fact have out-of-the-box devices. It's not quite as ridiculously simple as the commercial options, not quite yet, but they're rapidly improving and it won't be long until they're better than Amazon Alexa/Google Home/other commercial solutions.


I am relatively tech-savvy and recently installed HA in a VM on my media server, and the thing was just a massive pain in the arse, particularly trying to migrate Thread devices from Apple Home to HA.

Sure things might be getting easier but they’re certainly not easy.


Just to chip in with a plug for Home Assistant. I am really not very techy at all, but so far I have used the out-of-the-box HA Green version and:

- installed waterproof exterior socket, remotely controllable
- installed various interior sockets
- installed smart thermometer to control our little plant propagator

So far it seems to be a case of checking that the thing you are going to buy has a working HA integration (they seem to be added on a fairly frequent basis) and then just adding it to the network. The only vaguely difficult thing I had to do was log in to my router homepage and change the WiFi mode to allow the exterior socket to connect.

I'd much rather just not use Amazon/Google/etc where possible, as I don't like the feeling of being used.


What are these "out of the box devices"? I looked into things a couple of years ago, and back then it was all too much effort to set things up and keep things running and integrated, so I just went with Smart Life stuff from AliExpress. But would love to have Home Assistant if it means I don't need to spend weekends just reading docs, pairing, setting things up, connecting stuff...


Look at Home Assistant Green [0]. They've also got a smart speaker as of just recently [1], although it's still a "preview edition". The prices seem comparable to other similar smart home devices, IMO.

[0] https://www.home-assistant.io/green

[1] https://www.home-assistant.io/voice-pe/


For the WiFi Smart Life stuff, you can use the official cloud-based integration or, if you want local control, the unofficial tuyalocal. The official integration is really easy to use, but if your internet connection drops you can't control your devices, so I prefer tuyalocal. It still requires adding the devices to the Smart Life app once; then you add a device from the addon by scanning a QR code with the app. Once this is done you have local control over the device.

Zigbee devices require more initial setup: you have to buy a dongle and install the Zigbee2MQTT addon and the MQTT integration. But once this is done, adding a device is a really simple process: you put the device into pairing mode, allow pairing for 90 seconds on the Zigbee2MQTT page, and rename your device to something useful.


I've got HA set up (nearly 2 years now, with a whole host of things connected: Bluetooth, WiFi, iOS devices, Zigbee, etc.) and I think I'm only just now getting to the point of two weekends' worth of reading docs (primarily because their documentation seems to be written by developers rather than technical writers). Most of the time I've spent tinkering with HA was modifying their embedded `mastodon.py` to make it work with GotoSocial (but I think someone upstreamed a fix for that and it's no longer required).


They're already better than commercial solutions when it comes to device/service support and complex automations.

But they're missing opinionated defaults; you still have to roll your own home/away/vacation solution. Creating a dashboard requires you to understand the meta of Home Assistant, which takes a lot of time.

People ask "should I get a Pi or a NUC" every single day on the subreddit. I am happy with my 2000-line configuration file, not counting scripts and automations. But it won't be easy for someone who is not tech-savvy.


Home Assistant is a nightmare to set up. Even with their hardware, you need to learn a whole new vocabulary, and God help you if you stray off the happy path.

If HA (which is a wonderful project) is your example of usable OSS software, then your bar is set lightyears away from what actual consumers need.


At no point did I say it's usable by the average, non-tech-inclined user. I said it's getting much better, quickly. It absolutely still needs work to replace something like what Amazon or Google have.


I like your confidence in the competitors. Which ones do you recommend?

I need a timer, smart home integration (turning things on and off), playing songs and radio, and announcements to my other devices. And the setup should not be a month-long side project.

How much will it cost me to replace Alexa in at least 5 rooms...


Home Assistant. Sure, non-tech people might have an issue setting it up today (it's easy and getting easier, but it's not turn-key easy yet), but for you personally, this shouldn't be an issue.

Assuming you have a spare Raspberry Pi or some other compute you can dedicate to it, replacing Alexa in every aspect except the microphones is at most a couple hours of installing, configuring and testing stuff. I don't personally know how things are on the market with replacing the always-on microphones in every room, but ignoring that (let's assume for a moment you're fine with using either a phone or a smartwatch as voice I/O), you get:

- A better and more capable integration with smart home than anything on the market;

- A chance to pick whatever LLM you want to power your logic (just bring your own API key, ofc.), which instantly makes it much better than Google's Assistant, Siri and Alexa; this has been the case for around a year now, and the Big Companies are still playing catch-up with the simple "just feed it to GPT-4 / Claude along with some context and tools, and let it do what you want" approach.

- You can configure the activities whichever way you like, expose whichever smart devices you like, and you don't have to speak brands anymore. No more "Hey ${brand 1}, use ${brand 2} to play ${brand 3} on ${brand 4}" - you can just say "Please play whatever in the living room" and it just works.

(In my case, some of the most frequent commands are off-hand lines like "warm up the kids' room a bit, please", and "kill the ACs", or any variation that rolls off the tongue best. Claude knows what to do with zero config. Home Assistant alone cut the time to operate ACs from 2 minutes to 5 seconds (cold-start) relative to the vendor app; running things by voice from a watch is just a cherry on the cake.)

- If you're on Android, you can (and, again, could for around a year now) expose your phone to Home Assistant; setting the HA app as your assistant + coupling it with Tasker lets you also replicate the on-phone feature of commercial assistants, but better, because LLMs. It's smarter and sends less sensitive data to iffy cloud services (you control where STT and TTS happen).

- Timers, announcements, weather and such you can obviously also handle through Home Assistant. The defaults should be enough for this (you might need to "add weather integration", "add timer integration", etc. - a couple of clicks in the UI each). HA is simple by default, but you can also do more advanced stuff, at any complexity level between this and arbitrary code execution, through no-code, low-code (e.g. NodeRED) or yes-code means.

Going back to the topic of microphone arrays - I didn't look into it much. There are DIY solutions (with DIY quality of listening, which may be OK depending on the environment; almost 2 decades ago, I got a lot of mileage out of a cheap microphone soldered to a 2 m cable and glued to the side of the wardrobe, plus the Microsoft Speech API on the PC). I think I recall some people selling packaged microphone arrays, and I wouldn't be surprised if you could reuse Alexa hardware for the I/O part. But I honestly don't know. I'm fine with my phone and watch for I/O at the moment.


The microphones and speakers are what I care about. Alexa is the perfect hands-off universal remote + podcast speaker.

Is there a way to flash the Echo hardware to make it work with Home Assistant without pinging Amazon HQ?


I currently pay around $200-300 for a combination of Cursor + Anthropic through the API. I have both a full-time job and freelance work. It pays for itself. I end up doing more reviewing than manual coding, to ensure the quality of the results. Funnily enough, the work I did through this method has received more praise than my usual work.


Did you outgrow the base 500 searches that Cursor gives you per month and connect your API key for usage-based pricing?

I’m having a hard time coming close to the 500 included in the monthly subscription and I use it like, a lot.

Just curious how you're hitting that 200-300 mark unless you're talking about paying Anthropic outside of Cursor. Which I just now realized is probably the case.


I ran out of fast requests and am using my own API key.


It's still very easy to catch up on the latest trends and developments and what people are building and talking about. Even though most of us kinda hate it, it does have weight.


It's pretty easy to learn from others. At what cost? I'm not willing to give up privacy. Maybe if there were paid social media services where users aren't the product. I might buy something like that.


Note to self: when doing HN demos, bulletproof your endpoints


The 2013 Vienna Philharmonic New Year's Concert?

