We already build websites for Googlebot, so I don't really see much of a difference. Maybe designers should be worried, because if there's nothing to "look at" there's no point in making it look nice. This feels like XML/XHTML all over again.
The article mentions XML, but the true revolution is JSX itself, which lets you describe any piece of logic as a React element. This opens up the possibility of creating DSLs for everything, just like in Python.
Couldn't this run reasonably well on a local machine if you have some kind of neural processing unit and enough RAM? Conversion to MD shouldn't require a huge model.
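For what it's worth, plain HTML-to-Markdown conversion doesn't need a model at all; a rule-based converter handles the common tags fine. A minimal stdlib sketch (tag coverage here is illustrative, not complete):

```python
from html.parser import HTMLParser

class MarkdownConverter(HTMLParser):
    """Tiny rule-based HTML -> Markdown converter (headings, paragraphs,
    lists, bold/italic, links). Real pages need more tags than this."""
    def __init__(self):
        super().__init__()
        self.out = []
        self._href = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "p":
            self.out.append("\n")
        elif tag == "li":
            self.out.append("\n- ")
        elif tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "a":
            self._href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag in ("strong", "b"):
            self.out.append("**")
        elif tag in ("em", "i"):
            self.out.append("*")
        elif tag == "a":
            self.out.append(f"]({self._href})")

    def handle_data(self, data):
        self.out.append(data)

def html_to_md(html: str) -> str:
    conv = MarkdownConverter()
    conv.feed(html)
    return "".join(conv.out).strip()

print(html_to_md("<h2>Title</h2><p>Some <strong>bold</strong> text.</p>"))
# prints:
# ## Title
# Some **bold** text.
```

Where a model would actually help is the messy part: deciding which boilerplate to drop, not the tag mapping itself.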
This is nice, but I wonder about the actual use cases of such a service, given the very loose permissions:
1. Anyone can subscribe to a channel
2. Any registered user can publish to a channel
3. Only registered users can publish to their personal channel (@username)
The second point in particular is problematic. I don't want to add notifications to my app, only to have a script kiddie use it to spam my users.
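Translating those three rules literally into an access check makes the problem obvious (a hypothetical sketch; the function and rule mapping are mine, not the service's):

```python
from typing import Optional

def can_publish(channel: str, user: Optional[str]) -> bool:
    """Publish check under the three rules above.
    user is None for anonymous visitors (who may subscribe, rule 1)."""
    if user is None:
        return False                    # anonymous users never publish
    if channel.startswith("@"):
        return channel == f"@{user}"    # rule 3: personal channel, owner only
    return True                         # rule 2: any registered user

assert can_publish("news", "alice")
assert not can_publish("news", None)
assert can_publish("@alice", "alice")
assert not can_publish("@alice", "mallory")
```

Rule 2 is the spam vector: any throwaway registered account passes the check for every shared channel, including the one your app's users subscribe to.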
OP is making a decision about a complex problem using only back-of-the-envelope calculations, and without looking for scientific studies on the matter.
We’re not obliged to take their advice.
Typing “AI Carbon Footprint” on Google Scholar brings much better info than this post.
Evaluating the quality of the responses of AI agents used to be tricky. It required knowledge of eval criteria as well as third-party tools like promptfoo, ragas or prometheus. Now OpenAI makes it ridiculously easy with a new API endpoint. It can grade a completion against a reference response, assess its format and tone, and you can even prompt the eval to add your own criteria.
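For the curious, an eval definition roughly looks like the sketch below: one criterion grading against a reference, one model-graded criterion with your own prompt. The field names follow my reading of the Evals API, but treat the exact shapes as assumptions and check the docs before relying on them.

```python
# Hypothetical eval definition in the shape of OpenAI's Evals API.
# Grader types and template syntax ({{item.*}}, {{sample.*}}) are my
# best-effort reading of the docs, not verified against the live API.
eval_def = {
    "name": "support-bot-regression",
    "data_source_config": {
        "type": "custom",
        "item_schema": {
            "type": "object",
            "properties": {"reference": {"type": "string"}},
            "required": ["reference"],
        },
        "include_sample_schema": True,
    },
    "testing_criteria": [
        {   # grade the completion against a reference answer
            "type": "text_similarity",
            "name": "close_to_reference",
            "input": "{{sample.output_text}}",
            "reference": "{{item.reference}}",
            "evaluation_metric": "bleu",
            "pass_threshold": 0.5,
        },
        {   # model-graded check for tone, prompted with your own criteria
            "type": "score_model",
            "name": "polite_tone",
            "model": "gpt-4o-mini",
            "input": [
                {"role": "developer", "content": "Rate politeness from 0 to 1."},
                {"role": "user", "content": "{{sample.output_text}}"},
            ],
            "range": [0, 1],
            "pass_threshold": 0.7,
        },
    ],
}
```

You'd then create the eval (something like `client.evals.create(**eval_def)` in the Python SDK) and kick off runs against stored completions; again, the current API reference is the authority on the exact calls.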
I tried to have it solve an easy Sudoku grid too, but in my case it failed miserably. It kept making mistakes and saying that there was a problem with the puzzle (there wasn’t).
Why does the Mac installer require admin rights and a restart? Giving admin rights to an installer requires trust in the vendor, and Supertone Shift is just a newborn. I cancelled the installation because of that.
I would love to test the technology without the risk of damaging my computer!
I use the great, free, "Suspicious Package" app [0] to inspect installers like these.
In fact, it was Supertone Shift's installer that prodded me to seek it out (I happened to find and install Shift a couple of weeks ago).
In this case, it needs admin permissions to install to `/Library/Application Support` as well as `/Library/Audio`.
It needs to restart in order for the HAL driver to be loaded (this provides the virtual audio interface for using the app with Teams, Zoom, etc.)
The preinstall/postinstall scripts simply handle the app's directory in Application Support.
I decided it was safe enough, and had some fun playing with it. It contacts what it claims are licensing servers when it starts, and won't start without reaching them. It wanted to keep contacting those servers constantly, but blocking its network access via Little Snitch didn't prevent it from functioning. The network traffic was in the single-digit kilobyte range, so I felt reasonably confident no audio data was being looted.
Great article. Now what happens when you apply this idea and let an LLM continue a chain of thought beyond mere question answering? Some form of artificial consciousness.
Material reductionism at its best. Now you have a stochastic parrot "talking" to itself. How can anyone get to the conclusion that this could even begin to resemble a tiny bit of what we call consciousness?
Should we (developers) start building websites for robots?