Hacker News | abusaidm's comments

The concept you describe is interesting and useful. I struggle to reach the same conclusion from the website, though; it reads as too abstract for the reader to grasp exactly what they will get. Was the text improved by an LLM?

It would be great to have a simple example, a scenario, or a hello world that lets someone say, “Aha! I get what it is and how it can help me.”


Oh, thank you. The website was made much earlier. I had to get a frontend person to build it, and since I'm not really good at frontend design I'll have to get someone else to rework it. Thank you very much for this feedback.


I think the project is saying that in cases where you deploy the frontend with server-side serving, you can include this. Projects like Next.js have a server side for React server-side rendering and for APIs; this project uses that server side to add additional services, as mentioned in the post.


Yeah, the messaging isn't very clear.


Yes, I agree, but it's really hard to find the right words. How would you describe it better?

That bknd is "embeddable" doesn't mean it has to be. Backends such as Supabase or Firebase run as separate deployments. Especially with Supabase, if you want to self-host it, you run multiple services alongside your frontend. I tried to express that if you host your app on Vercel, CF, etc., your backend (excluding the database) can be deployed together with it.

Of course you can deploy it separately, e.g. fully on Cloudflare using Workers, D1 and R2.
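To make this concrete, here's a minimal sketch of the idea as a Next.js catch-all route handler. It's illustrative only: `backend.fetch` is a stand-in for an embedded backend's request handler, not bknd's actual API.

  // app/api/[[...route]]/route.ts -- hypothetical sketch, not bknd's real API.
  // The embedded backend is just a fetch-style handler living in the same
  // process (and deployment) as the Next.js frontend.
  import type { NextRequest } from "next/server";

  const backend = {
    // Stand-in for an embedded backend's request handler.
    async fetch(req: Request): Promise<Response> {
      return Response.json({ ok: true, url: req.url });
    },
  };

  // All /api/* traffic is served by the embedded backend; no separate
  // backend deployment is needed (the database can still live elsewhere).
  export async function GET(req: NextRequest) {
    return backend.fetch(req);
  }
  export async function POST(req: NextRequest) {
    return backend.fetch(req);
  }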


I think the language probably assumes some knowledge specific to that ecosystem, particularly the more recent trend of server-rendering React.

To someone who works with more traditional server-rendering frameworks like Rails and Phoenix, "embedded" implies that storage will be client-side.

I'm guessing it might make sense to a frontend developer, but people like me might be scratching our heads for a while.

I'm assuming this is an alternative to using Next.js (or whatever flavor) with an ORM. There's a lot of word salad in the "Why?" section that kind of suggests that. Maybe you could simply compare it to the alternatives?


“Open Source Initiative chief accuses tech group of ‘polluting’ the term by using it to describe its Llama models”

I agree that the use of open source might be incorrect, but it’s more because the definition of open source up to this point was more for the software, the data, the art.

For these models that consume all the available data, can we use the same term? What criteria need to be defined for evaluating a released model? (Model and weights only, or more, including the data too?)


> the definition of open source up to this point was more for the software, the data, the art.

LLMs are software. Not releasing the data sets or training code is the same as "open sourcing" an application as an executable with some of its source code but not providing the proprietary/secret compiler required to actually compile it. It's useful, to a point, and keeps users dependent on the good will of Facebook.


A trained LLM is software the same way an opaque binary is software. To be truly 'soft' you need to be able to recreate or change it, i.e. have the 'source', which in this case is the means to recreate/train it.


That's why the OSI has been leading a global multistakeholder effort to define Open Source AI. Check https://opensource.org/ai


I think 3Blue1Brown's content is so good and educational that well-thought-out material like this would elevate the understanding of complex topics and help people learn in a more enjoyable way.

I would also subscribe to a commercial offering that provides rich content educating me on topics I find interesting but that are outside my field of expertise, like some of their other videos.

Kudos to the team behind it


The problem with mathematics is that learning it is different from merely understanding it. To really learn it and have it as a primitive tool in your brain, you have to train a lot, and not everyone is willing to put in the effort.


@davidbessis on X had a very good thread on this in August. One excerpt from the thread:

“The #1 reason why we fail to teach math: we present it as knowledge without telling kids it's a motor skill developed by practicing unseen actions in your head. Passive listening is useless, yet we never say it. We're basically asking kids to take notes during yoga lessons.”


Since VS Code can be installed as a PWA in Chrome, is it installable on iPad as a PWA app to launch from the home screen, avoiding the jump back to a webpage?

I'm also interested in how it handles a lagging network. Is the input laggy, or does it smooth things over on the client and sync once the network is stable?


There is an iOS app that acts as a client specifically for connecting to code-server instances. It removes the chrome from the web browser and makes things like copy/paste easier. IIRC, it was much more pleasant than the PWA version (but it’s been a while since I meaningfully used it).

Serveditor is the one I’m thinking of. Not sure if it is still around or not. There were paid plans for them to host your backend, or it was free if you had your own server.


Running it as a PWA is a fun idea, but...

"Terminals are not available in the web editor. Continue in an environment that can run code, like a codespace or VS Code Desktop."

Which makes me want to integrate it with a Linux in the Browser...?

https://geekflare.com/run-linux-from-a-web-browser/


You can use a terminal if the backend is something like code-server or openvscode-server.


Nice write-up, Sebastian; looking forward to the book. There are lots of details on the LLM and how it’s composed. It would be great if you could expand on how Llama and OpenAI could be cleaning and structuring their training data, given that this seems to be where the battle is heading in the long run.


  how Llama and OpenAI could be cleaning and structuring their training data
If you're interested in this, there are several sections in the Llama paper you will likely enjoy:

https://ai.meta.com/research/publications/the-llama-3-herd-o...
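For a rough intuition of what "cleaning" can mean here, a toy sketch (the thresholds and heuristics are made up for illustration; real pipelines like the one in the paper add fuzzy dedup, language ID, model-based quality scoring, and much more):

  // Toy sketch of pretraining-data cleaning: exact dedup plus two
  // simple quality heuristics. Thresholds are invented for illustration.
  import { createHash } from "node:crypto";

  function cleanCorpus(docs: string[]): string[] {
    const seen = new Set<string>();
    const kept: string[] = [];
    for (const doc of docs) {
      const text = doc.trim();
      if (text.length < 200) continue; // drop very short documents
      const letters = text.match(/[a-zA-Z]/g)?.length ?? 0;
      if (letters / text.length < 0.6) continue; // drop low-text noise
      const digest = createHash("sha256").update(text).digest("hex");
      if (seen.has(digest)) continue; // exact duplicate, skip
      seen.add(digest);
      kept.push(text);
    }
    return kept;
  }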


But isn't it the beauty of LLMs that they need comparatively little preparation (unstructured text as input) and pick up the features on their own, so to say?

edit: grammar


Yes, if you want an LLM that doesn't listen to instructions and just endlessly babbles about anything and everything.

What turned GPT into ChatGPT was a lot of structured training with human feedback.
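Roughly, the data shapes differ like this (field names invented for illustration, not any specific dataset schema):

  // Pretraining: raw, unstructured text.
  const pretrainingSample =
    "The mitochondria is the powerhouse of the cell. It generates...";

  // Instruction tuning: a prompt paired with a desired response.
  const instructionSample = {
    prompt: "Summarize why mitochondria matter, in one sentence.",
    response: "Mitochondria produce most of the cell's usable energy.",
  };

  // Human feedback (e.g. RLHF): which of two responses a human preferred.
  const preferenceSample = {
    prompt: "Summarize why mitochondria matter, in one sentence.",
    chosen: "Mitochondria produce most of the cell's usable energy.",
    rejected: "Mitochondria are organelles. Organelles are in cells. Cells...",
  };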


Exactly. Section 4.3.7 briefly explains how they trained the model to better follow instructions ('steerability').


Yes. Would love to read that.


How is this a k8s alternative when the homepage focuses on running LLM models and AI content in containers rather than on all workloads?

Maybe it’s a k8s alternative for AI/LLM!


Hi, I’m one of the contributors to dstack! Yes, you’re absolutely right: the title is a bit off. It should be "K8S for AI Teams Only." And yes, there is a lot of focus on LLMs specifically. If you have any other feedback, please share it!


I think framing it as being able to run LLMs with ease and portability across machines and hardware is an amazing sell. Most guides are so technical and detailed that they're a barrier for many to self-host.


This is a great-looking project. The combo of a single Go binary and the ability to package assets like CSS, JS, and HTML makes this a killer multi-platform, distributable, one-click-run app.

It might be an interesting experiment to make the UI installable as a PWA and thereby avoid needing the Electron stack to achieve the common functionality these apps offer.
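For reference, installability mostly comes down to serving a web app manifest and registering a service worker; a minimal sketch of the client side, with illustrative paths:

  // main.ts -- registering a service worker is one of the requirements
  // (along with a linked manifest.json) for the browser's install prompt.
  if ("serviceWorker" in navigator) {
    navigator.serviceWorker
      .register("/sw.js") // illustrative path to the service worker script
      .then(() => console.log("service worker registered"))
      .catch((err) => console.error("service worker registration failed", err));
  }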


This would be a great candidate for https://wails.io/. I've been building a lot of utility desktop apps, and the static binary is around 9 MB (it uses the system webview).


I'm also eyeing wails for a couple of personal desktop apps that I want to build soon. I first built a toy app towards the end of last year to try it out, and I was impressed. It felt like a nice middle ground between Electron and Tauri for desktop apps using a web front-end. Looking forward to the release of v3!


This looks neat. A few examples of how it can connect to multiple sources like Docker Engine and k8s would make it a solid tool for developers who need to work in complex infra setups. I have seen K9s fill this space, with room for improvement.


Nice demo. I tried some of the examples and tweaked them to see what happens. I noticed no mention of UTF-8, and when I tried to add some Arabic letters and other RTL characters it printed garbage chars.

Are languages other than English supported?


Yes, it is supposed to be supported; you are welcome to file an issue on GitHub about Arabic letters: https://github.com/moonbitlang/moonbit-docs

