tmzt's comments

Very nice and very inspirational for someone bootstrapping a startup.

The pages clearly define what you are building and how to use it.

The explanation of platform fees makes sense, though it could be clearer whether the pricing examples are based only on those fees, or whether accounts are also limited by number of members or dues.

You might want to check your terms of service: they don't list a jurisdiction and have the placeholder [jurisdiction] instead.

Best of luck with it!


Not launched, but building open-source components and microservices with a closed dashboard and overall backend.

For instance, a Git-like DAG for a config service [1] that supports history and eventually forking and merging on top of PostgreSQL. This was broken out of the larger Go codebase and is currently AGPL3 as a placeholder while I research better licenses to use.
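The Git-like DAG idea can be sketched like this (purely illustrative; the names here are mine, not from config-api): each config change becomes a content-addressed commit whose parent pointers form the history, which is what makes forking and merging possible later.

```python
import hashlib
import json

# Hypothetical sketch, not the actual config-api code: a commit is the
# config snapshot plus its parent commit IDs, hashed into a stable ID.
def make_commit(config: dict, parents: list[str]) -> tuple[str, dict]:
    body = {"config": config, "parents": sorted(parents)}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return digest, body

# A linear history, then a fork, then a merge commit with two parents.
root_id, root = make_commit({"feature_x": False}, [])
a_id, a = make_commit({"feature_x": True}, [root_id])
b_id, b = make_commit({"feature_x": False, "quota": 10}, [root_id])
merge_id, merge = make_commit({"feature_x": True, "quota": 10}, [a_id, b_id])
```

In a real PostgreSQL-backed version these rows would live in a commits table keyed by the hash, so history is append-only and two branches can diverge from the same parent.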

The overall idea is the "nuts and bolts" of a small startup/SaaS app, including metrics, IAM/auth, and subscriptions via a single API and React/JS SDKs.

I hope to do a Show HN on one or both in the near future.

[1] https://github.com/tmzt/config-api


Do you have that example posted anywhere? I'm curious to see it. Also, any support for the Wifi and/or Bluetooth on Pi Pico W from Rust?


https://github.com/mkj/sunset/tree/main/embassy/demos/picow is a WiFi-SSH-to-serial implementation for the RP2040. I've got it plugged into my home server's serial port. It uses cyw43 for WiFi, also from Embassy - dirbaio is prolific! There's some WIP in Embassy for Bluetooth.

That repo has a few crates depending on each other: sunset is the top-level SSH implementation, sunset-embassy adds no_std async support, and the async dir has std Rust, with a command-line SSH client in the examples dir.


Would it be possible to support a custom URL for the local model, such as the one that running ./server in ggml gives you?

This may be more difficult if you are pre-tokenizing the search context.

Very cool project.


Is there a clean way to share an emptyDir between sidecar(s) and main container(s)?

Looking at the logging use case: I want to be able to add a log-shipper sidecar to a pod with ephemeral storage.
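For what it's worth, the usual pattern is to mount the same emptyDir volume into both containers; a minimal sketch (image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  volumes:
    - name: logs
      emptyDir: {}          # ephemeral, lives as long as the pod
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper
      image: my-shipper:latest        # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true    # shipper only needs to read the files
```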


An easier solution for you might be something like Vector, which will automatically harvest the logs from pods and has excellent routing capabilities.

You wouldn’t need a sidecar-per-pod this way either.
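Roughly, a Vector deployment for this looks like the following (field names are from Vector's docs as I recall them; check against your version):

```toml
# Harvest pod logs cluster-wide from a single Vector agent,
# instead of a log-shipper sidecar in every pod.
[sources.pod_logs]
type = "kubernetes_logs"

[sinks.stdout]
type = "console"
inputs = ["pod_logs"]
encoding.codec = "json"
```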


How are you using TipTap with Monaco instead of ProseMirror? Do you have the bindings/plugins working the same way?

How does this compare to something like Milkdown, which uses Markdown as its native format?

I see you solved some of the hard problems (tables, etc.) with Yjs and CRDT in general.

Are the backend components also open source?


TipTap is built on ProseMirror. Basic elements of the editor are made in TipTap, while for more advanced uses (tables, menus, integrating a code editor) I had to tap into ProseMirror directly.

I'm not really familiar with Milkdown so I can't say.

The entirety of Vrite is open source under the AGPL-3.0 license (with parts under MIT): https://github.com/vriteio/vrite


Could the RP2040 ROM be adapted for RISC-V, creating a de facto standard for emulating features that aren't implemented by a specific RV variant?

I'd also love to see a big.LITTLE-style implementation with a Linux-capable RV core paired (gateable) with the above M4-class RV core and the state machines from the RP2040.


I had the thought years back to layer a Wayland-like surface-rendering protocol over X, using an HTTP-Upgrade-like primitive and XIE extended event IDs.

Basically, it would transform the socket to this new protocol, which keeps X atoms under a binary ID prefix.

This would keep backwards compatibility, with the Xorg DIX replaced by a modern server implementation.

This could be modified into a heavy-client approach where direct rendering can bypass the server, or a client can take over chrome rendering through an extension.


Starting with this post:

https://community.home-assistant.io/t/using-gpt3-and-shorcut...

I've been trying to adapt it to an offline LLM, probably a LLaMA-like model using the llm package for Rust, or a ggml-based C implementation like llama.c.

It could even be fine-tuned or trained to perform better and always output only the JSON.

This could be a good fit with open sourced tovera when that is released.

I like the idea of supporting natural-language commands that don't have to follow a specific syntax.

It could also process general LLM requests, possibly using a third-party LLM like Bard for more up-to-date responses.


What's the fundamental difference between generating Terraform to a spec and "rolling out VMs and DB clusters"?

