marcus_cemes's comments | Hacker News

Never heard of shadcn or franken-ui, but they look identical, one links to X (Twitter), the other to Mastodon. What's the story there?


Franken UI is an HTML-first, open-source library of UI components based on the utility-first Tailwind CSS with UIkit 3 compatibility. The design is based on shadcn/ui ported to be framework-agnostic.


Thank you. As weird as it may sound, I came here looking for exactly an HTML-first option and had a gut feeling that I would find it in the comments!

Thanks again!!


Oh wow, I haven't seen Franken UI - this looks great, I can definitely look to port some of these.

I guess I've been taking an opinionated approach to start by taking components I had already built from my other projects and compiling them here for now.


In what sense is shadcn not framework agnostic?


not sure if trolling

It provides you with templates for React...? How can anyone argue that that's framework-agnostic...?


When all you know is React everything looks like it needs a fat client?


yeah, I just have no idea what shadcn is, so I figured I'd ask for the sake of others who also have no idea.


It's a React library.


This is clean JavaScript syntax in my opinion and should be what people strive for. It's perfectly readable, it's faster, it does async correctly without any unnecessary computation, can be typed and will have a normal stack trace. Piping is cool when done right, but can introduce complexity fast. Elixir is a good example where it works wonderfully.


The two are not mutually exclusive, it's probably not an issue with horizontal scaling.


I think this is a great use case. From my experience, having everything in one language is a huge plus. You can pull data from the database and just inject it into the view. The closest I've gotten to this in the JS/TS world is a Prisma + tRPC + SvelteKit stack for E2E type safety, but there's a huge cost in complexity and language server performance, plus some extra boilerplate.

The main limitation is likely offline apps; LiveView requires a persistent connection to the server. I doubt this is something you'll encounter for your use case.


I'm planning on adding tRPC to the Prisma + Nest + Next stack, so can you elaborate on "language server performance"?


I decided to use Tauri for the first time for a university project and it was absolutely painless to design a small and useful GUI application to programmatically generate schematics for photolithography masks.

- Single lightweight binary install and executable (~6 MB), clean uninstall

- Automatic updates (digitally signed, uploaded to a small VM)

- Integrates nicely with SvelteKit and TailwindCSS

- The Rust backend was able to integrate with GTSDK over FFI. The cmake crate made C++ compilation and linking automatic as part of cargo build, provided that a C++ toolchain is available (no problems even on Windows).

- No scary toolchain setup with a load of licenses to review and accept (looking at you, Flutter. I'm a student, not a lawyer. Although perhaps this will also be a thing with Tauri + Android?)

For a small project, I can't recommend it enough. I wouldn't know where to start with a C# or Qt GUI application, especially if I wanted to make it cross-platform.

It'll be interesting to see if it gains any traction in the mobile space. Flutter is great and may be better optimised for certain rendering techniques, such as infinite lists, but sticking with web technologies is a very compelling advantage.


.NET MAUI is cross-platform and very easy to get started with, but you'd sacrifice a lot of performance to gain the convenience and simplicity of the development experience.


It's also a no-go for Linux. Otherwise I would be all over it.


There is work being done to address desktop Linux, but I agree that it's one of the deficiencies.

https://github.com/jsuarezruiz/maui-linux/pull/37

The lack of a WASM target is another, although the UNO project in the past provided such a target for MAUI's very closely related predecessor (Xamarin.Forms).

https://platform.uno/xamarin-forms/


I guess this comes down to personal preference. For me, this is mixing the interface with the implementation. You shouldn't need to know how something works to be able to use it; to me, that's the real overhead. Maybe this works on a small scale, but what if the source code changes?

That being said, I do like inspecting the source from time to time to understand it better, or to make up for missing documentation. Sometimes, though, with this being JS, I wish I could unsee the things that I've seen: code that production depends upon, deep within the dependency tree.

I agree with the idea of fluency when writing without types, but for me it's not about how fast you can write code. Code, for me, is a lot of rereading and understanding what the hell you wrote just a few days ago. I find typed code easier to get back into, and when you change something, it's faster to find what broke in parts of the codebase you're less familiar with.


I don't want to come off as dogmatically defending Rust; I code little Rust in comparison to JS, and I've done C and C++ for some embedded systems. C is a very different monster from most other languages, and I find people defending C to be just as proud and defensive as Rust programmers.

To address some of your complaints: yes, there are a lot of concepts to understand. I like wrappers. I found it crazy that in C, you first declare a mutex_t variable, and then you specifically have to call chMtxObjectInit(mutex_t *mutex) to initialise it [1]. If you forget? UB, kernel panic sometime in the future. I think Mutex::new() is far cleaner, and it's namespaced without arbitrary function prefixes. Binaries are tiny in comparison to JS/Python with deps; they will be larger than C. Compile times aren't that slow, and you can't make extra language features happen out of thin air.
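To illustrate the point (a minimal sketch, nothing ChibiOS-specific): in Rust, construction and initialisation are a single step, so there is no window where the mutex exists but is unusable.

```rust
use std::sync::Mutex;

fn main() {
    // The Mutex is valid the moment Mutex::new returns;
    // there is no separate init call to forget.
    let counter = Mutex::new(0u32);

    {
        // lock() returns a guard; the mutex unlocks when the guard drops.
        let mut value = counter.lock().unwrap();
        *value += 1;
    } // guard dropped here, lock released

    assert_eq!(*counter.lock().unwrap(), 1);
}
```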

In C, I've found that it's commonplace to do a lot of clever and mysterious pointer and memory tricks to squeeze out performance and low resource utilisation. In embedded, there's usually a strong inclination toward using "global" static variables, even declaring them inside function bodies because it "limits the visibility/scope of the variable". Not declaring a static variable inside a function is what knocked a few points off my Bachelor's robotics project.

I personally don't like this. It puts a lot of pressure on the programmer to understand the order of execution, and keep a complex mental model of how the program works. Large memory allocation, such as a cache, can be hidden in just about any function, not just at the top of a file where global variables are usually defined.

It sounds like what you're trying to accomplish is inherently unsafe, hence the "preaching", in that it requires the programmer's guarantee that 1) the data is fully initialised before it's accessed and 2) once the data is initialised, it's read-only and can therefore safely be accessed from other threads. C doesn't care; it will let you do a direct pointer access to a static variable with no overhead. Where's the cost? The programmer's mental model. I haven't tried, but I imagine that Rust's unsafe block will allow you to access static variables, just like in C with no overhead, effectively telling the compiler that you vouch for the memory safety yourself.

Rust solutions:

- lazy_static crate (safe, runtime cost of checking whether it's initialised on every access)

- RwLock<Option<T>> (safe, runtime cost to lock and unwrap the Option)

- unsafe (no overhead, but a mental-model cost and potentially harder debugging)

- an extra &T function parameter (code complexity cost, "prop-drilling", but cleaner imo)

On modern hardware, the runtime cost is absolutely negligible.
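For what it's worth, the standard library now covers the safe initialise-once case too; a minimal sketch using std::sync::OnceLock (the config value here is made up for illustration):

```rust
use std::sync::OnceLock;

// A global that is written exactly once and read-only afterwards;
// the type system guarantees readers never observe it uninitialised.
static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    // The closure runs only on the first call; later calls are an
    // atomic load plus a pointer dereference.
    CONFIG.get_or_init(|| "loaded".to_string())
}

fn main() {
    assert_eq!(config(), "loaded");
    assert_eq!(config(), "loaded"); // same initialised value every time
}
```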

Why would you not want to use Rust for a large project? This seems a bit contradictory to me. The safety guarantees in my opinion really pay off when the codebase is large and it's difficult to construct that mental memory model, especially with a team working on different parts. Instead, you offload that work to the compiler, which checks the soundness of your memory access across the entire codebase.

If you like C, by all means keep on using it. I enjoyed my foray into C, it's simple and satisfying, but I would much prefer Rust after spending a lot of time tracking down memory corruption. Rust's original design purpose was to reduce memory bugs in large-scale projects, not to replace C/C++ for the fun of it. We usually have a natural inclination toward what we know well and have used for a long time. Feel free to correct me if something is wrong.

1: http://www.chibios.org/dokuwiki/doku.php?id=chibios:document...


The point was that none of those Rust crates worked, or they required you to use a mutex in the end, which the solution wouldn't actually need (not a zero-cost abstraction). I would've been fine using unsafe, but even with unsafe it felt like I was fighting the compiler. I would just write this particular function in C, or use the lower-level C FFI functions instead.

I maintain that Rust is more fun to write when you work at a higher level and treat it as a higher-level language, where you have less control over the program's memory model. It all starts breaking down, and you need to become a "Rust low-level expert", when you want to work closer to the memory model (copy-free, shared memory, perhaps even custom synchronisation/locking models...). It does make sense, but in my opinion figuring out how to map your own model onto Rust concepts is not trivial; it requires a lot of Rust-specific knowledge, which will take a long time to learn IMO.

When unsafe was marketed to me, I thought it was a tool I could use to escape the compiler's clutches when I'm sure of what I'm doing and don't want Rust to fight me. Sadly it doesn't work that way in practice; the real way is to actually write C and call it from Rust.


Just for fun, I tried the static variable approach for myself. I have to agree with you, it's really hard; I gave up after half an hour. Rust doesn't seem to like casting references to pointers, which I understand, as I don't think there's a guarantee that they are just pointers. A &[T], for example, is a fat pointer (two words, as it also encodes the length). I think the correct approach here is to either accept the runtime overhead or pass a context to each function as a parameter.
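The fat-pointer claim is easy to check with std::mem::size_of (a small sketch):

```rust
use std::mem::size_of;

fn main() {
    // A reference to a sized type is a thin pointer: one machine word.
    assert_eq!(size_of::<&u64>(), size_of::<usize>());

    // A slice reference is a fat pointer: data pointer plus length.
    assert_eq!(size_of::<&[u64]>(), 2 * size_of::<usize>());

    let slice: &[u64] = &[1, 2, 3];
    assert_eq!(slice.len(), 3); // the length lives in the reference itself
}
```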

I also agree with your other statement. I think Rust tries to abstract a lot of behaviour into its own type system, such as Box<T> for pointers, whilst keeping it relatively fast. C is definitely the right tool for the job if you want direct memory access, but I also think that's a relatively small proportion of people: those working on OSes, embedded systems or mission-critical systems such as flight control or medical equipment.


Really hard question to answer simply. They're two languages at very opposite ends of the spectrum, yet they can usually both accomplish the same goal. I think the main difference is that Rust is significantly faster and uses less memory in the majority of cases, but is also harder to learn and to reason about. A good package manager, tooling, etc. are a nice-to-have.

Python focuses on being simple, interpreted and dynamically typed; Rust requires you to specify the exact types of all the things (like C/C++), which allows it to generate really optimal compiled machine code before execution, as it has more information to work with. Accessing a struct/"object" field is not a hash-table lookup but a direct pointer access, since the exact sizes of things are known at compile time.
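As a small illustration of that compile-time layout point (a made-up struct, not from any real codebase):

```rust
// Each field's offset is fixed at compile time, so `p.x` compiles to
// a load at a constant offset rather than a dictionary lookup.
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };
    let dist = (p.x * p.x + p.y * p.y).sqrt();
    assert_eq!(dist, 5.0);
}
```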

If you're writing a script, a tool, a small game, it's simpler to use Python. If you're writing a database engine, a mission-critical piece of code that has to behave predictably without the possibility of random GC pauses, anything that has realtime constraints such as audio, lower-level languages like C, C++ and Rust are a must.

A lot of higher-level folk seem to enjoy Rust as well, especially in networking/the web, for whom the speed is worth the additional difficulty and complexity.


This is by choice: while it is really convenient to interrupt execution flow by throwing an arbitrary value, it's extremely hard to know whether calling library code can throw, and if so, what kind of errors, without exceptional documentation.

The Result<T, E> type is very explicit, and can be easily inspected, composed or "re-thrown" with the "?" suffix without nested try/catch blocks. Return-value-based error handling is something Go, Elixir and other more functional languages have also adopted.
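A minimal sketch of what that looks like in practice (the function name here is made up for illustration):

```rust
use std::num::ParseIntError;

// The signature tells callers exactly what can fail and how;
// there is no hidden exception channel.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // "?" re-throws the Err to the caller
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("oops").is_err());
}
```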

Panic is there to alleviate the really exceptional circumstances, when the trade-off of possible program termination is worth the much-simplified error handling, such as when a cast to a smaller integer type overflows, or when locking a mutex that may be in a poisoned state (which, funnily enough, can arise when a panic occurred whilst the mutex was locked, i.e. the mutex guard was dropped during a panic).
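That poisoning behaviour can be demonstrated directly with the standard library (a small sketch):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let lock = Arc::new(Mutex::new(0));
    let l2 = Arc::clone(&lock);

    // Panic while holding the lock: the guard is dropped during
    // unwinding and the mutex is flagged as poisoned.
    let _ = thread::spawn(move || {
        let _guard = l2.lock().unwrap();
        panic!("boom");
    })
    .join();

    // Later lockers are told about the poisoning...
    assert!(lock.lock().is_err());

    // ...but can still recover the guard if they judge the data sound.
    let guard = lock.lock().unwrap_or_else(|e| e.into_inner());
    assert_eq!(*guard, 0);
}
```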


> Panic is there to alleviate the really exceptional circumstances, when the trade-off for possible program termination is worth the much simplified error handling

It would be nice if Rust grew "panic annotations" so that we could determine shallowly and with automated tooling whether functions could panic. It would make it easy to isolate panicky behavior, and in places where it is absolutely necessary to handle, ensure that we do.


This kind of already exists in the form of #[no_panic] [1]?

> If the function does panic (or the compiler fails to prove that the function cannot panic), the program fails to compile with a linker error that identifies the function name.

1: https://github.com/dtolnay/no-panic


Almost anything will panic when you're out of memory, as allocation is regarded as infallible (due to the above-mentioned tradeoff).


Yes I know. I wasn't complaining about the lack of exceptions in Rust (I kind of hate exceptions) - I was pointing out that calling Rust errors "exceptions" is not correct (and probably misleading).


Probably more to do with the fact that people are mashing F5 to see the difference.

