i_like_robots's comments

Firstly, congratulations on launching - it's uncanny timing and similarity to the project I've been building in my 10% time at work!

Here's a screenshot of my own efforts so far: https://github.com/user-attachments/assets/6fac23c3-79ef-4c0...


Thanks! Your project looks awesome! I'd be very keen to learn more about why you've been building it, who you're hoping to target with it, and what you plan to do with it :)


Agreed! About 5 or 6 years ago I wrote a library to demonstrate to my teams that JSX a) was not magic and b) was not limited to React. After that was done I became a little obsessed with making the library the fastest JSX renderer - which I think is still true: https://github.com/i-like-robots/hyperons
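For anyone curious what "JSX is not magic" means in practice, here's a minimal sketch of the idea (illustrative only - not hyperons' actual implementation): the JSX pragma is just a function, and it can return an HTML string instead of React elements.

```javascript
// Elements that must not have a closing tag (abbreviated list).
const VOID_ELEMENTS = new Set(['br', 'hr', 'img', 'input', 'meta', 'link']);

function h(tag, props, ...children) {
  // Function components: <MyComponent /> compiles to h(MyComponent, …)
  if (typeof tag === 'function') {
    return tag({ ...props, children });
  }

  // Serialise props straight to HTML attributes.
  const attrs = Object.entries(props || {})
    .map(([name, value]) => ` ${name}="${String(value)}"`)
    .join('');

  if (VOID_ELEMENTS.has(tag)) {
    return `<${tag}${attrs}/>`;
  }

  // Children may be nested arrays (e.g. from .map() in JSX).
  const body = children.flat(Infinity).join('');
  return `<${tag}${attrs}>${body}</${tag}>`;
}

// With a transform pointed at `h` (e.g. /** @jsx h */),
// <ul class="list"><li>one</li></ul> compiles to:
const html = h('ul', { class: 'list' }, h('li', null, 'one'));
```

Point the JSX transform of Babel, esbuild, or TypeScript at a function like this and you get server-side string rendering with no framework at all.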


Add me to this club of JSX hackers: https://crwi.uk/posts/hiccup/

I used this approach to make a syntax highlighted text editor in < 5kb of JS: https://crwi.uk/experiments/text-editor/


Very cool - reminded me of https://github.com/creationix/dombuilder which I have... memories of. Shout out too to https://microjs.com/ for allowing me to relive some nostalgia.


Everything becomes Lisp in the end.


These are awesome (both parent comments). Love seeing that I'm not alone in my JSX curiosity.


With our powers combined we can reinvent the tiny fun internet.


Take a look at https://gifcities.org/ - I'm sure you'll find what you're looking for!


Congrats, this looks really nice! I recently finished a side project using Fastify and the Fastify Swagger plugin (which extends the built-in request and response validation and can dynamically generate an OpenAPI definition). It was a good experience overall, but the addition of request mocking and the solid documentation site shown here could tempt me to revisit it.
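For anyone who hasn't tried it, the validation hangs off a plain schema object per route - here's a sketch of the shape Fastify accepts (the route, field names, and types are invented for illustration):

```javascript
// A route schema in the shape Fastify accepts. Fastify compiles these
// JSON Schemas to validate incoming requests and serialise responses,
// and the Swagger plugin can generate an OpenAPI definition from them.
const getUserSchema = {
  params: {
    type: 'object',
    required: ['id'],
    properties: { id: { type: 'string' } },
  },
  response: {
    200: {
      type: 'object',
      properties: {
        id: { type: 'string' },
        name: { type: 'string' },
      },
    },
  },
};

// Registered like so (not run here):
// fastify.get('/users/:id', { schema: getUserSchema }, handler);
```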


Cheers! Going from code first to schema first is definitely worth it in my experience! Especially when working in a team.

The nice thing is you already have an openapi spec, so it’s pretty trivial to eject from fastify swagger and switch to openapi-backend if you want!

Here’s an example of openapi-backend running on Fastify

https://github.com/openapistack/openapi-backend/tree/main/ex...


Perhaps look for web development books published around 2010-2012, before the large JS frameworks gained a major foothold. These will be missing some of the very valuable and useful developments in CSS and JavaScript (e.g. grid layout and promises) but for the other topics much of their content will still be relevant.

There's also the Indie Web wiki which has lots of getting started guides for hosting your own website: https://indieweb.org/Getting_Started


I don't think this is a good idea. JS made some extremely useful improvements starting with ES6 (so, 2015).


I mostly agree - that was the intention of the caveat - although I don't think there would be too much harm done if somebody started with ES5 then layered ES20xx features or new browser APIs on top as needed.


> Webpack's loaders can't really check your types because, without doing a lot of extra magic, they operate on one file at a time.

This is the point folks really must understand when setting up a new tooling pipeline to deal with TypeScript. Certainly all of the module bundlers I'm aware of operate in this way.

To explain further for anyone curious: for tsc to work effectively it must construct and understand the entire "compilation". To do this it starts by performing a glob match (according to your include/exclude rules) to find every TypeScript file within the project. Resolving and type checking the entire compilation every time the bundler calls for a transform on a file is very slow due to lots of repeated and unnecessary work, so most TS bundler plugins have to work around this. Unfortunately, they're still relatively slow, so type checking and bundling code separately is often the best way to go.
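In practice that separation usually just means running the two as independent tasks - for example (the script names and esbuild flags here are just one possible setup, not a prescription):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "bundle": "esbuild src/index.ts --bundle --outfile=dist/bundle.js",
    "build": "npm run typecheck && npm run bundle"
  }
}
```

The bundler transpiles one file at a time and stays fast, while `tsc --noEmit` does the whole-compilation type check without writing any output.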


But in theory type checks could be cached too. It isn't currently done because it's probably more work and requires very deep knowledge of the TypeScript language and parsing, whereas the bundling itself is a simple affair of combining text files together. So these tools instead just fork the type checker for some minor gain, but in practice you could absolutely cache type information - that's what every IDE does with its intellisense database.

I think in this thread we are essentially talking back and forth over the current state of bundlers, and most here rationalize why it's okay (if not better!) for type checking to be separated from compilation, but the reality is that it's a fairly arbitrary state of affairs. It could all be efficient, so that type checking is just as parallelized and incremental as bundling and therefore requires no separation into a different process. After all, a binary produced by C++, Rust or C# is also a "bundle".

Actually this painfully reminds me why I found it so weird that tsc doesn't simply offer bundling itself. Why wouldn't it? It should be very easy for the compiler to do this as it has all the information, and on top of that tsc already has an incremental mode - which necessarily means 'incremental' for type information too.


tsc already has an incremental mode, and there's also the LSP; a bundler in watch mode could keep a persistent TS language server running, and if the type check succeeds the bundler emits the new bundle. Easy peasy.

If I remember correctly gulp(js) was perfectly able to do this.


>easy peasy

But how do you do it? This is not as easy as it may seem. Of course it's possible, but the value is that I don't have to do this for every IDE and/or the LSP when I use Webpack, where waiting on the type check is an integrated feature.


I don't think that's an integrated feature of Webpack, though. It is a feature of TypeScript's own compiler (`tsc`), but that's about it. Nothing about Webpack supports typechecking (or TypeScript) natively.
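The usual way to get that "integrated" feel is a pair of community packages rather than Webpack itself: ts-loader in transpile-only mode for fast per-file builds, plus fork-ts-checker-webpack-plugin to run the whole-compilation type check in a separate process. A sketch, assuming both packages are installed:

```javascript
// webpack.config.js - illustrative sketch, not a complete config.
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  entry: './src/index.ts',
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        // Transpile each file in isolation (fast) and leave the
        // whole-compilation type check to the plugin below.
        options: { transpileOnly: true },
      },
    ],
  },
  plugins: [
    // Runs the TypeScript checker in a separate worker process so
    // bundling isn't blocked waiting for type checking.
    new ForkTsCheckerWebpackPlugin(),
  ],
  resolve: { extensions: ['.ts', '.tsx', '.js'] },
};
```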


Like you I tend to agree that in recent years the effort of delivering front end projects often seems to outweigh the value added. I can think of half a dozen projects where the effort teams put into figuring out stacks of new tools was an order of magnitude greater than the value added (and most of those projects failed to deliver or became haunted forests soon afterwards, which is even worse!) It's not sustainable: businesses are wasting far too much money and too many talented developers are left feeling inadequate.

The most important thing in any organisation is to ensure teams are set up to succeed - that means they're using tools which enable them to work efficiently, ship easily and reliably, and their work can be maintained in future.

With that in mind, my advice would be:

1. Ensure there is a process where new tools are scrutinised by peers. I always push developers and teams to explain their tech choices in terms of the value added and often as we dig into this together the justifications melt away. Asking for a timeboxed proof of concept can also be effective - battling toolchain woes and trying to manipulate tools into solving the problems a team actually has often helps lead to better decision making (think https://boringtechnology.club/)

2. Try to work in organisations where developers can be close to their users and are empowered to suggest new features and self-serve analytics data. Teams that can build empathy with their users and are motivated to solve problems for them are more likely to favour choices which provide value quickly - this is a big nudge towards making simpler choices.

3. Give developers space for learning and experimenting. Whilst tempting, picking up a suite of new tools to deliver each new project is a really crap way to encourage self development because those new tools will more often be a big distraction than a force multiplier. Many developers are highly motivated by trying out new tools and frameworks, and some of those might lead to something great in future, so give developers the time and space to try them and make sure their learnings are shared with the team to help level them up too.


This is how we did it at the FT. We looked at on demand HLS or DASH via our CDN as well but this didn't become a requirement during my time there.


Yes.

Prior to my current company I think I'd only met two candidates face-to-face who had sent misleading CVs (one of whom memorably tried to tell me MooTools was a new Linux based operating system.)

But so far this year I've cut short half a dozen interviews once it became clear the candidate was hopeless, despite them having good CVs. In some cases they seemed to struggle even to use their own computer.

And in the last 12 months we've also cut ties with somebody who joined during lockdown after it became clear that they'd falsified details of their background and experience.


I work four days in my current role and will continue doing so in my next one. I don't have much choice in the matter so I always raise it at the earliest opportunity and have even added it to my CV to make sure I don't waste anybody's time.

I've found the majority of the companies I've spoken to over the last few years (in the UK) have been open to me working 4 days a week, but I've also had the frustrating experience of being told this isn't an option after interviewing successfully.

At the moment people with my experience are in huge demand so having this leverage must be a big help and I'm happy to take advantage of it while it lasts.


What industry or domain are you in?

