Hacker News | Andys's comments

Every piece of progress looks like this to begin with.

https://tapitalee.com: Deploy to your own AWS account, like Heroku.


We did this at Chargify, but with MySQL. If Redis was unavailable, it would dump the job as a JSON blob to a mysql table. A cron job would periodically clean it out by re-enqueuing jobs, and it worked well.
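A minimal sketch of that pattern, with hypothetical table and function names (sqlite3 stands in for MySQL here just to keep the example self-contained):

```python
import json
import sqlite3

# Stand-in for the MySQL fallback table described above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fallback_jobs (id INTEGER PRIMARY KEY, payload TEXT)")

def enqueue(job, redis_available):
    """Try Redis first; if it's down, park the job as a JSON blob in SQL."""
    if redis_available:
        pass  # normal path: push onto the Redis-backed queue
    else:
        db.execute("INSERT INTO fallback_jobs (payload) VALUES (?)",
                   (json.dumps(job),))

def drain_fallback():
    """Cron-style sweep: pull parked jobs back out for re-enqueueing."""
    rows = db.execute("SELECT id, payload FROM fallback_jobs").fetchall()
    jobs = [json.loads(payload) for _id, payload in rows]
    db.execute("DELETE FROM fallback_jobs")
    return jobs  # in the real system these would be pushed back onto Redis

enqueue({"task": "send_invoice", "id": 42}, redis_available=False)
recovered = drain_fallback()
```

The nice property is that the fallback path only needs an INSERT to succeed, so the web request completes even while Redis is down.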


This is made possible because Elastic gained a write-ahead log that actually syncs to disk after each write, like Postgres.
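For reference, Elasticsearch exposes this per index via the `index.translog.durability` setting: `request` (the default in modern versions) fsyncs the translog before acknowledging each write, while `async` fsyncs on an interval instead. A sketch of the index-settings fragment:

```json
{
  "index": {
    "translog": {
      "durability": "request"
    }
  }
}
```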


I came to a similar conclusion. What about measuring enjoyment? Turns out people enjoy meetings more than work!


I'm surprised that we're circling back to banning things as the answer, when we (the Internet) know that doesn't work long-term.

It seems nice at face value, so it appeases everyone, while in practice being an overly blunt tool that can be wielded as a political weapon, etc.


Imagine viewing the same chat logs while logged into an admin interface; then it isn't self-XSS anymore.
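A sketch of why the viewing context matters, with a hypothetical payload: the same stored message is an attack when rendered unescaped in an admin's browser, and harmless text once escaped.

```python
import html

# Hypothetical chat message an attacker stores in their own session.
message = '<img src=x onerror="fetch(\'/admin/secrets\')">'

# Unescaped rendering: viewed in an admin interface, this is stored XSS,
# because the payload now runs with the admin's session, not the author's.
unsafe_view = f"<div class='chat-line'>{message}</div>"

# Escaped rendering neutralises the payload into inert text.
safe_view = f"<div class='chat-line'>{html.escape(message)}</div>"
```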


Indeed, it appears that the limited scope meant the juicy stuff could not be tested, like exfiltrating other users' data.


Which is stupid, as those are the vulnerabilities most worth determining whether they exist.

I can understand that in a heavily regulated industry (e.g. medical), a company couldn't, due to liability, give you the go-ahead to poke into other users' data in an attempt to find a vulnerability. But they could always publish a dummy account, identifiable and populated with fake data.

Something like:

It is strictly forbidden to probe arbitrary user data. However, if a vulnerability is suspected to allow access to user data, the user with GUID 'xyzw' is permitted to probe.

Now you might say that won't help: the people who want to follow the rules probably will, and the people who don't want to won't anyway.


Try kilocode (https://kilocode.ai/). It's a VS Code extension that allows different LLMs to be used.


Thanks for sharing that.

Presumably if you'd split the elements into 16 shares (one for each CPU), summed with 16 threads, and then summed the lot at the end, then random would be faster than sorted?
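A sketch of that scheme (names are made up; note that CPython's GIL means plain threads won't actually speed up pure-Python arithmetic, so this only illustrates the partitioning, not the expected hardware behaviour):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(xs, n_workers=16):
    """Split xs into n_workers shares (one per CPU), sum each share in
    its own thread, then sum the partial results at the end."""
    chunk = (len(xs) + n_workers - 1) // n_workers
    shares = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, shares))
    return sum(partials)

total = parallel_sum(list(range(1_000_000)))
```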


I don’t think random should be faster than contiguous access, if you parallelize both of them.

Although, it looks like that chip has a 1 MB L2 cache for each core. If these are 4-byte ints, then I guess they won't all fit in one core's L2, but maybe they can all start out spread across the respective cores' L2s if it is parallelized (well, it depends on how you set it up).
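Back-of-the-envelope capacity, assuming the 1 MB-per-core and 4-byte-int figures above (the actual array size isn't stated, so these are just bounds):

```python
L2_PER_CORE = 1 * 1024 * 1024   # 1 MiB per core, per the comment above
INT_SIZE = 4                    # 4-byte ints
CORES = 16

ints_per_core = L2_PER_CORE // INT_SIZE   # 262,144 ints fit in one core's L2
ints_total = ints_per_core * CORES        # ~4.2M ints across all 16 L2s
```

So any array over ~262K elements spills a single core's L2, but a 16-way split keeps arrays up to ~4.2M elements cache-resident in aggregate.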

Maybe it will be closer. Contiguous should still win.


What if you factored in time to sort them first?


