Nemo157's comments

But you are the second party; there is no third party involved when you use the software for yourself.


You're not using the software; you're using it on a third party (GitHub).


You don't need to pay to have an organization; I just created a personal organization to hold my non-experimental source repos.


Since unsoundness is a property of an interface, that private safe function would be unsound but could be used as part of the implementation details of a sound public API.

It's not something I would do personally, and I'd count it as a black mark against a library when deciding whether to depend on it, but it's nowhere near as bad as doing the same thing in a public API.
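A minimal Rust sketch of the distinction (all names hypothetical): the private function is unsound as an interface, because a safe caller could violate its precondition without writing `unsafe`, but every public path into it upholds that precondition, so the public API stays sound.

    pub struct Buffer {
        data: Vec<u8>,
    }

    impl Buffer {
        /// Unsound as an interface: callers must guarantee
        /// `index < self.data.len()`, but the safe signature doesn't
        /// force them to acknowledge that with an `unsafe` block.
        fn get_private(&self, index: usize) -> u8 {
            // SAFETY: all in-crate callers check the bound first.
            unsafe { *self.data.get_unchecked(index) }
        }

        /// Sound public API: the emptiness check establishes the
        /// precondition on every call path into `get_private`.
        pub fn first(&self) -> Option<u8> {
            if self.data.is_empty() {
                None
            } else {
                Some(self.get_private(0))
            }
        }
    }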


Every single function call is using an interface. Your public API is not the only interface.

It's an extremely bad habit to pick up, as it undermines one of the main purposes of Rust, namely its ability to allow collaboration without causing undefined behavior. Any sort of internal "don't do this or it'll cause undefined behavior" rule is just falling back to C/C++ land.

Some developer will modify that code later, maybe even the same developer, and will forget about the invariants that need to be upheld if they're not documented in safety comments around the unsafe code. Anything that relies on code review to catch unsoundness outside of unsafe code is automatically wrong.
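A hypothetical sketch of that failure mode: the invariant is only enforced by convention, so a later edit to purely safe code can introduce undefined behavior without any `unsafe` appearing in the diff for review to flag.

    struct Stack {
        // Undocumented invariant: `top <= items.len()` at all times.
        items: Vec<u32>,
        top: usize,
    }

    impl Stack {
        fn peek(&self) -> Option<u32> {
            if self.top == 0 {
                return None;
            }
            // SAFETY: requires `top <= items.len()`; nothing here
            // enforces it, only convention elsewhere in the module.
            Some(unsafe { *self.items.get_unchecked(self.top - 1) })
        }

        // A later "harmless" refactor: bumping `top` without pushing
        // an item silently breaks the invariant. This is entirely
        // safe code, so no `unsafe` keyword flags it for review, yet
        // it makes `peek` undefined behavior.
        fn reserve_slot(&mut self) {
            self.top += 1;
        }
    }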


Then your code calling `pre_exec` is unsound: it is calling into `foo1`, which is not documented to be async-signal-safe. Without documentation saying otherwise you can't assume a function can be called in weird contexts with non-standard restrictions, and if the maintainers update it to do something like add an allocation, making your code UB, that's allowed by their documented API.
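For illustration, a hedged sketch of the situation (`foo1` standing in for the hypothetical third-party function): the closure passed to `pre_exec` runs in the child between `fork()` and `exec()`, so everything it calls must be async-signal-safe.

    use std::os::unix::process::CommandExt;
    use std::process::{Child, Command};

    fn spawn_with_hook() -> std::io::Result<Child> {
        let mut cmd = Command::new("true");
        unsafe {
            // SAFETY: this closure runs in the forked child before
            // exec, so it may only perform async-signal-safe
            // operations.
            cmd.pre_exec(|| {
                // Calling a dependency's `foo1()` here is unsound
                // unless its docs promise async-signal-safety: if a
                // future version allocates, the child can deadlock or
                // hit UB, and that change is within the dependency's
                // documented API.
                // other_crate::foo1();
                Ok(())
            });
        }
        cmd.spawn()
    }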


If you're improving the UI around here it would also be good to see how the list was determined. For automated detection in particular, it seems the tooling used should also be made public to allow testing. I know of at least one dependency that I would expect to turn up on Sentry's list that doesn't. (A first guess: a bug in how you handle Rust workspaces, using the root to calculate dependency depth; alternatively, a bug in your handling of non-lowercase GitHub usernames. I notice there are only lowercase usernames in the list, but that might just be an artifact of your UI design.)


The hash you have is computed from the hash of the content and other data. You would need to send that additional data out-of-band to allow the client to compute the overall hash and verify it. (Not to mention that with URLs like ipfs://<hash>/some/path you only have the hash of some arbitrary parent node, so even more additional data is necessary to verify that the content at that path under the hash you have is valid.)


You do not need to send the additional data out of band unless you are claiming that SHA-2 (the hash function used in IPFS) is vulnerable to second preimage attacks.


It's not possible to verify a file downloaded from IPFS using only its CID, because an IPFS CID does not contain a checksum of the file content. It contains the checksum of a meta-file, which contains the hashes of further meta-files.

For example, debian-10.7.0-amd64-netinst.iso has SHA256 checksum b317d87b0a3d5b568f48a92dcabfc4bc51fe58d9f67ca13b013f1b8329d1306d. Here are two example CIDs generated from that file:

https://cid.ipfs.tech/#bafybeihjy54iyvheotna2aeqmzhqnro6yot4...

https://cid.ipfs.tech/#bafybeihfqpypuhmtyzazrj3g4b4f4nqk2ziy...

Notice that neither one contains the original checksum.
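To make that concrete, here's a hedged Rust sketch (assuming the `sha2` and `hex` crates): hashing the raw file bytes reproduces the published checksum, but that digest is not what's embedded in either CID, because a CID's multihash covers the root node of the chunked DAG rather than the file itself.

    use sha2::{Digest, Sha256};

    fn main() -> std::io::Result<()> {
        let bytes = std::fs::read("debian-10.7.0-amd64-netinst.iso")?;

        // Prints b317d87b... (the published checksum of the ISO).
        let digest = Sha256::digest(&bytes);
        println!("sha256(file) = {}", hex::encode(digest));

        // Decoding either CID above yields a *different* sha2-256
        // digest: the hash of the DAG root block (chunk links plus
        // metadata). To verify the download against the CID you must
        // re-chunk the data and hash every node of the DAG, not just
        // run sha256 over the file bytes.
        Ok(())
    }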


A long time ago Cloudflare was experimenting with support for E2E integrity via their public gateway:

https://blog.cloudflare.com/e2e-integrity/

It looks like they've since taken the Firefox addon down and archived the source repo, though.


JS being required for the interactive features would be fine. My personal problem is that I end up on some random GitLab instance just to take a look at the source or issues for some library, and get a blank white page. For the read-only public view there should be no need for any JS.


I understand that thinking but it ignores reality. Doing what you want means maintaining 2 codebases (even if just for sub-parts of a site). It's really easy to say "This specific page could be static" and you are right, it could, but it would mean having fallbacks for every JS interaction on the page (or removing them if the user has JS disabled). There simply aren't enough people who die on the no-JS hill to care about, especially since it means ongoing development maintenance, testing, design/UI work, and the list goes on.


GitLab is built on JS and renders a white screen without JS. Enabling JS at all taxes my Core 2 Duo machine, and opening GitLab to a few-thousand-line file (or worse yet, opening the pull request diff view) taxes my top-of-the-line Ryzen 5 5600X machine running Firefox. GitLab is just badly written.


Or you could server-render the pages and hydrate them as needed, which is easy to do with NextJS, NuxtJS, Remix, Fresh, and other modern frameworks for developing with JavaScript libs.


This is my opinion, too. JavaScript should not be required just to read the documents, files, lists of files, etc., even if some of the other features do use it.


Though IIRC from some bugginess I was noticing a while ago, that only applies to the JS API; it will still obey the dark color scheme media query (but sometimes inconsistently: I would load the same page in multiple tabs and sometimes get the dark scheme, sometimes the light).


> In most cases, it just blocks or hides cookie related pop-ups.

This bit is actually the opposite. All tracking _must_ be opt-in, so by blocking the pop-up and not opting in, the website cannot track you.

It's only on websites that break when you don't opt in that it accepts the policy (which AIUI is itself a violation of the GDPR).

