
Not to go too off topic, but I'm extremely interested in the throwaway comment about holding enough equity in your service provider of choice to justify calling investor relations when you have an issue. Is this a real thing that people do? If it works it sounds like an amazing life hack, but I have my doubts about how much influence they would have over the "real" support.


It's a thing the author of that article does, at least. He's got an article mostly about how to escalate identity theft complaints here that touches on it: https://www.kalzumeus.com/2017/09/09/identity-theft-credit-r...

It's very long, so I'll just quote a small bit specifically about Investor Relations:

> If you cannot route letters to the legal department, go as high up as required. Pro-tip: virtually every major US company has a department called Investor Relations which is trivially discoverable, very well-funded, publicly routable, and very bored during 80% of the year. You can excuse any letter to Investor Relations with: "I am a shareholder in BigBank. I was therefore profoundly displeased when I learned…"

> What’s a well-paid bored professional in Investor Relations going to do with your account information? Nothing? Nothing is a great way to get fired. No, they’re going to open up their internal phone tree or ticketing system and say “I have a letter from an investor which alleges an identity theft issue. Which group handles that? Your department? Great; handle it and call me when you’re done. Do you want it by fax, email, or FedEx?”

For this to work, though, you've got to present as if your position in the stock is in the millions of dollars, even if it's actually like $100. The author of the article has been in the financial industry for a very long time, and has also spent a long time as a Japanese salaryman, so he can definitely pull that off.


There are a ton of free services only accessible to those with money. For example, if you buy jewelry from a luxury store, you can often bring it in for free cleaning, and they will honor this for any piece you bought from them. If you bring in a mix of pieces, they will often clean them all anyway.

Luxury retail isn’t often worth it, but when it is it comes with lifetime services.


That's this Apache: https://apacorp.com/


Having been the target of a Crosslake report a few times, I felt the process was actually almost entirely focused on points like this: key person risk, talent acquisition pipeline, process maturity, "definition of done", etc. It felt like "hard" tech debt items around code quality, measured SLA performance, etc. were present for the sake of completeness but were of almost no consequence to the Crosslake team or those who would read the report.

Does that mirror your experience working in DD?


It's a complex question. First, every diligence has very limited time, so for code and arch diligence we get enough time to know whether it's really good or really bad; anything in the middle needs weeks, not hours.

Second, every client has different priorities, but in general they want to know which risks could tank the investment and what they need to do to make sure that can't happen. PE firms aren't like VCs... they don't want a 1-in-10 chance of a unicorn, they want a 90% chance of a solid return with zero chance of losing the entire thing. When you're spending $50M to $1B, a low-but-avoidable risk of disaster is not OK. Finding those risks is our number one priority. If the code is not awful, then the people, process, and org risks are much more likely to be what tanks the investment.

I'm one of the first to say tech debt kills companies - I've literally seen it. But... dysfunctional organizations kill companies even faster and more surely. And those are much easier to sniff out in two days of interviews.


There is a reason for that. Tech debt, code quality, and poor SLA metrics are symptoms, and proxies for various things (that can be fixed).

Failures in the other areas are root causes and more fundamental issues.


I see a lot of projects started in this space, and all of them appear to have multi-writer as a goal. I've been interested for a long time (and have started and stopped a few times) in a solution for single-writer, multi-reader replication with eventual consistency. I chatted with @benbjohnson on a Litestream ticket about the possibility of adding a mobile client to receive replicas on mobile devices, but I think that option isn't really consistent with the new direction of that work for LiteFS at Fly.

To me the multi-writer "collaborative" use case is super powerful but also has a lot of challenges. I personally would see a lot of value in a solution for eventually consistent read-replicas that are available for end-client (WASM or mobile native) replication but still funnel updates through traditional APIs with store-and-forward for the offline case.

Is anybody aware of an open-source project pursuing that goal that maybe I haven't come across?


Do you know of a straightforward way to identify when this is happening, i.e. when one node is using DERP or one link between your nodes is falling back to DERP?


`tailscale status` should tell you which nodes do or don't have a direct connection.


Same. I tried out some toy use-cases using nats.js over websocket a few months ago. The prospect of possibly being able to "directly" consume messaging or key/value store from the browser with only a thin gateway between was really interesting to me but I couldn't square up the NATS-internal JWT cookie thing with how you would handle auth in a traditional web-app (OAuth client on a gateway plus a session cookie).

I found some threads saying auth callout in 2.10 would solve this and decided to table the project until 2.10, but it's really unclear how to work through the details of converting a "traditional" OAuth access token into the NATS-specified access token required by the auth callout contract.


Auth callout happens at NATS client authentication time, so it would not solve the "auth web flow" itself. Instead, the token resulting from that flow would be set as a cookie, which would then be passed into the nats.ws client connection. The auth callout service would use that token to map to the concrete NATS user; the mechanism for doing that is up to the implementation. One option is to manage NATS claims in the OIDC provider (for the user authenticating); the auth service would then decode that source JWT, extract the NATS claims, and generate the NATS user JWT in the response.
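
A minimal sketch of that client side, assuming the web flow has set a JS-readable (non-HttpOnly) cookie; the cookie name nats_jwt and the server URL are placeholders:

    import { connect } from "nats.ws";

    // Hypothetical cookie name set by the token-vending service after the web auth flow.
    function readCookie(name: string): string | undefined {
      return document.cookie
        .split("; ")
        .find((c) => c.startsWith(`${name}=`))
        ?.split("=")[1];
    }

    // Pass the token into the NATS connection; the server's auth callout service
    // (or cookie-based websocket auth) maps it to a concrete NATS user.
    const nc = await connect({
      servers: "wss://nats.example.com:443", // placeholder gateway address
      token: readCookie("nats_jwt"),
    });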


Thanks, that observation is extremely helpful. If I have it right, the intended flow looks something like this?

- Web client is directed to some token-vending service. This service implements authn in a manner of its choosing (i.e. OAuth), then sets a NATS client JWT in the cookie per https://docs.nats.io/running-a-nats-service/configuration/se...

- The nats.ws client connection provides the cookie during connection to perform client auth.

- If further authz/fine-grained control is needed, the auth callout mechanism can be used. This would have access to the provided cookie/token, so any claims needed for access control could be stapled on during step one and used at this point?

For GP's original question -- I'm running a fairly old Keycloak version (v8), but it does appear to set a JWT in KEYCLOAK_IDENTITY and KEYCLOAK_IDENTITY_LEGACY.

Am I right in understanding that IFF the token is signed with Ed25519 and both sub and iss are NKEY values, this is sufficient for NATS to accept that cookie as a credential?


Yes, that reads correct. The `sub` would be a NATS user public nkey, and the `iss` would be the NATS account public nkey (either the issuer nkey in config-mode or an existing nkey in decentralized auth).

As long as it can verify the chain of trust for the user JWT that is returned, it should work.

The three schema types are shown here: https://docs.nats.io/running-a-nats-service/configuration/se...

auth request comes in -> generate user jwt, sign + encode -> respond with auth response.

As long as the necessary bits of the response and user JWT conform, it will work.
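
For what it's worth, here's a rough sketch of the shape of such a service. The JWT plumbing is stubbed out as hypothetical helpers (decodeAuthRequest, verifyOidcToken, encodeUserJwt, encodeAuthResponse are not real library calls; the real encoding goes through nkeys and the NATS JWT formats described in the docs above):

    import { connect, StringCodec } from "nats";

    // Hypothetical helpers standing in for the real nkeys/JWT plumbing:
    declare function decodeAuthRequest(raw: string): {
      user_nkey: string;
      connect_opts: { auth_token?: string };
    };
    declare function verifyOidcToken(token?: string): Promise<{ sub: string }>;
    declare function encodeUserJwt(claims: object): string;
    declare function encodeAuthResponse(resp: { jwt: string }): string;

    const sc = StringCodec();
    const nc = await connect({ servers: "nats://127.0.0.1:4222" }); // callout account creds omitted

    // The server forwards each connection attempt to this subject.
    for await (const msg of nc.subscribe("$SYS.REQ.USER.AUTH")) {
      const req = decodeAuthRequest(sc.decode(msg.data));

      // The token the browser presented (e.g. the Keycloak/OIDC JWT from the cookie).
      const claims = await verifyOidcToken(req.connect_opts.auth_token);

      // Issue a NATS user JWT scoped by those claims, wrapped in a signed response.
      const userJwt = encodeUserJwt({
        sub: req.user_nkey,
        permissions: { pub: { allow: [`app.${claims.sub}.>`] } },
      });
      msg.respond(sc.encode(encodeAuthResponse({ jwt: userJwt })));
    }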


Not (necessarily) relevant to this solution, but I've recently come across a use case where it would be valuable to allow third-party (semi-trusted) individuals to sandbox/build/test some data processing/transformation pipelines in Jupyter, and then "operationalize" that into ongoing ETL in my main webapp/API once they're finished. Not the same use case as Mercury, but still in the bucket of hoisting a notebook into a more repeatable/operationalizable runtime. Does anyone have experience with something like that?


I'm using Papermill (https://github.com/nteract/papermill) to operationalize notebooks; it also has Airflow support, for example. I'm really happy with Papermill for automatic notebook execution. In my field it's nice that we can go very quickly from analysis to operations while having super transparent "logging" in the executed notebooks.


Also using Tauri with SQLite for a project. I only briefly looked at the provided SQLite plugin and quickly decided it didn't support everything I needed (custom scalar functions, for example, but there were others). As far as I know, though, all "official" Tauri plugins use the same event/command RPC mechanism available to you in userland for calling into Rust, so I don't actually think the Tauri SQLite plugin "does the access in the UI" in the truest sense; otherwise it wouldn't be a Tauri plugin, it would just be a vanilla JS lib.

If you've looked more closely and know that not to be the case, I'd like to hear what you've seen, but my understanding is that anything that leaves the WebView sandbox is using RPC to make calls into Rust.
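
For reference, the userland side of that mechanism looks roughly like this from JS; query_db is a hypothetical command name that a matching Rust handler (registered via #[tauri::command] and invoke_handler) would answer:

    import { invoke } from "@tauri-apps/api/tauri";

    // Hypothetical command and arguments; the Rust side runs the actual SQLite
    // query and returns the serialized result over Tauri's command RPC channel.
    const rows = await invoke<Array<{ id: number; name: string }>>("query_db", {
      sql: "SELECT id, name FROM items WHERE name LIKE ?1",
      params: ["%widget%"],
    });
    console.log(rows);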


I'd be surprised if it was anything other than what I can do myself via the existing event/command system... I just meant the question of controlling the queries themselves on the JS side vs. the Rust side... I'm kind of torn between something similar to a regular web API, abstracting over events, etc. Honestly, I should just start building something at this point... I started over this past weekend, got some hello-world bits working pretty quickly, and have been mostly toying since.


Looks to be using the kvvfs, which uses localStorage key/value pairs as the database's VFS. The file load-db.js (https://craft-of-emacs.kebab-ca.se/load-db.js) pre-loads the values into localStorage. I assume the values were determined by loading the database into localStorage through another method and then extracting it during development.
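
A minimal sketch of what that pre-loading step amounts to, assuming the dumped key/value pairs were serialized to a JSON object at build time (the file name is a placeholder; the actual keys are whatever the kvvfs wrote when the database was built):

    // Hypothetical dump produced by iterating localStorage after building the DB.
    import dump from "./search-db.json"; // placeholder path

    // Replay every key/value pair so sqlite3's kvvfs finds a ready-made database.
    for (const [key, value] of Object.entries(dump as Record<string, string>)) {
      localStorage.setItem(key, value);
    }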


Could you comment on which "flavor" of SQLite for WASM this is using and how you built or pulled the glue code? I think sql.js and absurd-sql are the best known solutions for this but it's not clear from the blog if you're using those or rolled your own. Information on how it was built or what prebuilt you're using would be fantastic.

I also see that load-db.js is loading the known search index values into localStorage ... do you have some tooling for creating this file from a known SQLite base file or was it just handrolled from a localStorage already holding a valid DB?

EDIT: Looking in the code, I found this:

    const db = new sqlite3.oo1.DB({
      filename: 'local',
      // The 't' flag enables tracing.
      flags: 'r',
      vfs: 'kvvfs'
    });

Googling led me to the "official" SQLite WASM pages, which otherwise don't appear too prominently in search results for whatever reason. This (https://sqlite.org/wasm/doc/trunk/about.md) seems like a good starting point and notes that both sql.js and absurd-sql are in fact inspirations.


Of course! As you’ve found through digging into the js, it uses the official SQLite WASM build. The site currently uses a slightly old vendored version built from the SQLite codebase, but the SQLite team have recently released a binary at https://sqlite.org/download.html (under the WebAssembly section) that anyone can try out. You can find the docs at https://sqlite.org/wasm/doc/trunk/index.md, which have been rapidly improving over the past few months.

The process for building the database is a bit complex. I want to support all browsers, so unfortunately I need to use local storage to back it. Firefox has a while to go before it supports the Origin Private File System, but once it does the build will be a lot smoother.

I build the index as part of the site's CI (using Nix) by running SQLite WASM in Deno to pre-load local storage. I then extract the keys from local storage and populate them as part of the site load using the hand-rolled load-db.js file.
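
A rough sketch of what that CI step could look like, assuming the @sqlite.org/sqlite-wasm npm package (the site actually vendors its own build), a placeholder schema, and a hypothetical output path; Deno's localStorage stands in for the browser's (run with something like `deno run -A --location http://localhost build-db.ts`):

    import sqlite3InitModule from "npm:@sqlite.org/sqlite-wasm";

    const sqlite3 = await sqlite3InitModule();

    // Build the search index into the kvvfs, which persists through localStorage.
    const db = new sqlite3.oo1.DB({ filename: "local", flags: "c", vfs: "kvvfs" });
    db.exec(`
      CREATE VIRTUAL TABLE pages USING fts5(title, body);
      INSERT INTO pages VALUES ('Hello', 'placeholder content');
    `);
    db.close();

    // Dump everything the kvvfs wrote so load-db.js can replay it in the browser.
    const dump: Record<string, string> = {};
    for (let i = 0; i < localStorage.length; i++) {
      const key = localStorage.key(i)!;
      dump[key] = localStorage.getItem(key)!;
    }
    Deno.writeTextFileSync("search-db.json", JSON.stringify(dump)); // hypothetical path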

SQLite WASM does have better support for importing / exporting databases on OPFS, so this process should be simpler as soon as I can move to it.

I’ll write a follow up post at some point on the implementation details.


Thanks for the info, that's really helpful. I was immediately curious whether SQLite-WASM would run in Deno, so I'm glad to hear that's something you've tried. It felt immediately like it "should", since Deno mirrors web APIs so well and supports a WASM runtime, but I wasn't sure.

I have a current project that starts out by building a client-side SQLite DB (currently in Tauri, but WASM SQLite would be an option as well) and that would benefit from an option to offload building really large DBs to a shared server process. Knowing the same or very similar code could run in Deno opens up some really interesting possibilities.

