
We used NixOps for a while. Small company (<100, >20), with lots of client deployments on prod and various pre-prod environments as well. NixOps worked well enough, and i never heard complaints about it. Though i wasn't in the trenches with it personally.

Our problem was hiring for it. Ultimately we ended up moving away from NixOps and into more Kube/AWS solutions to provide this basic orchestration. Unlike a lot of the Kube talk out there, Kube isn't entirely overkill for us - but NixOps worked just fine, and is probably a better fit. We just didn't have ops people with enough Nix knowledge to properly exploit it and easily achieve our goals. Nothing was easy.

With Kube/AWS, though, a lot of hiring decisions were dead easy, and bringing on people who already knew the stack was simple. As much as i'd prefer to be on Nix, reducing decision fatigue is at least nice.


Thank you, insightful. Do you recall if diving into NixOps internals and fixing bugs yourselves was necessary at some point? Did you have issues with incomplete coverage of AWS APIs?


In this case i'd say yes. It's a DB purposefully written for learning, with good references and documentation.

However, if it were just another DB, especially one competing in an area where no one really feels deficient, probably not.


Okay, i wasn't interested in the `less` usage in the GP comment because i use tmux to stop and view logs as they're passing by. However, automatically opening them in my CLI editor? That sounds sexy as hell!


Couldn't agree more. The only reason i'd call myself employable as a developer is because i've taken on many self-projects and reinvented many wheels, which resulted in lots of sharp corners and hard lessons learned.

It's a definite time sink to learn these lessons, but it's of real value to me personally.


Losing, but still buying in.


The users aren’t really buying in; their bosses are, most likely because it gets bundled into a contract with Office and Exchange etc.


Agreed, but only one party there is supposed to be working in your interest.

It's why i'm pro-big-government, but also very heavy-handed about what i want to see with government transparency and power dynamics.


heh, makes me want to go to a code camp now. I've got ~7 years professional(?) and ~11 years hobby experience, at two medium-sized companies. Spent my years reinventing a ton of things, and know a fair number of oddities about how to get things done. Combine that with a passion for idiomatic, standards-compliant, maintainable code and... i think i'd be a decent hire.

(Primary experience through the years in: Rust (current), Go, Python, JavaScript (Node/etc).)

Yet i don't think i'd pass a single interview. I really need to learn the fundamentals those interviews expect. Feels like a liability at this point.


Nobody that hired me has ever been disappointed by what they got, but there are a ton of people who never hired me because I can’t tell them whether something is O(n) or O(n*n), just ‘faster’ and ‘slower’.

Who the hell cares what it’s called as long as it’s the fastest it’s ever going to be.

> Do you have options other than caching?

Yes, but who cares since they’re all worse (for your wallet and for speed).

I may still be a bit salty.


Not wanting to rub salt in your wound, but there is a difference between being able to tell whether a car went fast or slow, definitely slower than the speed limit, or exactly 58.4 km/h.


That’s fine. If someone ever wants to know whether it was 49km/h or 51.3km/h we can figure it out. It’s just not knowledge that is very essential most of the time.


I'm writing a git-like, so i'm curious about your thoughts on this. What would undo do, exactly?

Eg, would it make a new commit reverting the previous one? Would it undo the last commit, dropping it from history? Or would it undo the last commit, putting the contents back into staging/unchecked?
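(For reference, mapping those three onto stock git - a rough sketch, shelling out from Python purely for illustration; each option is a separate choice, not a sequence to run:)

    import subprocess

    def git(*args):
        # Shell out to stock git; in a git-like these would be native operations.
        subprocess.run(["git", *args], check=True)

    # Option 1: "undo" as a new commit that reverts the previous one (history preserved).
    git("revert", "--no-edit", "HEAD")

    # Option 2: "undo" as dropping the last commit entirely, working tree included (destructive).
    git("reset", "--hard", "HEAD~1")

    # Option 3: "undo" as removing the last commit but keeping its contents around.
    git("reset", "--soft", "HEAD~1")   # the changes stay staged
    git("reset", "HEAD~1")             # --mixed (default): the changes become unstaged modifications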

My git-like is for structured data and is very, very different from git. Conceptually though, it's similar in that it's a hash tree of content addresses, so it could conceptually suffer the same problems if i wanted it to.

My plan, due to the nature of the project, is for the most part to avoid conflicts and be more forgiving in the primary flows. This makes sense (imo) because the application is about data retention more so than it is strictly a VCS. It just has VCS fundamentals to reach into, should they be needed.

But as i dive into the primary user flow - it feels like there are always a few cases that seem primary. Undo is a great example - making a new commit (revert), undoing the last and dropping it (hard reset), and undoing the last into staging all seem like primary UXs.

A lot of Git feels like the primary UX. I struggle to think of single concepts, like your undo example, where a simple path will be right for the user 90% of the time.

Thoughts?


IMO it's simple: with undo you want to 'go back in time' so you want to remove the commit and get the exact same list of staged/modified files that you had before (especially useful when you made a git commit -a which added too many files in the commit)


What's going to be in 5.11?


Apparently some features that will allow WINE to capture the Windows syscalls used by a lot of DRM and anti-cheat, meaning games with those might work through WINE/Proton now:

https://www.gamingonlinux.com/2020/10/collabora-expect-their...


Ah, they're emulating the Windows syscall interface; that's not so bad. I was worried they were making it easier to install rootkits, which seemed pretty weird.

Eventually Linux and NT will converge to the point that language runtimes run on both and apps run on both. Which works better will largely be a question of linker flags.


it's not emulating syscalls, but it's adding the ability to trap the syscall instruction and provide a handler in user space


As if NT and Linux will become implementation details. Odd.


Hey, it seems this was clarified to not be work towards anti-cheat after all; it appears to have been a misunderstanding.

https://www.reddit.com/r/linux_gaming/comments/jtz08q/collab...


I would assume syscall user dispatch. "Syscall User Dispatch allows for efficiently redirecting system calls and to be done so for just a portion of the binary. The system calls can be redirected back to user-space so they can be handled by the likes of Wine."

https://www.phoronix.com/scan.php?page=news_item&px=Syscall-...


Interesting article. On this note, recently, as part of a small group of people, we needed to implement API -> API authentication. Unfortunately we don't have anyone who has implemented this on their own before, so we took to searching to try and pick something resembling an industry standard. Of course Oauth 2 got brought up; however, the spec and articles on it seem bizarrely out of sync with our needs. Notably on two points:

1. It seems any piece of content talking about Oauth 2 refers to 3rd party, external authentication. Ie, i want a client API to talk to my API, but with an identity managed entirely by a third party. This is not our use case, but it seems to be the primary focus of Oauth 2 articles.

2. The rest of the spec _feels_ very loose for our use case. There are loose definitions _(it feels like)_ of some basic patterns of tokens and renewal tokens, but it all seems so wildly flexible that we're almost lost on what to actually write, what technologies to use to generate tokens, etc.

Any tips for narrowing the field, to implement the right thing, in the right way? This article struck a chord with me.


most of my knowledge about oauth comes from using it at my dayjob for years, so don't take my answer with any authority. its more a user's perspective on the technology than how it was actually meant to be used, which i do not know.

first off: the whole idea of using oauth in your systems is to externalize authentication. its main usecase is if you have several services which share the same users... usually microservices, but everything else which supports oauth works as well, obviously

if you do not actually want to externalize your authentication, then oauth is likely not what you actually want. (keep in mind this only applies to the process of authentication. you will probably still create a `users` table to put a foreign key on your domain entities. said users table just won't have a `password` field or similar)

from this perspective it makes sense that you basically only find blog posts about integrating 3rd party tools, right? you're just setting out to implement a new oauth2 server otherwise, after all, and... i'd strongly suggest you don't. there are so many pitfalls you can fall into. it's not a trivial subject.

thankfully, there are readily available FOSS oauth2 servers like keycloak around if that's what you want to use...

but to come back to my initial point: if you do not want to externalize your authentication, then do not use oauth! almost all frameworks already have tools available you can use to authenticate with api tokens. it keeps you within the bounds of your framework's strengths, and you can automatically fetch a token on your frontend after the user has authenticated with your usual flow.
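to make that concrete, here's a rough flask-flavoured sketch of the kind of static api token check i mean (the token value and header layout are just placeholders, not something prescribed by flask):

    import hmac
    from flask import Flask, request, abort

    app = Flask(__name__)
    API_TOKEN = "a-long-random-string-shared-out-of-band"  # placeholder

    @app.before_request
    def require_api_token():
        supplied = request.headers.get("Authorization", "")
        # constant-time comparison to avoid timing leaks
        if not hmac.compare_digest(supplied, f"Bearer {API_TOKEN}"):
            abort(401)

    @app.route("/things")
    def list_things():
        return {"things": []}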

but i'm already off topic considering your first question. if you do use a solution like keycloak, there are `oauth clients`. these behave exactly as you want.


> first off: the whole idea of using oauth in your systems is to externalize authentication. its main usecase is if you have several services which share the same users... usually microservices, but everything else which supports oauth works as well, obviously

We don't want to externalize it. For clarity _(because i think i implied internal API->API)_, our primary concern right now is a client API interfacing with our API. We want to choose an implementation that clients would expect, and can easily reason about.

There may be a future where those clients have to authenticate to multiple APIs / microservices of ours, but currently it is one API setup for this explicit purpose.

> but to come back to my initial point: if you do not want to externalize your authentication, then do not use oauth! almost all frameworks already have tools available you can use to authenticate with api tokens.

Yea, not using Oauth was my thought as well. My concern, however, was trying to pick something clients would expect. I don't want it to feel custom, arbitrary, hodgepodge.

Our very early impression / plan is to use a renewal token and a short-lived token, similar to oauth. However this is loose, and custom, and i don't want clients feeling like we're making things up. Hell, i don't want to make things up.

My current thought is that I need to research JWT more, as it may fit what we need, and be more standardized.

edit: And PASETO


You should probably just try out a foss oauth server to get a feel for it.

Just start a keycloak server with docker and write a small webservice in your language of choice which only lets authenticated users open the website and just prints the user name, for example.

After you've done that, you could write a second service which accesses that api using a client (not a user) and prints which users have accessed it since it was started.

That should be doable within a few hours at most and give you a feel for the technology.

If you're fluent in python, I'd personally suggest just starting a hello world flask project, connecting it to a keycloak and writing a second cli script which simulates the non-user access. (If you use Java, keep in mind that springboot2 uses Springsecurity5, which was incompatible with the official keycloak Java sdk the last time I checked. Either use springboot1 or expect to have a slightly harder time figuring out what to write in the application.properties)
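As a very rough sketch of what the flask side could look like, verifying a keycloak-issued token with PyJWT (the realm URL, audience and claim names are placeholders you'd swap for your own, and the JWKS path can differ between keycloak versions):

    import jwt  # PyJWT
    from flask import Flask, request, abort

    app = Flask(__name__)

    ISSUER = "https://keycloak.example.com/realms/myrealm"  # placeholder realm
    jwks_client = jwt.PyJWKClient(f"{ISSUER}/protocol/openid-connect/certs")

    def current_claims():
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            abort(401)
        token = auth[len("Bearer "):]
        try:
            key = jwks_client.get_signing_key_from_jwt(token).key
            return jwt.decode(token, key, algorithms=["RS256"],
                              issuer=ISSUER, audience="account")  # audience depends on your client config
        except jwt.exceptions.PyJWTError:
            abort(401)

    @app.route("/")
    def whoami():
        return {"user": current_claims().get("preferred_username", "<client>")}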

Reimplementing advanced authentication systems like jwt on your own is extremely error prone and frankly unnecessary if you're not in the business of authenticating users. I'd suggest to either use whatever your framework prefers (which is usually just a static and very long, manually rotated token) or try to externalize it by using readily available foss solutions


> Yea, not using Oauth was my thought as well. My concern, however, was trying to pick something clients would expect. I don't want it to feel custom, arbitrary, hodgepodge.

I think clients often expect one of the following (assuming everything is done over HTTPS):

1. HTTP basic auth
2. Login to the API, get a token, and use that token for requests
3. Generate an API key for the client, and the client sends the key on every request
4. (Occasionally) Cryptographic signing of every request, sending the signature with the request

1 and 3 (if implemented as a username/password on the back end) are going to have similar trade offs. You are fully authenticating every request that comes in against your credentials data store.

Option 2 is where OIDC/OAuth2 comes into play, or you could use a standard session token approach if you want. In OAuth terminology the client credentials grant is the simple flow where the client logs in with its credentials (effectively a username/password for the client), gets a token back, and includes that token in API calls. Option 2 is also valuable if you want to support mobile clients with refresh tokens and such, using things like the authorization code grant - essentially allowing users to log in to a mobile app once but use short-lived tokens when calling the API, without prompting users for credentials every few minutes.
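For the client side of that flow it can be as small as the following sketch (the token endpoint URL, credentials, and API endpoint are placeholders):

    import requests

    TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder token endpoint

    # Client credentials grant: the client's id/secret play the role of a username/password.
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=("my-client-id", "my-client-secret"),  # placeholders
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]

    # Include the token on every API call until it expires, then fetch a new one.
    api = requests.get(
        "https://api.example.com/v1/widgets",  # placeholder API endpoint
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(api.status_code)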

Option 4 can be an alternative to token-based approaches to reduce load on your credentials database but gets more complicated for the client.

Ultimately if you are looking to keep things simple I would lean towards HTTP Basic auth until/unless you are building a mobile app or otherwise really need to move to a token-oriented approach. At that point I would consider full fledged OIDC/OAuth2 or maybe a login that generates an API session token.


I've been doing a similar thing recently. I found PASETO to be interesting ( https://paseto.io ), which is a lot like JWT but is simpler and more opinionated (e.g. there is no plaintext option; symmetric and asymmetric operations are cleanly separated; etc.). I especially enjoyed watching this video https://youtu.be/RijGNytjbOI

I still decided to go with JWT, since PASETO isn't yet standard and its software hasn't been around for long, but it definitely helped me understand more about the choice I was making. In particular, PASETO distinguishes between "local" tokens (producer and consumer can both use the same symmetric key) versus "public" tokens (producer and consumer can't use the same key, e.g. third party services run by different organisations; hence asymmetric keys are needed). PASETO encourages local tokens to be used when possible, since their keys can be shorter, they generate shorter tokens, they're faster to encrypt/decrypt, they're easier to rotate, etc.; public tokens are essentially a last resort when we must interact with a third party.

I was originally considering 'public' JWTs (i.e. asymmetric keys) for my use case, since we're using AWS API Gateway and that has the option to check the signatures of 'public' JWTs, and reject invalid ones before they ever hit my code. However, I eventually went with 'local' JWTs (i.e. symmetric keys), since the convenience of having AWS discard some tokens for us wasn't worth the (mostly logistical) overhead of using asymmetric keys.

My choice to use the JJWT library was also partly due to its recommendation by the JPaseto library ( https://github.com/paseto-toolkit/jpaseto ).


PASETO looks interesting, thank you!

Since my primary concern is external client APIs communicating with ours securely, but also in a way that's not too foreign, i imagine JWT would fit the bill of being widely known. PASETO sounds really interesting too - having fewer things to get wrong sounds amazing. My concern with PASETO is that the client APIs belong to huge, slow-moving customers, so i'm hesitant to choose anything that's not super mainstream.

I'll research PASETO more, but i feel like my decision will come down to fitting what the customer can use - less change is more, in that case. Ie, JWT.


JWT works in this case... you issue a token to a known user. In this case you create an API user (it doesn't have to exist in a backing store), set the expiration to a super short duration, and validate it on the other end... You can also use this for API testing in unit/integration tests.
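Roughly, with PyJWT and a shared secret (the names and the 5 minute lifetime are just placeholders):

    import datetime
    import jwt  # PyJWT

    SECRET = "shared-secret-distributed-out-of-band"  # placeholder

    def issue_token(client_id: str) -> str:
        now = datetime.datetime.now(datetime.timezone.utc)
        return jwt.encode(
            {"sub": client_id, "iat": now, "exp": now + datetime.timedelta(minutes=5)},
            SECRET,
            algorithm="HS256",
        )

    def validate_token(token: str) -> dict:
        # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on stale or tampered tokens.
        return jwt.decode(token, SECRET, algorithms=["HS256"])

    token = issue_token("api-user")        # hypothetical API user
    print(validate_token(token)["sub"])    # -> "api-user"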


OAuth is perfectly valid as a standard for inter-API authentication but it is complicated. You'll also have to figure out separately how your oAuth clients manage the secrets they use to authenticate to the oAuth server to retrieve access tokens, and if encryption as well as authentication between the APIs is important to you, how to manage the keys to do so.

Take a look at https://spiffe.io/ which avoids these concerns and focuses specifically on system-to-system authentication at scale.


This is extremely context dependent. With just a few services, you can generate and statically place a few random strings to use as bearer tokens. In a more complex microservices environment, you want some way for the scheduler to participate in identity issuance, for example SPIFFE. Also consider whether the applications can actively participate in the scheme, or if it needs to be abstracted through sidecar proxies. Also consider whether you are doing TLS. Bearer tokens on plaintext connections are weak; maybe you need HMACs. Or to sort out TLS first. Then consider whether you need tokens at all, or can just use client certificates. Etc.
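If HMACs do turn out to be necessary, the per-request signing itself is small. A sketch with a placeholder shared key and header names (a real scheme should also pin down canonicalization and a replay window):

    import hashlib
    import hmac
    import time

    SHARED_KEY = b"placeholder-shared-key"

    def sign_request(method: str, path: str, body: bytes) -> dict:
        # Sign the parts of the request the server can recompute; the timestamp limits replay.
        timestamp = str(int(time.time()))
        message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
        signature = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
        return {"X-Timestamp": timestamp, "X-Signature": signature}

    headers = sign_request("POST", "/v1/widgets", b'{"name": "example"}')
    # The server rebuilds the same message from the incoming request, recomputes the HMAC,
    # and compares with hmac.compare_digest before trusting the caller.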


Where are you blocked in this use case?

The easiest flow IMO: The API client needs a service account (username and password like a human). It authenticates to the OIDC server with the username/password and obtains a token. Then it can call any API service with the token.

It's quite straightforward really.

