Ask HN: What's the recommended method of adding authentication to a REST API?
397 points by somtum on Jan 16, 2018 | 249 comments



This question comes up all the time on HN. I'm one of a bunch of people on HN who do this kind of work professionally. Here's a recent comment on a story about it:

https://news.ycombinator.com/item?id=16006394

The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".
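
For example, the whole scheme is a few lines (a sketch; `db.run` stands in for whatever storage layer you already have):

    const crypto = require('crypto');

    // 16 random bytes (128 bits) from the OS CSPRNG; on Linux this is
    // ultimately /dev/urandom.
    function issueToken(db, accountId) {
      const token = crypto.randomBytes(16).toString('hex');
      // `db.run` is a stand-in for whatever storage layer you already have.
      db.run('INSERT INTO api_tokens (account_id, token) VALUES (?, ?)',
             [accountId, token]);
      return token; // shown to the user once; the token is the whole credential
    }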

You do not need Amazon-style "access keys" to go with your secrets. You do not need OAuth (OAuth is for delegated authentication, to allow 3rd parties to take actions on behalf of users), you do not need special UUID formats, you do not need to mess around with localStorage, you do not need TLS client certificates, you do not need cryptographic signatures, or any kind of public key crypto, or really any cryptography at all. You almost certainly do not and will not need "stateless" authentication; to get it, you will sacrifice security and some usability, and in a typical application that depends on a database to get anything done anyways, you'll make those sacrifices for nothing.

Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately. JWTs are easy for developers to use, because libraries are a thing, but impossible for security engineers to reason about. There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.


One very important thing you didn't mention: you MUST force transport encryption (SSL/TLS) to be used (deny plaintext connections). This is because the bearer token can be stolen by eavesdropping, and since it's a bearer token it can be re-used by anyone, anywhere.

Also remember to time out the tokens: it is almost never a good idea to permit infinitely long login sessions (it's surprising how often I see this not done). And remember to invalidate the token when the user changes their password.

I agree that OAuth is not necessary on its own, but it can be appropriate if you are also supporting delegated authentication with various 3rd parties: make your own native auth just another OAuth provider.


You need transport encryption no matter what, not just for REST APIs. I'm just addressing the app design concerns.


Is it not true that Basic Authentication requires TLS because it transports credentials as plain text, while other schemes encrypt the credentials at a higher level before they're put on the wire? In that case, why is TLS a necessity?


It's necessary because otherwise the encrypted credentials could be copied and used to access the service.


It's not a given that something like a Diffie-Hellman shared secret is going to put the actual secret on the wire during authentication, or that smart auth strategies similar to this one that don't directly transmit secrets are always going to be susceptible to replay attacks.

(I am a dog on the internet, so don't listen to me. I also heard that the best way to get a really good answer is not just to ask any old question, but to give a wrong answer...)


That's interesting; do you have an example or an article about how it would not be susceptible to replay attacks?


I think the key is that you must have a shared secret in advance, which is probably something you won't have in most cases unless you're building a point-to-point encryption.

I'm afraid I don't have such an article, and I'm not an expert (just a dog, right), but in the article that explained Diffie-Hellman in a way that made me feel like I was understanding it, you each get a paint color, and you have a pre-negotiated shared secret color. You mix the paint colors to send signals, and you know what the colors look like when they are blended with the secret, because you've seen them before.

What's missing from this to make it safe from replay attacks? (It's obvious that if this is the whole setup, if you could observe the color transmitted, you could simply pass the color again if you wanted it to appear that the message was transmitted a second time.)

The answer, I think, is a nonce or IV (Initialization Vector). I'm not particularly clear on how a nonce is different from an IV, or if you would only ever use one or the other, or if you might use both in certain cases, or in all cases...


RE: Denying plaintext connections. Totally agree & great if your clients connect directly to hosting you have full control over. The biggest problem I've come across is that cloud services like CloudFlare & API gateways (Tyk for example) don't have the option (or at least I couldn't find it) to disable HTTP traffic. Plenty offer to redirect HTTP to HTTPS but I haven't been able to refuse HTTP traffic outright.

Does anyone have any recommendations for services that do offer this? (or where those options are in the named services)


If you can install an open source Tyk gateway in your stack, you can use TLS on the Tyk Gateway to refuse HTTP outright. However, this isn't a Tyk cloud feature at present.


Correction: if Tyk Cloud has a custom domain (CNAME) setup for you then it can refuse plain HTTP – it's not a self-service option though.


> Do not use JWTs, which are an increasingly (and somewhat inexplicably) popular cryptographic token that every working cryptography engineer I've spoken to hates passionately.

Can we get more intel behind why JWT is bad? I've always been told that as long as you explicitly check both the signature and that the signature algorithm type is what you expect, it's fine. Libraries tend to make the mistake of not doing that second part for you, so implementations have to be aware of that.

The one concern I've always had is that even though they are stateless, most implementations end up making a db request to get info about the user anyway (i.e. their role and permissions), so the stateless advantage of JWT isn't as big as it is touted. You can embed that into the JWT, but then the token inevitably gets huge.
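
For reference, the "pin the algorithm" check I mean looks roughly like this (using the jsonwebtoken npm package as just one example of a library):

    const jwt = require('jsonwebtoken');

    function verifyApiToken(token, secret) {
      // Pin the algorithm list so a token claiming alg "none" (or an
      // unexpected asymmetric alg) is rejected rather than trusted.
      return jwt.verify(token, secret, { algorithms: ['HS256'] });
      // throws on a bad signature, wrong algorithm, or expired token
    }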


You can't prematurely expire or invalidate JWTs once created, unless you keep a database of tokens, and at that point you should just use sessions, because that's basically what it is: a session token with additional data.


That doesn't mean JWTs are bad; it just means their use case is more restrictive. JWTs are designed for short-lived sessions; think Google API tokens that have a validity of 1 hour. If you're using them for anything longer than that, then you'll probably need to back them with a database so you can support revocation, and at that point JWTs make less sense because they're so large.


I can’t fathom what the problem could be when using private key encryption to create stateless tokens.

I do this to create bearer tokens without JWT.

Anyway, you can find a lot of his comments about JWT by searching 'tptacek JWT site:ycombinator.com'


In the box at the bottom of the page, just type "author:tptacek jwt" (make sure you switch to "comments" mode).

HN search is much more efficient than Google for this.


If a web application communicates with its back end via a REST API, and that API is only meant to serve the web app and no other client, and if they communicate with each other on the same origin (http://myapp.com/api), will I need authentication on that REST API at all? Will disabling CORS be good enough?

Sorry if the answer is obvious...this is not my area of expertise.


It's very possible I completely misunderstood your suggestion, however in case I didn't.

If you're storing the key on the client (cookie or w/e) and in the database and solely using it to authenticate, aren't you going to run into timing attacks if you're using it for retrieval?

What I typically do is also store a unique identifier like email for the lookup and then use a random key for comparison / validation.


Yeah, could the DB token lookup timings by themselves be used to find a real token? It might be several layers deep and DBs are noisy, but I think it's still possible in theory. Could you get around this by only storing some hash of both the token and a DB secret?
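
Something like this is what I have in mind (a rough sketch; DB_PEPPER and db.get are made-up names):

    const crypto = require('crypto');

    // Look sessions up by a keyed hash of the token instead of the raw value.
    // DB_PEPPER is a server-side secret that never goes into the database.
    function tokenLookupKey(token) {
      return crypto.createHmac('sha256', process.env.DB_PEPPER)
                   .update(token)
                   .digest('hex');
    }

    async function findSession(db, presentedToken) {
      // `db.get` is a stand-in for whatever query helper you use.
      return db.get('SELECT account_id FROM api_tokens WHERE token_hash = ?',
                    [tokenLookupKey(presentedToken)]);
    }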


Heck, I've been overthinking this. Thank you!


> There's a race to replace JWTs with something better (PAST is an example) and while I don't think the underlying idea behind JWTs is any good either, even if you disagree, you should still wait for one of the better formats to gain acceptance.

Aside from PAST, I recently came across this[0] (called branca), as I too was looking for an alternative to JWT. It seems to be based on Fernet, which, according to the maintainer of this spec, isn't maintained anymore.

[0] https://github.com/tuupola/branca-spec


Great answer. One thing I'd like to add is if you're using bearer tokens, make sure your API has an easy way to invalidate and regenerate them, as anyone with the bearer token has full access.


What do you mean when you say: "you do not need to mess around with localStorage"? Aren't you supposed to store this token on the client side (presumably using localStorage/cookies), and then include it in the request?

You seem to know what you're talking about, but I'm a bit confused. With JWT I just store the token in localStorage and then add an Authorization: Bearer header with the token. What's the recommended approach? To send username + token as part of the form data?


Great points! I use high-entropy random secrets as passwords in the Basic Authorization header, with their hashes stored in a database. I also use cookies to make the browser experience pleasant and secure. The cookies are based on an HMAC that uses a single server-side secret, a string representing the principal, and a timestamp. So the cookies work without needing server sessions.

HTTPS is mandatory of course, and caching successful authorizations helps performance.


All of this seems inferior to just using a 128 bit random token drawn from /dev/urandom and stored in a database. If you see "HMAC" in an API authentication design, something weird and probably bad is happening.


I would say, store the hash of the token in the database, but that's my personal preference to add a bit of defense against timing attacks, insider stealing the token, or token database breach.


I don't think this buys you anything. Keep things simple.


I feel like picking a fight. What's the downside? It's super simple to hash them, and it prevents a read-only exploit from turning into a major catastrophe.

Example from something I built. If our db was accessed by an attacker, with all client API keys, they could go liquidate those accounts (place phone calls). Huge loss, and not far-fetched (this kind of attack happens daily and is profitable). With hashed API keys, nothing is possible. We don't even need to tell people to rotate keys. With plain keys, we'd need to freeze usage for people without e.g. IP address restrictions in place.

Read-only leaks happen all the time. Why not make sure they don't impact your clients' API usage?

I'm not just trying to fight. It's a handful of trivially-validated lines of code that significantly mitigate the impact of a data leak. Seems like a super easy tradeoff.


Click the link in his comment.


I’ve frequently used the equivalent of HMAC_SHA512(long_secret,uid + ’|’ + timestamp) to generate a token on the server, which the client can retain and pass along requests, and can be verified on the server without persistence. I assume this is what you refer to as stateless authentication. While I agree that there are no real performance reasons to do so, it seems convenient to me every now and then. Is there a security reason to stop doing so?
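
For concreteness, roughly what I mean (a sketch with made-up names, not my exact code):

    const crypto = require('crypto');

    function mintToken(longSecret, uid) {
      const payload = uid + '|' + Date.now();
      const mac = crypto.createHmac('sha512', longSecret).update(payload).digest('hex');
      return payload + '|' + mac;
    }

    function checkToken(longSecret, token, maxAgeMs) {
      const [uid, ts, mac] = token.split('|');
      const expected = crypto.createHmac('sha512', longSecret)
                             .update(uid + '|' + ts).digest('hex');
      // constant-time comparison of the MACs
      const ok = mac && mac.length === expected.length &&
                 crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
      return (ok && Date.now() - Number(ts) < maxAgeMs) ? uid : null;
    }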


Only if the long_secret was made public.


How do you revoke access to that token?


Well, you don’t, so if that’s a requirement you’ve got to do it some other way.


You can rotate the secret to invalidate all tokens.


No, you can't. That breaks all of your users, and so you'll rarely do it, even when it might be warranted. Don't engineer security countermeasures that you (a) might need to rely on and (b) will be afraid to use.


Good points. But for some types of apps, you might have groups of users (a company, team, municipality) for which you can afford the cost of "everyone log in again". Or you might be able to safely log out everyone after business hours (if they're in the same timezone).


I like your "keep it simple" approach here.

Can you confirm, re: your recommendation for random "bearer token" auth: are you talking about just short-lived tokens that are the response to a regular user auth flow (i.e. log in with one request, get a token, use that for subsequent requests in this "session", à la a session cookie), or do you (also) intend it for long-lived tokens used e.g. for machine accounts?


You can use short-lived tokens, or you can use long-lived tokens. A long-lived 128 bit secret is superior to a username/password, for reasons explained elsewhere on the thread. So if your short-term token scheme requires programs to occasionally deploy the root account's password (or really any password that a user had to come up with), it's flawed.


Right - I agree that deploying a human-used password is not a viable option.

I'm thinking more in terms of deviating from your described solution on storing keys (particularly long-term ones) by storing them hashed (and thus requiring some kind of account identifier prefix in the Bearer token string).


Could you clarify why (or when) to use bearer tokens instead of Basic Authentication (i.e. sending username and password) with every request? Is it that if the server is compromised, only passwords from future logins can be stolen rather than those of everyone who performs a request? The cost of checking the hash? Other reasons?


The purpose of a password is to have an authenticator that is human-readable/writeable and, ideally, human-memorable. Ideally, a strong password is stored only in a person's head; in the very worst (and unfortunately common) case, it's also stored in a password manager.

API authentication doesn't have the memorability problem, because the password has to be stored somewhere the client program can reach it. But, as you can see, it does have the storage problem, which means you need to account for the fact that it might be compromised in storage.

So you need a separate credential for API access. Since it's only going to be used by computers, there's no purpose served by making it anything other than a computer-usable secure credential; hence: a 128 bit token from /dev/urandom.

Once you have a 128 bit token, there's also little purpose served by forcing clients to relay usernames, since any 128 bit random token will uniquely identify an account anyways.


It was my impression that the token is obtained by supplying username and password to a login API, through which it can also be replaced when it expires. Now it seems that I was wrong. How do you suggest the token be managed?


You can do that, or you can just have a page in the application where the user manually fetches a new API token, which is a pretty common UX for this.


> You almost certainly do not and will not need "stateless" authentication

Unless you get Bill Murray to run into people on the street or crash their all-hands meetings and tell them this, no one will believe you. Or at least, it's worth a try since nothing else seems to work, as seen in thread.


My plan is just to keep repeating it over and over again so that more people will see the words arranged that way in a sentence: "you don't need stateless authentication; it's usually not a good thing".


That is sometimes what it takes. Appreciate the additional explanation you’ve put into other comments from this thread. Will most likely be referring back to this one when I get tired of repeating similar points. ;)


Key quote here is

> Do the simplest thing that will work:

For many, a long randomized bearer token will do. Depending on the type of data you expose via the API (for example, financial data or PII), this may not be sufficient for your security team or auditors.


I run security teams for startups with my current firm, Latacora, which is me and 5 other veterans of security firms. Our clients engage with financial services and with regulated environments (like HIPAA/HITECH and the standards and practices of large health networks). Before that, I founded a company called Matasano, which for almost 10 years was one of the largest software security firms in North America. Unlike at Latacora, where our clients are all startups, Matasano's clients ran the gamut from startups to big tech firms to international banks, trading exchanges, utilities, and pharmas.

With the exception of the military, which I on principle won't work with, there's probably no regulatory or audit regime I haven't had experience with.

I say all this as lead-up to a simple assertion: I have never once seen an auditor push back on bearer-token API access. It's the global standard for this problem. If you knew how, for instance, clearing and settlement worked for major exchanges, you'd laugh at the idea that 128 bit random tokens would trip an audit flag.

tl;dr: No.


Why won't you work with the military? (genuinely asking)


In case tptacek doesn’t respond, I’d like to give a (hopefully helpful) answer in the aggregate sense: many members of the information security industry refuse to work with the public sector (read: military and intelligence agencies) for reasons of personal ethics. They typically dislike the privacy-antagonistic ends to which they consider their skills, software or inventions would be used. This is particularly the case with many cryptographers, who could walk onto jobs or lucrative contracts with the NSA, but who would never even consider it.

I haven’t spoken to tptacek about this directly, so I want to make it clear I can’t elucidate his specific concerns. But the broad strokes of his principles are very common in the industry, and typically stem from a disagreement in how the government approaches security (philosophically speaking).


Just a brief addition to this as a user of APIs. Allow me to create a new token and have two tokens for the same account. This makes rotation of tokens without loss of service possible :)


The downside to allowing multiple tokens is that in the real world, people will create new tokens for new deployment environments, but never delete the old ones, which will inevitably end up on Github somehow.


Agreed, but it's a solvable problem.

e.g. You can request a second token, but doing so immediately sets the currently active token to expire and be deleted in X days.
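
A rough sketch of that policy (made-up schema, X = 7 here):

    const crypto = require('crypto');
    const GRACE_MS = 7 * 24 * 60 * 60 * 1000; // "X days" of overlap

    async function rotateToken(db, accountId) {
      // Existing tokens keep working for the grace window, then get purged.
      await db.run('UPDATE api_tokens SET expires_at = ? WHERE account_id = ? AND expires_at IS NULL',
                   [Date.now() + GRACE_MS, accountId]);
      const newToken = crypto.randomBytes(16).toString('hex');
      await db.run('INSERT INTO api_tokens (account_id, token, expires_at) VALUES (?, ?, NULL)',
                   [accountId, newToken]);
      return newToken;
    }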


Sorry, I made it sound like I think multiple tokens is an illegitimate choice. It's not; I just think, be aware of the tradeoffs and keep things as simple as they can possibly be.


Don't major companies like Google use JWTs?


Major companies like Google do all sorts of dumb things, and, equally importantly, have gigantic well-funded security teams triple-checking what they do, so that the pitfalls in these standards are mostly an externality to them.

(The actual answer is: I have no idea what Google does with JWTs.)


> have gigantic well-funded security teams triple-checking what they do

Yes! This is the answer to almost every "Well, it works for Google..." that comes up.

Alice: "It works for Google!"

Bob: "Sure, but how many PhDs does Google have on payroll managing it?"

Google has the resources, meaning dollars and reputation, that if they want to do something, they can hire anyone they want to make it possible. They frequently hire the authors of the programming languages and environments they use (that weren't already invented in-house), who can then customize everything to fit Google's needs just so.

Assuming you're a normal mortal corporation, getting the inventors of your platforms on board and committed to your problems specifically is no trivial matter, and you don't have an army of bona fide, credentialed computer scientists on payroll to patch any intervening rough spots, so "Google does it" is not really applicable.


With the right choice of algorithm, and if you control the code which interprets that JWT (whitelisting the alg and choosing the right library), I don't see a reason why JWT would be insecure.


It's absolutely true that if you do everything correctly, a JWT implementation can be secure. Generally, in crypto engineering, we're looking for constructions that minimize the number of things you need to get right.


Yeah, this is the problem with crypto security people: they are one-dimensional. JWT has the benefit of allowing disconnected services to send each other information through the front end.

Which minimizes the number of things you need to get right, or in your words, equals more secure.

Designing a token that can be validated instead of looked up. (Design/Implement once)

Or

maintaining, updating and monitoring a set of firewall rules so that app servers in network zones x and y can make calls back to a database in network zone z. (Design many, implement many)

There are a ton of great reasons to use JWT at scale. As with anything use case is important.


I disagree almost absolutely with all of this but am comfortable with how persuasive our respective comments on this thread are or aren't to the rest of HN. I'd rather not re-make points I've already made.


That's also an argument for just using Kerberos over the Internet. And I'm not sure that's a good idea.

I'm almost certain it's a bad idea if that means rolling your own Kerberos implementations in php, javascript and golang in order for your various back-end to speak to your various front-ends.

But sure, leverage secret-key crypto and tickets in your own implementation in a way that's more secure than Kerberos.

Or, use a solution that's simple enough any weakness is fairly obvious.


Fair point.


The answer would be yes with a rider. It would vary across the use cases and security requirements.


Do you have problems of the same complexity as google?


No. But OP said that JWT is insecure, yet it seems widely adopted by top tech companies.


Signed REST requests are good too, more secure. And what’s wrong with delegated auth?


Signed requests are not in general more secure:

* The implementation of cryptographic "signing" (really, virtually never signing but rather message authentication) is susceptible to implementation errors.

* The concept of signing is susceptible to an entire class of implementation errors falling under the category of "quoting and canonicalization". See: basically every well-known implementation of "signed URLs" for examples.

* Signed URLs have to make their own special arrangements for expiration and replay protection in ways that stateful authentication doesn't, either because the stateful implementation is trivial or because stateful auth "can't" do the things stateless auth can (like explicitly handing off a URL to a third party for delegated execution).

Stateless authentication is almost never a better idea than stateful authentication. In the real world, most important APIs are statefully authenticated. This is true even in a lot of cases where those APIs (inexplicably) use JWTs, as you discover when you get contracted to look at the source code.

Delegated authentication is useful situationally. But really, there aren't all that many situations where it's needed. It's categorically not useful as a default; it's a terrible, dangerous default.


Every time I read about an API that uses signed/authenticated requests (AWS, Let's Encrypt ACME), I wonder exactly the same thing: why is this needed in the first place? If TLS guarantees lack of replays, it seems to me like signed requests just protect their own complex infrastructure from reusing the same request twice...


A lot of text for your argument, which is "x isn't secure". Not very compelling.

Signed rest requests ensure that auth tokens can not be leaked as each request is individually signed by a private key.

Your extreme example, btw, is hyperbolic. Providing signing sample code to clients is pretty typical.


I'm explaining where I'm coming from as a courtesy. I am also comfortable with the number and kind of HN readers who would simply take my argument as-stated without justification: "don't do signed URLs if you can get away with bearer tokens".


>susceptible to implementation errors.

So, just like anything else you might want to implement: if you do it wrong, it's insecure.


So really what everyone should do are PAKE schemes based on isogenies, because then we're all post-quantum secure, and, after all, it's only insecure if you do it wrong.


There's nothing wrong with delegated auth, but in this situation you're not doing any delegation.


What's so bad about JWTs? The cryptographic protocol is sound.


People write things like "the cryptographic protocol behind JWT is sound" and I always wonder where those assertions come from. Do you just think it must be, because none of the people you talk to say it isn't?


More to your original point: why add the complexity before it is required?

Let the complexity of the solution incrementally grow with the complexity of the problem being solved.


There is nothing wrong with JWTs; implementing them requires some thought so that you don't leak sensitive info, as well as configuring your backend properly.

Large swaths of the internet love to hate on JWT, but it's a major feature in OAuth2 and is in use all over the place as decentralized APIs have become more commonplace.


Do you recommend signing requests?


No. The reason to sign requests is the same as the reason to support OAuth: so that the owner of the account can sign a request and give it to someone else to execute --- delegated authentication. Signed requests are finer-grained than OAuth is, but OAuth is much simpler and is the industry standard at this point. Don't do either thing until you absolutely need it, but then, start with OAuth.

Signed requests have burned a bunch of applications, more than have been burned by OAuth.


Thanks for the response!

My thinking was that you might sign requests so that a request that was intercepted or inadvertently logged would not contain sufficient credentials to authorize arbitrary other requests for the indefinite future. It sounds like you do not consider that a significant enough issue to justify the complexity involved.


Good point: credentials must not be logged. The easiest way to achieve this is to use HTTP basic auth for the token, because web server infrastructure already knows not to log that, or an OAuth2-style Authorization header.


No. You need to be securely using TLS anyways. Signed requests are hard to get right (Google "canonicalization vulnerabilities"). That's not a good tradeoff.


Signed requests were also invented when the transport connection was in the clear: if the request were not signed then it could be modified in transit by an attacker. These days all sessions are encrypted (SSL/TLS) and so this concern doesn't exist (or doesn't exist if you trust the transport).


The AWS API runs over TLS, and uses signed requests.


Perhaps because it was originally designed for use without TLS? Request signing was pretty much ubiquitous 10 years ago.


I'm not from Amazon but I'd guess they want to protect the request from being replayed inside their own systems.


More likely it's so they don't have to have a more convoluted process where they call out to the requesting service to verify the request, and all that entails (on both sides).


man, reading this comment was like a breath of fresh air.

we've been collectively brainwashed to reach out for AWS for almost any dev related work.

Also refreshing that a $5 DO/Linode box can do everything AWS can without the learning curve.


>>> The short answer is: don't overthink it. Do the simplest thing that will work: use 16+ byte random keys read from /dev/urandom and stored in a database. The cool-kid name for this is "bearer tokens".

Please don't reinvent the wheel: use a GUID.

A GUID is a random number generated to avoid collisions.


If you are getting collisions from 128 bit (and up) numbers coming out of a system CSRNG, your service is likely to be meaningfully affected by bigger problems like plate tectonics and lunar orbital decay.
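
Back-of-the-envelope, using the birthday bound:

    // Probability of any collision among n random 128-bit tokens is roughly
    // n^2 / 2^129 (birthday bound). Even at a billion issued tokens:
    const n = 1e9;
    console.log((n * n) / 2 ** 129); // ~1.5e-21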


Not all GUIDs are random, and not all random GUIDs are cryptographically random: https://en.wikipedia.org/wiki/Universally_unique_identifier#...


GUIDS are NOT random. They are unique (the 'U' in GUID).


Do everything via HTTPS, disable HTTP. The login request (POST; don't use URL query params) contains username + password. The API replies with a session token (a random string). You can store any metadata relating to this session token in your DB.

The API client should send this token in every request that requires authentication, often in the header as `Authorization: Bearer 123TheToken456`.

JWT: if DB performance becomes a problem (or you want to expose signed session metadata), consider using JWT to provide session validation with the request itself. The downsides of JWT are that it's often used to hold secret values (don't do this), or it's a few kilobytes big, which makes all requests slow, or there are stupid mistakes in signing and session validation that make it very insecure, like allowing any request to just specify false permissions.
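
A minimal sketch of that login flow in Express (checkPassword and sessions are stand-ins for your own password check and session store; error handling trimmed):

    const crypto = require('crypto');
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/login', async (req, res) => {
      const { username, password } = req.body;
      const user = await checkPassword(username, password); // your own hash check
      if (!user) return res.status(401).end();

      const token = crypto.randomBytes(32).toString('hex');
      // sessions.save stands in for your DB/Redis session store
      await sessions.save(token, { userId: user.id, expiresAt: Date.now() + 3600 * 1000 });
      res.json({ token }); // client then sends "Authorization: Bearer <token>"
    });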


I have found the above (sans JWT) to be the simplest, secure method. Do everything over HTTPS, use basic auth or post for the user/pass and return an expiring token, use that token as a Bearer token for all subsequent requests.


It's generally a good idea to avoid JWT. There are a lot of foot-guns in JWT, and many implementations have gotten it wrong in the past. This[1] is a good summary on the topic.

[1]: https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-ba...


On the client side that adds some complexity. You either always do two requests (first get a token, then use it) or need to manage storing the token and doing the renewals. If there are bursts of traffic and multiple threads/processes, you need to think about whether each one will get its own token or whether others will wait while one is getting a fresh token.


I'm mostly a FE person at the moment, so forgive me for my ignorance, but how does the server use the token passed to the client to do auth?

Does it keep a copy somewhere and check against it on every request?


> Does it keep a copy somewhere and check against it on every request?

Yes. The last time I did this we checked the X-Token header on every request; if it didn't exist or there were multiple, we replied 401. If only one was there, we checked a DB table of active tokens; if it wasn't there or had expired, we replied 401. If it was there but wasn't associated with a role that had access to the requested resource, we replied 403. If it was not expired and had access, we continued with the request.

As soon as you get away from "check authentication on every request" your attack vectors increase. As a bad actor, I no longer need to bypass your authentication, I just need to bypass whatever system you have in place to decide whether or not to authenticate me. That's generally going to be easier.
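
In Express-ish code the shape is roughly this (a sketch; I'm using the standard Authorization header here rather than X-Token, and sessions.find is a made-up helper):

    // usage: app.get('/reports', requireRole('reports:read'), handler)
    function requireRole(role) {
      return async (req, res, next) => {
        const [scheme, token] = (req.get('Authorization') || '').split(' ');
        if (scheme !== 'Bearer' || !token) return res.status(401).end();

        const session = await sessions.find(token);     // DB lookup on every request
        if (!session || session.expiresAt < Date.now()) return res.status(401).end();
        if (!session.roles.includes(role)) return res.status(403).end();

        req.userId = session.userId;
        next();
      };
    }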


I was planning to use custom headers over HTTPS for a similar thing, but then I read that some firewalls will strip custom headers, even over HTTPS. Can anyone with experience comment? Thanks.


Just use a Bearer token in the Authorization header from the OAuth spec: https://tools.ietf.org/html/rfc6750#section-2.1


> Does it keep a copy somewhere and check against it on every request?

Yep, a store like Redis that has automatic row expiration is what I have used in the past. Most clients will often be bursty in their requests, so a simple few-second cache on the API after verifying the token can also be useful/performant.
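
For example (a sketch assuming the node-redis v4 client; any store with per-key TTLs works the same way):

    const { createClient } = require('redis');
    const redis = createClient(); // call redis.connect() once at startup (node-redis v4)

    async function saveSession(token, userId) {
      // EX gives the key a TTL, so expired sessions clean themselves up
      await redis.set(`session:${token}`, JSON.stringify({ userId }), { EX: 3600 });
    }

    async function findSession(token) {
      const raw = await redis.get(`session:${token}`);
      return raw ? JSON.parse(raw) : null;
    }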


Yes, a copy is kept somewhere — that somewhere could be application memory, a local file or database on the server where the application is running, or another server that provides this as a service (which may in turn store in memory or disk/database). Which approach is chosen depends on factors like performance/load capacity, availability/redundancy requirements (answering questions like, "if a server goes down, will users be logged out?"), etc.


I basically do this with JWT. In my case the JWT just contains the basic data that the front end needs to find out who the user is and what they can do (user uuid and role). While checking whether an action is allowed for the user is obviously done server side, it's normally useful for the front end to also be aware.


Yea, I happen to be using JWT in the simplest way. Authentication only.

I don't even store role information in them, since authorization checks are performed on the server anyway.

If the client needs to know what a user is allowed to do with a resource (so it knows not to display certain buttons, etc.) I have the client do an OPTIONS call (with the token) to see what methods are allowed.

Lately, I've been thinking about replacing the whole JWT scheme with simple bearer tokens stored in the database, mostly because it would make revocation simple and I can't think of anything I would lose by giving up JWTs (a little storage space in the database?), and I don't think switching the type of bearer token I'm working with will actually be very painful implementation-wise. You know what, I'm adding a task to my backlog...


Doesn't a session token violate the stateless principle?


Yes, it does.

If you want to be completely stateless, you need to send the authorization info on each request, probably as basic auth headers.

This is actually simpler than implementing the Bearer token, for both client and server, but requires that the client retain the non-expiring username and password, rather than retaining the expiring, session-specific Bearer token.


> but requires that the client retain the non-expiring username and password

IMO, this makes this simpler solution a non-starter. It becomes way too easy to leak credentials.


The auth framework I use avoids this problem but remains stateless by encrypting/signing the tuple (user ID, session expiration time, maybe other stuff) in a cookie. Essentially it's using the browser as an encrypted one-row database to store the info that would normally be in a sessions table.


This can work, but you want to keep what you send with every request very small. It's also hard to do a mass expiration or revoke a single session. If you have the tokens on the server you can run a query and easily do both. Checking signatures and decrypting on every request can also be a performance issue.


How so? The client needs to have the username and password to call the login function over and over, as well as managing the session token. It's just incredibly annoying with no benefits in nearly every scenario.


The client should not store the user/pass [1]. If the token expires, the user needs to provide a user/pass to login in again. The user should also be forced to provide a user/pass in order to change the password - something that cannot be enforced if keeping the user/pass on the client.

You also lose any method to force a re-authentication. With a token, I could expire it after an hour of no activity and allow it to be good for a max of 2 hours.

[1] Users have a bad habit of just leaving computers. With a token, the worst case is someone has short-lived access to something. With a user/pass left on the client, the worst case now becomes user/pass taken.


What is the client? OP says API which sounds more machine to machine. If you mean the API powering a site, used from the client's browser, then sure. Track separate logins, then give them a control panel to see where they're logged in.

But most clients store their user/pass in their browser anyways so I'm not sure it's a security win for preventing credential loss.

You don't lose re-auth. The master system issuing API keys can revoke keys, too.

But anyways maybe we're talking about different contexts because I don't understand the scenario you're describing.


I see. I read "Rest API" as securing the server portion with no implication that it meant only server to server. Cheers!


Yes, if you mean a token that identifies a session that is saved on the server somehow. But not a JWT, which is only saved on the client and verified on the server.


"Stateless" refers to the app, not to its datastores, because that would be nonsensical.

JWT is also poorly specified (no protocol under any circumstances should use negotiation, it does not support revocation, and it has been hammered home by the best security folks I know that public key cryptography is what you do when you don't have any other choice) and dangerous to use. Avoid it. Do the simplest thing that can possibly work. That's a session token. If, in the far and unlikely future, you are so successful that a single database call is so harmful, then you have the money to hire someone who doesn't have to Ask HN this question.


Totally agree -- but verifying a session token doesn't even have to be a database call. Put the token in memcache or redis; now, if you are so successful that a single network call that doesn't touch disk is your bottleneck, well, you can hire some very smart people to fix that for you.


If you use it for authorization only, it doesn’t.


What is your reasoning for not using query params for the login request? I know it's probably more RESTful to use POST, but otherwise if you're using HTTPS for everything, query params are just as encrypted as the POST body. Or is there another reason?


For one, the query string is much more likely to be logged, compared to the entity body. Think: httpd access logs, browser history, misconfigured caches / correctly configured caches subject to inappropriate cache directives, etc.


GET requests just do not make sense for actions: they are cacheable and replayable. An http client/a proxy/something on the backend can cache it and avoid going to the actual logic.

Also, mixing credentials into URL does not feel like a good separation of concerns, e.g. URLs are often logged and analyzed in separate logging/monitoring/analytic tools, so there is a bigger risk to have credentials leaked over some side-channel.


Query params in the URL are encrypted for transmission, but not elsewhere: http://blog.httpwatch.com/2009/02/20/how-secure-are-query-st...


Is this "go code this yourself" advice, or just what to look for from pre-existing tools? In my world there are middlewares that accomplish exactly this, it never occurred to me to code this myself, lest I screw it up.


Sure, the auth middlewares out there like Passport for JS or Spring Security on the Java side do most of the work. You generally still have to code the very last bits of checking if the user/pass is good, generating the token, and checking if the token is good. There may be ones out there that do everything, but they are typically not quite as flexible as I'd like (let's store the token in Redis instead of an RDBMS, for example).


I highly recommend reading "The Do's and Don'ts of Client Authentication on the Web" [1] from MIT. It's rather old and not very well-known, but it's excellent. The concepts provide very useful background info that will serve you well no matter what technology you use to implement your HTTP services, including issues like session hijacking, etc. One of its best recommendations: avoid roll-your-own solutions.

Secondly, I recommend checking out the "auth" example from the expressjs repository on GitHub [2]. It provides a practical implementation example.

Lastly, if you're considering using Express or any similar framework, I recommend checking out "route parameter preconditions". These seem to remain a little-known feature of Express, but they can be particularly useful for applying middleware to entire sets of routes, for example enforcing authentication on anything under a certain path. You can still find screencasts for route-specific middleware and route parameter preconditions on the Express 2.x documentation site by TJ, the original author [3]. Some of the specific details may have changed in the newer versions of Express, but TJ's explanation of the concepts is simple and clear.

[1] https://pdos.csail.mit.edu/papers/webauth:sec10.pdf [2] https://github.com/expressjs/express/blob/master/examples/au... [3] https://expressjs.com/2x/screencasts.html


React router supports a similar pattern to Express, although they seem to be called "protected routes" [1].

[1] https://tylermcginnis.com/react-router-protected-routes-auth...


Isn't using the "auth" snippet more an example of rolling your own crypto? Why not use established libraries like passportjs? Super curious.


I wrestled with Passport for 3 weekends last October on my side project before reverting to a simple form for login. Passport has several dependencies that aren't well-documented, like a SQL ORM. It took a full weekend of researching to figure out what ORMs were and how they were used, since almost every blog assumes readers will recognize one in their code. This led to blogs pushing the opinion that ORMs were pointless and useless unless you already knew SQL.

Then, there are articles like the Hackernoon post explaining why most Passport blog posts are wrong in one way or another.[0] This article explains that there are no "copy/paste" authentication solutions for Javascript, as there are for other languages - and Passport is probably the best out there.

As there's no "copy/paste" auth solution for Javascript, it becomes essential to understand how auth works with your site. It has to be added to every Express render call, to work with the Session. And rolling your own is educational - you can learn some of the common pitfalls and why rolling your own is a bad idea.

I do plan to go back to Passport sometime this year. The number of Oauth providers is nearly overwhelming - too much to ignore. But also daunting for the first-time student.

[0] https://hackernoon.com/your-node-js-authentication-tutorial-...


Passport is pretty large so it can be confusing. IMO, it's much easier to not use the session stuff in passport and just do your own thing letting passport handle the flow. You can use the BasicAuth strategy on a /login url to sign someone in and grant a token, and then use Bearer auth strategy to check the token on the rest of your urls.

Doing it that way, Passport doesn't require an ORM at all. You'll need to obviously provide a way to auth a user and verify a token, but that's then up to you.

Now, if you want to actually use OAuth, it can get complicated because of the flow.


"Rolling your own crypto" usually refers to constructing new cryptographic schemes out of either primitives or just out of wholecloth.

Using well-developed hashing schemes like pbkdf2 (which the auth snippet uses) or bcrypt (another good and common option) and storing the output is not rolling your own crypto. Writing your own hash function would be rolling your own crypto.

People talk about not rolling your own crypto because crypto is very, very sensitive to even very small errors, in ways that other code is not. Writing your own authentication using a well-known strong hashing scheme (which handles salting and verifying passwords in a timing attack resistant way) is much less likely to have vulnerabilities that aren't obvious.
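
For example, with nothing but Node's built-in crypto (a sketch; the iteration count is illustrative, not a tuned recommendation):

    const crypto = require('crypto');
    const ITERATIONS = 100000; // illustrative; tune for your hardware

    function hashPassword(password) {
      const salt = crypto.randomBytes(16);
      const hash = crypto.pbkdf2Sync(password, salt, ITERATIONS, 32, 'sha256');
      return salt.toString('hex') + ':' + hash.toString('hex');
    }

    function verifyPassword(password, stored) {
      const [saltHex, hashHex] = stored.split(':');
      const hash = crypto.pbkdf2Sync(password, Buffer.from(saltHex, 'hex'),
                                     ITERATIONS, 32, 'sha256');
      // constant-time comparison, so verification doesn't leak how much matched
      return crypto.timingSafeEqual(hash, Buffer.from(hashHex, 'hex'));
    }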


I think that if you use passportjs because you don't understand how to implement authn yourself, then you're not any better off from a security standpoint.

To me, passportjs might be useful if you need to plug into 3rd party auth APIs, but I don't really see the point. Authentication is a core part of your application and you should always know exactly how it works.

If you can't store an authn secret with confidence, how can you do anything with confidence?


I would disagree on the grounds that authentication is, generally speaking, a well-solved problem for the level most applications require. It's a better use of your time to use a library that's well-used and well-understood rather than rolling Yet Another (Probably Bad) Authentication Framework.

I will concede, however, that the most basic forms of authentication that I've used are so close to the metal that they're usually already built into whatever you're using to do communication.


Why not use either simple API key or HTTP basic auth? Both are simple to implement and supported by all the tools and libraries.

I would consider more complicated solutions only if you first come to conclusion that these simple things are not fit for the purpose.

True that some fancy token based solution may reduce database load, but if the API is doing something useful then that one primary key lookup and potentially the password hashing won't be a show stopper. Drawback with tokens and skipping the DB check is that you can't simply kill a client behaving badly. With API key you can just change the key and requests stop immediately (with MVP product this might be an issue, since maybe you have decided to add rate limits etc later).


I would agree, always start simple - unless you manipulate sensitive data - a shared secret is a good place to start (api-key or basic/digest auth)

You can always introduce other forms of authentication later. I have a slight preference for basic/digest auth as the secret isn't part of the URL, and therefore not cached/logged by any network equipment.


The api-key does not need to be part of the URL, you can also put it in the Authorization header.

edit: typo


> Why not use either simple API key or HTTP basic auth? Both are simple to implement and supported by all the tools and libraries.

Does basic auth work if the client is a browser and you want to have a nice login screen?


> Drawback with tokens and skipping the DB check is that you can't simply kill a client behaving badly.

You can solve this with a token/user blacklist. There are desirable (and undesirable) characteristics of using a blacklist instead of a whitelist, but you don't lose this capability.


How do you share your blacklist across servers?


The same way you share any of your other data across servers. Save it in a DB or Redis instance that all your servers can reach.


At which point you're hitting it on every request and since it'll be a nice, fast, indexed database lookup you might as well just use session tokens instead of a back-assward blacklist.

Do the simplest thing that will work.


As I mentioned, blacklists have different characteristics, some of which are desirable. A blacklist of tokens is (for the vast majority of use cases) orders of magnitude smaller than a whitelist, which means they can often be replicated into memory for in-memory lookups on the webserver.

This is a complex distributed systems topic, and there are applications for both.


There are cases where a blacklist totally makes sense, I will agree with you on that.

"I can access this thing" are totally not that for the 99.9% case. Do the simplest thing that will work--and the simplest thing so very rarely involves public-key cryptography!


HTTP Basic authentication should never be used, it is very vulnerable to traffic analysis attacks. HTTP Digest authentication however, would be a perfectly fine solution.


How so? Over SSL? (Note that you should never call anything requiring authentication/authorization over plain HTTP.)


A quick Google suggests you're right, as in either case you must run SSL/TLS.

Appypolylogies.


It depends on the use-case.

* Public data API (no notion of user account, e.g. weather APIs, newspaper APIs, GitHub public events, Reddit /r/popular, etc.): use an API key. The developers must create an account on your website to create their API key; each API call is accompanied by the API key in the query string (?key=...)

* Self-used API for AJAX (notion of user account, e.g. all the AJAX calls behind the scenes when you use Gmail): use the same user cookie as for the rest of the website. The API is like any other resource on the website except it returns JSON/XML/whatever instead of HTML.

* Internal API for micro-services: API key shared by both ends of the service. There can be a notion of user accounts, but it's a business notion and the user is an argument like any other. If possible, your micro-services shouldn't actually be accessible on the public Internet.

* API with the notion of user accounts intended for third-parties (e.g. Reddit mobile app where the users can authorize the app to use their Reddit account on their behalf): OAuth2


Hi, can someone explain to me why SSL client authentication is not widely used? You can use the same protocol you use to authenticate hosts to authenticate users, yet no one seems to do that nowadays. I'm not a professional web developer, so maybe the answer to this question is obvious (but I just don't know it).


Because SSL certificates are a hassle to use on the client side, and possibly cumbersome to generate. I used client certificates in a few projects, and while the idea sounds perfect, it's hard to implement, and at least I don't understand it thoroughly.

A username and a password is so much simpler.


For public services, getting users to have keys and install them in their browsers is quite hard.

For APIs, it should be more manageable but many places stumble with key management and a lot of developers were resistant to learning enough about the tooling to do things like manage test instances.


One of the reasons it is hard is that most error reporting on certificate authentication is horrendously bad, and there are a lot of ways it can go wrong. Dates can be wrong, the CN field can be wrong, the server and client might not agree on the quality of the ciphers, the user can install the cert in the wrong place or with the wrong options, etc. If any of this goes wrong, all you get is a "cannot connect to website" error and maybe, if you're super lucky, an error in the web server log like "certificate failure". For security reasons they never tell you what the actual problem is, and just assume that the user must have a PhD in cybersecurity because they're trying to do crypto on a computer, so it should be no problem for them to check the thousand different possible failure points in the system.


I guess regarding public services your statement may be correct. But I wonder if anyone (any significant content provider) actually tried. The technology has been available for at least 10 years (including browser support). I think it's an issue for most people that they need to manage multiple passwords, and it sometimes turns people off from actually using the service. With client certificates you install the certificate once and (given enough support from web developers) forget about passwords "forever".


> With client certificates you install certificate once and (given enough support from web developers) forget about passwords "forever".

The problem is that this isn't really true: it's more like this:

1. You go through a tedious and convoluted process to get the certificate, which requires using a largely-ignored browser feature which is now deprecated: (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ke...). Even when it was implemented, the UI is not great – e.g. https://www.instantssl.com will fail if you fill out the form too quickly before the browser has finished generating a key.

2. Wait for the email to arrive and follow the retrieval process to get the certificate. Then follow the clunky UI to install it. You'll be told that it's really important to back it up but e.g. Firefox won't give you any instructions about where to even start to do that.

3. You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices, which is another process where the UX was apparently never considered seriously at any point over the last 20 years.

4. Every time you visit a site or send an email, you now have to select which key you want to use.

5. Every year, repeat the process starting at step 1.

Don't get me wrong, I'd love for this to be available and am still somewhat amazed that after however many years nobody has made a serious effort to improve the experience. It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.

This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device, and register things like tokens. A U2F-style focus on the user-experience would be really nice: once a year when you try to login it redirects you to a page which says “Enter your password and tap the token if you want to keep using it" and refreshes the certificate with no other ceremony.


> Wait for the email to arrive and follow the retrieval process to get the certificate

Is there a reason why the server couldn't send the certificate back to the browser via HTTPS?

> You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices,

Would it not be better to just use a different key for each device? That is, repeat steps 1 and 2 for every device you plan to use?

> Every time you visit a site or send an email, you now have to select which key you want to use.

Could the browser remember which website the client certificate was used for? If so, then the user won't have to make the selection more than once.

> Every year, repeat the process starting at step 1.

Outside of a device getting compromised, is there a good reason for updating certificates more often than once every 5 years?

> It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.

I'm still doing more research on this, but what did the <keygen> HTML element lack that the process used by Let's Encrypt provide?

> This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device

Shouldn't the private key be something that's associated with the browser? That is, when you install the browser, a private key is generated and used for all certificate signing requests. I think the process could be extended to add additional browser instances for a given account on a website. For example, you could take the CSR from the other device and use your first device to send it to the server, get the certficate and then copy it back to your other device.


A number of the undesirable factors described here (email, 12-month limit) are because that process isn't generating a client certificate for use on a private site; it's generating an S/MIME cert for handling encrypted/signed email.


I guess I'm not clear on the current process. If we had a process as shown below, I think it would be much easier to get people to adopt client certs.

1. Click on link to sign up for new account on news.ycombinator.com

2. Enter information for the account including a CSR that your browser generates for you

3. Submit the information to the server

4. The server serves another page that confirms account creation and includes the certificate

5. The browser provides a way to import that certificate to a client certificate store it maintains

6. Next time you visit the website, your browser knows to use that client certificate

Registering other browsers would require sending the CSR via the first registered browser and copying the certificate that the server returns to the second browser.

Of course, this would be for websites that don't have a need to verify your identity (like reddit or Hacker News). For banks, or credit cards, there would have to be a way to verify the identity of the person signing up for an account.


> > Wait for the email to arrive and follow the retrieval process to get the certificate

> Is there a reason why the server couldn't send the certificate back to the browser via HTTPS?

The public implementations are generally trying to verify current ownership of the specified email address. That's part of why I wish this could be linked to a third-party which already does that so users don't have to repeat the process as often.

> > You then need a non-trivial amount of work to export the private key and certificate and install it on all of your devices,

> Would it not be better to just use a different key for each device? That is, repeat steps 1 and 2 for every device you plan to use?

It would, but currently you can't do this if you use Google Chrome or Microsoft Edge. Again, remember that I'm talking about the practical impediments to doing this now rather than any sort of conceptual problem: if the industry cared, this could improve a lot very quickly.

> > Every year, repeat the process starting at step 1.

> Outside of a device getting compromised, is there a good reason for updating certificates more often than once every 5 years?

The general idea is that it protects against unknown, non-permanent mistakes, but I think the main point here is that key rotation should be automated so it can happen routinely, since that considerably reduces the window of exposure for any mistake. I'd expect a modern implementation to have a tiered approach where e.g. keys generated on a secure enclave, token, etc. are trusted for longer than ones where user error makes it possible to get access to the private key.

> > It'd be really nice to have a LetsEncrypt-style effort to remove the warts from this process so it's approachable for normal people without a heavy support pool.

> I'm still doing more research on this, but what did the <keygen> HTML element lack that the process used by Let's Encrypt provide?

It's been a while since I looked at the discussions, but I believe it was basically another case of an early feature chucked in during the 90s with a bunch of dubious design decisions which people didn't want to keep supporting since it was rarely used. I'd be happy with it going away or substantially changing if there were a modern JavaScript API.

> > This is another area where I wish Mozilla hadn't prematurely killed Persona as it'd be really nice if there was a service which would allow you to associate different client certificates with a single user identity so private keys never needed to leave the device

> Shouldn't the private key be something that's associated with the browser? That is, when you install the browser, a private key is generated and used for all certificate signing requests.

I was referring to two related concepts here: I like the model where each browser manages a private key (preferably stored in secure hardware), but you also need to handle the question of which keys are allowed to sign responses for a specific person. Consider e.g. all of the sites which trust Google or Facebook to authenticate users and imagine what it'd be like if that could be extended so you could ask that trusted third party which keys correspond to a verified email address. Having it be something which is commonly used would also be a great place for rotation, if there were a seamless way to repeat the signing process every n days rather than a user having to do it the first time they access a site a year after the last time they renewed, when they may have forgotten a lot of the steps.

That last point underscores how much of this has nothing to do with PKI and everything to do with horrible UI: the failure mode for not having a valid certificate is generally horrible — looping selection dialogs, low-level TLS failure messages with no indication of what you can do to fix things, etc.


> The public implementations are generally trying to verify current ownership of the specified email address.

Are there any implementations that don't? For example, when I create an account on news.ycombinator.com, does it really need to verify my email address, rather than using a signed message sent via HTTPS during the sign up process?

> Consider e.g. all of the sites which trust Google or Facebook to authenticate users and imagine what it'd be like if that could be extended so you could ask that trusted third-party which keys correspond to a verified email address.

Perhaps we need to rethink using email for verification. For server side authentication, we have certificate authorities to handle verification of a given server's identity. In your example, Google or Facebook (or both) could serve as certificate authorities for the client certificate used for a given website.

Again, I would say that most websites do not (or should not) need my email address in order for me to sign up for an account. My web browser should be able to manage verifying my identity with a website as well as adding other trusted web browsers.

> That last point underscores how much of this has nothing to do with PKI and everything to do with horrible UI: the failure mode for not having a valid certificate is generally horrible

Unfortunately, that is very true. It would be nice if some serious effort could be directed to improve the process. I think that if we were using certificate authentication, as opposed to password based, it would be much harder for people's accounts to be compromised, even through "social engineering".


Startcom does client certificate authentication.

The hassle is that you need to install the certificate on every device you want to access the page from.


Client certificates, a.k.a. mutual TLS or "MTLS" authentication, are widely used within infrastructural deployments of TLS, where there are automated mechanisms to provision and rotate certificates.

For federated use cases, they are less popular, likely due to the added complexity being a blocker for adoption.


Personally I've always used x509 client certs with a self-signed root authority / intermediate authority for internal tooling at companies I've worked for. This is possible because 1) I happen to already have a good understanding of OpenSSL, how to use it, and how to secure keys, and 2) I have control over the devices I'm provisioning, so I can make them trust my root cert for timing/x509/etc.

It would be really cool if there was a service like Let's Encrypt that would provision "client" certs for this sort of use, but revocation lists and things are a little annoying for large-scale use.
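
For anyone curious what the signing side of that internal-CA setup can look like, here's a rough sketch with Python's cryptography package (file names and the validity period are just placeholders):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization

    # Internal CA material and the device's CSR, loaded from placeholder paths.
    ca_key = serialization.load_pem_private_key(open("ca.key", "rb").read(), password=None)
    ca_cert = x509.load_pem_x509_certificate(open("ca.crt", "rb").read())
    csr = x509.load_pem_x509_csr(open("client.csr", "rb").read())

    client_cert = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))  # short-ish lifetime
        .sign(ca_key, hashes.SHA256())
    )
    open("client.crt", "wb").write(client_cert.public_bytes(serialization.Encoding.PEM))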

EDIT: Another reason why this can be a hassle is that while Safari/IE/Chrome use the system to evaluate trusted certs on all OSes I've tested, Firefox uses its own implementation, so you have to add all the certs yourself. This is ... frustrating from a management perspective because you have to keep two sets of docs explaining what to do for new hires, etc.

EDIT 2: I've always been curious whether it's a net security benefit that Firefox does this. On one hand, they are less vulnerable to OS-specific attacks and can automatically un-trust root certs that are compromised for whatever reason, but you're then trusting Mozilla's implementation of something that is admittedly very complex.


In Firefox, there's a configuration setting (from about:config) in Windows (not sure about other OSes) that can be used to tell Firefox to use the system certificate store for root CAs. There are also deployment mechanisms where this can be pushed as one of the default policies. [1]

The Firefox Enterprise mailing list is the place to go to for deeper level help on these things. [2]

[1]: https://wiki.mozilla.org/CA:AddRootToFirefox

[2]: https://mail.mozilla.org/listinfo/enterprise


Thank you! It's always just been a passing annoyance so I never bothered to look into it, but TIL.


Considering that Mozilla is one of the founding members of Let's Encrypt, I think and hope that they are pretty competent in PKI.


Heh you make a good point.


SSL client authentication usually winds up authenticating physical devices similarly to SSH keys, but there is near zero infrastructure simplifying/protecting/distributing a single software certificate for an individual user (especially non-technical users!) across multiple devices -- and this is really complicated by mobile devices. (Enrolling a new cert on every machine is an exponential management nightmare.)

SSL client authentication is widely used by the US military on smartcards requiring additional hardware readers: https://en.wikipedia.org/wiki/Common_Access_Card. AFAIK using a smartcard doesn't work reliably on Mac without installing 3rd-party software. The difficulties of usage in practice have spawned a cottage industry of commercial software and support, like http://militarycac.com.

I assume companies selling end user hardware tokens like YubiKey would love to see client certificate authentication become more usable, but initiatives like FIDO U2F seem to be gaining momentum instead.


Poor support of SSL certificates in browsers is commonly attributed to bad UX, but the real reason is that they are credentials that can be re-used by multiple services to track you as an individual. Newer standards like U2F completely compartmentalize origins, so that if you register with service X, a different service Y will not know who you are.

SSL client certificates also don't work in HTTP/2 (because of multiplexing multiple requests).

Benefits include storing the private key in a hardware token; most OSes support them out of the box. You can just plug your token into a USB port, visit a site that requests a client certificate, enter the PIN and be done (e.g. the Yubico PIV applet).

HTML also has/had a <keygen> element that would generate a private key in the browser and send the public key to the webpage to be signed, essentially creating public/private key credentials, but that is being removed from browsers.

For inter-service communication I'd definitely consider using SSL client certificates, pinning private keys e.g. to a TPM, but regular users can't be bothered with it.

If you're interested check out Token Binding that makes tokens (cookies, etc.) bound to TLS connections essentially providing security of client certificates for tokens.


I have worked with systems that use it to authenticate machine to machine communication (e.g. a web backend authenticating itself to another service doing work for it). In that environment it works well.

Using it to authenticate a person regardless of device doesn't work so well from a usability point of view.


I've been experimenting with using the server's public SSL key as a client certificate to authenticate self-hosted and cross-server web service requests since the cert should be available at runtime in common enterprise setups yet incentives align to keep it well-secured.

I would appreciate pointers to any open source libraries demonstrating best practices and/or promoting this approach, specifically protecting against replay attacks and race conditions that come up as the cert is renewed (much more often - thanks Let's Encrypt!).


Separately: client certificate authentication is apparently great for "Internet of Things" (buzzword alert!) device authentication, so some of the rough edges may be worn off as things move forward.


> Using it to authenticate a person regardless of device doesn't work so well from a usability point of view.

It might be better to have accounts per device rather than per person in that case.


Unless you have a well controlled environment (that includes client machines), certificate provisioning, revocation and management would not only be painful, but also require complex ways to share the private key part securely (during first time installation and renewal). Expecting users to know how to install a certificate in the browsers, machines and devices they use would be a non-starter.

In my observation, people who try to go this route in an uncontrolled environment mess it up by sending the certificate (including the private key) in unencrypted email (which is the default in most cases) or using other insecure mechanisms. The only ones who'd even attempt this are those who may not go through a security check or audit.

[If there are easy ways to handle this in an uncontrolled environment, I'd like to know more.]


I also want to know. It's extremely secure. It's also how I block access to my origin IP so that only Cloudflare can reach it, in case it's leaked. Safer and easier than a whitelist.


Probably the missing browser support.

I mean support for getting the certs in there.

Once they are in your keychain, client certs are really nice.


PAST looks good.

https://github.com/paragonie/past

Basically JWT but without the pitfalls as far as I can see.


Definitely depends on the timeline; PAST is a reasonable recommendation gaining momentum as best practice. The recent Show HN announcement discussed many caveats of authentication tokens:

Show HN: PAST, a secure alternative to JWT | https://news.ycombinator.com/item?id=16070394 (Jan 2018: 361 points, 137 comments)


Downside is that this is very new and there is only a PHP library.


I’d say it depends a lot.

If your API just serves public non-user-specific data, a simple API key might be okay. The obvious downside of this method is that a user leaking their client API key is a big problem, especially if your users are likely to distribute code that makes requests (e.g. a mobile app that makes requests to your API).

The state of the art is probably still OAuth, where clients regularly request session keys. This means a leaked key probably won’t cause problems for very long. The obvious downside of this is complexity, but that can be mitigated by releasing client libraries that smooth over the process.


One thing to be aware of with OAuth 2.0 is Refresh Tokens. If the spec is followed, the Refresh Tokens are long-lived and never expire (the spec makes a suggestion that you revoke used tokens, but it's not required), so if they are leaked you are in for a bad time.

There's an RFC that goes into some of the security considerations of OAuth 2.0, that should be required reading if you implement it (even from a pre-built library): https://tools.ietf.org/html/rfc6819


If the Refresh Tokens are leaked, you revoke them and the user has to re-authenticate.

It's crucial that clients are able to respond to their refresh tokens being revoked.

The good thing is that it is a standard workflow, contrary to an API key being revoked, which is generally not handled (most people hard-code the API key in their client).


What's the appeal of tokens that never expire? You can never delete the revocation records, because the token never expires.


Can you explain why only "public non-user-specific data" is suitable for basic auth over HTTPS?

For most SaaS products, basic auth or an API key is going to be fine. In fact, a ton of SaaS vendors do exactly that. It's also totally fine for, say, an enterprise API used by a partner or clients.

OAuth is a cluster-fuck of terribleness, a nightmare for you to work with and a nightmare for your consumers to use. If you do it, you will need to have excellent support docs and examples or have to hand-hold external devs to get it working. The only time I might start considering OAuth is if you want other apps to be granted permissions to use the API on behalf of the user, where you want some granularity of which parts they can access.

I'm not saying OAuth doesn't have a use, but its awful, overcomplicated implementation means it's a huge time-sink compared to basic auth over HTTPS, and I certainly wouldn't recommend it without a very good reason.


The OAuth use case you mention is spot on, but with OAuth clients available in every remotely popular language I don't agree that it's complicated. And I think that even if you don't have a strong use case for OAuth now, you will sooner or later. Better to go with the standard practice for user-centric APIs instead of using ad hoc solutions.


It is vastly more complicated, you have to do all sorts of redirecting and capturing with OAuth that you simply don't have to do with basic authentication.

And woe betide you if you're not using a framework that vaguely plugs + plays oauth.

Couple that with all the shenanigans involved when trying to get two servers to talk to each other without a human involved in OAuth.


AWS has their own v4 signature method that I always thought was neat.

Key benefits:

* Secret not included in request

* Verifies integrity of the message (since its contents are signed)

* Protection against replay attacks

It's probably overkill in a lot of situations, but I've always liked how even if TLS were compromised, all the attacker would gain is the ability to see the requests--not modify them or forge new ones.

I haven't used JWT before, but reading one of the links below, it looks like it covers a lot of the same stuff (although you'd have to implement your own replay protection if you want that).
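
For what it's worth, the core of that style of signing is just an HMAC over a canonical form of the request plus a timestamp. A simplified sketch (not the real SigV4 canonicalization, and the header names are made up):

    import hashlib, hmac, time

    SECRET_KEY = b"per-client-secret"  # hypothetical; provisioned out of band, never sent on the wire

    def sign_request(method, path, body, key=SECRET_KEY):
        # The timestamp lets the server reject stale requests (replay protection).
        timestamp = str(int(time.time()))
        canonical = "\n".join([method, path, timestamp, hashlib.sha256(body).hexdigest()])
        signature = hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()
        return {"X-Timestamp": timestamp, "X-Signature": signature}

    headers = sign_request("GET", "/api/example/", b"")
    # The server recomputes the same HMAC with its copy of the secret and compares.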


One generic solution is to have identity on the server (a users table) and generate one or more tokens for each user. When a user wants to make an authenticated API call, they have to add the appropriate header to their request:

    curl -X GET https://127.0.0.1:8000/api/example/ -H 'Authorization: Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b'
Note: HTTPS is required for all of this to be secure.

This is what comes out of the box with Django Rest Framework.
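
For reference, roughly what the server side of that looks like in DRF (a sketch; some_user stands in for an existing User instance):

    # settings.py -- enable DRF's built-in token authentication
    INSTALLED_APPS += ["rest_framework", "rest_framework.authtoken"]
    REST_FRAMEWORK = {
        "DEFAULT_AUTHENTICATION_CLASSES": [
            "rest_framework.authentication.TokenAuthentication",
        ],
    }

    # e.g. in a shell or post-save signal: create the per-user token
    from rest_framework.authtoken.models import Token
    token, _created = Token.objects.get_or_create(user=some_user)
    print(token.key)  # the value the client puts in the Authorization header above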


It makes me sad that in 2018 it is entirely reasonable for such a simple and common question to elicit so many answers. Of course no one solution fits all use cases, but skimming the comments there seems to be a very diverse range of suggestions. Wouldn't it be lovely if there was one stand-out solution that was so good it was a no-brainer?

FWIW I have ended up using OAuth2 for this situation a few times, and it always feels more complicated than I'd like.


This is arguably self-serving, but I am happy there isn't one. The appeal of coding is that it hasn't matured into computerized LEGOs, where I'd spend my time connecting prebuilt components unspecialized for my application.

But we are not alone in this regard. Bridge building is centuries more mature than software engineering and their shapes, materials and construction methods still change regularly. I would expect to see this trend remain for a long time. Engineering is an inherently creative practice, often staffed by appropriately creative people. The continued evolution, including the trial and error approach, are likely to continue for decades.


I do agree with your sentiment, but is creativity something we want to encourage with security? Security is hard, and most developers won't know when they've made a mistake implementing something. The laws of physics haven't changed for bridge builders. In the end, everyday consumers are the ones suffering from the recent hacks.


Auth0 and Okta have tons of docs on this. Even if you don't use their services, they have much to read.

Also, here is a good recent video for ASP.NET Core 2 that includes extra things like HSTS, etc. Even if you're not in ASP, the concepts will be relevant: https://www.youtube.com/watch?v=z2iCddrJRY8


OpenID is an extension of OAuth. OpenID provides "authentication" while OAuth or JWT provides "authorization." But the real question is what language are you using? If you are using ASP.NET I'd recommend reading this: https://docs.microsoft.com/en-us/aspnet/web-api/overview/sec...


If there is nothing special about it, I'd recommend JWT, for the simple reason that you have less load on your DB (and more on your CPU but that is usually not the bottleneck)


There is no single good answer to this question without taking into account the security considerations of the API in question and its consumers. At a high level all solutions work just fine as long as we understand the tradeoffs involved (CPU, IO, revocation, complexity, ...). The solutions that could be tried with ease are:

1. Network filtering - if the API consumers can be limited by identifying their IP addresses.

2. Bearer tokens - a simple random string that can be passed in the header (depending on the number of consumers and the ability to revoke/change tokens, it can become a little complex).

3. JWTs - similar to bearer tokens, but without the ability to revoke and with the extra cost of key management and CPU (the signature verifications are quite costly).

4. OAuth - better to call it 2-legged OAuth since it's between servers only. It's the ideal one, with both a revocation possibility and signature verification.

The first three could be implemented easily in-house and are suited to cases where the number of consumers is small. It's better to use a third-party SaaS service or OAuth server for the fourth one. I work professionally on these things and these implementations can be time consuming. More often than not, people don't take their requirements into account when choosing a solution.


If you are looking for something along the lines of OAuth2 - you should BTW! Highly recommended if your API is going to be consumed by first-party client apps on different platforms or third-party clients - one of the best setups I've come across is Laravel Passport[1].

If you don't mind running a PHP application, or it being built in Laravel, (I don't, but some do) it's actually a really good implementation of a solid OAuth package[2] (Disclaimer: I am a maintainer on oauth2-server).

You can set this up in a couple of days, and it'll be ready to roll for the majority of use-cases. With a few simple tweaks and additions, you can have it doing extra stuff pretty easily.

In one build I'm working on, this acts as just the authentication layer. The actual API that relies on the tokens this generates sits elsewhere and could be written in any other language (it's not in this case).

[1]: https://github.com/laravel/passport [2]: https://github.com/thephpleague/oauth2-server


Anyone know any good resources for the following scenario:

A web API that a device needs to authenticate to. Can't store a password on the device (it's a device we don't control). No user, so authentication has to be all automated.

i.e. we need to run software on a client's machine, and it has to authenticate to our web API to send us data.

We obviously don't want to hard code the credentials in the software as that can be trivially extracted.


> Can't store password on device (it's a device we don't control). No user, so authentication has to be all autommated.

Am I missing something, or have you painted a contradiction?

* You want the device to hold some secret

* You want the device to be able to prove that it holds the secret

* You don't trust the device to hold a secret

If I'm understanding this correctly, then you've left the realm of cryptography and entered the realm of obfuscation.

Edit

This isn't necessarily a losing battle, but it changes the way we need to think about the problem.

Games consoles and DRM'ed video media (Blu-Ray and HDCP) do something similar in not trusting the end-user: they want to hold the key to the kingdom whilst ensuring the user never sees it. They've done this with varying levels of success.


Could it be a possibility to generate a keypair on the machine and have it register itself with your web server, supplying a client name, IP and public key? Then you would be able to see and OK any attempts to connect. Once you've OK'ed it, it would be able to authenticate and communicate normally as an authenticated device.
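
To sketch the enrollment half of that idea (using Python's cryptography package; the registration endpoint and nonce are made up):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Generated once on the device; the private half never leaves it.
    device_key = Ed25519PrivateKey.generate()
    public_pem = device_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # POST public_pem plus client name/IP to a registration endpoint, then wait for an admin to OK it.
    # Afterwards the device proves possession of the key by signing a server-supplied nonce:
    signature = device_key.sign(b"nonce-from-server")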


> Can't store password on device (it's a device we don't control)

Can you expand on this? Because storing a device-specific password (or api key, which is essentially the same) would be my first suggestion.

If it's because you can't configure the device, then my suggestion would be to create a process that embeds the device key into the software before deploying to each particular device.


We need to run software on clients' machines, and we need this software to be running as a service (no UI). This service needs to communicate back to us securely via our web API.

We could have a password entered by our systems guys when they deploy to a new machine for the first time; the service encrypts and stores that on disk, and then each time it wants to talk to us it can decrypt its password.

I'm not sure if that would be a good solution, or is it just as insecure as having password in the code.


I found a somewhat interesting solution for this issue. I assume you know what an SSH public/private key is, correct? Well, apparently, there is a way to apply this concept of SSH public key authentication to the HTTP protocol.

This technique is referred to as "Mutual Authentication": http://www.cafesoft.com/products/cams/ps/docs32/admin/SSLTLS...

Basically, it's 2-way SSL. You use signed SSL certs to authenticate the server to the client and the client to the server. You could use your own cert signing server or employ a third party cert signing service.

Using this method, your techs would need to set up the SSL cert for the client machine when installing the software, or, the SSL setup procedure could be part of the software installation procedure.

Interesting idea that may solve your problem. Hope this helps.
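
On the client side this is just a matter of presenting the cert/key pair during the TLS handshake; for example, with Python's requests library (paths and the endpoint are placeholders for whatever your techs install during setup):

    import requests

    resp = requests.get(
        "https://api.example.com/v1/report",                      # hypothetical endpoint
        cert=("/etc/myapp/client.crt", "/etc/myapp/client.key"),  # client side of the mutual TLS handshake
        verify="/etc/myapp/ca.pem",                               # the CA that signed the server's certificate
    )
    resp.raise_for_status()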


This brings in the question of how long the client certificate would be valid for and how it would be renewed before expiry. If sending a tech costs a good sum of money, one may be tempted to use certificates that are valid for decades, which may or may not be a good idea depending on the client environment, advances in cracking some algorithms or proving collisions in hashing, and business related factors.


Cert renewal can be automated the same way letsencrypt does it for instance.


Let's Encrypt validates during each renewal if the server still controls the DNS and/or HTTP endpoint. The point of the limited duration is to ensure that an attacker who got a copy of the certificate, but who doesn't control the DNS or HTTP endpoint, can't keep using it for long.

In this case, I don't see any automated check that can verify that the client trying to renew the cert is the original device, so there's no point in limiting the lifetime of the certificate, unless you send a person to do that verification manually.


That is an interesting limitation. I'm sure there is some way to get around it. However, I'm not a network security expert. I just thought using SSL certs for authentication was an interesting idea.


SSL client certs are useful, but they don't fix the problem feared by LandR: like a password, they too can be copied and used by someone who controls the machine.


Well, I mean that's true of almost every authentication method. If I have 2FA set up and someone knows my password and has access to my phone, of course they get into my account. I have a pentester friend who told me once: "Nothing is unbreakable or un-exploitable." I tend to believe that. Things like social engineering can always be applied to get the information you need to spoof credentials or gain access to critical systems. If someone is motivated and has enough resources, there's no amount of security methods that can stop them.


> password entered encrypts and stores that on disc, then each time it wants to talk to us it can decrypt its password.

Is there a reason you don't want to use tokens? Upon authenticating once (admin, manually), the web service would generate a token, which it would store and potentially have to revoke.

With something like OAuth, the token could be more temporary and automatically replaced during each use, to avoid having one secret (whether it be a password or token) that could be leaked and used by multiple clients.


> I'm not sure if that would be a good solution, or is it just as insecure as having password in the code.

It's just as insecure, as the software would need to store the decryption key itself in plain-text.

But why are you so concerned about keeping the password secret? As long as each device has a different password, you can identify abusive uses (too many requests, or from multiple sources, etc) and block that account.

What do you fear that the client could do with the device password?


That sounds a lot like hardware-id based DRM, doesn't it?

Kind of like the (in)famous Denuvo

https://en.wikipedia.org/wiki/Denuvo

Which obviously can be cracked, but it takes a long time.


Ideally you use some kind of time-limited API tokens, and find a way to automatically distribute new API tokens before the old ones expire.

That way, the breach of a single device doesn't immediately give the attacker unlimited access to the API.

You should also monitor for unusual activity, and blacklist API keys and devices with such activity.


1. As secret, use encrypted(some internal device id, pregenerated-key)

2. Generate pregenerated-key upon first login (maybe based on email or tel no?). Just like, e.g., Signal does it

3. On your servers, check if pregenerated-key and/or email is used more than once at the same time, if so invalidate it and direct user to 2.


We already do number 3 :)

We monitor for the same login being used twice at the same time and disconnect both and delete the account.


Just wanted to throw Azure API Management out there https://azure.microsoft.com/en-us/services/api-management

If you happen to be using Azure. I found it very useful for everything you'd want to do with your API, one of them being able to tie down security as much or as little as you need. It even builds a front end for anyone who has access to use for reference. But that's just one of the cool features.


From the cheap seats (I am a liberal arts major and currently an entrepreneur trying to launch some microservices for my resume editing and other professional services business, using R and the plumbing API creator package): what about a fairly lengthy random password provided to clients (human beings), which they input into the intake form built with Typeform, and then the underlying code checks for it in the "authorized" file and removes it after one-time use? The form feeds the API inputs directly.

TIA.


Use Auth0

eta: I don't work for them, but really no need to roll your own.


Depending on your usecase, a quick setup would be to use https://auth0.com/ They have a lot of documentation and samples to get started.

We have implemented it for authentication with an ASP.NET Core web service, with a REST-based API. Authorization is also possible, either by working with the JWT token scopes, or using the Auth0 app_metadata.


Depends on what you want. You can just use an API key if it's for easy access, through a header.

If you want more, then use username + pass. Encrypt both or generate something from both of them. Eg. encrypt(username):encrypt(pass)

If you want more, use private & public keys, which receive a session token the first time ( when authenticating).

...

I think the end result would be a self hosted oauth server with permission management.


Authentication is such a mess, I don't even know where to begin. Most APIs rely on some sort of token-based auth, communicated via the header format: "Authorization: Bearer abc123", as opposed to placing it in the Cookie, as most web sites will do. Many solutions exist, like OAuth2, JWT, etc. but that's ultimately what it all boils down to.


Is there any reason to favor bearer tokens over cookies?


If you use a cookie for an API, it will look like you don't know what you are doing. Also, there are extra rules around Cookies (expiration, length, etc.) that may bite you if you use them outside a browser context.


Ah, so there are other contexts (e.g. native mobile apps) that may be sharing the API, not just browser (web) apps. I think I get it. Thanks.


If you choose to use JWTs, I suggest still keeping a database of tokens and validating against that. This way you have the option to revoke a token and force the client to get a new one. This is useful for when token data becomes stale, e.g. email changed, roles added, etc. Simply keeping it all in the token is not enough.
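
A sketch of what that can look like with PyJWT, using a per-token id ("jti") that you can look up in a revocation table (the in-memory set here stands in for the database):

    import uuid
    import jwt  # PyJWT

    SECRET = "server-side-secret"   # hypothetical signing key
    revoked_jtis = set()            # stand-in for a database table of revoked token ids

    def issue(user_id):
        # Give every token a unique id so it can be revoked individually later.
        return jwt.encode({"sub": str(user_id), "jti": uuid.uuid4().hex}, SECRET, algorithm="HS256")

    def verify(token):
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # signature (and exp, if set) checked here
        if claims["jti"] in revoked_jtis:                         # the extra database lookup
            raise PermissionError("token revoked")
        return claims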


I think that Auth Tokens are easy to understand, use, and implement. They can generally be invalidated if the developer feels that they have been compromised.

Some APIs also use self-invalidating auth tokens based on an expiry date. For more secure data, I'd prefer that.


Use Access-Control-Allow-Origin and set it to only allow calls from a specific origin.


Can someone fake the origin?


This is controlled at the browser level and most (all?) browsers implement it. The Origin can be faked by just using anything that can make an HTTP request, like curl. It exists to protect users, not the server.


From a browser? No. From non-browser clients like curl? Yes. And your server will never be able to tell whether it is fake or not.


Client-side SSL certificates, digest authentication, or basic access authentication.


Auth Tokens are easy to work with for customers and the API developers. Generate a token, and then authenticate. I prefer them.

http://page.rest does a good job.


JWT is pretty easy to understand.

Create a token, put your userId in it, set an expiry date.

If a request comes with a token, check that the token is valid and check the userId and expiry date; otherwise throw an error.
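
That flow is only a few lines with a library such as PyJWT (a sketch; the secret and claims are placeholders):

    import datetime
    import jwt  # PyJWT

    SECRET = "change-me"  # hypothetical server-side signing key

    # Create: put the userId in, set an expiry date.
    token = jwt.encode(
        {"userId": 42, "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
        SECRET,
        algorithm="HS256",
    )

    # Verify: signature and expiry are checked; jwt.InvalidTokenError (or a subclass) means reject.
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    user_id = claims["userId"]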


I suggest JSON Web Tokens. Check this out https://jwt.io/introduction/


I would suggest everyone stay away from JWT unless they're willing to spend the time to learn how it works.

I believe the meta is that JWT is solid in itself but allows doing things "wrong". Guardrails, so to speak, are insufficient if not outright lacking.

I'd say just go with a plain text token for a web app. I don't like the idea of trusting the client, because I don't understand how JWT works.


Trusting in what sense? If my token only has the userId as data, what kind of trust is needed?


Some libraries don't make it easy (or possible) to check that the algorithm used by the JWT sent by the client is in fact the algorithm you're using and want the client to come back with; see e.g. https://auth0.com/blog/critical-vulnerabilities-in-json-web-...
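
The usual mitigation is to whitelist the algorithm on the verify side instead of trusting the token's own header; e.g. with PyJWT (a sketch, with a placeholder secret):

    import jwt  # PyJWT

    SECRET = "server-side-secret"  # hypothetical
    # A token signed the normal way...
    good = jwt.encode({"sub": "42"}, SECRET, algorithm="HS256")
    # ...verifies only because we explicitly whitelist the algorithm; a token whose header
    # says "alg": "none" (or swaps RS256 for HS256) fails this check instead of being trusted.
    claims = jwt.decode(good, SECRET, algorithms=["HS256"])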


I see, but sticking to HS256 should solve this without much headache.


I have an internal REST API (Tomcat server) on a Windows network that uses the WAFFLE library.

I am interested to know whether HN thinks this is considered secure?


For users or applications within your own network?


What would you recommend for both?


For applications, use an HMAC token with some sort of timestamp which can be checked for replay attacks. AWS has a good guide: https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthenti....

For users, I'd add an OAuth layer on top of the application layer and still have the application using an HMAC like above. You want to try to keep things 'stateless' when it comes to your APIs.


Within your own network a simple key/secret combination is enough, as the secret can just be stored as an environment variable, for example.

For users you'd need some way for the users to "fetch the secret", which is effectively what logging in is. At that point you should just use JWT or oAuth.


OAuth (robust, many libraries), JWT (easy to understand and implement), API keys/tokens (simple and fast).


In addition to token/key, for some APIs in the past I've added IP address filters.


Using IP address filters would require knowing the client environment and keeping some kind of planning and communication mechanism for changes to it. Big enterprises would have teams and a lead time of a few months to sort this out, adding more overhead and costs to the service provider (which would have to somehow be recovered or absorbed).


OAuth2 tokens or JWT.


Seems to me the answer is indeed that simple: use OAuth2 and be done.


OAuth 2.0 is so bloated that it scares people off. Something like the client credentials flow is relatively easy to implement on your own and basically lets clients exchange a client_id (username) and secret (password) for an API key.

Bonus: If you stay close enough to the standard you can plugin a real OAuth 2.0 provider if/when you decide you need it.
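
For reference, the whole client-credentials exchange is a single POST; a sketch with Python's requests (the endpoint and credentials are placeholders, and sending the client id/secret via HTTP Basic is one of the options RFC 6749 allows):

    import requests

    token_resp = requests.post(
        "https://auth.example.com/oauth/token",        # hypothetical token endpoint
        data={"grant_type": "client_credentials"},
        auth=("my-client-id", "my-client-secret"),     # client_id / client_secret
    )
    access_token = token_resp.json()["access_token"]

    # Use the (usually short-lived) token as a bearer credential on subsequent calls.
    api_resp = requests.get(
        "https://api.example.com/v1/things",
        headers={"Authorization": "Bearer " + access_token},
    )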


> OAuth 2.0 is so bloated that it scares people off

I think we're thinking the same thought, maybe my terminology is sloppy.

Suppose we just say "Use this token-generation endpoint (with your credentials) to generate a session token, and attach that token by means of OAuth 2.0 Bearer Token in subsequent requests to other endpoints".

Doing that, we can easily scythe off any bloat, no? We don't care about people signing-in with their Google accounts, or anything like that. Or is that what 'client credentials flow' means?


So you need to get an access token by validating against a third-party (keycloak, auth0) to access your own API? That's a pain.


Just use a regular oauth server library in your language/framework of choice.


Third-party? Token-issuance is just another endpoint, no?



