What’s your API’s “Time To 200”? (shkspr.mobi)
166 points by edent on May 22, 2021 | 84 comments



I've put a fair amount of work into getting this exact time as low as possible for my exchange rates API [1]. I've noticed slightly better conversion each time I've taken a major step out of the process.

I've specifically eliminated some of the steps this article cites in its example of a tedious flow. For instance, I changed user accounts to be confirmed by default, and only disable them retroactively if a user doesn't click the activation link within 24 hours. This way you don't need to wait for the confirmation email; even though I use Postmark, delivery times can be surprisingly variable.

I'm not sure how I could further improve the current flow, which is: 1. put your email into the landing page, 2. choose a password for your account, 3. you're presented with an example request format including your already activated API key. Suggestions welcome!

I guess because the scope of my service is so limited it's easy to have this fast flow; no complex libraries or auth are involved.

1. https://www.exchangerate-api.com


> I'm not sure how I could further improve the current flow

Remove 1 and 2.


So I actually do have a version of my API that doesn't require any sign up at all for the users that prefer this!

You can see it here: https://www.exchangerate-api.com/docs/free

That said, as much as some users want an open endpoint with zero authentication, there are many others who want an actual account, commercial support, high availability, more features etc. These users are also the ones that pay for development, infrastructure etc., so my service has to be 95% built around the flow that includes signup.


Yes but you can let users sign up with their API token at any later time.


You need to tie the API key to an email address, both for contacting users and as a way to limit abuse.


Abuse of free APIs is a big issue, I've definitely experienced it a lot and see other API developers in this thread mentioning it.

In my experience, though, I found that trying to limit signups to prevent abuse caused so much friction for legitimate users that I actually decided to change my strategy to the following:

1. Allow essentially unrestricted access to the free account on a separate domain/hosting so that people don't feel the need to churn through accounts with bots etc., and the load can be separated out (hence this page: https://www.exchangerate-api.com/docs/free). My signup form actually automatically redirects some classes of disposable-email and bot signups to this page!

2. Make sure that anything particularly resource-intensive, or anything that's a good reason to sign up for my service, is only accessible after payment. I would love to give out more functionality for free, but unfortunately the people who take advantage mean it's just not economically possible.

So for me the main reason to get an email address is 1.) so that users can have a better experience - get usage notifications, updates about the API that might affect them, share the account with a colleague etc.

And 2.) so that business users can be satisfied. Pretty much anyone running a company that is relying on an API will want to have an account, see how the upgrade process would work if they needed it etc. even if they're only starting off with a free plan.


But you could give out a short-lived, highly limited API key for testing, to allow the potential user to test the API for their needs before bothering to make an account and provide their personal information.


How do you prevent someone from automating repeated "get new temp key"?


Also, a way to register via that temporary API access.


KeepFlying's idea together with this point is a great suggestion; I'm going to see if I can prototype something along these lines.

Basically hand out tons of short lived credentials right from a widget on the main landing page, together with each API response giving a link to a signup form that can convert the key into a fully fledged account.

Thanks for the suggestion!
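One way to prototype that: hand out self-contained short-lived keys signed with a server secret, so no database row is needed until the key is converted into a real account. A hedged sketch; the names, token format, and TTL are all assumptions, not anything this API actually does:

```python
import hmac
import hashlib
import time

SECRET = b"server-side-secret"  # hypothetical server-side secret


def issue_temp_key(ttl_seconds=3600, now=None):
    """Issue a self-contained key: an expiry timestamp plus an HMAC over it."""
    expiry = int(now if now is not None else time.time()) + ttl_seconds
    sig = hmac.new(SECRET, str(expiry).encode(), hashlib.sha256).hexdigest()[:16]
    return f"temp-{expiry}-{sig}"


def validate_temp_key(key, now=None):
    """Check signature and expiry; no server-side state is needed."""
    try:
        _, expiry_str, sig = key.split("-")
    except ValueError:
        return False
    expected = hmac.new(SECRET, expiry_str.encode(), hashlib.sha256).hexdigest()[:16]
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, expected) and int(expiry_str) > current


key = issue_temp_key(ttl_seconds=60, now=1000)
print(validate_temp_key(key, now=1030))  # within the TTL
print(validate_temp_key(key, now=2000))  # expired
```

The upgrade path would then be a signup form that accepts one of these temp keys and swaps it for a permanent, account-backed one.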


For a more professional look, use higher-res or vector graphics. Parts of the page (including the fixed header) look blurry, even on my phone with a 1024px-wide screen that's relatively moderate by today's standards. I might be very particular about this, but I find it almost painful to look at the page for this reason.


Most companies seem to maintain two APIs: a “private API” that they actually use to deliver their website or service, and a second “developer API” for the exclusive use of third-party services.

If you’re just interested in putting together a quick hack or proof of concept, the private API often has a “time to 200” orders of magnitude faster than the public developer API: pop open the Network tab on your browser, perform an action, copy as cURL, run in terminal. Boom, 200. Twiddle a few parameters so it does what you want, and get on with building your demo or hack.

If that kind of learn-by-example speed was available for the developer API - or, better yet, companies actually used the same API for their service as a form of both dogfooding and to provide a great example, the API world would be a happier place.


I don't think any consumer of a public API would appreciate the churn of a private API. Either that or your web team is going to suffocate without the ability to make breaking changes.


Many companies only have one API, and sadly it’s just the private one.


If it's reachable by your computer and it has the right credentials for it, it's not really "private".


This is fine for toy projects but not for commercial applications. Consuming private APIs without permission and making money from them leads to lawsuits.


Personally, few things annoy me more than APIs that can only be accessed via an SDK, and can't easily be called through e.g. cURL.


I seriously discount your API if I can't do my initial review via curl. As the author stated, having to do all this setup creates friction; your API is supposed to be solving a problem for me. If it just creates different problems, you're not offering a good solution. If I can quickly see how it works via curl, it will be that much easier to evaluate.


Also, please have an example API key that actually works. There are a few ways to do this depending on your use case:

- Have an example key in your docs that rotates once a week.

- Have the example API key return redacted data, example data, or old data, instead of failing out as an invalid key.

- Add something to the response that doesn't make it usable in production (e.g. a TTS API response can have some duck noises in the background for the example key)

- Limit the total number of requests per IP address to something that's usable for dev work but not usable in production

It reduces a LOT of friction for the user to be able to just curl something off your website instead of going through the whole registration process to get the first 200.
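The "example key returns example data instead of failing" idea can be sketched server-side in a few lines; everything here (key names, payloads, handler shape) is hypothetical:

```python
# Hypothetical sketch: a shared demo key from the docs returns canned,
# clearly-labelled example data instead of a 403, so every curl example
# on the website produces a 200.
EXAMPLE_KEY = "demo-key-from-the-docs"
EXAMPLE_PAYLOAD = {"result": "ok", "data": "example (not live) data"}

VALID_KEYS = {"real-key-123"}  # stand-in for a real key store


def handle_request(api_key):
    """Return an (http_status, body) pair for a given API key."""
    if api_key == EXAMPLE_KEY:
        return 200, EXAMPLE_PAYLOAD  # canned, redacted response
    if api_key in VALID_KEYS:
        return 200, {"result": "ok", "data": "live data"}
    return 403, {"result": "error", "type": "invalid-key"}


print(handle_request("demo-key-from-the-docs"))
```

In production you would also want the per-IP rate limit from the last bullet on the demo key's code path, so the canned responses can't be farmed.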


Agreed, extra credit when the docs use the key from your account so that you can just grab it straight from the docs. I think it's Stripe that does that; sooooo nice.


Or even better: allow me to paste the url in a browser and get a meaningful result.


My API currently does auth inline in the GET request URL in order to make this possible. An example request would be: GET https://v6.exchangerate-api.com/v6/YOUR-API-KEY/latest/USD

I've had a fair number of users send me feedback saying this isn't best practice and that I should use tokens in HTTP auth headers or various other auth schemes.

But from my perspective, for an API that offers really very simple functionality, uses HTTPS, and doesn't handle user data, this is quite OK, especially when you consider the benefits of how simple it is to get up and running.

I have quite a few university course conveners include my free API in their entry level CS classes because it's super fast & rewarding for students to go from finding my API to then having a JSON object in their code, no tokens required!
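The whole flow those students go through fits in a couple of lines. A hedged sketch: the key is a placeholder and the sample response is abbreviated, not the API's full documented format:

```python
# Illustrative only: key-in-URL auth as described above.
import json

API_KEY = "YOUR-API-KEY"  # placeholder
url = f"https://v6.exchangerate-api.com/v6/{API_KEY}/latest/USD"

# In real code: resp = json.load(urllib.request.urlopen(url))
# Parsing an abbreviated, hypothetical response instead of a live call:
sample = '{"result": "success", "base_code": "USD"}'
resp = json.loads(sample)
print(url)
print(resp["result"])
```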


The problem with having your API key in the URL is that they'll likely be logged all over the place when they're meant to be secret. You probably have the keys being leaked in logs, error reports, metrics, etc.


True, and if this API handled user data or anything substantially private, that would be a HUGE deal and super dangerous.

But it seems like in this case it's mostly a rate limiting and identification exercise and not a secure protection of user data so the impact of exposure is substantially lower. So it does seem reasonable here.

Though I hope that OP has documented all over the place "do as I say not as I do" so people don't copy this pattern.


dmlittle's concern is a valid one and for most other types of API I would definitely agree it's not the right approach.

I still think it's reasonable for my use case but perhaps I should add another auth scheme as an optional alternative for the user who is concerned about their key potentially being caught in logs.
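For illustration, here's what such an optional header-based alternative could look like on the client side. The bearer-token header is an assumption, not the API's documented behavior; the point is simply that the key leaves the URL, so URL-based logs never see it:

```python
API_KEY = "YOUR-API-KEY"  # placeholder secret

# Current scheme: key embedded in the URL path (may end up in logs).
url_with_key = f"https://v6.exchangerate-api.com/v6/{API_KEY}/latest/USD"

# Hypothetical alternative: key in a header, so the URL stays key-free.
url_without_key = "https://v6.exchangerate-api.com/v6/latest/USD"
headers = {"Authorization": f"Bearer {API_KEY}"}

print(API_KEY in url_with_key)      # the current scheme exposes the key
print(API_KEY in url_without_key)   # the header scheme does not
```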

Your point about the documentation is also a good one - I should probably add a specific page just about the authentication approach. Added to the to-do list! Thanks.


FWIW you can add HTTP Basic Auth information in URLs, and all major browsers and other HTTP clients should (for the most part) interpret it correctly.

https://:[API_KEY]@v6.exchangerate-api.com/v6/latest/USD

If you only have an API key, and not a token (username) and secret (password), I recommend passing the API key as the password, as some logging solutions do log the Basic Auth username in the data recorded.
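For reference, that URL form is equivalent to sending a Basic Auth header built from the base64 encoding of `username:password`, with the username left empty here (per RFC 7617); a small sketch with a placeholder key:

```python
import base64

API_KEY = "YOUR-API-KEY"  # placeholder secret

# HTTP Basic Auth with an empty username and the API key as the password,
# i.e. what https://:API_KEY@host/... expands to on the wire.
credentials = base64.b64encode(f":{API_KEY}".encode()).decode()
headers = {"Authorization": f"Basic {credentials}"}

print(headers["Authorization"])
```

This is also why putting the key in the password slot matters: the whole `:key` pair is encoded together, but loggers that decode and record only the username half will see an empty string.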


How does that work for authenticated requests?


The same way you would build any login system for a browser application.


A famous example of this: most of the AWS APIs, which because of their signing features and hashes can really only be called sanely from an SDK.

Such a pain to debug; I'm not sure why they have such signing features. Is it for security?


> is it for security

Yes.

EDIT: specifically, it is so intercepting a request doesn’t allow you to issue new requests.


Isn't that what HTTPS provides? Or were the APIs created before HTTPS was popular?


The use case is different.

For example, S3. S3’s authentication scheme allows you to create a limited-use download link and pass it to an untrusted user.


> is it for security

It is because TLS client certificates do not exist.


I would say it is fair for AWS to have a complicated signing method, because you really do not want your AWS account falling into the wrong hands or being intercepted through some MITM attack. With an AWS account an attacker can bankrupt you pretty much instantly.
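The core idea behind request-signing schemes can be sketched briefly. This is a deliberately simplified illustration, not the real AWS SigV4 algorithm: the secret never travels with the request, only an HMAC over the request details, so an intercepted signature can't be replayed against a different method, path, or body:

```python
import hmac
import hashlib

SECRET = b"account-secret-key"  # shared out-of-band, never sent on the wire


def sign(method, path, body):
    """Simplified illustration: HMAC over the request details."""
    msg = f"{method}\n{path}\n{body}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


sig = sign("GET", "/bucket/object", "")

# The server recomputes the HMAC from the request it received; a signature
# captured for one request does not validate for any other request.
print(sign("GET", "/bucket/object", "") == sig)
print(sign("GET", "/bucket/other-object", "") == sig)
```

Real schemes also fold in a timestamp (to expire signatures) and selected headers, which is what makes presigned, time-limited S3 links possible.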


Looking at you literally all of AWS.


To be fair, it’s not like the GCP APIs are much easier to use. It’s probably the right thing for infra and payment services to have a higher bar for API security, plus they’re hugely complex and usually used as an SDK. That said, I haven’t been very impressed with the Google API dev experience. Example: I was just trying to post some timing data to Google Analytics via Go, and the documented example didn’t work in addition to being very hard to find. I eventually got lost in Firebase docs, which appeared to be wholly unrelated to what I was trying to do, had many additional separate APIs, required that I set up a new project in Firebase in addition to my GA “property”… On top of all of this, the GitHub auto-generated API examples were outdated (used a previous version that referenced deprecated docs) with no hints as to where to go from there.

In the end, I gave up, because I wasn’t willing to invest hours into learning all the idiosyncrasies. Ended up using Influxdata’s SaaS and got what I wanted going in about 15 minutes. To be fair, most of that time was also because their API docs didn’t actually work as posted and didn’t go through properly installing the client lib, which someone more experienced with Go probably would have gotten past more quickly.


If you're trying to pass custom JSON to GA, you simply can't do that. You would have to use Firebase Analytics to send custom JSON. For GA, all the data that is passed must fall under these types: categories, actions, labels (and values).


It always boggles my mind when I try to use a new API and I have to jump through a hundred hurdles to start using it.

Let me use your service and start paying you!

In the ocean of bad APIs out there I'll pick yours if you can offer:

1. An easy process to onboard

2. Good documentation

3. Usage based pricing


I would add:

4. A meaningful indication of API stability and planned longevity

5. A viable method to run automatic integration testing during development

I want an honest answer to how often you're expecting to break my integration and cause me extra work, and if you do that, I want to be confident that I've done enough to update my integration so it still works without having to manually retest the entire thing.


Maybe we should collect some good and bad examples here. Personally I dislike the onboarding situation at Twilio, their documentation is good though. Mailjet has good documentation and decent onboarding.


I spent a good hour just getting the Box cli running yesterday... Which requires a developer account, creating a 'custom app' to facilitate the CLI interactions, going into an obscure admin interface to approve the request to make the custom app, generating a key pair and downloading configurations for the app, downloading npm crud, and sharing data with the service account associated with the custom app, using uids that aren't terribly obvious in the user facing world.

But then it works just fine! :P


If you have some kind of pricing model I understand the basic email validation and possibly getting a payment method up-front. It's too easy to program around a simple usage cap.


Having a fake/temporary/nonpersistent/whatever-makes-sense instance one can quickly test in swagger or similar tools just to get a feel would be nice, though.


Agreed with the first two. I’m happy with a different pricing model so long as it’s financially viable at our usage tier and self-serve. On a project with a budget, or with funding, there isn’t a meaningful difference between a free trial, prorated per call, and paying for a month upfront.


Those are all "usage"-based if you aren't nitpicking. The major alternative is the app store (and some SDK) style of revenue-based pricing.


Fair. Revenue-based is indeed frustrating if it inhibits self-serve; otherwise, I’ve no issue with it.


While "Time to 200" is an important metric, it's also important to have complete enough documentation for people to effectively use the API. There shouldn't be wholly undocumented corner cases.

For example, I was using the Docker Engine API and trying to use the container GET archive call. The documentation says to use a file path, but it doesn't indicate the behavioral difference between a path ending in "/" (copy the directory itself) and one ending in "/." (copy the directory's contents).

I had to look at the "docker cp" documentation to figure that out.

If your documentation makes it difficult to complete a project because it only covers happy paths or frequent uses, then your users are going to have a rough time.

Both the "Time to 200" and the "Time to real project usage" indicate how important usability testing and documentation are.


>only covers happy paths or frequent uses,

Maybe, but there's no way a dev can possibly think of every single crazy thing an end user will try to do. There's a reason things are referred to as edge cases. You design a system to work a specific way, and then document the workflow to make it work. Anything outside the documented procedure is experimental. Sure, a dev can build as many bozo tests into the thing as they can think of, but users will always come up with something different.


Valid point. Perhaps use of "corner cases" didn't help my point. I mean that there shouldn't be valid, intended functionality that remains undocumented.

Another example is an API I use that has a property that has no documentation other than the property's presence. I just roll my eyes at that.


If your API has unpredictable corner cases, your data model or API is wrong


If you have useful APIs with a free tier, users will be constantly trying to steal them.

I've had to require email verification, non-cloud IP signup, block signup from IPs of already blocked users, in order to combat abuse. In all of these cases the user is just prompted to add a card to continue using.

I wish it weren't this way though, as it does harm the user's experience...


We're doing something similar, requiring email verification in certain cases based on past traffic, ip address source, etc. Unfortunately, we had to straight up block known temporary email addresses because there was too much abuse.


I believe "time to 200" is a reference to "time to triangle," which Playstation architect Mark Cerny and others have used to describe the amount of time it takes to get a game engine up and running on a new console to the point where it can render its first triangle [1].

You can see him talking about time to triangle on various Playstation consoles here [2] (and if you have the time, it's definitely worth watching the entire talk).

[1] https://www.engadget.com/2013-06-28-cerny-ps4s-time-to-trian...

[2] https://www.youtube.com/watch?v=ph8LyNIT9sg&t=162s


Is it really that important that the time to get to 200 be small? I understand that it may be frustrating to get started, but assuming I am stuck with this API for a long time, I am a lot more worried about the stability, quality, performance and availability of the system than the time to onboard.


A case where it can matter is when there is no clear commitment yet to use that API.

For instance, if there are two or three alternative services and you want to explore one of them to get a better idea of the trade-offs, setting up an account and making “real” requests will be your benchmark.

Actually, even for a service with a decent chance of being committed to, there will still be an exploration phase to get an estimate of the implementation cost. Depending on how much the devs struggle to just try the API, the project could get deprioritized in favor of lower-hanging fruit.


I'm with you on those other aspects being very important, but I'm not sure they are more important in general. After all, relatively few APIs are truly essential, offering access to some exclusive facility that users of that API couldn't also find somewhere else and/or implement themselves. For everything else, unless you can hold enough of a potential user's initial interest for them to carry on trying things out, the other stuff doesn't matter.


It’s important if you lose a significant number of potential API users along the way.


I frequently evaluate APIs. If you want me to use your API, getting me to a point where I've seen it actually running against some data that I typed in is really important. So give me an API explorer, rate-limited by IP, that lets me click an example link from your documentation and see an actual response.


I don't want to require API keys (or payment) for read access to public data.

It is acceptable to not provide support unless you pay, and to allow more requests in a time period if you sign up. (This seems to be the case for the exchange rate API mentioned in another comment, so that is OK.)


I've long thought about this as the 15-minute rule. Meaning: with curl, the docs/examples, and 15 minutes, I should be able to get a response back from your API.


Reminds me of that time last year when a cloud compute provider didn't even have an automated sign-up process:

https://news.ycombinator.com/item?id=22940781

I said it's surprising how much breath the sales engineer is wasting given how coveted a continual flow of oxygen is now. I was being facetious in April 2020, but who knew how insensitive that would become!



At Supabase we have tracked “Time to first query” from the start, which includes things like signing up, provisioning a database, creating a schema, adding supabase-js to your app, and querying your data for the first time.

We generalised it over time to “time to value” for anything that isn’t onboarding/data-fetching.


I found some of the stats on this page quite interesting: https://status.zapier.com/#app-status (some of these apps may not be doing single API calls, but it's a pretty big list and gives a bit of a yardstick).


Zero for us at http://api.case.law because most of our endpoints don't require registration. Getting case text does require fast free registration, though.


Thanks for the link - this is super cool! I definitely plan to explore.


Yay! Reach out through the contact form/email with any questions! The docs are at: https://case.law/docs/


With GraphQL, which ignores HTTP status codes, it's always 200 OK. -_-


I like to think that we’re doing okay with geocod.io, but I would seriously love some feedback. Should we automatically generate your first API key? Make docs more prominent? Anything else?


The Upload Spreadsheet button without any auth or form required is great. For any prospective users looking for this feature your time to being useful to them is approaching the theoretical limit. Only way faster would be to embed a small version of that page's form somewhere on your main landing page...

I really can't think of any other suggestions - your landing page is excellent, fast and I imagine highly converting with all the social proof. I also like the specific landing pages for each customer segment a lot, I really need to do that for my service.

Your site inspires me to work more on mine!


The demo link is a good start - but it would be nice to have the JSON output pretty-printed.

I'm not sure that the first thing I should see in the documentation is the changelog.

But, other than that, I like it. If you offered UK/EU geocoding, I'd use it :-)


Thanks so much for the feedback! I've considered checking the User-Agent and rendering pretty-printed JSON if e.g. a web browser is used, but I am a bit worried that UA-dependent behavior could be confusing. Perhaps the downside to always rendering pretty-printed JSON is minimal? Would love some thoughts on this.

Good call on the changelog being front and center, moving it a bit further down now.

Thanks! UK/EU geocoding may or may not happen in the future :)


Is there anyone that does API management as a service?

Something that would consolidate account, tokens, billing, tracking, firewalling etc of an API?

Somewhere where one could plug their api and monetize it easily?


RapidAPI. You can consume multiple APIs from one account, and as a developer it's a good way to monetize an API if there isn't too much competition in your niche. Not sure what you mean by firewalling, but RapidAPI does authenticate their requests to your endpoint, so as a developer you can do access control this way.


Yep. Can vouch for RapidAPI.


This sounds like the services an API gateway provides.

I imagine all the cloud platforms have similar products.


Jira is more like 2000 /s


“Time to 200”? Why not “Zero to 200”? Feels closer to the car simile


Lots of people don't drive cars. I figured most programmers would be familiar with "Time To Live" (TTL).


I thought it was a reference to “Time to Interactive”. https://web.dev/interactive/


For a non-HTTP specific phrase representing the same concept, I've tended toward "Time to first dopamine hit". :)

That feeling of not being sure if a tool/API/service/SDK/library/hardware is going to work for your purposes and then you get that first example/test/demo running and get your first response...

"Ok, yep, this is good! Now is it gonna let me change this small thing so I can..."

And the positive feedback loop has begun!

It's definitely a metric that impacts developer adoption & is IMO something that needs to be routinely tracked in order to reduce the time taken to get started & catch any unexpected regressions.


0-100, as a concept, maps closer to your idea. I knew about TTL, but didn't make the connection until you pointed it out.


It's not a good analogy:

* Zero is not an HTTP return code.

* You don't hit all the return codes up to 200, as you would with speed. You want HTTP 200 OK and nothing else.


One of the best things that Docker has done for the development world is make the "Time To 200" pretty short. You essentially just run `docker-compose up -d --build` and you can bring up a huge array of applications. You just go to http://frontend.localhost and get routed to the right container. It's magic.
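A minimal sketch of the kind of compose file that enables this; the service names, images, and Traefik-style host routing are assumptions:

```yaml
# Hypothetical docker-compose.yml: one command brings up the whole stack.
services:
  frontend:
    build: ./frontend
    labels:
      # assumes a reverse proxy routing *.localhost hosts to containers
      - "traefik.http.routers.frontend.rule=Host(`frontend.localhost`)"
  api:
    build: ./api
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```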



