
Nice! Have you got plans to support JSON / logstash formatted logs?


We actually already do this: we parse JSON and a whole bunch of logstash patterns, plus some custom ones of our own, but we don't show the results in the log viewer yet. You can even search on them today by doing fieldname:keyword, but that's hard since you don't know what the field names are...you can guess, but we're exposing those soon!


Interesting that the article talks about load tests but omits any results.

I was trying out an API Gateway + Lambda + DynamoDB setup in the hope that it would be a highly scalable data capture solution.

Sadly the marketing doesn't match the reality. The performance, both in terms of reqs/sec and response time, was pretty poor.

At 20 reqs/sec - no errors, and the majority of response times were around 300ms

At 45 reqs/sec - 40% of responses took more than 1200ms; the minimum request time was ~350ms

At 50 reqs/sec - very slow response times and lots of SSL handshake timeout errors. I think requests were being throttled by Lambda, but I would expect a 429 response as per the docs rather than SSL errors.

My hope was that Lambda would spin up more functions as demand increased, but if you read the FAQs carefully it looks as though there are default limits. You can ask for these to be raised, but that doesn't make scaling very real-time.


Correct. Lambda isn't designed for high data throughput. That's what Amazon Kinesis is for. Each Kinesis shard can handle 1000 KB/s ingestion rates. You would write your data to a Kinesis stream, then use Lambda to respond to the Kinesis event and write the data to your DynamoDB table.
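
For illustration, here's a minimal sketch of that pattern using the Node.js AWS SDK (the stream name, table name, and partition key are placeholders, and error handling is kept to a bare minimum):

    var AWS = require('aws-sdk');
    var kinesis = new AWS.Kinesis();
    var dynamo = new AWS.DynamoDB.DocumentClient();

    // Producer side (e.g. the code behind API Gateway): push the raw event
    // onto the stream instead of writing to DynamoDB directly.
    function capture(payload, callback) {
      kinesis.putRecord({
        StreamName: 'capture-stream',          // placeholder stream name
        PartitionKey: String(payload.userId),  // placeholder partition key
        Data: JSON.stringify(payload)
      }, callback);
    }

    // Consumer side: a Lambda function subscribed to the Kinesis stream
    // writes each record in the batch to DynamoDB.
    exports.handler = function (event, context) {
      var remaining = event.Records.length;
      if (remaining === 0) return context.succeed();

      event.Records.forEach(function (record) {
        // Kinesis record data arrives base64-encoded.
        var item = JSON.parse(new Buffer(record.kinesis.data, 'base64').toString('utf8'));

        dynamo.put({ TableName: 'capture-events', Item: item }, function (err) {
          if (err) return context.fail(err);
          if (--remaining === 0) context.succeed();
        });
      });
    };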


Thanks for the info on this, I hadn't seen Kinesis before. I also tried something similar with S3 upload but Kinesis looks a much better solution for what I'm trying to do.


Kinesis isn't a good idea for low-latency queueing. It can handle high throughput, but it can often take anywhere from one to ten seconds for a message to make it through the queue.

Given that DynamoDB can reliably write in the 4-5ms range, a Kinesis queue may not be necessary. Unless the point of the Kinesis layer is to keep the cost of DynamoDB provisioning low?


Are you using Node.js or Java for your Lambda function?

If you are using "Node.js", you may be seeing slow times if you are not calling "context.done" in the correct place, or if you have code paths that don't call it.

Not calling context.done can cause Node.js to exit, either because the node event loop is empty or because the code times out and Lambda kills it.

When node exits, the container shuts down, which means Lambda can't re-use the container for the next invoke and needs to create a new one. The "cold-start" path is much slower than the "warm-start" path. When Lambda is able to re-use containers, invokes will be much faster than when it can't.
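
For example (a hand-written sketch, not your actual code), every branch of the handler should end by calling context.done or one of its equivalents:

    // Placeholder for whatever async work the function actually does.
    function doSomethingAsync(input, callback) {
      setImmediate(function () { callback(null, { echoed: input }); });
    }

    exports.handler = function (event, context) {
      if (!event.body) {
        // Error paths still need to signal completion; otherwise the event
        // loop drains or the invoke times out and the container is torn down.
        return context.done(new Error('missing body'));
      }

      doSomethingAsync(event.body, function (err, result) {
        if (err) return context.done(err); // failure path
        context.done(null, result);        // success path
      });
    };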

Also, how are you initializing your connection to DDB? Is it happening inside your Lambda function? If you move it to the module initializer (for Node) or to a static constructor (for Java), you may also see speed improvements.

If you initiate a connection to DDB inside the Lambda function, that code will run on every invoke. However, if you create it outside the Lambda function (again in the Module initializer or in a static constructor) then that code will only run once, when the container is spun up. Subsequent invokes that use the same container will be able to re-use the HTTP connection to DDB, which will improve invoke times.
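
As a concrete sketch (the table name and item shape here are made up):

    var AWS = require('aws-sdk');

    // Runs once per container, when Lambda loads the module. Warm invokes
    // that land on the same container re-use this client and its HTTP
    // connection to DDB.
    var dynamo = new AWS.DynamoDB.DocumentClient();

    exports.handler = function (event, context) {
      // By contrast, creating the client here would run on every single
      // invoke and set up a fresh connection each time.
      dynamo.put({ TableName: 'events', Item: event }, function (err) {
        if (err) return context.fail(err);
        context.succeed();
      });
    };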

Also, you may want to consider increasing the amount of RAM allocated to your Lambda function. The "memory size" option is badly named, and it controls more than just the maximum RAM you are allowed to use. It also controls the proportion of CPU that your container is allowed to use. Increasing the memory size will result in a corresponding (linear) increase in CPU power.

One final thing to keep in mind when scaling a Lambda function is that Lambda mainly throttles on the number of concurrent requests, not on the transactions per second (TPS). By default Lambda will allow up to 100 concurrent requests.

If you want a maximum limit greater than that, you do have to call us, but we can set the limit fairly high for you.

The scaling is still dynamic, even if we have to raise your upper limit. Lambda will spin up and spin down servers for you, as you invoke, depending on actual traffic.

The default limit of 100 is mainly meant as a safety limit. For example, unbounded recursion is a mistake we see frequently. Having default throttles in place is good protection, both for us and for you. We generally want to make sure that you really want to use a large number of servers before we go ahead and allocate them to you.

For example, a lot of folks use Lambda with S3 for generating thumbnail images, sometimes in several different sizes. A common mistake some folks make when implementing this for the first time is to write back to the same S3 bucket they are triggering off of, without filtering the generated files from the trigger. The end result is an exponential explosion of Lambda requests. Having a safety limit in place helps with that.
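
A minimal guard against that kind of recursion, assuming the generated files are written under a thumbnails/ prefix (the prefix and event shape below are just illustrative), looks roughly like this:

    exports.handler = function (event, context) {
      var key = event.Records[0].s3.object.key;

      // Skip objects we generated ourselves, so writing thumbnails back to
      // the same bucket doesn't keep re-triggering the function.
      if (key.indexOf('thumbnails/') === 0) {
        return context.succeed('skipping generated file: ' + key);
      }

      // ... resize the image and write it back under the thumbnails/ prefix ...
      context.succeed();
    };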

In any case, if you are having trouble getting Lambda to scale, I'm happy to try and help.


Thanks for the detailed reply.

I'm using Node.js, this is a gist of the Lambda function: https://gist.github.com/paulspringett/ec6d3df65e977342d6ea

I'm initialising the DDB connection outside the function as you suggest. However, I'm calling context.succeed() not context.done() -- would this be problematic?

I'll try increasing the "memory size" and requesting an increased concurrent request limit too, thanks.


Your code looks correct. I would expect something closer to 50ms in the warm path (300ms in the cold path seems about right).

I'll take a look tomorrow and see if I can reproduce what you are seeing. I'm not super familiar with API Gateway, so there could be some config issues over there.

If you want to discuss this more offline, feel free to contact me at "scottwis AT amazon".


The usual complaint on HN is that it is too easy to rapidly consume AWS resources. The usual solution proposed is low initial limits or caps.

You can't have it both ways.

I suggest making that limit-increase request, then retesting and reposting; otherwise, you've sold this benchmark short.


I don't think you could call iCloud Drive either.


Hey, I'm one of the developers at Songkick - I'd love to hear your feedback on the API. Feel free to reply here or email me paul.springett{at}songkick{dot}com


Thanks for the reply. I'll definitely reach out. SongKick is dope. Keep up the good work.


Really interesting to read through the source code and get an idea of how you're using Go to write APIs, thanks for sharing!


Deciding on a pattern for writing HTTP APIs in Go was a bit of a chore. Ended up using the `pat` library for chaining middleware. Quite extensible and lightweight. Also, using context to pass objects through the request chain is a neat trick.


The caching part looks really interesting -- have you considered adding support for more fine-grained caching control, such as respecting etags and last-modified times?

Thanks for sharing this!


Yup! That's on the todo list. Auto-caching, as I'm calling it, would figure out how long to cache something for and then later use HEAD to check if the URL in question has changed.

That cache time could be as long as 5 seconds, so that Templar checks pretty often whether the upstream has changed.
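
Conceptually (this is just a sketch of the revalidation idea, not Templar's actual code), the HEAD check could compare the upstream's validators against what was stored when the response was cached:

    var http = require('http');
    var url = require('url');

    // Revalidate a cached entry by issuing a HEAD request and comparing the
    // upstream's ETag / Last-Modified against what was stored at fetch time.
    function stillFresh(cachedEntry, targetUrl, callback) {
      var opts = url.parse(targetUrl);
      opts.method = 'HEAD';

      http.request(opts, function (res) {
        var unchanged =
          (cachedEntry.etag && res.headers['etag'] === cachedEntry.etag) ||
          (cachedEntry.lastModified &&
           res.headers['last-modified'] === cachedEntry.lastModified);
        callback(null, unchanged);
      }).on('error', callback).end();
    }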

