Try my new hosted Linode memcached service (BETA)
33 points by ritonlajoie on Sept 25, 2010 | 25 comments
Dear HNers!

I did it, and I'm opening it up for you to see and test. It took me one week, but hey, hacking on memcached was fun. So I'm opening a service for Linode users (Dallas data center only for now, but more to come!) that lets you share a memcached server.

The idea is kind of an experiment: everyone shares _one_ instance. I don't provide separate, dedicated memcached servers; you read that correctly: just one. It has a 200MB bucket for now, but that will be upgraded if needed.

How it works: you connect your memcached client to the Linode running memcached, and I let you in if you registered for the beta. You can use the 200MB bucket as you wish, provided the maximum expiry (TTL) you set is 30 minutes (1800 seconds). Don't ask for more, it won't work :)
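On the client side, the 1800-second cap just means clamping whatever expiry you would normally pass. A minimal sketch in Python (the helper name and the constant are mine, not part of the service):

```python
MAX_TTL = 1800  # the service caps expirations at 30 minutes

def clamped_expiry(requested_ttl):
    """Clamp a requested TTL (in seconds) to the service maximum."""
    if requested_ttl <= 0:
        # 0 usually means "never expire" in memcached; the shared
        # service would not honor that, so use the cap instead.
        return MAX_TTL
    return min(requested_ttl, MAX_TTL)
```

With e.g. python-memcached you would then call `mc.set(key, value, time=clamped_expiry(3600))` instead of passing the raw TTL.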

As for security, you'll probably be happy to know that you can't read/delete/modify keys that aren't yours.

You can see this service as burstable extra cache, free (for now)!

In the future, if it works, I plan to make it a paid service, but surprise: I would charge only 1 or 2 dollars/month. This way you can temporarily burst onto a 200MB (more coming!) memcached server for nearly free.

Hope you enjoy it, and I'm open to any suggestions for improvement. To participate in the beta, please visit this page, which explains what you can and can't do, and how to get access.

http://www.henri.pro/2010/09/25/memcached-shared-instance-beta/

Have a nice weekend, everyone! (As usual I'm on #startups, nickname henri, if you want to chat.)




You could get rid of some of your limitations and make things easier on yourself by using the same technology we use to run memcached at Heroku: http://github.com/northscale/bucket_engine

In its current form it's binary-only (because we do vertical multi-tenancy there and have no control over multiple applications being on the same instance). It'd be pretty easy to add an IP-address-based ACL mode to it, and then you could run binary or ASCII just as easily.

Advantages:

* Key containment (e.g. you can do flush_all)

* You don't have to hack up your own memcached server

* The binary protocol is a bit easier on the server with things like large multi-gets, because they don't cause so much request swell

We've also got some basic management stuff for creating and manipulating instances (independently of auth, since we let SASL deal with that).

Disadvantages:

* You'd have to hack in your IP addr -> bucket mapping

* If you want TTL limits, you'd need to hack that in, too
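For what it's worth, the IP addr -> bucket mapping could start as a plain lookup table. A toy sketch in Python (the addresses and bucket names are made up for illustration):

```python
# Hypothetical IP -> bucket ACL of the kind the comment describes.
ACL = {
    "192.0.2.10": "bucket_app_a",
    "192.0.2.11": "bucket_app_b",
}

def bucket_for(client_ip):
    """Return the bucket a client may use, or None to refuse the connection."""
    return ACL.get(client_ip)
```

A real version would live inside the server's connection-accept path, but the lookup itself is this simple.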

Of course, I imagine both are quite easy and would be welcome contributions to the project. :)

Let me know if you're interested and need any help.


Obligatory question: why would I use a shared memcached when running memcached on my own server is as easy as sudo apt-get install memcached?


I'm glad you asked :) Right now I'm offering a 200MB cache. The goal is to be able to 'burst' into it. Hosting your own memcached is nice, and easy as you said, but then you're using your own RAM, and that can be a problem for low-cost Linodes that are already full of apps.

If this trial works, it's safe to say we won't be talking about 200MB anymore, but GBs of "burstable" cache. And you won't get that for free on a Linode.

Also, this is an experiment.


I would consider using it if it's cheaper than turning on another node within my cloud environment. One less thing to manage.

edit- Just realized you have to be using Linode as well. Now I question this too. I thought this was going to be memcached as a service.


Pardon me, but memcached is designed to run on a LAN. Right now my box is on Linode. Sorry to ask, but what do you mean by "as a service"? If you're on EC2 or Slicehost, I'm not targeting those networks right now, but as the page I linked says, if the experiment is a success on Linode I will surely expand to other VPS providers.

Hope that answers your question.


Sorry, I've recently been investigating a couple of memcached services that are accessed over a REST API, so that was on my mind when I first read your service description.

The reason I was slightly interested is that Windows Azure currently doesn't have good support for any sort of distributed cache, so we've been shopping around.


Memcached as a service would defeat the whole purpose of having the cache in memory. Latency will kill you.

Memcache will only make sense if it is hosted in the private LAN.


> Memcache will only make sense if it is hosted in the private LAN.

I think the point is that he's only offering it to Linode users in the Dallas data center, which is where his service is located. Therefore the latency will be in the sub 1ms range, comparable to a private LAN. There may be other issues with this approach, but latency isn't one of them, at least in this implementation.


Yes, I agree. I was replying to the other commenter who asked about memcached as a service. Memcached on a public network defeats the objective, because latency then becomes the main issue.


Thought the same thing, but apparently there are companies offering it and people buying it.



The whole point of memcached is that it is right next to your web server, accessible through loop-back devices and such.

Sticking it on the other side of a wire defeats the purpose; in a great many cases it will increase the request time beyond what it would have taken to re-create the original result.

Not to rain on your parade, but I think this is less than useful for production.


Hi jacques, regarding the request time: if you stay inside the datacenter, the response time is very low (down to the millisecond).

Regarding production: right now it's not intended to replace a real dedicated memcached server.


You should benchmark a simple page three ways: with no caching, with memcached locally, and with memcached remote but in the same datacenter. That would be interesting information.
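The timing side of such a benchmark could be as simple as this Python sketch (the `no_cache`/`with_cache` bodies are stand-ins I made up; in the real test they would be "render the page uncached" and "render via local/remote memcached"):

```python
import time

def timed(fn, n=1000):
    """Average wall-clock time per call of fn, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1000.0

def no_cache():
    # pretend this is the expensive page render
    return sum(range(1000))

cache = {}
def with_cache():
    # dict stands in for a memcached get/set round trip
    if "page" not in cache:
        cache["page"] = sum(range(1000))
    return cache["page"]

# print(timed(no_cache), timed(with_cache))
```

As the parent says, run it over an extended period: a one-shot number won't capture latency fluctuation in a busy DC.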

We don't use Linode, but we do use memcached to store 'partials', and for now I'm skeptical about running that memcached on the other side of a wire. Latency tends to fluctuate quite a bit in a busy DC, so you may have to run your test over an extended period of time to get good data.

It helps if your machine is equipped with two ethernet interfaces, one for 'local' traffic and one that is facing the outside world.


I'm not sure how Linode does it, but I've worked on dedicated machines that have separate, internal routes. (Slicehost VPSes have this option available -- unmetered -- too.)

When compared to heavy processing on large database queries, memcached -- even with a few extra ms of latency -- can still be a major performance win.

Although this largely depends on the applications you're making. I've been working with GIS applications lately, where a large distributed cache (most memcached clients automatically support sharding) is about the best you can do without a large hardware budget.
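The sharding most clients do is just hashing the key to pick a server. A minimal Python sketch (the server list is illustrative; real clients typically prefer consistent hashing over plain modulo, so that adding a server doesn't remap every key):

```python
import hashlib

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key):
    """Pick a server by hashing the key, modulo the server count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]
```

Every client with the same server list routes the same key to the same server, which is what makes the pooled RAM act as one large cache.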


Linode allows internal routes as well, and it is unmetered (it just has to be in the same datacenter, though that should be a given).


I don't think that's really true. In my experience it's far more common to run memcached on a separate server (or servers; it's clusterable) than locally, at least for anything serious enough to require separate web and database servers.


I can see four different scenarios here:

  - local box

  - same rack, or at least not further than one hop away on a switch

  - same datacenter (multiple switches)

  - somewhere on the net

I would expect the first two to give good performance, the third adequate, and the fourth too slow to be usable, and unreliable to boot.


Here's a related idea: Someone should start a very large Redis instance, say 8GB of RAM on an EC2 node, with shared reads (i.e., all clients can read all keys), but protected updates. Each app would use a prefix on its keys. This could be used as an open message passing bus between apps that need to communicate.


Why don't you go one step further and offer shared NoSQL instances - for instance Redis - which can pretty much double for what memcached does, but also so much more.

One step further still: people could spin up on-demand instances of whichever NoSQL server they need - Cassandra, Voldemort, Redis, Riak... you name it.


Hi, well, that's a nice idea I've thought about too. But the thing is, I have a full-time job elsewhere! Let's see if this limited Linode one works, and if there's demand I will surely think seriously about it.

edit: just to be clear, I'm not offering one memcached instance per person. It's a single instance shared by everyone.


What's the point of that? I'd get better performance from a disk cache, rather than connecting to some distant server with crazy latency.


Having a remote cache gets rid of one of the few reasons for having a cache at all.


I think at this stage it's aimed at people with apps hosted in the Linode datacenter in Dallas, i.e. on the same private network as this service.


It's a cache inside Linode's datacenter.



