My guess is that there are companies with "legacy" applications that can't really be rewritten into a distributed system, have a large memory footprint, but still need to be run.
A special sub-category of those is huge RDBMS instances - a pretty common choke point in growing companies with weaker engineering teams. Some of those companies would pay basically any price to keep those DBs running.
I've temporarily scaled up to c4.8xlarge for a few hours every now and then to get some parallelized computations done quickly. Plays nicely with Clojure's (pmap) function.
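For context, here's a minimal sketch of that pmap pattern, assuming a CPU-bound function applied independently to a batch of inputs (expensive-step and run-batch are just placeholder names, not anything from the original setup):

    ;; Hypothetical stand-in for the real per-item CPU-heavy work.
    (defn expensive-step [x]
      (reduce + (map #(Math/sqrt (double (* % x))) (range 1 100000))))

    ;; `map` would run this sequentially on one core; `pmap` runs the same
    ;; calls on a pool of futures, keeping roughly one task per core in
    ;; flight, so a c4.8xlarge's 36 vCPUs get used with no other changes.
    ;; `doall` forces the lazy result so all the work finishes here.
    (defn run-batch [inputs]
      (doall (pmap expensive-step inputs)))

    ;; e.g. (time (run-batch (range 1000)))

pmap only really pays off when each item's work dwarfs the per-task coordination overhead, which is presumably why it suits this kind of occasional batch job.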
Applied ML research here as well -- a lot of interactive (but highly parallelizable) modeling and graphing. With medium-size data sets of around 3-4 GB in RAM, by the time you've forked the process a few times you can easily end up beyond the m4.10xlarge or c4.8xlarge memory limits.
IMO there's an awkward space between small data and big data where it isn't really worth spending a long time treating it like a real "big data" problem, and the x1 instance gives you an easy out.