I don't understand the "lots of inodes" comment, but any filesystem limit on inodes isn't relevant here; the slowness is just due to RPCs to the metadata server(s). (ls -l is actually worse than coloured plain ls because the size data live on the OSSs, not the MDS.)
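The same distinction shows up in scripts that walk Lustre directories. A minimal Python sketch (the path is made up) contrasting a name-only listing, which only needs the directory entries from the MDS, with one that asks for sizes and so forces a per-file stat that has to reach out to the OSTs, much like ls -l:

```python
import os

LUSTRE_DIR = "/lustre/project/data"  # hypothetical path, for illustration only

# Name-only listing: one readdir stream, no per-file stat, so (like plain
# uncoloured ls) it only needs the directory entries from the MDS.
names = [entry.name for entry in os.scandir(LUSTRE_DIR)]

# Asking for st_size forces a stat per file; on Lustre the size lives with
# the objects on the OSTs, so this is the expensive `ls -l`-style pattern.
sizes = {entry.name: entry.stat().st_size for entry in os.scandir(LUSTRE_DIR)}
```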
The canonical advice for general performance is to keep directories reasonably small on Lustre due to possible lock contention, but I don't know the circumstances for which that's actually relevant.
[Metadata operations, such as large builds and tars, are typically slower on our Isilon than on the Lustre filesystem, which has no serious tuning as far as I know.]
> The canonical advice for general performance is to keep directories reasonably small on Lustre due to possible lock contention, but I don't know the circumstances for which that's actually relevant.
I helped a researcher debug a Lustre performance issue a while ago. Each job was nothing special: read a few files (maybe a few GB total), do some (serial, no MPI or such) calculations taking maybe 10 min, produce a few GB of output files. No problem, except that when the person ran several hundred of them in parallel as an array job, the throughput per job dropped to a small fraction of normal. It turned out that all the jobs were using the same working directory. Slightly tweaking the workflow to use per-job directories fixed it.
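In case it helps anyone hitting the same thing, a rough sketch of the per-job-directory tweak, assuming a Slurm array job (the scratch path is hypothetical and other schedulers expose a similar task index):

```python
import os
from pathlib import Path

# Slurm sets SLURM_ARRAY_TASK_ID for each task of an array job.
task_id = os.environ.get("SLURM_ARRAY_TASK_ID", "0")

# Hypothetical shared scratch area; the point is only that each array task
# works in its own directory instead of all tasks contending on one.
workdir = Path("/lustre/scratch/myproject") / f"task_{task_id}"
workdir.mkdir(parents=True, exist_ok=True)
os.chdir(workdir)

# ... the job's actual reads and writes then happen under the per-task directory ...
```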