
I'll never forget the first time I had to restore a massive SQL dump and realized that vim actually segfaults trying to read it.

That's when I discovered the magic of split(1), "split a file into pieces". I just split the huge dump into one file per table.

Of course a table can also be massive, but at least each file is now more uniform, which makes it easier to run other tools like sed or awk on it to transform the queries.
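
Roughly what that per-table split looks like in Python, as a minimal sketch: it assumes a mysqldump-style dump where each table's section begins with a "-- Table structure for table" comment, and the file names are placeholders.

    import re

    # Stream the dump line by line so the whole file is never held in memory.
    current = None
    with open("huge_dump.sql", "r", errors="replace") as dump:
        for line in dump:
            m = re.match(r"-- Table structure for table `(.+)`", line)
            if m:                          # a new table section starts here
                if current:
                    current.close()
                current = open(f"table_{m.group(1)}.sql", "w")
            if current:                    # lines before the first marker
                current.write(line)        # (the dump header) are skipped
    if current:
        current.close()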




I'm surprised that vim segfaults! I've had it be slow to open huge files, but I always assumed it could handle anything through some magic buffering mechanism. I could be wrong!

That being said, the moment one has to hand-edit the dump to restore data... something is very wrong in the restore process (knowledge which isn't helpful when you're actually faced with the situation, of course).


Yes, you shouldn't be manually restoring SQL dumps, but I've been working in this field since long before versioned source control or pgbackrest existed.


I once had to administer a system where a particular folder had so many files that things stopped working; even the ls command would not complete. (It was probably on ext3 or ext2.)

The workaround involved writing a Python script that handled everything gradually, moving files into subdirectories based on shared prefixes.
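
Something along these lines, as a sketch (the path, prefix length, and batch size are made up; os.scandir streams entries lazily rather than building the full listing up front):

    import itertools
    import os

    SRC = "/data/hugedir"   # hypothetical path
    PREFIX_LEN = 2          # bucket by the first two characters of the name
    BATCH = 10000           # move this many files per pass, then re-scan

    def one_pass():
        # Grab a limited batch of plain files, then close the iterator
        # before touching the directory.
        with os.scandir(SRC) as it:
            batch = list(itertools.islice(
                (e.name for e in it if e.is_file(follow_symlinks=False)), BATCH))
        for name in batch:
            bucket = os.path.join(SRC, name[:PREFIX_LEN])
            os.makedirs(bucket, exist_ok=True)
            os.rename(os.path.join(SRC, name), os.path.join(bucket, name))
        return len(batch)

    # Keep taking small bites until nothing is left at the top level.
    while one_pass():
        pass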


Oh yes. ls uses 4k buffers for dirents, and in a directory with lots of entries, the time it took userspace to hit the kernel to list entries until that 4k buffer was full, over and over, became noticeable back in the day. In my dealings with a system like that, I had a hacked copy of ls that used bigger buffers so at least it wouldn't hang. Tab completion would also hang if there were too many entries.
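
For illustration, a rough Python sketch of the bigger-buffer idea using the raw getdents64 syscall; this is Linux/x86_64-specific (syscall number 217 and the linux_dirent64 layout are platform assumptions), not a reproduction of the hacked ls itself.

    import ctypes
    import os
    import struct

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    SYS_getdents64 = 217          # x86_64 syscall number
    BUF_SIZE = 1024 * 1024        # 1 MiB per syscall instead of ~4 KiB

    def list_huge_dir(path):
        fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
        buf = ctypes.create_string_buffer(BUF_SIZE)
        try:
            while True:
                nread = libc.syscall(SYS_getdents64, fd, buf, BUF_SIZE)
                if nread < 0:
                    raise OSError(ctypes.get_errno(), "getdents64 failed")
                if nread == 0:
                    break                          # end of directory
                pos = 0
                while pos < nread:
                    # linux_dirent64: u64 d_ino, s64 d_off, u16 d_reclen,
                    # u8 d_type, then the NUL-terminated name at offset 19
                    d_ino, d_off, d_reclen, d_type = struct.unpack_from("<QqHB", buf, pos)
                    name = buf[pos + 19:pos + d_reclen].split(b"\0", 1)[0]
                    if name not in (b".", b".."):
                        yield name.decode(errors="surrogateescape")
                    pos += d_reclen
        finally:
            os.close(fd)

    # e.g. count entries without ls's sorting pass:
    print(sum(1 for _ in list_huge_dir(".")))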



