That's for navigation. You want to be able to tell it to "clean the living room," so it needs to know what the living room is (or at least some of its landmarks). The robots sit low to the ground, so tilting the camera up helps.
That's not the only approach, though. You can look forward (or just use lidar), but this navigation approach seems to be less sensitive to, say, furniture being moved around.
Say a couch has shiny metallic legs that mess with the depth estimation. An estimate of where the corners of the couch are could give a better estimate of where the legs are, letting the robot weight one depth hypothesis more heavily than another.
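For illustration, here's a minimal sketch of that idea (all names and numbers are hypothetical, not anything from iRobot): treat the unreliable sensor reading and the corner-derived geometric prior as two Gaussian estimates and fuse them by inverse-variance weighting, so the better-constrained one dominates.

```python
# Minimal sketch (hypothetical names and numbers): fuse a noisy depth
# reading for a couch leg with a geometric prior derived from detected
# couch corners, via inverse-variance weighting of Gaussian estimates.

def fuse_depth(sensor_depth, sensor_var, prior_depth, prior_var):
    """Combine two Gaussian depth estimates; the less uncertain one dominates."""
    w_sensor = 1.0 / sensor_var
    w_prior = 1.0 / prior_var
    fused = (w_sensor * sensor_depth + w_prior * prior_depth) / (w_sensor + w_prior)
    fused_var = 1.0 / (w_sensor + w_prior)
    return fused, fused_var

# Shiny metallic legs: the depth sensor reading is wildly uncertain.
leg_depth_from_sensor = 0.4   # metres, corrupted by specular reflection
sensor_variance = 1.0         # large: we barely trust it

# The couch corners were detected reliably; the legs sit roughly below them.
leg_depth_from_corners = 1.2  # metres, interpolated from corner positions
prior_variance = 0.05         # small: geometry constrains this tightly

depth, var = fuse_depth(leg_depth_from_sensor, sensor_variance,
                        leg_depth_from_corners, prior_variance)
print(f"fused leg depth: {depth:.2f} m (variance {var:.3f})")
```

With those numbers the fused estimate lands almost on top of the corner-derived prior, which is the point: the bogus specular reading barely moves it.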
I mean, the Roomba drives itself under the couch and gets stuck because of a lack of clearance. If you're working at iRobot trying to make better Roombas, and you need more data on why your height sensors aren't preventing that condition, isn't a camera looking up the obvious way forward to collect it?
Why are they labeling furniture in homes that the Roomba can't possibly reach from the floor?