If you want to investigate free software options, may I suggest you look into VisualSFM (http://ccwu.me/vsfm/) or AliceVision Meshroom (https://alicevision.org/). I haven't used MicMac; it might be good as well.
I tried cheap approaches like you're asking about for several years, with no luck. Then I upgraded my gear.
I've used Agisoft's Metashape with great success (https://www.agisoft.com/). The cheaper license offers really good functionality as is.
DICE has published a pretty good intro to capturing photogrammetry data with it: https://www.ea.com/frostbite/news/photogrammetry-and-star-wa....
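If you end up automating the Metashape side, the processing mirrors the Workflow menu steps in the GUI (Align Photos, Build Mesh, Build Texture). Below is a minimal sketch of that pipeline using Metashape's Python scripting API; note that scripting is, as far as I know, a Professional-edition feature (the Standard license is GUI-only), the method names are from memory of the 1.x API, and the file names are placeholders, so check the current API reference before relying on it.

    import Metashape  # ships with Metashape Professional; not available in the Standard edition

    doc = Metashape.Document()
    chunk = doc.addChunk()

    # Placeholder list of image paths from your capture session
    photos = ["IMG_0001.JPG", "IMG_0002.JPG"]
    chunk.addPhotos(photos)

    chunk.matchPhotos()        # detect and match features across images
    chunk.alignCameras()       # solve camera poses + sparse point cloud
    chunk.buildDepthMaps()     # per-image depth estimation
    chunk.buildModel(source_data=Metashape.DepthMapsData)  # dense mesh
    chunk.buildUV()            # unwrap the mesh
    chunk.buildTexture()       # bake the texture from the photos

    doc.save("building.psx")   # placeholder project name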
You do want a good camera for good results. My Mavic Pro's (a drone) 12 MP camera is barely tolerable. With a Sony Alpha 6000 (24 MP) and a good lens, the results are fantastic. A camera phone can work, depending on the capabilities of the camera, but I would shoot photos rather than video - still photographs seem to be better quality than frames extracted from video (YMMV).
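To put rough numbers on the camera difference, here's a back-of-the-envelope ground sample distance (GSD) comparison in Python. The sensor widths, focal lengths and the 10 m shooting distance are my own approximate assumptions (spec-sheet ballpark figures), not measurements:

    # Ground sample distance: how much of the subject one pixel covers.
    # All sensor/lens numbers below are assumed ballpark figures.

    def gsd_mm_per_px(distance_m, focal_mm, sensor_width_mm, image_width_px):
        pixel_pitch_mm = sensor_width_mm / image_width_px
        return (distance_m * 1000.0) * pixel_pitch_mm / focal_mm

    distance_m = 10.0  # assumed distance to the subject

    # Mavic Pro: ~1/2.3" sensor (~6.3 mm wide), 12 MP (~4000 px wide), ~4.7 mm lens
    drone = gsd_mm_per_px(distance_m, 4.7, 6.3, 4000)

    # Alpha 6000: APS-C (~23.5 mm wide), 24 MP (6000 px wide), with e.g. a 30 mm lens
    a6000 = gsd_mm_per_px(distance_m, 30.0, 23.5, 6000)

    print(f"Mavic Pro:  ~{drone:.1f} mm per pixel")   # ~3.4 mm
    print(f"Alpha 6000: ~{a6000:.1f} mm per pixel")   # ~1.3 mm

At the same distance the bigger sensor and longer lens resolve roughly 2-3x finer detail per pixel, and in practice the drone also shoots from farther away, which widens the gap further.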
If you have the patience to collect the image material, the results can be really good.
For example, as a hobby I've been collecting photogrammetry models from an office building being built near my home:
https://sketchfab.com/3d-models/hatsina-20200426-bdd04329548...
So what you see there is the model as presented by Sketchfab. The textures and model come from Agisoft Metashape; Sketchfab is just used as a platform to display the model for public viewing.
The data closer to the ground was captured with my Sony Alpha 6000, while the data from above is from the drone. I'm happy with the portions of the model based on the 24 MP images, but the drone-based material does look "melted" occasionally.
As a reference, the source data for that model was roughly 800 images.
The photogrammetry algorithms work purely from the pixel data: the more pixels, the better the outcome. Roughly, the resolution and precision you can expect from the resulting mesh is equivalent to the pixel density of your source material. The algorithms don't invent anything they can't see in the pictures. This means, for example, that columns need tens of images taken in a full 360° around each one to look OK in the model.
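As a rough feel for the numbers: a common rule of thumb (my assumption, not an exact requirement) is to keep consecutive shots within about 10-15 degrees of each other around the object so neighbouring images share plenty of features. A quick Python sketch:

    import math

    def images_per_orbit(step_deg):
        # Shots needed to go 360 degrees around an object at a given angular step
        return math.ceil(360.0 / step_deg)

    for step_deg in (10, 15, 20):
        print(f"{step_deg:>2} deg between shots -> {images_per_orbit(step_deg)} images per orbit")

    # Orbiting at, say, three heights (low/mid/high) multiplies that again:
    print(f"3 orbits at 15 deg -> {3 * images_per_orbit(15)} images for one column")

That lands at a few dozen images per column, which is why the total for a whole building climbs into the hundreds so quickly.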