There’s probably some software tool to give you an internal perspective on it, but if you want to “prove” your frame rate for any application, there’s one obvious way:
1. make the app full-screen;
2. make the app constantly vary the screen output as often as it can;
3. capture the HDMI output of your computer with a hardware video capture box[1];
4. use some scriptable video software to calculate, for the captured video, how many distinct frames (i.e. changes in content from the previous frame) it contains per second, on average.[2]
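For step 4, here's a minimal sketch of the counting pass, assuming the capture is saved as "capture.mkv" and that OpenCV is available (the filename, and the exact pixel-equality test, are illustrative choices, not requirements):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("capture.mkv")          # hypothetical capture file
    container_fps = cap.get(cv2.CAP_PROP_FPS)      # rate the capture box recorded at

    distinct = 0    # frames whose pixels differ from the previous frame
    total = 0
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        if prev is None or not np.array_equal(frame, prev):
            distinct += 1
        prev = frame
    cap.release()

    if container_fps and total:
        duration_s = total / container_fps
        print(f"{distinct} distinct frames over {duration_s:.1f}s "
              f"-> ~{distinct / duration_s:.1f} effective fps")

With a lossy capture codec you'd compare against a small per-pixel difference threshold instead of exact equality, since compression noise would otherwise make every frame look "new".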
——
[1] you could also substitute a high-speed camera pointed at your monitor (necessary when the limiting factor is the monitor's refresh rate).
[2] Conveniently, if the capture box emits video in a format with inter-frame compression, you can do the new-frame-counting at a low level, without decompressing the video. You parse through the video stream, discard the near-empty "no change" inter-coded (P/B) frame chunks, and then count the remaining frame chunks grouped by the start-timecode field cast to seconds. Which is, I assume, what OS file-indexing services do to calculate a video's FPS, for those that do. This answer comes out slightly high when the encoder inserts keyframes at points where nothing actually changed, but it should average out to essentially the same number in any video longer than a few seconds.
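A rough sketch of that low-level pass, assuming an ffprobe-readable capture file and treating any video packet below some small size threshold as a "no change" frame (the filename and the 512-byte cutoff are placeholders you'd tune for your encoder):

    import json
    import subprocess
    from collections import Counter

    CAPTURE = "capture.mkv"     # hypothetical capture file
    NO_CHANGE_BYTES = 512       # heuristic cutoff for "nothing changed this frame"

    # List every video packet's timestamp and compressed size, no decoding needed.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "packet=pts_time,dts_time,size", "-of", "json", CAPTURE],
        capture_output=True, text=True, check=True,
    ).stdout

    per_second = Counter()
    for pkt in json.loads(out)["packets"]:
        ts = pkt.get("pts_time") or pkt.get("dts_time")
        if ts is None or int(pkt["size"]) < NO_CHANGE_BYTES:
            continue
        # Group the "real" frames by their timestamp cast to whole seconds.
        per_second[int(float(ts))] += 1

    if per_second:
        print(f"average distinct frames per second: "
              f"{sum(per_second.values()) / len(per_second):.1f}")

Note this still counts the periodic keyframes the encoder emits even when nothing changed, which is the slight overcount mentioned above.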
Never benchmarked a virtual terminal before, but wouldn’t one basically use a timer and a bunch of unbuffered+blocking writes to stdout to determine how fast the terminal is drawing?
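Something like this sketch, maybe, with the payload and iteration count picked arbitrarily: it times blocking, unbuffered writes straight to the stdout file descriptor and reports how fast the terminal drained them:

    import os
    import sys
    import time

    chunk = ("x" * 79 + "\n") * 200     # arbitrary payload: 200 lines per write
    payload = chunk.encode()
    iterations = 500

    fd = sys.stdout.fileno()
    start = time.perf_counter()
    for _ in range(iterations):
        written = 0
        # os.write is unbuffered; on a tty it blocks until the terminal accepts the bytes.
        while written < len(payload):
            written += os.write(fd, payload[written:])
    elapsed = time.perf_counter() - start

    total = len(payload) * iterations
    sys.stderr.write(f"\n{total / elapsed / 1e6:.1f} MB/s, "
                     f"{iterations * 200 / elapsed:,.0f} lines/s over {elapsed:.2f}s\n")

That measures how fast the terminal consumes bytes rather than how many frames it actually paints (most terminals coalesce updates), so you'd still want the capture method above to count drawn frames.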