If you set the I-frame interval equal to the video frame rate, an I-frame is generated once every second, so you will notice the flashing (I would call it judder) every second, and it is clearly visible. If you set the I-frame interval to 100 and the video frame rate to 20 fps, the judder happens only every 5 seconds and is very difficult to notice, but only because our brain easily "forgets" what happened 5 seconds before.

The truth is that every x frames, as defined by the I-frame interval, the camera CPU generates and stores an I-frame, which is a full image frame and therefore has the largest size (in bytes) of all the frames. The intermediate frames in between encode only the differences relative to previous frames and therefore occupy fewer bytes. Generating an I-frame takes more processing time and CPU load than generating the intermediate frames, and it requires more storage space. Therefore the more frequently the I-frame is generated, the more the camera CPU is loaded.

The advantage of more frequent I-frames is faster decoding of the video stream during playback, because playback can only start from an I-frame. If you set the I-frame interval to 20 frames with a 20 fps frame rate, the CPU needs to decode at most 20 frames (1 second of video) before it reaches the point it has to display. If you set the I-frame interval to 100 frames at 20 fps, the CPU may have to decode up to 100 frames (5 seconds of video) first, so the start of the video stream is delayed and the CPU may be loaded more during playback. It also depends on how fast the hard disk can provide the data stream to the CPU.
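To make the arithmetic concrete, here is a minimal Python sketch (the function name and the example numbers are my own illustrations, not values from any camera's firmware) that computes how often an I-frame appears and how many frames may need to be decoded before playback can begin:

```python
def iframe_period_seconds(iframe_interval_frames: int, frame_rate_fps: float) -> float:
    """Seconds between consecutive I-frames."""
    return iframe_interval_frames / frame_rate_fps

# I-frame interval equal to the frame rate -> one I-frame per second.
print(iframe_period_seconds(20, 20))   # 1.0 s between I-frames (judder once per second)

# I-frame interval of 100 frames at 20 fps -> one I-frame every 5 seconds.
print(iframe_period_seconds(100, 20))  # 5.0 s between I-frames

# Worst case, playback starting at an arbitrary point must decode from the
# previous I-frame, i.e. up to (interval - 1) extra frames before the first
# displayable picture.
print(100 - 1)  # up to 99 extra frames to decode before playback can start
```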
Therefore the selection of the I-frame interval is a trade-off between the camera CPU capabilities, the NVR decoding capability (and hence the number of cameras it can decode at the same time), and the ability of the hard disks to deliver the data streams of the cameras being viewed.
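As a rough illustration of that trade-off, this sketch uses made-up example figures (the camera count and per-camera bitrate are assumptions, not measurements) to estimate the decode and disk load on the NVR during simultaneous playback:

```python
# Back-of-the-envelope check of whether an NVR can keep up when several
# cameras are played back at once. All numbers are hypothetical examples.
cameras_played_back = 4        # assumed number of simultaneous streams
frame_rate_fps = 20            # per camera
bitrate_mbps_per_camera = 4.0  # assumed average stream bitrate

frames_to_decode_per_second = cameras_played_back * frame_rate_fps
disk_read_mbps_needed = cameras_played_back * bitrate_mbps_per_camera

print(frames_to_decode_per_second)  # 80 frames/s the NVR must decode
print(disk_read_mbps_needed)        # 16.0 Mbit/s the disks must sustain
```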
If you do not mind the judder or flashing, then set the I-frame interval equal to the video frame rate.