So much for that theory, thanks!
Every camera consists of four main components that affect its overall quality. First is the lens; second the CMOS sensor that converts the light into a series of pixels; third the microprocessor that controls the sensor, the memory for temporary storage (nothing to do with final video storage) and the communications with the IP world; and fourth the other ancillary components like memory, PoE management etc.
The lens and the sensor are the main components that determine the resolution and visual quality of the image in the video.
The sensor and the microprocessor determine the frame rate and the bitrate of the video.
The microprocessor determines the maximum bitrate, the H.264 compression quality and the various detection/analysis techniques, plus any other features in the firmware.
There are several manufacturers that produce interchangeable components of varying quality. For example, one microprocessor that HIK uses is the Ambarella A5s. This microprocessor is compatible with sensors from Aptina, Sony and Omnivision. It is quite feasible that a manufacturer may change one of these components on a production line for various reasons, generally without affecting the overall quality of the final product and without changing the model number. (Any car may use oil from different manufacturers at varying costs. These oils are not the same, but the car still drives the same!)
The ghosting effect originates in the sensor and has to do with the refreshing of the image on the cells of the sensor. These cells require a specific time to charge from the incident light and another time to discharge in order to be ready for the next cycle. If you do not discharge a cell completely, then part of the light from the previous exposure remains in it, and the light from the next exposure is stored on top of it. This is the ghosting effect. It is only apparent when the light changes (i.e. a moving object). In a still image there is no significant change in the image and therefore no visible ghosting, although technically the ghosting is still there in the sensor cell. However, if you do discharge the sensor cell completely (i.e. fast), then you allow the inherent noise of the cell to become apparent in the digitisation process, and now you have the noise or grain in the image. Therefore, if you try to eliminate the noise at the sensor level, at the extreme you allow the sensor to introduce ghosting into the image.
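To make the incomplete-discharge idea concrete, here is a toy model (my own illustration, not any vendor's actual pipeline): each readout, the cell keeps a fraction of its previous charge because the discharge cycle is cut short, so a bright object leaves a decaying trail behind it.

```python
def read_cell(scene_light, residual=0.2):
    """Simulate successive readouts of one sensor cell.

    residual is the fraction of the previous charge left over
    because the cell was not fully discharged (hypothetical value).
    """
    charge = 0.0
    readings = []
    for light in scene_light:
        charge = residual * charge + light  # leftover charge + new exposure
        readings.append(charge)
    return readings

# A bright object passes the cell for one frame, then leaves.
# After it is gone, the cell still reports a fading "ghost" of it.
frames = [0, 0, 100, 0, 0]
print(read_cell(frames, residual=0.2))
```

Setting `residual=0` removes the ghost entirely, which corresponds to the fast, complete discharge described above, with its noise penalty.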
There are other, more sophisticated techniques that let the sensor produce the noise (in order to avoid the sensor ghosting) and then eliminate that noise at the processing stage, after the image is captured from the sensor. Now, the more processing required, the more powerful (and expensive) a microprocessor must be, and if you hit the processing limits of the processor, then you have to reduce the frame rate, the bitrate and the compression quality factors in order to produce a video.
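A minimal sketch of why that post-capture cleanup works, again my own simplification rather than any real firmware: take fast (fully discharged, hence noisy but ghost-free) readouts and average several of them. The random noise cancels out while the underlying signal does not, at the cost of extra processing per output pixel.

```python
import random

random.seed(1)  # fixed seed so the example is repeatable

def noisy_readout(true_value, noise=5.0):
    """One fast, fully discharged readout: no ghosting, but noisy."""
    return true_value + random.uniform(-noise, noise)

def denoise(true_value, n_frames):
    """Average n_frames readouts; more frames means more processing work."""
    return sum(noisy_readout(true_value) for _ in range(n_frames)) / n_frames

single = noisy_readout(50.0)   # can be off by up to +/- 5
averaged = denoise(50.0, 16)   # sits much closer to the true 50.0
```

The `n_frames` knob is exactly the processing-cost tradeoff: a stronger (costlier) processor can afford more work per pixel before frame rate or quality has to drop.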
It is quite feasible that your camera and my camera have sensors from different manufacturers that behave completely differently, which is why you do not see ghosting in your 4mm camera and I do (assuming of course that everything else, like model, firmware etc., is the same). Unfortunately we cannot know which components any given camera may have.
Also, do not confuse visual ghosting with visual blurring. These are different effects, and eliminating them requires different techniques.
I hope the above explains in simple terms the image conversion process and the differences in visual quality.