IVS tuning is done within the camera itself, not BI.
To see when the rules are triggering, you either have to watch live view with the IVS rules turned on while someone walks around, or have an SD card in the camera and play back events with the IVS rule overlay turned on.
Your field of view and how you draw the IVS rules both affect whether the rules trigger reliably.
So the first steps: have you done the global config to calibrate the camera for your field of view, and is the minimum object size set to 0,0?
Also, many have noted that with SMD 3.0, which your new camera is probably loaded with, the IVS triggers are delayed a bit; that is a side effect of the changes made to cut down on false triggers.
Keep in mind that the camera and DeepStack operate under two different algorithms. DeepStack begins analyzing as soon as an image is sent to it, so you may get an alert with only half a person in the frame. Dahua AI, on the other hand, requires the camera to first detect the object, then decide whether it meets the criteria, and only then trigger the IVS rule. So depending on your field of view, an object at the far range of the camera's view may not trigger Dahua AI because it wants to be sure of what the object is before firing.
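To see the "analyzes whatever frame it gets" behavior for yourself, here is a minimal sketch of sending a single snapshot to DeepStack's detection endpoint. The host/port and the snapshot filename are assumptions for your own install; the point is that DeepStack scores the one image it is handed, partial person and all.

```python
import requests

# Assumed DeepStack address; adjust host/port to match your install.
DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"

# Send a single snapshot (e.g. a frame grabbed on motion) to DeepStack.
with open("snapshot.jpg", "rb") as f:
    response = requests.post(DEEPSTACK_URL, files={"image": f})

# DeepStack scores whatever is in that one frame, even a partial object,
# so a half-visible person can still come back as a "person" prediction.
for pred in response.json().get("predictions", []):
    print(pred["label"], round(pred["confidence"], 2))
```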
Under certain lighting conditions DeepStack can probably detect objects at a longer range than the Dahua AI, but it also produces more false triggers. For example, at night DeepStack will mistake the mailbox across the street for a person, but the Dahua AI never does.
I have also noticed that the newest Dahua AI algorithms tend to trigger after the object has passed the IVS rule. It used to trigger as soon as a foot touched the line, but now the person can sometimes be past the line before it triggers.