I live on a secondary road, but between vehicles, pedestrians, bikes, etc., my road-facing cameras can see 10+ hits per minute at times. To keep up when two objects pass seconds apart, I have to make sure the triggers and AI reset very quickly (fewer +images and a lower msec interval for AI, low make time and low "reset after trigger" for motion). It works pretty well... objects have to pass less than a split second apart for one to be skipped, which is extremely rare (and probably unavoidable anyway).

I'm pretty certain the image interval in the camera's AI tab tells BI "grab a frame from the video every this many ms and send it to AI." So if your object goes in and out of frame quickly, you need a very low interval. For example, +10 images at 100 msec means BI will try to send AI an image at T=0, T=0.100, T=0.200, and so on, covering roughly the first second of the event. Obviously, if the object moves out of frame in 0.5 sec, the last 5 images sent are a waste of CPU. A setting of +10 at 50 msec would be much more productive for this example, giving AI all 10 images while the object is still in frame... or just drop the wasted images and use +5 at 100 msec.

This bit of info seems to lack clarity in every guide/doc I've seen, and it is absolutely critical to optimizing AI for each camera/situation. Once you understand it clearly, you can use a wide range of values for different cams and have all of them work very efficiently (and get all of the alert images nicely centered, if you will). Since I realized this, my LPR confirm average has gone from around 50% to 95%; it's rare for AI to cancel on an off-screen plate anymore.
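To make the arithmetic above concrete, here's a quick sketch (not Blue Iris code, just the math as I understand it, assuming the first image is grabbed at T=0 and "+N" gives N frames):

```python
def ai_image_times(n_images, interval_ms):
    """Timestamps (seconds) at which BI would pull frames to send to AI."""
    return [i * interval_ms / 1000 for i in range(n_images)]

def wasted_frames(times, dwell_s):
    """Frames grabbed at or after the moment the object leaves the frame."""
    return sum(1 for t in times if t >= dwell_s)

# Object crosses the frame in 0.5 sec:
print(wasted_frames(ai_image_times(10, 100), 0.5))  # +10 @ 100ms -> 5 wasted
print(wasted_frames(ai_image_times(10, 50), 0.5))   # +10 @ 50ms  -> 0 wasted
print(wasted_frames(ai_image_times(5, 100), 0.5))   # +5  @ 100ms -> 0 wasted
```

The point: match (images × interval) to how long an object actually stays in frame, and every image you pay CPU for is one AI can actually use.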
I'm not sure it's possible to confirm a stopped USPS truck. My setup doesn't care whether the vehicle is moving or not... I get a push notification any time a garbage truck, USPS truck, etc. passes by, whether it stops nearby or not. If I were trying what you want, I'd probably start experimenting with BI's motion invert, which detects when motion stops. I'm not sure exactly how it functions, but my guess is it triggers once after motion stops, resets when it sees motion again, and triggers again when it stops (i.e., it won't constantly trigger unless there is motion). If that's the case, using that as the trigger to send images to delivery.pt might work the way you want.

If you go this route, I'd also play with "start with motion leading image" and see what effect that has. Since you're intentionally sending static images to AI, you could probably get away with just one image and save a ton of CPU cycles. Also, I'm not sure how "cancel stationary objects" would interact with this; it could eliminate parked cars, but might cause problems with 'occupied' states. I'd try playing with that too if things don't work well.
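Since I'm guessing at how motion invert behaves, here's a sketch of the state machine I described (a hypothetical illustration, not how BI actually implements it): fire once when motion stops, and only re-arm after motion is seen again.

```python
def inverted_trigger_events(motion_samples):
    """Return sample indices where a guessed 'motion invert' would fire.

    motion_samples: sequence of booleans, True = motion detected that sample.
    """
    events = []
    armed = False              # only armed once we've actually seen motion
    for i, moving in enumerate(motion_samples):
        if moving:
            armed = True       # motion re-arms the trigger
        elif armed:
            events.append(i)   # motion just stopped -> fire once
            armed = False      # stays quiet until motion returns
    return events

# Truck drives in (motion), parks (no motion), drives off (motion), gone:
print(inverted_trigger_events([True, True, False, False, True, True, False]))
# -> [2, 6]: one event when it parks, one when it's gone
```

If it really works like this, a stopped delivery truck would produce exactly one trigger to hand off to AI, rather than a constant stream.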