Using different model for day vs night?

exx1976

n3wb
Apr 10, 2022
USA
I'm using CPAI with both the ipcam-combined and ipcam-dark models loaded. However, I'm getting double identification boxes, and identification is taking, in some cases, a full minute on a 370 KB JPG. Deepstack used to put alerts up in milliseconds. I switched to CPAI because Deepstack just stopped working one day, and no matter what I did I couldn't get it working again. I had DS tuned very well, and I honestly think that compared to what I have going on now, it was the superior product.

How can I fix the double notifications and speed this up? My system is plenty fast: an NVIDIA RTX 3060 on a Core i7-12700K with 64 GB of RAM, NVMe drives for the system, alerts, and new storage, and a spinning disk for stored clips. I have 12 cameras currently, but plan to add several more.

BI is 5.9.8.2 and CPAI is 2.6.5.0

If there is any other information I can provide, please let me know and I'm happy to do so.

Thank you!
 
You have it running both models at the same time.

Set up a night profile, then run only the dark model on that profile and only the day model on the day profile.
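
Blue Iris handles this switching itself through profiles and schedules, but the intended logic can be sketched like this. This is purely illustrative: the model names come from the thread, while the function name and the day/night hours are assumptions, not anything BI or CPAI exposes.

```python
# Illustrative sketch of profile-based model selection. Blue Iris does
# this via its schedule/profile settings; nothing here is real BI code.

DAY_MODEL = "ipcam-combined"   # general-purpose model for daylight
NIGHT_MODEL = "ipcam-dark"     # IR/low-light model

def model_for_hour(hour: int, day_start: int = 7, day_end: int = 19) -> str:
    """Pick exactly one model per time of day, so only one runs at a time.
    Day/night boundaries are assumed; a real schedule would track sunrise/sunset."""
    return DAY_MODEL if day_start <= hour < day_end else NIGHT_MODEL

print(model_for_hour(12))  # ipcam-combined
print(model_for_hour(2))   # ipcam-dark
```

The point of the one-model-at-a-time rule is exactly what causes the double boxes above: with both models loaded on the same profile, each one returns its own detections for the same frame.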
 

I don't know how to do that, but I suspected that was the answer. However, my google-fu appears weak today. Could you tell me how to do that, or point me at the correct documentation for setting that up?
 


This should get you started with profiles and schedules.

Regarding the CodeProject response times: if you have a 3060, make sure you are using either YOLOv5 6.2 or YOLOv8. With v5 on my 3060 Ti my times were between 60-80 ms, and on v8 they are usually 30-60 ms. There isn't a dark-specific model for v8 that I know of yet, but I have never had a problem with the regular one at night. I also trained a v8 model for just person, vehicle, and animal that I posted in my own thread.

Additionally, make sure "Use mainstream..." isn't checked in the camera's AI configuration. The image is going to get resized to 640 px during inference anyway, unless you are using a non-standard model that was trained at a higher resolution.
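
The 640 px point can be made concrete with a small sketch. YOLO-style models infer on a fixed input size (640 px on the longest side by default), so sending a full mainstream frame only adds transfer and decode overhead before the same downscale happens anyway. The helper below is hypothetical, not BI or CPAI code; the no-upscaling behavior is my assumption.

```python
# Hypothetical helper showing what the server-side resize amounts to.
# Only the geometry is illustrated here; real inference also letterboxes.

def inference_size(width: int, height: int, target: int = 640) -> tuple[int, int]:
    """Dimensions after scaling the longest side down to `target`.
    Frames already at or below `target` are assumed to pass through unchanged."""
    longest = max(width, height)
    if longest <= target:
        return width, height
    scale = target / longest
    return round(width * scale), round(height * scale)

print(inference_size(2560, 1440))  # 1440p mainstream -> (640, 360)
print(inference_size(640, 360))    # substream already small -> (640, 360)
```

Either way the model sees a 640 x 360 image, which is why the substream is the better thing to feed it.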
 