IP Cam Talk Custom Community DeepStack Model

Yep, that is what 'mark as vehicle' means. It is useless to anyone not using Plate Recognizer or the custom model a member here created. It is OK to either leave it in there or delete it.

What it does: every image sent to Plate Recognizer counts against the monthly total, whether a plate is in it or not. This option prevents BI from sending an image just because a leaf blew past or the sun came in and out of the shadows.
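For anyone wondering how the saving works in principle: DeepStack is asked for objects first, and Plate Recognizer is only called when a vehicle is actually in the frame. The sketch below is just that idea in script form, not what BI does internally; the URLs are the standard local DeepStack and Plate Recognizer cloud endpoints, the token and filename are placeholders, and the vehicle label list is an assumption.

```python
# Sketch of the "mark as vehicle" idea: only spend a Plate Recognizer call
# (which counts against the monthly quota) when DeepStack already saw a vehicle.
import requests

DEEPSTACK_URL = "http://localhost:80/v1/vision/detection"
PLATE_URL = "https://api.platerecognizer.com/v1/plate-reader/"
PLATE_TOKEN = "YOUR_API_TOKEN"                          # placeholder
VEHICLE_LABELS = {"car", "truck", "bus", "motorcycle"}  # assumed label set

def saw_vehicle(image_bytes, min_confidence=0.4):
    """Ask DeepStack for objects and report whether any of them is a vehicle."""
    resp = requests.post(DEEPSTACK_URL, files={"image": image_bytes}).json()
    return any(p["label"] in VEHICLE_LABELS and p["confidence"] >= min_confidence
               for p in resp.get("predictions", []))

def read_plate(image_bytes):
    """Send the image to Plate Recognizer; every call here burns a quota credit."""
    resp = requests.post(PLATE_URL,
                         headers={"Authorization": f"Token {PLATE_TOKEN}"},
                         files={"upload": image_bytes})
    return [r["plate"] for r in resp.json().get("results", [])]

with open("alert.jpg", "rb") as f:   # placeholder alert snapshot
    img = f.read()

# A leaf or a moving shadow never reaches Plate Recognizer, so it costs nothing.
if saw_vehicle(img):
    print(read_plate(img))
```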
 
Thanks. I do use Plate Recognizer on one of my cameras. I've seen posts with different values in this field; it's usually either car,truck,bus,vehicle or car,truck,bus. Is vehicle a valid label? Is vehicle in the 'objects' model? If you only use the smaller .pt files, say combined.pt, then I don't see 'vehicle' listed in the first post in this thread, so it isn't needed in 'mark as vehicle', correct?

I've now changed all my cameras to use 'objects:0,combined'; I'll test over the next 24 hours to see how the DeepStack speeds go.

Thanks for your help, and thanks to everyone who's worked on this Custom Model project.
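One way to settle the "is vehicle a valid label?" question for your own models is to send a test image straight to DeepStack and look at the labels that come back. Rough sketch only, assuming DeepStack is listening on localhost port 80 and that 'combined' is the custom model you loaded; the image filename is a placeholder, and it only shows labels present in that particular picture, not the model's full list.

```python
# Print the labels DeepStack returns for one image, for the default 'objects'
# model and for a custom .pt model, so you can see what words it actually emits.
import requests

def labels_for(image_path, model=None, host="http://localhost:80"):
    """model=None hits the default detection endpoint; a name like 'combined'
    hits the custom-model endpoint for that .pt file."""
    url = f"{host}/v1/vision/detection" if model is None \
          else f"{host}/v1/vision/custom/{model}"
    with open(image_path, "rb") as f:
        resp = requests.post(url, files={"image": f.read()}).json()
    return {p["label"] for p in resp.get("predictions", [])}

print("default objects:", labels_for("driveway_test.jpg"))
print("combined.pt:    ", labels_for("driveway_test.jpg", model="combined"))
```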
 
Questions, with the answers below each one.

1. Should the 'Default object detection' tick box be ticked or not, and why?
If you are using the custom models and have no need for the exhaustive item list in DS, then uncheck that box. Your return times will be much faster.

2. I've seen in some posts in this thread that you also need to edit each camera you wish to use these new .pt files for and add text like 'objects:0,combined' to the 'Custom Models:' field. Do you need to add this to the cameras or not?
Yes, you do. The reason your response times have increased is that you are now running the default model and all the custom models at the same time for each camera. Go into each camera and use just the model you need for that camera. The custom models you downloaded took out all the junk we don't need, like broccoli, toothbrush, etc. LOL, so they allow for a quicker response. Using the default means every image is checked against EVERY item in that list, which is why the custom model was created to speed up response times. It is counterproductive to use both the default objects and the custom models you pulled at the same time (a quick way to measure the difference yourself is sketched after this list).

3. If the answer to Q2 is yes, then what does it mean? (Yes, I have read the BI manual, and I understand that adding :0 to the end of a text string excludes that model, e.g. 'objects:0' disables 'objects'.) Just what is 'objects'? I'm assuming it's nothing to do with the tick box option 'Default object detection', as that uses the word 'object' but not 'objects', and it's not the name of a .pt file, so what does 'objects' mean?
Objects is the default object detection model in DeepStack. If you do not want to use the default for any camera, then uncheck the box from question #1. If you want to use it for some cameras, then objects:0 has to go in the cameras where you do not want the default one used.

4. I've also seen in other posts in this thread that people have added 'dark' to the list in the 'Custom Models:' field. Again, where has this word come from? I don't see a file called dark.pt.
dark.pt is yet another model, available in a different location than the custom models you pulled. The ones you pulled were developed by a member here; dark.pt was developed by another person: GitHub - OlafenwaMoses/DeepStack_ExDark: A DeepStack custom model for detecting common objects in dark/night images and videos.

5. Log file: is there a different log for DeepStack analysis than the one in BI?
No, they are all in the same log file. But if you check the save .DAT file option in each camera, then in BI you can pull up the DS specifics for that triggered analysis.
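As mentioned in the answer to #2, here is a rough way to see the speed difference for yourself rather than watching the BI status window. It assumes DeepStack on localhost port 80, that 'combined' is one of the custom models you loaded, and a local test JPEG; the absolute numbers will depend entirely on your CPU/GPU.

```python
# Time the same image against the default 'objects' model and a custom model,
# averaging a few runs, to see why trimming the label list speeds things up.
import time
import requests

HOST = "http://localhost:80"

def average_time(url, image_bytes, runs=5):
    """Average response time of several identical detection requests."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        requests.post(url, files={"image": image_bytes})
        total += time.perf_counter() - start
    return total / runs

with open("test.jpg", "rb") as f:   # placeholder test image
    img = f.read()

default_t = average_time(f"{HOST}/v1/vision/detection", img)
custom_t = average_time(f"{HOST}/v1/vision/custom/combined", img)
print(f"default objects: {default_t:.3f}s   custom combined: {custom_t:.3f}s")
```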
"If you do not want to use the default for any camera, then uncheck the box from question #1. If you want to use it for some cameras, then objects:0 has to go in the cameras you do not want the default one to use."

So, I have the "default object detection" unchecked in the global AI settings of BI. I also have "objects:0" in each of my cameras' individual DS settings.

Are you saying here that if I have the global setting unchecked, then I don't have to input "objects:0" on all my cameras?

On all my cams I have "objects:0,dark:0,general" for day and "objects:0,general:0,dark" for night.

Thanks
 
"If you do not want to use the default for any camera, then uncheck the box from question #1. If you want to use it for some cameras, then objects:0 has to go in the cameras you do not want the default one to use."

S
o, I have the "default object detection" unchecked in the global AI settings of BI. I also have "objects:0" in each of my cameras individual DS settings.

Are you saying here that if I have the global setting unchecked, then I don't have to input "objects:0" on all my cameras?

On all my cams I have "objects:0,dark:0, general" for day and "objects:0,general:0,dark" for night.

Thanks

That would be correct!
 
So, to be clear, I change all my individual cam settings to "dark:0,general" for day and "general:0,dark" for night.

No need for "objects:0" because I have the "default object detection" unchecked in the global AI settings of BI.
 
That is correct.
One more point or question.

If we are not using the "default object detection" in global settings, then do we still need to use "banana or zebra" in the DS 'to cancel' section of the camera settings?

Wasn't that used to cancel out all the miscellaneous stuff in the original DS settings?
 
The banana or zebra was used to force DeepStack to keep running for the entire analysis time and the full number of additional images you want sent to it.

The issue with some fields of view is that the camera would trigger before the object was in the frame (like headlight shine), DeepStack would do a quick analysis, and it would say there was nothing there. The banana or zebra would force it to run through all the extra images.

So now you pick an item in these models that you don't see much of and use that instead of zebra or banana.
 
So if I'm only using "general" during the day, it's only detecting people and vehicles, and banana or zebra wouldn't be part of "general", right? And "dark" at night, I don't know what all is in that, because it's not from Mike.
 
Quick question. If you have "Mark as Vehicle" ticked and you're not using a plate reader but are using the custom community DeepStack model, does that mean it will recognise the plate if it can see it in a normal camera picture (no LPR / camera not set up as LPR)?

E.g. in the picture of the 3 cars above from the general camera, would it recognise and record those plates if they were viewable and recognisable from that camera?
 
You have to be running something that can read plates, either Plate Recognizer or the custom LPR model, and it has to be set up properly.
 
Is anyone able to get Blue Iris to use these custom models with an external DeepStack setup? I have the DeepStack docker GPU version running in LXC, and it works fine with the default object model. However, when I try to use a custom model, the default object model still works, but as soon as I specify "combined" as the custom model in the camera settings, it does not work and I only see status 500 in the DeepStack logs.
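Not sure what's breaking in that setup, but one way to narrow it down is to hit the custom-model endpoint directly from a small script, taking Blue Iris out of the picture. This is only a rough sketch: the host address, port, model name, and test image below are placeholders for your own setup, and the docker volume path is the one the DeepStack docs describe for custom models, if I remember it right.

```python
# Probe an external DeepStack instance directly, without Blue Iris in the loop.
# A 500 on /v1/vision/custom/combined usually means DeepStack never loaded a
# model named "combined" -- e.g. the folder holding combined.pt was not mounted
# into the container (docs describe -v /your/models:/modelstore/detection).
import requests

HOST = "http://192.168.1.50:80"    # placeholder: your LXC/docker host and port

with open("test.jpg", "rb") as f:  # any local JPEG works as a probe image
    img = f.read()

# The default model should answer 200 with a "predictions" list.
default = requests.post(f"{HOST}/v1/vision/detection", files={"image": img})
print("default objects:", default.status_code)

# If this also answers 500, the problem is on the DeepStack side (model file
# not found/loaded), not in the Blue Iris camera settings.
custom = requests.post(f"{HOST}/v1/vision/custom/combined", files={"image": img})
print("custom combined:", custom.status_code, custom.text[:200])
```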
 
Contrast, brightness, and lack of noise in night images are very important for successful detection in DS. I've been tinkering with camera settings since DS came out, trying to get it just right. I do get about 75% detection, but with three cameras that are set up for night detection the overall rate is more like 95-99%. Headlights are another problem; even with HLC set to 100 they cause lots of problems.