IP Cam Talk Custom Community DeepStack Model

Hope it's okay to post in this thread. If not please let me know.

I switched over to using the general and animal models after resolving some issues I was having with BI and DS, and now Python is going nuts. By that I mean there would be as many as 20 Python processes running at the same time, eventually consuming all of the memory and GPU.

I had the following set:
1657472078927.png
1657473155253.png

I restarted the PC a couple of times but that didn't help.

I had 12 cameras running with basically the same configuration as above.

I could see in the logs that the evaluation time was hitting the 15-second limit, whereas with the default model the times were generally 13-150ms.

0 7/9/2022 4:50:19.022 PM FrontDoor AI: Alert cancelled [AI: timeout] 15021ms
0 7/9/2022 4:50:20.451 PM App AI has been restarted
1 7/9/2022 4:50:20.964 PM Driveway Events: subscription 00002efd
1 7/9/2022 4:50:20.990 PM FrontLeft Events: subscription 00002efd
1 7/9/2022 4:50:21.451 PM Driveway AI: timeout
0 7/9/2022 4:50:21.452 PM Driveway AI: Alert cancelled [AI: timeout] 15028ms
1 7/9/2022 4:50:21.883 PM Backyard Events: subscription 00002efd
0 7/9/2022 4:50:23.014 PM App AI has been restarted
1 7/9/2022 4:50:25.479 PM Cam1 Events: subscription 00002efd
1 7/9/2022 4:50:26.382 PM FrontLeft Events: subscription 00002efd
1 7/9/2022 4:50:26.383 PM Driveway Events: subscription 00002efd
1 7/9/2022 4:50:27.499 PM Backyard Events: subscription 00002efd
1 7/9/2022 4:50:31.148 PM Cam1 Events: subscription 00002efd
1 7/9/2022 4:50:31.480 PM Backyard AI: timeout
0 7/9/2022 4:50:31.481 PM Backyard AI: Alert cancelled [AI: timeout] 15069ms
1 7/9/2022 4:50:31.844 PM FrontLeft Events: subscription 00002efd
1 7/9/2022 4:50:32.133 PM Driveway Events: subscription 00002efd
1 7/9/2022 4:50:33.139 PM Backyard Events: subscription 00002efd
0 7/9/2022 4:50:33.305 PM App AI has been restarted
1 7/9/2022 4:50:33.400 PM BackPatioDoor AI: timeout
0 7/9/2022 4:50:33.401 PM BackPatioDoor AI: Alert cancelled [AI: timeout] 15157ms
3 7/9/2022 4:50:33.719 PM BYLeft1 MOTION_A
1 7/9/2022 4:50:35.170 PM BYLeft2 AI: timeout
0 7/9/2022 4:50:35.170 PM BYLeft2 AI: Alert cancelled [AI: timeout] 15318ms
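If anyone wants to quantify how often this happens, here is a quick stdlib-only Python sketch (my own, not part of BI) that tallies the AI-timeout cancellations per camera from log lines pasted into a list. The regex assumes the exact line format shown above:

```python
import re

# Matches Blue Iris log lines like:
#   0 7/9/2022 4:50:19.022 PM FrontDoor AI: Alert cancelled [AI: timeout] 15021ms
TIMEOUT_RE = re.compile(
    r"^\d+\s+\S+\s+\S+\s+[AP]M\s+(\S+)\s+AI: Alert cancelled \[AI: timeout\] (\d+)ms"
)

def tally_timeouts(lines):
    """Return {camera: [timeout durations in ms]} from pasted BI log lines."""
    timeouts = {}
    for line in lines:
        m = TIMEOUT_RE.match(line.strip())
        if m:
            timeouts.setdefault(m.group(1), []).append(int(m.group(2)))
    return timeouts
```

Running it over the excerpt above would show every camera's cancellations sitting right at the ~15,000ms ceiling.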

I switched back to the default model and BI and DS are using very few resources and response times are back in the ~75ms range.
1657472821445.png
This is my PC configuration, running BI 5.5.8.2:
Dell XPS 8950
12th Gen i9-12900
128GB RAM
Nvidia RTX 3070 Ti
1x onboard NIC
1x 2 port PCIe NIC
Drive C NVMe 2TB
Drive D 7,200RPM 2TB
Drive E NVMe 2TB
Drive V 7,200RPM 18TB


I would appreciate any help troubleshooting this.

Thanks,
James
 
Update on my earlier post.

I switched one camera over to the custom models and rebooted. Upon login I could hear the fans cranking up. I had just enough time to get the screenshot below before the PC bluescreened with VIDEO_SCHEDULER_INTERNAL_ERROR.

1657476353312.png

I was in the process of stopping the BI service when the bluescreen occurred. I am pretty confident that the bluescreen is a symptom and not the cause.

The actual code was:
The computer has rebooted from a bugcheck. The bugcheck was: 0x00000119 (0x000000000000a000, 0xffffa18af3b0f000, 0x0000000000000054, 0x0000000000000055). A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: 9b054e10-3814-46e0-abe8-aac11538912d.

According to Microsoft, it could be the result of any of the following:
1657477414004.png

In my instance, the first parameter was a000, which isn't in that list, so I don't know.

I have BI set to a delayed start and disabled it after the reboot. I went into the registry and removed the smartmodels entry for the one camera I had changed and then started the BI service. It started spawning multiple instances of Python again and bluescreened again. When it restarted I removed the custom models from MyModels and it is stable again.
 
@sebastiantombs - I think I figured out the issue. A while ago I bumped all my cameras up to the highest resolution they offer. For the camera I had been testing sub streams with, an older Dahua, the highest resolution is 1280x960, which is a 4:3 format, so the camera shows as a 4:3 image in BI. I would just click on Anamorphic (force size) and set it to 1920x1080 to make it widescreen. It had been that way for so long that I completely forgot 1280x960 is actually a 4:3 format.

So, when I added the 640x480 sub stream, BI used the sub stream's resolution in the layout and in UI3. BI also changed the Anamorphic setting, forcing it to the sub stream's resolution and graying it out, basically putting the camera back to the main stream's default 4:3 image size and wiping out my Anamorphic change to 1920x1080. That left me with a main stream and a sub stream that were both 4:3, so even when I clicked on the camera to make it full screen, it remained 4:3 due to the main stream's 1280x960 native resolution.

I lowered the main stream's resolution to 1280x720, a widescreen format, and BI now shows the camera in widescreen in the layout and in UI3, while still showing the sub stream's resolution as 640x480 in the Video tab with Anamorphic grayed out. So BI is indeed using the sub stream in the layout and in UI3, but when I put the camera into full screen, it stays widescreen, as it is using the main stream's 1280x720 resolution; I can confirm in stats for nerds that it is now using the main stream's resolution.

So it was my fault all along. I don't know about you or the others here, but I hate the 4:3 format for my cameras; I need them in widescreen. It kind of sucks that I cannot use the camera's full resolution without it showing as 4:3. I still haven't played with editing the layout yet, but I will see if I can keep the highest resolution and change the layout formatting to present the video feed in widescreen. Even if I can, I'm sure it will change back to 4:3 when I put the camera in full screen mode.
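For anyone double-checking their own resolutions, reducing width and height by their greatest common divisor makes the native aspect ratio obvious. A tiny stdlib-only sketch (the resolutions are the ones from my cameras above):

```python
from math import gcd

def aspect(width, height):
    """Reduce a resolution to its simplest width:height ratio."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

print(aspect(1280, 960))   # 4:3  (old Dahua main stream)
print(aspect(640, 480))    # 4:3  (sub stream)
print(aspect(1280, 720))   # 16:9 (lowered main stream)
```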

Do you have any cameras that have a main stream resolution that is actually a 4:3 format? If so, and you forced them to widescreen while using both a main and sub stream, how did you do it?

Thanks again for all the help,
Chris
 
Silly question: can I upgrade to the latest BI version 5.5.9.x if I am still on 5.5.7.11 (the last version before DS deprecation) and keep my DS integration intact without any changes, including all the custom models, etc.? I haven't updated since they added the SenseAI components, and I am assuming the DS integration is still fully backwards compatible?
 
Silly question: can I upgrade to the latest BI version 5.5.9.x if I am still on 5.5.7.11 (the last version before DS deprecation) and keep my DS integration intact without any changes, including all the custom models, etc.? I haven't updated since they added the SenseAI components, and I am assuming the DS integration is still fully backwards compatible?

Some people have had issues and others haven't. My recommendation: unless 5.5.9.x has something you need, stay on your current version.
 
Silly question: can I upgrade to the latest BI version 5.5.9.x if I am still on 5.5.7.11 (the last version before DS deprecation) and keep my DS integration intact without any changes, including all the custom models, etc.? I haven't updated since they added the SenseAI components, and I am assuming the DS integration is still fully backwards compatible?

I'm in the same boat as you. I don't feel like breaking anything in my DS implementation, so I'm holding at 5.5.7.11 until the wrinkles are ironed out with SenseAI.
 
I'm in the same boat as you. I don't feel like breaking anything in my DS implementation, so I'm holding at 5.5.7.11 until the wrinkles are ironed out with SenseAI.

I'm way behind the game and looking to set up Deepstack this upcoming weekend while I'm at my cameras' location. I purchased a GeForce GTX 1060 graphics card for this sole purpose. I'm currently on version 5.5.8.2 of Blue Iris. Are you saying Deepstack will not work with this version? I updated not too long ago. I do have version 5.4.7.11 which is the version I was on before I updated to 5.5.8.2. Should I be reverting back? Does anyone have a link to 5.5.7.11?
 
I'm way behind the game and looking to set up Deepstack this upcoming weekend while I'm at my cameras' location. I purchased a GeForce GTX 1060 graphics card for this sole purpose. I'm currently on version 5.5.8.2 of Blue Iris. Are you saying Deepstack will not work with this version? I updated not too long ago. I do have version 5.4.7.11 which is the version I was on before I updated to 5.5.8.2. Should I be reverting back? Does anyone have a link to 5.5.7.11?

It will work, but a few updates back, when SenseAI was added, DeepStack was deprecated. So if you are going to use DeepStack, the recommendation would be to go to the latest update prior to SenseAI being added. Otherwise you may be banging your head trying to make it work with the most recent update... or it may go smoothly.

Here is where the updates are located:

 
Try changing the mode to High. With the mode set to Medium, DeepStack downsizes the image to 416 x 416 before it tries to detect an object, so at that setting the skunk might be too small to detect. If you set the mode to High, DeepStack downsizes the image to 640 x 640 and has a better chance of detecting the skunk. After making the change, make sure you stop and restart DeepStack.

1658778547341.png
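To see why the mode matters, here is a rough back-of-the-envelope sketch (assuming the frame's longest side is scaled down to the model's input size; the ~80 px skunk in a 1920 px wide frame is a made-up example, not a measured value):

```python
def scaled_size(obj_px, frame_px, model_px):
    """Approximate object size in pixels after the frame's longest side
    is downscaled to the model's input resolution."""
    return obj_px * model_px / frame_px

# Hypothetical ~80 px wide skunk in a 1920 px wide frame:
medium = scaled_size(80, 1920, 416)  # Medium mode (416 x 416)
high = scaled_size(80, 1920, 640)    # High mode (640 x 640)
print(round(medium, 1), round(high, 1))  # roughly 17.3 vs 26.7 px
```

So the same skunk is roughly 50% larger on the model's input at High, which is why small animals that vanish at Medium can show up at High.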
 
Thanks, @MikeLud1 and @105437. The little stinker comes by most nights, so I should be able to get a real world test quickly. I figured I was likely close on my settings because the same cam identifies me as a person under that profile. Now, if it identifies me as a skunk after I make those changes, I’ll consider that it knows something I don’t.
 
Thanks, @MikeLud1 and @105437. The little stinker comes by most nights, so I should be able to get a real world test quickly. I figured I was likely close on my settings because the same cam identifies me as a person under that profile. Now, if it identifies me as a skunk after I make those changes, I’ll consider that it knows something I don’t.
After making the changes, you can play with Testing & Tuning. Find the clip with the skunk and, under Testing & Tuning, pick DeepStack Analysis. When you play the clip, it should run it through all the custom models you have loaded. Pause it and see if/when it identifies the skunk.
 
Hey everyone,

Quick question regarding DeepStack analyzing trigger pictures: how are you setting up your Blue Iris triggers on your cameras? At first I just played with Min. Object Size and Min. Contrast, then I incorporated zones and only triggered when an object moved from Zone A to Zone B, etc. Do you still keep those kinds of settings and let DeepStack analyze pictures based on those triggers, or do you turn off object detection, increase the sensitivity of Min. Object Size and Min. Contrast, and give DeepStack a better shot at detecting something? I ask because I saw a video where the person setting up DeepStack made his settings sensitive and let DeepStack analyze all his triggers to get a better shot at detecting something. While this may help avoid missing some motion, wouldn't it strain the GPU to be constantly analyzing pictures? What's your take on this?

Chris
 
Hey everyone,

Quick question regarding DeepStack analyzing trigger pictures: how are you setting up your Blue Iris triggers on your cameras? At first I just played with Min. Object Size and Min. Contrast, then I incorporated zones and only triggered when an object moved from Zone A to Zone B, etc. Do you still keep those kinds of settings and let DeepStack analyze pictures based on those triggers, or do you turn off object detection, increase the sensitivity of Min. Object Size and Min. Contrast, and give DeepStack a better shot at detecting something? I ask because I saw a video where the person setting up DeepStack made his settings sensitive and let DeepStack analyze all his triggers to get a better shot at detecting something. While this may help avoid missing some motion, wouldn't it strain the GPU to be constantly analyzing pictures? What's your take on this?

Chris
I saw the same video, and that's what I followed when setting up DeepStack. But I am also interested in what others are doing. I'm using the CPU and seeing decent response times with those settings. I haven't tried the GPU.