Be Careful with CodeProject AI 2.1.0

150GB, how big should it be?
I have 22TB of storage and my alerts folder is set to ~10GB - I think this is your issue.

@fenderman can probably confirm if that seems problematic
Your alerts folder can generally be set very small, 1GB or less, because nothing is actually stored in that folder unless you save high-res alert images - which are not needed unless you have a specific use case.
 
Yep I have over 20TB of storage and my alerts is set to 1GB.

Even though the alerts folder looks empty, it contains hidden files that are the thumbnail pointers to the BVR files. With an alerts folder that large, that is why you are over 400,000 clips: it now has pointers to BVR files that don't exist anymore.

BI gets unstable after 200,000 clips.
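If you want to sanity-check how many of those hidden pointer files are actually sitting in the alerts folder without opening Explorer, a quick script like this will count them, hidden files included. (The demo below runs on a throwaway folder; the real Blue Iris alerts path varies by install and is only guessed at in the comment.)

```python
import os
import tempfile

def count_files(folder):
    """Count entries in a folder, hidden files included, without
    building the giant listing that makes Explorer choke."""
    total = 0
    with os.scandir(folder) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False):
                total += 1
    return total

# Demo on a throwaway folder; the real target would be your BI alerts
# folder (e.g. D:\BlueIris\Alerts - path is a guess, adjust to your install).
with tempfile.TemporaryDirectory() as demo:
    for i in range(5):
        open(os.path.join(demo, f"thumb{i}.jpg"), "w").close()
    print(count_files(demo))  # -> 5
```

os.scandir streams directory entries one at a time, so it stays fast even on folders with hundreds of thousands of files.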
 
I failed to include this in my config - I have it set to a 10GB limit, but in this situation I use a day limit that corresponds to the number of days of storage my BI holds. So my alerts folder is actually only 89MB even though the limit is set to 10GB. When I add cameras or storage, I recalibrate the time so that I am only keeping the alerts that correspond to my footage retention. My footage retention is based only on size, though, not on a time limit.
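That recalibration is simple arithmetic. As a rough sketch (all the numbers here are made up for illustration, not taken from this thread), you can estimate how many days of alerts to keep from your storage size and average recording rate:

```python
# Hypothetical numbers - plug in your own storage size and per-day usage.
storage_tb = 22                # total recording storage
gb_per_camera_per_day = 40     # average BVR data written per camera per day
cameras = 8

# Days of footage the storage holds; use this as the alerts day limit.
days_of_footage = (storage_tb * 1000) / (gb_per_camera_per_day * cameras)
print(round(days_of_footage, 1))  # -> 68.8
```

Setting the alerts day limit to roughly this number keeps the alert pointers aligned with the footage that actually still exists.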

Sorry!
 
I'm not sure this is going to fix the CodeProject AI 2.1.0 issue, but we shall see; the DB is still repairing.
You cannot run a BI system if the DB takes hours to rebuild; no recording takes place during a rebuild.
How many actual files do you have in the alerts folder and your recording folders?
Did you actually delete the DB folder or just hit rebuild?
 
You have a catastrophic issue with your BI setup which needs to be corrected before we can fix the CodeProject AI issue.
Yeah, I'm still waiting; haven't tested anything.

Dude, I have so many files in the alerts folder it's stupid. It crashes Explorer just trying to delete files.
 
That is because one or more of your cameras is set to save high-res alert images; turn that off. Delete the entire alerts and DB folders.
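When Explorer chokes on a folder with hundreds of thousands of files, deleting it programmatically is far faster. A minimal sketch (the demo runs on a throwaway folder; against a real install you would stop the Blue Iris service first so nothing is locked, and the BI folder paths vary by install):

```python
import os
import shutil
import tempfile

def nuke_folder(folder):
    """Delete a folder tree entry by entry; much faster than Explorer,
    which tries to enumerate everything up front."""
    shutil.rmtree(folder, ignore_errors=True)

# Demo on a throwaway folder. The real targets would be your BI Alerts
# and db folders (paths depend on your install - not hard-coded here).
doomed = tempfile.mkdtemp()
open(os.path.join(doomed, "stale_thumb.jpg"), "w").close()
nuke_folder(doomed)
print(os.path.exists(doomed))  # -> False
```

BI recreates the db folder and rebuilds a fresh database on the next start, which is the point of the exercise.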
 
Well shit, all except one of my cams had that setting. Can't remember why. Any benefit to it? A processing advantage?
There is no benefit to it if you need to ask. This is the root cause of your problem. Hundreds of thousands of unnecessary files. It is off by default. It is important to understand any option before clicking on it.
 

I think I had it on back when the AI Tool was a thing; I can't recall exactly. I seem to remember the higher-quality image being better for image processing, but I'm not sure that's accurate anymore.

OK, so it's still not detecting objects with 2.1.0. I shut BI down when it installed so it didn't crash the server. CodeProject is putting out this error message in the log, however. Any idea what it means?

"
19:33:47:face.py: Fusing layers...
19:33:47:face.py: YOLOv5m summary: 316 layers, 21468630 parameters, 0 gradients
19:34:30:detect_adapter.py: Fusing layers...
19:34:30:detect_adapter.py: YOLOv5.1m summary: 391 layers, 21805053 parameters, 0 gradients
19:34:30:detect_adapter.py: Adding AutoShape...
19:34:31:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYolo/detect.py", line 162, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (15) must match the size of tensor b (12) at non-singleton dimension 2
19:34:31:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYolo/detect.py", line 162, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (15) must match the size of tensor b (12) at non-singleton dimension 2
19:34:31:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYolo/detect.py", line 162, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
19:36:28:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYolo/detect.py", line 162, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 705, in forward
y = self.model(x, augment=augment) # forward
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 515, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/yolo.py", line 75, in forward
wh = (wh * 2) ** 2 * self.anchor_grid # wh
RuntimeError: The size of tensor a (48) must match the size of tensor b (60) at non-singleton dimension 2
"
 
You can still use a high res image for AI processing. That is a separate setting.
 
Yes, but you likely don't even need that, and you can save CPU, particularly if you use a 720p or 1080p substream rather than a D1 substream.
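For scale, the CPU saving tracks pixel count, and the ~0.3MP substream figure quoted in this thread is roughly D1. A quick comparison of the common options:

```python
# Pixel counts for common substream resolutions; AI inference cost scales
# roughly with how many pixels the detector has to resize and scan.
resolutions = {
    "D1 (704x480)": 704 * 480,
    "720p (1280x720)": 1280 * 720,
    "1080p (1920x1080)": 1920 * 1080,
}
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.2f} MP")  # D1 comes out near 0.34 MP
```

So a 1080p substream carries about six times the pixels of D1, which is where the trade-off between detection detail and CPU load comes from.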
According to BI, all of my cameras have a 0.3MP substream. I'm not too worried about resolution for processing images; it works well enough.

What is your opinion on the error log from CPAI 2.1.0 that I posted above? I've reverted to 2.0.8 in the meantime.
 
Downloaded and installed CodeProject AI 2.1.1 today and that doesn't work either: it can't detect objects. I guess BI has to be updated to work with the latest version.

Mike
 