5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

Amgclk65

Everything looks as it should. Does it work if you use the CPU instead of the GPU?
Yes. It'll also work if I backdate CodeProject AI to an earlier version; I don't recall which one - I deleted the installer. Granted, it was one of the first versions, I believe. I also tried uninstalling and reinstalling the Nvidia drivers. I did think it was odd that a few other members had 1660s and couldn't get it to work either.
 

MikeLud1

Did you try doing a repair with 1.6.0.0?
 

MikeLud1

I did not, I'll give it a try. Here is the CPU with the GPU disabled.
No luck running the repair.
I have been doing some research, and it looks like this is a known issue with GTX 16xx cards in all AI programs; it has to do with CUDA automatic mixed precision.
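For anyone who wants to experiment with that theory, here is a minimal sketch of the suspected failure mode, assuming a PyTorch-based detector like the YOLO modules; the model and the use_half flag are illustrative, not CodeProject's actual code:

import torch

# GTX 16xx (Turing TU116/TU117) cards are known to produce bad results with
# some FP16 kernels, so half-precision inference can silently return zero
# detections. The common workaround in PyTorch detectors is to stay in FP32.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # illustrative model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

use_half = False  # on a GTX 16xx, leave False rather than calling model.half()
if use_half:
    model = model.half()

If the suspicion is right, forcing FP32 trades a little inference speed for detections that actually come back.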
 

toastie

I'm disappointed; I'm getting no better results with the new software, 1.6-Beta. The BI status log still shows lots of green ticks but "Nothing found". I'm on CUDA 11.7.1 + cuDNN v8.5.0, and I've just updated the driver to 517.40 (dated 2022.9.20) for my Quadro T600, an Nvidia GPU in the RTX/Quadro series. Apart from still having the registry edit for the custom models and earlier re-installs of CodeProject AI (the uninstall of DeepStack was a while back), I think I'm mostly on standard installs. BI is on 5.6.1.3.

I get scene detection using test images in the browser, as has been the case for some time, but anything else is "No predictions returned" - no squares around any identified objects, which is what I did have with DeepStack.

I've other jobs to attend to today, but now and again I'll revisit this thread to get more ideas and things to try. I remain mystified why this works for some and not others; I'm not the only one who hasn't got it working.
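One way to take Blue Iris out of the loop is to post a test image straight at the server's DeepStack-compatible REST endpoint. A minimal sketch using Python's requests library, assuming a default localhost install; the port and test.jpg are placeholders, so use the port your server dashboard shows:

import requests

# Send one image directly to the object detection endpoint, bypassing BI.
with open("test.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/v1/vision/detection",  # adjust host/port to your install
        files={"image": f},
        data={"min_confidence": 0.4},
    )

# A healthy server returns "success": true with a "predictions" list; an empty
# list here too means the problem is in the server/GPU, not in Blue Iris.
print(resp.json())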
 

CrazyAsYou

I have been doing some research, and it looks like this is a known issue with GTX 16xx cards in all AI programs; it has to do with CUDA automatic mixed precision.
I can confirm the same issue with a GTX 1650; it's nothing to do with GPU RAM, as I have only one module running with one custom model. The same setup works fine with my GTX 970; both are 4GB cards, and the CUDA, driver, etc. installs are the same. The GTX 1650 was fine on DeepStack with CUDA 10.5, so I think Mike is correct: there is clearly a bug with 16xx cards and newer CUDA. The version of SenseAI makes no difference at all in my testing experience.
 

Amgclk65

Did anyone have luck with this? If so, let me know so I can have the developer add this fix to the next release.
Hey, just tried it. No luck. I wonder if backdating the drivers might work?
I got the 1660 just for CodeProject AI :). That's my luck.
 

dirk6665

In the earlier versions, I placed all my custom models in the "\CodeProject\AI\AnalysisLayer\CustomDetection\assets" directory. With the new version, it installed the "default" models with a 0 size and then said in the console log:

C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models\delivery.pt does not exist

delivery.pt was one of the custom model files that I had recently installed, but now it wants me to place it in the above path instead? I did so and it started working, but now I'm confused... are the custom models still going into the original assets folder, or into the new custom-models folder in this version (1.6-Beta)?

Oh, and I replaced all the existing zero-length models with what I had backed up before installing the latest beta, and they began working. Weirdness.

I still cannot run anything with CUDA support on this GT 730 - but I am assuming my video card just isn't good enough for that, and since this i7 Dell does not have the power-supply connectors needed for my GeForce GTX 970, I'm stuck with just running on the CPU:


2022-09-22 16:12:48: Unable to load model at C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\custom-models\ipcam-dark.pt (CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.)
2022-09-22 16:12:48: Unable to create YOLO detector for model ipcam-dark

... but the times aren't TOO bad...

2022-09-23 00:00:28: Module 'Object Detection (YOLO)' (command: custom) took 103ms
2022-09-23 00:00:28: Sending response for request from detection_queue took 4ms
2022-09-23 00:00:29: Retrieved detection_queue command
2022-09-23 00:00:29: Detecting using ipcam-dark
2022-09-23 00:00:29: Module 'Object Detection (YOLO)' (command: custom) took 104ms
2022-09-23 00:00:29: Sending response for request from detection_queue took 2ms
2022-09-23 00:00:40: Retrieved detection_queue command
2022-09-23 00:00:40: Detecting using ipcam-animal
2022-09-23 00:00:40: Module 'Object Detection (YOLO)' (command: custom) took 90ms
2022-09-23 00:00:40: Sending response for request from detection_queue took 2ms
2022-09-23 00:00:44: Retrieved detection_queue command
2022-09-23 00:00:44: Detecting using ipcam-dark
2022-09-23 00:00:44: Module 'Object Detection (YOLO)' (command: custom) took 102ms
2022-09-23 00:00:44: Sending response for request from detection_queue took 1ms
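That "no kernel image is available" error usually means the bundled PyTorch build was not compiled for the card's CUDA compute capability, which would fit an old GT 730. A minimal sketch to check what the card and the PyTorch build each support (assumes a single GPU at index 0):

import torch

# Compare the card's compute capability with the sm_XX architectures the
# installed PyTorch build was compiled for; a card below every listed target
# triggers exactly the "no kernel image" error above.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    print("PyTorch built for:", torch.cuda.get_arch_list())
else:
    print("CUDA is not available to this PyTorch build")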
 

MikeLud1

Are the custom models still going into the original assets folder, or into the new custom-models folder in this version (1.6-Beta)?
Have them in both folders for now until the Blue Iris integration is done
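Until the integration settles, a minimal sketch for mirroring the models into both locations; the paths are the ones quoted in this thread, so adjust them to your own install:

import shutil
from pathlib import Path

# Copy every custom .pt model from the old assets folder into the new
# custom-models folder so either lookup path finds them.
base = Path(r"C:\Program Files\CodeProject\AI\AnalysisLayer")
src = base / "CustomDetection" / "assets"             # old location
dst = base / "ObjectDetectionYolo" / "custom-models"  # new location

dst.mkdir(parents=True, exist_ok=True)
for model_file in src.glob("*.pt"):
    shutil.copy2(model_file, dst / model_file.name)
    print("copied", model_file.name)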
 

Philip Gonzales

Does SenseAI work with the Nvidia GeForce GTX 1650 SUPER yet? And if not, is this something the devs are looking into? Just curious, because I'm currently running DeepStack but saw several posts of people having issues with the same graphics card I have, so I don't want to switch over until it works with my card.
 

MikeLud1

Does SenseAI work with the Nvidia GeForce GTX 1650 SUPER yet?

You can try what is in the above post after installing 1.6.0.0. Tomorrow there is going to be a release with this change.
 

dirk6665

I just updated to 1.6.1-Beta and my detection times went WAY UP... I will try a reboot to see if that solves the speed problem, but I've already restarted both the BI and CP.AI services and didn't see any change.

2022-09-23 22:12:46: Module 'Object Detection (YOLO)' (command: custom) took 3033ms
2022-09-23 22:12:46: Sending response for request from detection_queue took 9ms
2022-09-23 22:12:47: Module 'Object Detection (YOLO)' (command: custom) took 4084ms
2022-09-23 22:12:47: Sending response for request from detection_queue took 3ms
2022-09-23 22:12:47: Module 'Object Detection (YOLO)' (command: custom) took 3598ms
2022-09-23 22:12:47: Sending response for request from detection_queue took 4ms
2022-09-23 22:12:49: Retrieved detection_queue command

Obviously, these times are unacceptable.
 

MikeLud1

I just updated to 1.6.1-Beta and my detection times went WAY UP... Obviously, these times are unacceptable.
If you just started CP.AI, the first detection with a given model will be slow because the model needs to be loaded into memory; once it is loaded, speeds should be back to normal. This was done to save on memory usage.
If you run a detection a second time, does the speed improve?
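For context, that behavior is the classic lazy-load-and-cache pattern; a minimal sketch of the idea (not CodeProject's actual code, and get_detector is a made-up name):

import torch

_loaded = {}  # model name -> detector already resident in memory

def get_detector(name: str, model_dir: str = "custom-models"):
    # The first request for a model pays the disk-read and GPU-transfer cost;
    # every later request returns the cached instance immediately.
    if name not in _loaded:
        _loaded[name] = torch.hub.load(
            "ultralytics/yolov5", "custom", path=f"{model_dir}/{name}.pt"
        )
    return _loaded[name]

The memory saving comes from never loading models that nothing asks for.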
 

dirk6665

If you run a detection a second time, does the speed improve?
I mean, I guess so? I'm still seeing some detections take over 600ms, and that was rare on the previous beta:

2022-09-23 23:24:10: Module 'Object Detection (YOLO)' (command: custom) took 137ms
2022-09-23 23:24:10: Sending response for request from detection_queue took 4ms
2022-09-23 23:24:21: Retrieved detection_queue command
2022-09-23 23:24:21: Detecting using delivery
2022-09-23 23:24:21: Module 'Object Detection (YOLO)' (command: custom) took 155ms
2022-09-23 23:24:21: Sending response for request from detection_queue took 4ms
2022-09-23 23:24:23: Retrieved detection_queue command
2022-09-23 23:24:23: Detecting using ipcam-dark
2022-09-23 23:24:24: Module 'Object Detection (YOLO)' (command: custom) took 1007ms
2022-09-23 23:24:24: Sending response for request from detection_queue took 3ms

I will let it run its course until noon tomorrow, make an assessment, and report back.
 

dirk6665

Also, using the Explorer and running a benchmark, I received this error (twice):

[screenshots of the benchmark error]

And the models available in Vision are different from those in the benchmark:

[screenshot of the two model lists]

I deleted openlogo from all directories after I installed "delivery.pt" - no idea where it's pulling this from :screwy:

UPDATE: I had to force-refresh the page and now it is showing the correct content.

[screenshot of the corrected model list]
 