5.5.8 - June 13, 2022 - Code Project’s SenseAI Version 1 - See V2 here https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

There's the reason I chose that screen name :banghead:

Well, RIF/RTFM/Search struck again... From what I found (I really should have searched better before I posted), it's okay for it to be grayed out; you just define things in each camera tab instead.

Edit to add: for a local install of SenseAI on Windows 10, here's the custom models path (for the next person who's struggling; stolen from page 17 of this thread):

C:\Program Files\CodeProject\AI\AnalysisLayer\CustomDetection\assets
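If you want to confirm which custom models a local install actually has available, here's a quick sketch (in Python) that lists the .pt weight files sitting in that assets folder; the path is the one above, so adjust it if your install location differs:

```python
from pathlib import Path

# Custom-model assets folder from the post above; adjust if your install differs.
ASSETS = Path(r"C:\Program Files\CodeProject\AI\AnalysisLayer\CustomDetection\assets")

# Custom detection models ship as YOLOv5 .pt weight files; list whatever is present.
for model_file in sorted(ASSETS.glob("*.pt")):
    size_mb = model_file.stat().st_size / (1024 * 1024)
    print(f"{model_file.stem:20s} {size_mb:6.1f} MB")
```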
 
Last night my SenseAI Docker container updated to 1.5.7. Now the default object detection can't be enabled anymore; see the picture below:

1.5.7.png

Are others seeing this as well after the 1.5.7 update? And does anyone know how to fix it?
 
Well, I'm not sure this is the desired result using the custom model. Apparently I'm a Fire Hydrant.

I have two cameras facing somewhat similar areas (about 50' apart, facing each other). I walked between both of them in a big circle as a test to see any differences between the default detection and the custom detection.

Note: the cameras have a view of some vacant land behind my house, so I have vehicle detection enabled to see if people are driving off-road vehicles up to my property. I could likely do away with "boat"... but I kinda wanna see what it flags as a boat at this point. I have dog/cat/horse/bear/etc. added, as that seems to be what it identifies coyotes/deer/mountain lions as when I was playing with its recognition.

Capture.JPG


Not sure how I feel about being identified as a "Fire Hydrant", however.
Capture2.JPG

The other camera, which doesn't have the custom model loaded, saw me and identified me as a person (and not as a Fire Hydrant).




I'll also add that running SenseAI locally versus in Docker has been a HUGE reduction in resource usage. RAM usage dropped to about 5-6 GB for SenseAI, versus the 8-12 GB I saw running in Docker. CPU usage is also roughly cut in half, running the same version of SenseAI.
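For anyone who wants to reproduce that comparison, here's a rough sketch of totalling resident memory for the SenseAI-related processes with psutil. The process-name filters are assumptions on my part, so check what the processes are actually called on your machine (Task Manager, or docker stats for the container) before trusting the number:

```python
import psutil

# Sum resident memory for processes that look like they belong to SenseAI /
# CodeProject.AI. NAME_HINTS is an assumption -- adjust to the actual process
# names on your system.
NAME_HINTS = ("python", "codeproject", "senseai")

total_rss = 0
for proc in psutil.process_iter(["name", "memory_info"]):
    name = (proc.info["name"] or "").lower()
    mem = proc.info["memory_info"]
    if mem and any(hint in name for hint in NAME_HINTS):
        total_rss += mem.rss

print(f"Approx. RAM used by matching processes: {total_rss / 1024**3:.1f} GB")
```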
 
This time I put the 1.6.0 installer to the test. I left everything running and then ran it. It even retained my previous module selection. I did have to refresh Edge, though.
 

Attachment: Screenshot 2022-09-22 143642.png
So do we need to change BI to port 32168? Mine is working on 5000, so I was confused as to what would be best.
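Either should work as long as BI points at a port the server is actually listening on; from this thread, 5000 is the older default and 32168 the newer one, and 5000 still answers for now. A quick, hedged way to check which of the two ports is open on your machine (assumes the server runs locally):

```python
import socket

# Probe the two ports CodeProject.AI has used: 5000 (older default) and 32168
# (newer default). HOST assumes the server is on this machine -- change if not.
HOST = "127.0.0.1"

for port in (5000, 32168):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        listening = s.connect_ex((HOST, port)) == 0
    print(f"port {port}: {'listening' if listening else 'no response'}")
```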
 
OK, preliminary results look promising (i.e., I didn't completely break anything... yet ;)). I have this version running on the two cams that I had DeepStack running on previously, after some minor tweaks to adapt to the newer model names (ipcam-general and ipcam-combined in my case) and the ports used (5000 still works for now; it actually had some trouble working with 32168, but that might be solvable with a reboot).

Could anyone please remind me where to find the classifications (person, vehicle, car, animal, etc.) used when running the various custom models? I seem to have misplaced my links to those explanations.

Also, I have Default Object detection turned off in BI, so naturally the CP-AI dashboard also shows the "Object Detection (.NET)" service as "Not Enabled". I presume that turning it on within BI will start it up. Any recommendations on when this SHOULD be used (i.e., only with a high-power NVIDIA GPU card, as I run CPU only)? Also, what object names apply in that use case?

Finally, I keep seeing references to the various YOLOv5 models: where are they to be found and enabled? I don't even see references to them in the CP-AI dashboard dropdown boxes when testing and benchmarking, much less within BI...
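On the classification question, one practical way to see what a given custom model calls things is to throw a test image at its endpoint and print the labels that come back. A minimal sketch, assuming CodeProject.AI's DeepStack-compatible route /v1/vision/custom/<model-name>; the port, model name, and image file are placeholders to swap for your own:

```python
import requests

# Assumed DeepStack-compatible custom-model route; adjust host/port/model name.
URL = "http://127.0.0.1:5000/v1/vision/custom/ipcam-combined"

# Placeholder test image -- use a snapshot from one of your cameras.
with open("test-frame.jpg", "rb") as f:
    resp = requests.post(URL, files={"image": f}, data={"min_confidence": 0.4}, timeout=30)

resp.raise_for_status()
for pred in resp.json().get("predictions", []):
    print(f"{pred['label']:12s} {pred['confidence']:.2f}")
```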
 

You mean this?

Screenshot 2022-06-25 154029.png
 
Could anyone please remind me where to find the classifications (person, vehicle, car, animal, etc.) used when running the various custom models?
CodeProject.AI-Custom-IPcam-Models
IPcam-combined labels: person, bicycle, car, motorcycle, bus, truck, bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

IPcam-general labels (includes dark-model images): person, vehicle

IPcam-animal labels: bird, cat, dog, horse, sheep, cow, bear, deer, rabbit, raccoon, fox, skunk, squirrel, pig

IPcam-dark labels: Bicycle, Bus, Car, Cat, Dog, Motorcycle, Person
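Purely for convenience, here are the same lists as a small lookup table so you can check at a glance which custom model covers a label you care about (label lists copied from the post above, normalized to lowercase):

```python
# Label lists from CodeProject.AI-Custom-IPcam-Models, as quoted above.
MODEL_LABELS = {
    "ipcam-combined": {
        "person", "bicycle", "car", "motorcycle", "bus", "truck", "bird", "cat",
        "dog", "horse", "sheep", "cow", "bear", "deer", "rabbit", "raccoon",
        "fox", "skunk", "squirrel", "pig",
    },
    "ipcam-general": {"person", "vehicle"},
    "ipcam-animal": {
        "bird", "cat", "dog", "horse", "sheep", "cow", "bear", "deer", "rabbit",
        "raccoon", "fox", "skunk", "squirrel", "pig",
    },
    "ipcam-dark": {"bicycle", "bus", "car", "cat", "dog", "motorcycle", "person"},
}

def models_covering(label: str) -> list[str]:
    """Return the custom models whose label list includes the given label."""
    return [model for model, labels in MODEL_LABELS.items() if label.lower() in labels]

print(models_covering("deer"))   # ['ipcam-combined', 'ipcam-animal']
print(models_covering("truck"))  # ['ipcam-combined']
```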
 
Thank you both, very helpful to see those again and possibly adjust as needed. I'm sure it will help others reviewing this thread, as things like this tend to get scattered around and somewhat lost in the weeds.

Any suggestions or comments on the other points I brought up? When should Default object detection be used? Is it too memory/CPU intensive for a CPU-only system (in my case an i5-8500), and what labels apply then?
And how do I enable and test some of the other models (YOLOv5l, etc.)?
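Not a definitive answer on the CPU question, but one way to judge whether the default detector is too heavy for a CPU-only box is simply to time a few requests against the default endpoint and a custom-model endpoint and compare. A rough sketch, assuming the DeepStack-style routes and port 5000; the host and test image are placeholders:

```python
import time
import requests

HOST = "http://127.0.0.1:5000"  # placeholder; point at your CP-AI server

# Default detection route vs. a custom-model route (both assumed to be the
# DeepStack-compatible endpoints CodeProject.AI exposes).
ENDPOINTS = {
    "default object detection": f"{HOST}/v1/vision/detection",
    "custom ipcam-general": f"{HOST}/v1/vision/custom/ipcam-general",
}

with open("test-frame.jpg", "rb") as f:  # placeholder snapshot from a camera
    image_bytes = f.read()

for name, url in ENDPOINTS.items():
    times = []
    for _ in range(5):
        start = time.perf_counter()
        requests.post(url, files={"image": image_bytes}, timeout=60).raise_for_status()
        times.append(time.perf_counter() - start)
    avg_ms = sum(times) / len(times) * 1000
    print(f"{name:26s} avg {avg_ms:.0f} ms over {len(times)} runs")
```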
 


This was covered in a post earlier today: 5.5.8 - June 13, 2022 - Code Project’s SenseAI.
 
Hey,
Still getting "No predictions returned" with 1.6 and a GTX 1660 Super.
 

Attachments: Screenshot 2022-09-22 at 9.00.52 PM.png, Screenshot 2022-09-22 at 9.02.11 PM.png
It's a 6 GB card, and it looks like around 2.4 GB is being used by Python. With some of the first versions of CodeProject.AI I did see prediction stats, although it never worked with Blue Iris. I did read of a few users in this thread also having trouble getting 1660 cards to work. Not too sure if it's tied to the specific card.
 
Do you have the latest CUDA 11.7.1 installed, plus cuDNN v8.5.0, along with the latest video card driver? The 1660 should have no issue running CodeProject.AI.
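One quick sanity check, since the YOLOv5-based object detection module runs on PyTorch: ask Python whether it can actually see the card and which CUDA/cuDNN build it was compiled against. A minimal sketch; run it with the same Python environment the detection module uses (for a local install, the one bundled with CodeProject.AI):

```python
import torch

# Report whether PyTorch can see the GPU and which CUDA/cuDNN build it carries.
print("CUDA available:  ", torch.cuda.is_available())
print("Torch CUDA build:", torch.version.cuda)
print("cuDNN version:   ", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("Device:          ", torch.cuda.get_device_name(0))
```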
 
Yeah.
 

Attachment: Screenshot 2022-09-22 at 9.47.52 PM.png