5.5.8 - June 13, 2022 - CodeProject's SenseAI Version 1 - See V2 here: https://ipcamtalk.com/threads/codeproject-ai-version-2-0.68030/

To make sure CodeProject AI is working, run a test with the link in the attachment below.


[Attachment 138116: the test link]
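If you prefer to script the check, here is a minimal sketch, assuming the DeepStack-compatible default port 5000 and any handy local JPEG (adjust both to your install):

# Quick sanity check that the server answers detection requests.
import requests

with open("test.jpg", "rb") as f:  # any local image will do
    r = requests.post("http://localhost:5000/v1/vision/detection",
                      files={"image": f})
print(r.status_code, r.json())  # expect "success": true and a predictions list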
MSI GTX 1660 Super 6GB GDDR6 is not working here, but a Gigabyte 1050 Ti 4GB GDDR5 works! Any guess why? It looks like I will have to return the 1660. Do you know of any cards with 6GB GDDR6 that are known to work (under $300)? They both use the same driver.
 
I can confirm that the RTX 2060 works. I can't say what the cost is in USD, but in the UK they sell for ~£250.

Are you using the same revision of drivers for both cards?
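Before swapping hardware, it might also be worth checking what the bundled PyTorch actually sees. A quick diagnostic sketch, run inside the server's Python environment:

import torch
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA runtime:", torch.version.cuda)
    print("cuDNN:", torch.backends.cudnn.version())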
 
On a confirmed alert, perhaps via a "Run Program/Web Request/Do Command" action, trigger AI analysis on a different camera at a timestamp offset of <X minutes/seconds>? Maybe to identify other object types on additional cameras not normally configured for AI.

Perhaps a future feature request? A rough sketch of the idea is below.
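Something like this could already be wired to "Run Program" today. Both URLs here are made up, and the endpoint assumes the DeepStack-compatible API:

import requests

SNAPSHOT_URL = "http://192.168.1.50/snapshot.jpg"            # hypothetical second camera
AI_SERVER    = "http://localhost:5000/v1/vision/detection"   # assumed server address

# Grab a still from the other camera and run it through the AI server.
img = requests.get(SNAPSHOT_URL, timeout=5).content
result = requests.post(AI_SERVER, files={"image": img}, timeout=10).json()
for p in result.get("predictions", []):
    print(p["label"], round(p["confidence"], 2))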
 

I can't get it working on a GTX 1650 either; I'm starting to wonder if anyone has it working with any GTX 16xx model.
 
I think you're right. Both the 1660 and the 1050 Ti take the same driver, and I even did a clean install of Windows 10 twice.
I also checked the hashes of the cuDNN files the script downloads against the ones you get when you log in to NVIDIA yourself. They match.
The only variable I could think of was the card. I have another computer running Blue Iris with DeepStack GPU on the 1050 Ti, so I put that card in the new computer.
It worked on the first try.
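For anyone who wants to repeat the hash comparison, something like this works (the file names are placeholders for wherever your two copies live):

import hashlib

def sha256(path):
    # Hash the file in 1 MB chunks so large archives don't load into RAM at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256("cudnn_from_script.zip"))   # downloaded by the install script
print(sha256("cudnn_from_nvidia.zip"))   # fetched manually after NVIDIA login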
 
Attached are "modulesettings.json" files that will disable the other modules and set the Custom Model module to high.
The issues some people are seeing might be related to the amount of GPU memory on older GPU cards. The post above has an attachment with modified "modulesettings.json" files that disable all the modules except the Custom Model module. You can try this to see if it helps.
The developers are aware of this issue and are working on a solution.
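If you'd rather script the change than copy the attached files, a sketch of the idea is below. The module name and the "Activate" key here are placeholders; check your own modulesettings.json for the real keys, which vary by version:

import json, pathlib

path = pathlib.Path(r"C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\modulesettings.json")
settings = json.loads(path.read_text())
# Turn every module off except the custom model module (name is hypothetical).
for name, module in settings.get("Modules", {}).items():
    module["Activate"] = (name == "CustomObjectDetection")
path.write_text(json.dumps(settings, indent=2))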
 
My GTX 1660 Super still did not work with your setup of only the Custom Model module and ipcam-general.pt or ipcam-combined.pt. I used all the modulesettings.json files you posted, and I also tried it with all modules enabled.
The 1050 Ti worked with either setup.
 
@MikeLud1
Further to your explanation and recommendation to use low-resolution streams for AI analysis: I'm having difficulty understanding how any software can analyse a snapshot of such low resolution for face identification. By way of example, I have attached a 640x480 capture of visitors from my doorbell camera. Needless to say, the face went unrecognised even though a good-quality, higher-resolution capture was in the library. I'm not sure I could recognise the person from the captured image myself. The only way I can see face recognition working is for BI to crop just the face and send it for analysis at the original resolution, which could actually end up being an image smaller than 640x480.

Am I missing something fundamental? I would also be interested in your thoughts on the image processing times, which seem excessive to me.
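For what it's worth, the crop-and-send idea is easy to prototype outside BI. A sketch, assuming the DeepStack-compatible face endpoint and made-up box coordinates (in practice they would come from a prior face-detection pass):

import io
import requests
from PIL import Image

frame = Image.open("hd_snapshot.jpg")            # hypothetical full-resolution frame
left, top, right, bottom = 880, 220, 1080, 470   # assumed face box from a detect pass
crop = frame.crop((left, top, right, bottom))    # the face keeps its native pixel density

buf = io.BytesIO()
crop.save(buf, format="JPEG")
r = requests.post("http://localhost:5000/v1/vision/face/recognize",
                  files={"image": buf.getvalue()})
print(r.json())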
I am looking into how face recognition handles image processing; currently I am not using face recognition myself. From what I can tell from the code (see the screenshot below), the face model resolution is 416 when set to high.

[Screenshot: the face module code, showing a resolution of 416 for the "high" setting]
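A back-of-the-envelope illustration of why that matters (all the numbers here are made up):

# A face 120 px wide in a 1920 px frame shrinks badly when the whole
# frame is resized down to the model's 416 px input.
frame_w, face_px, model_input = 1920, 120, 416
print(face_px * model_input / frame_w)  # -> 26 px wide face: little detail left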
 
The only way I have succeeded in having a face recognised is to enable "switch to HD on trigger if available". It's interesting to note that the unknown-face files are recognisable to the eye when this feature is enabled!
 
@MikeLud1 Have you heard any comments from people running an NVIDIA Quadro P400? It only has 2GB of memory, so I'm thinking the GPU version won't be much better than the CPU version running on this dedicated BI PC.

 
I have personally tried a T400 with 2GB of VRAM and it failed to work. At present it appears you need a minimum of 4GB to have success, but it's early days, and I guess it may in time work with more modest requirements.
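For anyone unsure how much VRAM their card reports, a one-off check from the server's Python environment:

import torch
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 2**30:.1f} GB VRAM")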
 

If it doesn't work, that is a shame, because many people bought the P400 or GT 1030 because they were cheap and work well with DeepStack. If those cards don't work well with the new AI, folks will just stay with what works for them.
 
I'm running a P400 and switched from DS a couple of weeks ago. It seems to be working fine and the analysis times seem to be around 25% faster (using IPCam-General and IPCam-Combined). I've also disabled the face and scene modules.
 
How much memory? This is the result I had with 2GB running Python: scene detection appears to work, but objects and faces are not predicted (NVIDIA T400 GPU).

2022-08-26 10:03:28: retrieved detection_queue command
2022-08-26 10:03:28 [Exception: Exception]: Traceback (most recent call last):
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\detection.py", line 83, in objectdetection_callback
    det = detector.predictFromImage(img, threshold)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\process.py", line 62, in predictFromImage
    pred = self.model(img, augment=False)[0]
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 136, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 159, in _forward_once
    x = m(x)  # run
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\.\models\yolo.py", line 68, in forward
    y[..., 0:2] = (y[..., 0:2] * 2 + self.grid) * self.stride  # xy
RuntimeError: The size of tensor a (32) must match the size of tensor b (28) at non-singleton dimension 2
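For what it's worth, that RuntimeError looks like a shape mismatch in the YOLO Detect layer rather than an out-of-memory error: the cached grid was built for one input size and the incoming feature map is another. A minimal sketch of the same failure with made-up sizes (shapes simplified to batch, grid_y, grid_x, xy):

import torch

stride = 32
grid = torch.zeros(1, 28, 28, 2)  # grid cached for an 896 px input (896 / 32 = 28 cells)
y    = torch.rand(1, 32, 32, 2)   # feature map from a 1024 px input (1024 / 32 = 32 cells)
y[..., 0:2] = (y[..., 0:2] * 2 + grid) * stride
# RuntimeError: The size of tensor a (32) must match the size of tensor b (28)
# at non-singleton dimension 2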
 
I'm in the same boat with a 1660 Super: it says the API server is online, but when I run the test shown in post #450, nothing is found.
 
To answer the memory question above: my P400 has 2GB of RAM.