So, I have some questions that perhaps MikeLud1 or some of the other seasoned hardware evangelists can answer.
Firstly, in my BI logs I see the following over and over, and I don't recall seeing it until recently:
View attachment 139778
Like literally hundreds and hundreds. I don't know if that has anything to do with my introduction of CodeProject.AI, but I suspect not.
Secondly, in the last month I have purchased an entire new (used) computer (i7-6700) and two Nvidia video cards in an attempt to get CP.AI running as fast as I can afford. The first card was an Nvidia GeForce GTX 970, which seemed to work well for BI and CP.AI, but the new i7-6700 did not have the extra power connectors it requires, so I then purchased an Nvidia GeForce GT 730, as it has no extra power requirements. When using this card and activating CUDA, I get the following errors in the CP.AI console:
2022-08-29 11:51:19 [Exception: Exception]: Traceback (most recent call last):
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\scene.py", line 95, in sceneclassification_callback
cl, conf = classifier.predict(img)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\Vision\intelligencelayer\scene.py", line 46, in predict
logit = self.model.forward(image_tensors)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torchvision\models\resnet.py", line 249, in forward
return self._forward_impl(x)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torchvision\models\resnet.py", line 234, in _forward_impl
x = self.relu(x)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\modules\activation.py", line 98, in forward
return F.relu(input, inplace=self.inplace)
File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\python37\venv\lib\site-packages\torch\nn\functional.py", line 1297, in relu
result = torch.relu_(input)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
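From what I've read, this "no kernel image is available" error usually means the prebuilt PyTorch binaries simply don't include kernels compiled for the card's compute capability. Here is a little sketch of how my three cards compare; the compute capabilities are NVIDIA's published specs, and the minimum-supported value is my assumption about what the prebuilt CUDA 11.x PyTorch wheels ship:

```python
# Sketch: why some cards hit "no kernel image is available" with prebuilt
# PyTorch CUDA 11.x wheels. Compute capabilities per NVIDIA's spec pages;
# the GT 730 ships in both Fermi (2.1) and Kepler GK208 (3.5) variants.
CARDS = {
    "Quadro FX 1800": (1, 1),   # Tesla architecture
    "GeForce GT 730": (3, 5),   # Kepler GK208 variant (Fermi variants are 2.1)
    "GeForce GTX 970": (5, 2),  # Maxwell
}

# Assumption: sm_37 is the oldest architecture included in the CUDA 11.x
# PyTorch wheels (older builds went down to sm_35).
MIN_SUPPORTED = (3, 7)

def cuda_usable(card):
    """True if the prebuilt wheels include kernels for this card."""
    return CARDS[card] >= MIN_SUPPORTED

for card in CARDS:
    print(card, "->", "CUDA OK" if cuda_usable(card) else "no kernel image")
```

If that's right, the GT 730 falls just below the cutoff while the GTX 970 is fine, which would explain why only the 970 ever worked with CUDA.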
Lastly, I have followed this thread since its inception, and I believe I have done all the CUDA installation steps correctly and run the installation script (install_CUDnn.bat). But when I run "nvidia-smi", I receive the following output (note that my CUDA version is reported as 11.4, not 11.7, despite the fact that I could swear I chose 11.7 during installation). Is the CUDA version being 11.4 instead of 11.7 the reason for the errors above?
View attachment 139777
My CodeProject.AI is functioning fine with CUDA support disabled, as indicated below.
View attachment 139784
I'm rather shocked that, out of three Nvidia cards (Quadro FX 1800, GeForce GTX 970 and GeForce GT 730), only the 970 enabled CUDA on one or two of the detection options... but that was on my previous machine, an AMD FX-6300 (six core), which seemed to run the CPU high all the time - that is why I opted to get the i7-6700.
Anyway - any suggestions would be greatly appreciated. I can certainly run this in CPU mode, but having bought the hardware, I'd rather see my money put to good use and utilize it.
--Dirk