Bingo, that newer version (by a few hours) of the 2.0.8 installer script took care of the issue. Thank you @MikeLud1
Hi all, can someone explain to me how I would load custom models when I am running CPAI in Docker on a different machine than the one Blue Iris is running on? Do I need to point both the Docker container and Blue Iris to the same folder that holds the custom models? Or does Blue Iris only need to know about the custom models in order to determine the tags that are trained into each model?
gwithers' reply is probably the right way to do this, and his info on using Docker has been very useful to me.
I have Docker Desktop running on Windows under WSL2 on the same box as BI and could not get any custom models going until I found MikeLud1's "Blue Iris and CodeProject.AI ALPR" guide, which says to create a directory with blank text files using the same names as the custom models. This seems to work, and the Docker instance must be downloading/getting the right models, as its log file shows it processing ipcam-combined, general, license-plate, etc. Once I had the dummy files in place I could also check the ALPR box for plates in the BI settings.
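In case it helps anyone following the same guide, here is a rough sketch of what that dummy-file step could look like in Python. The folder path, the model names, and the .pt extension are only assumptions for illustration; match them to whatever folder your BI custom models setting points at and to the model names your Docker log actually reports.

[CODE]
from pathlib import Path

# Folder that Blue Iris's custom models setting points at (assumed path -- adjust to yours).
custom_model_dir = Path(r"C:\CodeProject\AI\custom-models")

# Model names as reported in the Docker instance's log (assumed -- match yours).
model_names = ["ipcam-combined", "ipcam-general", "license-plate"]

custom_model_dir.mkdir(parents=True, exist_ok=True)
for name in model_names:
    placeholder = custom_model_dir / f"{name}.pt"  # zero-byte placeholder file
    placeholder.touch(exist_ok=True)
    print(f"Created {placeholder}")
[/CODE]

The blank files apparently only need to exist so that BI can list the models and their tags; per the post above, the Docker instance still pulls and runs the real models itself.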
Would someone be so kind as to refresh my memory on what causes the "double tagging" issue, where recognized objects end up with two squares and tags (e.g. dog) around them? I thought it was caused by having the "Default object detection" box checked in BI Settings > AI, but after unchecking that and rebooting I'm still getting the issue.
[QUOTE]That is two models actively searching for objects.[/QUOTE]
Ah... so two of the default included custom models, you're saying? If that's the case, what is the easiest way to disable one or more of these models? Just delete the file from the custom models folder, lol?
Glad to see work is progressing on Coral support; hopefully we will see some dramatic improvements in inference time eventually. Two seconds is not really usable at this point, unless that is on a massively larger photo than the usual "medium" size models/images from BI.
Thanks, I did do a clean CPAI install for 2.0.7 because it didn't work at all with BI initially. I haven't gone to 2.0.8 yet. Will try today, hopefully.

For folks having issues with BI not allowing a model size selection, CPAI not displaying a model size, or getting CPAI to stick to a model size selection, I'd suggest an uninstall of CPAI, deleting the directories, a reboot, and a clean re-install. Doing this fixed my issues with CPAI 2.0.8 and BI 5.7.0.4.
If the Coral support gets figured out, is there a plan for which Coral boards to support: USB, PCIe, or M.2?
I've been running the CPAI CPU version for some time now, but I recently picked up an Nvidia Quadro P620 to try the GPU version.
I now have YOLOv5 6.2 running on the GPU using ipcam-combined. It seems to be working well, but when I pull up Task Manager it shows that all of the activity is going through the integrated graphics rather than the P620 GPU. Did I miss something somewhere?
If you have a CUDA-enabled Nvidia card, please ensure you do the following (a quick way to verify the result is sketched after the list):
[LIST=1]
[*]Install the [B][URL='https://www.nvidia.com/download/index.aspx']CUDA Drivers[/URL][/B].
[*]Install the [B][URL='https://developer.nvidia.com/cuda-11-7-0-download-archive']CUDA Toolkit 11.7[/URL][/B].
[*]Download and run our [B][URL='https://www.codeproject.com/KB/Articles/5322557/install_CUDnn.zip']cuDNN install script[/URL][/B] to install cuDNN.
[/LIST]
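As a quick check that those three steps actually took, PyTorch (which the YOLOv5 modules run on) can report whether CUDA and cuDNN are visible. This is just a minimal sketch, assuming you run it from a Python environment that has the GPU build of PyTorch installed; it is not part of the installer itself.

[CODE]
import torch

# Confirm PyTorch can see the CUDA runtime, cuDNN, and the card itself.
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:  ", torch.version.cuda)          # expect 11.7 per the steps above
print("cuDNN present: ", torch.backends.cudnn.is_available())
print("cuDNN version: ", torch.backends.cudnn.version())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
[/CODE]

If "CUDA available" comes back False after installing everything, the driver/toolkit mismatch described below is the usual suspect.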
[B]Nvidia downloads and drivers are challenging![/B] Please ensure you download a driver that is compatible with CUDA 11.7, which generally means driver version 516.94 or below. Version 522.x or above may not work. You may need to refer to the release notes for each driver to confirm.
Since we are using CUDA 11.7 (which has support for compute capability 3.7 and above), we can only support Nvidia CUDA cards that are equal to or better than a GK210 or Tesla K80 card. Please refer to [URL='https://en.wikipedia.org/wiki/CUDA#GPUs_supported']this table[/URL] of supported cards to determine if your card has compute capability 3.7 or above.
Newer cards such as the GTX 10xx, 20xx, and 30xx series, as well as the RTX and MX series, are fully supported.
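If you are not sure where your card falls in that table, the compute capability can also be read programmatically. A minimal sketch, again assuming a Python environment with the CUDA build of PyTorch; the 3.7 cutoff comes from the note above.

[CODE]
import torch

# Report the card's CUDA compute capability and compare it to the 3.7 minimum for CUDA 11.7.
if not torch.cuda.is_available():
    print("No CUDA-capable GPU is visible to PyTorch.")
else:
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    print(f"{name}: compute capability {major}.{minor}")
    print("Meets the 3.7 minimum:", (major, minor) >= (3, 7))
[/CODE]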