Thanks Mike for all your help. Model size has to do with the amount of training data that is in the model file: the more data, the better the accuracy, but the downside is that a larger model slows down how fast it can detect objects.
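As a rough illustration (not the actual module code), the model-size setting generally just decides which YOLOv5 weights file gets loaded, which is where the accuracy/speed trade-off comes from. The file names below are the standard Ultralytics ones, but the mapping and the pick_weights() helper are only assumptions for illustration.
Python:
# Illustrative only: how a "model size" setting typically selects a YOLOv5 weights file.
# Larger files hold more parameters -> better accuracy, slower inference.
MODEL_FILES = {
    "tiny":   "yolov5n.pt",   # fewest parameters, fastest
    "small":  "yolov5s.pt",
    "medium": "yolov5m.pt",
    "large":  "yolov5l.pt",   # most parameters here, slowest but most accurate
}

def pick_weights(model_size: str) -> str:
    """Return the weights file for the requested size, defaulting to medium."""
    return MODEL_FILES.get(model_size.lower(), MODEL_FILES["medium"])

print(pick_weights("large"))  # -> yolov5l.pt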
Half-precision has to do with the format of the data being calculated. Some GPUs cannot do half-precision calculations; they are listed below.
Python:
no_half = ["TU102", "TU104", "TU106", "TU116", "TU117",
           "GeForce GT 1030", "GeForce GTX 1050", "GeForce GTX 1060",
           "GeForce GTX 1060", "GeForce GTX 1070", "GeForce GTX 1080",
           "GeForce RTX 2060", "GeForce RTX 2070", "GeForce RTX 2080",
           "GeForce GTX 1650", "GeForce GTX 1660", "MX550", "MX450",
           "Quadro RTX 8000", "Quadro RTX 6000", "Quadro RTX 5000",
           "Quadro RTX 4000",
           # "Quadro P1000", - this works with half!
           "Quadro P620", "Quadro P400",
           "T1000", "T600", "T400", "T1200", "T500", "T2000",
           "Tesla T4"]
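As a rough sketch (not the module's exact code), a list like the one above is typically used by comparing the detected GPU name against each entry and falling back to FP32 on a match. torch.cuda.get_device_name() is a real PyTorch call, but the use_half_precision() helper is just an illustrative name.
Python:
import torch

def use_half_precision(no_half: list[str]) -> bool:
    """Illustrative helper: decide whether FP16 should be used on the current GPU."""
    if not torch.cuda.is_available():
        return False
    device_name = torch.cuda.get_device_name(0)
    # If any block-listed name appears in the device name, disable half precision.
    return not any(bad in device_name for bad in no_half)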
I'm running Blue Iris with either default detection or custom models.
I have an EVGA GeForce RTX 2060 12GB XC Black Gaming card (12G-P4-2261-KR).
I see it's on the list as not supporting half precision.
When I look up the specs, they say half precision is supported.
Is this just an interaction with CodeProject?
I only run YOLOv5 6.2 with GPU (CUDA) enabled, half precision enabled, and model size Large.
When I click on Info it shows half precision enabled.
Can you tell me whether half precision is actually being disabled behind the scenes, even though I enabled it and the log shows it as enabled?
I've always run these settings and have had no issue with detection or detection times.
What kind of issue would I see if I forced half precision and it wasn't supported?
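For what it's worth, here is a quick generic sanity check that can be run from Python to see whether PyTorch sees the card and whether an FP16 operation actually runs on it. These are standard PyTorch calls, not anything from CodeProject.AI itself.
Python:
import torch

assert torch.cuda.is_available(), "CUDA GPU not visible to PyTorch"
print(torch.cuda.get_device_name(0))        # e.g. "NVIDIA GeForce RTX 2060"
print(torch.cuda.get_device_capability(0))  # (7, 5) for Turing cards like the 2060

# Try a small half-precision matrix multiply on the GPU.
a = torch.randn(64, 64, device="cuda", dtype=torch.float16)
b = torch.randn(64, 64, device="cuda", dtype=torch.float16)
print((a @ b).dtype)  # torch.float16 if the FP16 op ran without error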