AI: error 500 and AI: error 200

Spoke too soon. Not a single proper alert in over 12 hours. Only 200 and 500 errors.
 
Same here. It was all good for about an hour and then went right back to 200 and 500 errors. From what I can tell, I didn't notice it until a person walked through one camera's view, and then it was back to error-ville.
 
Just upgraded to BI5 using the latest versions and this is what I'm welcomed with. So far no alerts, and all AI error 500s.
 
Got a response back from BI support today...

Try stopping and starting the face recognition and object detection in the AI dashboard web page and see if that corrects it.

Thanks,

So, I had just restarted my BI server and AI server this morning. Then I simply did this all from the BI server: stopped face & object detection, then started them again (one at a time). So far it's been about an hour. I'm getting alerts to my phone again for objects, but I'm still watching the logs closely to see if any errors come up.
 
That'll work for a couple hours max. At least that is my experience.
 
I wrote a small batch file and created a task to restart the service every hour. This has helped, but hopefully it's only a temporary fix. YMMV.

See below.

Create a batch file via Notepad
Copy and paste everything between the ___ lines below into Notepad
___

rem stop the CodeProject.AI service
net stop "CodeProject.AI Server"
rem the ping is just a ~30 second pause so the service can shut down cleanly before restarting
ping localhost -n 30 >nul
net start "CodeProject.AI Server"
exit

___

Be sure to add ".bat" at the end of the filename, and select "All Files" as the file type when saving it to your location
Be sure there is an Enter (new line) after the exit line
Save the file and note its location
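
If you want to sanity check the script before scheduling it, you can run it from an elevated Command Prompt and then confirm the service came back up. The path below is only an example; use wherever you saved your .bat:

C:\Scripts\CodeAI_Restart.bat
sc query "CodeProject.AI Server"

The sc query output should show the STATE as RUNNING once the service has restarted.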

Open Task Scheduler
Create Task (not Basic Task)
General tab
Name - CodeAI Restart
Description if you like
When Running the task, use the following user account (user must have administrative rights)
Run whether user is logged on or not
Run with highest privileges

Triggers
Begin task On a Schedule
Set a daily time
Repeat every 1 hour for a duration of indefinitely
Enabled

Actions
Action Start a Program
Browse and find your script

Leave all others at default

Save
If prompted, Enter the password for the account with admin access


** Edit **

I had to adjust the repeat time to every 30 minutes. Again, YMMV.
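
For what it's worth, the same task can probably also be created from an elevated Command Prompt with schtasks instead of clicking through the GUI. The task name and script path below are just examples (and this version runs the task as SYSTEM rather than a named admin account), so adjust to taste:

rem create the task (every 30 minutes, highest privileges, runs as SYSTEM)
schtasks /Create /TN "CodeAI Restart" /TR "C:\Scripts\CodeAI_Restart.bat" /SC MINUTE /MO 30 /RU SYSTEM /RL HIGHEST /F

rem check that the task exists and when it last ran
schtasks /Query /TN "CodeAI Restart" /V /FO LIST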
 
Just in case anyone missed this, I thought it worthy of a repost here, as it has resolved 99% of my Error 500s. I say 99% because I had an Error 500 yesterday for no apparent reason; the CodeProject log is error free.

This is a heads up for anyone using a GPU accelerator with BI.

My BI has been logging quite a few Error 500s, and after looking at the AI server log I can see a lot of connection-refused errors. After setting BI to NO hardware acceleration, the system has run error free for 10 hours. It would appear that BI's hardware acceleration and the CodeProject server do not play nicely together. Fortunately, in my case my CPU has enough power to work just fine without GPU acceleration.

This is the error I found in the AI server log.

2023-03-15 05:32:49: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'custom' (#reqid b4636fc0-b7b0-4709-8b9d-f7fb21f84219) took 100ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-15 05:32:59: ModuleRunner Stop
2023-03-15 05:32:59: Sending shutdown request to python/ObjectDetectionYolo
2023-03-15 05:32:59: Client request 'Quit' in the queue (#reqid af7985ac-67bb-4a17-bd9b-214801cb0587)
2023-03-15 05:33:01: detect_adapter.py: Not using half-precision for the device 'NVIDIA GeForce GTX 1060 6GB'
2023-03-15 05:33:01: detect_adapter.py: [ConnectionRefusedError] : Unable to check the command queue objectdetection_queue. Is the server running, and can you connect to the server?objectdetection_queue: [ConnectionRefusedError] : Unable to check the command queue objectdetection_queue. Is the server running, and can you connect to the server?
2023-03-15 05:33:01: detect_adapter.py: Inference processing will occur on device 'NVIDIA GeForce GTX 1060 6GB'
2023-03-15 05:33:01: detect_adapter.py: Timeout connecting to the server
2023-03-15 05:33:01 [ConnectionRefusedError]: Unable to check the command queue objectdetection_queue. Is the server running, and can you connect to the server?
 

I have posted this before, but it seems like a good place to post it again based on your experiences:

Around the time AI was introduced in BI, many here had their systems become unstable with hardware acceleration on (even if not using DeepStack or CodeProject). Some have also been fine. I started to see that error while using hardware acceleration, several updates after AI was added.

This hits everyone at a different point. Some had their system go wonky immediately, some after a specific update, and some still don't have a problem, yet the trend shows that running hardware acceleration will result in a problem at some point.

However, with substreams now available, the CPU% needed to offload video to a GPU (internal or external) is more than the CPU% saved by the offload. Especially beyond about 12 cameras, CPU usage actually goes up with hardware acceleration.

My CPU % went down by not using hardware acceleration. But if you do use it, use plain Intel.

Here is a recent thread where someone turned off hardware acceleration based on my post and their CPU dropped 10-15%.

 
Interesting about the GPU / hardware acceleration; however, I am not using it. I have 20+ cameras, but I'm running dual Xeon Gold processors that sit at about 3% CPU. I am using the CodeAI GPU versions, though.
 
Hmm, I'm using two servers: a dedicated BI box [Intel Core i9 10850K] with Hardware accelerated decode set to Intel + VPP, and another server dedicated to AI with an Nvidia GPU (using the GPU AI modules). In my camera setup, I have hardware decode set to Default and GPU (any).
 
Have you checked the CodeAI error log to see if the error is coming from the AI Server?
 

yup, it is... just found this in the logs on my AI server....



Code:
2023-03-22 11:43:15: Client request 'detect' in the queue (#reqid f1e2c8d7-cc8a-4147-96c0-7a9b8b42a67a)
2023-03-22 11:43:15: Request 'detect' dequeued for processing (#reqid f1e2c8d7-cc8a-4147-96c0-7a9b8b42a67a)
2023-03-22 11:43:15: Client request 'detect' in the queue (#reqid 622c96b9-33d3-44d0-b776-abfd3ea98ba8)
2023-03-22 11:43:15: Request 'detect' dequeued for processing (#reqid 622c96b9-33d3-44d0-b776-abfd3ea98ba8)
2023-03-22 11:43:16: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:16: Response received (#reqid f1e2c8d7-cc8a-4147-96c0-7a9b8b42a67a)
2023-03-22 11:43:16: Response received (#reqid 622c96b9-33d3-44d0-b776-abfd3ea98ba8)
2023-03-22 11:43:17: detect_adapter.py: Timeout connecting to the server
2023-03-22 11:43:17: detect_adapter.py:  [Exception] : Traceback (most recent call last):
2023-03-22 11:43:17: detect_adapter.py:  [Exception] : Traceback (most recent call last):
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 309, in do_detection
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 309, in do_detection
2023-03-22 11:43:17: detect_adapter.py:     det = detector(img, size=640)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
2023-03-22 11:43:17: detect_adapter.py:     return forward_call(*input, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
2023-03-22 11:43:17: detect_adapter.py:     return func(*args, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:     det = detector(img, size=640)
2023-03-22 11:43:17 [Exception]: Traceback (most recent call last):
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 309, in do_detection
    det = detector(img, size=640)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 669, in forward
    with dt[0]:
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 158, in __enter__
    self.start = self.time()
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
    torch.cuda.synchronize()
  File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
    return torch._C._cuda_synchronize()
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
2023-03-22 11:43:17: detect_adapter.py:     self.start = self.time()
2023-03-22 11:43:17: detect_adapter.py:     return func(*args, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 669, in forward
2023-03-22 11:43:17: detect_adapter.py:     torch.cuda.synchronize()
2023-03-22 11:43:17: detect_adapter.py:     with dt[0]:
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 158, in __enter__
2023-03-22 11:43:17: detect_adapter.py:     return torch._C._cuda_synchronize()
2023-03-22 11:43:17: detect_adapter.py:     self.start = self.time()
2023-03-22 11:43:17: detect_adapter.py: RuntimeError: CUDA error: unknown error
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
2023-03-22 11:43:17: detect_adapter.py: CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2023-03-22 11:43:17: detect_adapter.py:     torch.cuda.synchronize()
2023-03-22 11:43:17: detect_adapter.py: For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
2023-03-22 11:43:17: detect_adapter.py: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last):
2023-03-22 11:43:17: detect_adapter.py:     return torch._C._cuda_synchronize()
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 309, in do_detection
2023-03-22 11:43:17: detect_adapter.py: RuntimeError: CUDA error: unknown error
2023-03-22 11:43:17: detect_adapter.py:     det = detector(img, size=640)
2023-03-22 11:43:17: detect_adapter.py: CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2023-03-22 11:43:17: detect_adapter.py: For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2023-03-22 11:43:17: detect_adapter.py: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last):
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\ObjectDetectionYolo\detect_adapter.py", line 309, in do_detection
2023-03-22 11:43:17: detect_adapter.py:     det = detector(img, size=640)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: detect_adapter.py:     return forward_call(*input, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:     return forward_call(*input, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
2023-03-22 11:43:17: detect_adapter.py:     return func(*args, **kwargs)
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 669, in forward
2023-03-22 11:43:17: detect_adapter.py:     with dt[0]:
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 158, in __enter__
2023-03-22 11:43:17: detect_adapter.py:     self.start = self.time()
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
2023-03-22 11:43:17: detect_adapter.py:     torch.cuda.synchronize()
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
2023-03-22 11:43:17: detect_adapter.py:     return torch._C._cuda_synchronize()
2023-03-22 11:43:17: detect_adapter.py: RuntimeError: CUDA error: unknown error
2023-03-22 11:43:17: detect_adapter.py:     return func(*args, **kwargs)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid 913f1bdf-acb2-4e9b-a360-ecc31188fdd7)
2023-03-22 11:43:17: detect_adapter.py: CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2023-03-22 11:43:17: detect_adapter.py: For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\models\common.py", line 669, in forward
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid 913f1bdf-acb2-4e9b-a360-ecc31188fdd7)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid f1e2c8d7-cc8a-4147-96c0-7a9b8b42a67a) took 225ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: detect_adapter.py:     with dt[0]:
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 158, in __enter__
2023-03-22 11:43:17: detect_adapter.py:     self.start = self.time()
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\yolov5\utils\general.py", line 167, in time
2023-03-22 11:43:17: detect_adapter.py:     torch.cuda.synchronize()
2023-03-22 11:43:17: detect_adapter.py:   File "C:\Program Files\CodeProject\AI\AnalysisLayer\bin\windows\python37\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in synchronize
2023-03-22 11:43:17: detect_adapter.py:     return torch._C._cuda_synchronize()
2023-03-22 11:43:17: detect_adapter.py: RuntimeError: CUDA error: unknown error
2023-03-22 11:43:17: detect_adapter.py: CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
2023-03-22 11:43:17: detect_adapter.py: For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid 4acad97a-afb5-4aaa-9a7e-c014f076d915)
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid 4acad97a-afb5-4aaa-9a7e-c014f076d915)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid 0dd79a32-5caf-464f-917a-27c3907a9d27)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid ca8ecc1f-82a3-409f-b248-bd6e9cec4628)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid 2432c381-42f5-4fa1-bffb-ef80879f75c0)
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid 0dd79a32-5caf-464f-917a-27c3907a9d27)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid f5b6346f-8564-4f32-a649-9e9a2bb245b3)
2023-03-22 11:43:17: Client request 'detect' in the queue (#reqid 8aa6d2f9-8362-4d82-b49f-834006003d83)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 622c96b9-33d3-44d0-b776-abfd3ea98ba8) took 213ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid ca8ecc1f-82a3-409f-b248-bd6e9cec4628)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid 2432c381-42f5-4fa1-bffb-ef80879f75c0)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Response received (#reqid 913f1bdf-acb2-4e9b-a360-ecc31188fdd7)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 913f1bdf-acb2-4e9b-a360-ecc31188fdd7) took 94ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:17: Request 'detect' dequeued for processing (#reqid f5b6346f-8564-4f32-a649-9e9a2bb245b3)
2023-03-22 11:43:17: Response received (#reqid 4acad97a-afb5-4aaa-9a7e-c014f076d915)
2023-03-22 11:43:17: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 4acad97a-afb5-4aaa-9a7e-c014f076d915) took 238ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Request 'detect' dequeued for processing (#reqid 8aa6d2f9-8362-4d82-b49f-834006003d83)
2023-03-22 11:43:18: Response received (#reqid 0dd79a32-5caf-464f-917a-27c3907a9d27)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 0dd79a32-5caf-464f-917a-27c3907a9d27) took 259ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Response received (#reqid ca8ecc1f-82a3-409f-b248-bd6e9cec4628)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid ca8ecc1f-82a3-409f-b248-bd6e9cec4628) took 307ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Response received (#reqid 2432c381-42f5-4fa1-bffb-ef80879f75c0)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 2432c381-42f5-4fa1-bffb-ef80879f75c0) took 293ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Response received (#reqid f5b6346f-8564-4f32-a649-9e9a2bb245b3)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid f5b6346f-8564-4f32-a649-9e9a2bb245b3) took 302ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:18: Response received (#reqid 8aa6d2f9-8362-4d82-b49f-834006003d83)
2023-03-22 11:43:18: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 8aa6d2f9-8362-4d82-b49f-834006003d83) took 277ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:21: Client request 'detect' in the queue (#reqid 87c1f949-6b43-4200-8b64-d2a317eccba5)
2023-03-22 11:43:21: Request 'detect' dequeued for processing (#reqid 87c1f949-6b43-4200-8b64-d2a317eccba5)
2023-03-22 11:43:21: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:21: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:21: Response received (#reqid 87c1f949-6b43-4200-8b64-d2a317eccba5)
2023-03-22 11:43:21: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 87c1f949-6b43-4200-8b64-d2a317eccba5) took 129ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:22: Client request 'detect' in the queue (#reqid 6f5b8395-64c5-4b50-bad4-c1325a7ab654)
2023-03-22 11:43:22: Request 'detect' dequeued for processing (#reqid 6f5b8395-64c5-4b50-bad4-c1325a7ab654)
2023-03-22 11:43:22: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:22: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:22: Response received (#reqid 6f5b8395-64c5-4b50-bad4-c1325a7ab654)
2023-03-22 11:43:22: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 6f5b8395-64c5-4b50-bad4-c1325a7ab654) took 108ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Request 'detect' dequeued for processing (#reqid 48e1459d-4e07-4671-b421-f87eef1722a8)
2023-03-22 11:43:23: Client request 'detect' in the queue (#reqid 48e1459d-4e07-4671-b421-f87eef1722a8)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Client request 'detect' in the queue (#reqid f548b6de-73da-4135-9382-2e602454cfe7)
2023-03-22 11:43:23: Request 'detect' dequeued for processing (#reqid f548b6de-73da-4135-9382-2e602454cfe7)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Response received (#reqid 48e1459d-4e07-4671-b421-f87eef1722a8)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 48e1459d-4e07-4671-b421-f87eef1722a8) took 206ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:43:23: Response received (#reqid f548b6de-73da-4135-9382-2e602454cfe7)
2023-03-22 11:43:23: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid f548b6de-73da-4135-9382-2e602454cfe7) took 163ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:49: Client request 'detect' in the queue (#reqid 53059e1c-8537-49e9-bb7f-4f86214af788)
2023-03-22 11:44:49: Request 'detect' dequeued for processing (#reqid 53059e1c-8537-49e9-bb7f-4f86214af788)
2023-03-22 11:44:49: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:49: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:49: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid 53059e1c-8537-49e9-bb7f-4f86214af788) took 93ms (command timing) in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:49: Response received (#reqid 53059e1c-8537-49e9-bb7f-4f86214af788)
2023-03-22 11:44:50: Request 'detect' dequeued for processing (#reqid b6c59af0-df07-4239-adeb-516a2f04f4d9)
2023-03-22 11:44:50: Client request 'detect' in the queue (#reqid b6c59af0-df07-4239-adeb-516a2f04f4d9)
2023-03-22 11:44:50: Object Detection (YOLOv5 6.2): Retrieved objectdetection_queue command in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:50: Object Detection (YOLOv5 6.2):  [Exception] : Traceback (most recent call last): in Object Detection (YOLOv5 6.2)
2023-03-22 11:44:50: Response received (#reqid b6c59af0-df07-4239-adeb-516a2f04f4d9)
2023-03-22 11:44:50: Object Detection (YOLOv5 6.2): Queue and Processing Object Detection (YOLOv5 6.2) command 'detect' (#reqid b6c59af0-df07-4239-adeb-516a2f04f4d9) took 142ms (command timing) in Object Detection (YOLOv5 6.2)
 
What video card are you running? Have you tried stopping YOLOv5 6.2 and starting YOLOv5 .NET?

I'm using an "NVIDIA Tesla P4 8GB GDDR5 Graphics Card"... let me try that really quick and see what effect it has.

[edit] just switched my AI box to...

Object Detection (YOLOv5 .NET)
Started
GPU (DirectML)

Object Detection (YOLOv5 6.2)
Stopped
GPU (CUDA)
 