Is there a way to load custom models with the RPI docker image while using the Coral or TF-Lite option? I only have one in the drop down.
View attachment 169457
Not at this time.
Try refreshing the page by doing a Ctrl+F5, I think it might fix it.
How does the RKNN compare to the Coral?
It is slower than a Coral but has better accuracy; see the post below on the CodeProject.AI forum.
Is there a way to accelerate that? We can't do ALPR on the Orange Pi.
ALPR is coming soon for the Orange Pi.
Sweet. Is that using the RKNN acceleration?
Yes.
Did you install all the Paddle stuff?
Yes, for both Object Detection and ALPR I am using Paddle FastDeploy.
Guys,
Getting some recurring errors after I updated CP.ai on my Unraid box (it's an external setup that my BI box offloads AI processing to). No other changes were made.
Here's part of the log.
22:13:34:Object Detection (YOLOv5 6.2): Queue request for Object Detection (YOLOv5 6.2) command 'detect' (...e97e48) took 4ms
22:13:34:Object Detection (YOLOv5 6.2): [RuntimeError] : Traceback (most recent call last):
File "/app/preinstalled-modules/ObjectDetectionYolo/detect.py", line 142, in do_detection
det = detector(img, size=640)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/yolov5/models/common.py", line 669, in forward
with dt[0]:
File "/usr/local/lib/python3.8/dist-packages/yolov5/utils/general.py", line 158, in enter
self.start = self.time()
File "/usr/local/lib/python3.8/dist-packages/yolov5/utils/general.py", line 167, in time
torch.cuda.synchronize()
File "/usr/local/lib/python3.8/dist-packages/torch/cuda/init.py", line 688, in synchronize
return torch._C._cuda_synchronize()
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Basically, BI will fail to register any AI events.
Restarting the CP.ai docker works for a while, but then screws itself again.
Here's some screens.
View attachment 169506
View attachment 169508
Also (maybe unrelated): CTRL+click no longer works on a clip to bring up the AI status window, even on confirmed alerts. Is anyone else experiencing this? How do I fix it?
This is over my head; please post your issue on the CodeProject.AI forum:
CodeProject.AI Discussions - CodeProject
www.codeproject.com
I would start by disabling the GPU acceleration and seeing if that fixes the error. If it is stable after that, we can debug the CUDA issues.
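One way to test that on a Docker setup is to run the CPU-only image with no GPU passed through. Below is a minimal compose sketch under that assumption; the service name, port mapping, and volume path are placeholders for illustration, not your actual Unraid config.

services:
  codeproject-ai:
    image: codeproject/ai-server:latest        # CPU-only image (no gpu- prefix)
    container_name: codeproject-ai
    restart: unless-stopped
    ports:
      - "32168:32168"                          # CodeProject.AI default port
    volumes:
      - ./ai-data:/etc/codeproject/ai          # placeholder path for settings/models
    # No GPU device reservation (no --gpus / deploy.resources devices entry),
    # so the container cannot touch CUDA at all. If you later switch back to
    # the gpu- image to chase the error, the log's own suggestion can be added:
    # environment:
    #   - CUDA_LAUNCH_BLOCKING=1

If the errors stop on this configuration, the problem is in the CUDA path rather than in Blue Iris or the network setup.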
Make sure in your camera's AI settings you have "Save AI analysis details" checked. Also, it is CTRL+double-click.
View attachment 169509
Guys, I keep seeing 'AI error 500' in BI ever since I updated CP.ai to the latest version.
I've tried reinstalling and disabling half precision, but no effect.
Restarting the Docker container works temporarily, but it starts failing again after a few hours.
Am I the only one with this problem?
CP.ai forums are useless so far in diagnosing it and I want to know if I'm not alone.
Just set the container to 2.0.9 and see if that resolves your issue.
I don't use Unraid, but somewhere when you set up the Docker container you specify which image tag to pull, and you probably selected latest...
For me:
image: codeproject/ai-server:latest
Change it to one of these tags...
For me, that would be
codeproject/ai-server:2.1.9
You might need the GPU version or ARM depending on your system...
codeproject/ai-server:gpu-2.1.9
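As a concrete (hypothetical) example, this is what pinning the tag looks like in a compose file instead of floating on latest; the service name and port are assumptions, and you would swap in the CPU, gpu- or ARM tag that matches your hardware:

services:
  codeproject-ai:
    image: codeproject/ai-server:2.1.9         # pinned version instead of :latest
    # image: codeproject/ai-server:gpu-2.1.9   # GPU build, if you need CUDA
    container_name: codeproject-ai
    restart: unless-stopped
    ports:
      - "32168:32168"

Pinning the tag also means the container stops silently jumping to a new release the next time the image is re-pulled, which makes it easier to tell whether an update caused a regression.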