CodeProject.AI Version 2.0

The Coral USB should work in a VM. The M.2 Coral definitely works - I'm running Blue Iris in a Windows Server 2022 VM running in KVM via Proxmox. Works fine.

Which M.2 card are you using as I couldn't see one to fit a normal MB M.2 slot?

Also, did you have to implement any additional code to make it work, or did it work out of the box after plugging it in with the Coral TPU model?
 
Even though DeepStack support in Blue Iris is deprecated, it still works even in the latest version.
I stand corrected. Current BI does still work with DeepStack. I guess I was just believing the hype. Apparently the only thing that's different is pulling the custom model list from the CodeProject.AI server. You could just as easily read the registry key.
The funny thing for me is why the "restart the AI service" message?
According to the docs, calling for the custom model list array is done by the client.
So why would you need to restart the AI service?
I keep seeing the call for the model list in the AI server log.
Oh well, I'm not a programmer.

 
My guess is that the Coral TPU detector is using a smaller model size.

View attachment 168835

You can't change the model size for the Coral at the moment, but my guess is that it's using either a tiny or small model. Frigate uses a tiny model and thus is much less accurate with its default config.
Yes, it defaults to medium, but I manually set it to small. Medium results in very high latency, worse than even an old p400 video card, so it defeats the purpose.

If the coral can't reliably be used at "small," there isn't much purpose for it.

The alternative is that the default trained models are just very poor compared to yolo -- resulting in inaccuracies for cctv
 
How did you change the size? Mine doesn't give me the option next to the module.
Perhaps that's why it's not working: it is not being picked up by CodeProject.AI even though it shows up in Windows under USB devices.
 

You manually edit the modulesettings.json in the appropriate module's folder. Although I doubt that is your issue :(
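For reference, the setting usually lives under the module's environment variables. A minimal sketch of what the edit might look like (the module name and key here are assumptions and vary by module version, so check the keys in your own modulesettings.json rather than copying this verbatim):

```json
{
  "Modules": {
    "ObjectDetectionCoral": {
      "EnvironmentVariables": {
        "MODEL_SIZE": "Small"
      }
    }
  }
}
```

Restart the module (or the whole CPAI service) after editing so the new value is picked up.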
 
Which M.2 card are you using as I couldn't see one to fit a normal MB M.2 slot?
I've got the dual-edge one:

I use it with this PCIe adapter:

Works great. The two TPUs are exposed as two separate PCIe devices, so you can use them in two different VMs if you wanted to.
 
How hot do they get? I expect they need plenty of ventilation and may suffer in an SFF mini case.
 
I know this is located somewhere, but I can't find it. Where are the license plates that are captured stored? Sometimes I see them in the log file, and the AI is working (I see a plate captured at 87%, for example), but I have no clue where to find a list.

Thanks for any time anyone gives this matter.
 
The plates that are captured are stored in the log file.
 
How hot do they get? I expect they need plenty of ventilation and may suffer in a tff mini case.
I haven't measured the temperature of mine but I don't think they get too hot. Idle power consumption is ~400mW and maximum is 2W, so there shouldn't be a lot of heat to dissipate.

If you look at table 2 in the data sheet:

It shows that the MobileNet v2 model uses 0.6 W for a 7.1 ms inference time (141 fps) in low-frequency mode (125 MHz), 0.9 W for 3.9 ms (256 fps) in reduced-frequency mode (250 MHz), and 1.4 W for 2.4 ms (416 fps) at max frequency (500 MHz). Those numbers are for long-term sustained use. Sporadically sending it a few frames to analyze (the Blue Iris use case) isn't going to push it anywhere near that much. That benchmark uses some low-resolution demo models from Models - Object Detection | Coral, which are not designed for production use, but Frigate uses them anyway.
 

@Mad_Marik If you use UI3 (the BI5 web UI), there is a search box in the upper left corner of the "Clips" tab. You just need to enter a partial string. Make sure you are on a recent BI5 release to see this search box.
 
@Mad_Marik Another method to retrieve the data is via the BI5 JSON API endpoint. Here is an example curl:
Code:
bash> curl -X POST "http://192.168.11.211:81/json" -d '{"cmd": "alertlist","camera":"DrivewayALPR","startdate":1690821647,"enddate":1690913864,"view":"alerts","session":"2695138d262b3b9a0b5652956d98602d"}'

{
  "result": "success",
  "session": "2695138d262b3b9a0b5652956d98602d",
  "data": [
    {
      "camera": "DrivewayALPR",
      "newalerts": 0,
      "newalerttime": "0",
      "path": "@2639874633.bvr",
      "clip": "@2637178018.bvr",
      "file": "DrivewayALPR.20230731_090000.2502421.17-0.jpg",
      "memo": "REDACTED:99%",
      "plate": "REDACTED",
      "offset": 2502388,
      "flags": 403767296,
      "res": "2688x1520",
      "zones": 1,
      "date": 1690821707,
      "color": 9068350,
      "filesize": "15 sec (864K)"
    },
    {
      "camera": "DrivewayALPR",
      "path": "@2639751155.bvr",
      "clip": "@2637178018.bvr",
      "file": "DrivewayALPR.20230731_090000.2482885.17-0.jpg",
      "memo": "REDACTED:100%",
      "plate": "REDACTED",
      "offset": 2482852,
      "flags": 403767296,
      "res": "2688x1520",
      "zones": 1,
      "date": 1690821688,
      "color": 9068350,
      "filesize": "15 sec (902K)"
    }
  ]
}

***Look at the BI5 Help PDF for details on these API calls.
***Replace 192.168.11.211:81 with your BI5 server IP address and port.
***Replace DrivewayALPR with your camera name.
***Replace the startdate and enddate values with the desired time range.
***I REDACTED the plate values in the above output.
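The same call is easy to script. A rough Python sketch using only the standard library (the server address, camera name, and session value are placeholders; the session token has to come from a prior BI5 login call, per the Help PDF):

```python
import json
import urllib.request

def extract_plates(alertlist_response):
    """Pull (plate, memo, date) tuples out of a BI5 alertlist reply."""
    if alertlist_response.get("result") != "success":
        return []
    return [(a.get("plate"), a.get("memo"), a.get("date"))
            for a in alertlist_response.get("data", [])
            if a.get("plate")]

def fetch_alertlist(server, session, camera, start, end):
    """POST the alertlist command to the BI5 JSON endpoint and decode the reply."""
    payload = json.dumps({
        "cmd": "alertlist", "camera": camera,
        "startdate": start, "enddate": end,
        "view": "alerts", "session": session,
    }).encode()
    req = urllib.request.Request(f"http://{server}/json", data=payload)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

So `extract_plates(fetch_alertlist("192.168.11.211:81", session, "DrivewayALPR", 1690821647, 1690913864))` would give you one (plate, memo, date) entry per alert, matching the fields in the JSON above.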
 
I made a quick app to display all the license plates in the BI log file. I used ChatGPT to do 100% of the coding. You can select more than one log file to open and filter the log by Time, Camera, or Plate. To use the app, just run plate_search.exe; I also included the Python code. Once you run the exe file, it takes about 10 seconds to open up. In the link below you can see how many tries it took ChatGPT to get the code right.



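For anyone who wants to roll their own, the core of such a tool is just a line scanner over the BI log. A minimal Python sketch (the regex and log layout are assumptions based on the screenshots in this thread, so adapt the pattern to your actual log format):

```python
import re

# Hypothetical pattern: assumes plate hits appear as "PLATE:NN%" in the log
# line, with plates written in uppercase letters and digits. Adjust to taste.
PLATE_RE = re.compile(r"(?P<plate>[A-Z0-9]{2,10}):(?P<conf>\d{1,3})%")

def find_plates(lines, camera=None):
    """Scan log lines for plate:confidence hits, optionally filtering by camera."""
    hits = []
    for line in lines:
        if camera and camera not in line:
            continue
        for m in PLATE_RE.finditer(line):
            hits.append((m.group("plate"), int(m.group("conf"))))
    return hits
```

Feeding it the lines of one or more exported log files and filtering by camera name gives roughly the Time/Camera/Plate view the app above provides.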
 

That's most impressive.
 
I can't seem to get CPAI to detect people after sunset at my front door camera (an old Hikvision 4MP DS-2CD2142FWD-IS), even though the area is fairly well lit by two lights. It seems to work pretty well during the day, however.

I read through this entire thread (it took a bunch of hours) but did not see much about dark/night issues. The attached image looks pretty clear to me, and BI clearly tags it and alerts CP. Unfortunately, CP/ipcam-combined usually finds nothing (see data analysis). Do I need to run a different custom model for night (with time profiles)? Or does ipcam-general do a better job than ipcam-combined for people, both night and day?

My 2MP Dahua does a better job of a similar image from the left side, but clearly not as good as during the day.


Thoughts on how to improve the false negative behavior of the AI processing?
 
ipcam-general detected the person. This model will work better than ipcam-combined.

 
Interesting. I will test ipcam-general this evening.

When I run the original BI snapshot .jpg images of that image (.27.jpg; the one I sent was a high-res screen capture .png) and a few more from around that time, none of them resolves as a person with ipcam-general or ipcam-combined when loaded manually into the Benchmark tool. Occasionally I show up as a cat and very rarely as a person, but not in those images, which to my eye are very clearly a person with good contrast. Note: the first image (*.27.jpg) resolves in about 600 ms with Detect Object vs. custom, but only that image. When I process the screen grab that is sent in the above post, I get a person in about 200 ms. I suspect that the screenshot jpgs are losing data that is needed for processing, but they are about the same size as the high-res ones in the alerts folder.

DoorHik.20230727_220451988.27.jpg DoorHik.20230727_220512584.31.jpg
 

Please don't take this the wrong way, but this is a good thread to read that will help with your blurring... you may want to consider adjusting your shutter speed...