CodeProject.AI Version 2.0

Interesting results: the animal model picked my furry dog up as a dog, critters picked it up as a cat, dark as a cat, and combined found nothing. I am using combined only at the moment!
(Screenshots attached: combined, animal, critters, dark.)

But critters scored 85%, roughly double the confidence of the others. My dog does look like a cat, tbh.
Inference speeds were not too shabby on a couple of tests.

 

Attachments

  • dog-animal.PNG (60.2 KB)
  • dog-combined.PNG (119.9 KB)
  • dog-critters.PNG (52.3 KB)
  • dog-dark.PNG (70.3 KB)
Last edited:
My present understanding is that the Coral accelerator is unable to process medium and large models, thereby reducing recognition accuracy. 110 ms is an excellent response time for the Coral. Can you clarify whether the 30 ms and 110 ms response times were using the same model size?

It is indeed a fact that software is limited in its capability to accurately determine power consumption. I am curious myself how much power is actually being consumed by the 1060 GPU and have decided to measure my dedicated system with and without the card. I will post the results tomorrow if I find the time.
Mine defaults to "Medium"; there is no option to change the model size, so I assume it's working on Medium.

 

Well, the results are in for anyone interested. I have tried to be as fair as I can in making this test by taking the average power reading in each configuration when camera activity is modest.
System power consumption averaged 38 watts with the CPU and 46 watts with the GPU. Quiescent, without camera activity, consumption dropped by 2-3 watts in both configurations.
Interestingly, peak power consumption reached between 67 and 70 watts in both configurations!

Personally I'm pretty pleased with the result, knowing that my GPU card is only consuming an additional 8 watts on average for 6 cameras.
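For anyone wanting to turn that delta into a running cost, a quick back-of-envelope sketch (the wattages are the averages measured above; the electricity price is a made-up placeholder, adjust to your tariff):

```python
# Back-of-envelope cost of the extra GPU draw, using the averages
# measured above (38 W CPU-only vs 46 W with the GTX 1060 installed).
cpu_only_w = 38.0
with_gpu_w = 46.0
delta_w = with_gpu_w - cpu_only_w              # extra draw from the card

hours_per_day = 24
kwh_per_month = delta_w / 1000 * hours_per_day * 30

price_per_kwh = 0.30                           # placeholder tariff
monthly_cost = kwh_per_month * price_per_kwh

print(f"GPU adds {delta_w:.0f} W, about {kwh_per_month:.2f} kWh/month "
      f"({monthly_cost:.2f} at {price_per_kwh}/kWh)")
```

So at a typical tariff the card's idle-ish overhead is on the order of a couple of currency units a month, which squares with being pleased about it.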
 

Attachments

  • CPU Average.jpg (25.9 KB)
  • GPU Average.jpg (25.5 KB)
  • GPU Removed.jpg (439.2 KB)
  • GTX 1060 Installed.jpg (458.6 KB)
Agree, 10 watts is pretty low; in fact your entire system draw is quite low. I really don't want to measure mine (too embarrassed).
 
When you have two models defined in the AI section, can you attach to the alert which model was used to identify the object? I'm using critters and combined at the moment and can't see in the .dat file which model was used.
Where can I see this, and can it be attached to the alert JPEG?
Thanks
 
When I drop the .dat file in, it shows me the model. Critters saw this moth as a dog, while ipcam-animal correctly found nothing.
 
I've now changed mine to ipcam-general and critters. Critters is great: better and faster for the small dogs and cats, which is what I look for.
General appears faster and better for my scenario, looking just for people and vehicles on one cam. It also has the dark models built in, it says.

Update: critters is working brilliantly in the dark and for very small, far-away dog shots.
Awesome!
 
Last edited:
Yep, seen those. They would cost 4 times that here in Uruguay and would get destroyed by the kids' footballs in no time.
A 5 USD sprinkler activated by Node-RED suits my needs much better, but thanks anyway.

I bet the sun would create false triggers for that kind of sensor too. CPAI would be (is) much more accurate, imo.
 
I've tried these Orbit sprinklers and didn't like them. They were unreliable sensing daylight, so they nailed me, the gardener, and the kids all day, but were only so-so at sensing deer, my primary opponent. And you can't use them when it gets cold (frozen water lines).
 
When you have 2 models defined in the AI section can you attach to the alert which model was used to identify the object? Using critters and combined at the moment and can't see in the dat which model was used.
Where can I see this and possibly have it attached to the alert jpeg?
Thanks

@Pentagano I have BI5 configured to send to MQTT. Here is my payload:
Code:
{ "state":"ON", "cam":"&CAM", "memo":"&MEMO", "camera_name":"&NAME", "type":"&TYPE", "last_tripped_time":"&ALERT_TIME", "alert_db":"&ALERT_DB", "json":&JSON}

Note the &JSON. That will be the raw AI response. Example:
Code:
 "json": [
    {
      "api": "ipcam-combined",
      "found": {
        "success": true,
        "count": 1,
        "predictions": [
          {
            "confidence": 0.9143837094306946,
            "label": "person",
            "x_min": 401,
            "y_min": 96,
            "x_max": 503,
            "y_max": 332
          }
        ],
        "message": "Found person",
        "processMs": 25,
        "inferenceMs": 25,
        "code": 200,
        "analysisRoundTripMs": 35
      }
    },
    {
      "api": "ipcam-general",
      "found": {
        "success": true,
        "count": 1,
        "predictions": [
          {
            "confidence": 0.9282132387161255,
            "label": "person",
            "x_min": 401,
            "y_min": 98,
            "x_max": 500,
            "y_max": 331
          }
        ],
        "message": "Found person",
        "processMs": 23,
        "inferenceMs": 22,
        "code": 200,
        "analysisRoundTripMs": 30
      }
    }
  ]
My Home Assistant automation triggers on new MQTT messages, parses this JSON, and takes action on the data there. You can see the model name(s) in the JSON.

You can probably do something similar with a Node-RED automation.
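As a rough sketch of what that parsing could look like, in Python (the payload below is a trimmed, hypothetical version of the &JSON example above, not a real capture):

```python
import json

# Trimmed, hypothetical payload mirroring the &JSON expansion shown above.
payload = '''
{
  "json": [
    {"api": "ipcam-combined",
     "found": {"success": true,
               "predictions": [{"confidence": 0.914, "label": "person"}]}},
    {"api": "ipcam-general",
     "found": {"success": true,
               "predictions": [{"confidence": 0.928, "label": "person"}]}}
  ]
}
'''

msg = json.loads(payload)

# One (model, label, confidence) tuple per prediction, so you can see
# exactly which model identified the object.
hits = [(entry["api"], p["label"], p["confidence"])
        for entry in msg["json"]
        if entry["found"]["success"]
        for p in entry["found"]["predictions"]]

for model, label, conf in hits:
    print(f"{model}: {label} ({conf:.0%})")
```

The same lookup works in a Node-RED function node or a Home Assistant template; the key point is that the model name rides along in the `api` field of each entry.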
 
Last edited:
Many thanks, though I have changed my models to ipcam-general and critters now, so if cat and dog are not in general I can conclude critters did the work.
But if I change back to combined, your code is great!
Cheers
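If you ever want to automate that deduction, the lookup amounts to something like the sketch below. The label sets are hypothetical placeholders, not the actual lists shipped with ipcam-general or critters; check your model docs for the real ones:

```python
# Hypothetical label sets -- check your installed models for the actual
# lists; the point is only the lookup logic for non-overlapping labels.
GENERAL_LABELS = {"person", "car", "truck", "motorcycle", "bicycle"}
CRITTER_LABELS = {"cat", "dog", "raccoon", "squirrel", "deer"}

def likely_model(label: str) -> str:
    """Guess which model produced a detection, given disjoint label sets."""
    if label in CRITTER_LABELS:
        return "critters"
    if label in GENERAL_LABELS:
        return "ipcam-general"
    return "unknown"

print(likely_model("dog"))     # -> critters
print(likely_model("person"))  # -> ipcam-general
```

This only holds while the two models' labels don't overlap, which is exactly the assumption behind the general + critters split.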
 
It never gets cold enough here, luckily. We had a rare morning today at -1.9°C with ice on the cars, but it was short-lived and the ground would never freeze.
If I was back in the UK it would be a different case, for sure.