People can mask USPS AI detections

Typically a cloned camera has a specific purpose that uses a single model. For example, my driveway camera is set up to detect people, cats, and dogs using the ipcam-combined model 24 hours a day. I have also cloned the camera to use the USPS model that captures the mailman's vehicle, but only between sunrise and sunset and only if the vehicle is traveling from west to east. This triggers my home automation system to let me know the mail has arrived. A second clone of the same camera is set up to count the number of vehicles that drive by my house using the ipcam-general model. I knew there was a lot of traffic on my street, but I had no idea how bad it was until I started counting the cars.
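If you ever want to poke at one of these custom models outside of Blue Iris, they can be queried directly over HTTP. Here's a minimal sketch, assuming a default CodeProject.AI install on port 32168 and a custom model registered as "usps" (adjust the host, model name, and snapshot filename for your setup):

Code:
# Minimal sketch: query a CodeProject.AI custom model over its REST API.
# Host, port, model name, and snapshot filename are assumptions.
import requests

CPAI_URL = "http://localhost:32168/v1/vision/custom/usps"

with open("snapshot.jpg", "rb") as f:
    resp = requests.post(CPAI_URL, files={"image": f}, timeout=10)

for det in resp.json().get("predictions", []):
    print(det["label"], round(det["confidence"], 2))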


I would be curious about the number of cars on my street too. Is this part of the ipcam-general model? Is it only on CPAI, or Deepstack too?
 
Yes and no. Every time a car passes my house, AI triggers a virtual switch in Hubitat. The virtual switch is connected to Home Assistant, which counts the number of on/off instances and stores that number in its database. From there I use Grafana to graph the data. It sounds complicated, but it's really not.
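If anyone wants to roll their own counter without the Hubitat hop, the same idea is a few lines of Python against MQTT. A rough sketch (the broker address and topic are assumptions, and this uses the paho-mqtt 1.x style API):

Code:
# Rough sketch: tally the "on" transitions of a virtual switch over MQTT.
# Broker address and topic are assumptions; paho-mqtt 1.x style API.
import paho.mqtt.client as mqtt

count = 0

def on_message(client, userdata, msg):
    global count
    if msg.payload.decode() == "on":  # count only the "on" edge, not "off"
        count += 1
        print(f"Vehicles so far: {count}")

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)               # assumed broker address
client.subscribe("hubitat/driveway_switch/state")  # assumed topic
client.loop_forever()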
 
I thought I'd try something similar.

I set up vehicle counting on my street cam using the ipcam-general model. This is working well.
I then set up pushing data to HomeSeer via MQTT, which is working well.
I set up an SQLite database on the HomeSeer box that stores the date and AI memo for each vehicle passing. This is working well.
I then tried to set up Grafana. It wouldn't configure correctly in Docker on my Synology, so I added it to the HomeSeer box running Windows and finally got it working. Now I have to learn how to create dashboards in Grafana. I'd like a page similar to yours. Any help appreciated.
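For anyone replicating the database step, it boils down to roughly this in Python (table and column names here are illustrative, not my exact schema):

Code:
# Rough sketch: log one row per detected vehicle into SQLite.
# Table and column names are illustrative.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("vehicles.db")
conn.execute("CREATE TABLE IF NOT EXISTS passes (ts TEXT, memo TEXT)")

def log_vehicle(memo: str) -> None:
    """Insert one row with the current timestamp and the AI memo text."""
    conn.execute(
        "INSERT INTO passes (ts, memo) VALUES (?, ?)",
        (datetime.now().isoformat(), memo),
    )
    conn.commit()

log_vehicle("car:91%")  # e.g. the &MEMO macro text from a Blue Iris alert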

vehicle count.jpg

database.jpg
 
Getting closer. I wasn't smart enough to use my existing database to create the chart, so I redid it with a time column for each hour. I have HomeSeer increment the count for each vehicle passing.

I still need to add daily, weekly, and monthly totals. I haven't figured out how to add multiple panels on one dashboard yet.
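(For reference, a one-row-per-vehicle table like the earlier sketch can feed both the chart and the totals directly with grouped queries, no per-hour columns needed. Names below are illustrative.)

Code:
# Rough sketch: hourly and daily counts from a one-row-per-vehicle table.
# Table and column names match the earlier illustrative sketch.
import sqlite3

conn = sqlite3.connect("vehicles.db")

# Vehicles per hour, for the bar chart
for hour, n in conn.execute(
    "SELECT strftime('%Y-%m-%d %H:00', ts) AS hour, COUNT(*) "
    "FROM passes GROUP BY hour ORDER BY hour"
):
    print(hour, n)

# Daily total, for a Stat panel (note: SQLite's date('now') is UTC)
(total,) = conn.execute(
    "SELECT COUNT(*) FROM passes WHERE date(ts) = date('now')"
).fetchone()
print("Today:", total)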

bargraph.jpg
 
Just click on Add and select Visualization.

Screen Shot 2023-08-22 at 3.26.31 PM.png

For the visualization, use Stat and configure the range for daily, weekly, or monthly totals.

Screen Shot 2023-08-22 at 3.28.43 PM.png

Another option would be to copy the chart you've already created and then change the visualization from bar chart to Stat. You can then set the range to whatever you want.
 
Thanks for the pointers. I've got it set up the way I like it.

There was a learning curve on implementing the SQLite queries to get what I wanted.

I had to use curl to create a PNG image of the Grafana panel to place into HomeSeer.
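For anyone wanting to do the same, the curl call hits Grafana's render endpoint. A sketch of the equivalent in Python (this needs the grafana-image-renderer plugin, and the dashboard UID, panel ID, and token below are placeholders):

Code:
# Rough sketch: fetch a single Grafana panel as a PNG via the render API.
# Requires the grafana-image-renderer plugin; the dashboard UID, panelId,
# and token below are placeholders.
import requests

url = ("http://localhost:3000/render/d-solo/abc123/traffic"
       "?orgId=1&panelId=2&width=800&height=400")

resp = requests.get(url, headers={"Authorization": "Bearer YOUR_API_TOKEN"})
resp.raise_for_status()

with open("panel.png", "wb") as f:
    f.write(resp.content)  # drop the file where HomeSeer can display it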

graf.jpg
hs-bi.jpg
 
Hi @VideoDad,

Some of us are using YOLOv8 models. Could you build a YOLOv8 version of your model? Or, better yet, upload the images to Roboflow as a public project so people can augment and train it.
 
I'd have to go back to the instructions to figure out how to do that. Have you done that?
 
I label my images on Roboflow and train on my Ubuntu PC, which has a 3080. After installing ultralytics, I train using this command:
Code:
yolo task=detect mode=train model=yolov8m.pt data=../data.yaml epochs=300 imgsz=640 batch=-1
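If you'd rather script it, the same run can be started from Python with the ultralytics API (batch=-1 asks for automatic batch sizing):

Code:
# Equivalent training run via the ultralytics Python API.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")   # start from the pretrained medium weights
model.train(
    data="../data.yaml",     # Roboflow-exported dataset definition
    epochs=300,
    imgsz=640,
    batch=-1,                # let ultralytics pick the largest batch that fits
)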
 
Got it... So basically just switching to the yolov8m model. Let me give that a go.
 
Just to give you an update... previously I could train a YOLOv5 model overnight, but although I think I've set up YOLOv8 similarly, it's taking quite a lot longer. When I started with the medium model (yolov8m.pt) it was upwards of an hour per epoch. I've now switched to the small model (yolov8s.pt), which is about twice as fast, but that's still about 30 minutes per epoch. I'll keep it going, but please don't hold your breath in the meantime. :)
 
Are you training with a GPU? If so, which one? How many images are you training with? If you can send me the images, I can train the model on my RTX 4090.
 
Ack! No wonder it was taking so long. Somehow, in resetting my environment to work with YOLOv8, updating Python to 3.11, etc., PyTorch was NOT using the GPU. I'm figuring it out now. Thanks for the offer, but I'd like to resolve this locally first.

Update: It's processing now using the GPU.
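For anyone who hits the same thing, a quick sanity check with standard PyTorch calls before kicking off a long run:

Code:
# Quick check that PyTorch actually sees the GPU before a long training run.
import torch

print(torch.__version__)              # confirm the installed build
print(torch.cuda.is_available())      # False means you'd be training on CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce RTX 3080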
 
Has there been any work on applying models serially? I know we have the option in BI to run ALPR only on vehicle detections, but for the delivery model I'd likewise only want it running after a vehicle detection, and ideally it would crop and send only the vehicle portion of the image to the model. This is especially useful if, say, you have a 4K camera and a vehicle at the end of the driveway. If the whole image is fed to a delivery model and scaled to 640x640, there might not be enough detail, but if just the vehicle crop is checked, there will be plenty.

I think about this for other models as well, like face detection only for confirmed people. If you detect a dog, then you can run a dog breed model, which may identify the dog as yours or someone else's.

There are any number of specialized models which one may want to run only after detecting something in a general model.
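Something like this sketch is what I have in mind, done by hand against CodeProject.AI's REST endpoints (the server URL and the delivery model name are assumptions):

Code:
# Rough sketch of the cascade: detect vehicles with the general model,
# then send only the cropped vehicle region to the delivery model.
# Server URL and model names are assumptions.
import io
import requests
from PIL import Image

CPAI = "http://localhost:32168/v1/vision"

def detect(endpoint, jpeg_bytes):
    r = requests.post(f"{CPAI}/{endpoint}", files={"image": jpeg_bytes})
    return r.json().get("predictions", [])

img = Image.open("snapshot_4k.jpg")
buf = io.BytesIO()
img.save(buf, format="JPEG")

for det in detect("detection", buf.getvalue()):   # general object detection
    if det["label"] not in ("car", "truck", "bus"):
        continue
    # Crop just the vehicle so the 640x640 resize keeps useful detail
    crop = img.crop((det["x_min"], det["y_min"], det["x_max"], det["y_max"]))
    cbuf = io.BytesIO()
    crop.save(cbuf, format="JPEG")
    for hit in detect("custom/delivery", cbuf.getvalue()):
        print("Delivery match:", hit["label"], hit["confidence"])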
 


I was just about to email support about this feature. I tried a few different ways to use triggers and clones. I do see in the latest build there is an option to send pre-frames, so this might be possible with the new feature.

If vehicle detected > trigger clone camera > cloned camera sends pre-frames to delivery model, etc. Not clean, but it could work.
 
Interesting thought, and maybe Ken is already thinking of optimizations that could be made.

I know @MikeLud1 is doing some pre-processing of the plate (cropping, straightening) before sending it off to OCR. I don't know if he has recommendations for improvements that Ken could add directly to BI.

One thing to consider, though, is that vehicle detection can often miscategorize a delivery vehicle as a bus, car, etc. instead of a truck.
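(Not his actual code, but the straightening step is commonly done with something like OpenCV's minAreaRect. A rough sketch, assuming a tightly cropped plate image; note that OpenCV's angle conventions vary by version.)

Code:
# Rough sketch of plate straightening before OCR using OpenCV's minAreaRect.
# Not MikeLud1's actual pre-processing; angle conventions vary by version.
import cv2
import numpy as np

img = cv2.imread("plate_crop.jpg", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Estimate skew from the minimum-area rectangle of all foreground pixels
coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)
angle = cv2.minAreaRect(coords)[-1]
if angle > 45:            # normalize to a small skew angle
    angle -= 90

h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
straight = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC)
cv2.imwrite("plate_straight.jpg", straight)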
 
See below; you can combine labels (truck+UPS):

1708638135789.png
 
This still runs the image through all models, correct? What I'm after is only running an image against the delivery.pt model if a vehicle is detected. Right now, a person walking down the road will run through both models.
Exactly. One benefit is not needing to run the model, saving some compute, but the larger improvement comes from feeding a cropped version to the delivery.pt model.