OpenALPR Webhook Processor for IP Cameras

Hopefully they get it sorted out with time. If I could have 6-8% CPU usage at idle with the GPU doing the rest, I'd be very happy.
 
Hope so. I emailed them about this, and the response was that Rekor allocates CPU to accommodate the data link to the GPU. I think they assume people with more than one ALPR camera will be using a GPU; sadly for us this is not the case. Anyway, I voiced my opinion and they said it would be passed on, so :idk:
 
v4.1.0 is released.
  • The processor now stores all images locally after the initial image is pulled from the agent.
  • For best results, run the agent scrape after upgrading. It performs a one-time pull from the agent to get all images, and the system log will show its progress. You can safely stop/restart the service while this is happening without causing issues; you will just need to start the scrape again, and it will pick up where it left off.
  • Scraping should now be much faster on larger databases. Pulling the images is the slow part; the agent doesn't respond very quickly (see the sketch after this list for what a single pull looks like).
  • If you do not run the agent scrape and then open an old plate, the processor will pull the missing plate image from the agent and store it locally so it doesn't have to do it again.
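For anyone curious what the scrape is actually doing per image: the agent exposes plate crops over its local web service (port 8355 with an /img/<uuid>.jpg path, per the OpenALPR agent docs, if memory serves), so each pull is roughly the request below. The host and UUID are placeholders, and this is a sketch of the idea rather than the processor's actual code.

```python
# Rough sketch of a single image pull from the agent's local web service.
# The port (8355) and /img/{uuid}.jpg path follow the OpenALPR agent docs;
# the host and UUID below are placeholders, not real values.
import requests

AGENT_HOST = "192.168.1.50"                 # hypothetical agent IP
IMAGE_UUID = "example-uuid-from-a-webhook"  # placeholder UUID

url = f"http://{AGENT_HOST}:8355/img/{IMAGE_UUID}.jpg"
resp = requests.get(url, timeout=10)

if resp.ok:
    # Store the JPEG locally so it never has to be pulled again.
    with open(f"{IMAGE_UUID}.jpg", "wb") as f:
        f.write(resp.content)
else:
    # An image the agent has already aged out comes back as an error,
    # which would explain the "Unable to retrieve image from agent" logs.
    print(f"Unable to retrieve image from agent: HTTP {resp.status_code}")
```

Since every image is a separate round trip like this, a slow-responding agent is what dominates the scrape time on a large database.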
 
Still never got around to testing. Do I still need to delete my processor.db before upgrading?
 
Guess I need to allocate some more space to my Docker machine and fire it up!
 
v4.2.0 fixes an outstanding issue with the system log page using too much memory over time and crashing the browser. It also pre-loads 500 lines of logs on page load and keeps 500 lines of text on screen, flushing older logs from the UI.
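The "pre-load 500, keep 500 on screen" behaviour is essentially a bounded buffer. The real UI is JavaScript; this is only a minimal Python sketch of the idea, with illustrative names rather than anything from the project:

```python
# Minimal sketch of "keep only the newest 500 log lines" using a bounded buffer.
from collections import deque

MAX_LINES = 500
log_view = deque(maxlen=MAX_LINES)   # oldest lines fall off automatically

def append_log_line(line: str) -> None:
    """Add a new log line; anything beyond MAX_LINES is flushed."""
    log_view.append(line)

for i in range(2000):
    append_log_line(f"log line {i}")

assert len(log_view) == MAX_LINES    # memory stays bounded no matter how long it runs
```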
 
I’ll give this a try tonight and report back! Using “latest” will pull 4.2.0?
 
Running the full scrape now. At first I had "Adding job for image..."; now I'm getting "Unable to retrieve image from agent..." over and over again.

How does one know it retrieved any images and is storing them? Is there a counter or visual cue somewhere that will show how many images the local db is storing?
 
If your processor has records for plates that your agent has since deleted, it won't be able to get the images for them. Are the plate images working correctly when you browse them in the list?
 
That is probably it. I assume it starts from the earliest? My earliest plate pictures are not there. I'll let it run and see.
 
I think I underestimated the disk space usage of 28,000 plate images. Are you able to add disk usage into the UI? I guess some sort of stat page.

I have 'openalprwebhookprocessor' on a local volume since it has a database (I didn't want to cause lock issues by running it over NFS or CIFS). I wonder if it might end up being best to add a volume mount so the images can just be stored on an NFS volume. My processor.db is currently 36 GB.
 
How big is your Rekor agent's db? How many plates are you seeing a day?
 
Rekor's plateimages db is 41 GB. I get 250-300 plates/day (it should have images since ~July, when I started). I already run backups on the server where Rekor Scout runs, so the database is backed up on a daily basis. Maybe have a setting to just keep the last X days of images within processor.db? I don't know what's best in this case.
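If a keep-the-last-X-days setting does get added, under the hood it would presumably just be a periodic sweep of the SQLite file. A minimal sketch of that idea, assuming images live as blobs in a table keyed by a capture timestamp; the table and column names ("Plates", "Jpeg", "CapturedAtEpoch") are invented for illustration, not the processor's real schema:

```python
# Hypothetical retention sweep over processor.db.
# "Plates", "Jpeg" and "CapturedAtEpoch" are GUESSED names used only to
# illustrate the idea -- check the real schema before running anything similar.
import sqlite3
import time

DB_PATH = "processor.db"
RETENTION_DAYS = 90

cutoff = int(time.time()) - RETENTION_DAYS * 86400

conn = sqlite3.connect(DB_PATH)
# Blank out old image blobs rather than deleting rows, so plate history survives.
conn.execute(
    "UPDATE Plates SET Jpeg = NULL WHERE CapturedAtEpoch < ?",
    (cutoff,),
)
conn.commit()
conn.execute("VACUUM")   # actually reclaim the freed space on disk
conn.close()
```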
 
Seems to be working fine now. The .db size is growing and the number of queued images is counting down.
 
A keep-the-last-X-days setting is what I'm thinking I will do too. I don't think I need Rekor to hold 40 GB and then have the webhook processor hold a duplicate 40 GB. I also back up my VM daily, so I have backups I can always revert to if something screws up.
 
Alright, the scrape just finished. I have ~130k plates and the db grew to 35 GB. I'm not sure how many images that is because I have the Rekor agent set to hold only 32 GB worth of plate images. It took about an hour to finish the scrape. Very, very cool indeed!

Is there a way to show how many plate images we are actually holding locally? Maybe at the bottom of the webpage near the green status light?

Edit: Maybe a bug. On the log page, if you click "Download last 24 hours of plates" at the top, a new window pops up called a "blob" that only has two brackets in it: []. Not sure what that means. Why is that link up there, and how is it different from the scrape on the agent page? I assume it's just to quickly grab the last 24 hours' worth of plates and no more.
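Until a counter shows up in the UI, one rough way to see how many images are stored locally is to query processor.db directly from the host. Same caveat as the retention sketch above: the table and column names are guesses for illustration, not the processor's actual schema.

```python
# Hypothetical count of locally stored plate images in processor.db.
# "Plates" and "Jpeg" are guessed names, not the real schema.
import sqlite3

conn = sqlite3.connect("processor.db")
(stored,) = conn.execute(
    "SELECT COUNT(*) FROM Plates WHERE Jpeg IS NOT NULL"
).fetchone()
conn.close()

print(f"plate images stored locally: {stored}")
```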
 