Full ALPR Database System for Blue Iris!

I just realized it can't actually be edited like that. I'm just going to rebuild the image with the logs in it and push it - takes about 5 minutes. Then try docker compose down, docker compose pull, docker compose up - the full logs should be in there after that.
 
I saw someone asking about the 172.18.x.x IP address - that is a private IP address and might be for an Alibi camera connected to an Alibi PoE NVR. 172.16.0.0 to 172.31.255.255 are private IP addresses, like 10.x.x.x and 192.168.x.x. I didn't see anyone cover that question about the IP address.
 
Ah nice ok. I'm definitely no professional either - lots of AI generated code in my repo haha.

Can you share more about your iframe not working? If you could check the same logs and see what it says that would help me figure it out. Mine is working perfectly, so I am a little lost.


On your automation flow, are you also storing the plates in HA? If not, would it possibly make sense to do something like this: Blue Iris detects plate --> sends data to ALPR DB --> ALPR DB receives a recognition and sends all necessary data to your HA via a webhook --> HA automation logic

This is what I meant to explain in the issue. If I understand correctly, this would be a lot easier, since you can send the data once instead of making multiple back-and-forth requests, while also keeping all the logic within HA.
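To illustrate, a push from the ALPR side could look something like this. This is just a rough sketch - the webhook ID, function names, and recognition fields are all made up for the example, not taken from the actual app (HA's automation webhooks do live at /api/webhook/&lt;id&gt;, though):

```javascript
// Sketch: forward a recognition to a Home Assistant automation webhook.
// The recognition shape and webhook ID below are illustrative only.
function buildWebhookPayload(recognition) {
  return {
    plate: recognition.plate,
    camera: recognition.camera,
    confidence: recognition.confidence,
    timestamp: recognition.timestamp,
  };
}

async function notifyHomeAssistant(haBaseUrl, webhookId, recognition) {
  // HA exposes automation webhooks at /api/webhook/<id>; no auth token needed
  const url = `${haBaseUrl}/api/webhook/${webhookId}`;
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildWebhookPayload(recognition)),
  });
}
```

On the HA side, a single webhook trigger would then receive the plate, camera, and confidence in one shot, and all the announcement logic can live in the automation.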

I've just pulled the latest image and tested an iframe in HA.

Whitelisted IP: 192.168.99.1

I think it was something to do with the normalisation of the IPv4 addresses. Should be fixed in my latest commit - just testing now.

These are the logs:

Code:
Checking IP: 192.168.99.1
All received headers: {
  accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
  'accept-encoding': 'gzip, deflate',
  'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
  connection: 'keep-alive',
  dnt: '1',
  host: '192.168.20.102:3000',
  'upgrade-insecure-requests': '1',
  'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
  'x-forwarded-for': '::ffff:192.168.99.1',
  'x-forwarded-host': '192.168.20.102:3000',
  'x-forwarded-port': '3000',
  'x-forwarded-proto': 'http'
}
Raw x-forwarded-for header: ::ffff:192.168.99.1
Split IPs: [ '::ffff:192.168.99.1' ]
No session cookie block run
 
I am seeing a "Camera" column under Live ALPR Feed page, but I do not see "Camera" column under Database page

Is this a bug or expected behavior?
 
I've just pulled the latest image and tested an iframe in HA. Whitelisted IP: 192.168.99.1 ... Should be fixed in my latest commit - just testing now.

OK, so this is interesting. You're right to notice that I was applying the normalization to the config IPs instead of the one from the header. I didn't notice this because mine was still working - it seems like the includes check was still finding it in there even with the ::ffff: prefix, which makes sense. I will accept the PR, thank you.
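For reference, the fix boils down to stripping the IPv4-mapped IPv6 prefix (::ffff:) from the header value before comparing it against the whitelist, rather than touching the config IPs. A rough sketch - function names are mine, not the repo's:

```javascript
// Strip the IPv4-mapped IPv6 prefix so header IPs compare cleanly
// against a plain IPv4 whitelist (e.g. '::ffff:192.168.99.1' -> '192.168.99.1').
function normalizeIp(ip) {
  return ip.trim().replace(/^::ffff:/i, '');
}

function isWhitelisted(xForwardedFor, whitelist) {
  // x-forwarded-for may carry a comma-separated chain of hops; check each one
  const ips = xForwardedFor.split(',').map(normalizeIp);
  return ips.some((ip) => whitelist.includes(ip));
}
```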

I don't understand why it wouldn't have worked for others though. In your testing, can you confirm that you rebuilt and ran it on its own (not in the container)?

How is your HA deployed? HAOS, Docker, ..? The reason the iframe was not working for you is different from the reason it wasn't working for Vettester. We figured out via PM yesterday that HAOS requires you to enable an HTTP option in the config and explicitly allow the IP range that you would like forwarded in the header, while the Docker version (by necessity, due to the way Docker networking works) has the forwarding enabled by default.

Home Assistant Docs HTTP Config
 
I saw someone asking about the 172.18.x.x IP address - that is a private IP address and might be for an Alibi camera connected to an Alibi PoE NVR. ...

This turned out to be the IP of the bridge in the Docker network stack. HAOS strips the X-Forwarded-For header by default for security reasons, so the app was seeing the closest hop it could see, which is the bridge - a 172.18-range IP. There's no way the IP would come from the NVR, since the chain of communication is HA --> ALPR app.
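In other words, when the forwarded header is stripped somewhere along the way, the app can only fall back to its direct peer. A simplified sketch of that resolution order (illustrative, not the app's actual code):

```javascript
// Resolve the client IP: prefer the first entry in x-forwarded-for
// (the original client); otherwise fall back to the socket peer address,
// which behind a Docker bridge is the bridge gateway (e.g. 172.18.0.1).
function resolveClientIp(headers, socketAddress) {
  const xff = headers['x-forwarded-for'];
  if (xff) {
    return xff.split(',')[0].trim().replace(/^::ffff:/i, '');
  }
  return socketAddress.replace(/^::ffff:/i, '');
}
```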
 
I am seeing a "Camera" column under Live ALPR Feed page, but I do not see "Camera" column under Database page

Is this a bug or expected behavior?
Yes, this is intended. I could add a "seen by" column or something like that to show which cameras have read the plate, but in most cases the answer is "most of them", and it's not super relevant on the Database page unless you have a giant property with cameras really spread out. As of now, you can just click the plate and see all the recognitions with their corresponding cameras. This made sense to me since the information is more directly related to the recognition records, and you probably want to see it all together. I could see how storing which cameras saw a plate might be useful in a case like cameras at opposite ends of your property, or shooting down different roads: if you wanted to go back and do some deep research on a vehicle's travel patterns, you could only see as far back as your max records setting retains at the moment. For something like that, it would probably make sense to store a count for each camera too.


In the same line of thought - I had thought about adding a "protect" option, similar to BI, that will move the recognition to another table and protect it from deletion. I will do this when I switch to a better image storage solution.
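If I do add it, the per-camera counts could be as simple as an aggregate over the recognition records - a rough sketch with made-up field names, not the real schema:

```javascript
// Count how many times each camera has read a given plate.
// Record shape ({ plate, camera }) is illustrative only.
function countsByCamera(records, plate) {
  const counts = {};
  for (const r of records) {
    if (r.plate !== plate) continue;
    counts[r.camera] = (counts[r.camera] || 0) + 1;
  }
  return counts;
}
```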
 
Yes, this is intended. I could add a "seen by" column or something like that to show which cameras the plate has been read by ...
For my use case I have three LPR cameras: two on the street and one on my driveway. I want to know when certain plates are coming up my driveway for Home Assistant announcements, but I don't want HA to announce when those plates are just driving by on the street. I'm not sure I'd be able to accomplish this if the camera is not in the database, unless I am missing something.
 
OK so this is interesting. ... In your testing, can you confirm that you rebuilt and ran it on its own (not in the container)? How is your HA deployed? HAOS, Docker, ..?
Yes, I've tested both a local build and within a Docker container, and it's working for me.

My HA is HAOS running in a VM within Proxmox. I do have this in my existing HA config, as I run Nginx as an HA add-on for external access, which may be why I had a different issue.


configuration.yaml:
Code:
http:
  base_url: http://192.168.40.2:8123 # deprecated
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.30.33.0/24
    - 127.0.0.1
    - ::1

Simple test iFrame card:
Code:
type: iframe
url: http://192.168.20.102:3000

EDIT: I went back and ran main, without my fix, as a local build outside of Docker, and it works as expected... strange...
 
Yes I've tested both a local build and within a docker container and working for me. My HA is HAOS running on a VM within proxmox. ...

Thank you for sending the config. Vettester got locked out by adding the 192 IP - looks like it needs the loopbacks too. It also seems like maybe the Docker network is what needs to be allowed, not the 192 range?


Try this @Vettester - I think yours is .18, but maybe try the .30.33 if not.
Code:
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.18.0.0/16    # Docker network
    - 127.0.0.1
    - ::1
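For anyone unsure which range applies: 172.18.0.0/16 covers everything from 172.18.0.0 through 172.18.255.255, so the Docker bridge IP 172.18.0.1 falls inside it. If you want to sanity-check a range yourself, a quick CIDR membership check in plain JS (illustrative, no libraries):

```javascript
// Convert dotted-quad IPv4 to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, oct) => (acc << 8) + Number(oct), 0) >>> 0;
}

// True if `ip` falls inside the CIDR range (e.g. '172.18.0.0/16').
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}
```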
 
Try this @Vettester - I think yours is .18, but maybe try the .30.33 if not.
Hmmm… I applied Hikky_b’s fix and added the http: code to my config.yaml file in HA and it still didn’t work. However, I found a simple fix. I added my docker IP (172.18.0.1) to the whitelist. It now works like a charm.

 
Hmmm… I applied Hikky_b’s fix and added the http: code to my config.yaml file in HA and it still didn’t work. However, I found a simple fix. I added my docker IP (172.18.0.1) to the whitelist. It now works like a charm.

I think that’s going to allow login-free access for every device, not just the iframe. Not 100% sure. Could you try it and let me know?

Glad it’s working, at least. That didn’t even cross my mind.
 
I think that’s going to allow login-free access for every device, not just the iframe. Not 100% sure. Could you try it and let me know?
Yep, no login required, which is fine with me. Not sure why this needs to be secured anyway.
 
There are still plenty of threat vectors into the rest of your network opened by having a service completely without auth, even on a LAN with otherwise good security. There’s also the fact that there will be secrets stored for integration with other services. The Pushover keys might be kind of worthless, but anything for HA could be dangerous. I don’t know what else might plug in in the future, so I’m just trying to be principled and forward-thinking. Like I said before, I don’t want to be the reason anyone has a problem.

I don’t have any issue with there being the functionality to manually allow all, like you did. That actually turns out to be a pretty happy accident, imo. It doesn’t endorse or suggest it as an option, but it still gives users who understand the implications the ability to do so.

Hopefully I can create a proper Home Assistant integration once it’s at a mostly stable state.
 