Full ALPR Database System for Blue Iris!

Hmmm... I just updated to BI 5.9.9.21 and the "no stream" errors are still there.
It's not a BI change. As @algertc said, the ALPR Database project has been updated to accept a larger payload.

I was NOT getting the 12002 error, so maybe your issue is different. Try checking the log (penultimate tab on the left, just above the configure/gear icon).
 
Yes, please share the log. You might need to keep an eye on it so you can catch the entries before they rotate out of the log storage limit.

Also, since the request is so large, it will just say "object" in the log tab, so you may need to check in the container if the log doesn't offer any insight.

Sending a base64 image in the payload isn't really the greatest approach, for a number of reasons. Usually for something like this you would either generate a pre-signed URL, send it in the response, and have the client upload the file to it, or send a URL in the request and have the server download the image. I'm pretty sure BI already has support for that, so maybe that would be a better solution. It would fit in nicely with the image storage refactor.
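One concrete downside of base64-in-JSON, easy to verify: base64 emits four ASCII bytes for every three input bytes, so every snapshot grows by about a third before it even hits the request body. A quick stdlib-Python sketch (illustration only, not the project's code):

```python
import base64

# Stand-in for a typical alert snapshot (arbitrary binary data, ~256 KiB).
image_bytes = bytes(range(256)) * 1024

encoded = base64.b64encode(image_bytes)

# base64 maps every 3 input bytes to 4 output characters (~33% overhead),
# and the result still has to be JSON-escaped and parsed on the server.
overhead = len(encoded) / len(image_bytes)
print(f"raw: {len(image_bytes)} bytes, base64: {len(encoded)} bytes (x{overhead:.2f})")
```

A URL in the request (or a pre-signed upload URL in the response) keeps the JSON tiny and moves the image transfer out of band, which also sidesteps body-size limits like the one the larger-payload change addressed.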

Edit: yep, page 194 of the Blue Iris 5 manual.

I've been making everything backwards compatible so far, so right now both filesystem storage and DB storage will work, and the API route will handle either one (it's not fully done yet). There will eventually be a migration script included that you can run to automatically convert all your existing plate reads to the improved storage solution.

I'll add the ability to use the image URL macro instead, while maintaining the current functionality, so we can see what happens.
 
As I mentioned in post #215 of this thread the issue of missing plates is not due to anything with this app.

The log file doesn't show anything regarding missing plates, but I have noticed an issue with the timestamp: it is off by 5 hours.

 
#235

It definitely does look weird that it's displayed in 12-hour format when it's UTC, but I don't think it's that important; there's a complicated chain of things that factor into it. I'm going to try to work on the rest of the remaining stuff and get things fleshed out, with the goal being that people shouldn't need to look at this page very often.

If someone wants to try setting the TZ, please share whether it breaks anything.
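For anyone wondering where the 5 hours comes from: the log renders timestamps in UTC, and America/New_York is UTC-5 in mid-January (UTC-4 during DST). A minimal stdlib-Python sketch of the conversion, not the app's actual rendering code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# A plate read stored as timezone-aware UTC, as the log displays it.
stored = datetime(2025, 1, 15, 21, 21, 25, tzinfo=timezone.utc)

# Converting to the configured zone applies the UTC offset (-5h in January).
local = stored.astimezone(ZoneInfo("America/New_York"))

print(stored.strftime("%Y-%m-%d %I:%M %p %Z"))  # 2025-01-15 09:21 PM UTC
print(local.strftime("%Y-%m-%d %I:%M %p %Z"))   # 2025-01-15 04:21 PM EST
```

Both lines describe the same instant; only the rendering differs, which is why the raw data is fine even when the display looks shifted.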

—-
+ Mac user W :cool:
 
Web error 12002 indicates that a request has timed out, so maybe those failed calls aren't even reaching the server?

Try watching the BI log and the app log simultaneously.
 
I just noticed that the roadmap thing is requiring login to post. I thought I set it to allow anonymous posts so you don’t have to log in. Doesn’t seem to be working right.

So thank you if you made an acc just to post :)
 
I just used my GitHub account to log in.
 
I've tried setting TZ, but it doesn't seem to have any effect on anything; the log is still UTC. Are you able to make it change using TZ?
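One likely reason TZ seems to do nothing: on Linux, a process reads the TZ environment variable when it starts, so adding it to the compose file only takes effect after the containers are recreated, and each service needs its own copy. A stdlib-Python sketch of the underlying libc behavior (`time.tzset()` is Unix-only):

```python
import os
import time

# A process inherits TZ at startup; tzset() forces the C library to
# re-read it, which is roughly what recreating a container achieves.
os.environ["TZ"] = "UTC"
time.tzset()
print(time.strftime("%Z"))  # UTC

os.environ["TZ"] = "America/New_York"
time.tzset()
print(time.strftime("%Z"))  # EST or EDT, depending on the date
```

So if TZ appears to be ignored, try `docker compose up -d --force-recreate` rather than editing the environment of an already-running container.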
 
Buttons added in the image viewing modal.
Date range filter fixed

The calendar selection component isn't the greatest and is tricky to use if you want to adjust the start date after already selecting one. The way to do it is to click the end date you have already selected; that will become the new start date.
 
I made the below TZ adds and the time zone is correct in the log.

YAML:
version: "3.8"
services:
  app:
    image: algertc/alpr-dashboard:latest
    restart: unless-stopped
    ports:
      - "3000:3000" # Change the first port to the port you want to expose
    environment:
      - NODE_ENV=production
      - ADMIN_PASSWORD=password # Change this to a secure password
      - DB_PASSWORD=password # Change this to match your postgres password
      - TZ=America/New_York
    depends_on:
      - db
    volumes:
      - app-auth:/app/auth
      - app-config:/app/config
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "3"

  db:
    image: postgres:13
    restart: unless-stopped
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password # Change this to a secure password
      - TZ=America/New_York
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
      - ./migrations.sql:/migrations.sql

    # Make sure you download the migrations.sql file if you are updating your existing database. If you changed the user or database name, you will need to plug that in in the command below.
    command: >
      bash -c "
        docker-entrypoint.sh postgres &
        until pg_isready; do sleep 1; done;
        psql -U postgres -d postgres -f /migrations.sql;
        wait
      "
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  db-data:
  app-auth:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./auth
  app-config:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./config
 
Is this the same as what you guys had, @Vettester @VideoDad?

Did it break anything else?
 
I was looking through the blue iris docs and there are a few interesting things that came to mind that are possible.

1. Add an "Open in Blue Iris" action button on the live feed that takes you straight to Blue Iris, opened on the alert clip.
2. Correcting a plate could also correct the memo in Blue Iris.
3. Protecting a plate could also protect the clip in Blue Iris.

I have chatted with Mike a bit about CPAI and the idea of a built-in way to create and share training data from people's setups. Thinking about this now, I realize that the correct-plate-read functionality is a prime situation to do something similar: if you are already relabeling a plate, you might as well collect them all to improve the model. There is the issue of the box if you have the burn-in box enabled, but I'm sure I could find a workaround.
 
I had only set the TZ in one section. Once I put it into both sections, my log showed local time, and as far as I can tell the times elsewhere still seem to be correct, without shifting. Thanks!
It doesn't make sense to me how changing the database timezone would make something that has nothing to do with it work properly. Must be some Docker quirk with them being grouped together.