Solving some NFS issues on Hikvision cameras

eckhart

n3wb
Joined
Jun 18, 2015
Messages
17
Reaction score
2
Is there no solution to the WD MyCloud NAS problems with Hikvision cameras? I too am stuck with a 4TB unit that I had planned to use as a NAS for 3-4 cameras. I can find no way to set quotas by user or share. The drive is supposed to support NAS use, but it will only pass the test with SMB/CIFS, and after it formats it reverts to Uninitialized. Is the problem with the camera, the drive, or both? Is a firmware fix likely for either? This is my first camera, and it was working fine with an SD card. I'm not knowledgeable enough to understand some of the solutions proposed in this thread. Thanks,
 

tate16t

Young grasshopper
Joined
Jul 18, 2015
Messages
94
Reaction score
9
Hey guys,

I'm considering purchasing 2x Hik DS-2CD2032s (I hope this is a good choice). I have an older Netgear ReadyNAS Ultra 4. Is anyone using this combination?

Also, what software are most of you using to control recording to the NAS and viewing remotely?

Is the size limitation issue resolved now or still present? Should I consider an NVR?

Thanks!
 

porkhunt

n3wb
Joined
Aug 5, 2015
Messages
3
Reaction score
3
Hi all,

Not sure if this helps anyone, but I solved my issue without quotas by using mounted image files instead. My reason for this was that you can't set up quotas on a folder unless you're running a ZFS filesystem. As I'm running ext3, I needed to do something else to carve my RAID array into usable sizes for the cameras.

Here is an example of what I did:

## Create an image file of your preferred size
fallocate -l 50G /data/security/patio.img

## Format the image file as ext3 (-F forces mkfs to work on a non-block-device)
mkfs -t ext3 -F -q /data/security/patio.img

## Add a loop mount to /etc/fstab so the image is mounted at the directory you export over NFS
## (the mount point /data/security/patio must already exist)
/data/security/patio.img /data/security/patio ext3 rw,loop,usrquota,grpquota 0 0

The camera then picks up your NFS share as whatever size you made the image file (50GB in this instance), not the whole size of the array (8TB in my case). Now when I format the HDD it initializes perfectly and records clips to the NAS. Happy days! Hope this helps someone else.
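
One thing I glossed over: the loop-mounted directory still has to be exported over NFS for the camera to see it. A minimal sketch of the /etc/exports entry, using the mount point from my example and 192.168.1.64 as a placeholder for your camera's IP:

## /etc/exports - export the loop-mounted directory to the camera
/data/security/patio 192.168.1.64(rw,sync,no_subtree_check)

## Reload the export table after editing
exportfs -ra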
 

alastairstevenson

Staff member
Joined
Oct 28, 2014
Messages
15,930
Reaction score
6,778
Location
Scotland
That's an interestingly different and potentially useful approach.
Was this on a full Linux installation, or on a NAS box?
Was fallocate available 'out of the box' or via an option?
 

porkhunt

n3wb
Joined
Aug 5, 2015
Messages
3
Reaction score
3
My install is a full Ubuntu 14.04 install. I don't have a dedicated NAS; I run a software RAID 5 array in my server using mdadm.

If your NAS doesn't have 'fallocate' it should still have 'dd'. I only used 'fallocate' because the syntax is a little easier, but 'dd' can do the same job. For 'dd' the command would be something like this for a 50GB file:

## count=0 with a seek just sets the file size, creating a 50GB sparse file
dd if=/dev/zero of=LargeTestFile.img bs=1M count=0 seek=$((50*1024))
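
Note that count=0 with a seek makes the file sparse - it takes up almost no real disk space until the camera writes to it. If you want to confirm that, GNU du (if your box has it) can show apparent vs. allocated size:

## Apparent size (what the camera sees) vs. blocks actually allocated
du -h --apparent-size LargeTestFile.img
du -h LargeTestFile.img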


 

alastairstevenson

Staff member
Joined
Oct 28, 2014
Messages
15,930
Reaction score
6,778
Location
Scotland
I think you are right about dd - every box should have one! I wonder if that applies to the WD My Cloud and Netgear ReadyNAS that have been mentioned on this thread.
On that Ubuntu, don't you have the 'LVM' logical volume manager, giving you storage pools and flexible volume allocation within them?
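
Something along these lines, perhaps - a rough sketch only, with /dev/md0 standing in for whatever the RAID array appears as:

## Rough sketch - /dev/md0 is a placeholder for the RAID device
pvcreate /dev/md0
vgcreate cams /dev/md0
## One modest logical volume per camera
lvcreate -L 50G -n patio cams
mkfs -t ext3 /dev/cams/patio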
 

porkhunt

n3wb
Joined
Aug 5, 2015
Messages
3
Reaction score
3
Yes, that is also an option. My software RAID is pretty old and has grown with new disks. For fear of permanently destroying my data, I've been too scared to risk an upgrade to a more feature-rich filesystem. ;)
 

PEM

n3wb
Joined
Aug 5, 2015
Messages
1
Reaction score
0
I haven't tried this on 5.2.5 firmware - just 5.2.0 - but there is a limit on the total space of somewhere between 200 and 250GB. Go higher than that ceiling and the symptoms you described occur.
Is there any way (e.g. thin-provisioned volumes as opposed to dedicated partitions) for you to limit the available volume size (not free space) for the camera share?

*Edit* I checked the user manual. There seems to be no flexibility in how storage can be provisioned - no user quotas, no partitioning, no flexible volumes. Nothing that would provide a suitable NAS target for the way the Hikvision camera storage works.
Most NAS devices allow quotas or flexible volumes. This WD model seems very basic indeed.
I have v5.2.5 firmware with FreeNAS and it still limits the space to approximately 200GB per share. Does anyone know if this limit has been raised since the release of the v5.3.0 firmware?
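
For what it's worth, on FreeNAS a ZFS dataset quota should cap the volume size the camera sees - a sketch, with 'tank' and 'ipcam01' as placeholder names:

## Create a dataset per camera and cap the size it reports
zfs create tank/ipcam01
zfs set quota=200G tank/ipcam01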
 

jimmyt

Getting the hang of it
Joined
Sep 12, 2014
Messages
101
Reaction score
4
I got a 400GB share to initialize (FreeNAS 9.3) on 5.3.0 and the camera shows it as available. It also survived a reboot - so maybe there has been some progress :)
 
Last edited by a moderator:

Deconomist

n3wb
Joined
Aug 8, 2015
Messages
1
Reaction score
0
Code:
# df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                    7.9M      6.5M      1.4M  83% /
/dev/root                 7.9M      6.5M      1.4M  83% /
udev                     46.6M     80.0K     46.5M   0% /dev
/dev/ubi1_0              19.8M     10.2M      8.6M  54% /dav
/dev/ubi3_0               1.3M    108.0K      1.1M   9% /davinci
/dev/ubi4_0               1.3M     76.0K      1.1M   6% /config
10.42.0.1:/mnt/sdb1/ipcam001
                        917.1G    326.8G    543.7G  38% /mnt/nfs00
OK, same problem. Can I fix it?
I am running into this same issue, using a Lenovo ix2-dl NAS with 3TB of storage.
Code:
# df -h
Filesystem                Size      Used Available Use% Mounted on
rootfs                    7.9M      6.5M      1.4M  83% /
/dev/root                 7.9M      6.5M      1.4M  83% /
udev                     46.6M     80.0K     46.5M   0% /dev
/dev/ubi1_0              19.8M     12.5M      6.3M  67% /dav
/dev/ubi3_0               1.3M    120.0K      1.1M  10% /davinci
/dev/ubi4_0               1.3M    112.0K      1.1M   9% /config
//192.168.1.82/Cam01      2.7T    677.4G      2.0T  25% /mnt/nfs00
I set the quota for the user (Cam01), which only has write access to a share of the same name, at 200GB. I'm using the camera software to "format" the HDD now, but it already failed once and I expect the same result in the morning. Any suggestions?
 

sfryer

n3wb
Joined
Aug 13, 2015
Messages
4
Reaction score
0
Hey!

Has anybody found a solution for NAS SMB/CIFS not working with 5.3?

Thanks
Simon
 

sfryer

n3wb
Joined
Aug 13, 2015
Messages
4
Reaction score
0
OK, I finally got my NAS working with 5.3 by simply reformatting the NAS, so that the test at firmware reconnection passed:

"When the firmware tries to reconnect to the NFS share, it assumes that the full drive capacity (not the free space) reported by the NFS server can be fully utilized."
 

davehope

n3wb
Joined
Aug 8, 2015
Messages
12
Reaction score
2
This is a bit of a long shot, but has anyone tried to reverse-engineer the "info.bin" file that gets created in the root of NFS shares? This seems unlikely to solve the NFS issues, but it's something I've been working on. Here's where I've got to:

Code:
typedef struct {
    CHAR SerialNumber[48]; // SERIALNO_LEN=48
    BYTE MACAddr[6];       // MACADDR_LEN=6

    BYTE unknown[2];
    INT  f_bsize;          // create_info_file (f_bsize)
    INT  f_blocks;         // create_info_file (f_blocks)
    INT  DataDirs;
} infoFormat;
Can't for the life of me figure out what the two unknown bytes are.
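For what it's worth, they could simply be compiler padding so the following INTs land on a 4-byte boundary (48 + 6 = 54, padded up to 56) - just a guess. If anyone wants to poke at their own info.bin, the two bytes should sit at offset 54, assuming no padding before that point:

Code:
## Dump the two unknown bytes at offset 54 (after the 48-byte serial and 6-byte MAC)
xxd -s 54 -l 2 info.bin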
 
Last edited by a moderator:

Sylv_01

n3wb
Joined
Sep 10, 2015
Messages
10
Reaction score
0
Hello,
I have a question about NFS NAS access: I'm using a DS-2CD2132F-IWS cam with 5.2.5 firmware, and I have a 2TB MyBookLive NAS that stores movies, pictures, documents and so on.
On the NAS I have created a dedicated share directory for my cam, and I can access it over NFS from the camera's storage configuration (the test passes).
Although I have a lot of movies, pictures and documents on the NAS in their own directories, the camera's web-admin disk management shows the entire NAS capacity as free for use!
So if I format, I'm afraid of erasing the whole NAS, including all the existing files!!
Does anybody have this kind of setup, or can anyone explain what would happen, please?
Thanks in advance...
 

alastairstevenson

Staff member
Joined
Oct 28, 2014
Messages
15,930
Reaction score
6,778
Location
Scotland
The format is a bit scarily named - it's not a format as we usually know it where existing data is wiped.
What it does is create a framework of placeholder filenames and indexes.
And it will only do this in the share that you have given it access to.
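
If you're curious, you can see that framework from the NAS side after a 'format' - something like this, with /path/to/share standing in for your actual share:

## Count the placeholder directories and files the camera laid down
find /path/to/share -maxdepth 1 -type d | wc -l
find /path/to/share -type f | wc -l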
 

Sylv_01

n3wb
Joined
Sep 10, 2015
Messages
10
Reaction score
0
Hello,
thanks a lot, it's true: nothing was deleted on my NAS!
My only remark is that it's a shame there is no way to set the amount of data we want to use...
My NAS (2TB) is 80% empty, so the cam has created the framework across the whole of that empty area: 15 directories with 324 files!
 

Sylv_01

n3wb
Joined
Sep 10, 2015
Messages
10
Reaction score
0
Hello,
so I have the same problem as the guys with a WD NAS: the capacity of the NAS is too big for the cam (1.5TB free), and I can't set quotas or create fixed-capacity directories... After a few minutes, my cam reports "not initialized" for the NAS HDD...
 

Yostie

n3wb
Joined
Oct 13, 2015
Messages
13
Reaction score
1
Anyone had any success with 5.3.3 and recording to a Synology NAS? Mine initializes, but then goes offline straight away.
 