database - FILESYSTEM vs SQLITE, while storing up to 10M files


I would like to store up to 10M files on a 2TB storage unit. The only properties I need are the filenames and their contents (data).

The maximum file size is 100 MB, and most files are less than 1 MB. I need the ability to delete files, and both write and read speed should be a priority - while low storage overhead, recovery or integrity features are not required.

I thought about NTFS, but most of its features are not needed, yet cannot be disabled and are an overhead concern. A few of them are: creation date, modification date, attributes, the journal and of course permissions.

Since the native features of a filesystem are not needed, would you suggest I use SQLite for this requirement? Or is there an obvious disadvantage I should be aware of? (One would guess that deleting files would become a complex task?)

(SQLite would be used via the C API)
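For context, a minimal sketch of what storing a file as a single row (filename plus contents as a BLOB) could look like through the SQLite C API; the table name `files` and its schema are assumptions for illustration, not something prescribed here:

```c
#include <stdio.h>
#include <sqlite3.h>

/* Hypothetical helper: store one file's name and contents as a BLOB row.
   Assumes a table created as:
   CREATE TABLE files(name TEXT PRIMARY KEY, data BLOB); */
static int store_file(sqlite3 *db, const char *name, const void *data, size_t len)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO files(name, data) VALUES(?1, ?2)",
        -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
    sqlite3_bind_blob(stmt, 2, data, (int)len, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);          /* SQLITE_DONE on success */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}
```

Deleting in this scheme would be a plain `DELETE FROM files WHERE name = ?1`; note that SQLite only marks the freed pages for reuse inside the database file, and the file itself shrinks only after a `VACUUM`.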

My goal is to use a more suitable solution to gain performance. Thanks in advance - Doori Bar

If your main requirement is performance, go with the native file system. A DBMS is not well suited to handling large BLOBs, so SQLite is not an option for you at all (I don't even know why everybody considers SQLite a plug for every hole).

To improve the performance of NTFS (or any other filesystem you choose), don't put all files in a single folder; instead group files into subdirectories by the first N characters of their file names, or by extension.
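As an illustration only (the directory layout and the two-character prefix are arbitrary assumptions; POSIX `mkdir` is used for brevity, on Windows the equivalent call is `CreateDirectory`), a sketch of deriving such a bucketed path from a file name:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Hypothetical example: place "report2024.bin" under "store/re/report2024.bin",
   using the first two characters of the name as a subdirectory bucket. */
static int bucketed_path(const char *root, const char *name, char *out, size_t outlen)
{
    char prefix[3] = { name[0] ? name[0] : '_',
                       name[0] && name[1] ? name[1] : '_',
                       '\0' };

    /* Build and create the bucket directory (ignore error if it exists). */
    if (snprintf(out, outlen, "%s/%s", root, prefix) >= (int)outlen)
        return -1;
    mkdir(out, 0755);

    /* Full path of the file inside its bucket. */
    if (snprintf(out, outlen, "%s/%s/%s", root, prefix, name) >= (int)outlen)
        return -1;
    return 0;
}
```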

In addition, there are other filesystems available, and some of them may let you disable the features you don't use. You could check them out.

Correction: I have done some tests (though not very extensive) which showed no benefit from grouping files into subdirectories for any of the operations tested, and NTFS efficiently handled 26^4 empty files named AAAA through ZZZZ in the same directory. So you have to check the efficiency for your particular filesystem.
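For anyone who wants to repeat that kind of check on their own filesystem, a rough sketch (no timing, minimal error handling, meant to be run in a scratch directory) of creating the 26^4 = 456,976 empty files named AAAA through ZZZZ:

```c
#include <stdio.h>

/* Create 26^4 empty files named AAAA..ZZZZ in the current directory,
   mirroring the test described above. */
int main(void)
{
    char name[5] = "AAAA";
    for (int a = 0; a < 26; a++)
        for (int b = 0; b < 26; b++)
            for (int c = 0; c < 26; c++)
                for (int d = 0; d < 26; d++) {
                    name[0] = 'A' + a; name[1] = 'A' + b;
                    name[2] = 'A' + c; name[3] = 'A' + d;
                    FILE *f = fopen(name, "wb");   /* empty file */
                    if (!f) { perror(name); return 1; }
                    fclose(f);
                }
    return 0;
}
```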
