-
I have personally only had bad experiences with this type of file, so I am definitely biased against it. Here are some examples:
These may be preventable, but I still feel that when it comes to archiving data, you don't want all of your files in the same place. The more centralized the data is, the more vulnerable it becomes; on the other hand, it also becomes easier to work with. So it all depends on your archiving philosophy over time and what your purpose is.
-
I would be curious about interest in moving this tool from a text-based storage and tracking system, spread across a massive quantity (for some of us) of per-blog index text files, to a more robust SQLite database file.
Recovering from an issue, migrating, and aggregating multiple scattered points of data into a single collated set of tables would offer considerable advantages over the current hierarchical parse-and-scan of parent and sub-folders, where every blog needs its data parsed from a text file.
It would also expedite moving or migrating an instance: the current environment path is saved hard-coded in each blog's index, meaning that in a recovery situation we are looking at manually editing what could be a vast amount of file paths. I also suspect it would give a considerable speed boost when reloading, loading, and importing large quantities of blogs.
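To make the path-migration point concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names (`blogs`, `base_path`) are hypothetical, not from the tool itself; the point is that rewriting every blog's saved path becomes a single UPDATE instead of hand-editing one text file per blog:

```python
import sqlite3

# Hypothetical schema: one row per blog, holding its saved base path.
con = sqlite3.connect(":memory:")  # would be the collated index.db on disk
con.execute("CREATE TABLE blogs (name TEXT PRIMARY KEY, base_path TEXT)")
con.executemany(
    "INSERT INTO blogs VALUES (?, ?)",
    [("alpha", "/old/env/archive/alpha"), ("beta", "/old/env/archive/beta")],
)

# Moving the instance to a new environment is one statement,
# not a manual edit of every per-blog index file.
con.execute(
    "UPDATE blogs SET base_path = REPLACE(base_path, '/old/env', '/new/env')"
)
paths = [row[0] for row in con.execute("SELECT base_path FROM blogs ORDER BY name")]
print(paths)  # ['/new/env/archive/alpha', '/new/env/archive/beta']
```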
In short: index.db as a collated SQLite file, nix the per-blog index text files, and move to a record-per-blog instead of a config-file-per-blog methodology.
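As a rough illustration of the record-per-blog idea, a collated index.db might look something like this. This is only a sketch under assumed requirements; the table and column names are hypothetical, not the tool's actual format:

```python
import sqlite3

# One row per blog (replacing the per-blog config/index text files)
# plus one row per saved post.
con = sqlite3.connect(":memory:")  # would be "index.db" on disk
con.executescript("""
CREATE TABLE blogs (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE NOT NULL,
    base_path TEXT NOT NULL         -- where this blog's files live
);
CREATE TABLE posts (
    blog_id  INTEGER NOT NULL REFERENCES blogs(id),
    post_id  TEXT NOT NULL,
    saved_at TEXT,
    PRIMARY KEY (blog_id, post_id)  -- also serves as the dedup index
);
""")

# Aggregating across every blog becomes a single query instead of
# parsing and scanning each per-blog text file in turn.
con.execute("INSERT INTO blogs (name, base_path) VALUES ('alpha', '/archive/alpha')")
con.execute("INSERT INTO posts (blog_id, post_id) VALUES (1, 'p1'), (1, 'p2')")
total = con.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(total)  # 2
```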