

1.6 How much

In order to prevent denial of service, we do not use a centralised database, but plain text files spread over all servers using GIT.

Consequently, parsers need an amount of CPU and memory proportional to the size of these files (whereas databases without indexes do not). The generated HTML catalogue also requires much more disk space than a dynamic web site does (moreover, a limitation may come from the number of available inodes on the partition where the HTML catalogue is stored).
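
For instance, before regenerating the catalogue one may want to check that the target partition still offers enough free inodes. The following sketch is not part of MEDIATEX; it only illustrates the point, using a per-archive inode ratio read from the measurements given below.

#!/usr/bin/env python3
# Illustrative pre-flight check, not part of MEDIATEX: verify that the
# partition which will hold the HTML catalogue still offers enough free
# inodes before (re)generating it for a collection of a given size.

import os
import sys

# Empirical ratio read from the largest measurement in the table below:
# 1,105,445 HTML inodes for 495,383 archives, i.e. roughly 2.2 per archive
# (smaller collections are closer to 3 inodes per archive).
INODES_PER_ARCHIVE = 1_105_445 / 495_383

def enough_inodes(path, nb_archives):
    """Return True if the filesystem holding 'path' has enough free inodes."""
    st = os.statvfs(path)
    needed = int(nb_archives * INODES_PER_ARCHIVE)
    print(f"free inodes: {st.f_favail:,}  estimated need: {needed:,}")
    return st.f_favail >= needed

if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: check_inodes.py <catalogue directory> <nb archives>")
    sys.exit(0 if enough_inodes(sys.argv[1], int(sys.argv[2])) else 1)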

All in all, the MEDIATEX system is not designed to handle collections with more than half a million archives (whereas databases easily handle millions). It should handle several such “not so big” collections, but not too many either.

The following tests are based on the GIT upgrade plus HTML catalogue generation, which is the most resource-consuming query (and which implies parsing most of the meta-data files). It gives an idea of the resources involved (size on disk, amount of memory and CPU time).

 archives    GIT     RAM    HTML   HTML inodes    time
   27,550    30M     74M    357M        88,717    1’06
   54,950    59M    132M    598M       148,294    1’47
   82,398    88M    191M    840M       207,985    3’25
  110,006   118M    251M    1.1G       268,029    3’18
  137,561   147M    310M    1.3G       328,002    5’21
  165,104   177M    371M    1.6G       387,836    5’11
  192,771   207M    432M    1.8G       447,971    5’47
  220,346   237M    493M    2.1G       507,945    5’18
  247,861   267M    553M    2.3G       567,664
  302,912   326M    674M    2.8G       687,278    8’31
  330,371   356M    735M    3.0G       746,934
  358,005   386M    796M    3.2G       807,007   10’43
  385,425   416M    856M    3.5G       866,551   11’17
  412,848   446M    916M    3.7G       926,102   12’29
  440,405           977M    3.9G       985,990
  467,899   505M   1038M    4.2G     1,045,755   23’37
  495,383   535M   1098M    4.4G     1,105,445   14’33
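
Assuming the near-linear growth visible above, these figures can be scaled to estimate the cost of a collection of another size. The following sketch is not part of MEDIATEX; it simply extrapolates from the largest measured row of the table.

#!/usr/bin/env python3
# Back-of-the-envelope extrapolation (not part of MEDIATEX): scale the
# largest measurement of the table above linearly to estimate the cost
# of a collection of a given size.

# Reference row from the table: 495,383 archives.
REF = {
    "archives": 495_383,
    "git_mb": 535,          # GIT working copy on disk
    "ram_mb": 1098,         # memory used while parsing
    "html_gb": 4.4,         # generated HTML catalogue on disk
    "html_inodes": 1_105_445,
}

def estimate(nb_archives):
    """Linearly scale the reference measurement to nb_archives archives."""
    factor = nb_archives / REF["archives"]
    return {
        "git_mb": REF["git_mb"] * factor,
        "ram_mb": REF["ram_mb"] * factor,
        "html_gb": REF["html_gb"] * factor,
        "html_inodes": int(REF["html_inodes"] * factor),
    }

if __name__ == "__main__":
    for key, value in estimate(250_000).items():
        print(f"{key}: {value:,.1f}" if isinstance(value, float)
              else f"{key}: {value:,}")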

Notice:

