I suppose you have a Linux laptop or desktop on which your hugearchive.tgz file sits on some local disk (not on a remote network filesystem, which could be too slow). If possible, put that hugearchive.tgz file on a fast disk (preferably an SSD, not a magnetic rotating hard disk) with a fast Linux-native file system (Ext4, XFS, BTRFS; not FAT32 or NTFS). A .tgz file is a gzip-compressed tar archive.

Next time you get a huge archive, consider asking for it in afio archive format, which has the big advantage of compressing not-too-small files individually (or perhaps ask for some SQL dump, e.g. for PostgreSQL or Sqlite or MariaDB, in compressed form); a sketch of afio usage appears at the end of this answer.

First, you should make a list of the file names in that hugearchive.tgz gzipped tar archive and ask for the total count of bytes:

tar -tzv --totals -f hugearchive.tgz > /tmp/hugearchive-list.txt

That command runs gunzip to uncompress the .tgz file through a pipe (so it won't consume a lot of disk space), writes the table of contents into /tmp/hugearchive-list.txt, and prints on stderr something like:

Total bytes read: 340048000 (331MiB, 169MiB/s)

Of course the figures are fictive; you'll get much bigger ones. But you'll know the total cumulated size of the archive, and you'll have its table of contents.

Use wc -l /tmp/hugearchive-list.txt to get the number of lines in that table of contents, which is the number of files in the archive, unless some files are weirdly and maliciously named (with e.g. a newline in their filename, which is possible but weird).

Then you can decide whether you are able to extract all the files or only some of them, since you know how much total space they need. And since you have the table of contents in /tmp/hugearchive-list.txt, you can easily extract only the useful files, if so needed. The sketches below show one way to total the sizes, check free space, and extract selectively.

My guess is that you'll process your huge archive in less than one hour. Details depend on the computer, notably the hardware (if you can afford it, use an SSD, and get at least 8 GB of RAM). For what it is worth, on my i3770K desktop with 16 GB of RAM and both SSD and disk storage, I made a useless huge archive for experimenting, specifically for the purpose of answering this question (since I don't have your hugearchive.tgz file).
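If you want the cumulated uncompressed size without rereading the archive, you can total the size column of the listing you already made. This is a minimal sketch, assuming GNU tar's verbose output puts the member size in the third column (true for regular files, though device nodes show major,minor numbers there instead):

```sh
# Sum the sizes (3rd column of GNU tar's verbose listing) and report
# the total; assumes /tmp/hugearchive-list.txt came from the
# tar -tzv --totals command above.
awk '{ total += $3 }
     END { printf "%.0f bytes (%.2f GiB) in %d entries\n",
                  total, total / (1024 * 1024 * 1024), NR }' \
    /tmp/hugearchive-list.txt
```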
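Before extracting everything, compare that total against the free space on the target file system; df can tell you. The path below is a placeholder for wherever you plan to extract:

```sh
# Show free space (human readable) on the file system holding the
# extraction target; /path/to/extraction-dir is a placeholder.
df -h /path/to/extraction-dir
```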
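To extract only some of the files, grep the table of contents for what you need and pass the member paths to tar. A sketch with hypothetical paths; member names must match the listing exactly unless you use a pattern:

```sh
# Find interesting members in the table of contents (hypothetical pattern)
grep 'interesting/subdir/' /tmp/hugearchive-list.txt

# Extract a single member by its exact path as listed
tar -xzvf hugearchive.tgz interesting/subdir/somefile.txt

# Or extract by pattern; GNU tar supports --wildcards for member names
tar -xzvf hugearchive.tgz --wildcards 'interesting/subdir/*.txt'
```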
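As for the afio suggestion above, here is a rough sketch of how such an archive is typically made and unpacked; the directory and archive names are placeholders, and you should check afio(1) on your system since options vary between versions:

```sh
# Create an archive, gzip-compressing each file individually (-Z),
# so one corrupted region damages one file rather than the whole archive
find mydata -print | afio -oZ /tmp/mydata.afio

# List the table of contents, then extract ("install") the files
afio -tvZ /tmp/mydata.afio
afio -iZ /tmp/mydata.afio
```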