I am using aria2c to download a .tar.bz2 and trying to extract it in the same command. I can download the file but not extract it. I can also manually extract the tar.bz2, but not in the same command. Thank you.
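A minimal sketch of chaining the two steps: aria2c cannot stream its download to a pipe, so the archive must land on disk first, and the extract is chained with && so it only runs after a successful download. The URL, filename, and /tmp paths below are placeholders.

```shell
# Download with aria2c, then extract only if the download succeeded.
# -d sets the download directory, -o the output filename.
aria2c -d /tmp -o archive.tar.bz2 "https://example.com/archive.tar.bz2" \
  && tar -xjf /tmp/archive.tar.bz2 -C /tmp
```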
---------- Post updated at 05:12 PM ---------- Previous update was at 05:12 PM ----------
Hi,
could anybody tell me how to extract .tar.bz2 files?
I tried using tar, but in vain.
I found bzip2 while googling, but I could not find it on my Tru64 UNIX machine.
Please suggest. (1 Reply)
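On older systems such as Tru64, tar often lacks the -j flag. The classic two-step pipe below works with any tar, assuming a bzip2 binary is on the PATH (the archive name is a placeholder); if bzip2 itself is missing, it would have to be installed or built first.

```shell
# Decompress to stdout with -dc, then pipe the raw tar stream into tar:
bzip2 -dc archive.tar.bz2 | tar -xvf -
```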
Hello,
I am using a bash script to archive directories of text files located in ${root}:
tar cf ${root}.tar ${root}*
bzip2 ${root}.tar
I'd like to compare the newly produced archive two.tar.bz2 with the second-latest one.tar.bz2.
cmp one.tar.bz2 two.tar.bz2
returns
one.tar.bz2 two.tar.bz2... (2 Replies)
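One thing worth ruling out: the compressed bytes depend on the tar member headers (mtimes, file ordering), not just file contents, so cmp on the .bz2 files can report a difference even for identical trees. A sketch that strips the compression layer before comparing (archive names as above); any difference that remains comes from the tar streams themselves.

```shell
# Decompress both archives and compare the raw tar streams:
bzcat one.tar.bz2 > one.tar.stream
bzcat two.tar.bz2 > two.tar.stream
cmp one.tar.stream two.tar.stream && echo "identical tar streams"
```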
Hi,
I am trying to unpack and install a .tar.bz2 library.
I was told to cd /
and then tar -jxvf /source-of-library-file?...tar.bz2
to get files unpacked and installed into /
Darius
$ pwd
/
$
$ tar -jxvf /tmp/local/root/ncurses-dev-addon.tar.bz2
ncurses-dev-addon/... (3 Replies)
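An equivalent to the cd / step, using tar's -C option to set the extraction directory directly (same archive path as in the transcript above):

```shell
# -C switches tar into the target directory before extracting,
# so no separate cd is needed:
tar -xjf /tmp/local/root/ncurses-dev-addon.tar.bz2 -C /
```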
Hi experts,
I have two tar.bz2 files:
a.tar.bz2 and b.tar.bz2
I want to put a.tar.bz2 into b.tar.bz2.
E.g.: b.tar.bz2 contains only one file, named "b.c";
I want it to contain "b.c and a.tar.bz2".
I don't want to decompress b.tar.bz2 to achieve this. I tried with "cat a.tar.bz2 >>... (1 Reply)
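tar's append mode (-r) only works on uncompressed archives, so with standard tar/bzip2 there is no way to add a member without touching the compression layer. The usual round-trip, sketched with the file names from the post (note this decompresses only the bzip2 layer, not the files inside):

```shell
bunzip2 b.tar.bz2           # b.tar.bz2 -> b.tar (remove compression layer)
tar -rf b.tar a.tar.bz2     # append a.tar.bz2 as a member alongside b.c
bzip2 b.tar                 # recompress back to b.tar.bz2
```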
Does anyone know a reliable source to download firefox-19.0.2.tar.bz2 from? I would think you'd be able to download it from Firefox or Mozilla somewhere. I haven't gotten anything useful from my Google searches. (2 Replies)
While extracting a tar.bz2 file using the command
tar xjf git.tar.bz2
I received error messages showing "Cannot hard link to" and "Cannot create symlink to".
What would be the reason for those error messages? (4 Replies)
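Those messages usually mean the extraction target cannot hold the link types stored in the archive (FAT/NTFS mounts and some network filesystems support neither hard links nor symlinks), or that permissions forbid creating them. A quick check of the target's filesystem type, assuming a Linux df:

```shell
# Show the filesystem type of the current (extraction) directory;
# vfat/ntfs/cifs here would explain the link errors:
df -T .
```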
I am using the below aria2c command to call an API that downloads a file to a directory. Though the process completes and I get "download complete", aria2c does not exit to the command line; I have to hit Enter to get the prompt back. I am not sure what is wrong. Am I missing something in... (0 Replies)
In the bash below, each .tar.bz2 (usually 2) is extracted and then the original .tar.bz2 is removed. However, only one (presumably the first extracted) is being removed, though both are extracted. I am not sure why this is? Thank you :).
tar.bz2 folders in /home/cmccabe/Desktop/NGS/API
... (3 Replies)
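Without the full script (the listing above is truncated) it's hard to say, but a common cause is removing with a single command after the loop instead of per archive. A per-archive sketch, using the directory named above, that deletes each archive only after its own extraction succeeds:

```shell
dir=/home/cmccabe/Desktop/NGS/API   # directory from the post
for f in "$dir"/*.tar.bz2; do
    [ -e "$f" ] || continue         # no archives matched: skip cleanly
    # Remove each archive only if its extraction returned success:
    tar -xjf "$f" -C "$dir" && rm -f -- "$f"
done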
The below bash will untar each tar.bz2 archive in the directory, then remove the tar.bz2.
Each of the tar.bz2 archives ranges from 40-75GB and currently takes ~2 hours to extract. Is there a way to speed up the extraction process?
I am using a Xeon processor with 12 cores. Thank you :).
... (7 Replies)
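bzip2 decompression is single-threaded, so 11 of the 12 cores sit idle. If a parallel implementation such as lbzip2 or pbzip2 is installed (an assumption; neither ships everywhere by default), tar can be pointed at it via --use-compress-program, with plain bzip2 as a fallback; the archive name is a placeholder.

```shell
# Pick a parallel bzip2 if one is on the PATH, else fall back to bzip2:
decomp=$(command -v lbzip2 || command -v pbzip2 || echo bzip2)
# Tell GNU tar to use it as the decompression filter:
tar -x --use-compress-program="$decomp" -f big.tar.bz2
```

Note the tar stream itself is still consumed serially, so the speedup is bounded by how much of the 2 hours is decompression rather than disk I/O.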
Discussion started by: cmccabe
LEARN ABOUT DEBIAN
httpindex
httpindex(1) General Commands Manual httpindex(1)
NAME
httpindex - HTTP front-end for SWISH++ indexer
SYNOPSIS
wget [ options ] URL... 2>&1 | httpindex [ options ]
DESCRIPTION
httpindex is a front-end for index++(1) to index files copied from remote servers using wget(1). The files (in a copy of the remote
directory structure) can be kept, deleted, or replaced with their descriptions after indexing.
OPTIONS
wget Options
The wget(1) options that are required are: -A, -nv, -r, and -x; the ones that are highly recommended are: -l, -nh, -t, and -w. (See the
EXAMPLE.)
httpindex Options
httpindex accepts the same short options as index++(1) except for -H, -I, -l, -r, -S, and -V.
The following options are unique to httpindex:
-d Replace the text of local copies of retrieved files with their descriptions after they have been indexed. This is useful to display
file descriptions in search results without having to have complete copies of the remote files thus saving filesystem space. (See
the extract_description() function in WWW(3) for details about how descriptions are extracted.)
-D Delete the local copies of retrieved files after they have been indexed. This prevents your local filesystem from filling up with
copies of remote files.
EXAMPLE
To index all HTML and text files on a remote web server keeping descriptions locally:
wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 |
httpindex -d -e'html:*.html,text:*.txt'
Note that you need to redirect wget(1)'s output from standard error to standard output in order to pipe it to httpindex.
EXIT STATUS
Exits with a value of zero only if indexing completed successfully; non-zero otherwise.
CAVEATS
In addition to those for index++(1), httpindex does not correctly handle the use of multiple -e, -E, -m, or -M options (because the Perl
script uses the standard GetOpt::Std package for processing command-line options, which doesn't support repeated options). The last of any of those options ``wins.''
The work-around is to pass multiple values, separated by commas, to a single one of those options. For example, if you want
to do:
httpindex -e'html:*.html' -e'text:*.txt'
do this instead:
httpindex -e'html:*.html,text:*.txt'
SEE ALSO
index++(1), wget(1), WWW(3)
AUTHOR
Paul J. Lucas <pauljlucas@mac.com>
SWISH++ August 2, 2005 httpindex(1)