Hello all.
Sorry, I know this question is similar to many others, but I just can't seem to put together exactly what I need.
My file is tab-delimited and contains approximately 1 million rows. I would like to send lines 1, 4, & 7 to a file; lines 2, 5, & 8 to a second file; lines 3, 6, & 9 to... (11 Replies)
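A round-robin awk split along these lines would do it (a sketch, not from the thread; the sample data and the part1..part3 filenames are mine):

```shell
# Stand-in for the real million-row file: six lines
printf 'r1\nr2\nr3\nr4\nr5\nr6\n' > bigfile.tsv

# Round-robin split: lines 1,4,7,... -> part1; 2,5,8,... -> part2; 3,6,9,... -> part3
awk '{ print > ("part" ((NR - 1) % 3 + 1)) }' bigfile.tsv
```

awk keeps all three output files open, so this makes a single pass over the input, which matters at a million rows.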
Hello Everyone.
I am trying to display the contents of a file from a specific line to a specific line (say, from line number 3 to line number 5). For this I have the shell script shown below:
if [ ... ]; then
if [ ... ]; then
tail +$1 $3 | head -n $2
else
... (5 Replies)
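For reference, without the argument checks, printing a fixed line range is a one-liner; this sketch (sample file and the 3-5 range are mine) shows the sed form and an awk equivalent:

```shell
# Toy input: six numbered lines
printf 'l1\nl2\nl3\nl4\nl5\nl6\n' > sample.txt

# Print lines 3 through 5
sed -n '3,5p' sample.txt
# equivalently: awk 'NR >= 3 && NR <= 5' sample.txt
```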
Hi,
I searched the whole forum, but I cannot find a solution to my problem :(
I have multiple files (5000 files); inside each is data like this:
FILE 1:
1195.921 -898.995 0.750312E-02-0.497526E-02 0.195382E-05 0.609417E-05
-2021.287 1305.479-0.819754E-02 0.107572E-01 0.313018E-05 0.885066E-05
... (15 Replies)
I have a file that contains 87 lines, each with a set of coordinates (x & y). This file looks like:
1 200.3 -0.3
2 201.7 -0.32
...
87 200.2 -0.314
I have another file which contains data that was taken at some of these 87 positions, i.e.:
37 125
42 175
86 142
where the first... (1 Reply)
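If the goal is to attach each reading to its coordinates, the usual awk two-file join on the shared first column looks like this (a sketch with made-up data; the thread's actual request is cut off above):

```shell
# coords.txt: position x y   |   readings.txt: position value  (toy stand-ins)
printf '37 200.5 -0.30\n42 201.1 -0.31\n86 199.8 -0.29\n' > coords.txt
printf '37 125\n86 142\n' > readings.txt

# First pass (NR==FNR) loads coordinates; second pass prints position, x, y, value
awk 'NR==FNR { x[$1]=$2; y[$1]=$3; next } { print $1, x[$1], y[$1], $2 }' coords.txt readings.txt
```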
Hi, I have a file with over a million lines (rows), and I want to split everything from line 500,000 to a million into another file (to make the file smaller). Is there a simple command for this?
Thank you
Phil (4 Replies)
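A head/tail pair does this in two passes (a sketch; the generated big.txt and the output filenames are mine, the 500,000 boundary is from the question):

```shell
seq 1000000 > big.txt            # stand-in for the real million-line file
N=500000                         # split point from the question
head -n "$N" big.txt > first_half.txt
tail -n +"$((N + 1))" big.txt > second_half.txt
```

`tail -n +K` means "start at line K", so `+$((N + 1))` picks up exactly where head left off.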
Hi
I have a requirement like the one below:
M <form_name> sdasadasdMklkM
D ......
D .....
M form_name> sdasadasdMklkM
D ......
D .....
D ......
D .....
M form_name> sdasadasdMklkM
D ......
M form_name> sdasadasdMklkM
I want to split the file based on line number by finding... (10 Replies)
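If each line starting with "M " opens a new block, awk can route every block to its own file (a sketch with toy data; the chunk_N filenames are mine):

```shell
# Toy input in the M/D shape from the question
printf 'M form_a\nD 1\nD 2\nM form_b\nD 3\n' > forms.txt

# Bump the counter at every "M " line; every line goes to the current chunk
awk '/^M /{ n++ } { print > ("chunk_" n) }' forms.txt
```

`csplit forms.txt '/^M /' '{*}'` is an alternative when the pieces just need numbered names.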
Hi,
I need to sort a file based on the number of delimiters in each line.
cat testfile
/home/oracle/testdb
/home
/home/oracle/testdb/newdb
/home/oracle
Here the delimiter is "/"
Expected output:
/home/oracle/testdb/newdb
/home/oracle/testdb
/home/oracle
/home (3 Replies)
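One way (a sketch, not from the thread): prefix each line with its "/" count, sort numerically in reverse, then strip the count again. Splitting on "/" with awk gives NF - 1 delimiters per line:

```shell
printf '/home/oracle/testdb\n/home\n/home/oracle/testdb/newdb\n/home/oracle\n' > testfile

# decorate -> sort -> undecorate
awk -F'/' '{ print NF - 1, $0 }' testfile | sort -rn | cut -d' ' -f2-
```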
Hi friends, here is my problem.
I have three files like this..
cat file1.txt
=======
unix is best
unix is best
linux is best
unix is best
linux is best
linux is best
unix is best
unix is best
cat file2.txt
========
Windows performs better
Mac OS performs better
Windows... (4 Replies)
Dear Experts,
my scenario is as follows...
I have one source folder "Source" and 2 target folders, "Target_123456" & "Target_789101". I have 2 series of files, a 123456 series and a 789101 series. Each series has three types of files: "Debit", "Refund", "Claims".
All files are getting... (17 Replies)
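Assuming each filename carries its series number (the sample names here are hypothetical, since the post is cut off), a glob per series is enough to route the files:

```shell
# Hypothetical layout: files named like Debit_123456_01.dat, Refund_789101_01.dat
mkdir -p Source Target_123456 Target_789101
touch Source/Debit_123456_01.dat Source/Refund_789101_01.dat

# Move each series to its own target folder
for f in Source/*123456*; do mv "$f" Target_123456/; done
for f in Source/*789101*; do mv "$f" Target_789101/; done
```

Note the glob stays literal if nothing matches, so in a real script a `[ -e "$f" ] || continue` guard inside each loop is safer.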
Discussion started by: phani333
BZZ(1)                          DjVuLibre-3.5                          BZZ(1)

NAME
bzz - DjVu general purpose compression utility.
SYNOPSIS
Encoding:
bzz -e[blocksize] inputfile outputfile
Decoding:
bzz -d inputfile outputfile
DESCRIPTION
The first form of the command line (option -e ) compresses the data from file inputfile and writes the compressed data into outputfile.
The second form of the command line (option -d ) decompresses file inputfile and writes the output to outputfile.
OPTIONS
-d Decoding mode.
-e[blocksize]
Encoding mode. The optional argument blocksize specifies the size of the input file blocks processed by the Burrows-Wheeler transform, expressed in kilobytes. The default block size is 2048 KB. The maximal block size is 4096 KB. Specifying a larger block size usually produces higher compression ratios and increases the memory requirements of both the encoder and decoder. It is useless to specify a block size that is larger than the input file.
ALGORITHMS
The Burrows-Wheeler transform is performed using a combination of the Karp-Miller-Rosenberg and the Bentley-Sedgewick algorithms. This is
comparable to (Sadakane, DCC 98) with a slightly more flexible ranking scheme. Symbols are then ordered according to a running estimate of
their occurrence frequencies. The symbol ranks are then coded using a simple fixed tree and the ZP binary adaptive coder (Bottou, DCC 98).
The Burrows-Wheeler transform is also used in the well known compressor bzip2. The originality of bzz is the use of the ZP adaptive coder.
The adaptation noise can cost up to 5 percent in file size, but this penalty is usually offset by the benefits of adaptation.
PERFORMANCE
The following table shows comparative results (in bits per character) on the Canterbury Corpus ( http://corpus.canterbury.ac.nz ). The very
good bzz performance on the spreadsheet file excl puts the weighted average ahead of much more sophisticated compressors such as fsmx.
+-------------------------------------------------------------------------------------------------------------+
| Compression performance |
| text fax csrc excl sprc tech poem html lisp man play Weighted Average |
+-------------------------------------------------------------------------------------------------------------+
| compress 3.27 0.97 3.56 2.41 4.21 3.06 3.38 3.68 3.90 4.43 3.51 2.55 3.31 |
| gzip -9 2.85 0.82 2.24 1.63 2.67 2.71 3.23 2.59 2.65 3.31 3.12 2.08 2.53 |
| bzip2 -9 2.27 0.78 2.18 1.01 2.70 2.02 2.42 2.48 2.79 3.33 2.53 1.54 2.23 |
| ppmd 2.31 0.99 2.11 1.08 2.68 2.19 2.48 2.38 2.43 3.00 2.53 1.65 2.20 |
| fsmx 2.10 0.79 1.89 1.48 2.52 1.84 2.21 2.24 2.29 2.91 2.35 1.63 2.06 |
| bzz 2.25 0.76 2.13 0.78 2.67 2.00 2.40 2.52 2.60 3.19 2.52 1.44 2.16 |
+-------------------------------------------------------------------------------------------------------------+
Note that DjVu contributors have several entries in this table. Program compress was written some time ago by Joe Orost. Program ppmd is
an improvement of the PPM-C method invented by Paul Howard.
CREDITS
Program bzz was written by Leon Bottou <leonb@users.sourceforge.net> and was then improved by Andrei Erofeev <andrew_erofeev@yahoo.com>,
Bill Riemers <docbill@sourceforge.net> and many others.
SEE ALSO
djvu(1), compress(1), gzip(1), bzip2(1)

DjVuLibre-3.5                          10/11/2001                          BZZ(1)