funidx(7) SAORD Documentation funidx(7)
NAME
Funidx - Using Indexes to Filter Rows in a Table
SYNOPSIS
This document contains a summary of the user interface for filtering rows in binary tables with indexes.
DESCRIPTION
Funtools Table Filtering allows rows in a table to be selected based on the values of one or more columns in the row. Because the actual
filter code is compiled on the fly, it is very efficient. However, for very large files (hundreds of Mb or larger), evaluating the filter
expression on each row can take a long time. Therefore, funtools supports index files for columns, which are used automatically during
filtering to dramatically reduce the number of row evaluations performed. The speed increase for indexed filtering can be an order of
magnitude or more, depending on the size of the file.
The funindex program creates an index on one or more columns in a binary table. For example, to create an index for the column pi in the
file huge.fits, use:
funindex huge.fits pi
This will create an index named huge_pi.idx.
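Since the later examples filter on the columns pi, x, and y (with pha deliberately left unindexed in the first timing run), they presuppose index files like the following. This is a sketch: it assumes one column per funindex invocation and that each index follows the huge_pi.idx naming pattern shown above.

```shell
# Build per-column indexes for the filtering examples that follow.
# (Assumption: one column per invocation; index files are named
# <base>_<column>.idx after the huge_pi.idx pattern above.)
funindex huge.fits pi        # expected to create huge_pi.idx
funindex huge.fits x         # expected to create huge_x.idx
funindex huge.fits y         # expected to create huge_y.idx
```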
When a filter expression is initialized for row evaluation, funtools looks for an index file for each column in the filter expression. If
found, and if the file modification date of the index file is later than that of the data file, then the index will be used to reduce the
number of rows that are evaluated in the filter. When Spatial Region Filtering is part of the expression, the columns associated with the
region are checked for index files.
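The freshness rule above (the index is used only if it is newer than the data file) is the same comparison the shell's test -nt (newer-than) primary performs. A minimal sketch with stand-in files (data.fits and data_pi.idx are hypothetical names, not funtools requirements):

```shell
# Create a stand-in data file and a stand-in index with a later timestamp,
# then apply the same newer-than check funtools makes before using an index.
touch -t 202001010000 data.fits       # data file, older timestamp
touch -t 202001020000 data_pi.idx     # index file, newer timestamp
if [ data_pi.idx -nt data.fits ]; then
    echo "index is fresh: funtools would use it"
else
    echo "index is stale: rebuild it with funindex"
fi
rm -f data.fits data_pi.idx           # clean up the stand-in files
```

With the timestamps as given, the fresh branch is taken; touching data.fits again afterward would flip the result, which is why an index must be rebuilt whenever the data file changes.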
If an index file is not available for a given column, then in general, all rows must be checked when that column is part of a filter
expression. This is not true, however, when a non-indexed column is part of an AND expression. In this case, only the rows that pass the
other part of the AND expression need to be checked. Thus, in some cases, filtering speed can increase significantly even if all columns
are not indexed.
Also note that certain types of filter expression syntax cannot make use of indices. For example, calling functions with column names as
arguments implies that all rows must be checked against the function value. Once again, however, if this function is part of an AND
expression, then a significant improvement in speed still is possible if the other part of the AND expression is indexed.
For example, note below the dramatic speedup in searching a 1 Gb file using an AND filter, even when one of the columns (pha) has no index:
time fundisp
huge.fits'[idx_activate=0,idx_debug=1,pha=2348&&cir 4000 4000 1]'
"x y pha"
x y pha
---------- ----------- ----------
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
42.36u 13.07s 6:42.89 13.7%
time fundisp
huge.fits'[idx_activate=1,idx_debug=1,pha=2348&&cir 4000 4000 1]'
"x y pha"
x y pha
---------- ----------- ----------
idxeq: [INDEF]
idxand sort: x[ROW 8037025:8070128] y[ROW 5757665:5792352]
idxand(1): INDEF [IDX_OR_SORT]
idxall(1): [IDX_OR_SORT]
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
3999.48 4000.47 2348
1.55u 0.37s 1:19.80 2.4%
When all columns are indexed, the increase in speed can be even more dramatic:
time fundisp
huge.fits'[idx_activate=0,idx_debug=1,pi=770&&cir 4000 4000 1]'
"x y pi"
x y pi
---------- ----------- ----------
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
42.60u 12.63s 7:28.63 12.3%
time fundisp
huge.fits'[idx_activate=1,idx_debug=1,pi=770&&cir 4000 4000 1]'
"x y pi"
x y pi
---------- ----------- ----------
idxeq: pi start=9473025,stop=9492240 => pi[ROW 9473025:9492240]
idxand sort: x[ROW 8037025:8070128] y[ROW 5757665:5792352]
idxor sort/merge: pi[ROW 9473025:9492240] [IDX_OR_SORT]
idxmerge(5): [IDX_OR_SORT] pi[ROW]
idxall(1): [IDX_OR_SORT]
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
3999.48 4000.47 770
1.67u 0.30s 0:24.76 7.9%
The miracle of indexed filtering (and indeed, of any indexing) is the speed of the binary search on the index, which is of order log2(n)
instead of n. (The funtools binary search method is taken from http://www.tbray.org/ongoing/When/200x/2003/03/22/Binary, to whom grateful
acknowledgement is made.) This means that the larger the file, the better the performance. Conversely, it also means that for small files,
using an index (and the overhead involved) can slow filtering down somewhat. Our tests indicate that on a file containing a few tens of
thousands of rows, indexed filtering can be 10 to 20 percent slower than non-indexed filtering. Of course, your mileage will vary with
conditions (disk access speed, amount of available memory, process load, etc.)
Any problem encountered during index processing will result in indexing being turned off and replaced by filtering of all rows. You can
turn indexed filtering off manually by setting the idx_activate variable to 0 (in a filter expression) or the FILTER_IDX_ACTIVATE
environment variable to 0 (in the global environment). Debugging output showing how the indexes are being processed can be displayed to
stderr by setting the idx_debug variable to 1 (in a filter expression) or the FILTER_IDX_DEBUG environment variable to 1 (in the global
environment).
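The two mechanisms can be sketched as a shell session (assuming a funtools installation and the huge.fits example file from above):

```shell
# Globally, via the environment: disable indexed filtering and enable
# index-debugging output for every subsequent funtools filter.
export FILTER_IDX_ACTIVATE=0
export FILTER_IDX_DEBUG=1

# Per-expression, via filter variables: the same switches, scoped to
# this one filter only.
fundisp huge.fits'[idx_activate=0,idx_debug=1,pi=770]' "x y pi"
```

The per-expression form overrides the environment for that filter, which is convenient for comparing indexed and non-indexed timings without changing the global setup.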
Currently, indexed filtering only works with FITS binary tables and raw event files. It does not work with text files. This restriction
might be removed in a future release.
SEE ALSO
See funtools(7) for a list of Funtools help pages
version 1.4.2 January 2, 2008 funidx(7)