Hi, I have to grep for 2000 strings in a file, one after the other. Say the file name is Snxx.out, which contains these strings.
I have to search for all the strings in the file Snxx.out one after the other.
What is the fastest way to do it?
Note: the current grep process is taking a lot of time per... (7 Replies)
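A common fix (a sketch, assuming the 2000 strings are fixed text rather than regular expressions, and that they sit one per line in a file here called patterns.txt) is to hand grep all the patterns at once, so it makes a single pass over Snxx.out instead of 2000 separate runs:

```shell
# -F treats every pattern as a fixed string (no regex engine),
# -f reads all patterns from a file; one pass over Snxx.out
# replaces 2000 individual grep invocations.
grep -F -f patterns.txt Snxx.out
```

If you only need to know which of the strings matched, `grep -o -F -f patterns.txt Snxx.out | sort -u` lists the matching strings themselves.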
Morning guys. Another day, another question. :rolleyes:
I am knocking up a script to pull some data from a file. The problem is that the file is very big (up to 1 GB in size), so this solution:
for results in `grep "^\
... works, but takes ages (we're talking minutes) to run. The data is held... (8 Replies)
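Since the quoted command is truncated, here is only a hypothetical sketch of the usual fix: rather than capturing grep output in a `for` loop (which word-splits the results and re-scans the 1 GB file), stream the file once through awk and do the per-line work inside it. The `ID=` prefix and field layout below are invented for illustration:

```shell
# Hypothetical: extract the value after "ID=" from matching lines of
# a large file in one streaming pass -- no backticks, no shell loop.
awk -F'=' '/^ID=/ { print $2 }' bigfile.txt
```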
Hello,
I have a very big file of more than 80 MB (about 100 MB), and my CVS application cannot commit a file that big because it must be under 80 MB.
How can I split this file into two other files? I think the AIX Unix command:
split -b can do that, but what is the right... (2 Replies)
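Yes, `split -b` is the right tool. A minimal sketch (file and prefix names are placeholders):

```shell
# Split a ~100 MB file into 40 MB chunks named bigfile.part.aa,
# bigfile.part.ab, ... (40 * 1024 * 1024 bytes; AIX and GNU split
# also accept suffixed forms such as -b 40m / -b 40M).
split -b 41943040 bigfile bigfile.part.

# Reassembling the chunks in order restores the file byte for byte:
cat bigfile.part.* > bigfile.restored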
Hi experts,
I just want to know if there is a better solution to my nested while read loops below:
while read line; do
    while read line2; do
        while read line3; do
            echo "$line $line2 $line3"
        done < file3.txt
    done < file2.txt
done < file1.txt >... (4 Replies)
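The nested loops above re-open and re-read file2.txt and file3.txt on every iteration of the loop outside them. One alternative (a sketch producing the same three-way combination, using the file names from the post) is to load the two inner files into awk arrays once and do all the combining in a single process:

```shell
# Read file2.txt and file3.txt into arrays, then emit the same
# "line line2 line3" combinations for each line of file1.txt --
# each file is opened and read exactly once.
awk '
    FILENAME == "file2.txt" { b[++nb] = $0; next }
    FILENAME == "file3.txt" { c[++nc] = $0; next }
    {
        for (i = 1; i <= nb; i++)
            for (j = 1; j <= nc; j++)
                print $0, b[i], c[j]
    }
' file2.txt file3.txt file1.txt
```

Note the combination count is lines1 × lines2 × lines3 either way; if the output is huge, no tool will make that small.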
Hello!
Is there a way I can read a file with n records as one big string using a Linux shell script? I have a file in the format below:
REC1
REC2
REC3
.
.
.
REC4
Each record is 3000 bytes long, with a newline character at the end. What I need to do is
- read this file as one... (5 Replies)
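A minimal sketch (assuming the goal is simply to join all records into one string by dropping the newlines; the file name records.txt is a placeholder):

```shell
# Delete the newline after each 3000-byte record; the joined result
# lands in a single shell variable.  Shell variables cannot hold NUL
# bytes, so this assumes the records are text.
big=$(tr -d '\n' < records.txt)
printf 'joined length: %s\n' "${#big}"
```

For very large files, holding the whole thing in a variable gets expensive; prefer piping the `tr` output straight into the next tool instead.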
Hi Gurus,
I have two big files. I need to compare the differences. Currently, I am using
sort file1 > file1_tmp
sort file2 > file2_tmp
diff file1_tmp file2_tmp
I can also use the command
grep -v -f file1 file2
I am just wondering which way is faster for comparing two big files.
Thanks... (4 Replies)
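Since both files are already being sorted, `comm` is worth a look: it merges the two sorted files in one linear pass, and unlike `grep -v -f` (which treats every line of file1 as a regex and compares every pair of lines) it does exact whole-line comparison:

```shell
# Sort once, then comm merges the sorted files in a single pass.
sort file1 > file1_tmp
sort file2 > file2_tmp
comm -3 file1_tmp file2_tmp   # column 1: only in file1; column 2: only in file2
```

`comm -13` prints only the lines unique to file2, and `comm -23` only those unique to file1.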
Hi All,
I am new to this forum and this is my first post.
My requirement is to optimize the time taken to grep a file with 40000 lines.
There are two files: FILEA (40000 lines) and FILEB (40000 lines).
The requirement is this: both files will be in the format below... (11 Replies)
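The actual format is truncated above, so this is only a hypothetical sketch, assuming whole lines of FILEB are to be matched against FILEA: load FILEA into an awk hash once, then test each FILEB line in constant time — one pass over each file instead of 40000 grep runs:

```shell
# First pass (NR==FNR): remember every FILEA line in a hash.
# Second pass: print the FILEB lines that also appear in FILEA.
awk 'NR == FNR { seen[$0] = 1; next } ($0 in seen)' FILEA FILEB
```

If the match is on one key field rather than the whole line, hash that field (e.g. `seen[$1]`) instead.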
I have a simple script that reads in data from fileA.txt and searches line by line for that data in multiple files (*multfiles.txt). It only prints the data when there is more than one instance of it. The problem is that it's really slow (3+ hours) to complete the entire process. There are nearly 1500... (10 Replies)
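A hypothetical sketch of the usual speed-up for this shape of problem: instead of one grep per fileA.txt line, count the occurrences of every wanted line across all the *multfiles.txt in a single awk pass, then print the ones seen more than once (exact whole-line matching is assumed here):

```shell
# Pass 1 (NR==FNR): record each fileA.txt line we care about.
# Pass 2: count how often each wanted line appears across the rest.
# END: print only the lines with more than one occurrence.
awk 'NR == FNR { want[$0] = 1; next }
     ($0 in want) { count[$0]++ }
     END { for (k in count) if (count[k] > 1) print k }' fileA.txt *multfiles.txt
```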
OK guys,
this isn't homework or anything.
I have been using grep -f all my life, but I am trying it on a huge file and it doesn't work.
Can someone give me a replacement for grep -f pattern file for big files?
Thanks. (6 Replies)
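Two adjustments often rescue `grep -f` on huge inputs before reaching for a replacement (pattern and file names below are placeholders):

```shell
# 1) -F matches the patterns as fixed strings instead of regexes,
#    and the C locale skips expensive multibyte handling -- together
#    these are frequently orders of magnitude faster:
LC_ALL=C grep -F -f patterns.txt bigfile.txt

# 2) if each pattern should match a whole line, add -x so partial
#    matches are never considered:
LC_ALL=C grep -F -x -f patterns.txt bigfile.txt
```

If memory is the problem (a very large pattern file), splitting patterns.txt into chunks and running one grep per chunk also works.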
Gents,
Actually I have a question and I need your support.
I have a NAS file system mounted as /coresys with a size of 7 TB.
I need to split this file system into several file systems as mount points. I mean, how can I split it properly across different NAS mount points, and how can I decide... (2 Replies)
Discussion started by: AbuAliiiiiiiiii
LEARN ABOUT DEBIAN
x2sys_solve
X2SYS_SOLVE(1gmt)            Generic Mapping Tools            X2SYS_SOLVE(1gmt)

NAME
x2sys_solve - Determine systematic corrections from crossovers
SYNOPSIS
x2sys_solve -Ccolumn -TTAG -Emode [ COE_list.d ] [ -V ] [ -W ] [ -Z ] [ -bi[s|S|d|D[ncol]|c[var1/...]] ]
DESCRIPTION
x2sys_solve will use the supplied crossover information to solve for systematic corrections that can then be applied per track to improve
data quality. Several systematic corrections can be solved for using a least-squares approach. Note: Only one data column can be processed
at a time.
-T Specify the x2sys TAG which tracks the attributes of this data type.
-C Specify which data column you want to process. Needed for proper formatting of the output correction table and must match the same
option used in x2sys_list when preparing the input data.
-E The correction type you wish to model. Choose among the following functions f(p), where p are the m parameters per track that we
will fit simultaneously using a least squares approach:
c will fit f(p) = a (a constant offset); records must contain cruise ID1, ID2, COE.
d will fit f(p) = a + b * d (linear drift; d is distance); records must contain cruise ID1, ID2, d1, d2, COE.
g will fit f(p) = a + b sin(y)^2 (1980-1930 gravity correction); records must contain cruise ID1, ID2, latitude y, COE.
h will fit f(p) = a + b cos(H) + c cos(2H) + d sin(H) + e sin(2H) (magnetic heading correction); records must contain cruise ID1,
ID2, heading H, COE.
s will fit f(p) = a * z (a unit scale correction); records must contain cruise ID1, ID2, z1, z2.
t will fit f(p) = a + b * (t - t0) (linear drift; t0 is the start time of the track); records must contain cruise ID1, ID2, t1-t0,
t2-t0, COE.
OPTIONS
No space between the option flag and the associated arguments.
COE_list.d
Name of file with the required crossover columns as produced by x2sys_list. NOTE: If -bi is used then the first two columns are
expected to hold the integer track IDs; otherwise we expect those columns to hold the text string names of the two tracks.
-V Selects verbose mode, which will send progress reports to stderr [Default runs "silently"].
-W Means that each input record has an extra column with the composite weight for each crossover record. These are used to obtain a
weighted least squares solution [no weights].
-Z For -Ed and -Et, determine the earliest time or shortest distance for each track, then use these values as the local origin for time
duration or distance calculations. The local origin is then included in the correction table [Default uses 0].
-bi Selects binary input. Append s for single precision [Default is d (double)]. Uppercase S or D will force byte-swapping. Optionally,
append ncol, the number of columns in your binary input file if it exceeds the columns needed by the program. Or append c if
the input file is netCDF. Optionally, append var1/var2/... to specify the variables to be read.
EXAMPLES
To fit a simple bias offset to faa for all tracks under the MGD77 tag, try
x2sys_list COE_data.txt -V -TMGD77 -Cfaa -Fnc > faa_coe.txt
x2sys_solve faa_coe.txt -V -TMGD77 -Cfaa -Ec > coe_table.txt
To fit a faa linear drift with time instead, try
x2sys_list COE_data.txt -V -TMGD77 -Cfaa -FnTc > faa_coe.txt
x2sys_solve faa_coe.txt -V -TMGD77 -Cfaa -Et > coe_table.txt
To estimate heading corrections based on magnetic crossovers associated with the tag MGD77 from the file COE_data.txt, try
x2sys_list COE_data.txt -V -TMGD77 -Cmag -Fnhc > mag_coe.txt
x2sys_solve mag_coe.txt -V -TMGD77 -Cmag -Eh > coe_table.txt
To estimate unit scale corrections based on bathymetry crossovers, try
x2sys_list COE_data.txt -V -TMGD77 -Cdepth -Fnz > depth_coe.txt
x2sys_solve depth_coe.txt -V -TMGD77 -Cdepth -Es > coe_table.txt
SEE ALSO
x2sys_binlist(1), x2sys_cross(1), x2sys_datalist(1), x2sys_get(1), x2sys_init(1), x2sys_list(1), x2sys_put(1), x2sys_report(1)

GMT 4.5.7                        15 Jul 2011                     X2SYS_SOLVE(1gmt)