If I understand what you're trying to do (and I'm not sure that I do), the following seems to do what you want and should be MUCH faster than your current approach. It calls sort once (to sort all of the data in your input files) and awk once (to process each group of lines in the sorted input that share the same values in the first two fields and, for each group, to rank the lines by the third field, with ranking at both tails of the ranges as requested in post #18 of this VERY long thread).
This script assumes that your input fields are separated by tabs (as shown in your sample data in post #22) and not by spaces as shown in many of your other examples. And, based on that, it uses a tab to separate the rank in the output from the input fields.
This script was written and tested using a Korn shell (instead of bash), but it will also work with any shell that uses Bourne shell syntax (including bash and many others). Since you're working with a few hundred files, this code assumes that you will run the script from the directory that contains the data files (instead of a parent directory), so that you can reference files with shorter names (such as *05.pnt instead of test/*05.pnt); shorter pathnames let you name more files on the command line before running into ARG_MAX limits. It also assumes that you will provide the list of files to be processed as command-line arguments instead of hardcoding the names into the script. If the list of files does exceed ARG_MAX limits even with these ways of shortening the argument list, we can use xargs and cat in a loop to feed data to sort, but on most systems in use today I wouldn't expect that to be necessary.
The script is:
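(The script itself isn't reproduced in this excerpt. The following is only a minimal sketch of the approach described above -- one sort, one awk, tab-separated fields -- with a simple two-tailed ranking that counts up to each group's midpoint and back down; the exact adjustment from post #18 may differ.)

```shell
# Sketch only -- NOT the original script from this thread. Assumes
# tab-separated input (field 1 = lat, field 2 = lon, field 3 = value)
# and ranks each (field1, field2) group from both ends toward the middle.
printf '46.85\t-121.73\t0.9\n46.85\t-121.73\t0.1\n46.85\t-121.73\t0.7\n46.85\t-121.73\t0.5\n' > sample.pnt

tab=$(printf '\t')
sort -t "$tab" -k1,1n -k2,2n -k3,3n sample.pnt | awk -F'\t' '
function dump(    i, m) {
    m = int((n + 1) / 2)               # midpoint recalculated per group
    for (i = 1; i <= n; i++)           # rank up to the midpoint, then back down
        print line[i] "\t" (i <= m ? i : n - i + 1)
    n = 0
}
{
    key = $1 FS $2
    if (NR > 1 && key != prev) dump()  # new (field1, field2) group
    prev = key
    line[++n] = $0
}
END { if (n) dump() }' > ranked.out
cat ranked.out
```

In real use you would drop the printf line and pass your *.pnt files as arguments to sort in place of sample.pnt.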
With a file named post20 containing data extrapolated from your post #20:
and assuming that you saved the above script in a file named rank_files and made it executable, then the command:
would produce the output:
And, with the data you supplied in post #27 split out into 118 different input files with names used there, the command:
produces the output:
which I think does what you want even though the snippet of output you provided in post #27 did not provide the requested adjustments to ranks in the last half of the data for lines with field 1 set to 46.8542 and field 2 set to -121.7292.
It wasn't clear to me why you talked about choosing one file to gather field 1 and 2 values. The above code gathers field 1 and 2 values from all input files specified and produces ranked output for each different set of values in fields 1 and 2. And, in case some sets of field 1 and 2 values have fewer entries than others (i.e., do not appear in all input files), the midpoint is recalculated for each group of lines.
If you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
Suppose you have a file which consists of many data points separated by asterisks.
The question is how to extract the third part of each line.
0.0002*0.003*-0.93939*0.0202*0.322*0.3332*0.2222*0.22020
0.003*0.3333*0.33322*-0.2220*0.3030*0.2222*0.3331*-0.3030
0.0393*0.3039*-0.03038*0.033*0.4033*0.30384*0.4048... (5 Replies)
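With '*' as the delimiter, the third part is just the third field; a sketch (the file name points.txt is only a placeholder):

```shell
# Extract the third '*'-separated field from each line.
printf '0.0002*0.003*-0.93939*0.0202\n0.003*0.3333*0.33322*-0.2220\n' > points.txt
cut -d'*' -f3 points.txt > third.txt   # or: awk -F'[*]' '{ print $3 }' points.txt
cat third.txt
```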
Hello all,
I have a data file that needs some serious work... I have no idea how to implement the changes that are needed!
The file is a genotypic file with >64,000 columns representing genetic markers, a header line, and >1100 rows that looks like this:
ID 1 2 3 4 ... (7 Replies)
Hi, help me out. I have a huge set of data stored in a file. This file has 2 columns, which are the latitude & longitude of a region. Now I have a program which asks for the number of points &, based on this number, it asks the user to enter the latitude & longitude values which are in the same... (7 Replies)
Hi,
I am trying to arrange my graphs with GNUPLOT. Although it looked simple at the beginning, I could not figure out an answer to the following: I want to change the style of my data points (not the line, just the exact data points). The terminal assigns first + and then x to them, but what I... (0 Replies)
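For what it's worth, gnuplot lets you pick the marker explicitly with pointtype/pointsize instead of taking the terminal's + and x defaults. A sketch (the file names and type numbers are placeholders; gnuplot's built-in `test` command shows which numbers map to which glyphs on your terminal):

```gnuplot
# Choose the point glyphs explicitly rather than letting the terminal cycle.
plot 'data1.dat' with points pointtype 7 pointsize 1.5, \
     'data2.dat' with points pointtype 5 pointsize 1.5
```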
Hi,
I'd like to process multiple files. For example:
file1.txt
file2.txt
file3.txt
Each file contains several lines of data. I want to extract a piece of data and output it to a new file.
file1.txt ----> newfile1.txt
file2.txt ----> newfile2.txt
file3.txt ----> newfile3.txt
Here is... (3 Replies)
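A per-file loop handles the one-input/one-output mapping; a sketch, assuming (hypothetically) that the piece of data wanted is the second whitespace-separated field:

```shell
# Stand-in input files; swap in your real data and extraction.
printf 'a 1\nb 2\n' > file1.txt
printf 'c 3\n' > file2.txt
printf 'd 4\n' > file3.txt
for f in file1.txt file2.txt file3.txt; do
    awk '{ print $2 }' "$f" > "new$f"   # file1.txt -> newfile1.txt, etc.
done
cat newfile1.txt
```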
Hi, I need help finding the value in my data that encompasses a certain percentage of my total data points (n). Attached is an example of my data, n=30. What I want to do, for instance, is find the minimum threshold that still encompasses 60% (n=18), 70% (n=21) and 80% (n=24).
manually to... (4 Replies)
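One way to read "minimum threshold that still encompasses 60%" is the value at rank ceil(0.60 * n) when the data is sorted ascending; a sketch under that assumption, with stand-in data (the values 1..10) in place of the attached file:

```shell
# Stand-in data: one value per line. Replace with your real data file.
seq 10 > values.unsorted
sort -n values.unsorted > values.txt
n=$(wc -l < values.txt)
# Rank of the cutoff value: ceil(n * p) for coverage fraction p.
k=$(awk -v n="$n" -v p=0.60 'BEGIN { k = int(n * p); if (k < n * p) k++; print k }')
threshold=$(sed -n "${k}p" values.txt)
echo "$threshold"
```

With n=30 and p=0.60 this picks the 18th smallest value, matching the n=18 count above.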
I have a text file that shows the output of my solar inverters. I want to separate this into sections: overview, device 1, device 2, device 3. Each device has a different number of lines, but they all have unique starting points. Overview starts with 6 #'s, devices have 4 #'s, and their data starts... (6 Replies)
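Those unique starting points make this a natural awk job: switch the output file whenever a marker line appears. A sketch, assuming (hypothetically) the markers sit alone on their own lines:

```shell
# Stand-in log: a 6-hash line opens the overview, each 4-hash line a device.
printf '######\noverview line\n####\ndev A line\n####\ndev B line\n' > inverter.log
awk '
/^######/ { out = "overview.txt"; next }       # test 6 hashes before 4
/^####/   { out = "device" ++d ".txt"; next }  # new device section
out       { print > out }                      # body lines go to current file
' inverter.log
cat overview.txt device1.txt device2.txt
```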
Hello Gurus,
I'm new to scripting. I got stuck with a file merge issue in Unix. I was looking for some direction and stumbled upon this site. I saw many great posts and replies but couldn't find a solution to my issue. I'd greatly appreciate any help.
I have three csv files -> Apex_10_Latest.csv,... (1 Reply)
We have data that looks like the below in a log file.
I want to generate files based on the string between the two hash (#) symbols, like below:
Source:
#ext1#test1.tale2 drop
#ext1#test11.tale21 drop
#ext1#test123.tale21 drop
#ext2#test1.tale21 drop
#ext2#test12.tale21 drop
#ext3#test11.tale21 drop... (5 Replies)
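With '#' as the awk field separator, the string between the first two hashes is $2, and it can name the output file directly (the .txt suffix is an assumption):

```shell
# Stand-in source data in the format shown above.
cat > source.log <<'EOF'
#ext1#test1.tale2 drop
#ext1#test11.tale21 drop
#ext2#test1.tale21 drop
EOF
# Route each line to a file named after the token between the hashes.
awk -F'#' '/^#/ { print > ($2 ".txt") }' source.log
cat ext1.txt
```

This produces ext1.txt (two lines) and ext2.txt (one line) from the sample above.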