Read values in each column starting from the 3rd row; print the occurrence value.
Hello Friends,
Hope all are doing fine.
Here is a tricky issue.
My input file is like this:
Logic:
1. For the first and second rows, please print an extra column of "0-0-0-0-0".
2. Read the first column of the third row, which is 1. Look for this value in all columns of the first and second rows. 1 is not present in either row, so print a value of 2.
3. Then read the second column of the third row, which is 10. 10 appears in the first row but not in the second, so it skipped only the second row. Print a value of 1.
4. Then read the third column of the third row, which is 11. 11 does not appear in the first or second rows, so print a value of 2.
5. 19 has no appearances in the first or second rows, so its value will be 2.
6. 30 did not appear in the first row, but it does appear in the second row, which is the IMMEDIATE row above the current row being read (row no. 3). Since no rows were skipped, print 0 for this one.
So far the output will be
Logic Continued:
7. Now read the first column of the fourth row, which is 2. Look for this value anywhere in the three rows above. Since 2 is not present in any of them, print a value of 3.
8. Now read the second column of the 4th row, which is 6. 6 is also not present in any of the three rows above, so its value will also be 3.
9. Read the third column of the 4th row, which is 14. 14 is present in the first row only, not in the second or third rows, so it skipped two rows. Print a value of 2.
10. Read the fourth column of the 4th row, which is 15. It is not present in the first row. Fine. It is present in the second row and not in the third row, so it skipped one IMMEDIATE row, the third. We don't care about the first row here; all that matters is the number of rows a value skipped after it last appeared. So the value for 15 will be 1.
11. Read the last column of the 4th row, which is 17. It is not present in any of the three rows above, so print 3. Basically, if a value is not present in any row above the current one, we HYPOTHESIZE that this value was PRESENT before the first line of the input. That is why we print 3 for values not seen in any of the rows above row number 4.
So far, the output looks like this
Logic Continued:
12. Now the last row's first value, which is 01. This is present in the third row and skipped the 4th, so its value will be 1. Once you see a value in a row above the current row, you DON'T have to move any further up, because you have already found that value.
13. Second column of the last row, which is 06. This is present in the 4th row, so the value will be zero; DO NOT check any lines above once the value has been encountered.
14. Third column of the last row, which is 20. It is present in the first row but not in the second, third, or fourth rows, so it skipped three rows. Print a value of 3.
15. Fourth column of the last row, which is 25. This is not present anywhere. Remember our hypothesis: this value occurred before the first line. So we print 4 for this.
16. Fifth column of the last row, which is 29. Present nowhere, so print a value of 4.
Here is the final output
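To make the rules above concrete, here is one way the whole procedure could be sketched in awk. This is only a sketch, not the poster's actual script or data: the file name `input.txt` is a placeholder, the input is assumed to be whitespace-separated, and values are compared numerically so that 01 == 1 and 06 == 6 (as steps 12 and 13 imply).

```shell
awk '
NR <= 2 {                                  # rule 1: first two rows get 0-0-0-0-0
    for (i = 1; i <= NF; i++) seen[NR, i] = $i + 0
    print $0, "0-0-0-0-0"
    next
}
{
    out = ""
    for (i = 1; i <= NF; i++) {
        v = $i + 0
        gap = NR - 1                       # hypothesis: value occurred before line 1
        for (r = NR - 1; r >= 1; r--) {    # scan upward, nearest row first
            found = 0
            for (c = 1; c <= 5; c++)       # columns are always 5 (P.S. a)
                if (seen[r, c] == v) { found = 1; break }
            if (found) { gap = NR - r - 1; break }
        }
        out = out (i > 1 ? "-" : "") gap
        seen[NR, i] = v
    }
    print $0, out
}' input.txt
```

In other words, the printed number for a value is the count of rows between the current row and the nearest row above that contains it; if no row above contains it, the hypothesis in step 11 makes that count the current row number minus one.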
I would also like to have the frequency of the unique numbers in the output column, like this:
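For the frequency part, a possible sketch: assuming the annotated file carries the new hyphen-separated column as its last whitespace-separated field (`annotated.txt` is a placeholder name), the counts could be produced like this:

```shell
awk '
{ n = split($NF, g, "-")                  # last field holds the 0-1-2-... column
  for (i = 1; i <= n; i++) cnt[g[i]]++ }
END { for (v in cnt) print v " occurs " cnt[v] " times" }
' annotated.txt | sort -n
```

The `sort -n` is only there because awk's `for (v in cnt)` traversal order is unspecified.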
Please ask me any questions or post comments in case of any doubt.
P.S:
a. My columns are always 5.
b. My input file always has 25 records only.
c. A bonus of 5000 bits will be awarded to the best working solution.
Thank You!
Last edited by jacobs.smith; 04-10-2016 at 04:03 PM..
Reason: code tags format
Why is the column added to row 2 always filled with 0-0-0-0-0? Why aren't entries in that row set to 1 if the number in a given column in row 2 is not present in row 1? In the given example, why shouldn't the last field in the output for row 2 be 1-1-1-1-1?
Other than being an interesting puzzle, does this problem address some real-world issue?
In the secondary output:
where do these numbers come from?
If you're counting the number of times a digit appears in the input, 0 occurs 13 times (not 12 times) in your sample input. If you're counting the number of times a value appears in your sample input, 0 (or 00) does not appear at all???
All of your input values are two digit strings. Are we supposed to treat 01 and 1 as the same value or as distinct values? If they are the same, is 010 to be treated as an octal value (decimal 8) or as a decimal value (10)?
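For what it's worth, awk itself sidesteps the octal question: its string-to-number conversion is decimal, so "010" coerces to 10, never to 8. A quick check:

```shell
awk 'BEGIN { v = "010"; print v + 0 }'   # prints 10: awk numeric coercion is decimal
```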
Assuming that data values are strings (not numbers that need to be converted to a canonical format), and that you want a count of the number of times a string appears in your input file, the following awk script seems to come close to what you said you wanted:
producing the following output from your sample data:
(although if I were specifying the output format, I'd put spaces around the equal signs and before the "time" in the secondary output.)
As always, if you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
Don, Thank you.
You are really a Don!!!!
Your counting at the end is much more comprehensive than what I had thought.
It's all biology related. Definitely a real-world issue. Thank you.