Input file:
Output file:
For lines not starting with "#" in column 1, I would like to print the records whose column 3 is greater than or equal to 100.
Below is the command I tried:
Although the command above gets the desired output, I was not sure whether there is a better way to achieve the same goal with a one-liner awk or perl command.
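Since the thread's own command is not shown, here is a minimal sketch of the kind of awk one-liner being asked about, assuming whitespace-separated columns (the file names input.txt/output.txt are placeholders):

```shell
# Skip comment lines (leading "#") and keep records whose third
# whitespace-separated field is >= 100.
awk '!/^#/ && $3 >= 100' input.txt > output.txt
```

Because 100 is a number, awk compares $3 numerically, so varying field widths or leading zeros are not a problem.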
Hi all,
This is the output of an SQL query:
TABLESPACE_NAME FILE_NAME TOTALSPACE FREESPACE USEDSPACE Free
------------------------- ------------------------------------------------------- ---------- --------- ---------... (2 Replies)
I am a beginner in Unix, but I have been asked to write a script to filter (remove duplicates from) data in a .dat file. The file is very large, containing billions of records.
The contents of the file look like:
30002157,40342424,OTC,mart_rec,100, ,0
30002157,40343369,OTC,mart_rec,95, ,0... (6 Replies)
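A common approach for exact-duplicate removal is sketched below, assuming whole lines are compared and the original order should be preserved (the file names are placeholders):

```shell
# Keep the first occurrence of each line; memory use grows with the
# number of *unique* lines, which matters for billions of records.
awk '!seen[$0]++' input.dat > deduped.dat

# If memory is a concern and order does not matter, an external
# merge sort handles arbitrarily large files:
# sort -u input.dat > deduped.dat
```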
Input file (4 DATA records shown in this case):
DATA AA0110
ACCESSION AA0110
VERSION AA0110 GI:157412239
FEATURES Location/Qualifiers
length 1..1170
1..1700
/length="1170"
position ... (5 Replies)
Input file:
data1 0.05
data2 1e-14
data1 1e-330
data2 1e-14
data5 2e-60
data5 2e-150
data1 4e-9
Desired output:
data2 1e-14
data1 1e-330
data2 1e-14
data5 2e-60
data5 2e-150
I would like to keep only those records where column 2 is less than 1e-10.
Command I tried: (1 Reply)
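Since the attempted command is not shown, one way to express this filter, assuming whitespace-separated columns and a placeholder file name:

```shell
# Keep rows whose second field is numerically below 1e-10.
# Values like 1e-330 are subnormal doubles; typical awk builds still
# compare them correctly against 1e-10.
awk '$2 < 1e-10' input.txt
```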
Hi Gurus,
I have a requirement to compare the current result with the previous result.
The sample case is below.
1 job1 1
1 job2 2
1 job3 3
2 job_a1 1
2 job_a2 2
2 job_a3 3
3 job_b1 1
3 job_b2 2
For the above sample file, GID is the group ID; for each input line, the job run... (1 Reply)
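The thread is truncated, so the exact comparison is unclear; as a purely hypothetical sketch, this prints each line next to the previous line seen for the same group ID in column 1 ("jobs.txt" is an assumed file name):

```shell
# Remember the last line seen per GID (column 1) and print it after
# the current line; "-" marks the first line of each group.
awk '{ print $0, (($1 in prev) ? prev[$1] : "-"); prev[$1] = $0 }' jobs.txt
```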
Hello,
I am trying to extract the valid data blocks and discard the invalid ones. In the input, the data blocks are separated by one or more blank rows. The criteria are:
1) second column value must be 30 or more for the row to be valid and considered for calculation and output.
2) the sum of all valid... (2 Replies)
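The second criterion is truncated above, so this is only a sketch of the blank-line-separated block handling, under the assumption that a block is kept only when every row's second column is at least 30 ("blocks.txt" is a placeholder):

```shell
# RS= puts awk in paragraph mode: each blank-line-separated block
# becomes one record. Print a block only if all its rows pass.
awk -v RS= -v ORS='\n\n' '
{
    ok = 1
    n = split($0, rows, "\n")
    for (i = 1; i <= n; i++) {
        split(rows[i], f, " ")
        if (f[2] + 0 < 30) { ok = 0; break }
    }
    if (ok) print
}' blocks.txt
```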
I have two files and need to filter out records based on certain criteria. These columns are of variable lengths, but the lengths are uniform throughout all the records of a file. I have shown a sample of three records below. Lines 1-9 are the item number "0227546_1" in the case of the first... (15 Replies)
My sample file is like this:
$ cat onefile
05/21/18 13:10:07 ABRT US1CPDAY Status 1
05/21/18 21:18:54 ABRT DailyBackup_VFFPRDAPENTL01 Status 6
05/21/18 21:26:24 ABRT DailyBackup_VFFPRDAPENTL02 Status 6
05/21/18 21:57:36 ABRT DailyBackup_vm-ea1ffpreng01 Status 6... (7 Replies)
Discussion started by: gotamp
7 Replies
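The question in that last thread is cut off, but a common task with such backup logs is selecting lines by status code; a hypothetical sketch for pulling the Status 6 lines:

```shell
# Match lines whose last field is 6 and whose next-to-last field is
# the literal word "Status".
awk '$NF == 6 && $(NF-1) == "Status"' onefile
```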
LEARN ABOUT DEBIAN
x2sys_merge
X2SYS_MERGE(1gmt)               Generic Mapping Tools               X2SYS_MERGE(1gmt)

NAME
x2sys_merge - Merge an updated COEs tables
SYNOPSIS
x2sys_merge -Amain_COElist.d -Mnew_COElist.d
DESCRIPTION
x2sys_merge reads two crossover databases and outputs the contents of the main one, updated with the COEs in the second one. The second
file should only contain COEs that are updated relative to the first one. That is, it MUST NOT contain any new two-track intersections (this
point is NOT checked in the code). This program is useful when, for any good reason such as file editing, NAV correction, or whatever, one had
to recompute only the COEs between the edited files and the rest of the database.
-A Specify the file main_COElist.d with the main crossover error data base.
-M Specify the file new_COElist.d with the newly computed crossover error data base.
OPTIONS
No space between the option flag and the associated arguments.
EXAMPLES
To update the main COE_data.txt with the new COEs estimations saved in the smaller COE_fresh.txt, try
x2sys_merge -ACOE_data.txt -MCOE_fresh.txt > COE_updated.txt
SEE ALSO
       x2sys_binlist(1), x2sys_cross(1), x2sys_datalist(1), x2sys_get(1), x2sys_init(1), x2sys_list(1), x2sys_put(1), x2sys_report(1)

GMT 4.5.7                            15 Jul 2011                    X2SYS_MERGE(1gmt)