is intended to match and store file1_header.txt as $file1 and file1_header.vcf.gz as $file2.
These two variables are then passed to the reheader command to be processed. After the command executes, the variables are reset using the remaining two files.
I will rerun the code, omitting the out and adding the done. Thank you.
---------- Post updated at 04:53 PM ---------- Previous update was at 02:25 PM ----------
The portion of code in bold in the previous post is unchanged, but I updated the match portion of the code to:
Adding the missing done did allow the script to execute.
The output that I get in the terminal is:
desired output
Basically, on the first pass the $file1 variable is the .txt and the $file2 variable is the matching .vcf.gz. Those variables are passed to the reheader command, which is then executed. After it executes, the variables are reset using the other files in the directory. Since the numerical prefix is always unique, it is used to perform the match between files. There will always be a matching .vcf.gz for each .txt. Thank you.
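A minimal sketch of that pairing loop, assuming the reheader command is bcftools reheader and that each pair shares a unique numeric prefix. The directory and file names below are throwaway stand-ins, and the command is only echoed so the pairing can be checked first:

```shell
# Dry-run sketch: pair each NNN_*.txt header with its NNN_*.vcf.gz by the
# unique numeric prefix, then print the reheader command that would run.
dir=$(mktemp -d)                                  # stand-in for the real directory
touch "$dir/101_sample.txt" "$dir/101_sample.vcf.gz"
touch "$dir/202_other.txt"  "$dir/202_other.vcf.gz"

for file1 in "$dir"/*.txt; do
    prefix=$(basename "$file1" | cut -d_ -f1)     # the unique numeric prefix
    for file2 in "$dir/$prefix"_*.vcf.gz; do
        [ -e "$file2" ] || continue               # no matching vcf.gz: skip
        # drop the leading echo once the pairs look right
        echo bcftools reheader -h "$file1" -o "${file2%.vcf.gz}.rehead.vcf.gz" "$file2"
    done
done > "$dir/pairs.txt"
cat "$dir/pairs.txt"
```

On the dummy files this prints one bcftools reheader line per .txt/.vcf.gz pair; removing the echo would actually run the commands.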
Last edited by cmccabe; 02-02-2017 at 06:56 PM.
Reason: added details
I have searched the internet (including these forums) and perhaps I'm not using the right wording.
What I'm looking for is a function (preferably C) that analyzes the similarity of two numerical or near-numerical values, and returns either a true/false (match/nomatch) result or a return code that... (4 Replies)
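The post asks for C, but the comparison itself is small enough to sketch in the shell/awk register of the rest of this page: treat two values as a match when, after stripping non-numeric characters, they differ by no more than a chosen tolerance. The function name and the default tolerance of 0.5 are made up for the example:

```shell
# similar VALUE1 VALUE2 [TOLERANCE] -> exit 0 on match, 1 on nomatch
similar() {
    awk -v a="$1" -v b="$2" -v tol="${3:-0.5}" 'BEGIN {
        gsub(/[^0-9.+-]/, "", a)          # tolerate near-numerical input
        gsub(/[^0-9.+-]/, "", b)
        d = a - b; if (d < 0) d = -d      # absolute difference
        exit !(d <= tol)
    }'
}
similar 10.2 10.4 && echo match || echo nomatch   # match
similar 10.2 99   && echo match || echo nomatch   # nomatch
```

The same shape ports directly to C: parse with strtod, then return fabs(a - b) <= tol.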
Hi,
I am trying to write a script which parses a log file and will eventually put the values in an array so that I can perform some math on it. In this file I am only interested in the last 200 lines so here is the command I use to display the contents in a manageable manner.
tail -200... (3 Replies)
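A sketch of the array step, assuming bash 4+ for mapfile; a throwaway file of numbers stands in for the real log:

```shell
log=$(mktemp)
seq 1 300 > "$log"                          # dummy log: one number per line
mapfile -t lines < <(tail -n 200 "$log")    # last 200 lines into an array
echo "got ${#lines[@]} lines"
# arithmetic on the array, e.g. summing the first field of every line
printf '%s\n' "${lines[@]}" | awk '{ sum += $1 } END { print sum }'
```

For the dummy log this keeps lines 101-300, so the sum printed is 40100.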
Hi,
I have a file of csv data, which looks like this:
file1:
1AA,LGV_PONCEY_LES_ATHEE,1,\N,1,00020460E1,0,\N,\N,\N,\N,2,00.22335321,0.00466628
2BB,LES_POUGES_ASF,\N,200,200,00006298G1,0,\N,\N,\N,\N,1,00.30887539,0.00050312... (10 Replies)
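One way to pick fields out of rows like those, sketched with awk; which columns matter is a guess, and \N is treated as a null marker:

```shell
cat > file1 <<'EOF'
1AA,LGV_PONCEY_LES_ATHEE,1,\N,1,00020460E1,0,\N,\N,\N,\N,2,00.22335321,0.00466628
2BB,LES_POUGES_ASF,\N,200,200,00006298G1,0,\N,\N,\N,\N,1,00.30887539,0.00050312
EOF
parsed=$(awk -F, '{
    for (i = 1; i <= NF; i++) if ($i == "\\N") $i = "NA"   # mark nulls
    print $1, $2, $13, $14                                 # id, name, last two numbers
}' file1)
echo "$parsed"
```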
Dear All,
assume i have a file with content:
<Start>6000</Start>
<Stop>7599</Stop>
the output is:
6000
7000
7100
7200
7300
7400
7599
How should awk, sed, or perl be used to do this task, that is, to extract the unique prefixes from the start and stop values?
Thanks
Jimmy (3 Replies)
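One reading of that desired output, as a hedged sketch: print the start value, then every hundred mark inside the stop's thousand that lies below the stop, then the stop itself. The tag extraction uses sed:

```shell
cat > range.txt <<'EOF'
<Start>6000</Start>
<Stop>7599</Stop>
EOF
start=$(sed -n 's:.*<Start>\([0-9]*\)</Start>.*:\1:p' range.txt)
stop=$(sed -n 's:.*<Stop>\([0-9]*\)</Stop>.*:\1:p' range.txt)
echo "$start"                                                      # 6000
seq $(( stop / 1000 * 1000 )) 100 $(( stop / 100 * 100 - 100 ))    # 7000 .. 7400
echo "$stop"                                                       # 7599
```

This reproduces the 6000 / 7000..7400 / 7599 listing above; if the intended rule also covers hundreds within the start's thousand, the seq bounds would need adjusting.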
The bash below loops through a specific directory dir and finds and writes the oldest folder to a variable called $filename.
#!/bin/bash
# oldest folder stored as variable for analysis, version log created, and quality indicators matched to run
dir=/home/cmccabe/Desktop/NGS/test
find... (2 Replies)
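A sketch of the oldest-folder step, assuming GNU find (for -printf '%T@') and GNU touch (for -d); a temporary directory stands in for /home/cmccabe/Desktop/NGS/test:

```shell
dir=$(mktemp -d)
mkdir "$dir/old" "$dir/new"
touch -d '2015-01-01' "$dir/old"          # make one folder clearly older
filename=$(find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' \
           | sort -n | head -n 1 | cut -d' ' -f2-)
echo "oldest folder: $filename"
```

Sorting on %T@ (seconds since the epoch) puts the oldest modification time first, and cut strips the timestamp back off.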
I am trying to create a cronjob that will run on startup that will look at a list.txt file to see if there is a later version of a database using database.txt as the source. The matching lines are written to output.
$1 in database.txt will be in list.txt as a partial match. $2 of database.txt... (2 Replies)
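A hedged sketch of the partial match: a database line is kept when its $1 occurs as a substring of some line of list.txt. The file contents here are invented just to exercise the logic:

```shell
cat > list.txt <<'EOF'
db_v2_build7
other_tool_v1
EOF
cat > database.txt <<'EOF'
db_v2 build7
db_v9 build9
EOF
awk 'NR == FNR { lines[NR] = $0; n = NR; next }       # slurp list.txt
     { for (i = 1; i <= n; i++)
           if (index(lines[i], $1)) { print; break }  # $1 found as substring
     }' list.txt database.txt > output
cat output
```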
In the below bash I am trying to rename each of the 3 text files in /home/cmccabe/Desktop/percent by matching the numerical portion of each file to lines 3, 4, or 5 in /home/cmccabe/Desktop/analysis.txt. There will always be a match between the files. When a match is found each text file in... (2 Replies)
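A sketch of that rename, with invented file names and an assumed two-column layout for lines 3-5 of analysis.txt (numeric portion, then the new name):

```shell
dir=$(mktemp -d)                          # stand-in for /home/cmccabe/Desktop/percent
printf 'header\nheader\n101 sampleA\n102 sampleB\n103 sampleC\n' > "$dir/analysis.txt"
touch "$dir/101.txt" "$dir/102.txt" "$dir/103.txt"
for f in "$dir"/10?.txt; do
    num=$(basename "$f" .txt)             # numeric portion of the file name
    new=$(awk -v n="$num" 'NR >= 3 && NR <= 5 && $1 == n { print $2 }' "$dir/analysis.txt")
    [ -n "$new" ] && mv "$f" "$dir/$new.txt"
done
ls "$dir"
```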
In the file1 below if $9 and $12 are . (dot) then the value in $8 of file1 is used as a key (exact match) to look up in each $2 of file2; when a match is found, the value of $4
in file1 is used to look for a range match within +/- 50 using the values in $4 and after in file2. The number of... (9 Replies)
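A hedged sketch of that two-stage lookup in awk, on invented tab-delimited rows (field meanings are guesses from the description): when $9 and $12 of file1 are dots, $8 keys into $2 of file2, and a file2 row is accepted only if its $4 is within +/-50 of file1's $4:

```shell
printf 'c1\tc2\tc3\t1000\tc5\tc6\tc7\tKEY1\t.\tc10\tc11\t.\n' > file1
printf 'x1\tKEY1\tx3\t1040\nx1\tKEY1\tx3\t2000\nx1\tKEY2\tx3\t1000\n' > file2
matches=$(awk -F'\t' '
    NR == FNR { if ($9 == "." && $12 == ".") key[$8] = $4; next }
    ($2 in key) {
        d = $4 - key[$2]; if (d < 0) d = -d
        if (d <= 50) print                 # range match within +/- 50
    }' file1 file2)
echo "$matches"
```

Only the 1040 row survives: 2000 is outside the +/-50 window and KEY2 never matched the exact key.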
In the awk below I am trying to use file1 as a match to file2. In file2 the contents of $5, $6, and $7 (always tab-delimited) are copied to the output under the header Quality metrics. The below executes but the output is empty. I have added comments to help and show my thinking. Thank you... (0 Replies)
I am trying to use bash to loop through a directory /path/to/data using a prefix match from /path/to/file. That match is obtained and works using the code below (in green)... what I cannot seem to do is populate or update the corresponding prefix_file.txt in /path/to/data with the values in each... (3 Replies)
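A sketch of the per-prefix update, with invented names: prefixes.txt stands in for /path/to/file, a temporary directory stands in for /path/to/data, and the line written is a placeholder for whatever values each prefix needs:

```shell
data=$(mktemp -d)                           # stand-in for /path/to/data
printf '101\n202\n' > prefixes.txt
touch "$data/101_file.txt" "$data/202_file.txt"
while read -r prefix; do
    target="$data/${prefix}_file.txt"
    [ -e "$target" ] || continue            # only update files that exist
    echo "values for $prefix" >> "$target"
done < prefixes.txt
cat "$data/101_file.txt"
```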