Hi:
I have a text file (pipe delimited) which is loaded into the DB using SQL*Loader (and CTL files) after some initial validation by the shell script.
Now I have a situation where the shell script needs to check a column in the text file, and if it is NULL it needs to send this record/row... (12 Replies)
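One way to sketch this in the shell is to let awk split the rows and route them by whether the column is empty. The column number (5) and the file names here are placeholders, since the original post is truncated:

```shell
# Route rows with an empty 5th field to rejects.txt, the rest to clean.txt.
# Column number and file names are assumptions, not from the original post.
awk -F'|' '{
    if ($5 == "")              # field is NULL/empty
        print > "rejects.txt"
    else
        print > "clean.txt"
}' input.txt
```

The script can then mail or re-process `rejects.txt` before handing `clean.txt` to SQL*Loader.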
Hi, could someone help me convert a CSV file (with double-quoted strings) to a pipe-delimited file?
Here is the sample data:
1,Friends,"$3.99 per 1,000 listings",8158
Here "1,000 listings" should stay within a single field.
Thanks,
Ram (8 Replies)
This is Korn shell on UNIX.
The scenario is that I have a pipe-delimited text file which needs to be customized. Say, for example, I have a pipe-delimited text file with 15 columns (| delimited) and 200 rows. Currently the 11th and 12th columns have null values for all the records (there are other null columns... (4 Replies)
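Since the post is truncated, the exact customization is unclear; assuming the goal is to fill the empty 11th and 12th columns with fixed values, a minimal awk sketch (the values "X" and "Y" are placeholders) would be:

```shell
# Overwrite the empty 11th and 12th pipe-delimited columns with
# placeholder values; assigning to a field rebuilds the record with OFS.
awk -F'|' -v OFS='|' '{ $11 = "X"; $12 = "Y"; print }' input.txt
```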
hi,
I have pipe delimited flat file as below
1|ab|4.5|9|
2|ac|3|12|
3|ac|4.5|8|
I want to show (display) only the 3rd field between the pipes.
Please help. (1 Reply)
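Printing a single pipe-delimited field is a one-liner with either cut or awk:

```shell
# Print only the 3rd pipe-delimited field:
cut -d'|' -f3 file.txt
# or, equivalently:
awk -F'|' '{ print $3 }' file.txt
```

For the sample data above, both print 4.5, 3 and 4.5 on separate lines.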
Hello All,
Thanks for taking time to read through the thread and for providing any possible solution.
I am trying to pivot a comma-separated field in a pipe-delimited file. Data looks something like this:
Field1|Field2
123|345,567,789
234|563,560
345|975,098,985,397,984
456|736
Desired... (8 Replies)
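The desired output is truncated above; assuming it is one row per comma-separated value (e.g. 123|345, 123|567, 123|789 for the first data row), an awk sketch would be:

```shell
# Pivot the comma-separated 2nd field into one output row per value,
# keeping the header line intact.
awk -F'|' -v OFS='|' '
NR == 1 { print; next }               # pass the header through
{
    n = split($2, a, ",")
    for (i = 1; i <= n; i++) print $1, a[i]
}' file.txt
```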
Hi,
I have a pipe-delimited file as below, and I need to replace the 2nd column of each line with null values.
1|10/15/2011|fname1|lname1
2|10/15/2012|fname2|lname2
3|10/15/2013|fname3|lname3
Output file:
1||fname1|lname1
2||fname2|lname2
3||fname3|lname3
I tried this
... (2 Replies)
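Assigning an empty string to the field and letting awk rebuild the record does this directly:

```shell
# Blank out the 2nd pipe-delimited column:
awk -F'|' -v OFS='|' '{ $2 = ""; print }' file.txt
```

For the sample input this produces exactly the desired output, e.g. 1||fname1|lname1.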
Hi All,
I have a pipe-delimited file with 11 columns. I need to insert 4 empty columns after column 10, and after column 11 I need to insert a column which has the same value for all the rows.
My file
1|2|3|4|5|6|7|8|9|10|11
New file
... (11 Replies)
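The new file is truncated above; assuming the layout is columns 1-10, then 4 empty columns, then column 11, then the constant column, printing the fields explicitly keeps the ordering obvious (the constant "C" is a placeholder):

```shell
# Insert 4 empty columns after column 10 and a constant column after
# column 11 of an 11-column pipe-delimited file.
awk -F'|' -v OFS='|' -v c="C" '{
    print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,"","","","",$11,c
}' file.txt
```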
Hi All,
I have a file which has data like
a,b
c,d
e,f
g,h
And I need to insert a new column at the beginning with a sequence number (1 to n):
1,a,b
2,c,d
3,e,f
4,g,h
Please let me know how to achieve this in UNIX. (3 Replies)
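awk's built-in record counter NR gives the sequence number for free:

```shell
# Prefix each line with its line number and a comma:
awk '{ print NR "," $0 }' file.txt
```

For the sample input this yields 1,a,b then 2,c,d and so on.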
Hi,
I need to remove the first column from a CSV file, and I can do this by using the below command.
cut -f1 -d, --complement Mytest.csv
I need to implement this in shell scripting. Whenever I use the above command alone on the command line it works fine.
I have 5 files in my directory and... (3 Replies)
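Note that `--complement` is a GNU cut extension; the portable equivalent is to keep fields 2 onward. Wrapped in a loop over the directory (the output naming scheme here is an assumption), a script version might look like:

```shell
# Drop the first comma-separated column from every CSV in the directory,
# writing the result next to each file with an .out suffix.
for f in *.csv; do
    cut -d, -f2- "$f" > "${f%.csv}.out"
done
```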
I have an input file as below
Emp1|FirstName|MiddleName|LastName|Address|Pincode|PhoneNumber
1234|FirstName1|MiddleName2|LastName3| Add1 || ADD2|123|000000000
Output :
1234|FirstName1|MiddleName2|LastName3| Add1 ,, ADD2|123|000000000
OR
1234,FirstName1,MiddleName2,LastName3, Add1 ||... (2 Replies)
Discussion started by: styris
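The request above is truncated; for the first desired output (keep the pipe delimiters but turn the embedded "||" into ",,"), a plain substitution works:

```shell
# Replace every "||" with ",," while leaving single pipes alone:
sed 's/||/,,/g' file.txt
```

The second variant (converting the delimiters themselves to commas while preserving pipes inside the data) cannot be done with a blind character swap and needs field-aware handling.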
CSV2MIPE(1) User Contributed Perl Documentation CSV2MIPE(1)
NAME
csv2mipe.pl - Generates MIPE file based on 3 tab-delimited files
based on MIPE version v1.1
arguments: * tab-delimited file with PCR-level data
* tab-delimited file with SNP-level data
* tab-delimited file with assay-level data
Columns in file with PCR-level data:
pcr_id
pcr_modified (might be multiple, divided by semi-colon ";")
pcr_project (might be multiple, divided by semi-colon ";")
pcr_researcher (might be multiple, divided by semi-colon ";")
pcr_species
source_type
source_id
design_seq
primer1_oligo
primer1_seq
primer1_tm
primer2_oligo
primer2_seq
primer2_tm
design_remark (might be multiple, divided by semi-colon ";")
use_seq
use_revcomp
use_remark (might be multiple, divided by semi-colon ";")
pcr_remark (might be multiple, divided by semi-colon ";")
Columns in file with SNP-level data:
pcr_id
snp_id
snp_pos
snp_amb
snp_remark (might be multiple, divided by semi-colon ";")
Columns in file with assay-level data:
pcr_id
snp_id
assay_id
assay_type
assay_enzyme
assay_oligo
assay_specific
assay_tail
assay_strand
assay_remark (might be multiple, divided by semi-colon ";")
SYNOPSIS
csv2mipe.pl <pcr_file.csv> <snp_file.csv> <assay_file.csv>
ADDITIONAL INFO
See http://mipe.sourceforge.net
AUTHOR
Jan Aerts (jan.aerts@bbsrc.ac.uk)
perl v5.14.2 2005-07-20 CSV2MIPE(1)