If you have GNU awk or mawk, you could try first converting it to regular newline termination like so:
Perhaps that would solve your memory issues as well.
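The conversion command itself is missing from the snippet; a minimal sketch with gawk/mawk, assuming the file uses carriage-return-based terminators (adjust RS to match the actual terminator):

```shell
# Sample file with CRLF line endings; the poster's exact terminator is an
# assumption -- adjust RS to whatever the file really uses.
printf 'line1\r\nline2\r\n' > crlf.txt

# gawk and mawk both accept a regular-expression record separator;
# plain POSIX awk does not, so invoke gawk/mawk explicitly if in doubt.
awk 'BEGIN { RS = "\r\n?" } { print }' crlf.txt > unix.txt
```

Reading with a small RS also means awk never has to hold one giant "line" in memory, which is likely why it helps with the memory issue.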
Hi,
I have a large file (CSV format) that I need to split into 2 files. The file looks something like
Original_file.txt
first name, family name, address
a, b, c,
d, e, f,
and so on for over 100,000 lines
I need to create two files from this one file. The condition is I need to ensure... (4 Replies)
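The actual condition is cut off above, so as a stand-in, here is a sketch that routes rows to one of two files based on a test on the first field (the file names file1.txt/file2.txt and the test itself are made up):

```shell
# Sample data mirroring the post (header plus rows).
cat > Original_file.txt <<'EOF'
first name, family name, address
a, b, c,
d, e, f,
EOF

# Stand-in condition: rows whose first field is "a" go to file1.txt,
# everything else to file2.txt. The header is copied to both files.
awk -F', *' '
    NR == 1   { print > "file1.txt"; print > "file2.txt"; next }
    $1 == "a" { print > "file1.txt"; next }
              { print > "file2.txt" }
' Original_file.txt
```

Replace the `$1 == "a"` test with whatever the real condition is.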
Hi experts.
I have a file (500 MB max) that I need to pivot (for loading into ORCL) and change the BLANK delimiter to PIPE |.
Sometimes there are multiple BLANKs (as a particular value may be BLANK, or there may simply be two BLANKs instead of one).
thanks for your input!
Cheers,
Layout... (3 Replies)
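If the runs of BLANKs are only delimiter noise (no field is legitimately empty), squeezing them works; if a doubled BLANK actually encodes an empty field, the sketch below would collapse it, so that case needs different handling:

```shell
# One line with a single and a doubled blank between values.
printf 'val1 val2  val3 val4\n' > data.txt

# Squeeze runs of blanks into one and translate to the pipe delimiter.
# WARNING: an empty field encoded as two blanks is collapsed by -s.
tr -s ' ' '|' < data.txt > piped.txt
```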
HI,
I have to split a large file whose input looks like:
Input file name_file.txt
00001|AAAA|MAIL|DATEOFBIRTHT|.......
00001|AAAA|MAIL|DATEOFBIRTHT|.......
00002|BBBB|MAIL|DATEOFBIRTHT|.......
00002|BBBB|MAIL|DATEOFBIRTHT|.......
00003|CCCC|MAIL|DATEOFBIRTHT|.......... (1 Reply)
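A sketch that writes one output file per value of the first field; the split_<key>.txt naming is an assumption:

```shell
# Sample input in the post's layout (fields separated by "|").
cat > name_file.txt <<'EOF'
00001|AAAA|MAIL|DOB1
00001|AAAA|MAIL|DOB2
00002|BBBB|MAIL|DOB3
00003|CCCC|MAIL|DOB4
EOF

# One output file per distinct value of field 1. close() releases the
# previous file descriptor, which matters when there are many keys.
awk -F'|' '
    $1 != prev { close(out); out = "split_" $1 ".txt"; prev = $1 }
    { print > out }
' name_file.txt
```

This assumes the input is already grouped by the first field, as in the sample; if not, sort on field 1 first.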
Hi,
My input file's field separator is ^.
12^inms^
13^fakdks^ssk^s3
23^avsd^
13^fakdks^ssk^a4
I want to print only the lines with exactly 2 delimiter occurrences, i.e.
12^inms^
23^avsd^ (4 Replies)
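One way is to count the ^ characters per line with gsub(), which avoids any risk of ^ being treated as a regex anchor in FS:

```shell
cat > input.txt <<'EOF'
12^inms^
13^fakdks^ssk^s3
23^avsd^
13^fakdks^ssk^a4
EOF

# gsub() returns the number of replacements made; substituting "^" for
# itself leaves the line unchanged and yields the delimiter count.
awk 'gsub(/\^/, "^") == 2' input.txt > twofields.txt
```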
Hi All,
I'm a newbie here, and I'm just wondering how to delete a single record from a large file in UNIX.
ex.
file1.txt is 1000 records
nikki1
nikki2
nikki3
What I want to do is delete the nikki2 record in file1.txt. Is it possible?
Please advise,
Thanks, (3 Replies)
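Yes; assuming nikki2 is the entire line, sed can drop it. In-place editing is not portable, so write to a temporary file and move it back:

```shell
printf 'nikki1\nnikki2\nnikki3\n' > file1.txt

# Delete every line that is exactly "nikki2", then replace the original.
sed '/^nikki2$/d' file1.txt > file1.tmp && mv file1.tmp file1.txt
```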
Hello,
I want to split a big file into smaller ones with certain "counts". I am aware this kind of job has been asked about quite often, but I am posting again because I came across csplit, which may solve the problem more simply.
Input file (fasta format):
>seq1
agtcagtc
agtcagtc
ag
>seq2
agtcagtcagtc... (8 Replies)
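With GNU csplit, splitting before every > header gives one file per sequence (xx00, xx01, ...); note the '{*}' repetition is a GNU extension:

```shell
cat > seqs.fa <<'EOF'
>seq1
agtcagtc
agtcagtc
ag
>seq2
agtcagtcagtc
EOF

# Cut before each ">" header; -z elides the empty leading piece and -s
# silences the per-file byte counts. '{*}' repeats the pattern for every
# match, so each sequence lands in its own xxNN file.
csplit -s -z seqs.fa '/^>/' '{*}'
```

For a fixed count of sequences per output file, a small awk counter on /^>/ headers is simpler than csplit.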
Hi,
I have a file which has many URLs delimited by spaces. Now I want to move them to separate files, each holding 10 URLs.
http://3276.e-printphoto.co.uk/guardian http://abdera.apache.org/ http://abdera.apache.org/docs/api/index.html
I have used the below code to arrange... (6 Replies)
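A sketch: put one URL per line, then let split write fixed-size chunks. The demo uses 2 per file so the output is easy to check (use -l 10 for the real case); the urls_ prefix is made up:

```shell
printf 'http://a.example/1 http://b.example/2 http://c.example/3\n' > urls.txt

# tr turns the space-delimited list into one URL per line; split then
# writes chunks of 2 lines each to urls_aa, urls_ab, ...
tr ' ' '\n' < urls.txt | split -l 2 - urls_
```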
Hello All,
I have a large file, more than 50,000 lines, and I want to split it into even chunks of 5,000 records, which I can do using
sed '1d;$d;' <filename> | awk 'NR%5000==1{x="F"++i;}{print > x}'
Now I need to add one more condition, which is not to break the file at the 5000th record if the 5000th record... (20 Replies)
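The rest of the condition is cut off above, so this sketch assumes records sharing the same first field must stay in the same chunk: it starts a new file only once the chunk size is reached AND the key changes (chunk size 3 stands in for 5000):

```shell
# Demo input: groups keyed on field 1 (the real grouping key is an
# assumption, since the post is truncated).
cat > big.txt <<'EOF'
k1 a
k1 b
k2 c
k2 d
k3 e
EOF

awk -v size=3 '
    NR == 1 { out = "F" ++i }
    # Open a new chunk only when it is both full and at a key boundary,
    # so a group never straddles two output files.
    count >= size && $1 != prev { close(out); out = "F" ++i; count = 0 }
    { print > out; count++; prev = $1 }
' big.txt
```

With this input, F1 gets four lines (the chunk runs past size 3 to keep the k2 group intact) and F2 gets the final k3 record.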
Hi, all.
I have an input file. I would like to generate 3 types of output files.
Input:
LG10_PM_map_19_LEnd_1000560
LG10_PM_map_6-1_27101856
LG10_PM_map_71_REnd_20597718
LG12_PM_map_5_chr_118419232
LG13_PM_map_121_24341052
LG14_PM_1a_456799
LG1_MM_scf_5a_opt_abc_9029993
... (5 Replies)
My requirement is: for every record of a particular file, I have to check for a record delimiter (e.g. "\n"), and if any row doesn't have "\n", report it in an error file.
Please suggest how to go about this. (4 Replies)
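Only the final line of a file can be missing its "\n" terminator, so it is enough to test the last byte (file and log names here are made up):

```shell
printf 'rec1\nrec2' > nofinal.txt    # final record lacks its "\n"

# Command substitution strips a trailing newline, so $(tail -c 1 file)
# is empty exactly when the last byte is "\n".
if [ -n "$(tail -c 1 nofinal.txt)" ]; then
    echo "nofinal.txt: last record has no newline terminator" >> errors.log
fi
```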
Discussion started by: manab86
igawk
IGAWK(1)                        Utility Commands                        IGAWK(1)

NAME
igawk - gawk with include files
SYNOPSIS
igawk [ all gawk options ] -f program-file [ -- ] file ...
igawk [ all gawk options ] [ -- ] program-text file ...
DESCRIPTION
Igawk is a simple shell script that adds the ability to have ``include files'' to gawk(1).
AWK programs for igawk are the same as for gawk, except that, in addition, you may have lines like
@include getopt.awk
in your program to include the file getopt.awk from either the current directory or one of the other directories in the search path.
OPTIONS
See gawk(1) for a full description of the AWK language and the options that gawk supports.
EXAMPLES
cat << EOF > test.awk
@include getopt.awk
BEGIN {
while (getopt(ARGC, ARGV, "am:q") != -1)
...
}
EOF
igawk -f test.awk
SEE ALSO gawk(1)
Effective AWK Programming, Edition 1.0, published by the Free Software Foundation, 1995.
AUTHOR
Arnold Robbins (arnold@skeeve.com).
Free Software Foundation Nov 3 1999 IGAWK(1)