Script Optimization - large delimited file, for loop with many greps
Post 302517399 by verge on Tuesday 26th of April 2011 05:21:57 PM
Thanks again, Corona ... I'm still having trouble excluding "-" as a delimiter in

Code:
FS="[^a-zA-Z0-9\\_\\.\\/\\-\\ ]"
 
I've tried \\-, \-, -, '-'
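
If I have the bracket-expression rules right, backslash escapes aren't portable inside [...], and "-" is taken literally only when it is the first or last character of the class, so it can simply go last, unescaped:

Code:
awk 'BEGIN { FS = "[^a-zA-Z0-9_./ -]" }  # "-" placed last in the class = a literal dash
{ print NF, $1 }' infile                 # quick check that fields split as intended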

---------- Post updated at 01:56 PM ---------- Previous update was at 01:52 PM ----------

Thanks again for your help ... the processing time dropped greatly, from almost 6 hours to 20 minutes, for roughly 3 million lines in about 35K lines.

---------- Post updated at 02:21 PM ---------- Previous update was at 01:56 PM ----------

I have a slightly different issue with delimiters. I've been isolating and replacing the delimiters with grep, cut and sed but I know there is a much better way to do this with awk or even perl.

In my file the token separator can be any non-word character; the line terminator will also be a non-word character (the rule dictates that these two characters must be different). To add to the issue, user-entered fields can include non-word characters, as long as those characters aren't the ones used for the token/line separators.

Code:
Example record
ISA¼3213¼part-number¼address~GS¼56756¼control{number~ST¼09898~
 
Into 
ISA|3213|part-number|address
GS|56756|control{number
ST|09898

I need to replace the token separator ¼ with a more standard delimiter like |
I need to replace the line terminator ~ with a line break (CR/LF, hex 0D0A)

However, ¼ and ~ change from record to record, and there are thousands of them in a file. I've csplit the files such that each file has the same token/field separator and line terminator.
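
Here's a minimal sketch of the direction I'm thinking, in case it helps make the problem concrete. It assumes single-byte separators, that each csplit piece uses a single separator/terminator pair throughout, that every piece starts with "ISA" followed immediately by the token separator (so the separator is byte 4), and that the last byte of each piece is the line terminator:

Code:
for f in xx*; do                      # xx* = csplit's default output names
  sep=$(head -c 4 "$f" | tail -c 1)   # token separator: 4th byte, right after "ISA"
  term=$(tail -c 1 "$f")              # line terminator: last byte of the piece
  tr "$term" '\n' < "$f" |            # terminator -> newline
    tr "$sep" '|' > "$f.out"          # separator  -> pipe
done

tr writes a bare LF; if a literal CR/LF (0D0A) pair is strictly required, a final pass such as GNU sed's 's/$/\r/' could append the CR.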

Let me know if I can explain this more clearly!
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Directory sizes loop optimization

I have the following script:

Code:
#!/usr/bin/ksh
export MDIR=/datafiles
NAME=$1
SERVER=$2
DIRECTORY=$3
DATABASE=$4
ID=$5
export dirlist=`/usr/bin/ssh -q $ID@$SERVER find $DIRECTORY -type d -print`
for dir in $dirlist
do
SIZE=`</dev/null /usr/bin/ssh -q $ID@$SERVER du -ks $dir`
echo... (6 Replies)
Discussion started by: la_womn

2. UNIX for Dummies Questions & Answers

Command that creates file and also greps that file?

I have a command that does something and then creates a log file (importlog.xml). I then want to grep that newly created log (importlog.xml) file for a certain word (success). I then want to write that grep result to a new file (success.log). So far I can run the command which creates the... (2 Replies)
Discussion started by: Sepia

3. Shell Programming and Scripting

Large pipe delimited file that I need to add CR/LF every n fields

I have a large flat file with variable length fields that are pipe delimited. The file has no new line or CR/LF characters to indicate a new record. I need to parse the file and after some number of fields, I need to insert a CR/LF to start the next record. Input file ... (2 Replies)
Discussion started by: clintrpeterson

4. Shell Programming and Scripting

Extracting a portion of data from a very large tab delimited text file

Hi All, I wanted to know how to effectively delete some columns in a large tab-delimited file. I have a file that contains 5 columns and almost 100,000 rows:

Code:
3456 f g t t
3456 g h
456 f h
4567 f g h z
345 f g
567 h j k l

This is a very large data file and tab delimited. I need... (2 Replies)
Discussion started by: Lucky Ali

5. Shell Programming and Scripting

help with a shell script that greps an error from the logs

Hello everyone. I wrote the following script but the second part is not executing. It is not sending the notification by email when the error occurs. The sendmail is working, so I think the error should be in the if statement.

Code:
LOGDIR=/logs/out
LOG=`date "+%Y%m%d"`.LOG-FILE.out
#the log file ... (11 Replies)
Discussion started by: adak2010

6. Shell Programming and Scripting

Removing dupes within 2 delimited areas in a large dictionary file

Hello, I have a very large dictionary file which is in text format and which contains a large number of sub-sections. Each sub-section starts with the following header:

Code:
#DATA
#VALID 1

and ends with a footer as shown below:

Code:
#END

The data between the header and the footer consists of... (6 Replies)
Discussion started by: gimley

7. Shell Programming and Scripting

Need a script to convert comma delimited files to semi colon delimited

Hi All, I need a unix script to convert .csv files to .skv files (changing a comma delimited file to a semi colon delimited file). I am a unix newbie and so don't know where to start. The script will be scheduled using cron and needs to convert each .csv file in a particular folder to a .skv... (4 Replies)
Discussion started by: CarpKing

8. Shell Programming and Scripting

Tab Delimited file in loop

Hi, I have a requirement to create a tab-delimited file with values coming from variables. The file will contain only two columns separated by a tab. The header will be added once. Values will keep getting added on each script run. If values already exist, then they will be replaced. I have done so... (1 Reply)
Discussion started by: sukhdip

9. UNIX for Advanced & Expert Users

Need optimized awk/perl/shell to give the statistics for the Large delimited file

I have a file whose size is around 24 G, with 14 columns, delimited with "|". My requirement - can anyone provide the fastest and best way to get the below results:
Number of records of the file
First column and second column - unique counts
Thanks for your time, Karti ------ Post updated at... (3 Replies)
Discussion started by: kartikirans

10. UNIX for Advanced & Expert Users

Need Optimization shell/awk script to aggregate (sum) for all the columns of huge data file

Optimization shell/awk script to aggregate (sum) all the columns of a huge data file. File delimiter: "|". Need to have the sum of all columns, with column number: aggregation (summation) for each column. The file does not have a header. Like below -
Column 1 "Total
Column 2 : "Total
... ...... (2 Replies)
Discussion started by: kartikirans