Thanks for your response, daptal. There seems to be a bit of a problem though. I am on a Solaris machine, by the way. When I run your awk statement, I get an error:
awk: syntax error near line 1
awk: illegal statement near line 1
When I use nawk or gawk, though, the command runs successfully but does the opposite of what I expected, i.e. it adds an extra newline instead of the comma. My output in both cases was:
substr(colName
1
10)
add_months(somedate
-6)
Thanks!
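For what it's worth, here is a sketch of one way to rejoin fragments like the output above, assuming the goal is to put the lost commas back: glue lines together until the parentheses balance. (On Solaris, use nawk or /usr/xpg4/bin/awk; /usr/bin/awk is the pre-POSIX awk and rejects scripts like this, which matches the error shown.)

```shell
printf 'substr(colName\n1\n10)\nadd_months(somedate\n-6)\n' |
awk '{
    buf = (buf == "") ? $0 : buf "," $0        # re-insert the missing comma
    n = 0
    for (i = 1; i <= length(buf); i++) {       # count unbalanced parentheses
        c = substr(buf, i, 1)
        if (c == "(") n++; else if (c == ")") n--
    }
    if (n == 0) { print buf; buf = "" }        # balanced: the call is complete
}'
```

This prints `substr(colName,1,10)` and `add_months(somedate,-6)`. It is only a sketch under the assumption that fragments never contain parentheses of their own.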
---------- Post updated at 01:36 AM ---------- Previous update was at 01:26 AM ----------
Quote:
Originally Posted by skmdu
Hope I got your question correctly, but I used a different approach to solve this issue (i.e. I didn't check whether the newline appears before a line starting with a number). It works for me. You can try this out.
Thanks for your response, skmdu! This code works when my input is just the examples I quoted in my original post. There is a bit of a problem, though, and it is entirely because I didn't give you the entire picture. Here's a more complete input:
My input (notice that it's all on one line):
My output so far (without your sed statement):
Output when your sed statement is included in my script that produced the above output:
Notice how some lines are joined because extra commas or braces are present on a specific line. Also, an additional comma was added at the end, after the semicolon.
Again, I realize it was because I had not given you much to work with to begin with. Thanks again for your help so far! Really, really appreciate it.
Hi All,
I need to select only those records that have a non-zero value in the first column of a comma-delimited file.
Suppose my input file is having data like:
"0","01/08/2005 07:11:15",1,1,"Created",,"01/08/2005"
"0","01/08/2005 07:12:40",1,1,"Created",,"01/08/2005"... (2 Replies)
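One minimal filter for this, assuming the first field is always double-quoted as in the sample (so the comparison string must include the quotes):

```shell
# Keep only records whose quoted first field is not "0".
# input.csv is a placeholder name for the file shown above.
printf '"0","01/08/2005 07:11:15",1,1,"Created",,"01/08/2005"\n"5","01/08/2005 07:12:40",1,1,"Created",,"01/08/2005"\n' |
awk -F, '$1 != "\"0\""'
```

With a real file, `awk -F, '$1 != "\"0\""' input.csv` does the same; `grep -v '^"0",'` is an equivalent alternative.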
I'll try to explain this as best I can. Let me know if it is not clear.
I have large text files that contain data as such:
143593502 09-08-20 09:02:13 xxxxxxxxxxx xxxxxxxxxxx 09-08-20 09:02:11 N line 1 test
line 2 test
line 3 test
143593503 09-08-20 09:02:13... (3 Replies)
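A sketch of one way to handle data like this, assuming each record starts with a long numeric ID and any line that does not start with digits continues the previous record:

```shell
# Fold continuation lines onto the ID line that starts their record.
awk '/^[0-9]+ / { if (rec != "") print rec; rec = $0; next }
     { rec = rec " " $0 }
     END { if (rec != "") print rec }' datafile
```

Here `datafile` is a placeholder for the large text file; each record comes out as a single line.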
Guys,
I managed to get awk to search for and print the files that I want to delete. However, I am stuck on the delete portion.
Here is the command that I am using to find these files.
find /usr/local/apache/conf/vhosts/ -type f | awk '/e$/'
The output is perfect. The files look like so:
... (4 Replies)
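Since the awk pattern `/e$/` just selects names ending in `e`, one way to do the whole job is to let find both match and delete, rather than post-filtering:

```shell
# -name '*e' selects the same files the awk /e$/ pattern printed;
# -exec rm -f {} + deletes them in batches (portable, unlike GNU -delete).
find /usr/local/apache/conf/vhosts/ -type f -name '*e' -exec rm -f {} +
```

It is worth running the same command with `-print` in place of `-exec rm -f {} +` first, to double-check the list before anything is removed.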
I have a huge file (about 2 million records) containing data separated by "," (comma). As part of the requirement, I can't change the format. The objective is to remove some of the records with the following condition: if the 23rd field on a line starts with 302, I need to remove that from the... (4 Replies)
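A one-pass filter for this, assuming no field contains an embedded comma:

```shell
# Drop any line whose 23rd comma-separated field starts with 302;
# every other line passes through unchanged. infile/outfile are placeholders.
awk -F, '$23 !~ /^302/' infile > outfile
```

awk streams the file, so the two-million-record size is not a problem.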
Hello,
Need help with following scenario.
A file contains following text:
{beginning of file}
New: This is a new record and it is not
on same line. Since I have lost touch with script
take this challenge and bring all this in one line.
New: Hello losttouch. You seem to be struggling... (4 Replies)
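One way to bring each record back onto one line, treating every line that begins with "New:" as the start of a record and folding the wrapped lines that follow onto it:

```shell
# Accumulate wrapped lines until the next "New:" marker, then print the record.
awk '/^New:/ { if (rec != "") print rec; rec = $0; next }
     { rec = rec " " $0 }
     END { if (rec != "") print rec }' textfile
```

`textfile` is a placeholder; each output line then starts with `New:`.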
Hi
I need to select lines from a txt file. I have a line starting with ZMIO:MSISDN=, and after a few lines I have another line starting with 'MOBILE STATION ISDN NUMBER', and another one starting with 'VLR-ADDRESS'. I need to copy these three lines as three different columns in a separate... (3 Replies)
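A sketch with assumed line layouts (the post does not show the file, so the extraction of each value is a guess to adjust): hold the MSISDN and ISDN values, then emit one three-column row each time a VLR-ADDRESS line arrives.

```shell
# logfile is a placeholder; each matched trio becomes one output row.
awk '/^ZMIO:MSISDN=/              { m = $0; sub(/^ZMIO:MSISDN=/, "", m) }
     /MOBILE STATION ISDN NUMBER/ { isdn = $NF }
     /VLR-ADDRESS/                { print m, isdn, $NF }' logfile
```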
Hi All,
I have the following input file. I wish to retain those lines which match multiple search criteria. The search criteria are stored in a variable, separated from each other by a comma (,).
SEARCH_CRITERIA = "REJECT, DUPLICATE"
Input File:
ERROR,MYFILE_20130214_11387,9,37.75... (3 Replies)
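A sketch, assuming "match multiple criteria" means keeping any line that matches at least one of them: convert the comma-separated list into an ERE alternation and hand it to grep -E.

```shell
SEARCH_CRITERIA="REJECT, DUPLICATE"
pattern=$(printf '%s' "$SEARCH_CRITERIA" | sed 's/, */|/g')   # -> REJECT|DUPLICATE
grep -E "$pattern" input.txt                                  # input.txt is a placeholder
```

If instead every criterion must appear on the same line, chain one `grep` per criterion rather than using the alternation.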
Hi Guru's,
I am new to unix scripting. I have a huge file with user details in it (file2) and another file with a list of users (file1). The script has to search for each user from file1 and get all the associated lines from file2.
Example:
file1:
cn=abc
cn=DEF
cn=xyx
File 2:
dn:... (10 Replies)
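A sketch, assuming file2 holds blank-line-separated LDIF-style blocks (each starting with a `dn:` line): load the users from file1, then print any block that mentions one of them. The `tolower` calls absorb the case difference visible in the sample (`cn=DEF` in file1).

```shell
# NR == FNR is true only while reading file1; the RS= assignment between the
# two file names switches awk to blank-line "paragraph" records for file2.
awk 'NR == FNR { users[tolower($0)]; next }
     { for (u in users) if (index(tolower($0), u)) { print; next } }' file1 RS= file2
```

If file2 is not block-structured and a plain line match is enough, `grep -i -F -f file1 file2` is a simpler alternative.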
Hi,
I have an input file as shown below:
20140102;13:30;FR-AUD-LIBOR-1W;2.495
20140103;13:30;FR-AUD-LIBOR-1W;2.475
20140106;13:30;FR-AUD-LIBOR-1W;2.495
20140107;13:30;FR-AUD-LIBOR-1W;2.475
20140108;13:30;FR-AUD-LIBOR-1W;2.475
20140109;13:30;FR-AUD-LIBOR-1W;2.475... (2 Replies)