I started learning shell scripting recently, and I was able to use the grep and awk commands to extract columns or rows with a certain format from one file and put them into a new one using:
and
and I'm currently looking for a way to delete all generated rows (except the last one) in two files and put the last line of each file into a single new file.
Is there a command that can do that?
If there is one, can you guide me on how to use it, or even just tell me what it is?
Thank you
Lina
Last edited by Franklin52; 10-14-2011 at 03:33 AM.
Reason: Please use code tags, thank you
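One way to do what the question asks is with `tail -n 1`, which prints only the last line of a file. A minimal sketch, assuming the two generated files are named `file1.txt` and `file2.txt` (placeholder names, not from the original post):

```shell
# Create two small sample files standing in for the generated files
printf 'row1\nrow2\nlast-A\n' > file1.txt
printf 'rowX\nlast-B\n'       > file2.txt

# tail -n 1 prints only the last line of each file;
# > creates the new file, >> appends to it
tail -n 1 file1.txt >  combined.txt
tail -n 1 file2.txt >> combined.txt

cat combined.txt
# last-A
# last-B
```

If the goal is also to delete all rows except the last one from the original files themselves, one approach is to write the last line to a temporary file and move it back, e.g. `tail -n 1 file1.txt > tmp && mv tmp file1.txt`.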