Scrutinizer, if the fields "500" and "202" weren't consecutive in the CDR files, would it be difficult to reach the same output? I guess it would be a long process to check the 500 or 202 fields of the same gsm_no in the same file, as there would be thousands of CDR rows in a file and the desired rows could come several hundred rows after one another. Correct?
I've looked around for software to do this in Linux, but am still not sure how. I want to be able to mount a CD-R, write to it, take the disc out, and finish writing whatever I want to it at a later time. Can we do this in Linux? (4 Replies)
Hi all,
I am relatively new to Unix shell scripts ...
I want to know how you can calculate the difference between two dates.
In Oracle, by using SYSDATE you get the current date and time.
How can one achieve this in Unix?
Thanks.. (1 Reply)
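One common approach (a sketch, assuming GNU date with its -d option, which is standard on Linux but not on every Unix) is to convert both dates to epoch seconds and divide by the number of seconds in a day:

```shell
#!/bin/sh
# Difference in days between two dates, assuming GNU date (the -d flag).
# -u (UTC) avoids an off-by-one when the range crosses a DST change.
d1=$(date -u -d "2013-03-17" +%s)
d2=$(date -u -d "2013-02-26" +%s)
echo $(( (d1 - d2) / 86400 ))   # 19
```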
Hi,
I need to set up a cron job in PHP to generate a CDR (Call Detail Record) file daily from the MySQL database.
The CDR file has to be named in a certain sequence, such as xx00000xxx200604080850.cdr. A new file is written every day.
The generated CDR file is then FTPed over to a server.
I am... (0 Replies)
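The filename part can be sketched in shell (an assumption on my part: the xx00000xxx prefix from the post is fixed and the trailing twelve digits are a YYYYMMDDHHMM timestamp; the database dump and FTP steps are site-specific and omitted):

```shell
#!/bin/sh
# Build a daily CDR filename such as xx00000xxx200604080850.cdr.
# "xx00000xxx" is a placeholder prefix; the digits come from date +%Y%m%d%H%M.
prefix="xx00000xxx"
fname="${prefix}$(date +%Y%m%d%H%M).cdr"
echo "$fname"
```

A crontab entry like `50 8 * * * /path/to/gen_cdr.sh` (hypothetical path) would then produce one file per day at 08:50.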
I am sending the data in userfile and colfile from a ksh script to a PL/SQL script,
into an array with this command:
grep '' $userfile |awk '{print "my_user_id("FNR") := '$SQL_QUOTE'"$1"'$SQL_QUOTE';"}' >> $SQL_TEMP_FILE
grep '^\{1,10\}$' $colfile | awk '{print "my_col_id("NR") := "$1";"}' >>... (0 Replies)
Hi Everyone,
I am stuck in a script. I have a file named file1.txt, given below.
It contains 2 columns: count and filename.
cat file1.txt
count filename
100 A_new.txt
1000 A_full.txt
1100 B_new.txt
2000 B_full.txt
1100 C_new.txt
2000 C_full.txt
...................
..................... (10 Replies)
Hi All,
I need to find the date 19 days back from the current date:
eg: if today is 17 March 2013
then the output should be : 26 Feb 2013
Can I do this using the date command in Korn shell?
And also, if I need the date 15 days in the future from the current date, how do I do that?
Any help appreciated :)
... (3 Replies)
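With GNU date (standard on Linux), relative date strings make both directions a one-liner; a stock ksh without GNU date would need a separate date-arithmetic function instead. A sketch:

```shell
#!/bin/sh
# 19 days back and 15 days ahead, assuming GNU date's -d relative strings.
date -d "19 days ago" "+%d %b %Y"
date -d "15 days"     "+%d %b %Y"
# Anchored to a fixed date for illustration:
date -u -d "2013-03-17 19 days ago" "+%d %b %Y"   # 26 Feb 2013
```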
Hi All,
I have one file with two columns separated by tab.
I need to search for the second column value of this file in the 5th column of another file. If a match is found, replace the 5th column of the second file with the entire row of the first file.
e.g.
file1
123 D.abc
234 D.rde
4563 ... (2 Replies)
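One way to do this (a sketch, assuming both files are tab-separated and file1 fits in memory) is to load file1 into an awk array keyed on its second column, then rewrite file2:

```shell
#!/bin/sh
# Replace column 5 of file2 with the whole matching row of file1,
# matching file1's column 2 against file2's column 5 (tab-separated).
awk 'BEGIN { FS = OFS = "\t" }
     NR == FNR { row[$2] = $0; next }   # first file: index rows by column 2
     $5 in row { $5 = row[$5] }         # second file: swap in the full row
     { print }' file1 file2
```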
Hi,
I am in a terrible emergency. I have multiple CDR files, each with a line count over 6000.
I need to append |0| | | | | | | |random| to the end of each line. The random number should never repeat.
Please help with a shell script to process all CDRs in a directory with the above requirement. (23 Replies)
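A sketch of one way to guarantee the never-repeats requirement (an assumption on my part: a counter-based suffix rather than $RANDOM, which can and does repeat) is to combine a run timestamp, a per-file counter, and awk's line number:

```shell
#!/bin/sh
# Append |0| | | | | | | |<unique>| to every line of each .cdr file in a
# directory. The "random" value is timestamp + file counter + line number,
# so it cannot repeat within a run; /path/to/cdrs is a placeholder.
ts=$(date +%s)
n=0
for f in /path/to/cdrs/*.cdr; do
    n=$((n + 1))
    awk -v pfx="${ts}${n}" \
        '{ printf "%s|0| | | | | | | |%s%09d|\n", $0, pfx, NR }' "$f" > "$f.tmp" &&
    mv "$f.tmp" "$f"
done
```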
Discussion started by: shiburnair
LEARN ABOUT OSF1
uniq
uniq(1)                  General Commands Manual                  uniq(1)
NAME
uniq - Removes or lists repeated lines in a file
SYNOPSIS
Current Syntax
uniq [-cdu] [-f fields] [-s chars] [input-file [output-file]]
Obsolescent Syntax
uniq [-cdu] [-fields] [+chars] [input-file [output-file]]
The uniq command reads from the specified input_file, compares adjacent lines, removes the second and succeeding occurrences of a line, and
writes to standard output.
STANDARDS
Interfaces documented on this reference page conform to industry standards as follows:
uniq: XCU5.0
Refer to the standards(5) reference page for more information about industry standards and associated tags.
OPTIONS
-c      Precedes each output line with a count of the number of times each line appears in the file. This option supersedes the
        -d and -u options.
-d      Displays repeated lines only.
-f fields
        Ignores the first fields fields on each input line when doing comparisons, where fields is a positive decimal integer. A
        field is the maximal string matched by the basic regular expression:
            [[:blank:]]*[^[:blank:]]*
        If the fields argument specifies more fields than appear on an input line, a null string is used for comparisons.
-s chars
        Ignores the specified number of characters when doing comparisons. The chars argument is a positive decimal integer. If
        specified with the -f option, the first chars characters after the first fields fields are ignored. If the chars argument
        specifies more characters than remain on an input line, uniq uses a null string for comparison.
-u      Displays unique lines only.
-fields (Obsolescent) Equivalent to -f fields.
+chars  (Obsolescent) Equivalent to -s chars.
OPERANDS
input-file
        A pathname for the input file. If this operand is omitted or specified as -, then standard input is read.
output-file
        A pathname for the output file. If this operand is omitted, then standard output is written.
DESCRIPTION
The input_file and output_file arguments must be different files. If the input_file operand is not specified, or if it is -, uniq uses
standard input.
Repeated lines must be on consecutive lines to be found. You can arrange them with the sort command before processing.
EXAMPLES
To delete repeated lines in the following file called fruit and save it to a file named newfruit, enter:
     uniq fruit newfruit
The file fruit contains the following lines:
     apples
     apples
     bananas
     cherries
     cherries
     peaches
     pears
The file newfruit contains the following lines:
     apples
     bananas
     cherries
     peaches
     pears
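Because uniq only detects repeated lines when they are adjacent (see DESCRIPTION), unsorted input is normally piped through sort first, for example:

```shell
# uniq only removes adjacent repeats, so sort the input first.
printf 'pears\napples\npears\n' | sort | uniq
# prints:
#   apples
#   pears
```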
EXIT STATUS
The following exit values are returned:
0    Successful completion.
>0   An error occurred.
ENVIRONMENT VARIABLES
The following environment variables affect the execution of uniq:
LANG        Provides a default value for the internationalization variables that are unset or null. If LANG is unset or null, the
            corresponding value from the default locale is used. If any of the internationalization variables contain an invalid
            setting, the utility behaves as if none of the variables had been defined.
LC_ALL      If set to a non-empty string value, overrides the values of all the other internationalization variables.
LC_CTYPE    Determines the locale for the interpretation of sequences of bytes of text data as characters (for example,
            single-byte as opposed to multibyte characters in arguments).
LC_MESSAGES Determines the locale for the format and contents of diagnostic messages written to standard error.
NLSPATH     Determines the location of message catalogues for the processing of LC_MESSAGES.
SEE ALSO
Commands: comm(1), sort(1)
Standards: standards(5)
                                                                  uniq(1)