sed -n '/,9999$/ s///p' ${NLAP_TEMP}/hist1.out|sort -u -T ${NLAP_TEMP}> ${NLAP_TEMP}/hist2.final
Basically, if a line ends with the ,9999 field, sed deletes that field and prints the modified line (the empty s/// reuses the address regex). Also, by using sort -u rather than sort | uniq you reduce the number of processes in the pipeline by one.
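A quick demo of that substitution on a made-up stand-in for ${NLAP_TEMP}/hist1.out; -n plus the p flag means only lines where the substitution happened are printed:

```shell
# Demo data standing in for ${NLAP_TEMP}/hist1.out
printf 'a,1,9999\nb,2\na,1,9999\n' > hist1.out
# The empty s/// reuses the /,9999$/ address regex, so only the
# trailing ,9999 field is removed; sort -u drops the duplicate
sed -n '/,9999$/ s///p' hist1.out | sort -u
```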
One of our servers runs Solaris 8 and does not have "ls -lh" as a valid command. I wrote the following script to make the ls output easier to read and emulate "ls -lh" functionality. The script works, but it is slow when executed on a directory that contains a large number of files. Can anyone make... (10 Replies)
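Since the original script isn't shown, here is one hedged approach: the usual cause of the slowdown is forking a process per file, so instead pipe a single ls -l through one awk that rewrites the size field. Treating field 5 as the size is an assumption about the local ls -l layout:

```shell
# Convert field 5 of "ls -l" output to a human-readable size in one awk pass.
# Note: assigning to $5 rebuilds the line with single spaces between fields.
humanize='
NF >= 5 && $5 ~ /^[0-9]+$/ {
    size = $5
    split("B K M G T", u)
    for (i = 1; size >= 1024 && i < 5; i++) size /= 1024
    $5 = (i == 1) ? size u[1] : sprintf("%.1f%s", size, u[i])
}
{ print }'
ls -l | awk "$humanize"
```

One awk process for the whole listing, rather than one conversion process per file, is where the speedup comes from.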
Hi everyone,
I have a file, file1.txt, that contains line numbers:
aa bb cc "12" qw
xx yy zz "23" we
bb qw we "123249" jh
Here 12, 23, and 123249 are the line numbers.
Now, according to these line numbers, we have to print lines from another file named... (11 Replies)
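The other file's name is cut off, so this sketch uses stand-in names (file1.txt as above, data.txt for the target). One awk invocation handles both files: the first pass collects the quoted numbers from field 4, the second prints those line numbers:

```shell
printf 'aa bb cc "12" qw\nbb qw we "3" jh\n' > file1.txt   # sample input
seq 20 | sed 's/^/line /' > data.txt                       # stand-in target file
# While reading file1.txt (NR == FNR): strip quotes from field 4, remember it.
# While reading data.txt: print any line whose number (FNR) was remembered.
awk 'NR == FNR { gsub(/"/, "", $4); want[$4]; next }
     FNR in want' file1.txt data.txt
```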
Hi,
Can anyone help me solve this problem? I have a Linux database server that is so slow I am unable even to open a terminal. Is there any solution to this? How can I make the server faster?
Thanks & Regards
Venky (0 Replies)
Hi All,
I have some 80,000 files in a directory which I need to rename. Below is the command which I am currently running and it seems it is taking forever to run. This command seems too slow. Is there any way to speed it up? I have GNU Parallel installed on my... (6 Replies)
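The actual command is cut off, so here is a hedged sketch of the general fix: at 80,000 files the per-file fork of mv dominates, so batch many renames into each worker with xargs -P (GNU parallel works the same way). Stripping a .bak suffix is only an example rename:

```shell
mkdir -p renamedemo && cd renamedemo
touch a.bak b.bak c.bak
# NUL-separated names survive spaces; -n500 batches up to 500 renames per
# shell invocation, and -P4 runs four such shells at once
printf '%s\0' *.bak |
    xargs -0 -P4 -n500 sh -c 'for f; do mv "$f" "${f%.bak}"; done' _
cd ..
```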
Hi all,
In bash scripting, I usually read files like this:
cat $file | while read line; do
...
done
However, it's a very slow way to read a file line by line.
E.g., in a file that has 3 columns and fewer than 400 rows, like this:
I run the next script:
cat $line | while read line; do ## Reads each... (10 Replies)
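Two common fixes, sketched on a small sample: redirect the file into the loop (drops the cat process and, in some shells, the pipeline subshell), and, when you only need per-column work, replace the loop entirely with a single awk, which is usually far faster:

```shell
printf '1 a x\n2 b y\n3 c z\n' > sample.txt
# Redirection instead of cat | while: no extra process, and variables set
# inside the loop remain visible after it
count=0
while IFS= read -r line; do
    count=$((count + 1))
done < sample.txt
echo "$count"
# For column work, one awk pass beats any shell read loop
awk '{ sum += $1 } END { print sum }' sample.txt
```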
Hi,
I have a script below for extracting xml from a file.
for i in *.txt
do
echo $i
awk '/<.*/ , /.*<\/.*>/' "$i" | tr -d '\n'
echo -ne '\n'
done
I read about using multi threading to speed up the script.
I do not know much about it but read it on this forum.
Is it a... (21 Replies)
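One hedged way to parallelize this exact loop without threads: keep the awk | tr body as-is, but let xargs -P run several files at once. The helper script and demo files below are stand-ins:

```shell
mkdir -p xmldemo
printf '<a>\n1\n</a>\n' > xmldemo/one.txt
printf '<b>\n2\n</b>\n' > xmldemo/two.txt
# Same extraction as the loop above, applied to a batch of files per call
cat > extract.sh <<'EOF'
#!/bin/sh
for f; do
    awk '/<.*/ , /.*<\/.*>/' "$f" | tr -d '\n'
    echo
done
EOF
chmod +x extract.sh
# -P4: up to four extract.sh processes in parallel; output order may vary
printf '%s\0' xmldemo/*.txt | xargs -0 -P4 -n10 ./extract.sh > out.txt
```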
awk "/May 23, 2012 /,0" /var/tmp/datafile
The above command pulls information out of the datafile, from the specified date to the end of the file.
Now, how can I make this faster if the datafile is huge? Even if it weren't huge, I feel there's a better/faster way to... (8 Replies)
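One hedged alternative: the awk range keeps testing the regex on every line, so on a huge file it can pay to locate the first matching line number with grep -m 1 (GNU grep stops reading at the first hit) and hand the rest of the file to tail:

```shell
seq 100 | sed 's/^/row /' > datafile        # stand-in for /var/tmp/datafile
# -m 1 stops grep at the first match; -n prefixes the line number.
# tail -n +N then prints from line N to end of file.
start=$(grep -m 1 -n 'row 57' datafile | cut -d: -f1)
tail -n "+$start" datafile
```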
Hi,
I have a large number of input files with two columns of numbers.
For example:
83 1453
99 3255
99 8482
99 7372
83 175
I only wish to retain lines where the numbers fulfill two requirements. E.g.:
=83
1000<=<=2000
To do this I use the following... (10 Replies)
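Reading the (garbled) requirements as "first column equal to 83 and second column between 1000 and 2000 inclusive", a single awk pass per input file does it; adjust the comparisons if that reading is wrong:

```shell
printf '83 1453\n99 3255\n99 8482\n99 7372\n83 175\n' > nums.txt
# Print only lines meeting both numeric conditions
awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' nums.txt
```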
I have a script like the one below, which picks a number from one file, searches for it in another file, and prints the output.
But it is very slow when run on a huge file. Can we modify it with awk?
#! /bin/ksh
while read line1
do
echo "$line1"
a=`echo $line1`
if
then
echo "$num"
cat file1|nawk... (6 Replies)
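The script is cut off, but read-loop-plus-search scripts of this shape usually reduce to one two-file awk: load the numbers from the first file into an array, then scan the big file once. The file names and the matched field ($2) here are assumptions:

```shell
printf '12\n99\n' > numbers.txt                 # stand-in for the number file
printf 'rec 12 ok\nrec 34 no\nrec 99 ok\n' > file1
# One pass over each file instead of one search process per number
awk 'NR == FNR { want[$1]; next }
     $2 in want' numbers.txt file1
```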
I have nginx web server logs with all requests that were made and I'm filtering them by date and time.
Each line has the following structure:
127.0.0.1 - xyz.com GET 123.ts HTTP/1.1 (200) 0.000 s 3182 CoreMedia/1.0.0.15F79 (iPhone; U; CPU OS 11_4 like Mac OS X; pt_br)
These text files are... (21 Replies)
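The sample line shows no timestamp field, so this sketch assumes the real lines carry a sortable one (a hypothetical "YYYY-MM-DD HH:MM:SS" in fields 1 and 2); plain string comparison in awk then selects a time window in a single pass:

```shell
printf '2018-06-01 09:00:00 GET /a.ts\n2018-06-01 10:30:00 GET /b.ts\n' > access.log
# Lexicographic comparison works only because the assumed timestamp
# format sorts chronologically
awk -v from='2018-06-01 10:00:00' -v to='2018-06-01 11:00:00' '
    { ts = $1 " " $2 }
    ts >= from && ts <= to' access.log
```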
Discussion started by: brenoasrm
LEARN ABOUT ULTRIX
sortbib
sortbib(1) General Commands Manual sortbib(1)
Name
sortbib - sort bibliographic database
Syntax
sortbib [-sKEYS] database...
Description
The command sorts files of records containing refer key-letters by user-specified keys. Records may be separated by blank lines, or by .[
and .] delimiters, but the two styles may not be mixed together. This program reads through each database and pulls out key fields, which
are sorted separately. The sorted key fields contain the file pointer, byte offset, and length of corresponding records. These records
are delivered using disk seeks and reads, so may not be used in a pipeline to read standard input.
By default, the command alphabetizes by the first %A and the %D fields, which contain the senior author and date. The -s option is used to specify new
KEYS. For instance, -sATD will sort by author, title, and date, while -sA+D will sort by all authors, and date. Sort keys past the fourth
are not meaningful. No more than 16 databases may be sorted together at one time. Records longer than 4096 characters will be truncated.
The command sorts on the last word on the %A line, which is assumed to be the author's last name. A word in the final position, such as
``jr.'' or ``ed.'', will be ignored if the name beforehand ends with a comma. Authors with two-word last names or unusual constructions
can be sorted correctly by using the convention `` '' in place of a blank. A %Q field is considered to be the same as %A, except sorting
begins with the first, not the last, word. The command sorts on the last word of the %D line, usually the year. It also ignores leading
articles (like ``A'' or ``The'') when sorting by titles in the %T or %J fields; it will ignore articles of any modern European language.
If a sort-significant field is absent from a record, the command places that record before other records containing that field.
Options
-sKEYS
Specifies new sort KEYS. For example, ATD sorts by author, title, and date.
See Also
addbib(1), indxbib(1), lookbib(1), refer(1), roffbib(1)