Hi experts,
On my Solaris 9 system the file /var/adm/messages is growing too fast: about 40 MB every 24 hours. It keeps logging the messages below:
bash-2.05# tail -f messages
Nov 9 16:35:38 ME1 last message repeated 1 time
Nov 9 16:35:38 ME1 ftpd: wtmpx /var/adm/wtmpx No such file or directory
Nov 9... (7 Replies)
I have a file with 28,00,000 (2,800,000) lines, and the first 80 lines are a chunk I want to remove.
I want to delete that chunk of 80 lines. I tried tail -f2799920 filename.
Is there an efficient way to do this?
Thanks in advance. (7 Replies)
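To skip a fixed number of leading lines, `tail -n +K` (start at line K) or `sed '1,80d'` does the job in one pass; `tail -f` follows a growing file and is not what is needed here. A small sketch with a 100-line stand-in for the real file:

```shell
# Sample file with 100 numbered lines stands in for the 2,800,000-line file.
seq 100 > sample.txt

# Start printing at line 81, i.e. skip the first 80 lines.
tail -n +81 sample.txt > trimmed.txt

# Equivalent: sed deletes lines 1 through 80.
sed '1,80d' sample.txt > trimmed2.txt
```

On older Solaris /usr/bin/tail the historical spelling `tail +81` may be required instead of `-n +81`.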
Input:
a
b
b
c
d
d
I need:
a
c
I know how to get this (the lines that have duplicates):
b
d
sort file | uniq -d
But I need the opposite of this. I have searched the forum and other places as well, but have found solutions for everything except this variant of the problem. (3 Replies)
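The opposite of `uniq -d` is `uniq -u`, which keeps only the lines that occur exactly once in a sorted stream:

```shell
# Input from the post.
printf 'a\nb\nb\nc\nd\nd\n' > file

# uniq -u prints only the non-repeated lines: a and c.
sort file | uniq -u > singles
cat singles
```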
I have a command which prints a number of lines after and before the search string in a huge file:
nawk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r;print;c=a}b{r=$0}' b=0 a=10 s="STRING1" FILE
The file is 5 GB.
It works great and prints 10 lines after the lines which contain the search string in... (8 Replies)
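If only after-context is needed (b=0, as in the command above), GNU grep's `-A` option is a simpler alternative; the nawk one-liner is still needed where GNU grep is unavailable or before-context matters. A small sketch with 2 lines of context instead of 10:

```shell
# Tiny stand-in for the 5 GB file.
printf 'x\nSTRING1\n1\n2\n3\ny\n' > sample.log

# GNU grep: print each match plus the 2 lines that follow it (use -A 10 for 10).
grep -A 2 'STRING1' sample.log > out.txt
```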
Hi,
I have a big (2.7 GB) text file. Each line has '|' separators between the columns.
I want to delete those lines which contain text like '|0|0|0|0|0'.
I tried:
sed '/|0|0|0|0|0/d' test.txt
Unfortunately, it scans the file but does nothing.
file content sample:... (4 Replies)
Hi All,
I have a very huge file (4 GB) which has duplicate lines. I want to delete the duplicate lines, leaving only the unique lines. sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
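Assuming the original line order does not have to be preserved, `sort -u` is the usual tool for files too big for memory: it does an external merge sort, spilling intermediate runs to disk. Pointing `-T` at a filesystem with enough free space avoids the buffer/temp-space failures mentioned above:

```shell
# Small stand-in for the 4 GB file.
printf 'b\na\nb\nc\na\n' > big.txt

# External merge sort: runs are spilled to the -T directory,
# so memory use stays bounded even for multi-gigabyte input.
sort -u -T /tmp big.txt > unique.txt
```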
Hi all,
I have a big file (about 6 million rows) and I have to delete from it the lines stored in a small file (about 9,000 rows). I have tried this:
while read line
do
    grep -v "$line" big_file > ok_file.tmp
    mv ok_file.tmp big_file
done < small_file
It works, but is very slow.
How... (2 Replies)
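The loop above rewrites the whole big file once per pattern, i.e. about 9,000 full passes. A single grep with `-f` reads all the patterns from small_file at once and makes one pass; `-F` treats them as fixed strings and `-x` requires whole-line matches (this sketch assumes small_file contains complete lines to remove):

```shell
printf 'alpha\nbeta\ngamma\ndelta\n' > big_file
printf 'beta\ndelta\n' > small_file

# One pass over big_file: -f reads all patterns from small_file,
# -F treats them as fixed strings, -x requires whole-line matches.
grep -v -x -F -f small_file big_file > ok_file
```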
Hi,
To load a big file into a table, I have to make sure that all rows in the file have the same number of columns.
So if my file contains any rows which do not have exactly 6 columns, I need to delete them. The delimiter is a space, and columns are optionally enclosed by "".
This can be ... (1 Reply)
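Assuming the quoted fields never contain spaces themselves, awk's NF (number of fields) gives a one-pass filter; quoted fields with embedded spaces would need a real CSV-style parser instead:

```shell
# Sample: a good row, a short row, a good row with a quoted field.
printf 'a b c d e f\n1 2 3\n"u" v w x y z\n' > load.txt

# Keep only rows with exactly 6 whitespace-separated fields.
awk 'NF == 6' load.txt > load.ok
```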
Hi All,
I am trying to get some lines from a file. I did it with a while-do loop, but since the files are huge it takes too much time; now I want to make it faster.
The file will have about 1 million lines.
The format is like below.
##transaction, , , ,blah, blah... (38 Replies)
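The exact record layout is cut off in the excerpt above, but as a general point a single awk pass is far faster than a shell while-read loop over a million lines. A hypothetical sketch, assuming records begin with a "##transaction" header line and the goal is to pull one record's block:

```shell
# Hypothetical sample; the real record layout is cut off in the post.
printf '##transaction,A\nrow1\nrow2\n##transaction,B\nrow3\n' > trans.txt

# One awk pass: set a flag at each header, print lines while it is on.
awk '/^##transaction/ { keep = ($0 ~ /,B$/) } keep' trans.txt > block.txt
```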
Dear all,
I have been stuck on this problem for some days.
I have a very big file; this file cannot be opened with the vi command.
There are 200 loops in this file, and each loop contains one line like this:
GWA quasiparticle energy with Z factor (eV)
And I need the 98 lines right after this line.
Is... (6 Replies)
Discussion started by: phamnu
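Extracting a fixed number of lines after every marker works with plain awk/nawk (available on Solaris), without ever opening the file in an editor. A sketch with 2 follow-up lines instead of 98:

```shell
# Tiny stand-in: a marker line followed by the data lines that are wanted.
printf 'x\nGWA quasiparticle energy with Z factor (eV)\nl1\nl2\nl3\n' > big.out

# Print the 2 lines after each match, excluding the match itself
# (the post needs 98: set c = 98).
awk '/GWA quasiparticle energy/ { c = 2; next } c-- > 0' big.out > after.txt
```

With GNU grep, `grep -A 98 'GWA quasiparticle energy' file` does the same but also prints the matching line itself.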
LEARN ABOUT DEBIAN
qpid-python-test
qpid-python-test(1) User Commands qpid-python-test(1)
NAME
qpid-python-test - run tests of the python QPID library for a broker
SYNOPSIS
qpid-python-test [options] PATTERN ...
DESCRIPTION
Run tests matching the specified PATTERNs.
OPTIONS
-h, --help
show this help message and exit
-l, --list
list tests instead of executing them
-b BROKER, --broker=BROKER
run tests against BROKER (default localhost)
-f FILE, --log-file=FILE
log output to FILE
-v LEVEL, --log-level=LEVEL
only display log messages of LEVEL or higher severity: DEBUG, WARN, ERROR (default WARN)
-c CATEGORY, --log-category=CATEGORY
log only categories matching CATEGORY pattern
-m MODULES, --module=MODULES
add module to test search path
-i IGNORE, --ignore=IGNORE
ignore tests matching IGNORE pattern
-I IFILE, --ignore-file=IFILE
ignore tests matching patterns in IFILE
-H, --halt-on-error
halt if an error is encountered
-t, --time
report timing information on test run
-D DEFINE, --define=DEFINE
define test parameters
SEE ALSO
For more information on qpid-python-test please check the QPID wiki at http://qpid.apache.org.
Apache QPID October 2011 qpid-python-test(1)