09-04-2009
Hi,
Thanks for the update.
We found the issue: it is not caused by the tag rearrangement, but by the removal of the temporary files in the code below:
{ if (system("test -r fact_consbill") == 0) { system("rm fact_cc") } }
{ if (system("test -s fact_consbill") == 0) { system("rm fact_bon") } }
But if we interchange these two lines, the code takes the same time as the old one. Does the `test` flag used (-r vs. -s) behave differently depending on the file names?
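For context, the two `test` flags check different conditions: `-r` succeeds if the file exists and is readable, while `-s` succeeds only if the file exists and has a size greater than zero. A minimal shell sketch (the file names here are illustrative, not the ones from the script above):

```shell
touch empty_file                        # exists and readable, but zero bytes
test -r empty_file && echo "readable"   # succeeds: file exists and is readable
test -s empty_file || echo "empty"      # -s fails: size is zero

echo data > data_file                   # non-empty file
test -s data_file && echo "non-empty"   # succeeds: size > 0

rm -f empty_file data_file              # clean up
```

So swapping the two removal lines also swaps which check guards which `rm`: an empty `fact_consbill` passes `-r` but fails `-s`.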
Thanks
dh_auto_test
DH_AUTO_TEST(1) Debhelper DH_AUTO_TEST(1)
NAME
dh_auto_test - automatically runs a package's test suites
SYNOPSIS
dh_auto_test [build system options] [debhelper options] [-- params]
DESCRIPTION
dh_auto_test is a debhelper program that tries to automatically run a package's test suite. It does so by running the appropriate command
for the build system it detects the package uses. For example, if there's a Makefile and it contains a test or check target, then this is
done by running make (or $MAKE, if the environment variable is set). If the test suite fails, the command will exit nonzero. If there's no
test suite, it will exit zero without doing anything.
This is intended to work for about 90% of packages with a test suite. If it doesn't work, you're encouraged to skip using dh_auto_test at
all, and just run the test suite manually.
OPTIONS
See "BUILD SYSTEM OPTIONS" in debhelper(7) for a list of common build system selection and control options.
-- params
Pass params to the program that is run, after the parameters that dh_auto_test usually passes.
NOTES
If the DEB_BUILD_OPTIONS environment variable contains nocheck, no tests will be performed.
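The nocheck convention can be sketched as follows. `should_run_tests` is a hypothetical helper, not part of debhelper (whose real implementation is in Perl); it only illustrates how the flag is typically matched inside the space-separated DEB_BUILD_OPTIONS list:

```shell
# Hypothetical helper illustrating the DEB_BUILD_OPTIONS=nocheck convention;
# not debhelper's actual code.
should_run_tests() {
  case " ${DEB_BUILD_OPTIONS:-} " in
    *" nocheck "*) return 1 ;;   # nocheck present: skip the test suite
    *)             return 0 ;;   # otherwise run it
  esac
}

DEB_BUILD_OPTIONS="parallel=4 nocheck"
if should_run_tests; then echo "running tests"; else echo "skipping tests"; fi
```

Padding the variable with spaces ensures `nocheck` is matched as a whole word rather than as a substring of another option.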
SEE ALSO
debhelper(7)
This program is a part of debhelper.
AUTHOR
Joey Hess <joeyh@debian.org>
11.1.6ubuntu2 2018-05-10 DH_AUTO_TEST(1)