Full Discussion: AIX performance issue
Post 302461281 by zxmaus on Saturday 9th of October 2010 08:24:26 PM
4.2 GHz is p6, not p5 ...

Apart from that - a notable difference is that POWER7 executes instructions out of order, unlike the in-order POWER6, and p7 has 12 execution units per core instead of 8, plus a lot more cache. Certain workloads - particularly single-threaded ones that do one specific task - are significantly slower on p7. We tested the p7 in our lab under every circumstance we could think of before deciding to go with it: for normal multithreaded webserver and DB workloads it delivers significantly better throughput, while serial single-threaded tasks such as backups with compression take significantly longer - and the differences are bigger when you run 5.3 than when you run 6.1 on p7.
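To make that trade-off concrete, here is a minimal, hypothetical C sketch (not from the original post) that times the same CPU-bound loop once on a single thread and once split across several POSIX threads. The churn() function is just a stand-in for something like backup compression, and the thread count is an assumption:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define NTHREADS 4                      /* assumed, e.g. SMT4 on one p7 core */
    #define WORK     (200UL * 1000UL * 1000UL)

    static unsigned long results[NTHREADS + 1];

    /* CPU-bound stand-in for a single-threaded task such as compression */
    static unsigned long churn(unsigned long n)
    {
        unsigned long sum = 0, i;
        for (i = 0; i < n; i++)
            sum = sum * 31UL + i;
        return sum;
    }

    static void *worker(void *arg)
    {
        unsigned long *slot = arg;
        *slot = churn(WORK / NTHREADS);     /* each thread handles a slice of the work */
        return NULL;
    }

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        double t0;
        int i;

        t0 = now();
        results[NTHREADS] = churn(WORK);    /* single-threaded run */
        printf("1 thread : %.2f s\n", now() - t0);

        t0 = now();
        for (i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, &results[i]);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        printf("%d threads: %.2f s\n", NTHREADS, now() - t0);

        printf("checksum %lu\n", results[0] + results[NTHREADS]);
        return 0;
    }

On a machine like a p7 the multi-threaded run should finish substantially sooner even though each individual thread is no faster, which is essentially the throughput-versus-single-thread behaviour described above; actual numbers depend on SMT mode, entitlement and compiler flags.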
 

10 More Discussions You Might Find Interesting

1. AIX

performance issue

We have AIX v5.3 on a p5 system with a poorly performing Ingres database. We added one CPU to the system to see if this would help; now there are two CPUs. With sar and topas -P I see good results: CPU usage around 30%. With topas I only see good results in the process output screen, the... (1 Reply)
Discussion started by: rein
1 Reply

2. UNIX for Advanced & Expert Users

performance issue

Hi, on a linux server I have the following : vmstat 2 10 procs memory swap io system cpu r b w swpd free buff cache si so bi bo in cs us sy id 0 4 0 675236 39836 206060 1617660 3 3 3 6 8 7 1 1 ... (1 Reply)
Discussion started by: big123456
1 Reply

3. Shell Programming and Scripting

performance issue

I want to read a file. is it good to use File I/O or shell script?? which one is the best option? (1 Reply)
Discussion started by: vishwaraj
1 Reply

4. AIX

Performance issue in expect library on AIX 5.3

Hi All, I am getting a performance issue in expect5.43.0 library on IBM AIX 5.3. When I have used exp_fexpectv call for pattern matching with the expect string, the execution latency of the call is approximately 200 ms. In that way, I am able to complete only 4 or 5 transactions per... (2 Replies)
Discussion started by: ravindra_maddal
2 Replies

5. AIX

Performance issue in AIX 5.3

Is there any way to increase the CPU utilization of an Embedded SQL program in AIX 5.3, for performance purposes? (0 Replies)
Discussion started by: Gyanendra Awast
0 Replies

6. UNIX for Advanced & Expert Users

Performance issue!

In my C program i am using very large file(approx 400MB) to read parts of it frequently. But due to large file the performance of the program goes down very badly. It shows very high I/O usage and I/O wait time. My question is, What are the ways to optimize or tune I/O on linux or how i can get... (10 Replies)
Discussion started by: mavens
10 Replies

7. AIX

performance issue in AIX

Gurus, I have a process that runs 5 times a day. It completes normally (in about 1 hour) in 3 of the runs, but takes about 3 hrs to complete the other two times, so I need to figure out why it takes significantly more time during those 2 runs. The process is a shell script that connects to... (2 Replies)
Discussion started by: mad_man12
2 Replies

8. Shell Programming and Scripting

Performance issue or something else?

Hi All, I have the following script which I use in Nagios to check the health of the applications, the problem with it is that the curl part ($TOTAL) does not return anything after running for 2-3 hrs, even though from command line the script runs fine but not from Nagios. There are 17... (1 Reply)
Discussion started by: jacki
1 Reply

9. UNIX for Dummies Questions & Answers

Performance issue

hi I am having a performance issue with the following requirement i have to create a permutation and combination on a set of three files such that each record in each file is picked and the output is redirected in a specific format but it is taking around 70 odd hours to prepare a combination... (7 Replies)
Discussion started by: mad_man12
7 Replies

10. AIX

Performance issue

Hi, We have 2 lpars on p6 blade. One of the lpar is having 3 core cpu with 5gb memory running sybase as database. An EOD process takes 25 min. to complete. Now we have an lpar on P7 server with entitled cpu capacity of 2 with 16 Gb memory and sybase as database. The EOD process which takes... (17 Replies)
Discussion started by: vjm
17 Replies
ggLockCreate(3) 							GGI							   ggLockCreate(3)

NAME
ggLockCreate, ggLockDestroy, ggLock, ggUnlock, ggTryLock - Lowest common denominator locking facilities

SYNOPSIS
#include <ggi/gg.h>

void *ggLockCreate(void);
int ggLockDestroy(void *lock);
void ggLock(void *lock);
void ggUnlock(void *lock);
int ggTryLock(void *lock);
DESCRIPTION
These functions allow sensitive resource protection to prevent simultaneous or interleaved access to resources. For developers accustomed to POSIX-like threading environments it is important to differentiate a gglock from a "mutex". A gglock fills *both* the role of a "mutex" and a "condition" (a.k.a. an "event" or "waitqueue") through a simplified API, and as such there is no such thing as a gglock "owner". A LibGG lock is just locked or unlocked; it does not matter by what or when, as long as the application takes care never to create a deadlock that never gets broken.

The locking mechanisms are fully functional even in single-threaded, uninterrupted-flow-of-control environments. They must still be used as described below even in these environments; they are never reduced to non-operations.

The locking mechanisms are threadsafe, and are also safe to call from inside LibGG task handlers. However, they are not safe to use in a thread that may be cancelled during their execution, and they are not guaranteed to be safe to use in any special context other than a LibGG task, such as a signal handler or asynchronous procedure call.

Though the LibGG API does provide ample functionality for threaded environments, do note that LibGG does not itself define any sort of threading support, and does not require or guarantee that threads are available. As such, if the aim of an application developer is to remain as portable as possible, they should keep in mind that when coding for both environments, there are only two situations where locks are appropriate to use. These two situations are described in the examples below.

Cleanup handlers created with ggRegisterCleanup(3) should not call any of these functions.

LibGG must be compiled with threading support if multiple threads that call any of these functions are to be used in the program. When LibGG is compiled with threading support, the ggLock, ggUnlock, and ggTryLock functions are guaranteed memory barriers for the purpose of multiprocessor data access synchronization. (When LibGG is not compiled with threading support, it does not matter, since separate threads should not be using these functions in the first place.)

ggLockCreate creates a new lock. The new lock is initially unlocked.

ggLockDestroy destroys a lock, and should only be called when lock is unlocked, otherwise the results are undefined and probably undesirable.

ggLock will lock the lock and return immediately, but only if the lock is unlocked. If the lock is locked, ggLock will not return until the lock gets unlocked by a later call to ggUnlock. In either case lock will be locked when ggLock returns. ggLock is "atomic," such that only one waiting call to ggLock will return (or one call to ggTryLock will return successfully) each time lock is unlocked. Order is *not* guaranteed by LibGG -- if two calls to ggLock are made at different times on the same lock, either one may return when the lock is unlocked, regardless of which call was made first. (It is even possible for a call to ggTryLock to grab the lock right after it is unlocked, even though a call to ggLock was already waiting on the lock.)

ggTryLock attempts to lock the lock, but unlike ggLock it always returns immediately whether or not the lock was locked to begin with. The return value indicates whether the lock was locked at the time ggTryLock was invoked. In either case lock will be locked when ggTryLock returns.

ggUnlock unlocks the lock. If any calls to ggLock or ggTryLock are subsequently invoked, or have previously been invoked on the lock, one of the calls will lock lock and return. As noted above, which ggLock call returns is not specified by LibGG and any observed behavior should not be relied upon. Immediacy is also *not* guaranteed; a waiting call to ggLock may take some time to return. ggUnlock may be called, successfully, even if lock is already unlocked, in which case nothing will happen (other than a memory barrier.)

In all the above functions, where required, the lock parameter *must* be a valid lock, or the results are undefined, may contradict what is written here, and, in general, bad and unexpected things might happen to you and your entire extended family. The functions do *not* validate the lock; it is the responsibility of the calling code to ensure it is valid before it is used.

Remember, locking is a complicated issue (at least, when coding for multiple environments) and should be a last resort.
RETURN VALUE
ggLockCreate returns a non-NULL opaque pointer to a mutex, hiding its internal implementation. On failure, ggLockCreate returns NULL.

ggTryLock returns GGI_OK if the lock was unlocked, or GGI_EBUSY if the lock was already locked.

ggLockDestroy returns GGI_OK on success or GGI_EBUSY if the lock is locked.
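As a small illustration of these return values, the fragment below is a hypothetical sketch, not part of the manual's own examples, and it assumes the library has already been initialised as required; following the manual's examples, a zero return from ggTryLock is treated as GGI_OK, i.e. the lock was acquired:

    #include <stdio.h>
    #include <ggi/gg.h>

    int main(void)
    {
        void *lock = ggLockCreate();
        if (lock == NULL) {
            fprintf(stderr, "ggLockCreate failed\n");
            return 1;
        }

        if (!ggTryLock(lock)) {
            /* GGI_OK: the lock was unlocked and is now held by us */
            /* ... touch the protected resource ... */
            ggUnlock(lock);
        } else {
            /* GGI_EBUSY: someone else holds it; block until it is free */
            ggLock(lock);
            /* ... touch the protected resource ... */
            ggUnlock(lock);
        }

        /* ggLockDestroy only succeeds (GGI_OK) on an unlocked lock */
        if (ggLockDestroy(lock))
            fprintf(stderr, "lock is still locked, not destroyed\n");
        return 0;
    }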
EXAMPLES
One use of gglocks is to protect a critical section, for example access to a global variable, such that the critical section is never entered by more than one thread when a function is called in a multi-threaded environment. It is important for developers working in a single-threaded environment to consider the needs of multi-threaded environments when they provide a function for use by others.

    static int foo = 0;
    static gglock *l;

    void increment_foo(void)
    {
        ggLock(l);
        foo++;
        ggUnlock(l);
    }

In the above example, it is assumed that the lock l is initialized using ggLockCreate before any calls to increment_foo are made. Also note that, when writing for maximum portability, increment_foo should not be called directly or indirectly by a task handler which was registered via ggAddTask, because a deadlock may result (unless it is somehow known that increment_foo is not being executed by any code outside the task handler.)

Another use of gglocks is to delay or skip execution of a task handler registered with ggAddTask(3). It is important for developers working in a multi-threaded environment to consider this when they use tasks, because in single-threaded environments tasks interrupt the flow of control and may in fact themselves be immune to interruption. As such they cannot wait for a locked lock to become unlocked -- that would create a deadlock.

    static gglock *t, *l, *s;
    int misscnt = 0;

    void do_foo(void)
    {
        ggLock(t);                /* prevent reentry */
        ggLock(l);                /* keep task out */
        do_something();
        ggUnlock(l);              /* task OK to run again */
        if (!ggTryLock(s)) {      /* run task if it was missed */
            if (misscnt)
                while (misscnt--)
                    do_something_else();
            ggUnlock(s);
        }
        ggUnlock(t);              /* end of critical section */
    }

    /* This is called at intervals by the LibGG scheduler */
    static int task_handler(struct gg_task *task)
    {
        /* We know the main application never locks s and l at the
         * same time.  We also know it never locks either of the
         * two more than once (e.g. from more than one thread.) */
        if (!ggTryLock(s)) {
            /* Tell the main application to run our code for us
             * in case we get locked out and cannot run it ourselves. */
            misscnt++;
            ggUnlock(s);
            if (ggTryLock(l))
                return 0;         /* We got locked out. */
        } else {
            /* The main application is currently running old missed
             * tasks.  But it is using misscnt, so we can't just ask
             * it to do one more.
             *
             * If this is a threaded environment, we may spin here for
             * a while in the rare case that the main application
             * unlocked s and locked l between the above ggTryLock(s)
             * and the below ggLock(l).  However we will get control
             * back eventually.
             *
             * In a non-threaded environment, the below ggLock cannot
             * wedge, because the main application is stuck inside the
             * section where s is locked, so we know l is unlocked. */
            ggLock(l);
            do_something_else();
            ggUnlock(l);
            return 0;
        }
        /* Now we know it is safe to run do_something_else(), as
         * do_something() cannot be run until we unlock l.
         * However, in threaded environments, the main application may
         * have just started running do_something_else() for us already.
         * If so, we are done, since we already incremented misscnt.
         * Otherwise we must run it ourselves, and decrement misscnt
         * so it won't get run an extra time when we unlock s. */
        if (ggTryLock(s)) {
            ggUnlock(l);
            return 0;
        }
        if (misscnt)
            while (misscnt--)
                do_something_else();
        ggUnlock(s);
        ggUnlock(l);
        return 0;
    }

In the above example, the lock t prevents reentry into the do_foo subroutine, much as the lock in the first example protects its critical section. The lock l prevents do_something_else() from being called while do_something() is running. The lock s is being used to protect the misscnt variable and also acts as a memory barrier to guarantee that the value seen in misscnt is up-to-date. The code in do_foo will run do_something_else() after do_something() if the task happened while do_something() was running. The above code will work in multi-threaded-single-processor, multi-threaded-multi-processor, and single-threaded environments.

Note: The above code assumes do_something_else() is reentrant.
SEE ALSO
pthread_mutex_init(3)

libgg-1.0.x                          2005-08-26                          ggLockCreate(3)