Operating Systems > HP-UX: Bad performance but low CPU loading? Post 302411162 by GreenShery on Wednesday 7th of April 2010 09:12:42 PM
I checked some backup jobs; they're scheduled at other times, such as 1:00 and 23:00...
 

10 More Discussions You Might Find Interesting

1. Programming

CPU Loading

How can I measure CPU loading (like the performance monitor in Windows)? I use Solaris but would like to write portable code. I also have to write a program that loads the CPU by a known percentage. How can I use 30% of the CPU, for example? Thanks for any ideas. (6 Replies)
Discussion started by: serge
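Loading a CPU to a fixed percentage is usually done with a duty cycle: busy-spin for the target fraction of a short period, then sleep for the rest. This is a minimal portable sketch of that idea, not from the thread itself; the function name and parameters are illustrative:

```python
import time

def load_cpu(target=0.30, period=0.1, duration=5.0):
    """Hold roughly `target` of one core busy for `duration` seconds by
    alternating a busy-spin with a sleep inside each `period`-second slot."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        spin_until = time.monotonic() + target * period
        while time.monotonic() < spin_until:
            pass                          # busy-wait: burns CPU
        time.sleep((1.0 - target) * period)   # idle for the rest of the slot
```

The shorter the period, the smoother the load appears in monitoring tools, at the cost of more sleep/wake overhead; this loads one core only, so on an N-core box it reads as target/N of total CPU.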

2. UNIX for Dummies Questions & Answers

comparing Huge Files - Performance is very bad

Hi All, can you please help me resolve the following problem? My requirement is this: 1) I have two files, YESTERDAY_FILE and TODAY_FILE, each with nearly two million records. 2) I need to check each record of TODAY_FILE against YESTERDAY_FILE. If it exists we can skip that by... (5 Replies)
Discussion started by: madhukalyan
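The usual cause of bad performance here is a per-record scan of the second file (O(n*m) for two million records each). Loading yesterday's records into a hash set makes each membership test O(1). A minimal sketch, not taken from the thread, assuming one record per line:

```python
def new_records(today_path, yesterday_path):
    """Return the lines of today's file that do not appear in yesterday's,
    using a set so each lookup is O(1) instead of a scan of the whole file."""
    with open(yesterday_path) as f:
        seen = {line.rstrip("\n") for line in f}   # one pass over yesterday
    with open(today_path) as f:                    # one pass over today
        return [rec for rec in (line.rstrip("\n") for line in f)
                if rec not in seen]
```

The same idea in shell is `grep -F -v -f YESTERDAY_FILE TODAY_FILE` or a sort/`comm` pipeline; the set-based approach needs enough memory to hold one file, which two million short records normally fits.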

3. AIX

Bad performance when log in with putty

Hello guys! I'm a n00b in AIX and I'm stuck on a problem (my English is poor, but I hope you can understand me :P). I'm trying to connect to an AIX machine with PuTTY: 'using username xxx' appears after 2 sec (OK), but 'xxx@ip's password' appears after 1:15 min. After... (6 Replies)
Discussion started by: combat2k

4. Solaris

Low average cpu utilization.

Hi to all, I have an app on Solaris 5.8, written in C++ (3.2.1), that uses multithreading. The hardware has 8 CPUs. When I run my app, I notice that average CPU usage is only around 40%, and the performance is not much higher. Is there a CPU limitation on Solaris that dedicates only a part of the CPU... (3 Replies)
Discussion started by: Moodie

5. Solaris

Sun T1000 application low performance

Hello All. I have a Sun T1000 server with Solaris 10. On the T1000 I installed EMC Smarts, an application for monitoring network devices via SNMP. Smarts has its own DB (containing objects: devices and relationships); the file takes 30 MB. Now all queries to the DB run very slowly, so Smarts works too slowly,... (5 Replies)
Discussion started by: hemulll

6. Solaris

On boot I receive the following error: BAD PBR SIGN.

Hi folks. For disk cloning on Solaris x86, I used the dd command. I pulled out the source disk and inserted the new one. On boot I received the following error: BAD PBR SIGN. :( (5 Replies)
Discussion started by: wolfgang

7. Solaris

Performance (iops) becomes bad, what is the reason?

I have written a virtual HBA driver named "xmp_vhba" with a SCSI disk attached to it, as shown below: xmp_vhba, instance #0; disk, instance #11. But performance becomes very bad when we read/write the SCSI disk using vdbench (a read/write I/O tool). What is the reason? ... (7 Replies)
Discussion started by: ForgetChen

8. AIX

High Runqueue (R) LOW CPU LOW I/O Low Network Low memory usage

Hello All, I have a system running an AIX 6.1 shared uncapped partition (11 physical processors, 24 virtual, 72 GB of memory). The output from NMON and vmstat shows a high run queue (60+) for continuous periods, but NO paging, relatively low I/O (6000), CPU at 40%, and low network.... (9 Replies)
Discussion started by: IL-Malti

9. UNIX for Dummies Questions & Answers

CPU with long hours in top, is this bad?

Hi, we have a Solaris server with about 43 Oracle databases on it, and we also have the Oracle Enterprise Manager agent (emagent) that is used to monitor these databases. When running top, the emagent shows as one of the top processes. Excerpts from running top show something like the below: ... (3 Replies)
Discussion started by: newbie_01

10. AIX

AIX lpar bad disk I/O performance - 4k per IO limitation ?

Hi guys, I have a freshly installed VIO 2.2.3.70 on a p710 with 3 physical SAS disks: rootvg on hdisk0, and 3 VIO clients through vscsi (AIX 7.1 TL4, AIX 6.1 TL9, RHEL 6.5 ppc). Each LPAR has its rootvg installed on an LV on datavg (hdisk2) mapped to vhost0,1,2. There is no VG on hdisk1; I use it for my... (1 Reply)
Discussion started by: frenchy59
BACKUP_KILL(8)						       AFS Command Reference						    BACKUP_KILL(8)

NAME
backup_kill - Terminates a pending or running operation

SYNOPSIS
backup kill -id <job ID or dump set name> [-help]

backup k -i <job ID or dump set name> [-h]

DESCRIPTION
The backup kill command dequeues a Backup System operation that is pending, or terminates an operation that is running, in the current interactive session. It is available only in interactive mode. If the issuer of the backup interactive command included the -localauth flag, the -cell argument, or both, then those settings apply to this command also.

To terminate a dump operation, specify either the dump name (volume_set_name.dump_level_name) or its job ID number, which appears in the output from the backup jobs command. To terminate any other type of operation, provide the job ID number.

The effect of terminating an operation depends on the type and current state of the operation:

o   If an operation is still pending, the Tape Coordinator removes it from the queue with no other lasting effects.

o   If the Tape Coordinator is unable to process the termination signal before an operation completes, it simply confirms the operation's completion. The operator must take the action necessary to undo the effects of the incorrect operation.

o   If a tape labeling operation is running, the effect depends on when the Tape Coordinator receives the termination signal. The labeling operation is atomic, so it either completes or does not begin at all. Use the backup readlabel command to determine if the labeling operation completed, and reissue the backup labeltape command to overwrite the incorrect label if necessary.

o   If a tape scanning operation is running, it terminates with no other effects unless the -dbadd flag was included on the backup command. In that case, the Backup System possibly has already written new Backup Database records to represent dumps on the scanned tape. If planning to restart the scanning operation, first locate and remove the records created during the terminated operation: a repeated backup scantape operation exits automatically when it finds that a record that it needs to create already exists.
o   If a dump operation is running, all of the volumes written to the tape or backup data file before the termination signal is received are complete and usable. If the operation is restarted, the Backup System performs all the dumps again from scratch, and assigns a new dump ID number. If writing the new dumps to the same tape or file, the operator must relabel it first if the interrupted dump is not expired. If writing the new dump to a different tape or file, the operator can remove the dump record associated with the interrupted dump to free up space in the database.

o   If a restore operation is running, completely restored volumes are online and usable. However, it is unlikely that many volumes are completely restored, given that complete restoration usually requires data from multiple tapes. If the termination signal comes before the Backup System has accessed all of the necessary tapes, each volume is only partially written and is never brought online. It is best to restart the restore operation from scratch to avoid possible inconsistencies. See also CAUTIONS.

CAUTIONS
It is best not to issue the backup kill command against restore operations. If the termination signal interrupts a restore operation as the Backup System is overwriting an existing volume, it is possible to lose the volume entirely (that is, to lose both the contents of the volume as it was before the restore and any data that was restored before the termination signal arrived). The data being restored still exists on the tape, but some data can be lost permanently.

OPTIONS
-id <job ID or dump set name>
    Identifies the backup operation to terminate. Provide one of two types of values:

    o   The operation's job ID number, as displayed in the output of the backup jobs command.

    o   For a dump operation, either the job ID number or a dump name of the form volume_set_name.dump_level_name, where volume_set_name is the name of the volume set being dumped and dump_level_name is the last element in the dump level pathname at which the volume set is being dumped. The dump name appears in the output of the backup jobs command along with the job ID number.

-help
    Prints the online help for this command. All other valid options are ignored.

EXAMPLES
The following command terminates the operation with job ID 5:

    backup> kill 5

The following command terminates the dump operation called "user.sunday1":

    backup> kill user.sunday1

PRIVILEGE REQUIRED
The issuer must have the privilege required to initiate the operation being cancelled. Because this command can be issued only within the interactive session during which the operation was initiated, the required privilege is essentially guaranteed.

SEE ALSO
backup(8), backup_interactive(8), backup_jobs(8)

COPYRIGHT
IBM Corporation 2000. <http://www.ibm.com/> All Rights Reserved.

This documentation is covered by the IBM Public License Version 1.0. It was converted from HTML to POD by software written by Chas Williams and Russ Allbery, based on work by Alf Wachsmann and Elizabeth Cassell.

OpenAFS							      2012-03-26						    BACKUP_KILL(8)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.