01-31-2019
Application Performance Management (APM) products may be what you are looking for. Some examples are Nagios, Foglight, Big Brother, SolarWinds, New Relic, BMC, and probably a bunch more. An agent or instrumentation is installed on each system to "monitor" activity. End-to-end process tracing, or "transaction" tracing, has been the holy grail of APM for decades, and very few companies do it well because there isn't always an easy mapping between systems. Usually you get partial info for specific transactions, or you can only monitor a few transactions in real time, due to the huge volume of data that would need to be collected for all transactions.
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi,
I am using sftp in my bash script.
I wanted to know whether there is any way to track whether sftp has been successful or not.
Does sftp return any codes?
Thanks in adv (9 Replies)
Discussion started by: borncrazy
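sftp does set an exit status: 0 on success, non-zero on failure, and with -b it aborts on the first failing batch command. A minimal sketch -- the user, host, and batch file names are placeholders:

```shell
#!/bin/sh
# Placeholder host/user/batch file; BatchMode prevents a password prompt.
sftp -o BatchMode=yes -o ConnectTimeout=5 -b batch.txt user@example.com
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "sftp transfer succeeded"
else
    echo "sftp failed with exit code $rc" >&2
fi
```

In a larger script you would branch on $rc (or use `if sftp ...; then`) rather than parsing sftp's output.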
2. Linux
Hi all,
Under top command you could see some iowait output.
Is there any way to locate which process(es) is causing the high percentage of iowait?
17:48:39 up 19 days, 18:54, 3 users, load average: 3.24, 3.14, 3.17
392 processes: 389 sleeping, 1 running, 2 zombie, 0 stopped
CPU states: cpu user... (3 Replies)
Discussion started by: will_mike
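On Linux, processes blocked on I/O sit in uninterruptible sleep (state D), so listing D-state processes is a quick way to find the likely iowait culprits. A sketch:

```shell
#!/bin/sh
# Processes in uninterruptible sleep ("D") are typically blocked on disk I/O.
echo "D-state processes (state pid command):"
ps -eo state=,pid=,comm= | awk '$1 ~ /^D/'
```

Where installed, `iotop` or `pidstat -d` give per-process I/O rates directly.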
3. AIX
Hi,
Sometimes when I want to unmount some filesystem I get "The requested resource is busy." error.
In such a case I try to find and kill the process that uses that filesystem. I do that at random.
Is there a right way to find which processes use a filesystem resource at a given time?
thanks... (1 Reply)
Discussion started by: vilius
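Rather than killing processes at random, `fuser -c` (POSIX; also available on AIX) lists the processes using any file on the mount point's filesystem, and `lsof` can do the same where installed. The mount point below is a placeholder:

```shell
#!/bin/sh
# -c reports processes using any file on the mount point's filesystem.
MNT=/
echo "Processes using $MNT:"
fuser -c "$MNT" 2>/dev/null || echo "(fuser unavailable or no users found)"
```

On AIX, `fuser -cux /some/fs` also shows the owning users, which helps decide what is safe to kill before the unmount.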
4. Solaris
Hi all,
We have a server with many processes running. It is very difficult to trace which ones are consuming the most memory. top shows CPU usage in sequence, but how about memory?
Tried working with TOP command.
Please let me know if something not clear.
Thanks,
Deepak (5 Replies)
Discussion started by: naw_deepak
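One portable sketch: sort ps output by resident set size to see the biggest memory consumers. On Solaris specifically, `prstat -s rss` gives a similar live view:

```shell
#!/bin/sh
# Top 10 processes by resident set size (RSS, in KB).
echo "PID RSS(KB) VSZ(KB) COMMAND"
ps -eo pid=,rss=,vsz=,comm= | sort -k2 -n -r | head -10
```

RSS is resident (physical) memory; VSZ is total virtual size, so a process can have a large VSZ without actually occupying much RAM.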
5. Shell Programming and Scripting
Hi Friendz,
I have 14 DB load scripts, say 1, 2, 3...14. I want a script that calls each one automatically; after every script completes, it should track the logfile and mail that log file to a group, then run the next script in sequence 1, 2, 3...14.
Please help me,need... (1 Reply)
Discussion started by: shirdi
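A sketch of the run-then-mail loop. Three stub scripts stand in for the 14 DB load scripts, and the mailx call (recipient and script names are placeholders) is left as a comment so the sketch runs anywhere:

```shell
#!/bin/sh
# Stub scripts stand in for the real DB load scripts 1..14.
workdir=$(mktemp -d)
for i in 1 2 3; do
    printf 'echo "load %s done"\n' "$i" > "$workdir/load_$i.sh"
done

# Run each script in sequence, capture its log, then mail it before moving on.
for i in 1 2 3; do
    sh "$workdir/load_$i.sh" > "$workdir/load_$i.log" 2>&1
    # mailx -s "load_$i log" dba-group@example.com < "$workdir/load_$i.log"
    cat "$workdir/load_$i.log"
done
rm -rf "$workdir"
```

Because the loop body waits for each script to finish before mailing and moving on, the 1..14 ordering is preserved automatically.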
6. UNIX for Advanced & Expert Users
Hello,
I execute an application on my Unix AIX server and it crashes after reading some files. These files are very big (80 MB); the application is a CVS repository.
I have found, by comparison with a Solaris server, that there are system limitations on my AIX server in the... (2 Replies)
Discussion started by: steiner
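Resource limits are a common cause of a process dying while reading large files; comparing the limits on the Solaris and AIX servers would show the difference. A sketch:

```shell
#!/bin/sh
# A low "file size" (fsize) or "data" segment limit can kill a process that
# reads or maps large files; "unlimited" means no cap. On AIX, per-user
# limits are configured in /etc/security/limits.
ulimit -a
```

Run this as the same user the application runs as, since limits are per-user and per-session.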
7. AIX
I don't know when the process will start and end. I need to write a script to trace its CPU/memory usage while it is running. How do I write this script? (2 Replies)
Discussion started by: rainbow_bean
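Since the start time is unknown, one approach is to poll: wait for the process to appear, then sample ps until it exits. A minimal sketch -- a background sleep stands in for the real process, and the sample count is capped so the demo terminates:

```shell
#!/bin/sh
# A background sleep stands in for the process being traced.
sleep 10 &
target=$!

# Sample CPU% and memory (VSZ/RSS in KB) while the process is alive.
i=0
while kill -0 "$target" 2>/dev/null && [ "$i" -lt 3 ]; do
    ps -o pid=,pcpu=,vsz=,rss= -p "$target"
    i=$((i + 1))
    sleep 1
done
kill "$target" 2>/dev/null
```

In practice, replace the stand-in with something like `target=$(pgrep -x yourproc)`, drop the sample cap, and append each sample with a timestamp to a log file.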
8. Shell Programming and Scripting
I have a scenario where I need a script that monitors a process "Monitor" based on process ID... there can be any number of instances of this running... I start this across 4 servers in NFS. Now I need a file which has the process IDs of the processes that are currently in execution at any... (9 Replies)
Discussion started by: niteesh_!7
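A sketch of the PID-collection step. "Monitor" is the process name from the question; the pid-file path is a placeholder -- pointing it at a shared NFS path (one file per host) would collect the PIDs from all 4 servers in one place:

```shell
#!/bin/sh
# Record the PIDs of all running instances of a named process.
NAME=Monitor
PIDFILE=/tmp/${NAME}.pids   # e.g. /nfs/shared/${NAME}.$(hostname).pids
pgrep -x "$NAME" > "$PIDFILE" || true   # file is left empty if none running
echo "PIDs recorded in $PIDFILE:"
cat "$PIDFILE"
```

Re-running this from cron keeps the file current as instances start and stop.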
9. Linux
Hi gurus,
Just wanted to know if there is any way to trace the PID of a process that has already completed its job and exited.
My tomcat server was found in a hung state and we restarted it. Now we have found that someone has run a kill -9 "pid" and wanted to know if it is the PID of tomcat.... (1 Reply)
Discussion started by: Hari_Ganesh
10. UNIX for Advanced & Expert Users
Hi,
I want to track a process using its PID in Solaris.
I have code in C++ whose memory usage increases steeply every 20 days; from the code I couldn't see any memory leak.
Is there any way in UNIX where, with the help of the PID, I can trace the process usage every hour?
... (3 Replies)
Discussion started by: senkerth
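One portable approach: sample the PID's memory from cron every hour and append to a log; on Solaris, `pmap -x PID` additionally breaks usage down per segment, which helps localize a slow leak. In this sketch the script samples its own shell (`$$`) so it runs anywhere; substitute the real PID:

```shell
#!/bin/sh
# Intended to run hourly from cron, e.g.: 0 * * * * /path/to/memsample.sh
# PID=$$ samples this shell for demonstration; use the target PID in practice.
PID=$$
stamp=$(date '+%Y-%m-%d %H:%M:%S')
echo "$stamp vsz/rss(KB): $(ps -o vsz=,rss= -p "$PID")"
```

Appending the output (`>> /var/tmp/mem_$PID.log` or similar) over several weeks makes the 20-day growth pattern visible.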
LEARN ABOUT CENTOS
trace-cmd-restore
TRACE-CMD-RESTORE(1) TRACE-CMD-RESTORE(1)
NAME
trace-cmd-restore - restore a failed trace record
SYNOPSIS
trace-cmd restore [OPTIONS] [command] cpu-file [cpu-file ...]
DESCRIPTION
The trace-cmd(1) restore command will restore a crashed trace-cmd-record(1) file. If for some reason a trace-cmd record fails, it will
leave the per-CPU data files and not create the final trace.dat file. trace-cmd restore will append the files to create a working
trace.dat file that can be read with trace-cmd-report(1).
When trace-cmd record runs, it spawns off a process per CPU and writes to a per-CPU file usually called trace.dat.cpuX, where X represents
the CPU number that it is tracing. If the -o option was used in the trace-cmd record, then the CPU data files will have that name instead
of the trace.dat name. If an unexpected crash occurs before the tracing is finished, then the per-CPU files will still exist but there will
not be any trace.dat file to read from. trace-cmd restore will allow you to create a trace.dat file with the existing data files.
OPTIONS
-c
Create a partial trace.dat file from the machine, to be used with a full trace-cmd restore at another time. This option is useful for
embedded devices. If a server contains the cpu files of a crashed trace-cmd record (or trace-cmd listen), trace-cmd restore can be
executed on the embedded device with the -c option to get all the stored information of that embedded device. Then the file created
could be copied to the server to run the trace-cmd restore there with the cpu files.
If -o is not specified, then the file created will be called trace-partial.dat. This is because the file is not a full version
of something that trace-cmd-report(1) could use.
-t tracing_dir
Used with -c, it overrides the location to read the events from. By default, tracing information is read from the debugfs/tracing
directory. -t will use that location instead. This can be useful if the trace.dat file to create is from another machine. Just tar
-cvf events.tar debugfs/tracing and copy and untar that file locally, and use that directory instead.
-k kallsyms
Used with -c, it overrides where to read the kallsyms file from. By default, /proc/kallsyms is used. -k will override the file to read
the kallsyms from. This can be useful if the trace.dat file to create is from another machine. Just copy the /proc/kallsyms file
locally, and use -k to point to that file.
-o output
By default, trace-cmd restore will create a trace.dat file (or trace-partial.dat if -c is specified). You can specify a different file
to write to with the -o option.
-i input
By default, trace-cmd restore will read the information of the current system to create the initial data stored in the trace.dat file.
If the crash was on another machine, then that machine should have the trace-cmd restore run with the -c option to create the trace.dat
partial file. Then that file can be copied to the current machine where trace-cmd restore will use -i to load that file instead of
reading from the current system.
EXAMPLES
If a crash happened on another box, you could run:
$ trace-cmd restore -c -o box-partial.dat
Then on the server that has the cpu files:
$ trace-cmd restore -i box-partial.dat trace.dat.cpu0 trace.dat.cpu1
This would create a trace.dat file for the embedded box.
SEE ALSO
trace-cmd(1), trace-cmd-record(1), trace-cmd-report(1), trace-cmd-start(1), trace-cmd-stop(1), trace-cmd-extract(1), trace-cmd-reset(1),
trace-cmd-split(1), trace-cmd-list(1), trace-cmd-listen(1)
AUTHOR
Written by Steven Rostedt, <rostedt@goodmis.org[1]>
RESOURCES
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git
COPYING
Copyright (C) 2010 Red Hat, Inc. Free use of this software is granted under the terms of the GNU Public License (GPL).
NOTES
1. rostedt@goodmis.org
mailto:rostedt@goodmis.org
06/11/2014 TRACE-CMD-RESTORE(1)