A variable and the sum of its values in huge data
Posted by varungupta on 05-19-2009, 02:24 AM

Hi Experts,

I have a question.

Running `ps -elf | grep DataFlow` gives the following output:

242001 A mqsiadm 2076676 1691742 0 60 20 26ad4f400 130164 * May 09 - 3:02 DataFlowEngine EAIDVBR1_BROKER 5e453de8-2001-0000-0080-fd142b9ce8cb VIPS_INQ1 0
242001 A mqsiadm 2081016 1691742 0 60 20 372c6c400 81248 * May 09 - 1:03 DataFlowEngine EAIDVBR1_BROKER 6fc4c4d3-2001-0000-0080-867a1d98f606 TEST2 1
242001 A mqsiadm 2088998 1691742 0 60 20 3facfd400 60924 * May 09 - 1:19 DataFlowEngine EAIDVBR1_BROKER c7072ee8-2001-0000-0080-ecd80b0acb20 EAI_CONTROL 0
242001 A mqsiadm 2093084 1691742 0 60 20 3fad7d400 59072 * May 09 - 1:12 DataFlowEngine EAIDVBR1_BROKER 18b140e8-2001-0000-0080-d0035678abd2 WEBSERVICES 0
242001 A mqsiadm 2117660 1691742 0 60 20 1e2d3e400 70436 * May 09 - 1:18 DataFlowEngine EAIDVBR1_BROKER 2fe72de8-2001-0000-0080-ecd80b0acb20 DISCOUNTING_CFC_APPID 0
242001 A mqsiadm 2121744 1691742 0 60 20 372d6c400 84244 * May 09 - 1:57 DataFlowEngine EAIDVBR1_BROKER a62a39e8-2001-0000-0080-ecd80b0acb20 UTILITY 0
242001 A mqsiadm 2125838 1691742 0 60 20 31ad61400 69952 * May 09 - 1:34 DataFlowEngine EAIDVBR1_BROKER 79922de8-2001-0000-0080-ecd80b0acb20 CTAX 0
242001 A mqsiadm 2134042 1691742 0 60 20 2c2dda400 93060 * May 09 - 2:38 DataFlowEngine EAIDVBR1_BROKER 318d32e8-2001-0000-0080-ecd80b0acb20 IDS_INQ1 0
242001 A mqsiadm 2138132 1691742 0 60 20 42d8a400 92372 * May 09 - 2:24 DataFlowEngine EAIDVBR1_BROKER d6d529e8-2001-0000-0080-bbfc9e717a98 CON_PAYOFFS2 0
242001 A mqsiadm 2146328 1691742 0 60 20 3cad7b400 80092 * May 09 - 1:34 DataFlowEngine EAIDVBR1_BROKER 3ed728e8-2001-0000-0080-bbfc9e717a98 CON_INQ 0

From each line above I need to pick the execution-group name (the next-to-last field, e.g. VIPS_INQ1) and the memory consumed by that execution group (the SZ value in field 10, e.g. 130164). For each execution group I need to sum those memory values over an interval of one hour, divided into six sub-intervals of 10 minutes each.

Please tell me how to achieve this.
I guess an awk array can be used here; a rough, untested sketch of what I have in mind follows.
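
A minimal sketch (untested; it assumes the execution-group name is the next-to-last field, the memory size is the SZ value in field 10, and the six sub-intervals are 10 minutes apart):

    #!/bin/sh
    # Sample the DataFlowEngine processes six times over one hour
    # (once every 10 minutes) and total the SZ value per execution group.
    i=0
    while [ $i -lt 6 ]; do
        # The [D] keeps grep from matching its own process.
        ps -elf | grep '[D]ataFlowEngine'
        i=`expr $i + 1`
        [ $i -lt 6 ] && sleep 600    # wait 10 minutes between samples
    done |
    awk '{ sum[$(NF-1)] += $10 }     # group name -> running SZ total
         END { for (grp in sum) print grp, sum[grp] }'

The associative array indexed by the group name is the "array" guessed at above: each sample adds its SZ value to that group's running total, and the END block prints one total per execution group. Using $(NF-1) rather than a fixed field number keeps the group-name lookup correct even if the STIME column splits into a different number of fields.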

Thanks in advance.
 
