Suppose there are 12 CLP04 segments in my file. I want to add up the first 5 CLP04 values, then print the total amount on the next line after the BPR segment.
Summing 5 CLP04 values - assuming those are field $5 in the CLP segments/records - is not that difficult:
But, on top of what Don Cragun asked, what should be done with the two residual values? And there's just one BPR segment, in the first line of the file - should the TOTAL_AMOUNT appear there, i.e. in line 2?
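A minimal sketch, assuming one segment per line with '*' as the element separator, so that CLP04 is awk field $5 (segment layout and field position are assumptions here):

```shell
# Sum the CLP04 value (field $5) of the first 5 CLP segments only.
awk -F'*' '
    $1 == "CLP" && ++n <= 5 { total += $5 }
    END { printf "TOTAL_AMOUNT: %.2f\n", total }
' claims.txt
```

Where to emit the total relative to the BPR segment depends on the answers to the questions above.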
Hi,
I am trying to extract the sum of all variables listed in a file.
The code is as follows
##### FILE1 ########
Value1:2
Value2:2
Value3:6
Value4:5
##### shell script ######
#!/bin/sh
total=0 (2 Replies)
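One way to finish that script, assuming the values sit after the ':' in each line of FILE1 as shown above:

```shell
#!/bin/sh
# Sum the numbers after the ':' on each line of FILE1.
total=0
while IFS=: read -r name value
do
    total=$((total + value))
done < FILE1
echo "$total"
```

With the sample data this prints 15.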
Hi,consider this fields,
$1 $2 $3
981 0 1
984 0 4
985 1 0
987 0 2
990 0 0
993 0 3
995 2 0
996 0 1
999 0 4
for each occurrence of zero in column $2 or $3 I need to sum the $1 fields; for example, with this data the sum of $1 is 8910. I'm sure... (2 Replies)
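Reading "zero in column $2 and $3" as "a zero in either column" (that is the only reading that gives 8910 for the sample data), a one-liner sketch:

```shell
# Sum column 1 on rows where column 2 OR column 3 is zero.
awk '$2 == 0 || $3 == 0 { sum += $1 } END { print sum }' file
```

On the nine sample rows above this prints 8910.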
Hi Experts,
I am adding a column of numbers with awk; however, I am not getting the correct output:
# awk '{sum+=$1} END {print sum}' datafile
2.15291e+06
How can I get the output like: 2152910
Thank you..
# awk '{sum+=$1} END {print sum}' datafile
2.15079e+06 (3 Replies)
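awk's default output conversion (%g) switches to scientific notation for large totals; an explicit printf format keeps the full integer:

```shell
# Print the sum without scientific notation.
awk '{ sum += $1 } END { printf "%.0f\n", sum }' datafile
```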
Hi all,
I need to sum values for fields in a delimited file as below:
2010-03-05|||
2010-03-05|||123
2010-03-05|467.621|369.532|
2010-03-06|||
2010-03-06||2|
2010-03-06|||444
2010-03-07|||
2010-03-07|||
2010-03-07|655.456|1019.301|
Code used is:
nawk -F "|" ' { sum +=... (7 Replies)
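Since the post is cut off, here is one possible shape for the per-date sums of fields 2 and 3; empty fields add 0 in awk, and printing three decimals is an assumption about the desired output format:

```shell
# Per-date totals of fields 2 and 3, in input order.
awk -F'|' '
    !seen[$1]++ { order[++n] = $1 }
    { s2[$1] += $2; s3[$1] += $3 }
    END {
        for (i = 1; i <= n; i++)
            printf "%s|%.3f|%.3f\n", order[i], s2[order[i]], s3[order[i]]
    }
' file
```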
Hi,
I am unable to sum each column in a loop using awk.
awk is not accepting the parameters in the command,
and I am getting the error below.
awk: 0602-562 Field $() is not correct.
Source file
abc.txt
100,200,300,400,500,600,700,800,900
101,201,301,401,501,601,701,801,901
... (1 Reply)
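The "Field $() is not correct" error typically comes from splicing an empty or unquoted shell variable into awk's $(). Passing the column number in with -v avoids that:

```shell
# Pass the column number into awk with -v instead of shell interpolation.
for i in 1 2 3
do
    awk -F',' -v col="$i" '{ sum += $col } END { print col, sum }' abc.txt
done
```

With the sample file this prints "1 201", "2 401", "3 601".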
Hi,
I have a file with header, detail and trailer records.
HDR|111
DTL|abc|100|xyz
DTL|abc|50|xyz
TRL|150
I need to add the values in 3rd field from DTL records.
Using awk, I am doing it as follows:
awk -F'|' '$1=="DTL"{a += $3} END {print a}' <source_file>
However, I want to... (3 Replies)
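The posted command works as-is. Since the post is cut off, one common extension (an assumption here) is to also check the DTL sum against the TRL trailer value:

```shell
# Sum DTL field 3 and compare it with the TRL trailer total.
awk -F'|' '
    $1 == "DTL" { sum += $3 }
    $1 == "TRL" { trl = $2 }
    END { print sum, (sum == trl ? "OK" : "MISMATCH") }
' source_file
```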
Hi
I have to calculate some numbers, column by column.
For this I used a for-loop:
for i in {4..26};do awk -F"," '{x'$i'+=$'$i'}END{print '$i'"\t" x'$i'}' file.tmp;done
----- printout -----
4 660905240
5 71205272
6 8.26169e+07
7 8.85961e+07
8 8.60936e+07
9 7.42238e+07
10 5.6051e+07... (7 Replies)
Hi all,
I have one host I need to run in a loop to check the capacity from different frames, send the output to one file, sum it, and convert it to TB.
this is Code
#!/bin/ksh
DATE=`date '+%d%m%y'`
for f in `cat /home/esx-capacity/esx-host.txt`
do
for g in `cat /home/esx-capacity/frame`... (10 Replies)
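Since the loop body is cut off, here is only the sum-and-convert step as a sketch. It assumes each iteration appends one capacity value in GB to report.txt (file name and unit are assumptions; the site-specific frame/host query is omitted):

```shell
# Sum the collected per-frame capacities (GB) and convert to TB.
awk '{ gb += $1 } END { printf "Total: %.2f TB\n", gb / 1024 }' report.txt
```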
Discussion started by: ranjancom2000
10 Replies
LEARN ABOUT NETBSD
lfs_cleanerd
LFS_CLEANERD(8)           BSD System Manager's Manual          LFS_CLEANERD(8)
NAME
lfs_cleanerd -- garbage collect a log-structured file system
SYNOPSIS
lfs_cleanerd [-bcDdfmqs] [-i segment-number] [-l load-threshold] [-n number-of-segments] [-r report-frequency] [-t timeout] node
DESCRIPTION
The lfs_cleanerd command starts a daemon process which garbage-collects the log-structured file system residing at the point named by node in
the global file system namespace. This command is normally executed by mount_lfs(8) when the log-structured file system is mounted. The
daemon will exit within a few minutes of when the file system it was cleaning is unmounted.
Garbage collection on a log-structured file system is done by scanning the file system's segments for active, i.e. referenced, data and copying it to new segments. When all of the active data in a given segment has been copied to a new segment, that segment can be marked as empty,
thus reclaiming the space taken by the inactive data which was in it.
The following options are available:
-b Use bytes written, rather than segments read, when determining how many segments to clean at once.
-c Coalescing mode. For each live inode, check to see if it has too many blocks that are not contiguous, and if it does, rewrite it.
After a single pass through the filesystem the cleaner will exit. This option has been reported to corrupt file data; do not use it.
-D Stay in the foreground, do not become a daemon process. Does not print additional debugging information (in contrast to -d).
-d Run in debug mode. Do not become a daemon process, and print debugging information. More -d s give more detailed debugging information.
-f Use filesystem idle time as the criterion for aggressive cleaning, instead of system load.
-i segment-number
Invalidate the segment with segment number segment-number. This option is used by resize_lfs(8), and should not be specified on the
command line.
-l load-threshold
Clean more aggressively when the system load is below the given threshold. The default threshold is 0.2.
-m Does nothing. This option is present for historical compatibility.
-n number-of-segments
Clean this number of segments at a time: that is, pass this many segments' blocks through a single call to lfs_markv, or, if -b was
also given, pass this many segments' worth of blocks through a single call to lfs_markv.
-q Quit after cleaning once.
-r report-frequency
Give an efficiency report after every report-frequency times through the main loop.
-s When cleaning the file system, send only a few blocks through lfs_markv at a time. Don't use this option.
-t timeout
Poll the filesystem every timeout seconds, looking for opportunities to clean. The default is 300, that is, five minutes. Note that
lfs_cleanerd will be automatically awakened when the filesystem is active, so it is not usually necessary to set timeout to a low
value.
SEE ALSO
lfs_bmapv(2), lfs_markv(2), lfs_segwait(2), mount_lfs(8)
HISTORY
The lfs_cleanerd utility first appeared in 4.4BSD.
BSD August 6, 2009 BSD