Full Discussion: Summing file sizes
Post 302738835 by Alexander4444, Monday 3 December 2012, 01:37 AM
Thanks!

Thanks for these explanations! They work excellently. I especially appreciate the nice formatting of the output in MB.
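The accepted answers aren't quoted in this reply, but a minimal sketch of one common approach — totalling the size column of ls -l in awk and formatting the result in MB — might look like this (it assumes a listing where the size is field 5):

    ls -l | awk '{ s += $5 } END { printf "Total: %.2f MB\n", s / (1024 * 1024) }'

Parsing ls breaks on unusual filenames; on GNU systems, find . -maxdepth 1 -type f -printf '%s\n' piped into the same awk END block is sturdier.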
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Help on adding file sizes

Hi, I need to take a list of files that are defined by an ls -ltr or grep for particular file names, and add up the byte size column, which is field 5, separated by a space. I tried to do this but I think I am way off: for file in 'ls -ltr | grep 20070916 | nawk -F" " '{temp+=5} END {print... (1 Reply)
Discussion started by: llsmr777
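The attempt above adds the literal constant 5 on every line rather than field 5 ($5). A corrected sketch of the same pipeline (keeping the poster's nawk, and assuming the size is field 5 of the listing):

    ls -ltr | grep 20070916 | nawk '{ total += $5 } END { print total, "bytes" }'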

2. Shell Programming and Scripting

Summing the columns of a file

Hi All, I have a file like - num.txt 12, 34, 65, line1 34, 65, 89, line2 43, 65, 77, line3 I want to do two things - 1. Add the first three columns of each line and print the line with the largest value, i.e. (12+34+65) for the 1st line and so on. 2. Add the middle column of each line, i.e.... (3 Replies)
Discussion started by: asahlot
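A sketch covering both parts of the question, assuming the comma-plus-space field separator shown in num.txt:

    awk -F', ' '{
        t = $1 + $2 + $3                  # sum of the first three columns
        if (t > max) { max = t; best = $0 }
        mid += $2                         # running total of the middle column
    } END {
        print "largest line:", best
        print "middle column total:", mid
    }' num.txt

For the sample data this reports line2 (34+65+89 = 188 is the largest row sum) and a middle-column total of 164.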

3. UNIX for Advanced & Expert Users

Summing file size and output

Hi, I need to find the sum of the sizes of specific files in my directory. Say, for example: mydir$ ls -ltr permission links user group size date time filename I want to display the sum of the sizes of filenames matching the pattern "TS55". Note: the total file size in this directory is near 400 MB. mydir$... (1 Reply)
Discussion started by: ramkrix
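One hedged way to total only the matching files, assuming the size is field 5 of ls -l and "TS55" appears anywhere in the filename:

    ls -l *TS55* | awk '{ s += $5 } END { printf "%.2f MB\n", s / (1024 * 1024) }'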

4. Shell Programming and Scripting

summing up the fields in fixed width file

Hi, I have a fixed width file with some records as given below: " 1000Nalsdjflj243324jljlj" "-0300Njfowjljl309933fsf" " 0010Njsfsjklj342344fsl" I want to sum up the first field values (i.e. from the 2nd character to the 6th character) of each record, so for the above file I want to add (1000 - 300+... (2 Replies)
Discussion started by: srilaxmi
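Because each record begins with a double quote, characters 2 through 6 hold the signed amount; substr() pulls that slice out and awk's string-to-number coercion does the rest:

    awk '{ s += substr($0, 2, 5) } END { print s }' file
    # 1000 - 300 + 10 = 710 for the three sample records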

5. Shell Programming and Scripting

Help with file sizes

I have 2 big files in the gigabyte range. They are identical in content, and both are ","-delimited. They are created by two different processes that use the same logic. The problem is they differ by only a few bytes: e.g., one file is 202195751 bytes, the other is 202195773. So... (2 Replies)
Discussion started by: dsravan
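cmp pinpoints where near-identical files diverge; a sketch, with the file names assumed:

    cmp big1.csv big2.csv               # reports the first differing byte and line
    cmp -l big1.csv big2.csv | head     # lists each differing offset with both octal byte values

A difference of a couple of dozen bytes between otherwise-identical delimited files is often trailing whitespace or CRLF vs. LF line endings; od -c big1.csv | tail makes the final bytes visible.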

6. Homework & Coursework Questions

HELP with Unix scripts in summing columns in a file.

Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted! 1. The problem statement, all variables and given/known data: Hi guys, I'm new here, and it's my first time creating a Unix script. Can you guys help me out here? I'd... (3 Replies)
Discussion started by: ramneim

7. Homework & Coursework Questions

HELP with Unix scripts in summing columns in a file

1. The problem statement, all variables and given/known data: Hi guys, I'm new here, and it's my first time creating a Unix script. Can you guys help me out here? I'd really appreciate it. Here's my problem: This is the file I'm using; it has 6 columns, the first three columns are... (12 Replies)
Discussion started by: ramneim
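The problem statement is cut off, but the generic per-column running total in awk looks like this (the 6-column layout and the file name are assumptions):

    awk '{ for (i = 1; i <= 6; i++) sum[i] += $i }
    END  { for (i = 1; i <= 6; i++) printf "column %d: %d\n", i, sum[i] }' file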

8. UNIX for Dummies Questions & Answers

Summing lines in a file

Can anyone tell me how to sum the values in each record of a file and append that value to the end? For instance, a typical record will be: FY12,Budget,771100,,,,,,,,,250,-250 I'd like the record to become FY12,Budget,771100,,,,,,,,,250,-250,0, which can be put into another file. Thank you. (6 Replies)
Discussion started by: LearningLinux2
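In the sample record the appended 0 equals 250 + (-250), so the sum apparently starts after the third field; a sketch under that assumption (the field range is inferred from the single example, and the file names are placeholders):

    awk -F, -v OFS=, '{
        s = 0
        for (i = 4; i <= NF; i++) s += $i   # empty fields count as 0
        print $0, s
    }' infile > outfile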

9. Shell Programming and Scripting

Help summing a file using awk

I'm trying to sum a text file using AWK. Here is an example of the file: 600|3H68| 46 600|3H69| 46 600|3H6F| 290 600|3H6G| 24 600|3HDY| 1 600|3HDY| 3 600|3HE0| 1 600|3HE0| 3 I would like to sum the third field if the first... (7 Replies)
Discussion started by: Drenhead
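The condition is truncated, but grouping on the first two pipe-delimited fields while totalling the third is the standard awk associative-array pattern:

    awk -F'|' '{ sum[$1 "|" $2] += $3 }
    END { for (k in sum) print k "|" sum[k] }' file

With the sample data, the two 3HDY rows collapse to 600|3HDY|4, the two 3HE0 rows to 600|3HE0|4, and so on.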

10. Shell Programming and Scripting

Summing all fields in a file

I was playing around to see how stuff works, and was trying to sum all fields in a file. cat file 1 2 3 4 5 6 7 8 9 10 11 12 I made this script: awk 'BEGIN {OFS=RS}{$1=$1}{s+=$0} END {print "sum="s}' file This gives 15, why not 78? I test it like this awk 'BEGIN... (5 Replies)
Discussion started by: Jotne
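The short answer to the teaser's question: s += $0 converts each record to a number using only its leading numeric prefix (awk stops at the first non-numeric character), so most fields are never counted. Summing every field needs an explicit loop:

    awk '{ for (i = 1; i <= NF; i++) s += $i } END { print "sum=" s }' file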
bup-margin(1)                        General Commands Manual                        bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest
       number of prefix bits shared between any two entries. This number, n, identifies the
       longest subset of SHA-1 you could use and still encounter a collision between your
       object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB),
       and bup margin returned 45. That means a 46-bit hash would be sufficient to avoid all
       collisions among that set of objects; each object in that repository could be uniquely
       identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the
       number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin.
       Of course, because SHA-1 hashes are essentially random, it's theoretically possible to
       use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your
       repository by running bup margin occasionally to see if you're getting dangerously
       close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear,
              and report the maximum deviation of the correct answer from the guess. This is
              potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when
              used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository,
       and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown                                                                         bup-margin(1)