Full Discussion: Shell script - group by
Post 302900198 by baladelaware73 on Monday 5th of May 2014 11:32:48 AM

Thanks, this is a great approach. It worked for me.
 

9 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

HELP! Group by in shell script (awk/sed?)

Hello, could some expert soul please help me with this? I have the following file format - task time abc 5 xyz 4 abc 5 xyz 3 ddd 10 ddd 2 I need to generate output as - task ... (5 Replies)
Discussion started by: sncoupons
5 Replies
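
Discussion 1's task/time summation is the classic awk group-by: index an associative array by the task name and add up the second column. A minimal sketch, assuming the data sits in a file called tasks.txt (a name not given in the post) and keeps the one-line "task time" header shown above:

    # skip the header, then accumulate column 2 per task and print the totals
    awk 'NR > 1 { sum[$1] += $2 } END { for (t in sum) print t, sum[t] }' tasks.txt | sort

With the sample rows this prints abc 10, ddd 12 and xyz 7, one group per line.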

2. Shell Programming and Scripting

Shell script to rename a group of files

Hello, I have 1800 files in a directory with a specified format, like amms_850o_prod.000003uNy amms_850o_prod.000003u8x amms_850o_prod.000003taP amms_850o_prod.000003tKy amms_850o_prod.000003si4 amms_850o_prod.000003sTP amms_850o_prod.000003sBg amms_850o_prod.000003rvx... (12 Replies)
Discussion started by: atlantis
12 Replies
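
The preview in discussion 2 does not say what the 1800 files should be renamed to, so any example can only be a template. A hedged sketch of the usual loop, where the replacement prefix amms_850o_test is purely an illustration:

    # loop over the files, strip the old prefix, and attach the new one;
    # adjust the target name to whatever the real renaming rule is
    for f in amms_850o_prod.*; do
        mv -- "$f" "amms_850o_test${f#amms_850o_prod}"
    done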

3. Shell Programming and Scripting

"group by" using shell script?

Not sure if it's called "group by", but what I'm trying to do is this: I have a file below: 192.168.1.10 192.168.1.10 192.168.1.10 192.168.1.11 192.168.1.15 192.168.1.15 192.168.1.20 192.168.1.22 then I hope to get the result like this: 192.168.1.10 : 3 192.168.1.11 : 1... (6 Replies)
Discussion started by: tiger2000
6 Replies
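
The "group by" asked for in discussion 3 is an occurrence count, the simplest form of grouping. A short sketch, assuming the addresses live one per line in a file called ips.txt:

    # count how many times each address appears and print it in "ip : count" form
    awk '{ count[$1]++ } END { for (ip in count) printf "%s : %d\n", ip, count[ip] }' ips.txt | sort

sort ips.txt | uniq -c produces the same counts, just with the number printed first.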

4. Solaris

su: No shell/No directory! if sys is added to a users secondary group

Hi, When I include a user to the secondary group "sys" GID=3 in Solaris 9 OS I'm not able to login. I get these error. The user home directory and the shell exists. Is this because of any security hardening. # su - agent No directory! # su agent su: No shell # grep taddm /etc/passwd... (14 Replies)
Discussion started by: agent001
14 Replies

5. Shell Programming and Scripting

Shell Script to ignore # and take corresponding user and group

Hi, I have a following file: role.IMPACT_USER.user=admin role.IMPACT_USER.user=dd12345 role.IMPACT_USER.user=ss76767 #role.IMPACT_USER.user=root #role.IMPACT_USER.group=System role.IMPACT_USER.group=ImpactUser #Description: Allow users to login in to Impact, start and stop service... (5 Replies)
Discussion started by: dbashyam
5 Replies
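
For the properties-style file in discussion 5, commented entries can be skipped with a ^# test and the value read from the part after the = sign. A minimal sketch, assuming the file is called roles.properties (a name made up for the example):

    # skip lines starting with '#', split on '=', and report user and group entries
    awk -F'=' '!/^#/ && /\.user=/  { print "user:",  $2 }
               !/^#/ && /\.group=/ { print "group:", $2 }' roles.properties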

6. Shell Programming and Scripting

Help Linux Shell Group exists

I am having some problems writing a shell script, as follows: the script runs but returns no results. echo "enter group name: " dir="/home" read group if id -g $group > /dev/null 2>&1 then echo "group exits" else echo... (6 Replies)
Discussion started by: kingkner
6 Replies
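
The snippet quoted in discussion 6 tests id -g $group, but id looks up a user rather than a group, so it cannot answer whether the group exists. On Linux the usual check is getent group; a corrected sketch along those lines:

    # getent succeeds only if the named group is present in the group database
    printf "enter group name: "
    read group
    if getent group "$group" > /dev/null 2>&1
    then
        echo "group exists"
    else
        echo "group does not exist"
    fi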

7. Shell Programming and Scripting

Help me to perform count & group by operation in shell scripting?

Hi All, I want to display the distinct values in the file and, for each distinct value, how many occurrences there are. Test data: test1.dat 20121105 20121105 20121105 20121105 20121106 20121106 20121106 20121105 I need to display the output like Output (2 Replies)
Discussion started by: bbc17484
2 Replies
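
Discussion 7's distinct-values-with-counts needs no awk at all; sort plus uniq -c is enough. Using the test1.dat name from the post:

    # group identical lines together, then count each run
    sort test1.dat | uniq -c

With the sample data it reports 5 occurrences of 20121105 and 3 of 20121106.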

8. Shell Programming and Scripting

Shell Script to Group by Based on Multiple Fields in a file

Hi, I want to know if there is any simple approach to SUM a field based on a group-by of different fields. For example, file1.txt contains the data below: 20160622|XXX1||50.00||50.00|MONEY|Plan1| 20160622|XXX1||100.00||100.00|MONEY|Plan1| 20160623|XXX1||25.00||25.00|MONEY|Plan1|... (3 Replies)
Discussion started by: cnu_theprince
3 Replies
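
Discussion 8's multi-field group-by works the same way as the single-key case, just with a composite key. A sketch against the file1.txt shown above, assuming (purely for illustration) that field 4 is the amount to sum and that the date, account and plan fields form the grouping key:

    # build a composite key from the grouping fields and accumulate the amount per key
    awk -F'|' '{ key = $1 FS $2 FS $8; sum[key] += $4 }
               END { for (k in sum) print k, sum[k] }' file1.txt | sort

For the sample rows this yields 20160622|XXX1|Plan1 150 and 20160623|XXX1|Plan1 25.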

9. Shell Programming and Scripting

Emulate group-by in shell script

Hello All, I saw this problem on one of the forums and solved it using group-by in Oracle SQL, though I am a bit curious to implement it using a shell script: There is a file having a number of operations: Opeation,Time-Taken operation1,83621 operation2,72321 operation3,13288... (11 Replies)
Discussion started by: mukulverma2408
11 Replies
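
The operation/time file in discussion 9 is the same pattern with a comma separator. A sketch, assuming the file is called operations.csv (a name not given in the post) and that the total and average time per operation are what the SQL group-by produced:

    # skip the header, then track total and count per operation to report sum and average
    awk -F',' 'NR > 1 { sum[$1] += $2; cnt[$1]++ }
               END { for (op in sum) printf "%s %d %.1f\n", op, sum[op], sum[op]/cnt[op] }' operations.csv | sort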
bup-margin(1)                                        General Commands Manual                                        bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared
       between any two entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a
       collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That
       means a 46-bit hash would be sufficient to avoid all collisions among that set of objects; each object in that repository
       could be uniquely identified by its first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1
       hashes have 160 bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's
       theoretically possible to use many more bits with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin
       occasionally to see if you're getting dangerously close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and report the maximum deviation of
              the correct answer from the guess. This is potentially useful for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used with --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.
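
The DESCRIPTION above suggests running bup margin occasionally as a monitoring step. A hedged sketch of what such a check might look like in a shell script; the 140-bit threshold is an arbitrary illustration, not something the man page prescribes:

    # extract the "NN matching prefix bits" figure and warn when the remaining margin gets small
    bits=$(bup margin | awk '/matching prefix bits/ { print $1 }')
    if [ "$bits" -gt 140 ]; then
        echo "warning: objects already share $bits prefix bits (only $((160 - bits)) bits of margin left)"
    fi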