In order to eliminate newline characters and empty lines from our data, I used the syntax below,
and it eliminated all the newline characters and empty lines in my data.
The exception is the sample data below: this is one single record, not two. When I use the above-mentioned syntax, awk removes the newline character but still treats this as two records:
1. Entering the serial number a second time forces me to use phone service rather than online.
2. I noticed that with my new PS3, when I tried to download a movie, it said another PS3 (the one I sent in) is still activated. How am I supposed to deactivate"|10/26/2009 0:00:00.
Can someone please help me?
Thanks.
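One hedged approach (the original awk syntax isn't shown in the post): treat a record as complete only when the terminating date field appears, and keep appending wrapped lines until then. The terminator pattern |MM/DD/YYYY is an assumption based on the sample data.

```shell
# Sketch: join wrapped lines into one record, assuming every complete
# record ends with a date/time field such as |10/26/2009 0:00:00.
printf '%s\n' \
  '1. Entering the serial number a second time forces me to use phone service.' \
  '2. How am I supposed to deactivate"|10/26/2009 0:00:00.' \
  '' |
awk '
NF == 0 { next }                          # drop empty lines
{ rec = (rec == "" ? "" : rec " ") $0 }   # append this line to the record
/\|[0-9]+\/[0-9]+\/[0-9]+/ {              # record ends at |MM/DD/YYYY ...
    print rec
    rec = ""
}'
```

The two input lines and the blank line come out as a single joined record.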
Hello,
I am getting a few text files with no end-of-line character on the last line, which I need to fix with a command so that I can include the fix in a script. Right now I fix the issue by opening the file in "vi" and re-saving the last line.
e.g.,
server1#wc -l temp.txt
9 temp.txt
server1#cat ... (6 Replies)
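This can be done from the command line, so vi is not needed. A command substitution strips a trailing newline, so `$(tail -c 1 file)` is empty exactly when the file already ends with one; the one-line temp.txt below stands in for the real 9-line file.

```shell
# Append a newline only when the file's last byte is not already one.
printf 'last line without newline' > temp.txt      # demo file
if [ -n "$(tail -c 1 temp.txt)" ]; then
    printf '\n' >> temp.txt                        # add the missing EOL
fi
wc -l temp.txt
```

Running it a second time changes nothing, so it is safe to put in a script unconditionally.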
I have a file something like:
a     b c    d
sdfsdf    f f   f
f   fffff
dfdf
Now when I read it as
while read line
do
echo $line
done< file
I get the output as
a b c d
sdfsdf f f f
f fffff
dfdf (5 Replies)
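The spacing collapses because the unquoted `$line` in `echo $line` is re-split by the shell on whitespace. A sketch of the usual fix: clear IFS so `read` keeps leading/trailing blanks, use `-r` so backslashes survive, and quote the expansion.

```shell
# Preserve the original spacing of each line.
printf '%s\n' 'a     b c    d' 'sdfsdf    f f   f' > file   # demo input
while IFS= read -r line; do
    printf '%s\n' "$line"     # quoted, so the spacing is kept as-is
done < file
```

`printf '%s\n'` is also safer than `echo` for lines that might start with `-` or contain backslashes.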
Hi,
I am reading data from a file and storing it in a multiline variable. Every line is separated with a "\n" character.
globalstrval="${globalstrval}""${line}""\n"
If the value of globalstrval is like:
1234
ABCD
EFGH
WXYZ
....
If I do,
YLvar=`echo $globalstrval | grep "ABC"`
then... (1 Reply)
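The unquoted `echo $globalstrval` flattens the variable onto one line before grep sees it. A sketch of a fix, assuming the appended `"\n"` is a literal backslash-n as in the snippet above: expand the escapes with `printf '%b'` and quote the variable so the lines stay separate.

```shell
# The stored "\n" is a literal backslash-n; printf '%b' turns it into
# real newlines, and the quotes keep them intact for grep.
globalstrval='1234\nABCD\nEFGH\nWXYZ\n'
YLvar=$(printf '%b' "$globalstrval" | grep "ABC")
printf '%s\n' "$YLvar"    # ABCD
```

If the variable were built with real newlines instead (`globalstrval="$globalstrval$line
"`), a plain quoted `printf '%s' "$globalstrval" | grep "ABC"` would do.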
Hi,
This is the script and the error I am receiving.
Can anyone please suggest a fix?
For the example below, assume we are using vg01.
#!/bin/ksh
echo "##### Max Mount Count Fixer #####"
echo "Please insert Volume Group name to check"
read VG
lvs | grep "$VG" | awk '{print $1}' > /tmp/audit.log
... (2 Replies)
Hi ,
I am doing some enhancements to an existing shell script. It uses the awk command in a function as below:
float_expr() {
    IFS=" " command eval 'awk "
        BEGIN {
            result = $*
            print result
            exit(result == 0)
        }"'
}
It calls the function float_expr to evaluate two values ,... (1 Reply)
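For reference, here is how that function behaves: `eval` splices the positional parameters (`$*`, joined with single spaces by `IFS=" "`) into the awk program, so the arguments become an awk arithmetic expression; the exit status is non-zero when the result is 0. A self-contained usage sketch:

```shell
# Same function as in the script, with example calls.
float_expr() {
    IFS=" " command eval 'awk "
        BEGIN {
            result = $*
            print result
            exit(result == 0)
        }"'
}

float_expr "3.5 * 2"    # prints 7
float_expr 10 / 4       # prints 2.5
```

`command` before `eval` keeps the `IFS=" "` assignment temporary in POSIX shells, since `eval` is otherwise a special builtin.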
Hi,
I have a major problem in my source files. I have around 10 source files (around 20 GB). In one of the source files I am getting a broken-line issue. My source files need to look like this:
Sr.No~Ref.No~Address~Acc.No
1~ABC345~No.2/110~456474
2~ABC786~4w/54~458695
... (4 Replies)
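A hedged sketch for re-joining the broken lines, assuming every complete record has exactly 4 "~"-separated fields (Sr.No~Ref.No~Address~Acc.No): keep gluing input lines together until a full record has accumulated.

```shell
# Re-join records split across lines; a record is complete once it
# contains 4 "~"-separated fields.
printf '%s\n' '1~ABC345~No.2/110~456474' '2~ABC786~4w/' '54~458695' |
awk '
{ rec = rec $0; n = split(rec, f, "~") }   # append line, count fields
n >= 4 { print rec; rec = "" }             # full record: emit and reset
'
```

With this input, the second and third lines come out re-joined as `2~ABC786~4w/54~458695`. If a field itself could contain "~", a stricter terminator test would be needed.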
Heyas
I'm trying to read a file, display its content, and put borders around it (tui-cat / tui-cat -t(ypewriter)).
The typewriter part is a bonus and still has its own flaws, but that's for later.
So in a way, I'm trying to rewrite cat using bash and other commands.
But sadly it fails on... (2 Replies)
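Without the failing code it's hard to say where it breaks, but here is a minimal bordered-cat sketch (border_cat is a hypothetical name, not the tui-cat code): measure the widest line first, then frame the file.

```shell
# Print a file inside a +---+ frame, padded to the widest line.
border_cat() {
    w=0
    while IFS= read -r line; do               # pass 1: widest line
        [ "${#line}" -gt "$w" ] && w=${#line}
    done < "$1"
    edge="+$(printf "%$((w + 2))s" '' | tr ' ' '-')+"
    printf '%s\n' "$edge"
    while IFS= read -r line; do               # pass 2: framed output
        printf "| %-${w}s |\n" "$line"
    done < "$1"
    printf '%s\n' "$edge"
}

printf 'hi\nhello\n' > demo.txt
border_cat demo.txt
rm -f demo.txt
```

Note the two passes re-read the file, so this only works on regular files, not pipes.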
Hi all, we have a performance issue in Unix reading a file line by line.
I am looking at processing all the records.
Description: our script reads data from a flat file, picks up the first four characters, sets up variables based on that value, and appends the final output to... (11 Replies)
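The script itself isn't shown, but the usual fix for this pattern is to move the per-line work out of the shell loop and into awk, which processes the whole file in one process; `substr($0, 1, 4)` is the four-character key described above. The per-key logic here is only a stand-in.

```shell
# Sketch: classify lines by their first four characters inside awk
# instead of a shell read loop.
printf '%s\n' 'HDR1 first record' 'DTL2 second record' |
awk '{
    key = substr($0, 1, 4)    # first four characters of the line
    print key "|" $0          # stand-in for the real per-key handling
}'
```

On large files this is typically orders of magnitude faster than `while read` with external commands inside the loop.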
All, we have a performance issue reading a file line by line; please find the scripts attached. Currently it takes about 45 minutes to parse 512444 lines.
Could you please have a look and suggest ways to improve the performance?
Thanks,
Balu
... (12 Replies)
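Without seeing the attached scripts this is a guess, but a frequent cause of a 45-minute loop is launching external commands for every line, e.g. `key=$(echo "$line" | cut -c1-4)` forks a pipeline 512444 times. A builtin does the same work with no external command:

```shell
# Extract the first four characters with printf precision (%.4s)
# instead of an echo|cut pipeline per line.
line='ABCD rest of the record'
key=$(printf '%.4s' "$line")    # first four characters
printf '%s\n' "$key"            # ABCD
```

In ksh93/bash, `key=${line:0:4}` avoids even the command substitution's subshell.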
I am trying to read the file below line by line in order to:
i) extract the directory alone and assign it to one variable;
ii) extract the permissions on the line, separate them with commas, and assign them to another variable;
iii) finally, apply the setfacl logic as shown in... (3 Replies)
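The input file isn't shown, so the line format here is an assumption: "<directory> <acl-entry> <acl-entry>...", e.g. "/data/app u:alice:rwx g:dev:r-x" (acl_list.txt is a hypothetical name). The sketch prints the setfacl commands first; drop the surrounding printf once the output looks right.

```shell
# Parse each line into a directory and a comma-joined ACL list,
# then show the setfacl command that would be run.
printf '%s\n' '/data/app u:alice:rwx g:dev:r-x' > acl_list.txt   # demo input
while IFS= read -r line; do
    dir=${line%% *}                                   # first field: directory
    acl=$(printf '%s' "${line#* }" | tr ' ' ',')      # comma-join the entries
    printf 'setfacl -m %s %s\n' "$acl" "$dir"         # dry run; remove printf to apply
done < acl_list.txt
rm -f acl_list.txt
```

setfacl's `-m` accepts a comma-separated entry list, which is why the permissions are joined with commas.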
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                General Commands Manual                bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-                                                        bup-margin(1)