I have an input file with user DNs, e.g.
My script needs to generate an LDIF file based on this input file, as below:
Now I can't work out what the awk statement should look like so that "dn" is assigned the full DN from the file and uid and cn are assigned just a segment of it. For example, the output for one line should look like:
Clearly the above will not work. Any help much appreciated.
Many thanks
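The sample input and output above are not shown, so this is a minimal sketch only, assuming each input line is a full DN whose first RDN is uid=<value> and that the LDIF record needs dn:, uid: and cn: attributes (input.txt and output.ldif are placeholder names):

awk -F',' '{
    dn = $0                      # the whole line is the full DN
    split($1, rdn, "=")          # first RDN, e.g. uid=jsmith
    uid = rdn[2]
    printf "dn: %s\nuid: %s\ncn: %s\n\n", dn, uid, uid
}' input.txt > output.ldif

Adjust the printf template to whatever attribute layout your LDIF actually needs.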
Hi,
I have a file containing lines like:
word1,word2,word3,word4,..
word4,word5,word3,word6,...
Now I need to make it look like:
word1
word2
word3
word4
.
.
.
In other words, each ',' (comma) should be replaced with a newline. (5 Replies)
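Any of the standard tools can do this; two sketches (file is a placeholder name):

tr ',' '\n' < file

awk '{ gsub(/,/, "\n"); print }' file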
I have gone through all the threads in the forum and tested out different things. I am trying to split a 3GB file into multiple files. Some files are even larger than this.
For example:
split -l 3000000 filename.txt
This is very slow, and it splits the file into chunks of 3 million records each... (10 Replies)
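split is largely I/O-bound, so there is little to tune beyond the chunking strategy; two hedged alternatives, assuming GNU coreutils (part_ is a placeholder prefix):

# fixed number of lines per output file, with numeric suffixes
split -l 3000000 -d filename.txt part_

# ten roughly equal pieces without breaking lines
split -n l/10 filename.txt part_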
Hi All,
Can someone please help me write a script for the following requirement in awk, grep, sed or perl.
Buuuu xxx bbb
Kmmmm rrr ssss uuuu
Kwwww zzzz ccc
Roooowwww eeee
Bxxxx jjjj dddd
Kuuuu eeeee nnnn
Rpppp cccc vvvv cccc
Rhhhhhhyyyy tttt
Lhhhh rrrrrssssss
Bffff mmmm iiiii
Ktttt... (5 Replies)
Hi,
I have a file, say file1, with the following data:
/abc/def:ghi/jkl/ some other text
Now I want to extract only ghi/jkl/ using sed; can someone please help me?
Thanks
Sarbjit (2 Replies)
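Assuming the wanted piece is whatever sits between the first ':' and the following space, a hedged sed sketch:

sed 's|^[^:]*:\([^ ]*\).*|\1|' file1

For the sample line this prints ghi/jkl/.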
Hi,
I need help to split lines from a file into multiple files.
My input looks like this:
13
23 45 45 6 7
33 44 55 66 7
13
34 5 6 7 87
45 7 8 8 9
13
44 55 66 77 8
44 66 88 99 6
I want every 3 lines of this file to be written to an individual file. (3 Replies)
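Two hedged ways to do that (infile and the chunk_ prefix are placeholder names):

# with split: 3 lines per output file, numeric suffixes
split -l 3 -d infile chunk_

# with awk: start a new output file (chunk1, chunk2, ...) every 3 lines
awk 'NR % 3 == 1 { if (out) close(out); out = "chunk" ++n } { print > out }' infile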
Hello;
I have a file that consists of 4 tab-separated columns. The problem is the third field: some of them are very long but can be split on the vertical bar "|". Also, some of them do not contain the string "UniProt", but I can ignore that for the moment and sort the file afterwards. Here is... (5 Replies)
How can I split the following input into two strings:
Input:
1^~^2^~^3^~^4^~^5^~^6^~^7^~^8^~^9
Output:
$string1 = 1^~^2^~^
$string2 = 3^~^4^~^5^~^6^~^7^~^8^~^9
Note: the length of the string may vary, say up to 15 fields. String 1 will contain only the first two; string 2 will contain... (10 Replies)
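A hedged awk sketch, assuming the field values themselves never contain '^':

echo '1^~^2^~^3^~^4^~^5^~^6^~^7^~^8^~^9' |
awk '{
    s2 = $0
    sub(/^[^^]*\^~\^[^^]*\^~\^/, "", s2)           # drop the first two fields and their delimiters
    s1 = substr($0, 1, length($0) - length(s2))    # whatever was dropped is string1
    print "string1 = " s1
    print "string2 = " s2
}'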
I am trying to run the awk below. My question is: when I split the input and then run another awk to perform a calculation using that split as the input, there are no issues. When I try to combine them, the output is not correct. Is the split not working, or did I do it wrong? Thank you :).
input
... (8 Replies)
Dear Users,
I would appreciate your help with splitting a large file (> 1 million lines) with sed or awk. Below is the text in the file:
input file.txt
scaffold1 928 929 C/T +
scaffold1 942 943 G/C +
scaffold1 959 960 C/T +... (6 Replies)
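The post is cut off before the split criterion is given, but if the intent is one output file per scaffold (first column), a hedged awk sketch (input sorted by scaffold is assumed, so only one output file is open at a time):

awk '$1 != prev { if (prev != "") close(prev ".txt"); prev = $1 } { print > ($1 ".txt") }' file.txt

If the goal is simply fixed-size chunks instead, the split -l approach from the earlier posts applies unchanged.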
Greetings,
I have a string that looks like:
Network "123" "ABC"
I need to make it look like:
Network "123"
Network "ABC"
Help please?
Thanks again
(2 Replies)
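Assuming every such line has the keyword in field 1 followed by any number of quoted values, a hedged awk sketch:

awk '{ for (i = 2; i <= NF; i++) print $1, $i }' file

For the sample line this prints Network "123" and Network "ABC" on separate lines.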
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
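As a rough worked illustration of that arithmetic, using only the figures quoted above: the 11-million-object repository needed 45 bits, leaving 160 - 45 = 115 bits of margin; at roughly 2 bits per doubling, the object count could double about 57 more times (a factor of around 2^57) before the full 160 bits were needed.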
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.