Mapping data in a file and deleting a line from the source file if the data does not exist.
Hi Guys,
Please help me with my problem here:
I have a source file:
and a mapping file:
Lets assume that these are all fixed length files.
What I want to happen is this: if the data in the 4th column of the source file (positions 50-69) does not exist in the 4th column of the mapping file (positions 40-59), delete that line from the source file.
This should result like this:
The line:
is deleted since the QWQW2 data is not in the mapping file.
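A minimal sketch of one way to do this with awk's substr(), assuming the positions given above (source key at columns 50-69, mapping key at columns 40-59); the file names and sample data here are made up for the demo:

```shell
# Build toy fixed-width files (names and contents are invented).
# Mapping file: key occupies columns 40-59.
printf '%-39s%-20s\n' '' MATCH1 >  map.txt
# Source file: key occupies columns 50-69.
printf '%-49s%-20s\n' '' MATCH1 >  src.txt
printf '%-49s%-20s\n' '' QWQW2  >> src.txt

# First pass (NR==FNR) collects the mapping keys; second pass keeps
# only source lines whose columns 50-69 appear among those keys.
awk 'NR==FNR { keys[substr($0,40,20)]; next }
     substr($0,50,20) in keys' map.txt src.txt > src.new
cat src.new
```

The QWQW2 line is dropped because its key never appears in map.txt, matching the behaviour described in the post.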
Hi there,
I have written a script called "compare" (see below) to compare two files, namely test_put.log and Output_A0.log:
#!/bin/ksh
while read file
do
    found="no"
    while read line
    do
        if echo "$line" | grep "$file" > /dev/null
        then
            echo "$file found"
            found="yes"
            break
        fi... (3 Replies)
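A minimal alternative sketch, assuming the goal is to find which lines of Output_A0.log contain a string listed in test_put.log — a single grep pass replaces both read loops. The sample contents below are made up:

```shell
# Throwaway stand-ins for the two logs (contents invented for the demo):
printf 'alpha\nbeta\n'            > test_put.log
printf 'x alpha y\nz gamma w\n'   > Output_A0.log

# -F treats each line of test_put.log as a fixed string, -f reads the
# patterns from that file; matching lines of Output_A0.log are printed.
grep -F -f test_put.log Output_A0.log
```

Here only "x alpha y" is printed, since "alpha" is the one listed string that occurs in Output_A0.log.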
I want to process several data files, i.e., data.*.txt, with the following structure
MSG|20010102|123 125 4562 409|SEND
MSG|20010102|120 230|SEND
MSG|20010102|120 204 5071|SEND
MSG|20010103|2 11 1098 9810|SEND
......
index file index.txt is
11
201
298
100
......
What I want to do is:
1)... (0 Replies)
This is a shell programming assignment.
It needs to create a file called .std_dbrc that contains
STD_DBROOT=${HOME}/class/2031/Assgn3/STD_DB
(which includes all my simple database files)
and I am gonna use this .std_dbrc in my script file (read the data from the database files)
like this: .... (3 Replies)
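A minimal sketch of the rc-file piece, using the directory layout named in the assignment; the quoted heredoc keeps ${HOME} unexpanded inside .std_dbrc, so it expands when the file is sourced:

```shell
# Create the database directory the assignment describes:
mkdir -p "${HOME}/class/2031/Assgn3/STD_DB"

# Write .std_dbrc with the assignment's exact line:
cat > "${HOME}/.std_dbrc" <<'EOF'
STD_DBROOT=${HOME}/class/2031/Assgn3/STD_DB
EOF

# A script then reads the setting back with the "." (source) command:
. "${HOME}/.std_dbrc"
echo "$STD_DBROOT"
```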
Hi there
I would like to create a shell script to do the following:
- delete a line in file1 if it contains the data string in file2
eg: file1
1 100109942004051510601703694 0.00 0.00
2 100109942004051510601702326 0.00 0.00
3 ... (1 Reply)
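A sketch with grep, assuming file2 holds one data string per line and any file1 line containing one of those strings should go; the sample data is shortened from the post:

```shell
# Shortened stand-ins for the two files:
printf '1 100109942004051510601703694 0.00 0.00\n2 200\n' > file1
printf '100109942004051510601703694\n' > file2

# -F: fixed strings, -f: read the strings from file2, -v: invert the
# match, i.e. keep only file1 lines containing none of the strings.
grep -F -v -f file2 file1 > file1.new
cat file1.new
```

Only the "2 200" line survives, since the first line contains the string listed in file2.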
I need to delete the lines that match between files data 1 and data 2.
Please help?
data 1
4825307
4825308
4825248
4825309
4825310
4825311
4825336
data 2
4825248 0100362210 Time4Meal 39.00 41.73 MO & MT MT SMS
4825305 0100367565... (2 Replies)
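One reading of the request is "remove from data 2 every line whose first field appears in data 1" — a sketch under that assumption, with sample data trimmed from the post:

```shell
# Trimmed stand-ins for the two files:
printf '4825307\n4825248\n' > data1
printf '4825248 0100362210 Time4Meal 39.00\n4825305 0100367565 x\n' > data2

# First pass stores the data1 IDs; second pass prints only data2 lines
# whose first field is NOT among them.
awk 'NR==FNR { ids[$1]; next } !($1 in ids)' data1 data2
```

The 4825248 line is dropped because that ID appears in data 1; inverting the condition would instead keep only the matches.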
Hi.. I've hit a bump trying to solve this problem. I have a file with contents like below.
<blankline>
'pgmId' : 'UNIX',
'pgmData' : 'textfile',
'author' : 'admin',
.......
Now I'm trying to insert new data after pgmId, so the final output will be... (7 Replies)
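A minimal sketch with sed's append command; the 'pgmVer' key and its value are invented here just to show the insertion, since the post's intended new data is truncated:

```shell
# Toy copy of the file (truncated from the post):
cat > pgm.txt <<'EOF'
'pgmId' : 'UNIX',
'pgmData' : 'textfile',
'author' : 'admin',
EOF

# a\ appends the following line after every line matching /'pgmId'/;
# the inserted key/value pair is made up for this demo.
sed "/'pgmId'/a\\
'pgmVer' : '1.0'," pgm.txt > pgm.new
cat pgm.new
```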
I have a list of DNS servers I need to look up information on. Each of these servers has a master and a slave database. Essentially what I need to do is create two text files for each server. One with the Master view and one with the Slave view. There's 20 servers, in the end I should have 40 text... (4 Replies)
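A sketch of the two-files-per-server loop only; the server names are placeholders, and the echo stands in for whatever per-view query the setup actually needs (e.g. dig against a view-specific address or with a TSIG key), since those details aren't in the post:

```shell
# Placeholder server list; the real 20 names would come from the post.
servers="ns1.example.com ns2.example.com"

for s in $servers
do
    for view in master slave
    do
        # Replace this echo with the real per-view query command:
        echo "output for $s, $view view" > "${s}.${view}.txt"
    done
done
```

With 20 servers the loop yields the 40 files described, one per server per view.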
Hello,
:wall:
I have a 12 column csv file. I wish to delete the entire line if column 7 = hello and column 12 = goodbye. I have tried everything that I can find in all of my ref books.
I know this does not work
/^*,*,*,*,*,*,"hello",*,*,*,*,"goodbye"/d
Any ideas?
Thanks
Please... (2 Replies)
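A field-count problem like this is easier in awk than in a sed/vi address pattern — a sketch assuming a simple CSV (no embedded commas) with quoted values as shown in the post:

```shell
# Toy 12-column CSV (data invented for the demo):
cat > data.csv <<'EOF'
a,b,c,d,e,f,"hello",h,i,j,k,"goodbye"
a,b,c,d,e,f,"hello",h,i,j,k,"stay"
EOF

# Print a line only if it is NOT the case that field 7 is "hello"
# AND field 12 is "goodbye" (quotes are part of the field text here).
awk -F, '!($7 == "\"hello\"" && $12 == "\"goodbye\"")' data.csv
```

Only the first line is deleted; the second survives because its 12th field differs.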
Hi All,
I have a requirement where I need to format the input RAW file (which is CSV) by using another mapping file (also a CSV file). Basically, I am getting a feed file with dynamic headers; using the mapping file (in which each target field is mapped to a source field) I have to convert the raw file into... (6 Replies)
Hi
How do I compare the source definition file in UNIX with the data file?
Please can you share an example if someone has done it before (3 Replies)
Discussion started by: Raj4fusion
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.