Now, how do I import the dump into the Oracle server in order to create the database? What are the prerequisites for this?
Could you please stop wrapping arbitrary words in CODE tags? CODE tags are for marking code (as the name suggests) and terminal output, not for highlighting; consider using bold or italic text for that instead.
To answer your question: that depends on how the dump was created. If it was created with the "exp" utility and the database schema is included in the export,
create the user, grant the necessary rights (CREATE SESSION and IMP_FULL_DATABASE), then import with the "imp" utility:
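A minimal sketch of those steps, assuming the classic "exp" format; the user name, password, tablespace, and file names are placeholders:

```shell
# Create the user and grant the needed privileges (run as a DBA):
sqlplus / as sysdba <<'EOF'
CREATE USER scott IDENTIFIED BY tiger
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, IMP_FULL_DATABASE TO scott;
EOF

# Import the dump file with the classic "imp" utility:
imp scott/tiger file=export.dmp full=y log=import.log
```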
If the export was done with "expdp", then use "impdp" instead of the "imp" command above:
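A sketch for a Data Pump dump; the directory object name and file names are assumptions:

```shell
# "directory" must name a directory object defined in the database,
# not an operating-system path:
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=export.dmp \
      logfile=import.log full=y
```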
Hi all,
I want to restore a DB dump file on many MySQL servers. I already use a script to send the dump file to all the servers, but restoring it on each server by hand is tedious; I want to run just one script that restores it on all the remote MySQL servers. I wrote a script but not... (2 Replies)
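One way to sketch this, assuming credentials are configured in ~/.my.cnf; the host list, database name, and dump path are placeholders:

```shell
#!/bin/sh
# Restore the same dump on several remote MySQL servers in one pass.
DUMP=/tmp/backup.sql          # dump file already present locally
for host in db1 db2 db3; do   # placeholder host names
    if mysql -h "$host" -u admin mydb < "$DUMP"; then
        echo "restored on $host"
    else
        echo "FAILED on $host" >&2
    fi
done
```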
Hi all,
Help needed urgently.
I am writing a shell script that reads records from a flat (.txt) file and loads the data into an Oracle database. The script works fine, but it takes too long: around 90 minutes for 18,000 records.
I guess it takes so long... (1 Reply)
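Row-by-row inserts through sqlplus are usually the bottleneck here; SQL*Loader loads the whole file in bulk. A sketch, assuming a comma-separated file and placeholder table/column names:

```shell
# Control file describing the flat file's layout:
cat > load.ctl <<'EOF'
LOAD DATA
INFILE 'data.txt'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2, col3)
EOF

# direct=true uses the direct path load for extra speed:
sqlldr scott/tiger control=load.ctl log=load.log direct=true
```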
Hi for all!
For my job I need a fairly specific shell script that connects to Oracle databases and writes to a text file. Specifically:
1. It must take a file as a parameter on the Linux command line:
> example.sh fileToRead
2. It must read "fileToRead" line by line and take a... (9 Replies)
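A sketch of such a script; the credentials, table, and SQL are placeholders:

```shell
#!/bin/sh
# Usage: example.sh fileToRead
[ $# -eq 1 ] || { echo "usage: $0 fileToRead" >&2; exit 1; }

# Read the input file line by line and query Oracle for each value,
# appending the results to a text file.
while IFS= read -r key; do
    sqlplus -s scott/tiger <<EOF >> result.txt
set heading off feedback off pagesize 0
SELECT some_col FROM some_table WHERE id = '$key';
EOF
done < "$1"
```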
Hi,
I need to import more than 250K records into another (Oracle) database, but I want a particular column's value to be changed in the destination table. Is it possible to do this during the export or import process without modifying the data in the original table? I do not want to run an UPDATE manually.
... (6 Replies)
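One approach, assuming Oracle 11g or later: Data Pump's REMAP_DATA rewrites a column on the fly during import, so neither the source data nor a manual UPDATE is needed. The table, column, package, and function names here are placeholders:

```shell
# remap_pkg.fix_status must be a PL/SQL function in the target database
# taking and returning the column's datatype; every imported value for
# orders.status is passed through it.
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=export.dmp \
      tables=orders \
      remap_data=orders.status:remap_pkg.fix_status
```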
Hi All,
I have a full Oracle dump file that I exported from a production server, and I want to import one specific schema out of that full dump.
Is that possible in Oracle? What would the command be? (6 Replies)
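Both import utilities can pull a single schema out of a full dump. A sketch with placeholder names:

```shell
# Classic export (made with "exp"):
imp system/manager file=full.dmp fromuser=hr touser=hr log=imp_hr.log

# Data Pump export (made with "expdp"):
impdp system/manager directory=DATA_PUMP_DIR dumpfile=full.dmp schemas=hr
```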
Hi All,
I am facing a problem importing an Oracle dump file.
We have two AIX boxes of different versions, both with Oracle 10.2.0.4.0 installed.
On one AIX box (version 6.1) we create an Oracle dump file (*.dmp) using the exp utility and import it onto the other AIX box... (1 Reply)
Hi all,
I have a problem with an Oracle database installed on a Linux server: I need to export it as a dump file, but when I try, I get the error below.
# expdp system/oracle directory=test dumpfile=Prodfb20150311.dmp logfile=Prodfb20150311.log FULL=y
Export: Release 11.2.0.1.0 -... (2 Replies)
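A common cause of expdp failing right away is that directory=test does not match an existing directory object, or the user cannot write to it. A sketch of the fix, with a placeholder path:

```shell
sqlplus / as sysdba <<'EOF'
CREATE OR REPLACE DIRECTORY test AS '/u01/exports';
GRANT READ, WRITE ON DIRECTORY test TO system;
EOF
```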
Hi Guys,
I would like your opinion/suggestions/help on my Unix script, which exports data from DB2 to a delimited file and imports it into Oracle:
db2 connect to <database> user <userid> using <password>
db2 "EXPORT TO '/cardpro/brac/v5/dev/dat/AAAAA.DEL' OF DEL select * FROM AAAAA"
db2 "EXPORT TO... (3 Replies)
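For the Oracle side of such a script, SQL*Loader can read the DEL (comma-delimited) files DB2 produces. A sketch with placeholder credentials and column names:

```shell
cat > aaaaa.ctl <<'EOF'
LOAD DATA
INFILE '/cardpro/brac/v5/dev/dat/AAAAA.DEL'
APPEND INTO TABLE aaaaa
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)
EOF

sqlldr scott/tiger control=aaaaa.ctl log=aaaaa.log
```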
1. The problem statement, all variables and given/known data:
Are Oracle dump files compatible for direct import into DB2?
I have already tried many times, but the results are always truncated.
Can anyone help, advise, or suggest something?
2. Relevant commands, code, scripts, algorithms:
exp... (3 Replies)
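Short answer: no; files written by exp/expdp are in Oracle's proprietary format, and DB2 cannot read them. A common workaround is to go through a flat file; a sketch with placeholder table names and credentials:

```shell
# Dump an Oracle table to CSV with sqlplus:
sqlplus -s scott/tiger <<'EOF' > emp.csv
set heading off feedback off pagesize 0
SELECT empno || ',' || ename || ',' || sal FROM emp;
EOF

# Load the CSV into DB2 as a DEL file:
db2 connect to MYDB user db2inst1 using secret
db2 "IMPORT FROM 'emp.csv' OF DEL INSERT INTO emp"
```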
Discussion started by: Sonny_103024
LEARN ABOUT DEBIAN
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
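The observed numbers line up with the birthday bound: among n random hashes, the longest shared prefix is about 2*log2(n) bits, which is why the margin grows by roughly one to two bits per doubling. A quick check for the 11-million-object example above:

```shell
# Expected longest shared prefix (in bits) among n random hash values:
n=11000000
awk -v n="$n" 'BEGIN { printf "%.0f bits\n", 2 * log(n) / log(2) }'
```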
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.