01-24-2017
The import will probably take longer than a plain export because it is writing: the data must be committed (i.e. to disk, not just a query-level commit), redo logs are written, tables may be extended, and so on. Beware that cancelling your loader will do one of two things:
- Commit an incomplete load (which you might need to tidy up)
- Roll back, which can also take a long time.
To clarify:
- is this an extraordinarily long delay?
- can you still connect to the database with another session?
It might be that the redo/undo areas are full or, if the redo logs are archived to disk, that the filesystem holding them is full. Have a look at the Oracle alert log to see if that's the case.
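For example, a quick first check might look like this (the alert-log path is an assumption, typical of 11g and later, and will differ per site):

```shell
# Is any filesystem (for example the one holding redo/archive logs) full?
df -h

# Look for ORA- errors (archiver stuck, unable to extend, etc.) in the
# alert log; this path is illustrative and site-specific:
# tail -50 $ORACLE_BASE/diag/rdbms/orcl/orcl/trace/alert_orcl.log
```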
sqlldr will commit at the end anyway. I think you can use the parameter ROWS=1000 (or a suitable number) on sqlldr to force more regular commits, at which point you can use another session to count the rows imported into your table so far.
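A sketch of what that might look like (the credentials, control file, and table name are all placeholders, not taken from your setup):

```shell
# Commit every 1000 rows instead of one commit at the very end
sqlldr userid=scott/tiger control=mytable.ctl rows=1000 log=mytable.log

# Meanwhile, from a second session, watch the count grow:
echo "select count(*) from mytable;" | sqlplus -s scott/tiger
```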
Does that help?
Robin
mount.ocfs2(8) OCFS2 Manual Pages mount.ocfs2(8)
NAME
mount.ocfs2 - mount an OCFS2 filesystem
SYNOPSIS
mount.ocfs2 [-vn] [-o options] device dir
DESCRIPTION
mount.ocfs2 mounts an OCFS2 filesystem at dir. It is usually invoked indirectly by the mount(8) command when using the -t ocfs2 option.
OPTIONS
_netdev
The filesystem resides on a device that requires network access (used to prevent the system from attempting to mount these filesystems until the network has been enabled on the system). mount.ocfs2 transparently appends this option during mount. However, users mounting the volume via /etc/fstab must explicitly specify this mount option to delay the system from mounting the volume until after the network has been enabled.
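For instance, an /etc/fstab entry that follows this advice might look like the following (the device and mount point are made up for illustration):

```text
# /etc/fstab -- _netdev must be listed explicitly here
/dev/sdb1   /mnt/ocfs2   ocfs2   _netdev,defaults   0 0
```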
atime_quantum=nrsec
The file system will not update atime unless this number of seconds has passed since the last update. Set to zero to always update atime. The default is 60 seconds.
relatime
The file system only updates atime if the previous atime is older than mtime or ctime.
noatime
The file system will not update access time.
acl / noacl
Enables / disables POSIX ACLs (Access Control Lists) support.
user_xattr / nouser_xattr
Enables / disables Extended User Attributes.
commit=nrsec
Sync all data and metadata every nrsec seconds. The default value is 5 seconds. Zero means use the default.
data=ordered / data=writeback
Specifies the handling of file data during metadata journalling.
ordered
This is the default mode. All data is forced directly out to the main file system prior to its metadata being committed
to the journal.
writeback
Data ordering is not preserved - data may be written into the main file system after its metadata has been committed to
the journal. This is rumored to be the highest-throughput option. While it guarantees internal file system integrity, it
can allow old data to appear in files after a crash and journal recovery.
datavolume
This mount option has been deprecated in OCFS2 1.6. It was used in the past (OCFS2 1.2 and OCFS2 1.4) to force the Oracle RDBMS to issue direct IOs to the hosted data files, control files, redo logs, archive logs, voting disk, cluster registry, etc. It has been deprecated because it is no longer required. Oracle RDBMS users should instead use the init.ora parameter filesystemio_options to enable direct IOs.
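The replacement setting, sketched as an init.ora fragment (setall is one common value, but verify what suits your storage before using it):

```text
# init.ora / spfile parameter replacing the datavolume mount option
filesystemio_options = setall
```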
errors=remount-ro / errors=panic
Define the behavior when an error is encountered. (Either remount the file system read-only, or panic and halt the system.) By
default, the file system is remounted read only.
localflocks
This disables cluster-aware flock(2).
intr / nointr
The default is intr, which allows signals to interrupt cluster operations; nointr disables signals during cluster operations.
ro Mount the file system read-only.
rw Mount the file system read-write.
SEE ALSO
mkfs.ocfs2(8) fsck.ocfs2(8) tunefs.ocfs2(8) mounted.ocfs2(8) debugfs.ocfs2(8) o2cb(7)
AUTHORS
Oracle Corporation
COPYRIGHT
Copyright (C) 2004, 2010 Oracle. All rights reserved.
Version 1.4.3 February 2010 mount.ocfs2(8)