Building New server using ufsdump/ufsrestore
Post 302270461 by SanjayLinux on Monday 22nd of December 2008 02:02:56 AM

Hi Guys,
Merry Christmas in advance!
I would like to build a new server using ufsdump/ufsrestore. Both servers are identical in hardware and model, and I am running Solaris 10 x86.
I have the ufsdump image "mydump.rootdump.gz" on a central NFS server.

What I did:
I took a backup of the root filesystem (/) of OLD-SERVER using fssnap/ufsdump.
I put that backup, "mydump.rootdump.gz", on the central NFS server (10.x.x.n).
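For reference, the backup step above can be sketched roughly as follows. This is a sketch, not the exact commands I ran: the snapshot backing-store path and the NFS mount point are assumptions.

```shell
# Take a UFS snapshot of / so the dump is consistent while the system runs
# (backing-store location /var/tmp is an assumption)
fssnap -F ufs -o bs=/var/tmp /

# Dump the raw snapshot device to stdout, compress, and write to the
# NFS-mounted backup area (mount point /backup is an assumption)
ufsdump 0f - /dev/rfssnap/0 | gzip > /backup/mydump.rootdump.gz

# Delete the snapshot once the dump completes
fssnap -d /
```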

I installed Solaris 10 on NEW-SERVER and created identical partitions during installation.
I then mounted the NFS share on /mnt and ran the restore from /:
NEW-SERVER# mount OLD_SERVER:/mydump.rootdump.gz /mnt
NEW-SERVER# cd /
NEW-SERVER# gzcat /mnt/mydump.rootdump.gz | ufsrestore rvf -
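(For context, a restore like this is normally done onto a filesystem that is not the live root, since ufsrestore would otherwise overwrite binaries and libraries that are in use. A minimal sketch of that approach, run from a shell booted off the Solaris 10 x86 install media; the target slice c0t0d0s0 and the NFS export path are assumptions:)

```shell
# Booted from the Solaris 10 x86 install DVD in single-user/shell mode,
# so the target root slice is not mounted or in use.

# Recreate the filesystem on the target root slice (device name is an assumption)
newfs /dev/rdsk/c0t0d0s0

# Mount the target slice on /a and the NFS backup share on /mnt
mount /dev/dsk/c0t0d0s0 /a
mount -F nfs 10.x.x.n:/backup /mnt    # export path is an assumption

# Restore into /a, not into the live /
cd /a
gzcat /mnt/mydump.rootdump.gz | ufsrestore rf -

# On x86, reinstall the GRUB boot loader so the restored slice is bootable
installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c0t0d0s0

# Adjust /a/etc/vfstab if device names differ between the servers, then reboot
```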

After running the above command I got messages like "File already exists",
and then it ended with the messages given below.
.......
.............

extract file ./usr/lib/fs/udfs/fsck
extract file ./usr/lib/fs/udfs/fstyp
extract file ./usr/lib/fs/udfs/mount
extract file ./usr/lib/fs/xmemfs/mount
extract file ./usr/lib/fs/ufs/fsirand
extract file ./usr/lib/fs/ufs/fsckall
extract file ./usr/lib/fs/ufs/labelit
extract file ./usr/lib/fs/ufs/ufsrestore
Bus Error (core dumped)
NEW-SERVER@/#

When I rebooted the new server, I got the boot prompt, but after it displayed the hostname I got messages like "The system will sync files, save a crash dump if needed ....... panic [cpu7]/thread=fffffffe80009c8000: Unrecoverable Machine-Check ..."

Then it started syncing the files ...... and rebooted again .....

Could you guys please help me out of this situation?
My main aim is to rebuild an identical server.
It would be a great help if you could give me a step-by-step procedure for this.
OS: Solaris 10 X86
Both servers are remote, so I can't use a tape drive. I have the root filesystem dump (/) on the central NFS server.

I am awaiting your response.

Thanks in advance ....

Sanjay
 
