06-09-2013
Not sure about the reason without further information, but to me this looks like a communication problem. Check all the cluster networks (see "cllsif") for connectivity and make sure the disk heartbeat works as expected.
Another possible reason which comes to mind is the VG: make sure it is varied on in "enhanced concurrent" mode. It may also be that stale disk reservations were somehow left over; issue a "varyonvg -b -u" to break them.
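A minimal check sequence, assuming a standard HACMP/PowerHA installation path ("datavg" below is only a placeholder for your shared VG):

# /usr/es/sbin/cluster/utilities/cllsif
(lists all cluster networks and interfaces)
# lsvg datavg
(the "Concurrent" field should read "Enhanced-Capable")
# varyonvg -b -u datavg
(breaks any stale disk reservations on the VG)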
I hope this helps.
bakunin
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hello,
Under ksh I have to run a script on one of the nodes of a Solaris 8 cluster which at some time must execute a command on the alternate node:
# rsh <name> "command"
I have to implement this script on all the clusters of my company (a lot of...).
Fortunately, the names of the two nodes... (11 Replies)
Discussion started by: heartwork
2. HP-UX
Need help guys!
When running cmrunnode in batch I'm getting this error:
cmrunnode : Waiting for cluster to... (1 Reply)
Discussion started by: Tris
3. HP-UX
Hi,
Please advise: I have a two-node cluster configured with MC/SG. The application and DB are running on Node 1, while Node 2 is standby.
All the volume group devices are part of the cluster environment. There is only one package running on Node 1.
Node 2 is having the problem to... (1 Reply)
Discussion started by: rauphelhunter
4. High Performance Computing
Hi,
I am trying to set up a 2-node cluster environment. Following is what I have:
1. 2 x sun ultra60 - 450MHz procs, 1GB RAM, 9GB HDD, solaris 10
2. 2 x HBA cards
3. 2 x Connection leads to connect ultra60 with D1000
4. 1 x D1000 storage box.
5. 3 x 9GB HDD + 2 x 36GB HDD
First of all,... (1 Reply)
Discussion started by: solman17
5. Solaris
Hi Gurus,
For learning purposes, I have installed a single-node Sun Cluster 3.2 on Solaris 10 for practice. Now I want to create two non-global zones and configure them for failover. Will appreciate your help and assistance.
Thanks (3 Replies)
Discussion started by: newadmin
6. Solaris
I know the logical name and virtual IP of the cluster.
How can I find the active Sun Cluster node without having root access? (3 Replies)
Discussion started by: sreeniatbp
7. HP-UX
Hello,
Is there any way to identify the active node in an HP-UX cluster without root privileges? (3 Replies)
Discussion started by: psimoes79
8. Solaris
Hi Gurus,
I am very new to clustering, and for testing I have created a single-node cluster; now I want to remove the system from the cluster. I did some googling, but as a clustering newbie I am unable to correlate the info.
Please help
Thanks (1 Reply)
Discussion started by: kumarmani
9. Solaris
How can we add a shared ZFS dataset between 2 zones on the same host? I have Sun Cluster 3.2 installed on a server which has 2 zones. I want to share a ZFS dataset between these 2 zones; how can we do that? (7 Replies)
Discussion started by: fugitive
10. Solaris
Hi,
Is it possible to have a two-node Solaris cluster at SITE-A using SVM, creating a metaset using, say, 2 LUNs (on SAN), then replicating these 2 LUNs to the remote SITE-B via storage-based replication, and then using these LUNs by importing them as a metaset on a server at SITE-B which is... (0 Replies)
Discussion started by: dn2011
LEARN ABOUT DEBIAN
scrounge-ntfs
scrounge-ntfs(8) BSD System Manager's Manual scrounge-ntfs(8)
NAME
scrounge-ntfs -- helps retrieve data from corrupted NTFS partitions
SYNOPSIS
scrounge-ntfs -l disk
scrounge-ntfs -s disk
scrounge-ntfs [-m mftoffset] [-c clustersize] [-o outdir] disk start end
DESCRIPTION
scrounge-ntfs is a utility that can rescue data from corrupted NTFS partitions. It writes the files retrieved to another working file system.
Certain information about the partition needs to be known in advance.
The -l mode is meant to be run in advance of the data corruption, with the output stored away in a file. This allows scrounge-ntfs to recover
data reliably. See the 'NOTES' section below for recovery information when this isn't the case.
OPTIONS
The options are as follows:
-c The cluster size (in sectors). When not specified a default of 8 is used.
-l List partition information for a drive. This will only work when the partition table for the given drive is intact.
-m When recovering data this specifies the location of the MFT from the beginning of the partition (in sectors). If not specified
then no directory information can be used, that is, all rescued files will be written to the same directory.
-o Directory to put rescued files in. If not specified then files will be placed in the current directory.
-s Search disk for partition information. (Not implemented yet).
disk The raw device used to access the disk which contains the NTFS partition to rescue files from, e.g. '/dev/hdc'.
start The beginning of the NTFS partition (in sectors).
end The end of the NTFS partition (in sectors).
NOTES
If you plan on using this program successfully you should prepare in advance by storing a copy of the partition information. Use the -l option
to do this. Eventually searching for disk partition information will be implemented, which will solve this problem.
When only one partition exists on a disk, or when you want to rescue the first partition, there are ways to guess at the sector sizes and MFT
location. See the scrounge-ntfs web page for more info:
http://memberwebs.com/swalter/software/scrounge/
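EXAMPLES
A hypothetical session; the device, output file, sector numbers and MFT offset below are illustrative only and should be replaced with the
values reported by a saved -l run:

scrounge-ntfs -l /dev/hdc > hdc-layout.txt
scrounge-ntfs -o /rescue -m 32 -c 8 /dev/hdc 63 4192964

The first command records the partition layout while the disk is still healthy; the second rescues files from the corrupted partition into
/rescue using the saved start/end sectors and MFT offset.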
AUTHOR
Stef Walter <stef@memberwebs.com>
scrounge-ntfs June 1, 2019 scrounge-ntfs