Full Discussion: Mount twice sshfs dir
Post 302455180 by canduc17 on Tuesday 21st of September 2010, 03:33:14 AM
Mount twice sshfs dir

Hi everyone.

I have 3 machines, let's call them store, node1 and node2.
I need to mount the same directory from store on both node1 and node2.
So I run the sshfs command on node1 and everything works fine.
But when I try the same thing on node2, it hangs for a while and then I get:
Code:
user@node2:~/ingestionAV2$ echo "password" | sshfs user@store:/mnt/storage/sharedDir ./localDir -o password_stdin -o idmap=user -o uid=1000 -o gid=1000 
Timeout waiting for prompt

Do you know if it's possible to mount the same directory twice with sshfs, on two different machines?

Thanks in advance.
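For anyone hitting the same "Timeout waiting for prompt" error: sshfs itself does not stop two different clients from mounting the same remote directory, so it is worth ruling out SSH-level problems between node2 and store first. A minimal check, assuming standard OpenSSH and sshfs options (the flags below are the usual ones; adjust user, host and paths to your setup), is to test plain SSH and then run sshfs in the foreground with debug output instead of piping the password:
Code:
# confirm plain ssh from node2 to store works and can read the directory
ssh -v user@store 'ls /mnt/storage/sharedDir'
# run sshfs in the foreground with debug output (it will prompt for the password)
sshfs user@store:/mnt/storage/sharedDir ./localDir -f -o sshfs_debug,idmap=user,uid=1000,gid=1000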
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

a script to clone a dir tree, & overwrite the dir struct elsewhere?

hi all, i'm looking for a bash or tcsh script that will clone an empty dir tree 'over' another tree ... specifically, i'd like to: (1) specify a src directory (2) list the directory tree/hierarchy beneath that src dir, w/o files -- just the dirs (3) clone that same, empty dir hierarchy to... (2 Replies)
Discussion started by: OpenMacNews
2 Replies

2. Solaris

Solaris 8 and sshfs

Hi, all.. Does Solaris 8 support sshfs? (Sorry if my question is too simple :o) We are going to mount a file system from Solaris 8 on HP-UX 11i. Will things go smoothly with this? Will there be any performance problem if the number of users grows to perform I/O operations on the mounted fs? ... (4 Replies)
Discussion started by: swmk
4 Replies

3. AIX

Not able to mount user home dir with automount

Hello there. Has anyone configured an AIX 5L machine as an NIS client, with home directories automounted from an NFS share? The NIS server is running Solaris. I am able to configure the AIX machine as a client and the user is able to log in, but I have configured the client to use the automountd... (0 Replies)
Discussion started by: balaji_prk
0 Replies

4. Solaris

home dir mount issue

Hi all, I have to mount my home directory on one box; by default, everyone's home directory is mounted on all the unix boxes we have. But we have unmounted these home directories from some boxes to keep the data safe. So for automation purposes I need my home directory only on those boxes to... (2 Replies)
Discussion started by: raghu.iv85
2 Replies

5. UNIX for Dummies Questions & Answers

sshfs twice on the same dir

Hi everyone. I have 3 machines, let's call them store, node1 and node2. I have to mount on node1 and node2 the same directory of store. So, I launch the sshfs command on node1 and everything works fine. But when I try to do that on node2, it hangs for a while and then I... (0 Replies)
Discussion started by: canduc17
0 Replies

6. Solaris

Alternative to sshfs?

I have an automated testing script that relies on the dev box being able to see production's (NFS) share. It uses rsync and ssh to handle transfers and command execution; however, it also needs the production share mounted in order to run Perl code against it when Unix commands via ssh will not do.... (2 Replies)
Discussion started by: effigy
2 Replies

7. Shell Programming and Scripting

ssh, truecrypt, sshfs in a script

Hello all, First time posting, although the site has helped solve many problems in the past! I would like to create a script to simplify a series of commands that I run: Log into the ssh-server (RSA key) ssh username@hostname -p 6110 Once there, I mount a truecrypt volume: truecrypt... (3 Replies)
Discussion started by: freshtoast
3 Replies

8. Solaris

Zone fails to boot due to mount issue, dir exists in zone.

I have two physical servers, with zones that mount local storage. We were using "raw device" in the zonecfg to point to a metadevice on the global zone (it was not mounted in the global zone at any point). It failed to mount on every boot because the directory existed in the zone. I... (6 Replies)
Discussion started by: BG_JrAdmin
6 Replies

9. Shell Programming and Scripting

Sshfs script

Hi, I am new to this forum. I want to set up my personal Dropbox between my home server and the workstation in the office. I followed this tutorial danbishop.org/2011/09/10/...-in-os-x-lion/ and it works great. :) The trouble now is I am not sure how I can make it start on boot. ... (3 Replies)
Discussion started by: macpc
3 Replies

10. OS X (Apple)

Can I mount partition at given dir path

Hi, In Linux, I had modified the fstab file to mount ~/Music, ~/Pictures, etc. from disk partitions containing the corresponding content, or to bind directories located on another partition. But I am wondering, can I do the same in El Capitan as well? No linking! /media/L-Store/Desktop/Documents ... (0 Replies)
Discussion started by: ezee
0 Replies
FENCE_SANLOCK(8)					      System Manager's Manual						  FENCE_SANLOCK(8)

NAME
fence_sanlock - fence agent using watchdog and shared storage leases

SYNOPSIS
fence_sanlock [OPTIONS]
DESCRIPTION
fence_sanlock uses the watchdog device to reset nodes, in conjunction with three daemons: fence_sanlockd, sanlock, and wdmd.

The watchdog device, controlled through /dev/watchdog, is available when a watchdog kernel module is loaded. A module should be loaded for the available hardware. If no hardware watchdog is available, or no module is loaded, the "softdog" module will be loaded, which emulates a hardware watchdog device.

Shared storage must be configured for sanlock to use from all hosts. This is generally an lvm lv (non-clustered), but could be another block device, or NFS file. The storage should be 1GB of fully allocated space. After being created, the storage must be initialized with the command:

       # fence_sanlock -o sanlock_init -p /path/to/storage

The fence_sanlock agent uses sanlock leases on shared storage to verify that hosts have been reset, and to notify fenced nodes that are still running that they should be reset.

The fence_sanlockd init script starts the wdmd, sanlock and fence_sanlockd daemons before the cluster or fencing systems are started (e.g. cman, corosync and fenced). The fence_sanlockd daemon is started with the -w option so it waits for the path and host_id options to be provided when they are available.

Unfencing must be configured for fence_sanlock in cluster.conf. The cman init script does unfencing by running fence_node -U, which in turn runs fence_sanlock with the "on" action and local path and host_id values taken from cluster.conf. fence_sanlock in turn passes the path and host_id values to the waiting fence_sanlockd daemon. With these values, fence_sanlockd joins the sanlock lockspace and acquires a resource lease for the local host. It can take several minutes to complete these unfencing steps.

Once unfencing is complete, the node is a member of the sanlock lockspace named "fence" and the node's fence_sanlockd process holds a resource lease named "hN", where N is the node's host_id. (To verify this, run the commands "sanlock client status" and "sanlock client host_status", which show state from the sanlock daemon, or "sanlock direct dump <path>", which shows state from shared storage.)

When fence_sanlock fences a node, it tries to acquire that node's resource lease. sanlock will not grant the lease until the owner (the node being fenced) has been reset by its watchdog device. The time it takes to acquire the lease is 140 seconds from the victim's last lockspace renewal timestamp on the shared storage. Once acquired, the victim's lease is released, and fencing completes successfully.

Live nodes being fenced
When a live node is being fenced, fence_sanlock will continually fail to acquire the victim's lease, because the victim continues to renew its lockspace membership on storage, and the fencing node sees it is alive. This is by design. As long as the victim is alive, it must continue to renew its lockspace membership on storage. The victim must not allow the remote fence_sanlock to acquire its lease and consider it fenced while it is still alive.

At the same time, a victim knows that when it is being fenced, it should be reset to avoid blocking recovery of the rest of the cluster. To communicate this, fence_sanlock makes a "request" on storage for the victim's resource lease. On the victim, fence_sanlockd, which holds the resource lease, is configured to receive SIGUSR1 from sanlock if anyone requests its lease. Upon receiving the signal, fence_sanlockd knows that it is a fencing victim. In response to this, fence_sanlockd allows its wdmd connection to expire, which in turn causes the watchdog device to fire, resetting the node. The watchdog reset will obviously have the effect of stopping the victim's lockspace membership renewals. Once the renewals stop, fence_sanlock will finally be able to acquire the victim's lease after waiting a fixed time from the final lockspace renewal.

Loss of shared storage
If access to shared storage with sanlock leases is lost for 80 seconds, sanlock is not able to renew the lockspace membership, and enters recovery. This causes sanlock clients holding leases, such as fence_sanlockd, to be notified that their leases are being lost. In response, fence_sanlockd must reset the node, much as if it was being fenced.

Daemons killed/crashed/hung
If the sanlock or fence_sanlockd daemons are killed abnormally, or crash or hang, their wdmd connections will expire, causing the watchdog device to fire, resetting the node. fence_sanlock from another node will then run and acquire the victim's resource lease. If the wdmd daemon is killed abnormally, or crashes or hangs, it will not pet the watchdog device, causing it to fire and reset the node.

Time Values
The specific time periods referenced above, e.g. 140 and 80 seconds, are based on the default sanlock i/o timeout of 10 seconds. If sanlock is configured to use a different i/o timeout, these numbers will be different.
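As an illustrative sketch only, the storage preparation and verification described above could look like the following on a typical LVM setup (the volume group name vg_fence and the lease path are placeholders, not part of the real configuration):

       # create a fully allocated 1G lv on shared (non-clustered) storage
       lvcreate -L 1G -n leases vg_fence
       # initialize sanlock leases on it
       fence_sanlock -o sanlock_init -p /dev/vg_fence/leases
       # after unfencing completes, inspect lockspace and lease state
       sanlock client status
       sanlock client host_status
       sanlock direct dump /dev/vg_fence/leases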
OPTIONS
-o action
       The agent action:

       on            Enable the local node to be fenced. Used by unfencing.

       off           Disable another node.

       status        Test if a node is on or off. A node is on if its lease is held, and off if its lease is free.

       metadata      Print an xml description of the required parameters.

       sanlock_init  Initialize sanlock leases on shared storage.

-p path
       The path to shared storage with sanlock leases.

-i host_id
       The host_id, from 1-128.
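For example, a direct query might look like the following (the path and host_id values are examples only):

       # test whether host_id 2's lease is held (on) or free (off)
       fence_sanlock -o status -p /dev/fence/leases -i 2
       # print an xml description of the agent's parameters
       fence_sanlock -o metadata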
STDIN PARAMETERS
Options can be passed on stdin, with the format key=val. Each key=val pair is separated by a new line.

       action=on|off|status           See -o
       path=/path/to/shared/storage   See -p
       host_id=num                    See -i
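Equivalently, the same parameters can be supplied on stdin, one key=val per line (the values below are examples only):

       printf 'action=status\npath=/dev/fence/leases\nhost_id=2\n' | fence_sanlock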
FILES
Example cluster.conf configuration for fence_sanlock. (For cman-based clusters in which fenced runs agents.) Also see cluster.conf(5), fenced(8), fence_node(8).

       <clusternode name="node01" nodeid="1">
               <fence>
                       <method name="1">
                               <device name="wd" host_id="1"/>
                       </method>
               </fence>
               <unfence>
                       <device name="wd" host_id="1" action="on"/>
               </unfence>
       </clusternode>

       <clusternode name="node02" nodeid="2">
               <fence>
                       <method name="1">
                               <device name="wd" host_id="2"/>
                       </method>
               </fence>
               <unfence>
                       <device name="wd" host_id="2" action="on"/>
               </unfence>
       </clusternode>

       <fencedevice name="wd" agent="fence_sanlock" path="/dev/fence/leases"/>

Example dlm.conf configuration for fence_sanlock. (For non-cman based clusters in which dlm_controld runs agents.) Also see dlm.conf(5), dlm_controld(8).

       device  wd /usr/sbin/fence_sanlock path=/dev/fence/leases
       connect wd node=1 host_id=1
       connect wd node=2 host_id=2
       unfence wd
TEST
To test fence_sanlock directly, without clustering:

1. Initialize storage

       node1: create a 1G lv on shared storage, /dev/fence/leases
       node1: fence_sanlock -o sanlock_init -p /dev/fence/leases

2. Start services

       node1: service fence_sanlockd start
       node2: service fence_sanlockd start

3. Enable fencing

       node1: fence_sanlock -o on -p /dev/fence/leases -i 1
       node2: fence_sanlock -o on -p /dev/fence/leases -i 2

   This "unfence" step may take a couple of minutes.

4. Verify hosts and leases

       node1: sanlock status
       s fence:1:/dev/fence/leases:0
       r fence:h1:/dev/fence/leases:1048576:1 p 2465

       node2: sanlock status
       s fence:2:/dev/fence/leases:0
       r fence:h2:/dev/fence/leases:2097152:1 p 2366

       node1: sanlock host_status
       lockspace fence
       1 timestamp 717
       2 timestamp 678

       node2: sanlock host_status
       lockspace fence
       1 timestamp 738
       2 timestamp 678

5. Fence node2

       node1: fence_sanlock -o off -p /dev/fence/leases -i 2

   This may take a few minutes to return. When node2 is not dead before fencing, sanlock on node1 will log errors about failing to acquire the lease while node2 is still alive. This is expected.

6. Success

   node1 fence_sanlock should exit 0 after node2 is reset by its watchdog.
SEE ALSO
fence_sanlockd(8), sanlock(8), wdmd(8)

2013-05-02