How to failover the cluster? Post 302450188 by solaris_user on Thursday 2nd of September 2010 01:03:49 AM
 

10 More Discussions You Might Find Interesting

1. High Performance Computing

sun Cluster resource group cant failover

I have recently set up a 4-node cluster running Sun Cluster 3.2 and I have installed 4 zones on each node. When installing the zones I had to install the zone on all nodes, then on the last node do a zlogin -C <zonename>; this worked OK. Then I tried to switch the zone to node a; this work... (14 Replies)
Discussion started by: lesliek
14 Replies
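For switching an HA zone's resource group between nodes in Sun Cluster 3.2, the usual tool is clresourcegroup; a minimal sketch (the group name zone-rg and node name nodea are hypothetical):

    # Show which node currently masters the resource group
    clresourcegroup status zone-rg

    # Switch the group (and its failover zone) to node "nodea"
    clresourcegroup switch -n nodea zone-rg

    # Watch the resources come online on the target node
    clresource status -g zone-rg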

2. High Performance Computing

Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris

Provides a description of how to set up a Solaris Cluster Express cluster in VirtualBox on OpenSolaris. (0 Replies)
Discussion started by: Linux Bot
0 Replies

3. HP-UX

ServiceGuard cluster & volume group failover

I have a 2-node ServiceGuard cluster. One of the cluster packages has a volume group assigned to it. When I fail the package over to the other node, the volume group does not come up automatically on the other node. I have to manually do a "vgchange -a y vgname" on the node before the package... (5 Replies)
Discussion started by: Wotan31
5 Replies
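In ServiceGuard, a volume group only activates automatically on failover when it is listed in the package control script, so a missing VG[] entry would explain the behaviour described above. A sketch of the relevant lines (script path and VG name are placeholders):

    # /etc/cmcluster/pkg1/pkg1.cntl (excerpt)
    # Command the package uses to activate its volume groups at start time
    VGCHANGE="vgchange -a e"     # exclusive activation

    # Volume groups the package activates on whichever node runs it
    VG[0]="vgname"

After editing, copy the control script to every node and re-check the package with cmcheckconf before restarting it.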

4. High Performance Computing

SUN Cluster Vs Veritas Cluster

Dear All, Can anyone explain the pros and cons of Sun and Veritas Cluster? Any comparison chart is highly appreciated. Regards, RAA (4 Replies)
Discussion started by: RAA
4 Replies

5. High Performance Computing

Veritas Cluster Server Management Console IP Failover

I have just completed a first RTFM of "Veritas Cluster Server Management Console Implementation Guide" 5.1, with a view to assessing it to possibly make our working lives easier. Unfortunately, at my organisation, getting a test installation would be worse than pulling teeth, so I can't just go... (2 Replies)
Discussion started by: Beast Of Bodmin
2 Replies

6. Solaris

Sun Cluster 3.1 failover

Hi, We have two Sun SPARC servers clustered (Sun Cluster 3.1). For some reason, System 1 failed over to System 2. Where can I find the logs that could tell me the reason for this failover? Thanks (5 Replies)
Discussion started by: Mack1982
5 Replies
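On Sun Cluster 3.1 the failover reason is usually reconstructed from syslog plus the cluster's own logs; a sketch of where to look (default paths assumed):

    # RGM and fault messages around the time of the failover, on both nodes
    grep -i "failover\|fault\|rgm" /var/adm/messages

    # Log of cluster administrative commands
    more /var/cluster/logs/commandlog

    # Current resource group and node status
    scstat -g
    scstat -n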

7. Solaris

Sun cluster and Veritas cluster question.

Yesterday my customer told me to expect a VCS upgrade in the future. He also plans to stop using HDS and move to EMC. I am thinking of migrating to a Sun Cluster setup instead. My plan is as follows: leave the existing VCS intact as a fallback plan, then install and build Sun Cluster on... (5 Replies)
Discussion started by: sparcguy
5 Replies
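Whatever the migration path, the existing VCS configuration is easy to preserve first, so the fallback plan stays real; a minimal sketch (the backup location is arbitrary):

    # Write the in-memory VCS configuration out to main.cf, read-only
    haconf -dump -makero

    # Keep a copy of the configuration tree off the cluster nodes
    cp -r /etc/VRTSvcs/conf/config /var/tmp/vcs-config-backup

    # If needed later: stop HAD everywhere but leave applications running
    hastop -all -force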

8. Solaris

Sun cluster 4.0 - zone cluster failover doubt

Hello experts - I am planning to install a Sun Cluster 4.0 zone cluster failover. A few basic doubts: (1) Where should I install the cluster software binaries? (the global zone, or the container zone where I am planning to install the zone failover?) (2) Or should I perform the installation on... (0 Replies)
Discussion started by: NVA
0 Replies
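For what it's worth, in Oracle Solaris Cluster 4.0 the cluster software is installed in the global zone of every node, and the zone cluster is then created from the global zone with clzonecluster; a rough sketch (zone-cluster name, zonepath and host names are made up):

    # Zone-cluster configuration commands, saved to /var/tmp/zc1.cfg:
    #   create
    #   set zonepath=/zones/zc1
    #   add node
    #   set physical-host=node1
    #   set hostname=zc1-node1
    #   end
    #   commit
    clzonecluster configure -f /var/tmp/zc1.cfg zc1

    # Install the zone cluster on all configured nodes, then boot it
    clzonecluster install zc1
    clzonecluster boot zc1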

9. Solaris

Solaris Cluster Failover based on scan rate

Dear Experts, Is it possible for Solaris Cluster to fail over to the second node based on scan rate? I need the documentation on whether Solaris Cluster can do this. Thank you in advance. Edy (3 Replies)
Discussion started by: edydsuranta
3 Replies
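There is no built-in scan-rate trigger in Solaris Cluster, but the Generic Data Service (SUNW.gds) accepts a custom probe command, so a probe that fails while the page scan rate is high could drive a failover; a hypothetical probe sketch:

    #!/bin/sh
    # Hypothetical GDS probe: fail when the vmstat page scan rate ("sr",
    # the 12th column on Solaris) exceeds a threshold.
    THRESHOLD=200

    # Take the "sr" column from the last line of a short vmstat sample
    SR=`vmstat 1 2 | tail -1 | awk '{print $12}'`

    if [ "$SR" -gt "$THRESHOLD" ]; then
        echo "scan rate $SR above $THRESHOLD" >&2
        exit 100        # complete probe failure for GDS
    fi
    exit 0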

10. Red Hat

Linux Cluster failover issue

Hi guys, I am not very familiar with clusters but I have a few questions; can someone provide an overview, as it would be very helpful for me? How can I perform a cluster failover test to see that all the services fail over to the other node? If it is using Veritas cluster, then what kind of... (2 Replies)
Discussion started by: munna529
2 Replies
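If the cluster turns out to be VCS, a failover test is normally just switching the service group and watching it come up on the peer node; a sketch with hypothetical group and node names:

    # Overall cluster, node and group state
    hastatus -sum

    # Move the service group to the other node, then confirm its state
    hagrp -switch appsg -to node2
    hagrp -state appsg

On Red Hat Cluster Suite the equivalent relocation would be clusvcadm -r <service> -m <member>.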
DOVEADM-DIRECTOR(1)						      Dovecot						       DOVEADM-DIRECTOR(1)

NAME
       doveadm-director - Manage Dovecot directors

SYNOPSIS
       doveadm [-Dv] director add [-a director_socket_path] host [vhost_count]
       doveadm [-Dv] director flush [-a director_socket_path] host|all
       doveadm [-Dv] director map [-a director_socket_path] [-f users_file] [host]
       doveadm [-Dv] director remove [-a director_socket_path] host
       doveadm [-Dv] director dump [-a director_socket_path]
       doveadm [-Dv] director status [-a director_socket_path] [user]

DESCRIPTION
       doveadm director can be used to manage and query the status of the list of backend mail servers to which the Dovecot proxy can
       redirect connections.

OPTIONS
       Global doveadm(1) options:

       -D     Enables verbosity and debug messages.

       -v     Enables verbosity, including progress counter.

       Command specific options:

       -a director_socket_path
              This option is used to specify an alternative socket. The option's argument is either an absolute path to a local UNIX
              domain socket, or a hostname and port (hostname:port), in order to connect to a remote host via a TCP socket. By default
              doveadm(1) will use the socket /var/run/dovecot/director-admin. The socket may be located in another directory when the
              default base_dir setting was overridden in /etc/dovecot/dovecot.conf.

ARGUMENTS
       host   A mail server's hostname or IP address.

       user   Is a user's login name. Depending on the configuration, a login name may be for example jane or john@example.com.

       vhost_count
              The number of "virtual hosts" to assign to this server. The higher the number is relative to other servers, the more
              connections it gets. The default is 100.

COMMANDS
   director add
       doveadm director add [-a director_socket_path] host [vhost_count]

       The command's tasks are:

       *   assign a new mail server to the director.

       *   increase/decrease the vhost_count of an already assigned server.

   director flush
       doveadm director flush [-a director_socket_path] host|all

       doveadm director flush drops all user associations either from the given host or from all hosts. This command is intended mainly
       for testing purposes.

   director map
       doveadm director map [-a director_socket_path] [-f users_file] [host]

       The command doveadm director map is used to list current user -> host mappings.

       -f users_file
              Path to a file containing all user names (one per line). When given, no userdb lookup will be performed. This may be a
              helpful alternative when, for example, the network connection to the LDAP or SQL server is slow.

       host   Specify a server's IP address or hostname to list only mappings of the given host.

   director remove
       doveadm director remove [-a director_socket_path] host

       Use this command in order to remove the given host from the director.

   director dump
       doveadm director dump [-a director_socket_path]

       Dump the current host configuration as doveadm commands. These commands can easily be run after a full director cluster restart to
       get back to the dumped state.

   director status
       doveadm director status [-a director_socket_path] [user]

       This command is used to show the current usage of all assigned mail servers. When a user name is given, this command shows which
       server the user is currently assigned to, where the user will be assigned after the current saved assignment gets removed, and
       where the user would be assigned if the whole proxy cluster was restarted fresh.

FILES
       /etc/dovecot/dovecot.conf
              Dovecot's main configuration file.

       /etc/dovecot/conf.d/10-director.conf
              Director specific settings.

EXAMPLE
       Add a director with vhost count 150 (or change an existing one's vhost count to 150):

              doveadm -v director add x1357.imap.ha.example.net 150
              2001:db8:543:6861:143::1357: OK

       Remove a director:

              doveadm director remove x1357.imap.ha.example.net

       Query the status of mail hosts in a director:

              doveadm director status
              mail server ip       vhosts  users
              192.168.10.1            100    125
              192.168.10.2            100    144
              192.168.10.3            100    115

       Query the status of a user's assignment:

              doveadm director status user@example.com
              Current: 192.168.10.1 (expires 2010-06-18 20:17:04)
              Hashed: 192.168.10.2
              Initial config: 192.168.10.3

       This means that the user is currently assigned to the mail server at IP 192.168.10.1. After all of the user's connections have
       logged out, the assignment will be removed (currently it looks like at 20:17:04, but that may be increased). After the assignment
       has expired, the user will next be redirected to 192.168.10.2 (assuming no changes to director settings). If the entire Dovecot
       proxy cluster was restarted, so that all of the director configuration reverted to its initial values, the user would be
       redirected to 192.168.10.3.
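       A few further invocations for the remaining commands, sketched here (the host name and file paths are illustrative, not taken
       from this page):

              # List current user -> host mappings without a userdb lookup
              doveadm director map -f /tmp/userlist

              # Drop all user associations from one backend before maintenance
              doveadm director flush x1357.imap.ha.example.net

              # Save the host configuration so it can be replayed after a full restart
              doveadm director dump > /var/tmp/director.dump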
REPORTING BUGS
       Report bugs, including doveconf -n output, to the Dovecot Mailing List <dovecot@dovecot.org>. Information about reporting bugs is
       available at: http://dovecot.org/bugreport.html

SEE ALSO
       doveadm(1)

Dovecot v2.1                                                  2011-05-11                                            DOVEADM-DIRECTOR(1)