HA mailserver: is active/active with a "constant" connection possible?
I have set up a mail server for testing.
My goal is an HA mail server with IMAPS: clients connect to a virtual IP, which redirects them to two real servers; if one real server crashes, the other real server "takes over" the connection.
I have set up a cluster with two keepalived/haproxy load balancers and two real servers running Postfix and Dovecot. The two LBs run Debian, the mail servers Fedora 31.
This is my configuration on the two LBs (load balancers).
As you can see, mail.domain.priv is the "virtual" server,
bound to the virtual IP 10.2.0.4 (created by keepalived); the real
servers are 10.2.0.5 and 10.2.0.6.
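The haproxy part of such a setup might look roughly like this (a minimal sketch with hypothetical section names; the actual config is not shown here):

```
frontend imaps_front
    bind 10.2.0.4:993
    mode tcp
    default_backend imaps_back

backend imaps_back
    mode tcp
    balance roundrobin
    server mail1 10.2.0.5:993 check
    server mail2 10.2.0.6:993 check
```

The `check` keyword makes haproxy health-check each backend, so a dead mail server is taken out of rotation.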
The virtual IP 10.2.0.4 is an alias on the lo interface; I created it
with these lines on the LBs
and on the real servers.
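The exact commands aren't shown, but on Linux a loopback VIP alias is typically created along these lines (a guess at what was done; ARP is suppressed so the real servers don't answer ARP for the VIP):

```
# On the real servers: suppress ARP replies for the VIP,
# then add it to the loopback interface
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
ip addr add 10.2.0.4/32 dev lo
```

On the LBs, keepalived normally adds and removes the VIP itself, so no manual `ip addr add` should be needed there.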
I'll skip posting the Dovecot/Postfix configuration because it is
too long, but I have tested it and it works fine, both as a single
server and with the 10.2.0.4 virtual IP.
Of course, the real servers share /var/vmail/mydomain
via GlusterFS (I know it's slow, but this is only for testing).
I have connected a client, and I can fetch mail via Dovecot
and send mail via Postfix, using IMAPS and SMTP with STARTTLS,
without any problem.
So, what is the problem?
I have tested the cluster by shutting down one of the real servers
while a client (Thunderbird) was open, and the client "freezes", as if the
cluster didn't exist, and cannot read emails.
If I kill the client (Thunderbird) and restart it, it reconnects without problems
to the 10.2.0.4 virtual IP (mail.mydomain.priv).
What is wrong?
Is it possible to create an active/active HA cluster using keepalived?
Your problem is with network timeout settings, in either the cluster or the clients.
Manually shutting down one of the cluster nodes may not give you the same result as a true CPU/power/whatever failure, because the cluster software suite will probably see you do that. It would be better to simply pull the RJ45 network cable from one of them, simulating a network connection failure.
Anyway, the point is that a cluster failover takes time. During this time the virtual IP address is switched from one node to the other. Depending on the cluster suite this will take seconds to minutes. The fact that the client reconnects to the surviving cluster node after you restart it proves that, had it waited long enough, it would have been able to reconnect on its own.
So the solution is to either (1) configure the cluster to fail over faster, or (2) increase the timeout that clients will wait before giving up, so that a new connection to the virtual IP address can be established before the configured timeout expires.
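Both knobs can be tuned; for example (hypothetical values, adjust to your environment):

```
# keepalived: advertise every second, so the backup node takes
# over the VIP within a few seconds of the master disappearing
vrrp_instance VI_1 {
    advert_int 1
}

# haproxy: give idle IMAP connections generous timeouts so a
# client isn't cut off by the proxy while the VIP is moving
defaults
    mode    tcp
    timeout connect 5s
    timeout client  10m
    timeout server  10m
```

Faster failover (option 1) is usually preferable, since client timeouts are often outside your control.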
Why an address on the lo interface?
Putting the address on that interface is only needed for DSR (direct server return) balancing, which haproxy does not do.
Haproxy works at L3 and above, while DSR is an L2 technique.
Can you remove the lo:0 address entry from ALL servers (LBs and mail servers)?
In your case, the VIP address should exist only on the master haproxy node (one of the two) with a /24 mask (not on lo), and keepalived is handling that.
Also, configure keepalived in the following manner, then retest:
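A minimal sketch of such a keepalived.conf (hypothetical interface name and router ID), where keepalived drops the VIP if haproxy dies:

```
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exit 0 if an haproxy process exists
    interval 2
    weight -20                    # lower priority when the check fails
}

vrrp_instance VI_1 {
    state MASTER          # BACKUP on the second LB
    interface eth0        # hypothetical interface name
    virtual_router_id 51
    priority 101          # 100 on the second LB
    advert_int 1
    virtual_ipaddress {
        10.2.0.4/24
    }
    track_script {
        chk_haproxy
    }
}
```

When `chk_haproxy` fails, the node's effective priority falls below the backup's, and the VIP moves over.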
Haproxy keeps monitoring the accessibility of the (mail) backend servers, and keepalived keeps monitoring whether haproxy itself is up.
If that is what you need and I understood correctly.
Of course, you can add additional conditions to keepalived for triggering failover of the VIP address, after you confirm everything is working.
A small add-on for active/active, so traffic flows through both haproxys:
You need 2 VIP addresses in keepalived; on one node the first VIP is MASTER, on the other node the second VIP is MASTER.
Both VIPs will land on one node in case of a node failure.
Then you add a third entry in your DNS system (mymail.example.com) pointing to those two VIP addresses.
This is the record you 'attack' from the outside with your clients.
Since both VIP addresses are always active, clients will always be able to connect to either when DNS is queried.
A client attempts to connect to mymail.example.com (one VIP is returned in round-robin fashion from the pool of two) --> haproxy --> your mail server.
Set up sticky sessions in haproxy and make it listen on 0.0.0.0.
Be sure to allow VRRP traffic between the two LBs.
In case of failure, everything hicks wrote still stands: clients connected to the failed VIP will notice a short failover and reconnect to the second node.
But only roughly 50% of them, since the other half went to the other VIP via the same DNS record.
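The dual-VIP layout described above might be sketched like this (hypothetical second VIP 10.2.0.7, interface name, and router IDs; on LB2 the MASTER/BACKUP states and priorities are swapped):

```
# LB1: master for the first VIP, backup for the second
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    virtual_ipaddress { 10.2.0.4/24 }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 100
    virtual_ipaddress { 10.2.0.7/24 }
}

# DNS: one name, both VIPs, served round-robin
# mymail.example.com.  IN A 10.2.0.4
# mymail.example.com.  IN A 10.2.0.7
```

If LB1 fails, VI_1's VIP fails over to LB2, which then holds both addresses until LB1 returns.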