A similar question was posted on this forum already, but that was 4 years ago and it didn't give me answers.
I have two groups of engineers who work in distant locations connected via VPN. Physically, the connection is DSL. Currently we have a Linux server in one location that provides files over SMB/CIFS. As a result, life for the people in the second location is a nightmare.
I can change a lot on the server side, but one requirement must remain: workstations need to access files using standard Windows mechanisms (SMB/CIFS), with no extra software (drivers, clients, etc.).
To solve this, I want to install a second server in the second location and use a distributed filesystem that works mainly in replication mode. The files on each server would then be re-exported by Samba on the local network.
There is a great list of distributed filesystems on Wikipedia.
The most advanced seems to be Lustre, but I have some doubts whether it will work for me. Lustre is aimed at true clusters with a high-speed interconnect (10GigE or special hardware like RDMA). It also stripes data across all nodes, whereas I want replication/mirroring instead.
XtreemFS seems more adequate to me, at least on the functional side. But it appears to be a Java-based solution, and I'm very worried about the performance. There are also some limitations on read/write replication.
GlusterFS: according to one blog, this filesystem is not a good choice for a high-latency connection like DSL. (Sorry, the forum won't let me post a full link; add the http prefix: joejulian.name/blog/glusterfs-replication-dos-and-donts/)
What is your recommendation?
Has anyone had a similar problem?
We did have a similar problem; we used completely local filesystems instead, then set up several dynamic rsync connections to keep files synced across machines. Our files on the servers are never more than about 1 minute behind time. We use inotify: on close, we checksum the file and, if it changed, copy it to a special directory; rsync is then called to update the remote copy.
None of our files is huge; all are less than 1MB. The remote boxes are in locations with only DSL or satellite available.
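The checksum-then-copy step described above can be sketched in Python. This is not the poster's actual setup (which is driven by inotify close events rather than polling, and syncs per-file); it is a minimal polling sketch, and the source/staging paths and the rsync target are made-up placeholders.

```python
import hashlib
import shutil
import subprocess
from pathlib import Path

def checksum(path: Path) -> str:
    """MD5 of a file's contents; cheap enough for files in the MB range."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def stage_changed(src: Path, stage: Path, seen: dict) -> list:
    """Copy files whose checksum changed since the last pass into `stage`.

    `seen` maps each path to its last known checksum and is updated in
    place. Returns the list of freshly staged paths.
    """
    staged = []
    stage.mkdir(parents=True, exist_ok=True)
    for path in sorted(src.rglob("*")):
        if not path.is_file():
            continue
        digest = checksum(path)
        if seen.get(path) != digest:
            seen[path] = digest
            dest = stage / path.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
            staged.append(dest)
    return staged

def push(stage: Path, remote: str) -> None:
    """Ship the staged files to the far site (requires rsync over ssh).

    `remote` is a placeholder like "othersite:/srv/shared".
    """
    subprocess.run(["rsync", "-az", str(stage) + "/", remote], check=True)
```

A real deployment would replace the polling in `stage_changed` with inotify close-write events (e.g. via inotifywait from inotify-tools), which is what keeps the lag down to about a minute.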
Originally Posted by jim mcnamara
Our files on the servers are never more than about 1 minute behind time.
None of our files is huge; all are less than 1MB.
I agree with you; 1 minute is acceptable from a technical point of view. But I know human nature: when something goes wrong, people will blame the solution (and me personally). I need something that guarantees consistency by design.
Additionally, our files are rather big: usually over 10MB, some around 100MB.
As I said, from a technical point of view rsync is good enough. The problem is human nature: people blame everything except their own faults.
A 1-minute window of uncertainty is a nice opportunity to "hide" a human mistake and blame the tool (or worse, me).
Originally Posted by Corona688
How fast are your network links?
In fact, 512kbit/s. To be exact, it is ADSL, so the real throughput is limited by the upstream bandwidth of both sites.
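For context, a back-of-the-envelope calculation of what that link means for the file sizes mentioned earlier (a single 100MB file over a 512 kbit/s upstream, ignoring protocol overhead and any rsync delta savings):

```python
# Best-case time to push one 100 MB file over a 512 kbit/s upstream.
size_kbit = 100 * 1024 * 8   # 100 MB expressed in kilobits
seconds = size_kbit / 512    # divide by the 512 kbit/s upstream rate
print(f"{seconds:.0f} s (~{seconds / 60:.0f} min)")  # → 1600 s (~27 min)
```

So every large file takes on the order of half an hour just to cross the link, which is why synchronous replication over this connection is so painful.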