I would recommend using scp instead of ftp. The reason is that you can set up a trust relationship between the two systems using SSH keys and forgo having to script the login.
So, the system with the >10GB file is system 1.
The system that you're moving the files to is system 2.
On System 1:
$ ssh-keygen -t rsa -b 4096
(accept the default file location and don't enter a passphrase)
$ cd ~/.ssh
$ ssh system2 "mkdir -p ~/.ssh && chmod 700 ~/.ssh"
(enter the password for system2)
$ scp id_rsa.pub system2:~/.ssh/authorized_keys
(enter the password for system2. Note that this overwrites any existing authorized_keys on system2; if there's already one there, append the key to it instead.)
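Before relying on this in a script, you can verify that key-based login actually works. With BatchMode set, ssh fails instead of falling back to a password prompt:
$ ssh -o BatchMode=yes system2 echo key auth OK
(if this prints "key auth OK" without prompting, the trust relationship is in place)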
Now you will be able to ssh and scp from system1 to system2 without having to give a password. Without going into the details of public key cryptography, the connection is still secure even though no password is exchanged. As long as you keep the private key on system1 private, you can rely on the security of the connection. It's certainly far more secure than scripting a username and password to be sent in cleartext, which is what FTP does.
Now that you don't have to worry about authentication, you can combine a simple find with scp.
$ find /FILES -size +20971519 -exec scp {} system2:/directory/path/ \; -exec rm {} \;
Each -exec acts as a test, so the rm only runs for a file if the scp before it exited successfully; a failed transfer won't delete the source. Note that -size counts 512-byte blocks by default, so +20971519 matches files of roughly 10GiB and larger; GNU find also accepts the more readable -size +10G.
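If you'd rather watch or log the transfers, the same logic works as a loop. A minimal sketch, assuming bash and a find that supports -print0 (the destination path is a placeholder):
$ find /FILES -size +20971519 -print0 |
>   while IFS= read -r -d '' f; do
>     # copy, then remove the source only if the copy succeeded
>     scp "$f" system2:/directory/path/ && rm -- "$f"
>   done
The -print0 / read -d '' pairing keeps filenames with spaces intact, and the && mirrors the chained -exec behavior above.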
This will not recreate the directory structure from system1 on system2; every file lands directly in /directory/path/. If you need to preserve the directory structure, take a look at rsync.
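A minimal sketch of the rsync approach, assuming a reasonably recent rsync on both ends (--min-size and --remove-source-files are the relevant options; the paths are placeholders):
$ rsync -av --min-size=10G --remove-source-files /FILES/ system2:/directory/path/
rsync runs over ssh by default, so it reuses the keys you just set up; it recreates the tree under the destination, and --remove-source-files only deletes a file from system1 after it has transferred successfully.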
One thing you'll have to be careful about is transferring and removing a file that is still being written. If a file is >10G now but will eventually grow to 50G, you don't want to start the transfer and delete it from system1 until it has been written completely. There's no universal fix; it really boils down to how these files arrive on the first system in the first place. One common workaround is shown below.
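If the files stop changing once they're complete, one option is to only pick up files that haven't been modified recently. A sketch assuming GNU find and a 60-minute quiet period (tune the threshold to however the files arrive):
$ find /FILES -size +20971519 -mmin +60 -exec scp {} system2:/directory/path/ \; -exec rm {} \;
-mmin +60 matches files last modified more than 60 minutes ago, so anything still being written is skipped on this pass and picked up on a later one.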