Something like this
Make sure you have all the directories like /log and the file you are trying to copy.
Also make sure the paths to the commands are correct.
If it still doesn't work, paste the output of your script run with tracing enabled, i.e. bash -x your_script
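A minimal illustration of what the -x trace looks like (the /tmp paths below are placeholders standing in for the real script's directories and file):

```shell
# Create the directories and file the script expects, then run the copy
# under "bash -x" so every executed command is echoed with a '+' prefix.
mkdir -p /tmp/demo/log
echo "hello" > /tmp/demo/file.txt
bash -x -c 'cp /tmp/demo/file.txt /tmp/demo/log/' 2> /tmp/demo/trace.out
cat /tmp/demo/trace.out
```

Each line in trace.out shows the command exactly as the shell executed it, which makes a failing cp or a wrong path obvious.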
I'm looking for an FTP client, similar to NCFTP, that lets me run a full FTP command in one line without first starting the client and then typing the FTP commands.
Very simple request, but I can't find any other tool like that. I have downloaded Kermit thinking I could use it to transfer... (3 Replies)
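For what it's worth, both the NcFTP batch tools and curl support this one-shot style; the host and credentials below are placeholders, not real values:

```shell
HOST=ftp.example.com USER=demo PASS=secret   # placeholders only
# NcFTP's batch tools run a whole transfer from one command line:
#   ncftpget -u "$USER" -p "$PASS" "$HOST" /local/dir /remote/file
# curl does the same and is installed almost everywhere:
#   curl -u "$USER:$PASS" "ftp://$HOST/remote/file" -o local-file
echo "one-shot fetch: ftp://$HOST/remote/file"
```

Either command transfers the file and exits, so it drops straight into cron jobs and scripts with no interactive session.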
Hi All,
Does anyone know an FTP client that meets these basic requirements?
- Runs in the console, so command-line based like the normal ftp program rather than GUI based.
- Auto-detects ASCII/binary file formats.
- Runs on Linux.
- Free and open source.
FileZilla should be fine but... (3 Replies)
Folks
I am on a quest....
I am looking for a lightweight FTP client capable of FTPS and/or SFTP that has good audit and logging capabilities without requiring a central server component. My platforms are Linux, Solaris, AIX, and Windows Server.
The kicker is I have found things that meet the... (3 Replies)
I'm currently investigating secure FTP connections from AIX using a shell script. It looks like OpenSSL is already installed, but I don't know which command to use to connect to the secure FTP server.
1. Do I need to install a certificate on AIX?
2. If anyone has already designed a script to connect to a secure... (0 Replies)
Can anyone please suggest a good GUI FTP client that supports SFTP on Solaris?
I badly need it. I do transfers frequently, some of them binary, and I am never sure how to change the mode to binary.
To avoid this and other hassles, it would help if I had a GUI client.
... (6 Replies)
We're just about to start testing a new server build. The application has many FTP/SFTP connections going to different servers. I'd like to temporarily replace the FTP/SFTP binaries with a mock version that will allow us to run all our production code as is, but will prevent the FTPs from actually... (1 Reply)
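One low-tech way to get that behavior (a sketch, not a packaged tool): place a stub earlier in PATH than the real binary that logs each invocation, swallows any scripted commands, and reports success without connecting anywhere:

```shell
# Mock "ftp" that records each call and succeeds without a real transfer.
cat > /tmp/mock-ftp <<'EOF'
#!/bin/sh
echo "ftp invoked with: $*" >> /tmp/mock-ftp.log
cat > /dev/null     # consume any here-doc of ftp commands
exit 0
EOF
chmod +x /tmp/mock-ftp
/tmp/mock-ftp -inv somehost < /dev/null     # smoke-test the stub
```

The log file then gives you an audit trail of every FTP call the production code would have made, while nothing leaves the test box. The same pattern works for sftp.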
Hi,
I have a script that uses ftp to transfer some files from remote to local and vice versa; the script is invoked by cron. For your reference, I am sharing the function as well:
=============================================
fn_FileTransfer_LocalToRemote()
{
set -x... (1 Reply)
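For reference, the usual cron-safe shape for such a function is a here-document feeding a non-interactive ftp session. The sketch below uses hypothetical argument names and is not the poster's actual code:

```shell
# Push one local file to a remote directory over plain FTP.
# Arguments (hypothetical): host, user, password, local file, remote dir.
fn_FileTransfer_LocalToRemote()
{
    set -x                                   # keep the trace in the cron log
    _host=$1 _user=$2 _pass=$3 _src=$4 _dst=$5
    ftp -inv "$_host" <<EOF
user $_user $_pass
binary
cd $_dst
put $_src
bye
EOF
}
```

The -i flag disables interactive prompting and -n suppresses auto-login, both of which matter when no terminal is attached.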
We have RHEL 5.8 in our environment. My query is whether we can implement an FTP server using the vsftpd package and Linux configuration (e.g. setsebool) without using any external FTP clients like FileZilla. I am very confused about this. The FTP functionality that must be present is download &... (3 Replies)
1) Check that the vsftpd service is running:
   service vsftpd status
2) Create the directory that will contain all the rpm packages:
   mkdir -p /var/ftp/pub/Packages
3) Copy the xml file to the Packages folder:
   # cp -arf /mnt/hgfs/share/RHEL_DVD/Packages /var/ftp/pub/Packages
4) Install the 3 required rpms:
   rpm --nodeps -ivh... (0 Replies)
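Collected into one place, the steps above look roughly like this (a sketch only; it must run as root, and the rpm names were truncated in the original post, so step 4 stays a comment):

```shell
# Sketch of the vsftpd repo setup described above (run as root).
setup_ftp_repo()
{
    service vsftpd status || service vsftpd start              # 1) vsftpd up
    mkdir -p /var/ftp/pub/Packages                             # 2) repo dir
    cp -arf /mnt/hgfs/share/RHEL_DVD/Packages /var/ftp/pub/    # 3) copy rpms
    # 4) rpm --nodeps -ivh <the three required rpms, truncated in the post>
}
```

Defining it as a function keeps the steps reviewable before anything is executed on the server.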
Discussion started by: joj123
0 Replies
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                         General Commands Manual                         bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
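The "1 or 2 bits per doubling" observation matches the birthday bound: among N uniformly random hashes, the longest shared prefix is roughly 2*log2(N) bits, which predicts the 45 bits observed for the 11-million-object repository. A quick back-of-the-envelope check:

```shell
# Birthday-bound estimate of the longest shared prefix for n random hashes.
awk 'BEGIN { n = 11000000;                 # objects in the tested repository
             bits = 2 * log(n) / log(2);   # roughly 2 * log2(n)
             printf "%.1f\n", bits }'
```

This prints about 46.8, close to the 45 bits bup margin reported, so the repository behaves like a set of genuinely random SHA-1 values.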
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-                                                                          bup-margin(1)