Top Forums UNIX for Advanced & Expert Users sending larger files via ftp Post 302074440 by matrixmadhan on Tuesday 23rd of May 2006 10:39:26 AM
sending larger files via ftp

[solaris 5.9]

hi all,

I am looking for ways to make FTP transfers more efficient by tuning the TCP parameters.

currently,
tcp_max_buf is 1 MB
tcp_xmit_hiwat is 48 KB

Say I need to transmit multiple 2 GB files from the UNIX server to a mainframe system:
will increasing the window size or the send buffer size of the current TCP/IP configuration have an effect on the time taken to transmit each file?
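As a sanity check on whether the window size matters here: TCP can keep at most one send window of unacknowledged data in flight per round trip, so single-stream throughput is bounded by window / RTT. A minimal sketch (the 30 ms RTT is an assumed figure; measure the real round-trip time to the mainframe):

```python
# Back-of-the-envelope estimate of how the TCP send window bounds
# single-stream throughput. The 30 ms RTT below is an ASSUMED value;
# substitute the measured round-trip time to your mainframe.

def max_throughput(window_bytes: float, rtt_s: float) -> float:
    """At most one window of unacked data per round trip,
    so throughput <= window / RTT."""
    return window_bytes / rtt_s

def transfer_time(file_bytes: float, window_bytes: float, rtt_s: float) -> float:
    """Lower bound on transfer time when the window is the bottleneck."""
    return file_bytes / max_throughput(window_bytes, rtt_s)

RTT = 0.030                  # assumed 30 ms round-trip time
FILE = 2 * 1024**3           # one 2 GB file

for window in (48 * 1024, 1024**2):   # current 48 KB vs. the 1 MB tcp_max_buf cap
    rate = max_throughput(window, RTT)
    print(f"window {window // 1024:5d} KB -> "
          f"<= {rate / 1e6:6.1f} MB/s, "
          f">= {transfer_time(FILE, window, RTT):7.0f} s per file")
```

With the assumed 30 ms RTT, a 48 KB window caps the stream at roughly 1.6 MB/s, while a 1 MB window lifts the ceiling to about 35 MB/s, so raising tcp_xmit_hiwat toward tcp_max_buf can matter on anything but a very low-latency path; on a LAN with sub-millisecond RTT the difference largely disappears. On Solaris these values can be changed on the fly with ndd -set /dev/tcp, which affects new connections without a reboot but does not persist across one unless added to a startup script.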

After setting new values in the configuration, is a reboot required?

Is there a maximum file size that can be transmitted via FTP?

Otherwise, are there any other ways to transmit large files efficiently from the UNIX server to the mainframe?

1) One option could be compression, but I am not sure that an uncompress binary built on the same adaptive Lempel-Ziv coding is available on the mainframe system.

2) Setting the same buffer size for send and receive would avoid unnecessary fragmentation across the network before sending. But I cannot exercise any control over the receive buffer size of the mainframe system.
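For scripting the transfers themselves from the UNIX side, here is a minimal sketch using Python's ftplib; the host, credentials, and path are placeholders, and the larger-than-default blocksize only changes how much is handed to the socket per write -- the kernel buffer settings above still govern what is in flight:

```python
from ftplib import FTP

def send_file(host: str, user: str, password: str, path: str,
              blocksize: int = 1 << 20) -> None:
    """Upload one file in binary (image) mode.

    Binary mode matters when the receiver is a mainframe: ASCII mode
    would translate line endings and can corrupt non-text data.
    Host, user, password, and path are placeholders for your site.
    """
    ftp = FTP(host)
    ftp.login(user, password)
    with open(path, "rb") as fp:
        # storbinary reads and sends `blocksize` bytes at a time.
        ftp.storbinary("STOR " + path, fp, blocksize=blocksize)
    ftp.quit()
```

As for a size ceiling: the FTP protocol itself imposes none; in practice the limit comes from the filesystem and from 32-bit offset handling in older clients and servers, so 2 GB transfers are worth testing end to end before relying on them.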

Thanks,
Mad.
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Sending email w/ ftp log as attachment

Can this be done? Code samples welcome and encouraged. (2 Replies)
Discussion started by: idesaj

2. Shell Programming and Scripting

FTP repeat sending files

Hi everyone. I wrote a ftp script for sending files. while read FNAME do # Begin ftp ftp -i -n $HOST << END user $USER $PASSWD put $FNAME $FNAME quit END # End ftp done < ftp_sending_list.dat HOST, USER and PASSWD are my account data. This script quits ftp for many times and this is... (1 Reply)
Discussion started by: Euler04

3. HP-UX

Ftp cannot put file larger than 64kb

Hi gurus, I have a problem with ftp access. The first 2 test e.g. Test A & Test B was successful with the file size 64kb (800++ numbers). The third test with file size 120kb was failed. The error is "Netout :Connection reset by peer". No password entered manually since the test run from the... (3 Replies)
Discussion started by: yeazas

4. AIX

Tar files larger than 2GB

Hi, Does anyone know if it is possible to tar files larger than 2GB? The reason being is they want me to dump a single file (which is around 20GB) to a tape drive and they will restore it on a Solaris box. I know the tar have a limitation of 2GB so I am thinking of a way how to overcome this.... (11 Replies)
Discussion started by: depam

5. UNIX for Dummies Questions & Answers

Using UNIX Commands with Larger number of Files

Hello Unix Gurus, I am new to Unix so need some help on this. I am using the following commands: 1) mv -f Inputpath/*. outputpath 2) cp Inputpath/*. outputpath 3) rm -rf somepath/* 4) Find Inputpath/*. Now I get the following error with... (18 Replies)
Discussion started by: pchegoor

6. UNIX for Dummies Questions & Answers

7z command for files larger than 4GB ( unzip doesn't work)

My unzip command doesn't work for files that are greater than 4GB. Consider my file name is unzip -p -a filename.zip, the command doesn't work since the size of the file is larger. I need to know the corresponding 7z command for the same. This is my Unix shell script program: if then ... (14 Replies)
Discussion started by: chandraprakash

7. Shell Programming and Scripting

Backingup larger files with TAR command

I need to backup my database but the files are very large and the TAR command will not let me. I searched aids and found that I could do something with the mknod, COMPRESS and TAR command using them together. I appreciate your help. (10 Replies)
Discussion started by: frizcala

8. UNIX for Dummies Questions & Answers

Split larger files into smaller ones with Column names

Hi, I have one large files of 100000 rows with header column. Eg: Emp Code, Emp Name 101,xxx 102,YYY 103,zzz ... ... I want to split the files into smaller files with only 30000 rows each..File 1,2 and 3 must have 30000 rows and file 4 must contain 10000 rows. But the column... (1 Reply)
Discussion started by: Nivas

9. UNIX for Beginners Questions & Answers

Need to select files larger than 500Mb from servers

I need help modifying these two scripts to do the following: - print files in (MB) instead of (KB) - only select files larger than 500MB -> these will be mailed out daily - Select all files regardless of size all in (MB) -> these will be mailed out once a week this is what i have so far and... (5 Replies)
Discussion started by: donpasscal

10. UNIX for Beginners Questions & Answers

Help with Expect script for pulling log files size larger than 500Mb;

I am new at developing EXPECT scripts. I'm trying to create a script that will automatically connect to a several UNIX (sun solaris and HPUX) database server via FTP and pull the sizes of the listener/alert log files from specified server directory on the remote machines. 1. I want the script... (7 Replies)
Discussion started by: mikebantor
compress(1)						      General Commands Manual						       compress(1)

Name
       compress, uncompress, zcat - compress and expand data

Syntax
       compress [ -f ] [ -v ] [ -c ] [ -b bits ] [ name ...  ]
       uncompress [ -f ] [ -v ] [ -c ] [ name ...  ]
       zcat [ name ...	]

Description
       The compress command reduces the size of the named files using adaptive Lempel-Ziv coding.  Whenever possible, each file is replaced
       by one with the extension .Z, while keeping the same ownership, modes, and access and modification times.  If no files are
       specified, the standard input is compressed to the standard output.  Compressed files can be restored to their original form using
       uncompress or zcat.

       The -f option will force compression of name, even if it does not actually shrink name, or if the corresponding name.Z file
       already exists.  If the -f option is omitted, the user is asked whether an existing name.Z file should be overwritten (unless
       compress is run in the background).

       The -c (cat) option makes compress and uncompress write to the standard output; neither alters any files.

       The compress command uses the modified Lempel-Ziv algorithm.  Common substrings in the file are first replaced by 9-bit codes 257
       and up.  When code 512 is reached, the algorithm switches to 10-bit codes and continues to use more bits until the limit specified
       by the -b flag is reached (default 16).  The bits must be between 9 and 16.  The default can be changed in the source to allow
       compress to be run on a smaller machine.

       After the bits limit is attained, compress periodically checks the compression ratio.  If the ratio is increasing, compress
       continues to use the existing code dictionary.  However, if the compression ratio decreases, compress discards the table of
       substrings and rebuilds it from scratch.  This allows the algorithm to adapt to the next block of the file.
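The scheme described above is LZW. As an illustration only, here is a toy Python version of the string-table mechanics -- it emits plain integer codes rather than packed 9-to-16-bit output, and it omits the CLEAR/ratio-reset logic, so it is not compress(1)-compatible:

```python
def lzw_compress(data: bytes) -> list:
    """Toy LZW in the spirit of compress(1): byte values take codes
    0-255, code 256 is reserved (CLEAR in the real tool), and new
    substrings get codes 257 and up."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 257
    out, w = [], b""
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                      # grow the current match
        else:
            out.append(table[w])        # emit code for longest match
            table[wc] = next_code       # learn the new substring
            next_code += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Rebuild the same table on the fly from the code stream."""
    if not codes:
        return b""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 257
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in table:
            entry = table[k]
        elif k == next_code:            # code not yet in table: cScSc case
            entry = w + w[:1]
        else:
            raise ValueError("bad compressed code")
        out.append(entry)
        table[next_code] = w + entry[:1]
        next_code += 1
        w = entry
    return b"".join(out)
```

Repetitive input shows the effect: lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT") emits fewer codes than there are input bytes, because repeated substrings collapse to single table entries.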

       Note that the -b flag is omitted for uncompress, since the bits parameter specified during compression is encoded within the output
       along with a magic number that ensures that neither decompression of random data nor recompression of compressed data is attempted.

       How much each file is compressed depends on the size of the input, the number of bits per code, and the distribution of common sub-
       strings.  Typically, text such as source code or English is reduced by 50-60%.  Compression is  generally  much	better	than  that
       achieved by Huffman coding or adaptive Huffman coding, and takes less time to compute.

       The -v option displays the percent reduction of each file.

       If  an  error  occurs,  exit  status is 1.  However, if the last file was not compressed because it became larger, the status is 2.
       Otherwise, the status is 0.

Options
       -f   Forces compression of name.

       -c   Makes compress/uncompress write to the standard output.

       -b   Specifies the allowable bits limit.  The default is 16.

       -v   Displays the percent reduction of each file.

Diagnostics
       Usage: compress [-fvc] [-b maxbits] [file ...]
       Invalid options were specified on the command line.

       Missing maxbits
       Maxbits must follow -b.

       file: not in compressed format
       The file specified to uncompress has not been compressed.

       file: compressed with xx bits, can only handle yy bits
       The file was compressed by a program that could deal with more bits than the compress code on this machine.  Recompress the file
       with a smaller -b value.

       file: already has .Z suffix -- no change
       The file is assumed to be compressed already.  Rename the file and try again.

       file already exists; do you wish to overwrite (y or n)?
       Type y if you want the output file to be replaced; type n if you do not.

       uncompress: corrupt input
       A SIGSEGV violation was detected which usually means that the input file is corrupted.

       Compression: xx.xx%
       Percent of the input saved by compression.  (For the -v option only.)

       -- not a regular file: unchanged
       If the input file is not a regular file (for example, a directory), it remains unchanged.

       -- has xx other links: unchanged
       The input file has links; it is left unchanged.  See ln(1) for more information.

       -- file unchanged
       No savings is achieved by compression.  The input remains unchanged.

Restrictions
       Although compressed files are compatible between machines with large memory, -b12 should be used for file transfer to architectures
       with a small process data space (64KB or less).

								       RISC							       compress(1)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.