Full Discussion: Creating > 2GB file
Operating Systems > AIX
Post 302214055 by markper on Friday 11th of July 2008 07:36:46 PM
Creating > 2GB file

I am trying to execute a database dump to a file, but can't seem to get past the 2GB file size limit. I have tried setting the user's file size limit to -1 (unlimited), but no luck.
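
On AIX there are usually two separate things to check: the per-user fsize limit and whether the target filesystem was created with large-file support (classic JFS filesystems default to a 2GB ceiling; JFS2 does not). The sketch below is a minimal checklist under those assumptions; the user name dbuser and the mount point /backup are placeholders, and the dump utility itself must also be built with large-file support.

Code:
# Minimal AIX checklist sketch; dbuser and /backup are hypothetical names.

# 1. Current shell's file size limit (reported in 512-byte blocks);
#    "unlimited" is what we want to see.
ulimit -f

# 2. Raise the limit permanently for the dump user. fsize is stored in
#    /etc/security/limits and -1 means unlimited; log out and back in
#    (or restart the database) so the new limit takes effect.
chuser fsize=-1 dbuser

# 3. Confirm the target filesystem can hold files larger than 2GB.
#    For classic JFS the "bf" (big file) flag must be true; JFS2 is
#    large-file capable by default.
lsfs -q /backup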
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

File size exceeding 2GB

I am working on HP-UX. I have a 600 MB file in compressed form. During decompression, when the file size reaches 2GB, decompression aborts. What should be done? (3 Replies)
Discussion started by: Nadeem Mistry
3 Replies
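
For what it's worth, on HP-UX the usual suspects in a case like this are the filesystem's largefiles option and the shell's file size limit rather than the decompressor itself. A rough sketch of what to check, assuming the output lands on a VxFS filesystem mounted at /data (a hypothetical path):

Code:
# Report whether the filesystem allows files larger than 2GB;
# the output shows either "largefiles" or "nolargefiles".
fsadm -F vxfs /data

# Enable large files on an existing VxFS filesystem if needed.
fsadm -F vxfs -o largefiles /data

# Rule out a per-process file size limit as well.
ulimit -f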

2. Programming

C++ Problem, managing >2Gb file

My C++ program returns a 'Disk Full' message when I try to handle a file larger than 2GB. The process is very simple: based on a TXT file, it combines the information into another temporary file (which triggers the error) in order to fill up a database. My FS, during the process, reaches 40%...... (4 Replies)
Discussion started by: ASOliveira
4 Replies

3. UNIX for Advanced & Expert Users

Problem creating files greater than 2GB

With the C code I am able to create files greater than 2GB if I use the 64-bit compile option -D_FILE_OFFSET_BITS=64; there I use the function fprintf to write into the file. But when I use C++ and ofstream, the file is getting truncated when the size grows beyond 2GB. Is there any special... (1 Reply)
Discussion started by: bobbyjohnz
1 Reply
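
A hedged sketch of the usual large-file build recipe: ask the platform for its large-file flags with getconf and pass them to the C++ compile as well. Whether -D_FILE_OFFSET_BITS=64 alone fixes std::ofstream depends on how the C++ runtime library was built; if it does not, one workaround is to keep the already-working C stdio calls for the large file. The compiler name g++ and the file writer.cpp are placeholders for whatever is actually in use.

Code:
# Ask the platform which flags enable 64-bit file offsets.
LFS_CFLAGS=$(getconf LFS_CFLAGS)      # typically includes -D_FILE_OFFSET_BITS=64
LFS_LDFLAGS=$(getconf LFS_LDFLAGS)
LFS_LIBS=$(getconf LFS_LIBS)

# Build the C++ program with the same large-file flags the C build uses.
g++ $LFS_CFLAGS -o writer writer.cpp $LFS_LDFLAGS $LFS_LIBS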

4. Shell Programming and Scripting

efficiently split a 2GB text file into two

Can an expert kindly write an efficient Linux ksh script that will split a large 2 GB text file into two? Here are a couple of sample records from that text file: "field1","field2","field3",11,22,33,44 "TG","field2b","field3b",1,2,3,4 The above rows are delimited by commas. This script is to... (2 Replies)
Discussion started by: ihot
2 Replies
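
Since the records are one per line, a line-based split keeps every record intact. A minimal ksh sketch, with big.txt as a placeholder for the input file name:

Code:
#!/usr/bin/ksh
infile=big.txt

total=$(wc -l < "$infile")            # total number of records
half=$(( (total + 1) / 2 ))           # first half gets the extra line if odd

head -n "$half" "$infile" > part1.txt
tail -n +"$(( half + 1 ))" "$infile" > part2.txt

A single split -l "$half" "$infile" part_ does the same job if the output names part_aa and part_ab are acceptable.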

5. UNIX for Dummies Questions & Answers

MAX file size limited to 2GB

Hi All, we are running an HP rp7400 box with HP-UX 11iv1. Recently we changed 3 kernel parameters: a) msgseg from 32560 to 32767, b) msgmnb from 65536 to 65535, c) msgssz from 128 to 256. Then we noticed that every application debug file grows up to 2GB and then stops. So far we did not... (1 Reply)
Discussion started by: mhbd
1 Reply

6. Programming

Can't create file bigger than 2GB with my application

Hi, I've created a simple application that is supposed to fill a file with messages up to the size I pass as a parameter. The problem is that once the file reaches 2GB in size, it stops growing. The flow of the application, for what it's worth, is as follows: while ( bytes written <... (7 Replies)
Discussion started by: emitrax
7 Replies

7. Linux

unzipping file > 2gb

I am not able to unzip a file greater than 2GB. Any suggestions on how to do that in Linux? Regards, Manoj (5 Replies)
Discussion started by: manoj.solaris
5 Replies
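
The stock Info-ZIP unzip only gained large-file/Zip64 support in version 6.0, so the first thing to check is the installed version; failing that, other extractors usually cope. A sketch, with bigarchive.zip as a placeholder name:

Code:
# Which unzip is installed? 6.0 or later handles archives and members > 2GB.
unzip -v | head -1

# With a new enough unzip:
unzip bigarchive.zip

# Otherwise p7zip (or Java's "jar xf") generally extracts Zip64 archives.
7z x bigarchive.zip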

8. UNIX for Advanced & Expert Users

How to create a file more than 2GB

Hi, I am executing a SQL query and the output is more than 2GB, hence the process is failing. How can I create a file larger than 2GB? Thanks, Risshanth (1 Reply)
Discussion started by: risshanth
1 Reply
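
When the per-file limit itself cannot be raised, another option is to keep any single output file under 2GB by streaming the query output through split, or by compressing it on the fly. A sketch, where run_query.sh is a hypothetical wrapper that writes the result to stdout:

Code:
# Break the output into pieces that stay below the 2GB ceiling.
./run_query.sh | split -b 1500m - result_part_

# Or compress on the fly; the compressed file is often well under 2GB.
./run_query.sh | gzip -c > result.out.gz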

9. HP-UX

2GB file size limit

Greetings, I'm attempting to dump a filesystem from a RHEL5 Linux server to a VXFS filesystem on an HP-UX server. The VXFS filesystem is large file enabled and I've confirmed that I can copy/scp a file >2GB to the filesystem. # fsadm -F vxfs /os_dumps largefiles # mkfs -F vxfs -m... (12 Replies)
Discussion started by: bkimura
12 Replies

10. UNIX for Dummies Questions & Answers

Delete the file which crossed 2GB

Hi, I want to create a bash script that deletes a specified file once it exceeds 2GB, taking a backup of it before doing so. Please help me with how to do this; I use a RHEL5 server. (22 Replies)
Discussion started by: Rahulne25
22 Replies
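
A minimal bash sketch of the idea on RHEL5: find files over 2GB under a directory, keep a compressed copy, then remove the original. The paths /var/log and /backup are placeholders.

Code:
#!/bin/bash
backup_dir=/backup
mkdir -p "$backup_dir"

# -size +2097152k means "larger than 2GB" expressed in 1KB blocks.
find /var/log -type f -size +2097152k | while read -r f; do
    # Keep a compressed copy before deleting the original.
    gzip -c "$f" > "$backup_dir/$(basename "$f").gz" && rm -f "$f"
done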
MYDUMPER(1)                          mydumper                          MYDUMPER(1)

NAME
       mydumper - multi-threaded MySQL dumping

SYNOPSIS
       mydumper [OPTIONS]

DESCRIPTION
       mydumper is a tool used for backing up MySQL database servers much
       faster than the mysqldump tool distributed with MySQL. It also has the
       capability to retrieve the binary logs from the remote server at the
       same time as the dump itself. The advantages of mydumper are:

       o Parallelism (hence, speed) and performance (avoids expensive
         character set conversion routines, efficient code overall)
       o Easier to manage output (separate files for tables, dump metadata,
         etc, easy to view/parse data)
       o Consistency - maintains snapshot across all threads, provides
         accurate master and slave log positions, etc
       o Manageability - supports PCRE for specifying database and table
         inclusions and exclusions

OPTIONS
       The mydumper tool has several available options:

       --help              Show help text
       --host, -h          Hostname of MySQL server to connect to (default
                           localhost)
       --user, -u          MySQL username with the correct privileges to
                           execute the dump
       --password, -p      The corresponding password for the MySQL user
       --port, -P          The port for the MySQL connection. Note: for
                           localhost TCP connections use 127.0.0.1 for --host.
       --socket, -S        The UNIX domain socket file to use for the
                           connection
       --database, -B      Database to dump
       --table-list, -T    A comma separated list of tables to dump
       --threads, -t       The number of threads to use for dumping data,
                           default is 4. Note: other threads are used in
                           mydumper; this option does not control these.
       --outputdir, -o     Output directory name, default is
                           export-YYYYMMDD-HHMMSS
       --statement-size, -s
                           The maximum size for an insert statement before
                           breaking into a new statement, default 1,000,000
                           bytes
       --rows, -r          Split table into chunks of this many rows, default
                           unlimited
       --compress, -c      Compress the output files
       --compress-input, -C
                           Use client protocol compression for connections to
                           the MySQL server
       --build-empty-files, -e
                           Create empty dump files if there is no data to dump
       --regex, -x         A regular expression to match against database and
                           table
       --ignore-engines, -i
                           Comma separated list of storage engines to ignore
       --no-schemas, -m    Do not dump schemas with the data
       --long-query-guard, -l
                           Timeout for long query execution in seconds,
                           default 60
       --kill-long-queries, -k
                           Kill long running queries instead of aborting the
                           dump
       --version, -V       Show the program version and exit
       --verbose, -v       The verbosity of messages. 0 = silent, 1 = errors,
                           2 = warnings, 3 = info. Default is 2.
       --binlogs, -b       Get the binlogs from the server as well as the dump
                           files
       --daemon, -D        Enable daemon mode
       --snapshot-interval, -I
                           Interval between each dump snapshot (in minutes),
                           requires --daemon, default 60 (minutes)
       --logfile, -L       A file to log mydumper output to instead of console
                           output. Useful for daemon mode.
       --no-locks, -k      Do not execute the temporary shared read lock.
                           Warning: this will cause inconsistent backups.

AUTHOR
       Andrew Hutchings

COPYRIGHT
       2011, Andrew Hutchings

0.5.1                                June 09, 2012                    MYDUMPER(1)
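
Tying this back to the original question: a parallel dump tool such as mydumper also makes it easy to keep each output file small by splitting tables into row chunks and compressing them. Based only on the options documented above, a hypothetical invocation might look like this (host, credentials and database name are placeholders):

Code:
mydumper --host 127.0.0.1 --user backup --password secret \
         --database mydb --threads 8 --compress \
         --rows 500000 --outputdir /backups/mydb-$(date +%Y%m%d)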