UNIX for Advanced & Expert Users - File comparisons for huge data files (around 60G) - need an optimized and the best way to do this. Post 303025153 by kartikirans on Thursday 25th of October 2018 10:19:48 AM
Old 10-25-2018
Thanks for the quick reply, Entire line...
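
For the problem this thread is about (comparing two huge delimited files line by line), one memory-safe approach is an external sort of both files followed by comm(1); sort(1) does an on-disk merge sort, so the file size is limited by free disk space rather than RAM. A minimal sketch, with file_a.dat and file_b.dat as hypothetical names:

    export LC_ALL=C                               # byte-wise collation: faster and reproducible
    sort -T /var/tmp file_a.dat > file_a.sorted   # -T: put sort's temp files on a roomy filesystem
    sort -T /var/tmp file_b.dat > file_b.sorted
    comm -3 file_a.sorted file_b.sorted > differences.txt   # keep only lines unique to one file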
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

search and grab data from a huge file

folks, In my working directory, there are multiple large files which contain only one line each. The line is too long to use "grep", so any help? For example, if I want to find whether these files contain a string like "93849", what command should I use? Also, there is oder_id number... (1 Reply)
Discussion started by: ting123
1 Replies
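
For a search like the one in discussion 1, where each file is a single line too long for some grep builds, awk's index() tests for a fixed substring, and GNU awk copes with very long records. A minimal sketch (the *.dat glob is an assumption; "93849" is the string from the post):

    for f in ./*.dat; do
        # awk exits 0 only if the substring occurs somewhere in the file
        if awk -v pat="93849" 'index($0, pat) { found = 1; exit } END { exit !found }' "$f"; then
            printf '%s contains the string\n' "$f"
        fi
    done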

2. Shell Programming and Scripting

How to extract data from a huge file?

Hi, I have a huge file of bibliographic records in some standard format. I need a script to do a repeatable task as follows: 1. Create folders named after the strings starting with "item_*" in the input file 2. Create a file "contents" in each folder containing "license.txt(tab... (5 Replies)
Discussion started by: srsahu75
5 Replies
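
The post in discussion 2 is truncated, but the folder-creation half can be sketched as follows, assuming the identifiers look like item_XXXX and sit somewhere in each line of records.txt (a hypothetical name); the real layout of the "contents" file is not shown in the post, so it is left as a placeholder:

    grep -o 'item_[A-Za-z0-9_]*' records.txt | sort -u |
    while IFS= read -r item; do
        mkdir -p "$item"                                   # one folder per item_* string
        printf 'license.txt\t...\n' > "$item/contents"     # placeholder line; adjust to the real format
    done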

3. Shell Programming and Scripting

insert a header in a huge data file without using an intermediate file

I have a file with extracted data, and need to insert a header with a constant string, say: H|PayerDataExtract. If I use sed, I have to redirect the output to a separate file, like sed 'sed commands' ExtractDataFile.dat > ExtractDataFileWithHeader.dat; the same is true for awk and... (10 Replies)
Discussion started by: deepaktanna
10 Replies
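
A common answer to discussion 3 is GNU sed's in-place insert. Strictly speaking sed -i still writes a hidden temporary file, so it avoids a visible intermediate file rather than the extra write. A minimal sketch using the names from the post:

    # GNU sed: insert the constant header as the new first line, editing "in place"
    sed -i '1i H|PayerDataExtract' ExtractDataFile.dat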

4. Shell Programming and Scripting

Split huge data into a few different files?!

Input file data contents: >seq_1 MSNQSPPQSQRPGHSHSHSHSHAGLASSTSSHSNPSANASYNLNGPRTGGDQRYRASVDA >seq_2 AGAAGRGWGRDVTAAASPNPRNGGGRPASDLLSVGNAGGQASFASPETIDRWFEDLQHYE >seq_3 ATLEEMAAASLDANFKEELSAIEQWFRVLSEAERTAALYSLLQSSTQVQMRFFVTVLQQM ARADPITALLSPANPGQASMEAQMDAKLAAMGLKSPASPAVRQYARQSLSGDTYLSPHSA... (7 Replies)
Discussion started by: patrick87
7 Replies
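
Records shaped like the sample in discussion 4 (FASTA-style, each introduced by a ">seq_N" header) are usually split with awk, one output file per record. A minimal sketch, assuming the input is called input.fa:

    # start a new output file at every '>' header; e.g. '>seq_1' becomes seq_1.fa
    awk '/^>/ { if (out) close(out); out = substr($1, 2) ".fa" } { print > out }' input.fa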

5. Shell Programming and Scripting

Splitting the Huge file into several files...

Hi I have to write a script to split a huge file into several pieces. The file is pipe ("|") delimited. A data sample is: 6625060|1420215|07308806|N|20100120|5572477081|+0002.79|+0000.00|0004|0001|......... (3 Replies)
Discussion started by: lakteja
3 Replies
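
If the pieces in discussion 5 only need to be of roughly equal size rather than grouped on a key, split(1) is enough. A minimal sketch; the 1,000,000-line chunk size and the file names are assumptions, and -d (numeric suffixes) is a GNU extension:

    # produces part_00, part_01, ... each holding 1,000,000 whole lines
    split -l 1000000 -d bigfile.dat part_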

6. Shell Programming and Scripting

Problem running Perl Script with huge data files

Hello Everyone, I have a perl script that reads two types of data files (txt and XML). These data files are huge and large in number. I am using something like this : foreach my $t (@text) { open TEXT, $t or die "Cannot open $t for reading: $!\n"; while(my $line=<TEXT>){ ... (4 Replies)
Discussion started by: ad23
4 Replies

7. Shell Programming and Scripting

Three Different Files: Huge Data Comparison Problem.

I got three different files: Part of File 1 ARTPHDFGAA . . Part of File 2 ARTGHHYESA . . Part of File 3 ARTPOLYWEA . . (4 Replies)
Discussion started by: patrick87
4 Replies
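
If the three files in discussion 7 are line-aligned (record N of each file describes the same item) and the records themselves contain no "|", a paste/awk pipeline reports mismatching records without holding the files in memory. A sketch with hypothetical file names:

    # print the record number and all three values whenever they are not identical
    paste -d'|' file1 file2 file3 |
    awk -F'|' '$1 != $2 || $1 != $3 { print NR ": " $0 }'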

8. Shell Programming and Scripting

Help - counting delimiters in a huge file and splitting the data into 2 files

I’m new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3GB each). The delimiter is a semicolon “;”. Here is a sample of 5 lines in the file: Name1;phone1;address1;city1;state1;zipcode1 Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
Discussion started by: lv99
7 Replies
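
The filtering in discussion 8 (a good record has exactly six semicolon-separated fields) maps directly onto awk's NF. A minimal sketch; the output file names are assumptions:

    # 6 fields -> good_records.txt, anything else -> bad_records.txt
    awk -F';' 'NF == 6 { print > "good_records.txt"; next }
                        { print > "bad_records.txt" }' hugefile.txt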

9. UNIX for Dummies Questions & Answers

File comparison of huge files

Hi all, I hope you are well. I am very happy to see your contributions and eager to become part of them. I have the following question. I have two huge files to compare (almost 3GB each). The files are simulation outputs. The format of the files is as below. For a clear picture, please see... (9 Replies)
Discussion started by: kaaliakahn
9 Replies
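
For a first pass at the 3 GB comparison in discussion 9, the stream-oriented commands below avoid loading either file into memory; whether they fit depends on the simulation output format, which the post does not show. A sketch with hypothetical names (the comm line uses bash process substitution):

    cmp run_a.out run_b.out                                   # cheapest way to learn whether they differ at all
    comm -3 <(sort run_a.out) <(sort run_b.out) > diffs.txt   # records present in only one of the files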

10. UNIX for Advanced & Expert Users

Need optimized shell/awk script to aggregate (sum) all the columns of a huge data file

I need an optimized shell/awk script to aggregate (sum) all the columns of a huge data file. The file delimiter is "|". I need the sum of every column, reported with its column number, i.e. an aggregation (summation) for each column. The file has no header. Output like below - Column 1 : Total Column 2 : Total ... ...... (2 Replies)
Discussion started by: kartikirans
2 Replies
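
The aggregation in discussion 10 (the poster's earlier thread) is a one-pass awk job. A minimal sketch for a header-less, "|"-delimited file; the file name and the output wording are assumptions:

    awk -F'|' '
        { for (i = 1; i <= NF; i++) sum[i] += $i; if (NF > maxnf) maxnf = NF }
        END { for (i = 1; i <= maxnf; i++) printf "Column %d : %s\n", i, sum[i] }
    ' datafile.dat
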
bup(1)							      General Commands Manual							    bup(1)

NAME
       bup - Backup program using rolling checksums and git file formats

SYNOPSIS
       bup [global options...] <command> [options...]

DESCRIPTION
       bup is a program for making backups of your files using the git file format. Unlike git(1) itself, bup is optimized for handling huge
       data sets including individual very large files (such as virtual machine images). However, once a backup set is created, it can still
       be accessed using git tools.

       The individual bup subcommands appear in their own man pages.

GLOBAL OPTIONS
       --version
              print bup's version number. Equivalent to bup-version(1)

       -d, --bup-dir=BUP_DIR
              use the given BUP_DIR parameter as the bup repository location, instead of reading it from the $BUP_DIR environment variable or
              using the default ~/.bup location.

COMMONLY USED SUBCOMMANDS
       bup-fsck(1)     Check backup sets for damage and add redundancy information
       bup-ftp(1)      Browse backup sets using an ftp-like client
       bup-fuse(1)     Mount your backup sets as a filesystem
       bup-help(1)     Print detailed help for the given command
       bup-index(1)    Create or display the index of files to back up
       bup-on(1)       Backup a remote machine to the local one
       bup-restore(1)  Extract files from a backup set
       bup-save(1)     Save files into a backup set (note: run "bup index" first)
       bup-web(1)      Launch a web server to examine backup sets

RARELY USED SUBCOMMANDS
       bup-damage(1)   Deliberately destroy data
       bup-drecurse(1) Recursively list files in your filesystem
       bup-init(1)     Initialize a bup repository
       bup-join(1)     Retrieve a file backed up using bup-split(1)
       bup-ls(1)       Browse the files in your backup sets
       bup-margin(1)   Determine how close your bup repository is to armageddon
       bup-memtest(1)  Test bup memory usage statistics
       bup-midx(1)     Index objects to speed up future backups
       bup-newliner(1) Make sure progress messages don't overlap with output
       bup-random(1)   Generate a stream of random output
       bup-server(1)   The server side of the bup client-server relationship
       bup-split(1)    Split a single file into its own backup set
       bup-tick(1)     Wait for up to one second.
       bup-version(1)  Report the version number of your copy of bup.

SEE ALSO
       git(1) and the README file from the bup distribution.

       The home of bup is at <http://github.com/apenwarr/bup/>.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                                                                                bup(1)
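
As a quick illustration of the workflow the COMMONLY USED SUBCOMMANDS section implies (index first, then save), here is a minimal sketch; the path and the backup set name are examples, not taken from the man page:

    bup init                      # create the default ~/.bup repository
    bup index /home/user          # build the index of files to back up
    bup save -n home /home/user   # save the indexed files as backup set "home"
    bup ls                        # list the backup sets in the repository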