How to open large datafile in awk? Post 302777513 by busyboy on Friday 8th of March 2013 04:18:26 AM
This is the usual case with large files; it can be worked around with split, which breaks the file into smaller pieces and still lets you work on the full data.

Code:
# cut the file into 900 MB pieces named prefixaa, prefixab, ...
split -b 900m filename prefix

and then

Code:
# run the same field extraction over all of the pieces
awk '{print $8"-"$7"-"$6,$9,$4,$5,$12,$15}' prefix*

should help.
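One caveat: splitting on a byte count can cut a line in half at a piece boundary, so the record that straddles two pieces gets mangled when awk reads them separately. If that matters for your data, split on whole lines instead. A rough sketch, assuming a million lines per piece is workable on your box (the count is just a placeholder):

Code:
# cut the file into pieces of whole lines so no record is split across two pieces
split -l 1000000 filename prefix

# the same awk command runs unchanged over the pieces
awk '{print $8"-"$7"-"$6,$9,$4,$5,$12,$15}' prefix*

# remove the pieces once you are done
rm -f prefix*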
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Reading large file, awk and cut

Hello all, I have 2 files; the first (indexFile1) contains the start offset and length for each record inside the second file. The second file can be very large; each actual record's start offset and length is defined by the entry in indexFile1. Since there are no record separators, wc -l returns 0 for... (1 Reply)
Discussion started by: gio001
1 Replies
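For the index-driven read described above, one simple (if slow) approach is to let dd copy each record out by byte offset. A rough sketch, assuming indexFile1 holds a byte offset and a byte length per line and the data file is named dataFile (both names are placeholders):

Code:
# for every "offset length" pair in the index, copy exactly that slice of the data file
while read offset length; do
    dd if=dataFile bs=1 skip="$offset" count="$length" 2>/dev/null
    printf '\n'     # one extracted record per output line
done < indexFile1

With bs=1 every byte is a separate read, so for really big extractions a larger block size (or a small helper program) would be faster.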

2. Shell Programming and Scripting

AWK Shell Program to Split Large Files

Hi, I need some help creating a tidy shell program with awk or other language that will split large length files efficiently. Here is an example dump: <A001_MAIL.DAT> 0001 Ronald McDonald 01 H81 0002 Elmo St. Elmo 02 H82 0003 Cookie Monster 01 H81 0004 Oscar ... (16 Replies)
Discussion started by: mkastin
16 Replies
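The actual split criterion is cut off above, so purely as an illustration, here is how awk can fan records out to separate files keyed on a column; the last field (H81, H82, ...) is used only as a guess at the intended key:

Code:
# send each record to an output file named after its last field, e.g. A001_MAIL.H81
awk '{ out = "A001_MAIL." $NF; print > out }' A001_MAIL.DAT

Note that with very many distinct keys awk can hit the per-process open-file limit.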

3. Programming

Is there a system call other than 'open' for opening very large files?

Dear all, Inside a C program, I want to open a very big file (about 12 GB) in order to read its content. Here is the code: /* argv contains the path to the file. */ inputFileDescriptor = open(argv, O_RDONLY); if (inputFileDescriptor < 0) { ... (6 Replies)
Discussion started by: dariyoosh
6 Replies
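If that open() is failing on a 32-bit build, the usual culprit is a 32-bit off_t: open() then refuses files over 2 GB with EOVERFLOW, and the fix is to compile for 64-bit file offsets rather than to switch system calls. A sketch of the compile line (the source and program names are placeholders):

Code:
# build with a 64-bit off_t so open()/lseek()/read() can address files larger than 2 GB
cc -D_FILE_OFFSET_BITS=64 -o bigreader bigreader.c

On a 64-bit system the plain open() already handles a 12 GB file.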

4. Shell Programming and Scripting

Help needed for parsing large XML with awk.

My XML structure looks like: <?xml version="1.0" encoding="UTF-8"?> <SearchRepository> <SearchItems> <SearchItem> ... </SearchItem> <SearchItem> ... ... (1 Reply)
Discussion started by: jasonjustice
1 Replies
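Without seeing the full layout it is hard to be specific, but a plain line-oriented awk pass can at least isolate each <SearchItem> block for further processing. A rough sketch (the file name is a placeholder):

Code:
# print every <SearchItem> ... </SearchItem> block, separated by a blank line
awk '/<SearchItem>/   { inblock = 1 }
     inblock          { print }
     /<\/SearchItem>/ { inblock = 0; print "" }' repository.xml

This only works when the tags sit on their own lines; for anything more involved a real XML tool (xmllint, xmlstarlet) is safer.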

5. Shell Programming and Scripting

sed and awk not working on a large record file

Hi All, I have a very large single-record file: abc;date||bcd;efg|......... pqr;stu||record_count;date When I do wc -l on this file it gives me "0" records, because of the missing line feed. My problem is that there is an extra pipe coming at the end of this record like... (6 Replies)
Discussion started by: Gurkamal83
6 Replies
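Since the whole file is one physical line, one workable trick is to make awk treat the '||' delimiter as the record separator, so every logical record comes back as its own record. A rough sketch using the gawk extension that lets RS be a regular expression (the file name is a placeholder):

Code:
# gawk only: split the single physical line into logical records on "||"
gawk 'BEGIN { RS = "\\|\\|" } { print NR ": " $0 }' bigrecord.dat

Plain POSIX awk does not support a multi-character RS, so this form needs gawk; tr or sed could insert real newlines instead.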

6. UNIX for Dummies Questions & Answers

Find common numbers from two very large files using awk or the like

I've got two files that each contain a 16-digit number in positions 1-16. The first file has 63,120 entries all sorted numerically. The second file has 142,479 entries, also sorted numerically. I want to read through each file and output the entries that appear in both. So far I've had no... (13 Replies)
Discussion started by: Scottie1954
13 Replies
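Because both files are already sorted, comm -12 file1 file2 would report the common lines directly; an awk version that does not rely on the sort is sketched below (file names are placeholders and the key is taken from positions 1-16 as described):

Code:
# remember every 16-digit key from the first file, then print matching lines of the second
awk 'NR == FNR { seen[substr($0, 1, 16)]; next }
     substr($0, 1, 16) in seen' file1 file2

With only ~63,000 keys the in-memory array is tiny, so the awk route is comfortable even for the larger file.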

7. Shell Programming and Scripting

awk input large file

Hi... Does anyone know how to feed a huge file (about 25 GB) to awk? If it is a single file then this works: awk '{print}' <hugefile Suppose I have to use something like this: awk 'FNR==NR{x=$0;next}{print $0,x}' hugefile1 hugefile2 then how do I redirect? And is there any provision to assign memory... (12 Replies)
Discussion started by: Akshay Hegde
12 Replies
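No redirection is needed for the two-file form: awk takes both names as ordinary arguments and streams them one after the other, so only what the program stores in variables or arrays stays in memory. A sketch with the placeholder names from the post:

Code:
# awk streams its file operands; the 25 GB files are never loaded whole into memory
awk 'FNR == NR { x = $0; next } { print $0, x }' hugefile1 hugefile2

As written, x ends up holding the last line of hugefile1, exactly as in the snippet above.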

8. Shell Programming and Scripting

awk output yields error: awk:can't open job_name (Autosys)

Good evening, I'm a newbie at unix, especially with awk. From a scheduler program called Autosys I want to extract some data, reading an input file that comprises job names, then formatting the output into columns, for example: 1. This is the input file: $ more MapaRep.txt ds_extra_nikira_usuarios... (18 Replies)
Discussion started by: alexcol
18 Replies
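The message "awk: can't open job_name" means awk was handed job_name as a file operand, which usually happens when the program text is not quoted and the shell splits it into separate words. A minimal sketch of the quoting, reading the job names from MapaRep.txt (the two-column layout is only an illustration):

Code:
# single-quote the whole awk program so the shell passes it to awk as one argument
awk '{ printf "%-40s %s\n", $1, $2 }' MapaRep.txt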

9. Shell Programming and Scripting

Process multiple large files with awk

Hi there, I'm camor and I'm trying to process huge files with bash scripting and awk. I've got a dataset folder with 10 files (16 million rows each, about 600 MB), and I've got a sorted file with all the keys inside. For example: a sample_1 200 a.b sample_2 10 a sample_3 10 a sample_1 10 a... (4 Replies)
Discussion started by: camor
4 Replies
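For the keyed filtering described above, awk can load the key file once and then stream all ten data files in a single run, instead of re-reading the keys for every file. A rough sketch (keyfile and dataset/* are placeholders, and the key is assumed to be the first field, as in the sample rows):

Code:
# load the key list, then keep only the rows of every data file whose first field is a key
awk 'NR == FNR { keys[$1]; next } $1 in keys' keyfile dataset/*

Only the key list is held in memory; the 600 MB data files are streamed line by line.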

10. Shell Programming and Scripting

Reversing large data set with awk?

Hello! I have quite a bit of map data that I have to edit. I originally had a DOS script that would reverse x1, y1 coordinates in order to change the direction of a particular segment in a map file. It worked wonderfully and all was well, but my bossman told me that there is a boatload of nodes... (9 Replies)
Discussion started by: Mothra
9 Replies
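If reversing here means emitting a segment's coordinate lines in the opposite order, awk can buffer the lines and play them back from the end. A rough sketch (the file name is a placeholder; for files too big to buffer, tac does the same job without holding everything in memory):

Code:
# buffer every line of the segment, then print them back to front
awk '{ line[NR] = $0 } END { for (i = NR; i >= 1; i--) print line[i] }' segment.map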
SPLIT(1)						    BSD General Commands Manual 						  SPLIT(1)

NAME
     split -- split a file into pieces

SYNOPSIS
     split -d [-l line_count] [-a suffix_length] [file [prefix]]
     split -d -b byte_count[K|k|M|m|G|g] [-a suffix_length] [file [prefix]]
     split -d -n chunk_count [-a suffix_length] [file [prefix]]
     split -d -p pattern [-a suffix_length] [file [prefix]]

DESCRIPTION
     The split utility reads the given file and breaks it up into files of 1000 lines each (if no options are specified), leaving the file
     unchanged.  If file is a single dash ('-') or absent, split reads from the standard input.

     The options are as follows:

     -a suffix_length
             Use suffix_length letters to form the suffix of the file name.

     -b byte_count[K|k|M|m|G|g]
             Create split files byte_count bytes in length.  If k or K is appended to the number, the file is split into byte_count kilobyte
             pieces.  If m or M is appended to the number, the file is split into byte_count megabyte pieces.  If g or G is appended to the
             number, the file is split into byte_count gigabyte pieces.

     -d      Use a numeric suffix instead of an alphabetic suffix.

     -l line_count
             Create split files line_count lines in length.

     -n chunk_count
             Split file into chunk_count smaller files.

     -p pattern
             The file is split whenever an input line matches pattern, which is interpreted as an extended regular expression.  The matching
             line will be the first line of the next output file.  This option is incompatible with the -b and -l options.

     If additional arguments are specified, the first is used as the name of the input file which is to be split.  If a second additional
     argument is specified, it is used as a prefix for the names of the files into which the file is split.  In this case, each file into
     which the file is split is named by the prefix followed by a lexically ordered suffix using suffix_length characters in the range
     ``a-z''.  If -a is not specified, two letters are used as the suffix.

     If the prefix argument is not specified, the file is split into lexically ordered files named with the prefix ``x'' and with suffixes
     as above.

ENVIRONMENT
     The LANG, LC_ALL, LC_CTYPE and LC_COLLATE environment variables affect the execution of split as described in environ(7).

EXIT STATUS
     The split utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
     csplit(1), re_format(7)

STANDARDS
     The split utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'').

HISTORY
     A split command appeared in Version 3 AT&T UNIX.

BUGS
     The maximum line length for matching patterns is 65536.

BSD                                                          May 9, 2013                                                          BSD
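To tie the manual back to the thread: the -n and -d options give evenly sized, numbered pieces without guessing a byte size, which can be handy here (the chunk count is just a placeholder):

Code:
# split the file into 8 numbered pieces: prefix00, prefix01, ..., prefix07
split -d -n 8 filename prefix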