The standards only define the behavior of awk when its input files are text files. A file whose lines contain 6,600 fields isn't likely to be a text file on any UNIX or Linux system I've seen. The maximum length of a line in a text file is LINE_MAX bytes including the terminating newline character. (You can get the value of LINE_MAX on your system using the command:
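    getconf LINE_MAX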
The standards allow LINE_MAX to be as low as 2,048 bytes.) Some implementations of awk may accept longer lines and behave as you would like them to. Others will print a diagnostic if an input or output line exceeds LINE_MAX. Others will silently truncate long lines (in this case probably producing truncated output lines). And others may read LINE_MAX bytes, treat that as a line, and then read the next LINE_MAX bytes as the next input line (guaranteeing garbage output for your application).

Note that even if you have an awk that handles long lines the way you want, creating arrays of 2,000 fields from 30,000,000 input records and printing the results at the end would require awk to have access to at least 600,000,000,000 bytes of memory, even if each field is only one byte long (one byte of data + a terminating null byte + an 8-byte pointer to the string for each field). With data of this magnitude, you will have to process it on the fly, not accumulate it and process it when you reach the end of your input file.
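As a rough sketch of what "process it on the fly" means here (the file name bigfile, the field range, and the per-record sum are only placeholders; substitute whatever you actually need to compute), each record is handled and its result printed as soon as it is read, so memory use stays constant no matter how many records there are:

    awk '{
            s = 0
            for (i = 101; i <= 6701 && i <= NF; i++)   # walk the selected fields of THIS record
                    s += $i
            print NR, s                                # emit the result now; keep nothing for END
    }' bigfile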
The standards do require that conforming implementations of the cut utility be able to handle arbitrary line lengths (assuming that they have access to the memory they need to hold the lines being processed). The standards also require that fold (with the -b option) and paste be able to break apart and recreate files that would be text files except for unlimited line lengths, and that cat and wc work on files of any size and type. All other standard text processing utilities (e.g., awk, ed/ex, grep, sed, vi, etc.) have unspecified or undefined behavior if their input files are not text files with lines no longer than LINE_MAX.
Compared to the way that awk and cut index fields, the C code is off by one; it is 0-indexed instead of 1-indexed. So the range 100-6700 specified in the C code is actually 101-6701 in awk/cut.
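For example, assuming single-space-delimited fields and a file named bigfile (both placeholders), the corrected 1-indexed selection would look something like:

    cut -d ' ' -f 101-6701 bigfile       # cut is required to handle arbitrarily long lines
    awk '{ print $101 }' bigfile         # awk numbers fields from $1, so C field 100 is awk's $101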
If the first field (start=0) is part of the range, every newline will be duplicated in the output.
If the first field is not part of the range (start > 0), there will always be a leading IFS character.
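One way to avoid both of those problems is to emit the separator only between selected fields, never before the first or after the last. Here is a sketch of that logic in awk rather than in the original C (the field range, separator, and file name are placeholders):

    awk '{
            for (i = 101; i <= 6701 && i <= NF; i++)
                    printf "%s%s", (i > 101 ? OFS : ""), $i   # separator only BETWEEN fields
            print ""                                          # exactly one newline per record
    }' bigfile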