06-02-2009
Extract data from large file 80+ million records
Hello,
I have a file with more than 120 million records (35 GB in size). I have to extract some relevant data from it, based on a parameter, and generate another output file.
What would be the best and fastest way to produce the new file?
Sample file format:
++++++7777jjjjjjj0000000000 (header record)
2098 POCG 0000 KKKK
2097 KOLL 0F00 KLLL
2095 LKJH 0L99 L0IU
.
.
.
.
********66666666666**** (trailer record)
Now suppose I enter the key as 2098 (the first field is the key); all records with 2098 as the first field should be moved to a new file.
**********************************************
I tried to use grep, but it took a long time: nearly 45 minutes to produce the output file.
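One common approach for this kind of key-on-first-field extraction is awk, which compares the field exactly (so a key like 2098 cannot match inside another column), and forcing the C locale, which often speeds up text scanning on large files. The sketch below builds a tiny sample in the thread's format to stay self-contained; the filenames and the key are placeholders, and on the real 35 GB file you would point it at your input.

```shell
# Build a tiny sample in the thread's format (header, data, trailer).
cat > sample.dat <<'EOF'
++++++7777jjjjjjj0000000000
2098 POCG 0000 KKKK
2097 KOLL 0F00 KLLL
2098 LKJH 0L99 L0IU
********66666666666****
EOF

key=2098
# NR > 1 skips the header; the trailer's first field can never equal a
# numeric key, so it is excluded by the $1 == k test.
# LC_ALL=C avoids multi-byte locale handling, which helps on big files.
LC_ALL=C awk -v k="$key" 'NR > 1 && $1 == k' sample.dat > out.dat
cat out.dat
```

If you prefer to stay with grep, anchoring the pattern and pinning the locale (`LC_ALL=C grep '^2098 ' file`) is usually much faster than an unanchored search, though it matches on the line prefix rather than a true field comparison.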