I sincerely apologize. In each case, the output file you got had a filename derived from the 2nd field (i.e., the data between the 1st and 2nd tildes, which seems to be a constant for the transactions you selected to print) of a line containing a transaction number you wanted to print, and the contents of that file were the transactions starting with the one after the next-to-last transaction number you requested from the big input file through the last transaction number you requested from it.
It comes from me not getting nearly enough sleep, from you not providing sample data that matched the actual format of your data, and from me not getting nearly enough sleep. (There were three problems, and I'm blaming two of them on not getting enough sleep.) Now that I have cleaned up my test data to match what I believe is your current data format, the following seems to work. Please try this replacement:
Code:
#!/bin/ksh
big_file='/tmp/remedixz.20160120_085021_41222370_1'
trannum='/tmp/transnum'
awk -F '~' '
FNR == NR {
        # Gather transaction numbers...
        t[$1]
        tc = FNR
        next
}
{       # Gather transaction lines.
        l[++lc] = $0
}
$1 == "%%YEDTRN" && $3 in t {
        # We have found a transaction number for a transaction that is to be
        # extracted. Save the transaction number and remove this transaction
        # from the transaction list.
        delete t[transnum = $3]
        file = FILENAME "_" transnum
        tc--
}
/^0000EOT/ {
        # If we have a transaction that is to be printed, print it.
        if(transnum) {
                # Print the transaction.
                for(i = 1; i <= lc; i++)
                        print l[i] > file
                close(file)
                printf("Transaction #%s extracted to file %s\n", transnum, file)
                # Did we just print the last transaction requested?
                if(!tc) {
                        # Yes. We are done.
                        exit
                }
                # No. Clear found transaction number.
                transnum = ""
        }
        # Reset for next transaction.
        lc = 0
}' "$trannum" "$big_file"
Hopefully, this will do what you want.
As stated before, if someone wants to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
Hi Don,
Thanks, this was working as expected. It wrote all 3 transactions to separate files, as expected. I want to change the code so that all three transaction sets are written into a single file. Could you please help me?
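One way to do this (this is not Don's posted solution, just a sketch based on the script above) is to print to one fixed output file instead of deriving a filename from each transaction number; since awk keeps the file open across print statements, every selected transaction accumulates in that single file. The output name used below, "${big_file}_extracted", is an assumption; use whatever name you like.
Code:
#!/bin/ksh
big_file='/tmp/remedixz.20160120_085021_41222370_1'
trannum='/tmp/transnum'
out="${big_file}_extracted"     # assumed name for the single output file
awk -F '~' '
FNR == NR {
        # Gather transaction numbers...
        t[$1]
        tc = FNR
        next
}
{       # Gather transaction lines.
        l[++lc] = $0
}
$1 == "%%YEDTRN" && $3 in t {
        # Found a requested transaction; note its number.
        delete t[transnum = $3]
        tc--
}
/^0000EOT/ {
        if(transnum) {
                # Append this transaction to the single output file.
                # awk keeps "out" open, so successive transactions accumulate in it.
                for(i = 1; i <= lc; i++)
                        print l[i] > out
                printf("Transaction #%s written to file %s\n", transnum, out)
                if(!tc)
                        exit
                transnum = ""
        }
        lc = 0
}' out="$out" "$trannum" "$big_file"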
1. Thanks to everyone who reads the post first.
2. I have a log file whose size is 143M; I cannot open it with vi, and I cannot open it with xedit either.
How do I view it?
If I want to view lines 200-300, how can I do that?
3. Thanks (3 Replies)
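Not one of the thread's replies, just a sketch of the usual approaches: a pager such as less handles a large file without loading it into an editor buffer, and sed or awk can pull out an exact line range ("logfile" below is a hypothetical name).
Code:
# page through the file without loading it into an editor
less logfile

# print only lines 200-300
sed -n -e '200,300p' -e '300q' logfile
# or, equivalently, with awk
awk 'NR >= 200 { print } NR >= 300 { exit }' logfile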
I have a command which prints a number of lines after and before the search string in a huge file
nawk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r;print;c=a}b{r=$0}' b=0 a=10 s="STRING1" FILE
The file is 5 GB big.
It works great and prints 10 lines after the lines which contain the search string in... (8 Replies)
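For readability, here is the same one-liner spread out with comments; this is only a reformatted rendering of the command above, not a new solution. Note that r remembers just the single previous line, so with b=0 (as used here) only the after-context logic is exercised.
Code:
nawk '
c-- > 0                         # countdown active: print this line (after-context)
$0 ~ s {                        # the line matches the search string s
        if (b)                  # before-context requested: print the saved line
                for (c = b + 1; c > 1; c--)
                        print r
        print                   # print the matching line itself
        c = a                   # start printing the next a lines
}
b { r = $0 }                    # remember the current line for before-context
' b=0 a=10 s="STRING1" FILE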
Hi,
I have a big (2.7 GB) text file. Each line has a '|' separator between columns.
I want to delete those lines which contain text like '|0|0|0|0|0'
I tried:
sed '/|0|0|0|0|0/d' test.txt
Unfortunately, it scans the file but does nothing.
file content sample:... (4 Replies)
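Not one of the thread's replies, but the usual explanation: without -i, sed writes the edited text to standard output and leaves test.txt untouched, so the output has to be captured in a new file (the name test.txt.new below is just an example) and moved back over the original.
Code:
# sed prints the filtered lines to stdout; capture them and replace the original
sed '/|0|0|0|0|0/d' test.txt > test.txt.new && mv test.txt.new test.txt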
Hi,
I have two files.
file1.sh
echo "unix"
echo "linux"
file2.sh
echo "unix linux forums"
Now the output I need is
$ ./file2.sh
unix linux forums (3 Replies)
Hi,
I need a Unix command to delete the first n (say 100) lines from a log file. I need to delete some lines from the file without using any temporary file. I found that sed -i is a useful command for this, but it is not supported in my environment (AIX 6.1). File size is approx 100MB.
Thanks in... (18 Replies)
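Not from the thread itself; a sketch of one common approach where sed -i is unavailable: ed edits the file in place using its own buffer, so the script does not have to manage a temporary file ("logfile" is a hypothetical name, and a 100MB file fits comfortably in ed's buffer).
Code:
# delete lines 1-100 in place, then write the file back and quit
printf '1,100d\nw\nq\n' | ed -s logfile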
Hi all
I have a big file which I have attached here.
And I have to fetch certain entries and arrange them in 5 columns:
Name, Drug, DAP ID, disease, approved or not. In the attached file, data is arranged with tab-separated columns in this way:
and other data is... (2 Replies)
The dataset I'm working on is about 450G, with about 7000 columns and 30,000,000 rows.
I want to extract about 2000 columns from the original file to form a new file.
I have the list of the numbers of the columns I need, but I don't know how to extract them.
Thanks! (14 Replies)
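Not one of the thread's 14 replies; a sketch under the assumption that the wanted column numbers sit one per line in a list file and that the data file is whitespace-delimited (the names cols.txt, bigdata.txt, and subset.txt are all hypothetical).
Code:
awk '
NR == FNR {                     # first file: read the list of wanted column numbers
        want[++n] = $1
        next
}
{                               # second file: print only the wanted columns, in list order
        line = ""
        for (i = 1; i <= n; i++) {
                c = want[i]
                line = line (i > 1 ? OFS : "") $c
        }
        print line
}' cols.txt bigdata.txt > subset.txt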
Dear all,
I have been stuck on this problem for some days.
I have a very big file; this file cannot be opened with the vi command.
There are 200 loops in this file, and in each loop there will be one line like this:
GWA quasiparticle energy with Z factor (eV)
And I need the 98 lines that come after this line.
Is... (6 Replies)
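Not from the thread; a small sketch of the usual awk pattern for this, assuming each marker line plus the 98 lines following it are wanted for every one of the 200 loops ("bigfile" and "extracted.txt" are hypothetical names; adjust the count if the marker line itself is not needed).
Code:
# print every "GWA quasiparticle ..." line plus the 98 lines after it
awk 'index($0, "GWA quasiparticle energy with Z factor (eV)") { n = 99 }
     n > 0 { print; n-- }' bigfile > extracted.txt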
Hi all,
I have a file like this, and I want to extract only those regions which are big and continuous:
chr1 3280000 3440000
chr1 3440000 3920000
chr1 3600000 3920000 # region falling within 3440000 3920000, so I don't want it to be printed in the output
chr1 3920000 4800000
chr1 ... (2 Replies)
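Not a reply from the thread; just one reading of the example above: an interval is dropped when it is contained within the previously kept interval on the same chromosome. Assuming the file ("regions.txt", a hypothetical name) is sorted by chromosome and start position, this awk sketch implements that reading.
Code:
# keep an interval only if it extends past the end of the last kept interval
awk '$1 != chr || $3 > end { print; chr = $1; end = $3 }' regions.txt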
LEARN ABOUT CENTOS
pagesize
PAGESIZE(1)                General Commands Manual                PAGESIZE(1)

NAME
       pagesize - Print supported system page sizes

SYNOPSIS
       pagesize [options]

DESCRIPTION
       The pagesize utility prints the page sizes of a page of memory in
       bytes, as returned by getpagesizes(3). This is useful when creating
       portable shell scripts, configuring huge page pools with hugeadm or
       launching applications to use huge pages with hugectl.

       If no parameters are specified, pagesize prints the system base page
       size as returned by getpagesize(). The following parameters affect
       what other page sizes are displayed.

       --huge-only, -H
              Display all huge pages supported by the system as returned by
              gethugepagesizes().

       --all, -a
              Display all page sizes supported by the system.

SEE ALSO
       oprofile(1), getpagesize(2), getpagesizes(3), gethugepagesizes(3),
       hugectl(7), hugeadm(7), libhugetlbfs(7)

AUTHORS
       libhugetlbfs was written by various people on the libhugetlbfs-devel
       mailing list.

                              October 10, 2008                    PAGESIZE(1)
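A brief usage sketch using only the options listed in the manual page above; actual output depends on the system, 4096 bytes simply being the usual base page size on x86_64.
Code:
pagesize                # base page size, from getpagesize(); typically 4096 on x86_64
pagesize --huge-only    # only the huge page sizes, from gethugepagesizes()
pagesize -a             # every page size the system supports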