Full Discussion: Aggregation of Huge files
Post 302892687 by Don Cragun on Friday 14th of March 2014 05:35:40 AM

In your original (non-working) code:
Code:
tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f'

Why did you bother adding code to remove all of the " characters from your input when there aren't any double-quote characters in your input file? Please explain in English what the format is for this file and please explain what the format is for the numbers that will be processed by this code. Do some fields sometimes have double-quoted strings containing pipe symbols (|)?

Please explain what algorithm is supposed to be used to compute the result that is printed at the end of processing.
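
For comparison, if the goal is simply to sum the absolute values of field 156, something along these lines should do it. This is only a sketch: it assumes a |-delimited file with a single header line to skip, no quoted fields, and File_Name standing in for your real file name.
Code:
# Sketch only: assumes |-delimited input, one header line to skip,
# no quoted fields, and File_Name as a placeholder for the real file.
tail -n +2 File_Name | nawk -F'|' '
        { sa += ($156 < 0) ? -$156 : $156 }     # accumulate |field 156|
        END { printf("%.2f\n", sa) }            # print the total
'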

In my last message I asked you to:
Quote:
... show us at least one line of input that the script processes incorrectly.
Is this single data line processed incorrectly? (Or is the correct result from processing this line 2380.26?)

I assume that you're using a Solaris system. What is the length (in bytes) of the longest line in your 254368-line file?
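
One way to check that (a sketch; it assumes File_Name stands in for the file and a single-byte locale, so characters and bytes coincide):
Code:
# Sketch: print the length of the longest line.  In a single-byte
# locale, length() counts bytes; File_Name is a placeholder.
nawk '{ if (length($0) > max) max = length($0) } END { print max + 0 }' File_Name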
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Comparing two huge files

Hi, I have two files, File A and File B. File A is an error file and File B is a source file. In the error file, the first line is the actual error and the second line gives information about the record (client ID) that throws the error. I need to compare the first field (which doesn't start with '//') of... (11 Replies)
Discussion started by: kmkbuddy_1983
11 Replies
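
A minimal sketch for that kind of lookup, assuming the client ID is the first |-delimited field, that the interesting lines in File A do not start with "//", and that File B is keyed on the same field (all of which are assumptions here):
Code:
# Sketch only: collect client IDs from File A (skipping // lines),
# then print the matching records from File B.  The delimiter and
# file names are placeholders.
awk -F'|' 'NR == FNR { if ($0 !~ /^\/\//) ids[$1] = 1; next }
           ($1 in ids)' FileA FileB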

2. UNIX for Dummies Questions & Answers

Difference between two huge files

Hi, as per my requirement, I need to take the difference between two big files (around 6.5 GB each) and write the difference to an output file without any line numbers or '<' or '>' in front of each line. As the diff command won't work for big files, I tried to use bdiff instead. I am getting incorrect... (13 Replies)
Discussion started by: pyaranoid
13 Replies
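
When diff runs out of resources, sorting both files and using comm is a common workaround; a sketch (it assumes line order does not matter and that there is enough temporary space for sort):
Code:
# Sketch only: report lines unique to each file, with no line numbers
# or </> markers.  Assumes line order is not significant.
sort fileA > fileA.sorted
sort fileB > fileB.sorted
comm -23 fileA.sorted fileB.sorted > only_in_fileA
comm -13 fileA.sorted fileB.sorted > only_in_fileB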

3. UNIX for Advanced & Expert Users

Huge files manipulation

Hi, I need a fast way to delete duplicate entries from very huge files (>2 GB); these files are in plain text. I tried all the usual methods (awk / sort / uniq / sed / grep ...) but it always ended with the same result (memory core dump). I'm using HP-UX large servers. Any advice will... (8 Replies)
Discussion started by: Klashxx
8 Replies
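
For files that size, an external sort usually avoids the memory problem; a sketch (note that it does not preserve the original line order, and /big/tmp is a placeholder for a file system with enough free space):
Code:
# Sketch only: sort -u removes duplicate lines using temporary files
# on disk instead of holding everything in memory.
sort -u -T /big/tmp infile > outfile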

4. High Performance Computing

Huge Files to be Joined on Ux instead of ORACLE

We have one file of 11 million lines that is being matched with one of 10 billion lines. The proof of concept we are trying is to join them on Unix. All files are delimited and they have composite keys. Could Unix be faster than Oracle in this regard? Please advise. (1 Reply)
Discussion started by: magedfawzy
1 Replies
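
A sort/join sketch for that kind of match, assuming |-delimited files with the join key already in field 1; for a real composite key the key fields would need to be concatenated first or the sort/join options adjusted:
Code:
# Sketch only: join(1) needs both inputs sorted on the join field.
# Field numbers, delimiter and file names are placeholders.
sort -t'|' -k1,1 small_file > small.sorted
sort -t'|' -k1,1 huge_file  > huge.sorted
join -t'|' -1 1 -2 1 small.sorted huge.sorted > joined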

5. Shell Programming and Scripting

Help in locating a word in huge files

Hi, I receive about 5000 files per day in my system. Each of them looks like: cat ABC.april24.dat ABH00001990 01993 409009092 0909 INI iop 9033 AAB0000237893784 8430900 898383 AUS 34349089008 849843 9474822 AAA00003849893498098394 84834 348348439 -438939 IN AAA00004438493893849384... (2 Replies)
Discussion started by: Prateek007
2 Replies

6. Shell Programming and Scripting

Compare 2 folders to find several missing files among huge amounts of files.

Hi all, I've got two folders, say, "folder1" and "folder2". Under each, there are thousands of files. It's quite obvious that there are some files missing in each. I would just like to find them. I believe this can be done with the "diff" command. However, if I change the above question a... (1 Reply)
Discussion started by: jiapei100
1 Replies
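
A sketch that compares the two trees by relative path name only (it assumes the file names contain no newlines):
Code:
# Sketch only: list the relative paths under each folder, then use
# comm to show what is missing on either side.
( cd folder1 && find . -type f | sort ) > list1
( cd folder2 && find . -type f | sort ) > list2
comm -23 list1 list2    # in folder1 but missing from folder2
comm -13 list1 list2    # in folder2 but missing from folder1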

7. AIX

Copy huge files system

Dear guys, by using the dd command or any robust command, I'd like to copy huge data from one file system to another. Source file system: /sfsapp (the file system has 250 GB of data). Target file system: /tgtapp. I'd like to copy all these files and directories from /sfsapp to /tgtapp as... (28 Replies)
Discussion started by: Mr.AIX
28 Replies
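
A commonly used alternative to dd for copying a whole tree between mounted file systems is a tar pipe; a sketch (it assumes both file systems are mounted and the command is run as root so ownership and permissions are preserved):
Code:
# Sketch only: copy everything under /sfsapp into /tgtapp,
# preserving permissions (-p).
cd /sfsapp && tar -cf - . | ( cd /tgtapp && tar -xpf - )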

8. Shell Programming and Scripting

Compression - Exclude huge files

I have a DB folder which is about 60 GB. It has logs which range from 500 MB to 1 GB. I have an installation which would update the DB. I need to back up this DB folder, just in case my installation fails. But I do not need the logs in my backup. How do I exclude them during compression (tar)? ... (2 Replies)
Discussion started by: DevendraG
2 Replies
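
One way to do that, assuming GNU tar (for --exclude) and that the logs match *.log, both of which are assumptions here:
Code:
# Sketch only: create a compressed backup of the DB folder while
# skipping anything that matches *.log (GNU tar syntax).
# /path/to/db_folder is a placeholder.
tar -czf db_backup.tar.gz --exclude='*.log' /path/to/db_folder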

9. UNIX for Dummies Questions & Answers

File comparison of huge files

Hi all, I hope you are well. I am very happy to see your contribution and I am eager to become part of it. I have the following question. I have two huge files to compare (almost 3 GB each). The files are simulation outputs. The format of the files is as below. For a clear picture, please see... (9 Replies)
Discussion started by: kaaliakahn
9 Replies

10. Shell Programming and Scripting

Aggregation of huge data

Hi friends, I have a file with sample amount data as follows: -89990.3456 8788798.990000128 55109787.20 -12455558989.90876 I need to exclude the '-' symbol in order to treat all values as absolute and then sum them up. The record count is around 1 million. How... (8 Replies)
Discussion started by: Ravichander
8 Replies
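
For data laid out like that sample (one signed amount per line), a minimal awk sketch would be (amounts.txt is a placeholder for the real file):
Code:
# Sketch only: sum the absolute value of the number on each line.
awk '{ x = ($1 < 0) ? -$1 : $1; total += x }
     END { printf("%.5f\n", total) }' amounts.txt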