Full Discussion: Aggregation of Huge files
Post 302894676 by Don Cragun on Wednesday 26th of March 2014 04:18:16 PM
Quote:
Originally Posted by Ravichander
Hi Don!

Thanks for your valuable time and analysis! I have simplified the requirement:

I have extracted the amount column into a separate file, and the data pattern looks like the one shown below:

Code:
 
18781426.84
-2010820
-668398.44
-285369
-253957.7
-272.88
-2732931.94

The maximum amount value in the file is:

Code:
 
-90005467876809.567342220989

Now I need to take the absolute value of each amount and then sum them up. The total number of records will be around 7 million.

Kindly help me with code that fulfills the above requirement.

Thanks
Ravichander
I have supplied code that meets all of your requirements. You claim that my code doesn't work, but you have given absolutely no evidence that it does not.

Please create a small database (with a dozen records instead of 7 million records) and show us the actual values in ALL of the records, the results produced by my code, and the results produced by your database. Without data that we can verify, there is nothing more we can do for you.
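For reference, here is a minimal sketch of one way to meet the stated requirement (this is not the code posted earlier in the thread, and the file name amounts.txt is a placeholder). awk is fast but computes in double precision, so a value as wide as the 26-digit maximum quoted above would be rounded; bc does arbitrary-precision arithmetic.

Code:

# Fast path: awk keeps a running total of absolute values, but awk
# arithmetic is double precision (roughly 15-17 significant digits),
# so very wide values will lose their trailing digits.
awk '{ s += ($1 < 0 ? -$1 : $1) } END { printf "%.6f\n", s }' amounts.txt

# Exact path: strip the leading sign with sed, turn each line into a
# running-sum statement, and let bc do arbitrary-precision arithmetic.
{ echo 's = 0'; sed 's/^-//; s/.*/s = s + &/' amounts.txt; echo 's'; } | bc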
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Comparing two huge files

Hi, I have two files, file A and file B. File A is an error file and file B is the source file. In the error file, the first line is the actual error and the second line gives information about the record (client ID) that throws the error. I need to compare the first field (which doesn't start with '//') of...
Discussion started by: kmkbuddy_1983
11 Replies

2. UNIX for Dummies Questions & Answers

Difference between two huge files

Hi, as per my requirement, I need to take the difference between two big files (around 6.5 GB each) and write the difference to an output file without any line numbers or '<' or '>' in front of each new line. As the diff command won't work for big files, I tried to use bdiff instead. I am getting incorrect...
Discussion started by: pyaranoid
13 Replies
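One common approach for a task like this is to sort both files and use comm, which prints raw lines with no markers; the file names below are placeholders:

Code:

# Sort both files (sort does an external merge sort, so 6.5 GB is fine),
# then comm -13 prints the lines that appear only in the second file,
# with no line numbers or '<'/'>' markers. Use comm -3 for both sides.
LC_ALL=C sort file_a > a.sorted
LC_ALL=C sort file_b > b.sorted
comm -13 a.sorted b.sorted > difference.txt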

3. UNIX for Advanced & Expert Users

Huge files manipulation

Hi, I need a fast way to delete duplicate entries from very huge files (>2 GB); the files are plain text. I tried all the usual methods (awk / sort / uniq / sed / grep ...) but it always ended with the same result (memory core dump). I'm using large HP-UX servers. Any advice will...
Discussion started by: Klashxx
8 Replies
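If the original line order does not matter, sort -u sidesteps the memory problem, because sort spills to temporary files instead of holding everything in RAM; the temp directory below is only an example:

Code:

# External merge sort copes with files larger than memory; -u drops
# duplicate lines; -T points temp files at a filesystem with room.
LC_ALL=C sort -u -T /var/tmp hugefile > hugefile.dedup

(The order-preserving idiom awk '!seen[$0]++' keeps every distinct line in memory, which is exactly what core-dumps on multi-GB input.)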

4. High Performance Computing

Huge Files to be Joined on Ux instead of ORACLE

We have an 11-million-line file that is being matched against a 10-billion-line file. The proof of concept we are trying is to join them on Unix. All files are delimited and they have composite keys. Could Unix be faster than Oracle in this regard? Please advise.
Discussion started by: magedfawzy
1 Reply
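A sketch of the sort/join approach (the '|' delimiter and key position are assumptions; a composite key would first be concatenated into a single field, e.g. with awk):

Code:

# Sort both files on the join key, then join merges the two sorted
# streams without loading either file into memory.
LC_ALL=C sort -t'|' -k1,1 small_file > small.sorted
LC_ALL=C sort -t'|' -k1,1 big_file   > big.sorted
join -t'|' -1 1 -2 1 small.sorted big.sorted > joined.txt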

5. Shell Programming and Scripting

Help in locating a word in huge files

Hi, I receive about 5000 files per day on my system. Each of them looks like: cat ABC.april24.dat ABH00001990 01993 409009092 0909 INI iop 9033 AAB0000237893784 8430900 898383 AUS 34349089008 849843 9474822 AAA00003849893498098394 84834 348348439 -438939 IN AAA00004438493893849384...
Discussion started by: Prateek007
2 Replies
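For locating a word across thousands of files, grep -l lists just the matching file names; the pattern and path below are placeholders:

Code:

# -F treats the pattern as a fixed string (faster than a regex);
# -l prints only the names of the files that contain a match.
grep -Fl 'SEARCHWORD' /path/to/incoming/*.dat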

6. Shell Programming and Scripting

Compare 2 folders to find several missing files among huge amounts of files.

Hi, all: I've got two folders, say, "folder1" and "folder2". Under each there are thousands of files. It's quite obvious that there are some files missing in each; I just would like to find them. I believe this can be done with the "diff" command. However, if I change the above question a...
Discussion started by: jiapei100
1 Reply
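diff -rq folder1 folder2 reports "Only in ..." lines directly; an alternative sketch with comm (process substitution needs bash or ksh93):

Code:

# List each tree, sort the listings, and let comm drop what is common:
# column 1 = only in folder1, column 2 (tab-indented) = only in folder2.
comm -3 <(cd folder1 && find . -type f | sort) \
        <(cd folder2 && find . -type f | sort)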

7. AIX

Copy huge files system

Dear guys, using the dd command or any robust command, I'd like to copy huge data from one file system to another. Source file system: /sfsapp (holds 250 GB of data). Target file system: /tgtapp. I'd like to copy all these files and directories from /sfsapp to /tgtapp as...
Discussion started by: Mr.AIX
28 Replies
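dd copies raw devices rather than file trees; for a populated directory tree the classic tool is a tar pipe (or pax/cpio, or rsync where installed):

Code:

# Archive the source tree to stdout and unpack it in the target,
# preserving permissions (-p); nothing is written to disk in between.
cd /sfsapp && tar -cf - . | (cd /tgtapp && tar -xpf -)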

8. Shell Programming and Scripting

Compression - Exclude huge files

I have a DB folder which is about 60 GB in size. It has logs ranging from 500 MB to 1 GB. I have an installation which will update the DB. I need to back up this DB folder in case my installation fails, but I do not need the logs in the backup. How do I exclude them during compression (tar)? ...
Discussion started by: DevendraG
2 Replies
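With GNU tar, --exclude skips matching paths at archive time (the pattern and paths below are assumptions; adjust them to the real log names). Non-GNU tars can get the same effect by feeding a filtered find listing to tar.

Code:

# Skip anything matching *.log while creating the compressed archive.
tar -czf db_backup.tar.gz --exclude='*.log' /path/to/dbfolder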

9. UNIX for Dummies Questions & Answers

File comparison of huge files

Hi all, I hope you are well. I am very happy to see your contributions and am eager to become part of them. I have the following question: I have two huge files to compare (almost 3 GB each). The files are simulation outputs. The format of the files is as below. For a clearer picture, please see...
Discussion started by: kaaliakahn
9 Replies
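If the first question is simply whether two multi-GB outputs are identical, cmp streams both files and never holds them in memory (the file names below are placeholders):

Code:

# -s suppresses output; the exit status says whether the files match.
if cmp -s run1.out run2.out; then
    echo "outputs are identical"
else
    echo "outputs differ"
fi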

10. Shell Programming and Scripting

Aggregation of huge data

Hi friends, I have a file with sample amount data as follows: -89990.3456 8788798.990000128 55109787.20 -12455558989.90876 I need to exclude the '-' symbol in order to treat all values as absolute and then sum them up. The record count is around 1 million. How...
Discussion started by: Ravichander
8 Replies