Shell Programming and Scripting: Best Strategy to process Huge files
Post 302379565 by tene on Friday 11th of December 2009, 05:26:39 AM
There is no dependency between the records, nor do they need serialised processing.
I will read each record and add it to a SQL query.

e.g.: select * from table1 where field in (.........)

I will read each field from the file and add it to this query. Also, after every 500 fields I will form a new query.

The queries will be written to a file.
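A minimal sketch of this approach in awk, assuming one value per line in a hypothetical input file ids.txt, numeric values that need no quoting, and the placeholder table/column names (table1, field) from the query above:

awk '
{
    vals = (n == 0) ? $1 : vals "," $1          # build the IN-list for the current batch
    n++
    if (n == 500) {                             # every 500 fields, emit the query and start a new one
        printf "select * from table1 where field in (%s);\n", vals
        vals = ""; n = 0
    }
}
END {
    if (n > 0)                                  # flush the last, partial batch
        printf "select * from table1 where field in (%s);\n", vals
}' ids.txt > queries.sql                        # ids.txt and queries.sql are placeholder names

Each output line is one complete IN-list query, so the resulting file can be fed straight to the database client.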
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Apache process cause huge load

Hello! I have a problem with an apache process that is causing huge load. It starts from time to time - I'm not sure what is making it start because there's nothing in cron, but it appears every few minutes - and when it starts it uses a lot of RAM (up to 1.3GB) and creates a huge load on... (1 Reply)
Discussion started by: Sergiu-IT
1 Reply

2. Shell Programming and Scripting

Comparing two huge files

Hi, I have two files, File A and File B. File A is an error file and File B is a source file. In the error file, the first line is the actual error and the second line gives the information about the record (client ID) that throws the error. I need to compare the first field (which doesn't start with '//') of... (11 Replies)
Discussion started by: kmkbuddy_1983
11 Replies

3. UNIX for Dummies Questions & Answers

Difference between two huge files

Hi, as per my requirement, I need to take the difference between two big files (around 6.5 GB) and write the difference to an output file without any line numbers or '<' or '>' in front of each new line. As the diff command won't work for big files, I tried to use bdiff instead. I am getting incorrect... (13 Replies)
Discussion started by: pyaranoid
13 Replies

4. UNIX for Advanced & Expert Users

Huge files manipulation

Hi, I need a fast way to delete duplicate entries from very huge files (>2 GB); these files are in plain text. I tried all the usual methods (awk / sort / uniq / sed / grep ...) but it always ended with the same result (memory core dump). I'm using HP-UX large servers. Any advice will... (8 Replies)
Discussion started by: Klashxx
8 Replies

5. High Performance Computing

Huge Files to be Joined on Ux instead of ORACLE

We have one file of 11 million lines that is being matched against one of 10 billion lines. The proof of concept we are trying is to join them on Unix. All files are delimited and they have composite keys. Could Unix be faster than Oracle in this regard? Please advise. (1 Reply)
Discussion started by: magedfawzy
1 Reply

6. Shell Programming and Scripting

Compare 2 folders to find several missing files among huge amounts of files.

Hi all: I've got two folders, say, "folder1" and "folder2". Under each, there are thousands of files. It's quite obvious that there are some files missing in each. I just would like to find them. I believe this can be done with the "diff" command. However, if I change the above question a... (1 Reply)
Discussion started by: jiapei100
1 Reply

7. AIX

Process ids consuming huge resources ?

Hi all, what is the command to check process IDs which have been running for a long time and which are consuming the most CPU? Also, how do I check what a particular PID is running? For example: I have PID 3223722, which has been running for a long time; if I want to check what this is... (1 Reply)
Discussion started by: sidharthmellam
1 Reply

8. AIX

Copy huge files system

Dear guys, by using the dd command or any robust command, I'd like to copy huge data from one file system to another. Source file system: /sfsapp (this file system has 250 GB of data). Target file system: /tgtapp. I'd like to copy all these files and directories from /sfsapp to /tgtapp as... (28 Replies)
Discussion started by: Mr.AIX
28 Replies

9. Shell Programming and Scripting

Difference between two huge .csv files

Hi all, I need help getting the difference between 2 .csv files. I have 2 large .csv files which have an equal number of columns. I need to compare them and get the output in a new file which will contain only the differences. E.g. File1.csv: Name, Date, age, number Sakshi, 16-12-2011, 22, 56 Akash,... (10 Replies)
Discussion started by: Dimple
10 Replies

10. Shell Programming and Scripting

Aggregation of Huge files

Hi friends! I am facing a hash total issue while working over a set of files of huge volume. Command used: tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f' The file is pipe-delimited and column 156 is used for hash totalling (a cleaner sketch of this command appears below).... (14 Replies)
Discussion started by: Ravichander
14 Replies
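For readability, here is a minimal sketch of the same kind of hash total in nawk, assuming (as in the excerpt above) a pipe-delimited file with a header row, double quotes that should be stripped, and column 156 as the hash-total column; File_Name is the placeholder from the quoted command:

nawk -F'|' '
NR > 1 {                                # skip the header line (same effect as tail -n +2)
    gsub(/"/, "", $156)                 # strip embedded double quotes from the target column
    sum += ($156 < 0) ? -$156 : $156    # accumulate the absolute value of column 156
}
END { printf "%.5f\n", sum }' File_Name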