Ok - just to clarify what I am trying to do: I am trying to make the creation of the file as efficient as possible, or there may be another way of achieving my objective without creating a file. Explained below....
The first part of the file, built from variables, should read as follows.
The $firstline and $secondline variables are XML tags containing information.
The $thirdline variable contains an INSERT statement for the database:
<Insertquery>Insert into newtable</Insertquery>
The $forthline variable contains the values to enter into the table:
<values>'one', '2012-03-23 $INCREMENT', 'message'</values>
I am trying to update a database table which can only be updated through an XML file containing the various tags in the correct format.
The newfile.xml file is used as the template that updates the database. However, the second value entered into the database needs to be unique, which is why I have added $INCREMENT to the values.
A Java call is made that uses the XML file to insert the values into the database. I need to insert 1000 records; however, I think the creation of the file is what is making it slow at the moment, as it is taking one second per transaction.
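One likely culprit is opening the file and writing it line by line for every record. Here is a minimal sketch, assuming the tag layout quoted above (the $firstline/$secondline contents below are only placeholders), that writes each newfile.xml in a single redirection block:

#!/bin/bash
# Placeholders standing in for the real XML tags described above.
firstline='<firstline>...</firstline>'
secondline='<secondline>...</secondline>'
for (( INCREMENT=1; INCREMENT<=1000; INCREMENT++ )); do
    {
        printf '%s\n' "$firstline" "$secondline"
        printf '%s\n' '<Insertquery>Insert into newtable</Insertquery>'
        printf '%s\n' "<values>'one', '2012-03-23 $INCREMENT', 'message'</values>"
    } > newfile.xml          # one open/write/close per record, not per line
    # the Java call that consumes newfile.xml would run here
done

If the Java side can accept a file containing many records, writing all 1000 inserts into one file and making a single Java call would remove most of the per-transaction cost.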
Hello,
I have a Supermicro server with a P4SCI motherboard running Debian Sarge 3.1. This is the "dmidecode" output related to the RAM info:
The RAM speed information is incomplete ("Current Speed: Unknown"). Is there any way, or any tool, to get the speed of the installed RAM modules? Thanks!!
Regards :)... (0 Replies)
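For what it's worth, a couple of commands that can report module speed; this is a sketch assuming the tools are installed, and if the BIOS does not fill in the SMBIOS tables, dmidecode has nothing better to show:

# Type 17 entries are the per-module "Memory Device" records in SMBIOS;
# "Speed: Unknown" usually means the BIOS left the field unpopulated.
dmidecode --type 17 | grep -i speed
# lshw gives an alternative view of the same data, if it is installed.
lshw -class memory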
Hi, I have a script that takes the difference between multiple columns in a file and a value from a single row; so far I have a loop to do that. All the data is floating point. fin holds the difference between array1 and array2; array1 has 700 x 300 = 210000 values and array2 has 700 values.
... (11 Replies)
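A shell loop doing floating-point arithmetic on 210000 values is usually the bottleneck. As a minimal sketch, assuming array2 sits one value per line in ref.txt and array1 is the 700-row by 300-column file data.txt (both names are hypothetical), a single awk pass does the same row-wise subtraction:

# First file: load the 700 reference values; second file: subtract the
# matching row's reference from every column and print the result.
awk 'NR==FNR { ref[NR] = $1; next }
     { for (i = 1; i <= NF; i++)
           printf "%s%s", $i - ref[FNR], (i < NF ? " " : "\n") }' ref.txt data.txt > fin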
Hey everyone,
You should know that I'm relatively new to shell scripting, so my solution is probably a little awkward.
Here is the script:
#!/bin/bash
live_dir=/var/lib/pokerhands/live
for limit in $(find "$live_dir"/ -type d | sed -e "s#$live_dir/##"); do
cat $live_dir/$limit/*... (19 Replies)
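A sketch of one generic speed-up under the same layout: strip the prefix with shell parameter expansion instead of an extra sed process, and quote the expansions so unusual directory names cannot break the loop.

live_dir=/var/lib/pokerhands/live
# -mindepth 1 (GNU find) skips the top-level directory itself.
find "$live_dir" -mindepth 1 -type d | while read -r dir; do
    limit=${dir#"$live_dir"/}        # same effect as the sed substitution
    cat "$dir"/*                     # process the hand histories here
done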
I have a script that processes a fair amount of data -- say, 25-50 megs per run. I'd like ideas on speeding it up. The code is actually just a preprocessor -- I'm using another language to do the heavy lifting. But as it happens, the preprocessing takes much more time than the final processing... (3 Replies)
I analysed disk performance with blktrace and get some data:
read:
8,3 4 2141 2.882115217 3342 Q R 195732187 + 32
8,3 4 2142 2.882116411 3342 G R 195732187 + 32
8,3 4 2144 2.882117647 3342 I R 195732187 + 32
8,3 4 2145 ... (1 Reply)
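For reading these lines: the columns are device (major,minor), CPU, sequence number, timestamp in seconds, PID, action (Q = queued, G = get request, I = inserted into the scheduler queue), read/write flags, and sector + size in blocks. As a rough sketch, assuming the trace text was saved as trace.txt (a hypothetical name), awk can turn the timestamps into per-request Q-to-I latencies:

# Remember when each sector was queued, then print the delay until it was
# inserted into the scheduler queue.
awk '$6 == "Q" { q[$8] = $4 }
     $6 == "I" && ($8 in q) { printf "%s %.9f\n", $8, $4 - q[$8] }' trace.txt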
Hi Guys,
I have a script that I am using to convert some text files to xls files. I create multiple temp files in the process of conversion. Other than reducing the number of temp files, are there any general tricks to help speed up the script?
I am running it in the bash shell.
Thanks. (6 Replies)
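The usual trick, sketched here with hypothetical stage names step1/step2/step3: chain the stages with pipes so the intermediate data never hits the disk as a temp file.

# Temp-file version:
#   step1 < input.txt > tmp1; step2 < tmp1 > tmp2; step3 < tmp2 > output.xls
# Pipeline version, one pass and no temp files:
step1 < input.txt | step2 | step3 > output.xls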
I had written a Perl script to compare two files, new and master, and output the words from the first file that are not in the master file.
STRUCTURE OF THE TWO FILES
The first file is a series of names
ramesh
sushil
jonga
sudesh
lugdi
whereas the second file (could be... (4 Replies)
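For reference, a minimal non-Perl sketch, assuming one name per line in both files: -f master reads the patterns from master, -F takes them as fixed strings, -x matches whole lines only, and -v prints the lines of new that match none of them.

grep -vxFf master new
# If sorting the files first is acceptable, comm does the same job:
#   comm -23 <(sort new) <(sort master)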
Hey guys, I have a Perl script which I use to compare hashes, but it takes a long time to do that, so I wish I had a solution to do it fast.
Here is the code:
<redacted> (1 Reply)
Hi
I have written a shell script which will test 300 to 500 IPs to find which are pinging and which are not pinging.
The script gives output such as:
10.x.x.x is pinging
10.x.x.x is not pinging
-
-
-
10.x.x.x is pinging
like the above.
But this script is taking... (6 Replies)
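The sequential pings almost certainly dominate the runtime, so running them in parallel is the common fix. A sketch assuming the IPs sit one per line in ips.txt (a hypothetical name), GNU xargs, and the Linux ping -W timeout flag:

# -n1 hands one IP at a time to sh as $1; -P 50 runs up to 50 pings at once.
xargs -a ips.txt -n1 -P 50 sh -c '
    if ping -c 1 -W 1 "$1" >/dev/null 2>&1; then
        echo "$1 is pinging"
    else
        echo "$1 is not pinging"
    fi' _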
Hello,
I am a basic-level shell script developer. I have developed the following script, which basically tracks various files containing certain strings. I am looking for options to make the script run faster. Any help/suggestions would be appreciated :)
#! /bin/bash
# Greps for... (6 Replies)
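The usual speed-up for this kind of script, sketched with hypothetical names: instead of invoking grep once per string, put all the strings in patterns.txt (one per line) and let a single grep scan each file once.

# -F treats the patterns as fixed strings, which is faster than regex matching.
grep -F -f patterns.txt /path/to/tracked/files/*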