04-12-2003
Big file processing
Hi,
I have a very big file holding data. How can I pick it up line by line, assigning each line to a variable and passing that variable to another script?
The following illustrates the process:
file
-------------------
123444444 |
122314567 |-----------data
146689000 |
c=123444444 ---------- c is a variable
process c ------------ pass c to a script (sh, sql, etc.)
c=122314567 ---------- c is a variable
process c ------------ pass c to a script (sh, sql, etc.)
c=146689000 ---------- c is a variable
process c ------------ pass c to a script (sh, sql, etc.)
I think you understand what I want to do.
Please help.
Thanks in advance.
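A standard way to do this in a Bourne-compatible shell is a `while read` loop. A minimal sketch, where "datafile" and "process_c.sh" are placeholder names for your actual data file and per-line script:

```shell
#!/bin/sh
# Read datafile one line at a time; each line lands in the variable c.
# Replace the echo with the real call, e.g.  ./process_c.sh "$c"
# ("datafile" and "process_c.sh" are hypothetical names).
while IFS= read -r c
do
    echo "processing $c"   # c holds exactly one line from the file
done < datafile
```

Avoid `for c in $(cat datafile)` for this: it splits on all whitespace, not on lines. `IFS=` keeps leading/trailing blanks in each line, and `-r` stops `read` from mangling backslashes.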
8 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Thanks, everyone, for reading this post. I have a log file that is 143 MB; I cannot open it with vi, and I cannot open it with xedit either.
How can I view it?
And if I want to view only lines 200-300, how can I do that?
Thanks (3 Replies)
Discussion started by: chenhao_no1
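For a question like that one — viewing only lines 200-300 of a file too big for an editor — sed can print just the range without reading the rest ("big.log" is a placeholder name):

```shell
# Print only lines 200-300 of a large file, then stop.
# -n suppresses default output; 300q makes sed quit after line 300,
# so it never reads the remainder of the file.
sed -n '200,300p;300q' big.log
```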
2. Shell Programming and Scripting
I am currently working on a batch processing script and I am stuck.
I am not very familiar with the Korn shell. I need to do the following:
Process an input file with the following
information:
SOURCE FILE
533650_MSCIEUROPE_AvgWeight_YTD_EXP.XLS/Daily/test/Ceurope/EuropeFactset/YTD/... (1 Reply)
Discussion started by: chambala5
3. Solaris
Hi,
I am using the Sun Solaris 5.9 OS. I have found a file called wtmpx with a size of 5.0 GB. I want to clear this file using :>/var/adm/wtmpx. My query is: would it cause any problem to the running live system?
Could anyone suggest the best method to clear the file without causing problem to... (6 Replies)
Discussion started by: Vijayakumarpc
4. Shell Programming and Scripting
Hi
I have two files, one is 1.6 GB. I would like to add one extra column of information to the large file at a specific location (after its 2nd column).
For example:
File 1 has two columns more than 1000 rows like this
MM009987 1
File 2 looks like this
MM00098 MM00076 3 4 2 4 2... (1 Reply)
Discussion started by: sogi
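A task like that one — inserting an extra column into a large file after its 2nd column, with the value looked up from a second, smaller file — is the kind of job awk streams through without holding the big file in memory. A sketch under the assumption that column 1 is the shared key ("small.txt" and "big.txt" are placeholder names):

```shell
# Pass 1 (NR==FNR is true only for the first file): remember key -> value.
# Pass 2: rewrite column 2 so the looked-up value follows it, then print.
awk 'NR==FNR { val[$1] = $2; next }
     { $2 = $2 OFS val[$1]; print }' small.txt big.txt
```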
5. UNIX for Dummies Questions & Answers
I have a 5000-line config.log file with several "maybe" errors. Any recommendations on finding solvable problems? (2 Replies)
Discussion started by: NeedLotsofHelp
6. Shell Programming and Scripting
Hi,
I have 2 files
format of file 1 is:
a1
b2
a2
c2
d1
f3
format of file 2 is (tab delimited):
a1 1.2 0.5 0.06 0.7 0.9 1 0.023
a3 0.91 0.007 0.12 0.34 0.45 1 0.7
a2 1.05 2.3 0.25 1 0.9 0.3 0.091
b1 1 5.4 0.3 9.2 0.3 0.2 0.1
b2 3 5 7 0.9 1 9 0 1
b3 0.001 1 2.3 4.6 8.9 10 0 1 0... (10 Replies)
Discussion started by: Lucky Ali
7. Emergency UNIX and Linux Support
We got data that was supposed to be CSV, but was sent in a huge XML file.
I've downloaded xmlstarlet, but I'm darned if I can get it to operate the "sel" feature to look down a path and get any sort of value. I see pieces of what should be paths, but they seem to have extraneous characters, and... (7 Replies)
Discussion started by: gmark99
8. UNIX for Beginners Questions & Answers
Hello Friends,
I have a big file that was transferred to my UNIX system, and it seems it has CR as the line delimiter.
When I run
file <filename>
I get:
<filename>: ASCII text, with CR line terminators
How do I convert the file to one with LF terminators so that my code that runs on UNIX can... (3 Replies)
Discussion started by: mehimadri12
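For a CR-terminated file like the one described above, tr can rewrite the terminators in one pass ("infile" and "outfile" are placeholder names):

```shell
# Old-Mac-style files end every line with CR; map each CR to LF.
tr '\r' '\n' < infile > outfile

# If the file were CRLF (DOS) instead, deleting the CRs is the usual fix:
#   tr -d '\r' < infile > outfile
```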
LEARN ABOUT DEBIAN
Tangram::Type::Dump::Perl(3pm) User Contributed Perl Documentation Tangram::Type::Dump::Perl(3pm)
NAME
Tangram::Type::Dump::Perl - map any Perl object as scalar dump
SYNOPSIS
use Tangram::Core;
use Tangram::Type::Dump::Perl; # always
$schema = Tangram::Schema->new(
classes => { NaturalPerson => { fields => {
perl_dump =>
{
diary => # diary is a perl hash
{
col => 'diarydata',
sql => 'TEXT',
indent => 0,
terse => 1,
purity => 0
},
lucky_numbers => 'int', # use defaults
              }
          } } } );
DESCRIPTION
Maps arbitrary Perl data structures by serializing to a string representation. The persistent fields are grouped in a hash under the
"perl_dump" key in the field hash.
Serialization is done by Data::Dumper, which traverses the Perl data structure and creates a string representation of it. The resulting
string will be mapped to the DBMS as a scalar value. During restore, the scalar value will be eval'd to reconstruct the original data
structure.
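The dump-and-eval round trip described above can be reproduced outside Tangram with a plain Data::Dumper one-liner, using the module's default settings (Indent=0, Terse=1); this is a sketch of the mechanism, not Tangram itself:

```shell
# Serialize a Perl hash to a string with Data::Dumper, then eval the
# string back -- the same store/restore round trip Tangram performs
# for a perl_dump field.
perl -MData::Dumper -e '
    $Data::Dumper::Indent = 0;      # no indentation
    $Data::Dumper::Terse  = 1;      # omit the $VAR1 = prefix
    my %diary = ( monday => "dentist" );
    my $dump  = Dumper(\%diary);    # string form, as stored in the DBMS
    print $dump, "\n";
    my $back  = eval $dump;         # restore, as done on load
    print $back->{monday}, "\n";
'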
As of Tangram 2.07.1, persistent references are safely handled via the Tangram::Type::Dump utility class.
The field names are passed in a hash that associates a field name with a field descriptor. The field descriptor may be either a hash or a
string. The hash uses the following fields:
* col
* sql
* indent
* terse
* purity
The optional fields "col" and "sql" specify the column name and the column type for the scalar value in the database. If not present, "col"
defaults to the field name and "sql" defaults to VARCHAR(255). Values will always be quoted as they are passed to the database.
The remaining optional fields control the serialization process. They will be passed down to Data::Dumper as values to the corresponding
Data::Dumper options. The default settings are: no indentation ("indent=0"), compact format ("terse=1"), and quick dump ("purity=0").
AUTHOR
This mapping was contributed by Gabor Herr <herr@iti.informatik.tu-darmstadt.de>
perl v5.8.8 2006-03-29 Tangram::Type::Dump::Perl(3pm)