Data::Stream::Bulk::Chunked(3pm) [Debian man page]
Data::Stream::Bulk::Chunked(3pm) User Contributed Perl Documentation Data::Stream::Bulk::Chunked(3pm)
NAME
Data::Stream::Bulk::Chunked - combine streams into larger chunks
VERSION
version 0.11
SYNOPSIS
    use Data::Stream::Bulk::Chunked;

    Data::Stream::Bulk::Chunked->new(
        stream     => $s,
        chunk_size => 10000,
    );
DESCRIPTION
This is a stream that wraps an existing stream so that it returns more items in each block. This can simplify application code that does its own
processing one block at a time, and for which processing larger blocks is more efficient.
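For illustration, a minimal consumption sketch. The block loop is the standard one from Data::Stream::Bulk; the stream $s and the process_batch() helper are placeholders for this example, not part of this module.

    use Data::Stream::Bulk::Chunked;

    # $s is any existing Data::Stream::Bulk stream, e.g. one whose backend
    # naturally produces small blocks
    my $chunked = Data::Stream::Bulk::Chunked->new(
        stream     => $s,
        chunk_size => 10000,
    );

    # standard block-at-a-time loop; each block has at least 10000 items,
    # except possibly the last one
    while ( my $block = $chunked->next ) {
        process_batch(@$block);    # hypothetical per-batch processing
    }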
ATTRIBUTES
stream
The stream to chunk. Required.
chunk_size
The minimum number of items to return in a block. Defaults to 1, which effectively disables chunking.
METHODS
get_more
See Data::Stream::Bulk::DoneFlag.
Returns at least "chunk_size" items per call. Note that this isn't guaranteed to return exactly "chunk_size" items; it simply concatenates
full blocks from the backend until the threshold is reached. Also, the final block returned may contain fewer than "chunk_size" items.
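As a concrete illustration of this behaviour, the following sketch wraps a backend that yields blocks of 4 items. Using Data::Stream::Bulk::Callback here is only a convenient way to simulate such a backend for the example, and the block sizes in the comment follow from the accumulation described above.

    use Data::Stream::Bulk::Callback;
    use Data::Stream::Bulk::Chunked;

    # hypothetical backend: 28 items delivered in blocks of 4
    my @pending = map { [ $_ * 4 - 3 .. $_ * 4 ] } 1 .. 7;
    my $backend = Data::Stream::Bulk::Callback->new(
        callback => sub { shift @pending },   # undef once exhausted
    );

    my $chunked = Data::Stream::Bulk::Chunked->new(
        stream     => $backend,
        chunk_size => 10,
    );

    while ( my $block = $chunked->next ) {
        print scalar(@$block), "\n";   # expected: 12, 12, 4
    }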
AUTHOR
Yuval Kogman <nothingmuch@woobling.org>
COPYRIGHT AND LICENSE
This software is copyright (c) 2012 by Yuval Kogman.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
perl v5.14.2 2012-02-14 Data::Stream::Bulk::Chunked(3pm)
Data::Stream::Bulk::DBI(3pm) User Contributed Perl Documentation Data::Stream::Bulk::DBI(3pm)
NAME
Data::Stream::Bulk::DBI - N-at-a-time iteration of DBI statement results.
VERSION
version 0.11
SYNOPSIS
    use Data::Stream::Bulk::DBI;

    my $sth = $dbh->prepare("SELECT hate FROM sql"); # very big resultset

    $sth->execute;

    return Data::Stream::Bulk::DBI->new(
        sth      => $sth,
        max_rows => $n,       # how many at a time
        slice    => [ ... ],  # if you want to pass the first param to fetchall_arrayref
    );
DESCRIPTION
This implementation of the Data::Stream::Bulk API works with DBI statement handles, using DBI's "fetchall_arrayref".
It fetches "max_rows" rows at a time (defaults to 500).
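A short usage sketch; the connection parameters, the users table, and its columns are hypothetical, and the row layout shown in the loop assumes the default "slice" of "undef" (array references of column values).

    use DBI;
    use Data::Stream::Bulk::DBI;

    my $dbh = DBI->connect( $dsn, $user, $password );        # placeholder credentials
    my $sth = $dbh->prepare("SELECT id, name FROM users");   # hypothetical query
    $sth->execute;

    my $stream = Data::Stream::Bulk::DBI->new(
        sth      => $sth,
        max_rows => 1000,
    );

    # each block is an array ref of up to max_rows rows; each row is an
    # array ref of column values
    while ( my $rows = $stream->next ) {
        for my $row (@$rows) {
            my ( $id, $name ) = @$row;
            print "$id\t$name\n";
        }
    }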
ATTRIBUTES
sth
The statement handle to call "fetchall_arrayref" on.
slice
Passed verbatim as the first param to "fetchall_arrayref". It should usually be "undef"; it is provided for completeness.
max_rows
The second param to "fetchall_arrayref". Controls the size of each buffer.
Defaults to 500.
METHODS
get_more
See Data::Stream::Bulk::DoneFlag.
Calls "fetchall_arrayref" to get the next chunk of rows.
all
Calls "fetchall_arrayref" to get the remainder of the data (without specifying "max_rows").
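A brief sketch contrasting the two access styles; $stream is assumed to be constructed as in the SYNOPSIS, and only one of the two approaches would be used on a given stream.

    # block-wise: memory use bounded by roughly max_rows rows at a time
    while ( my $rows = $stream->next ) {
        # process up to max_rows rows here
    }

    # or slurp everything that has not been read yet in one call
    my @rows = $stream->all;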
AUTHOR
Yuval Kogman <nothingmuch@woobling.org>
COPYRIGHT AND LICENSE
This software is copyright (c) 2012 by Yuval Kogman.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
perl v5.14.2 2012-02-14 Data::Stream::Bulk::DBI(3pm)