05-27-2011
I need to analyse some vmcore files
I need to analyse some vmcore files. Does anyone know how I can get a free version of the "Solaris Crash Analysis Tool"?
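If a licensed copy of Solaris CAT is not available, the modular debugger mdb(1) that ships with Solaris can read the same dump pairs. A minimal sketch, assuming savecore(1M) has already written a unix.0/vmcore.0 pair to the default /var/crash/&lt;hostname&gt; directory (paths are stand-ins, not from the thread):

```shell
# Assumed layout: savecore writes unix.N / vmcore.N pairs here.
DUMPDIR=/var/crash/$(hostname)

# mdb ships with Solaris, so no separate license is needed.
if command -v mdb >/dev/null 2>&1; then
    mdb -k "$DUMPDIR/unix.0" "$DUMPDIR/vmcore.0" <<'EOF'
::status      # dump summary: panic string, OS release
::panicinfo   # panic message plus register state
::stack       # backtrace of the panicking thread
EOF
fi
```

On older releases where ::panicinfo is not available, ::msgbuf (console message buffer) and $C (stack trace) give much of the same information.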
10 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
Hi,
Can we modify the GDB source code so as to analyze core dumps from different targets? From my analysis, I think we need to build our section table statically for each target. i.e., including the various address boundaries in build_section_table() function. If this is the case, then the GDB... (2 Replies)
Discussion started by: nsdeeps
2. Solaris
In a Solaris 8 environment, OS panics happen frequently, and someone advised me to check the vmcore. :(
For the crash dump facility, can we use the SUN Explorer data collector package, including its analysis of the vmcore?
It may provide a panic message that includes the program counter address, perhaps
... (3 Replies)
Discussion started by: mahadib
3. UNIX for Advanced & Expert Users
hi you all!
I can write a network program that sends and receives messages. I use the
read() and write() functions to send and receive messages via a given socket. By doing so, I know only the actions performed at the application layer of the TCP/IP suite. But I want to control the actual... (2 Replies)
Discussion started by: solomonml
4. Shell Programming and Scripting
How can I read a vmcore.x (x=0,1,...) file?
How can I read a vmcore.x (x=0,1,...) file in text format, or any readable format? Any ideas?
I heard the 'strings' command works on it, but when I tried it on Solaris -- it... (4 Replies)
Discussion started by: mahadib
5. Shell Programming and Scripting
Hello there,
I am trying to write a shell script to analyse some of my log files.
I want the script to check whether there is a logfile from yesterday or today (sometimes the script that creates the logfile takes a bit longer and finishes after 00:00) and to search the logfile itself to see whether the script was... (0 Replies)
Discussion started by: Linien
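A sketch of the yesterday-or-today check described above, assuming GNU date and a hypothetical app-YYYY-MM-DD.log naming scheme (the real log names and completion marker are not shown in the thread):

```shell
#!/bin/sh
# Hypothetical layout: one logfile per day, named app-YYYY-MM-DD.log.
LOGDIR=${LOGDIR:-/var/log/myapp}
today=$(date +%F)
yesterday=$(date -d yesterday +%F)   # GNU date; not portable to Solaris sh

for day in "$today" "$yesterday"; do
    f="$LOGDIR/app-$day.log"
    if [ -f "$f" ]; then
        # Search the logfile itself for the completion marker.
        if grep -q 'script finished' "$f"; then
            echo "OK: $f"
        else
            echo "marker missing in $f"
        fi
        break
    fi
done
```

The break stops at the first logfile found, so today's file takes precedence when both exist.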
6. UNIX for Advanced & Expert Users
Dear All,
I am new to this forum. This is my first.
I am facing a customer issue. The customer got a core file while running the server.
He sent the core file along with details from the pstack, pmap and pldd commands.
I have to debug this application; please help me fix this issue.
I am using sparc... (1 Reply)
Discussion started by: KiranBangalore
7. Solaris
Dear All,
I am new to this forum. This is my first.
I am facing a customer issue. The customer got a core file while running the server.
He sent the core file along with details from the pstack, pmap and pldd commands.
I have to debug this application; please help me fix this issue.
I am using sparc 10... (4 Replies)
Discussion started by: KiranBangalore
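For a first-pass triage of a core like the one in the two threads above, the Solaris proc tools the customer already ran can be rerun locally once the core file is in hand; the name "core" below is a stand-in, since the thread does not name the real file:

```shell
CORE=${CORE:-core}     # core file sent by the customer (stand-in name)

# Each proc tool accepts a core file as well as a live pid:
# pstack shows thread stacks, pmap the address space, pldd the
# shared libraries that were mapped at the time of the crash.
for tool in pstack pmap pldd; do
    command -v "$tool" >/dev/null 2>&1 && "$tool" "$CORE"
done

# For deeper inspection, open the core against the matching binary:
#   mdb a.out core      then run ::status and $C for a backtrace
```

The pstack output is usually the quickest lead: the faulting thread's top frames point at the function where the crash occurred.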
8. Shell Programming and Scripting
Hi all,
I'm working with a piece of software that runs on Linux and allows planning car trips with maps. The software has different variations depending on the type of car, e.g. BMW, Audi, Hyundai, etc. Each variation has a dependency on common external components that are not... (1 Reply)
Discussion started by: emoshaya
9. UNIX for Dummies Questions & Answers
Hi,
Could someone please analyse the following output of fdisk -l and tell me what it means for /dev/sda, /dev/sdb, /dev/sdc ....
Disk /dev/sda: 53.6 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start ... (5 Replies)
Discussion started by: stunn3r
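The geometry line in that fdisk output already answers most of the question: the three numbers multiply out to the per-cylinder size fdisk reports, and the cylinder count times that size comes out just under the disk total. A quick check:

```shell
# 255 heads x 63 sectors/track x 512 bytes/sector = bytes per cylinder
cyl_bytes=$((255 * 63 * 512))
echo "$cyl_bytes"              # 8225280, matching the Units line

# 6527 whole cylinders of that size:
echo $((cyl_bytes * 6527))     # 53686402560 bytes, just under the
                               # reported 53687091200-byte (50 GiB) total
```

The shortfall is normal: fdisk rounds the cylinder count down, so the CHS view addresses slightly less than the raw disk size.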
10. HP-UX
Hi All,
When can we see these messages in the syslog?
We have the Serviceguard cluster software installed on HP-UX 11i v3 servers.
We see the error below many times in our syslog messages:
cmdisklockd: Unable to convert device to I/O tree node: I/O tree node does not exist.
... (2 Replies)
Discussion started by: Sachin1987
LEARN ABOUT DEBIAN
KinoSearch1::Analysis::Tokenizer(3pm) User Contributed Perl Documentation KinoSearch1::Analysis::Tokenizer(3pm)
NAME
KinoSearch1::Analysis::Tokenizer - customizable tokenizing
SYNOPSIS
my $whitespace_tokenizer
= KinoSearch1::Analysis::Tokenizer->new( token_re => qr/\S+/, );
# or...
my $word_char_tokenizer
= KinoSearch1::Analysis::Tokenizer->new( token_re => qr/\w+/, );
# or...
my $apostrophising_tokenizer = KinoSearch1::Analysis::Tokenizer->new;
# then... once you have a tokenizer, put it into a PolyAnalyzer
my $polyanalyzer = KinoSearch1::Analysis::PolyAnalyzer->new(
analyzers => [ $lc_normalizer, $word_char_tokenizer, $stemmer ], );
DESCRIPTION
Generically, "tokenizing" is a process of breaking up a string into an array of "tokens".
# before:
my $string = "three blind mice";
# after:
@tokens = qw( three blind mice );
KinoSearch1::Analysis::Tokenizer decides where it should break up the text based on the value of "token_re".
# before:
my $string = "Eats, Shoots and Leaves.";
# tokenized by $whitespace_tokenizer
@tokens = qw( Eats, Shoots and Leaves. );
# tokenized by $word_char_tokenizer
@tokens = qw( Eats Shoots and Leaves );
METHODS
new
# match "O'Henry" as well as "Henry" and "it's" as well as "it"
my $token_re = qr/
\b      # start with a word boundary
\w+     # Match word chars.
(?:     # Group, but don't capture...
'\w+    # ... an apostrophe plus word chars.
)?      # Matching the apostrophe group is optional.
\b      # end with a word boundary
/xsm;
my $tokenizer = KinoSearch1::Analysis::Tokenizer->new(
token_re => $token_re, # default: what you see above
);
Constructor. Takes one hash-style parameter.
o token_re - must be a pre-compiled regular expression matching one token.
COPYRIGHT
Copyright 2005-2010 Marvin Humphrey
LICENSE, DISCLAIMER, BUGS, etc.
See KinoSearch1 version 1.00.
perl v5.14.2 2011-11-15 KinoSearch1::Analysis::Tokenizer(3pm)