UNIX for Advanced & Expert Users: ENOMEM in Journal Retry Error
Post #69340 by blowtorch, Thursday 14 April 2005, 06:37 AM
Shooting in the dark here, but if you are using a journaled file system, the problem could be that there is insufficient memory for whatever the kernel's JFS code is trying to do...
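One rough way to check that theory is to look at current memory usage and at the kernel log around the time of the failure. The commands below are only a sketch and assume a Linux system where free, vmstat and dmesg are available; the exact commands and log messages differ on other platforms such as AIX.

    #!/bin/sh
    # Quick memory triage sketch; assumes Linux with free/vmstat/dmesg installed.

    # Current memory and swap usage, in megabytes.
    free -m

    # Memory and paging activity, sampled once a second for five seconds.
    vmstat 1 5

    # Kernel messages mentioning the journaling code or failed allocations.
    dmesg | grep -iE 'jfs|journal|enomem|out of memory'

If free memory and swap are exhausted, or the kernel log shows allocation failures around the journal code, the insufficient-memory theory becomes much more plausible.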

Hey,
I googled this and found this; it will probably help.

Last edited by blowtorch; 04-14-2005 at 07:41 AM. Reason: more info
 

9 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

ext3: No journal on filesystem on dm-0

Hi Linuxers, I am a newbie here and log in to this forum regularly. Recently my PC experienced a power trip and my system could not boot up after restarting. I did the following: - Boot up with "linux rescue" using the FC3 installation disk - In a shell, run "lvm vgchange...
Discussion started by: chowkimhan (0 Replies)

2. Shell Programming and Scripting

retry process in ftp

hi #!/bin/bash SERVER=10.89.40.35 USER=xyz PASSWD=xyz ftp -in $SERVER<<EOF user $USER $PASSWD mkdir PPL cd /path of remote dir lcd /path of local dir hash bin put <file name> bye EOF I have to schedule the above ftp script in crontab to run daily at a particular time....
(A generic retry sketch for this kind of job appears after this list.)
Discussion started by: rookie250 (2 Replies)

3. Shell Programming and Scripting

Retry upon FTP failure

I am using the following code in a C Shell script to transfer files to a remote server: ftp -n logxx.xxxx.xxx.xxx.com <<DO_FTP1 quote user $user_name quote pass $password ascii put $js_file_name bin put $FinalZipFile quit DO_FTP1 This code works great except on those rare occasions...
Discussion started by: phudgens (8 Replies)

4. Shell Programming and Scripting

Shell Script to Retry and Exit

OK, so I'm trying to add a function to my local script that runs a command on a remote host. The reason this is needed is that there are other scripts that run different commands on the same remote host. So the problem is that many times there are multiple scripts being run on the remote...
Discussion started by: SkySmart (1 Reply)

5. AIX

mprotect fails with ENOMEM in text segment

Hi guys, I use AIX version 5 on an IBM Power 5+ machine. I am currently trying to experiment with a sort of self-modifying code, like this: ucontext_t ut; getcontext(&ut); int iar = ut.uc_mcontext.jmp_context.iar; int pageSize = getpagesize(); int rest = iar % pageSize; void *ptr = iar -...
Discussion started by: manolo123 (6 Replies)

6. UNIX for Advanced & Expert Users

How to manipulate the conditions between every retry in wget?

Hi, when I hit the URL with the wget command, it retries according to the retry count given on the wget command line. My expectation: 1) If the 1st try fails and I am retrying, then before the 2nd retry I have to check for an "xxxxxxx" entry in the log file. 2) If the "XXXXXXX" entry is...
Discussion started by: vinothsekark (4 Replies)

7. Shell Programming and Scripting

If then else - Retry operation

I need to read a file line by line, then depending on the contents of each line, type in a code that will get written to an array. The problem I have is that when I ask the user to confirm the input code and it is wrong, how do I return to ask again? Anything I try increments the file to the next...
Discussion started by: kcpoole (6 Replies)

8. UNIX for Dummies Questions & Answers

Wget retry on 500 internal error

Hello guys, I am trying to generate a static site. I have a perl script that wgets the URL; the problem is that sometimes wget gets a 500 internal error and fails to fetch that page, so I am thinking of retrying the URL that returned 500. system $command = 'wget ... -i inputfile -o outfile" Is...
Discussion started by: neal (2 Replies)

9. Solaris

Unrecovered read error No retry

We encountered this error twice (on Solaris 10 with NetWorker installed) within the month of August, but we couldn't pinpoint the root cause; it might be a bad sector, a bad cable, or a software incompatibility. Have you experienced this issue? Please share your understanding of it. Thanks...
Discussion started by: B@S (0 Replies)
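Several of the threads above (items 2, 3, 6 and 8) come down to the same pattern: re-run a command when it fails and give up after a limited number of attempts. The following is only a minimal POSIX shell sketch of that pattern; MAX_TRIES, SLEEP_SECS and the wget command it wraps are illustrative placeholders, not values taken from any of those threads.

    #!/bin/sh
    # Generic retry-on-failure wrapper: run a command up to MAX_TRIES times,
    # sleeping between attempts. Placeholder values, adapt to your own job.

    MAX_TRIES=3
    SLEEP_SECS=30
    try=1

    while [ "$try" -le "$MAX_TRIES" ]; do
        # Replace this command with the real transfer (the ftp heredoc, the wget call, etc.).
        if wget -q "http://example.com/page.html"; then
            echo "Succeeded on attempt $try"
            exit 0
        fi
        echo "Attempt $try failed" >&2
        try=$((try + 1))
        # Wait before the next attempt, but not after the last one.
        if [ "$try" -le "$MAX_TRIES" ]; then
            sleep "$SLEEP_SECS"
        fi
    done

    echo "Giving up after $MAX_TRIES attempts" >&2
    exit 1

The same loop works for the ftp case, with one caveat: many classic ftp clients exit 0 even when a transfer fails, so you may need to check that the remote file actually arrived, or parse the session output, rather than relying on the exit status alone.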
DBIx::Class::Manual::Troubleshooting(3pm)		User Contributed Perl Documentation		 DBIx::Class::Manual::Troubleshooting(3pm)

NAME
DBIx::Class::Manual::Troubleshooting - Got a problem? Shoot it.

"Can't locate storage blabla"

You're trying to make a query on a non-connected schema. Make sure you got the current resultset from $schema->resultset('Artist') on a schema object you got back from connect().

Tracing SQL

The "DBIC_TRACE" environment variable controls SQL tracing, so to see what is happening try

    export DBIC_TRACE=1

Alternatively use the "storage->debug" class method:

    $schema->storage->debug(1);

To send the output somewhere else, set debugfh:

    $schema->storage->debugfh(IO::File->new('/tmp/trace.out', 'w'));

Alternatively you can do this with the environment variable, too:

    export DBIC_TRACE="1=/tmp/trace.out"

Can't locate method result_source_instance

For some reason the table class in question didn't load fully, so the ResultSource object for it hasn't been created. Debug this class in isolation, then try loading the full schema again.

Can't get last insert ID under Postgres with serial primary keys

Older DBI and DBD::Pg versions do not handle "last_insert_id" correctly, causing code that uses auto-incrementing primary key columns to fail with a message such as:

    Can't get last insert id at /.../DBIx/Class/Row.pm line 95

In particular the RHEL 4 and FC3 Linux distributions both ship with combinations of DBI and DBD::Pg modules that do not work correctly. DBI version 1.50 and DBD::Pg 1.43 are known to work.

Can't locate object method "source_name" via package

There's likely a syntax error in the table class referred to elsewhere in this error message. In particular make sure that the package declaration is correct. For example, for a schema "MySchema" you need to specify a fully qualified namespace: "package MySchema::MyTable;".

syntax error at or near "<something>" ...

This can happen if you have a relation whose name is a word reserved by your database, e.g. "user":

    package My::Schema::User;
    ...
    __PACKAGE__->table('users');
    __PACKAGE__->add_columns(qw/ id name /);
    __PACKAGE__->set_primary_key('id');
    ...
    1;

    package My::Schema::ACL;
    ...
    __PACKAGE__->table('acl');
    __PACKAGE__->add_columns(qw/ user_id /);
    __PACKAGE__->belongs_to( 'user' => 'My::Schema::User', 'user_id' );
    ...
    1;

    $schema->resultset('ACL')->search(
        {},
        { join => [qw/ user /], '+select' => [ 'user.name' ] }
    );

The SQL generated would resemble something like:

    SELECT me.user_id, user.name FROM acl me
    JOIN users user ON me.user_id = user.id

If, as is likely, your database treats "user" as a reserved word, you'd end up with the following errors:

    1) syntax error at or near "." - due to "user.name" in the SELECT clause
    2) syntax error at or near "user" - due to "user" in the JOIN clause

The solution is to enable quoting - see "Setting quoting for the generated SQL" in DBIx::Class::Manual::Cookbook for details.

column "foo DESC" does not exist ...

This can happen if you are still using the obsolete order hack and also happen to turn on SQL quoting.

    $rs->search( {}, { order_by => [ 'name DESC' ] } );

Since DBIx::Class >= 0.08100 and SQL::Abstract >= 1.50 the above should be written as:

    $rs->search( {}, { order_by => { -desc => 'name' } } );

For more ways to express order clauses refer to "ORDER BY CLAUSES" in SQL::Abstract.

Perl Performance Issues on Red Hat Systems

There is a problem with slow performance of certain DBIx::Class operations using the system perl on some Fedora and Red Hat Enterprise Linux systems (as well as their derivative distributions such as CentOS, White Box and Scientific Linux).

Distributions affected include Fedora 5 through Fedora 8, and RHEL5 up to and including RHEL5 Update 2. Fedora 9 (which uses perl 5.10) has never been affected - this is purely a perl 5.8.8 issue.

As of September 2008 the following packages are known to be fixed and so free of this performance issue (this means all Fedora and RHEL5 systems with full current updates will not be subject to this problem):

    Fedora 8 - perl-5.8.8-41.fc8
    RHEL5    - perl-5.8.8-15.el5_2.1

This issue is due to perl doing an exhaustive search of blessed objects under certain circumstances. The problem shows up as performance degradation exponential to the number of DBIx::Class row objects in memory, so it can be unnoticeable with certain data sets but have a huge performance impact on others.

A pair of tests for susceptibility to the issue and the performance effects of the bless/overload problem can be found in the DBIx::Class test suite, in the "t/99rh_perl_perf_bug.t" file.

Further information on this issue can be found in <https://bugzilla.redhat.com/show_bug.cgi?id=379791>, <https://bugzilla.redhat.com/show_bug.cgi?id=460308> and <http://rhn.redhat.com/errata/RHBA-2008-0876.html>.

Excessive Memory Allocation with TEXT/BLOB/etc. Columns and Large LongReadLen

It has been observed, using DBD::ODBC, that creating a DBIx::Class::Row object which includes a column of data type TEXT/BLOB/etc. will allocate LongReadLen bytes. This allocation does not leak, but if LongReadLen is large and many such row objects are created, e.g. as the output of a ResultSet query, the memory footprint of the Perl interpreter can grow very large.

The solution is to use the smallest practical value for LongReadLen.

perl v5.14.2                          2010-06-03                          DBIx::Class::Manual::Troubleshooting(3pm)
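If you suspect the DBD::Pg last_insert_id problem or the Red Hat perl performance issue described above, a quick sanity check is to compare your installed versions with the ones the manual lists as working. The sketch below is only illustrative; it assumes an RPM-based system (RHEL/Fedora) and that DBI and DBD::Pg are actually installed.

    #!/bin/sh
    # Compare installed versions with the ones mentioned in the manual above.
    # Assumes an RPM-based distribution and installed DBI/DBD::Pg modules.

    rpm -q perl        # RHEL5 fixed from perl-5.8.8-15.el5_2.1; Fedora 8 from perl-5.8.8-41.fc8

    perl -MDBI -e 'print "DBI $DBI::VERSION\n"'                 # DBI 1.50 is known to work
    perl -MDBD::Pg -e 'print "DBD::Pg $DBD::Pg::VERSION\n"'     # DBD::Pg 1.43 is known to work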