Top Forums > Programming > Reason for Segmentation fault
Post 302146701 by LivinFree, Wednesday 21st of November 2007, 11:43:56 PM (11-22-2007)
By the way, many Linux distributions disable core file creation by default. Annoying for developers, but safer for users (I guess).

Try:
Code:
ulimit -c unlimited

Before running your program, of course. You should then get a core file on a SIGSEGV.
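To see the whole thing end to end, here is a minimal sketch (not from the original post; the file name crash.c and the gdb commands in the comments are just illustrative) of a C program that dereferences a NULL pointer, so the kernel delivers SIGSEGV and, with the limit raised, dumps core:
Code:
/* crash.c - deliberately trigger SIGSEGV to produce a core file.
 *
 * Build and run (assuming gcc and gdb are installed):
 *   gcc -g -o crash crash.c
 *   ulimit -c unlimited      # raise the core file size limit in this shell
 *   ./crash                  # "Segmentation fault (core dumped)"
 *   gdb ./crash core         # then "bt" shows the faulting line
 */
#include <stdio.h>

int main(void)
{
    int *p = NULL;      /* pointer that points nowhere */

    printf("about to dereference a NULL pointer...\n");
    *p = 42;            /* invalid write -> SIGSEGV -> core dump */
    return 0;
}

If no core file shows up even with the limit raised, check /proc/sys/kernel/core_pattern; on some systems the kernel hands the dump to a helper (for example a crash-reporting daemon) instead of writing a core file in the working directory.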
 

10 More Discussions You Might Find Interesting

1. Programming

segmentation fault

Hi all, I'm trying to execute a C program under Linux RH and it gives me a segmentation fault. This program was running under UNIX AT&T. Anybody know what the problem could be? Thanks in advance. Regards (2 Replies)
Discussion started by: omran
2 Replies

2. AIX

Segmentation fault

Hi, during execution of a backup binary I get the following error: "Program error 11 (Segmentation fault), saving core file in '/usr/datatools" Riyaz (2 Replies)
Discussion started by: rshaikh
2 Replies

3. Linux

Segmentation fault

Hi, on Linux Red Hat (with Oracle DB 9.2.0.7) I have the following error: RMAN> delete obsolete; RMAN retention policy will be applied to the command RMAN retention policy is set to redundancy 2 using channel ORA_DISK_1 Segmentation fault What does it mean? And what is the solution? Many thanks. (0 Replies)
Discussion started by: big123456
0 Replies

4. UNIX for Dummies Questions & Answers

Segmentation Fault

Hi, while comparing primary-key data of two tables through a BTEQ script I am getting this error. The script is a shell script. *** Error: The following error was encountered on the output file. Script.sh: 3043492 Segmentation fault (coredump). Please let me know how to get past it. ... (5 Replies)
Discussion started by: monika
5 Replies

5. Programming

segmentation fault

If I do this (assume struct life { char *nolife; }; and struct life **life; with malloc initialization and everything): if (life->nolife == 0), would I get an error at life->nolife if it is equal to 0? A wrong access? (3 Replies)
Discussion started by: joey
3 Replies
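Regarding the double-pointer snippet in discussion 5 above: here is a small self-contained sketch, using the thread's own names but with the malloc calls spelled out (the thread only says "malloc initialization & everything"), showing why life->nolife does not even compile on a struct life **, and why comparing (*life)->nolife to 0 is harmless while dereferencing an invalid pointer is what actually segfaults:
Code:
#include <stdio.h>
#include <stdlib.h>

struct life { char *nolife; };

int main(void)
{
    struct life **life;                 /* pointer to pointer */

    /* life->nolife would be a compile error here: 'life' points to a
     * 'struct life *', which has no member called 'nolife'.
     * You have to go through both levels: (*life)->nolife.           */

    life = malloc(sizeof *life);        /* space for one struct life *  */
    *life = malloc(sizeof **life);      /* space for one struct life    */
    (*life)->nolife = NULL;             /* inner char pointer is now 0  */

    /* Comparing a NULL pointer to 0 is fine; dereferencing it is not. */
    if ((*life)->nolife == 0)
        printf("nolife is a null pointer (no crash just comparing it)\n");

    /* Skipping either malloc above and then touching (*life)->nolife
     * is exactly the kind of invalid access that raises SIGSEGV.      */

    free(*life);
    free(life);
    return 0;
}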

6. Programming

segmentation fault

What is a segmentation fault (core dumped)? (1 Reply)
Discussion started by: gokult
1 Replies

7. Programming

Using gdb, ignore beginning segmentation fault until reproduce environment segmentation fault

I use a binary (i.e. polo) that takes some parameters, so for debugging I normally do this: I wrote a watchdog script for my app (polo) that checks every second and, if it is not running, starts it. The problem is, if my app remains in a segmentation-fault state for a while (i.e. 15 ... (6 Replies)
Discussion started by: pooyair
6 Replies

8. Homework & Coursework Questions

Segmentation Fault

This is network-programming code that runs rock-paper-scissors between a client and a server. I completed it and it was working without any error. After I added the findWinner function to the server code it started giving me a segmentation fault. - the segmentation fault is fixed. Current problem: - Also... (3 Replies)
Discussion started by: femchi
3 Replies

9. Programming

Segmentation fault

I keep getting this fault in a lot of the code I write; I'm not exactly sure why, so I'd really appreciate it if someone could explain the idea to me. For example, this code: #include <stdio.h> main() { unsigned long a=0; unsigned long b=0; int z; { printf("Enter two... (2 Replies)
Discussion started by: sizzler786
2 Replies
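The code in discussion 9 is cut off, so the real bug is not visible; still, one very common cause of segfaults in small input-reading programs like that one is passing a value instead of an address to scanf. A hypothetical sketch (the prompt text and the fix are assumptions, not taken from the thread):
Code:
#include <stdio.h>

int main(void)
{
    unsigned long a = 0;
    unsigned long b = 0;

    printf("Enter two numbers: ");

    /* WRONG: scanf("%lu %lu", a, b);
     * passes the values 0 and 0 where scanf expects addresses,
     * so scanf writes through address 0 -> SIGSEGV.            */

    /* RIGHT: pass the addresses of the variables.              */
    if (scanf("%lu %lu", &a, &b) != 2) {
        fprintf(stderr, "bad input\n");
        return 1;
    }

    printf("sum = %lu\n", a + b);
    return 0;
}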

10. Programming

C. To segmentation fault or not to segmentation fault, that is the question.

Oddities with gcc, 2.95.3 for the AMIGA and 4.2.1 for my current OSX 10.14.1... I am creating a basic calculator in C for the AMIGA ADE *NIX emulator, as it does not have one. Below are two very condensed snippets to which I have added the results inside each code section. IMPORTANT!... (11 Replies)
Discussion started by: wisecracker
11 Replies
limit(1)                                                              limit(1)

NAME
     limit, ulimit, unlimit - set or get limitations on the system resources
     available to the current shell and its descendents

SYNOPSIS
     /usr/bin/ulimit [-f] [blocks]

   sh
     ulimit [ -[HS][a | cdfnstv] ]
     ulimit [ -[HS][c | d | f | n | s | t | v] ] limit

   csh
     limit [-h] [ resource [ limit ]]
     unlimit [-h] [ resource ]

   ksh
     ulimit [-HSacdfnstv] [ limit ]

DESCRIPTION
   /usr/bin/ulimit
     The ulimit utility sets or reports the file-size writing limit imposed
     on files written by the shell and its child processes (files of any size
     may be read). Only a process with appropriate privileges can increase
     the limit.

   sh
     The Bourne shell built-in function, ulimit, prints or sets hard or soft
     resource limits. These limits are described in getrlimit(2).

     If limit is not present, ulimit prints the specified limits. Any number
     of limits may be printed at one time. The -a option prints all limits.

     If limit is present, ulimit sets the specified limit to limit. The
     string unlimited requests the largest valid limit. Limits may be set for
     only one resource at a time. Any user may set a soft limit to any value
     below the hard limit. Any user may lower a hard limit. Only a super-user
     may raise a hard limit. See su(1M).

     The -H option specifies a hard limit. The -S option specifies a soft
     limit. If neither option is specified, ulimit will set both limits and
     print the soft limit.

     The following options specify the resource whose limits are to be
     printed or set. If no option is specified, the file size limit is
     printed or set.

     -c    maximum core file size (in 512-byte blocks)
     -d    maximum size of data segment or heap (in kbytes)
     -f    maximum file size (in 512-byte blocks)
     -n    maximum file descriptor plus 1
     -s    maximum size of stack segment (in kbytes)
     -t    maximum CPU time (in seconds)
     -v    maximum size of virtual memory (in kbytes)

   csh
     The C-shell built-in function, limit, limits the consumption by the
     current process or any process it spawns, each not to exceed limit on
     the specified resource. If limit is omitted, print the current limit; if
     resource is omitted, display all limits.

     -h    Use hard limits instead of the current limits. Hard limits impose
           a ceiling on the values of the current limits. Only the privileged
           user may raise the hard limits.

     resource is one of:

     cputime         Maximum CPU seconds per process.
     filesize        Largest single file allowed. Limited to the size of the
                     filesystem (see df(1M)).
     datasize        The maximum size of a process's heap in kilobytes.
     stacksize       Maximum stack size for the process. The default stack
                     size is 2**64.
     coredumpsize    Maximum size of a core dump (file). This is limited to
                     the size of the filesystem.
     descriptors     Maximum number of file descriptors. Run the sysdef(1M)
                     command to obtain the maximum possible limits for your
                     system. The values reported are in hexadecimal, but can
                     be translated into decimal numbers using the bc(1)
                     command.
     memorysize      Maximum size of virtual memory.

     limit is a number, with an optional scaling factor, as follows:

     nh       Hours (for cputime).
     nk       n kilobytes. This is the default for all but cputime.
     nm       n megabytes or minutes (for cputime).
     mm:ss    Minutes and seconds (for cputime).

     unlimit removes a limitation on resource. If no resource is specified,
     then all resource limitations are removed. See the description of the
     limit command for the list of resource names.

     -h    Remove corresponding hard limits. Only the privileged user may do
           this.

   ksh
     The Korn shell built-in function, ulimit, sets or displays a resource
     limit. The available resource limits are listed below. Many systems do
     not contain one or more of these limits. The limit for a specified
     resource is set when limit is specified. The value of limit can be a
     number in the unit specified below with each resource, or the value
     unlimited.

     The -H and -S flags specify whether the hard limit or the soft limit for
     the given resource is set. A hard limit cannot be increased once it is
     set. A soft limit can be increased up to the value of the hard limit. If
     neither the -H nor the -S option is specified, the limit applies to
     both. The current resource limit is printed when limit is omitted. In
     this case, the soft limit is printed unless -H is specified. When more
     than one resource is specified, the limit name and unit are printed
     before the value.

     -a    Lists all of the current resource limits.
     -c    The number of 512-byte blocks on the size of core dumps.
     -d    The number of K-bytes on the size of the data area.
     -f    The number of 512-byte blocks on files written by child processes
           (files of any size may be read).
     -n    The number of file descriptors plus 1.
     -s    The number of K-bytes on the size of the stack area.
     -t    The number of seconds (CPU time) to be used by each process.
     -v    The number of K-bytes for virtual memory.

     If no option is given, -f is assumed.

   Per-Shell Memory Parameters
     The heapsize, datasize, and stacksize parameters are not system
     tunables. The only controls for these are hard limits, set in a shell
     startup file, or system-wide soft limits, which, for the current version
     of the Solaris OS, is 2**64 bytes.

OPTIONS
     The following option is supported by ulimit:

     -f    Sets (or reports, if no blocks operand is present) the file size
           limit in blocks. The -f option is also the default case.

OPERANDS
     The following operand is supported by ulimit:

     blocks    The number of 512-byte blocks to use as the new file size
               limit.

EXAMPLES
   /usr/bin/ulimit
     Example 1: Limiting the Stack Size

     The following example limits the stack size to 512 kilobytes:

       example% ulimit -s 512
       example% ulimit -a
       time(seconds)         unlimited
       file(blocks)          100
       data(kbytes)          523256
       stack(kbytes)         512
       coredump(blocks)      200
       nofiles(descriptors)  64
       memory(kbytes)        unlimited

   sh/ksh
     Example 2: Limiting the Number of File Descriptors

     The following command limits the number of file descriptors to 12:

       example$ ulimit -n 12
       example$ ulimit -a
       time(seconds)         unlimited
       file(blocks)          41943
       data(kbytes)          523256
       stack(kbytes)         8192
       coredump(blocks)      200
       nofiles(descriptors)  12
       vmemory(kbytes)       unlimited

   csh
     Example 3: Limiting the Core Dump File Size

     The following command limits the size of a core dump file to 0
     kilobytes:

       example% limit coredumpsize 0
       example% limit
       cputime               unlimited
       filesize              unlimited
       datasize              523256 kbytes
       stacksize             8192 kbytes
       coredumpsize          0 kbytes
       descriptors           64
       memorysize            unlimited

     Example 4: Removing the Limitation for Core File Size

     The following command removes the above limitation for the core file
     size:

       example% unlimit coredumpsize
       example% limit
       cputime               unlimited
       filesize              unlimited
       datasize              523256 kbytes
       stacksize             8192 kbytes
       coredumpsize          unlimited
       descriptors           64
       memorysize            unlimited

ENVIRONMENT VARIABLES
     See environ(5) for descriptions of the following environment variables
     that affect the execution of ulimit: LANG, LC_ALL, LC_CTYPE,
     LC_MESSAGES, and NLSPATH.

EXIT STATUS
     The following exit values are returned by ulimit:

     0     Successful completion.
     >0    A request for a higher limit was rejected or an error occurred.

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +---------------------+-----------------+
     |   ATTRIBUTE TYPE    | ATTRIBUTE VALUE |
     +---------------------+-----------------+
     | Availability        | SUNWcsu         |
     | Interface Stability | Standard        |
     +---------------------+-----------------+

SEE ALSO
     bc(1), csh(1), ksh(1), sh(1), df(1M), su(1M), swap(1M), sysdef(1M),
     getrlimit(2), attributes(5), environ(5), standards(5)

                                 19 Aug 2005                         limit(1)
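Tying the man page back to the original post: the shell limits above sit on top of the getrlimit(2)/setrlimit(2) interface it references, so a program can also raise its own core-file limit before it crashes. A minimal sketch, assuming a POSIX system (not something from the thread):
Code:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current core-file limits (what "ulimit -c" reports). */
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("core limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit - the same effect as
     * "ulimit -c unlimited" when the hard limit is unlimited.       */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* Any SIGSEGV from here on can produce a core file. */
    return 0;
}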