plock() memory locking problems


 
# 1  
Old 01-25-2002
plock() memory locking problems

I'm experiencing some strangeness when using plock().

I'm on a Solaris 5.7/SPARC machine with 64MB of memory.

I can show that plock() works by successfully locking down 10 MB of memory.

Then, I ask for 40 MB, and I get a failure notice because there is "Not enough memory" available.

I then try to get 10 MB again, and get the same "Not enough memory" failure.

Why doesn't the 10 MB lock down the second time?
I have freed everything, the memory should be available again.

Here's a link to the source code... any ideas?

http://www.wegotitall.com/user_space/troccola/plock.c

Thanks!
# 2  
Old 01-26-2002
First, a process's memory is divided into pieces called segments. There is a text segment and a data segment, which are required to exist by the standards, and there may be others. malloc() allocates free space at the end of the data segment. If there isn't enough free space, malloc() will use the sbrk() system call to add more. Thus malloc() can, and usually does, make a process grow. free(), on the other hand, will never shrink a process. It just adds the area being freed to the space available for subsequent malloc() calls.

Next, plock(PROCLOCK) attempts to lock the entire text and data segments into core. Looking at your post and your code, I gather that you think plock(PROCLOCK) will somehow lock only whatever the last malloc() call allocated. plock() doesn't know or care about malloc(); it operates on entire segments.

So your program first malloc()'ed 10 MB, which increased the size of your process by about 10 MB. At this point, plock(PROCLOCK) worked. Then you freed the area and malloc()'ed 40 MB. You get to use the spare 10 MB that you freed, but malloc() must grow your process by another 30 MB to satisfy the request. At this point, your process is too big to be locked into memory. It stays too big when you do the free(). The process won't need to grow when you malloc() the final 10 MB, since malloc() just carves 10 MB out of the 40 MB that you freed. But the process still can't be locked.
# 3  
Old 01-29-2002
OK, I see exactly what you mean. I think you are correct.

However, I'm not out of the woods just yet.

From the sbrk man page, I found that...

sbrk adds incr bytes to the break value and changes the allocated space accordingly. incr can be negative, in which case the amount of allocated space is decreased.

So, with intentions of "shrinking" the process size, I've added

sbrk(-1 * MB_to_lock * MEGABYTE);

to my code immediately following the plock call that fails.

Unfortunately, with the sbrk call, the very next malloc now causes a SEGV. I don't understand why.

Assuming my process size is N bytes to start, shouldn't it change like this?

Start up            : N bytes
After malloc(10 MB) : N + 10 MB
After free(10 MB)   : N + 10 MB  (free doesn't shrink the process)
After malloc(40 MB) : N + 40 MB
After free(40 MB)   : N + 40 MB  (free doesn't shrink the process)
After sbrk(-40 MB)  : N          (sbrk(-40 MB) should shrink it)

Am I missing something?

Thanks in advance,
T
# 4  
Old 01-29-2002
Like I said, malloc()/free() will never shrink the process. But they also aren't prepared for you to sneak in and shrink it for them. malloc() assumes that the process is still the same size it left it. So when it tries to reference the now non-existent space that it previously allocated, you get a SIGSEGV.
# 5  
Old 01-29-2002
So, in a nutshell, there's no way to shrink the process size again.
# 6  
Old 01-30-2002
Quote:
Originally posted by troccola
So, in a nutshell, there's no way to shrink the process size again.
Yow! I didn't mean to imply that. There is no way to shrink the data segment if you are using the standard malloc and free routines. To be more accurate, there is no way to shrink the data segment's virtual size. If you lay off plock(), the kernel will happily grow and shrink its physical size as required. (And by the way, repeatedly locking and unlocking segments as you are doing is guaranteed to devastate system performance if memory is tight.)

Also, you can call sbrk() directly. There is a nasty can of worms here. The direction in which the heap grows is undefined. It may grow in the same direction as the stack. Or it may grow in the reverse direction. So when you allocate a buffer you may need to return a pointer to the first new byte or the last new byte. This can change from system to system. So you will want your own routine that sits on top of sbrk to hide the idiosyncrasies of pointer arithmetic from the application. And be aware that this routine cannot be written portably.

And using the stack is another way processes grow and then shrink. Automatic variables spring into existence when a function is called and vanish when it returns.