Full Discussion: Memory problems.
Operating Systems AIX Memory problems. Post 302967421 by agent.kgb on Tuesday 23rd of February 2016 04:34:43 PM
1. First you have to work out in which "folder" (directory) there is not enough space. Every directory has a path, e.g. /home/user/Downloads.

2. After you have found the path, check whether it really is a problem of space in the filesystem. The easy way is df -g /your/own/path. For example:

Code:
# df -g /home/user/Downloads
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd1           6.00      3.07   49%      132     1% /home

Paste the output of the command above into the forum. From the command above you see:
  • The filesystem device (/dev/hd1)
  • How much free space it has (3.07 GB)
  • Where it is mounted (/home)
All of these parameters are needed in the following steps.


If you have more than enough space on the filesystem but you still can't create a file, the problem is somewhere else.
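One common case of "somewhere else": the filesystem can run out of inodes even while blocks are still free, and the %Iused column of df -g shows this. A minimal sketch of checking it with awk - the sample input here is just the df -g output from step 2, not live data; on a real system you would pipe df -g /path straight into awk:

```shell
# Flag a filesystem whose inode usage (%Iused) is close to exhaustion.
# The sample input mirrors the `df -g` output from step 2.
df_output='Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd1           6.00      3.07   49%      132     1% /home'

printf '%s\n' "$df_output" | awk 'NR > 1 {
    gsub(/%/, "", $6)                 # strip the % sign from %Iused
    if ($6 + 0 >= 90)
        printf "%s: inodes nearly exhausted (%s%%)\n", $7, $6
    else
        printf "%s: inode usage OK (%s%%)\n", $7, $6
}'
```

The 90% threshold is an arbitrary choice for the sketch; pick whatever margin suits your monitoring.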


3. If you don't have enough space in the filesystem, you can usually expand it. But first you have to find out whether you have enough free space on your disks. All filesystems live in volume groups. You can find the volume group where your filesystem lives with the following command:


Code:
# getlvodm -b $(getlvodm -l hd1)
rootvg


hd1 here is the logical volume name taken from the filesystem device in the df -g output - /dev/hd1 without the /dev/ prefix.


4. The usual way to check whether you have enough space in the volume group is lsvg. For example:


Code:
# lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  000252f30000d60000000139a72af0ba
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      546 (69888 megabytes)
MAX LVs:            256                      FREE PPs:       195 (24960 megabytes)
LVs:                14                       USED PPs:       351 (44928 megabytes)
OPEN LVs:           13                       QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no


rootvg in this case comes from the output in step 3.
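The free-space figure lsvg prints is simply FREE PPs multiplied by the PP SIZE. A quick sketch of the arithmetic, using the numbers from the lsvg output above:

```shell
# Free space in a volume group = FREE PPs * PP SIZE.
# 195 and 128 are the FREE PPs and PP SIZE from the lsvg output above.
free_pps=195
pp_size_mb=128
free_mb=$((free_pps * pp_size_mb))
echo "free: ${free_mb} MB (about $((free_mb / 1024)) GB)"
```

This reproduces the 24960 megabytes shown next to FREE PPs in the lsvg output.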


5. As you can see in this example, the volume group has more than 20 GB free, which means you could expand the filesystem by up to that amount. But I would suggest always leaving some spare capacity in a volume group. Let's say you want to add another 15 GB to your filesystem. You take the mount point from step 2 - in our example /home - and execute the following command:


Code:
# chfs -a size=+15G /home
Filesystem size changed to 44040192


Please note the plus sign after = - without it, the value is treated as the new absolute size rather than an increment. You have to be root, or have equivalent authorizations, to execute the command.
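As a sanity check: chfs reports the new size in 512-byte blocks. Converting the figure from the output above back to GB is straightforward:

```shell
# chfs prints the new filesystem size in 512-byte blocks.
# 44040192 is the figure from the chfs output above.
blocks=44040192
gb=$((blocks * 512 / 1024 / 1024 / 1024))
echo "${blocks} blocks = ${gb} GB"
```

That works out to 21 GB - 6 GB originally plus the 15 GB we added.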



After that you can check whether the filesystem was expanded:
Code:
# df -g /home
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd1          21.00     18.06   14%      132     1% /home


As a side note: there can be many different situations in which you can't write to a filesystem, or even expand it, regardless of the free space in it. This is just a short introduction to expanding filesystems, not a full set of documentation covering every troubleshooting technique. If you have a problem, always be as specific as you can, noting all the commands you've executed and their output.
 
