NOQUEUE: low on space (have 0, SMTP-DAEMON needs 101 in /var/spool/mqueue)
Post 22864 by Dolly, Wednesday 12 June 2002, 04:35 AM
Dear RTM,

Thanks for the immediate reply. I found one core file in /opt/splash/ and deleted it.

/var/adm/messages.* - 5 files deleted

/usr/perl5 -- deleted
/usr/apache -- deleted

The filesystem is still showing the same statistics: 100% used.

# df -k
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/dsk/c0t0d0s7    2645310 2583839     8565   100%    /
/proc                      0       0        0     0%    /proc
fd                         0       0        0     0%    /dev/fd
mnttab                     0       0        0     0%    /etc/mnttab
swap                  670328       0   670328     0%    /var/run
swap                  670920     592   670328     1%    /tmp
/dev/dsk/c0t0d0s0    5109622  446087  4612439     9%    /export/home
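To see which directories are actually eating the root filesystem, a quick survey along these lines could help (a sketch; -d keeps du from crossing mount points, and the 20480 512-byte-block threshold, roughly 10 MB, is an arbitrary cutoff):

# du -dk / | sort -n | tail -10
# find / -mount -size +20480 -ls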


No mail system is running on it.

It is a server for all FTP accounts and the primary DNS.

What is /var/adm/wtmpx? It is quite a big file; I found it while checking /var/adm/messages.*
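/var/adm/wtmpx is the binary login-accounting log read by last(1); it grows without bound on busy systems and can be safely truncated in place. A minimal sketch (copying /dev/null over it, rather than removing the file, preserves its ownership and permissions):

# cp /dev/null /var/adm/wtmpx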


# fuser /dev/dsk/c0t0d0s7

/dev/dsk/c0t0d0s7: 23145ctom 22827ctom 22822tom 22819ctom 21169crtom 19176ctom 19175ctom 1909ctom 22616ctom 22600ctom 22588ctom 22555ctom 16442ctom 428ctom 425ctom 424ctom 420ctom 409ctom
407ctom 401ctom 400ctom 399ctom 387ctom 385ctom 384ctom 352ctom 348ctom 338ctom 322ctom 320ctom 319ctom 274ctom 273ctom 271ctom 270ctom 252ctom 239ctom 238ctom
231ctom 221ctom 212ctom 201ctom 196ctom 189ctom 187ctom 160ctom 46ctom 44ctom
3c 2c 1ctom 0c
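Each number in this output is a PID and the letters are fuser's usage codes: c = current directory, o = open file, t = text (executable) segment, m = memory-mapped file, r = root directory. Note that a file deleted while a process still holds it open keeps its blocks allocated until that process closes it or exits, which would explain df still reporting 100% after the cleanup. A sketch for mapping the PIDs to process names (fuser prints the PIDs on stdout and the code letters on stderr):

# ps -o pid,user,comm -p "`fuser -c / 2>/dev/null`"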

How should I act on this? Should I kill all the processes corresponding to /dev/dsk/c0t0d0s7?
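Killing everything holding the root filesystem would take the machine down: PIDs 0 through 3 and init are in that list. A safer first step is to inspect individual suspects with pfiles(1), which lists a process's open file descriptors (on older Solaris releases it shows device and inode numbers rather than path names). A sketch, using PID 23145 from the fuser output above:

# pfiles 23145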

Regards,
 

volume-config(4)						   File Formats 						  volume-config(4)

NAME
volume-config - Solaris Volume Manager volume configuration information for top down volume creation with metassist

SYNOPSIS
/usr/share/lib/xml/dtd/volume-config.dtd

DESCRIPTION
A volume configuration file, XML-based and compliant with the volume-config.dtd Document Type Definition, describes the detailed configuration of the volume or volumes to be created, including the names, sizes, and configurations of all the components used in the volumes. This configuration file can be automatically generated by running metassist with the -d option, or can be manually created.

The volume configuration file can then be used either to generate a command file or to directly create volumes by running metassist and specifying the volume configuration file as input to the command.

As a system administrator, you would want to change, manually create, or edit the volume configuration file only if there are small details of the configuration that you want to change. For example, you might want to change names for volumes or hot spare pools, mirror read options, or stripe interlace values. It would also be possible to select different devices, change slice sizes, or make similar changes, but that is generally not recommended. Substantial changes to the volume-config file could result in a poor or non-functional configuration.

With a volume-config file, you can run metassist and provide the file as input to the command to generate either a command file or to actually set up the configuration.

Defining Volume Configuration
The top-level element <volume-config> surrounds the volume configuration data. This element has no attributes. A volume configuration requires exactly one <diskset> element, which must be the first element of the volume configuration. Additionally, the volume-config can have zero or more of the following elements: <disk>, <slice>, <hsp>, <concat>, <stripe>, <mirror>, as required to define the configuration of the volume to be created.

Defining Disk Set
Within the <volume-config> element, a <diskset> element must exist. The <diskset> element, with the name attribute, specifies the name of the disk set in which to create the volume or volumes. This element and attribute are required. If this named disk set does not exist, it is created upon implementation of this volume configuration.

Defining Slice
The volume configuration format provides for a <slice> element that defines the name of a slice to use as a component of a volume. The <slice> element requires a name attribute which specifies a full ctd name. If the <slice> is newly created as part of the volume configuration, the startsector and sizeinblocks attributes must be specified. If the slice already exists, these attributes need not be specified.

Defining Hot Spare Pool
The volume configuration format provides for a <hsp> element that defines the name of a hot spare pool to use as a component of a configuration. The <hsp> element requires a name attribute which specifies a hot spare pool name. Slices defined by <slice> elements contained in the <hsp> element are included in the hot spare pool when metassist creates it.

Defining Stripe
The <stripe> element defines stripes (interlaced RAID 0 volumes) to be used in a volume. The <stripe> element takes a required name attribute to specify a name conforming to Solaris Volume Manager naming requirements. If the name specifies an existing stripe, no <slice> elements are required. If the name specifies a new stripe, the <slice> elements to construct the slice must be specified within the <stripe> element. The <stripe> element takes an optional interlace attribute as value and units (for example, 16KB, 5BLOCKS, 20MB). If this value is not specified, the Solaris Volume Manager default value is used.

Defining Concat
The <concat> element defines concats (non-interlaced RAID 0 volumes) to be used in a configuration. It is the same as a <stripe> element, except that the interlace attribute is not valid.

Defining Mirror
The <mirror> element defines mirrors (RAID 1 volumes) to be used in a volume configuration. It can contain combinations of <concat> and <stripe> elements (to explicitly determine which volumes are used as submirrors). The <mirror> element takes a required name attribute to specify a name conforming to Solaris Volume Manager naming requirements.

The <mirror> element takes an optional read attribute to define the mirror read options (ROUNDROBIN, GEOMETRIC, or FIRST) for the mirrors. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional write attribute to define the mirror write options (PARALLEL, SERIAL, or FIRST) for the mirrors. If this attribute is not specified, the Solaris Volume Manager default value is used.

The <mirror> element takes an optional passnum attribute (0-9) to define the mirror passnum, which determines the order in which mirrors are resynced at boot, if required. Smaller numbers are resynced first. If this attribute is not specified, the Solaris Volume Manager default value is used.

EXAMPLES
Example 1: Specifying a Volume Configuration

The following is an example volume configuration:

<!-- Example configuration -->
<volume-config>

    <!-- Specify the existing disk set to use -->
    <diskset name="redundant"/>

    <!-- Create slices -->
    <slice name="/dev/dsk/c0t0d1s7" startsector="1444464" sizeinblocks="205632BLOCKS"/>
    <slice name="/dev/dsk/c0t0d1s6" startsector="1239840" sizeinblocks="102816KB"/>

    <!-- Create a concat -->
    <concat name="d12">
        <slice name="/dev/dsk/c0t0d0s7"/>
        <slice name="/dev/dsk/c0t0d0s6"/>
        <slice name="/dev/dsk/c0t0d1s7"/>
        <slice name="/dev/dsk/c0t0d1s6"/>
        <!-- Create (and use) a HSP -->
        <hsp name="hsp0">
            <slice name="/dev/dsk/c0t0d4s0"/>
            <slice name="/dev/dsk/c0t0d4s1"/>
            <slice name="/dev/dsk/c0t0d4s3"/>
            <slice name="/dev/dsk/c0t0d4s4"/>
        </hsp>
    </concat>

    <!-- Create a stripe -->
    <stripe name="d15" interlace="32KB">
        <slice name="/dev/dsk/c0t0d0s7"/>
        <slice name="/dev/dsk/c0t0d1s7"/>
        <!-- Use a previously-defined HSP -->
        <hsp name="hsp0"/>
    </stripe>

    <!-- Create a mirror -->
    <mirror name="d10">
        <!-- Submirror 1: An existing stripe -->
        <stripe name="d11"/>
        <!-- Submirror 2: The concat defined above -->
        <concat name="d12"/>
        <!-- Submirror 3: A stripe defined here -->
        <stripe name="d13">
            <slice name="/dev/dsk/c0t0d2s6"/>
            <slice name="/dev/dsk/c0t0d2s7"/>
            <slice name="/dev/dsk/c0t0d3s6"/>
            <slice name="/dev/dsk/c0t0d3s7"/>
        </stripe>
    </mirror>

</volume-config>
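To turn such a file into actual volumes, metassist takes it as input. A hedged sketch of the round trip this page describes, with hypothetical file names, assuming -d (emit a volume configuration rather than acting on it) and -F (input file) behave as described above:

# metassist create -d -F volume-request.xml > volume-config.xml
(edit volume-config.xml as needed, then apply it)
# metassist create -F volume-config.xml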
FILES
/usr/share/lib/xml/dtd/volume-config.dtd

SEE ALSO
metassist(1M), metaclear(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metareplace(1M), metaroot(1M), metaset(1M), metasync(1M), metattach(1M), mount_ufs(1M), mddb.cf(4)

Solaris Volume Manager Administration Guide

SunOS 5.11                        8 Aug 2003                    volume-config(4)