Determining Values for Nice and Priority items in limits.conf file


 
# 8  
Old 03-06-2013
I read somewhere that a low-priority process can actually run faster/cheaper because its timeslice is larger, or something such. I find the SAs go nuts if you run it all at nice -19, even though it works fine. Appearance of impropriety.
# 9  
Old 03-06-2013
On an idle machine quite possibly, though it sounds heavily implementation-specific and application-specific too. Big timeslices matter for CPU bound things.

On a loaded system, a nice -19'd process will get barely any time, politely behaving or not. That's not a bug or anything the scheduler can fix, that's simply the system doing what you told it to.

Unless everything else is 19'ed too, of course.

I agree that priority can be used intelligently, but think it should be up to the sysadmin to raise priorities above average. Leaving it up to the users can cause problems. Leaving it up to the sysadmin can cause problems too, but at least there's just one of them.

Last edited by Corona688; 03-06-2013 at 06:19 PM..
# 10  
Old 03-07-2013
Paging I/O being an exception -- favoring that can create thrashing. I accidentally found I could severely slow a system by using mmap() to map a file and then read the data, for a long list of files in succession (an mmap()-based fgrep). Memory was full of old mapped page images, and everyone else was on swap. There should be some limit on how many pages of RAM one pid can have 'originated', something like 80%, so you can use RAM for speed but not roll everyone else out, maybe invoked when too many processes are awaiting page-in. Many OSes now use mmap() for input buffering of flat data files -- no buffer needed.

For a system to be very responsive to priority, you need prioritized queues for i/o that reach out into the peripherals and networks, and that raises a lot of issues off-host. With all the buffering, NFS, remote printers, SANs and such, things tend to get democratic and ballistic early on in the flow. Getting the CPU first is not enough to keep the low guys from filling the queue with requests.

Emotionally, people think a system runs faster when everyone has more priority!
# 11  
Old 03-07-2013
Yes, and severe swap is particularly nasty in that it can steal time from high-priority things indirectly. Stealing their memory now, means stealing their time later -- they'll go for their data and get a context switch instead. If you're burning a CD, that can mean coasters. Dealing with this in the OS itself is difficult since it adds so much overhead to each page operation though.

Linux supports voluntary measures for cache control, though. You can use madvise to tell the kernel you're done with a page, and so avoid cluttering up the cache with it.
# 12  
Old 03-07-2013
I thought CD burners, the new ones, have enough buffer to survive OS and app underflow. I guess it depends on how long a track is, or whether the firmware/CD hardware/media allows it to see where it left off and turn writing back on right there.

The mmap()/munmap() pair is neat, though, as it allows a 32-bit program to use nearly unlimited RAM. Mapped pages stay in memory even if unmapped and are available to a new mapping, so you can swap super-pages of a huge set of files through your limited address space. Of course, mmap64() lets you leave it all mapped, at the cost of fatter code. Some sort of client-server arrangement lets tight 32-bit clients access all that data through a 64-bit data server.
# 13  
Old 03-07-2013
Quote:
Originally Posted by DGPickett
I thought CD burners, the new ones, have enough buffer to survive OS and app underflow.
16 megs of buffer vs 600 megs of data, worst case there's never enough.
Quote:
I guess it depends on how long a track is, or whether the firmware/CD hardware/media allows it to see where it left off and turn writing back on right there.
Modern buffer underrun protection isn't quite that perfect; it leaves little recoverable errors on the disc. I don't think a CD-R/DVD-R has the angular resolution to turn the laser back on spot-on where it left off, so I think it leaves little markers for itself when it must. It can survive brief underruns, brief being the key.
Quote:
The mmap()/munmap() pair is neat, though, as it allows a 32-bit program to use nearly unlimited RAM. Mapped pages stay in memory even if unmapped and are available to a new mapping, so you can swap super-pages of a huge set of files through your limited address space.
Interesting idea.

Last edited by Corona688; 03-07-2013 at 01:55 PM..
# 14  
Old 03-07-2013
Well, infrequent -- once you underrun and it writes a glitch gap, you have all the time in the world to get back to writing, presumably after the buffers are full again, but too many times on one CD/DVD and the bandwidth and capacity are impacted. It might be more forgiving on data CDs, since they are internally segmented. If a music CD is a full 600 MB with 12 tracks, the average track is 50 MB, so 16 megs is a good percentage. Classical albums might have very few, larger tracks compared to rock'n'roll, like 11:30 for the Larghetto of Beethoven's 2nd Symphony in D Major = 100 MB.