I'm trying to set the open files value to 4000 on a SLES 9 system.
Current values: ulimit -n reports the open files limit at the default of 1024.
I can set it using this:
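Presumably just the usual per-shell ulimit call, something like:

ulimit -n 4000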
But this obviously sets it only for the shell session where I run the command to set it. I want to set this to 4000 for all time.
What I've tried so far:
Extract from /etc/security/limits.conf:
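Presumably lines along these lines (the first field may have named a specific user or group rather than *):

* soft nofile 4000
* hard nofile 4000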
In the sshd and login files in /etc/pam.d, I made sure the pam_limits session module is enabled.
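That would be an entry along these lines in each file (the control field may differ):

session required pam_limits.so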
What am I missing here? I even tried a reboot (I don't know if it was required, but the server isn't live, so I can experiment a bit :P)
Thanks for checking, reborg. I do have an extra line in /etc/pam.d/sshd, "auth optional pam_lockout.so minuid=100", but I doubt it has anything to do with the limits.conf file.
An update...
I found that connecting by telnet sets the value to 4000 as set in /etc/security/limits.conf, but using ssh still keeps the old (default) value of 1024. A colleague suggested that this has something to do with the UsePrivilegeSeparation directive in sshd_config.
However, this directive was originally not present in my sshd_config file (I don't know the default value), and I tried setting it to both yes and no (restarting sshd each time, of course), but it made no difference.
Again, if anyone has any further suggestions, it would help.
Reborg, if you could tell me what the UsePrivilegeSeparation is set to on your SuSE box, it would help too.
Fixed the problem! A close look at the sshd_config file showed that UsePAM wasn't set at all. I set it to yes, and also set ChallengeResponseAuthentication to no.
After this, a restart of sshd is all it takes to fix it.
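For anyone else hitting this, the change boils down to something like the following in sshd_config, then a restart (the restart command shown is the typical SLES one and may differ on your box):

UsePAM yes
ChallengeResponseAuthentication no

/etc/init.d/sshd restart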
Ubuntu users,
I am configuring an Ubuntu 14.04 server as a load injector.
I have appended the hard and soft limits to /etc/security/limits.conf for any user (apart from root):
* hard nofile 65536
* soft nofile 65536
I am seeing the figure 65536 in... (5 Replies)
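A quick way to confirm the new values take effect (assuming pam_limits is enabled for ssh/login sessions, which it is by default on Ubuntu 14.04) is to open a fresh non-root session and run:

ulimit -Sn
ulimit -Hn

Both should report 65536.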
Hello all,
I have been tasked with finding the current open file descriptors versus the limit set. In Linux, this can be done like so:
cat /proc/sys/fs/file-nr
3391     969     52427
  |        |        |
  |        |        maximum open file descriptors
  |        total free
  allocated... (2 Replies)
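A small sketch (not from the original thread) that turns those fields into a quick in-use-versus-maximum check; the per-process variant counts entries under /proc/<pid>/fd:

awk '{ printf "allocated: %d of %d (free: %d)\n", $1, $3, $2 }' /proc/sys/fs/file-nr
ls /proc/$$/fd | wc -l     # rough count of open descriptors for the current shell; compare with ulimit -n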
I wrote a perl program that simultaneously reads in data from 691 tar.gz files using zcat. I can run one instance of the program without any issues, and the memory and swap usage is negligible. However, when I attempt to run more than one instance, I start to get fork: resource unavailable messages. Are... (6 Replies)
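Not from the original post, but the limits usually involved when fork starts failing like that are the per-user process limit and the open-file limit; a quick check (bash/ksh syntax assumed):

ulimit -u                  # max user processes
ulimit -n                  # max open file descriptors
ps -u "$USER" | wc -l      # processes already running for this user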
I have a system with the following settings:
Min: 0.10
Assigned: 2.0
Max: 6.0
The partition is uncapped and the weight is 128.
I would like to know: even though it is uncapped, is the maximum it can use 6?
The actual pool has 16.
I remember reading about this somewhere, but I can't recall the answer; can anyone... (3 Replies)
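Not from the original thread, but the entitlement and capping settings being described can be checked on the LPAR itself (output field names vary a little by AIX level):

lparstat -i | egrep -i 'capacity|virtual|mode|weight'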
Hi
How do I increase the maximum number of open files for an "sco xenix binary" running on "sco unix openserver 5.0.7"?
I have changed the "NOFILES" kernel parameter to 512, but the Xenix binary still can't open more than 60.
tnx (4 Replies)
Hi,
I am getting an error when I assign a variable of more than 315 characters in length to an array:
set -A array <variable>
<variable> holds the values 000001 000002 and so on, up to 000045.
It gives this error:
"The specified subscript cannot be greater than 1024."
Can anyone help me solve this? (2 Replies)
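A minimal sketch of the pattern being described (ksh88 assumed; the values are placeholders):

#!/bin/ksh
var="000001 000002 000003"
set -A array $var                       # word-splits var into array elements
echo "${#array[*]} elements, first=${array[0]}"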
I am trying to change the maximum number of files you can open in this environment (Solaris, bash).
I first check the current setting with:
ulimit -a
which returns this:
------------------------------------
data seg size (kbytes) unlimited
file size (blocks) unlimited
open files... (1 Reply)
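Not part of the original post, but on Solaris the limit can be raised per shell with ulimit (up to the hard limit), or system-wide as in the next excerpt; the value 4096 below is only an example:

ulimit -n 4096                 # current shell only

and, for a persistent system-wide change, these lines in /etc/system followed by a reboot:

set rlim_fd_cur=4096
set rlim_fd_max=4096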
I have set the maximum number of file descriptors a process can open to 8192 using the following lines
set rlim_fd_max=8192
set rlim_fd_cur=8192
in the /etc/system file.
I rebooted the machine, and ulimit -n and ulimit -Hn both display the limit as 8192. However, when I run my... (2 Replies)
Hello all!
I have found a new home, this place is great!
I have been searching for days to find a way to cap the size of a log.txt file using a cron job executing a shell script. Is it possible for a script to remove older entries in a log file to maintain a limited file size? If so,... (5 Replies)
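Not part of the original post, but one common approach is a small script run from cron that keeps only the newest lines; the path, script name, and line count below are placeholders:

#!/bin/sh
# keep only the last 1000 lines of the log
LOG=/path/to/log.txt
tail -n 1000 "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"

A crontab entry such as 0 * * * * /path/to/trim-log.sh would then run it hourly.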