setting ulimit -n with a value more than 1024000


 
# 1  
Old 07-26-2009

I would like to set the maximum number of open files per process to more than 1024000 (for a specific application scalability purpose). I am using RHEL 5.3/ext4.

Code:
%sysctl fs.file-max
fs.file-max = 164766821

I have also added the following to /etc/security/limits.conf:

Code:
* soft nofile 4096000
* hard nofile 4096000


However, I am stuck at:

Code:
ulimit -n <value>

The maximum value it accepts is 1024000.
How can I increase the maximum?
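For reference, here is how I am checking the limits involved. The fs.nr_open probe is an assumption on my part: as far as I know that sysctl only exists on 2.6.25 and later kernels, so it is probably absent on the 2.6.18-based RHEL 5.3 kernel.

Code:
# Soft and hard per-process limits in the current shell
ulimit -Sn
ulimit -Hn

# System-wide cap on open file handles (already huge here)
cat /proc/sys/fs/file-max

# Per-process ceiling, tunable on 2.6.25+ kernels only
cat /proc/sys/fs/nr_open 2>/dev/null || echo "fs.nr_open not available"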

Sean
# 2  
Old 07-27-2009
I am thinking that would need a kernel recompile...

Why on earth would you need such large numbers?
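If memory serves, the ceiling on 2.6.18-era kernels is the compile-time constant NR_OPEN in include/linux/fs.h, which would explain a hard wall just above one million. Worth verifying against your actual source tree; a hedged sketch:

Code:
# Run inside the kernel source tree (values are from my recollection
# of 2.6.18-era sources -- verify before relying on them)
grep -n 'NR_OPEN' include/linux/fs.h
#   #define INR_OPEN 1024        (default soft limit)
#   #define NR_OPEN (1024*1024)  (absolute per-process cap: 1048576)
# setrlimit() rejects any RLIMIT_NOFILE hard limit above NR_OPEN,
# so raising it means editing the define and rebuilding the kernel.
# Kernels 2.6.25+ replaced the constant with the fs.nr_open sysctl.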
# 3  
Old 07-27-2009
Hi Sean,

Code:
ulimit -n unlimited

should do it.

/ilan
# 4  
Old 07-27-2009
Quote:
Originally Posted by ilan
Hi Sean,

Code:
ulimit -n unlimited

should do it.

/ilan
This may work, but I feel it is asking for problems.

The ulimit is there for a reason; you may find your system becomes unresponsive.
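If you do raise it, one safer pattern is to scope the large value to the single process that needs it rather than setting it for every user in limits.conf (an over-large nofile entry there can even break logins when pam_limits fails to apply it). A minimal sketch, with a hypothetical application name:

Code:
# Raise the soft limit up to the hard limit in this shell only,
# then replace the shell with the application
ulimit -Sn "$(ulimit -Hn)"
exec ./my_big_server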
# 5  
Old 07-27-2009
I think the OP is aware of that; however, what kind of application will use more than a million file descriptors? If those are the requirements of software that has to run on a standalone system, you had better think about clustering, unless the application's overhead/footprint is very small.
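Back-of-the-envelope, assuming something like 1 KB of kernel memory per open file (struct file, fd-table slot, dentry/inode cache entries) -- a rough guess, not a measured figure -- the footprint alone is considerable:

Code:
# 4096000 descriptors at ~1 KB each, expressed in GB
echo $(( 4096000 / 1024 / 1024 ))   # prints 3 (integer division); ~3.9 GB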
# 6  
Old 07-27-2009
Thank you for the replies.

Actually, I tried ulimit -n unlimited before and got the same error as with ulimit -n <value> when the value is greater than 1024000.

The application does indeed run in a cluster setting. In the extreme scaling situation it will need to open a huge number of files concurrently. Will it have scaling issues? That is part of my intention: to find out.

I have a feeling that it might need a kernel recompilation. Any hint as to which config file or .h file needs to be changed?


Sean
# 7  
Old 07-30-2009
Could you explain the problem a little more? Maybe some sort of client/server system would work better than a cluster: more, smaller file tables instead of one gigantic global file table. Or could a scheme with fewer files containing more data be used? Are they of fixed size? What kind of data? Altering the kernel may have unexpected consequences; the limit might be that "low" for a reason.