setting ulimit -n with a value more than 1024000
Post 302338148 by sysgate, UNIX for Advanced & Expert Users, Monday 27 July 2009, 06:50 AM
I think the OP is aware of that; however, what kind of application would use more than a million file descriptors? If those are the requirements of software that has to run on a standalone system, you had better think about a cluster, unless the application's overhead / footprint is very small.
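
For reference, a minimal sketch of how the open-files limits are usually inspected and raised, assuming a Linux host and a bash/ksh shell (the values are illustrative, not from this thread):

    # Per-process limits on open files in the current shell
    ulimit -Sn        # soft limit
    ulimit -Hn        # hard limit

    # System-wide ceiling on open file handles (Linux)
    cat /proc/sys/fs/file-max

    # Raise the system-wide ceiling at runtime (as root)
    sysctl -w fs.file-max=2097152

    # Raise this shell's soft limit up to the hard limit
    ulimit -n 1048576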
 

10 More Discussions You Might Find Interesting

1. Solaris

ulimit setting problem on Solaris

How do you make the ulimit values permanent for a user? By default, the root login has the following ulimits:

# ulimit -a
time(seconds)         unlimited
file(blocks)          unlimited
data(kbytes)          unlimited
stack(kbytes)         8192
coredump(blocks)      unlimited
nofiles(descriptors)  1024
memory(kbytes)        ...
Discussion started by: kiem
2 Replies
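
For the Solaris case above, one common way to make descriptor limits persistent is via /etc/system; a minimal sketch (the values are illustrative, check your release's tuning guide before applying):

    # /etc/system entries; they take effect after a reboot
    set rlim_fd_cur=8192      # default (soft) descriptor limit
    set rlim_fd_max=65536     # maximum (hard) descriptor limit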

2. Shell Programming and Scripting

Setting Ulimit

How do I set ulimit for a user?
Discussion started by: Krrishv
4 Replies
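
A common per-user approach is to set the limit in the user's shell startup file; a minimal sketch, assuming a POSIX-like shell and an illustrative value:

    # In the user's ~/.profile (or the shell's equivalent startup file):
    # raise the soft open-files limit for every login session
    ulimit -n 4096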

3. UNIX for Advanced & Expert Users

Setting Ulimit problem

I changed the standard ulimit some time back, but when I change it back, the setting does not get updated. How do I make the change permanent?
Discussion started by: Waitstejo
7 Replies

4. Solaris

ulimit

How do I check the ulimit set on my server? What is the command? Thanks in advance.
Discussion started by: expert
5 Replies

5. UNIX for Advanced & Expert Users

Help with Ulimit Setting

All, our SA is considering raising the max open files from 2048 to 30K. This sounds like a drastic change. Does anybody have an idea of the negative impacts of increasing the open files limit too high? We would like to know if this change could negatively impact our system. What test should we run to...
Discussion started by: wcrober
2 Replies
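
Before raising the limit that far, it can help to measure how many descriptors are actually in use; a minimal sketch, assuming a Linux host (the PID is hypothetical):

    # Allocated, free, and maximum file handles system-wide
    cat /proc/sys/fs/file-nr

    # Number of descriptors currently open by one process
    ls /proc/1234/fd | wc -l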

6. Solaris

ulimit

Hello, could you help me please? On the command line I write "ulimit 500", which sets the maximum number of 512-byte blocks I can write to one file. But when I then call ulimit(3C) in my program as "ulimit(UL_GETFSIZE);", the result turns out to be 1000. Why is that? They always differ so that one is...
Discussion started by: Zhenya_
2 Replies

7. Red Hat

setting ulimit for a user

The root user runs "ulimit -a | grep open" and gets a result of "open files (-n) 8162". A user runs the same command and gets a result of "open files (-n) 2500". How can you set the ulimit of the user to...
Discussion started by: jsanders
2 Replies
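
On Red Hat and other Linux systems, the usual place for a persistent per-user limit is /etc/security/limits.conf; a minimal sketch (the user name is hypothetical, the values are illustrative):

    # Applied by pam_limits at the user's next login
    appuser   soft   nofile   8162
    appuser   hard   nofile   8162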

8. Solaris

Is there a difference between setting a user as nologin and setting it as a role?

Trying to figure out the best method of security for oracle user accounts. In Solaris 10 they are set as regular users but have nologin set, forcing the devs to log in as themselves and then su to the oracle users. In Solaris 11 we have the option of making it a role because RBAC is enabled, but...
Discussion started by: os2mac
1 Replies
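
A minimal sketch of the two approaches on Solaris 11 (the account and login names are hypothetical; verify the exact syntax against your release's man pages):

    # Option 1: mark the account as non-login (no direct logins allowed)
    passwd -N oracle

    # Option 2: convert the account to an RBAC role and grant it to an
    # individual user, who then assumes it after logging in as themselves
    usermod -K type=role oracle
    usermod -R +oracle jsmith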

9. AIX

Ulimit setting

Hi, our application team is asking me to set ulimit parameters on my AIX 6.1 TL8 box. Some of them I have set already:

address space limit (kbytes)   (-M)  unlimited
locks                          (-L)  unlimited
locked address space (kbytes)  (-l)  64
nice                           (-e)  ...
Discussion started by: sunnybee
3 Replies
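
On AIX, per-user limits usually live in /etc/security/limits and can be changed with chuser; a minimal sketch (the user name is hypothetical, the values are illustrative, -1 means unlimited):

    # Raise the open-files and file-size limits for one user
    chuser nofiles=8192 fsize=-1 appuser

    # Equivalent stanza in /etc/security/limits (applies at the next login)
    #   appuser:
    #           nofiles = 8192
    #           fsize = -1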

10. Solaris

Help with setting coredumpsize using ulimit

A coredump is being created by one of our applications on a Solaris server and is occupying the entire space on the mount, thereby bringing down the application. While we try to identify the root cause, I tried to limit the size of the core dump. I executed the command below in the shell and also updated...
Discussion started by: kesani
2 Replies
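
A minimal sketch of containing core dumps while the root cause is investigated, assuming Solaris (the target directory is illustrative):

    # Disable core files for processes started from this shell
    ulimit -c 0

    # Solaris-wide alternative (as root): redirect global core dumps to a
    # dedicated filesystem instead of the application's working directory
    coreadm -g /var/cores/core.%f.%p -e global
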
IDS2NGRAM(1)                 User Contributed Perl Documentation

NAME
       ids2ngram - generate an n-gram data file from ids files

SYNOPSIS
       ids2ngram [option]... ids_file...

DESCRIPTION
       ids2ngram generates an idngram file, which is a sorted [id1,..,idN,freq]
       array, from binary id stream files. The id stream files are always
       generated by mmseg or slmseg. Basically, ids2ngram finds all occurrences
       of n-word tuples (i.e. tuples of (id1,..,idN)), sorts these tuples by
       the lexicographic order of the ids that make them up, and then writes
       them to the specified output file.

INPUT
       The input file is presented as a binary id stream, which looks like:
       [id0,...,idX]

OPTIONS
       All of the following options are mandatory.

       -n, --NMax N
              Generate N-gram results. ids2ngram only supports uni-gram,
              bi-gram, and tri-gram, so any number outside the range 1..3 is
              invalid.

       -s, --swap swap-file
              Specify the temporary intermediate file.

       -o, --out output-file
              Specify the resulting idngram file, i.e. the array of
              [id1, ..., idN, freq].

       -p, --para N
              Specify the maximum number of n-gram items per paragraph.
              ids2ngram writes to the temporary file on a per-paragraph basis;
              every time it writes a paragraph out, it frees the corresponding
              memory. When your system permits, a higher N is suggested, since
              fewer I/O operations speed up processing.

EXAMPLE
       The following example uses three input id stream files, idsfile1,
       idsfile2, and idsfile3, to generate the idngram file all.id3gram. Each
       paragraph (internal map or hash size) holds up to 1024000 items, and a
       swap file is used for intermediate results. All intermediate paragraph
       results are eventually merged to produce the final result.

       ids2ngram -n 3 -s /tmp/swap -o all.id3gram -p 1024000 idsfile1 idsfile2 idsfile3

AUTHOR
       Originally written by Phill.Zhang <phill.zhang@sun.com>.
       Currently maintained by Kov.Chai <tchaikov@gmail.com>.

SEE ALSO
       mmseg(1), slmseg(1), slmbuild(1).