Actually, I tried ulimit -n unlimited before and got the same error as with
ulimit -n <value> when the value is greater than 1024000.
The application is indeed running in a cluster setting. In an extreme scaling situation it will need to open a huge number of files concurrently. Will it hit a scaling issue? That is part of my intention here: to find out.
I have a feeling that it might need kernel recompilation. Any hint which conf file or .h file needs to be changed?
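On Linux this usually does not require a kernel recompile: a process's nofile limit is capped by the kernel's fs.nr_open sysctl (commonly 1048576 by default, and possibly 1024000 on your system), with fs.file-max as the system-wide ceiling. A hedged sketch for inspecting and raising them (the 2097152 value is purely illustrative):

```shell
# Per-process ceiling on "ulimit -n"; values above this are rejected.
cat /proc/sys/fs/nr_open
# System-wide ceiling on open file handles.
cat /proc/sys/fs/file-max

# Raising them requires root -- shown commented out here:
# sysctl -w fs.nr_open=2097152
# sysctl -w fs.file-max=2097152
# Add the same keys to /etc/sysctl.conf to persist across reboots.
```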
Sean
In AIX you can set the hard limit to -1 for an unlimited ulimit in /etc/security/limits. In Linux I think the file is /etc/security/limits.conf. Each user can have their own hard limits; there should be examples in the file.
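A minimal sketch of what such entries might look like in /etc/security/limits.conf on Linux (the user name "sean", the group "@dba", and the values are all illustrative):

```
# <domain>  <type>  <item>   <value>
sean        soft    nofile   8192
sean        hard    nofile   65536
@dba        hard    nofile   65536    # limits can also be set per group
```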
A coredump is being created by one of our applications on Solaris server and occupying entire space on the mount, thereby bringing down the application.
While we try to identify the root cause, I tried to limit the size of the core dump.
Executed below command in shell and also updated... (2 Replies)
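As a stopgap while the root cause is investigated, the core size can be capped from the shell before the application starts; a minimal sketch (on Solaris, coreadm(1M) can additionally redirect or disable cores system-wide):

```shell
# Cap core dumps for this shell and every process it starts.
ulimit -c 0        # 0 blocks: no core file is written at all

# Verify the new soft limit:
ulimit -c          # prints 0
```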
Hi,
Our application team is asking me to set ulimit parameter in my AIX 6.1 TL8 box.
Some of them I have set already:
address space limit (kbytes) (-M) unlimited
locks (-L) unlimited
locked address space (kbytes) (-l) 64
nice (-e) ... (3 Replies)
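For AIX specifically, per-user limits live in stanzas in /etc/security/limits; a hedged sketch (the user name and values are illustrative, and fsize/core are counted in 512-byte blocks):

```
sean:
        fsize = -1
        core = 2097151
        nofiles = 8192
        nofiles_hard = -1
```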
Trying to figure out the best method of security for oracle user accounts. In Solaris 10 they are set as regular users but have nologin set, forcing the devs to log in as themselves and then su to the oracle users.
In Solaris11 we have the option of making it a role because RBAC is enabled but... (1 Reply)
The root user runs the following
ulimit -a | grep open
and gets a result of
open files (-n) 8162
A user runs the same command and gets a result of
open files (-n) 2500
How can you set the ulimit of the user to... (2 Replies)
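The difference between the two outputs is the usual soft/hard split: root's value is the hard ceiling, while the user sees a lower soft limit that they can raise themselves up to that ceiling. A minimal sketch (2500 is the value from the thread; for a permanent change, matching nofile entries belong in /etc/security/limits.conf on Linux):

```shell
ulimit -Hn          # hard limit: the ceiling, changeable only by root
ulimit -Sn          # soft limit: what processes actually get
ulimit -Sn 2500     # any user may move the soft limit up to the hard limit
```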
Hello, could you help me please?
I write in command line: "ulimit 500"
-> I've set the maximum file size, in 512-byte blocks, that I can write to one file.
But when I then use ulimit.3c in my program: "ulimit(UL_GETFSIZE);"
the result turns out 1000. Why is it so? They always differ so that one is... (2 Replies)
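One likely explanation, assuming the shell is bash: bash reports ulimit -f in 1024-byte units, while the ulimit(UL_GETFSIZE) library call returns the limit in 512-byte blocks, so 500 in the shell is exactly 1000 from the C program (500 x 1024 = 1000 x 512 bytes). A small demonstration; read back in the same shell, the units cancel out and the number round-trips:

```shell
# Set RLIMIT_FSIZE in a subshell and read it back; the same unit is used
# both ways, so the value comes back unchanged whatever the block size.
( ulimit -f 500; ulimit -f )     # prints 500
```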
All,
Our SA is considering setting the max open files from 2048 to 30K. This sounds like a drastic change. Does anybody have an idea of the negative impacts of increasing the open files too high? Would like to know if this change could negatively impact our system. What test should we run to... (2 Replies)
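Before committing to 30K it may help to measure how many handles the system actually uses; on Linux a quick check (the three fields are allocated handles, free handles, and the system-wide maximum). One concrete risk worth testing for: applications that still use select(2) typically misbehave on descriptors above FD_SETSIZE (usually 1024), independent of the ulimit.

```shell
# allocated handles, free handles, system-wide maximum
cat /proc/sys/fs/file-nr
```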
I changed the standard ulimit some time back. But when I change it back, the setting does not get updated.
How do I make the change permanent?
Waitstejo (7 Replies)
How do you make the ulimit values permanent for a user?
by default, the root login has the following ulimits:
# ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 1024
memory(kbytes)... (2 Replies)
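One common way to make per-user values stick is to set them from the login profile; a hedged sketch that writes to a stand-in file (substitute ~/.profile, and note that soft values cannot exceed the hard limits, which on Linux come from /etc/security/limits.conf):

```shell
profile=./demo_profile     # stand-in for ~/.profile in this sketch
cat >> "$profile" <<'EOF'
ulimit -Sn 4096   # open file descriptors (soft limit)
ulimit -Sc 0      # disable core dumps
EOF
```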
IDS2NGRAM(1) User Contributed Perl Documentation IDS2NGRAM(1)
NAME
ids2ngram - generate n-gram data file from ids file
SYNOPSIS
ids2ngram [option]... ids_file...
DESCRIPTION
ids2ngram generates an idngram file, which is a sorted [id1,..,idN,freq] array, from binary id stream files. The id stream files are
always generated by mmseg or slmseg. Basically, it finds all occurrences of n-word tuples (i.e. tuples of the form (id1,..,idN)), sorts
these tuples in the lexicographic order of the ids that make up each tuple, and then writes them to the specified output file.
INPUT
The input file is presented as a binary id stream, which looks like:
[id0,...,idX]
OPTIONS
All the following options are mandatory.
-n,--NMax N
Generate the N-gram result. ids2ngram only supports unigram, bigram, and trigram, so any number outside the range 1..3 is
invalid.
-s,--swap swap-file
Specify the temporary intermediate file.
-o, --out output-file
Specify the resulting idngram file, i.e. the array of [id1, ..., idN, freq].
-p, --para N
Specify the maximum number of n-gram items per paragraph. ids2ngram writes to the temporary file on a per-paragraph basis; every
time it writes a paragraph out, it frees the memory allocated for that paragraph. A higher N is suggested when your system's memory
permits, since it speeds up processing by reducing I/O.
EXAMPLE
The following example uses three input id stream files, idsfile1, idsfile2, and idsfile3, to generate the idngram file all.id3gram. Each
paragraph (internal map or hash size) holds 1024000 items, and a swap file is used for temporary results. All temporary per-paragraph
results are eventually merged to produce the final result.
ids2ngram -n 3 -s /tmp/swap -o all.id3gram -p 1024000 idsfile1 idsfile2 idsfile3
AUTHOR
Originally written by Phill.Zhang <phill.zhang@sun.com>. Currently maintained by Kov.Chai <tchaikov@gmail.com>.
SEE ALSO
mmseg(1), slmseg(1), slmbuild(1).
perl v5.14.2 2012-06-09 IDS2NGRAM(1)