folks,
In my working directory there are multiple large files, each containing only a single line. The line is too long for "grep" to handle, so any help?
For example, if I want to find out whether these files contain a string like "93849", what command should I use?
Also, there is an oder_id number... (1 Reply)
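A minimal workaround sketch: split each long line into shorter records before grepping. The comma delimiter and the *.dat glob are assumptions about the files' layout:

    for f in *.dat; do
        # tr breaks the single long line at commas so grep sees short lines;
        # the search string contains no comma, so no match can be split apart
        if tr ',' '\n' < "$f" | grep -q '93849'; then
            printf '%s\n' "$f"    # report files that contain the string
        fi
    done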
Hi,
I have a huge file of bibliographic records in some standard format. I need a script to do some repeatable tasks as follows:
1. Create folders for the strings starting with "item_*" in the input file (see the sketch below)
2. Create a file "contents" in each folder having "license.txt(tab... (5 Replies)
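A hedged sketch of both steps, assuming GNU grep's -o option, that the IDs match item_ followed by alphanumerics, and that records.txt is the input file (all assumptions):

    grep -o 'item_[A-Za-z0-9_]*' records.txt | sort -u |
    while read -r id; do
        mkdir -p "$id"                            # one folder per item string
        printf 'license.txt\t' > "$id/contents"   # rest of the line was cut off in the post
    done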
Hi Experts,
I have a question.
In the following output of `ps -elf | grep DataFlow` I get:
242001 A mqsiadm 2076676 1691742 0 60 20 26ad4f400 130164 * May 09 - 3:02 DataFlowEngine EAIDVBR1_BROKER 5e453de8-2001-0000-0080-fd142b9ce8cb VIPS_INQ1 0
242001 A mqsiadm... (5 Replies)
Hi All,
HP-UX dev4 B.11.11 U 9000/800 3251073457
I need to copy a huge amount of data from a Windows text file into the vi editor. When I tried copying it, the formatting was not preserved and the text appeared scattered throughout vi, something like shown below. Please let me know how I can correct this.
... (18 Replies)
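A sketch of one likely fix, assuming the scattering comes from DOS carriage returns plus vi's autoindent (windows.txt is a hypothetical filename):

    tr -d '\r' < windows.txt > unix.txt   # strip the DOS carriage returns first
    vi unix.txt
    # inside vi, before pasting anything further:
    #   :set noautoindent wrapmargin=0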
Hi All,
My disk usage shows 100%. When I check "df -kh" it shows my root partition is full, but when I run "du -skh /" it shows only 7 GB used.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G 28G 260MB 100% /
How can I identify what is using the other 20 GB of disk space?
OS: CentOS... (10 Replies)
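Two common causes of a df/du gap are deleted files that a process still holds open, and files hidden underneath a mount point. A minimal sketch of both checks (the /mnt/rootonly path is an arbitrary choice):

    lsof +L1                       # open files with link count 0: deleted but still consuming space
    mkdir /mnt/rootonly
    mount --bind / /mnt/rootonly   # re-expose / with nothing mounted over its subdirectories
    du -skh /mnt/rootonly          # counts files hidden under mount points like /var or /tmp
    umount /mnt/rootonly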
Hi Friends !!
I am facing a hash total issue while working over a set of files of huge volume.
Command used:
tail -n +2 <File_Name> | nawk -F'|' -v qq='"' '{gsub(qq, ""); sa += ($156 < 0) ? -$156 : $156} END {printf "%.2f\n", sa}'
The file is pipe delimited and column 156 is the one being hash totalled.... (14 Replies)
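On files this large the total can drift, because awk accumulates the sum in floating point. A sketch of one mitigation, summing scaled integers (cents) instead, under the same pipe-delimited, column-156 assumptions:

    tail -n +2 <File_Name> | nawk -F'|' '{
        gsub(/"/, "", $156)                # drop embedded quotes from the field
        v = ($156 < 0) ? -$156 : $156      # absolute value
        sa += int(v * 100 + 0.5)           # accumulate exact integer cents
    } END { printf "%.2f\n", sa / 100 }'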
Dear all,
I found that when working with thousands of lines of data, awk does not work correctly: it keeps only a few hundred lines (the others are deleted) and operates only on the remaining data.
I used this command: awk '$1==1{$1="Si"}{print>FILENAME}' coba.xyz to change the value of the first column whose value is 1... (4 Replies)
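The likely culprit is print > FILENAME, which truncates coba.xyz for writing while awk is still reading it. A minimal sketch of the safe pattern, writing to a temporary file and renaming afterwards:

    awk '$1 == 1 { $1 = "Si" } { print }' coba.xyz > coba.tmp && mv coba.tmp coba.xyz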
Dear Experts,
I would like to know the best method for copying around 3 million files (spread across a hundred folders, each file around 1 KB) between 2 servers.
I already tried using rsync and the tar command, but these commands take too long.
Please advise.
Thanks
Edy (11 Replies)
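One approach worth timing is streaming a single tar archive through ssh, which avoids per-file connection overhead; the paths and user@dest below are placeholders, and -C assumes GNU tar:

    tar -cf - -C /src/dir . | ssh user@dest 'tar -xf - -C /dest/dir'

Adding compression (-z) can help on a slow WAN link but often slows things down on a fast LAN.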
Hi Guys,
I have a big XML file in the format below:
Input:
<pokl>MKL=1,FN=1,GBNo=B10C</pokl>
<d>192</d>
<d>315</d>
<d>35</d>
<d>0,7,8</d>
<pokl>MKL=1,dFN=1,GBNo=B11C</pokl>
<d>162</d>
<d>315</d>
<d>35</d>
<d>0,5,6</d>
<pokl>MKL=1,dFN=1,GBNo=B12C</pokl>
<d>188</d> (4 Replies)
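Since the desired output was cut off in the post, here is only a generic sketch for pulling, say, the GBNo value out of each <pokl> line (input.xml is a hypothetical filename):

    sed -n 's/.*GBNo=\([^<]*\)<\/pokl>.*/\1/p' input.xml   # prints B10C, B11C, B12C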
LEARN ABOUT CENTOS
hugetlbfs_find_path
HUGETLBFS_FIND_PATH(3)          Library Functions Manual          HUGETLBFS_FIND_PATH(3)

NAME
hugetlbfs_find_path, hugetlbfs_find_path_for_size - Locate an appropriate hugetlbfs mount point
SYNOPSIS
#include <hugetlbfs.h>
const char *hugetlbfs_find_path(void);
const char *hugetlbfs_find_path_for_size(long page_size);
DESCRIPTION
These functions return a pathname for a mounted hugetlbfs filesystem for the appropriate huge page size. For hugetlbfs_find_path, the
default huge page size is used (see gethugepagesize(3)). For hugetlbfs_find_path_for_size, a valid huge page size must be specified (see
gethugepagesizes(3)).
RETURN VALUE
On success, a non-NULL value is returned. On failure, NULL is returned.
SEE ALSO
libhugetlbfs(7), gethugepagesize(3), gethugepagesizes(3)
AUTHORS
libhugetlbfs was written by various people on the libhugetlbfs-devel mailing list.
March 7, 2012 HUGETLBFS_FIND_PATH(3)
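A rough shell-level equivalent of what these functions look up, for a quick check (a sketch, not part of the library API):

    grep hugetlbfs /proc/mounts   # each line is a mounted hugetlbfs; a pagesize= option, where given, shows that mount's huge page size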