Ok, I'm stumped and can't seem to find relevant info.
(I'm not even sure; I might have asked something similar before.)
I'm trying to use shell scripting/UNIX commands to extract URLs from a fairly large web page, with a view to ultimately wrapping this in PHP with exec() and including the... (2 Replies)
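A minimal sketch of the extraction step, assuming the page has already been saved locally (page.html is a placeholder filename):

```shell
# Pull the href values out of a saved page, one URL per line.
# grep -o prints only the matching part; sed strips the href="..." wrapper.
grep -oE 'href="[^"]+"' page.html | sed 's/^href="//; s/"$//'
```

The same pipeline can sit inside a PHP exec() call once it works on the command line; `-o` is supported by both GNU and BSD grep.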
I have a very big file, 5 GB in size, with about 50 million records in it. I have to delete records based on record numbers that I know from outside, without opening the file. The record numbers are quite random, like 5000678, 7890005, etc.
Can somebody let me know how I can... (5 Replies)
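One way to do this in a single streaming pass, assuming the record numbers to delete are collected in a file (nums.txt and bigfile are made-up names):

```shell
# Pass 1 (NR==FNR): read the record numbers to skip into an array.
# Pass 2: copy every line of the big file whose line number is not listed.
# The 5 GB file is only streamed, never loaded into memory or an editor.
awk 'NR==FNR { skip[$1]; next } !(FNR in skip)' nums.txt bigfile > bigfile.new
```

This visits each record once, so it scales to 50 million records; the output is written to a new file rather than edited in place.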
Hi,
I have one huge record and know that each logical record in the file is 550 bytes long. How do I parse the individual records out of the single huge record?
Thanks, (4 Replies)
Hi everyone.
I am a newbie to Linux. I have a problem which I couldn't solve alone. I have a text file with records separated by empty lines like this:
ID: 20
Name: X
Age: 19
ID: 21
Name: Z
ID: 22
Email: xxx@yahoo.com
Name: Y
Age: 19
I want to grep records that... (4 Replies)
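In awk's paragraph mode (RS set to the empty string) each blank-line-separated block becomes one record, so a whole record can be matched and printed at once. A sketch using the sample data above, with `Age: 19` as the pattern (records.txt is an assumed filename):

```shell
# RS='' is paragraph mode: records are separated by blank lines.
# Print every whole record that contains "Age: 19", keeping a blank
# separator line after each match.
awk -v RS='' '/Age: 19/ { print $0 "\n" }' records.txt
```

On the sample data this would print the ID 20 and ID 22 records but skip ID 21, which has no Age field.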
I have 2 files; one file (say, details.txt) contains the details of employees and another file (say, emp.txt) has some selected employee names. I am extracting employee details from details.txt by using emp.txt and the corresponding code is:
while read line
do
emp_name=`echo $line`
grep -e... (7 Replies)
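The per-line loop spawns one grep process per employee. grep can read all the names in a single pass with -f; a sketch using the file names from the post:

```shell
# -F: treat each name as a fixed string (no regex surprises in names).
# -f emp.txt: read all patterns from the file in one go.
# One grep pass over details.txt replaces the whole while-read loop.
grep -F -f emp.txt details.txt
```

One caveat: a short name can match inside a longer line; adding -w (match whole words) tightens this if names are whitespace-delimited.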
Hi,
I have a file with tab-separated values. I need to identify duplicate entries based on columns 1 & 6 only.
For example:
I tried using uniq, but the output has only one duplicate entry instead of both. I need both of the above entries.
uniq -f5... (2 Replies)
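uniq only compares adjacent lines and keeps a single copy per group, so it can't report both members of a duplicate pair. A two-pass awk can print every line whose (column 1, column 6) key occurs more than once; a sketch, assuming the tab-separated data is in data.tsv (a made-up name):

```shell
# Pass 1 counts each col1+col6 key; pass 2 prints every line whose key
# was seen more than once, so BOTH copies of a duplicate survive.
# The comma in cnt[$1,$6] joins the two fields with awk's SUBSEP.
awk -F'\t' 'NR==FNR { cnt[$1,$6]++; next } cnt[$1,$6] > 1' data.tsv data.tsv
```

Unlike uniq, this needs no prior sort, and the output preserves the original line order.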
Hi All,
I am new to this forum and this is my first post.
My requirement is to optimize the time taken to grep a file of 40000 lines.
There are two files: FILEA (40000 lines) and FILEB (40000 lines).
The requirement is this: both files will be in the format below... (11 Replies)
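Instead of 40000 separate grep invocations, a single awk pass can load FILEA's lines into a hash and test each line of FILEB against it. A sketch under the assumption that whole lines are being compared (adjust the key if only certain columns should match):

```shell
# NR==FNR is true only while reading the first file: store its lines as
# array keys. For the second file, print every line that was also in the
# first. One linear pass per file instead of 40000 grep runs.
awk 'NR==FNR { seen[$0]; next } $0 in seen' FILEA FILEB
```

The hash lookup is O(1) per line, so the total cost is proportional to 40000 + 40000 lines rather than 40000 × 40000 comparisons.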
Hi All,
I have a requirement to extract paragraphs from an XML file based on another list file that holds specific parameters.
I will extract these paragraphs from the XML and import them into a scheduler tool.
file2
<FOLDER DATACENTER="ControlMserver" VERSION="800" PLATFORM="UNIX" FOLDER_NAME="SH_AP_INT_B01"... (3 Replies)
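XML is best handled with a real parser, but for line-oriented Control-M exports like the one above, a state-machine awk sketch can work, assuming each FOLDER_NAME attribute sits on the opening <FOLDER ...> line and the wanted names are in list.txt (an assumed filename):

```shell
# Pass 1: load the wanted folder names as array keys.
# Pass 2: turn printing on when a <FOLDER ...> line carries a wanted
# FOLDER_NAME, and off again after the matching </FOLDER>.
awk 'NR==FNR { want[$1]; next }
     /<FOLDER / { keep = 0
                  for (n in want)
                      if (index($0, "FOLDER_NAME=\"" n "\"")) keep = 1 }
     keep
     /<\/FOLDER>/ { keep = 0 }' list.txt file2.xml
```

This breaks if attributes wrap across lines or folders nest; for anything less regular, xmlstarlet or an XPath-capable tool is the safer route.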
awk 'NR==FNR{arr[$0];next} $0 in arr' /tmp/Data_mismatch.sh /prd/HK/ACCTCARD_20160115.txt
edit by bakunin: seems that one CODE-tag got lost somewhere. i corrected that, but please check your posts more carefully. Thank you. (5 Replies)
Discussion started by: suresh_target
gethugepagesizes
GETHUGEPAGESIZES(3)          Library Functions Manual          GETHUGEPAGESIZES(3)
NAME
gethugepagesizes - Get the system supported huge page sizes
SYNOPSIS
#include <hugetlbfs.h>
int gethugepagesizes(long pagesizes[], int n_elem);
DESCRIPTION
The gethugepagesizes() function returns either the number of system supported huge page sizes or the sizes themselves. If pagesizes is NULL and n_elem is 0, then the number of huge page sizes the system supports is returned. Otherwise, pagesizes is filled with at most n_elem page sizes.
RETURN VALUE
On success, either the number of huge page sizes supported by the system or the number of huge page sizes stored in pagesizes is returned.
On failure, -1 is returned and errno is set appropriately.
ERRORS
EINVAL n_elem is less than zero or n_elem is greater than zero and pagesizes is NULL.
Also see opendir(3) for other possible values for errno. This error occurs when the sysfs directory exists but cannot be opened.
NOTES
This call will return all huge page sizes as reported by the kernel. Not all of these sizes may be usable by the programmer since mount
points may not be available for all sizes. To test whether a size will be usable by libhugetlbfs, hugetlbfs_find_path_for_size() can be
called on a specific size to see if a mount point is configured.
SEE ALSO
oprofile(1), opendir(3), hugetlbfs_find_path_for_size(3), libhugetlbfs(7)
AUTHORS
libhugetlbfs was written by various people on the libhugetlbfs-devel mailing list.
October 10, 2008 GETHUGEPAGESIZES(3)