PAGESIZE(1) General Commands Manual PAGESIZE(1)

NAME
pagesize - Print supported system page sizes
SYNOPSIS
pagesize [options]
DESCRIPTION
The pagesize utility prints the supported sizes of a page of memory, in bytes, as returned by getpagesizes(3). This is useful when creating portable shell scripts, configuring huge page pools with hugeadm, or launching applications that use huge pages with hugectl.
If no options are specified, pagesize prints the system base page size as returned by getpagesize(). The following options control which other page sizes are displayed.
--huge-only, -H
Display all huge pages supported by the system as returned by gethugepagesizes().
--all, -a
Display all page sizes supported by the system.
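As an illustration of the portable-script use mentioned above, the options can be combined in a small script. The sizes printed are system-dependent, and the getconf(1) fallback is a sketch for systems where the libhugetlbfs pagesize tool is not installed; it is not part of this manual.

```shell
#!/bin/sh
# Sketch: query page sizes, falling back to getconf(1) where the
# libhugetlbfs pagesize utility is not available.
if command -v pagesize >/dev/null 2>&1; then
    base=$(pagesize)          # base page size only
    pagesize --all            # base and huge page sizes, one per line
else
    base=$(getconf PAGESIZE)  # POSIX fallback for the base page size
fi
echo "base page size: $base bytes"
```

The base page size is a power of two on common systems, so it is safe to use as a block-size argument to tools such as dd(1).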
SEE ALSO
oprofile(1), getpagesize(2), getpagesizes(3), gethugepagesizes(3), hugectl(7), hugeadm(7), libhugetlbfs(7)

AUTHORS
libhugetlbfs was written by various people on the libhugetlbfs-devel mailing list.
October 10, 2008 PAGESIZE(1)
Check Out this Related Man Page
pagesize(1) User Commands pagesize(1)

NAME
pagesize - display the size or sizes of a page of memory
SYNOPSIS
/usr/bin/pagesize [-a]
DESCRIPTION
The pagesize utility prints the default size of a page of memory in bytes, as returned by getpagesize(3C). This program is useful in constructing portable shell scripts.
OPTIONS
The following option is supported:
-a Prints out all possible hardware address translation sizes supported by the system.
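A common use of the base page size in a portable script is converting a page count into bytes. The sketch below assumes the PHYS_PAGES / _PHYS_PAGES getconf(1) variables, which are widespread extensions rather than guaranteed by POSIX; availability varies by system.

```shell
#!/bin/sh
# Sketch: derive approximate physical memory from the base page size.
# _PHYS_PAGES and PHYS_PAGES are common getconf extensions, not strict POSIX.
pagesz=$(getconf PAGESIZE)
pages=$(getconf _PHYS_PAGES 2>/dev/null || getconf PHYS_PAGES)
echo "page size:       $pagesz bytes"
echo "physical memory: $((pagesz * pages / 1048576)) MiB"
```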
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Availability |SUNWcsu |
+-----------------------------+-----------------------------+
SEE ALSO
ppgsz(1), getpagesize(3C), getpagesizes(3C), attributes(5)

SunOS 5.10 4 May 2001 pagesize(1)