folks,
In my working directory there are multiple large files, each of which contains only one line. The line is too long for "grep", so any help?
For example, if I want to find whether these files contain a string like "93849", what command should I use?
Also, there is an oder_id number... (1 Reply)
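One way to attack this, sketched under assumptions: the file name below is hypothetical, and the single long line is taken to be comma-delimited. Breaking the line apart on the delimiter keeps every piece short enough for any grep to handle.

```shell
# "bigfile.dat" and the comma delimiter are assumptions about the data.
# tr rewrites each comma as a newline, so grep only ever sees short
# lines; -c counts how many pieces contain the string.
tr ',' '\n' < bigfile.dat | grep -c '93849'
```

A zero count means the string is absent; any other delimiter (tab, pipe) works the same way, as long as the search string itself cannot contain it.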
Hello All,
I need some assistance extracting a piece of information from a huge file.
The file looks like this one:
database information
ccccccccccccccccc
ccccccccccccccccc
ccccccccccccccccc
ccccccccccccccccc
os information
cccccccccccccccccc
cccccccccccccccccc... (2 Replies)
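Assuming the section headers are literally "database information" and "os information" as shown (the file name here is a placeholder), a flag-based awk one-liner can pull out just the first section:

```shell
# f turns on at the "database information" header and off again at the
# "os information" header, so only the database section is printed.
awk '/^database information/ {f=1} /^os information/ {f=0} f' hugefile.txt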
I have a file with extracted data, and need to insert a header with a constant string, say: H|PayerDataExtract
If I use sed, I have to redirect the output to a separate file, like
sed 'sed commands' ExtractDataFile.dat > ExtractDataFileWithHeader.dat
The same is true for awk,
and... (10 Replies)
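If GNU sed is available, its `-i` option edits the file in place, so no separate output file is needed at all:

```shell
# GNU sed: insert the header before line 1, in place.
# (BSD/macOS sed needs -i '' instead of bare -i.)
sed -i '1i H|PayerDataExtract' ExtractDataFile.dat
```

Under the hood `-i` still writes a temporary file and renames it, but sed manages that for you.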
Hi, All
I have a huge file which is 450G. Its tab-delimited format is as below
x1 A 50020 1
x1 B 50021 8
x1 C 50022 9
x1 A 50023 10
x2 D 50024 5
x2 C 50025 7
x2 F 50026 8
x2 N 50027 1
:
:
Now, I want to extract a subset from this file. In this subset, column 1 is x10, column 2 is... (3 Replies)
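Since the post is cut off, the condition on column 2 below is a placeholder ("A"); the shape of the filter is the point. awk streams the 450G file once and never loads it into memory:

```shell
# Keep only rows where column 1 is "x10" and column 2 is "A" (the second
# test is a stand-in for the truncated condition); -F'\t' sets the tab
# delimiter, and file names are hypothetical.
awk -F'\t' '$1 == "x10" && $2 == "A"' huge.tsv > subset.tsv
```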
I'm new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3GB each). The delimiter is a semicolon ";".
Here is the sample of 5 lines in the file:
Name1;phone1;address1;city1;state1;zipcode1
Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
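Assuming a good record has exactly 6 semicolon-separated fields as in the first sample line (file names below are placeholders), awk's NF field counter can split good from bad in one pass:

```shell
# Rows with exactly 6 fields go to good.dat; anything else (like the
# 7-field line carrying a trailing comment) goes to bad.dat.
awk -F';' 'NF == 6 {print > "good.dat"; next} {print > "bad.dat"}' input.dat
```

This streams the 1.3GB file once with constant memory use.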
Hi, great minds. I have some files, in fact header files from a CTD profiler. I tried a lot of C programming but could not get the output I expected, because my programming skills are very poor. Finally, I joined the unix forum in the hope that I may get what I want from you people.
Here I have attached... (17 Replies)
I have a huge list of files (about 300,000) which have a pattern like this.
.I 1
.U
87049087
.S
Am J Emerg
.M
Allied Health Personnel/*; Electric Countershock/*;
.T
Refibrillation managed by EMT-Ds:
.P
ARTICLE.
.W
Some patients converted from ventricular fibrillation to organized... (1 Reply)
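The actual request is truncated, so as one guess at a starting point: each tag (".U", ".S", ...) sits on its own line with its value on the next, so awk can pull a given field from every record. Here the document ID after ".U" is extracted; the `*.txt` glob is a placeholder for the 300,000 files.

```shell
# When a line is exactly ".U", read and print the following line
# (the document ID). Works across many files passed on the command line.
awk '/^\.U$/ {getline; print}' *.txt
```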
Optimization of a shell/awk script to aggregate (sum) all the columns of a huge data file.
File delimiter: "|"
Need the sum of every column, keyed by column number: an aggregation (summation) for each column.
The file has no header.
Like below -
Column 1 : Total
Column 2 : Total
...
...... (2 Replies)
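A single-pass awk sketch for this, assuming every row has the same number of "|"-separated fields (so NF in the END block is valid for all rows); "data.dat" is a placeholder name:

```shell
# Accumulate a running total per column index, then print one
# "Column N : total" line per column at end of input.
awk -F'|' '{ for (i = 1; i <= NF; i++) sum[i] += $i }
           END { for (i = 1; i <= NF; i++) printf "Column %d : %s\n", i, sum[i] }' data.dat
```

One pass over the file is usually the floor for this job; the main optimization beyond that is avoiding any per-line shell loops.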
I have 2 large files (.dat) of around 70G each, with 12 columns, and the data is not sorted in either file. I need your inputs on the best optimized method/command to achieve this and to redirect the non-matching lines to a third file (diff.dat)
File 1 - 15 columns
File 2 - 15 columns
Data is... (9 Replies)
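One approach, under the assumption that "matching" means whole-line equality (file names below are placeholders): sort both files first, since sort(1) spills to temporary disk space and so copes with 70G inputs, then let comm keep only the lines unique to one side.

```shell
# Sort each file to disk, then comm -3 suppresses lines common to both,
# leaving only non-matching lines in diff.dat. Lines that came from
# file2.dat are tab-indented by comm.
sort file1.dat -o file1.sorted
sort file2.dat -o file2.sorted
comm -3 file1.sorted file2.sorted > diff.dat
```

If the match should be on a key column rather than the whole line, `join -v1 -v2` on the key-sorted files is the analogous tool.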
Discussion started by: kartikirans
9 Replies
LEARN ABOUT CENTOS
pagesize
PAGESIZE(1)                 General Commands Manual                PAGESIZE(1)

NAME
pagesize - Print supported system page sizes
SYNOPSIS
pagesize [options]
DESCRIPTION
The pagesize utility prints the page sizes of a page of memory in bytes, as returned by getpagesizes(3). This is useful when creating portable shell scripts, configuring huge page pools with hugeadm or launching applications to use huge pages with hugectl.
If no parameters are specified, pagesize prints the system base page size as returned by getpagesize(). The following parameters affect
what other pagesizes are displayed.
--huge-only, -H
Display all huge pages supported by the system as returned by gethugepagesizes().
--all, -a
Display all page sizes supported by the system.
SEE ALSO
oprofile(1), getpagesize(2), getpagesizes(3), gethugepagesizes(3), hugectl(7), hugeadm(7), libhugetlbfs(7)

AUTHORS
libhugetlbfs was written by various people on the libhugetlbfs-devel mailing list.
October 10, 2008 PAGESIZE(1)
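On systems where the libhugetlbfs tools are not installed, the base page size that pagesize reports via getpagesize() is also available through the standard getconf utility:

```shell
# Print the system base page size in bytes (commonly 4096 on x86-64).
getconf PAGESIZE
```

getconf does not cover huge page sizes, though; for those, pagesize --huge-only (or /sys/kernel/mm/hugepages on Linux) is still needed.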