Shell Programming and Scripting
Help on splitting this huge file
Post 302325232 by Prateek007 on Saturday 13th of June 2009 08:45:47 PM

Hi,

I have files coming into my system that are very large, from hundreds of MB up to several GB, and each file is a single line: there is no newline character anywhere in it.

I need only the last 700 bytes of each file. At the moment I am splitting with "split -b 700 filename", but that writes out every 700-byte chunk, which is a problem because the files are so large; I only want the last chunk. Can someone please help me get just the last 700 bytes of any huge file that has no newline character in it?

Thanks in advance.
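For reference, a minimal sketch of two standard approaches (tail -c and dd are POSIX; "bigfile" is a placeholder name):

Code:
  # Print only the last 700 bytes; on a seekable file, tail -c
  # seeks near the end rather than reading the whole file:
  tail -c 700 bigfile > last700.bin

  # Equivalent with dd: compute the size, then skip straight
  # to (size - 700) and copy the remainder:
  size=$(wc -c < bigfile)
  dd if=bigfile ibs=1 skip=$((size - 700)) of=last700.bin 2>/dev/null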
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Splitting huge XML files into fixed-size, well-formed parts

Hi, I need to split XML files larger than 2 GB into smaller chunks. As I don't want to end up with billions of files, I want the split files to have a configurable size, say 250 MB. Each file should be well formed, having an exact copy of the header (and footer as the closing of the... (0 Replies)
Discussion started by: Malapha
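A hedged sketch of one way to do this, assuming the header and footer are each a single line and GNU split is available (the file names are hypothetical):

Code:
  # Peel off the one-line header and footer, split the body into
  # chunks of at most 250 MB of whole lines, then re-wrap each chunk:
  header=$(head -n 1 big.xml)
  footer=$(tail -n 1 big.xml)
  sed '1d;$d' big.xml | split -C 250m - part_
  for f in part_*; do
      { printf '%s\n' "$header"; cat "$f"; printf '%s\n' "$footer"; } > "$f.xml"
      rm "$f"
  done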

2. Shell Programming and Scripting

Sorting a huge file

Hi all, I am sorting a huge file:
-rw-r--r-- 1 rama users 448156978 May 13 18:48 102384.temp
$ sort -k 1,40n 102384.temp > 102384.temp1
msgcnt 1468 vxfs: mesg 001: vx_nospace - /dev/vg00/var file system full (1 block extent)
sort: A write error occurred while sorting.
I thought... (3 Replies)
Discussion started by: dhanamurthy
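The vx_nospace error suggests that sort's temporary files filled the file system. A sketch of the usual workaround (the directory name is a placeholder):

Code:
  # sort spills intermediate runs to temporary files; -T points
  # them at a file system that has room, instead of the default:
  sort -T /path/with/space -k 1,40n 102384.temp > 102384.temp1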

3. Shell Programming and Scripting

Splitting a huge XML/HTML file into multiple files

Hi all, I have some huge HTML files (500 MB to 1 GB). Each file has multiple <html>...</html> blocks:
<html>
.................
</html>
<html>
.................
</html>
... (5 Replies)
Discussion started by: uttamhoode
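A hedged sketch with GNU csplit, assuming each opening <html> tag starts on its own line (the file name is hypothetical):

Code:
  # Start a new piece at every line matching <html>; '{*}' repeats
  # the pattern as long as it keeps matching; -z drops empty pieces:
  csplit -z -f page_ big.html '/<html>/' '{*}'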

4. Shell Programming and Scripting

Insert a header in a huge data file without using an intermediate file

I have a file with extracted data and need to insert a header with a constant string, say: H|PayerDataExtract. If I use sed, I have to redirect the output to a separate file, like sed 'sed commands' ExtractDataFile.dat > ExtractDataFileWithHeader.dat. The same is true for awk and... (10 Replies)
Discussion started by: deepaktanna
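Two common approaches, sketched below; note that GNU sed -i still writes a hidden temporary file behind the scenes, while ed edits the file through the same path name:

Code:
  # GNU sed, in-place insert before line 1:
  sed -i '1i H|PayerDataExtract' ExtractDataFile.dat

  # Portable alternative with ed (no separate output file to manage):
  printf '1i\nH|PayerDataExtract\n.\nw\nq\n' | ed -s ExtractDataFile.dat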

5. Shell Programming and Scripting

Huge File Comparison

Hi, I need to compare two fixed-length files and write the differences, if any, to a separate file. I have to capture every difference, line by line. Ideally my files should have no differences, but if there are any, they must be captured without a miss. Also, my file sizes are... (4 Replies)
Discussion started by: naveenn08
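A minimal sketch with standard tools (the file names are hypothetical):

Code:
  # Line-by-line differences to a separate file:
  diff old.dat new.dat > differences.txt

  # For fixed-length records, cmp -l lists every differing byte with
  # its offset, which maps directly back to a record and column:
  cmp -l old.dat new.dat > byte_diffs.txt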

6. Shell Programming and Scripting

Splitting the Huge file into several files...

Hi, I have to write a script to split a huge file into several pieces. The file is pipe (|) delimited. A data sample: 6625060|1420215|07308806|N|20100120|5572477081|+0002.79|+0000.00|0004|0001|......... (3 Replies)
Discussion started by: lakteja
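A sketch of the usual split variants (the counts and names are placeholders):

Code:
  # Fixed number of lines per piece (piece_aa, piece_ab, ...):
  split -l 1000000 input.dat piece_

  # Or, with GNU split, pieces of at most 100 MB of whole lines,
  # so no record is ever cut in half:
  split -C 100m input.dat piece_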

7. Shell Programming and Scripting

Optimised way to search & replace a value on one line of a very huge file (file size is 24 GB)

Hi experts, I had to edit a particular value in the header line of a very huge file, so I wanted to search & replace that value in a file that was 24 GB in size. I managed to do it, but it took a long time to complete. Can anyone please tell me how we can do it in an optimised... (7 Replies)
Discussion started by: manishkomar007
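If the replacement is exactly the same length as the original value, the header can be patched in place without rewriting the other 24 GB; a hedged sketch (OFFSET and the value are placeholders):

Code:
  # conv=notrunc overwrites bytes at the given offset and leaves
  # the rest of the file untouched:
  printf '%s' 'NEWVALUE' | dd of=bigfile bs=1 seek=OFFSET conv=notrunc 2>/dev/null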

8. Shell Programming and Scripting

Splitting a huge single-line file into multiple lines with a fixed number of columns

Hi, I have a huge file with a single line, but I want to break that line into lines of five columns each. My file is like this:
code: "hi","there","how","are","you?","It","was","great","working","with","you.","hope","to","work","you."
I want it like this:
code:... (1 Reply)
Discussion started by: rajsharma
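A sketch that assumes no field contains an embedded comma (the file name is hypothetical):

Code:
  # One field per line, then paste re-joins them five at a time,
  # comma-separated:
  tr ',' '\n' < input.txt | paste -d, - - - - -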

9. Shell Programming and Scripting

Need help splitting huge single record file

I was given a data file that I need to split into multiple lines/records based on a keyword. The problem is that it is 2.5 GB or bigger, and everything I try in Perl or sed causes a segmentation fault. Can someone give me some other ideas? The data is of the form:... (5 Replies)
Discussion started by: leolson
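GNU awk can use the keyword itself as the record separator, so it never holds the whole 2.5 GB line in memory at once; a hedged sketch (KEYWORD is a placeholder):

Code:
  # Each chunk between keywords becomes one record; re-attach the
  # keyword and emit one record per output line:
  gawk 'BEGIN { RS = "KEYWORD"; ORS = "" }
        NR > 1 { print "KEYWORD" $0 "\n" }' big.dat > records.out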

10. UNIX for Dummies Questions & Answers

My file system is 100%, can't find the huge file

Please help. My file system is at 100%, and I can't seem to find what is taking so much space. The hard drive is 150 GB in total, but nothing is free now. I did this to find the big files, but it is taking so much time. Is there any other way? du -ah / | more; find ./ -size +200M... (3 Replies)
Discussion started by: samnyc
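A couple of faster sketches (the -x/-xdev flags keep the scan on one file system):

Code:
  # Largest directories first:
  du -xk / | sort -rn | head -20

  # Files over 200 MB on this file system only:
  find / -xdev -size +200M -exec ls -lh {} \; 2>/dev/null

  # Space held by deleted-but-still-open files never shows in du;
  # lsof +L1 lists open files whose link count is zero:
  lsof +L1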
ALLOC_HUGEPAGES(2)                Linux Programmer's Manual                ALLOC_HUGEPAGES(2)

NAME
       alloc_hugepages, free_hugepages - allocate or free huge pages

SYNOPSIS
       void *alloc_hugepages(int key, void *addr, size_t len, int prot, int flag);
       int free_hugepages(void *addr);

DESCRIPTION
       The system calls alloc_hugepages() and free_hugepages() were introduced in Linux
       2.5.36 and removed again in 2.5.54. They existed only on i386 and ia64 (when built
       with CONFIG_HUGETLB_PAGE). In Linux 2.4.20 the syscall numbers exist, but the calls
       fail with the error ENOSYS.

       On i386 the memory management hardware knows about ordinary pages (4 KiB) and huge
       pages (2 or 4 MiB). Similarly ia64 knows about huge pages of several sizes. These
       system calls serve to map huge pages into the process's memory or to free them
       again. Huge pages are locked into memory, and are not swapped.

       The key argument is an identifier. When zero the pages are private, and not
       inherited by children. When positive the pages are shared with other applications
       using the same key, and inherited by child processes.

       The addr argument of free_hugepages() tells which page is being freed: it was the
       return value of a call to alloc_hugepages(). (The memory is first actually freed
       when all users have released it.) The addr argument of alloc_hugepages() is a hint,
       that the kernel may or may not follow. Addresses must be properly aligned.

       The len argument is the length of the required segment. It must be a multiple of
       the huge page size.

       The prot argument specifies the memory protection of the segment. It is one of
       PROT_READ, PROT_WRITE, PROT_EXEC.

       The flag argument is ignored, unless key is positive. In that case, if flag is
       IPC_CREAT, then a new huge page segment is created when none with the given key
       existed. If this flag is not set, then ENOENT is returned when no segment with the
       given key exists.

RETURN VALUE
       On success, alloc_hugepages() returns the allocated virtual address, and
       free_hugepages() returns zero. On error, -1 is returned, and errno is set
       appropriately.

ERRORS
       ENOSYS The system call is not supported on this kernel.

FILES
       /proc/sys/vm/nr_hugepages
              Number of configured hugetlb pages. This can be read and written.

       /proc/meminfo
              Gives info on the number of configured hugetlb pages and on their size in
              the three variables HugePages_Total, HugePages_Free, Hugepagesize.

CONFORMING TO
       These calls are specific to Linux on Intel processors, and should not be used in
       programs intended to be portable.

NOTES
       These system calls are gone; they existed only in Linux 2.5.36 through to 2.5.54.
       Now the hugetlbfs file system can be used instead. Memory backed by huge pages (if
       the CPU supports them) is obtained by using mmap(2) to map files in this virtual
       file system. The maximal number of huge pages can be specified using the hugepages=
       boot parameter.

COLOPHON
       This page is part of release 3.53 of the Linux man-pages project. A description of
       the project, and information about reporting bugs, can be found at
       http://www.kernel.org/doc/man-pages/.

Linux                                     2007-05-31                      ALLOC_HUGEPAGES(2)
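The FILES and NOTES sections above translate into a short shell sketch (run as root; the mount point is hypothetical):

Code:
  echo 20 > /proc/sys/vm/nr_hugepages   # reserve 20 huge pages
  mkdir -p /mnt/huge
  mount -t hugetlbfs none /mnt/huge     # files here are huge-page backed
  grep -i huge /proc/meminfo            # HugePages_Total/Free, Hugepagesize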