Hello,
I'm trying to split a file which contains a single very long line.
My aim is to split this single line every 120 characters.
I tried this sed command:
`cat ${MYPATH}/${FILE}|sed -e :a -e 's/^.\{1,120\}$/&\n/;ta' >{MYPATH}/${DEST}`
but when I wc -l the destination file it is... (2 Replies)
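For what it's worth, `fold(1)` does this split natively and streams, so it copes with an arbitrarily long single line. A minimal sketch reusing the variable names from the post (an awk fallback is included in case `fold` is unavailable):

```shell
# fold wraps the input every 120 characters; no sed loop needed.
fold -w 120 "${MYPATH}/${FILE}" > "${MYPATH}/${DEST}"

# awk alternative: peel off 120 characters at a time.
#   awk '{ while (length($0) > 120) { print substr($0, 1, 120); $0 = substr($0, 121) } print }' infile > outfile
```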
I have to delete the first 7000 lines of a file which is 12 GB large. As it is so large, I can't open it in vi to delete these lines. I found one post here which gave a solution using perl, but I don't have perl installed. Also, some solutions were redirecting the output to a different file and renaming it.... (3 Replies)
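One standard approach, sketched with a placeholder filename `bigfile`: `tail` streams from line 7001 onward without loading the file into memory, so 12 GB is fine; the `mv` then replaces the original (a second copy does exist briefly on disk).

```shell
# Stream everything from line 7001 onward, then swap the files.
tail -n +7001 bigfile > bigfile.trimmed && mv bigfile.trimmed bigfile

# GNU sed can edit "in place" instead, though it also writes a temp
# copy behind the scenes:
#   sed -i '1,7000d' bigfile
```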
Dear All,
I am working on a Windows OS but connecting remotely to a Linux machine. I wonder how to copy and paste some part of a huge file on the Linux machine.
The content of the file looks as follows:
...
dump annealling all custom 10 anneal_*.dat id type x y z q
timestep 0.02
run 200000
Memory... (2 Replies)
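Rather than copy-pasting through the terminal, one sketch is to extract just the wanted line range on the Linux side; the range 100-200 and the filenames below are placeholders:

```shell
# Print only lines 100-200, then quit immediately so sed never scans
# the rest of the huge file.
sed -n '100,200p;201q' bigfile > slice.txt

# awk equivalent:
#   awk 'NR >= 100 && NR <= 200; NR == 200 { exit }' bigfile > slice.txt
```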
Hi,
I would like to clarify how to use a NAWK array to store multiple lines from a huge file.
The file has a unique REF.NO, and I want to store the lines (it may be
100+ lines) until I find the next REF.NO.
How can I apply NAWK - arrays for the above?
Rgds,
sharif. (1 Reply)
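A hedged sketch of the array idea: assuming each record starts with a line like `REF.NO: <id>` (the post doesn't show the real layout, so adjust the pattern and `$2`), awk/nawk can buffer a record's lines in an array and process the whole record when the next key appears:

```shell
awk '
  /^REF\.NO/ {
    # A new key: process the buffered previous record first.
    if (n) printf "record %s has %d lines\n", key, n
    key = $2
    n = 0
  }
  { lines[++n] = $0 }   # buffer every line of the current record in an array
  END { if (n) printf "record %s has %d lines\n", key, n }
' hugefile
```

Replace the `printf` with whatever per-record processing is needed; the `lines[]` array holds the full record (100+ lines is fine) at that point.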
I'm new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3 GB each). The delimiter is a semicolon ";".
Here is the sample of 5 lines in the file:
Name1;phone1;address1;city1;state1;zipcode1
Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
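Judging from the sample, a valid record has exactly 6 semicolon-separated fields; a sketch that splits good and bad rows into separate files (filenames are placeholders):

```shell
# NF is the field count with -F';'; rows that don't have exactly 6
# fields go to a reject file for later inspection.
awk -F';' 'NF == 6 { print > "good.txt"; next }
           { print > "bad.txt" }' input.txt
```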
Hi All,
I have a very huge file (4 GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. Sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
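One sketch worth trying before a line-by-line loop: GNU sort spills sorted runs to temporary files on disk rather than holding the 4 GB in RAM, so the buffer-space problem usually goes away once `-T` points at a filesystem with enough free space (paths below are placeholders):

```shell
# -u deduplicates while sorting; -T chooses where the on-disk spill
# files go. Output order is sorted, not original order.
sort -u -T /var/tmp bigfile > unique.txt

# awk '!seen[$0]++' preserves the original order but keeps every unique
# line in memory, which is what was running out of buffer space here.
```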
Dear all,
I have a big file: 2879 (rows) x 400,170 (columns), like below. I'd like to split the file into small pieces of 2879 (rows) x 2000 (columns) per file (the last small piece will be 2879 x 170).
So far, I only know how to create one small piece at a time. But actually I need to repeat this work... (6 Replies)
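A sketch that automates the repetition, assuming space-separated columns and a placeholder filename `matrix.txt`: loop over column ranges and let `cut` extract each 2000-column slice (the last slice gets the 170-column remainder automatically).

```shell
total=400170   # total number of columns
width=2000     # columns per output file
i=1
start=1
while [ "$start" -le "$total" ]; do
  end=$((start + width - 1))
  if [ "$end" -gt "$total" ]; then end=$total; fi
  # Extract fields start..end for every row into chunk_<i>.txt.
  cut -d' ' -f"${start}-${end}" matrix.txt > "chunk_${i}.txt"
  start=$((end + 1))
  i=$((i + 1))
done
```

Note this rereads the whole file once per chunk; with ~200 chunks that is a lot of I/O, but it needs almost no memory.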
Hi,
I have a huge 7 GB file which has around 1 million records; I want to split this file into 4 files containing around 250k records each.
Please help me, as the split command cannot work here since it might split a record across the tags..
Format of the file is as below
<!--###### ###### START-->... (6 Replies)
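One way to split only on record boundaries, sketched with assumptions: the marker regex `START-->` is guessed from the single line shown in the post, and `part_000.xml`, `part_001.xml`, ... are placeholder output names. The file rotates only when a START marker begins record 250001, 500001, and so on, so no record is ever cut in half.

```shell
awk -v per=250000 '
  /START-->/ { if (++n > per) { n = 1; close(out); part++ } }  # rotate on a record boundary
  { out = sprintf("part_%03d.xml", part); print > out }        # append to the current part
' hugefile.xml
```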
Gents
I have a huge NAS file system as /sys with size 10 TB, and I want to split it into separate 1 TB file systems to be mounted on the server.
How can I do that without changing anything in the source?
Please support. (1 Reply)
Discussion started by: AbuAliiiiiiiiii
LEARN ABOUT DEBIAN
createrepo
createrepo(8)

NAME
createrepo - Create repomd (xml-rpm-metadata) repository
SYNOPSIS
createrepo [options] <directory>
DESCRIPTION
createrepo is a program that creates a repomd (xml-based rpm metadata) repository from a set of rpms.
OPTIONS
-u --baseurl <url>
Optional base url location for all files. (not used by any clients at this time)
-o --outputdir <url>
Optional output directory (useful for read only media).
-x --exclude <package>
File globs to exclude, can be specified multiple times.
-i --pkglist <filename>
Specify a text file which contains the complete list of files to include in the repository from the set found in the directory. The
file format is one package per line; no wildcards or globs.
-q --quiet
Run quietly.
-g --groupfile <groupfile>
A precreated xml filename to point to for group information.
See examples section below for further explanation.
-v --verbose
Run verbosely.
-c --cachedir <path>
Specify a directory to use as a cachedir. This allows createrepo to keep a cache of package checksums in the repository. On
consecutive runs over the same repository, when most of the packages are unchanged, this decreases the processing time
dramatically.
--update
If metadata already exists in the outputdir and an rpm is unchanged (based on file size and mtime) since the metadata was generated,
reuse the existing metadata rather than recalculating it. In the case of a large repository with only a few new or modified rpms
this can significantly reduce I/O and processing time.
-C --checkts
Don't regenerate the repo metadata if its timestamps are newer than those of the rpms. This option again decreases the processing
time drastically if you happen to run it on an unmodified repo, but it is (currently) mutually exclusive with the --split option.
--split
Run in split media mode. Rather than pass a single directory, take a set of directories corresponding to different volumes in a
media set.
-p --pretty
Output xml files in pretty format.
-V --version
Output version.
-h --help
Show help menu.
-d --database
Generate sqlite databases for use with yum.
EXAMPLES
Here is an example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e.
/path/to/rpms/comps.xml).
createrepo -g comps.xml /path/to/rpms
FILES
repodata/filelists.xml.gz
repodata/other.xml.gz
repodata/primary.xml.gz
repodata/repomd.xml
SEE ALSO
yum (8) yum.conf (5)
AUTHORS
Seth Vidal <skvidal@phy.duke.edu>
BUGS
Any bugs which are found should be emailed to the mailing list: rpm-metadata@linux.duke.edu
Seth Vidal 2005 Jan 2 createrepo(8)