How to extract data from a huge file? Post 302159595 by srsahu75 on Friday 18th of January 2008 03:53:20 AM
Yes, I need

Yes, I need to extract the information between the main tags (inclusive of the tags),
starting from
<dublin_core schema="dc">
to
</dublin_core>

Save each extract as dublin_core.xml in the corresponding folder item_*, where the folder is created from the string (item_*) that appears just before <dublin_core schema="dc">.

And save another file, 'contents', in each of those folders, whose content is license.txt(tab \t)bundle:LICENSE
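
One way to do all of this in a single pass is a small awk script. This is only a sketch: it assumes the item_* string sits on a line of its own immediately before each <dublin_core schema="dc"> block, and that the source file is called input.xml; adjust both to your real layout.

Code:
#!/bin/sh
awk '
/^item_/ { dir = $1; next }                  # remember the folder name
/<dublin_core schema="dc">/ {                # a new block starts here
    system("mkdir -p " dir)
    out = dir "/dublin_core.xml"
    inblock = 1
}
inblock { print > out }                      # copy the block, tags inclusive
inblock && /<\/dublin_core>/ {               # block ends here
    close(out)
    printf "license.txt\tbundle:LICENSE\n" > (dir "/contents")
    close(dir "/contents")
    inblock = 0
}
' input.xml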
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

search and grab data from a huge file

folks, In my working directory, there are multiple large files which each contain only one line. The line is too long to use "grep", so any help? For example, if I want to find whether these files contain a string like "93849", what command should I use? Also, there is an order_id number... (1 Reply)
Discussion started by: ting123
1 Reply
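
For a fixed string in single-line files that are too long for grep, awk's index() does a plain substring search and is not bothered by line length. A rough sketch (the string and the glob are placeholders, assuming the directory holds only these data files):

Code:
# Print the name of every file whose single long line contains the
# literal string 93849 (placeholder) anywhere.
awk 'index($0, "93849") { print FILENAME }' ./*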

2. Shell Programming and Scripting

How to extract a piece of information from a huge file

Hello All, I need some assistance to extract a piece of information from a huge file. The file is like this one : database information ccccccccccccccccc ccccccccccccccccc ccccccccccccccccc ccccccccccccccccc os information cccccccccccccccccc cccccccccccccccccc... (2 Replies)
Discussion started by: Marcor
2 Replies
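
If the sections really are bracketed by marker lines such as "database information" and "os information" (as the sample suggests; both markers are assumptions here), sed can print just that range without holding the file in memory:

Code:
# Print everything from the "database information" line through the
# "os information" line, both markers included.
sed -n '/^database information/,/^os information/p' hugefile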

3. Shell Programming and Scripting

insert a header in a huge data file without using an intermediate file

I have a file with data extracted, and need to insert a header with a constant string, say: H|PayerDataExtract if I use sed, I have to redirect the output to a separate file like sed 'sed commands' ExtractDataFile.dat > ExtractDataFileWithHeader.dat the same is true for awk and... (10 Replies)
Discussion started by: deepaktanna
10 Replies
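
There is no way to prepend a line without rewriting the file, but GNU sed's -i manages the temporary copy for you, so no second file name appears in the script. A sketch using the header string from the post:

Code:
# GNU sed only: insert the header as the new first line. sed -i still
# rewrites the file through a temporary copy; it just handles it for you.
sed -i '1i H|PayerDataExtract' ExtractDataFile.dat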

4. Shell Programming and Scripting

How to extract a subset from a huge dataset

Hi, all. I have a huge file which is 450G in size. Its tab-delimited format is as below x1 A 50020 1 x1 B 50021 8 x1 C 50022 9 x1 A 50023 10 x2 D 50024 5 x2 C 50025 7 x2 F 50026 8 x2 N 50027 1 : : Now, I want to extract a subset from this file. In this subset, column 1 is x10, column 2 is... (3 Replies)
Discussion started by: cliffyiu
3 Replies
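
For a selection like "column 1 is x10", a single awk pass over the file is usually the practical answer at that size; extra conditions on the other columns are added the same way. A sketch (file names are placeholders):

Code:
# Keep only rows whose first tab-separated field is exactly x10;
# add further tests (e.g. && $2 == "B") as needed.
awk -F'\t' '$1 == "x10"' hugefile > subset.txt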

5. Shell Programming and Scripting

Three Difference File Huge Data Comparison Problem.

I got three different files: Part of File 1 ARTPHDFGAA . . Part of File 2 ARTGHHYESA . . Part of File 3 ARTPOLYWEA . . (4 Replies)
Discussion started by: patrick87
4 Replies

6. Shell Programming and Scripting

Help- counting delimiter in a huge file and split data into 2 files

I’m new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3GB each). The delimiter is a semicolon “;”. Here is a sample of 5 lines from the file: Name1;phone1;address1;city1;state1;zipcode1 Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
Discussion started by: lv99
7 Replies
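
Counting the fields per line with awk and routing each record to one of two output files handles this in a single pass. A sketch, assuming a good record has exactly six ";"-separated fields as in the sample:

Code:
# 6 fields = good record; anything else (e.g. an extra comment field) = bad.
awk -F';' 'NF == 6 { print > "good.txt"; next }
                   { print > "bad.txt" }' bigfile.txt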

7. Shell Programming and Scripting

Extract header data from one file and combine it with data from another file

Hi, great minds. I have some files, in fact header files, of a CTD profiler. I tried a lot of C programming but could not get the output I expected, because my programming skills are very poor. Finally I joined the unix forum with the hope that I may get what I want from you people. Here I have attached... (17 Replies)
Discussion started by: nex_asp
17 Replies

8. Shell Programming and Scripting

Extract few content from a huge list of files

I have a huge list of files (about 300,000) which have a pattern like this. .I 1 .U 87049087 .S Am J Emerg .M Allied Health Personnel/*; Electric Countershock/*; .T Refibrillation managed by EMT-Ds: .P ARTICLE. .W Some patients converted from ventricular fibrillation to organized... (1 Reply)
Discussion started by: shoaibjameel123
1 Reply

9. UNIX for Advanced & Expert Users

Need optimized shell/awk script to aggregate (sum) all the columns of a huge data file

Optimization shell/awk script to aggregate (sum) for all the columns of Huge data file File delimiter "|" Need to have Sum of all columns, with column number : aggregation (summation) for each column File not having the header Like below - Column 1 "Total Column 2 : "Total ... ...... (2 Replies)
Discussion started by: kartikirans
2 Replies
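
A single awk pass that keeps a running total per column is about as cheap as this gets. A sketch for a "|"-delimited file with no header (the file name is a placeholder):

Code:
# Sum every column of a "|"-delimited file and print the totals at the end.
awk -F'|' '{
    for (i = 1; i <= NF; i++) sum[i] += $i
    if (NF > maxnf) maxnf = NF
}
END {
    for (i = 1; i <= maxnf; i++) printf "Column %d : %s\n", i, sum[i]
}' datafile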

10. UNIX for Advanced & Expert Users

File comparisons for huge data files (around 60G) - Need the most optimized and best way to do this

I have 2 large files (.dat) around 70G, 12 columns, but the data is not sorted in both files. Need your inputs in giving the best optimized method/command to achieve this and redirect the non-matching lines to the third file (diff.dat) File 1 - 15 columns File 2 - 15 columns Data is... (9 Replies)
Discussion started by: kartikirans
9 Replies
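
At that size, sorting both files to disk first and letting comm do the comparison keeps memory use bounded, since sort spills to temporary files. A sketch (file names are placeholders; point TMPDIR at a filesystem with enough free space):

Code:
sort file1.dat > file1.sorted
sort file2.dat > file2.sorted
# Lines not common to both files; those unique to file2 are tab-indented.
comm -3 file1.sorted file2.sorted > diff.dat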
lookbib(1)						      General Commands Manual							lookbib(1)

Name
       indxbib, lookbib - build inverted index for a bibliography, lookup bibliographic references

Syntax
       indxbib database...
       lookbib database

Description
       The indxbib command makes an inverted index to the named databases (or files) for use by lookbib and refer.  These files contain bibliographic references (or other kinds
       of information) separated by blank lines.

       A bibliographic reference is a set of lines, constituting fields of bibliographic information.  Each field starts on a line beginning  with
       a  ``%'',  followed  by	a key-letter, then a blank, and finally the contents of the field, which may continue until the next line starting
       with ``%''.
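
       For example, a reference might look like this (field letters such as %A for author, %T for title, %I for issuer, and %D for date follow the usual refer conventions):

              %A B. W. Kernighan
              %A D. M. Ritchie
              %T The C Programming Language
              %I Prentice-Hall
              %D 1978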

       The indxbib command is a shell script that calls mkey and inv.  The first program, mkey, truncates words to 6 characters, and maps upper case to lower case.  It
       also  discards words shorter than 3 characters, words among the 100 most common English words, and numbers (dates) < 1900 or > 2000.  These
       parameters can be changed.  The second program, inv, creates an entry file (.ia), a posting file (.ib), and a tag file (.ic),  all  in  the
       working directory.

       The lookbib command uses an inverted index made by indxbib to find sets of bibliographic references.  It reads keywords typed after the ``>'' prompt on the
       terminal, and retrieves records containing all these keywords.  If nothing matches, nothing is returned except another ``>'' prompt.

       It is possible to search multiple databases, as long as they have a common index made by indxbib.  In that case, only the first argument given to indxbib
       is specified to lookbib.

       If lookbib does not find the index files (the .i[abc] files), it looks for a reference file with the same name as the argument, without the suffixes.
       It creates a file with a '.ig' suffix, suitable for use with fgrep.  It then uses this fgrep file to find references.  This method is simpler to use,
       but the .ig file is slower to use than the .i[abc] files, and does not allow the use of multiple reference files.
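
       For example, assuming the references live in a file named papers (the file name is arbitrary), a typical session is:

              $ indxbib papers          (builds papers.ia, papers.ib, and papers.ic)
              $ lookbib papers
              > kernighan ritchie       (keywords typed at the ``>'' prompt; matching records are printed)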

Files
       x.ia, x.ib, x.ic, where x is the first argument, or if these are not present, then x.ig, x

See Also
       addbib(1), lookbib(1), refer(1), roffbib(1), sortbib(1)
