How to extract data from a huge file?
Post 302159595 by srsahu75 on Friday 18th of January 2008 03:53:20 AM
Yes, I need

Yes, I need to extract the information between the main tags (inclusive of the tags themselves),
starting from
<dublin_core schema="dc">
to
</dublin_core>

Each extracted block should be saved as dublin_core.xml in its respective item_* folder; the folders are created from the item_* string that appears just before <dublin_core schema="dc">.

And in each folder I also need to save another file, 'contents', whose content is license.txt(tab \t)bundle:LICENSE
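
For reference, a rough awk sketch of one way to do that, assuming each item_* name sits on a line of its own immediately before its <dublin_core schema="dc"> line and that the big input file is called bigfile.txt (both of those are assumptions, so adjust them to the real layout):

Code:
awk '
/^item_/                    { dir = $1; next }             # remember the folder name for the next block
/<dublin_core schema="dc">/ {
    inblock = 1
    system("mkdir -p " dir)                                # create the item_* folder
    out = dir "/dublin_core.xml"
    # companion contents file: license.txt<TAB>bundle:LICENSE
    printf "license.txt\tbundle:LICENSE\n" > (dir "/contents")
    close(dir "/contents")
}
inblock                     { print > out }                # copy the block, opening and closing tags included
/<\/dublin_core>/           { inblock = 0; close(out) }
' bigfile.txt

If the item_* string is embedded in a longer line rather than standing alone, the first rule would need a match()/substr() call instead of the simple $1.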
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

search and grab data from a huge file

Folks, in my working directory there are multiple large files which each contain only one line. The line is too long to use "grep", so any help? For example, if I want to find whether these files contain a string like "93849", what command should I use? Also, there is oder_id number... (1 Reply)
Discussion started by: ting123
1 Replies
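For what it's worth, GNU awk has no line-length limit, so a sketch like the one below would report which files contain the string; the *.dat glob and the 93849 value are only placeholders taken from the question:

Code:
gawk 'index($0, "93849") { print FILENAME; nextfile }' *.dat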

2. Shell Programming and Scripting

How to extract a piece of information from a huge file

Hello All, I need some assistance to extract a piece of information from a huge file. The file is like this one: database information ccccccccccccccccc ccccccccccccccccc ccccccccccccccccc ccccccccccccccccc os information cccccccccccccccccc cccccccccccccccccc... (2 Replies)
Discussion started by: Marcor
2 Replies
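A sed range is one common sketch for this kind of section extraction, assuming the headings "database information" and "os information" each begin a line (both boundary lines are printed, and hugefile is just a placeholder name):

Code:
sed -n '/^database information/,/^os information/p' hugefile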

3. Shell Programming and Scripting

insert a header in a huge data file without using an intermediate file

I have a file with extracted data and need to insert a header with a constant string, say: H|PayerDataExtract. If I use sed, I have to redirect the output to a separate file, like sed 'sed commands' ExtractDataFile.dat > ExtractDataFileWithHeader.dat, and the same is true for awk and... (10 Replies)
Discussion started by: deepaktanna
10 Replies
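With GNU sed the edit can be done in place, so no separately named output file is needed (sed still uses a temporary file behind the scenes). A sketch, reusing the names from the question:

Code:
sed -i '1i H|PayerDataExtract' ExtractDataFile.dat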

4. Shell Programming and Scripting

How to extract a subset from a huge dataset

Hi all, I have a huge file which is 450G. Its tab-delimited format is as below:
x1 A 50020 1
x1 B 50021 8
x1 C 50022 9
x1 A 50023 10
x2 D 50024 5
x2 C 50025 7
x2 F 50026 8
x2 N 50027 1
: :
Now, I want to extract a subset from this file. In this subset, column 1 is x10, column 2 is... (3 Replies)
Discussion started by: cliffyiu
3 Replies
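A generic awk filter sketch for a tab-delimited file; the x10 value comes from the question, but the second condition is cut off above, so the "B" below is only a placeholder, as are hugefile and subset.txt:

Code:
awk -F'\t' '$1 == "x10" && $2 == "B"' hugefile > subset.txt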

5. Shell Programming and Scripting

Three Difference File Huge Data Comparison Problem.

I got three different files:
Part of File 1
ARTPHDFGAA
. .
Part of File 2
ARTGHHYESA
. .
Part of File 3
ARTPOLYWEA
. . (4 Replies)
Discussion started by: patrick87
4 Replies

6. Shell Programming and Scripting

Help- counting delimiter in a huge file and split data into 2 files

I'm new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3GB each). The delimiter is a semicolon ";". Here is a sample of 5 lines in the file: Name1;phone1;address1;city1;state1;zipcode1 Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
Discussion started by: lv99
7 Replies
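If a valid record has exactly 6 semicolon-separated fields, as in the sample lines, an awk sketch like this splits the file; good.txt and bad.txt are just example names:

Code:
awk -F';' 'NF == 6 { print > "good.txt"; next } { print > "bad.txt" }' bigfile.txt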

7. Shell Programming and Scripting

Extract header data from one file and combine it with data from another file

Hi, great minds. I have some files, in fact header files from a CTD profiler. I tried a lot of C programming but could not get the output I expected, because my programming skills are very poor. Finally I joined the unix forum with the hope that I may get what I want from you people. Here I have attached... (17 Replies)
Discussion started by: nex_asp
17 Replies

8. Shell Programming and Scripting

Extract few content from a huge list of files

I have a huge list of files (about 300,000) which have a pattern like this.
.I 1
.U 87049087
.S Am J Emerg
.M Allied Health Personnel/*; Electric Countershock/*;
.T Refibrillation managed by EMT-Ds:
.P ARTICLE.
.W Some patients converted from ventricular fibrillation to organized... (1 Reply)
Discussion started by: shoaibjameel123
1 Replies

9. UNIX for Advanced & Expert Users

Need optimized shell/awk script to aggregate (sum) all the columns of a huge data file

I need an optimized shell/awk script to aggregate (sum) all the columns of a huge data file. The file delimiter is "|". I need the sum of all columns, reported with the column number, i.e. an aggregation (summation) for each column. The file does not have a header. Like below - Column 1 : Total Column 2 : Total ... ...... (2 Replies)
Discussion started by: kartikirans
2 Replies
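An awk sketch that sums every "|"-separated column and prints one total per column at the end; it assumes all fields are numeric, and data.dat is only a placeholder name:

Code:
awk -F'|' '
    { for (i = 1; i <= NF; i++) sum[i] += $i; if (NF > maxnf) maxnf = NF }
    END { for (i = 1; i <= maxnf; i++) printf "Column %d : %s\n", i, sum[i] }
' data.dat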

10. UNIX for Advanced & Expert Users

File comparisons for huge data files (around 60G) - Need the optimized and best way to do this

I have 2 large files (.dat), around 70G, 12 columns, but the data is not sorted in either file. I need your inputs on the best optimized method/command to achieve this and redirect the non-matching lines to a third file (diff.dat). File 1 - 15 columns File 2 - 15 columns Data is... (9 Replies)
Discussion started by: kartikirans
9 Replies
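One low-memory sketch: sort the two files together and keep the lines that occur only once, which are by definition the non-matching ones. This assumes neither file contains duplicate lines of its own; file1.dat, file2.dat and diff.dat are names taken from the question:

Code:
sort file1.dat file2.dat | uniq -u > diff.dat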
tracker-extract(1)                          User Commands                          tracker-extract(1)

NAME
       tracker-extract - Extract metadata from a file.

SYNOPSIS
       tracker-extract [OPTION...] FILE...

DESCRIPTION
       tracker-extract reads the file and mimetype provided on stdin, extracts the metadata from this file and then displays the metadata on the standard output.

       NOTE: If a FILE is not provided then tracker-extract will run for 30 seconds waiting for DBus calls before quitting.

OPTIONS
       -?, --help
              Show summary of options.

       -v, --verbosity=N
              Set verbosity to N. This overrides the config value. Values include 0=errors, 1=minimal, 2=detailed and 3=debug.

       -f, --file=FILE
              The FILE to extract metadata from. The FILE argument can be either a local path or a URI. It also does not have to be an absolute path.

       -m, --mime=MIME
              The MIME type to use for the file. If one is not provided, it will be guessed automatically.

       -d, --disable-shutdown
              Disable shutting down after 30 seconds of inactivity.

       -i, --force-internal-extractors
              Use this option to force internal extractors over 3rd parties like libstreamanalyzer.

       -m, --force-module=MODULE
              Force a particular module to be used. This is here as a convenience for developers wanting to test their MODULE file. Only the MODULE name has to be specified, not the full path. Typically, a MODULE is installed to /usr/lib/tracker-0.7/extract-modules/. This option can be used with or without the .so part of the name too, for example, you can use --force-module=foo

              Modules are shared objects which are dynamically loaded at run time. These files must have the .so suffix to be loaded and must contain the correct symbols to be authenticated by tracker-extract. For more information see the libtracker-extract reference documentation.

       -V, --version
              Show binary version.

EXAMPLES
       Using the command line to extract metadata from a file:

              $ tracker-extract -v 3 -f /path/to/some/file.mp3

       Using a specific module to extract metadata from a file:

              $ tracker-extract -v 3 -f /path/to/some/file.mp3 -m mymodule

ENVIRONMENT
       TRACKER_EXTRACTORS_DIR
              This is the directory which tracker uses to load the shared libraries from (used for extracting metadata for specific file types). These are needed on each invocation of tracker-store. If unset it will default to the correct place. This is used mainly for testing purposes.

FILES
       $HOME/.config/tracker/tracker-extract.cfg

SEE ALSO
       tracker-store(1), tracker-sparql(1), tracker-stats(1), tracker-info(1).

       tracker-extract.cfg(5).

       /usr/lib/tracker-0.7/extract-modules/

GNU                                          July 2007                             tracker-extract(1)