Full Discussion: Extract lines from files
Post 302342305 by my_Perl on Sunday 9th of August 2009 05:32:09 AM
Definitely, Thanks a lot.

---------- Post updated 08-09-09 at 04:32 AM ---------- Previous update was 08-08-09 at 03:17 PM ----------

Hi drl

The Perl script works well for a smaller number of sentences, but once the count reaches 15 or more, groups of sentences merge into a single line, and some of the sentences get printed repeatedly. In fact, I want to run this program on thousands of sentences. Is this problem due to the array, or something else?
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

is it hard to extract particular lines & strings from the files??

Hi Experts, I have lots of big files. Below is a snapshot of one file. From the files I want to extract information like the lines below. What could be a command or script for that? DELETE RESP:940120105 CREATE RESP:0 GET RESP:0 The file contains content like below- ... ... <log... (8 Replies)
Discussion started by: thepurple
8 Replies
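
A hedged sketch of one way to pull those values out (GNU grep), assuming each record carries the operation name and a RESP:<digits> token on the same line; the real XML layout is truncated in the post, and logfile.xml is only a placeholder name:

Code:
# Print "OPERATION ... RESP:<number>" fragments, one per line.
grep -oE '(DELETE|CREATE|GET)[^<>]*RESP:[0-9]+' logfile.xml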

2. Shell Programming and Scripting

extract nth line of all files and print in output file on separate lines.

Hello UNIX experts, I have 124 text files in a directory. I want to extract the 45678th line of all the files, sequentially by file name. The extracted lines should be printed in the output file on separate lines. e.g. The input files are one.txt, two.txt, three.txt, four.txt The cat of four... (1 Reply)
Discussion started by: yogeshkumkar
1 Replies
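
A minimal sketch of the usual awk answer, assuming every file actually has at least 45678 lines and that glob (alphabetical) order is acceptable as "sequentially by file names":

Code:
# FNR resets for every input file, so this prints line 45678 of each *.txt,
# one extracted line per output line, into a single output file.
awk 'FNR == 45678' *.txt > output.txt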

3. Shell Programming and Scripting

How to extract lines between tags into different files?

I have an xml file with the below data: unix>Cat address.xml <Address City=”Amsterdam” Street = “station straat” ZIPCODE="2516 CK " </Address> <Address City=”Amsterdam” Street = “Leeuwen straat” ZIPCODE="2517 AB " </Address> <Address City=”The Hauge” Street = “kirk straat” ... (1 Reply)
Discussion started by: LinuxLearner
1 Replies
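
One hedged way to do the split with awk, assuming each record runs from an opening <Address line to a closing </Address> line, that the attributes use plain double quotes, and that one output file per city (a made-up naming scheme) is the goal:

Code:
# Route each <Address ...> ... </Address> block to a file named after its City attribute.
awk '
  /<Address/ {
    if (match($0, /City="[^"]*"/))
      out = substr($0, RSTART + 6, RLENGTH - 7) ".xml"   # e.g. Amsterdam.xml
  }
  /<Address/, /<\/Address>/ { if (out != "") print > out }
' address.xml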

4. UNIX for Dummies Questions & Answers

Extract lines with specific words with addition 2 lines before and after

Dear all, Greetings. I would like to ask for your help to extract lines containing specific words, plus the 2 lines before and after each of those lines, using awk or sed. For example, the input file is: 1 ak1 abc1.0 1 ak2 abc1.0 1 ak3 abc1.0 1 ak4 abc1.0 1 ak5 abc1.1 1 ak6 abc1.1 1 ak7... (7 Replies)
Discussion started by: Amanda Low
7 Replies
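
This is exactly what grep's context options do; a minimal sketch, using one of the values from the sample above as the search word:

Code:
# Print every matching line plus 2 lines before (-B2) and after (-A2) it.
# -A/-B are supported by GNU and BSD grep; with other greps, awk or sed is needed.
grep -B2 -A2 'abc1.1' input.txt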

5. Shell Programming and Scripting

Search for a pattern,extract value(s) from next line, extract lines having those extracted value(s)

I have hundreds of files to process. In each file I need to (1) look for a pattern, then (2) extract value(s) from the next line, and then (3) search the same file, at a specific position, for lines containing the value(s) selected in step (2). HEADER ELECTRON TRANSPORT 18-MAR-98 1A7V TITLE CYTOCHROME... (7 Replies)
Discussion started by: AshwaniSharma09
7 Replies
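
The post is truncated, so the exact fields and positions are unknown; this is only a rough sketch of the two-pass idea, with PATTERN, the field numbers, and reading the same file twice all as assumptions:

Code:
# Pass 1 (NR==FNR): remember field 2 of the line that follows PATTERN.
# Pass 2: print lines whose first field is one of the remembered values.
awk '
  NR == FNR { if (grab) { vals[$2] = 1; grab = 0 }
              if (/PATTERN/) grab = 1
              next }
  $1 in vals
' file file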

6. Shell Programming and Scripting

Can you extract (remove) lines from log files?

I use "MineOS" (a linux distro with python scripts and web ui included for managing a Minecraft Server). The author of the scripts is currently having a problem with the Minecraft server log file being spammed with certain entries. He's working on clearing up the spam. But in the meantime, I'm... (8 Replies)
Discussion started by: nbsparks
8 Replies
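
Until the spam is fixed at the source, filtering it out of the log is a one-liner; a minimal sketch where SPAM_PATTERN is only a placeholder for whatever the repeated entry looks like:

Code:
# Keep every line that does NOT match the spam pattern.
grep -v 'SPAM_PATTERN' server.log > server.clean.log

# Or delete matching lines with sed (writing to a new file, not in place).
sed '/SPAM_PATTERN/d' server.log > server.clean.log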

7. Shell Programming and Scripting

Extract lines from text files

I have some files containing the following data # RESIDUE AA STRUCTURE BP1 BP2 ACC N-H-->O O-->H-N N-H-->O O-->H-N TCO KAPPA ALPHA PHI PSI X-CA Y-CA Z-CA 1 196 A M 0 0 230 0, 0.0 2,-0.2 0, 0.0 0, 0.0 0.000 360.0 360.0 360.0 76.4 21.7 -6.8 11.3 2 197 A D + 0 0 175 1,-0.1 2,-0.1 0, 0.0 0, 0.0... (10 Replies)
Discussion started by: edweena
10 Replies

8. Shell Programming and Scripting

ksh sed - Extract specific lines with mulitple occurance of interesting lines

Data file example: I look for "primary" and "*" to isolate the interesting slot number. slot=`sed '/^primary$/,/\*/!d' filename | tail -1 | sed s'/*//' | awk '{print $1" "$2}'` Now I want to get the Touch line for only the associated slot number, in this case, because the asterisk... (2 Replies)
Discussion started by: popeye
2 Replies
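
A hedged follow-up to the poster's own one-liner: once $slot holds the two fields from the starred line, a second pass can pull just the matching Touch line. The layout of the Touch lines is not shown, so matching them with a plain grep of the slot value is an assumption:

Code:
# Isolate the starred slot between "primary" and "*", then fetch only the
# Touch line that mentions that slot (the Touch-line format is assumed).
slot=$(sed '/^primary$/,/\*/!d' filename | tail -1 | sed 's/\*//' | awk '{print $1" "$2}')
grep 'Touch' filename | grep "$slot"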

9. Shell Programming and Scripting

Extract lines that appear twice

I have a text file that looks like this: root/user/usr1/0001/abab1* root/user/usr1/0001/abab2* root/user/usr1/0002/acac1* root/user/usr1/0002/acac2* root/user/usr1/0003/adad1* root/user/usr1/0004/aeae1* root/user/usr1/0004/aeae2* How could I code this to extract just the subjects... (9 Replies)
Discussion started by: LeftoverStew
9 Replies
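
The post is cut off, but if "subjects" means the directory component (0001, 0002, ...) and the goal is the ones that occur exactly twice, a hedged awk sketch (paths.txt is a placeholder name):

Code:
# Count the 4th "/"-separated field of each path and print those seen exactly twice.
awk -F/ '{ count[$4]++ } END { for (s in count) if (count[s] == 2) print s }' paths.txt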

10. UNIX for Dummies Questions & Answers

Extract the same lines from the two files

I used to use this script to extract the same lines from two files: grep -f file1 file2 > outputfile Now I have file1: AB029895 AF208401 AF309648 AF526378 AJ444445 AJ720950 AJ851546 AY568629 AY591907 AY994087 BU116401 BU116599 BU119689 BU121308 BU125622 BU231446 BU236750 BU237045 (4 Replies)
Discussion started by: yuejian
4 Replies
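
The poster's grep -f file1 file2 does work, but it treats each line of file1 as an unanchored regular expression, which can over-match (one ID matching inside a longer string) and can be slow for long pattern lists. A minimal sketch of the usual tightening:

Code:
# Treat the IDs in file1 as fixed strings (-F) and require whole-word matches (-w).
grep -Fwf file1 file2 > outputfile

# Or require a whole line of file2 to equal a line of file1 (-x).
grep -Fxf file1 file2 > outputfile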
bup-margin(1)                           General Commands Manual                           bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository, calculating the largest number
       of prefix bits shared between any two entries. This number, n, identifies the longest subset of
       SHA-1 you could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million objects (70 GB), and bup
       margin returned 45. That means a 46-bit hash would be sufficient to avoid all collisions among
       that set of objects; each object in that repository could be uniquely identified by its first
       46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of
       objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course, because
       SHA-1 hashes are essentially random, it's theoretically possible to use many more bits with far
       fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository
       by running bup margin occasionally to see if you're getting dangerously close to 160 bits.

OPTIONS
       --predict
              Guess the offset into each index file where a particular object will appear, and report
              the maximum deviation of the correct answer from the guess. This is potentially useful
              for tuning an interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really useful when used with
              --predict.

EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets like yours, all in one repository, and we
       would expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)

SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.

Bup unknown-                                                                              bup-margin(1)