Extracting a block of text from a large file using variables?

Hi UNIX Members,

I've been tasked with the following:
Extract a block of data that is in column form.
The data changes each time, so the procedure needs to be automated for future runs.

Please note the following:
$line is a value read from a file list that points to the data.
The file is called mol_1($line).opt.out, and the block of text I need is inside that .opt.out file.

So far I have:
Set variables ($ORB and $MULL) that hold the line numbers the text I require lies between.
######I would like to extract the text between these two variables#####
I'd like to write that text to a separate file, 'HOMO' (Highest Occupied Molecular Orbital), as a temporary data file I can work with.

Example from one file: $ORB = 18172, $MULL = 18278.
The lines to extract are 18172-18278 (these numbers change from file to file, which is why they are computed as variables).



The current script I have is:
Code:
#!/bin/bash -f
while read line
do

#mkdir HOMO
#cd ./B3LYP_631Gstar2.$line

OBEGIN=$(cat -n $line.opt.out | grep "ORBITAL ENERGIES" | tail -1 | cut -f 1)
OEND=4
ORB=$(( $OBEGIN + $OEND ))
#echo $ORB

MBEGIN=$(cat -n $line.opt.out | grep "MULLIKEN POPULATION ANALYSIS" | tail -1 | cut -f 1)
MEND=3
MULL=$(( $MBEGIN - $MEND ))

###### Works up to here; the above shows what I'm working with ######
### Below are the three methods I've tried ###

#sed '$ORB,$MULL!d' $line.opt.out > HOMO
#awk 'NR==$ORB,NR==$MULL' $line.opt.out | cut -f1  > HOMO
#grep -E '($ORB: ){5}$MULL' < $line.opt.out > HOMO

#echo $MULL
done<$1


Summary of task: extract a block of text from a large file, using the variables $ORB and $MULL as line numbers.
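The three commented-out attempts fail because the single quotes keep the shell from expanding $ORB and $MULL inside the sed, awk and grep commands (and grep matches individual lines, so it cannot extract a range anyway). A minimal sketch of one way to do the extraction, assuming $ORB and $MULL already hold the computed line numbers and that the result should go to a file named HOMO:

Code:
# Pass the line numbers into awk as awk variables, so no shell quoting tricks are needed.
awk -v start="$ORB" -v end="$MULL" 'NR >= start && NR <= end' "$line.opt.out" > HOMO

# Equivalent with sed: double quotes let the shell expand $ORB and $MULL.
sed -n "${ORB},${MULL}p" "$line.opt.out" > HOMO

Either line could replace the three commented-out attempts inside the while loop.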

#######EXTRA#####
If you also know how to extract data that is in columns (say I need only column 2 and column 4 out of 6 columns) into another file while keeping the column layout (rather like an Excel sheet), that would be much appreciated!
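A minimal sketch for the column part, assuming the columns are whitespace-separated and that HOMO is the block extracted above (HOMO_cols is just a hypothetical output name):

Code:
# Keep only columns 2 and 4 of the whitespace-separated data, tab-separated on output
# so the file opens cleanly in a spreadsheet. HOMO_cols is a hypothetical output name.
awk 'BEGIN { OFS = "\t" } { print $2, $4 }' HOMO > HOMO_cols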


Warm Regards, Klor

 
