UNIX for Dummies Questions & Answers
Extract lines with specific words with addition of 2 lines before and after
Post 302665301 by Amanda Low on Monday 2nd of July 2012 09:50:22 AM
Hi Jayan Jay,

Thank you very much for your reply.

Firstly, I would like to apologize for my unclear example. I actually have a big dataset from which I need to extract every line whose 3rd column contains a specific word ("abc1.1" in this example), together with the 2 lines above and the 2 lines below it. This pattern repeats across the whole dataset.

Is there any way to do this?

Thank you very much in advance.

Best wishes,
Amanda
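
For reference, one way to do this is with awk, buffering the file and marking a window of two lines around every match. This is only a sketch: the file name dataset.txt is a placeholder, and it assumes the 3rd column must equal "abc1.1" exactly (use $3 ~ pat for a substring match instead).

    awk -v pat="abc1.1" '
        { buf[NR] = $0 }                                             # remember every line by its number
        $3 == pat { for (i = NR-2; i <= NR+2; i++) keep[i] = 1 }     # mark the match plus 2 lines either side
        END { for (i = 1; i <= NR; i++) if (i in keep) print buf[i] }
    ' dataset.txt

If matching anywhere in the line is good enough, GNU grep's context options do the same job more simply: grep -B2 -A2 'abc1\.1' dataset.txt. Note that the awk version holds the whole file in memory, which is usually fine but worth keeping in mind for very large datasets.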
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

extract specific lines from file

Hi, how would I extract a range of lines from a file by using the line numbers? Ex: the file contains: 1 title 2 i want 3 this part 4 to be taken out 5 from this file 6 and sent to 7 another file 8 not needed 9 end of file. In this case, I want to copy line numbers 2 to 7 to a new... (2 Replies)
Discussion started by: apalex
2 Replies
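
For a fixed range of line numbers like this, sed or awk can do it in one line; the file names here are placeholders:

    sed -n '2,7p' file > newfile              # -n suppresses default output, so only lines 2-7 are printed
    awk 'NR >= 2 && NR <= 7' file > newfile   # same thing in awk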

2. Shell Programming and Scripting

Ignore some lines with specific words from file comparison

Hi all, I need help with this scenario. I have two files with multiple lines. I want to compare these two files while ignoring the lines which have words like Tran, Loc, Addr, Charge. Also, if a line has the word Credit, I want to tokenize it (i.e. the string after the character " ... (2 Replies)
Discussion started by: jakSun8
2 Replies
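
One way to compare while ignoring such lines is to filter both files first and diff the filtered output. A sketch assuming bash or ksh93 (for process substitution) and placeholder file names; it does not cover the truncated "Credit" tokenizing part of the question:

    diff <(grep -Ev 'Tran|Loc|Addr|Charge' file1) \
         <(grep -Ev 'Tran|Loc|Addr|Charge' file2)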

3. UNIX for Dummies Questions & Answers

Extract specific lines for graphing

Hello, I have a very large text file with about 2 million lines. Each of the lines starts like... SNP_12345678 A 1212, 121, 343, ... SNP_12345678 B 4567, 567, 454, ... and so on. I want to extract specific SNPs and plot them with GNUplot or Excel. The file is too large to be opened by text... (1 Reply)
Discussion started by: genehunter
1 Replies
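
Since each record starts with the SNP identifier, grep or awk can pull the wanted rows without ever opening the file in an editor. The identifiers come from the example; the file names and snp_ids.txt are placeholders:

    awk '$1 == "SNP_12345678"' bigfile.txt > one_snp.dat     # rows for a single SNP
    grep -wFf snp_ids.txt bigfile.txt > many_snps.dat        # rows for every ID listed (one per line) in snp_ids.txt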

4. Shell Programming and Scripting

Keep lines with specific words up in an order

I have a file with the following data: number|CREDIT_ID|NULL date|SYS_CREATION_DATE|NULL varchar2|GGS_COMMIT_CHAR|NULL varchar2|GGS_OP_TYPE|NULL number|GGS_SCN|NULL| number|GGS_LOG_SEQ|NULL number|GGS_LOG_POS|NULL number|GGS_ORACREC_SCN|NULL varchar2|BATCH_ID|NULL char|GGS_IMAGE_TYPE|NULL ... (6 Replies)
Discussion started by: kolesunil
6 Replies
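
The question is truncated, but one reading is: move the lines for certain field names to the top, in a chosen order. A sketch assuming the wanted names sit one per line, in the desired order, in a file called keys.txt (a hypothetical name), and that datafile is the pipe-delimited file above:

    while IFS= read -r key; do
        awk -F'|' -v k="$key" '$2 == k' datafile       # lines whose 2nd field is this key, in key order
    done < keys.txt > reordered.txt
    awk -F'|' 'NR==FNR { want[$1]; next } !($2 in want)' keys.txt datafile >> reordered.txt   # then everything else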

5. Shell Programming and Scripting

Extract specific lines from a file

Hi, I have a file which contains DDL statements - CREATE TABLE, CREATE INDEX, ALTER TABLE, etc. I have to pick only the CREATE TABLE statements from the file. Source : ---------------------------------------------- --DDL for table abc -------------------------------------------- CREATE TABLE... (4 Replies)
Discussion started by: newb
4 Replies
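
Assuming each statement ends with a semicolon, a range pattern picks up every CREATE TABLE block from its first line through the terminating ';'. The file names are placeholders:

    sed -n '/CREATE TABLE/,/;/p' ddl.sql > create_tables.sql
    awk '/CREATE TABLE/ { p = 1 } p; /;/ { p = 0 }' ddl.sql      # awk equivalent: toggle printing on at CREATE TABLE, off after ';'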

6. UNIX Desktop Questions & Answers

Display a specific words from a multiple lines

Well, I am not very familiar with this kind of thing, but I am going to explain exactly what I am looking for, so hopefully someone can figure it out :) I have a command that shows memory usage beside the process name, for example (the command output): 500 kb process_1 600 kb process_2 700 kb... (4 Replies)
Discussion started by: Portabello
4 Replies
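
The snippet is truncated, but if the goal is to pick out particular fields from output shaped like "500 kb process_1", awk and sort can do it. Here mem_command is only a placeholder for whatever command produces that output, and the examples assume the unit is always the same:

    mem_command | awk '{ print $3 }'        # just the process names
    mem_command | sort -n | tail -5         # the five largest consumers (numeric sort on the leading figure)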

7. Shell Programming and Scripting

how to print specific lines or words

Hi, please have a look at the records below. STG_HCM_STATE_DIS_TAX_TBL.1207.Xfm: The value of the row is: EMPLID = 220677 COMPANY = 919 BALANCE_ID = 0 BALANCE_YEAR = 2012 STG_HCM_STATE_DIS_TAX_TBL.1207.Xfm: ORA-00001: unique constraint (SYSADM.PS_TAX_BALANCE) violated ... (4 Replies)
Discussion started by: Sachin Lakka
4 Replies
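
The question is truncated, but pulling either whole error lines or a single field out of log text like this is a grep/sed job; the log file name and the choice of field are assumptions:

    grep 'ORA-' logfile.txt                                     # only the Oracle error lines
    sed -n 's/.*EMPLID = \([0-9][0-9]*\).*/\1/p' logfile.txt    # only the EMPLID values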

8. Shell Programming and Scripting

Extract lines with unique value using a specific column

Hi there, I need help with extracting data from a tab-delimited file which looks like this #CHROM POS ID REF ALT Human Cow Dog Mouse Lizard chr2 3033 . G C 0/0 0/0 0/0 1/1 0/0 chr3 35040 . G T 0/0 0/0 ./. 1/1 0/1 chr4 60584 . T G 1/1 1/1 0/1 1/1 0/0 chr10 7147815 . G A 0/0 1/1 0/0 0/0... (9 Replies)
Discussion started by: houkto
9 Replies
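
Taking the title literally (keep only the rows whose value in a chosen column appears exactly once in the file), a two-pass awk works. Here column 2 (POS) and the file name variants.txt are assumptions:

    # first pass counts every column-2 value, second pass prints only the rows whose value occurred once
    awk 'NR == FNR { count[$2]++; next } count[$2] == 1' variants.txt variants.txt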

9. Shell Programming and Scripting

ksh sed - Extract specific lines with multiple occurrences of interesting lines

Data file example: I look for primary and * to isolate the interesting slot number. slot=`sed '/^primary$/,/\*/!d' filename | tail -1 | sed s'/*//' | awk '{print $1" "$2}'` Now I want to get the Touch line for only the associated slot number, in this case, because the asterisk... (2 Replies)
Discussion started by: popeye
2 Replies
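
Without seeing the data file it is hard to be exact, but one possible continuation, assuming the Touch lines also carry the slot identifier captured in $slot, is simply a second filter:

    slot=$(sed '/^primary$/,/\*/!d' filename | tail -1 | sed 's/\*//' | awk '{print $1" "$2}')
    grep 'Touch' filename | grep -F "$slot"      # only the Touch line(s) mentioning that slot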

10. Shell Programming and Scripting

Extract specific lines based on another file

I have a folder containing text files. I need to extract specific lines from the files in this folder based on another file, input.txt. How can I do this with awk/sed? file1 ARG 81.9 8 81.9 0 LEU 27.1 9 27.1 0 PHE .0 10 .0 0 ASP 59.8 11 59.8 0 ASN 27.6 12 27.6 0 ALA .0 13 .0 0... (5 Replies)
Discussion started by: alanmathew84
5 Replies
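
One common pattern for this, assuming input.txt lists the first-column keys to keep (the residue names here), one per line, and with folder/*.txt as a placeholder path:

    for f in folder/*.txt; do
        awk 'NR == FNR { want[$1]; next } $1 in want' input.txt "$f" > "${f%.txt}.extracted"
    done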
H5MATH(1)                                   h5utils                                   H5MATH(1)

NAME
       h5math - combine/create HDF5 files with math expressions

SYNOPSIS
       h5math [OPTION]... OUTPUT-HDF5FILE [INPUT-HDF5FILES...]

DESCRIPTION
       h5math takes any number of HDF5 files as input, along with a mathematical expression, and combines them to produce a new HDF5 file.

       HDF5 is a free, portable binary format and supporting library developed by the National Center for Supercomputing Applications at the University of Illinois in Urbana-Champaign. A single h5 file can contain multiple data sets; by default, h5math creates a dataset called "h5math", but this can be changed via the -d option, or by using the syntax HDF5FILE:DATASET. The -a option can be used to append new datasets to an existing HDF5 file. The same syntax is used to specify the dataset used in the input file(s); by default, the first dataset (alphabetically) is used.

       A simple example of h5math's usage is:

              h5math -e "d1 + 2*d2" out.h5 foo.h5 bar.h5:blah

       which produces a new file, out.h5, by adding the first dataset in foo.h5 with twice the "blah" dataset in bar.h5. In the expression (specified by -e), the first input dataset (from left to right) is referred to as d1, the second as d2, and so on.

       In addition to input datasets, you can also use the x/y/z coordinates of each point in the expression, referenced by "x", "y", and "z" variables (for the first three dimensions) as well as a "t" variable that refers to the last dimension. By default, these are integers starting at 0 at the corner of the dataset, but the -0 option will change the x/y/z origin to the center of the dataset (t is unaffected), and the -r res option will specify the "resolution", dividing the x/y/z coordinates by res.

       All of the input datasets must have the same dimensions, which are also the dimensions of the output. If there are no input files, and you are defining the output purely by a mathematical formula, you can specify the dimensions of the output explicitly via the -n size option, where size is e.g. "2x2x2".

       Sometimes, however, you want to use only a smaller-dimensional "slice" of multi-dimensional data. To do this, you specify coordinates in one (or more) slice dimension(s), via the -xyzt options.

OPTIONS
       -h     Display help on the command-line options and usage.

       -V     Print the version number and copyright info for h5math.

       -v     Verbose output.

       -a     If the HDF5 output file already exists, append the data as a new dataset rather than overwriting the file (the default behavior). An existing dataset of the same name within the file is overwritten, however.

       -e expression
              Specify the mathematical expression that is used to construct the output (generally in " quotes to group the expression as one item in the shell), in terms of the variables for the input datasets and the coordinates as described above. Expressions use a C-like infix notation, with most standard operators and mathematical functions (+, sin, etc.) being supported. This functionality is provided (and its features determined) by GNU libmatheval.

       -f filename
              Name of a text file to read the expression from, if no -e expression is specified. Defaults to stdin.

       -x ix, -y iy, -z iz, -t it
              This tells h5math to use a particular slice of a multi-dimensional dataset. e.g. -x uses the subset (with one less dimension) at an x index of ix (where the indices run from zero to one less than the maximum index in that direction). Here, x/y/z correspond to the first/second/third dimensions of the HDF5 dataset. The -t option specifies a slice in the last dimension, whichever that might be. See also the -0 option to shift the origin of the x/y/z slice coordinates to the dataset center.

       -0     Shift the origin of the x/y/z slice coordinates to the dataset center, so that e.g. -0 -x 0 (or more compactly -0x0) returns the central x plane of the dataset instead of the edge x plane. (-t coordinates are not affected.) This also shifts the origin of the x/y/z variables in the expression so that 0 is the center of the dataset.

       -r res
              Use a resolution res for x/y/z (but not t) variables in the expression, so that the data "grid" coordinates are divided by res. The default res is 1. For example, if the x dimension has 21 grid steps, setting a res of 20 will mean that x variables in the expression run from 0.0 to 1.0 (or -0.5 to 0.5 if -0 is specified), instead of 0 to 20. -r does not affect the coordinates used for slices, which are always integers.

       -n size
              The output dataset must be the same size as the input datasets. If there are no input datasets (if you are defining the output purely by a formula), then you must specify the output size manually with this option: size is of the form MxNxLx... (with M, N, L being integers) and may be of any dimensionality.

       -d name
              Write to dataset name in the output; otherwise, the output dataset is called "data" by default. Also use dataset name in the input; otherwise, the first input dataset (alphabetically) in a file is used. Alternatively, use the syntax HDF5FILE:DATASET (which overrides the -d option).

BUGS
       Send bug reports to S. G. Johnson, stevenj@alum.mit.edu.

AUTHORS
       Written by Steven G. Johnson. Copyright (c) 2005 by the Massachusetts Institute of Technology.

h5utils                                   May 23, 2005                                   H5MATH(1)
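
As a complement to the manual text above, here are two illustrative invocations exercising the options described; they are a sketch only (not tested here), and the file and dataset names are invented:

    h5math -n 100x100 -0 -r 100 -e "exp(-(x*x + y*y))" gauss.h5    # no inputs: build a 100x100 dataset purely from the formula
    h5math -0 -x 0 -d slice -e "d1" xslice.h5 data.h5              # write the central x plane of data.h5's first dataset to xslice.h5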