Pairing the nth elements on multiple lines iteratively
Hello,
I'm trying to create a word translation section of a book. Each entry in the word list will come from a set of linguistically analyzed texts.
Each sentence in the text has the following format. The first element in each line is the "name" of the line (i.e. "A","B","C","D"). The first line is the object language, the second line is a morpheme gloss, the third and fourth lines are stem/word-level translations:
What I'd like to do is pull the nth element from 2 or more lines (not counting the line "name") and output them as a pair (or n-tuple) on the same line, later to be exported as columns to a spreadsheet. So for the above, I'd like:
Note that the initial "name" elements occur several thousand times in the file, and I'd like to take care of all lines so named at the same time. Thanks, any ideas?
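In awk terms, the pairing described above amounts to transposing the words of a group of named lines. Here is a minimal sketch of the idea; the input lines, the assumption that groups of named lines are separated by a blank line, and the space output separator are all invented for illustration, not taken from the actual data:

```shell
# Hypothetical input: each group of named lines ("A ...", "B ...") is one
# sentence; groups are separated by a blank line; field 1 is the line name.
result=$(printf 'A w1 w2 w3\nB g1 g2 g3\n' | awk '
function flush(   i, j, row) {
    # Emit one output line per word position, pairing the nth word
    # of every line in the current group (field 1, the name, is skipped).
    for (i = 2; i <= max; i++) {
        row = ""
        for (j = 1; j <= nrow; j++)
            row = row (j > 1 ? " " : "") word[j, i]
        print row
    }
    nrow = 0; max = 0
}
/^$/ { flush(); next }          # blank line ends a group
{
    nrow++
    if (NF > max) max = NF
    for (i = 2; i <= NF; i++) word[nrow, i] = $i
}
END { if (nrow) flush() }       # flush the final group
')
printf '%s\n' "$result"
```

Because the group state is reset on every blank line, the same line names can recur thousands of times through the file without being mixed together, provided the groups really are blank-line separated.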
Last edited by John Lyon; 08-01-2015 at 02:19 PM..
Reason: adding more info
Note that the initial "name" elements occur several thousand times in the file, and I'd like to take care of all lines so named at the same time.
If you're saying that a single "name" can appear more than once in your input file and that when that happens the input lines need to be combined somehow, you need to give us sample input where that condition exists and show us how that is supposed to affect the output.
First, save the following in a file named tester:
And, if, and only if, you want to run this on a Solaris/SunOS system, change awk in the script to /usr/xpg4/bin/awk or nawk. Then make the script executable:
And, if you have an input file named file containing your sample input and a second file named file2 containing:
then the command:
produces the output you requested:
and the command:
produces the output:
which shows that you can use a comma (or any other character string you want) as the output field separator, and that the output fields are correctly aligned even for input files where the number of input columns is not constant. (If your input always has five input columns and you always want to produce four output rows, you can simplify this script somewhat; but I'll leave that as a simple exercise for the reader.)
Thanks to you both for your replies. I was trying to keep it simple, but I should've added more information, I think. Here goes:
The data come from a LaTeX file, which uses a package called "Expex" which formats interlinear analyses of a non-English language.
The following two examples show how the data is laid out. The first line, "\gla", is the object language; the second line, "\glb", is the underlying form; the third line, "\glc", is the morpheme gloss; the fourth line, also "\glc", is the word translation (the package doesn't allow "\gld", for whatever reason); and the last line, "\glft", is the sentence translation. As you can see, the number of words varies from example to example, since natural-language sentences may be shorter or longer.
Each "word" is enclosed in curly brackets in the first two lines (though other sets of curly brackets may be nested within words), but in the third and fourth lines the words are separated only by spaces. The curly brackets are needed to delimit words in the first two lines because some LaTeX commands (e.g. "\ts" below) require a blank space after them.
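Splitting a \gla or \glb line into its top-level brace groups can be sketched by tracking brace depth character by character. The sample line below is invented, and the sketch assumes the braces are balanced:

```shell
# Hypothetical brace-delimited line; inner braces must stay inside a word,
# and anything outside braces (the "\gla" name, inter-word spaces) is dropped.
result=$(printf '%s\n' '\gla {a {b} c} {d} {e f}' | awk '
{
    depth = 0; word = ""; out = ""
    for (k = 1; k <= length($0); k++) {
        ch = substr($0, k, 1)
        if (ch == "{") {
            if (depth++ > 0) word = word ch     # keep nested open braces
        } else if (ch == "}") {
            if (--depth > 0) word = word ch     # keep nested close braces
            else {                              # depth 0: word complete
                out = out (out == "" ? "" : "\n") word
                word = ""
            }
        } else if (depth > 0) word = word ch    # chars inside a word
    }
    print out
}')
printf '%s\n' "$result"
```

Each top-level group becomes one output line, with nested braces preserved intact inside the word.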
The \glft line may be ignored, but what I'd like exactly is the following, where "&" denotes a column separator in LaTeX and "\\" indicates a newline. Each line has 4 "words", i.e. the nth word in each of the first four lines in the examples above.
Etcetera. Once the first example is done, the second example would be appended to the above list. Eventually each line will be sorted alphabetically by the first "column". It'd also be nice to be able to choose which input lines to include in the output, though I'd greatly appreciate any more assistance you could give in obtaining the basic result just outlined. Thanks again.
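The pairing-into-table-rows step described above could be sketched roughly as follows. This is a deliberate simplification with invented input: it assumes the braces have already been stripped, every word is separated by a single space, and field 1 is the line name:

```shell
# Hypothetical pre-cleaned example: four gloss lines, name in field 1.
result=$(printf '\\gla w1 w2\n\\glb u1 u2\n\\glc m1 m2\n\\glc t1 t2\n' | awk '
{
    nrow++
    for (i = 2; i <= NF; i++) word[nrow, i - 1] = $i
    if (NF - 1 > ncol) ncol = NF - 1
}
END {
    # One LaTeX table row per word position: "a & b & c & d \\"
    for (i = 1; i <= ncol; i++) {
        row = word[1, i]
        for (j = 2; j <= nrow; j++) row = row " & " word[j, i]
        print row " \\\\"
    }
}')
printf '%s\n' "$result"
```

The "&" column separators and trailing "\\" match the LaTeX table format requested above; handling the brace-delimited first two lines would need the extra splitting step discussed earlier.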
Please don't give us "Etcetera."! Show us the exact output you are trying to produce from the 11 line sample input you showed us.
We need to see what is supposed to be done in the output when there are unequal numbers of "words" in input lines.
We need to see how the output lines corresponding to groups of input lines are supposed to be separated.
If you want output sorted, you also need to explain MUCH more clearly what the sort key is, and how sorting on the 1st column of the output is going to maintain groups of associated output lines. (The sort utility sorts lines, not line groups!)
You have been given sample awk scripts that work with the sample input you originally provided. Have you tried modifying those scripts to work with your (radically) different real input? What did you try? Where did you get stuck?
Thanks for the reply, and apologies for being vague; I'm new to all this. To be clear, the following input consists of two example sentences. There is a combined total of 15 curly-bracket-enclosed words in the \gla lines of these two examples (3 in the first, 12 in the second):
Given this input, this is the initial output I'm looking for:
Then, these 15 lines would be sorted, the sort key being the first letter of the first word in each line, so the above 15 lines (corresponding to the total of 15 words in the \gla lines of the two unmodified examples) would be sorted like this:
Lines 5/6 and lines 8/9 above are duplicates, so the duplicate entries will be removed from the list, yielding 13 lines:
The result will be an alphabetized vocabulary list, ready to be dropped into a "tabularx" table environment in LaTeX.
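Since each vocabulary entry ends up as one complete output line, the sort-and-dedup step just described can be done with standard tools. A sketch with invented entries:

```shell
# Hypothetical vocabulary file: one "word & gloss \\" entry per line,
# with one exact duplicate to be removed.
result=$(printf 'beta & b \\\\\nalpha & a \\\\\nalpha & a \\\\\n' |
    sort -f |     # sort alphabetically, ignoring case
    uniq)         # drop adjacent duplicate lines
printf '%s\n' "$result"
```

Sorting first guarantees that duplicate entries are adjacent, which is what uniq requires; this removes only lines that are exact duplicates, matching the example above.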
I had some luck with danmero's suggestion:
However, it only worked if (a) all of the extra blank spaces within "words" were removed (since it seems to use blank spaces as word delimiters), and (b) only one example at a time was processed (since I think it assumes "line names" do not occur multiple times). Both of these issues are my fault for not being clear in my initial post about the nature of the data I'm working with. Also, I don't yet know enough about awk to identify what in the above command needs changing. Thanks for your assistance and patience! I hope this helps to clarify.
I'm confused; I thought you said that each <space> character in the 3rd and later lines of each input "sentence" separated "words". So, in the 12th line of your desired output:
why are there two <space> characters in the middle of the single "word" marked in red from the following input line?:
I thought I could modify my earlier suggested awk script to handle your new requirements, but since my code thinks there are two additional fields in the 9th line of your sample input, it gets confused and produces the wrong output.