AWK processing of a three-column file (Post 302591871 by chrisjorg, 21 January 2012, 10:31 AM)
OK, thanks. Now my output data is of the form:

Code:
4.61531
4.59969
4.45344
4.245
4.24344
4.0775
3.90438
3.86375
3.84125
3.76875
3.63406
3.39844
3.73563
3.65938
3.60906
3.31
3.73688
3.61813
3.45938
....etc...

I have named the two data files c7eq.dat (25 rows, 1 column) and c7ax.dat (41 rows, 1 column).



In my Fortran program I wish to process this data:

Code:
Program average_2
      implicit none
      double precision, dimension (25:1) :: H
      double precision, dimension (41:1) :: G

      open (unit=2, file="c7eq.dat", form="unformatted")
      read (2) G
      open (unit=3, file="c7ax.dat", form="unformatted")
      read (3) H

However I get the error:
forrtl: severe (24): end-of-file during read, unit 2, file /home/guest/c7eq.dat

Can anyone help me here?

---------- Post updated at 10:31 AM ---------- Previous update was at 09:06 AM ----------

The error output further says:

Code:
Image              PC        Routine   Line      Source
Exercise8.exe      08088DE2  Unknown   Unknown   Unknown
Exercise8.exe      08087D59  Unknown   Unknown   Unknown
Exercise8.exe      08087CD5  Unknown   Unknown   Unknown
Exercise8.exe      0805FB82  Unknown   Unknown   Unknown
Exercise8.exe      0805F8B2  Unknown   Unknown   Unknown
Exercise8.exe      0805568F  Unknown   Unknown   Unknown
Exercise8.exe      08049DCC  Unknown   Unknown   Unknown
Exercise8.exe      08049BE9  Unknown   Unknown   Unknown
libc.so.6          F7C1C390  Unknown   Unknown   Unknown
Exercise8.exe      08049B11  Unknown   Unknown   Unknown
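
A few things in the program above are worth noting. awk writes plain text, so opening the files with form="unformatted" does not match their contents: an unformatted sequential read expects binary record markers and will typically fail with an end-of-file or record-length error on a text file. Also, dimension (25:1) and dimension (41:1) declare arrays whose lower bound is larger than their upper bound, i.e. zero-size arrays, and the read statements pair the 41-element array with the 25-line file and vice versa. Below is a minimal sketch of how the two files might be read instead, assuming each simply holds one value per line as shown above; the corrected dimensions, the list-directed reads, and the file/array pairing are assumptions on my part, not code from the original post.

Code:
! Minimal sketch, not the original program: assumes c7eq.dat holds 25 values
! and c7ax.dat holds 41 values, one per line, as plain text written by awk.
program average_2
      implicit none
      double precision, dimension(25) :: H   ! was (25:1), a zero-size array
      double precision, dimension(41) :: G   ! was (41:1), a zero-size array

      ! Plain-text files need a formatted read; the default form is "formatted",
      ! and the list-directed read (*) keeps consuming records (one value per
      ! line here) until the whole array is filled.
      open (unit=2, file="c7eq.dat", status="old")
      read (2, *) H
      close (2)

      open (unit=3, file="c7ax.dat", status="old")
      read (3, *) G
      close (3)

      print *, "first values:", H(1), G(1)
end program average_2

With gfortran, for example, this could be built and run as "gfortran average_2.f90 -o average_2 && ./average_2", assuming the two .dat files are in the working directory.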
 
