UNIX for Beginners Questions & Answers: Calculating correlations across columns in awk
Post 303026254 by Corona688 on Thursday 22nd of November 2018 10:53:32 AM
Apologies, I thought the data was old.

By simple correlation do you mean Pearson's? And does your data file actually have the double newlines and odd spacing shown?
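If Pearson's is what is meant, it can be done in a single awk pass. The sketch below assumes whitespace-separated input with the two series in columns 1 and 2 (the sample values are made up for illustration):

```shell
# Pearson's r between columns 1 and 2 of whitespace-separated input.
# Single pass: accumulate the sums, then apply
#   r = (n*Sxy - Sx*Sy) / sqrt((n*Sxx - Sx^2) * (n*Syy - Sy^2))
printf '1 2\n2 4\n3 5\n4 4\n5 5\n' | awk '
{
    n++; sx += $1; sy += $2
    sxx += $1 * $1; syy += $2 * $2; sxy += $1 * $2
}
END {
    num = n * sxy - sx * sy
    den = sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    if (den > 0) printf "%.4f\n", num / den
    else print "undefined (zero variance)"
}'
# prints 0.7746
```

The zero-variance guard matters: if either column is constant, the denominator is zero and r is undefined.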
 

mcxarray(1)							  USER COMMANDS 						       mcxarray(1)

  NAME
      mcxarray - Transform array data to MCL matrices

  SYNOPSIS
      mcxarray [options]

      mcxarray	[-data	fname  (input  data  file)]  [-imx  fname  (input matrix file)] [-co num ((absolute) cutoff for output values (required))]
      [--pearson (use Pearson correlation (default))] [--spearman (use Spearman rank correlation)] [-fp <mode> (use fingerprint  measure)]  [--dot
      (use  dot product)] [--cosine (use cosine)] [-skipr <num> (skip <num> data rows)] [-skipc <num> (skip <num> data columns)] [-o fname (output
      file fname)] [-write-tab <fname> (write row labels to file)] [-l <num> (take labels from column <num>)] [-digits <num>  (output  precision)]
      [--write-binary  (write  output  in  binary format)] [-t <int> (use <int> threads)] [-J <intJ> (a total of <intJ> jobs are used)] [-j <intj>
      (this job has index <intj>)] [-start <int> (start at column <int> inclusive)] [-end <int> (end  at  column  <int>  EXclusive)]  [--transpose
      (work  with the transposed data matrix)] [--rank-transform (rank transform the data first)] [-tf spec (transform result network)] [-table-tf
      spec (transform input table before processing)] [-n mode (normalize input)] [--zero-as-na  (treat  zeroes  as  missing  data)]  [-write-data
      <fname>  (write  data  to file)] [-write-na <fname> (write NA matrix to file)] [--job-info (print index ranges for this job)] [--help (print
      this help)] [-h (print this help)] [--version (print version information)]

  DESCRIPTION
      mcxarray can either read a flat file containing array data (-data) or a matrix file satisfying the mcl input format (-imx).  In  the  former
      case it will by default work with the rows as the data vectors. In the latter case it will by default work with the columns as the data vec-
      tors (note that mcl matrices are presented as a listing of columns).  This can be changed for both using the --transpose option.

      The input data may contain missing data in the form of empty columns, NA values (not available/applicable), or NaN values  (not  a  number).
      The  program keeps track of these, and when computing the correlation between two rows or columns ignores all positions where any one of the
      two has missing data.

  OPTIONS
      -data fname (input data file)
	Specify the data file containing the expression values.  It should be tab-separated.

      -imx fname (input matrix file)
	The expression values are read from a file in mcl matrix format.

      --pearson (use Pearson correlation (default))
      --spearman (use Spearman rank correlation)
      --cosine (use cosine)
      --dot (use the dot product)
	Use one of these to specify the correlation measure. Note that the dot product is not normalised and should only be used  with	very  good
	reason.

      -fp <mode> (specify fingerprint measure)
	Fingerprints  are used to define an entity in terms of it having or not having certain traits. This means that a fingerprint can be repre-
	sented by a boolean vector, and a set of fingerprints can be represented by an array of such vectors. In the presence of many  traits  and
	entities  the  dimensions of such a matrix can grow large. The sparse storage employed by MCL-edge is ideally suited to this, and mcxarray
	is ideally suited to the computation of all pairwise comparisons between such fingerprints.  Currently mcxarray  supports  five  different
	types  of  fingerprint,  described  below.   Given  two fingerprints, the number of traits unique to the first is denoted by a, the number
	unique to the second is denoted by b, and the number that they have in common is denoted by c.

	hamming
	  The Hamming distance, defined as a+b.

	tanimoto
	  The Tanimoto similarity measure, c/(a+b+c).

	cosine
	  The cosine similarity measure, c/sqrt((a+c)*(b+c)).

	meet
	  Simply the number of shared traits, identical to c.

	cover
	  A normalised and non-symmetric similarity measure, representing the fraction of traits shared relative to the number of traits by a sin-
	  gle entity.  This gives the value c/(a+c) in one direction, and the value c/(b+c) in the other.
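	For a concrete sense of how the five measures relate, they can be evaluated directly from the counts; the values a=2, b=3, c=5 below are made up for illustration:

```shell
# Evaluate the fingerprint measures for hypothetical trait counts:
# a = traits unique to the first entity, b = unique to the second,
# c = traits shared by both.
awk -v a=2 -v b=3 -v c=5 'BEGIN {
    printf "hamming  %d\n",   a + b                              # a+b
    printf "tanimoto %.4f\n", c / (a + b + c)                    # c/(a+b+c)
    printf "cosine   %.4f\n", c / sqrt((a + c) * (b + c))        # c/sqrt((a+c)(b+c))
    printf "meet     %d\n",   c                                  # c
    printf "cover    %.4f %.4f\n", c / (a + c), c / (b + c)      # both directions
}'
```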

      -skipr <num> (skip <num> data rows)
	Skip the first <num> data rows.

      -skipc <num> (skip <num> data columns)
	Ignore the first <num> data columns.

      -l <num> (take labels from column <num>)
	Specifies to construct a tab of labels from this data column.  The tab can be written to file using -write-tab fname.

      -write-tab <fname> (write row labels to file)
	Write a tab file. In the simple case where the labels are in the first data column it is sufficient to issue -skipc 1.  If more data columns need to be skipped one must explicitly specify the data column to take labels from with -l <num>.

      -t <int> (use <int> threads)
      -J <intJ> (a total of <intJ> jobs are used)
      -j <intj> (this job has index <intj>)
	Computing all pairwise correlations is time-intensive for large input.  If you have multiple CPUs available, consider using that many threads.  It is additionally possible to spread the computation over multiple jobs/machines.  Conceptually, each job takes a number of threads from the total thread pool.  The number of threads (as specified by -t) currently must be the same for all jobs, as each job uses it to infer its own set of tasks.  The following sets of options, given to three separate commands, define three jobs, each running four threads.

	-t 4 -J 3 -j 0
	-t 4 -J 3 -j 1
	-t 4 -J 3 -j 2

      --job-info (print index ranges for this job)
      -start <int> (start at column <int> inclusive)
      -end <int> (end at column <int> EXclusive)
	--job-info can be used to list the set of column ranges to be processed by the job as a result of the command line options -t, -J, and -j.
	If a job has failed, this option can be used to manually split those ranges into finer chunks, each to be processed as a new sub-job spec-
	ified with -start and -end.  With the latter two options, it is impossible to use parallelization of any kind (i.e. any of the -t, -J, and
	-j options).

      -o fname (output file fname)
	Output file name.

      -digits <num> (output precision)
	Specify the precision to use in native interchange format.

      --write-binary (write output in binary format)
	Write output matrices in native binary format.

      -co num ((absolute) cutoff for output values)
	Output	values of magnitude smaller than num are removed (set to zero).  Thus, negative values are removed only if their positive counter-
	part is smaller than num.
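	As an illustration of the cutoff rule (the stream of correlation values and the threshold 0.5 below are hypothetical):

```shell
# Absolute cutoff of 0.5: values with |v| < 0.5 are set to zero,
# so -0.8 survives but 0.4 and -0.3 do not.
printf '0.9\n-0.8\n0.4\n-0.3\n0.6\n' | awk -v co=0.5 '
{
    v = $1
    mag = (v < 0 ? -v : v)
    print (mag < co ? 0 : v)
}'
```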

      --transpose (work with the transpose)
	Work with the transpose of the input data matrix.

      --rank-transform (rank transform the data first)
	The data is rank-transformed prior to the computation of pairwise measures.
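	A rank transform replaces each value by its position in the sorted order. A minimal awk sketch for a single column is below; note that it breaks ties by input order, and mcxarray's exact tie handling (e.g. midranks for Spearman) is not reproduced here:

```shell
# Replace each value in a single column by its 1-based rank.
# O(n^2) nested scan: rank of v[i] = 1 + number of strictly smaller
# values (plus earlier equal values, to break ties by input order).
printf '3.2\n1.5\n9.9\n4.4\n' | awk '
{ v[NR] = $1 }
END {
    for (i = 1; i <= NR; i++) {
        rank = 1
        for (j = 1; j <= NR; j++)
            if (v[j] < v[i] || (v[j] == v[i] && j < i)) rank++
        print rank
    }
}'
# prints 2, 1, 4, 3 on separate lines
```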

      -write-data <fname> (write data to file)
	This writes the data that was read in to file.	If --spearman is specified the data will be rank-transformed.

      -write-na <fname> (write NA matrix to file)
	This writes all positions for which no data was found to file, in native mcl matrix format.

      --zero-as-na (treat zeroes as missing data)
	This option can be useful when reading data with the -imx option, for example after it has been loaded from label input  by  mcxload.	An
	example  case  is  the processing of a large number of probe rankings, where not all rankings contain all probe names. The rankings can be
	loaded using mcxload with a tab file containing all probe names.  Probes that are present in the ranking are given a positive ordinal num-
	ber  reflecting  the ranking, and probes that are absent are implicitly given the value zero. With the present option mcxarray will handle
	the correlation computation in a reasonable way.

      -n mode (normalization mode)
	If mode is set to z the data will be normalized based on z-score. No other modes are currently supported.
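	A z-score maps each value x to (x - mean)/sd. The sketch below uses the population standard deviation; whether mcxarray uses the population or sample convention is an assumption not confirmed by this page:

```shell
# z-score normalize one column: subtract the mean, divide by the
# population standard deviation sqrt(E[x^2] - mean^2).
printf '2\n4\n4\n4\n5\n5\n7\n9\n' | awk '
{ v[NR] = $1; s += $1; ss += $1 * $1 }
END {
    mean = s / NR
    sd = sqrt(ss / NR - mean * mean)
    for (i = 1; i <= NR; i++) printf "%.1f\n", (v[i] - mean) / sd
}'
```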

      -tf spec (transform result network)
      -table-tf spec (transform input table before processing)
	The transformation syntax is described in mcxio(5).

      --help (print help)
      -h (print help)

      --version (print version information)

  AUTHOR
      Stijn van Dongen.

  SEE ALSO
      mcl(1), mclfaq(7), and mclfamily(7) for an overview of all the documentation and the utilities in the mcl family.

  mcxarray 12-068						      8 Mar 2012							 mcxarray(1)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.