Dear all,
I have an AWK script that computes word frequencies. However, I am now interested in the frequency of chunked data: I have already produced valid chunks of running text, with each chunk on its own line, and I need a script to count the frequency of each such line. A pseudo sample is provided below.
The output would be
I have been able to sort the data so that all identical strings are grouped together.
My question is: how do I write the script so that a whole line is treated as a single entity, matching lines (e.g. "I have come till there") are treated as one unit, and a frequency counter is kept for each?
My awk script handles space as the delimiter, but I do not know how to make it treat the start and end of a line (the CRLF line ending) as delimiters.
I am sure this tool will be useful to people who work with chunked big data.
Many thanks
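A minimal sketch for this: awk already reads one line per record, so the whole line ($0) can be used directly as an associative-array key, and no delimiter tricks for line starts or ends are needed. The sample chunks below are hypothetical; stripping a trailing \r guards against CRLF endings making otherwise identical chunks count separately.

```shell
# Hypothetical chunk file: one chunk of running text per line.
printf '%s\n' 'I have come till there' 'a valid chunk' 'I have come till there' > chunks.txt

# Strip a trailing CR (in case of CRLF endings), then use the whole
# line as the array key and count how often each chunk occurs.
awk '{ sub(/\r$/, ""); count[$0]++ }
     END { for (chunk in count) print count[chunk], chunk }' chunks.txt | sort -rn
```

The `sort -rn` at the end ranks the chunks by descending frequency.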
I've got a problem I'm hoping other, more experienced programmers have had to deal with at some point in their careers and can help me with: how to split full names that were chunked together into one field in an old database into separate, more meaningful fields.
I'd like to get the records that nicely fit... (2 Replies)
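For the records that do fit a simple pattern, one hedged sketch (the pipe-delimited layout and the "last word is the surname" rule are assumptions about the data, not facts from the post):

```shell
# Hypothetical input: pipe-delimited records with the full name in field 1.
printf 'John Smith|123 Main St\nMary Anne Jones|456 Oak Ave\n' > people.txt

# Treat the last space-separated word as the surname and everything
# before it as the given name(s), emitting them as separate fields.
awk -F'|' '{
    n = split($1, parts, " ")
    last = parts[n]
    first = parts[1]
    for (i = 2; i < n; i++) first = first " " parts[i]
    print first "|" last "|" $2
}' people.txt
```

Records that don't match this heuristic (single-word names, suffixes like "Jr.") would still need manual review.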
I have a large file with fields delimited by '|', and I want to run some analysis on it. Specifically, I want to count how many times each field is populated, i.e. list the frequency of population for each field.
I am in a SunOS environment.
Thanks,
- CB (3 Replies)
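One way to sketch this: increment a per-column counter whenever a field is non-empty. The sample data below is invented; on SunOS/Solaris, run this with nawk or /usr/xpg4/bin/awk rather than the old /usr/bin/awk.

```shell
# Sample '|'-delimited file; empty fields are blank between the bars.
printf 'a|b|\n|b|c\na||c\n' > data.txt

# For each record, bump a per-column counter when the field is
# non-empty; report how often each column was populated.
awk -F'|' '{
    for (i = 1; i <= NF; i++) if ($i != "") pop[i]++
    if (NF > max) max = NF
} END {
    for (i = 1; i <= max; i++) printf "field %d: %d\n", i, pop[i] + 0
}' data.txt
```

The `pop[i] + 0` forces a numeric 0 for columns that were never populated.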
Dear all, I need help.
I have this data:
ID,A,B,C,D,E,F,G,H --> header
917188,4,1,2,1,4,6,3,5 --> data
I want this output:
ID,OUT1,OUT2,OUT3 --> header
917188,3,3,2
where OUT1 is the count of 1s and 2s in $2-$9
OUT2 is the count of 3s and 4s in $2-$9... (3 Replies)
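A sketch of one way to bucket the field values. Note the truncated post only defines OUT1 and OUT2; treating OUT3 as the count of 5s and 6s is an assumption inferred from the sample output row 917188,3,3,2.

```shell
printf 'ID,A,B,C,D,E,F,G,H\n917188,4,1,2,1,4,6,3,5\n' > in.csv

# Bucket fields $2-$9 by value: 1-2 -> OUT1, 3-4 -> OUT2,
# and (assumed) 5-6 -> OUT3.
awk -F',' 'NR == 1 { print "ID,OUT1,OUT2,OUT3"; next }
{
    o1 = o2 = o3 = 0
    for (i = 2; i <= 9; i++) {
        if ($i == 1 || $i == 2) o1++
        else if ($i == 3 || $i == 4) o2++
        else if ($i == 5 || $i == 6) o3++
    }
    print $1 "," o1 "," o2 "," o3
}' in.csv
```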
I need to write a shell script "cmn" that, given an integer k, prints the k most common words in descending order of frequency.
Example Usage:
user@ubuntu:/$ cmn 4 < example.txt (3 Replies)
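One possible sketch of such a cmn script, built from the standard tr/sort/uniq toolchain (the word-splitting rule and the example file are assumptions; case folding and tie-breaking among equal counts are left unspecified, as in the original post):

```shell
# Write the sketch of the "cmn" script to a file and make it executable.
cat > cmn <<'EOF'
#!/bin/sh
# cmn k  -- print the k most common words on stdin, most frequent first.
k=${1:?usage: cmn k < file}
tr -s '[:space:]' '\n' |       # one word per line
grep -v '^$' |                 # drop empty lines
sort | uniq -c | sort -rn |    # count each word, rank by frequency
head -n "$k" |
awk '{ print $2, $1 }'         # word first, then its count
EOF
chmod +x cmn

printf 'the cat sat on the mat the cat\n' > example.txt
./cmn 2 < example.txt
```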
Hi all,
I am trying to analyze my data and could use your expertise.
I have some files with the below format:
res1 = TYR res2 = ASN
res1 = ASP res2 = SER
res1 = TYR res2 = ASN
res1 = THR res2 = LYS
res1 = THR res2 = TYR
etc (many lines)
I am... (3 Replies)
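Assuming the goal is to count how often each (res1, res2) pair occurs (the truncated post doesn't say, so this is a guess at the intent), a minimal sketch using the two residue names as a composite array key:

```shell
cat > pairs.txt <<'EOF'
res1 = TYR res2 = ASN
res1 = ASP res2 = SER
res1 = TYR res2 = ASN
res1 = THR res2 = LYS
res1 = THR res2 = TYR
EOF

# With default whitespace splitting, the residue names are fields
# $3 and $6; count each (res1, res2) combination.
awk '{ pair[$3 " " $6]++ }
     END { for (p in pair) print pair[p], p }' pairs.txt | sort -rn
```

The most frequent pair (TYR ASN, seen twice in this sample) comes out first.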
Hi, I have tab-delimited data similar to the following:
dot is-big 2
dot is-round 3
dot is-gray 4
cat is-big 3
hot in-summer 5
I want to count the frequency of each individual "unique" value in the 1st column. Thus, the desired output would be as follows:
dot 3
cat 1
hot 1
is... (5 Replies)
Discussion started by: owwow14
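A sketch that matches the desired output above, including its ordering: remember the order in which first-column values first appear, then print each value with its total count.

```shell
printf 'dot\tis-big\t2\ndot\tis-round\t3\ndot\tis-gray\t4\ncat\tis-big\t3\nhot\tin-summer\t5\n' > tabdata.txt

# Count each distinct value in column 1, preserving first-seen order
# so the output matches the input's ordering (dot, cat, hot).
awk -F'\t' '!seen[$1]++ { order[++n] = $1 }
{ count[$1]++ }
END { for (i = 1; i <= n; i++) print order[i], count[order[i]] }' tabdata.txt
```

If output order doesn't matter, `cut -f1 tabdata.txt | sort | uniq -c` gives the same counts with less code.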
LEARN ABOUT DEBIAN
gr_plot_fft
GR_PLOT_FFT(1)                   User Commands                  GR_PLOT_FFT(1)

NAME
gr_plot_fft - Frequency domain GNU Radio plotting
SYNOPSIS
gr_plot_fft: [options] input_filename
DESCRIPTION
Takes a GNU Radio complex binary file and displays the I&Q data versus time as well as the frequency domain (FFT) plot. The y-axis values
are plotted assuming volts as the amplitude of the I&Q streams and converted into dBm in the frequency domain (the 1/N power adjustment out
of the FFT is performed internally). The script plots a certain block of data at a time, specified on the command line as -B or --block.
This value defaults to 1000. The start position in the file can be set by specifying -s or --start and defaults to 0 (the start of the
file). By default, the system assumes a sample rate of 1, so in time, each sample is plotted versus the sample number. To set a true time
and frequency axis, set the sample rate (-R or --sample-rate) to the sample rate used when capturing the samples.
OPTIONS
-h, --help
show this help message and exit
-d DATA_TYPE, --data-type=DATA_TYPE
Specify the data type (complex64, float32, (u)int32, (u)int16, (u)int8) [default=complex64]
-B BLOCK, --block=BLOCK
Specify the block size [default=1000]
-s START, --start=START
Specify where to start in the file [default=0]
-R SAMPLE_RATE, --sample-rate=SAMPLE_RATE
Set the sample rate of the data [default=1.0]
SEE ALSO
gr_plot_fft_c(1), gr_plot_fft_f(1)

gr_plot_fft 3.5                  December 2011                  GR_PLOT_FFT(1)