Search Results

Search: Posts Made By: s052866
Posted By Scrutinizer
Thanks alister...

Another factor that might prove important is which awk or which grep implementation is used.
For example, when using the same extended regex ^83 *(1[0-9][0-9][0-9]|2000)$
I got the...
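A rough way to reproduce that comparison (the file name and value range here are made up for illustration) is to generate a small sample and run the same filter through both tools:

```shell
# Generate hypothetical sample data: lines of the form "83 <n>"
seq 900 2100 | awk '{print "83 " $1}' > sample.txt

# The extended regex from above: 83 followed by 1000-2000
grep -cE '^83 *(1[0-9][0-9][0-9]|2000)$' sample.txt

# Equivalent numeric test in awk; relative speed varies by implementation
awk '$1 == 83 && $2 >= 1000 && $2 <= 2000' sample.txt | wc -l
```

Both commands should report 1001 matching lines; prefixing each with `time` shows how much the result depends on the particular grep or awk build.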
Posted By alister
That's a reasonable assumption, but it turns out to be incorrect (at least with the implementation I tested).

In my testing, the following code is over three times faster than the original...
Posted By Corona688
The problem, really, is that you have a huge amount of data, not a slow program. How big are your records, really?

I haven't heard perl suggested to increase speed before. I don't think using an...
Posted By jayan_jay
Performance not tested. To avoid arithmetic calculations, try egrep:

$ egrep "^83 (1...|2000)$" infile
83 1453
$
Posted By Klashxx
If the first value is fixed try:
awk '/^83 *[12][0-9][0-9][0-9]/{if($2>=1000 && $2<=2000){print}}' infile
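A quick sanity check of that one-liner, condensed to a pattern-only form with the same logic (the sample values are made up):

```shell
# Regex pre-filter plus numeric range check, printing only 1000-2000
printf '83 999\n83 1500\n83 2000\n83 2001\n' |
awk '/^83 *[12][0-9][0-9][0-9]/ && $2 >= 1000 && $2 <= 2000'
```

This prints only `83 1500` and `83 2000`: the regex rejects 999 outright, and the numeric test rejects 2001 even though it matches the regex.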
Posted By Franklin52
You can give this a try:
#!/bin/bash

FILE="$1"

awk 'NR==1{print;next} !val{val=$NF;print;next} {p=val;val=$NF;$NF=$NF-p}1' "$FILE" > newfile
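A small worked example of what the awk program does (input values are hypothetical): the first two lines pass through unchanged, and every later line's last field is replaced by its difference from the previous line's last field.

```shell
# Hypothetical cumulative data: the last field grows line by line
printf 'time value\nt1 10\nt2 15\nt3 25\n' |
awk 'NR==1{print;next} !val{val=$NF;print;next} {p=val;val=$NF;$NF=$NF-p}1'
```

The cumulative values 10, 15, 25 come out as 10, 5, 10. Note that `!val` treats a stored value of 0 the same as "not yet set", so data whose second line ends in 0 would need a stricter test such as `NR==2`.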
Posted By Scrutinizer
Try this:
awk '{printf (/>/)?RS"%s"RS:"%s",$0}END{print x}' infile
Your code does not work because of this: substr($0,1,1)
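For context, the one-liner joins wrapped FASTA sequences onto single lines: header lines (containing `>`) are printed on their own line, and everything else is concatenated. A minimal illustration with made-up input:

```shell
# Hypothetical multi-line FASTA: seq1 is wrapped across two lines
printf '>seq1\nACGT\nTTTT\n>seq2\nGGG\n' |
awk '{printf (/>/)?RS"%s"RS:"%s",$0}END{print x}'
```

Note that it emits a blank line before the first header, and `END{print x}` (x is an unset variable) supplies the final newline after the last sequence.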
Posted By alister
sed 'n;s/......//'

Test run with the data you provided in your latest post:
$ sed 'n;s/......//' MS_Pa-plex_1_tag1.sanfastq
@HWUSI-EAS656_0034:7:12:1:291#0/1...
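The sed program works on two-line cycles: `n` prints the current line and fetches the next, and the substitution then strips the first six characters from that next line (here, presumably a six-base tag). A minimal illustration with made-up data:

```shell
# Print odd-numbered lines untouched; delete the first 6 characters
# of every even-numbered line
printf '@read1\nACGTACGTACGT\n' | sed 'n;s/......//'
```

This prints `@read1` unchanged followed by `GTACGT`, the sequence with its `ACGTAC` prefix removed.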
Showing results 1 to 8 of 8

 
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.