How to change the orders of the lines in a txt according to its length
Post 302584514 by methyl on Friday 23rd of December 2011 12:20:40 PM
Mostly shell: count the number of "/" characters in each line, prepend that count to the line, sort on it numerically, then remove the prepended field.
Code:
#!/bin/ksh
# Prepend the number of "/" characters to each line, sort on that count
# (highest first), then strip the prepended field.
while IFS= read -r line
do
        count=$(printf '%s' "${line}" | tr -cd '/' | wc -c)
        count=$((count + 0))    # normalise any padding wc may add
        printf '%s %s\n' "${count}" "${line}"
done < filename.txt | sort -nr | cut -d' ' -f2-

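For large files, the same decorate-sort-undecorate idea can be done without a per-line shell loop. A minimal awk sketch (not from the original post; it assumes the same filename.txt input, and uses the fact that gsub() returns the number of replacements it made):
Code:
awk '{ n = gsub("/", "/"); print n, $0 }' filename.txt | sort -k1,1nr | cut -d' ' -f2-

This avoids spawning tr and wc for every line, so it is noticeably faster on big inputs. To order by overall line length instead of the "/" count, the same pattern works with length($0) in place of the gsub() count.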