removing special characters @ EOL
Post 302311160 by Franklin52 on Tuesday 28th of April 2009 04:36:46 AM
With tr:

Code:
tr -d '\r' < file > newfile

With sed, the ^M must be typed as a literal carriage return: press <CTRL-v> followed by <Enter> (or <CTRL-v><CTRL-m>):

Code:
sed 's/^M//' file > newfile
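
On sed builds that understand escape sequences (GNU sed, for instance), the carriage return can also be written as \r and anchored to the end of the line, so no literal control character has to be typed; a sketch, assuming GNU sed:

Code:
sed 's/\r$//' file > newfile

The $ anchor limits the removal to carriage returns at line ends, which is where DOS line endings put them; cat -v newfile afterwards should show no remaining ^M.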

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Removing special characters in file

I have a file special.txt with the following data: <header info> 123$ty5%98&0asd 1@356fgbv78 09*&^5jkns43( ...........some more rows. In my output file, I want to eliminate all the special characters and keep all the other data. Need some help. (6 Replies)
Discussion started by: srivsn
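A minimal sketch of one common answer, assuming "special characters" means anything other than letters, digits, and whitespace (special.txt is from the post; clean.txt is a name chosen here):

Code:
tr -cd '[:alnum:][:space:]' < special.txt > clean.txt

tr -cd deletes the complement of the listed classes, and [:space:] keeps the newlines intact.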

2. AIX

Removing a filename which has special characters passed from a pipe with xargs

Hi, on AIX 5200-07-00 I have a find command as follows to delete files from a certain location that are more than 7 days old. I am told that I cannot use the -exec option to delete files from these directories. Having said that, I am curious to know how this can be done. A sample... (3 Replies)
Discussion started by: jerardfjay
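The usual pipe-safe answer is NUL-delimited filenames, which survive spaces, quotes, and newlines; whether the native find on AIX 5.2 supports -print0 is doubtful, so this is a sketch assuming GNU findutils is available (the path is a placeholder):

Code:
find /some/dir -type f -mtime +7 -print0 | xargs -0 rm -f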

3. Solaris

removing special characters, white spaces from a field in a file

What my code is doing: it executes a SQL file, and the result set of the query is stored in a text file in a fixed format. For that fixed format I have used the following code: Code: awk -F":"... (2 Replies)
Discussion started by: priyanka3006
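A sketch of scrubbing one colon-separated field with awk; the post only shows awk -F":", so the field number ($2 here) is an assumption:

Code:
awk -F":" 'BEGIN { OFS=":" } { gsub(/[^[:alnum:]]/, "", $2); print }' file

gsub deletes every character in that field which is not alphanumeric, covering both the special characters and the whitespace.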

4. Shell Programming and Scripting

Removing special characters

Dear Friends, I want to remove text between two patterns. The problem is, it has random special characters like \ / | * ` ~ ! $ etc. These random special characters have no fixed length, but they appear between a fixed pattern, e.g. DM&^%#|#!\/?CT Expected output... (14 Replies)
Discussion started by: anushree.a
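Assuming the goal is to drop everything between the fixed markers DM and CT (the markers come from the post's example; the rest is a guess at the requirement), a sed sketch:

Code:
sed 's/DM.*CT/DMCT/' file

Note that .* is greedy, so if a line contains several CT markers the match runs through the last one.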

5. UNIX for Dummies Questions & Answers

awk for removing special characters and extra commas

Hi, I have a .csv file which has empty lines of commas and some special characters in the 3rd column, as below. Source data: 1,2,3,4,%#,6 ,,,,,, 1,2,3,4,5,6 Target data: 1,2,3,4,5,6 I need to remove the blank lines and special characters. I am trying to get this using the below awk: awk -F","... (2 Replies)
Discussion started by: shruthidwh
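A sketch doing both steps in one awk pass, assuming the blank lines consist only of commas and "special characters" means anything besides alphanumerics and the comma delimiter; deleting the characters leaves that field empty rather than restoring a value:

Code:
awk -F"," '!/^,*$/ { gsub(/[^[:alnum:],]/, ""); print }' file.csv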

6. Shell Programming and Scripting

Replace special characters with Escape characters?

I need to replace any special characters with escape characters, like below: test!=123 -> test\!\=123 !@#$%^&*()-= to be replaced by \!\@\#\$\%\^\&\*\(\)\-\= (8 Replies)
Discussion started by: laknar
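sed can prepend a backslash to every non-alphanumeric character in one pass; in the replacement, & stands for whatever the pattern matched. A sketch:

Code:
echo 'test!=123' | sed 's/[^[:alnum:]]/\\&/g'

which prints test\!\=123.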

7. Shell Programming and Scripting

Removing special characters - Control M

I have developed a small script to remove the Control-M characters that get embedded when we move any file from Windows to Unix. For some reason, it's not working in all scenarios. Sometimes I still see the ^M not being removed. Is there anything missing in the script? cd ${inputDir}... (7 Replies)
Discussion started by: vskr72
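A common cause is an unquoted filename variable or a temp file that never replaces the original; a defensive sketch of the loop, with inputDir taken from the post and the *.txt glob assumed:

Code:
cd "${inputDir}" || exit 1
for f in *.txt; do
    # strip carriage returns into a temp file; replace the original only on success
    tr -d '\r' < "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done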

8. UNIX for Dummies Questions & Answers

Removing unnecessary eol ($) character from Oracle sql query output

Hi All, I am fetching an Oracle query result into a shell variable. As the number of columns is large, the output wraps in the Unix terminal, i.e. one complete record in the DB gets stored across multiple lines, with each line ending in a $ character. I want to remove these unnecessary $ characters but keep the required $... (8 Replies)
Discussion started by: Harshal22
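The wrapping (and with it the stray $ at each break) usually goes away once sqlplus is told to use a wide line and trim trailing blanks; a sketch of the relevant settings, with the credentials and query as placeholders:

Code:
sqlplus -s "$DB_USER/$DB_PASS" <<'EOF'
SET LINESIZE 32767
SET TRIMSPOOL ON
SET TRIMOUT ON
SET WRAP OFF
SELECT col1, col2 FROM some_table;
EXIT
EOF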

9. Linux

File conversion and removing special characters from a file in Linux

I have a .CSV file. When I check for special characters in the file using the command cat -vet filename.csv, I get very lengthy lines with "^@", "^I^@" and "^@^M" characters in between each letter in all of the records. Using the command file filename.csv, I get the output as I have a... (2 Replies)
Discussion started by: dhruuv369
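^@ (NUL) between every letter is the classic sign of a UTF-16 file being read as single-byte text, and ^@^M adds DOS line endings on top; converting the encoding and stripping the carriage returns usually clears all of it. A sketch, assuming the file really is UTF-16 (clean.csv is a name chosen here):

Code:
iconv -f UTF-16 -t UTF-8 filename.csv | tr -d '\r' > clean.csv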

10. Shell Programming and Scripting

Removing blank/white spaces and special characters

Hello All, 1. I am trying to do a task where I need to remove blank spaces from my file. I am using awk '{$1=$1}{print}' file>file1 Input: ;05/12/1990 ;31/03/2014 ; Output: ;05/12/1990 ;31/03/2014 ; This command is not removing all spaces from... (6 Replies)
Discussion started by: himanshu sood
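awk '{$1=$1}{print}' only squeezes runs of whitespace between fields down to single separators; deleting every whitespace character takes a gsub over the whole record. A sketch:

Code:
awk '{ gsub(/[[:space:]]+/, "") } 1' file > file1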