Defragmenting


 
# 1  
Computer

Hello to everyone who comes to this forum. I am a newbie here and to UNIX. My question is: is there a command in UNIX to defrag? I was given a server which runs SCO UNIX Open Server R5 and I am trying to learn all I can.
kikkin
# 2  
Defragmentation is not an issue with UNIX-based file systems as it is in Windows based systems. You do not need to run any 'defragment programs' in the UNIX environment. Your question is interesting and I will research into why this is so and get back to you. Quite frankly, I've always taken for granted that UNIX filesystems do not require 'user defragmentation routines' and never considered why. Good question.
# 3  
Here is an explanation of how Unix filesystems work as compared to DOS/Windows. It just comes down to better design...

removed html tags for readability --oombera

# 4  

PxT, great job of finding the right info! So good, I'll post it. Thanks for a great reply for everyone to learn from!

Quote:

Copyright (C) 1998, The University of Washington, Seattle WA, USA

Disk fragmentation (and how to get rid of it).
==============================================

Unix disk partitions are organized around "cylinders", which are the
combined set of disk tracks read by all heads on all platters of a
disk drive. This makes disk read operations more efficient because
larger amounts of data can be read without having to reposition, or
"seek", the heads.

Unix file systems (and there are many kinds of file systems for Unix)
reside on all the cylinders within a given partition. They allocate
files in the blocks that make up these cylinders, trying to make all
the blocks contiguous and within the same cylinder. This means file
I/O is made more efficient because the heads do not need to seek to
different locations on the disk to read or write all the blocks within
a given cylinder. Many files can then all reside in the same cylinder
and the heads will not need to seek to read/write any of them. (It
should be noted that disk I/O is also sped up by file buffer caching
in RAM, so that write operations can be optimized in relation to other
file I/O requests and CPU utilization, although file reads/writes that
are not in, or are too large for, the cache end up being limited by
the raw filesystem speed.)

Primitive operating systems like DOS, on the other hand, allocate
files on a sector by sector basis and tend towards a lot of head
seeking as files extend to other tracks on the disk. They often do
not do any buffering or caching (although newer versions of DOS, add-on
programs, and some disk drive controllers do add some buffering.)

As file systems fill, fewer free blocks are available within
cylinders. When a new file is then created, it cannot fit within the
remaining free blocks within a given cylinder and must be broken up
with file extents that will fall within different cylinders. When file
extents are split up across cylinders, Unix file systems start to
behave like DOS and will begin to force head seeking from cylinder to
cylinder to read or write the blocks of the file. This situation is
referred to as "disk fragmentation".

Microsoft DOS, which is a single-user, single-tasking operating
system, allows a single program to take full control of the system and
thus ensure that there are no buffers of un-written file data. A
defragmentation program can thus go through the disk drive, sector by
sector, and recreate files such that they are organized contiguously
on disk again. This is called "defragmentation" and it results in
greatly sped up disk access.

The newer versions of Microsoft Windows (other than Windows NT, which
is a true multi-tasking operating system) allow a minimal amount of
multi-tasking and file system buffering, and require that you disable
buffering and allow the defragmentation utility to take full control
of the disk to ensure that no corruption of the file system occurs
during defragmentation. This is similar to the "single user" mode of
Unix systems (where nobody is allowed to use the system, other than
the system administrator, and no services are running that might try
to create files).

There are many DOS and Windows defragmentation utilities to handle
this common and frequent problem of disk fragmentation with DOS file
systems.

Unix, on the other hand, has much less likelihood of having situations
of high disk fragmentation that will impact I/O performance.
("Better" is not the same as "perfect". Some file systems, like the
Berkeley Fast File System, try to allocate blocks in at least a
rotationally efficient, if not perfectly contiguous, manner. You can
also get some improvements in fragment allocation via "tunefs". See
"man tunefs" and _The Design and Implementation of the 4.3BSD UNIX
Operating System_, listed in the Library section.)

The most likely time for fragmentation problems to crop up is when the
disk partitions near 100% capacity. This is a good reason to monitor
disk usage and ensure plenty of free space is generally available.

A common way to defragment Unix file systems is to do a backup, remake
the file system, then restore the files. Note that you must do the
backup using a program that operates at the level of directories and
files (e.g., "cpio", "tar", "dump"), rather than dealing with raw
partitions themselves (e.g., "dd"). Those backup utilities that
operate on raw partitions will maintain the existing blocking, and
thus the fragmentation, when used to restore partitions. Utilities
that back up files one at a time effectively re-consolidate the
fragmented blocks while writing them to tape, and then lay the files
back down in contiguous blocks in the fresh file system. "dump" (and
its inverse, "restore") are the more popular utilities for backups,
for several reasons.
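As a concrete sketch of this backup/remake/restore cycle (a scratch directory stands in for the partition so the commands are safe to run; on a real system the middle step would be umount, newfs/mkfs, and mount on the raw device, with device and mount-point names specific to your system):

```shell
#!/bin/sh
# Sketch of defragmenting by backup, remake, restore.
# /tmp/defrag_demo stands in for the mounted partition here.
SRC=/tmp/defrag_demo
BACKUP=/tmp/defrag_demo.tar
rm -rf "$SRC" "$BACKUP"
mkdir -p "$SRC" && echo "hello" > "$SRC/file1"

# 1. File-level backup: tar copies files, not raw blocks, so the
#    existing fragmentation is not carried along.
(cd "$SRC" && tar cf "$BACKUP" .)

# 2. "Remake" the file system. On a real partition this would be
#    umount + newfs (or mkfs) + mount; here we just empty the directory.
rm -rf "$SRC" && mkdir -p "$SRC"

# 3. Restore: the files are laid back down in fresh, contiguous blocks.
(cd "$SRC" && tar xpf "$BACKUP")
cat "$SRC/file1"    # prints: hello
```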

For more information, see:

ftp://sunsite.unc.edu/pub/Linux/docs/HOWTO/mini/Partition


Besides using this backup/new file system/restore method, there are
options for using defragmentation utilities in Unix. Some of them are
listed here.

Digital Unix
============

Polycenter Advanced File System (AdvFS) has its own defragmenter
named, appropriately, "defragment".

From the "advfs" man page:

The POLYCENTER Advanced File System (AdvFS) is a file system option on
the DEC OSF/1 operating system. The Advanced File System provides
rapid crash recovery, high performance, and a flexible structure that
enables you to manage your file system while it is on line.

An optional set of utilities is available for AdvFS that expands the
capabilities of the file system. The POLYCENTER Advanced File System
Utilities provide functions such as adding volumes without
reconfiguring the directory hierarchy of the file system, cloning
filesets, and improving system performance with file defragmentation,
domain balancing, and file striping. A graphical user interface (GUI)
that simplifies file system management is available with the AdvFS
Utilities.

The Advanced File System component is licensed with the DEC OSF/1
operating system and is available as an optional subset during
installation. The POLYCENTER Advanced File System Utilities is
available as a separately licensed layered product.


Linux
=====

From: Stephen Tweedie <sct@dcs.ed.ac.uk>
Subject: defrag-0.70 - Linux defragmenter
Date: Thu, 21 Aug 1997 14:10:51 GMT



Announcing defrag-0.70

The latest release of the Linux filesystem defragmenter, defrag-0.70,
is now available at

linux.dcs.ed.ac.uk:/pub/linux/defrag/defrag-0.70.tar.gz

(to appear at)
sunsite.unc.edu:/pub/Linux/system/filesystems/defrag-0.70.tar.gz

This release includes all of the patches I have been sent against the
0.6x defrags. Please let me know if there are any other changes
outstanding.

New in 0.70:
************

Tidied up colour support and attributes for fragmented blocks.

Added 64 bit device access support to allow use on filesystems >2GB.

Added a new e2defrag.static target with no graphic display support for
use on root floppies (for those who want to defragment their root
filesystem).

New in 0.62:
************

Thanks to Ulrich E. Habel (espero@b31.hadiko.de) for this update:

Picture mode is now colorized. ext2 V2 inode information is now read
from the Linux includes. Fixed a bug in the filesystem validity check.



Stephen Tweedie <sct@dcs.ed.ac.uk>



HP-UX
=====

Commercial products exist to handle defragmentation. One such product
is described here:

EAGLE Software, Inc. has announced Version 3.00 of DISK_PAK for
UNIX. DISK_PAK can safely eliminate file system fragmentation as well
as cluster frequently accessed files for peak file system
responsiveness.

IRIX
====

"fsr" (File System Reorganizer)

Run nightly via "cron"

From an SGI administrators list email message:

http://www.sgi.com/Archive/comp.sys..../Apr/0131.html

--------------------------

In article <1qcchqINN6bt@srvr1.engin.umich.edu>
hillig@U.Chem.LSA.UMich.EDU (Kurt Hillig) writes:

>Can anyone translate the output of the "-v" option of fsr? The man
>page says:
>
> -v Verbose. Print cryptic information about each file being
> reorganized.

The following line:

movex() i1152 0+3676 -> 23768

means: for the file whose inode number (ls -i) is 1152 move 3676
blocks starting at logical block 0 to file system block 23768.

This line:

slidex() i65 0+160 -> 7834 (1)

means: "slide" the 160 block extent starting at logical block 0
to file system block 7834, one block at a time "(1)".

(Try fsr -vv to see even more output :-)

>The reason I'm asking is that I run fsr weekly (Saturday nights) but
>this afternoon my /usr/users filesystem was giving the error:
>
>Apr 12 12:01:41 Uranium unix: lv1 (/usr/users): Out of contiguous space

Allocations for indirect extents require contiguous space, but only up
to 32 blocks. Sounds like a large file is trying to grow in a very
fragmented file system. Sure would be nice if the above error identified
the file trying to grow, huh? :-)

Try doing ``fsr -s /dev/rwhatever'' (just print frag statistics),
then run ``fsr /dev/rwhatever'' (drop the -v unless you really want to
see absolutely every block as it's moved), and afterwards run fsr -s
again. You might need to run ``fsr /dev/rwhatever'' a couple of times.

Since fsr works extra hard to blow out the page cache and does all I/O
raw and synchronously, you'll see a reasonable performance hit on your
system, so you might want to run this off peak. However, if clearing
up the fragmentation is more important, it's perfectly safe to run fsr
no matter how busy the file system or system is.

The only issue is that fsr can't reorganize files which are currently open
(fsr -v will say ``ino=XXXX in use...'') so things like /usr/adm/SYSLOG will
probably never be touched, but the idea is that enough other files are not
being used so that they can be moved to defragment free space.

Bent

--------------------------
# 5  
Hard drive

I was getting a message to the effect that there is no more space on the hard drive. How do I clean it up and keep just what I need, which is the OS? I think the hard drive has a second partition which has DOS on it.
kikkin
# 6  
1. Delete any files under /tmp.
2. Delete any core files. To list them, the command is:

Code:
find / -name core -print

This will give you a list of core files. You can then run file on them to make sure that they are core files and not regular files. To automatically delete any file named core, use:

Code:
find / -name core -exec rm -f {} \;

What OS are you running? Does it specify which file system is full? Run df -k from the prompt and post any filesystems more than 90% full.
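As a rough illustration (the exact df output columns vary a little between systems), a small awk filter over df -k can pick out the nearly-full filesystems; column 5 is the capacity percentage on most SVR4/BSD-style df implementations:

```shell
# List filesystems more than 90% full. Assumes the usual df -k layout:
# Filesystem, kbytes, used, avail, capacity, mount point.
df -k | awk 'NR > 1 { pct = $5; sub(/%/, "", pct); if (pct + 0 > 90) print $6, $5 }'
```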
# 7  
When one partition is getting full and there are few options to reduce the size, the normal fix is to move directory trees to new partitions and then create a symbolic link from the new directory node to the old one.

For example, let's say that your /user directory is big because you have a lot of users, and /user is in the root partition. You would copy the entire /user directory, using either a recursive copy or tar, to another partition, for example /usr1. Depending on the actual file system, you could just rename the old one (just to be safe), i.e., move /user to /user.old. Then you would link /usr1/user to /user using a command like:

Code:
ln -sf /usr1/user /user

After you test it and it works OK (always test!), you can delete the original files. Note that you can't do this if there are active processes using the filesystem (you will get a filesystem busy error) or if your shell is in the directory. Also, moving dynamic libraries and other core libraries can be done, but it requires experience; I suggest you relocate user and temporary files first. It takes a lot of courage to move and relocate system binaries and libs, because one false step can result in a filesystem that must be rebuilt from a boot disk.
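A minimal sketch of this relocation, using temporary directories in place of /user and /usr1 so it can be run without touching real system paths (all names below are placeholders):

```shell
#!/bin/sh
# Move a directory tree to another "partition" and symlink it back.
# /tmp/demo_root/user and /tmp/demo_usr1 stand in for /user and /usr1.
OLD=/tmp/demo_root/user
NEW=/tmp/demo_usr1/user
rm -rf /tmp/demo_root /tmp/demo_usr1
mkdir -p "$OLD" /tmp/demo_usr1
echo "data" > "$OLD/profile"

# 1. Copy the tree; a tar pipe preserves permissions and modes.
(cd "$OLD" && tar cf - .) | (mkdir -p "$NEW" && cd "$NEW" && tar xpf -)

# 2. Keep the original around until the copy is verified.
mv "$OLD" "$OLD.old"

# 3. Link the new location into the old path.
ln -sf "$NEW" "$OLD"
cat "$OLD/profile"    # reads through the symlink; prints: data
```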

 
