100% full file system - error when trying to increase
Hi,
I came across a situation where I had to increase a filesystem that was 100% full and received this error:
Code:
[root]/tmp[common]:chfs -a size=+5000M /tmp
0516-634 lquerypv: /tmp directory does not have enough space,
delete some files and try again.
0516-788 extendlv: Unable to extend logical volume.
Is there a way to override this if I am unable to delete files in the file system to create some space?
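There is no documented force flag for chfs in this situation. One commonly suggested route when /tmp itself cannot be cleaned is to grow the underlying logical volume directly and then retry; this is a hedged sketch only (hd3 is the conventional AIX logical volume for /tmp; confirm yours with lsfs, and adjust the partition count):

```shell
lsfs /tmp                      # confirm the device behind /tmp, e.g. /dev/hd3
lsvg rootvg | grep "FREE"      # make sure the volume group has free PPs
extendlv hd3 10                # add 10 logical partitions to hd3 (example count)
chfs -a size=+5000M /tmp       # retry growing the filesystem
```

If even extendlv fails, freeing a small amount of space anywhere in /tmp (a single small file is often enough) usually lets chfs complete.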
How can I check how much space is left in a Solaris file system, and how can I increase the space in that file system? I am trying to install Oracle Database on Solaris 8, but it keeps giving me an error message that says "There is not enough space on the volume you have specified".
Thanks
... (1 Reply)
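Checking free space on Solaris is normally done with df; a minimal sketch (the path below is an example):

```shell
# Report capacity, used and available space in kilobytes:
df -k                  # all mounted filesystems
df -k /export/home     # a single filesystem
```

The "Avail" column shows what is left; the Oracle installer's error usually points at the volume whose Avail value is too small.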
Hi friends,
Need urgent help...
I have installed Solaris 8 on a SunBlade workstation with a 136GB HDD.
During installation it took a default filesystem size of 1.37GB for root.
After completing the installation I extended the root partition to 130GB.
But df output still shows... (4 Replies)
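On Solaris, enlarging the slice does not enlarge the UFS filesystem on it, so df keeps reporting the old size until the filesystem itself is grown. A hedged sketch (the slice name below is an example; growing a mounted filesystem requires growfs with -M):

```shell
growfs -M / /dev/rdsk/c0t0d0s0   # grow the mounted root UFS to fill its slice
df -k /                          # df should now reflect the new size
```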
Hi,
I'm getting an error with my filesystems.
After
/dev/dsk/c0t0d0s7   100%   /export/home
and
# ls -l
drwxr----    512    TT_DB
drw          8192   lost+found
drw          512    oracle... (10 Replies)
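When a filesystem hits 100%, the first step is usually finding what is consuming the space; a minimal sketch (the path is this thread's example):

```shell
# List top-level directories under /export/home by size, largest last:
du -sk /export/home/* | sort -n | tail -5
```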
Hi all,
we are using AIX 4.3 and I need to increase the size of the "/u01" file system, which is mounted on logical volume "lv00". The "/u01" file system is 9 GB and logical volume "lv00" is 9 GB. How do I increase the size of /u01? Do I increase the size of logical volume "lv00" first and then... (2 Replies)
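On AIX there is normally no need to extend the logical volume separately: chfs extends the underlying logical volume automatically, provided the volume group has free physical partitions. A hedged sketch (on AIX 4.3 the size argument is in 512-byte blocks):

```shell
lsvg rootvg                   # check FREE PPs in the volume group holding lv00
chfs -a size=+2097152 /u01    # grow /u01 by 1 GB (2097152 * 512 bytes)
```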
Hello Admins,
I am running a redhat linux 5 on vmware workstation.
I need to increase or add some more space to my root (/) partition. I don't have any LVM configured.
Please suggest.
# df -kh
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 3.8G 3.1G ... (4 Replies)
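Without LVM, growing the root means growing the partition itself and then the filesystem. A hedged sketch for RHEL 5 with ext3, assuming free space exists directly after /dev/sda2 (for example after enlarging the VMware virtual disk); repartitioning is risky, so back up first:

```shell
# 1) In fdisk /dev/sda: delete sda2, recreate it with the SAME starting
#    cylinder and a larger end, write the table, and reboot.
# 2) Then grow ext3 to fill the enlarged partition:
resize2fs /dev/sda2
```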
Hi All,
I am using these commands to dynamically increase ZFS swap space on Solaris.
My questions are:
1- After I run these commands, is the change permanent, or will it be removed after a restart?
2- How do I make it permanent?
# swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap... (4 Replies)
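Resizing the swap zvol with ZFS is persistent: volsize is a ZFS property stored in the pool, so it survives a reboot. The swap -a addition itself is not remembered; persistence across boots comes from the /etc/vfstab entry for the zvol plus the stored volsize. A hedged sketch of the common sequence (the 4G value is an example; the device must be removed from swap before it can be resized):

```shell
swap -d /dev/zvol/dsk/rpool/swap   # remove the swap device
zfs set volsize=4G rpool/swap      # resize the zvol (persistent property)
swap -a /dev/zvol/dsk/rpool/swap   # re-add it
```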
Dear Friends,
I would like to increase the size of a file system from 10GB to 15GB.
The system is running HP-UX 11.31.
Please help with this matter.
Regards,
Bhagawati Pandey (3 Replies)
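On HP-UX 11.31 the usual route is to extend the logical volume and then the filesystem. A hedged sketch assuming VxFS with the OnlineJFS product licensed (LV and mount point names are examples; without OnlineJFS the filesystem must be unmounted and grown with extendfs instead):

```shell
lvextend -L 15360 /dev/vg01/lvol4     # grow the LV to 15 GB (size in MB)
fsadm -F vxfs -b 15360M /mount/point  # grow the mounted VxFS filesystem
```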
I have a Solaris-10 server running the ZFS file system. ctdp04_vs03-pttmsp01 is one of the non-global zones. I want to increase the /ttms/prod file system of the zone, which is actually /zone/ctdp04_vs03-pttmsp01/ttms/prod on the global server.
I have added a new disk of 9 GB, which is emcpower56a, and now I can... (16 Replies)
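If the zone's filesystem is a dataset in a zpool, adding the new device to that pool and raising the dataset's quota is the usual route. A hedged sketch only; the pool and dataset names below are assumptions, not taken from the thread:

```shell
zpool add ttmspool emcpower56a   # grow the pool with the new EMC device
zfs set quota=18G ttmspool/prod  # raise the dataset quota to the new limit
zfs list -r ttmspool             # confirm the extra space is visible
```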
Hi,
What is the procedure for increasing a file system on a Linux server?
Regards,
Maddy (6 Replies)
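For the common LVM-backed case on Linux, a minimal sketch (device names are examples):

```shell
lvextend -L +5G /dev/vg0/lv_data   # grow the logical volume by 5 GB
resize2fs /dev/vg0/lv_data         # grow ext3/ext4 online to match
# For XFS use: xfs_growfs /mountpoint  (XFS is grown by mount point)
```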
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                General Commands Manual                bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
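The arithmetic behind these figures can be reproduced directly. The sketch below recomputes the derived numbers from the two measured inputs in the EXAMPLE section (40 matching prefix bits, 1612581 objects); it is an illustration of the man page's math, not part of bup itself:

```shell
bits=40          # matching prefix bits reported by bup margin
objects=1612581  # objects in the index
awk -v b="$bits" -v n="$objects" 'BEGIN {
    remaining = 160 - b                  # SHA-1 hashes are 160 bits long
    per_doubling = b / (log(n)/log(2))   # bits consumed per doubling of objects
    printf "%.2f bits per doubling\n", per_doubling
    printf "%d bits (%.2f doublings) remaining\n", remaining, remaining/per_doubling
}'
```

This reproduces the "1.94 bits per doubling" and "120 bits (61.86 doublings) remaining" lines shown in the example output below.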
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.