Should I use a CoW filesystem on my PC if I only wanted snapshot capabilities ?


 
# 8  
Old 02-29-2020
After thinking about this some more, let me please add....

I have nothing against anyone using btrfs and someday, when it is default on Ubuntu, I will probably follow the crowd and use btrfs; but as long as ext4 is the Ubuntu default, I will stick with ext4.

However, for anyone who wants to get into btrfs, one approach is to start with btrfs on a non-root partition (as Peasant suggested, as I recall). Maybe you can stress-test btrfs in that way by crashing your system, unplugging the power cord, etc and see if you are happy with how btrfs recovers. When you are comfortable, then maybe try a root partition with btrfs with proper backups on your desktop machine.
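For anyone taking that route, a minimal sketch of such a test setup (device and mount point names are hypothetical, and mkfs destroys the partition's contents):

```shell
# Put btrfs on a spare, non-root partition
mkfs.btrfs /dev/sdb1
mkdir -p /mnt/btrfs-test
mount /dev/sdb1 /mnt/btrfs-test

# Keep the data in a subvolume so it can be snapshotted independently
btrfs subvolume create /mnt/btrfs-test/data

# Read-only snapshot before the stress test (crash the box, pull the cord...)
btrfs subvolume snapshot -r /mnt/btrfs-test/data /mnt/btrfs-test/data-pre-test

# After reboot, let the filesystem verify its own checksums
btrfs scrub start /mnt/btrfs-test
btrfs scrub status /mnt/btrfs-test
```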

After all, the OP was talking about his desktop machine, not some remote linux server on the other side of the planet where you want a very safe filesystem like ext4.

Some people (unlike me) prefer to be early adopters of these kinds of technologies and if you really want to use btrfs, then go for it. I am not really an "early adopter" of any technology which I don't really have an operational need for, so I am waiting for tech like btrfs to be the OOTB default with Ubuntu. That is the signal I am looking for.

Where Ubuntu goes (default and mainstream), I will follow, generally speaking.

I was not always like this with Linux; in the early days (over 20 years ago), I was keen to be a very early adopter. Over the years, however, I have become less of an "early adopter" and more of a "late adopter" of new tech like btrfs. This is only me.

I encourage others to march to the beat of their own drum, not mine. I only offer my opinion about what I do and do not do, and it is in no way presented as "the truth" or "the way to go".

btrfs has a lot of great features. If btrfs fits into your current plans, then go for it as you deem appropriate. Just because "Neo" is stuck on ext4 until Ubuntu makes btrfs the Ubuntu default, there is no reason not to use btrfs if the spirit moves you, especially on your desktop machine (not a server in a data center far, far away....).
# 9  
Old 02-29-2020
Other than the licensing issues, ZFS on Linux is production quality.

I'm no zealot, but name me one other file system on Linux (besides btrfs) which has all of these features built in:
  • Built-in bit-rot protection, with integrated RAID levels, or via the copies feature on non-redundant pools.
  • Snapshots (accessible via one cd command), clones, compression, dedup and encryption one command away - granularity per filesystem, transparent to the user.
  • Incremental send/receive, local or remote, protocol agnostic (STDIN/STDOUT).
  • Quotas and reservations - one command away.
  • Not using the standard (not to say obsolete) LRU for caching, but ARC.
  • Ability to increase performance with dedicated devices for L2ARC/ZIL.
  • Sharing volumes/files using NFS, CIFS (NAS) or iSCSI (block) - a couple of commands away.
  • Compatibility between systems running OpenSolaris & BSD derivatives; endianness aware.
Those are features that enterprise storage arrays offer for a big price tag and additional licences.

Of course, much can be achieved with various other tools (LVM snapshots, rsync tools, hard links, xfsdump, dump etc.)

In ZFS it is all built in - one file system to rule them all.
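As a sketch of the one-command-away claim above (pool and dataset names are made up):

```shell
# Snapshot, then browse it with nothing but cd (hidden .zfs directory)
zfs snapshot tank/home@monday
cd /tank/home/.zfs/snapshot/monday

# Clones, compression and quotas: one command each, per dataset
zfs clone tank/home@monday tank/home-clone
zfs set compression=lz4 tank/home
zfs set quota=100G tank/home

# Incremental send/receive over plain stdout/stdin, so any transport works
zfs send -i tank/home@sunday tank/home@monday | ssh backuphost zfs recv tank/home
```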


Regards
Peasant.
# 10  
Old 03-02-2020
Quote:
Originally Posted by Neo
No.

I do not recommend those file systems.

You are better off running ext4, a RAID configuration (I run RAID1, but do not depend on it), and doing regular backups of your data based on your risk management model (this is the most critical).

Nothing beats a strong filesystem and a very well thought out backup and recovery plan.

That is my view. YMMV

On the desktop, I run macOS and have a similar strategy. I make full backups often, based on the activity on the system. The more activity and files (and the nature of the files) created, the more frequent the backups.
OK, so can you tell me something about the Timeshift program? Let's say I make certain changes to the root filesystem which make my system unbootable, or I install a malicious or problematic update. Will a program like Timeshift help then?

My question is: what do I do if the system becomes unbootable? How do I recover then?

What do you use?
What is your strategy?

I hope it's not taking a cold-boot backup with CloneZilla.

--- Post updated at 12:14 AM ---

Quote:
Originally Posted by Peasant
If you are using LVM, you can use its logical volume snapshot ability to achieve what you want.

btrfs I have not used, but I did read some horror stories some time ago.
Probably nothing to worry about for home use, since those bugs were about RAID protection.

ZFS on Ubuntu, for instance, is OpenZFS.
This is a mature and high-quality file system & volume manager, but I would not use it for root just yet.

For data disks, I see no reason not to reap the benefits of snapshots, compression and deduplication.
Just be sure those dedup tables fit in memory.

Here is a quick example of LVM snapshots from my home box; the approach is filesystem agnostic:
Code:
root@box:~# vgdisplay  dumpvg
  --- Volume group ---
  VG Name               dumpvg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476931
  Alloc PE / Size       262144 / 1.00 TiB
  Free  PE / Size       214787 / 839.01 GiB
  VG UUID               Qtrbm7-GEAz-CcgA-JOUV-6eZ1-qrZ6-L4ey3h
root@box:~# mount | grep dumpvg
/dev/mapper/dumpvg-dumpvol on /srv/dump type xfs (rw,noatime,attr2,inode64,noquota)
cd /srv/dump/some_files
root@box:/srv/dump/some_files# ls -rlt
total 4
-rw-r--r-- 1 root root 19 Feb 28 15:49 file1.txt
root@box:/srv/dump/some_files# cat file1.txt 
Some stuff written
root@box:/srv/dump/some_files#

You need free space in the volume group to create a snapshot, as I have here, so let's create one.
We also specify that 5 GB of the total space in the VG may be consumed by the system to maintain the snapshot.
Code:
root@box:/srv/dump/some_files# lvcreate -L 5G -s -n lv_snap_$(date "+%Y%m%d%H%M") /dev/dumpvg/dumpvol
  Logical volume "lv_snap_202002281552" created.
root@box:/srv/dump/some_files# rm -f file1.txt
root@box:/srv/dump/some_files# mount -o ro,nouuid /dev/dumpvg/lv_snap_202002281552 /srv/dump_snap/
root@box:/srv/dump/some_files# ls -lrt /srv/dump_snap/some_files/
total 4
-rw-r--r-- 1 root root 19 Feb 28 15:49 file1.txt
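To undo the rm by rolling the origin volume back to the snapshot state, LVM offers lvconvert --merge; a hedged sketch using the names from the example above (merging into a mounted origin is deferred until the volume is next activated):

```shell
# Unmount the snapshot if it is still mounted read-only
umount /srv/dump_snap

# Merge the snapshot back into its origin; file1.txt reappears afterwards
lvconvert --merge /dev/dumpvg/lv_snap_202002281552

# If the origin was in use during the merge, reactivate it to finish
umount /srv/dump
lvchange -an /dev/dumpvg/dumpvol
lvchange -ay /dev/dumpvg/dumpvol
mount /dev/dumpvg/dumpvol /srv/dump
```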

Hope that helps
Regards
Peasant.
Can this work if the system becomes unbootable? If so, how? GRUB doesn't have the LVM tools for restoring snapshots, AFAIK.

Also, what if the drive is encrypted? I will be going with full-disk encryption.

How do you do this if the system becomes unbootable?

--- Post updated at 12:19 AM ---

Quote:
Originally Posted by drl
Hi.

I am a big fan of Virtual Machine technology. Here is what I do:

On my main workstation I install a small, stripped-down (i.e. no Office, etc.) Linux distribution as a host -- I prefer Debian GNU/Linux. I then install Virtualbox and create a VM. On the VM I install my day-to-day work environment -- again Debian.

Whenever I have mods to install, I use VB to take a snapshot. Then I install the mods. I leave the snapshot for some time (it's a CoW). If it works for a few days, a week, etc., then I merge the collected CoW changes into the VM (by, ironically, deleting the snapshot). If the mods fail to run, I restore the running system. I've had to do the restore perhaps 3 times in years, and it goes quite quickly, as does the creating and merging of the CoW.

Finally if the VM seems OK, then I install the mods into the host system.
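That cycle can also be driven from the command line; a sketch with a hypothetical VM name:

```shell
# Before installing mods: take a CoW snapshot (--live avoids pausing the VM)
VBoxManage snapshot "WorkVM" take "pre-update" --live

# If the mods break something: power off and roll back
VBoxManage controlvm "WorkVM" poweroff
VBoxManage snapshot "WorkVM" restore "pre-update"

# If everything ran fine for a while: merge the CoW changes by deleting the snapshot
VBoxManage snapshot "WorkVM" delete "pre-update"
```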

For a backup plan, we use a separate computer as a backup server with a set of mirrored disks. We use that to run rsnapshot to back up our running, day-to-day systems. (You might be able to run rsnapshot on the running system itself, in which case you can use LVM, and rsnapshot will make its own LVM snapshot, do the backup, and remove the snapshot; then you could, say, tar up the resulting backup and send it to another computer.)

We have a small shop and rsnapshot helps us in many ways. The rsnapshot utility is a pull system, so the remote needs passphrase-less access to the system being backed up. The big advantage of rsnapshot is that it uses hard links, conserving storage dramatically. For example, I back up my day-to-day system every hour, day, week and month, so 24+7+4+12 -> 47 collections, yet the whole archive rarely goes over 20 GB; oddly, if you look at any single collection, it too appears to be 20 GB, all due to the magic of hard links. The code also uses an algorithm that transfers only the changes of a particular file, thus saving real time and network time. It also handles all the renaming, copying and removing of files necessary to accomplish the rotation of backup collection names.
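The hour/day/week/month rotation described above maps onto a few lines of /etc/rsnapshot.conf; a sketch (paths and host name are examples, and rsnapshot requires tabs, not spaces, between fields):

```
snapshot_root	/backup/rsnapshot/

retain	hourly	24
retain	daily	7
retain	weekly	4
retain	monthly	12

# Pull the remote system over ssh (needs passphrase-less key access)
backup	root@workstation:/etc/	workstation/
backup	root@workstation:/home/	workstation/
```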

We are also interested in the zfs filesystem. Our rsnapshot backup server was replaced this month with a newer model -- the old one lasted 17 years (2005..2020). In addition to being an external backup, I also installed VBox there and, as VMs, installed as guests Ubuntu 19.10 on zfs, as well as FreeBSD 12 on zfs. Since the host is on a RAID1 mirror, I didn't need the additional support for zfs mirroring (but it might be of some interest later on to experiment with them). We've had a Solaris VM for a long time on zfs:
Code:
OS, ker|rel, machine: SunOS, 5.11, i86pc
Distribution        : Solaris 11.3 X86

As additional VMs, we also installed a guest that is the same as the host, and we're using that as a test bed (just as I do with my day-to-day workstation). This new install is of distribution Debian GNU/Linux buster. We also like to have the next rev available, so I installed the testing version, known as bullseye.

So VMs are what we use for experimentation, as well as to make backups, like MS System Restore Points, easy.

Best wishes ... cheers, drl
That's interesting, but I really need restoration capabilities on my base system.

Could you tell me how rsnapshot will help in a case where, let's say, the system is unbootable? How do you restore using rsnapshot then? Does it work when the system is unbootable?

--- Post updated at 12:23 AM ---

Quote:
Originally Posted by stomp
Hi,

some personal experiences and experiences from others:
  • zfs
    I'm very fond of the ease of use of zfs administration: a few simple commands which all did what I expected them to do. Compression is recommended. Data checksumming is also a great feature. Deduplication is not as clearly recommended; it needs lots of resources (RAM). zfs is not as flexible as LVM or btrfs, but in a desktop environment this should not be a problem. No problems in several years of usage. Some reading about zfs is recommended for basic configuration (ashift, blocksize values). Getting the root fs onto zfs is manual work, but for a data partition it is very easy to use. If I need more flexibility, I use LVM.
  • btrfs
    The times when btrfs had grave bugs are long gone. If you use it, make sure you do not use features that are marked as experimental (for example the btrfs-internal RAID5 implementation; use Linux software RAID with btrfs if you want RAID). I read some entries in another forum from someone who uses it at scale and will never change it for any other filesystem (he has experience with all the major filesystems). It also has checksums and snapshot capabilities. Its flexibility far exceeds zfs and LVM.

    I checked it out and decided not to use it, for these reasons:
    1. Some things are more complicated. You have to work your way through the documentation quite carefully. For example, you cannot trust the values of du and df: the complexity of the filesystem means they are not always correct and consistent, so btrfs introduces its own tools in addition to the tools everybody is used to.
    2. Things do not work the way I would like them to: if I have some RAID and a disk is failing, I would expect the system to come up and maybe complain that it is in degraded mode, or write to some log file which it would be my job to check. And when the disk is replaced, I would expect an easy command, or even automatic restoration onto the new disk. That seems not to be the case with btrfs. If you keep using your filesystem in degraded mode, bad things can happen: your RAID may write data to only one disk despite the fact that the device is set up as a mirror, and that data may remain non-replicated even after you replace and reintegrate a new disk. That, of course, may lead to data loss. It is not a bug in the filesystem, but its ordinary procedure: one has to know exactly how btrfs is to be operated according to the documentation, or you will get into serious trouble.
    I'd say btrfs is an advanced filesystem which gets you a lot in terms of flexibility, performance, robustness, features and data security. It comes at the cost of taking your time to really learn the details, which can become really important.
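The du/df point in practice: btrfs ships its own space-reporting commands, which account for the data/metadata split and the RAID profile (mount point hypothetical):

```shell
# Plain df can mislead on btrfs; ask the filesystem itself instead
btrfs filesystem df /mnt/data
btrfs filesystem usage /mnt/data

# Per-subvolume accounting needs quota groups enabled first
btrfs quota enable /mnt/data
btrfs qgroup show /mnt/data
```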

On the other hand, filesystem snapshotting is not what you get with Windows system restore points. I think there is a lot of voodoo going on with Windows system restore points (meaning it is complex under the hood). I have never tested whether filesystem snapshots really give you a working system if you roll them back. Maybe it just works. But if you have a database, there is no guarantee that its data will be consistent in such a snapshot.

I worked on my personal workstation within a VM too. Snapshots were possible, but the user experience was a mess. Regular problems with the virtualization environment (VirtualBox), and the speed too, drove me away from that and back to bare metal. For testing things, virtual machines are great, and snapshotting is great there - but not for the main workstation (for me).

Personally, OS-state snapshotting is a feature that I liked to have on Windows (system restore points) but never missed on Linux, even if it would be nice to have. I broke Linux a lot at the beginning (because I liked experimenting). But since I have been working on a Linux machine, I know what better not to do, and I have never had the need to reinstall the system due to a broken OS.

Recommendation for the lazy: if you want to experiment, use a virtual machine. For your workstation, use a proper backup strategy and get to know how to validate and restore it. Backup is important!

And unlike Windows, if it were really necessary, it's a piece of cake to take any computer, install Linux on it, put your backup onto it and have every setting restored. You just do not have to endlessly reboot and click and update.
What would that proper strategy be? CloneZilla? Or LVM snapshotting?

--- Post updated at 12:27 AM ---

Quote:
Originally Posted by Peasant
Other than the licensing issues, ZFS on Linux is production quality.

I'm no zealot, but name me one other file system on Linux (besides btrfs) which has all of these features built in:
  • Built-in bit-rot protection, with integrated RAID levels, or via the copies feature on non-redundant pools.
  • Snapshots (accessible via one cd command), clones, compression, dedup and encryption one command away - granularity per filesystem, transparent to the user.
  • Incremental send/receive, local or remote, protocol agnostic (STDIN/STDOUT).
  • Quotas and reservations - one command away.
  • Not using the standard (not to say obsolete) LRU for caching, but ARC.
  • Ability to increase performance with dedicated devices for L2ARC/ZIL.
  • Sharing volumes/files using NFS, CIFS (NAS) or iSCSI (block) - a couple of commands away.
  • Compatibility between systems running OpenSolaris & BSD derivatives; endianness aware.
Those are features that enterprise storage arrays offer for a big price tag and additional licences.

Of course, much can be achieved with various other tools (LVM snapshots, rsync tools, hard links, xfsdump, dump etc.)

In ZFS it is all built in - one file system to rule them all.

Regards
Peasant.
Interesting analogy. But doesn't zfs use more RAM than btrfs? So isn't btrfs "the better ZFS" for desktops?

Also, this snapshot thing is really confusing me. How do I restore in the case of an unbootable system with an encrypted HDD?

--- Post updated at 12:29 AM ---

Quote:
Originally Posted by jim mcnamara
FWIW:
Don't Use ZFS on Linux: Linus Torvalds - It's FOSS

Torvalds will not allow inclusion of ZFS support in the Linux kernel because of Oracle's position on ZFS licensing. This was important to us because we have ZFS only on Solaris 11/12 boxes. We did not want different file systems for production Linux servers - but that is what we got... ext4.

How this plays out on a home desktop I cannot say exactly. I would recommend NOT using ZFS for Linux boot filesystems - as @Neo said.
Is the licensing the only problem for Torvalds?
# 11  
Old 03-02-2020
Quote:
Also what if the drive is encrypted ? I will be going with full disk encryption.
Encryption makes the backup task more difficult. That's why I avoid encryption except where I really need it (which has never been the case for me so far, so I won't be helpful on this topic due to my lack of experience with it). You probably want to encrypt your backup space too. You need to get acquainted with the tools for mounting the encrypted storage from within your running OS and from a preferred rescue system.
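For the common LUKS full-disk-encryption setup, the extra step from a rescue system is small; a sketch with hypothetical device names:

```shell
# Unlock the encrypted partition from the rescue system (prompts for the passphrase)
cryptsetup open /dev/sda2 cryptroot

# From here it behaves like any plain block device
mount /dev/mapper/cryptroot /mnt
# ... inspect, repair or restore files under /mnt ...

# Close everything down cleanly afterwards
umount /mnt
cryptsetup close cryptroot
```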

Quote:
What would be that proper backup strategy ? CloneZilla ? Or LVM snapshotting ?

Could you tell me how rsnapshot will help in a case where lets say the system is unbootable ? How do you restore then using rsnapshot ? Does it work when the system is unbootable ?
Since you're a beginner, CloneZilla can be a fallback solution until you're familiar enough with your Linux OS. With CloneZilla you can save and restore the OS partition without knowing very much about Linux.

For an easy start you may take a USB disk and put your data there. A more advanced and safer approach is a networked device that connects to your computer over the network and backs up the updated data regularly. (rsnapshot can be used with both variants.)

There are lots of backup tools. A simple one is the aforementioned rsnapshot; I would recommend it too. It is a file-based backup, in contrast to the image-based backup of CloneZilla. It is not primarily targeted at full system restores, but those can be done without problems too. If your system crashes completely, you can take the following steps to recover an OS installation from nothing but the files:

Full system restore to an empty disk
  1. boot into a rescue system via pendrive or CD/DVD (SystemRescueCd, grml or Knoppix are three good alternatives)
  2. partition, format and mount the hard disk (possibly a replacement hard disk)
  3. copy the data from the backup to the mounted disk
  4. change the filesystem config file (/etc/fstab)
  5. change the configuration of the boot loader (GRUB) and install the boot loader onto the hard disk
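Sketched as commands for a Debian-style system (device names, the backup path and the single-partition layout are assumptions; adapt them to your setup):

```shell
# Steps 2-3: format the disk, mount it, copy the files back
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt
rsync -aHAX /backup/daily.0/localhost/ /mnt/

# Step 4: the new filesystem has a new UUID, so /etc/fstab must be fixed
blkid /dev/sda1
vi /mnt/etc/fstab

# Step 5: reinstall the boot loader from inside the restored system
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
```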

These are quite a few steps, with commands to learn if you have never done it. But once you have got that down, restoration is easy. For me, with many years of Linux experience, this has become child's play. I have done it hundreds of times with very different systems, and in contrast to Windows, where this is just not possible: there may be challenging advanced Linux setups, but 99.9% are solvable, most of them with ease, once you have the basic knowledge.

... and of course one can just examine the problem using a rescue system and fix it without the need for a full restore.

Some cases I experienced which required fixing it from outside the running os:
  • forgotten root password: append init=/bin/bash at the boot screen, then reset the password
  • misconfiguration of the boot loader (GRUB): boot into rescue, mount the disk, fix the boot loader config
  • a corrupted filesystem where one or more essential files were missing (very rare): boot into rescue, copy the backup files of the base system onto the system
  • replacing a faulty disk or migrating a system from one piece of hardware to another: the full program from above
  • an installation action which wasn't thoroughly reviewed, so essential packages got removed (very rare): boot into rescue, install the packages again
  • troubleshooting with a password-protected boot loader: boot into rescue, do the fixing from there and/or remove the password protection from the boot loader config
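The forgotten-root-password case, for example, needs no rescue media at all; a sketch (the kernel line is edited at the GRUB menu):

```shell
# At the GRUB menu, press 'e' and append to the linux line:
#   init=/bin/bash
# Boot; then, in the resulting root shell:
mount -o remount,rw /    # the root fs comes up read-only
passwd root              # set a new password
sync
reboot -f                # force it, since no init is running
```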

Quote:
Is the licensing the only problem for Torvalds?
Yes. And for him and many others in the OSS community it is a complete showstopper. It is absolutely unacceptable to them to put work into something where lawyers could come and pry that work out of the hands of the community.

Quote:
But doesn't zfs use more RAM than btrfs ? So isn't btrfs "the better ZFS" for desktops ?
There are tales from the past about the enormous memory hunger of zfs. Those tales belong in the land of fairy tales and myths, but at their core there is a grain of truth: btrfs is more resource-efficient than zfs. I read that 1 GB should be an adequate minimum for a system using zfs. If you have a new system with 8 GB of RAM or more, this won't be a limiting factor. And again: if you really want to use deduplication, carefully read the documentation before you decide to use it! And as already written, the same goes for btrfs!

Last edited by stomp; 03-03-2020 at 07:19 AM..
# 12  
Old 03-04-2020
Another comment:

A con against zfs is the inability to remove VDEVs. A VDEV is a subpart (building block) of a volume.

Example:

Say you have a data volume consisting of a single disk (= one vdev, 1 TB). You decide to replace your single-disk vdev with a RAID-1 vdev (1 TB), since you want to add redundancy to be safe in case of a disk crash. That is possible. Over the years, you add another two RAID-1 vdevs (2x2 TB, 2x4 TB). So you then have three vdevs of two disks each making up your volume, with an overall capacity of 7 TB.

You now decide to increase your storage again and simultaneously reorganize your 3 x RAID-1 (6 disks => 7 TB usable) into 1 x RAIDZ2 (5x6 TB => 18 TB usable), to be able to cope with more simultaneous disk crashes (here, two disk crashes without data loss) and at the same time reduce the number of active disks (6 -> 5).

With zfs this is only possible by reformatting, since device removal is not fully supported yet. So you have to copy all the data, which must be done offline. ZFS top-level device removal is in development at the moment, but I expect some years to pass until even RAIDZ vdevs can be removed.

With LVM you can just add the new underlying disks and remove the old ones. No problem, and it can all be done online. btrfs can do that too, and is flexible enough for even more advanced migrations.
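With LVM, that kind of online migration is roughly (hypothetical device and volume group names):

```shell
# Bring the new disk into the volume group
pvcreate /dev/sdnew1
vgextend datavg /dev/sdnew1

# Move all extents off the old disk while the filesystems stay mounted
pvmove /dev/sdold1

# Retire the old disk from the volume group
vgreduce datavg /dev/sdold1
pvremove /dev/sdold1
```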

And here are some experience reports about btrfs and zfs from users:

ZFS Vs BTRFS : linux

Some not-too-long-ago data loss stories about btrfs are there as well. I assume the cause may be a lack of knowledge about operating the filesystem, but of course that is only a suspicion.

Last edited by stomp; 03-04-2020 at 06:52 AM..
# 13  
Old 03-04-2020
Regarding ZFS, I tend to follow Linus on this. Linus is a smart guy and he knows what he is talking about.


Don't Use ZFS on Linux: Linus Torvalds - Last updated January 10, 2020

Quote:
The benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place? - Linus Torvalds
# 14  
Old 03-04-2020
@Neo: Those are definitely good points. Regarding performance, I do not need very high performance from my zfs servers, so I have no comparison with other servers using btrfs/ext4.

Regarding the quote it has no real maintenance behind it either any more: as I see it, the zfs-on-linux code is in steady development. The names of the top contributors, Brian Behlendorf and Matthew Ahrens, are familiar to me as respected, internationally known programmers. So until I hear more, I assume the OpenZFS project is very active at the moment.

Code frequency . openzfs/zfs . GitHub
Contributors to openzfs/zfs . GitHub

Update:

Here is a more comprehensive benchmark comparison between zfs and btrfs (and partly also xfs and ext4). Note that it is from 2015, which matters because btrfs has evolved a lot in recent years:

https://www.diva-portal.org/smash/ge...FULLTEXT01.pdf

Quote:
Conclusion

The conclusion of the main findings is that Btrfs showed a high average throughput for most of the
tests, compared to the other file systems. By comparing the results to the results from previous
similar work it shows that the performance has improved greatly in recent years, especially the
multiple disk performance. Btrfs has had big problems with the stability of RAID 5 and RAID 6 and
only in the latest Linux kernel, at the time of writing, was it considered stable. Therefore a RAID 5
setup was also included in the experiments and the results surprisingly showed that Btrfs had
significantly higher average throughput than the other file systems. ZFS however did not perform as well, but the exact reason for this could not be established from the data gathered in the experiments, and is instead listed in the future work section (see Section 7.3).
Update-2:

A performance comparison of zfs vs. xfs from Percona - the MySQL experts (from 2018):

About ZFS Performance - Percona Database Performance Blog

Quote:
Conclusion

We have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in term of IOPS and size when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%.
Here is another guide on tuning zfs.

I just place it here for others to read; the topic is quite interesting for me right now. The document is a reminder to verify every setting you make in a complex system by testing it and reverting it if it does not improve the situation. It shows a lot of examples which had a negative performance impact, like LVM, compression, the default Proxmox CPU assignment (kvm64; for performance it is better to use the host CPU, but you may sacrifice live-migration capability if you have a mixed hardware pool), the Proxmox storage driver, ...
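A few of the knobs such guides revolve around, as a sketch (pool, dataset and device names are made up; as the guide stresses, benchmark before and after every change):

```shell
# Cheap and usually a win: transparent compression
zfs set compression=lz4 tank

# Match recordsize to the workload, e.g. the 16K pages of a database
zfs set recordsize=16k tank/db

# Dedicated fast devices: read cache (L2ARC) and sync-write log (ZIL/SLOG)
zpool add tank cache /dev/nvme0n1
zpool add tank log /dev/nvme1n1
```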

ZFS performance tuning - Martin Heiland

Last edited by stomp; 03-04-2020 at 12:21 PM..