Organization in a big file system


# 1  

hello

I have a file system with 737 GB of data (Oracle).
I want to add 230 GB.
An IBM technician tells me that it's better (for performance) to back up the file system, rebuild it with the new 250 GB, and restore it...
737 GB to back up, that is not very simple...!!!!
Can you confirm what the IBM technician says?

thank you for your help.
# 2  
If you create a large filesystem and then load it with data, the data will be distributed more or less evenly across the entire filesystem. If you extend a filesystem, the free space is mostly at the end of the disk. Either one could be better than the other under various circumstances. Suppose you never add any data... the heads never need to move to the last 25% (or so) of the disk. Suppose a file near the beginning of the disk grows... it will probably be extended into cylinder groups at the end of the disk. Now the heads must move from one end of the disk to the other just to read that file. Because this is Oracle, you need an Oracle expert to comment on this.

But even if you simply extend the filesystem... (or even if you leave it at the same size!) you absolutely need a backup! What if you try to extend it and the process fails? If this filesystem is so large that you are having trouble making backups, maybe you should not extend it. Maybe it is time for a new filesystem instead. And talk to your DBA. You probably need to use some kind of Oracle backup. You cannot just back up the .dbf files of a live database.
# 3  
I agree. My impression would be that you would probably want to back up and rebuild for performance reasons. As for backups... you probably should already be doing that anyway.
# 4  
If you cannot have your DBAs shut down the database for the duration of your backup, perhaps they can put it into "hot backup mode"? This puts changes into separate files so that you can get a consistent backup of the 6 files Oracle uses. You can't back up a "live" Oracle database because if these 6 files aren't consistent, Oracle won't touch them.
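A rough sketch of what hot backup mode looks like from the command line; this assumes a local sysdba login and an example datafile path, so adapt the names to your setup (and have your DBA drive this):

```shell
# Put the database into hot backup mode (older Oracle releases do this
# per tablespace: ALTER TABLESPACE users BEGIN BACKUP;)
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EOF

# While in backup mode, a filesystem-level copy of the datafiles is safe;
# changes accumulate in the redo logs in the meantime.
# cp /oradata/*.dbf /backup/

# Take the database back out of backup mode as soon as the copy is done
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE END BACKUP;
EOF
```

Don't leave the database in backup mode longer than necessary; redo volume grows while it is active.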

There is something about "PP size". I've yet to see this explained very well (I'm more of a Solaris guy). Essentially, you can only extend a filesystem so far. If you know you're going to expand a given filesystem, you can go in and adjust the numbers when you build it, making them big enough to allow expansion down the road (what exactly this costs you, and why you wouldn't just do it all the time, I don't have a clear understanding of either). If you just accept the defaults, then if you expand and expand and expand, you'll eventually reach a point where you can't expand anymore (something runs out of fingers and toes to count on). Whereas if you build a new filesystem from scratch, and you make it big, the defaults will bump up to bigger numbers.
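For the AIX side of this, the numbers in question live in the volume group; a sketch with example VG and disk names:

```shell
# Show the volume group's PP size and the "MAX PPs per PV" limit
lsvg datavg

# A larger PP size chosen at creation time raises the maximum size a
# logical volume (and thus a filesystem) can ever grow to; here 256 MB
# PPs instead of whatever default mkvg would pick
mkvg -s 256 -y datavg hdisk4

# On classic volume groups the PP-per-PV limit can also be traded off:
# chvg -t 2 doubles max PPs per disk but halves the max number of disks
chvg -t 2 datavg
</shell>
```

This is the "mess with the numbers when you build it" cost: a bigger PP size wastes more space per partially-used partition, and the -t factor trades disk count for partition count.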

Sorry I don't have a better explanation; I don't understand it well myself. Perhaps someone who does understand all this can explain it better. I'm just thinking that perhaps this is what IBM is talking about.

Of course, you can call IBM back and ask them to explain it to you...

PS: It is my understanding that a JFS2 filesystem doesn't have all this. I don't know when they added JFS2 or whether it is an option for you.
# 5  
With the original JFS, I think there was a limit on how many "PP"s you can have per physical volume... or something like that. With JFS2, I don't think there's a limit that anyone would realistically reach for most applications. Certainly, 1 TB in this case could easily be attained.
# 6  
thank you for your responses.
So I understand better: the new physical volumes will be at the end of the file system, and the heads must move from the beginning to the end.
The data is indeed already backed up, but there is always the risk of a corrupted backup... Or perhaps, if I install 1 TB more, I move the file system to the new disks, verify that everything is right, and delete the old file system.
The file system is JFS2, and I have verified that I can extend it.
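For a JFS2 filesystem the online grow is a one-liner; the mount point below is an example, and the G size suffix assumes a reasonably recent AIX level (older levels take the size in 512-byte blocks):

```shell
# Grow a mounted JFS2 filesystem by 230 GB, provided the volume group
# has enough free PPs; no unmount or downtime needed
chfs -a size=+230G /oradata

# Confirm the new size in GB
df -g /oradata
```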
# 7  
hello

IBM advises me that there is also the possibility of using reorgvg.
When I add a disk to a big volume group, the data is not written to the new disk; AIX writes to the old disks and only when they are full does it start writing to the new one.
With reorgvg, the data is redistributed across every disk, including the new one, for best performance.
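That sequence might look like the following; VG, disk, and LV names are examples, and note that reorgvg on a 700+ GB volume group can run for hours and keeps the disks busy while it works:

```shell
# Add the new disk to the volume group
extendvg datavg hdisk5

# Reorganize every logical volume in the VG so allocations spread
# across all physical volumes, including the new one
reorgvg datavg

# Or limit the pass to the logical volume that matters:
# reorgvg datavg oralv
```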
