Which is more expensive? | Unix Linux Forums | Programming

#1  08-13-2007, vino
Which is more expensive?

I have the following code snippets. Which one among these would be more expensive?


Code:
#1
        for (int fd = 0; fd < 1024; ++fd)
            close(fd);


Code:
#2
        for (int fd = 0; fd < 1024; fd += 8)
        {
            close(fd);
            close(fd+1);
            close(fd+2);
            close(fd+3);
            close(fd+4);
            close(fd+5);
            close(fd+6);
            close(fd+7);
        }


Last edited by vino; 08-13-2007 at 10:27 AM..
#2  08-13-2007, cbkihong
Quote:
Originally Posted by vino

Code:
#2
        for (int fd = 0; fd <= 128; ++fd)
        {
            close(fd);
            close(fd+1);
            close(fd+2);
            close(fd+3);
            close(fd+4);
            close(fd+5);
            close(fd+6);
            close(fd+7);
        }

But won't that only close fds up to 135?
#3  08-13-2007, vino
Quote:
Originally Posted by cbkihong
But won't that only close fds up to 135?
My very bad.

I actually meant a loop that runs 1024 times versus another loop that runs 128 times, closing 8 fds on each iteration.


Code:
        for (int fd = 0; fd < 1024; fd += 8)
        {
            close(fd);
            close(fd+1);
            close(fd+2);
            close(fd+3);
            close(fd+4);
            close(fd+5);
            close(fd+6);
            close(fd+7);
        }

I know close() gets called 1024 times either way. But what about the looping part? Is there any benefit at all?
#4  08-13-2007, reborg
I don't think there would be any real difference here. There might be a very marginal improvement in the second case because of the smaller number of loop comparisons you are doing, but I would expect that to be marginal. You are, after all, incrementing the same number of times in both cases and calling close() the same number of times.

Just curious, is this shutdown code or is it after forking that you are closing the open fds?
#5  08-13-2007, vino
Quote:
Originally Posted by reborg
I don't think there would be any real difference here. There might be a very marginal improvement in the second case because of the smaller number of loop comparisons you are doing, but I would expect that to be marginal. You are, after all, incrementing the same number of times in both cases and calling close() the same number of times.

Just curious, is this shutdown code or is it after forking that you are closing the open fds?
This is for closing the open fds after forking. I was told that you generally need to close only 64 fds instead of the RLIMIT_NOFILE rlim_cur limit after forking. Any thoughts?
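
For reference, a minimal sketch of how that rlim_cur value can be queried at run time, assuming POSIX getrlimit() and RLIMIT_NOFILE (the output formatting is just for illustration):

Code:
        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;

            /* RLIMIT_NOFILE: per-process limit on open file descriptors */
            if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
                printf("soft fd limit (rlim_cur) = %llu\n",
                       (unsigned long long) rl.rlim_cur);
            else
                perror("getrlimit");

            return 0;
        }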
#6  08-13-2007, porter
Quote:
Originally Posted by vino
I was told that you generally need to close only 64 fds instead of the RLIMIT_NOFILE rlim_cur limit after forking. Any thoughts?
You need to close the file descriptors you need to close.

The number 64 comes from the hard-coded limit in some editions of UNIX.

There are alternatives...

1. If the reason is that the program is going to call exec(), then the code that opens the file descriptors could set the close-on-exec bit (see the sketch after this list).

2. Modular threaded code could use pthread_atfork() to set up a callback that closes a file descriptor if required.
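
A minimal sketch of alternative 1, assuming POSIX fcntl() with FD_CLOEXEC (the file name here is only an example; on systems that support it, opening with O_CLOEXEC achieves the same thing atomically):

Code:
        #include <fcntl.h>
        #include <stdio.h>

        /* Mark an already-open descriptor close-on-exec so it is closed
           automatically across exec() and needs no manual close() loop. */
        static int set_cloexec(int fd)
        {
            int flags = fcntl(fd, F_GETFD);
            if (flags == -1)
                return -1;
            return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
        }

        int main(void)
        {
            int fd = open("/tmp/example.log",
                          O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd == -1 || set_cloexec(fd) == -1)
                perror("open/fcntl");
            return 0;
        }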
#7  08-13-2007, Perderabo
I would code your first snippet but compile with an optimizer. These days, optimizers will unroll loops if unrolling is advantageous. That particular loop is not a really great candidate for unrolling anyway. A better candidate would be:

Code:
        for (i = 0; i < 100; i++) A[i] = 0;

Most superscalar CPUs can execute

Code:
        A[i] = 0;
        A[i+1] = 0;
        A[i+2] = 0;

simultaneously. How deep it can go depends on the CPU, and that's why leaving unrolling to an optimizer is a good idea. The optimizer should know the target CPU. But your case involves a system call, which is different. You're only saving some loop overhead.
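
To make the unrolling idea concrete, a hand-unrolled version of that array-zeroing loop might look like the sketch below (purely illustrative, with N assumed to be the array length; as noted, letting the optimizer do this is normally the better choice):

Code:
        #define N 100
        static int A[N];

        /* Hand-unrolled by 4, with a cleanup loop for any remainder.
           This is only a sketch; an optimizing compiler can do the same. */
        static void zero_array(void)
        {
            int i;
            for (i = 0; i + 4 <= N; i += 4) {
                A[i]     = 0;
                A[i + 1] = 0;
                A[i + 2] = 0;
                A[i + 3] = 0;
            }
            for (; i < N; i++)      /* remainder when N is not a multiple of 4 */
                A[i] = 0;
        }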

Apparently, if you explicitly unroll a loop when it is not advantageous, most optimizers will not reroll the loop. At least this was the case circa 1998 when my copy of "High Performance Computing" was published. If you have that book, see chapter 8, "Loop Optimizations" and chapter 9, "Understanding Parallelism". This is still a great book and it's not just for Fortran programmers.

Anyway, if you are not in control of which fds might be open, you need to loop up to OPEN_MAX closing them. High fds might have been opened, and then setrlimit() called to lower the max fd.
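
For what it's worth, a sketch of that close-everything loop in the child after fork(), using sysconf(_SC_OPEN_MAX) as a run-time stand-in for OPEN_MAX (the fallback value of 1024 is an arbitrary assumption, and per the caveat above, if the limit was lowered with setrlimit() this can miss higher descriptors):

Code:
        #include <unistd.h>

        /* Close every descriptor above stderr in the child after fork().
           sysconf() can return -1 if the limit is indeterminate. */
        static void close_inherited_fds(void)
        {
            long max_fd = sysconf(_SC_OPEN_MAX);
            if (max_fd < 0)
                max_fd = 1024;      /* arbitrary fallback (assumption) */

            for (long fd = 3; fd < max_fd; ++fd)
                close(fd);
        }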