Full Discussion: Solaris NIS+ and RedHat NIS+
Post 15482 by larry on Thursday 14th of February 2002 05:28:10 PM

Thanks

That's perfect!
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

linux redhat and solaris NIS+

Hello all, I am wondering if anyone has had success installing Red Hat Linux (on a PC box) as a client on a Solaris NIS+ network. I have found information on how to do this but have been unsuccessful; the information I found is a little outdated and not 100% complete. ... (0 Replies)
Discussion started by: larry
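A rough sketch of the usual NIS+ client bootstrap on Linux with the nis-utils package, for anyone landing here: the server name "nisplus01" and domain "example.com." are placeholders, the exact tool names vary by distribution, and a keylogin is additionally needed if the domain uses secure RPC:

    # set the NIS+ domain (note the trailing dot), initialize the
    # client against a known server, then verify with niscat
    domainname example.com.
    nisinit -c -H nisplus01
    niscat passwd.org_dir    # should list the passwd table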

2. UNIX for Dummies Questions & Answers

nis plus on linux redhat

Hello all, I wanted to know if anyone has successfully installed the nis+ utils and used a nis+ client on Linux (Red Hat) against a Solaris nis+ server. I was able to install the client software on the Linux box and can see the nis+ tables. The passwd table works fine, but the auto_master and... (10 Replies)
Discussion started by: larry
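One hedged guess at the missing piece for the automount tables: the automounter resolves its maps through the name-service switch, so /etc/nsswitch.conf needs an automount entry pointing at nisplus (which services are actually available depends on the glibc build):

    # /etc/nsswitch.conf (fragment)
    passwd:    files nisplus
    automount: nisplus files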

3. Red Hat

How to configure a Redhat 9.0 as NIS+ client

I would like to know how to configure Red Hat 9.0 as a NIS+ client. I have seen larry's post, but it does not give me enough information to do the configuration. Could anyone tell me how to do it? Please help! Thank you! (3 Replies)
Discussion started by: alexhon

4. Linux

configure NIS in RedHat 9.0

Hi all, I want to configure NIS on RedHat 9.0. Please help me; I need the configuration files. Please mail these configuration files to me. With regards, sudeendra (0 Replies)
Discussion started by: sudeendra
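For reference, a minimal sketch of a plain NIS (YP) client on Red Hat 9; "example.com" and "nis01" are placeholders:

    # /etc/sysconfig/network -- set the NIS domain at boot
    NISDOMAIN=example.com

    # /etc/yp.conf -- tell ypbind which server to use
    domain example.com server nis01

    # enable and start the client, then verify the binding
    chkconfig ypbind on
    service ypbind start
    ypwhich    # should print nis01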

5. UNIX for Dummies Questions & Answers

NIS map mail.aliases specified, but NIS not running

Hi all, I just took over the admin role from someone and I want to set up sendmail (just to send mail from the host). However, after I configured resolv.conf, nsswitch.conf and the hosts file, when I try to send a mail out, after I pressed Ctrl-D, it returned the following: "NIS map mail.aliases... (2 Replies)
Discussion started by: stancwong
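A common cause of that error, offered as a hedged guess: the name-service switch tells sendmail to resolve aliases through NIS even though no NIS client is running. Restricting alias lookups to local files usually clears it:

    # /etc/nsswitch.conf (fragment) -- look up aliases in files only
    aliases: files

    # rebuild the alias database, then retest
    newaliases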

6. UNIX for Advanced & Expert Users

SUSE 9 and 10 NIS clients with RedHat 8.0 NIS server not working

We have a RedHat 8.0 NIS master, with a RedHat 8.0 NIS slave. We also have a small number of SUSE 9.1 and SUSE 10 machines here for evaluation. However, no matter what I do, the SUSE machines will not talk to the NIS servers. If I broadcast for NIS servers for the specified NIS domain, it... (1 Reply)
Discussion started by: fishsponge
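A hedged first step for this kind of failure: confirm the SUSE client can reach ypserv over RPC at all before digging into distribution differences ("nismaster" is a placeholder for the server name):

    rpcinfo -p nismaster          # ypserv should appear in the portmapper listing
    rpcinfo -u nismaster ypserv   # RPC ping of ypserv over UDP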

7. Solaris

How to configure a NIS client bound to the NIS server in another subnet?

Hi, all. I have a Solaris client here that needs to bind to a NIS server in another subnet. Following is the configuration I made on the client: 1) edit /etc/inet/hosts to add an entry for the NIS server -- nserver01; 2) execute `domainname` to set the local NIS domain to the domain of the NIS server.... (1 Reply)
Discussion started by: sn_wukong
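Broadcast binding does not cross subnets, so the client must be given an explicit server list. A minimal sketch for Solaris, using nserver01 from the post (paths vary slightly by release):

    ypinit -c                     # enter nserver01 when prompted for servers
    /usr/lib/netsvc/yp/ypstop     # restart ypbind so it picks up the list
    /usr/lib/netsvc/yp/ypstart
    ypwhich                       # should report nserver01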

8. UNIX for Advanced & Expert Users

NIS server setup on redhat 4 AS4

Please reply if anyone has done a NIS server setup on RedHat. Please let me know the steps for NIS server configuration on RedHat and NIS client configuration on SUSE. (2 Replies)
Discussion started by: timus1980
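For the server side, a hedged sketch of a NIS master bootstrap on Red Hat; "example.com" is a placeholder and the service commands assume the SysV init tools of that era:

    domainname example.com
    service ypserv start
    /usr/lib/yp/ypinit -m         # build the initial maps from the local /etc files
    service yppasswdd start       # optional: let clients change passwords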

9. Shell Programming and Scripting

Validating that the NIS and NIS+ services are disabled

I'm creating a script for auditing our AIX boxes to ensure that they are built according to our system standards. I'm not sure of the logic for checking whether the NIS and NIS+ services are disabled. Any ideas? (2 Replies)
Discussion started by: sport
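One possible approach, sketched with the caveat that subsystem group names vary by AIX level: ask the System Resource Controller whether any NIS or NIS+ subsystem is active:

    #!/bin/sh
    # flag any NIS ("yp") or NIS+ ("nisplus") subsystem that SRC reports as active
    for grp in yp nisplus; do
        if lssrc -g "$grp" 2>/dev/null | grep -q active; then
            echo "FAIL: subsystems in group $grp are active"
        fi
    done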

10. Solaris

Solaris - Moving off NIS to something better

Mostly Solaris 11 with some 10 and even fewer 8 and 9. At the moment, NIS is used. Looking for a better way and to get rid of NIS. (13 Replies)
Discussion started by: psychocandy
TC(8)                                                                  Linux                                                                 TC(8)

NAME
       tbf - Token Bucket Filter

SYNOPSIS
       tc qdisc ... tbf rate rate burst bytes/cell ( latency ms | limit bytes ) [ mpu bytes [ peakrate rate mtu bytes/cell ] ]

       burst is also known as buffer and maxburst. mtu is also known as minburst.

DESCRIPTION
       The Token Bucket Filter is a classful queueing discipline available for traffic control with the tc(8) command.

       TBF is a pure shaper and never schedules traffic. It is non-work-conserving and may throttle itself, although packets are available, to ensure that the configured rate is not exceeded. It is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates.

       Much higher rates are possible, but at the cost of losing the minimal burstiness. In that case, data is on average dequeued at the configured rate but may be sent much faster at millisecond timescales. Because of further queues living in network adaptors, this is often not a problem.
ALGORITHM
       As the name implies, traffic is filtered based on the expenditure of tokens. Tokens roughly correspond to bytes, with the additional constraint that each packet consumes some tokens, no matter how small it is. This reflects the fact that even a zero-sized packet occupies the link for some time.

       On creation, the TBF is stocked with tokens which correspond to the amount of traffic that can be burst in one go. Tokens arrive at a steady rate, until the bucket is full. If no tokens are available, packets are queued, up to a configured limit. The TBF now calculates the token deficit, and throttles until the first packet in the queue can be sent.

       If it is not acceptable to burst out packets at maximum speed, a peakrate can be configured to limit the speed at which the bucket empties. This peakrate is implemented as a second TBF with a very small bucket, so that it doesn't burst.

       To achieve perfection, the second bucket may contain only a single packet, which leads to the earlier mentioned 1mbit/s limit. This limit is caused by the fact that the kernel can only throttle for at minimum 1 'jiffy', which depends on HZ as 1/HZ. For perfect shaping, only a single packet can get sent per jiffy - for HZ=100, this means 100 packets of on average 1000 bytes each, which roughly corresponds to 1mbit/s.
PARAMETERS
       See tc(8) for how to specify the units of these values.

       limit or latency
              Limit is the number of bytes that can be queued waiting for tokens to become available. You can also specify this the other way around by setting the latency parameter, which specifies the maximum amount of time a packet can sit in the TBF. The latter calculation takes into account the size of the bucket, the rate and possibly the peakrate (if set). These two parameters are mutually exclusive.

       burst  Also known as buffer or maxburst. Size of the bucket, in bytes. This is the maximum amount of bytes that tokens can be available for instantaneously. In general, larger shaping rates require a larger buffer. For 10mbit/s on Intel, you need at least 10kbyte buffer if you want to reach your configured rate!

              If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket. The minimum buffer size can be calculated by dividing the rate by HZ.

              Token usage calculations are performed using a table which by default has a resolution of 8 packets. This resolution can be changed by specifying the cell size with the burst. For example, to specify a 6000 byte buffer with a 16 byte cell size, set a burst of 6000/16. You will probably never have to set this. Must be an integral power of 2.

       mpu    A zero-sized packet does not use zero bandwidth. For ethernet, no packet uses less than 64 bytes. The Minimum Packet Unit determines the minimal token usage (specified in bytes) for a packet. Defaults to zero.

       rate   The speed knob. See remarks above about limits! See tc(8) for units.

       Furthermore, if a peakrate is desired, the following parameters are available:

       peakrate
              Maximum depletion rate of the bucket. The peakrate does not need to be set, it is only necessary if perfect millisecond timescale shaping is required.

       mtu/minburst
              Specifies the size of the peakrate bucket. For perfect accuracy, should be set to the MTU of the interface. If a peakrate is needed, but some burstiness is acceptable, this size can be raised. A 3000 byte minburst allows around 3mbit/s of peakrate, given 1000 byte packets. Like the regular burstsize you can also specify a cell size.

EXAMPLE & USAGE
       To attach a TBF with a sustained maximum rate of 0.5mbit/s, a peakrate of 1.0mbit/s, a 5kilobyte buffer, with a pre-bucket queue size limit calculated so the TBF causes at most 70ms of latency, with perfect peakrate behaviour, issue:

       # tc qdisc add dev eth0 handle 10: root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540

       To attach an inner qdisc, for example sfq, issue:

       # tc qdisc add dev eth0 parent 10:1 handle 100: sfq

       Without inner qdisc TBF queue acts as bfifo. If the inner qdisc is changed the limit/latency is not effective anymore.
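       As an aside, a quick way to confirm the shaper took effect (using the eth0 example above):

       # tc -s qdisc show dev eth0

       The -s flag prints per-qdisc statistics, including bytes sent and packets dropped by the TBF.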
SEE ALSO
       tc(8)

AUTHOR
       Alexey N. Kuznetsov, <kuznet@ms2.inr.ac.ru>. This manpage maintained by bert hubert <ahu@ds9a.nl>

iproute2                                                        13 December 2001                                                              TC(8)