Failure rate of a node / Data center
Post 303020515 by Don Cragun on Saturday, 21 July 2018 at 08:28 PM (Shell Programming and Scripting)
You said you want the MTBF for each node, but node 127.0.0.1 was always down: all three times it appeared in the data in post #4, and both times it appeared in the data in post #1.

If a node is never up, isn't its mean time between failures zero? What value were you expecting?
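As a minimal sketch of one way to compute a per-node MTBF, the awk program below assumes one observation per line in the form "node epoch_seconds state" (state UP or DOWN), sorted by time; the field layout, node addresses, and timestamps are all placeholders, since the actual data from posts #1 and #4 is not reproduced in this post. It treats MTBF as total observed up-time divided by the number of failures, so a node that is never up gets 0:

```shell
# Hypothetical sketch: input format and sample data are assumptions.
result=$(awk '
    $3 == "UP"   { up_since[$1] = $2 }        # node came up: remember when
    $3 == "DOWN" {
        if ($1 in up_since) {                 # close an open up interval
            uptime[$1] += $2 - up_since[$1]
            delete up_since[$1]
        }
        failures[$1]++                        # count the failure
    }
    END {
        for (n in failures)                   # never-up nodes have uptime 0
            printf "%s MTBF=%.1f\n", n, uptime[n] / failures[n]
    }
' <<'EOF'
192.168.0.1 1000 UP
192.168.0.1 1600 DOWN
192.168.0.1 2000 UP
192.168.0.1 2400 DOWN
127.0.0.1 1000 DOWN
127.0.0.1 2000 DOWN
EOF
)
echo "$result"
```

With this sample input, 192.168.0.1 accumulates 1000 seconds of up-time over 2 failures (MTBF 500.0), while 127.0.0.1, which is never seen up, gets an MTBF of 0.0.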
NG_HUB(4)                  BSD Kernel Interfaces Manual                  NG_HUB(4)

NAME
     ng_hub -- packet distribution netgraph node type

SYNOPSIS
     #include <netgraph/ng_hub.h>

DESCRIPTION
     The hub node type provides a simple mechanism for distributing packets
     over several links.  Packets received on any of the hooks are forwarded
     out the other hooks.  Packets are not altered in any way.

HOOKS
     A hub node accepts any request to connect, regardless of the hook name,
     as long as the name is unique.

CONTROL MESSAGES
     This node type supports the generic control messages, plus the following:

     NGM_HUB_SET_PERSISTENT
             This command sets the persistent flag on the node, and takes no
             arguments.

SHUTDOWN
     This node shuts down upon receipt of a NGM_SHUTDOWN control message, or
     when all hooks have been disconnected.  Setting the persistent flag via a
     NGM_HUB_SET_PERSISTENT control message disables automatic node shutdown
     when the last hook gets disconnected.

SEE ALSO
     netgraph(4), ng_bridge(4), ng_ether(4), ng_one2many(4), ngctl(8),
     nghook(8)

HISTORY
     The ng_hub node type appeared in FreeBSD 5.3.

AUTHORS
     Ruslan Ermilov <ru@FreeBSD.org>

BSD                               May 5, 2010                               BSD
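To illustrate the hook and control-message behavior the man page describes, here is a hypothetical ngctl(8) session. This is a configuration fragment, not a runnable script: it requires a FreeBSD system with netgraph support and root privileges, and the interface names em0 and em1 and the hook name link2 are placeholders:

```shell
# Hypothetical fragment: assumes FreeBSD, root, and em0/em1 interfaces.
kldload ng_hub ng_ether                  # load the node types if not present
ngctl mkpeer em0: hub lower lower        # create a hub attached below em0
ngctl name em0:lower myhub               # name the new hub node
ngctl connect em1: myhub: lower link2    # attach a second interface; any
                                         # unique hook name is accepted
ngctl msg myhub: setpersistent           # keep the node alive after the
                                         # last hook disconnects
```

Without the final setpersistent message, the hub node would shut down automatically as soon as its last hook is disconnected, as the SHUTDOWN section notes.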
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.