High Performance Computing: Building Linux cluster for mechanical engineering software
Post 302918831 by DGPickett on Thursday, 25 September 2014, 05:44 PM
Just a few thoughts:
  • Don't overlook Lustre, the high-bandwidth distributed file system (a parallel alternative to NFS).
  • Virtualization is a step in the opposite direction, but for some workloads it can be appropriate. Watch your reliability and administrative models, as every additional VM is that much more load on both.
  • Clusters are usually homogeneous. There are other tactics for distributed processing that are more heterogeneous-friendly.
  • A remote-display tool like VNC often performs much better than plain remote X, because the X server runs close to the application and latency to it stays low.
 

UPSCLI_LIST_START(3)                       NUT Manual                       UPSCLI_LIST_START(3)

NAME
       upscli_list_start - begin multi-item retrieval from a UPS

SYNOPSIS
       #include <upsclient.h>

       int upscli_list_start(UPSCONN_t *ups, int numq, const char **query)
DESCRIPTION
       The upscli_list_start() function takes the pointer ups to a UPSCONN_t
       state structure, and the pointer query to an array of numq query
       elements. It builds a properly formatted request from those elements
       and transmits it to upsd(8).

       Upon success, the caller must call upscli_list_next(3) to retrieve the
       elements of the list. Failure to retrieve the list will most likely
       result in the client getting out of sync with the server, due to
       buffered data.
USES
       This function implements the "LIST" command in the protocol. As a
       result, you can use it to request many different things from the
       server. Some examples are:

       o LIST UPS
       o LIST VAR <ups>
       o LIST RW <ups>
       o LIST CMD <ups>
       o LIST ENUM <ups> <var>
       o LIST RANGE <ups> <var>
QUERY FORMATTING
       To see the list of variables on a UPS called su700, the protocol
       command would be LIST VAR su700. To start that list with this
       function, you would populate query and numq as follows:

           int numq;
           const char *query[2];

           query[0] = "VAR";
           query[1] = "su700";
           numq = 2;

       All escaping of special characters and quoting of elements with
       spaces are handled for you inside this function.
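       With query populated this way, the call itself might look like the
       short sketch below. This is an illustration, not part of the original
       manual; ups is assumed to be a connection already opened with
       upscli_connect(3):

           /* sketch: ups is an open connection (see upscli_connect(3)) */
           if (upscli_list_start(&ups, numq, query) < 0) {
                   /* a mismatched response sets UPSCLI_ERR_PROTOCOL;
                      see ERROR CHECKING below */
                   fprintf(stderr, "LIST VAR failed: %s\n",
                           upscli_strerror(&ups));
           }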
ERROR CHECKING
       This function checks the response from upsd(8) against your query. If
       it is not starting a list, or is starting the wrong type of list, it
       will return an error code. When this happens, upscli_upserror(3) will
       return UPSCLI_ERR_PROTOCOL.
RETURN VALUE
       The upscli_list_start() function returns 0 on success, or -1 if an
       error occurs.
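       The following self-contained sketch ties the pieces together: it
       connects to a local upsd, lists the variables on one UPS, and prints
       them. It is an illustration added here, not part of the original
       manual; the host "localhost", the UPS name "su700", and the default
       port 3493 are assumptions, and the numa/answer parameter types should
       be checked against the upsclient.h shipped with your NUT version:

           #include <stdio.h>
           #include <upsclient.h>

           int main(void)
           {
                   UPSCONN_t ups;
                   const char *query[2] = { "VAR", "su700" }; /* LIST VAR su700 */
                   char **answer;
                   unsigned int numa;

                   if (upscli_connect(&ups, "localhost", 3493, UPSCLI_CONN_TRYSSL) < 0) {
                           fprintf(stderr, "connect: %s\n", upscli_strerror(&ups));
                           return 1;
                   }

                   if (upscli_list_start(&ups, 2, query) < 0) {
                           fprintf(stderr, "list start: %s\n", upscli_strerror(&ups));
                           upscli_disconnect(&ups);
                           return 1;
                   }

                   /* upscli_list_next(3) returns 1 while items remain, 0 at the end */
                   while (upscli_list_next(&ups, 2, query, &numa, &answer) == 1) {
                           /* each answer looks like: VAR su700 <varname> <value> */
                           if (numa >= 4)
                                   printf("%s = %s\n", answer[2], answer[3]);
                   }

                   upscli_disconnect(&ups);
                   return 0;
           }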
SEE ALSO
       upscli_fd(3), upscli_get(3), upscli_readline(3), upscli_sendline(3),
       upscli_ssl(3), upscli_strerror(3), upscli_upserror(3)

Network UPS Tools                          05/31/2012                       UPSCLI_LIST_START(3)