Building Linux cluster for mechanical engineering software
Posted by Don Cragun in High Performance Computing, 09-26-2014, 03:34 AM
Some of your questions are so vague that it is hard to make any informed suggestions. How would you respond if you got a request from someone to tell them how to choose the best vehicle? (Who is going to be driving it? How many passengers do you need to carry? How much weight do you need to be able to tow? How much secured cargo space do you need? What are the weather conditions where it will be driven? What type of terrain does it need to traverse? ...)

I know very little about ME and nothing about Ansys CFD. Are you trying to build a cluster to support hundreds of users submitting thousands of jobs? Are you trying to build a cluster that can break a single huge job into thousands of threads and run all of those threads simultaneously? Do you have any experience writing thread-safe code?
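To make "thread-safe" concrete, here is a minimal sketch, assuming nothing more than an ordinary POSIX system with pthreads; it is purely illustrative and has nothing to do with Ansys or with your particular cluster. Several worker threads each compute a private partial result, and a mutex serializes the one update to a shared total, which is what keeps the answer correct no matter how the threads interleave.

Code:
/*
 * Illustrative only: each thread computes a private partial sum and
 * then adds it into a shared total.  The mutex makes that shared
 * update thread-safe; without it, the concurrent read-modify-write
 * of "total" could interleave across threads and silently lose
 * updates.  Build with:  cc -pthread partial_sums.c
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS     8
#define N_PER_THREAD 100000L

static double total = 0.0;
static pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    double partial = 0.0;

    /* Work on this thread's own slice of the problem (no sharing). */
    for (long i = 0; i < N_PER_THREAD; i++)
        partial += (double)(id * N_PER_THREAD + i);

    /* Only the update of shared state needs to be serialized. */
    pthread_mutex_lock(&total_lock);
    total += partial;
    pthread_mutex_unlock(&total_lock);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (long t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("total = %.0f\n", total);
    return 0;
}

Commercial CFD packages generally handle this kind of parallel bookkeeping internally (typically with MPI across nodes), so the real question is whether you expect to write and maintain code like this yourself, or only to run someone else's.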

Can you use only open-source software? Of course you can! You can write all of the code you need and make it available for everyone to use as they see fit.

Does open-source software already exist for all of the code you want to run? How can we guess at that from what you've told us? We have no idea what all of the code you want to run needs to do.

If you don't know the difference between a heterogeneous cluster and a homogeneous cluster, you probably don't have the background needed to design the cluster you want. Please consider hiring an architect with experience setting up and running an HPC data center, someone you can sit down with to discuss budget, capabilities, computing projects to be run, users to be supported, software to be run, software to be written, etc., etc., etc. Setting up an HPC data center is a very complex, expensive undertaking.
 

4 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Building and running software on different Linux kernel versions

My query is: if I build software on a specific Linux kernel and then try to run it on another Linux kernel, what problems are possible, or what errors are most likely to appear, when running the binary on an updated version of Linux? (1 Reply)
Discussion started by: mobydick

2. High Performance Computing

Building a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris

Provides a description of how to set up a Solaris Cluster Express cluster in a VirtualBox on OpenSolaris. More... (0 Replies)
Discussion started by: Linux Bot

3. High Performance Computing

Building a Linux Virtual Server cluster

Hi guys, I'm busy building an LVS-NAT cluster on Red Hat server 5.1 and I need a kernel that has LVS capabilities for Red Hat server 5.1. Is there anyone who can advise me where I can get this kernel? I have already visited the following site, Ultra Monkey, and it has old kernels e.g. 2.4.20... (2 Replies)
Discussion started by: Linux Duke

4. Red Hat

Free Cluster software with Red Hat Linux 5.0

Hi, I would like to know whether any free cluster software comes with the Red Hat Enterprise Linux media, or whether it needs to be purchased separately. (3 Replies)
Discussion started by: manoj.solaris