Special Forums News, Links, Events and Announcements Linux online training resources Post 302488485 by thanhdat on Monday 17th of January 2011 11:04:12 AM
You can find many useful docs here:
Red Hat Enterprise Linux
 

3 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Perl Online Resources

Could someone suggest some good online Perl resources, tutorials, and references? Thanks. Gregg (1 Reply)
Discussion started by: gdboling
1 Replies

2. UNIX for Dummies Questions & Answers

What training resources do people use?

Out of curiosity, what training resources would people recommend for beginners to UNIX/scripting? Do you find forums such as these are better than books in providing help as and when? Or do you think more formal training is better? (2 Replies)
Discussion started by: kutz13
2 Replies

3. Solaris

Free learning resources and training from Sun Microsystems, plus discounted certification voucher

Hi all, if you are interested in taking Sun Microsystems training, from Java to business skills, drop by the SAI program; it's free for students and educational institutions. (0 Replies)
Discussion started by: h@foorsa.biz
0 Replies
MPSCNNBatchNormalizationNode(3) 			 MetalPerformanceShaders.framework			   MPSCNNBatchNormalizationNode(3)

NAME
       MPSCNNBatchNormalizationNode

SYNOPSIS
       #import <MPSNNGraphNodes.h>

       Inherits MPSNNFilterNode.

   Instance Methods
       (nonnull instancetype) - initWithSource:dataSource:

   Class Methods
       (nonnull instancetype) + nodeWithSource:dataSource:

   Properties
       MPSCNNBatchNormalizationFlags flags

Detailed Description
       A node representing batch normalization for inference or training.

       Batch normalization operates differently for inference and training. For inference, normalization is done according to a static statistical representation of the data saved during training. For training, this representation is ever evolving. In the low-level MPS batch normalization interface, training-time batch normalization is broken into two steps: calculation of the statistical representation of the input data, followed by normalization once the statistics are known for the entire batch. These are MPSCNNBatchNormalizationStatistics and MPSCNNBatchNormalization, respectively.

       When this node appears in a graph and is not required to produce an MPSCNNBatchNormalizationState -- that is, MPSCNNBatchNormalizationNode.resultState is not used within the graph -- it operates in inference mode and new batch-only statistics are not calculated. When that state node is consumed, the node is assumed to be in training mode: new statistics are calculated, written to the MPSCNNBatchNormalizationState, and passed along to MPSCNNBatchNormalizationGradient and MPSCNNBatchNormalizationStatisticsGradient as necessary. This allows you to construct an identical sequence of nodes for inference and training and expect the right thing to happen.

Method Documentation
       - (nonnull instancetype) initWithSource: (MPSNNImageNode *__nonnull) source
                                    dataSource: (nonnull id< MPSCNNBatchNormalizationDataSource >) dataSource

       + (nonnull instancetype) nodeWithSource: (MPSNNImageNode *__nonnull) source
                                    dataSource: (nonnull id< MPSCNNBatchNormalizationDataSource >) dataSource

Property Documentation
       - (MPSCNNBatchNormalizationFlags) flags [read], [write], [nonatomic], [assign]
             Options controlling how batch normalization is calculated.

             Default: MPSCNNBatchNormalizationFlagsDefault

Author
       Generated automatically by Doxygen for MetalPerformanceShaders.framework from the source code.

Version: MetalPerformanceShaders-100

MetalPerformanceShaders.framework			 Thu Feb 8 2018 			   MPSCNNBatchNormalizationNode(3)
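The two-step training-mode computation the description refers to (statistics first, then normalization once they are known for the whole batch) can be sketched framework-agnostically. This is not the MPS API; the function names here are illustrative stand-ins for MPSCNNBatchNormalizationStatistics and MPSCNNBatchNormalization, and the example operates on a flat list of scalars rather than per-channel image data.

```python
import math

def batch_statistics(batch):
    # Step 1 (training mode): compute the per-batch mean and variance,
    # analogous to what MPSCNNBatchNormalizationStatistics accumulates.
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return mean, var

def batch_normalize(batch, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    # Step 2: normalize using known statistics, then scale and shift,
    # analogous to MPSCNNBatchNormalization.
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

batch = [1.0, 2.0, 3.0, 4.0]

# Training mode: statistics come from the current batch.
mean, var = batch_statistics(batch)
trained_out = batch_normalize(batch, mean, var)

# Inference mode: statistics are fixed values saved during training,
# so no per-batch statistics pass is needed.
saved_mean, saved_var = 2.5, 1.25
inference_out = batch_normalize(batch, saved_mean, saved_var)
```

The same normalization routine serves both modes; only the source of the statistics differs, which mirrors how a single node sequence can serve inference and training in the graph.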
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.