Python: make dual vector dot-product more pythonic

I have this dot product, which calculates weighted means and is applied to two columns in a list:
Code:
# calculate weighted values in sequence
# columns 5 and 6 hold the two series to be smoothed
for i in range(len(temperatures) - len(weights)):
    temperatures[i].append(sum([weights[j]*temperatures[i+j][5] for j in range(len(weights))]))
    temperatures[i].append(sum([weights[j]*temperatures[i+j][6] for j in range(len(weights))]))

The calculation is a running dot product, i.e. the list of temperature samples is far larger than the list of weights, hence len(weights) is subtracted from the upper bound of the main loop so the window never runs past the end of the list.
Each iteration traverses the list of weights twice (once per column), which is inefficient and degrades performance. How could this be done in a more pythonic way?
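One possible single-pass rewrite, sketched under the assumptions from the snippet above (the rows of temperatures keep their raw samples in columns 5 and 6, and weights is much shorter than temperatures): zip pairs each weight with the corresponding row of the window, so both weighted sums are accumulated in a single traversal of the weights.
Code:
# calculate weighted values in sequence, one traversal of the weights per window
n = len(weights)
for i in range(len(temperatures) - n):
    window = temperatures[i:i + n]          # rows covered by this window
    s5 = s6 = 0.0
    for w, row in zip(weights, window):     # single pass over the weights
        s5 += w * row[5]
        s6 += w * row[6]
    temperatures[i].append(s5)
    temperatures[i].append(s6)

A per-column one-liner with sum() and zip() reads nicely too, but it would still walk the weights once per column; the explicit accumulators keep it to one pass. If NumPy were acceptable, numpy.convolve (or a dot product over a sliding window) would express the same running weighted mean even more compactly.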

I also have concerns about the main loop. Would this be considered more pythonic?
Code:
# calculate weighted values in sequence
for i in range(len(temperatures)):
    try:
        # weighted calculation here
        pass
    except:
        # do nothing, because array out of bounds
        pass

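For comparison, a hedged sketch of the try/except (EAFP) variant, reusing the names from the original snippet. The usual objection is that a bare except: also swallows unrelated bugs (a mistyped variable name raises NameError and would be silently ignored), so if this style is used at all, catching IndexError specifically and breaking out of the loop keeps the intent explicit. Keeping the bound in range(), as in the first snippet, is generally considered the more pythonic choice here.
Code:
# EAFP variant: catch only the out-of-bounds case so other errors still surface
for i in range(len(temperatures)):
    try:
        temperatures[i].append(sum(weights[j]*temperatures[i+j][5] for j in range(len(weights))))
        temperatures[i].append(sum(weights[j]*temperatures[i+j][6] for j in range(len(weights))))
    except IndexError:
        break   # the window ran past the end of the list; later windows would too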

Last edited by figaro; 11-18-2019 at 06:01 PM. Reason: Emphasise that the lists are not of the same length, i.e. the dot product calculates a running weighted mean.
 
