Python: make dual vector dot-product more pythonic

I have this dot product, calculating weighted means, which is applied to two columns in a list of lists:
Code:
# calculate weighted values in sequence
for i in range(len(temperatures) - len(weights)):
    temperatures[i].append(sum([weights[j]*temperatures[i+j][5] for j in range(len(weights))]))
    temperatures[i].append(sum([weights[j]*temperatures[i+j][6] for j in range(len(weights))]))

The calculation is a running dot product, i.e. the list of temperature samples is far larger than the list of weights, hence the subtraction of len(weights) in the range of the main loop.
Each pass of the main loop traverses the list of weights twice, once per column, which is inefficient and degrades performance. How could this be done in a more pythonic way?
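One way to avoid the double traversal is to accumulate both columns in a single inner loop. A minimal sketch (it assumes, as in the code above, that columns 5 and 6 hold the values to be smoothed, and it keeps the same loop bound):
Code:
# accumulate both weighted sums in one pass over the weights
w = len(weights)
for i in range(len(temperatures) - w):
    sum5 = sum6 = 0.0
    for weight, row in zip(weights, temperatures[i:i + w]):
        sum5 += weight * row[5]
        sum6 += weight * row[6]
    temperatures[i].extend([sum5, sum6])

If numpy is an option, the running dot product for one column can also be written as a convolution with the reversed weights, e.g. numpy.convolve(column, weights[::-1], mode="valid"), which yields one value per full window.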

I also have concerns about the main loop. Would the following be considered more pythonic?
Code:
# calculate weighted values in sequence
for i in range(len(temperatures)):
    try:
        # weighted calculation here
        ...
    except IndexError:
        # do nothing: the window runs past the end of the list
        pass
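A variant I am also considering: catching IndexError specifically, as above, is safer than a bare except, which would also hide unrelated bugs; contextlib.suppress expresses the same intent more compactly. A sketch, with the calculation body left elided as above:
Code:
from contextlib import suppress

for i in range(len(temperatures)):
    with suppress(IndexError):
        # weighted calculation here
        ...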


Last edited by figaro; 11-18-2019 at 06:01 PM. Reason: Emphasise the fact that the lists are not of the same length, i.e. the dot product calculates a running weighted mean.
 
