How to perform torch.meshgrid over multiple tensors parallely

Let's say we have a tensor x of size [60, 9] and a tensor y of size [60, 9].
Is it possible to do an operation like xx, yy = torch.meshgrid(x, y) such that xx and yy are of size [60, 9, 9], and xx[i, :, :], yy[i, :, :] is essentially torch.meshgrid(x[i], y[i])?

The built-in torch.meshgrid operation only accepts 1-D tensors. Is it possible to do the above operation without using for loops (which are inefficient, as they do not make use of the GPU's parallelism)?
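For a single pair of rows, the desired per-index result can be checked directly. A minimal sketch, assuming the tensor sizes from the question and random example data (the indexing="ij" argument is available in recent PyTorch versions):

```python
import torch

# Example inputs matching the sizes in the question: [60, 9] each.
x = torch.randn(60, 9)
y = torch.randn(60, 9)

# For one row i, torch.meshgrid produces two [9, 9] grids with
# xx_i[p, q] == x[i][p] and yy_i[p, q] == y[i][q] ('ij' indexing).
xx_i, yy_i = torch.meshgrid(x[0], y[0], indexing="ij")
print(xx_i.shape, yy_i.shape)  # torch.Size([9, 9]) torch.Size([9, 9])
```

The question asks for exactly this result computed for all 60 rows at once, producing [60, 9, 9] outputs.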


Solution:

I don't believe you will gain anything, since the initialization of the tensors is not done on the GPU. So a proposed approach would indeed be to loop over x and y, or to pass both tensors to map so that torch.meshgrid is called on each pair of rows:

grids = list(map(torch.meshgrid, x, y))
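To recover the [60, 9, 9] tensors the question asks for, the per-row grids can be stacked afterwards. A hedged sketch, using a list comprehension in place of map and torch.stack to reassemble the batch (random example data assumed):

```python
import torch

x = torch.randn(60, 9)
y = torch.randn(60, 9)

# Loop over the rows, build a [9, 9] grid pair per row, then stack
# the results back into two [60, 9, 9] tensors.
grids = [torch.meshgrid(xi, yi, indexing="ij") for xi, yi in zip(x, y)]
xx = torch.stack([g[0] for g in grids])  # xx[i] == torch.meshgrid(x[i], y[i])[0]
yy = torch.stack([g[1] for g in grids])
print(xx.shape, yy.shape)  # torch.Size([60, 9, 9]) torch.Size([60, 9, 9])
```

The loop itself runs on the CPU, but each row's grid construction is a cheap indexing operation, so for shapes this small the overhead is minor.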