RuntimeError: mat1 and mat2 shapes cannot be multiplied (5400×64 and 5400×64)

I’m working on an image classification network and I’m having trouble getting the input/output sizes right in the forward() function. I can’t figure out how to fix it, because the two shapes in the error message look the same to me. The error comes from this line: x = F.relu(self.fc1(x)).

Can anyone please help me with this problem?

That’s my code:


import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=2)
        self.conv2 = nn.Conv2d(8, 12, kernel_size=2)
        self.conv3 = nn.Conv2d(12, 18, kernel_size=2)
        self.conv4 = nn.Conv2d(18, 24, kernel_size=2)
        self.fc1 = nn.Linear(5400, 64)
        self.fc2 = nn.Linear(64, 2)

    def forward(self, x):
        print(f'1. {x.size()}')
        x = self.conv1(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'2. {x.size()}')
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'3. {x.size()}')
        x = self.conv3(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'4. {x.size()}')
        x = self.conv4(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'5. {x.size()}')
        x = x.view(-1, x.size(0)) 
        print(f'6. {x.size()}')
        x = F.relu(self.fc1(x))
        print(f'7. {x.size()}')
        x = self.fc2(x)
        print(f'8. {x.size()}')
        
        return torch.sigmoid(x)

That’s the print output:

1. torch.Size([64, 3, 256, 256])
2. torch.Size([64, 8, 127, 127])
3. torch.Size([64, 12, 63, 63])
4. torch.Size([64, 18, 31, 31])
5. torch.Size([64, 24, 15, 15])
6. torch.Size([5400, 64])
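As a sanity check on those printed sizes: each Conv2d with kernel_size=2 (stride 1, no padding) shrinks height and width by 1, and each max_pool2d(x, 2) then halves them with floor division. A quick sketch of that arithmetic, using the 256×256 input from the question:

```python
def after_block(size):
    # Conv2d(kernel_size=2, stride=1, padding=0): size - 1,
    # then max_pool2d(x, 2): floor-divide by 2
    return (size - 1) // 2

size = 256
sizes = []
for _ in range(4):          # four conv + pool blocks
    size = after_block(size)
    sizes.append(size)
print(sizes)                # [127, 63, 31, 15]
print(24 * 15 * 15)         # 5400 features per image going into fc1
```

So the final feature map is 24 × 15 × 15 = 5400 values per image, which matches the fc1 input size of 5400.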

Solution:

Changing

x = x.view(-1, x.size(0))

to

x = x.view(-1, 5400)

should solve your problem. You can see in print 6:

6. torch.Size([5400, 64])

that the batch size 64 ended up on axis 1 instead of axis 0, so each row passed to the fully-connected layer has only 64 features. fc1 expects 5400 features per sample, so flatten with 5400 on the last axis: you may not know the batch size in advance, but you do know the input to the fully-connected layer is 5400. (Equivalently, x = x.view(x.size(0), -1) keeps the batch dimension first and infers the rest.)
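For reference, a minimal runnable sketch with a dummy tensor of the shape from print 5, comparing the buggy flatten against a batch-first flatten:

```python
import torch

# Dummy feature map with the shape from print 5: (batch, channels, H, W)
x = torch.randn(64, 24, 15, 15)

# The question's reshape puts the batch dimension last:
bad = x.view(-1, x.size(0))
print(bad.shape)        # torch.Size([5400, 64])

# Batch-first flatten: keep dim 0, collapse the rest into 24*15*15 = 5400
good = x.view(x.size(0), -1)
print(good.shape)       # torch.Size([64, 5400])

# Now fc1 accepts it
fc1 = torch.nn.Linear(5400, 64)
print(fc1(good).shape)  # torch.Size([64, 64])
```

x.view(-1, 5400) gives the same result here; x.view(x.size(0), -1) has the advantage of working for any batch size without hard-coding 5400.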
