I’m trying to implement a custom neural network model using PyTorch for a classification task.
When I inspect the output probabilities, they don't sum to 1. I added a torch.nn.Softmax(dim=1) layer at the end of my model, which should normalize the outputs into probabilities, but it doesn't seem to work.
import torch

def custom_model(input_size, output_size):
    model = torch.nn.Sequential(
        torch.nn.Linear(input_size, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, output_size),
        torch.nn.Softmax(dim=1)  # Softmax layer for classification
    )
    return model
input_size = 10
output_size = 5
model = custom_model(input_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.001)
input_data = torch.randn(32, input_size)
target = torch.randint(0, output_size, (32,))
output = model(input_data)
loss = criterion(output, target)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print(f'Loss: {loss.item()}')
Can anyone help?
>Solution :
I think the issue is that the softmax was put in the wrong place. torch.nn.CrossEntropyLoss expects raw logits, not probabilities: it applies log-softmax internally before computing the negative log-likelihood. With a Softmax layer inside the model, softmax effectively gets applied twice, which flattens the gradients and hurts training. Remove the Softmax layer from the model and pass the raw logits to the loss; if you need probabilities at inference time, apply torch.softmax(output, dim=1) yourself, and those will sum to 1.
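As a sketch of the corrected setup (same layer sizes and hyperparameters as in your code, with the Softmax layer removed and softmax applied only when probabilities are actually needed):

```python
import torch

def custom_model(input_size, output_size):
    # No Softmax here: CrossEntropyLoss applies log-softmax internally,
    # so the model should output raw, unnormalised logits.
    return torch.nn.Sequential(
        torch.nn.Linear(input_size, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, output_size),
    )

input_size, output_size = 10, 5
model = custom_model(input_size, output_size)
criterion = torch.nn.CrossEntropyLoss()  # expects logits + class indices
optimiser = torch.optim.SGD(model.parameters(), lr=0.001)

input_data = torch.randn(32, input_size)
target = torch.randint(0, output_size, (32,))

logits = model(input_data)        # shape (32, 5), raw scores
loss = criterion(logits, target)  # log-softmax happens inside the loss
optimiser.zero_grad()
loss.backward()
optimiser.step()

# Only convert to probabilities when you need them (e.g. at inference);
# each row of `probs` now sums to 1.
probs = torch.softmax(logits, dim=1)
print(probs.sum(dim=1))
```

Training behaves the same as before numerically-stable log-softmax is fused into the loss, and the explicit softmax is purely for reading off probabilities.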