When I create a Conv layer and mistakenly pass padding as a float, I get a confusing error message, which makes it harder to debug. Also, the error is not raised until the forward pass.
self.cnn = nn.Conv1d(in_channels=self.input_size, out_channels=self.num_filters, kernel_size=kw_l, padding=1.0)
self.cnn(inp) # error is thrown in this line
The error message is "RuntimeError: argument 1 must be tuple of int, not tuple"
To clarify, I think it would be helpful if the RuntimeError were updated to include the type of the offending element, so it's clear that an int is expected.
import torch
import torch.nn as nn
from torch.autograd import Variable
input = Variable(torch.randn(1, 1, 10))
output = nn.Conv1d(1, 1, 3, padding=1)(input) # fine
output = nn.Conv1d(1, 1, 3, padding=1.0)(input) # error
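Until the error message improves, a user-side workaround is to normalize the padding argument before constructing the layer. A minimal pure-Python sketch (the helper name `coerce_padding` is hypothetical, not part of PyTorch; in the repro above one would pass `coerce_padding(1.0)` as the padding argument):

```python
def coerce_padding(value):
    # Hypothetical helper: accept ints directly, coerce integral floats
    # (e.g. 1.0 -> 1), and raise a readable error for everything else.
    if isinstance(value, tuple):
        return tuple(coerce_padding(v) for v in value)
    if isinstance(value, bool):  # bool is a subclass of int; reject it
        raise TypeError("padding must be an int, got bool: %r" % (value,))
    if isinstance(value, int):
        return value
    if isinstance(value, float) and value.is_integer():
        return int(value)
    raise TypeError(
        "padding must be an int or tuple of ints, got %s: %r"
        % (type(value).__name__, value)
    )

print(coerce_padding(1.0))     # 1
print(coerce_padding((1, 2)))  # (1, 2)
```

This fails at construction time rather than at the forward pass, which also addresses the late-error complaint above.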
This error message is thrown by the TupleParser, which at that point doesn't know which parameter it's parsing, only the parameter's index in the args list. Since the tuple parser is only used for parsing parameters to autograd.functions.*, we could extend the TupleParser::parse functions to also take the parameter name and pass it through to TupleParser::invalid_type, so that a more meaningful error can be thrown.
Or we could just add these type checks at the Python level?
@apaszke @soumith I'm happy to send a PR if you let me know which approach you prefer, or suggest an alternative.
Adding param names to the tuple parser sounds good to me.