The implementation would be simple, and it could help the many people who suffer the headache of manually computing how much padding they need.
cc @ezyang @gchanan @zou3519 @albanD @mruberry
This seems worth doing. What is the interface you are proposing? nn.Conv2d(..., padding="same")?
Note that if you are looking for the same behavior as TensorFlow, the implementation will not be that simple, because the number of pixels to add depends on the input size. See https://github.com/caffe2/caffe2/blob/master/caffe2/proto/caffe2_legacy.proto for reference.
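To make that input-size dependence concrete: the TF rule first fixes the output size at ceil(in_size / stride) and then derives the total padding from it. A small sketch of that rule (`tf_same_total_pad` is a hypothetical helper name, not TF's API):

```python
import math

def tf_same_total_pad(in_size, kernel, stride, dilation=1):
    # TF "SAME": the output size is ceil(in_size / stride), and the total
    # padding is whatever makes the (dilated) kernel fit that output size
    out_size = math.ceil(in_size / stride)
    eff_kernel = (kernel - 1) * dilation + 1
    return max(0, (out_size - 1) * stride + eff_kernel - in_size)

# With stride > 1, the required padding changes with the input size:
print(tf_same_total_pad(10, kernel=3, stride=2))  # 1
print(tf_same_total_pad(11, kernel=3, stride=2))  # 2
```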
Thanks for pointing out the issue and the reference.

To solve the issue @fmassa described, I would propose two interfaces.

First, as @soumith mentioned, an nn.Conv*d(..., padding="same") interface that computes the padding on every forward() call. However, that is inefficient when the input shape is already known at initialization time. So I would also suggest an interface like nn.CalcPadConv*d(<almost same parameters as Conv*d>). With it, the user can compute the padding from the known width and height at initialization, and pass the output (the padding shape) to the padding parameter of nn.Conv2d(...).

I am not sure whether the second proposal might be premature optimization. What do you think about these? Any ideas for better names?
I think the biggest source of inefficiency lies in the fact that we need to add an F.pad layer before every convolution that requires padding=same (because the amount of padding may differ between the left and right sides). See for example how TensorFlow handles this in the cudnn case. That means nn.CalcPadConv*d would normally be as expensive as nn.Conv*d(..., padding="same").
This could be made efficient if we supported different padding on each side of the convolution (left, right, top, bottom, as Caffe2 does), but cudnn still does not support that, so we would need extra padding in those cases anyway.
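The asymmetric case can be seen with plain arithmetic, splitting the total SAME padding the way TF does, with the extra pixel on the right (`same_pad_lr` is a hypothetical name):

```python
import math

def same_pad_lr(in_size, kernel, stride=1, dilation=1):
    # total TF-style SAME padding for one dimension, split left/right,
    # with the extra pixel going to the right side as in TF
    out_size = math.ceil(in_size / stride)
    total = max(0, (out_size - 1) * stride + (kernel - 1) * dilation + 1 - in_size)
    return total // 2, total - total // 2

print(same_pad_lr(8, kernel=4))            # (1, 2): asymmetric, needs F.pad
print(same_pad_lr(8, kernel=3))            # (1, 1): symmetric
print(same_pad_lr(6, kernel=3, stride=2))  # (0, 1): asymmetric again
```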
Also, if we add padding="same" to nn.Conv*d, we should probably do the same for nn.*Pool*d, right?

What bothers me a bit is that users may expect the behavior of padding=same to be equivalent to TF's, while not expecting the performance drop.

What do you think?
Why is this inefficient? Is it not just computing the padding at every forward step? The cost should be tiny, so we should not need to optimize it away. I may not fully understand the semantics, but I do not see why F.pad would be needed.
Making the padding depend on the input size is quite bad. We had a discussion about this internally a while ago, where @Yangqing outlined the reasons why it is a bad idea, for various serialization and efficiency reasons.
@fmassa What I meant was nn.CalcPadConv*d() doing the computation in __init__(). But as you said, that approach just leaves us stuck when the computed padding is odd. So either an F.pad layer has to be added, or F.conv*d support for odd padding would help.

EDIT: So what I suggest should be a function, placed in, for example, torch.nn.utils or torch.utils.

Consequently, what I suggest is a simple utility function like this (pseudocode):
def calc_pad_conv1d(width, padding='same', check_symmetric=True, ... <params that conv1d has>):
    shape = <calculate padding>
    assert not check_symmetric or <shape is symmetric>, \
        'Calculated padding shape is asymmetric, which is not supported by conv1d. ' \
        'If you just want to get the value, consider using check_symmetric=False.'
    return shape
width = 100 # for example
padding = calc_pad_conv1d(width, ...)
m = nn.Conv1d(..., padding=padding)
This function could also be used together with F.pad, for the user's convenience.
@qbx2 Maybe I am not fully understanding your proposal, but if we want to reproduce TensorFlow's behavior, I do not think that is enough.

Here is a snippet that I believe mimics TensorFlow SAME padding, written on top of the functional interface so that nn.Conv2d could call into F.conv2d_same_padding:
def conv2d_same_padding(input, weight, bias=None, stride=1, dilation=1, groups=1):
    input_rows = input.size(2)
    filter_rows = weight.size(2)
    out_rows = (input_rows + stride[0] - 1) // stride[0]
    padding_rows = max(0, (out_rows - 1) * stride[0] +
                       (filter_rows - 1) * dilation[0] + 1 - input_rows)
    rows_odd = (padding_rows % 2 != 0)
    # same computation for the column dimension, using its own sizes
    input_cols = input.size(3)
    filter_cols = weight.size(3)
    out_cols = (input_cols + stride[1] - 1) // stride[1]
    padding_cols = max(0, (out_cols - 1) * stride[1] +
                       (filter_cols - 1) * dilation[1] + 1 - input_cols)
    cols_odd = (padding_cols % 2 != 0)

    if rows_odd or cols_odd:
        input = F.pad(input, [0, int(cols_odd), 0, int(rows_odd)])

    return F.conv2d(input, weight, bias, stride,
                    padding=(padding_rows // 2, padding_cols // 2),
                    dilation=dilation, groups=groups)
This was mostly copy-pasted from TensorFlow code, here and here.

As you can see, there is a lot of hidden machinery in there, which is why I think it may not be worth adding a padding='same' option. And I also think not replicating the SAME behavior from TensorFlow would not be ideal.

Thoughts?
@fmassa Yes, you are right. Computing the padding on every forward() can be inefficient.

However, my proposal is not to compute the padding on every forward() call. A researcher (developer) may know the image sizes going into nn.Conv2d before runtime, and when 'same' padding is wanted, the function can be used to compute the padding required to mimic 'SAME'.

For example, say a researcher has images of 200x200, 300x300 and 400x400. Then they can compute the padding for the three cases in the initialization phase and simply pass the images through F.pad() with the corresponding padding. Or just change the padding field of nn.Conv2d before the forward() call. Refer to this:
>>> import torch
>>> import torch.nn as nn
>>> from torch.autograd import Variable
>>> m = nn.Conv2d(1,1,1)
>>> m(Variable(torch.randn(1,1,2,2))).shape
torch.Size([1, 1, 2, 2])
>>> m.padding = (1, 1)
>>> m(Variable(torch.randn(1,1,2,2))).shape
torch.Size([1, 1, 4, 4])
Yes, I just want a padding-computation utility function added to the pytorch core.

When a researcher wants padding that depends on each input image size, they can combine that function with F.pad() before passing the image to nn.Conv2d. I want to leave it to the coder to decide whether to pad the input on every forward() call or not.
Are there any plans to implement a similar API in pytorch in the near future? People coming from a tensorflow/keras background will certainly appreciate it.
So a basic padding calculation strategy (which does not give the same results as TensorFlow) would be:
def _get_padding(padding_type, kernel_size):
    assert padding_type in ['SAME', 'VALID']
    if padding_type == 'SAME':
        return tuple((k - 1) // 2 for k in kernel_size)
    return tuple(0 for _ in kernel_size)
Is this what you have in mind, @im9uri?
It is similar to what I had in mind, but as you mentioned before, the calculation becomes complicated with stride and dilation.

Also, it would be great if such an API could be used with other convolution operations, such as ConvTranspose2d.

I think all the "sliding window operators" should support asymmetric padding.

Regarding the "same" discussion...
@soumith Could you explain why it is bad to create the padding depending on the input size?

If that is the problem, then possibly the more practical solution is to require stride == 1 when "same" padding is used. For stride == 1, the padding does not depend on the input size and can be computed just once. The constructor could raise a ValueError when a user attempts to combine padding='same' with stride > 1.

I know, it is not the cleanest solution, but the constraint seems sufficiently reasonable to me, given that...
...stride > 1 does not really fit tensorflow semantics either, so using the word "same" there would be a little misleading IMO. I can hardly imagine cases that really need tensorflow's behavior with stride > 1, and going by the original semantics of "same", a strided convolution of course makes no sense. When the output needs to be the same size as the input, the conv2d documentation gives an explicit formula for the output size. For example, setting Hout equal to Hin, one can solve for the padding:
def _get_padding(size, kernel_size, stride, dilation):
    padding = ((size - 1) * (stride - 1) + dilation * (kernel_size - 1)) // 2
    return padding
"Same" padding means padding = (kernel_size - stride) // 2, so how about introducing padding = "same" so that, when it is given, the kernel size and stride (both already stored on nn.Conv2d) are read automatically and the corresponding padding is applied automatically?
Here is a very simple Conv2d layer with "same" padding, for reference. It only supports square kernels and stride = 1, dilation = 1, groups = 1.
class Conv2dSame(torch.nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, bias=True, padding_layer=torch.nn.ReflectionPad2d):
        super().__init__()
        ka = kernel_size // 2
        kb = ka - 1 if kernel_size % 2 == 0 else ka
        self.net = torch.nn.Sequential(
            padding_layer((ka, kb, ka, kb)),
            torch.nn.Conv2d(in_channels, out_channels, kernel_size, bias=bias)
        )

    def forward(self, x):
        return self.net(x)

c = Conv2dSame(1, 3, 5)
print(c(torch.rand((16, 1, 10, 10))).shape)
# torch.Size([16, 3, 10, 10])
If whether to add this to PyTorch is still being evaluated, then regarding the complexity/inefficiency versus developer ease-of-use tradeoff:

A central goal of PyTorch is to provide a great platform for research and hacking. So presumably, while all these [production-use] optimizations were added, strict design constraints were followed so that they would not trade away ease of use.

Incidentally, I come from using Keras and the raw tf.layers / estimator APIs, which all support "same" padding. I am currently re-implementing in PyTorch a convolutional neural network that I originally built in TF, and having to build the zero-padding arithmetic in myself has cost me about half a day.

If the "central goal" really focuses on ease of use, then even if computing same padding on every forward pass is less efficient, the time saved in developer efficiency and maintainability (as stated above, e.g. not having to write custom zero-padding code) might be worth the tradeoff. Thoughts?
I would use this feature. Why can't we provide the API with a padding=SAME option? Just let those who are willing to pay the extra padding cost do so. For many researchers, quick prototyping is a must.
Yes, it would be great if someone added this and it got approved.
Definitely add this, please; coders are craving it.
Doesn't pytorch support this already? Can't you just set padding = (kernel_size - 1) / 2, using the same operations as the first ops of VGG?

VGG networks are built so that the first group does not change the output size; then strides are used to resize the feature maps. That sounds fine, doesn't it?
Here is one example from "deepfakes" that calls a same-padding conv2d:
# modify conv2d function to use same padding
# code referred to @fmassa in 'https://github.com/pytorch/pytorch/issues/3867'
# and tensorflow source code
import torch.utils.data
from torch.nn import functional as F
import math
import torch
from torch.nn.parameter import Parameter
from torch.nn.functional import pad
from torch.nn.modules import Module
from torch.nn.modules.utils import _single, _pair, _triple
class _ConvNd(Module):

    def __init__(self, in_channels, out_channels, kernel_size, stride,
                 padding, dilation, transposed, output_padding, groups, bias):
        super(_ConvNd, self).__init__()
        if in_channels % groups != 0:
            raise ValueError('in_channels must be divisible by groups')
        if out_channels % groups != 0:
            raise ValueError('out_channels must be divisible by groups')
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.dilation = dilation
        self.transposed = transposed
        self.output_padding = output_padding
        self.groups = groups
        if transposed:
            self.weight = Parameter(torch.Tensor(
                in_channels, out_channels // groups, *kernel_size))
        else:
            self.weight = Parameter(torch.Tensor(
                out_channels, in_channels // groups, *kernel_size))
        if bias:
            self.bias = Parameter(torch.Tensor(out_channels))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self):
        n = self.in_channels
        for k in self.kernel_size:
            n *= k
        stdv = 1. / math.sqrt(n)
        self.weight.data.uniform_(-stdv, stdv)
        if self.bias is not None:
            self.bias.data.uniform_(-stdv, stdv)

    def __repr__(self):
        s = ('{name}({in_channels}, {out_channels}, kernel_size={kernel_size}'
             ', stride={stride}')
        if self.padding != (0,) * len(self.padding):
            s += ', padding={padding}'
        if self.dilation != (1,) * len(self.dilation):
            s += ', dilation={dilation}'
        if self.output_padding != (0,) * len(self.output_padding):
            s += ', output_padding={output_padding}'
        if self.groups != 1:
            s += ', groups={groups}'
        if self.bias is None:
            s += ', bias=False'
        s += ')'
        return s.format(name=self.__class__.__name__, **self.__dict__)
class Conv2d(_ConvNd):

    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True):
        kernel_size = _pair(kernel_size)
        stride = _pair(stride)
        padding = _pair(padding)
        dilation = _pair(dilation)
        super(Conv2d, self).__init__(
            in_channels, out_channels, kernel_size, stride, padding, dilation,
            False, _pair(0), groups, bias)

    def forward(self, input):
        return conv2d_same_padding(input, self.weight, self.bias, self.stride,
                                   self.padding, self.dilation, self.groups)
# custom conv2d, because pytorch doesn't have a "padding='same'" option.
def conv2d_same_padding(input, weight, bias=None, stride=1, padding=1, dilation=1, groups=1):
    input_rows = input.size(2)
    filter_rows = weight.size(2)
    out_rows = (input_rows + stride[0] - 1) // stride[0]
    padding_rows = max(0, (out_rows - 1) * stride[0] +
                       (filter_rows - 1) * dilation[0] + 1 - input_rows)
    rows_odd = (padding_rows % 2 != 0)
    # the column dimension must use its own sizes (size(3), stride[1], dilation[1]);
    # the original snippet mistakenly reused the row values here
    input_cols = input.size(3)
    filter_cols = weight.size(3)
    out_cols = (input_cols + stride[1] - 1) // stride[1]
    padding_cols = max(0, (out_cols - 1) * stride[1] +
                       (filter_cols - 1) * dilation[1] + 1 - input_cols)
    cols_odd = (padding_cols % 2 != 0)

    if rows_odd or cols_odd:
        input = pad(input, [0, int(cols_odd), 0, int(rows_odd)])

    return F.conv2d(input, weight, bias, stride,
                    padding=(padding_rows // 2, padding_cols // 2),
                    dilation=dilation, groups=groups)
I just stopped by to say a big thank you. I am currently porting a simple model from tensorflow, and the padding calculation has been taking me a really long time...
Just a bump on this thread. Given the number of thumbs up here, it would be really great to add this feature for faster prototyping.
I will write up a proposal for this, and then we can find someone to implement it.
I have slotted it for the v1.1 milestone.
Thank you, you are awesome! I have also filed a separate feature request for making the padding argument accept 4-tuples.
@soumith It would be convenient to have a same-padding mode in pytorch.
@soumith How about a compile-time interface?
model=torch.compile(model,input_shape=(3,224,224))
I created a Conv2D with same padding that supports dilation and strides, based on how TensorFlow does it. It computes at runtime; if you want to precompute the padding, move it to init() and supply an input-size parameter.
import torch as tr
import math


class Conv2dSame(tr.nn.Module):

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1):
        super(Conv2dSame, self).__init__()
        self.F = kernel_size
        self.S = stride
        self.D = dilation
        self.layer = tr.nn.Conv2d(in_channels, out_channels, kernel_size, stride, dilation=dilation)

    def forward(self, x_in):
        N, C, H, W = x_in.shape
        H2 = math.ceil(H / self.S)
        W2 = math.ceil(W / self.S)
        Pr = (H2 - 1) * self.S + (self.F - 1) * self.D + 1 - H
        Pc = (W2 - 1) * self.S + (self.F - 1) * self.D + 1 - W
        x_pad = tr.nn.ZeroPad2d((Pr // 2, Pr - Pr // 2, Pc // 2, Pc - Pc // 2))(x_in)
        x_out = self.layer(x_pad)
        return x_out
Example 1:
Input shape: (1, 3, 96, 96)
Filters: 64
Kernel size: 9x9

Conv2dSame(3, 64, 9)

Padded input shape: (1, 3, 104, 104)
Output shape: (1, 64, 96, 96)

Example 2:
Same as before, but with stride = 2.

Conv2dSame(3, 64, 9, 2)

Padded input shape = (1, 3, 103, 103)
Output shape = (1, 64, 48, 48)
@jpatts I think your output-shape calculation is wrong; try h=w=28, stride=3, kernel_size=1. Moreover, the code would give results that differ from tensorflow.

A variant that does the computation beforehand:
def pad_same(in_dim, ks, stride, dilation=1):
    """
    References:
          https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/common_shape_fns.h
          https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/common_shape_fns.cc#L21
    """
    assert stride > 0
    assert dilation >= 1
    effective_ks = (ks - 1) * dilation + 1
    out_dim = (in_dim + stride - 1) // stride
    p = max(0, (out_dim - 1) * stride + effective_ks - in_dim)

    padding_before = p // 2
    padding_after = p - padding_before
    return padding_before, padding_after
If you know the input dimension and it does not have to be computed on the fly, you can use it like this:
# Pass this to nn.Sequential
def conv2d_samepad(in_dim, in_ch, out_ch, ks, stride, dilation=1, bias=True):
    pad_before, pad_after = pad_same(in_dim, ks, stride, dilation)
    if pad_before == pad_after:
        return [nn.Conv2d(in_ch, out_ch, ks, stride, pad_after, dilation, bias=bias)]
    else:
        return [nn.ZeroPad2d((pad_before, pad_after, pad_before, pad_after)),
                nn.Conv2d(in_ch, out_ch, ks, stride, 0, dilation, bias=bias)]
However, in this case you need to do some bookkeeping over the input dimension (this is the main issue), so the following may be useful together with the above:
def conv_outdim(in_dim, padding, ks, stride, dilation):
    if isinstance(padding, int) or isinstance(padding, tuple):
        return conv_outdim_general(in_dim, padding, ks, stride, dilation)
    elif isinstance(padding, str):
        assert padding in ['same', 'valid']
        if padding == 'same':
            return conv_outdim_samepad(in_dim, stride)
        else:
            return conv_outdim_general(in_dim, 0, ks, stride, dilation)
    else:
        raise TypeError('Padding can be int/tuple or str=same/valid')


def conv_outdim_general(in_dim, padding, ks, stride, dilation=1):
    # See https://arxiv.org/pdf/1603.07285.pdf, eq (15)
    return ((in_dim + 2 * padding - ks - (ks - 1) * (dilation - 1)) // stride) + 1


def conv_outdim_samepad(in_dim, stride):
    return (in_dim + stride - 1) // stride
@mirceamironenco Thanks for pointing that out; I hacked it together quickly and never checked it. Updated to use ceiling instead.
@harritaylor Agreed, this feature would be welcome.
@kylemcdonald Regarding your simple Conv2dSame layer above (square kernels, stride = 1, dilation = 1, groups = 1):

Don't you need kb = ka - 1 if kernel_size % 2 else ka?
Does this also apply to Conv1d?
Perhaps adding a new padding method to the _ConvNd class would be a wise choice; padding schemes could then be extended easily by overriding that method.
@soumith If you write up the proposal, or summarize what needs to be done, I can take a shot at it. There is a lot of discussion above and I am not sure what we settled on. Are we computing the padding based on the input data? Do we need to implement padding="same" locally?
I would like causal padding added too. And please add this to conv1d first.
Has there been any follow-up on these comments? I think this feature works very well in keras, and it would be worth following it exactly.
@Chillee are you still on this?
Padding would need to be added to the following layers. For the first PR, let's keep it simple and stick to Conv*d.
The complication described above is that, once a "same" padding option is written, the layers become essentially dynamic. That is, layer parameters that used to be statically known, which is optimal for model export (e.g. ONNX export), become dynamic parameters. In this case, the dynamic parameter is padding.

That looks fairly harmless, but on constrained runtimes, such as mobile or exotic hardware runtimes that perform static shape analysis and optimization, this dynamism becomes a serious concern.

Another practical downside is that this dynamically computed padding is not always symmetric. Depending on kernel size/stride, dilation factor and input size, the padding may have to be asymmetric (that is, different left-side and right-side padding amounts), which means, for example, that CuDNN kernels cannot be used.
Currently, the signature of Conv2d is:

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Here, padding supports being an int or a tuple of ints (i.e., one per height/width dimension). We would need to support an additional overload of padding that accepts the string value "same".

"same" padding should pad the input before giving it to the convolution, so that the output size equals the input size. When 'same' is given for padding, we have to compute the amount of left and right padding needed in each dimension.
Once the required L (left) and R (right) padding has been computed, there are two cases to consider:

L equals R: here we simply call F.conv2d with that padding value.
L differs from R: we call input_padded = F.pad(input, ...) and send input_padded to F.conv2d.

Needless to say, this also has to be tested to work in the JIT path.
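The two cases above can be sketched in plain arithmetic (no torch needed; `plan_same_padding` is a hypothetical name):

```python
import math

def plan_same_padding(in_size, kernel, stride=1, dilation=1):
    """Return (symmetric_padding, None) when L == R, else (None, (L, R))."""
    out_size = math.ceil(in_size / stride)
    total = max(0, (out_size - 1) * stride + (kernel - 1) * dilation + 1 - in_size)
    left, right = total // 2, total - total // 2
    if left == right:
        return left, None          # case 1: pass padding= straight to F.conv2d
    return None, (left, right)     # case 2: F.pad first, then F.conv2d

print(plan_same_padding(28, kernel=3))           # (1, None)
print(plan_same_padding(6, kernel=3, stride=2))  # (None, (0, 1))
```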
@Chillee for reference, here is a potential implementation to take inspiration from: https://github.com/mlperf/inference/blob/master/others/edge/object_detection/ssd_mobilenet/pytorch/utils.py#L40

It matched the TF implementation for the tested configurations, but the tests were not exhaustive.
@soumith A couple of quick questions:

Is there any reason this should not also be implemented through functional.conv2d? The design you wrote seems to imply it should not. As for padding="same", I do not see anything about it that is layer-specific. (EDIT: Nvm, I had not realized that the F.conv2d impl I was looking at was the quantized one.)

Am I right that the "valid" padding mode is simply equivalent to padding=0?

Also, there does not seem to be any easy fix for users to handle asymmetric padding. The full rule for determining the amount of padding needed is (ceil(x/stride) - 1)*stride + (filter-1)*dilation + 1 - x along each dimension. In particular, whenever that quantity is not a multiple of 2, asymmetric padding is required. As a counterexample to the idea that this only happens for even-size filters, take input = 10, stride=3, filter=3, dilation=1. There is no simple rule for ruling out the situations in which it can occur.
Furthermore, the padding cannot be determined statically, except in the stride=1 case, where ceil(x/stride) = x and the padding equals (filter-1)*dilation.
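That claim is easy to check numerically (reusing the total-padding rule; `total_same_pad` is a hypothetical name):

```python
import math

def total_same_pad(in_size, kernel, stride, dilation=1):
    out_size = math.ceil(in_size / stride)
    return max(0, (out_size - 1) * stride + (kernel - 1) * dilation + 1 - in_size)

# stride == 1: the padding is (kernel - 1) * dilation for every input size
assert {total_same_pad(n, 5, 1, dilation=2) for n in range(10, 50)} == {8}
# stride > 1: the padding varies with the input size
assert len({total_same_pad(n, 5, 2) for n in range(10, 50)}) > 1
print("ok")
```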
@Chillee About (1): no reason, I just had not thought about the performance implications and so on.

(2) Yes. And regarding "the padding cannot be determined statically, except in the stride = 1 case, where ceil(x/stride) = x and the padding equals (filter-1)*dilation": true, but stride = 1 is common, and the benefits of static padding are big enough that it deserves special handling.
Still waiting on asymmetric padding.....
Re: "Why can't we provide the API with a padding=SAME option? Just let those who are willing to pay the extra padding cost do so. For many researchers, quick prototyping is a must."

Agreed! I was stuck on this damn padding for four hours.
Any updates on a solution to this issue?
Hmm, and here I thought Pytorch was easier than Keras / Tensorflow 2.0...
@zwep It takes a bit more effort to get started. You have to write your own, possibly fiddly, training loop, and you create layers more explicitly. But once you have done that (once), you can move past it and get to the actual improvements faster.

My rule of thumb: if what you are doing has been done hundreds of millions of times before / is super standard, use Keras. Whenever you are doing R&D, use pytorch.
Here is my code for 1d convolutions with "same" and causal padding:

import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
class Conv1dSamePad(nn.Module):
    def __init__(self, in_channels, out_channels, filter_len, stride=1, **kwargs):
        super(Conv1dSamePad, self).__init__()
        self.filter_len = filter_len
        self.conv = nn.Conv1d(in_channels, out_channels, filter_len, padding=(self.filter_len // 2), stride=stride,
                              **kwargs)
        nn.init.xavier_uniform_(self.conv.weight)
        # nn.init.constant_(self.conv.bias, 1 / out_channels)

    def forward(self, x):
        if self.filter_len % 2 == 1:
            return self.conv(x)
        else:
            return self.conv(x)[:, :, :-1]


class Conv1dCausalPad(nn.Module):
    def __init__(self, in_channels, out_channels, filter_len, **kwargs):
        super(Conv1dCausalPad, self).__init__()
        self.filter_len = filter_len
        self.conv = nn.Conv1d(in_channels, out_channels, filter_len, **kwargs)
        nn.init.xavier_uniform_(self.conv.weight)

    def forward(self, x):
        padding = (self.filter_len - 1, 0)
        return self.conv(F.pad(x, padding))


class Conv1dPad(nn.Module):
    def __init__(self, in_channels, out_channels, filter_len, padding="same", groups=1):
        super(Conv1dPad, self).__init__()
        if padding not in ["same", "causal"]:
            raise Exception("invalid padding type %s" % padding)
        self.conv = Conv1dCausalPad(in_channels, out_channels, filter_len, groups=groups) \
            if padding == "causal" else Conv1dSamePad(in_channels, out_channels, filter_len, groups=groups)

    def forward(self, x):
        return self.conv(x)
@danFromTelAviv Hey, thanks for the code, man. Keep that pytorch philosophy in mind!
It's 2020, and there is still no padding='same' in Pytorch?
Here is one way to get same padding working for any kernel size, stride, and dilation (even kernel sizes work too).
import math
import torch.nn as nn


class Conv1dSame(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1):
        super().__init__()
        self.cut_last_element = (kernel_size % 2 == 0 and stride == 1 and dilation % 2 == 1)
        self.padding = math.ceil((1 - stride + dilation * (kernel_size - 1)) / 2)
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size, padding=self.padding, stride=stride, dilation=dilation)

    def forward(self, x):
        if self.cut_last_element:
            return self.conv(x)[:, :, :-1]
        else:
            return self.conv(x)
We need the same-padding functionality in nn.Conv2d as well.
By the way, in addition to the performance/serialization concerns mentioned above, there are also accuracy/correctness reasons why TF's size-dependent "same" padding mode is not a good default: as explained in https://github.com/tensorflow/tensorflow/issues/18213 , a lot of google's own code actually uses a size-independent "same" padding mode instead.
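For contrast, a size-independent variant in the spirit of that issue fixes the padding from the kernel alone (a sketch; for odd kernels with stride 1 it coincides with TF's SAME):

```python
def static_same_pad(kernel, dilation=1):
    # depends only on the kernel and dilation, never on the input size
    return dilation * (kernel - 1) // 2

print(static_same_pad(3))              # 1
print(static_same_pad(5, dilation=2))  # 4
```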
It appears there is no ongoing work on this issue at the moment; if it does happen, hopefully it will be a size-independent solution.
Hi @ppwwyyxx, thanks for the reply, Yuxin.
@McHughes288 I think his implementation is good, and I am curious what your opinion on it is.
Here is my solution for Conv1D SAME padding (it only works correctly when dilation==1 and groups==1; it gets much more complicated once you consider dilation and groups!):
import torch.nn.functional as F
from torch import nn
class Conv1dSamePadding(nn.Conv1d):
    """Represents the "Same" padding functionality from Tensorflow.
    NOTE: Only work correctly when dilation == 1, groups == 1 !!!
    """
    def forward(self, input):
        size, kernel, stride = input.size(-1), self.weight.size(2), self.stride[0]
        # total padding so that the output length is ceil(size / stride);
        # the original `kernel - stride - size % stride` undershoots when
        # size % stride != 0, so compute the two cases explicitly
        if size % stride == 0:
            padding = max(kernel - stride, 0)
        else:
            padding = max(kernel - (size % stride), 0)
        if padding != 0:
            # pad left by padding // 2, pad right by padding - padding // 2
            # in Tensorflow, one more padding value(default: 0) is on the right when needed
            input = F.pad(input, (padding // 2, padding - padding // 2))
        return F.conv1d(input=input,
                        weight=self.weight,
                        bias=self.bias,
                        stride=stride,
                        dilation=1,
                        groups=1)
@Chillee do you plan to keep working on this feature?
After reading @wizcheu's code, I created another version of conv1d with padding='same':
class Conv1dPaddingSame(nn.Module):
    '''pytorch version of padding=='same'
    ============== ATTENTION ================
    Only work when dilation == 1, groups == 1
    =========================================
    '''
    def __init__(self, in_channels, out_channels, kernel_size, stride):
        super(Conv1dPaddingSame, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.weight = nn.Parameter(torch.rand((out_channels,
                                               in_channels, kernel_size)))
        # nn.Conv1d defaults to bias=True, so create this param as well
        self.bias = nn.Parameter(torch.rand(out_channels))

    def forward(self, x):
        batch_size, num_channels, length = x.shape
        if length % self.stride == 0:
            out_length = length // self.stride
        else:
            out_length = length // self.stride + 1
        pad = math.ceil((out_length * self.stride +
                         self.kernel_size - length - self.stride) / 2)
        out = F.conv1d(input=x,
                       weight=self.weight,
                       stride=self.stride,
                       bias=self.bias,
                       padding=pad)
        return out
Any updates on this?
Any updates??
@peterbell10 has linked a PR that you can follow.