【Hackathon 7th No.36】Add API conversion rules for the Paddle code conversion tool (Group 3) #479
Conversation
Thanks for your contribution!
The unit tests have not passed. Please make sure CI is green; the PR will not be merged while CI is failing.

Could you look into the CI errors and fix them? Thanks.
Please revise this comment.
- All of the unit tests only cover sym=False; please test both values of every bool parameter, including requires_grad. Also consider more varied values for the other parameters (see the sketch after this list).
- check_value is turned off in every test. Is that because of randomness? Unless there is a genuine issue such as randomness, it should normally be enabled.
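A minimal sketch of what such cases could look like, loosely following the repo's existing apibase test pattern; the APIBase / obj.run helpers and their behaviour are assumed from those tests rather than taken from this PR:

```python
import textwrap
from apibase import APIBase

obj = APIBase("torch.signal.windows.blackman")

def test_sym_true():
    # symmetric window; check_value stays at its default (enabled)
    pytorch_code = textwrap.dedent(
        """
        import torch
        result = torch.signal.windows.blackman(5, sym=True, requires_grad=False)
        """
    )
    obj.run(pytorch_code, ["result"])

def test_sym_false():
    # periodic window; requires_grad flipped so both bool values are covered
    pytorch_code = textwrap.dedent(
        """
        import torch
        result = torch.signal.windows.blackman(5, sym=False, requires_grad=True)
        """
    )
    obj.run(pytorch_code, ["result"])
```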
1. Got it, I will add more cases. An example of the mismatch:

```python
torch.signal.windows.blackman(5, dtype=torch.float32)
# tensor([-1.4901e-08, 3.4000e-01, 1.0000e+00, 3.4000e-01, -1.4901e-08])

paddle.audio.functional.get_window("blackman", 5, dtype='float32')
# Tensor(shape=[5], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [-0.00000001, 0.20077014, 0.84922987, 0.84922987, 0.20077014])
```
That is not acceptable unless the API implementation is shown to have a bug. Each case needs a clear conclusion: is the bug in the API or in the conversion rule? Please go through them one by one and turn check_value on. Also, the correspondence you gave above is wrong; it should be like this:

The two can be matched up, which also means the already-merged mapping docs may carry the same bug and need to be fixed along with this.
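The review does not spell the mapping out here, so the following is only a sketch of the presumed correspondence, assuming the mismatch in the example above comes from torch's sym default (True, symmetric window) versus get_window's fftbins default (True, periodic window):

```python
import torch
import paddle

# symmetric window: torch sym=True (the default)  <->  paddle fftbins=False
torch.signal.windows.blackman(5, sym=True, dtype=torch.float32)
paddle.audio.functional.get_window("blackman", 5, fftbins=False, dtype="float32")

# periodic window: torch sym=False  <->  paddle fftbins=True (the default)
torch.signal.windows.blackman(5, sym=False, dtype=torch.float32)
paddle.audio.functional.get_window("blackman", 5, fftbins=True, dtype="float32")
```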
Looking at the code earlier, paddle's general_cosine and pytorch's computation are written differently.

Paddle:

```python
M, needs_trunc = _extend(M, sym)
fac = paddle.linspace(-math.pi, math.pi, M, dtype=dtype)
w = paddle.zeros((M,), dtype=dtype)
for k in range(len(a)):
    w += a[k] * paddle.cos(k * fac)
return _truncate(w, needs_trunc)
```

PyTorch:

```python
constant = 2 * torch.pi / (M if not sym else M - 1)
k = torch.linspace(start=0,
                   end=(M - 1) * constant,
                   steps=M,
                   dtype=dtype,
                   layout=layout,
                   device=device,
                   requires_grad=requires_grad)
a_i = torch.tensor([(-1) ** i * w for i, w in enumerate(a)], device=device, dtype=dtype, requires_grad=requires_grad)
i = torch.arange(a_i.shape[0], dtype=a_i.dtype, device=a_i.device, requires_grad=a_i.requires_grad)
return (a_i.unsqueeze(-1) * torch.cos(i.unsqueeze(-1) * k)).sum(0)
```

The two call linspace with different arguments, and the results differ.
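A quick way to check whether the different grids actually change the result is a NumPy re-implementation of both formulas (my own sketch, not part of the PR). With the same sym value the two come out identical, since cos(i * (x - pi)) == (-1)**i * cos(i * x), which is exactly the sign flip PyTorch folds into its coefficients:

```python
import numpy as np

def paddle_style_general_cosine(M, a, sym):
    # Paddle: extend by one point for the periodic case, sample [-pi, pi], truncate.
    M_ext = M if sym else M + 1
    fac = np.linspace(-np.pi, np.pi, M_ext)
    w = sum(a_k * np.cos(k * fac) for k, a_k in enumerate(a))
    return w if sym else w[:-1]

def torch_style_general_cosine(M, a, sym):
    # PyTorch: sample [0, (M - 1) * constant] and fold the sign flip into the coefficients.
    constant = 2 * np.pi / (M if not sym else M - 1)
    k = np.linspace(0, (M - 1) * constant, M)
    a_i = np.array([(-1) ** i * c for i, c in enumerate(a)])
    i = np.arange(len(a_i))
    return (a_i[:, None] * np.cos(i[:, None] * k)).sum(0)

a_blackman = [0.42, 0.5, 0.08]
for sym in (True, False):
    p = paddle_style_general_cosine(5, a_blackman, sym)
    t = torch_style_general_cosine(5, a_blackman, sym)
    print(sym, np.allclose(p, t))  # prints True for both
```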
It seems blackman and general_cosine cannot be fixed; I will fix the others.
@zhwesky2010 Fixed. Sorry for the carelessness and the trouble. Both PRs have been updated, and the DOC PR has been updated as well.
Some code patterns can introduce problems; please strengthen the tests that pass arguments through variables, see item 2 of the [开发技巧] (development tips) doc:
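A rough sketch of what such a case might look like (my reading of that tip; the APIBase / textwrap pattern is assumed from the existing tests): the arguments arrive through variables rather than literals, so the converted code cannot rely on constant arguments.

```python
import textwrap
from apibase import APIBase

obj = APIBase("torch.signal.windows.blackman")

def test_args_from_variables():
    # window length and sym flag are passed through variables, not literals
    pytorch_code = textwrap.dedent(
        """
        import torch
        M = 5
        sym = False
        result = torch.signal.windows.blackman(M, sym=sym)
        """
    )
    obj.run(pytorch_code, ["result"])
```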
There is no need to copy so many duplicated unit tests; every test currently contains many completely duplicated cases.
LGTM
PR Docs
Docs:
PR APIs