I want to know how large the final loss should be #9
Comments
Hi!
Hello
How do you calculate the accuracy? For every test example there are multiple ground-truth rectangles (grasp positions) and only one predicted rectangle. The evaluation algorithm takes one random GT rectangle from the example and compares it with the predicted one. So you need to run the evaluation several times. You can write some code to do this instead of running the script manually many times and noting which runs were a success (I did it maybe 15 times). Temporarily I'm unable to contribute to the repository because I lack a PC; I am stuck with my old personal laptop.
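The repeated one-random-rectangle evaluation described above could be automated with a sketch like the following. Everything here is hypothetical: `iou` is a plain axis-aligned overlap on `(x, y, w, h)` boxes, a simplification of the usual Jaccard-index grasp metric (which also checks the gripper angle), and the 0.25 threshold and 15 trials come from the comment, not the repository's code.

```python
import random

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def success_rate(predicted, ground_truths, trials=15, threshold=0.25):
    """Estimate accuracy by comparing the prediction against one
    randomly chosen GT rectangle per trial, as the comment suggests."""
    hits = sum(
        iou(predicted, random.choice(ground_truths)) > threshold
        for _ in range(trials)
    )
    return hits / trials
```

Averaging over many trials simply estimates the probability that a random GT rectangle matches the prediction, which is what repeating the script by hand approximates.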
I see!
But I still don't understand why the loss is stable at 30000-40000!
Thank you for your great answer!
When I run grasp_det.py, it seems that x_hat, h_hat, and w_hat become NaN after only a few epochs. Is this reasonable, and how can I avoid it?
There are NaN values in these files:
pcd0132cpos.txt
pcd0165cpos.txt
You can delete them!
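Rather than finding the bad label files by trial and error, one could scan the dataset for them. This is a hypothetical sketch assuming the Cornell grasping dataset's `pcdXXXXcpos.txt` rectangle files sit under `dataset_dir` as whitespace-separated numbers (literal `NaN` tokens parse via `float` and are caught by `math.isnan`):

```python
import glob
import math
import os

def files_with_nan(dataset_dir):
    """Return the names of pcd*cpos.txt label files containing NaN."""
    bad = []
    for path in glob.glob(os.path.join(dataset_dir, "pcd*cpos.txt")):
        with open(path) as f:
            values = f.read().split()
        if any(math.isnan(float(v)) for v in values):
            bad.append(os.path.basename(path))
    return sorted(bad)
```

The offending files can then be deleted (or skipped when building the tfrecords) before training, which should stop the NaN losses at their source.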
It works, thx!
Are you Chinese?
Yeah!
Hello, classmate
WeChat: 409067552
On 2018-04-25 21:25:24, "sujie" <notifications@github.com> wrote:
Hello, classmate
Are you still working on anything grasping-related?
Hello! I am currently doing grasping-related research. How is your progress? I tested my own data with the saved model in this repo, but the results were poor. Then I tested data from the Cornell grasping dataset, and surprisingly the results were equally poor, and I don't know why. How did your tests go?
Hi, I'm new to grasping. When I started reading the code, there was one step I didn't understand: it says the ImageNet data must be converted to tfrecord, which requires downloading the ImageNet dataset. I don't understand this; the paper uses the Cornell grasping dataset, so why download ImageNet? The main problem is that ImageNet is huge and my school's network is slow. I hope someone can explain. Thanks.
Take a look at the paper.
Hello, I couldn't find the paper you recommended. Is the title "Experiment AND EVALUATION"? Could we add each other as friends?
It's the paper this code is based on.
OK, I'll look at it more carefully. Also, could we add each other on WeChat?
Let's talk here; I'll reply when I see your message.
So two datasets are needed: the Cornell grasping dataset for generating grasp poses, and the ImageNet dataset. Does the entire 150 GB ImageNet dataset need to be downloaded???
Buddy, how is your grasping work going now? Could you leave an email address? I'd like to discuss.
weihuhuhu, have you gotten this code running?
I'm no longer running this program, but I am also working on grasp-policy generation. Feel free to add me on WeChat: 18798824036
Bai Qiang, Guizhou University
18798824036@163.com
ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float64: 'Tensor("truediv:0", shape=(), dtype=float64, device=/device:CPU:0)'
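The `truediv:0` in that error suggests a Python 3 `/` division: in Python 3, `/` always yields a float, so an expression like a step count divided by a batch size becomes a float64 tensor where TensorFlow expects int32. A minimal sketch of the likely fix, with hypothetical names (in TensorFlow code the equivalent remedy would be floor division or an explicit `tf.cast(..., tf.int32)`):

```python
# Hypothetical values standing in for whatever quantities the
# failing expression actually divides.
num_examples = 1000
batch_size = 64

# `/` produces a float (would feed a float64 tensor into an int32 slot);
# `//` keeps the result an int and avoids the ValueError.
steps_float = num_examples / batch_size
steps = num_examples // batch_size
```

This is only a guess at the cause based on the `truediv` op name; the actual offending line in the repository would need to be located and patched.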
Bro, a quick question: if I want to train the model myself, do the dataset images need to be RGB-D, or is RGB enough?
Hello
When I run grasp_det.py, after 10000 steps the loss is stable at 30000-40000. How large should a reasonable loss be?