
I want to know how much the last loss #9

Open
HEUzhouhanwen opened this issue Jan 3, 2018 · 27 comments

@HEUzhouhanwen

Hello,
I ran grasp_det.py for 10,000 steps and the loss settles at around 30000-40000. What would a reasonable final loss be?

HEUzhouhanwen changed the title from "I want to know how many the last loss" to "I want to know how much the last loss" on Jan 3, 2018
@tnikolla
Owner

tnikolla commented Jan 3, 2018

Hi!
I don't remember it now. Try evaluating the validation set. I could achieve a little more than 60 percent accuracy.

@HEUzhouhanwen
Author

Hello,
I evaluated the validation set using ../robot-grasp-detection/models/grasp/m4/m4.ckpt, but the accuracy is only about 30%. What am I doing wrong?
Thank you!

@tnikolla
Owner

tnikolla commented Jan 5, 2018

How do you calculate the accuracy?

For every test example there are multiple ground-truth rectangles (grasp positions) but only one predicted rectangle. The algorithm that decides whether an example is a success takes one random GT rectangle from the example and compares it with the predicted one. So you need to run the evaluation (grasp_det.py) multiple times so that all the GT rectangles of an example get compared.

I did it like this:
Run grasp_det.py a first time and note which examples were successes, say 1, 3, 6 and 8 out of 10. Run it a second time and you get successes for 0, 1, 3 and 6. Run it a third time and you get, for example, 0, 5 and 6. Accumulating the successes gives 0, 1, 3, 5, 6 and 8 out of 10 examples (10 images with their annotated ground-truth grasping positions). The accuracy is 6/10 = 60%.

You can code something to do this, instead of running the code manually lots of times and noting which was a success (I did it maybe 15 times).
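For illustration, a minimal sketch of that accumulation. `evaluate_once` is a hypothetical stand-in for one evaluation pass of grasp_det.py that returns the indices of the test examples which succeeded against one randomly chosen GT rectangle:

```python
# Hypothetical sketch, not code from the repository.
def accumulated_accuracy(evaluate_once, num_examples, num_runs=15):
    """Union the per-run successes so every GT rectangle eventually gets compared."""
    successes = set()
    for _ in range(num_runs):
        successes |= set(evaluate_once())  # indices of examples that succeeded this run
    return len(successes) / float(num_examples)

# With the numbers from the comment above:
runs = [{1, 3, 6, 8}, {0, 1, 3, 6}, {0, 5, 6}]
accumulated = set().union(*runs)    # {0, 1, 3, 5, 6, 8}
print(len(accumulated) / 10.0)      # 0.6 -> 60% accuracy
```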

Temporarily I'm unable to contribute to the repository because I don't have a PC for it; I'm stuck with my old personal laptop.

@HEUzhouhanwen
Author

I see!
Thank you!

@HEUzhouhanwen
Author

But I still do not understand why the loss stays at 30000-40000!

@tnikolla
Owner

The algorithm predicts one grasping position for every object (image), but that is not how it is in the dataset, nor in real life, where many grasps are valid. Think of an image of a pencil (symmetry).
[image: pencil]

When training, only one ground truth (the red rectangle) is used per pass (forward and backprop, updating the weights). The ground truths come from the text files of the dataset; there are a few for every image (theoretically this number is infinite). After training, the model has learned the average of all ground truths, the green rectangle. Continuing to train with a batch size of 100 images, there will always be GTs that are far from the predicted rectangle, so the RMS loss will keep moving around some value.

Now, if we again have a pencil-like image in the test set, the algorithm will predict a grasp. To decide whether this predicted grasp is a success, it is evaluated (two criteria) against only one random ground truth from the test set. So if, say, the first GT is randomly chosen, the predicted grasp counts as a failure because of the IoU, although we can see that it would be a success for a real robot. In fact the predicted grasp fails against every GT except the fourth one, where both the IoU and the angle meet the criteria.
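For reference, a rough sketch of those two criteria (the rectangle metric used in the paper: roughly, IoU above 0.25 with one GT and an orientation difference under 30 degrees). The rectangle representation here is a hypothetical simplification with axis-aligned boxes, not the rotated-rectangle code in grasp_det.py:

```python
def is_success(pred, gt, iou_thresh=0.25, angle_thresh_deg=30.0):
    # pred and gt are hypothetical dicts: {"x1", "y1", "x2", "y2", "angle"}.
    # Angle criterion, accounting for the 180-degree symmetry of a grasp rectangle.
    d = abs(pred["angle"] - gt["angle"]) % 180.0
    if min(d, 180.0 - d) > angle_thresh_deg:
        return False
    # IoU criterion on (approximate) axis-aligned boxes.
    ix1, iy1 = max(pred["x1"], gt["x1"]), max(pred["y1"], gt["y1"])
    ix2, iy2 = min(pred["x2"], gt["x2"]), min(pred["y2"], gt["y2"])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((pred["x2"] - pred["x1"]) * (pred["y2"] - pred["y1"])
             + (gt["x2"] - gt["x1"]) * (gt["y2"] - gt["y1"]) - inter)
    return inter / union > iou_thresh

# One random GT is drawn per example, which is why repeated evaluation runs are needed:
# import random; success = is_success(pred, random.choice(gt_rects))
```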

What do you think?

@xiaoshuguo750

xiaoshuguo750 commented Jan 16, 2018

Thank you for your great answer!
Very clear!
Wonderful description!
Shu Guo & HEUzhouhanwen

@clvictory

When I run grasp_det.py, x_hat, h_hat and w_hat become NaN after only a few epochs. Is that expected, and how can I avoid it?

@xiaoshuguo750

xiaoshuguo750 commented Jan 29, 2018 via email

@clvictory

@xiaoshuguo750

It works, thx!

@xiaoshuguo750

xiaoshuguo750 commented Jan 30, 2018 via email

@clvictory

@xiaoshuguo750

Yeah!

@woshisj

woshisj commented Apr 25, 2018

Hello,
Are you still working on grasping-related research?

@xiaoshuguo750

xiaoshuguo750 commented Apr 29, 2018 via email

@lx-onism

weixin: 409067552 (On 2018-04-25 21:25:24, "sujie" notifications@github.com wrote: Hello, are you still working on grasping-related research?)

Hello! I'm currently working on grasping-related research. How is your progress? I tested my own data with the model saved in this repository, but the results were quite poor. Then I tested data from the Cornell grasping dataset and, surprisingly, the results were just as poor, and I don't know why. How did your tests turn out?

@weiwuhuhu


Hi, I'm new to grasping research. When I started reading the code, there was one step I didn't understand: it says the ImageNet data needs to be converted to TFRecord, which means downloading the ImageNet dataset. I don't get why: the paper uses the Cornell grasping dataset, so why download ImageNet? The ImageNet dataset is huge and my campus network is very slow. I'd appreciate an explanation, thanks.

@woshisj

woshisj commented Feb 16, 2019

Take a look at the paper:
V. EXPERIMENTS AND EVALUATION
C. Pretraining


@weiwuhuhu


Hello, I couldn't find the paper you recommended. Is its title "EXPERIMENTS AND EVALUATION"? Would you mind adding me as a contact?

@woshisj

woshisj commented Feb 16, 2019

It's the paper this code is based on.
The title is: Real-Time Grasp Detection Using Convolutional Neural Networks
Chapter V, Section C


@weiwuhuhu


OK, I'll take a closer look. Would you mind adding me on WeChat?

@woshisj

woshisj commented Feb 16, 2019

Let's just talk here; I'll reply when I see your messages.


@weiwuhuhu


So two datasets are needed: the Cornell grasping dataset, which provides the grasp annotations, and ImageNet. Does the whole roughly 150 GB ImageNet dataset really have to be downloaded?

@1458763783


Hey, how is your grasping work going now? Could you leave an email address? I'd like to discuss it.

@woyuni

woyuni commented Nov 9, 2020

weihuhuhu, did you get the code running?

@1458763783

1458763783 commented Nov 11, 2020 via email

@zhoumo1121


ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float64: 'Tensor("truediv:0", shape=(), dtype=float64, device=/device:CPU:0)'
How can I solve this error when running grasp_det.py?
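(Just a hedged guess, not a confirmed fix: this error usually appears when the float64 result of a true division, the `truediv` op in the message, is fed to an op that expects int32. An explicit cast at the offending line is the usual remedy; the variable names below are placeholders, not the actual names in grasp_det.py.)

```python
import tensorflow as tf

# Hypothetical example of the usual fix: a true division yields a float64
# tensor, but the op consuming it expects int32, so cast explicitly.
num_examples_per_epoch = tf.constant(885)   # placeholder values
batch_size = tf.constant(64)
num_batches = tf.cast(num_examples_per_epoch / batch_size, tf.int32)
```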

@Jonho111


Hi, I'd like to ask: if I want to train the model myself, do the dataset images need to be RGB-D, or is RGB enough?
