
Issues of evaluate code #10
Open
heng94 opened this issue Aug 27, 2018 · 19 comments

Comments

@heng94 commented Aug 27, 2018

Hi, when I run evaluate.py, there is only a "gt" result and no "prediction" result. What code should I modify to get the "prediction" result? Thanks!

@bertjiazheng

I have the same issue. Has anyone been able to solve it?

@DRAhmadFaraz

@HanochZzhou @bertjiazheng Can you show a screenshot of your "gt" results? Then I can help you.

@bertjiazheng

Hi @DRAhmadFaraz, I got the predictions of PlaneNet, but the per-pixel recall seems to be lower than reported in the paper. Can you reproduce the evaluation from the paper?

Here are the results:
Per-Pixel Recall: [0., 0.11259292, 0.39307635, 0.50202066, 0.53868158, 0.55395703, 0.57877339, 0.57877339, 0.60665465, 0.60665465, 0.60665465, 0.62022231, 0.62022231]
Per-Plane Recall: [0., 0.16, 0.32, 0.36, 0.38, 0.42, 0.46, 0.46, 0.48, 0.48, 0.48, 0.5, 0.5]
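For reference, a minimal sketch of how a recall curve like the one above might be computed. This is an assumption about the metric (recall as the fraction of pixels within a growing depth-error threshold), not the repository's actual evaluation code, and the threshold values are placeholders:

```python
# Hypothetical sketch: per-pixel recall as the fraction of valid pixels whose
# predicted depth lies within a growing error threshold of the ground truth.
# The thresholds below are guesses; the repo's actual values may differ.
import numpy as np

def per_pixel_recall_curve(pred_depth, gt_depth, thresholds):
    valid = gt_depth > 0                          # skip pixels without ground truth
    error = np.abs(pred_depth - gt_depth)[valid]
    return np.array([(error <= t).mean() for t in thresholds])

thresholds = np.linspace(0.0, 0.6, 13)            # 13 points, like the arrays above
# recalls = per_pixel_recall_curve(pred, gt, thresholds)
```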

@jcliu0428

I have the same issue. Does anyone know how to fix it?

@jcliu0428

@bertjiazheng Did you get the prediction results in your 'evaluate' folder? Did you change your command line to run evaluate.py? Thanks.

@bertjiazheng

@jcliu0428

Run with the command line:

python evaluate.py --dataFolder=*** --numImages=760 --methods=2000000 --numOutputPlanes=10

@jcliu0428

@bertjiazheng Have you changed anything in checkpoint.ckpt?
When I try your command line, it shows:
InvalidArgumentError (see above for traceback): Unsuccessful TensorSliceReader constructor: Failed to get matching files on checkpoint/planenet_hybrid3_bl0_dl0_ll1_pb_pp_ps_sm0/checkpoint.ckpt: Not found: checkpoint/planenet_hybrid3_bl0_dl0_ll1_pb_pp_ps_sm0; No such file or directory
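For reference, that error just means TensorFlow's Saver cannot find the checkpoint files on disk. A quick sanity check (assuming TensorFlow 1.x, which PlaneNet uses; the path is copied from the error message):

```python
import os
import tensorflow as tf

# Path taken from the error message above; adjust to your local layout.
ckpt_dir = 'checkpoint/planenet_hybrid3_bl0_dl0_ll1_pb_pp_ps_sm0'

print(os.path.isdir(ckpt_dir))               # False means the directory is missing
print(tf.train.latest_checkpoint(ckpt_dir))  # None means no readable checkpoint inside
```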

@bertjiazheng commented Nov 15, 2018

@jcliu0428 You don't need to modify anything. Make sure methods=2000000; a 0 disables the corresponding method.
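For reference, a minimal sketch of how a positional flag like --methods=2000000 might be interpreted, based on the comment above; the slot names are hypothetical placeholders, not the repository's actual method list:

```python
# Hypothetical sketch: each digit of --methods selects a variant for one
# comparison-method slot, and 0 disables that slot entirely.
methods = '2000000'
method_slots = ['PlaneNet', 'baseline_1', 'baseline_2', 'baseline_3',
                'baseline_4', 'baseline_5', 'baseline_6']  # placeholder names

for name, digit in zip(method_slots, methods):
    if digit != '0':
        print('evaluating %s (variant %s)' % (name, digit))
```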

@jcliu0428

@bertjiazheng I modified ALL_METHODS and it also works. Thanks a lot!

@XYZ-qiyh commented Feb 19, 2019

Hello, I'm confused about the output of python evaluate.py:
[screenshot: evaluate_output]
What does this mean? Where should I find the recall?
@bertjiazheng @art-programmer Thanks a lot~

@bertjiazheng

@QTODD You can find the pixel recall here and the plane recall here.

@XYZ-qiyh commented Mar 1, 2019

Hey, when I run evaluate.py, I only get the predictions of PlaneNet.
What can I do to get the results of the other existing methods? @jcliu0428 @bertjiazheng

@skq-5233

@bertjiazheng Why is there no quantitative comparison output when I run evaluate.py? Looking forward to your reply. Thank you!

@skq-5233

(quoting @bertjiazheng's results above) Hello, when I run python evaluate.py, it does not output quantitative results such as rel, rmse, etc., and I do not know why. I hope you can take the time to reply. Thank you!


@skq-5233

(quoting @XYZ-qiyh's question above) Hello, when I run the test code python evaluate.py --dataFolder="./data" --numImages=760 --methods=2000000 --numOutputPlanes=10, why is there no quantitative output (such as rel, rmse, etc.)? I hope you can take the time to reply! Thank you very much!


@skq-5233

(quoting @XYZ-qiyh's question above) Hello, when you run the test code python evaluate.py, do you get quantitative output (such as rel, rmse, etc.)? I hope you can take the time to reply. Thank you very much!

