This repository has been archived by the owner on Jan 17, 2019. It is now read-only.

Model Stage is always initial status, never be executed #30

Open
ljzhu1990 opened this issue Nov 3, 2016 · 8 comments

Comments

@ljzhu1990

Hello, I followed the user guide. First, I created an asset, and then I created a model. But the model always stays in initial status and is never executed. Can you help me solve this problem? Thank you very much.

@bhlx3lyx7
Contributor

Thanks for your question; we really want to help you resolve it. First, what's your environment: the Docker env we provided, or an env you built yourself? Second, can you tell us the details of the asset you've created, so we can have a look at it? Thank you.

@ljzhu1990
Author

I'm glad to receive your reply. I'm running Griffin in my own environment, following 'How to deploy and run at local'. First, I installed MongoDB, Hadoop, Spark, Hive, and so on; the Griffin engine and Hive are deployed in the same Hadoop cluster. Next, I loaded your sample data into Hive tables, with HDFS paths '/hivetable/user_info_src/' and '/hivetable/user_info_target'. Then I created two data assets named user_info_src and user_info_target, entering the corresponding HDFS path for each. Finally, I created an 'Accuracy' DQ model, selecting some columns from the source and target data assets for comparison. But it never executes and I can't get any result; no error info is printed either. Can you help me find the problem? Thank you very much.
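For reference, the "load sample data into Hive tables" step described above could look roughly like the following sketch. The column list and field delimiter are assumptions for illustration (the actual schema of the sample data isn't shown in this thread); only the table names and HDFS paths come from the comment.

```shell
# Sketch: define external Hive tables over the sample-data HDFS paths.
# Columns and delimiter are hypothetical; adjust to the real sample schema.
hive -e "
CREATE EXTERNAL TABLE user_info_src (
  user_id   BIGINT,
  name      STRING,
  address   STRING,
  email     STRING,
  phone     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LOCATION 'hdfs:///hivetable/user_info_src';

CREATE EXTERNAL TABLE user_info_target (
  user_id   BIGINT,
  name      STRING,
  address   STRING,
  email     STRING,
  phone     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LOCATION 'hdfs:///hivetable/user_info_target';
"
```

Using `EXTERNAL` tables means the data files already sitting at those HDFS paths are picked up in place, without Hive copying or owning them.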

@bhlx3lyx7
Contributor

bhlx3lyx7 commented Nov 4, 2016

Well, I've reviewed our "How to deploy and run at local" section and found that the steps were not described clearly and contained some version mistakes. I've updated that document now.
Would you mind giving us your email so we can discuss the details? Or you can contact us through the "Google Groups" mentioned here.
Thanks a lot.

@ljzhu1990
Author

My email is: ljzhu1990@163.com. I'd be glad to discuss the issues with you, and I'm also interested in Griffin. Best wishes!

@titt

titt commented Apr 4, 2017

Hello,
I have almost the same issue: I created some models, but no metrics were ever produced from them.
I followed the user guide.
I'm using a Hortonworks sandbox with Spark 1.6.2, Hadoop 2.7.3, Hive 1.2.1, Tomcat 7.0.75.0, Java 1.8.0_111, and MongoDB 3.4.3.
I created the users_info_src and users_info_target tables and the _SUCCESS files in HDFS.
I created a new directory containing env.sh, griffin_jobs.sh, griffin_regular_run.sh, tmp, and griffin-models.jar, and I ran nohup ./griffin_regular_run.sh &.
I also put griffin-core/target/ROOT.war in /usr/share/tomcat/webapps.
At first I got an error 500 when I tried to create a data asset; I put the griffin-core/target/ROOT dir in /usr/share/tomcat/webapps to work around that.
And now I have the same issue described in this case. Have you solved it?

Thank you very much
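The deployment steps listed above can be sketched as a shell session. The working directory and the source paths of the scripts are assumptions (the comment doesn't say where the files come from); the filenames, the nohup invocation, and the Tomcat webapps path are taken from the comment.

```shell
#!/bin/sh
# Sketch of the local deployment described above; /opt/griffin and the
# /path/to/... source locations are hypothetical placeholders.
mkdir -p /opt/griffin/tmp
cd /opt/griffin

# Gather the job scripts and the measure jar into one working directory.
cp /path/to/env.sh /path/to/griffin_jobs.sh /path/to/griffin_regular_run.sh .
cp /path/to/griffin-models.jar .

# Start the regular job runner in the background, surviving logout.
chmod +x griffin_jobs.sh griffin_regular_run.sh
nohup ./griffin_regular_run.sh > nohup.out 2>&1 &

# Deploy the Griffin web service into Tomcat.
cp /path/to/griffin-core/target/ROOT.war /usr/share/tomcat/webapps/
```

One thing worth checking when a model stays in initial status with no errors: whether `nohup.out` (or the logs under `tmp/`) shows the scheduler actually picking up jobs, since the runner failing silently at startup looks identical to "never executed" from the UI.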

@sorabh89

sorabh89 commented Jun 8, 2017

Hi, I'm facing the same issue. I'm using the Docker procedure.
Also, please let me know whether I need to create a Hive table for a file I upload to HDFS, or whether the model can use the raw file itself without a Hive table.
And please update the documentation; it's hard to understand as currently written.
Thanks.

@luzx02
Contributor

luzx02 commented Jun 8, 2017 via email

@luzx02
Contributor

luzx02 commented Jun 8, 2017 via email
