
line 46: 3085 Illegal instruction #62

Closed

hdulbj opened this issue Sep 10, 2016 · 4 comments

hdulbj commented Sep 10, 2016

I built Paddle from source on Ubuntu 14 successfully.
But when I try to run the sentiment demo with ./train.sh, it raises the error below.
What's wrong?

I0910 01:24:00.869675 3085 Util.cpp:113] Calling runInitFunctions
I0910 01:24:00.870432 3085 Util.cpp:126] Call runInitFunctions done.
[INFO 2016-09-10 01:24:02,138 networks.py:1122] The input order is [word, label]
[INFO 2016-09-10 01:24:02,138 networks.py:1129] The output order is [cost_0]
I0910 01:24:02.245304 3085 Trainer.cpp:169] trainer mode: Normal
I0910 01:24:12.127163 3085 PyDataProvider2.cpp:219] loading dataprovider dataprovider::process
[INFO 2016-09-10 01:24:12,160 dataprovider.py:22] dict len : 101745
I0910 01:24:12.680724 3085 PyDataProvider2.cpp:219] loading dataprovider dataprovider::process
[INFO 2016-09-10 01:24:12,681 dataprovider.py:22] dict len : 101745
I0910 01:24:12.687304 3085 GradientMachine.cpp:134] Initing parameters..
I0910 01:24:13.452847 3085 GradientMachine.cpp:141] Init parameters done.
Current Layer forward/backward stack is
    LayerName: lstmemory_0
    LayerName: fc_layer_0
    LayerName: embedding_0
    LayerName: word
*** Aborted at 1473495865 (unix time) try "date -d @1473495865" if you are using GNU date ***
PC: @ 0x7b0581 hppl::relu()
*** SIGILL (@0x7b0581) received by PID 3085 (TID 0x7f38e974a700) from PID 8062337; stack trace: ***
    @ 0x7f39014e5340 (unknown)
    @ 0x7b0581 hppl::relu()
    @ 0x5b722c paddle::LstmCompute::forwardOneSequence<>()
    @ 0x5b77cb paddle::LstmCompute::forwardBatch<>()
    @ 0x62b278 paddle::LstmLayer::forwardBatch()
    @ 0x62cad0 paddle::LstmLayer::forward()
    @ 0x53749c paddle::NeuralNetwork::forward()
    @ 0x54e447 paddle::TrainerThread::forward()
    @ 0x55027c paddle::TrainerThread::computeThread()
    @ 0x7f39004a1a60 (unknown)
    @ 0x7f39014dd182 start_thread
    @ 0x7f38ffc0930d (unknown)
    @ 0x0 (unknown)
/home/lbj/Paddle-master/build/bin/paddle: line 46: 3085 Illegal instruction (core dumped) ${DEBUGGER} $MYDIR/../opt/paddle/bin/paddle_trainer ${@:2}

liuyuuan (Contributor) commented

Check if your CPU supports AVX instructions; if not, try compiling PaddlePaddle with WITH_AVX=OFF.
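
For reference, a minimal sketch of that check and a non-AVX rebuild (WITH_AVX is the option mentioned above; the out-of-source build layout is an assumption, so adjust to your setup):

```bash
# Check whether this CPU advertises AVX (Linux).
if grep -q avx /proc/cpuinfo; then
    echo "CPU supports AVX"
else
    echo "No AVX support -- rebuild with WITH_AVX=OFF"
fi

# Assumed out-of-source CMake build with AVX kernels disabled:
mkdir -p build && cd build
cmake .. -DWITH_AVX=OFF
make -j"$(nproc)"
```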

hdulbj (Author) commented Sep 11, 2016

At first I didn't turn it off; when I ran "paddle train --help", it raised "Illegal instruction". After I turned it off, the "paddle train --help" command was OK. However, when I ran the sentiment demo with "./train.sh", it raised the error again. So what's wrong?

reyoung (Collaborator) commented Sep 12, 2016

@hdulbj The fix is already checked in as 4a880f0. Please get the latest code, then rebuild and reinstall Paddle.

It will fix the SIGILL here.
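
Roughly the following (a sketch, assuming a git checkout built out-of-source with CMake; reuse whatever configure options you used before):

```bash
cd Paddle
git pull                  # pick up commit 4a880f0 or later
cd build
cmake ..                  # keep your previous options, e.g. -DWITH_AVX=OFF
make -j"$(nproc)"
make install              # reinstall over the previous installation
```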

hdulbj (Author) commented Sep 12, 2016

Thanks, it worked. But a new problem has appeared.

reyoung closed this as completed Sep 12, 2016
qingqing01 added the Bug label Oct 25, 2016
thisjiang pushed a commit to thisjiang/Paddle that referenced this issue Oct 28, 2021
gglin001 pushed a commit to graphcore/Paddle-fork that referenced this issue Dec 8, 2021
wangxicoding pushed a commit to wangxicoding/Paddle that referenced this issue Dec 9, 2021
zhoutianzi666 pushed a commit to zhoutianzi666/Paddle that referenced this issue May 23, 2022
Thunderbrook pushed a commit to Thunderbrook/Paddle that referenced this issue Jul 19, 2022
AnnaTrainingG pushed a commit to AnnaTrainingG/Paddle that referenced this issue Sep 19, 2022
zmxdream pushed a commit to zmxdream/Paddle that referenced this issue Oct 10, 2023
Fridge003 pushed a commit to Fridge003/Paddle that referenced this issue Mar 13, 2024
Aurelius84 pushed a commit that referenced this issue Mar 26, 2024
co63oc pushed a commit to co63oc/Paddle that referenced this issue Mar 26, 2024
Aurelius84 pushed a commit that referenced this issue Apr 6, 2024
co63oc pushed a commit to co63oc/Paddle that referenced this issue Apr 9, 2024
zmxdream pushed a commit to zmxdream/Paddle that referenced this issue Jun 11, 2024