tensor draft for review #2645
Conversation
paddle/framework/tensor.cc
Outdated
return static_cast<const T*>(holder->Ptr());
}

bool Tensor::NeedReset() {
Should be NeedReset() const.
fixed
paddle/framework/tensor.cc
Outdated
template <typename T>
void CopyFrom(const Tensor& src) {
  if ((void*)&src == (void*)this) {
Maybe this needs more checking? Does CopyFrom always succeed?
There is some checking in Tensor::Data(). I can't find any more potential problems for the moment... Do you have any ideas?
paddle/framework/tensor.cc
Outdated
T* src_ptr = src.Data<T>();
T* dst_ptr = MutableData<T>(src.Dims());
for (int i = 0; i < len; ++i) {
  dst_ptr[i] = src_ptr[i];
This is not correct for a tensor with GPUPlace.
Yes, it's just for the CPU at present. The GPU part will be implemented later.
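A rough sketch of how such a place-aware copy might eventually look (hypothetical: only the CPU branch exists in this PR, and the is_cpu_place/cudaMemcpy dispatch below is an assumption, not the author's code):

template <typename T>
void Tensor::CopyFrom(const Tensor& src) {
  int len = product(src.Dims());
  const T* src_ptr = src.Data<T>();
  T* dst_ptr = MutableData<T>(src.Dims());
  if (is_cpu_place(src.Place())) {
    // CPU-to-CPU: a plain bulk copy (std::memcpy from <cstring>).
    std::memcpy(dst_ptr, src_ptr, len * sizeof(T));
  } else {
    // GPU source, CPU destination assumed here: copy device memory to host.
    cudaMemcpy(dst_ptr, src_ptr, len * sizeof(T), cudaMemcpyDeviceToHost);
  }
}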
paddle/framework/tensor.cc
Outdated
void Tensor::Resize(const DDim& dims) {
  dims_ = dims;
  return;
Why add so many return statements at the end of void functions?
It's just a personal habit... I'm used to adding return; to mark all return points of a void function.
Of course, they can be removed if they break the common code style or might cause confusion.
Please strictly follow a common style as we work as a team. Let's remove all unnecessary code.
paddle/framework/tensor.cc
Outdated
if (product(dims) != product(dims_)) {
  // TODO: error: "Reshape() can not change tensor's numel".
}
_dims = dim;
_dims ==> dims_
Also, what is the difference between Resize and Reshape — just a checking step?
Resize() can change the tensor's numel, so the data block might be re-allocated, while Reshape() never changes numel, and all data will certainly be retained. When users invoke Resize() and Reshape(), they have different expectations about how we will handle their data (retain or discard). However, from the implementation perspective, the difference is really just a checking step.
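A minimal sketch of that checking step, reusing the PADDLE_ASSERT macro that appears elsewhere in this PR (how errors should be reported is still an open TODO in the code):

void Tensor::Reshape(const DDim& dims) {
  // Reshape must preserve the element count; only the shape changes,
  // so the underlying buffer is always retained.
  PADDLE_ASSERT(product(dims) == product(dims_));
  dims_ = dims;
}

void Tensor::Resize(const DDim& dims) {
  // Resize may change numel; the buffer is lazily re-allocated on the
  // next mutable data access, so no check is needed here.
  dims_ = dims;
}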
paddle/framework/tensor.cc
Outdated
namespace framework {

template <typename T>
const T* Tensor::Data() const {
Templates should be implemented in the header.
done
paddle/framework/tensor.cc
Outdated
void Tensor::Reshape(const DDim& dims) {
  if (product(dims) != product(dims_)) {
    // TODO: error: "Reshape() can not change tensor's numel".
Why can't Reshape change a tensor's numel? In each mini-batch of training, each output of an Op should be reshaped, because the shape of the input tensor could change.
Users can use Resize() to change a tensor's numel.
size_(size) {}

virtual std::type_info TypeInfo() const { return typeid(T); }
virtual void* Ptr() const { return static_cast<void*>(ptr_.get()); }
Maybe Ptr() should not be added to the interface; it should return T*, like:

struct Placeholder {
  virtual ~Placeholder() {}  // a virtual member makes the type polymorphic, enabling RTTI
};

template <typename T>
struct PlaceholderImpl : public Placeholder {
  T* Ptr() const { return ptr_.get(); }
  std::shared_ptr<T> ptr_;  // the typed buffer
};

template <typename T>
const T* Data() const {
  // dynamic_pointer_cast yields nullptr on a type mismatch, so the
  // element type is verified at runtime.
  auto holder = std::dynamic_pointer_cast<PlaceholderImpl<T>>(holder_);
  ASSERT(holder != nullptr);
  return holder->Ptr();
}
Using static_cast everywhere means the compiler cannot check types for us.
See PR #2647
paddle/framework/tensor.cc
Outdated
const T* Tensor::Data() const {
  PADDLE_ASSERT(holder_ != nullptr);
  PADDLE_ASSERT(holder_->Place() == place_);
  PADDLE_ASSERT(holder_->Size() >= product(dims_) * sizeof(T));
Do we need to check the type of holder_ here?
It is worth discussing, I think.
paddle/framework/tensor.cc
Outdated
bool Tensor::NeedReset() {
  return (holder_ == nullptr || holder_->Place() != place_ ||
          holder_->Size() < product(dims_) * sizeof(T));
There is no T here. Does this need to be a template? And again, do we need to check the type of holder_ here?
paddle/framework/tensor.cc
Outdated
void Tensor::ShareData(const Tensor& src) {
  if (src.NeedReset()) {
    // TODO: error: "Src tensor need to be reseted before calling ShareData()".
What's the meaning of reset? Does it actually mean resize?
NeedReset means the underlying memory block needs to be re-allocated.
paddle/framework/tensor.cc
Outdated
return holder_;
}

const DDim& Tensor::Dims() const { return dims_; }
Put these simple functions into the header file so the compiler can do inline optimization.
done
paddle/framework/tensor.h
Outdated
int len = product(src.Dims());
T* src_ptr = src.Data<T>();
T* dst_ptr = MutableData<T>(src.Dims());
for (int i = 0; i < len; ++i) {
Why not use memcpy?
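For trivially copyable element types on CPU, the loop could indeed collapse to one call (a sketch, assuming both buffers live in host memory; std::memcpy comes from <cstring>):

std::memcpy(dst_ptr, src_ptr, len * sizeof(T));  // replaces the element-by-element loop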
I am not sure we need so many methods defined here. By Occam's razor, I'd delete those that are not mandatory.
paddle/framework/tensor.cc
Outdated
int Tensor::Numel() const { return product(dims_); }

void Tensor::Resize(const DDim& dims) {
In my design, I don't see that we need Resize. Indeed, I followed @qingqing01's suggestion and removed the ability of the constructor to set the size.
Do you mean that if users want to change a tensor's size, they can invoke mutable_data(DDim) directly?
Yes.
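A hypothetical usage sketch of that approach (make_ddim and CPUPlace as used elsewhere in the codebase; the re-allocation behavior follows the lazy scheme described in tensor.md):

Tensor t;
// First access allocates a buffer big enough for 2 x 3 floats.
float* p = t.mutable_data<float>(make_ddim({2, 3}), CPUPlace());
// Asking for a larger size triggers re-allocation; a smaller or equal
// request can reuse the existing buffer.
p = t.mutable_data<float>(make_ddim({4, 5}), CPUPlace());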
paddle/framework/tensor.cc
Outdated
return;
}

const std::shared_ptr<Tensor::Placeholder>& Tensor::Holder() const {
I think that users don't need to and shouldn't know the concept "holder", which is defined as a private type inside Tensor in my design.
done
paddle/framework/tensor.h
Outdated
// must be POD types
template <typename T, typename = std::enable_if<std::is_pod<T>::value>::type>
T* MutableData() {
Please follow the Google C++ style guide carefully -- there is a reason it uses the name mutable_data.
"Accessors and mutators (get and set functions) may be named like variables. These often correspond to actual member variables, but this is not required."
Yes, it's OK to use mutable_data(). By the way, do we need to rename the functions in DDim and Place to follow Google C++ style?
I think it is a good idea.
paddle/framework/tensor.h
Outdated
}

template <typename T>
void CopyFrom(const Tensor& src) {
Why do we need CopyFrom?
We may need CopyFrom when the src Tensor is on the GPU. We may need to copy a GPU Tensor to the CPU and do something such as writing it to a file.
I agree with @Xreki.
Sounds like what we need is a method Serialize that fills in a protobuf message, instead of a Copy or CopyFrom?
void Tensor::Serialize(TensorProto* proto) {
  if (is_gpu_place(Place())) {
    // Stage GPU data through a temporary CPU tensor, then serialize that.
    Tensor cpu_tensor;
    cudaMemcpy(cpu_tensor.mutable_data(Size(), CPUPlace()), data(), Size(),
               cudaMemcpyDeviceToHost);
    cpu_tensor.Serialize(proto);
  } else {
    // fill in proto
  }
}
}

template <typename T>
bool NeedReset() const {
I think @hedaoyuan's comment means these functions can be inlined by using the inline keyword.
I think they will be inlined even without the inline keyword.
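For reference, a minimal self-contained illustration of that C++ rule (the names here are made up for the example):

struct Size {
  // Defined inside the class body, this function is implicitly inline;
  // the compiler may inline calls even without the 'inline' keyword.
  int numel() const { return numel_; }
  int numel_ = 0;
};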
This PR started with many features that are not necessary or shouldn't be there (partially due to a misunderstanding of the lazy memory allocation described in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.md).
As the review goes on, it's getting closer and closer to #2611.
I'd suggest that we close this PR and merge #2611. After that, we can change it to use shared_ptr rather than unique_ptr and support shared data.
using paddle::platform::get_place;

public:
 explicit Tensor(DDim dims) : dims_(dims), place_(get_place()) {}
Please be aware that I followed @qingqing01's suggestion from 3 days ago (#2611 (review)) and removed Tensor::Tensor, Tensor::place_, and Tensor::dims_ in my PR #2611, so as to make the syntax concise and consistent.
}

template <typename T>
bool NeedReset() const {
NeedReset is not something that should be exposed.
}

// must be POD types
template <typename T, typename = std::enable_if<std::is_pod<T>::value>::type>
Also, I followed @qingqing01's suggestion and removed the multiple signatures of mutable_data.
return mutable_data<T>();
}

int Rank() const { return arity(dims_); }
This should be removed following @qingqing01's suggestion.
int Rank() const { return arity(dims_); }

int Numel() const { return product(dims_); }
This should be removed too.
int Numel() const { return product(dims_); }

void Reshape(const DDim& dims) {
There is no need for Reshape, because we can simply call mutable_data(new_place, new_dim).
const paddle::platform::Place& Place() const { return place_; }

template <typename T>
bool IsType() const {
IsType should be removed because we can interpret a Tensor as any type -- just call mutable_data<T>(place, dim).
size_t size_;  // size of the memory block.
};

std::shared_ptr<Placeholder> holder_;  // holds the memory block if allocated.
Following @qingqing01's suggestion, dims_ and place_ should be removed.
I agree to close this PR and merge #2611.
@Canpio Could you please approve PR #2611? Thanks.
See #2611.
This is only a draft of the tensor implementation for review, and it has not been fully tested.
So DO NOT MERGE THIS PR.
Comments are welcome if you have any ideas or suggestions about my code.
In addition, during my coding work I came up with several other questions that need to be discussed:
1. Do we need to rename the functions (in DDim and Place) to follow Google C++ style?
2. There is more than one assert utility in the codebase (e.g., paddle/platform/assert.h). Are we going to unify them?
3. Stride is removed in the current tensor design. Is that going to be a problem?