
tensor draft for review #2645

Closed · wants to merge 8 commits

Conversation

JiayiFeng (Collaborator) commented Jun 28, 2017:

This is only a draft of the tensor implementation, posted for review; it has not been fully tested.
So DO NOT MERGE THIS PR.
Comments are welcome if you have any ideas or suggestions for my code.

In addition, during my coding work I came up with several other questions that need to be discussed:

  1. Do we need to rename the functions ported from Majel (such as the member functions of DDim and Place) to follow the Google C++ style?
  2. In the ported Majel code we use glog for assertions, while in the rest of the refactored Paddle we plan to use our own macros (paddle/platform/assert.h). Are we going to unify them? (The two styles are contrasted in the sketch below.)
  3. Stride is removed in the current tensor design. Is that going to be a problem?
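
For context, the two assertion styles in question look roughly like this (a minimal sketch; the exact behavior of PADDLE_ASSERT is an assumption based on how it is used elsewhere in this PR):

#include <glog/logging.h>            // glog, as used in the ported Majel code
#include "paddle/platform/assert.h"  // assumed to provide PADDLE_ASSERT

void Example(const int* ptr, int size) {
  // glog style: on failure, logs the streamed message and aborts.
  CHECK(ptr != nullptr) << "ptr must not be null";
  // Paddle style: assumed to abort on failure, like a plain assert().
  PADDLE_ASSERT(size > 0);
}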

return static_cast<const T*>(holder->Ptr());
}

bool Tensor::NeedReset() {
Collaborator:

NeedReset () const

Collaborator Author:

fixed


template <typename T>
void CopyFrom(const Tensor& src) {
if ((void*)&src == (void*)this) {
Contributor:

Maybe this needs more checking? Does CopyFrom always succeed?

JiayiFeng (Collaborator Author), Jun 28, 2017:

There is some checking in Tensor::Data(). I can't find more potential problems at the moment... Do you have any ideas?

T* src_ptr = src.Data<T>();
T* dst_ptr = MutableData<T>(src.Dims());
for (int i = 0; i < len; ++i) {
dst_ptr[i] = src_ptr[i];
Contributor:

This is not correct for a tensor with GPUPlace.

Collaborator Author:

Yes, it's just for CPU at present. The GPU part will be implemented later.


void Tensor::Resize(const DDim& dims) {
dims_ = dims;
return;
Member:

Why add so many returns at the end of void functions?

JiayiFeng (Collaborator Author), Jun 28, 2017:

It's just a personal habit... I'm used to adding return; to mark all return points of a void function.
Of course, they can be removed if they break the common code style or might cause confusion.

Collaborator:

Please strictly follow a common style as we work as a team. Let's remove all unnecessary code.

if (product(dims) != product(dims_)) {
// TODO: error: "Reshape() can not change tensor's numel".
}
_dims = dim;
jacquesqiao (Member), Jun 28, 2017:

_dims ==> dims_
What is the difference between Resize and Reshape? Is it just a check step?

JiayiFeng (Collaborator Author), Jun 28, 2017:

Resize() can change a tensor's numel, so the data block might be re-allocated, while Reshape() never changes the numel, and all data will certainly be retained. When users invoke Resize() and Reshape(), they have different expectations about how we will handle their data (retain or discard).

However, from the implementation perspective, the difference is really just a checking step.
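
A minimal sketch of that distinction, following the code in this PR (PADDLE_ASSERT stands in for the still-TODO error handling):

void Tensor::Reshape(const DDim& dims) {
  // Reshape must preserve numel, so the existing data is always retained.
  PADDLE_ASSERT(product(dims) == product(dims_));
  dims_ = dims;
}

void Tensor::Resize(const DDim& dims) {
  // Resize may change numel; the data block may then be lazily re-allocated,
  // in which case the old contents are discarded.
  dims_ = dims;
}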

namespace framework {

template <typename T>
const T* Tensor::Data() const {
Collaborator:

Templates should be implemented in the header.

Collaborator Author:

done


void Tensor::Reshape(const DDim& dims) {
if (product(dims) != product(dims_)) {
// TODO: error: "Reshape() can not change tensor's numel".
Collaborator:

Why can't Reshape change a tensor's numel?

During mini-batch training, every output of an Op may need to be reshaped, because the shape of the input tensor can change.

Collaborator Author:

Users can use Resize() to change a tensor's numel.

size_(size) {}

virtual std::type_info TypeInfo() const { return typeid(T); }
virtual void* Ptr() const { return static_cast<void*>(ptr_.get()); }
Collaborator:

Maybe Ptr() should not be added to the interface; it should return T*, like:

struct Placeholder {
  virtual ~Placeholder() {}  // for rtti
};

template <typename T>
struct PlaceholderImpl : public Placeholder {
  T* Ptr() const { return ptr_.get(); }
};

template <typename T>
const T* Data() const {
  auto holder = std::dynamic_pointer_cast<PlaceholderImpl<T>>(holder_);
  ASSERT(holder != nullptr);
  return holder->Ptr();
}

Using static_cast everywhere means the compiler cannot check types for us.

Collaborator:

See PR #2647

const T* Tensor::Data() const {
PADDLE_ASSERT(holder_ != nullptr);
PADDLE_ASSERT(holder_->Place() == place_);
PADDLE_ASSERT(holder_->Size() >= product(dims_) * sizeof(T));
Contributor:

Do we need to check the type of holder_ here?

Collaborator Author:

It is worth discussing, I think.


bool Tensor::NeedReset() {
return (holder_ == nullptr || holder_->Place() != place_ ||
holder_->Size() < product(dims_) * sizeof(T));
Contributor:

There is no T here. Does this need to be a template? And again, do we need to check the type of holder_ here?


void Tensor::ShareData(const Tensor& src) {
if (src.NeedReset()) {
// TODO: error: "Src tensor need to be reseted before calling ShareData()".
Member:

What's the meaning of reset? Does it actually mean resize?

Collaborator Author:

NeedReset means the underlying memory block needs to be re-allocated.

return holder_;
}

const DDim& Tensor::Dims() const { return dims_; }
Contributor:

Put these simple functions into the header file so the compiler can do inline optimization.

Collaborator Author:

done

int len = product(src.Dims());
T* src_ptr = src.Data<T>();
T* dst_ptr = MutableData<T>(src.Dims());
for (int i = 0; i < len; ++i) {
Contributor:

Why not use memcpy?
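
For reference, the suggested memcpy variant would look roughly like this for the CPU-only case (a sketch within this PR's Tensor class; it relies on the element type being POD, which the class already requires):

#include <cstring>

template <typename T>
void CopyFrom(const Tensor& src) {
  int len = product(src.Dims());
  const T* src_ptr = src.Data<T>();
  T* dst_ptr = MutableData<T>(src.Dims());
  // For trivially copyable element types, a single memcpy replaces the loop.
  std::memcpy(dst_ptr, src_ptr, len * sizeof(T));
}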

wangkuiyi (Collaborator) left a comment:

I am not sure we need this many methods defined here. By Occam's Razor, I'd delete those that are not mandatory.




int Tensor::Numel() const { return product(dims_); }

void Tensor::Resize(const DDim& dims) {
Collaborator:

In my design, I don't see that we need Resize. Indeed, I followed @qingqing01's suggestion and removed the constructor's ability to set the size.

JiayiFeng (Collaborator Author), Jun 29, 2017:

Do you mean that if users want to change a tensor's size, they can invoke mutable_data(DDim) directly?

Collaborator:

Yes.

return;
}

const std::shared_ptr<Tensor::Placeholder>& Tensor::Holder() const {
Collaborator:

I think users don't need to, and shouldn't, know about the concept of a "holder", which is defined as a private type inside Tensor in my design.

Collaborator Author:

done


// must be POD types
template <typename T, typename = std::enable_if<std::is_pod<T>::value>::type>
T* MutableData() {
Collaborator:

Please follow the Google C++ style guide carefully -- there is a reason it uses the name mutable_data.

Collaborator Author:

The style guide says: "Accessors and mutators (get and set functions) may be named like variables. These often correspond to actual member variables, but this is not required."

Yes, it's OK to use mutable_data().

By the way, do we need to rename the functions in DDim and Place to follow the Google C++ style as well?

Collaborator:

I think it is a good idea.

}

template <typename T>
void CopyFrom(const Tensor& src) {
Collaborator:

Why do we need CopyFrom?

Contributor:

We may need CopyFrom when the src Tensor is on a GPU. We may need to copy a GPU Tensor to the CPU and do something with it, such as writing it to a file.

JiayiFeng (Collaborator Author), Jun 29, 2017:

I agree with @Xreki.

wangkuiyi (Collaborator), Jun 29, 2017:

Sounds like what we need is a Serialize method that fills in a protobuf message, instead of a Copy or CopyFrom?

void Tensor::Serialize(TensorProto* proto) {
  if (is_gpu_place(Place())) {
    Tensor cpu_tensor;
    cudaMemcpy(cpu_tensor.mutable_data(Size(), CPUPlace()), data(), Size(),
               cudaMemcpyDeviceToHost);
    cpu_tensor.Serialize(proto);
  } else {
    // fill in proto
  }
}

}

template <typename T>
bool NeedReset() const {
Contributor:

I think @hedaoyuan's comment means these functions can be inlined by using the inline keyword.

Contributor:

I think they will be inlined even without the inline keyword.
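
For what it's worth, a member function defined inside the class body is implicitly inline, so no keyword is needed there (a minimal, self-contained illustration):

class Example {
 public:
  // Defined inside the class body: implicitly inline, no keyword required.
  int Numel() const { return numel_; }

 private:
  int numel_ = 0;
};

// By contrast, a member function defined outside the class body but still in
// a header needs the explicit inline keyword to avoid multiple-definition
// errors across translation units.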

wangkuiyi (Collaborator) left a comment:

This PR started with many features that are not necessary or shouldn't be there (partially due to a misunderstanding of the lazy memory allocation described in https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor.md).

As the review has gone on, it has been getting closer and closer to #2611.

I'd suggest that we close this PR and merge #2611. After that, we can change it to use shared_ptr rather than unique_ptr and support shared data.

using paddle::platform::get_place;

public:
explicit Tensor(DDim dims) : dims_(dims), place_(get_place()) {}
Collaborator:

Please be aware that I followed @qingqing01's suggestion 3 days ago (#2611 (review)) and removed Tensor::Tensor, Tensor::place_, and Tensor::dims_ in my PR #2611, so as to keep the syntax concise and consistent.

}

template <typename T>
bool NeedReset() const {
Collaborator:

NeedReset is not something that should be exposed.

}

// must be POD types
template <typename T, typename = std::enable_if<std::is_pod<T>::value>::type>
Collaborator:

Also, I followed @qingqing01's suggestion and removed the multiple signatures of mutable_data.

return mutable_data<T>();
}

int Rank() const { return arity(dims_); }
Collaborator:

This should be removed, following @qingqing01's suggestion.


int Rank() const { return arity(dims_); }

int Numel() const { return product(dims_); }
Collaborator:

This should be removed too.


int Numel() const { return product(dims_); }

void Reshape(const DDim& dims) {
Collaborator:

There is no need for Reshape, because we can simply call mutable_data(new_place, new_dim).
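
A hypothetical call site under that design (the template parameter, argument order, and the make_ddim helper are assumptions, not the final #2611 API):

Tensor t;
// A single call establishes the new shape and lazily (re-)allocates memory,
// so separate Reshape/Resize methods become unnecessary.
float* dst = t.mutable_data<float>(CPUPlace(), make_ddim({2, 3}));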

const paddle::platform::Place& Place() const { return place_; }

template <typename T>
bool IsType() const {
Collaborator:

IsType should be removed because we can reinterpret a Tensor as holding any type -- just call mutable_data<T>(place, dim).

size_t size_; // size of the memory block.
};

std::shared_ptr<Placeholder> holder_; // holds the memory block if allocated.
Collaborator:

Following @qingqing01's suggestion, dims_ and place_ should be removed.

JiayiFeng (Collaborator Author):

I agree to close this PR and merge #2611.

wangkuiyi (Collaborator) commented Jul 3, 2017:

@Canpio Could you please approve PR #2611? Thanks.

JiayiFeng (Collaborator Author):

see #2611

@JiayiFeng JiayiFeng closed this Jul 3, 2017
@JiayiFeng JiayiFeng deleted the tensor_draft branch August 1, 2017 18:55