Commit
Branch 152232810 (tensorflow#8988)
* Improve py_func error handling.

Automatically translate some Python errors into corresponding TF errors at runtime.
Change: 152156821
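To illustrate the kind of behavior this enables, a minimal sketch (not code from this commit; the exact mapping of a Python ValueError to tf.errors.InvalidArgumentError is an assumption):

```python
import numpy as np
import tensorflow as tf

def bad_func(x):
  # A plain Python exception raised inside the py_func body.
  raise ValueError("x must be non-negative, got %s" % x)

x = tf.constant(np.float32(-1.0))
y = tf.py_func(bad_func, [x], tf.float32)

with tf.Session() as sess:
  try:
    sess.run(y)
  except tf.errors.InvalidArgumentError as e:
    # Assumed mapping: the Python ValueError surfaces as a structured TF
    # error rather than a generic unknown error, so it can be caught by type.
    print("caught:", type(e).__name__)
```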

* Update interaction with libpng so that we use the public API instead of
knowledge of the internal libpng data structures.
Change: 152167754

* TensorBoard plugins now contain their own name/route prefix.
Change: 152167807

* Passes trainable flag to separable_conv2d biases.
Change: 152170239

* Saving resource variables with a caching device.
Change: 152171539

* Drop loss from estimator_spec.eval_metric_ops, as required by core Estimator.
Change: 152179924

* sample_stats.percentile DOCFIX.
Change: 152182295

* Added a memory optimizer to grappler.
Change: 152184170

* Change default behavior of the tf runs selector:

- If there are fewer than 41 runs, enable them all by default
- If there are 41 runs or more, disable them all by default

This is in response to user complaints that having it enable only the first ten runs by default was confusing, because it was not obvious to users that some runs had been disabled.
However, it still solves the initial user complaint that having very many runs simultaneously enabled would lag the UI.

I also changed the "toggle all runs" button to try to turn everything off before turning everything on.
Also, I improved the logic for detecting when the runs selection is back in the default state, so that we can avoid generating long URI strings wherever possible.
Change: 152188948
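A minimal Python sketch of the new default rule and toggle behavior (the real implementation is TensorBoard frontend code; all names here are illustrative):

```python
RUN_LIMIT = 41  # illustrative constant

def default_selection(runs):
  # Fewer than 41 runs: enable them all by default.
  # 41 runs or more: disable them all by default.
  enable_all = len(runs) < RUN_LIMIT
  return {run: enable_all for run in runs}

def toggle_all(selection):
  # "Toggle all" first tries to turn everything off; only when everything
  # is already off does it turn everything on.
  if any(selection.values()):
    return {run: False for run in selection}
  return {run: True for run in selection}
```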

* Autogenerated Change: Change TensorBoard TAG to 52
Change: 152189000

* Remove warning that only happens with config cuda.
Change: 152189205

* Make resource variable shared name consistent with non-resource variables.

Remove colocation constraint from resource variable cached value with the
variable itself.
Change: 152192203

* Add a way to specify the optimization order; refactor and add constant folding to meta optimizer.
Change: 152193646

* Backport fixes and improvements from external Keras.
Change: 152198296

* Merge changes from github.
Change: 152200430

* Go: Update generated wrapper functions for TensorFlow ops.
Change: 152200754

* Update ops-related pbtxt files.
Change: 152203174

* Make ImportGraphDef() work with functions.

In addition to modifying graph_constructor.cc, this patch adds some other
functionality to enable importing functions:
* Ability to add FunctionDefLibraries to Graphs and
  FunctionLibraryDefinitions (in addition to existing functions)
* FunctionDefsEqual() utility function
Change: 152205258
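A rough usage sketch of what this enables, assuming the TF 1.x Defun helper (not code from this commit):

```python
import tensorflow as tf
from tensorflow.python.framework import function

@function.Defun(tf.float32)
def plus_one(x):
  return x + 1.0

g = tf.Graph()
with g.as_default():
  inp = tf.placeholder(tf.float32, name="inp")
  out = plus_one(inp)

# The exported GraphDef carries plus_one in its function library.
gdef = g.as_graph_def()

# Previously ImportGraphDef() could not consume a GraphDef whose library
# contained functions; with this change the import succeeds.
g2 = tf.Graph()
with g2.as_default():
  tf.import_graph_def(gdef, name="imported")
```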

* Expand contrib test to more than just test targets.
Change: 152206822

* Preserve graph version during optimization
Change: 152213262

* Exclude enter and exit nodes from shape refiner's constant folding.
Change: 152213637

* Allow reshape_mover and algebraic_simplifier to make multiple mutations, by avoiding the short-circuiting std::any_of.
Change: 152232810
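Python's built-in any() short-circuits the same way std::any_of does, so the bug class and the fix can be sketched like this (illustrative only):

```python
def run_pass(computation):
  # Pretend the pass only mutates comp0; returns whether anything changed.
  print("visiting", computation)
  return computation == "comp0"

computations = ["comp0", "comp1", "comp2"]

# Buggy pattern: any() stops at the first True, so comp1 and comp2 are
# never visited once comp0 reports a change.
changed = any(run_pass(c) for c in computations)

# Fixed pattern, mirroring the C++ rewrite: visit every computation and
# accumulate whether anything changed.
changed = False
for c in computations:
  if run_pass(c):
    changed = True
```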

* fixing workspace.bzl

* workspace.bzl further fixes

* fixing tensorflow.bzl merge conflicts

* fixing typo in dnn.h

* fixing bad merge for dnn.h
rohan100jain authored Apr 5, 2017
1 parent b93a88f commit e69f717
Showing 129 changed files with 6,371 additions and 3,742 deletions.
15 changes: 8 additions & 7 deletions tensorflow/compiler/tests/nary_ops_test.py
@@ -116,13 +116,14 @@ def testStridedSlice(self):
                     np.array([1, 1], dtype=np.int32)],
                    expected=np.array([[], []], dtype=np.float32))
 
-    if (np.int64 in self.int_types):
-      self._testNAry(lambda x: array_ops.strided_slice(*x),
-                     [np.array([[], [], []], dtype=np.float32),
-                      np.array([1, 0], dtype=np.int64),
-                      np.array([3, 0], dtype=np.int64),
-                      np.array([1, 1], dtype=np.int64)],
-                     expected=np.array([[], []], dtype=np.float32))
+    if np.int64 in self.int_types:
+      self._testNAry(
+          lambda x: array_ops.strided_slice(*x), [
+              np.array([[], [], []], dtype=np.float32), np.array(
+                  [1, 0], dtype=np.int64), np.array([3, 0], dtype=np.int64),
+              np.array([1, 1], dtype=np.int64)
+          ],
+          expected=np.array([[], []], dtype=np.float32))
 
     self._testNAry(lambda x: array_ops.strided_slice(*x),
                    [np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
15 changes: 8 additions & 7 deletions tensorflow/compiler/xla/service/algebraic_simplifier.cc
@@ -1348,13 +1348,14 @@ Status AlgebraicSimplifierVisitor::HandleMinimum(HloInstruction* minimum,
 StatusOr<bool> AlgebraicSimplifier::Run(HloModule* module) {
   XLA_VLOG_LINES(2,
                  "AlgebraicSimplifier::Run(), before:\n" + module->ToString());
-  bool changed =
-      std::any_of(module->computations().begin(), module->computations().end(),
-                  [=](const std::unique_ptr<HloComputation>& computation) {
-                    return AlgebraicSimplifierVisitor::Run(
-                        computation.get(), is_layout_sensitive_,
-                        valid_bitcast_callback_, enable_dot_simplification_);
-                  });
+  bool changed = false;
+  for (auto& comp : module->computations()) {
+    if (AlgebraicSimplifierVisitor::Run(comp.get(), is_layout_sensitive_,
+                                        valid_bitcast_callback_,
+                                        enable_dot_simplification_)) {
+      changed = true;
+    }
+  }
   XLA_VLOG_LINES(2,
                  "AlgebraicSimplifier::Run(), after:\n" + module->ToString());
   return changed;
20 changes: 9 additions & 11 deletions tensorflow/compiler/xla/service/reshape_mover.cc
@@ -234,17 +234,15 @@ bool TrySinkReshapeOrTranspose(HloComputation* computation,
 }  // namespace
 
 StatusOr<bool> ReshapeMover::Run(HloModule* module) {
-  return std::any_of(
-      module->computations().begin(), module->computations().end(),
-      [](const std::unique_ptr<HloComputation>& computation) {
-        std::list<HloInstruction*> postorder =
-            computation->MakeInstructionPostOrder();
-        return std::any_of(postorder.begin(), postorder.end(),
-                           [&computation](HloInstruction* instruction) {
-                             return TrySinkReshapeOrTranspose(computation.get(),
-                                                              instruction);
-                           });
-      });
+  bool changed = false;
+  for (const auto& comp : module->computations()) {
+    for (HloInstruction* instruction : comp->MakeInstructionPostOrder()) {
+      if (TrySinkReshapeOrTranspose(comp.get(), instruction)) {
+        changed = true;
+      }
+    }
+  }
+  return changed;
 }
 
 }  // namespace xla
51 changes: 51 additions & 0 deletions tensorflow/compiler/xla/service/reshape_mover_test.cc
@@ -202,5 +202,56 @@ TEST_F(ReshapeMoverTest, ScalarReshapeNotMovedAcrossSelect) {
   EXPECT_EQ(select, computation->root_instruction());
 }
 
+// Tree looks like this:
+//
+// add1
+// |
+// +- reshape2 - param2
+// |
+// +- reshape3 - add0
+//               |
+//               + reshape0 - param0
+//               |
+//               + reshape1 - param1
+//
+// We expect reshape{0,1} AND reshape{2,3} to be lifted.
+TEST_F(ReshapeMoverTest, MultiplePasses) {
+  auto shape1 = ShapeUtil::MakeShape(F32, {1, 8, 1, 7});
+  auto shape2 = ShapeUtil::MakeShape(F32, {8, 7, 1});
+  auto shape3 = ShapeUtil::MakeShape(F32, {8, 7});
+  HloComputation::Builder builder(TestName());
+  auto param0 = builder.AddInstruction(
+      HloInstruction::CreateParameter(0, shape1, "param0"));
+  auto param1 = builder.AddInstruction(
+      HloInstruction::CreateParameter(1, shape1, "param1"));
+  auto param2 = builder.AddInstruction(
+      HloInstruction::CreateParameter(2, shape2, "param2"));
+  auto reshape0 =
+      builder.AddInstruction(HloInstruction::CreateReshape(shape2, param0));
+  auto reshape1 =
+      builder.AddInstruction(HloInstruction::CreateReshape(shape2, param1));
+  auto add0 = builder.AddInstruction(HloInstruction::CreateBinary(
+      shape2, HloOpcode::kAdd, reshape0, reshape1));
+  auto reshape2 =
+      builder.AddInstruction(HloInstruction::CreateReshape(shape3, param2));
+  auto reshape3 =
+      builder.AddInstruction(HloInstruction::CreateReshape(shape3, add0));
+  auto add1 = builder.AddInstruction(HloInstruction::CreateBinary(
+      shape3, HloOpcode::kAdd, reshape2, reshape3));
+
+  auto module = MakeUnique<HloModule>(TestName());
+  auto computation = module->AddEntryComputation(builder.Build());
+  EXPECT_EQ(add1, computation->root_instruction());
+  EXPECT_TRUE(ReshapeMover().Run(module.get()).ValueOrDie());
+  EXPECT_EQ(HloOpcode::kReshape, computation->root_instruction()->opcode());
+  EXPECT_EQ(HloOpcode::kAdd,
+            computation->root_instruction()->operand(0)->opcode());
+  const auto& add_params =
+      computation->root_instruction()->operand(0)->operands();
+  EXPECT_EQ(2, add_params.size());
+  EXPECT_EQ(HloOpcode::kParameter, add_params[0]->opcode());
+  EXPECT_EQ(HloOpcode::kReshape, add_params[1]->opcode());
+}
+
 }  // namespace
 }  // namespace xla
9 changes: 4 additions & 5 deletions tensorflow/contrib/distributions/python/ops/sample_stats.py
@@ -44,7 +44,7 @@ def percentile(x,
                keep_dims=False,
                validate_args=False,
                name=None):
-  """Compute the `q`-th percentile of `x` along leading (sample) dimensions.
+  """Compute the `q`-th percentile of `x`.
 
   Given a vector `x`, the `q`-th percentile of `x` is the value `q / 100` of the
   way from the minimum to the maximum in a sorted copy of `x`.
@@ -58,7 +58,7 @@ def percentile(x,
   ```python
-  # Get 30th percentile with default ('linear') interpolation.
+  # Get 30th percentile with default ('nearest') interpolation.
   x = [1., 2., 3., 4.]
   percentile(x, q=30.)
   ==> 2.0
@@ -91,11 +91,10 @@ def percentile(x,
     axis: Optional `0-D` or `1-D` integer `Tensor` with constant values.
       The axis that holds independent samples over which to return the desired
       percentile.  If `None` (the default), treat every dimension as a sample
-      dimension, returning a scalar
+      dimension, returning a scalar.
     interpolation : {"lower", "higher", "nearest"}.  Default: "nearest"
       This optional parameter specifies the interpolation method to
-      use when the desired quantile lies between two data points
-      `i < j`:
+      use when the desired quantile lies between two data points `i < j`:
       * lower: `i`.
       * higher: `j`.
       * nearest: `i` or `j`, whichever is nearest.
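The corrected default can be sanity-checked against NumPy, assuming percentile keeps parity with np.percentile (illustrative):

```python
import numpy as np

x = np.array([1., 2., 3., 4.])
# 'nearest' returns the closer of the two bracketing data points, so the
# 30th percentile is 2.0; linear interpolation would give 1.9 instead.
print(np.percentile(x, 30, interpolation="nearest"))  # 2.0
print(np.percentile(x, 30, interpolation="linear"))   # 1.9
```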
2 changes: 1 addition & 1 deletion tensorflow/contrib/keras/python/keras/__init__.py
@@ -37,4 +37,4 @@
 from tensorflow.contrib.keras.python.keras import wrappers
 
 
-__version__ = '2.0.0-tf'
+__version__ = '2.0.2-tf'
24 changes: 17 additions & 7 deletions tensorflow/contrib/keras/python/keras/activations.py
@@ -24,18 +24,28 @@
 from tensorflow.contrib.keras.python.keras.utils.generic_utils import deserialize_keras_object
 
 
-def softmax(x):
+def softmax(x, axis=-1):
+  """Softmax activation function.
+
+  Arguments:
+      x : Tensor.
+      axis: Integer, axis along which the softmax normalization is applied.
+
+  Returns:
+      Tensor, output of softmax transformation.
+
+  Raises:
+      ValueError: In case `dim(x) == 1`.
+  """
   ndim = K.ndim(x)
   if ndim == 2:
     return K.softmax(x)
-  elif ndim == 3:
-    e = K.exp(x - K.max(x, axis=-1, keepdims=True))
-    s = K.sum(e, axis=-1, keepdims=True)
+  elif ndim > 2:
+    e = K.exp(x - K.max(x, axis=axis, keepdims=True))
+    s = K.sum(e, axis=axis, keepdims=True)
     return e / s
   else:
-    raise ValueError('Cannot apply softmax to a tensor '
-                     'that is not 2D or 3D. '
-                     'Here, ndim=' + str(ndim))
+    raise ValueError('Cannot apply softmax to a tensor that is 1D')
 
 
 def elu(x, alpha=1.0):
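A NumPy sketch of what the generalized branch computes, a numerically stable softmax along an arbitrary axis (not the Keras code itself):

```python
import numpy as np

def softmax_np(x, axis=-1):
  # Mirrors the ndim > 2 branch: subtract the max along `axis` for numerical
  # stability, exponentiate, then normalize along the same axis.
  e = np.exp(x - np.max(x, axis=axis, keepdims=True))
  return e / np.sum(e, axis=axis, keepdims=True)

x = np.random.randn(2, 3, 4)
# Every slice along the chosen axis sums to 1, for any tensor rank.
print(np.allclose(softmax_np(x, axis=1).sum(axis=1), 1.0))  # True
```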
tensorflow/contrib/keras/python/keras/applications/resnet50.py
@@ -163,8 +163,8 @@ def ResNet50(include_top=True,
       specified in your Keras config file.
 
   Arguments:
-      include_top: whether to include the 3 fully-connected
-          layers at the top of the network.
+      include_top: whether to include the fully-connected
+          layer at the top of the network.
       weights: one of `None` (random initialization)
           or "imagenet" (pre-training on ImageNet).
       input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
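A usage sketch of the documented behavior; the import path is inferred from the file being edited and is an assumption:

```python
# include_top=False drops the single fully-connected classifier layer,
# keeping only the convolutional base for feature extraction.
from tensorflow.contrib.keras.python.keras.applications.resnet50 import ResNet50

model = ResNet50(include_top=False, weights="imagenet")
```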
(Diffs for the remaining changed files are not shown.)