[Bug] Wrong output shapes of MaxPool #18731

Closed · 3 tasks done
dkurt opened this issue Jul 24, 2023 · 11 comments · Fixed by #18965
Labels: bug (Something isn't working), category: nGraph (OpenVINO Runtime Library - nGraph)
dkurt (Contributor) commented Jul 24, 2023

System information (version)
  • OpenVINO Source => pip install
  • OpenVINO Version => 2023.0
  • Operating System / Platform => Linux
  • Compiler => GCC
  • Problem classification => nGraph
  • Device used => any
  • Framework => any (reproduced with pure nGraph builder)
  • Model name => ❔
Detailed description

Wrong output shape calculation during shape propagation. For the specified parameters (kernel 2, stride 2, pad 1, ceil rounding mode) and a 5x5 input, the expected output is 3x3 (as ceil[(5 + 1 + 1 - 2) / 2] = ceil(2.5) = 3).
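
A minimal sketch of that count in plain Python (not OpenVINO code), assuming the convention from the PyTorch docs quoted later in this thread: sliding windows may start in the left padding or the input, while windows that would start in the right padded region are ignored.

import math

# Count the sliding windows along one spatial dimension:
# input 5, kernel 2, stride 2, pad 1, CEIL rounding.
size, kernel, stride, pad = 5, 2, 2, 1
out_ceil = math.ceil((size + 2 * pad - kernel) / stride) + 1  # plain CEIL formula -> 4
starts = [-pad + i * stride for i in range(out_ceil)]         # window start indices: [-1, 1, 3, 5]
valid = [s for s in starts if s < size]                       # ignore windows starting in the right pad
print(len(valid))                                             # -> 3, i.e. a 3x3 output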

Steps to reproduce

using nGraph:

#include <iostream>
#include <vector>
#include <openvino/openvino.hpp>
#include <ngraph/ngraph.hpp>

int main() {
    std::vector<size_t> input_shape{1, 3, 5, 5};
    std::vector<size_t> strides{2, 2};
    std::vector<size_t> kernel{2, 2};
    std::vector<size_t> pads{1, 1};
    auto inp = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32, ngraph::Shape(input_shape));
    auto max_pool = std::make_shared<ngraph::op::v1::MaxPool>(
        inp,
        ngraph::Strides(strides),
        ngraph::Shape(pads),
        ngraph::Shape(pads),
        ngraph::Shape(kernel),
        ngraph::op::RoundingType::CEIL,
        ngraph::op::PadType::EXPLICIT
    );
    std::cout << max_pool->get_shape() << std::endl;

    return 0;
}

using PyTorch:

import numpy as np
import torch
import torch.nn as nn
import openvino
from openvino.runtime import Core

print('PyTorch version', torch.__version__)
print('OV version', openvino.runtime.__version__)

class Model(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=(2, 2), stride=2, padding=1, dilation=1, ceil_mode=True)

    def forward(self, x):
        return self.pool(x)

m = Model()
inp = torch.tensor(np.random.standard_normal((1, 3, 5, 5)))
ref = m(inp)
print('ref shape', ref.shape)

torch.onnx.export(m, inp, "model.onnx")

# Run with OpenVINO

core = Core()
compiled = core.compile_model("model.onnx")
req = compiled.create_infer_request()
out = req.infer(np.array(inp))
out = next(iter(out))
print('out shape', out.shape)

PyTorch version 1.13.1+cpu
OV version 2023.0.0-10926-b4452d56304-releases/2023/0
ref shape torch.Size([1, 3, 3, 3])
out shape [1,3,4,4]
Issue submission checklist
  • I report the issue, it's not a question
  • I checked the problem with documentation, FAQ, open issues, Stack Overflow, etc and have not found solution
  • There is reproducer code and related data files: images, videos, models, etc.
dkurt added bug (Something isn't working) and support_request labels Jul 24, 2023
avitial added category: nGraph (OpenVINO Runtime Library - nGraph) and removed support_request labels Jul 25, 2023
praasz (Contributor) commented Jul 26, 2023

Thanks for reporting the issue.

According to the PyTorch MaxPool2d documentation, the output H and W dimensions should be calculated as:

ceil((5+2*1-1*(2-1)-1)/2 + 1) = ceil(3.5) -> 4

There is a +1 after the division by stride, so the output shape should be [1,3,4,4].

ONNX and OpenVINO use the same equation as PyTorch.
Please see the exported ONNX model: its H and W dimensions are also calculated as 4, 4, and OpenVINO gives the same result.

It looks like ceil mode in PyTorch is not working correctly: in the given example, replacing ceil mode with floor leaves the output shape the same.
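
One way to double-check that from the exported file (a small sketch, assuming the onnx Python package is installed) is to read the output shape recorded in the model.onnx produced by the script above:

import onnx
from onnx import shape_inference

# The exported graph itself should report [1, 3, 4, 4] for this configuration,
# matching the ONNX/OpenVINO calculation described above.
model = shape_inference.infer_shapes(onnx.load("model.onnx"))
dims = [d.dim_value for d in model.graph.output[0].type.tensor_type.shape.dim]
print(dims)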

dkurt (Contributor, Author) commented Jul 26, 2023

Please take a look at the note:

When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.

So for a 5x5 input, 2x2 kernel, 2x2 stride and 1x1 padding there are 3x3 sliding windows:

 0 x0 x1 x2 x3 x4 0
|----|-----|-----|

PyTorch's shape computation works correctly and decreases the output size by 1 at the end: https://github.com/pytorch/pytorch/blob/15442915cf450347d077a783a9c3ea20ff220686/aten/src/ATen/native/Pool.h#L41-L56
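
A rough Python paraphrase of that linked logic (a sketch for this thread, not the actual PyTorch source), showing where the output size gets decreased:

def pooling_output_size(input_size, kernel, stride, pad, dilation=1, ceil_mode=True):
    numerator = input_size + 2 * pad - dilation * (kernel - 1) - 1
    if ceil_mode:
        numerator += stride - 1  # makes the integer division round up
    out = numerator // stride + 1
    # ceil-mode correction: the last window must start inside the input or the left padding
    if ceil_mode and (out - 1) * stride >= input_size + pad:
        out -= 1
    return out

print(pooling_output_size(5, 2, 2, 1))  # -> 3, matching ref shape [1, 3, 3, 3]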

praasz (Contributor) commented Jul 27, 2023

Please take a look at the note:

When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.

So for a 5x5 input, 2x2 kernel, 2x2 stride and 1x1 padding there are 3x3 sliding windows:

 0 x0 x1 x2 x3 x4 0
|----|-----|-----|

PyTorch's shape computation works correctly and decreases the output size by 1 at the end: https://github.com/pytorch/pytorch/blob/15442915cf450347d077a783a9c3ea20ff220686/aten/src/ATen/native/Pool.h#L41-L56

Yes, right, and this is the difference.
As a workaround for exporting the PyTorch model to ONNX, maybe this will help:

torch.onnx.export(m, inp, "model.onnx", opset_version=9)

It looks like with opset 9 the result is the same as in PyTorch, while from opset 10 onward the conversion is not compatible.

dkurt (Contributor, Author) commented Jul 27, 2023

@praasz, thanks for the workaround, but as I originally reported, this problem reproduces with the nGraph API as well:

#include <iostream>
#include <vector>
#include <openvino/openvino.hpp>
#include <ngraph/ngraph.hpp>

int main() {
    std::vector<size_t> input_shape{1, 3, 5, 5};
    std::vector<size_t> strides{2, 2};
    std::vector<size_t> kernel{2, 2};
    std::vector<size_t> pads{1, 1};
    auto inp = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32, ngraph::Shape(input_shape));
    auto max_pool = std::make_shared<ngraph::op::v1::MaxPool>(
        inp,
        ngraph::Strides(strides),
        ngraph::Shape(pads),
        ngraph::Shape(pads),
        ngraph::Shape(kernel),
        ngraph::op::RoundingType::CEIL,
        ngraph::op::PadType::EXPLICIT
    );
    std::cout << max_pool->get_shape() << std::endl;

    return 0;
}

prints

[1,3,4,4]

praasz (Contributor) commented Aug 3, 2023

@praasz, thanks for the workaround, but as I originally reported, this problem reproduces with the nGraph API as well:
...
prints

[1,3,4,4]

This is the expected result in OpenVINO, and it can be different from PyTorch.
There is a PR which improves the PyTorch frontend conversion. With the change proposed in #18965, converting a model from PyTorch to OpenVINO should give the same results.

Example of converting the model:

import numpy as np
import torch
import torch.nn as nn

from openvino.frontend import FrontEndManager
from openvino.runtime import PartialShape

from openvino.frontend.pytorch.ts_decoder import TorchScriptPythonDecoder

class Model(nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__()
        self.pool = nn.MaxPool2d(2,stride=2, padding=1, dilation=1, ceil_mode=True)

    def forward(self, x):
        return self.pool(x)


input_shape = (1,3,5,5)
m = Model()
inp = torch.tensor(np.random.standard_normal(input_shape))
ref = m(inp)

fe_manager = FrontEndManager()
fe = fe_manager.load_by_framework('pytorch')
decoder = TorchScriptPythonDecoder(m)
im = fe.load(decoder)
om = fe.convert(im)

om.inputs[0].get_node().set_partial_shape(PartialShape(input_shape))
om.validate_nodes_and_infer_types()
print(f'Input {input_shape} -> results ref: {ref.shape} ov: {om.get_output_partial_shape(0)}')

dkurt (Contributor, Author) commented Aug 3, 2023

As I already mentioned, the problem is not in the PyTorch/ONNX conversion but in nGraph itself. There is no data outside the padding, and the output shape should be 3x3. I have extended my C++ sample above so you can see the garbage in the last row/column:

#include <cstdlib>   // srand, rand
#include <iostream>
#include <vector>
#include <openvino/openvino.hpp>
#include <ngraph/ngraph.hpp>

int main() {
    srand(123);

    std::vector<size_t> input_shape{1, 1, 5, 5};
    std::vector<size_t> strides{2, 2};
    std::vector<size_t> kernel{2, 2};
    std::vector<size_t> pads{1, 1};
    auto inp = std::make_shared<ngraph::op::Parameter>(ngraph::element::f32, ngraph::Shape(input_shape));
    auto max_pool = std::make_shared<ngraph::op::v1::MaxPool>(
        inp,
        ngraph::Strides(strides),
        ngraph::Shape(pads),
        ngraph::Shape(pads),
        ngraph::Shape(kernel),
        ngraph::op::RoundingType::CEIL,
        ngraph::op::PadType::EXPLICIT
    );
    std::cout << "OpenVINO output shape: " << max_pool->get_shape() << std::endl;

    // Create a function
    ngraph::ParameterVector inputs{inp};
    ngraph::ResultVector outs;
    outs.push_back(std::make_shared<ngraph::op::Result>(max_pool));
    auto func = std::make_shared<ngraph::Function>(outs, inputs);

    // Run model
    ov::Core core;
    auto compiled = core.compile_model(func, "CPU");
    auto req = compiled.create_infer_request();

    std::vector<float> inpData(5*5, 0);
    std::vector<float> outData(4*4, 0);
    for (int i = 0; i < inpData.size(); ++i) {
        inpData[i] = static_cast<float>(rand()) / RAND_MAX;
    }

    ov::Tensor inpTensor(ov::element::f32, {1, 1, 5, 5}, inpData.data());
    ov::Tensor outTensor(ov::element::f32, {1, 1, 4, 4}, outData.data());
    req.set_input_tensor(inpTensor);
    req.set_output_tensor(outTensor);
    req.infer();

    std::cout << std::endl << "input:" << std::endl;
    for (int i = 0; i < 5; ++i) {
        for (int j = 0; j < 5; ++j) {
            std::cout << inpData[i * 5 + j] << " ";
        }
        std::cout << std::endl;
    }

    std::cout << std::endl << "output:" << std::endl;
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) {
            std::cout << outData[i * 4 + j] << " ";
        }
        std::cout << std::endl;
    }

    return 0;
}

OpenVINO output shape: [1,1,4,4]

input:
0.0600514 0.788318 0.203068 0.348563 0.361609 
0.134639 0.375968 0.259322 0.0443163 0.879562 
0.630366 0.377145 0.319729 0.827858 0.425105 
0.486632 0.790695 0.95527 0.713757 0.352038 
0.84194 0.36238 0.383021 0.0784863 0.174064 

output:
0.0600514 0.788318 0.361609 -3.40282e+38 
0.630366 0.377145 0.879562 -3.40282e+38 
0.84194 0.95527 0.713757 -3.40282e+38 
-3.40282e+38 -3.40282e+38 -3.40282e+38 -3.40282e+38

praasz (Contributor) commented Aug 3, 2023

This is expected behaviour. To change it, a new version of the operator is required.

Another solution is to use an additional sub-graph, as in the created PR, which will remove the unwanted data.
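
For illustration only, a hypothetical post-processing sketch (not the actual sub-graph added by the PR): the extra rows/columns produced by CEIL rounding correspond to windows that start entirely in the right/bottom padding, so they can simply be cropped:

def crop_to_torch_ceil(out, input_hw, stride, pad):
    # out: OpenVINO MaxPool result as a NumPy array of shape [N, C, H, W]
    keep = []
    for size, out_size in zip(input_hw, out.shape[2:]):
        if (out_size - 1) * stride - pad >= size:  # last window starts in the right/bottom pad
            out_size -= 1
        keep.append(out_size)
    return out[:, :, :keep[0], :keep[1]]

# For the 5x5 example above: OpenVINO returns [1, 1, 4, 4]; cropping yields [1, 1, 3, 3].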

dkurt (Contributor, Author) commented Aug 3, 2023

Do you mean #18965? It just wasn't linked; thanks.

Should I open a separate ticket for nGraph?

praasz (Contributor) commented Aug 3, 2023

Do you mean #18965? It just wasn't linked; thanks.

Should I open a separate ticket for nGraph?

Yes, #18965.

@ilya-lavrenov Do we need a new ticket, or can this one be used as a feature request for the new operator?

dkurt (Contributor, Author) commented Aug 24, 2023

@praasz, should we close this issue?

ilya-lavrenov linked a pull request Sep 16, 2023 that will close this issue
ilya-lavrenov (Contributor) commented:

@ilya-lavrenov Do we need a new ticket, or can this one be used as a feature request for the new operator?

Let's create a new feature request ticket for the new operator if required.

ilya-lavrenov added this to the 2023.1 milestone Sep 16, 2023
github-merge-queue bot pushed a commit that referenced this issue Mar 14, 2024
### Details:
 - Core implementation of MaxPool-14 and AvgPool-14
 - They both introduce a new ceil mode: `ov::op::RoundingType::CEIL_TORCH`
 - The new ceiling mode does not allow the last pooling in a Dimension to start in the padding area
 - No changes to reference implementation were necessary

### Related PRs
 - [Specification](#22930)
 - [Python API](#22966)
 - [PT FE](#23027)
 - [Downgrade transformations](#23381)

### Tickets:
 - 131961

### Context
#18731

---------

Co-authored-by: Pawel Raasz <pawel.raasz@intel.com>
github-merge-queue bot pushed a commit that referenced this issue Mar 18, 2024
[Specification] MaxPool-14 and AvgPool-14 - new ceiling mode `CEIL_TORCH` (#22930)

### Details:
 - Add specification for `MaxPool-14` and `AvgPool-14`
 - They both introduce a new ceil mode: `ov::op::RoundingType::CEIL_TORCH`
 - The new ceiling mode does not allow the last pooling in a Dimension to start in the padding area

### Related PRs
 - [Reference and Core](#22796)
 - [Python API](#22966)
 - [PT FE](#23027)
 - [Downgrade transformations](#23381)

### Tickets:
 - 131961

### Context
#18731

---------

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
github-merge-queue bot pushed a commit that referenced this issue Mar 22, 2024
### Details:
 - Extend Python API with `MaxPool-14` and `AvgPool-14`
 - They both introduce a new ceil mode: `ov::op::RoundingType::CEIL_TORCH`
 - The new ceiling mode does not allow the last pooling in a Dimension to start in the padding area

### Related PRs
 - #22930
 - #22796
 - #23027
 - #23381
 - #23582

### Tickets:
 - 131961

### Context
#18731

---------

Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Shubham-Sahoo added a commit to Shubham-Sahoo/openvino that referenced this issue Mar 26, 2024
[Specification] MaxPool-14 and AvgPool-14 - new ceiling mode `CEIL_TORCH` (openvinotoolkit#22930)
bbielawx pushed a commit to bbielawx/openvino that referenced this issue Apr 12, 2024
[Specification] MaxPool-14 and AvgPool-14 - new ceiling mode `CEIL_TORCH` (openvinotoolkit#22930)

bbielawx pushed a commit to bbielawx/openvino that referenced this issue Apr 12, 2024
[PyOV] Add Python API for MaxPool-14 and AvgPool-14 (openvinotoolkit#22966)

alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024
Core implementation of MaxPool-14 and AvgPool-14

alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024
[Specification] MaxPool-14 and AvgPool-14 - new ceiling mode `CEIL_TORCH` (openvinotoolkit#22930)

alvoron pushed a commit to alvoron/openvino that referenced this issue Apr 29, 2024
[PyOV] Add Python API for MaxPool-14 and AvgPool-14 (openvinotoolkit#22966)