
Commit 150754e
Fix list type annotation and add an integration test for a query with multiple accept

vemonet committed Jul 4, 2023
1 parent db491b2 commit 150754e
Showing 6 changed files with 25 additions and 28 deletions.
8 changes: 2 additions & 6 deletions .github/workflows/test.yml
@@ -46,15 +46,11 @@ jobs:
with:
python-version: ${{ matrix.python-version }}

- name: Install dependencies
- name: 📥️ Install dependencies
run: |
pipx install hatch
- name: Check with Ruff and Mypy
run: |
hatch run check
- name: Test with coverage
- name: ☑️ Test with coverage
run: |
hatch run cov --cov-report xml
14 changes: 3 additions & 11 deletions README.md
@@ -293,7 +293,7 @@ cd rdflib-endpoint
Install [Hatch](https://hatch.pypa.io), this will automatically handle virtual environments and make sure all dependencies are installed when you run a script in the project:

```bash
pip install --upgrade hatch
pipx install hatch
```

Install the dependencies in a local virtual environment (running this command is optional as `hatch` will automatically install and synchronize dependencies each time you run a script with `hatch run`):
@@ -314,7 +314,7 @@ Access the YASGUI interface at http://localhost:8000

### ☑️ Run tests

Make sure the existing tests still work by running ``pytest``. Note that any pull requests to the fairworkflows repository on github will automatically trigger running of the test suite;
Make sure the existing tests still work by running the test suite and linting checks. Note that any pull requests to the fairworkflows repository on github will automatically trigger running of the test suite:

```bash
hatch run test
@@ -326,7 +326,7 @@ To display all `print()`:
hatch run test -s
```

Run tests on multiple python versions:
You can also run the tests on multiple python versions:

```bash
hatch run all:test
@@ -346,14 +346,6 @@ Check the code for errors, and if it is in accordance with the PEP8 style guide,
hatch run check
```

### ✅ Run all checks

Run all checks (format, linting, tests) with:

```bash
hatch run all
```

### ♻️ Reset the environment

In case you are facing issues with dependencies not updating properly you can easily reset the virtual environment with:
11 changes: 5 additions & 6 deletions pyproject.toml
@@ -90,9 +90,6 @@ post-install-commands = [
]

[tool.hatch.envs.default.scripts]
test = "pytest {args}"
cov = "test --cov src {args}"
cov-report = "cov --cov-report html {args} && python -m http.server 3000 --directory ./htmlcov"
dev = "uvicorn example.app.main:app --reload {args}"
fmt = [
"black src tests example",
@@ -103,11 +100,13 @@ check = [
"black src tests --check",
"mypy src",
]
all = [
"fmt",
test = [
"pytest {args}",
"check",
"cov",
]
cov = "test --cov src {args}"
cov-report = "cov --cov-report html {args} && python -m http.server 3000 --directory ./htmlcov"


[[tool.hatch.envs.all.matrix]]
python = ["3.7", "3.8", "3.9", "3.10", "3.11"]
6 changes: 3 additions & 3 deletions src/rdflib_endpoint/sparql_router.py
@@ -108,10 +108,9 @@
}


def parse_accept_header(accept:str) -> list:
def parse_accept_header(accept: str) -> List[str]:
"""
Given an accept header string, return a list of
media types in order of preference.
Given an accept header string, return a list of media types in order of preference.
:param accept: Accept header value
:return: Ordered list of media type preferences
@@ -147,6 +146,7 @@ def _parse_preference(qpref: str) -> float:
preferences.sort(key=lambda x: -x[1])
return [pref[0] for pref in preferences]


class SparqlRouter(APIRouter):
"""
Class to deploy a SPARQL endpoint using a RDFLib Graph.
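Outside the diff, for context: below is a minimal, self-contained sketch of what an Accept-header parser with this signature can look like. Only the signature `parse_accept_header(accept: str) -> List[str]`, the helper name `_parse_preference(qpref: str) -> float`, and the final sort-and-return lines are visible in the hunks above; the rest of the body is an illustrative assumption, not the repository's exact code.

```python
from typing import List


def _parse_preference(qpref: str) -> float:
    # "q=0.9" -> 0.9; anything malformed falls back to the default preference 1.0
    parts = qpref.split("=")
    try:
        return float(parts[1])
    except (IndexError, ValueError):
        return 1.0


def parse_accept_header(accept: str) -> List[str]:
    """Given an accept header string, return a list of media types in order of preference."""
    preferences = []
    for item in accept.split(","):
        tokens = item.strip().split(";")
        media_type = tokens[0].strip()
        q = 1.0  # RFC 9110: a media type without a q parameter defaults to q=1.0
        for param in tokens[1:]:
            if param.strip().lower().startswith("q="):
                q = _parse_preference(param.strip())
        preferences.append((media_type, q))
    # Sort by descending q-value; Python's stable sort keeps header order for ties
    preferences.sort(key=lambda x: -x[1])
    return [pref[0] for pref in preferences]
```

How a missing `q` parameter is treated (default 1.0 per RFC 9110) is exactly what the review comment further down this page is about.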
1 change: 1 addition & 0 deletions tests/test_parse_accept.py
@@ -1,4 +1,5 @@
import pytest

import rdflib_endpoint.sparql_router

accept_cases = [
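The change above only inserts a blank line between the import groups. For context, a parametrized test driven by such an `accept_cases` list typically looks like the sketch below; only `import pytest`, the `rdflib_endpoint.sparql_router` import, and the `accept_cases` name appear in the diff, so the cases and the test body here are illustrative assumptions.

```python
import pytest

import rdflib_endpoint.sparql_router

# Illustrative cases only: (Accept header, expected most-preferred media type)
accept_cases = [
    ("text/turtle", "text/turtle"),
    ("text/html;q=0.3, text/csv;q=0.9", "text/csv"),
]


@pytest.mark.parametrize("accept,expected", accept_cases)
def test_parse_accept(accept: str, expected: str) -> None:
    # The parser returns media types ordered by preference, most preferred first
    assert rdflib_endpoint.sparql_router.parse_accept_header(accept)[0] == expected
```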
13 changes: 11 additions & 2 deletions tests/test_rdflib_endpoint.py
@@ -41,13 +41,22 @@ def test_custom_concat_json():
def test_select_noaccept_xml():
response = endpoint.post("/", data="query=" + concat_select)
assert response.status_code == 200
# assert response.json()['results']['bindings'][0]['concat']['value'] == "Firstlast"


def test_select_csv():
response = endpoint.post("/", data="query=" + concat_select, headers={"accept": "text/csv"})
assert response.status_code == 200
# assert response.json()['results']['bindings'][0]['concat']['value'] == "Firstlast"


def test_multiple_accept():
response = endpoint.get(
"/",
params={"query": concat_select},
headers={"accept": "text/html;q=0.3, application/xml, application/json;q=0.9, */*;q=0.8"},

This comment has been minimized.

Copy link
@datadavev

datadavev Jul 4, 2023

Contributor

This should actually return application/xml since the default preference if not specified is 1.0. I found the issue and will provide a patch shortly.

# Returns JSON
)
assert response.status_code == 200
assert response.json()["results"]["bindings"][0]["concat"]["value"] == "Firstlast"


def test_fail_select_turtle():
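To make the review comment above concrete: under RFC 9110 semantics, a media type listed without a `q` parameter gets the default preference 1.0, so the Accept header used in `test_multiple_accept` should rank `application/xml` first. The snippet below is a worked illustration, not output captured from the library at this commit; whether `parse_accept_header` actually produces this ordering is precisely what the comment disputes.

```python
from rdflib_endpoint.sparql_router import parse_accept_header

accept = "text/html;q=0.3, application/xml, application/json;q=0.9, */*;q=0.8"

# Effective preferences under RFC 9110 semantics:
#   application/xml   q=1.0  (no q parameter -> default 1.0)
#   application/json  q=0.9
#   */*               q=0.8
#   text/html         q=0.3
# An RFC-compliant parser would therefore return application/xml first.
print(parse_accept_header(accept))
```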
