Implement new validation spec (#204)
* Rename custom error type

* Create validation module

* Helper validation method used in nextArgs

* GQL scalar type for DocumentViewId

* Include new modules

* Refactor next_entry_args

* WIP: refactor publish entry

* WIP: refactor publish entry

* WIP: nearly there with publish_entry

* Introduce domain module

* Make more fine grained methods for validation

* Remove duplicate insertion of operation from replication service

* Fix skiplink and backlink ordering mistake

* Comment out test which hangs

* Introduce test methods to be used instead of publish() and next_args()

* Use cbor macro for more readable test operations

* Update expected test hashes

* Split test_utils in store into separate files

* Don't validate operation against schema for now

* Make existing tests pass

* Comment out operation schema validation for now

* Test for ensure_log_id

* Update test const names

* Commit Cargo.lock

* Use cbor macro for more readable test operations

* Remove modules which accidentally came across in cherry-pick

* Update tests for LogIds which now start from 0

* Update CHANGELOG

* Specify p2panda-rs dep by commit hash

* Fixes after p2panda bump

* Test for ensure_entry_contains_expected_log_id

* Split up data and api validation tests for publish_entry

* Refactor verify seq_num, log_id, skiplink and backlink methods

* Propagate panic from test runner back to test runtime

* Rework seq and log id validation

* Test for seq num validation

* Refactor verify_log_id and add tests

* Update test values

* get_expected_skiplink doesn't return an option, plus tests

* Don't return an option from get_expected_backlink, plus tests

* A few more e2e tests

* Make clippy happy

* Fix after merge

* Tidy test publish and next_args methods

* Don't depend on DocumentStore in ensure_document_not_deleted

* Remove unverified test versions of publish and next_args (whoop!)

* Tests for ensure_document_not_deleted

* Publish new entry on service bus in replication service

* Remove unused deps

* Remove unused deps

* Add comment about code which will be deprecated after this PR

* Remove param validation from next_args

* Fix comment

* Implement get_latest_log_id on SqlStorage and then safely increment with error handling

* Helper methods for safely incrementing seq numbers (sketched below, after the changed-files summary)

* Validate passed parameters in gql query method

* Refactor verify_seq_num

* Move validation of entry and operation out of

* Add doc strings for next_args

* Remove determine_document_id helper

* Add doc strings for `publish`

* Fix test error string

* Better separation for pre-publish validation logic

* Some nice refactoring in next_args

* Move where next args are determined in publish

* Implement SqlStorage using new associated type pattern

* Make store implementation generic

* Correct test names

* Test for publish with missing skiplink using MemoryStore

* Commit Cargo.lock

* Clippy & fmt

* Clippy...

* Refactor one test

* Test for errors when calling publish

* Refactor seq_num and backlink getter to avoid unnecessary db call

* More cases for missing entry tests

* WIP: tests for publishing operations

* Helpers for domain tests

* Update duplicate entry test

* Tests for next_args

* Remove some TODOs

* Fix one test

* Remove println

* Remove deprecated request and response structs

* More tests for next args

* Change test order

* Remove wrapper method around bamboo verification

* Test for DocumentViewId GQL scalar

* Revert to using DocumentId in next_args GQL query plus test

* Doc strings in verify module

* Tests for publishing to valid and invalid logs

* Deleted document tests

* Test for missing next skiplink

* Increment seq and log tests

* Fix expected error strings

* Test for latest_log_id

* Update CHANGELOG

* Docstring and comment review

* Clearer error messages

* Remove committed test logging file

* License header

* Comment about updating next_entry_args API

* Typo

* Remove println

* Better SQL query for latest_log_id

* Correct comment string

* Don't use nested imports

* Correct comment

* Move error into match statement

* Revert get_expected_skiplink ensure behaviour and improve error message

* Another test for get_expected_skiplink

* Target p2panda-rs main branch

* Fix CHANGELOG.md formatting

* Use new string methods and `Human` trait for display in errors

* Error in publish when next args hit MAX_SEQ_NUM, with tests

* Comments in test

* Rename function with get_checked prefix

* A little more clippy happy

* Refactor `get_expected_skiplink()` (#220)

* Refactor `get_expected_skiplink()`

Matches current behavior of `skiplink_seq_num()` method

* Remove import

* Remove comment

* Fix tests

* Use rev for p2panda-rs version

* Group imports

* Refactor initialize_db method

* Use only one thread for tarpaulin

Co-authored-by: Vincent Ahrend <mail@vincentahrend.com>
Co-authored-by: Andreas Dzialocha <x12@adz.garden>
3 people authored Aug 6, 2022
1 parent e574ecd commit 4e514ca
Showing 35 changed files with 2,718 additions and 965 deletions.
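Several commits above (the seq-num helper methods and the MAX_SEQ_NUM error in publish) revolve around the same idea: refuse to hand out a next sequence number once the maximum is reached, instead of silently wrapping. A minimal sketch of that idea — `MAX_SEQ_NUM`, `SeqNumError` and `next_seq_num` are illustrative stand-ins, not the actual aquadoggo or p2panda-rs items:

```rust
// Hypothetical bound and error type, for illustration only.
const MAX_SEQ_NUM: u64 = u64::MAX;

#[derive(Debug)]
struct SeqNumError(String);

/// Return the next sequence number, erroring instead of wrapping around.
fn next_seq_num(current: u64) -> Result<u64, SeqNumError> {
    // `checked_add` yields `None` on overflow; together with the explicit
    // bound check this maps "log is full" to a domain error that `publish`
    // and `next_args` can report instead of panicking.
    current
        .checked_add(1)
        .filter(|next| *next <= MAX_SEQ_NUM)
        .ok_or_else(|| {
            SeqNumError(format!(
                "max sequence number {MAX_SEQ_NUM} reached, can't increment {current}"
            ))
        })
}

fn main() {
    assert_eq!(next_seq_num(7).unwrap(), 8);
    assert!(next_seq_num(MAX_SEQ_NUM).is_err());
}
```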
2 changes: 2 additions & 0 deletions .github/workflows/tests.yml
@@ -203,11 +203,13 @@ jobs:
- name: Run cargo-tarpaulin
uses: actions-rs/tarpaulin@v0.1
with:
version: '0.20.1'
# Force cleaning via `--force-clean` flag to prevent buggy code coverage
args: >-
--manifest-path ${{ env.cargo_manifest }}
--locked
--force-clean
-- --test-threads=1
env:
# Ensure debug output is also tested
RUST_LOG: debug
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -10,6 +10,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added

- GraphQL replication service gets and verifies new entries and inserts them into the db [#137](https://github.com/p2panda/aquadoggo/pull/137)
- `validation` and `domain` modules used for publish and next args API [#204](https://github.com/p2panda/aquadoggo/pull/204)
- Add schema task and schema provider that update when new schema views are materialised [#166](https://github.com/p2panda/aquadoggo/pull/166)
- Service ready signal [#218](https://github.com/p2panda/aquadoggo/pull/218)

2 changes: 1 addition & 1 deletion Cargo.lock


4 changes: 2 additions & 2 deletions aquadoggo/Cargo.toml
@@ -31,7 +31,7 @@ gql_client = "^1.0.6"
lipmaa-link = "^0.2.2"
log = "^0.4.17"
openssl-probe = "^0.1.5"
p2panda-rs = { git = "https://github.com/p2panda/p2panda", rev = "e06fd08c45253d60fcd42778f59e946a9ed73f71" }
p2panda-rs = { git = "https://github.com/p2panda/p2panda", rev = "5d6508d5a9b4b766621c3bd14879cc568fbac02d" }
serde = { version = "^1.0.137", features = ["derive"] }
sqlx = { version = "^0.6.0", features = [
"any",
@@ -61,7 +61,7 @@ hex = "0.4.3"
http = "^0.2.8"
hyper = "^0.14.19"
once_cell = "^1.12.0"
p2panda-rs = { git = "https://github.com/p2panda/p2panda", rev = "e06fd08c45253d60fcd42778f59e946a9ed73f71", features = [
p2panda-rs = { git = "https://github.com/p2panda/p2panda", rev = "5d6508d5a9b4b766621c3bd14879cc568fbac02d", features = [
"testing",
] }
rand = "^0.8.5"
1 change: 0 additions & 1 deletion aquadoggo/src/db/mod.rs
@@ -8,7 +8,6 @@ use sqlx::migrate::MigrateDatabase;
pub mod errors;
pub mod models;
pub mod provider;
pub mod request;
pub mod stores;
pub mod traits;
pub mod utils;
33 changes: 18 additions & 15 deletions aquadoggo/src/db/provider.rs
@@ -7,13 +7,14 @@ use p2panda_rs::operation::VerifiedOperation;
use p2panda_rs::schema::SchemaId;
use p2panda_rs::storage_provider::errors::OperationStorageError;
use p2panda_rs::storage_provider::traits::StorageProvider;
use p2panda_rs::test_utils::db::{
EntryArgsRequest, EntryArgsResponse, PublishEntryRequest, PublishEntryResponse,
};
use sqlx::query_scalar;

use crate::db::request::{EntryArgsRequest, PublishEntryRequest};
use crate::db::stores::{StorageEntry, StorageLog};
use crate::db::Pool;
use crate::errors::StorageProviderResult;
use crate::graphql::client::NextEntryArguments;
use crate::errors::Result;

/// Sql based storage that implements `StorageProvider`.
#[derive(Clone, Debug)]
@@ -31,21 +32,21 @@ impl SqlStorage {
/// A `StorageProvider` implementation based on `sqlx` that supports SQLite and PostgreSQL
/// databases.
#[async_trait]
impl StorageProvider<StorageEntry, StorageLog, VerifiedOperation> for SqlStorage {
type EntryArgsResponse = NextEntryArguments;
impl StorageProvider for SqlStorage {
type EntryArgsRequest = EntryArgsRequest;
type PublishEntryResponse = NextEntryArguments;
type EntryArgsResponse = EntryArgsResponse;
type PublishEntryRequest = PublishEntryRequest;
type PublishEntryResponse = PublishEntryResponse;
type StorageLog = StorageLog;
type StorageEntry = StorageEntry;
type StorageOperation = VerifiedOperation;

/// Returns the related document for any entry.
///
/// Every entry is part of a document and, through that, associated with a specific log id used
/// by this document and author. This method returns that document id by looking up the log
/// that the entry was stored in.
async fn get_document_by_entry(
&self,
entry_hash: &Hash,
) -> StorageProviderResult<Option<DocumentId>> {
async fn get_document_by_entry(&self, entry_hash: &Hash) -> Result<Option<DocumentId>> {
let result: Option<String> = query_scalar(
"
SELECT
@@ -81,7 +82,7 @@ impl SqlStorage {
pub async fn get_schema_by_document_view(
&self,
view_id: &DocumentViewId,
) -> StorageProviderResult<Option<SchemaId>> {
) -> Result<Option<SchemaId>> {
let result: Option<String> = query_scalar(
"
SELECT
@@ -92,7 +93,7 @@
document_view_id = $1
",
)
.bind(view_id.as_str())
.bind(view_id.to_string())
.fetch_optional(&self.pool)
.await
.map_err(|e| OperationStorageError::FatalStorageError(e.to_string()))?;
@@ -120,8 +121,10 @@ mod tests {
use crate::db::stores::test_utils::{test_db, TestDatabase, TestDatabaseRunner};
use crate::db::traits::DocumentStore;

use super::SqlStorage;

/// Inserts a `DocumentView` into the db and returns its view id.
async fn insert_document_view(db: &TestDatabase) -> DocumentViewId {
async fn insert_document_view(db: &TestDatabase<SqlStorage>) -> DocumentViewId {
let author = Author::try_from(db.test_data.key_pairs[0].public_key().to_owned()).unwrap();
let entry = db
.store
@@ -153,7 +156,7 @@ mod tests {
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_view_id = insert_document_view(&db).await;
let result = db
.store
@@ -172,7 +175,7 @@
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let result = db
.store
.get_schema_by_document_view(&random_document_view_id)
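The hunk above is the heart of the "associated type pattern" commits: `StorageProvider` loses its `<StorageEntry, StorageLog, VerifiedOperation>` type parameters and instead names its storage types as associated types on the trait. A self-contained sketch of the pattern under invented names — this is not the real p2panda-rs trait, which carries more associated types and async methods:

```rust
// Stand-in trait: one associated type instead of a type parameter.
trait StorageProvider {
    type StorageEntry;

    fn latest_entry(&self) -> Option<Self::StorageEntry>;
}

struct MemoryStore {
    entries: Vec<String>,
}

impl StorageProvider for MemoryStore {
    // Each implementation picks its concrete storage types exactly once...
    type StorageEntry = String;

    fn latest_entry(&self) -> Option<String> {
        self.entries.last().cloned()
    }
}

// ...and generic code now needs a single `S: StorageProvider` bound rather
// than one parameter per storage type, which is what lets the same code run
// against a `MemoryStore` in tests and `SqlStorage` in production.
fn print_latest<S>(store: &S)
where
    S: StorageProvider,
    S::StorageEntry: std::fmt::Debug,
{
    println!("latest entry: {:?}", store.latest_entry());
}

fn main() {
    let store = MemoryStore {
        entries: vec!["entry-1".into(), "entry-2".into()],
    };
    print_latest(&store);
}
```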
80 changes: 0 additions & 80 deletions aquadoggo/src/db/request.rs

This file was deleted.

35 changes: 18 additions & 17 deletions aquadoggo/src/db/stores/document.rs
@@ -51,7 +51,7 @@ impl DocumentStore for SqlStorage {
($1, $2, $3)
",
)
.bind(document_view.id().as_str())
.bind(document_view.id().to_string())
.bind(value.id().as_str().to_owned())
.bind(name)
.execute(&self.pool)
@@ -71,8 +71,8 @@ impl DocumentStore for SqlStorage {
($1, $2)
",
)
.bind(document_view.id().as_str())
.bind(schema_id.as_str())
.bind(document_view.id().to_string())
.bind(schema_id.to_string())
.execute(&self.pool)
.await
.map_err(|e| DocumentStorageError::FatalStorageError(e.to_string()))?;
@@ -129,7 +129,7 @@ impl DocumentStore for SqlStorage {
operation_fields_v1.list_index ASC
",
)
.bind(id.as_str())
.bind(id.to_string())
.fetch_all(&self.pool)
.await
.map_err(|e| DocumentStorageError::FatalStorageError(e.to_string()))?;
@@ -173,9 +173,9 @@ impl DocumentStore for SqlStorage {
",
)
.bind(document.id().as_str())
.bind(document.view_id().as_str())
.bind(document.view_id().to_string())
.bind(document.is_deleted())
.bind(document.schema().as_str())
.bind(document.schema().to_string())
.execute(&self.pool)
.await
.map_err(|e| DocumentStorageError::FatalStorageError(e.to_string()))?;
@@ -283,7 +283,7 @@ impl DocumentStore for SqlStorage {
operation_fields_v1.list_index ASC
",
)
.bind(schema_id.as_str())
.bind(schema_id.to_string())
.fetch_all(&self.pool)
.await
.map_err(|e| DocumentStorageError::FatalStorageError(e.to_string()))?;
@@ -333,6 +333,7 @@ mod tests {
};
use rstest::rstest;

use crate::db::provider::SqlStorage;
use crate::db::stores::document::{DocumentStore, DocumentView};
use crate::db::stores::entry::StorageEntry;
use crate::db::stores::test_utils::{test_db, TestDatabase, TestDatabaseRunner};
@@ -373,7 +374,7 @@
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let author =
Author::try_from(db.test_data.key_pairs[0].public_key().to_owned()).unwrap();

@@ -435,7 +436,7 @@
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let view_does_not_exist = db
.store
.get_document_view_by_id(&random_document_view_id)
@@ -452,7 +453,7 @@
#[with(10, 1, 1, false, SCHEMA_ID.parse().unwrap(), vec![("username", OperationValue::Text("panda".into()))], vec![("username", OperationValue::Text("PANDA".into()))])]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let author =
Author::try_from(db.test_data.key_pairs[0].public_key().to_owned()).unwrap();
let schema_id = SchemaId::from_str(SCHEMA_ID).unwrap();
@@ -510,7 +511,7 @@
#[from(test_db)] runner: TestDatabaseRunner,
operation: Operation,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_view = DocumentView::new(
&document_view_id,
&DocumentViewFields::new_from_operation_fields(
@@ -534,7 +535,7 @@
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_id = db.test_data.documents[0].clone();

let document_operations = db
@@ -581,7 +582,7 @@
#[with(1, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_id = db.test_data.documents[0].clone();

let document_operations = db
@@ -628,7 +629,7 @@
#[with(10, 1, 1, true)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_id = db.test_data.documents[0].clone();

let document_operations = db
@@ -655,7 +656,7 @@
#[with(10, 1, 1, true)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_id = db.test_data.documents[0].clone();

let document_operations = db
@@ -686,7 +687,7 @@
#[with(10, 1, 1)]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let document_id = db.test_data.documents[0].clone();

let document_operations = db
@@ -721,7 +722,7 @@
#[with(10, 2, 1, false, SCHEMA_ID.parse().unwrap())]
runner: TestDatabaseRunner,
) {
runner.with_db_teardown(|db: TestDatabase| async move {
runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move {
let schema_id = SchemaId::from_str(SCHEMA_ID).unwrap();

for document_id in &db.test_data.documents {
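The repeated `runner.with_db_teardown(|db: TestDatabase<SqlStorage>| async move { ... })` calls in the test hunks above run each async test against a store-generic test database. A rough, self-contained sketch of that shape — the runner internals here (the tokio runtime, the build and teardown hooks) are assumptions for illustration, not aquadoggo's actual test utilities, and it assumes the `tokio` crate with its `rt` feature:

```rust
use std::future::Future;

// Stand-in for the store-generic test database seen in the diff.
struct TestDatabase<S> {
    store: S,
}

#[derive(Default)]
struct MemoryStore {
    entries: Vec<String>,
}

/// Build a fresh test database, run the async test body against it, then
/// tear it down. Generic over the store so the same test can exercise an
/// in-memory store or an SQL-backed one.
fn with_db_teardown<S, F, Fut>(build_store: impl FnOnce() -> S, test: F)
where
    F: FnOnce(TestDatabase<S>) -> Fut,
    Fut: Future<Output = ()>,
{
    let runtime = tokio::runtime::Builder::new_current_thread()
        .build()
        .expect("create test runtime");

    runtime.block_on(async {
        let db = TestDatabase { store: build_store() };
        test(db).await;
        // Real teardown (dropping connection pools, deleting temporary
        // databases) would run here, after the test body completes.
    });
}

fn main() {
    with_db_teardown(MemoryStore::default, |db: TestDatabase<MemoryStore>| async move {
        assert!(db.store.entries.is_empty());
    });
}
```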