
enhance(apps/analytics): add computation logic for instance and activity performance analytics #4391

Merged
5 commits merged on Dec 6, 2024

Conversation

@sjschlapbach (Member) commented Dec 6, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced several new functions for enhanced performance analytics, including:
      • get_course_activities: Fetches activities for a specific course.
      • compute_instance_performance: Computes performance metrics for instances.
      • agg_activity_performance: Aggregates performance data.
      • save_instance_performances and save_activity_performance: Save performance data to the database.
    • Added a new Jupyter notebook for structured analysis of participant performance.
  • Database Changes

    • New tables and fields created to support detailed tracking of participant and activity performance, including ParticipantPerformance, InstancePerformance, and ActivityPerformance.
    • Introduced a new ENUM type for performance levels.
    • Added a new field responseCount to the InstancePerformance model for tracking responses.

coderabbitai bot commented Dec 6, 2024

📝 Walkthrough

This pull request modifies the import structure in the __init__.py file and introduces several new functions and database structures for performance analytics. It consolidates imports for various modules, adds functions for handling instance activity performance, and enhances the database schema with new tables and fields to track performance metrics.

Changes

  • apps/analytics/src/modules/__init__.py — Replaced specific function imports with wildcard imports from several analytics-related modules.
  • apps/analytics/src/modules/instance_activity_performance/__init__.py — Added imports for the five new instance activity performance functions.
  • apps/analytics/src/modules/instance_activity_performance/agg_activity_performance.py — Introduced the agg_activity_performance function to compute mean values from a DataFrame.
  • apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py — Introduced the compute_instance_performance function to calculate performance metrics from activity data.
  • apps/analytics/src/modules/instance_activity_performance/get_course_activities.py — Introduced the get_course_activities function to retrieve activities by course ID.
  • apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py — Introduced the save_activity_performance function to save or update activity performance data.
  • apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py — Introduced the save_instance_performances function to save instance performance metrics to the database.
  • apps/analytics/src/notebooks/instance_activity_performance.ipynb — Added a Jupyter notebook for analyzing participant performance in courses.
  • packages/prisma/src/prisma/migrations/20241205172606_learning_analytics_performance_progress/migration.sql — Added new tables and constraints for performance analytics, including ParticipantPerformance and InstancePerformance.
  • packages/prisma/src/prisma/schema/analytics.prisma — Added the responseCount field to the InstancePerformance model.
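As a rough illustration of the aggregation step, agg_activity_performance reduces the per-instance DataFrame to mean values. A minimal sketch under assumed column names (the actual columns live in the PR diff, not here):

```python
import pandas as pd

# Plausible reduction: mean of the per-instance rate columns.
# The column names below are assumptions, not taken from the PR.
def agg_activity_performance(df_instance_performance: pd.DataFrame) -> dict:
    rate_columns = ["totalErrorRate", "totalPartialRate", "totalCorrectRate"]
    return df_instance_performance[rate_columns].mean().to_dict()

df = pd.DataFrame({
    "totalErrorRate": [0.2, 0.4],
    "totalPartialRate": [0.3, 0.1],
    "totalCorrectRate": [0.5, 0.5],
})
activity_performance = agg_activity_performance(df)
```

Here the two instance rows collapse into a single dict of activity-level mean rates, which matches the walkthrough's description of the function.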

Possibly related PRs

Suggested reviewers

  • rschlaefli

@coderabbitai bot left a comment


Actionable comments posted: 12

🧹 Outside diff range and nitpick comments (6)
apps/analytics/src/modules/__init__.py (1)

15-21: Add proper module exports for new analytics functions

To address the unused import warnings and make the module's public API explicit, consider adding these imports to __all__.

Add this at the top of the file:

+__all__ = [
+    "compute_correctness",
+    "get_participant_responses",
+    "compute_aggregated_analytics",
+    "get_running_past_courses",
+    "get_active_weeks",
+    "compute_participant_activity",
+    "save_participant_course_analytics",
+    "compute_weekday_activity",
+    "compute_response_error_rates",
+    "compute_performance_levels",
+    "save_participant_performance",
+    # New analytics functions
+    "get_course_activities",
+    "compute_instance_performance",
+    "agg_activity_performance",
+    "save_instance_performances",
+    "save_activity_performance",
+]
🧰 Tools
🪛 Ruff (0.8.0)

16-16: .instance_activity_performance.get_course_activities imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


17-17: .instance_activity_performance.compute_instance_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


18-18: .instance_activity_performance.agg_activity_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


19-19: .instance_activity_performance.save_instance_performances imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


20-20: .instance_activity_performance.save_activity_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)

apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py (2)

38-61: Remove commented-out code

The old implementation is now dead code. Delete the entire commented block (lines 38 to 61); it no longer serves a purpose and clutters the file.


1-65: Consider adding type hints

Adding type hints would improve code maintainability and IDE support.

+from typing import Any, Optional
+
 def save_activity_performance(
     db: Prisma,
-    activity_performance,
-    course_id,
-    practice_quiz_id=None,
-    microlearning_id=None
-):
+    activity_performance: Any,
+    course_id: str,
+    practice_quiz_id: Optional[str] = None,
+    microlearning_id: Optional[str] = None,
+) -> None:
packages/prisma/src/prisma/migrations/20241205172606_learning_analytics_performance_progress/migration.sql (1)

Based on the findings, I can now generate the final response:

Consider implementing soft delete or archival strategy for analytics data

The current schema uses CASCADE delete behavior extensively, which means when a course is deleted, all related analytics data (ParticipantPerformance, InstancePerformance, ActivityPerformance, ActivityProgress) will be permanently deleted. This could lead to loss of valuable historical data and insights.

Recommendations:

  • Implement soft delete by adding deletedAt timestamp column to Course and analytics tables
  • Create archive tables to preserve historical analytics data before deletion
  • Consider using RESTRICT instead of CASCADE for analytics tables to prevent accidental data loss
  • Add data retention policies and archival procedures in documentation

The codebase shows consistent use of CASCADE delete across many relations, but analytics data typically needs different treatment for:

  • Historical trend analysis
  • Compliance and audit requirements
  • Performance benchmarking across time periods
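The soft-delete idea can be illustrated with a toy in-memory model (not the actual Prisma schema): deleting a course flags it with a timestamp instead of cascading into the analytics rows, so historical data survives.

```python
from datetime import datetime, timezone

# Toy stand-ins for the Course and analytics tables (not the real schema).
courses = [{"id": "c1", "deletedAt": None}]
instance_performances = [{"courseId": "c1", "responseCount": 3}]

def soft_delete_course(course_id: str) -> None:
    """Flag the course as deleted instead of removing it and its analytics."""
    for course in courses:
        if course["id"] == course_id:
            course["deletedAt"] = datetime.now(timezone.utc)

soft_delete_course("c1")

# Queries filter on deletedAt; the analytics rows remain for trend analysis.
active_courses = [c for c in courses if c["deletedAt"] is None]
```

With CASCADE, the instance_performances row would be gone after the delete; with a soft delete it is still available for historical trend analysis and audits.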
🔗 Analysis chain

Line range hint 1-144: Consider the implications of CASCADE delete behavior.

The CASCADE delete behavior on foreign key constraints could lead to unintended data loss. For instance, when a course is deleted, all related analytics data will be automatically removed. Consider if this is the desired behavior or if you should:

  1. Archive the analytics data instead of deleting it
  2. Use RESTRICT to prevent deletion of courses with analytics data
  3. Add a soft delete mechanism

Let me gather more information about soft delete and archival patterns in the codebase.


Let me try searching with a different path pattern to locate the schema file.


Let me try one more search pattern to locate the schema file and check for cascade behaviors.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for existing soft delete implementations in the codebase
rg -A 2 'deleted.*at|is.*deleted'

# Check for archive table implementations
fd -e sql -e prisma | xargs rg -i 'archive|history'

Length of output: 71694


Script:

# Check for soft delete columns in schema
rg -A 2 'deleted.*at|is.*deleted|deleted.*on' packages/prisma/src/prisma/schema.prisma

# Check for archive/history tables in schema
rg -i 'archive|history' packages/prisma/src/prisma/schema.prisma

# Check for existing cascade delete behaviors in schema
rg -A 2 'onDelete: Cascade' packages/prisma/src/prisma/schema.prisma

Length of output: 469


Script:

# Find prisma schema file
fd -e prisma schema

# Once found, check for soft delete patterns
fd -e prisma schema | xargs rg -A 2 'deleted.*at|is.*deleted|deleted.*on'

# Check for archive/history tables
fd -e prisma schema | xargs rg -i 'archive|history'

# Check cascade delete behaviors
fd -e prisma schema | xargs rg -A 2 'onDelete: Cascade'

Length of output: 74536


Script:

# Find schema file with a broader search
fd schema prisma

# Check for cascade delete behaviors in any prisma file
fd -t f -e prisma . | xargs rg -A 2 'onDelete.*Cascade'

Length of output: 30509

apps/analytics/src/notebooks/instance_activity_performance.ipynb (1)

94-109: Consider batch processing for large datasets.

Processing each microlearning activity individually might be inefficient for large datasets. Consider:

  1. Batch processing multiple activities
  2. Using DataFrame operations instead of loops where possible
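For instance, per-activity statistics can often be computed in a single vectorized pass with groupby instead of a Python-level loop (the column names below are hypothetical):

```python
import pandas as pd

# Hypothetical response-level data; the notebook instead iterates over each
# microlearning activity one at a time.
responses = pd.DataFrame({
    "activityId": ["a1", "a1", "a2"],
    "correct": [1, 0, 1],
})

# One groupby computes the correct-rate for every activity at once.
correct_rates = responses.groupby("activityId")["correct"].mean()
```

The same pattern extends to multiple aggregations via .agg, which typically scales far better than per-activity Python loops on large courses.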
apps/analytics/src/modules/instance_activity_performance/__init__.py (1)

1-5: Add __all__ declaration to explicitly define public API

The imports look good and follow a logical flow for analytics computation. However, to make the public interface explicit and silence the unused import warnings, consider adding an __all__ declaration.

Add this at the end of the file:

from .get_course_activities import get_course_activities
from .compute_instance_performance import compute_instance_performance
from .agg_activity_performance import agg_activity_performance
from .save_instance_performances import save_instance_performances
from .save_activity_performance import save_activity_performance
+
+__all__ = [
+    'get_course_activities',
+    'compute_instance_performance',
+    'agg_activity_performance',
+    'save_instance_performances',
+    'save_activity_performance',
+]
🧰 Tools
🪛 Ruff (0.8.0)

1-1: .get_course_activities.get_course_activities imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


2-2: .compute_instance_performance.compute_instance_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


3-3: .agg_activity_performance.agg_activity_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


4-4: .save_instance_performances.save_instance_performances imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)


5-5: .save_activity_performance.save_activity_performance imported but unused; consider removing, adding to __all__, or using a redundant alias

(F401)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between cb471c3 and 9c51c42.

📒 Files selected for processing (10)
  • apps/analytics/src/modules/__init__.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/__init__.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/agg_activity_performance.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/get_course_activities.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py (1 hunks)
  • apps/analytics/src/notebooks/instance_activity_performance.ipynb (1 hunks)
  • packages/prisma/src/prisma/migrations/20241205172606_learning_analytics_performance_progress/migration.sql (1 hunks)
  • packages/prisma/src/prisma/schema/analytics.prisma (1 hunks)
🔇 Additional comments (3)
packages/prisma/src/prisma/migrations/20241205172606_learning_analytics_performance_progress/migration.sql (1)

Line range hint 39-57: Consider validating nullable fields in ActivityPerformance.

The ActivityPerformance table allows nullable fields for first/last attempt rates, but requires total rates. This might lead to inconsistent data if not properly validated at the application level.

packages/prisma/src/prisma/schema/analytics.prisma (1)

198-199: LGTM! The responseCount field is well-defined.

The addition of the responseCount field to the InstancePerformance model is:

  1. Properly typed as Int
  2. Well-documented
  3. Consistent with the schema design
apps/analytics/src/modules/instance_activity_performance/__init__.py (1)

1-5: Verify module structure and imported functions

The imports establish a clear analytics pipeline. Let's verify that all the referenced modules and functions exist.

✅ Verification successful

Let me generate another verification script to check the actual function definitions since the previous ast-grep pattern was too simple and didn't find matches.


Module structure and imported functions are correctly implemented

All the imported modules exist in the correct location and each module contains its corresponding function with appropriate signatures:

  • get_course_activities(db, course_id)
  • compute_instance_performance(db, activity, total_only=False)
  • agg_activity_performance(df_instance_performance)
  • save_instance_performances(db, df_instance_performance, course_id, total_only=False)
  • save_activity_performance(db, activity_performance, course_id, practice_quiz_id=None, microlearning_id=None)
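Taken together, the verified signatures suggest an orchestration roughly like the following. Only the function names and parameter lists come from the verification above; the stub bodies and the dict-based db are placeholders for the real modules and Prisma client:

```python
# Stubs standing in for the real instance_activity_performance modules.
def get_course_activities(db, course_id):
    return db["activities"]

def compute_instance_performance(db, activity, total_only=False):
    return [{"instanceId": i, "responseCount": 1} for i in activity["instances"]]

def agg_activity_performance(df_instance_performance):
    return {"responseCount": sum(r["responseCount"] for r in df_instance_performance)}

def save_instance_performances(db, df_instance_performance, course_id, total_only=False):
    db.setdefault("instance_performances", []).extend(df_instance_performance)

def save_activity_performance(db, activity_performance, course_id,
                              practice_quiz_id=None, microlearning_id=None):
    db.setdefault("activity_performances", []).append(activity_performance)

# Assumed pipeline: fetch activities, compute and save per-instance metrics,
# then aggregate and save the activity-level metrics.
db = {"activities": [{"instances": ["i1", "i2"]}]}
for activity in get_course_activities(db, "course-1"):
    perf = compute_instance_performance(db, activity)
    save_instance_performances(db, perf, "course-1")
    save_activity_performance(db, agg_activity_performance(perf), "course-1")
```

The actual notebook may order or batch these calls differently; this sketch only shows how the five signatures compose.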
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the existence of all imported modules and their functions

# Check if all the module files exist
echo "Checking module files..."
fd --type f "^(get_course_activities|compute_instance_performance|agg_activity_performance|save_instance_performances|save_activity_performance).py$" apps/analytics/src/modules/instance_activity_performance/

# Verify function declarations in each module
echo -e "\nVerifying function declarations..."
for func in "get_course_activities" "compute_instance_performance" "agg_activity_performance" "save_instance_performances" "save_activity_performance"; do
  echo "Looking for function: $func"
  ast-grep --pattern "def $func"
done

Length of output: 2334


Script:

#!/bin/bash
# Check each file's content for function definitions
echo "Checking get_course_activities.py:"
rg "def get_course_activities" -A 1 apps/analytics/src/modules/instance_activity_performance/get_course_activities.py

echo -e "\nChecking compute_instance_performance.py:"
rg "def compute_instance_performance" -A 1 apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py

echo -e "\nChecking agg_activity_performance.py:"
rg "def agg_activity_performance" -A 1 apps/analytics/src/modules/instance_activity_performance/agg_activity_performance.py

echo -e "\nChecking save_instance_performances.py:"
rg "def save_instance_performances" -A 1 apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py

echo -e "\nChecking save_activity_performance.py:"
rg "def save_activity_performance" -A 1 apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py

Length of output: 1615

@coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
apps/analytics/src/modules/__init__.py (1)

1-6: Consider documenting exported symbols

While the wildcard imports are acceptable for an __init__.py file, it would be helpful to document which symbols are being exported from each module. This improves maintainability and helps other developers understand the public API.

Consider adding a docstring at the top of the file listing the key exported symbols:

"""
Analytics module exports:

From participant_analytics:
    - function1
    - function2
    ...

From aggregated_analytics:
    - function3
    - function4
    ...
"""
🧰 Tools
🪛 Ruff (0.8.0)

1-1: from .participant_analytics import * used; unable to detect undefined names

(F403)


2-2: from .aggregated_analytics import * used; unable to detect undefined names

(F403)


3-3: from .participant_course_analytics import * used; unable to detect undefined names

(F403)


4-4: from .aggregated_course_analytics import * used; unable to detect undefined names

(F403)


5-5: from .participant_performance import * used; unable to detect undefined names

(F403)


6-6: from .instance_activity_performance import * used; unable to detect undefined names

(F403)

apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py (1)

10-25: Consider extracting practice quiz logic to a separate function

The practice quiz specific logic could be extracted to improve readability and maintainability.

+def _create_practice_quiz_values(activity_performance, practice_quiz_id, course_id, base_values):
+    """Create values dictionary for practice quiz performance."""
+    values = base_values.copy()
+    values.update({
+        "firstErrorRate": activity_performance.firstErrorRate,
+        "firstPartialRate": activity_performance.firstPartialRate,
+        "firstCorrectRate": activity_performance.firstCorrectRate,
+        "lastErrorRate": activity_performance.lastErrorRate,
+        "lastPartialRate": activity_performance.lastPartialRate,
+        "lastCorrectRate": activity_performance.lastCorrectRate,
+    })
+    
+    create_values = values.copy()
+    create_values["practiceQuiz"] = {"connect": {"id": practice_quiz_id}}
+    create_values["course"] = {"connect": {"id": course_id}}
+    return values, create_values, {"practiceQuizId": practice_quiz_id}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 9c51c42 and 5d3dec5.

📒 Files selected for processing (2)
  • apps/analytics/src/modules/__init__.py (1 hunks)
  • apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py (1 hunks)
🔇 Additional comments (3)
apps/analytics/src/modules/instance_activity_performance/save_activity_performance.py (3)

4-8: Add input validation for activity_performance object

The function assumes all required attributes exist in activity_performance.


38-41: Add error handling for database operations

The upsert operation should be wrapped in a try-catch block.


1-41: Verify database schema constraints

Let's verify that the database schema properly enforces the relationship constraints for activity performance.

✅ Verification successful

Database schema constraints are properly enforced

Based on the migration file, the database schema correctly enforces the relationship constraints for activity performance:

  • Unique constraints are in place for both practiceQuizId and microLearningId
  • Foreign key constraints are properly set up with CASCADE deletion for both practice quiz and microlearning relationships
  • The schema ensures that an activity performance record can only be associated with either a practice quiz or a microlearning (through unique constraints)
  • The courseId relationship is properly enforced with a foreign key constraint
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check database schema constraints for activity performance

# Search for activity performance related schema definitions
rg -A 10 "model\s+ActivityPerformance" --type prisma

# Check for unique constraints and relations
rg "@@unique|@@index|@@map" --type prisma | grep -i "activity"

Length of output: 179


Script:

#!/bin/bash
# Let's try to find the schema file first, then examine its contents
fd "schema.prisma"

# If we find the schema file, let's also search for any SQL migration files
fd -e sql

Length of output: 10429


Script:

#!/bin/bash
# Let's examine the most recent migration file related to activity performance
cat "packages/prisma/src/prisma/migrations/20241205172606_learning_analytics_performance_progress/migration.sql"

# Also let's try to find the schema file content
cat "packages/prisma/src/prisma/schema.prisma"

Length of output: 5360

sonarqubecloud bot commented Dec 6, 2024

cypress bot commented Dec 6, 2024

klicker-uzh — Run #3782

Run properties:
  • Status: Passed
  • Duration: 12m 14s
  • Commit: eb0b369f10 (Merge 5d3dec5f05d3525b6df0a52c123d1eb88b78e137 into cb471c3f16659b4e763d028815d6...)
  • Committer: Julius Schlapbach
  • Project: klicker-uzh
  • Branch: instance-activity-performance

Test results:
  • Failing: 0
  • Flaky: 0
  • Pending (annotated with .skip): 0
  • Skipped (mocha hook failure): 0
  • Passing: 148
