
enhance(apps/analytics): implement quiz analytics dashboard for asynchronous activities #4405

Merged
merged 8 commits from quiz-analytics into v3-analytics on Dec 22, 2024

Conversation

@sjschlapbach (Member) commented on Dec 22, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Added averageTimeSpent metric to instance performance analytics.
    • Introduced new components: ErrorRatesLegend, ActivityAnalyticsCharts, CircularPerformancePlot, InstanceQuizAnalytics, and QuizSelectionNavigation.
    • Implemented new GraphQL queries: GetActivityAnalytics and GetCourseActivities.
  • Enhancements

    • Improved analytics dashboard with detailed metrics on quizzes and activities.
    • Enhanced localization with new keys for German and English.
  • Bug Fixes

    • Fixed various issues related to data fetching and rendering in analytics components.
  • Chores

    • Updated internal documentation and GraphQL schema to reflect new types and queries.

coderabbitai bot commented Dec 22, 2024

📝 Walkthrough

This pull request introduces comprehensive analytics features for quizzes and activities in the KlickerUZH platform. The changes span multiple components, including backend data processing, GraphQL schema updates, frontend components, and internationalization. The primary focus is on enhancing performance tracking by introducing new metrics like average time spent, error rates, and detailed analytics for quiz instances and activities. The implementation covers data computation, API endpoints, React components for visualization, and localization support across German and English languages.

Changes

| File Path | Change Summary |
| --- | --- |
| apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py | Added averageTimeSpent column calculation in DataFrame |
| apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py | Updated to include averageTimeSpent in database storage |
| apps/frontend-manage/src/components/analytics/performance/* | New components for performance visualization and legend |
| apps/frontend-manage/src/components/analytics/quiz/* | New components for quiz and activity analytics display |
| apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/* | New pages for quiz and activity analytics routing |
| packages/graphql/src/graphql/ops/* | New GraphQL queries for activity and course analytics |
| packages/graphql/src/schema/* | Updated schema with new types for quiz analytics |
| packages/i18n/messages/de.ts & en.ts | Added translations for new analytics features |
| packages/prisma/src/prisma/migrations/* | Migration to add averageTimeSpent column |
| packages/types/src/index.ts | Added new types for quiz analytics |

Possibly related PRs

@coderabbitai bot left a comment

Actionable comments posted: 11

🔭 Outside diff range comments (2)
apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py (1)

Line range hint 1-44: Add error handling for database operations.

The function should handle potential database errors gracefully to prevent silent failures.

 def save_instance_performances(
     db, df_instance_performance, course_id, total_only=False
 ):
+    from prisma.errors import PrismaError
+    import logging
+
+    logger = logging.getLogger(__name__)
+
     for _, row in df_instance_performance.iterrows():
-        # extract values from dataframe
-        values = {
-            "responseCount": row["responseCount"],
-            "totalErrorRate": row["totalErrorRate"],
-            "totalPartialRate": row["totalPartialRate"],
-            "totalCorrectRate": row["totalCorrectRate"],
-            "averageTimeSpent": row["averageTimeSpent"],
-        }
+        try:
+            # extract values from dataframe
+            values = {
+                "responseCount": row["responseCount"],
+                "totalErrorRate": row["totalErrorRate"],
+                "totalPartialRate": row["totalPartialRate"],
+                "totalCorrectRate": row["totalCorrectRate"],
+                "averageTimeSpent": row["averageTimeSpent"],
+            }

-        # only define first and last response rates if applicable
-        if not total_only:
-            values.update(
-                {
-                    "firstErrorRate": row["firstErrorRate"],
-                    "firstPartialRate": row["firstPartialRate"],
-                    "firstCorrectRate": row["firstCorrectRate"],
-                    "lastErrorRate": row["lastErrorRate"],
-                    "lastPartialRate": row["lastPartialRate"],
-                    "lastCorrectRate": row["lastCorrectRate"],
-                }
-            )
+            # only define first and last response rates if applicable
+            if not total_only:
+                values.update(
+                    {
+                        "firstErrorRate": row["firstErrorRate"],
+                        "firstPartialRate": row["firstPartialRate"],
+                        "firstCorrectRate": row["firstCorrectRate"],
+                        "lastErrorRate": row["lastErrorRate"],
+                        "lastPartialRate": row["lastPartialRate"],
+                        "lastCorrectRate": row["lastCorrectRate"],
+                    }
+                )

-        # add relational links during creation
-        create_values = values.copy()
-        create_values.update(
-            {
-                "instance": {"connect": {"id": row["instanceId"]}},
-                "course": {"connect": {"id": course_id}},
-            }
-        )
+            # add relational links during creation
+            create_values = values.copy()
+            create_values.update(
+                {
+                    "instance": {"connect": {"id": row["instanceId"]}},
+                    "course": {"connect": {"id": course_id}},
+                }
+            )

-        db.instanceperformance.upsert(
-            where={
-                "instanceId": row["instanceId"],
-            },
-            data={
-                "create": create_values,
-                "update": values,
-            },
-        )
+            db.instanceperformance.upsert(
+                where={
+                    "instanceId": row["instanceId"],
+                },
+                data={
+                    "create": create_values,
+                    "update": values,
+                },
+            )
+        except PrismaError as e:
+            logger.error(f"Failed to save performance for instance {row['instanceId']}: {str(e)}")
+            raise
apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py (1)

Line range hint 1-104: Add input validation and error handling

The function lacks input validation and error handling for malformed activity data.

Add proper validation:

def validate_activity(activity):
    if not activity or "stacks" not in activity:
        raise ValueError("Invalid activity structure")
    for stack in activity["stacks"]:
        if "elements" not in stack:
            raise ValueError("Invalid stack structure")

Call this at the start of compute_instance_performance.

🧹 Nitpick comments (31)
packages/graphql/src/ops.ts (2)

82-96: Consider adding JSDoc documentation for the analytics type.

The ActivityQuizAnalytics type is well-structured with appropriate nullable types for rates. Consider adding JSDoc documentation to explain the purpose of each metric and their calculation methods.

Add documentation like this:

+/**
+ * Analytics data for quiz activities
+ * @property averageTimeSpent - Average time spent by participants in seconds
+ * @property firstCorrectRate - Percentage of correct answers on first attempt
+ * @property firstErrorRate - Percentage of incorrect answers on first attempt
+ * @property firstPartialRate - Percentage of partially correct answers on first attempt
+ * ...
+ */
export type ActivityQuizAnalytics = {
  __typename?: 'ActivityQuizAnalytics';
  averageTimeSpent: Scalars['Float']['output'];
  // ... rest of the fields
};

2266-2269: Consider grouping related query argument types.

The query argument types for activity-related operations are scattered. Consider grouping them together for better maintainability.

Move these types closer to each other in the file:

  • QueryGetActivityAnalyticsArgs
  • QueryGetCourseActivitiesArgs
  • QueryGetCourseActivityAnalyticsArgs

Also applies to: 2282-2285

apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/index.tsx (5)

1-13: Consider lazy-loading or code splitting GraphQL dependencies.
Using Apollo Client config at the page level is perfectly valid. However, if this page is large or has routes that are seldom visited, you could consider dynamic imports or code splitting to minimize initial bundle size.


14-20: Maintain consistent import style for external libraries.
Importing libraries from multiple levels (e.g., local components, global components, external libs) is fine, but consider grouping or reordering to match your team's agreed-upon style, if one exists.


25-42: Revisit inline arrow function usage for readability.
While the inline arrow function is concise, some teams prefer a separate component definition to increase readability and maintainability.


107-174: Break down this large render block into smaller components.
It’s a fairly large block of JSX. Refactoring some sections (e.g., “PracticeQuizzesPanel” and “MicroLearningPanel”) would improve readability and maintainability.
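
A minimal sketch of one such extraction, assuming the page keeps a filtered list of quizzes with id and name fields; the PracticeQuizzesPanel name and its props are the reviewer's illustrative suggestion, not existing code:

import Link from 'next/link'

interface ActivitySummary {
  id: string
  name: string
}

// Hypothetical extracted panel: the parent page filters the course activities
// and passes the resulting list down as a plain array.
function PracticeQuizzesPanel({
  courseId,
  quizzes,
}: {
  courseId: string
  quizzes: ActivitySummary[]
}) {
  return (
    <div className="flex flex-col gap-2">
      {quizzes.map((quiz) => (
        <Link key={quiz.id} href={`/analytics/${courseId}/quizzes/${quiz.id}`}>
          {quiz.name}
        </Link>
      ))}
    </div>
  )
}

export default PracticeQuizzesPanel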


184-189: Verify that fallback: 'blocking' is the intended strategy.
When using fallback: 'blocking', you might want to confirm performance and user experience.
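
For reference, a minimal getStaticPaths sketch with this strategy; the empty paths array is an assumption, not necessarily the repository's actual setup:

import type { GetStaticPaths } from 'next'

// With fallback: 'blocking', a path that was not pre-rendered is server-rendered
// on the first request (the visitor waits), then cached and revalidated by ISR.
export const getStaticPaths: GetStaticPaths = async () => {
  return {
    paths: [], // assumption: no analytics pages are pre-rendered at build time
    fallback: 'blocking',
  }
}

The alternative, fallback: true, would serve a loading state immediately instead of blocking, at the cost of a visible skeleton.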

packages/graphql/src/public/schema.graphql (1)

839-859: Ensure consistent naming across types.
The naming of fields is descriptive, but confirm that “elementName” and “elementType” align with other naming in the schema.

apps/frontend-manage/src/components/analytics/quiz/QuizSelectionNavigation.tsx (1)

5-14: LGTM! Consider extracting URL paths to constants

The component is well-structured and follows good practices. Consider extracting the URL paths to constants to improve maintainability and reusability.

Example refactor:

+const ANALYTICS_ROUTES = {
+  PERFORMANCE: (courseId: string) => `/analytics/${courseId}/performance`,
+  ACTIVITY: (courseId: string) => `/analytics/${courseId}/activity`,
+} as const;

function QuizSelectionNavigation({ courseId }: { courseId: string }) {
  return (
    <AnalyticsNavigation
-      hrefLeft={`/analytics/${courseId}/performance`}
+      hrefLeft={ANALYTICS_ROUTES.PERFORMANCE(courseId)}
      labelLeft={<PerformanceDashboardLabel />}
-      hrefRight={`/analytics/${courseId}/activity`}
+      hrefRight={ANALYTICS_ROUTES.ACTIVITY(courseId)}
      labelRight={<ActivityDashboardLabel />}
    />
  )
}
apps/frontend-manage/src/components/analytics/quiz/QuizAnalyticsNavigation.tsx (1)

10-20: Enhance accessibility for navigation link

The navigation implementation looks good but could benefit from improved accessibility.

Consider these enhancements:

 <div className="flex w-full flex-row justify-between">
   <Link
     href={`/analytics/${courseId}/quizzes`}
     className="mb-6 flex flex-row items-center gap-2"
+    aria-label={t('manage.analytics.backToActivitySelection')}
   >
-    <FontAwesomeIcon icon={faChevronLeft} size="lg" />
+    <FontAwesomeIcon 
+      icon={faChevronLeft} 
+      size="lg"
+      aria-hidden="true"
+    />
     <div className="flex flex-row items-center gap-0.5">
       {t('manage.analytics.backToActivitySelection')}
     </div>
   </Link>
 </div>
apps/frontend-manage/src/components/analytics/performance/ErrorRatesLegend.tsx (1)

15-39: Consider enhancing type safety for Legend payload.

The implementation looks good, but consider defining a type for the Legend payload to improve maintainability and type safety.

+type LegendPayloadItem = {
+  value: string
+  color: string
+  type: 'rect'
+}

 function ErrorRatesLegend({
   colors,
   wrapperStyle,
 }: {
   colors: {
     correct: string
     partial: string
     incorrect: string
   }
   wrapperStyle?: React.CSSProperties
 }) {
   const t = useTranslations()
 
   return (
     <Legend
-      payload={[
+      payload={[
         {
           value: t('manage.analytics.errorRate'),
           color: colors.incorrect,
           type: 'rect',
         },
         {
           value: t('manage.analytics.partialRate'),
           color: colors.partial,
           type: 'rect',
         },
         {
           value: t('manage.analytics.correctRate'),
           color: colors.correct,
           type: 'rect',
         },
-      ]}
+      ] as LegendPayloadItem[]}
       wrapperStyle={wrapperStyle ?? { top: 0, right: 0 }}
     />
   )
 }
packages/graphql/src/graphql/ops/QGetActivityAnalytics.graphql (1)

1-44: Consider query optimization through field selection.

The query structure is well-organized, but there's potential for optimization:

  1. Consider implementing field selection to allow clients to request only needed fields, reducing response size.
  2. The duplicate metrics between instanceQuizAnalytics and activityQuizAnalytics could be abstracted into a shared fragment.
fragment QuizMetrics on QuizAnalytics {
  numberOfAnswers
  averageTimeSpent
  firstErrorRate
  firstPartialRate
  firstCorrectRate
  lastErrorRate
  lastPartialRate
  lastCorrectRate
  totalErrorRate
  totalPartialRate
  totalCorrectRate
}

query GetActivityAnalytics($activityId: String!, $includeInstance: Boolean = true) {
  getActivityAnalytics(activityId: $activityId) {
    __typename
    activityName
    activityType
    courseParticipants
    instanceQuizAnalytics @include(if: $includeInstance) {
      __typename
      id
      elementName
      elementType
      ...QuizMetrics
      upvoteRate
      downvoteRate
      feedbackCount
    }
    activityQuizAnalytics {
      __typename
      id
      ...QuizMetrics
    }
  }
}
apps/frontend-manage/src/components/analytics/performance/PerformanceRatesBarChart.tsx (1)

13-15: LGTM! Consider using Record type for className

The addition of the className prop for title customization is well-structured. For better type safety, consider using Record:

-  className?: {
-    title?: string
-  }
+  className?: Record<'title', string | undefined>
apps/frontend-manage/src/components/analytics/quiz/CircularPerformancePlot.tsx (1)

6-14: Extract and type the props interface

For better maintainability and reusability, consider extracting the props interface:

+interface CircularPerformanceProps {
+  title: string;
+  rates: {
+    correctRate: number;
+    partialRate: number;
+    incorrectRate: number;
+  };
+  colors: {
+    correct: string;
+    partial: string;
+    incorrect: string;
+  };
+}

-function CircularPerformancePlot({
-  title,
-  rates,
-  colors,
-}: {
-  title: string
-  rates: { correctRate: number; partialRate: number; incorrectRate: number }
-  colors: { correct: string; partial: string; incorrect: string }
-})
+function CircularPerformancePlot({ title, rates, colors }: CircularPerformanceProps)
apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/[id].tsx (2)

28-32: Extract chart colors to constants

Move the color configuration to a separate constants file for better maintainability:

+// src/constants/charts.ts
+export const CHART_COLORS = {
+  correct: '#064e3b',
+  partial: '#f59e0b',
+  incorrect: '#cc0000',
+} as const;

-  const chartColors = {
-    correct: '#064e3b',
-    partial: '#f59e0b',
-    incorrect: '#cc0000',
-  }
+  const chartColors = CHART_COLORS;

86-93: Consider environment-based ISR configuration

The revalidation time is hardcoded to 600 seconds. Consider making it configurable:

+const REVALIDATION_TIME = process.env.NEXT_PUBLIC_ISR_REVALIDATE_TIME ?? 600

 export async function getStaticProps({ locale }: GetStaticPropsContext) {
   return {
     props: {
       messages: (await import(`@klicker-uzh/i18n/messages/${locale}`)).default,
     },
-    revalidate: 600,
+    revalidate: Number(REVALIDATION_TIME),
   }
 }
apps/frontend-manage/src/components/analytics/quiz/ActivityAnalyticsCharts.tsx (3)

16-26: Consider adding prop validation for analytics data

The analytics prop is marked as optional with both undefined and null possibilities. This could lead to unnecessary null checks throughout the component.

Consider using a default value or requiring the prop:

-  analytics?: ActivityQuizAnalytics | null
+  analytics: ActivityQuizAnalytics

If the analytics data can truly be missing, consider adding a loading state or placeholder component.


40-46: Improve time formatting logic

The time formatting logic is duplicated across components and could benefit from being extracted into a utility function.

Consider creating a shared utility:

// utils/formatTime.ts
export const formatTime = (seconds: number) => ({
  minutes: Math.floor(seconds / 60),
  seconds: Math.floor(seconds % 60).toString().padStart(2, '0')
})

This would improve maintainability and reduce code duplication.


59-90: Enhance chart layout responsiveness

The current layout uses a fixed height (h-80) and flex-row which might not work well on smaller screens.

Consider making the layout more responsive:

-  <div className="flex h-80 w-full flex-row">
+  <div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2 lg:grid-cols-3">

This would allow charts to stack on smaller screens while maintaining the side-by-side layout on larger screens.

apps/frontend-manage/src/components/analytics/quiz/InstanceQuizAnalytics.tsx (1)

15-23: Add prop type validation for analytics data

The analytics prop is required but lacks runtime validation.

Consider adding runtime validation:

import { z } from 'zod';

const analyticsSchema = z.object({
  elementName: z.string(),
  totalCorrectRate: z.number(),
  // ... other fields
});

// Add runtime validation in component
const validatedAnalytics = analyticsSchema.parse(analytics);
apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py (1)

Line range hint 1-104: Consider optimizing performance calculations

The current implementation performs multiple iterations over the DataFrame for different calculations.

Consider using vectorized operations:

def compute_rates(df, column_prefix, response_count):
    rates = df[f"{column_prefix}ResponseCorrectness"].value_counts()
    return {
        f"{column_prefix}_error_rate": rates.get("WRONG", 0) / response_count,
        f"{column_prefix}_partial_rate": rates.get("PARTIAL", 0) / response_count,
        f"{column_prefix}_correct_rate": rates.get("CORRECT", 0) / response_count,
    }

This would improve performance for large datasets.

packages/types/src/index.ts (1)

427-446: Consider documenting field descriptions

The fields in InstanceQuizAnalytics would benefit from JSDoc comments explaining their purpose and units (e.g., is averageTimeSpent in seconds or milliseconds?).
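
A small sketch of what such documentation could look like; the selected fields and the assumed unit (seconds) are illustrative only and would need to be confirmed against the analytics pipeline:

// Hypothetical excerpt: units and value ranges documented next to the fields.
export interface InstanceQuizAnalyticsExample {
  /** Average time spent per response, in seconds (assumed unit). */
  averageTimeSpent: number
  /** Share of fully correct responses across all attempts, in the range [0, 1]. */
  totalCorrectRate: number
  /** Number of submitted responses for this instance. */
  numberOfAnswers: number
}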

apps/frontend-manage/src/components/analytics/performance/PerformanceRates.tsx (1)

145-145: Consider adding ARIA labels for accessibility

The new ErrorRatesLegend component should include appropriate ARIA labels to ensure accessibility for screen readers.
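
One hedged way to do this without relying on recharts internals is to wrap the rendered legend in a labelled group; the component name and label prop are illustrative:

import type { ReactNode } from 'react'

// Hypothetical wrapper: expose the legend to assistive technology as a single
// labelled group, since the colored SVG rectangles alone carry no semantics.
function AccessibleLegend({
  label,
  children,
}: {
  label: string
  children: ReactNode
}) {
  return (
    <div role="group" aria-label={label}>
      {children}
    </div>
  )
}

export default AccessibleLegend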

packages/graphql/src/schema/analytics.ts (2)

320-328: Consider adding rate calculation documentation.

The error, partial, and correct rates are crucial metrics. Consider adding JSDoc comments to document how these rates are calculated, especially the difference between first, last, and total rates.

Add documentation above the type definition:

+/**
+ * Represents analytics for a quiz instance
+ * @property firstErrorRate - Error rate of first attempts
+ * @property lastErrorRate - Error rate of last attempts
+ * @property totalErrorRate - Overall error rate across all attempts
+ * Similar documentation for partial and correct rates
+ */
 export const InstanceQuizAnalytics = builder.objectType(

358-380: Well-structured analytics hierarchy!

The QuizAnalytics type effectively combines instance-level and activity-level analytics, providing a comprehensive view of quiz performance. The nullable activityQuizAnalytics field appropriately handles cases where activity-level analytics might not be available.

Consider implementing caching for these analytics queries as they might be computationally expensive for quizzes with many participants.
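
As a rough illustration of the caching idea (not existing code), a resolver could consult a small in-memory TTL cache before recomputing; a shared cache such as Redis would be needed once the API runs on multiple instances:

// Illustrative in-memory TTL cache keyed by activity id.
const analyticsCache = new Map<string, { value: unknown; expiresAt: number }>()

async function withAnalyticsCache<T>(
  key: string,
  ttlMs: number,
  compute: () => Promise<T>
): Promise<T> {
  const hit = analyticsCache.get(key)
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T
  }
  const value = await compute()
  analyticsCache.set(key, { value, expiresAt: Date.now() + ttlMs })
  return value
}

// Hypothetical usage inside the resolver:
// return withAnalyticsCache(`quiz-analytics:${args.activityId}`, 60_000, () =>
//   AnalyticsService.getActivityAnalytics(args, ctx)
// )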

packages/graphql/src/schema/query.ts (2)

785-794: LGTM! Consider adding error handling.

The query field is well-structured but could benefit from explicit error handling.

Consider wrapping the resolver in a try-catch block:

 resolve(_, args, ctx) {
+  try {
     return CourseService.getCourseActivities(args, ctx)
+  } catch (error) {
+    ctx.logger?.error('Failed to fetch course activities', { error, courseId: args.courseId })
+    throw error
+  }
 }

796-805: LGTM! Consider adding error handling.

The analytics query is well-implemented but could benefit from explicit error handling.

Consider wrapping the resolver in a try-catch block:

 resolve(_, args, ctx) {
+  try {
     return AnalyticsService.getActivityAnalytics(args, ctx)
+  } catch (error) {
+    ctx.logger?.error('Failed to fetch activity analytics', { error, activityId: args.activityId })
+    throw error
+  }
 }
packages/i18n/messages/de.ts (1)

1889-1889: Fix minor typos in German text

There are a couple of small grammatical issues:

  1. "Total" should be "Totale" to match the feminine noun "Fehlerrate"
  2. "Lösungversuchen" should be "Lösungsversuchen"
-      total: 'Total',
+      total: 'Totale',
-      microLearningOneSubmissionHint: 'Für Microlearnings ist keine Trennung von ersten und letzten Lösungversuchen möglich, da Teilnehmende jedes Element der Aktivität nur einmal lösen können.',
+      microLearningOneSubmissionHint: 'Für Microlearnings ist keine Trennung von ersten und letzten Lösungsversuchen möglich, da Teilnehmende jedes Element der Aktivität nur einmal lösen können.',

Also applies to: 1916-1916

packages/graphql/src/ops.schema.json (3)

551-730: Consider consistent nullability for rate fields

The rate fields have inconsistent nullability:

  • firstCorrectRate, firstErrorRate, firstPartialRate, lastCorrectRate, lastErrorRate, lastPartialRate are nullable
  • totalCorrectRate, totalErrorRate, totalPartialRate are non-null

Consider making all rate fields non-null with a default value of 0.0 for consistency.

Add field descriptions for better documentation

Consider adding descriptions to fields to improve schema documentation and API usability.


Line range hint 9379-9646: Consider organizing fields into logical groups

The InstanceQuizAnalytics type combines different aspects of analytics. Consider organizing related fields using GraphQL interfaces or separate types:

  • Performance metrics (correctRate, errorRate, etc.)
  • Feedback metrics (upvoteRate, downvoteRate)
  • Instance metadata (elementName, elementType)

This would improve maintainability and make the schema more modular.

Example reorganization:

interface BaseAnalytics {
  id: Int!
  elementName: String!
  elementType: ElementType!
}

type PerformanceMetrics {
  firstCorrectRate: Float
  lastCorrectRate: Float
  # ... other rates
}

type FeedbackMetrics {
  upvoteRate: Float!
  downvoteRate: Float!
  feedbackCount: Int!
}

type InstanceQuizAnalytics implements BaseAnalytics {
  # ... base fields
  performance: PerformanceMetrics!
  feedback: FeedbackMetrics!
}

20147-20175: Add descriptions for new queries

The new queries getActivityAnalytics and getCourseActivities lack descriptions. Consider adding descriptions to help API consumers understand:

  • The purpose of each query
  • When to use each query
  • Any important considerations or limitations

Example:

"""
Retrieves detailed analytics for a specific activity, including performance metrics and feedback.
"""
getActivityAnalytics(activityId: String!): QuizAnalytics

"""
Retrieves all activities associated with a course, including their current status and metadata.
"""
getCourseActivities(courseId: String!): Course

Also applies to: 20262-20290

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1a7b2eb and 801f7c0.

📒 Files selected for processing (30)
  • apps/analytics/src/modules/instance_activity_performance/compute_instance_performance.py (2 hunks)
  • apps/analytics/src/modules/instance_activity_performance/save_instance_performances.py (1 hunks)
  • apps/frontend-manage/src/components/analytics/performance/ErrorRatesLegend.tsx (1 hunks)
  • apps/frontend-manage/src/components/analytics/performance/PerformanceRates.tsx (2 hunks)
  • apps/frontend-manage/src/components/analytics/performance/PerformanceRatesBarChart.tsx (2 hunks)
  • apps/frontend-manage/src/components/analytics/quiz/ActivityAnalyticsCharts.tsx (1 hunks)
  • apps/frontend-manage/src/components/analytics/quiz/CircularPerformancePlot.tsx (1 hunks)
  • apps/frontend-manage/src/components/analytics/quiz/InstanceQuizAnalytics.tsx (1 hunks)
  • apps/frontend-manage/src/components/analytics/quiz/QuizAnalyticsNavigation.tsx (1 hunks)
  • apps/frontend-manage/src/components/analytics/quiz/QuizSelectionNavigation.tsx (1 hunks)
  • apps/frontend-manage/src/pages/analytics/[courseId]/quizzes.tsx (0 hunks)
  • apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/[id].tsx (1 hunks)
  • apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/index.tsx (1 hunks)
  • packages/graphql/src/graphql/ops/QGetActivityAnalytics.graphql (1 hunks)
  • packages/graphql/src/graphql/ops/QGetCourseActivities.graphql (1 hunks)
  • packages/graphql/src/ops.schema.json (9 hunks)
  • packages/graphql/src/ops.ts (12 hunks)
  • packages/graphql/src/public/client.json (2 hunks)
  • packages/graphql/src/public/schema.graphql (6 hunks)
  • packages/graphql/src/public/server.json (2 hunks)
  • packages/graphql/src/schema/analytics.ts (2 hunks)
  • packages/graphql/src/schema/practiceQuiz.ts (1 hunks)
  • packages/graphql/src/schema/query.ts (2 hunks)
  • packages/graphql/src/services/analytics.ts (2 hunks)
  • packages/graphql/src/services/courses.ts (1 hunks)
  • packages/i18n/messages/de.ts (3 hunks)
  • packages/i18n/messages/en.ts (3 hunks)
  • packages/prisma/src/prisma/migrations/20241221102445_average_time_instance_performance/migration.sql (1 hunks)
  • packages/prisma/src/prisma/schema/analytics.prisma (1 hunks)
  • packages/types/src/index.ts (1 hunks)
🔥 Files not summarized due to errors (1)
  • packages/graphql/src/ops.ts: Error: Server error: no LLM provider could handle the message
💤 Files with no reviewable changes (1)
  • apps/frontend-manage/src/pages/analytics/[courseId]/quizzes.tsx
🔇 Additional comments (32)
packages/graphql/src/schema/practiceQuiz.ts (2)

141-142: LGTM: Timestamp fields properly added

The addition of createdAt and updatedAt fields is well-structured and follows GraphQL schema best practices. These fields will be valuable for tracking quiz analytics over time.


Line range hint 136-139: Consider uncommenting analytics fields for dashboard implementation

The commented-out fields (startedCount, completedCount, repeatedCount) seem directly relevant to the quiz analytics dashboard being implemented. These metrics would provide valuable insights into quiz engagement patterns.

Let's verify if these fields are implemented elsewhere or if they should be included:

Consider including these fields if they're not tracked elsewhere, as they would enhance the analytics dashboard with key engagement metrics.

packages/graphql/src/ops.ts (3)

922-942: Well-structured analytics type with comprehensive metrics.

The InstanceQuizAnalytics type effectively captures both performance and engagement metrics (upvotes, downvotes, feedback). The type definitions are consistent and properly typed.


Line range hint 4275-4284: Well-defined GraphQL document for activity analytics.

The GetActivityAnalyticsDocument is properly structured with:

  • Clear variable definitions
  • Comprehensive field selection
  • Proper type safety with TypeScript

3668-3674: Verify the completeness of analytics query fields.

The GetActivityAnalyticsQuery includes comprehensive analytics data. However, let's verify that all necessary fields are included for the dashboard implementation.

✅ Verification successful

Query fields are complete and well-structured

Based on the schema definition and query implementation, the GetActivityAnalyticsQuery includes all the necessary analytics fields:

  • Activity-level data: name, type, total course participants
  • Instance-level analytics: comprehensive metrics for each quiz instance including participation rates, time spent, error/success rates, and feedback metrics
  • Activity-wide analytics: aggregated metrics for the entire activity including participation, time spent, and performance rates
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any analytics-related components that might need additional fields

# Look for dashboard components that might consume this data
rg -t tsx -t jsx "ActivityAnalytics|QuizAnalytics" 

# Look for any TODO comments about missing analytics fields
rg -l "TODO.*analytics" 

Length of output: 106


Script:

#!/bin/bash
# Search for files containing analytics-related components with proper file extensions
rg -t ts -t js -t jsx "ActivityAnalytics|QuizAnalytics" -A 5

# Search for GraphQL query usages
rg "GetActivityAnalyticsQuery" -A 2

# Look for any analytics-related schema definitions
fd "schema|types?" --type f --extension graphql | xargs cat

# Check for any test files that might reveal expected analytics fields
fd "test|spec" --type f --extension ts --extension tsx | xargs rg "ActivityAnalytics|QuizAnalytics"

Length of output: 52243

apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/index.tsx (6)

44-54: Potential improvement for router validation.
If the user reaches this page without a valid courseId, any call that relies on it may break. Consider adding a more robust check or a fallback for the missing courseId scenario.
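
A minimal sketch of such a guard, assuming the page reads the parameter from the Next.js router; the hook name and fallback markup are illustrative:

import { useRouter } from 'next/router'

// Hypothetical helper: narrow the dynamic route parameter to a plain string.
function useCourseIdParam(): string | undefined {
  const router = useRouter()
  const { courseId } = router.query
  // router.query values can be string | string[] | undefined, so narrow explicitly
  return typeof courseId === 'string' ? courseId : undefined
}

// Hypothetical usage inside the page component:
// const courseId = useCourseIdParam()
// if (!courseId) return <div>Missing course identifier.</div>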


59-67: Ensure that the search logic covers edge cases.
The code looks good, but ensure that search scenarios like uppercase inputs or leading/trailing whitespace are handled as intended.
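
A small sketch of a normalization helper that covers those cases; the helper name is illustrative:

// Illustrative: trim and lowercase both the candidate name and the search term
// so that casing and surrounding whitespace do not affect matching.
const matchesSearch = (name: string, searchTerm: string): boolean =>
  name.toLowerCase().includes(searchTerm.trim().toLowerCase())

// Example: matchesSearch('Practice Quiz 1', '  quiz ') === true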


68-76: Same remark for microLearning search.


77-81: Watch out for null or undefined course data.
Currently it’s guarded mostly by line 88. This is fine, but a second fallback check can help avoid undefined references if course data changes or is updated asynchronously.


176-182: Good approach for loading i18n messages.
Looks consistent with Next.js best practices.


191-191: Exporting default is correct.
No further issues here.

packages/graphql/src/services/analytics.ts (5)

502-503: Good naming convention for getActivityAnalytics.
This keeps consistent with existing naming patterns in the file.


507-520: Security check on includes.
Ensure no PII (personally identifiable information) is unintentionally exposed. The includes are minimal, so that’s likely not a concern, but keep an eye on data expansions in the future.


531-534: Check for relevant error handling when the activity is not found.
Return value is null if practiceQuiz is not found. Ensure the caller handles null properly.


544-546: Returning null on missing activity.
This is consistent with the approach used in other analytics functions. Ensure the caller side has a null check.


624-637: Double-check the shape of returned data.
Heads up: Some downstream consumers might expect numeric IDs but the function can return a 0 fallback if performance is missing. Confirm that is indeed how you want to handle missing performance.

packages/graphql/src/public/schema.graphql (3)

57-70: Check optional vs. required fields in ActivityQuizAnalytics.
All rates are non-nullable. If the user never answered, do you want to store zero, or consider them optional?


Line range hint 1335-1347: Timestamp usage for practice quizzes.
createdAt/updatedAt are a good addition. Make sure to convert them properly between server time and client time, if necessary.


1442-1449: Useful new type for quiz analytics.
This anchors your analytics domain model well.

packages/graphql/src/graphql/ops/QGetCourseActivities.graphql (1)

1-16: No immediate concerns.
Query structure is straightforward and returns needed fields (id, name, status).

apps/frontend-manage/src/components/analytics/performance/ErrorRatesLegend.tsx (1)

4-14: LGTM! Well-structured props interface.

The props interface is well-defined with proper typing for colors and optional wrapperStyle.

apps/frontend-manage/src/components/analytics/performance/PerformanceRatesBarChart.tsx (1)

30-35: LGTM! Class merging implementation is correct

The use of twMerge for combining classes is appropriate and maintains the base styles while allowing customization.

apps/frontend-manage/src/pages/analytics/[courseId]/quizzes/[id].tsx (1)

95-100: LGTM! Blocking fallback is appropriate

The use of blocking fallback for static paths is a good choice for analytics pages where SEO isn't critical and a loading state might be jarring.

packages/prisma/src/prisma/schema/analytics.prisma (1)

198-199: Verify the non-nullable constraint for averageTimeSpent

The averageTimeSpent field is defined as non-nullable. Consider whether there might be edge cases where the average time cannot be computed (e.g., no responses yet) and whether it should allow null values in such cases.

✅ Verification successful

Let me gather more information about how the average time spent is being handled in the codebase.


Based on the gathered information, I can now generate the final response:

The non-nullable constraint for averageTimeSpent is appropriate

The field should remain non-nullable because:

  • The cleanup script initializes averageTimeSpent to 0 by default
  • The migration history shows it was intentionally added as a non-nullable field
  • The codebase handles the average time calculation by setting it to 0 when there's no data, rather than using null
  • This aligns with the field's purpose of representing average time, where 0 is a meaningful value indicating no time spent
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if there are any existing instances without time data
rg -A 5 "averageTimeSpent.*null" 

Length of output: 1466


Script:

#!/bin/bash
# Check how averageTimeSpent is being calculated and handled
rg -A 5 "averageTimeSpent.*=" 

# Check for any database migrations or schema changes related to this field
fd "migration" --exec cat {} \; | rg -A 5 "averageTimeSpent"

# Look for any test cases or business logic handling this field
ast-grep --pattern 'averageTimeSpent = $_'

Length of output: 6771

packages/types/src/index.ts (1)

448-461: Verify if ActivityQuizAnalytics should include feedback metrics

ActivityQuizAnalytics is missing feedback-related fields (upvoteRate, downvoteRate, feedbackCount) that exist in InstanceQuizAnalytics. If this is intentional, consider documenting why. If not, consider adding these fields for consistency.

apps/frontend-manage/src/components/analytics/performance/PerformanceRates.tsx (1)

Line range hint 145-156: LGTM! Clean implementation of chart visualization

The replacement of the recharts Legend with a custom ErrorRatesLegend component provides better control over the visualization while maintaining clean integration with the existing chart components.

packages/graphql/src/public/client.json (1)

109-109: LGTM!

The new entries are correctly formatted and maintain consistency with the existing pattern.

Also applies to: 118-118

packages/i18n/messages/en.ts (1)

Line range hint 1824-1907: LGTM! Analytics translations are well-structured

The new translations for analytics features are clear, consistent, and follow the established patterns.

packages/graphql/src/public/server.json (1)

Line range hint 109-118: LGTM! GraphQL queries are well-defined

The new queries for activity analytics and course activities are properly structured and align with the service implementation.

packages/i18n/messages/de.ts (2)

Line range hint 1836-1922: Analytics section translations look good!

The new analytics-related translations are well-structured, use consistent terminology, and properly handle placeholders and HTML formatting.


Line range hint 1-1922: Translations maintain consistency with English source!

The German translations appropriately preserve:

  • Technical terminology
  • Placeholder variables
  • Semantic meaning
  • Professional tone
packages/graphql/src/ops.schema.json (1)

21797-21892: Well-structured analytics hierarchy!

The QuizAnalytics type effectively combines activity-level and instance-level analytics while maintaining a clear separation of concerns. The inclusion of courseParticipants helps provide context for the analytics data.

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (7)
packages/graphql/src/public/schema.graphql (2)

57-70: LGTM! Consider adding field descriptions.

The ActivityQuizAnalytics type is well-structured with comprehensive metrics. The nullable/non-nullable field choices are appropriate.

Consider adding GraphQL descriptions to document the meaning of each field, especially the difference between first/last/total rates. Example:

 type ActivityQuizAnalytics {
+  """
+  Average time spent by participants in seconds
+  """
   averageTimeSpent: Float!
+  """
+  Rate of correct answers on first attempt
+  """
   firstCorrectRate: Float
   # ... add descriptions for other fields
 }

839-858: Consider reducing field duplication with interfaces.

The InstanceQuizAnalytics type shares many fields with ActivityQuizAnalytics. Consider using GraphQL interfaces to reduce duplication and ensure consistency.

Example refactor:

+interface BaseQuizAnalytics {
+  averageTimeSpent: Float!
+  firstCorrectRate: Float
+  firstErrorRate: Float
+  firstPartialRate: Float
+  id: Int!
+  lastCorrectRate: Float
+  lastErrorRate: Float
+  lastPartialRate: Float
+  numberOfAnswers: Int!
+  totalCorrectRate: Float!
+  totalErrorRate: Float!
+  totalPartialRate: Float!
+}

-type ActivityQuizAnalytics {
+type ActivityQuizAnalytics implements BaseQuizAnalytics {
   # ... existing fields
 }

-type InstanceQuizAnalytics {
+type InstanceQuizAnalytics implements BaseQuizAnalytics {
   # ... existing fields
   elementName: String!
   elementType: ElementType!
   uniqueParticipants: Int!
   upvoteRate: Float!
   downvoteRate: Float!
   feedbackCount: Int!
 }
packages/graphql/src/ops.ts (1)

82-96: LGTM! Well-structured analytics type definition.

The ActivityQuizAnalytics type is well-designed with appropriate nullable types for rates and consistent naming conventions.

Consider adding JSDoc comments to document the purpose of each rate metric:

+/**
+ * Analytics data for a quiz activity, including participation rates and performance metrics
+ */
 export type ActivityQuizAnalytics = {
   __typename?: 'ActivityQuizAnalytics';
+  /** Average time spent by participants in seconds */
   averageTimeSpent: Scalars['Float']['output'];
+  /** Percentage of correct answers on first attempt */
   firstCorrectRate?: Maybe<Scalars['Float']['output']>;
   // ... add similar comments for other metrics
 };
packages/graphql/src/ops.schema.json (4)

551-730: Consider adding field descriptions to ActivityQuizAnalytics type.

The type structure is well-defined, but adding descriptions to fields would improve schema documentation and help API consumers better understand the metrics.

Example for a few fields:

 {
   "name": "averageTimeSpent",
-  "description": null,
+  "description": "Average time spent by participants in seconds",
   ...
 },
 {
   "name": "firstCorrectRate",
-  "description": null,
+  "description": "Percentage of correct answers on first attempts",
   ...
 }

Line range hint 9379-9616: Ensure consistent nullability for rate fields in InstanceQuizAnalytics.

Some rate fields are nullable while others are non-null, which might lead to inconsistent handling in clients:

  • firstCorrectRate, lastCorrectRate are nullable
  • totalCorrectRate, upvoteRate, downvoteRate are non-null

Consider making all rate fields consistently non-null with a default value of 0.0 for better API consistency.


20139-20167: Document error scenarios for getActivityAnalytics query.

While the query structure is good, consider documenting potential error scenarios in the description field to help API consumers handle errors appropriately, such as:

  • Activity not found
  • Unauthorized access
  • Invalid activity type
 {
   "name": "getActivityAnalytics",
-  "description": null,
+  "description": "Retrieves analytics for a specific activity. Errors: NOT_FOUND if activity doesn't exist, UNAUTHORIZED if user lacks access, INVALID_TYPE if activity type doesn't support analytics.",
   ...
 }

Line range hint 551-21884: Consider performance implications of the analytics schema.

The comprehensive analytics schema is well-designed but consider:

  1. The nested structure of QuizAnalytics might result in large response payloads
  2. Fetching instance analytics for activities with many instances could be expensive

Consider implementing:

  • Pagination for instanceQuizAnalytics
  • Field-level cost directives
  • Caching strategies for computed rates
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 801f7c0 and cf2165a.

📒 Files selected for processing (4)
  • packages/graphql/src/ops.schema.json (9 hunks)
  • packages/graphql/src/ops.ts (12 hunks)
  • packages/graphql/src/public/schema.graphql (6 hunks)
  • packages/graphql/src/schema/practiceQuiz.ts (1 hunks)
🔥 Files not summarized due to errors (1)
  • packages/graphql/src/ops.ts: Error: Server error: no LLM provider could handle the message
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/graphql/src/schema/practiceQuiz.ts
🔇 Additional comments (5)
packages/graphql/src/public/schema.graphql (2)

1442-1448: LGTM! Well-structured analytics aggregation.

The QuizAnalytics type provides a clean hierarchical structure for aggregating analytics data at both activity and instance levels.


1335-1335: LGTM! Proper timestamp tracking added.

The addition of createdAt and updatedAt fields to PracticeQuiz type follows best practices for tracking entity lifecycle.

Also applies to: 1347-1347

packages/graphql/src/ops.ts (2)

Line range hint 4275-4284: LGTM! Well-structured GraphQL query definitions.

The GetActivityAnalytics query and document definitions are properly typed with clear variable definitions and comprehensive field selections.

Also applies to: 3668-3674


922-942: LGTM! Comprehensive instance-level analytics type.

The InstanceQuizAnalytics type effectively extends the activity-level metrics with instance-specific data like feedback rates.

Let's verify the consistency of analytics metrics between activity and instance levels:

✅ Verification successful

Analytics types are consistent with appropriate extensions for instance-level metrics

The verification confirms that InstanceQuizAnalytics properly maintains all core analytics fields from ActivityQuizAnalytics while adding relevant instance-specific fields:

  • Both types share consistent field names and types for core metrics (averageTimeSpent, all rate fields, numberOfAnswers)
  • InstanceQuizAnalytics appropriately extends with instance-specific fields:
    • Feedback-related: upvoteRate, downvoteRate, feedbackCount
    • Instance metadata: elementName, elementType
    • Participation tracking: uniqueParticipants
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistency in analytics metric naming and types
# Expected: Metrics should have consistent names and types across levels

# Compare metric field names and types
rg -A 20 "type (Activity|Instance)QuizAnalytics" | grep -E "Rate|Time|count|participants"

Length of output: 4609


Script:

#!/bin/bash
# Let's get a complete view of both types to ensure we're not missing any fields
rg -A 25 "type ActivityQuizAnalytics" packages/graphql/src/ops.ts
echo "---"
rg -A 25 "type InstanceQuizAnalytics" packages/graphql/src/ops.ts

Length of output: 2208

packages/graphql/src/ops.schema.json (1)

21789-21884: Well-structured analytics hierarchy!

The QuizAnalytics type effectively organizes analytics data with:

  • Clear separation between activity and instance level analytics
  • Appropriate non-null constraints
  • Good relationship modeling between different analytics types

@sjschlapbach merged commit 6fa3bb6 into v3-analytics on Dec 22, 2024
12 of 13 checks passed
@sjschlapbach deleted the quiz-analytics branch on December 22, 2024 at 12:30
Quality Gate failed

Failed conditions
5.2% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud
