Implement job priorities in daily report #100

Open · wants to merge 6 commits into main
database/scripts/get_known_issue_errors.sql (6 additions, 0 deletions)
@@ -0,0 +1,6 @@
SELECT job_name,
error_name
FROM test_fail_issues
WHERE github_issue = "@param1@"
GROUP BY github_issue,
job_name;
Comment on lines +1 to +6 (Contributor):
The SQL LGTM, but the name not so much; should it be something along the lines of get_known_issue_by_url?

database/scripts/lib/buildfarm_tools.rb (40 additions, 1 deletion)
@@ -1,6 +1,7 @@
# frozen_string_literal: true

require 'open3'
require 'csv'

module BuildfarmToolsLib
class BuildfarmToolsError < RuntimeError; end
@@ -15,6 +16,8 @@ class BuildfarmToolsError < RuntimeError; end
FLAKY_BUILDS_DEFAULT_RANGE = '15 days'
WARNING_AGE_CONSTANT = -1

JOB_PRIORITIES = CSV.read('lib/job_priorities.csv', converters: :numeric).to_h

def self.build_regressions_today(filter_known: false)
# Keys: job_name, build_number, build_datetime, failure_reason, last_section
out = run_command('./sql_run.sh builds_failing_today.sql')
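A side note on the JOB_PRIORITIES constant above: lib/job_priorities.csv is not part of this diff, so the layout sketched below is only an assumption. For CSV.read(..., converters: :numeric).to_h to yield a job-name to numeric-priority hash, the file would need exactly two columns per row and no header row:

    # Hypothetical lib/job_priorities.csv contents (two columns, no header row):
    #   job_a,3
    #   job_b,1.5
    rows = CSV.read('lib/job_priorities.csv', converters: :numeric)
    # => [["job_a", 3], ["job_b", 1.5]]
    rows.to_h             # => { "job_a" => 3, "job_b" => 1.5 }
    rows.to_h['job_a']    # => 3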
@@ -128,13 +131,49 @@ def self.jobs_last_success_date(older_than_days: 0)
out
end

-  def self.test_regressions_known
+  def self.test_regressions_known(sort_by: 'priority')
out = known_issues(status: 'open')
out.concat known_issues(status: 'disabled')
out = out.group_by { |e| e["github_issue"] }.to_a.map { |e| e[1] }
out.each do |error_list|
priority = issue_priority(error_list.first["github_issue"])
error_list.each do |error|
error["priority"] = priority
end
end

unless sort_by.nil?
out.sort_by! { |r| -r.first['priority'] }
end
out
end
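As a rough illustration of the grouping above (hypothetical rows; group_by { ... }.to_a.map { |e| e[1] } is equivalent to group_by { ... }.values):

    rows = [
      { 'github_issue' => 'issues/1', 'error_name' => 'segfault', 'job_name' => 'job_a' },
      { 'github_issue' => 'issues/1', 'error_name' => 'timeout',  'job_name' => 'job_b' },
      { 'github_issue' => 'issues/2', 'error_name' => 'segfault', 'job_name' => 'job_a' },
    ]
    rows.group_by { |e| e['github_issue'] }.values
    # => [[<both issues/1 rows>], [<the issues/2 row>]]
    # Each inner array then gets a shared 'priority' key, and the groups are
    # sorted in descending priority order unless sort_by is nil.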

def self.issue_priority(issue_link)
Comment (Contributor):

calculate_issue_priority instead?

sql_out = run_command('./sql_run.sh get_known_issue_errors.sql', args: [issue_link])
errors = sql_out.map {|e| e['error_name']}.uniq
jobs = sql_out.map {|e| e['job_name']}.uniq

error_score_jobs = {}

errors.each do |e|
jobs.each do |job|
flaky_result = run_command('./sql_run.sh calculate_flakiness_jobs.sql', args: [e, FLAKY_BUILDS_DEFAULT_RANGE, job])
next if flaky_result.empty?
# This is not guaranteed to mean 'not consistent'; we need to re-check whether the last 3 builds were failing because of this error
flaky_ratio = flaky_result.first['failure_percentage'].to_f/100.0

job_priority = JOB_PRIORITIES[job]
job_priority = job_priority*1.5 if flaky_ratio == 1

error_score_jobs[job] = [] if error_score_jobs[job].nil?
error_score_jobs[job] << (job_priority*flaky_ratio)
end
end

# Get only maximum score for each job
error_score_jobs.each_value.map {|e| e.max}.sum.round(3)
Comment on lines +173 to +174 (Contributor):
Why do we need the maximum here? If you are iterating over the error flakiness in each of the jobs, isn't their total priority just the complete sum?
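For reference, a small worked example of the two aggregations being discussed (the numbers are made up):

    # error_score_jobs after the loops, for one hypothetical issue:
    error_score_jobs = {
      'job_a' => [0.9, 0.3],   # two different errors hit job_a
      'job_b' => [0.5],
    }
    # Current code: take the worst error per job, then sum across jobs.
    error_score_jobs.each_value.map { |e| e.max }.sum.round(3)   # => 1.4
    # Plain sum over every error/job pair, as suggested above:
    error_score_jobs.values.flatten.sum.round(3)                 # => 1.7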

end

def self.run_command(cmd, args: [], keys: [])
cmd += " '#{args.shift}'" until args.empty?
begin