Add new APIs for recommendations summary at Clusters and Namespaces level #893
Conversation
Signed-off-by: saakhan <saakhan@redhat.com>
…od in listRecomm API to fetch data in summarizeAPI, add converter method for summarizeAPI and other relevant changes Signed-off-by: saakhan <saakhan@redhat.com>
# Conflicts: # src/main/java/com/autotune/analyzer/serviceObjects/Converters.java
Signed-off-by: saakhan <saakhan@redhat.com>
…y and namespaces Signed-off-by: saakhan <saakhan@redhat.com>
Signed-off-by: saakhan <saakhan@redhat.com>
… recomm and filtering data locally Signed-off-by: saakhan <saakhan@redhat.com>
… same cluster name issue and other minor changes Signed-off-by: saakhan <saakhan@redhat.com>
# Conflicts: # src/main/java/com/autotune/database/dao/ExperimentDAO.java # src/main/java/com/autotune/database/dao/ExperimentDAOImpl.java # src/main/java/com/autotune/database/helper/DBConstants.java
Signed-off-by: saakhan <saakhan@redhat.com>
# Conflicts: # src/main/java/com/autotune/database/dao/ExperimentDAO.java # src/main/java/com/autotune/database/dao/ExperimentDAOImpl.java # src/main/java/com/autotune/database/helper/DBConstants.java # src/main/java/com/autotune/database/service/ExperimentDBService.java
Signed-off-by: saakhan <saakhan@redhat.com>
… flag to force pull from the DB when required Signed-off-by: saakhan <saakhan@redhat.com>
# Conflicts: # src/main/java/com/autotune/database/dao/ExperimentDAOImpl.java # src/main/java/com/autotune/database/helper/DBConstants.java
# Conflicts: # src/main/java/com/autotune/analyzer/services/ListRecommendations.java
…d new error and info category for action summary Signed-off-by: saakhan <saakhan@redhat.com>
if (summarizeType == null || summarizeType.isEmpty())
    summarizeType = KruizeConstants.JSONKeys.CLUSTER;
// by default, db_flag will be false so the data will be fetched from cache only
if (fetchFromDB == null || fetchFromDB.isEmpty())
Based on the description, this should be renamed to fetchFromCache.
A fetchFrom(DB or Thanos) variable can be used when we move to the Thanos server.
Just a thought, so you can ignore this comment.
Obtain a preliminary assessment of the data volume that Kruize needs to handle, and verify the proper functionality of the summarization API, both with and without pagination. Currently, the summary API operates in either of two modes: it retrieves and summarizes data for either
(1) all clusters and namespaces, or
(2) a specified cluster and namespace. It would be advantageous to introduce additional options, such as
(3) fetching data for all cluster names that begin with 'xyz*', or allowing a list of comma-separated cluster names to be specified.
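The suggested option (3) could be sketched as a small filter helper. This is a hypothetical illustration, not code from the PR: the class name, parameter shape, and matching rules (a trailing `*` as a prefix wildcard, otherwise a comma-separated list) are all assumptions.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical helper for option (3): select clusters either by a prefix
// wildcard such as "xyz*" or by a comma-separated list such as "c1,c2".
class ClusterNameFilter {
    static List<String> filter(List<String> clusterNames, String selector) {
        Predicate<String> matches;
        if (selector.endsWith("*")) {
            // prefix wildcard: keep clusters whose name starts with the prefix
            String prefix = selector.substring(0, selector.length() - 1);
            matches = name -> name.startsWith(prefix);
        } else {
            // comma-separated list: keep clusters named in the list
            Set<String> wanted = Set.of(selector.split(","));
            matches = wanted::contains;
        }
        // preserves the input order of clusterNames
        return clusterNames.stream().filter(matches).collect(Collectors.toList());
    }
}
```

A summarize handler could run this filter over the known cluster names before fetching data, so both spellings of the parameter reuse one code path.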
public class Summarize extends HttpServlet { | ||
private static final Logger LOGGER = LoggerFactory.getLogger(Summarize.class); | ||
HashMap<String, SummarizeAPIObject> clusterSummaryCacheMap = new HashMap<>(); |
Move the initialization to the init() method and leave only the declaration here. Also evaluate whether parallel requests to this API cause any issues here.
public class Summarize extends HttpServlet {
    private static final Logger LOGGER = LoggerFactory.getLogger(Summarize.class);
    HashMap<String, SummarizeAPIObject> clusterSummaryCacheMap = new HashMap<>();
    HashMap<String, SummarizeAPIObject> namespaceSummaryCacheMap = new HashMap<>();
Transfer the initialization process to the init() method and retain only the declaration here. Then, assess whether any potential problems arise from concurrent requests to this API.
I suggest making use of ServletContext. It provides a more controlled and managed way to share data among different servlets, and it handles the potential concurrency issues that may arise in a multithreaded environment.
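One way the thread-safety part of this suggestion could look is a ConcurrentHashMap-backed cache. This is a sketch under assumptions: the class name, the String value type, and the summarizer callback are placeholders, and the servlet-api calls are shown only in comments so the snippet stays self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a cache that is safe under parallel requests. In the servlet, one
// would build this in init() and publish it for other servlets via
//     getServletContext().setAttribute("clusterSummaryCacheMap", cache);
// and retrieve it elsewhere with getServletContext().getAttribute(...).
class SummaryCache {
    private final Map<String, String> clusterSummaryCacheMap = new ConcurrentHashMap<>();

    // computeIfAbsent is atomic per key, so two concurrent requests for the
    // same cluster do not both run the (possibly expensive) summarization
    String getOrSummarize(String clusterName, Function<String, String> summarizer) {
        return clusterSummaryCacheMap.computeIfAbsent(clusterName, summarizer);
    }
}
```

Compared with a plain HashMap field on the servlet, this avoids lost updates and visibility problems when the container dispatches requests on multiple threads.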
    }
}

public void loadExperimentsAndRecommendationsByClusterName(Map<String, KruizeObject> mKruizeExperimentMap, String clusterName) throws Exception {
Implement a try-catch block to capture potential failures while loading experiment details or experiment recommendations. In case of failure for either or both of these tasks, provide appropriate user notification. Additionally, ensure that the loop continues to execute even if a few corrupt entries are encountered. It's important that the failure of a few entries doesn't disrupt the entire process.
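The keep-going-on-corrupt-entries pattern described above could be sketched like this. It is a simplified stand-in, not the PR's code: the String-to-String maps and the null-value check standing in for a corrupt entry are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: load every entry we can, collect the keys that failed, and return
// them so the caller can notify the user instead of aborting the whole load.
class FaultTolerantLoader {
    static List<String> loadAll(Map<String, String> rawEntries, Map<String, String> loaded) {
        List<String> failures = new ArrayList<>();
        for (Map.Entry<String, String> e : rawEntries.entrySet()) {
            try {
                if (e.getValue() == null)   // stand-in for a corrupt/unparseable entry
                    throw new IllegalStateException("corrupt entry: " + e.getKey());
                loaded.put(e.getKey(), e.getValue());
            } catch (Exception ex) {
                // record the failure and continue with the remaining entries
                failures.add(e.getKey());
            }
        }
        return failures;
    }
}
```

The essential point is that the try-catch sits inside the loop, so one bad experiment or recommendation record cannot stop the rest from loading.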
@@ -15,9 +17,13 @@ public static final class SQLQUERY {
public static final String SELECT_FROM_RESULTS_BY_EXP_NAME_AND_MAX_END_TIME = String.format("from KruizeResultsEntry k WHERE k.experiment_name = :%s and k.interval_end_time = (SELECT MAX(e.interval_end_time) FROM KruizeResultsEntry e where e.experiment_name = :%s )", KruizeConstants.JSONKeys.EXPERIMENT_NAME, KruizeConstants.JSONKeys.EXPERIMENT_NAME);
public static final String SELECT_FROM_RECOMMENDATIONS_BY_EXP_NAME = "from KruizeRecommendationEntry k WHERE k.experiment_name = :experimentName";
public static final String SELECT_FROM_RECOMMENDATIONS_BY_EXP_NAME_AND_END_TIME = String.format("from KruizeRecommendationEntry k WHERE k.experiment_name = :%s and k.interval_end_time= :%s", KruizeConstants.JSONKeys.EXPERIMENT_NAME, KruizeConstants.JSONKeys.INTERVAL_END_TIME);
public static final String SELECT_FROM_RECOMMENDATIONS_BY_CLUSTER_NAME = "from KruizeRecommendationEntry k WHERE k.cluster_name = :clusterName and k.experiment_name IN :experimentList ORDER by interval_end_time DESC LIMIT :limit";
Refrain from employing the 'limit' statement on its own...
Instead, employ the 'limit' statement in conjunction with carefully optimized 'where' conditions. The 'limit' statement processes the entire content and then returns a specified count. However, we can enhance efficiency by strategically utilizing 'where' conditions along with the 'limit' statement, thus mitigating the need to process the entire content.
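A hedged illustration of the point: the second constant below adds a selective predicate before the LIMIT, so far fewer rows are scanned and ordered. The column names follow the query in the diff above, but the `:intervalStartBound` parameter (e.g. a recent time-window lower bound) is an assumption, not something the PR defines.

```java
// Two shapes of the same HQL query, kept as plain string constants so the
// difference is visible side by side.
class RecommendationQueries {
    // LIMIT alone: the database still has to consider every matching row
    // for the cluster before returning the first :limit of them.
    static final String BY_CLUSTER_LIMIT_ONLY =
            "from KruizeRecommendationEntry k WHERE k.cluster_name = :clusterName"
            + " and k.experiment_name IN :experimentList"
            + " ORDER by interval_end_time DESC LIMIT :limit";

    // LIMIT plus a selective WHERE condition (hypothetical lower bound on
    // interval_end_time): the scan is restricted before LIMIT is applied.
    static final String BY_CLUSTER_BOUNDED =
            "from KruizeRecommendationEntry k WHERE k.cluster_name = :clusterName"
            + " and k.experiment_name IN :experimentList"
            + " and k.interval_end_time >= :intervalStartBound"
            + " ORDER by interval_end_time DESC LIMIT :limit";
}
```

With an index on (cluster_name, interval_end_time), the bounded form lets the database serve the newest rows without touching the full history.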
Signed-off-by: saakhan <saakhan@redhat.com>
…per the review Signed-off-by: saakhan <saakhan@redhat.com>
Signed-off-by: saakhan <saakhan@redhat.com>
import java.util.HashMap;

public class SummarizeAPIObject {
This needs to have an API version similar to all the other APIs.
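One minimal way to satisfy this could look like the sketch below. The field name, the "v1" default, and the accessor names are assumptions made for illustration; the actual version string would come from wherever the other Kruize APIs define theirs.

```java
// Hypothetical sketch: carry an apiVersion on the summarize response object,
// mirroring the versioning the other APIs already expose.
class SummarizeAPIObjectSketch {
    private String apiVersion = "v1";   // placeholder default

    String getApiVersion() { return apiVersion; }

    void setApiVersion(String apiVersion) { this.apiVersion = apiVersion; }
}
```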
# Conflicts: # src/main/java/com/autotune/utils/KruizeConstants.java
# Conflicts: # src/main/java/com/autotune/database/dao/ExperimentDAOImpl.java # src/main/java/com/autotune/database/helper/DBConstants.java # src/main/java/com/autotune/utils/KruizeConstants.java
Signed-off-by: saakhan <saakhan@redhat.com>
Closing stale PRs
This PR fixes #892.
New endpoints /listClusters and /summarize are added, which provide a summary of the recommendations at the cluster and namespace level.