Calculating Feature Importance Weights Like XGBoost for CatBoost


Feature Importance Weight Calculator

Compare feature importance between XGBoost and CatBoost for a single feature. The calculator takes six inputs:

  • XGBoost Gain Ratio: the proportion of the model's total gain attributed to the feature.
  • XGBoost Split Count: the total number of times the feature was used for splitting.
  • XGBoost Cover Count: the total number of samples affected by splits on the feature.
  • CatBoost Feature Frequency: the relative frequency of the feature's usage in split decisions across CatBoost trees.
  • CatBoost Average Gain: the average gain the feature contributes per split.
  • Total Training Samples: the total number of samples used to train both models.

Formulas used:

  XGBoost Composite Score = (Gain Ratio × Split Count) / Total Samples
  CatBoost Composite Score = (Feature Frequency × Average Gain) / Total Samples
  Importance Ratio = CatBoost Composite Score / XGBoost Composite Score
  Main Result = max(XGBoost Composite Score, CatBoost Composite Score)

The results panel reports the two composite scores, the Importance Ratio (CB/XGB), a metrics table of the inputs and outputs, and a bar chart comparing the composite importance scores for XGBoost and CatBoost.
What is Feature Importance Weight Calculation?

Feature importance weight calculation is a crucial process in machine learning that quantifies the contribution of each input feature to the predictive power of a model. When working with complex algorithms like XGBoost and CatBoost, understanding which features the model relies on most heavily is vital for model interpretability, feature selection, and identifying underlying data patterns. The approach on this page lets us calculate a feature importance weight for CatBoost in the same spirit as XGBoost's metrics, providing a comparative view of feature influence across the two gradient boosting frameworks.

Who should use it:
Data scientists, machine learning engineers, analysts, and researchers who build and interpret predictive models. It's particularly useful when deploying models in regulated industries or when explaining model decisions to stakeholders.

Common misconceptions:

  • Feature importance implies causation: Importance indicates correlation with the target variable as learned by the model, not necessarily a direct causal link.
  • All importance metrics are the same: XGBoost and CatBoost use different internal mechanisms and thus report feature importance in different ways, requiring careful interpretation and normalization.
  • Low importance means a feature is useless: A feature might have low importance in one model but be critical in another, or it might be important for detecting rare events.

Effectively, calculating feature importance weight allows us to move beyond a black-box model and gain insights into the driving factors behind its predictions, a key step in responsible AI development. This comparative analysis of how to calculate feature importance weight like XGBoost for CatBoost is essential for choosing the right framework or for understanding model behavior when using multiple gradient boosting methods.

Feature Importance Weight Calculation Formula and Mathematical Explanation

Gradient Boosting models like XGBoost and CatBoost derive feature importance from how often and how effectively features are used to split nodes within the ensemble of trees. However, their specific calculation methodologies differ, necessitating a method to normalize and compare these importance weights.

XGBoost Feature Importance Metrics

XGBoost typically provides several importance metrics, commonly including:

  • Gain: The average gain of splits which use this feature.
  • Split: The number of times a feature is used to split data (reported as 'weight' by the XGBoost API).
  • Cover: The average number of training samples affected by a split which uses this feature.
For a composite score, we can combine these. A common approach, and the one used here, is to weight the quality of splits (gain) by how often the feature is used (split count); the sketch below shows how to pull these raw metrics from a trained model.
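
As a minimal sketch (the synthetic data and variable names are ours; `get_score` and its importance types are real xgboost API), the raw metrics above can be pulled from a trained booster like this:

```python
import numpy as np
import xgboost as xgb

# Synthetic data purely to make the sketch runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

booster = xgb.XGBClassifier(n_estimators=50, max_depth=4).fit(X, y).get_booster()

gain = booster.get_score(importance_type="gain")      # average gain per split
splits = booster.get_score(importance_type="weight")  # split count
cover = booster.get_score(importance_type="cover")    # average cover per split

# A gain *ratio* (the feature's share of total gain) is not reported directly,
# but it can be derived from 'total_gain':
total_gain = booster.get_score(importance_type="total_gain")
gain_ratio = {f: g / sum(total_gain.values()) for f, g in total_gain.items()}
print(gain_ratio)
```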

CatBoost Feature Importance Metrics

CatBoost offers various importance types, such as:

  • Feature Frequency: The proportion of trees where a feature was used for splitting.
  • Average Gain: The average contribution of a feature to the model's accuracy across all splits.
Once extracted, these two quantities can be aggregated into a composite score. Note, however, that CatBoost's built-in importance (PredictionValuesChange by default) does not report them under these names; they must be derived first.
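
A minimal sketch of what the library does report out of the box (synthetic data and variable names are ours; deriving 'Feature Frequency' and 'Average Gain' from per-split statistics is an extra step the user must supply):

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = CatBoostClassifier(iterations=50, depth=4, verbose=False)
model.fit(X, y)

# Default importance type is PredictionValuesChange; values sum to 100.
importances = model.get_feature_importance()
print(dict(zip(model.feature_names_, importances)))
```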

Normalized Comparison Formula

To compare feature importance across XGBoost and CatBoost, we need a normalized metric. The calculator uses the following approach:

XGBoost Composite Score:
$ \text{XGBoost}_{\text{CompScore}} = \frac{\text{Gain Ratio} \times \text{Split Count}}{\text{Total Samples}} $
This formula attempts to capture the overall impact by multiplying the qualitative measure (Gain Ratio, a proxy for average gain) by the quantitative measure (Split Count) and then normalizing by the dataset size.

CatBoost Composite Score:
$ \text{CatBoost}_{\text{CompScore}} = \frac{\text{Feature Frequency} \times \text{Average Gain}}{\text{Total Samples}} $
This formula combines the likelihood of use (Feature Frequency) with its effectiveness (Average Gain), normalized by the dataset size.

Importance Ratio:
$ \text{Importance Ratio} = \frac{\text{CatBoost}_{\text{CompScore}}}{\text{XGBoost}_{\text{CompScore}}} $
This ratio indicates how the composite importance of a feature compares between the two models. A ratio > 1 suggests higher relative importance in CatBoost, while < 1 suggests higher relative importance in XGBoost.

Primary Result:
$ \text{Primary Result} = \max(\text{XGBoost}_{\text{CompScore}}, \text{CatBoost}_{\text{CompScore}}) $
The main result highlights the maximum composite importance score observed for the feature across both models.
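
The four formulas translate directly into a small helper. A minimal sketch (the function and parameter names are illustrative, not a library API):

```python
import math

def composite_scores(gain_ratio, split_count, feature_frequency,
                     average_gain, total_samples):
    """Composite importance scores as defined above; names are illustrative."""
    xgb_score = (gain_ratio * split_count) / total_samples
    cb_score = (feature_frequency * average_gain) / total_samples
    # Guard against division by zero when XGBoost never uses the feature.
    ratio = cb_score / xgb_score if xgb_score > 0 else math.inf
    return xgb_score, cb_score, ratio, max(xgb_score, cb_score)
```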

Variable Explanations

Here is a breakdown of the variables used:

| Variable | Meaning | Unit | Typical Range |
| --- | --- | --- | --- |
| Gain Ratio (XGBoost) | Proportion of total gain attributed to a feature. | Ratio (0 to 1) | 0.0 to 1.0 |
| Split Count (XGBoost) | Total number of splits using the feature. | Count | Positive integer |
| Cover Count (XGBoost) | Total samples affected by splits using the feature. | Count | Positive integer |
| Feature Frequency (CatBoost) | Proportion of trees using the feature. | Ratio (0 to 1) | 0.0 to 1.0 |
| Average Gain (CatBoost) | Average gain from splits using the feature. | Gain value | Typically non-negative |
| Total Training Samples | Total number of data points in the training set. | Count | Positive integer |
| XGBoost Composite Score | Normalized, combined importance score for XGBoost. | Score | Non-negative |
| CatBoost Composite Score | Normalized, combined importance score for CatBoost. | Score | Non-negative |
| Importance Ratio | Ratio of CatBoost to XGBoost composite score. | Ratio | Non-negative |

Practical Examples (Real-World Use Cases)

Example 1: Customer Churn Prediction

A telecom company is using XGBoost and CatBoost to predict customer churn. They extract feature importance for the feature 'Contract Duration'.

Inputs:

  • Feature: 'Contract Duration'
  • XGBoost Gain Ratio: 0.45
  • XGBoost Split Count: 210
  • XGBoost Cover Count: 850
  • CatBoost Feature Frequency: 0.70
  • CatBoost Average Gain: 0.08
  • Total Training Samples: 15000

Calculation Breakdown:

  • XGBoost Composite Score = (0.45 * 210) / 15000 = 94.5 / 15000 = 0.0063
  • CatBoost Composite Score = (0.70 * 0.08) / 15000 = 0.056 / 15000 = 0.00000373
  • Importance Ratio = 0.00000373 / 0.0063 ≈ 0.00059
  • Main Result = max(0.0063, 0.00000373) = 0.0063
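
For reference, the same numbers reproduce this breakdown through the `composite_scores` helper sketched in the formula section:

```python
xgb_s, cb_s, ratio, main = composite_scores(
    gain_ratio=0.45, split_count=210,
    feature_frequency=0.70, average_gain=0.08,
    total_samples=15000)
print(f"XGB={xgb_s:.4f}  CB={cb_s:.8f}  ratio={ratio:.5f}  main={main:.4f}")
# XGB=0.0063  CB=0.00000373  ratio=0.00059  main=0.0063
```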

Interpretation: In this scenario, 'Contract Duration' shows a significantly higher composite importance score in XGBoost (0.0063) compared to CatBoost (0.00000373). The importance ratio of ~0.0006 indicates that XGBoost leverages this feature much more heavily for its predictions in this specific dataset and model configuration. This might suggest that XGBoost's splitting criteria are more sensitive to the variations in contract duration, or that CatBoost relies more on other features to capture this predictive signal.

Example 2: House Price Prediction

A real estate analytics firm is building a model to predict house prices. They analyze the importance of the feature 'Square Footage'.

Inputs:

  • Feature: 'Square Footage'
  • XGBoost Gain Ratio: 0.65
  • XGBoost Split Count: 350
  • XGBoost Cover Count: 1200
  • CatBoost Feature Frequency: 0.90
  • CatBoost Average Gain: 0.15
  • Total Training Samples: 25000

Calculation Breakdown:

  • XGBoost Composite Score = (0.65 * 350) / 25000 = 227.5 / 25000 = 0.0091
  • CatBoost Composite Score = (0.90 * 0.15) / 25000 = 0.135 / 25000 = 0.0000054
  • Importance Ratio = 0.0000054 / 0.0091 ≈ 0.00059
  • Main Result = max(0.0091, 0.0000054) = 0.0091

Interpretation: Similar to the previous example, 'Square Footage' appears to be a much more dominant feature in the XGBoost model (composite score 0.0091) than in CatBoost (0.0000054). The low importance ratio highlights this disparity. While square footage is intuitively a strong predictor of house prices, the difference in how XGBoost and CatBoost utilize it suggests variations in their internal feature selection and gain calculation processes. This could prompt further investigation into CatBoost's handling of continuous variables, or motivate different feature engineering for that model.

These examples illustrate how to calculate feature importance weight like XGBoost for CatBoost, revealing differences in how these powerful algorithms perceive the significance of specific features within the same dataset. Understanding these nuances is key for robust model development and interpretation.

How to Use This Feature Importance Weight Calculator

This calculator is designed to provide a comparative analysis of feature importance derived from XGBoost and CatBoost models. Follow these steps to get meaningful insights:

  1. Gather Model Metrics: First, extract the feature importance metrics from both your trained XGBoost and CatBoost models for the feature of interest (an end-to-end sketch follows this list).
    • For XGBoost: obtain the 'Gain Ratio' (or derive it from 'Total Gain'; see FAQ Q4), the 'Split Count', and the 'Cover Count' for the feature.
    • For CatBoost: obtain the 'Feature Frequency' and 'Average Gain' for the feature.
  2. Input Total Samples: Enter the total number of training samples used for BOTH models. This is crucial for normalization.
  3. Enter Values: Input the extracted metrics into the corresponding fields in the calculator. Ensure you use the correct metric for the correct model type (XGBoost values for XGBoost fields, CatBoost values for CatBoost fields).
  4. Calculate: Click the "Calculate Importance" button. The calculator will immediately display:
    • Main Result: The highest composite importance score between the two models for the feature.
    • Intermediate Values: The calculated XGBoost Composite Score, CatBoost Composite Score, and the Importance Ratio.
    • Results Table: A detailed breakdown of the input metrics and calculated scores.
    • Dynamic Chart: A visual comparison of the composite scores.
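
Putting the steps together, a hedged end-to-end sketch (the synthetic data, the placeholder CatBoost quantities, and the `composite_scores` helper from the formula section are all our assumptions, not library outputs):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Step 1: train and extract the XGBoost metrics for one feature ('f0').
booster = xgb.XGBClassifier(n_estimators=50, max_depth=4).fit(X, y).get_booster()
total_gain = booster.get_score(importance_type="total_gain")
splits = booster.get_score(importance_type="weight")
gain_ratio = total_gain.get("f0", 0.0) / sum(total_gain.values())
split_count = splits.get("f0", 0)

# CatBoost-side quantities: placeholders here, since CatBoost does not report
# 'Feature Frequency' or 'Average Gain' under those names (see above).
feature_frequency, average_gain = 0.70, 0.08

# Steps 2-4: normalize by the shared training-set size and compare.
xgb_s, cb_s, ratio, main = composite_scores(
    gain_ratio, split_count, feature_frequency, average_gain,
    total_samples=len(X))
print(xgb_s, cb_s, ratio, main)
```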

How to Read Results:

  • Composite Scores: These normalized scores allow for a direct comparison. A higher score indicates greater perceived importance by that specific model algorithm.
  • Importance Ratio: This is a key metric.
    • Ratio > 1: The feature is relatively more important in the CatBoost model.
    • Ratio < 1: The feature is relatively more important in the XGBoost model.
    • Ratio ≈ 1: The feature's importance is perceived similarly by both models.
    • Ratio very close to 0: Feature has negligible importance in CatBoost compared to XGBoost.
  • Main Result: Simply indicates the peak importance observed for the feature across the two models.

Decision-Making Guidance:

  • Feature Selection: If a feature consistently shows low importance across both models, it might be a candidate for removal to simplify the model. Conversely, high importance suggests it's a strong driver.
  • Model Comparison: Significant differences in importance ratios can highlight algorithmic biases or strengths. If a feature is critical for one model but not the other, it might influence your choice of which model to deploy or suggest areas for hyperparameter tuning.
  • Model Debugging: Unexpectedly high or low importance for a feature can signal issues with data preprocessing, feature engineering, or model training.
  • Domain Expertise Validation: Compare the calculated importance weights with your understanding of the domain. Do the most important features make intuitive sense?

By using this tool, you can effectively calculate feature importance weight like XGBoost for CatBoost and gain deeper insights into your machine learning models.

Key Factors That Affect Feature Importance Results

The calculated feature importance weights are not static values; they are highly dependent on several factors related to the data, the modeling process, and the algorithms themselves. Understanding these influences is critical for accurate interpretation when you calculate feature importance weight like XGBoost for CatBoost.

  1. Data Quality and Preprocessing:
    Missing values, outliers, and incorrect data types can skew importance. For instance, if a highly informative feature is poorly imputed, its importance might be artificially lowered. Feature scaling can also impact certain importance metrics, though tree-based models like XGBoost and CatBoost are largely insensitive to monotonic rescaling of inputs.
  2. Feature Engineering:
    Creating new features or transforming existing ones can dramatically change importance. A feature that seems unimportant on its own might become highly significant when combined with another in a new engineered feature. The choice of transformation (e.g., log, polynomial) also plays a role.
  3. Correlated Features:
    When two or more features are highly correlated and predictive, gradient boosting models might arbitrarily assign importance to one over the other, or split the importance between them. This can lead to seemingly lower importance for individually strong predictors if they are redundant. Analyzing importance for groups of correlated features is often necessary.
  4. Hyperparameter Tuning:
    Parameters such as `max_depth`, `learning_rate`, and `n_estimators` (XGBoost), or `depth`, `iterations`, and `learning_rate` (CatBoost), together with regularization terms, significantly influence how trees are built and thus how features are utilized. Different hyperparameter settings can lead to vastly different feature importance rankings; a short sketch after this list illustrates the effect.
  5. Dataset Size and Complexity:
    Larger datasets might require more splits to capture patterns, potentially increasing split counts. Highly complex datasets with intricate relationships might lead to more nuanced importance distributions. The normalization by `Total Samples` in our calculator helps mitigate some dataset size effects but doesn't eliminate the underlying complexity influence.
  6. Choice of Importance Metric:
    As demonstrated, XGBoost and CatBoost use different base metrics. Even within XGBoost, 'Gain', 'Split', and 'Cover' can yield different rankings. The specific metric chosen and how it's combined (as in our composite score) directly shapes the resulting importance weights. It's crucial to understand what each metric truly represents.
  7. Model Objective Function:
    The loss function the model is optimizing for (e.g., MSE for regression, LogLoss for classification) influences what constitutes "importance." A feature that significantly reduces error according to one loss function might have a different impact on another.
  8. Presence of Noise:
    Random noise in the data or target variable can sometimes be learned by the model, leading to features appearing more important than they truly are, especially if regularization is insufficient.
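
To make factor 4 concrete, a small illustrative sketch comparing gain-based rankings of the same synthetic data under two depth settings (everything here is invented for illustration):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
# Target mixes a strong linear signal (f0) with an interaction term (f1 * f2).
y = X[:, 0] + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

for depth in (2, 8):
    model = xgb.XGBRegressor(n_estimators=100, max_depth=depth)
    model.fit(X, y)
    gains = model.get_booster().get_score(importance_type="gain")
    print(f"max_depth={depth}: {gains}")
# Deeper trees can shift gain toward the interacting features f1 and f2,
# changing the importance ranking without any change in the data.
```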

Careful consideration of these factors is essential when interpreting the output of any feature importance calculation, including our comparative tool for XGBoost and CatBoost.

Frequently Asked Questions (FAQ)

Q1: Are the composite scores from this calculator directly comparable to default XGBoost/CatBoost importance outputs?

No, the composite scores are specifically designed for normalized comparison between XGBoost and CatBoost using the defined formulas. Default outputs from each library often use different units and calculation bases (e.g., raw gain vs. permutation importance). This calculator normalizes them using the provided inputs and a consistent denominator (Total Samples).

Q2: What does an Importance Ratio of 0 mean?

An Importance Ratio of 0 (or extremely close to 0) implies that the CatBoost Composite Score is effectively zero relative to the XGBoost Composite Score. This typically happens if the Feature Frequency or Average Gain in CatBoost is negligible for that specific feature, while XGBoost finds it somewhat important.

Q3: Can I use this calculator for feature importance after training a model?

Yes, this calculator is intended for use *after* you have trained your XGBoost and CatBoost models and have extracted the relevant importance metrics for a specific feature. It helps interpret and compare those extracted metrics.

Q4: What if my XGBoost model only provides 'Total Gain' and not 'Gain Ratio'?

If you have 'Total Gain' and 'Split Count', you could potentially estimate an average gain per split ($ \text{Average Gain}_{\text{XGBoost}} = \text{Total Gain} / \text{Split Count} $). You could then try to construct a comparable score, but direct use of 'Gain Ratio' is preferred for accuracy. Alternatively, if you have 'Total Gain' and 'Cover Count', you might use these as proxies, but the interpretation of the composite score would need careful adjustment. For this calculator, providing the Gain Ratio is ideal.
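
A minimal sketch of that estimate ('total_gain' and 'weight' are real xgboost importance types; the conversion itself is the approximation described above):

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

booster = xgb.XGBClassifier(n_estimators=20).fit(X, y).get_booster()
total_gain = booster.get_score(importance_type="total_gain")
weight = booster.get_score(importance_type="weight")  # split counts
# Estimated average gain per split for each feature.
avg_gain = {f: total_gain[f] / weight[f] for f in total_gain}
print(avg_gain)
```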

Q5: Does high importance guarantee a feature is causal?

No. Feature importance measures how much a feature contributes to the model's prediction accuracy, based on the patterns learned from the data. It does not imply a direct cause-and-effect relationship in the real world. Correlation does not equal causation.

Q6: How do I handle categorical features in CatBoost and XGBoost for importance calculation?

Both XGBoost and CatBoost have built-in mechanisms to handle categorical features. CatBoost, in particular, has sophisticated methods. The importance metrics reported by the libraries should reflect how these features contribute after being processed internally. Ensure you are using the correct feature names as output by your respective models.
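
A hedged sketch of declaring categorical columns in each library (CatBoost's `cat_features` argument and XGBoost's `enable_categorical` flag are real options in recent releases; the toy frame is ours):

```python
import pandas as pd
import xgboost as xgb
from catboost import CatBoostClassifier

df = pd.DataFrame({
    "city": ["a", "b", "a", "c"] * 50,
    "size": [1.0, 2.0, 3.0, 4.0] * 50,
})
y = [0, 1, 0, 1] * 50

# CatBoost: pass categorical columns by name (or index) at fit time.
cb = CatBoostClassifier(iterations=20, verbose=False)
cb.fit(df, y, cat_features=["city"])

# XGBoost: mark the column as pandas 'category' dtype and enable the
# (still experimental in some versions) categorical support.
df_x = df.assign(city=df["city"].astype("category"))
xgbm = xgb.XGBClassifier(n_estimators=20, enable_categorical=True,
                         tree_method="hist")
xgbm.fit(df_x, y)
```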

Q7: Is the 'Cover Count' from XGBoost always necessary for the composite score?

The calculator uses 'Split Count' as per the formula $ (\text{Gain Ratio} \times \text{Split Count}) / \text{Total Samples} $. 'Cover Count' is another metric XGBoost provides, representing the number of samples affected by splits using that feature. While it can offer a different perspective on importance (impact breadth vs. split frequency), 'Split Count' is often used in conjunction with 'Gain' for a combined measure. If you prefer to use 'Cover Count' instead of 'Split Count', you would need to modify the formula and calculator logic.

Q8: What if the total samples differ between my XGBoost and CatBoost training runs?

You should use the total number of samples from the dataset that was most recently used for training *either* model, or ideally, the number of samples used in the training set common to both models. Normalization requires a consistent baseline. If they were trained on vastly different sample sizes, the comparison's validity diminishes.



