|              | Precision | Recall | F1-score | Support |
|--------------|-----------|--------|----------|---------|
| Died         | 0.79      | 0.81   | 0.80     | 80      |
| Lived        | 0.79      | 0.77   | 0.78     | 75      |
| Accuracy     |           |        | 0.79     | 155     |
| Macro Avg    | 0.79      | 0.79   | 0.79     | 155     |
| Weighted Avg | 0.79      | 0.79   | 0.79     | 155     |
The accuracy score is a measure used to evaluate how well a machine learning model performs. It tells us the percentage of the model's predictions that are correct.
We have a dataset containing information on whether a horse with colic will live or die based on certain symptoms and treatments. We also have the actual outcomes (whether each horse survived or not) within the dataset for each case. The accuracy score compares the model's predictions to the actual outcomes. It calculates how many times the model's predictions were correct and divides this by the total number of predictions made using this formula:
Accuracy = (Total Number of Correct Predictions) / (Total Number of Predictions)
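To make the formula concrete, here is a minimal sketch of how the score can be computed with scikit-learn's `accuracy_score`; the labels and variable names (`y_test`, `y_pred`) are illustrative placeholders rather than the project's actual data.

```python
# Minimal sketch: computing accuracy with scikit-learn.
# y_test / y_pred are illustrative placeholders, not the project's real data.
from sklearn.metrics import accuracy_score

y_test = [1, 0, 1, 1, 0, 1, 0, 0]  # actual outcomes (1 = died, 0 = lived)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions

# accuracy = correct predictions / total predictions
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2%}")  # 75.00% on this toy example (6 of 8 correct)
```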
The accuracy score gives us a straightforward way to understand the effectiveness of the model. A higher accuracy score means the model is making more correct predictions, which is desirable in most scenarios. For many applied classification problems, an accuracy between 70% and 90% is generally considered acceptable.
In this case, predicting the survival of horses with colic, an accuracy score of 79.35% means the model gets roughly four out of five predictions right, so it can be considered a reasonably reliable aid for horse owners and veterinarians when making informed decisions.
The AUC-ROC score measures how well a model can distinguish between two classes (in our case, lived and died). The score ranges from 0 to 1, where 0.5 corresponds to random guessing and 1 to perfect separation.
An AUC-ROC score of 0.831 means that our model is excellent at distinguishing between horses that will survive and those that won't. Concretely, if we pick one horse that died and one that lived at random, there is an 83.1% chance the model assigns a higher predicted risk of dying to the horse that actually died.
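For illustration, a score like this could be computed with scikit-learn's `roc_auc_score`, as sketched below; `y_test` and the predicted probabilities are assumed placeholders, and in practice the probabilities would come from the fitted model's `predict_proba` output.

```python
# Illustrative sketch: computing the AUC-ROC score with scikit-learn.
# The arrays are placeholders; real code would use the model's predicted
# probabilities for the positive class.
from sklearn.metrics import roc_auc_score

y_test  = [1, 0, 1, 1, 0, 1, 0, 0]                   # actual outcomes (1 = died)
y_proba = [0.9, 0.2, 0.7, 0.8, 0.4, 0.35, 0.3, 0.1]  # predicted probability of dying

auc = roc_auc_score(y_test, y_proba)
print(f"AUC-ROC: {auc:.3f}")
```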
The classification report is a summary of how well our machine learning model performs when making predictions. It provides key metrics (precision, recall, F1-score, and support) that help us understand the accuracy and reliability of the model's predictions. Think of it as a report card for our model, showing how well it did at predicting whether horses with colic would live or die, and where its strengths and weaknesses lie.
This classification report is good for our model: precision, recall, and F1-score all fall between 0.77 and 0.81 for both classes, which shows the model performs comparably well at predicting both outcomes.
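For reference, a report like the one shown above can be generated with scikit-learn's `classification_report`; the variables and class names below are illustrative assumptions, not the project's actual code.

```python
# Hedged sketch: producing a classification report with scikit-learn.
# y_test / y_pred and the class names are illustrative placeholders.
from sklearn.metrics import classification_report

y_test = [1, 0, 1, 1, 0, 1, 0, 0]  # actual outcomes (1 = died, 0 = lived)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions

# target_names maps label 0 -> "Lived" and label 1 -> "Died"
print(classification_report(y_test, y_pred, target_names=["Lived", "Died"]))
```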