Monday, 3 March 2025

Evaluation Class 10 AI Notes

These AI Class 10 Notes for Chapter 7, Evaluation, simplify complex AI concepts for easy understanding.


Model Evaluation Class 10 Notes

Imagine that you have come up with an AI-based prediction model which has been deployed in a forest that is prone to forest fires. The objective of the model is to predict whether a forest fire has broken out in the forest or not.

There exist two conditions which we need to ponder upon: Prediction and Reality. The prediction is the output given by the machine, and the reality is the actual scenario in the forest when the prediction is made. Now let us look at the various combinations that we can have with these two conditions.


Here, a forest fire has broken out in the forest. The model predicts a Yes, which means there is a forest fire. The Prediction matches the Reality. Hence, this condition is termed True Positive.


Here, there is no fire in the forest, hence the Reality is No. In this case, the machine too has predicted it correctly as a No. Therefore, this condition is termed True Negative.


Here, the reality is that there is no forest fire, but the machine has incorrectly predicted that there is one. This case is termed False Positive.


Here, a forest fire has broken out in the forest, because of which the Reality is Yes, but the machine has incorrectly predicted it as a No, which means the machine predicts that there is no forest fire. Therefore, this case becomes False Negative.


Confusion Matrix Class 10 Notes

The result of the comparison between prediction and reality can be recorded in a matrix called a confusion matrix.
It is a valuable tool for evaluating the effectiveness of an AI model, as it displays the number of accurate and inaccurate instances based on the model’s predictions.

Let us once again take a look at the four conditions that we went through in the Forest Fire scenario:

  • True Positive: Prediction is Yes and Reality is Yes
  • True Negative: Prediction is No and Reality is No
  • False Positive: Prediction is Yes but Reality is No
  • False Negative: Prediction is No but Reality is Yes

Here’s a detailed account of the terminologies associated with the confusion matrix:

The Confusion Matrix    Reality: Yes            Reality: No
Prediction: Yes         True Positive (TP)      False Positive (FP)
Prediction: No          False Negative (FN)     True Negative (TN)

The matrix displays the counts of correct and incorrect predictions produced by the model on the test data.

  • True Positives (TPs) occur when the model accurately predicts a positive data point.
  • True Negatives (TNs) occur when the model accurately predicts a negative data point.
  • False Positives (FPs) occur when the model incorrectly predicts a positive label for a data point that is actually negative.
  • False Negatives (FNs) occur when the model incorrectly predicts a negative label for a data point that is actually positive.
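
To make these four counts concrete, here is a minimal Python sketch (the label lists are made-up illustrative data, not from these notes) that tallies TP, TN, FP and FN by comparing each prediction with the corresponding reality:

# Tally the four confusion-matrix counts for a binary problem (Yes = 1, No = 0).
# The data below is invented purely for illustration.
reality    = [1, 0, 1, 1, 0, 0, 1, 0]   # what actually happened
prediction = [1, 0, 0, 1, 1, 0, 1, 0]   # what the model predicted

tp = sum(1 for r, p in zip(reality, prediction) if r == 1 and p == 1)
tn = sum(1 for r, p in zip(reality, prediction) if r == 0 and p == 0)
fp = sum(1 for r, p in zip(reality, prediction) if r == 0 and p == 1)
fn = sum(1 for r, p in zip(reality, prediction) if r == 1 and p == 0)

print("TP:", tp, "TN:", tn, "FP:", fp, "FN:", fn)   # TP: 3 TN: 3 FP: 1 FN: 1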

Evaluation Methods Class 10 Notes

Accuracy

Accuracy is used to measure the performance of the model. It measures the proportion of correctly classified instances out of the total instances in the dataset.

Accuracy = \(\frac{\text { Number of Correct Predictions }}{\text { Total Number of Predictions }}\) × 100 %

Example
Suppose you have a binary classification problem where you’re predicting whether emails are spam or not spam. You have a dataset of 100 emails, of which 70 are labelled correctly by your model and 30 are misclassified.
Number of Correct Predictions: 70
Total Number of Predictions: 100

Accuracy = \(\frac{70}{100}\) × 100 % = 70%

So, in this example, the accuracy of the model is 70 %. This means that 70 % of the emails were correctly classified as either spam or not spam.
Accuracy is a straightforward metric to interpret and is widely used, but it has limitations, especially when dealing with imbalanced datasets. In such cases, other metrics like precision, recall, and F1 score may provide a more comprehensive evaluation of the model’s performance.
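
As a quick check of this arithmetic, here is a minimal Python sketch (the variable names are our own, not part of the notes) that reproduces the 70 % figure from the spam example:

# Accuracy = correct predictions / total predictions, expressed as a percentage.
correct_predictions = 70    # emails the model classified correctly
total_predictions   = 100   # total emails in the dataset

accuracy = correct_predictions / total_predictions * 100
print(f"Accuracy = {accuracy:.0f}%")   # Accuracy = 70%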


Precision

Precision is a metric that evaluates the accuracy of positive predictions made by a model. It measures the proportion of true positive predictions out of all positive predictions made by the model.

Precision = \(\frac{\text { True Positives }}{\text { True Positives }+ \text { False Positives }}\)

Example:
Suppose you have a binary classification problem, where you’re predicting whether patients have a certain disease or not. You have a dataset of 150 patients, out of which your model predicts that 50 have the disease. Upon further examination, it turns out that 40 of these predictions are correct (true positives), but 10 are incorrect (false positives).
True Positives: 40
False Positives: 10

Precision = \(\frac{40}{40+10}\) × 100 % = \(\frac{40}{50}\) × 100% = 80%

So, in this example, the precision of the model is 0.8 or 80 %. This means that out of all the patients predicted to have the disease, 80 % of them actually have the disease.
Precision is particularly useful when the cost of false positives is high, such as in medical diagnosis or fraud detection.
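
The same calculation as a small Python sketch, using the counts from the disease example above (variable names are our own):

# Precision = TP / (TP + FP), expressed as a percentage.
true_positives  = 40   # predicted diseased and actually diseased
false_positives = 10   # predicted diseased but actually healthy

precision = true_positives / (true_positives + false_positives) * 100
print(f"Precision = {precision:.0f}%")   # Precision = 80%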

Recall

Recall measures the effectiveness of a classification model in identifying all relevant instances from a dataset. It is the ratio of the number of true positive instances to the sum of true positive and false negative instances.

Recall = \(\frac{\text { True Positives }}{\text { True Positives }+ \text { False Negatives }}\)

Example
Suppose you have a binary classification problem where you’re predicting whether patients have a certain disease or not. You have a dataset of 200 patients, out of which 100 actually have the disease. Your model predicts that 70 patients have the disease, and upon further examination, it turns out that 60 of these predictions are correct (true positives), but 40 patients with the disease were missed (false negatives).
True Positives: 60
False Negatives: 40

Recall = \(\frac{60}{60+40}\) × 100 % = \(\frac{60}{100}\) × 100% = 60%

So, in this example, the recall of the model is 0.6 or 60 %. This means that out of all the patients who actually have the disease, the model correctly identified 60 % of them.

Recall is particularly useful when the cost of false negatives is high, as it focuses on minimising missed positive instances.
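
A matching Python sketch for this example (again, the variable names are ours):

# Recall = TP / (TP + FN), expressed as a percentage.
true_positives  = 60   # diseased patients the model caught
false_negatives = 40   # diseased patients the model missed

recall = true_positives / (true_positives + false_negatives) * 100
print(f"Recall = {recall:.0f}%")   # Recall = 60%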


F1 Score

The F1 score is a harmonic mean of precision and recall, providing a single metric that balances both measures. It is particularly useful when you want to consider both false positives and false negatives in evaluating model performance.

F1 Score = \(\frac{2 \times \text { Precision } \times \text { Recall }}{\text { Precision }+ \text { Recall }}\)

Example
Suppose you have a binary classification problem, where you’re predicting whether emails are spam or not spam. You have a dataset of 200 emails, out of which 120 are correctly classified as spam and 50 are misclassified as spam.

  • True Positives (correctly classified as spam): 120
  • False Positives (misclassified as spam): 50
  • Additionally, out of the actual 150 spam emails, your model correctly identifies 120, but misses 30.
  • True Positives (correctly identified spam): 120
  • False Negatives (missed spam): 30

Precision = \(\frac{120}{120+50}\) = \(\frac{120}{170}\) ≈ 0.71

Recall = \(\frac{120}{120+30}\) = \(\frac{120}{150}\) = 0.80

F1 Score = \(\frac{2 \times 0.71 \times 0.80}{0.71+0.80}\) ≈ 0.75

So, in this example, the F1 score of the model is 0.75. This indicates a balance between precision and recall, where higher values represent better overall model performance.
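
Here is a minimal Python sketch (using the counts from the email example above) that reproduces the 0.75 figure from precision and recall:

# F1 score = harmonic mean of precision and recall.
tp, fp, fn = 120, 50, 30   # counts from the spam example above

precision = tp / (tp + fp)   # 120 / 170 ≈ 0.706
recall    = tp / (tp + fn)   # 120 / 150 = 0.800
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 score = {f1:.2f}")   # F1 score = 0.75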

Importance of Metric Class 10 Notes

Let’s look at different cases before deciding which metric is more important: Precision or Recall.
1. Choosing between Precision and Recall depends on the condition in which the model has been deployed. In a case like Forest Fire, a False Negative can cost us a lot and is risky too. Imagine no alert being given even when there is a Forest Fire. The whole forest might burn down.


2. Another case where a False Negative can be dangerous is Viral Outbreak. Imagine a deadly virus has started spreading and the model which is supposed to predict a viral outbreak does not detect it.


The virus might spread widely and infect a lot of people.

3. On the other hand, there can be cases in which the False Positive condition costs us more than False Negatives. One such case is Mining. Imagine a model telling you that there exists treasure at a point and you keep on digging there but it turns out that it is a false alarm. Here, the False Positive case (predicting there is a treasure but there is no treasure) can be very costly.


4. Consider a model that predicts whether a mail is spam or not. If the model always predicts that the mail is spam, people would not look at it and eventually might lose important information. Here also False Positive condition (Predicting the mail as spam while the mail is not spam) would have a high cost.


Calculation of Scores Class 10 Notes

Scenario I
In schools, a lot of times it happens that there is no water to drink. In a few places, cases of water shortage in schools are very common and prominent. Hence, an AI model is designed to predict if there is going to be a water shortage in the school in the near future or not.

The confusion matrix for the same is:

The Confusion Matrix    Reality: 1    Reality: 0
Predicted 1             22            12
Predicted 0             47            18


Let us calculate Accuracy, Precision, Recall and F1 Score for the above problem.

From the confusion matrix: TP = 22, FP = 12, FN = 47, TN = 18.

Accuracy = \(\frac{TP+TN}{TP+TN+FP+FN}\) × 100 % = \(\frac{40}{99}\) × 100 % ≈ 40.4 %

Precision = \(\frac{TP}{TP+FP}\) × 100 % = \(\frac{22}{34}\) × 100 % ≈ 64.7 %

Recall = \(\frac{TP}{TP+FN}\) × 100 % = \(\frac{22}{69}\) × 100 % ≈ 31.9 %

F1 Score = \(\frac{2 \times \text { Precision } \times \text { Recall }}{\text { Precision }+ \text { Recall }}\) = \(\frac{2 \times 0.647 \times 0.319}{0.647+0.319}\) ≈ 0.43
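
The same calculation as a Python sketch, using the four counts read off the confusion matrix above:

# All four evaluation metrics from the Scenario I confusion matrix.
tp, fp, fn, tn = 22, 12, 47, 18

accuracy  = (tp + tn) / (tp + tn + fp + fn)          # 40 / 99 ≈ 0.404
precision = tp / (tp + fp)                           # 22 / 34 ≈ 0.647
recall    = tp / (tp + fn)                           # 22 / 69 ≈ 0.319
f1        = 2 * precision * recall / (precision + recall)

print(f"Accuracy  = {accuracy:.1%}")    # Accuracy  = 40.4%
print(f"Precision = {precision:.1%}")   # Precision = 64.7%
print(f"Recall    = {recall:.1%}")      # Recall    = 31.9%
print(f"F1 score  = {f1:.2f}")          # F1 score  = 0.43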

Glossary:

  • Evaluation: It is the process of checking the performance of your AI model. This is done mainly by comparing two things: “Prediction” and “Reality”.
  • Accuracy: It measures the proportion of correctly classified instances out of the total instances in the dataset.
  • Precision: It evaluates the accuracy of positive predictions made by a model.
  • Recall: Also known as sensitivity or true positive rate, it measures the proportion of actual positive instances that were correctly identified by the model.
  • F1 score: It is a harmonic mean of precision and recall, providing a single metric that balances both measures.
