Your task is to implement a function dice_score(y_true, y_pred)
that calculates the Dice Score, also known as the Sørensen–Dice coefficient or F1-score, for binary classification. The Dice Score is used to measure the similarity between two sets, particularly in tasks like image segmentation and binary classification.
Return the Dice Score rounded to 3 decimal places, and handle edge cases appropriately (e.g., when there are no true or predicted positives).
Example:

```python
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1])
print(dice_score(y_true, y_pred))
```

Output: 0.857
The Dice Score, also known as the Sørensen–Dice coefficient or F1-score, is a statistical measure used to gauge the similarity of two samples. It's particularly popular in image segmentation tasks and binary classification problems.
The Dice coefficient is defined as twice the intersection divided by the sum of the cardinalities of both sets:

Dice = 2|A ∩ B| / (|A| + |B|)

In terms of binary classification, with TP true positives, FP false positives, and FN false negatives:

Dice = 2·TP / (2·TP + FP + FN)

The Dice coefficient is identical to the F1-score, which is the harmonic mean of precision and recall:

F1 = 2 · (precision · recall) / (precision + recall)
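To make the equivalence between the Dice coefficient and the F1-score concrete, here is a small sketch. The counts tp = 3, fp = 0, fn = 1 are taken from the example used on this page; any other non-degenerate counts would show the same equality.

```python
# Dice coefficient vs. F1-score, computed from the same counts.
tp, fp, fn = 3, 0, 1  # counts from the example on this page

precision = tp / (tp + fp)
recall = tp / (tp + fn)

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)

# Dice = 2*TP / (2*TP + FP + FN)
dice = 2 * tp / (2 * tp + fp + fn)

print(round(f1, 3), round(dice, 3))  # both 0.857
```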
Consider two binary vectors:

y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1]

In this case TP = 3, FP = 0, and FN = 1, so:

Dice = 2 · 3 / (2 · 3 + 0 + 1) = 6/7 ≈ 0.857
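The arithmetic above can be checked numerically. This sketch uses the vectors from the problem statement and computes the intersection and sums directly with NumPy:

```python
import numpy as np

# Example vectors from the problem statement
y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1])

tp = np.logical_and(y_true, y_pred).sum()        # true positives: 3
dice = 2.0 * tp / (y_true.sum() + y_pred.sum())  # 6 / 7
print(round(dice, 3))  # 0.857
```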
The Dice score has the advantage of focusing on the positive class: true negatives do not appear in the formula, which makes it informative when classes are imbalanced. It is commonly used in image segmentation tasks and binary classification problems.
When implementing the Dice score, it's important to handle edge cases properly: when both sets are empty, the score is conventionally defined as 0.0 (this matches scikit-learn's behavior for the F1-score).
```python
import numpy as np

def dice_score(y_true, y_pred):
    intersection = np.logical_and(y_true, y_pred).sum()
    true_sum = y_true.sum()
    pred_sum = y_pred.sum()

    # Edge case: no positives in either array -> define the score as 0.0
    if true_sum == 0 or pred_sum == 0:
        return 0.0

    dice = (2.0 * intersection) / (true_sum + pred_sum)
    return round(float(dice), 3)
```
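A quick sanity check of the solution on the example from the problem statement and on the empty-sets edge case; the function is repeated here so the snippet runs on its own:

```python
import numpy as np

def dice_score(y_true, y_pred):
    # Same implementation as the solution above, repeated for a standalone run.
    intersection = np.logical_and(y_true, y_pred).sum()
    true_sum = y_true.sum()
    pred_sum = y_pred.sum()
    if true_sum == 0 or pred_sum == 0:
        return 0.0
    return round(float(2.0 * intersection / (true_sum + pred_sum)), 3)

# Example from the problem statement
print(dice_score(np.array([1, 1, 0, 1, 0, 1]),
                 np.array([1, 1, 0, 0, 0, 1])))  # 0.857

# Edge case: no positives anywhere -> 0.0 by convention
print(dice_score(np.array([0, 0, 0]), np.array([0, 0, 0])))  # 0.0
```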