In this task, you are required to implement a function rmse(y_true, y_pred)
that calculates the Root Mean Square Error (RMSE) between the actual values and the predicted values. RMSE is a commonly used metric for evaluating the accuracy of regression models, providing insight into the standard deviation of residuals.
Your Task: Implement the function rmse(y_true, y_pred)
to return the RMSE value rounded to three decimal places. Ensure your function handles edge cases such as mismatched array shapes and empty arrays appropriately.
Example:

```python
y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])
print(rmse(y_true, y_pred))
```

Output: 0.612
RMSE is used to measure the accuracy of predictions in regression models. It summarizes the typical magnitude of the difference between the predictions and the actual values; in other words, it is the standard deviation of the residuals, or prediction errors.
The RMSE is defined as:
\[ \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_{\text{true},i} - y_{\text{pred},i}\right)^2} \]

where:
- \(n\) is the number of samples,
- \(y_{\text{true},i}\) is the actual value of the \(i\)-th sample,
- \(y_{\text{pred},i}\) is the predicted value of the \(i\)-th sample.
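Applying this to the example above, the residuals are 0.5, -0.5, 0, and -1, so:

\[ \text{RMSE} = \sqrt{\frac{0.5^2 + (-0.5)^2 + 0^2 + (-1)^2}{4}} = \sqrt{0.375} \approx 0.612 \]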
RMSE is generally used when large deviations/errors are more problematic and should be penalized more heavily. MAE, on the other hand, is used when errors should be treated equally, regardless of their size.
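To illustrate that difference, here is a minimal NumPy sketch (the helper names mae and rmse_raw are chosen only for this illustration and are not part of the required solution). Two prediction vectors have the same total absolute error, but one concentrates the error in a single large miss:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
even_errors = y_true + np.array([1.0, 1.0, 1.0, 1.0])    # four errors of 1
one_big_error = y_true + np.array([0.0, 0.0, 0.0, 4.0])  # one error of 4

def mae(a, b):
    # Mean absolute error: treats every error equally.
    return np.mean(np.abs(a - b))

def rmse_raw(a, b):
    # Root mean square error: squares residuals, so large errors dominate.
    return np.sqrt(np.mean((a - b) ** 2))

print(mae(y_true, even_errors), mae(y_true, one_big_error))            # 1.0 1.0
print(rmse_raw(y_true, even_errors), rmse_raw(y_true, one_big_error))  # 1.0 2.0
```

MAE is identical (1.0) in both cases, while RMSE doubles when the same total error is concentrated in one large deviation.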
```python
import numpy as np

def rmse(y_true, y_pred):
    # Both arrays must have the same shape for element-wise comparison.
    if y_true.shape != y_pred.shape:
        raise ValueError("Arrays must have the same shape")
    # RMSE is undefined for empty arrays (the mean would divide by zero).
    if y_true.size == 0:
        raise ValueError("Arrays cannot be empty")
    # Square the residuals, average them, take the square root, round to 3 decimals.
    return round(np.sqrt(np.mean((y_true - y_pred) ** 2)), 3)
```
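As a quick sanity check (an illustrative usage snippet, not part of the required solution), the function reproduces the expected output on the example from the problem statement and raises errors for the two edge cases:

```python
y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])
print(rmse(y_true, y_pred))  # 0.612

# Mismatched shapes raise a ValueError.
try:
    rmse(np.array([1, 2, 3]), np.array([1, 2]))
except ValueError as e:
    print(e)  # Arrays must have the same shape

# Empty arrays raise a ValueError.
try:
    rmse(np.array([]), np.array([]))
except ValueError as e:
    print(e)  # Arrays cannot be empty
```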