
This is my pull request for issue #647 #649


Open

wants to merge 16 commits into base: develop

Conversation

@rosignol08 commented Feb 25, 2025

References to issues or other PRs

Describe the proposed changes

Additional information

Checklist before requesting a review

  • I have performed a self-review of my code
  • The code conforms to the style used in this package
  • The code is fully documented and typed (type-checked with Mypy)
  • I have added thorough tests for the new/changed functionality

rosignol08 and others added 7 commits February 10, 2025 17:36

codecov bot commented Mar 9, 2025

Codecov Report

Attention: Patch coverage is 48.64865% with 19 lines in your changes missing coverage. Please review.

Project coverage is 86.75%. Comparing base (2f48baa) to head (211cf07).
Report is 30 commits behind head on develop.

Files with missing lines   Patch %   Lines
skfda/misc/scoring.py      48.64%    19 Missing ⚠️
Additional details and impacted files
```
@@             Coverage Diff             @@
##           develop     #649      +/-   ##
===========================================
- Coverage    86.83%   86.75%   -0.09%
===========================================
  Files          157      157
  Lines        13522    13603      +81
===========================================
+ Hits         11742    11801      +59
- Misses        1780     1802      +22
```


@vnmabus (Member) left a comment

I think some tests are needed for this functionality, to check with some simple examples that we are computing the right values.

@rosignol08 (Author) commented Mar 29, 2025

Hello, I'm coming back after a little while of absence. I fixed the main problems, and for the tests I did this:

```
import numpy as np
from skfda.misc.scoring import root_mean_squared_error, root_mean_squared_log_error
from skfda.representation.grid import FDataGrid


def test_root_mean_squared_error_ndarray():
    # Case with np.ndarray
    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.1, 7.8])

    # Test without weights
    result = root_mean_squared_error(y_true, y_pred)
    expected_result = np.sqrt(np.mean((y_true - y_pred) ** 2))  # Expected value

    # Debugging: print intermediate results
    print(f"y_true: {y_true}, y_pred: {y_pred}")
    print(f"Manual RMSE calculation: {expected_result}")
    print(f"skfda RMSE result: {result}")

    #assert np.isclose(result, expected_result), f"Expected {expected_result}, but got {result}"

print("\ntest_root_mean_squared_error_ndarray\n")
test_root_mean_squared_error_ndarray()


def test_root_mean_squared_error_fdata():
    # Case with FData objects (generic example)
    # Assumes FDataGrid, FDataBasis or FDataIrregular have suitable methods
    y_true_fdata = FDataGrid(data_matrix=np.array([3.0, -0.5, 2.0, 7.0]))  # Generic example
    y_pred_fdata = FDataGrid(data_matrix=np.array([2.5, 0.0, 2.1, 7.8]))  # Generic example

    # Test for FData
    result = root_mean_squared_error(y_true_fdata, y_pred_fdata)
    expected_result = np.sqrt(np.mean((y_true_fdata.data_matrix - y_pred_fdata.data_matrix) ** 2))
    print(f"Manual RMSE calculation: {expected_result}")
    print(f"skfda RMSE result: {result}")
    #assert np.isclose(result, expected_result), f"Expected {expected_result}, but got {result}"

print("\ntest_root_mean_squared_error_fdata\n")
test_root_mean_squared_error_fdata()


def test_root_mean_squared_error_multioutput():
    # Case with multioutput
    y_true = np.array([[3.0, -0.5], [2.0, 7.0]])
    y_pred = np.array([[2.5, 0.0], [2.1, 7.8]])

    result = root_mean_squared_error(y_true, y_pred, multioutput='uniform_average')
    expected_result = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=None))  # Match skfda's behavior
    print(f"Manual RMSE calculation: {expected_result}")
    print(f"skfda RMSE result: {result}")
    #assert np.isclose(result, expected_result), f"Expected {expected_result}, but got {result}"

print("\ntest_root_mean_squared_error_multioutput\n")
test_root_mean_squared_error_multioutput()


def test_root_mean_squared_log_error():
    # Case with np.ndarray
    y_true = np.array([3.0, 0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.1, 7.8])

    # Test without weights
    result = root_mean_squared_log_error(y_true, y_pred)
    expected_result = np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))
    print(f"Manual RMSE calculation: {expected_result}")
    print(f"skfda RMSE result: {result}")
    #assert np.isclose(result, expected_result), f"Expected {expected_result}, but got {result}"

print("\ntest_root_mean_squared_log_error\n")
test_root_mean_squared_log_error()
```

It returns this:

```
test_root_mean_squared_error_ndarray

y_true: [ 3.  -0.5  2.   7. ], y_pred: [2.5 0.  2.1 7.8]
Manual RMSE calculation: 0.5361902647381803
skfda RMSE result: 0.5361902647381803

test_root_mean_squared_error_fdata

Manual RMSE calculation: 0.5361902647381803
skfda RMSE result: 0.47346242371139297

test_root_mean_squared_error_multioutput

Manual RMSE calculation: 0.5361902647381803
skfda RMSE result: 0.5361902647381803

test_root_mean_squared_log_error

Manual RMSE calculation: 0.21931244239925476
skfda RMSE result: 0.21931244239925476
```

This seems coherent, but I don't really understand why test_root_mean_squared_error_fdata gives a different result. I need to figure out why before redoing my merge request.
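One thing I still want to check (not verified yet): maybe for FData inputs the error is integrated over the function's domain instead of averaged over the grid points. A minimal sketch of how those two quantities differ, using a trapezoidal rule only as a stand-in for whatever quadrature skfda actually uses:

```
import numpy as np

# Same values as above, seen as one function sampled at grid points 0..3.
t = np.array([0.0, 1.0, 2.0, 3.0])
sq_err = (np.array([3.0, -0.5, 2.0, 7.0])
          - np.array([2.5, 0.0, 2.1, 7.8])) ** 2

# Pointwise mean over the grid values (what the manual calculation does).
pointwise = np.sqrt(sq_err.mean())

# Integral of the squared error over the domain, divided by its length
# (trapezoidal rule here; skfda may use a different quadrature).
integrated = np.sqrt(np.trapz(sq_err, t) / (t[-1] - t[0]))

print(pointwise)   # 0.5361902647381803
print(integrated)  # ~0.4848, already different from the pointwise mean
```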

@vnmabus (Member) left a comment

The changes look good.

Please, add your test functions to skfda\tests\test_scoring.py, so that they are tested with the rest of our tests. I think the ones you showed in the comments are already in the right format (they have a test_ prefix and use asserts) but you will need to remove the print statements. I will do a final round of review when the tests are in place.
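For instance, the first one could look something like this in skfda/tests/test_scoring.py (a sketch based on your snippet, with the assert enabled and the prints removed):

```
import numpy as np

from skfda.misc.scoring import root_mean_squared_error


def test_root_mean_squared_error_ndarray() -> None:
    """Check RMSE for ndarrays against a direct NumPy computation."""
    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.1, 7.8])

    result = root_mean_squared_error(y_true, y_pred)
    expected = np.sqrt(np.mean((y_true - y_pred) ** 2))

    assert np.isclose(result, expected)
```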

@rosignol08 (Author) commented

I have added the test functions now; I hope it's good.

```
    y_pred: np.ndarray,
    *,
    sample_weight: np.ndarray | None = None,
    multioutput: Literal['uniform_average'] = 'uniform_average',
```
@eliegoudout (Contributor) commented Apr 16, 2025

I think overloads should replace default values with ... to avoid giving the impression that they are taken into account (applies to every @overload).
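For instance, the overload shown above would become something like this (a sketch covering only the lines in the snippet; the real signatures in skfda/misc/scoring.py have more parameters, and the return type is shown here as float for the uniform_average case). Type checkers take the defaults from the implementation, not from the overloads, so writing ... makes that explicit:

```
from typing import Literal, overload

import numpy as np


@overload
def root_mean_squared_error(
    y_true: np.ndarray,
    y_pred: np.ndarray,
    *,
    sample_weight: np.ndarray | None = ...,
    multioutput: Literal['uniform_average'] = ...,
) -> float:
    ...
```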

@rosignol08 (Author) commented

I added it only for those lines.
