
Probabilities of Feasibility for Classifier based constraints in Acquisition Functions #2776


Closed

Conversation

jduerholt
Collaborator

Motivation

Using classifiers as output constraints in MC-based acquisition functions is a topic discussed at least in #725 and in #2700. The current solution is to take the probabilities from the classifier and to unsigmoid them. This is a very unintuitive approach, especially as sometimes a sigmoid and sometimes a fatmoid is used. This PR introduces a new attribute for SampleReducingMCAcquisitionFunctions named probabilities_of_feasibility that expects a callable returning a tensor of values between zero and one, where one means feasible and zero means infeasible.
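To make the intended convention concrete, here is a minimal plain-PyTorch sketch; the function name `prob_of_feasibility` and the exact shape convention are illustrative assumptions, not the PR's final API:

```python
import torch


def prob_of_feasibility(samples: torch.Tensor) -> torch.Tensor:
    # Hypothetical classifier surrogate: maps posterior samples of shape
    # `sample_shape x batch_shape x q x m` to feasibility probabilities in
    # [0, 1] of shape `sample_shape x batch_shape x q` (1 = feasible).
    return torch.sigmoid(samples[..., -1])


samples = torch.randn(4, 2, 3, 2)  # 4 MC samples, batch of 2, q=3, m=2 outputs
obj = samples[..., 0]              # per-sample objective values
probs = prob_of_feasibility(samples)
weighted = obj * probs             # feasibility-weighted per-sample objective
```

The key point is that, unlike the `constraints` path, the callable's output is multiplied into the per-sample objective directly, with no sigmoid or fatmoid applied on top.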

Currently, it is only implemented in the abstract SampleReducingMCAcquisitionFunction using the additional attribute. As the constraints argument is just a special case of the probabilities_of_feasibility argument, in which the output of the callable is not applied directly to the objective but further processed by a sigmoid or fatmoid, one could also think about uniting both functionalities into one argument and modifying fat to List[bool | None] | bool, indicating whether a fatmoid, a sigmoid, or nothing is applied. When the user provides just a bool, either a fatmoid or a sigmoid is applied to all constraints. This approach would also have the advantage that only compute_smoothed_feasibility_indicator needs to be modified and almost nothing for the individual acqfs (besides updating the types for constraints). Furthermore, it follows the approach that we took when we implemented individual etas for the constraints. So I would favor this one over the one actually outlined in the code ;) I am looking forward to your ideas on this.
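A rough sketch of what the unified per-constraint handling could look like (the function body is a stand-in for illustration only; in particular, the fat-tailed branch uses a placeholder, not BoTorch's actual `fatmoid`):

```python
import torch


def smoothed_feasibility(con_vals, fat, eta=1e-3):
    """Sketch: `con_vals` is a list of constraint tensors (negative = feasible,
    the usual convention); `fat` is a single bool or a list of bool | None,
    one per constraint. True -> fat-tailed approximation, False -> sigmoid,
    None -> values are already probabilities of feasibility in [0, 1]."""
    if isinstance(fat, bool):
        fat = [fat] * len(con_vals)
    ind = torch.ones_like(con_vals[0])
    for c, f in zip(con_vals, fat):
        if f is None:
            ind = ind * c  # already a probability, used as-is
        elif f:
            # placeholder for a fat-tailed sigmoid (BoTorch uses `fatmoid`)
            ind = ind * 0.5 * (1 - (c / eta) / (1 + (c / eta).abs()))
        else:
            ind = ind * torch.sigmoid(-c / eta)
    return ind


# one smoothed constraint, one classifier that already returns probabilities:
ind = smoothed_feasibility(
    [torch.tensor([-10.0, 10.0]), torch.tensor([0.7, 0.2])], fat=[False, None]
)
```

With `fat=[False, None]`, the first entry (feasible constraint value) is pushed toward 1 by the sigmoid and the second factor passes the classifier probability through unchanged, so the combined indicator is roughly the classifier probability itself.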

@SebastianAment: In #2700, you mention that from your perspective probabilities_of_feasibility would not be applied on a per-sample basis as the regular constraints are. Why? Also, in the community notebook by @FrankWanger using the unsigmoid trick, it is applied on a per-sample basis. I would keep it on the per-sample basis, and if a classifier for some reason does not return the probabilities on a per-sample basis, it would be the task of the user to expand the tensor accordingly. What do you think?
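Expanding to the per-sample basis is cheap in any case; a small sketch of what "the user expands the tensor accordingly" could mean (shapes here are illustrative assumptions):

```python
import torch

# A classifier that scores design points (rather than posterior samples)
# returns one probability per point, e.g. shape `batch_shape x q`. To use it
# on a per-sample basis, it can be broadcast over the MC sample dimension:
probs = torch.tensor([[0.9, 0.1, 0.5]])  # batch_shape=1, q=3
sample_shape = torch.Size([4])           # 4 MC samples
per_sample = probs.expand(*sample_shape, *probs.shape)  # view, no copy
```

`expand` creates a broadcast view, so the same probabilities are shared across all MC samples without duplicating memory.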

Have you read the Contributing Guidelines on pull requests?

Yes.

Test Plan

Unit test, but not yet added due to the draft status and pending architecture choices.

cc: @Balandat

@facebook-github-bot facebook-github-bot added the CLA Signed Do not delete this pull request or issue due to inactivity. label Mar 19, 2025
Contributor

@Balandat Balandat left a comment


from your perspective probabilities_of_feasibility would not be applied on a per-sample basis as the regular constraints are. Why?

Yeah I think it's just a question of what the convention should be for the probabilities_of_feasibility return value - a docstring explaining input and output shapes would help :)

As the constraints argument is just a special case of the probabilities_of_feasibility argument, in which the output of the callable is not applied directly to the objective but further processed by a sigmoid or fatmoid, one could also think about uniting both functionalities into one argument and modifying fat to List[bool | None] | bool, indicating whether a fatmoid, a sigmoid, or nothing is applied. When the user provides just a bool, either a fatmoid or a sigmoid is applied to all constraints. This approach would also have the advantage that only compute_smoothed_feasibility_indicator needs to be modified and almost nothing for the individual acqfs (besides updating the types for constraints).

This makes sense to me. We'll want to make sure to document that well in the docstrings and probably also have an example for this we can point to.

@jduerholt jduerholt marked this pull request as ready for review March 27, 2025 15:11
@jduerholt
Collaborator Author

Hi @Balandat,

thanks for your review. I updated the PR along the lines of my suggestion from above, including updates to the docstrings, but so far only on SampleReducingMCAcquisitionFunction. I will update the docstrings on the derived classes once we have agreed on the basic implementation and docstrings ;)

I also updated the community notebook with the classifier to the new implementation.

Best,

Johannes

@jduerholt jduerholt requested a review from Balandat March 27, 2025 15:14
Contributor

@Balandat Balandat left a comment


@jduerholt Thanks for the update. This design makes sense to me. There is some overloading going on here, but this is a pretty advanced usage so I'm not too worried about the interface.

Curious if @SebastianAment has any additional thoughts.

@jduerholt
Collaborator Author

Hi @Balandat,

thanks for reviewing. I implemented the suggested changes.

Could you have a look at why the test pipeline is no longer starting automatically? Or is this a new Meta policy that an employee has to approve the runs before they start?

Best,

Johannes

@jduerholt jduerholt requested a review from Balandat April 2, 2025 14:10

codecov bot commented Apr 2, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.98%. Comparing base (44b6400) to head (1c95c9c).
Report is 26 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2776      +/-   ##
==========================================
- Coverage   99.99%   99.98%   -0.02%     
==========================================
  Files         202      207       +5     
  Lines       18508    18701     +193     
==========================================
+ Hits        18507    18698     +191     
- Misses          1        3       +2     


@jduerholt
Collaborator Author

Hmm, I am not sure what the reason for the codecov/project error is ...

@Balandat
Contributor

Balandat commented Apr 2, 2025

Could you have a look why the test pipeline is not anymore starting automatically? Or is this a new Meta policy that an employee has to approve them before running?

Yeah, this is unfortunately currently the state of affairs. There are some potential security concerns with doing this on self-hosted (non-GitHub) runners, and as a result the ability to auto-run CI on new commits was disabled across the whole pytorch GitHub org (even though the self-hosted runner concern does not apply to botorch). Let me see if I can find a solution for this.

Contributor

@Balandat Balandat left a comment


Excellent, thanks for contributing this!

Hmm, I am not sure what the reason for the codecov/project error is ...

It seems we had a regression in coverage. This is unrelated to your PR.

@facebook-github-bot
Contributor

@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Contributor

@saitcakmak saitcakmak left a comment


This seems like a clean way to support both use cases. Thanks for thinking this through!

@facebook-github-bot
Contributor

@Balandat merged this pull request in 6565d3f.

Labels
CLA Signed Do not delete this pull request or issue due to inactivity. Merged
4 participants