We are running an experiment on a mechanical system to determine the best parameter values while minimizing computation time. Our experiment uses vanilla Bayesian optimization (GPEI) with batch trials, scaled to the problem's dimensionality; we vary the batch size per trial while keeping the total number of evaluations fixed at 100.
Setup:
Batch sizes tested: 1, 2, 3, 4, and 5
The mechanical system only allows sequential execution due to design constraints
Final parameter values across all batch sizes are comparable
Observations on Computation Time:
Batch size 1: 100.1 min
Batch size 2: 99.9 min
Batch size 3: 98.21 min (lowest)
Batch size 4: 110.51 min (increase)
Batch size 5: 120.21 min (highest)
We noticed that computation time initially decreased from batch size 1 to 3 but then increased significantly from batch size 4 to 5. Given that all trials execute sequentially on the mechanical system, we are trying to understand the reason behind this pattern.
Question:
Why does the computation time first decrease from batch size 1 to 3 and then increase from batch size 4 to 5, despite comparable final parameter values? Could this be due to system-specific overhead, memory constraints, or some inefficiency in batch processing?
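To make the trade-off we suspect concrete, here is a purely hypothetical cost model (the constants and the superlinear `q ** 1.5` acquisition term are illustrative assumptions, not Ax internals): with batch size q and a fixed evaluation budget, the surrogate is refit fewer times, but each joint q-point acquisition optimization gets more expensive.

```python
import math

def estimated_overhead(batch_size, total_evals=100,
                       fit_cost=1.0, acq_cost_per_arm=0.4):
    """Hypothetical cost model (illustration only): with q arms per
    trial, the model is refit ~total_evals/q times, while each joint
    acquisition optimization is assumed to grow superlinearly in q."""
    n_gen_steps = math.ceil(total_evals / batch_size)
    # Assumed superlinear growth of joint acquisition optimization in q.
    acq_cost = acq_cost_per_arm * batch_size ** 1.5
    return n_gen_steps * (fit_cost + acq_cost)

for q in range(1, 6):
    print(f"batch_size={q}: modeled overhead {estimated_overhead(q):.2f}")
```

Under these made-up constants the modeled overhead is U-shaped with a minimum near batch size 3, which is the qualitative pattern we observed; we are not claiming these are Ax's actual costs.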
Any insights or explanations would be greatly appreciated!
Please provide any relevant code snippet if applicable.
Code of Conduct
I agree to follow this Ax's Code of Conduct
What's the variance in these observations if you were to repeatedly run the optimization? It seems to me that it will likely be high enough that the observations here are subject to a lot of noise; you can't really conclude anything from these numbers without running more replications to estimate the average generation time.
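A minimal replication harness for that, using plain stdlib timing (`run_once` is a placeholder for one full 100-evaluation optimization at a fixed batch size; swap in your actual Ax loop):

```python
import statistics
import time

def time_replications(run_once, n_reps=5):
    """Run the optimization loop n_reps times and report the mean and
    standard deviation of wall-clock time, so run-to-run noise can be
    separated from a real batch-size effect."""
    times = []
    for _ in range(n_reps):
        t0 = time.perf_counter()
        run_once()
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)

# Placeholder workload standing in for one optimization run.
mean_t, sd_t = time_replications(lambda: sum(i * i for i in range(10_000)))
print(f"mean={mean_t:.4f}s sd={sd_t:.4f}s")
```

If the standard deviation across replications at a single batch size is comparable to the ~10–20 minute differences you saw between batch sizes, the U-shape may just be noise.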