Summary table #442
Conversation
@jzhang18 Mind taking a quick look?

@vikram-narayan Sure, I will read it ASAP.

@vikram-narayan It looks good to me! Thanks!

Thanks @vikram-narayan. I think we should have in-sample and out-of-sample broken out separately if the user passes a live start date.

Gotcha. @gusgordon, should I add that to the other plots and the main perf attrib tearsheet? In a different PR, @twiecki said:

@vikram-narayan Right, so we just run the perf_attrib function as normal, then slice whatever it outputs on the live start date. I definitely want it broken out in the summary table (All, In-sample, Out-of-sample), but I'm not sure about the plots. Might be nice to have some sort of marker for the live start date, like a vertical line? I would say whatever you guys think is useful and looks OK.
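A minimal sketch of the slicing described above, assuming the perf attrib output has a `DatetimeIndex`; the helper name `split_in_out_of_sample` and the `live_start_date` parameter are hypothetical, not pyfolio API:

```python
import pandas as pd

def split_in_out_of_sample(perf_attrib_output, live_start_date):
    """Slice a date-indexed perf attrib output into the three views
    requested for the summary table: All, In-sample, Out-of-sample."""
    live_start_date = pd.Timestamp(live_start_date)
    return {
        'All': perf_attrib_output,
        'In-sample': perf_attrib_output[
            perf_attrib_output.index < live_start_date],
        'Out-of-sample': perf_attrib_output[
            perf_attrib_output.index >= live_start_date],
    }
```

Each slice could then be passed through the same stats/plotting code, with the live start date drawn as a vertical line on the plots.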
I think this looks good otherwise. @gusgordon's comments are great, but we can leave those for a separate PR too.
pyfolio/bayesian.py (Outdated)

@@ -86,7 +86,7 @@ def model_returns_t_alpha_beta(data, bmark, samples=2000):
             mu=mu_reg,
             sd=sigma,
             observed=y)
-    trace = pm.sample(samples)
+    trace = pm.sample(samples, progressbar=False)
This seems to have fixed the issue, but it's not a proper fix. Can you move this up to be a kwarg of the function, defaulting to True, and set it to False in the unit test?
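The suggested refactor might look like the sketch below. The signature is taken from the diff above; the `sample` stand-in (which just records its arguments) replaces `pm.sample` only to keep the sketch self-contained:

```python
def sample(draws, progressbar=True):
    # Stand-in for pm.sample, used only to make this sketch runnable:
    # it records the arguments it was called with.
    return {'draws': draws, 'progressbar': progressbar}

def model_returns_t_alpha_beta(data, bmark, samples=2000, progressbar=True):
    # Model construction elided; the point is that `progressbar` is now
    # a kwarg of the function (defaulting to True) and is threaded
    # through to the sampler, so a unit test can pass progressbar=False.
    trace = sample(samples, progressbar=progressbar)
    return trace
```

A test would then call `model_returns_t_alpha_beta(data, bmark, progressbar=False)` instead of hard-coding the flag at the sampling call.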
LGTM, just had a couple minor suggestions
@@ -14,10 +14,11 @@
 # limitations under the License.
 from __future__ import division

+from collections import OrderedDict
This should be grouped with the standard library imports.
@@ -164,46 +169,63 @@ def show_perf_attrib_stats(returns, positions, factor_returns,
         pos_in_dollars=pos_in_dollars,
     )

-    perf_attrib_stats = create_perf_attrib_stats(perf_attrib_data)
+    perf_attrib_stats, risk_exposure_stats = \
+        create_perf_attrib_stats(perf_attrib_data, risk_exposures)
Thoughts on having perf_attrib() and create_perf_attrib_stats() return things in the same order, for consistency? I.e., changing this to risk_exposure_stats, perf_attrib_stats = ...
Would it be good, on PRs that involve new (or changed) charts and tables in pyfolio (and alphalens), to include screenshots of them in the PR, like we do for UI work?

Yes it would :)
Add an optional cost param to plot_returns. If present, total_returns less cost is plotted.
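A sketch of the data side of that request. The helper name `curves_for_plot_returns` is hypothetical, and `cost` is the proposed optional parameter, not existing pyfolio API; a plotting function would simply draw each returned series:

```python
import pandas as pd

def curves_for_plot_returns(total_returns, cost=None):
    """Return the series that plot_returns would draw: always
    total_returns, and, when a cost series is supplied, also
    total_returns less cost as requested above."""
    curves = {'total_returns': total_returns}
    if cost is not None:
        curves['total_returns_less_cost'] = total_returns - cost
    return curves
```

Keeping `cost=None` as the default leaves existing callers of plot_returns unaffected.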