Commit b4143a1

simple changes
1 parent ea8ec15 commit b4143a1

5 files changed: +42 -45 lines changed


agenda.md

+6-4
@@ -1,7 +1,9 @@
-# This is a GitHub Python repo that users can install with pip. To use this repo, load the data using `data_loader.py`, train the model using the `trainer.py` file, and run inference with the trained model using the `inferencer.py` file. All the files should contain appropriate comments and docstrings for better readability of the code.
+# New agenda
 
-# The `data_loader.py` file should have three functionalities in its code: first, authenticate the user by prompting for the auth details of their Zerodha account; then ask for the stock symbol for which the user is using this repo; then download the data using the Zerodha Kite class and save it appropriately in a local file for further model-training use. The files `notebook62326ade97_1X.ipynb` and `inference_notebook-Copy1.ipynb` can be referenced for this.
+## This is a GitHub Python repo that users can install with pip. To use this repo, load the data using `data_loader.py`, train the model using the `trainer.py` file, and run inference with the trained model using the `inferencer.py` file. All the files should contain appropriate comments and docstrings for better readability of the code.
 
-# The `trainer.py` file should contain code for loading the local file saved by running `data_loader.py`, raising an assertion error if it is not found; asking the user for the transformer parameters for training the model; and one method that starts the training process with a live plot display and saves the plots. After training completes, the model should be saved locally with a well-defined name.
+## The `data_loader.py` file should have three functionalities in its code: first, authenticate the user by prompting for the auth details of their Zerodha account; then ask for the stock symbol for which the user is using this repo; then download the data using the Zerodha Kite class and save it appropriately in a local file for further model-training use. The files `notebook62326ade97_1X.ipynb` and `inference_notebook-Copy1.ipynb` can be referenced for this.
 
-# The `inferencer.py` file should contain code for loading the model saved by `trainer.py`. To run inference, it should ask the user to authenticate with Zerodha using the previous files if they are not already logged in, then prompt for the stock symbol to infer. After getting the name of the symbol the user wants to infer, it should fetch the data, load the model, run the inference code, and show the results in a plot and an analytical table.
+## The `trainer.py` file should contain code for loading the local file saved by running `data_loader.py`, raising an assertion error if it is not found; asking the user for the transformer parameters for training the model; and one method that starts the training process with a live plot display and saves the plots. After training completes, the model should be saved locally with a well-defined name.
+
+## The `inferencer.py` file should contain code for loading the model saved by `trainer.py`. To run inference, it should ask the user to authenticate with Zerodha using the previous files if they are not already logged in, then prompt for the stock symbol to infer. After getting the name of the symbol the user wants to infer, it should fetch the data, load the model, run the inference code, and show the results in a plot and an analytical table.
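The three steps the agenda describes for `data_loader.py` (authenticate, ask for a symbol, download and save) map naturally onto Zerodha's `kiteconnect` client. Below is a minimal sketch of that flow, assuming the `kiteconnect` and `joblib` packages; the function names, file layout, and the 30-day minute-candle window are illustrative choices, not the repo's actual implementation.

```python
# data_loader.py -- minimal sketch of the three steps described above.
import os
from datetime import datetime, timedelta

import joblib
from kiteconnect import KiteConnect


def authenticate(api_key: str, api_secret: str) -> KiteConnect:
    """Prompt the user for Zerodha auth details and return a live session."""
    kite = KiteConnect(api_key=api_key)
    print("Log in here and copy the request token:", kite.login_url())
    request_token = input("request_token: ")
    session = kite.generate_session(request_token, api_secret=api_secret)
    kite.set_access_token(session["access_token"])
    return kite


def load_data(kite: KiteConnect, symbol: str, save_dir: str = "./data") -> str:
    """Download ~30 days of minute candles for `symbol` and save them locally."""
    # Look up the instrument token for the symbol on the NSE segment.
    tokens = {i["tradingsymbol"]: i["instrument_token"] for i in kite.instruments("NSE")}
    to_date = datetime.now()
    from_date = to_date - timedelta(days=30)
    candles = kite.historical_data(tokens[symbol], from_date, to_date, "minute")
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, f"{symbol}.joblib")
    joblib.dump(candles, path)  # trainer.py can reload this with joblib.load(path)
    return path
```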

examples/example.ipynb

+31-37
@@ -80,7 +80,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 5,
 "metadata": {},
 "outputs": [
 {
@@ -90,13 +90,7 @@
 "C:\\Users\\heman\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\trendmaster\\trainer.py:111: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\utils\\tensor_new.cpp:210.)\n",
 " inputs = torch.FloatTensor([item[0] for item in batch]).to(self.device)\n",
 "C:\\Users\\heman\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\torch\\nn\\modules\\loss.py:529: UserWarning: Using a target size (torch.Size([1, 10])) that is different to the input size (torch.Size([1, 1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.\n",
-" return F.mse_loss(input, target, reduction=self.reduction)\n"
-]
-},
-{
-"name": "stderr",
-"output_type": "stream",
-"text": [
+" return F.mse_loss(input, target, reduction=self.reduction)\n",
 "C:\\Users\\heman\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\torch\\optim\\lr_scheduler.py:371: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.\n",
 " warnings.warn(\"To get the last learning rate computed by the scheduler, \"\n"
 ]
@@ -105,30 +99,30 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"| epoch 1 | 100/ 2425 batches | lr 0.000001 | 33.25 ms | loss 85.93818 | ppl 21012529671469285338037947109145051136.00\n",
-"| epoch 1 | 200/ 2425 batches | lr 0.000001 | 34.48 ms | loss 79.05340 | ppl 21500878653947704947460846699675648.00\n",
-"| epoch 1 | 300/ 2425 batches | lr 0.000001 | 30.11 ms | loss 57.26056 | ppl 7378087694598164585644032.00\n",
-"| epoch 1 | 400/ 2425 batches | lr 0.000001 | 32.50 ms | loss 7.85224 | ppl 2571.49\n",
-"| epoch 1 | 500/ 2425 batches | lr 0.000001 | 30.85 ms | loss 17.58041 | ppl 43159535.77\n",
-"| epoch 1 | 600/ 2425 batches | lr 0.000001 | 29.69 ms | loss 49.18059 | ppl 2284862028840607154176.00\n",
-"| epoch 1 | 700/ 2425 batches | lr 0.000001 | 35.00 ms | loss 64.30408 | ppl 8450941249314608014900068352.00\n",
-"| epoch 1 | 800/ 2425 batches | lr 0.000001 | 29.52 ms | loss 17.45296 | ppl 37994820.30\n",
-"| epoch 1 | 900/ 2425 batches | lr 0.000001 | 30.86 ms | loss 9.96337 | ppl 21234.34\n",
-"| epoch 1 | 1000/ 2425 batches | lr 0.000001 | 30.33 ms | loss 6.68095 | ppl 797.08\n",
-"| epoch 1 | 1100/ 2425 batches | lr 0.000001 | 29.80 ms | loss 76.66705 | ppl 1977309318356556931923982092861440.00\n",
-"| epoch 1 | 1200/ 2425 batches | lr 0.000001 | 30.73 ms | loss 55.55959 | ppl 1346549031319642426048512.00\n",
-"| epoch 1 | 1300/ 2425 batches | lr 0.000001 | 34.24 ms | loss 15.73517 | ppl 6818646.35\n",
-"| epoch 1 | 1400/ 2425 batches | lr 0.000001 | 31.29 ms | loss 2.80549 | ppl 16.54\n",
-"| epoch 1 | 1500/ 2425 batches | lr 0.000001 | 31.47 ms | loss 4.61758 | ppl 101.25\n",
-"| epoch 1 | 1600/ 2425 batches | lr 0.000001 | 30.07 ms | loss 17.29784 | ppl 32535318.04\n",
-"| epoch 1 | 1700/ 2425 batches | lr 0.000001 | 32.03 ms | loss 4.81263 | ppl 123.06\n",
-"| epoch 1 | 1800/ 2425 batches | lr 0.000001 | 31.33 ms | loss 3.37098 | ppl 29.11\n",
-"| epoch 1 | 1900/ 2425 batches | lr 0.000001 | 32.92 ms | loss 4.62711 | ppl 102.22\n",
-"| epoch 1 | 2000/ 2425 batches | lr 0.000001 | 29.95 ms | loss 63.15213 | ppl 2670677042449227309505314816.00\n",
-"| epoch 1 | 2100/ 2425 batches | lr 0.000001 | 31.34 ms | loss 19.06731 | ppl 190908925.16\n",
-"| epoch 1 | 2200/ 2425 batches | lr 0.000001 | 32.46 ms | loss 7.35343 | ppl 1561.54\n",
-"| epoch 1 | 2300/ 2425 batches | lr 0.000001 | 31.90 ms | loss 2.26775 | ppl 9.66\n",
-"| epoch 1 | 2400/ 2425 batches | lr 0.000001 | 33.42 ms | loss 2.16332 | ppl 8.70\n"
+"| epoch 1 | 100/ 2425 batches | lr 0.000001 | 34.04 ms | loss 28.42016 | ppl 2201500551498.64\n",
+"| epoch 1 | 200/ 2425 batches | lr 0.000001 | 37.88 ms | loss 27.48427 | ppl 863512891626.25\n",
+"| epoch 1 | 300/ 2425 batches | lr 0.000001 | 31.47 ms | loss 16.79133 | ppl 19605643.05\n",
+"| epoch 1 | 400/ 2425 batches | lr 0.000001 | 31.75 ms | loss 2.42401 | ppl 11.29\n",
+"| epoch 1 | 500/ 2425 batches | lr 0.000001 | 35.81 ms | loss 4.50919 | ppl 90.85\n",
+"| epoch 1 | 600/ 2425 batches | lr 0.000001 | 31.02 ms | loss 52.29151 | ppl 51275971459472947150848.00\n",
+"| epoch 1 | 700/ 2425 batches | lr 0.000001 | 31.18 ms | loss 44.59446 | ppl 23287759048583860224.00\n",
+"| epoch 1 | 800/ 2425 batches | lr 0.000001 | 30.21 ms | loss 1.99442 | ppl 7.35\n",
+"| epoch 1 | 900/ 2425 batches | lr 0.000001 | 34.87 ms | loss 0.99142 | ppl 2.70\n",
+"| epoch 1 | 1000/ 2425 batches | lr 0.000001 | 31.98 ms | loss 4.26460 | ppl 71.14\n",
+"| epoch 1 | 1100/ 2425 batches | lr 0.000001 | 31.67 ms | loss 42.18362 | ppl 2089848714934452480.00\n",
+"| epoch 1 | 1200/ 2425 batches | lr 0.000001 | 36.95 ms | loss 30.01062 | ppl 10800592971091.07\n",
+"| epoch 1 | 1300/ 2425 batches | lr 0.000001 | 32.27 ms | loss 7.96660 | ppl 2883.03\n",
+"| epoch 1 | 1400/ 2425 batches | lr 0.000001 | 31.94 ms | loss 2.51443 | ppl 12.36\n",
+"| epoch 1 | 1500/ 2425 batches | lr 0.000001 | 31.70 ms | loss 6.14128 | ppl 464.65\n",
+"| epoch 1 | 1600/ 2425 batches | lr 0.000001 | 35.89 ms | loss 12.71198 | ppl 331697.28\n",
+"| epoch 1 | 1700/ 2425 batches | lr 0.000001 | 36.29 ms | loss 9.38306 | ppl 11885.27\n",
+"| epoch 1 | 1800/ 2425 batches | lr 0.000001 | 32.61 ms | loss 2.67167 | ppl 14.46\n",
+"| epoch 1 | 1900/ 2425 batches | lr 0.000001 | 30.92 ms | loss 6.04993 | ppl 424.08\n",
+"| epoch 1 | 2000/ 2425 batches | lr 0.000001 | 32.00 ms | loss 53.79560 | ppl 230744611081473279131648.00\n",
+"| epoch 1 | 2100/ 2425 batches | lr 0.000001 | 30.12 ms | loss 6.98467 | ppl 1079.95\n",
+"| epoch 1 | 2200/ 2425 batches | lr 0.000001 | 31.83 ms | loss 2.47668 | ppl 11.90\n",
+"| epoch 1 | 2300/ 2425 batches | lr 0.000001 | 35.42 ms | loss 2.59819 | ppl 13.44\n",
+"| epoch 1 | 2400/ 2425 batches | lr 0.000001 | 31.30 ms | loss 3.28963 | ppl 26.83\n"
 ]
 }
 ],
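A note for reading the log above: the `ppl` column is just the perplexity exp(loss), which is why batches whose loss spikes above roughly 20 print astronomical perplexities while well-behaved batches stay in single or double digits. A quick check:

```python
import math

# ppl = exp(loss): two entries from the updated log above
print(math.exp(2.42401))   # ~11.29    -> matches "loss 2.42401 | ppl 11.29"
print(math.exp(28.42016))  # ~2.20e12  -> matches "loss 28.42016 | ppl 2201500551498.64"
```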
@@ -166,16 +160,16 @@
 "metadata": {},
 "outputs": [
 {
-"ename": "ValueError",
-"evalue": "too many dimensions 'str'",
+"ename": "TypeError",
+"evalue": "must be real number, not Timestamp",
 "output_type": "error",
 "traceback": [
 "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
-"\u001b[1;31mValueError\u001b[0m                                Traceback (most recent call last)",
+"\u001b[1;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
 "Input \u001b[1;32mIn [7]\u001b[0m, in \u001b[0;36m<cell line: 5>\u001b[1;34m()\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtrendmaster\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01minferencer\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Inferencer\n\u001b[0;32m 4\u001b[0m inferencer \u001b[38;5;241m=\u001b[39m Inferencer(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m./models/SBIN_model.pkl\u001b[39m\u001b[38;5;124m'\u001b[39m, kite\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m----> 5\u001b[0m predictions \u001b[38;5;241m=\u001b[39m \u001b[43minferencer\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpredict_future\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdata\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mfuture_steps\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m10\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43msymbol\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[38;5;124;43mSBIN\u001b[39;49m\u001b[38;5;124;43m'\u001b[39;49m\u001b[43m)\u001b[49m\n",
-"File \u001b[1;32m~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\trendmaster\\inferencer.py:62\u001b[0m, in \u001b[0;36mInferencer.predict_future\u001b[1;34m(self, val_data, future_steps, symbol)\u001b[0m\n\u001b[0;32m 60\u001b[0m test_result \u001b[38;5;241m=\u001b[39m torch\u001b[38;5;241m.\u001b[39mTensor(\u001b[38;5;241m0\u001b[39m) \n\u001b[0;32m 61\u001b[0m truth \u001b[38;5;241m=\u001b[39m torch\u001b[38;5;241m.\u001b[39mTensor(\u001b[38;5;241m0\u001b[39m)\n\u001b[1;32m---> 62\u001b[0m _ , data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_batch\u001b[49m\u001b[43m(\u001b[49m\u001b[43mval_data\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m,\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 63\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m torch\u001b[38;5;241m.\u001b[39mno_grad():\n\u001b[0;32m 64\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m i \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(\u001b[38;5;241m0\u001b[39m, future_steps,\u001b[38;5;241m1\u001b[39m):\n",
+"File \u001b[1;32m~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\trendmaster\\inferencer.py:63\u001b[0m, in \u001b[0;36mInferencer.predict_future\u001b[1;34m(self, val_data, future_steps, symbol)\u001b[0m\n\u001b[0;32m 61\u001b[0m truth \u001b[38;5;241m=\u001b[39m torch\u001b[38;5;241m.\u001b[39mTensor(\u001b[38;5;241m0\u001b[39m)\n\u001b[0;32m 62\u001b[0m val_data \u001b[38;5;241m=\u001b[39m val_data\u001b[38;5;241m.\u001b[39mvalues \u001b[38;5;66;03m# Convert pandas DataFrame to numpy array\u001b[39;00m\n\u001b[1;32m---> 63\u001b[0m _ , data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_batch\u001b[49m\u001b[43m(\u001b[49m\u001b[43mval_data\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m,\u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 64\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m torch\u001b[38;5;241m.\u001b[39mno_grad():\n\u001b[0;32m 65\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m i \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(\u001b[38;5;241m0\u001b[39m, future_steps,\u001b[38;5;241m1\u001b[39m):\n",
 "File \u001b[1;32m~\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\trendmaster\\inferencer.py:121\u001b[0m, in \u001b[0;36mInferencer.get_batch\u001b[1;34m(self, source, i, batch_size)\u001b[0m\n\u001b[0;32m 119\u001b[0m seq_len \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mmin\u001b[39m(batch_size, \u001b[38;5;28mlen\u001b[39m(source) \u001b[38;5;241m-\u001b[39m \u001b[38;5;241m1\u001b[39m \u001b[38;5;241m-\u001b[39m i)\n\u001b[0;32m 120\u001b[0m batch \u001b[38;5;241m=\u001b[39m source[i:i\u001b[38;5;241m+\u001b[39mseq_len]\n\u001b[1;32m--> 121\u001b[0m inputs \u001b[38;5;241m=\u001b[39m \u001b[43mtorch\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mFloatTensor\u001b[49m\u001b[43m(\u001b[49m\u001b[43m[\u001b[49m\u001b[43mitem\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mfor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mitem\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mbatch\u001b[49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39mto(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdevice)\n\u001b[0;32m 122\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m inputs\n",
-"\u001b[1;31mValueError\u001b[0m: too many dimensions 'str'"
+"\u001b[1;31mTypeError\u001b[0m: must be real number, not Timestamp"
 ]
 }
 ],
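The updated traceback explains the error change: the new line in `inferencer.py` converts the whole DataFrame with `val_data.values`, so a datetime column ends up in the batch and `torch.FloatTensor` rejects the pandas `Timestamp` objects. Below is a minimal sketch of a guard, assuming the frame's numeric columns are the intended model input; the helper name is hypothetical, not the library's API.

```python
import pandas as pd
import torch


def to_model_input(val_data: pd.DataFrame) -> torch.Tensor:
    """Hypothetical helper: keep only numeric columns before building tensors."""
    numeric = val_data.select_dtypes(include="number")  # drops date/Timestamp columns
    # Converting a single ndarray (rather than a list of arrays) also avoids the
    # "creating a tensor from a list of numpy.ndarrays is extremely slow"
    # warning seen earlier in this notebook.
    return torch.FloatTensor(numeric.to_numpy())
```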

examples/models/SBIN_model.pkl

0 Bytes
Binary file not shown.

new_agenda.md

+4-3
@@ -1,10 +1,11 @@
-# In this we want to combine all the files of this repo in the folder `/trendmaster` into one file, merging all the individual classes present in those files.
+# In this we want to combine all the files of this repo in the folder `/trendmaster` into one file, merging all the individual classes present in those files
+
+## We want the combined class to have the following methods
 
-# We want the combined class to have the following methods.
 1: authenticate : This method checks whether the user has already saved a Zerodha Kite object using the joblib library. If yes, it loads it and checks that the authentication is still valid; if not, the user should log in using the Zerodha code given in the current files, and the object should be saved locally using joblib so that it can be loaded again for authentication the next time this program runs.
 
 2 : load_data : The argument should be the symbol name of a stock that the user can enter; the data is then fetched through the authentication object that has Zerodha access, after which it is saved locally under the stock's symbol name using the joblib library. This should first check whether the user is logged in, and if not, run the authenticate method first.
 
 3 : train_model : The symbol name of a stock and the transformer training parameters should be entered as arguments. It should then check for locally saved data for the given symbol in the save directory; if not available, it should run the load_data function to get the last 30 days of data for that symbol. If the data is found in the appropriate directory, training starts with the given transformer parameters. After training completes, it should save the model to the models directory, named after the symbol of the given stock.
 
-4 : inferance_model : The arguments should be the symbol name of the stock for which we want to run the prediction model and the date-time range for which we are predicting the future values. The range should be read as running from the current date and time up to the user-entered "to" date and time. After getting this from the arguments, it should load the authentication object saved locally with joblib and check whether the user is already authenticated; if not, it should run the authenticate method. It should then load the model for the given symbol from the models directory and run inference with the given data and arguments. Finally, it should show appropriate graphs and tabular data to analyze the results.
+4 : inferance_model : The arguments should be the symbol name of the stock for which we want to run the prediction model and the date-time range for which we are predicting the future values. The range should be read as running from the current date and time up to the user-entered "to" date and time. After getting this from the arguments, it should load the authentication object saved locally with joblib and check whether the user is already authenticated; if not, it should run the authenticate method. It should then load the model for the given symbol from the models directory and run inference with the given data and arguments. Finally, it should show appropriate graphs and tabular data to analyze the results
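Put together, the four methods describe a single facade class. A skeleton of that shape is sketched below; the class name is assumed, the `...` bodies are placeholders to be filled in from the existing files, and `inferance_model` keeps the agenda's own spelling.

```python
import os

import joblib


class TrendMaster:
    """Sketch of the combined class; bodies marked `...` are placeholders."""

    def __init__(self, save_dir="./data", models_dir="./models"):
        self.save_dir = save_dir
        self.models_dir = models_dir
        self.kite = None

    def authenticate(self):
        """Load a cached Kite session via joblib, or log in and cache one."""
        session_path = os.path.join(self.save_dir, "kite_session.joblib")
        if os.path.exists(session_path):
            self.kite = joblib.load(session_path)  # reuse the saved session if valid
        else:
            self.kite = ...  # run the Zerodha login flow from the existing files
            joblib.dump(self.kite, session_path)
        return self.kite

    def load_data(self, symbol):
        """Fetch data for `symbol` through the Kite session and save it locally."""
        if self.kite is None:
            self.authenticate()
        data = ...  # e.g. kite.historical_data(...) for the last 30 days
        joblib.dump(data, os.path.join(self.save_dir, f"{symbol}.joblib"))
        return data

    def train_model(self, symbol, **transformer_params):
        """Train on locally saved data, fetching it first if missing."""
        path = os.path.join(self.save_dir, f"{symbol}.joblib")
        data = joblib.load(path) if os.path.exists(path) else self.load_data(symbol)
        model = ...  # training loop with the given transformer parameters
        joblib.dump(model, os.path.join(self.models_dir, f"{symbol}_model.pkl"))
        return model

    def inferance_model(self, symbol, to_datetime):
        """Predict from now until `to_datetime`, then plot and tabulate results."""
        if self.kite is None:
            self.authenticate()
        model = joblib.load(os.path.join(self.models_dir, f"{symbol}_model.pkl"))
        ...  # run inference for [now, to_datetime] and show graphs + a table
```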
