
Commit 17f8a5d

authored
Add files via upload
1 parent e22e905 commit 17f8a5d

13 files changed: +125,049 −0 lines

logs/40_EPiDA_Offline_EDA_Final.out

+123,079
Large diffs are not rendered by default.

logs/EPiDA_CWE_SST.out

+421
Large diffs are not rendered by default.

logs/EPiDA_EDA_SST.out

+421
Large diffs are not rendered by default.

logs/Speed_Results.out

+15
@@ -0,0 +1,15 @@
2021-08-21 06:53:05.699761: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
took 5.4483466148376465 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Simple Test
['So cute the small baby is crying!', 'So cute the little baby is crying!']
test speed!
EDA cost 188.4038935779322
CWE cost 30.74567894718709
EPDA EDA cost 43.84778790666555
EPDA + CWE cost 10.050052534914453
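The "cost" lines above are wall-clock timings for each augmentation method. They could be reproduced with a small benchmark harness; a minimal sketch, assuming each augmenter is a callable from a list of sentences to a list of augmented sentences (the `time_augmenter` helper and the toy augmenter below are hypothetical, not part of this repo):

```python
import time

def time_augmenter(augment, sentences, repeats=3):
    """Best wall-clock time in seconds over several runs of `augment`."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        augment(sentences)
        best = min(best, time.perf_counter() - start)
    return best

# Toy stand-in augmenter: emits two copies of each sentence.
cost = time_augmenter(lambda xs: [s for s in xs for _ in (0, 1)],
                      ["So cute the little baby is crying!"] * 100)
print(f"toy augmenter cost {cost}")
```

Taking the best of several runs reduces noise from warm-up and scheduling jitter, which matters when comparing methods whose costs differ by an order of magnitude as above.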

logs/div_qua.out

+31
@@ -0,0 +1,31 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/offense.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['thusly cute the little baby is crying!', 'So cute the little baby is crying!']
LR= 5e-05
Start to read: new_data/sentiment/train_1.txt
Load Over, Find: 153 datas.
Start to read: new_data/sentiment/test.txt
Load Over, Find: 3027 datas.
took 5.79877781867981 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Bert Tokenizer
Update EPOCHES to 16
Start to update Dataset
?? 2295 2295 2295
Before 153
Start Update Dataset, Find 153 datas.
Update Dataset Finish, Find 2295 datas.
< Update Done.
After 2295
start to extract something
torch.Size([2295, 3072]) 2295
Error Rate EPida0.0153 EDA0.0305 CEM0.0065 REM0.0675
Distance EPida0.0078 EDA0.0054 CEM0.0025 REM0.0121
> Done. Model Training
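The "Distance" line above compares augmentation methods on extracted features (the `torch.Size([2295, 3072])` tensor). One plausible form of such a metric is a mean per-pair Euclidean distance scaled by the feature dimension; a minimal sketch in plain Python, assuming rows of the two matrices are aligned original/augmented pairs (an assumption — the exact metric used here is not shown in the log, and `mean_feature_distance` is a hypothetical name):

```python
import math

def mean_feature_distance(orig, aug):
    """Mean Euclidean distance between paired feature rows, scaled by 1/dim.

    `orig` and `aug` are lists of equal-length float vectors whose rows are
    assumed to be aligned original/augmented pairs.
    """
    dim = len(orig[0])
    dists = [math.dist(o, a) for o, a in zip(orig, aug)]
    return sum(dists) / (len(dists) * dim)

# Toy stand-in for the (2295, 3072) features extracted above.
a = [[0.0] * 3072 for _ in range(4)]
b = [[1.0] * 3072 for _ in range(4)]
d = mean_feature_distance(a, b)  # sqrt(3072) / 3072, about 0.018
```

Scaling by the dimension keeps the score comparable across feature extractors of different widths, which matches the small magnitudes (0.0025–0.0121) reported above.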

logs/ppl/CEM_Irony_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/irony.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['So cunning the little baby is crying!', 'So cute the little baby is crying!']
LR= 2e-05
Start to read: new_data/irony/train_10.txt
Load Over, Find: 328 datas.
Start to read: new_data/irony/test.txt
Load Over, Find: 656 datas.
took 5.60038685798645 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/irony.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 80
Start to update Dataset
Start to calculate PPL Score.
PPL Score 12.11044971795114

logs/ppl/CEM_Offense_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/offense.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['So cunning the little baby is crying!', 'So cute the little baby is crying!']
LR= 5e-05
Start to read: new_data/offense/train_10.txt
Load Over, Find: 8844 datas.
Start to read: new_data/offense/test.txt
Load Over, Find: 17679 datas.
took 5.584129810333252 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/offense.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 4
Start to update Dataset
Start to calculate PPL Score.
PPL Score 8.570172684665541

logs/ppl/CEM_Sentiment_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/sentiment.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['So cunning the little baby is crying!', 'So cute the little baby is crying!']
LR= 5e-05
Start to read: new_data/sentiment/train_10.txt
Load Over, Find: 1514 datas.
Start to read: new_data/sentiment/test.txt
Load Over, Find: 3027 datas.
took 6.0170722007751465 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/sentiment.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 16
Start to update Dataset
Start to calculate PPL Score.
PPL Score 8.096230467979732

logs/ppl/REM_Irony_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/irony.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['So cute the small baby is crying!', 'So cute the little baby is crying!']
LR= 2e-05
Start to read: new_data/irony/train_10.txt
Load Over, Find: 328 datas.
Start to read: new_data/irony/test.txt
Load Over, Find: 656 datas.
took 6.335399627685547 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/irony.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 80
Start to update Dataset
Start to calculate PPL Score.
PPL Score 77.08645492508874

logs/ppl/REM_Offense_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/offense.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['thusly cute the little baby is crying!', 'So cute the little baby is crying!']
LR= 5e-05
Start to read: new_data/offense/train_10.txt
Load Over, Find: 8844 datas.
Start to read: new_data/offense/test.txt
Load Over, Find: 17679 datas.
took 5.67768120765686 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/offense.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 4
Start to update Dataset
Start to calculate PPL Score.
PPL Score 81.13078085129892

logs/ppl/REM_Sentiment_PPL.out

+25
@@ -0,0 +1,25 @@
Warming up PyWSD (takes ~10 secs)... Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/sentiment.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Simple Test
['thusly cute the little baby is crying!', 'So cute the little baby is crying!']
LR= 5e-05
Start to read: new_data/sentiment/train_10.txt
Load Over, Find: 1514 datas.
Start to read: new_data/sentiment/test.txt
Load Over, Find: 3027 datas.
took 6.893878936767578 secs.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading the LM will be faster if you build a binary file.
Reading /remote-home/***/Code/NLP/EPDA/lms/sentiment.arpa
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Update EPOCHES to 16
Start to update Dataset
Start to calculate PPL Score.
PPL Score 66.86463311930764
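The `.arpa` loading banners in these logs come from a KenLM-style n-gram language model, and the reported "PPL Score" is a perplexity over the generated sentences (lower means more fluent under the LM). Perplexity is just the exponentiated average negative log-probability; a minimal sketch of the arithmetic, assuming per-token log10 probabilities are already in hand (the `perplexity` helper is illustrative, not this repo's code):

```python
def perplexity(log10_probs):
    """PPL = 10 ** (-mean log10 P(token)); lower means more fluent."""
    return 10 ** (-sum(log10_probs) / len(log10_probs))

# Toy example: three tokens, each with probability 0.1 (log10 = -1).
ppl = perplexity([-1.0, -1.0, -1.0])  # -> 10.0
```

Under this reading, the CEM logs above (PPL around 8–12) produce noticeably more fluent augmentations than the REM logs (PPL around 67–81).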
