LabelEncodeIntegerGraded in Multi-instance tutorial #1488
-
Since #1478, I was able to implement pseudo-labels for my project. However, I struggle to understand the concept of LabelEncodeIntegerGraded: why encode ones up to the label index instead of using a simple one-hot encoding? Thanks in advance.
-
Hi @relyativist, Regarding Multiple Instance Learning (MIL), it's important to note that we lack individual labels for the patches within the dataset. Instead, we can only assign a label to the entire bag. In my opinion, there's no necessity to apply a one-hot encoding to this label. This is because the label essentially consists of an integer value, and our objective is simply to transform this integer label into an encoded array representation with a length equal to the number of classes. I hope it helps, thanks!
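For illustration, here is a minimal sketch (not the exact tutorial transform) of what a graded encoding of an integer bag label looks like compared with one-hot; the exact convention for where the ones stop is an assumption, so please check the transform in the tutorial:

```python
import numpy as np

def graded_label(label: int, num_classes: int) -> np.ndarray:
    """Ordinal ("graded") encoding: ones in the first `label` positions.

    label=0 -> [0, 0, 0, 0, 0]
    label=3 -> [1, 1, 1, 0, 0]
    label=5 -> [1, 1, 1, 1, 1]
    """
    out = np.zeros(num_classes, dtype=np.float32)
    out[:label] = 1.0
    return out

def one_hot_label(label: int, num_classes: int) -> np.ndarray:
    """Plain one-hot for comparison: a single 1 at the label index."""
    out = np.zeros(num_classes, dtype=np.float32)
    out[label] = 1.0
    return out

print(graded_label(3, 5))   # [1. 1. 1. 0. 0.]  "at least grade 1, 2 and 3"
print(one_hot_label(3, 6))  # [0. 0. 0. 1. 0. 0.]  "exactly grade 3", grades unordered
```

With the graded encoding each output unit answers a cumulative question ("is the grade at least k?"), which preserves the ordering between grades, whereas a one-hot target treats the grades as unordered categories.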
@KumoLiu Thanks for your reply, and apologies for the long follow-up. I hadn't seen up-to-index label encoding before. I still struggle to understand why you used BCEWithLogitsLoss. Since you are solving a multi-class classification problem, the model should assign the maximum probability to one class. In your example, the criterion is applied to the output of the linear layer, and the sigmoid() inside the BCE loss is applied to each value of that tensor independently, so several outputs can have high values at the same time. It seems that this loss is solving a multi-label classification problem rather than a multi-class one.
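For reference, a minimal sketch of the setup being discussed, assuming the graded targets above and a 5-unit output layer; the decoding rule shown (thresholding the independent sigmoids and counting the ones) is an assumption to illustrate how a single grade can still be recovered, not necessarily what the tutorial does:

```python
import torch
import torch.nn as nn

num_classes = 5
criterion = nn.BCEWithLogitsLoss()  # sigmoid applied independently to each output unit

logits = torch.tensor([[2.1, 1.4, 0.3, -1.2, -3.0]])  # raw model output for one bag
target = torch.tensor([[1.0, 1.0, 1.0, 0.0, 0.0]])    # graded target for label 3
loss = criterion(logits, target)

# Decode a single grade from the independent sigmoids:
# count how many cumulative "grade >= k" predictions exceed 0.5.
probs = torch.sigmoid(logits)
predicted_grade = (probs > 0.5).sum(dim=1)  # -> tensor([3])
print(loss.item(), predicted_grade)
```

Because the targets are cumulative rather than mutually exclusive, the per-output sigmoids are not expected to sum to 1; a single grade is only recovered at decode time, so the setup looks closer to ordinal regression than to ordinary multi-label classification.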