Hello! Thank you for providing a simple implementation of so many models.
I have a question regarding the Attention Based MIL.
In the original implementation, the attention scores are computed as

$$a_k = \frac{\exp\{\mathbf{w}^\top \tanh(\mathbf{V} \mathbf{h}_k^\top)\}}{\sum_{j=1}^{K} \exp\{\mathbf{w}^\top \tanh(\mathbf{V} \mathbf{h}_j^\top)\}}$$

that is, using a (masked) softmax over the obtained latent scores.
However, in your implementation, you are using a sigmoid:

```python
weights = torch.sigmoid(scores)
```
which causes the attention values to lose the property of summing to one. Wouldn't a softmax be more appropriate?
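For illustration, here is a minimal sketch of the masked softmax I have in mind (the names `scores` and `mask` and the shapes are placeholder assumptions, not taken from your code):

```python
import torch

# Placeholder example: raw attention logits for 2 bags of 5 instances each
scores = torch.randn(2, 5)
mask = torch.ones(2, 5)      # 1 = real instance, 0 = padding

# Masked softmax: padded positions get -inf so their weight becomes zero
masked_scores = scores.masked_fill(mask == 0, float("-inf"))
weights = torch.softmax(masked_scores, dim=-1)

# Unlike the sigmoid version, each bag's weights now sum to one
assert torch.allclose(weights.sum(dim=-1), torch.ones(2))
```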
Also, the code for the functions used in this model is not provided:

```python
self.score = MonoAdditiveAttentionScore(D, D)
self.pool = CountMILPool(D)
```

Could you also share the implementation of these functions?
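For reference, this is roughly the standard (ungated) attention pooling from Ilse et al. (2018) that I am comparing against; it is only my own sketch, not a guess at your `MonoAdditiveAttentionScore` / `CountMILPool` code:

```python
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    """Sketch of standard attention-based MIL pooling (Ilse et al., 2018)."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.V = nn.Linear(in_dim, hidden_dim, bias=False)  # V in the paper
        self.w = nn.Linear(hidden_dim, 1, bias=False)       # w in the paper

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: instance embeddings, shape (batch, bag_size, in_dim)
        scores = self.w(torch.tanh(self.V(h))).squeeze(-1)  # (batch, bag_size)
        weights = torch.softmax(scores, dim=-1)             # sums to one per bag
        return torch.einsum("bk,bkd->bd", weights, h)       # weighted bag embedding
```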
Thank you!