
tensorflow - VNET Dice loss higher than 1

I am training a V-Net on a segmentation problem with Keras 2.2.4 and TensorFlow 1.12.0 (I cannot change the TensorFlow version). The masks, i.e. y_true, are arrays of shape (1, 200, 150, 100, 2). I want to minimize the Dice loss, defined as:

'''
import tensorflow as tf

def dice_loss_foreground(y_true, y_pred):
    # Sums over the foreground channel (index 1) of the one-hot ground truth and prediction
    elements_per_class = tf.math.reduce_sum(y_true[:, :, :, :, 1])
    predicted_per_class = tf.math.reduce_sum(y_pred[:, :, :, :, 1])
    # 2 * |intersection| between prediction and ground truth on the foreground channel
    intersection = tf.math.scalar_mul(2.0, tf.math.reduce_sum(tf.math.multiply(y_pred[:, :, :, :, 1], y_true[:, :, :, :, 1])))
    union = elements_per_class + predicted_per_class
    # Dice coefficient, with a small constant to avoid division by zero
    acc = intersection / (union + 0.0001)
    return 1.0 - acc
'''

I have tested this definition on mock examples and it stays between 0 and 1, but during training the loss reaches values higher than 1. Can anybody help me understand why? Thanks!
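For reference, a mock check of that kind can look roughly like this (shapes reduced and the masks randomized just for the test, not the real data):

'''
import numpy as np
import tensorflow as tf

# Small random one-hot mask and a random "prediction" in [0, 1]
y_true = np.zeros((1, 4, 4, 4, 2), dtype=np.float32)
y_true[..., 1] = np.random.randint(0, 2, size=(1, 4, 4, 4))
y_true[..., 0] = 1.0 - y_true[..., 1]
y_pred = np.random.uniform(0.0, 1.0, size=(1, 4, 4, 4, 2)).astype(np.float32)

with tf.Session() as sess:
    loss_value = sess.run(dice_loss_foreground(tf.constant(y_true),
                                               tf.constant(y_pred)))
    print(loss_value)  # always lands in [0, 1]
'''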



1 Answer


The value reported as the loss during training is not just the Dice term: Keras adds the model's regularization losses (e.g. L1/L2 kernel regularizers) on top of it. So even though the Dice loss itself stays in the range 0-1, the total training loss can be greater than 1 depending on the regularization.
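A minimal sketch of what that means, assuming a toy 3D model (the single Conv3D layer and the l2 factor below are made up for illustration, not taken from the actual V-Net): Keras minimizes dice_loss_foreground plus everything collected in model.losses, and that extra term is what can push the logged loss above 1.

'''
import keras.backend as K
from keras import layers, models, regularizers

# Toy model with an l2 kernel regularizer (hypothetical layer and factor)
inp = layers.Input(shape=(200, 150, 100, 2))
out = layers.Conv3D(2, 3, padding='same', activation='softmax',
                    kernel_regularizer=regularizers.l2(1e-2))(inp)
model = models.Model(inp, out)
model.compile(optimizer='adam', loss=dice_loss_foreground)

# model.losses holds the regularization tensors that Keras adds to the
# Dice loss; their sum is the extra contribution in the training logs
print(K.eval(sum(model.losses)))
'''

If you want a logged value that reflects only the Dice term, you can also pass dice_loss_foreground as a metric, since metrics are reported without the regularization added.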

