
Writing A Custom Loss Function Element By Element For Keras

I am new to machine learning, Python, and TensorFlow. I am used to coding in C++ or C#, and it is difficult for me to use tf.backend. I am trying to write a custom loss function for an …

Solution 1:

Among the Keras backend functions there is greater, which you can use:

import keras.backend as K

def customLossFunction(yTrue, yPred):
    greater = K.greater(yPred, 0.5)
    greater = K.cast(greater, K.floatx())  # has zeros and ones
    multiply = (2 * greater) - 1           # has -1 and 1

    modifiedTrue = multiply * yTrue

    # here, it's important to know which dimension you want to sum
    return K.sum(modifiedTrue, axis=?)
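To sanity-check the element-wise logic without building a model, here is a minimal NumPy mirror of the same ops (np.greater / astype / np.sum playing the roles of K.greater / K.cast / K.sum). The 0.5 threshold comes from the answer; the function name and the example arrays are illustrative assumptions.

```python
import numpy as np

def custom_loss_numpy(y_true, y_pred, axis=None):
    # mirror of K.greater -> K.cast -> (2*g - 1) -> multiply -> K.sum
    greater = np.greater(y_pred, 0.5).astype(np.float32)  # zeros and ones
    multiply = (2.0 * greater) - 1.0                      # -1 and 1
    return np.sum(multiply * y_true, axis=axis)

y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
# signs from y_pred: [[+1, -1], [-1, +1]] -> modified: [[1, -2], [-3, 4]]
print(custom_loss_numpy(y_true, y_pred))          # 0.0 (everything summed)
print(custom_loss_numpy(y_true, y_pred, axis=1))  # [-1.  1.] (per sample)
```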

The axis parameter should be used according to what you want to sum.

axis=0 -> batch or sample dimension (number of sequences)
axis=1 -> time steps dimension (if you're using return_sequences=True until the end)
axis=2 -> predictions for each step

Now, if you have only a 2D target:

axis=0 -> batch or sample dimension (number of sequences)
axis=1 -> predictions for each sequence

If you simply want to sum everything for every sequence, just omit the axis parameter.
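The axis choices above can be visualized with a dummy 3D target; the shape (batch, time_steps, predictions) = (2, 3, 4) here is an illustrative assumption, and plain NumPy is used since np.sum has the same axis semantics as K.sum.

```python
import numpy as np

# dummy target of shape (batch=2, time_steps=3, predictions=4)
x = np.ones((2, 3, 4))

print(np.sum(x, axis=0).shape)  # (3, 4) - summed over the batch
print(np.sum(x, axis=1).shape)  # (2, 4) - summed over time steps
print(np.sum(x, axis=2).shape)  # (2, 3) - summed over predictions
print(np.sum(x))                # 24.0 - no axis: sum everything
```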

Important note about this function:

Since the result contains only values from yTrue, it cannot backpropagate to change the weights. This will lead to a "None values not supported" error or something very similar.

Although yPred (the tensor that is connected to the model's weights) is used in the function, it is used only to obtain a true/false condition, which is not differentiable.
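That non-differentiability can be seen numerically: the thresholding step (yPred > 0.5) is piecewise constant, so its derivative with respect to yPred is zero almost everywhere, and no gradient signal can flow back to the weights. A finite-difference sketch (the helper name and sample value 0.8 are illustrative assumptions):

```python
import numpy as np

def sign_from_threshold(y_pred):
    # same hard comparison as K.greater + K.cast, then (2*g - 1)
    greater = (np.asarray(y_pred) > 0.5).astype(np.float64)
    return 2.0 * greater - 1.0

eps = 1e-6
y_pred = 0.8
# central finite difference around y_pred
grad = float((sign_from_threshold(y_pred + eps)
              - sign_from_threshold(y_pred - eps)) / (2 * eps))
print(grad)  # 0.0 - the threshold passes no gradient back through yPred
```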
