It's one of the basic terms you will come across in machine learning. The loss function is how the model gets optimized and, in a sense, directs itself toward the correct solution. When the expected output differs from the actual output, you have an error, and you can program the training loop to adjust the model so it reduces this error on the next iteration. This is where the loss function comes into play. Different loss functions perform better on different problems, so it's important to pick the right kind of loss function. For regression problems, mean squared error is a better fit, whereas for classification problems cross-entropy (log loss) is a better fit.
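For illustration, here is a minimal sketch of how those two losses could be computed with NumPy. The function names and the sample numbers are just assumptions for the example, not a reference implementation:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    """MSE: average of squared differences, typical for regression."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy (log loss), typical for classification.
    y_prob holds predicted probabilities of the positive class."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

# Regression: predictions close to the targets give a small MSE
print(mean_squared_error([3.0, 5.0, 2.0], [2.5, 5.0, 3.0]))  # ~0.417

# Classification: confident, correct probabilities give a small loss
print(binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8]))      # ~0.145
```

During training, the optimizer uses the value (and gradient) of whichever loss you pick to decide how to update the model's parameters on the next iteration.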