Roughly speaking, the encoding gamma of a color space can have "any" value; you only need to know that value to decode the image file. If you look only at a print output, there is no quality difference between an image from a 1.8-encoded source and one from a 2.2-encoded source.
But it is better to have an encoding gamma near 1.8-2.2 (and not 1 or 3) because, combined with the "natural" display gamma, we get a transfer function close to the inverse of the perceptual transfer function. And, with 8-bit encoding, we know this avoids visible banding on the display.
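To make the 8-bit argument concrete, here is a minimal sketch (assuming NumPy; the variable names are just illustrative) that counts how many of the 256 code values of an 8-bit file land in the darkest and brightest 10% of linear luminance for a few encoding gammas. Gamma 1 leaves too few codes for the shadows (banding there), while gamma 3 starves the highlights instead:

```python
import numpy as np

codes = np.arange(256) / 255.0          # normalized 8-bit code values

for gamma in (1.0, 1.8, 2.2, 3.0):
    lum = codes ** gamma                # decode: code value -> linear luminance
    shadows = np.sum(lum <= 0.10)       # codes spent on the darkest 10%
    highlights = np.sum(lum >= 0.90)    # codes spent on the brightest 10%
    print(f"gamma {gamma}: {shadows} codes in the shadows, {highlights} in the highlights")
```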
Encoding based on L* is not very far from a constant gamma, and it is closer to the visual perception transfer function. But if display calibration is still based on a constant gamma, I don't see where the improvement is.
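As a rough check of "not very far from a constant gamma", here is a small sketch (again assuming NumPy; `lstar_encode` is just a name I made up) that compares the CIE L* curve, rescaled to 0..1, with plain power-law encodings:

```python
import numpy as np

def lstar_encode(Y):
    """CIE lightness L*, rescaled to 0..1, from linear luminance Y in 0..1."""
    Y = np.asarray(Y, dtype=float)
    L = np.where(Y > 216 / 24389,              # CIE threshold (~0.008856)
                 116 * np.cbrt(Y) - 16,        # cube-root segment
                 (24389 / 27) * Y)             # linear segment near black
    return L / 100.0

Y = np.linspace(0, 1, 1001)
for gamma in (2.2, 2.4, 2.6):
    diff = np.max(np.abs(lstar_encode(Y) - Y ** (1 / gamma)))
    print(f"max difference between L* and 1/{gamma} power-law encoding: {diff:.3f}")
```

The curves differ mostly near black, where L* has its linear segment instead of the steep rise of a pure power law.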
So, my main question is: is L* encoding an improvement ONLY if the display is L*-calibrated?