The photometry of a galaxy is the integrated energy of its emitted light, summarizing the full spectrum as magnitudes measured in different bands. Because the spectra of distant galaxies are redshifted, their observed band magnitudes shift accordingly. Galaxy photometry is therefore a common basis for estimating redshift.
There are two classes of methods for estimating photometric redshift (photo-$z$): template fitting and machine learning. Template-fitting methods match the observations against magnitudes derived from spectral templates placed at different redshifts. Machine-learning (ML) methods treat the task as a regression problem and estimate the redshifts of the galaxies in question based on a training sample.
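The ML regression framing can be sketched as follows. This is a minimal toy example, not the method of any particular study: the galaxy sample is synthetic, and the linear magnitude-redshift relation and the choice of a random forest are illustrative assumptions.

```python
# Toy sketch of ML photo-z as regression (synthetic data, assumed model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Fake "galaxies": redshift z, and band magnitudes whose colors drift with z.
n = 2000
z = rng.uniform(0.0, 2.0, n)
mags = np.column_stack([
    20.0 + 1.0 * z + rng.normal(0, 0.05, n),  # toy "u"-like band
    20.0 + 0.6 * z + rng.normal(0, 0.05, n),  # toy "g"-like band
    20.0 + 0.3 * z + rng.normal(0, 0.05, n),  # toy "r"-like band
])

# Train on a "spectroscopically confirmed" subset, predict for the rest.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(mags[:1500], z[:1500])
z_pred = model.predict(mags[1500:])
scatter = np.std(z_pred - z[1500:])
```

Here the redshift information lives in the colors (differences between bands), which is why multi-band photometry suffices as the feature vector.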
Recent studies of photo-$z$ using ML have claimed very high precision. However, it is also well known that ML predictions can be biased when the feature distributions of the training and testing sets differ substantially. This is the case for photo-$z$: the model is trained on a spectroscopically confirmed sample but applied to the whole photometric sample, and the color distributions of the two differ markedly.
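The bias from this train/test mismatch can be demonstrated on the same kind of toy model. In the hypothetical setup below, the "spectroscopic" training set is magnitude-limited to bright galaxies, so the model systematically underestimates the redshifts of fainter galaxies it never saw in training. All numbers are illustrative assumptions.

```python
# Toy demonstration of train/test distribution shift (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 5000
z = rng.uniform(0.0, 2.0, n)
r_mag = 20.0 + 2.0 * z + rng.normal(0, 0.3, n)  # fainter at higher z
color = 0.5 * z + rng.normal(0, 0.1, n)
X = np.column_stack([r_mag, color])

# "Spectroscopic" training set: bright galaxies only (magnitude-limited).
spec = r_mag < 22.0
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[spec], z[spec])

# Faint galaxies lie outside the training distribution; the regressor
# cannot extrapolate, so their redshifts are biased low on average.
faint = r_mag > 23.0
bias_faint = np.mean(model.predict(X[faint]) - z[faint])
```

The mean residual `bias_faint` comes out clearly negative here, mirroring the qualitative effect described above: the precision quoted on the spectroscopic sample does not carry over to the full photometric sample.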
In our study, we showed that the true uncertainty of ML photo-$z$ is much larger than commonly believed. We quantified this difference and suggested that if a survey could measure spectra for more faint galaxies, the photo-$z$ estimation would improve substantially.
We also examined how magnitude measurement errors affect photo-$z$. We found that they degrade the estimates in ways that cannot be fully corrected by introducing weights. Magnitude measurement uncertainty is therefore a source of photo-$z$ error that ML methods should not ignore.
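The weighting idea referred to above is commonly implemented by reweighting training objects so their magnitude distribution matches that of the target sample. The histogram-ratio scheme below is one simple assumed implementation, not the specific procedure of this study; the distributions are synthetic.

```python
# Sketch of reweighting a training set to match a target magnitude
# distribution via a histogram ratio (synthetic data, assumed scheme).
import numpy as np

def histogram_weights(train_mag, target_mag, bins=20):
    """Per-object weight = target density / training density in each bin."""
    edges = np.histogram_bin_edges(
        np.concatenate([train_mag, target_mag]), bins=bins)
    h_train, _ = np.histogram(train_mag, bins=edges, density=True)
    h_target, _ = np.histogram(target_mag, bins=edges, density=True)
    idx = np.clip(np.digitize(train_mag, edges) - 1, 0, bins - 1)
    ratio = np.where(h_train > 0,
                     h_target / np.maximum(h_train, 1e-12), 0.0)
    return ratio[idx]

rng = np.random.default_rng(2)
train_mag = rng.normal(21.0, 0.5, 3000)   # bright-skewed training set
target_mag = rng.normal(22.0, 0.7, 3000)  # fainter target sample
w = histogram_weights(train_mag, target_mag)

# The weighted training mean moves toward the target mean.
weighted_mean = np.average(train_mag, weights=w)
```

Such weights can be passed to a regressor (e.g. via a `sample_weight` argument) to down-weight over-represented bright galaxies. Note the limitation reflected in the text: weights can only re-balance objects the training set already contains, so they cannot fully compensate for magnitude measurement noise or for regions of feature space the training set never covers.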