Hacker News

What stops you from using a few convolution layers and then using the GLN as the last layer? If your theory is true, you'd get the best of both worlds.



That would work, but it would likely reintroduce the catastrophic forgetting problem: the GLN would now depend on an intermediate representation determined by the conv layers, and that representation shifts as the conv layers train.
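To make the trade-off concrete, here is a minimal sketch of the hybrid being discussed: a single halfspace-gated geometric-mixing neuron (the basic GLN unit from the paper) sitting on top of a feature extractor. The extractor here is a fixed random projection standing in for frozen conv features; this is an illustrative assumption, not the paper's setup. If that projection were trained jointly, the inputs and gating contexts the GLN sees would drift, which is exactly the forgetting concern above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def logit(p):
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return np.log(p / (1.0 - p))

class GLNNeuron:
    """One halfspace-gated geometric-mixing neuron (binary GLN unit).

    The side information selects one of 2**n_bits weight vectors via random
    halfspaces, so an online update only touches the active context's weights.
    """
    def __init__(self, n_inputs, side_dim, n_bits=3, seed=0):
        rng = np.random.default_rng(seed)
        self.hyperplanes = rng.standard_normal((n_bits, side_dim))
        self.weights = np.full((2 ** n_bits, n_inputs), 1.0 / n_inputs)

    def _context(self, side):
        bits = (self.hyperplanes @ side > 0).astype(int)
        return int(bits @ (1 << np.arange(bits.size)))

    def predict(self, inputs, side):
        c = self._context(side)
        return sigmoid(self.weights[c] @ logit(inputs))

    def update(self, inputs, side, target, lr=0.05):
        # Online logistic-loss step on the active context's weights only.
        c = self._context(side)
        p = sigmoid(self.weights[c] @ logit(inputs))
        self.weights[c] -= lr * (p - target) * logit(inputs)
        return p

# Stand-in for a FROZEN conv feature extractor: a fixed random map whose
# outputs are squashed into probabilities, as GLN inputs must be. If this
# map changed during training, the GLN's gating and inputs would shift.
rng = np.random.default_rng(1)
W_frozen = rng.standard_normal((8, 4))

def features(x):
    return sigmoid(W_frozen @ x)

neuron = GLNNeuron(n_inputs=8, side_dim=4)
x = rng.standard_normal(4)
p0 = neuron.predict(features(x), x)
for _ in range(50):
    neuron.update(features(x), x, target=1.0)
p1 = neuron.predict(features(x), x)  # moves toward the target of 1.0
```

Because only the weights of the active context get updated, examples routed to other contexts are untouched, which is where the GLN's forgetting resistance comes from; a jointly trained extractor undermines that isolation.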



