
Ah yes, you're right, I was thinking of it that way. Thanks a bunch for your clear and thorough explanation, it makes a lot of sense! So if I understand what you're saying, a 1x1 convolutional layer for collapsing 100 channels to 10 channels would take a 100x512x512 tensor and collapse it to a 10x512x512 tensor?

[Also, sorry for attempting to answer your question incorrectly. I was thinking of putting a disclaimer saying I hadn't worked with CNNs and so might be misunderstanding what the convolutions are doing; probably should have haha]

Maybe when the author was saying 'one can think the 1x1 convolutions are against the original principles of LeNet', he was anticipating my kind of confusion? :)




> So if I understand what you're saying, a 1x1 convolutional layer for collapsing 100 channels to 10 channels would take a 100x512x512 tensor and collapse it to a 10x512x512 tensor?

Correct. As I understand it, this would be applying a 1x1 convolution with 10 filters (each spanning all 100 input channels) to a 100x512x512 tensor.
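To make the shape claim concrete: a 1x1 convolution is just a per-pixel linear map across channels, so it can be sketched with a single einsum. This is a minimal numpy illustration (not from the thread); I use a 16x16 spatial size to keep it cheap, but the same code works unchanged for 512x512.

```python
import numpy as np

rng = np.random.default_rng(0)

in_channels, out_channels = 100, 10
h, w = 16, 16  # stand-in for 512x512; shapes scale identically

x = rng.standard_normal((in_channels, h, w)).astype(np.float32)   # input: 100 x H x W
weights = rng.standard_normal((out_channels, in_channels)).astype(np.float32)  # 10 filters, each 1x1x100

# Each output channel at pixel (i, j) is a weighted sum of all 100
# input channels at that same pixel -- no spatial mixing at all.
y = np.einsum('oc,chw->ohw', weights, x)

print(y.shape)  # the 100-channel tensor collapses to 10 channels
```

Checking one pixel by hand, `y[:, i, j]` equals `weights @ x[:, i, j]`, which is exactly the "channel mixing" role 1x1 convolutions play in network-in-network-style architectures.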



