Convolution
- Continuous convolution
- Discrete convolution
- 2D image convolution
K(3X3 filter) * I(7X7 image) = Output(5X5)
- 2D convolution in action
- Blur, Emboss, Outline (see the kernel sketch below)
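For reference, the discrete 2D convolution behind these examples is (I * K)(i, j) = Σ_m Σ_n I(i - m, j - n) K(m, n). Below is a minimal NumPy sketch (not from the lecture) that applies 3X3 blur and outline kernels to a 7X7 image and gives the 5X5 output from above; the kernel values are the commonly used ones and serve only as an illustration.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Discrete 2D convolution ('valid' mode: no padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # flip the kernel for true convolution
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

image = np.arange(49, dtype=float).reshape(7, 7)  # dummy 7X7 image
blur = np.ones((3, 3)) / 9.0                      # averages neighbouring pixels
outline = np.array([[-1, -1, -1],
                    [-1,  8, -1],
                    [-1, -1, -1]], dtype=float)   # emphasises edges

print(conv2d_valid(image, blur).shape)     # (5, 5)
print(conv2d_valid(image, outline).shape)  # (5, 5)
```

Deep learning libraries usually skip the kernel flip (i.e. they compute cross-correlation) but still call the operation convolution.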
RGB Image Convolution
A 5X5X3 filter on a 32X32X3 image -> a 28X28X1 feature map
A 32X32X3 image * four 5X5X3 filters -> a 28X28X4 feature map (see the sketch after the stack below)
Stack of Convolutions
[32X32X3] -> CONV (4 5X5X3 filters), ReLU -> [28X28X4] -> CONV (10 5X5X4 filters), ReLU -> [24X24X10] -> ...
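A minimal PyTorch sketch of the stack above, with stride 1 and no padding (the layer sizes are taken directly from the notes):

```python
import torch
import torch.nn as nn

# 4 filters of size 5X5X3, then 10 filters of size 5X5X4
stack = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=4, kernel_size=5),   # 32X32X3 -> 28X28X4
    nn.ReLU(),
    nn.Conv2d(in_channels=4, out_channels=10, kernel_size=5),  # 28X28X4 -> 24X24X10
    nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)  # (batch, channels, height, width)
print(stack(x).shape)          # torch.Size([1, 10, 24, 24])
```

Each filter always spans all input channels, so only the number of filters has to be chosen; without padding, every 5X5 convolution shrinks each spatial side by 4.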
Convolutional Neural Networks
- A CNN consists of convolution layers, pooling layers, and fully connected layers.
- Convolution and pooling layers: feature extraction
- Fully connected layers: decision making (e.g., classification)
Techniques exist to reduce the number of parameters; when looking at a new architecture, you should be able to tell how many parameters each layer uses and what the total parameter count is.
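A minimal sketch of that three-part structure; the layer widths here are arbitrary and only meant to show the shape flow and how to count the total parameters:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Convolution + pooling for feature extraction, a fully connected layer for decision making."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # 32X32X3 -> 32X32X8
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32X32X8 -> 16X16X8
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # 16X16X8 -> 16X16X16
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16X16X16 -> 8X8X16
        )
        self.classifier = nn.Linear(8 * 8 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
print(sum(p.numel() for p in model.parameters()))  # total parameter count of the model
```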
Stride: how big a step the filter takes as it slides over the input.
stride = 1 (shift by 1 position at a time), stride = 2 (shift by 2 positions at a time)
Padding
No padding (stride=1)
Zero padding (stride=1)
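For an input of width W, kernel size K, padding P, and stride S, the output width is floor((W - K + 2P) / S) + 1 (the same formula holds for the height). A quick check against the cases above, using the 7X7 image and 3X3 kernel from earlier:

```python
def conv_output_size(w: int, k: int, p: int, s: int) -> int:
    """Output size along one spatial dimension for kernel size k, padding p, stride s."""
    return (w - k + 2 * p) // s + 1

print(conv_output_size(7, 3, p=0, s=1))  # 5 -> no padding shrinks 7X7 to 5X5
print(conv_output_size(7, 3, p=1, s=1))  # 7 -> zero padding of 1 keeps the size
print(conv_output_size(7, 3, p=0, s=2))  # 3 -> stride 2 roughly halves it
```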
Convolution Arithmetic
- Padding(1), Stride(1), 3 X 3 Kernel
W:40, H: 50, C: 128 -> W: 40, H: 50, C: 64
What is the number of parameters of this layer?
The answer is 3 X 3 X 128 X 64 = 73,728 (kernel height X kernel width X input channels X output channels; bias terms are ignored).
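The same count can be verified in PyTorch; bias=False matches the counting above, which ignores bias terms:

```python
import torch.nn as nn

layer = nn.Conv2d(in_channels=128, out_channels=64, kernel_size=3,
                  stride=1, padding=1, bias=False)
print(sum(p.numel() for p in layer.parameters()))  # 73728 = 3 * 3 * 128 * 64
```

Note that the spatial size (40 X 50) never enters the count; only the kernel size and the channel counts do.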
Exercise
What is the number of parameters of this model?
- 11 X 11 X 3 X 48 * 2 = 35k
- 5 X 5 X 48 X 128 * 2 = 307k
- 3 X 3 X 128 * 2 X 192 * 2 = 884k
- 3 X 3 X 192 X 192 * 2 = 663k
- 3 X 3 X 192 X 128 * 2 = 442k
- 13 X 13 X 128 * 2 X 2048 * 2 = 177M
- 2048 * 2 X 2048 * 2 = 16M
- 2048 * 2 X 1000 = 4M
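A quick check of the arithmetic, and of where the parameters actually live. These layer shapes match AlexNet's two-tower layout (the source of the "* 2" factors), although the notes do not name the model; biases are ignored as above:

```python
# Convolution layers
conv = [
    11 * 11 * 3 * 48 * 2,        # ~35k
    5 * 5 * 48 * 128 * 2,        # ~307k
    3 * 3 * 128 * 2 * 192 * 2,   # ~884k
    3 * 3 * 192 * 192 * 2,       # ~663k
    3 * 3 * 192 * 128 * 2,       # ~442k
]
# Fully connected (dense) layers
dense = [
    13 * 13 * 128 * 2 * 2048 * 2,  # ~177M
    2048 * 2 * 2048 * 2,           # ~16M
    2048 * 2 * 1000,               # ~4M
]
print(sum(conv), sum(dense))  # the dense layers hold the vast majority of the parameters
```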
1 X 1 Convolution
256 X 256 X 128 -> CONV(1X1X128X32) -> 256 X 256 X 32
- Why?
- Dimension reduction (of the channel dimension)
- To reduce the number of parameters while increasing the depth
- e.g., bottleneck architecture
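A minimal sketch of the channel reduction above, plus a bottleneck-style comparison; the 3X3 layer sizes in the comparison are illustrative and not from the notes:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 128, 256, 256)  # 256X256X128 input

reduce = nn.Conv2d(128, 32, kernel_size=1, bias=False)  # 1X1X128X32
print(reduce(x).shape)             # torch.Size([1, 32, 256, 256])

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Direct 3X3 convolution on 128 channels vs. a deeper 1X1 bottleneck around a 3X3.
direct = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False)
bottleneck = nn.Sequential(
    nn.Conv2d(128, 32, kernel_size=1, bias=False),           # squeeze the channels
    nn.Conv2d(32, 32, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(32, 128, kernel_size=1, bias=False),            # expand them back
)
print(n_params(direct), n_params(bottleneck))  # 147456 vs. 17408: deeper, yet far fewer parameters
```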