- [Q5: Self-Supervised Learning for Image Classification (20 points)](#q5-self-supervised-learning-for-image-classification-20-points)
- [Extra Credit: Image Captioning with LSTMs (5 points)](#extra-credit-image-captioning-with-lstms-5-points)
- [Optional (Extra Credit): Style Transfer (5 points)](#optional-extra-credit-style-transfer-5-points)
- [Submitting your work](#submitting-your-work)

<span style="color:red">This assignment is due on **Tuesday, May 25 2021** at 11:59pm PST.</span>

### Setup

### Goals

In this assignment, you will implement language networks and apply them to image captioning.

The goals of this assignment are as follows:

- Understand the architecture of recurrent neural networks (RNNs) and how they operate on sequences by sharing weights over time.
- Understand and implement RNN and Transformer networks, and combine them with CNNs for image captioning.
- Explore various applications of image gradients, including saliency maps, fooling images, and class visualizations.
- Understand how to train and implement a Generative Adversarial Network (GAN) to produce images that resemble samples from a dataset.
- Understand how to leverage self-supervised learning techniques to help with image classification tasks.
- (Optional) Understand and implement techniques for image style transfer.

**You will use PyTorch for the majority of this homework.**

### Q1: Image Captioning with Vanilla RNNs (30 points)

The notebook `RNN_Captioning.ipynb` will walk you through the implementation of vanilla recurrent neural networks and apply them to image captioning on COCO.
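
For orientation, below is a minimal PyTorch sketch of a single vanilla RNN step; the same weight matrices are reused at every timestep, which is what "sharing weights over time" means in practice. Function and variable names are illustrative, not the assignment's API, and the notebook itself may use a different framework for this part.

```python
import torch

# One vanilla RNN step: the same parameters (Wx, Wh, b) are applied at every timestep.
def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # x: (N, D) input at this timestep, prev_h: (N, H) previous hidden state
    next_h = torch.tanh(x @ Wx + prev_h @ Wh + b)  # (N, H)
    return next_h

N, D, H, T = 2, 4, 3, 5
Wx, Wh, b = torch.randn(D, H), torch.randn(H, H), torch.zeros(H)
h = torch.zeros(N, H)
for t in range(T):               # unroll over the sequence with the same parameters
    h = rnn_step_forward(torch.randn(N, D), h, Wx, Wh, b)
print(h.shape)                   # torch.Size([2, 3])
```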
### Q2: Image Captioning with Transformers (20 points)

The notebook `Transformer_Captioning.ipynb` will walk you through the implementation of a Transformer model and apply it to image captioning on COCO.
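
As a rough illustration of the core operation inside a Transformer, here is a hedged sketch of scaled dot-product attention with an optional causal mask; the notebook defines its own module structure and function signatures.

```python
import torch
import torch.nn.functional as F

# Scaled dot-product attention. Shapes: query (N, T_q, D), key/value (N, T_k, D).
# Names and shapes are illustrative, not the assignment's exact API.
def scaled_dot_product_attention(q, k, v, mask=None):
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5               # (N, T_q, T_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))  # e.g. causal mask for decoding
    return F.softmax(scores, dim=-1) @ v                      # (N, T_q, D)

N, T, D = 2, 5, 16
x = torch.randn(N, T, D)
causal = torch.tril(torch.ones(T, T, dtype=torch.bool))       # each position attends only to the past
out = scaled_dot_product_attention(x, x, x, causal)
print(out.shape)                                              # torch.Size([2, 5, 16])
```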

### Q3: Network Visualization: Saliency Maps, Class Visualization, and Fooling Images (15 points)

The notebook `Network_Visualization.ipynb` will introduce the pretrained SqueezeNet model, compute gradients with respect to images, and use them to produce saliency maps and fooling images.
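
As a sketch of the saliency-map idea (the gradient of the correct-class score with respect to the input pixels), something like the following; the model loading and the image/label here are placeholders (the `weights="DEFAULT"` argument assumes a recent torchvision), and the notebook supplies its own pretrained SqueezeNet and preprocessing.

```python
import torch
import torchvision

# Saliency map sketch: gradient of the target-class score w.r.t. the input image.
model = torchvision.models.squeezenet1_1(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)                               # only the image needs gradients

X = torch.randn(1, 3, 224, 224, requires_grad=True)      # stand-in for a preprocessed image
y = torch.tensor([207])                                   # stand-in class label

score = model(X).gather(1, y.view(-1, 1)).squeeze()      # score of the target class
score.backward()
saliency = X.grad.abs().max(dim=1).values                 # (1, 224, 224): max |gradient| over channels
print(saliency.shape)
```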

### Q4: Generative Adversarial Networks (15 points)

In the notebook `Generative_Adversarial_Networks.ipynb` you will learn how to generate images that match a training dataset and use these models to improve classifier performance when training on a large amount of unlabeled data and a small amount of labeled data. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
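
For intuition, one vanilla-GAN-style training step might look like the sketch below; the architectures, hyperparameters, and loss variant are placeholders, not the notebook's exact setup.

```python
import torch
import torch.nn as nn

# Minimal GAN update: the discriminator and generator are placeholder MLPs.
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.01), nn.Linear(256, 1))
G = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
bce = nn.BCEWithLogitsLoss()
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)

real = torch.rand(128, 784) * 2 - 1                       # stand-in for a batch of real images
fake = G(torch.rand(128, 96) * 2 - 1)                     # images generated from random noise

# Discriminator step: push real images toward label 1 and fake images toward label 0.
d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# Generator step: fool the discriminator into predicting 1 on fake images.
g_loss = bce(D(fake), torch.ones(128, 1))
opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```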

### Q5: Self-Supervised Learning for Image Classification (20 points)

In the notebook `Self_Supervised_Learning.ipynb`, you will learn how to leverage self-supervised pretraining to obtain better performance on image classification tasks. **When first opening the notebook, go to `Runtime > Change runtime type` and set `Hardware accelerator` to `GPU`.**
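
As one illustrative flavor of self-supervised pretraining, a SimCLR-style contrastive loss over two augmented views of each image is sketched below; the notebook's specific method and loss may differ.

```python
import torch
import torch.nn.functional as F

# NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.
def nt_xent_loss(z1, z2, tau=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    N = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)            # (2N, D), unit-norm
    sim = z @ z.t() / tau                                          # cosine similarities / temperature
    sim = sim.masked_fill(torch.eye(2 * N, dtype=torch.bool), float('-inf'))  # exclude self-pairs
    targets = torch.cat([torch.arange(N, 2 * N), torch.arange(0, N)])         # positive = the other view
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2))   # pretraining minimizes this; a classifier is then trained on top
```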

### Extra Credit: Image Captioning with LSTMs (5 points)
The notebook `LSTM_Captioning.ipynb` will walk you through the implementation of Long-Short Term Memory (LSTM) RNNs and apply them to image captioning on COCO.
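
For reference, a single LSTM step can be sketched as follows; the gate ordering and parameter layout here are illustrative, and the notebook defines its own conventions.

```python
import torch

# One LSTM step: four gates computed from the input and previous hidden state.
def lstm_step(x, prev_h, prev_c, Wx, Wh, b):
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b                      # (N, 4H): all four gates at once
    i, f, o, g = torch.split(a, H, dim=1)
    i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
    next_c = f * prev_c + i * g                       # cell state carries long-term memory
    next_h = o * torch.tanh(next_c)
    return next_h, next_c

N, D, H = 2, 4, 3
h, c = torch.zeros(N, H), torch.zeros(N, H)
h, c = lstm_step(torch.randn(N, D), h, c,
                 torch.randn(D, 4 * H), torch.randn(H, 4 * H), torch.zeros(4 * H))
print(h.shape, c.shape)                               # torch.Size([2, 3]) torch.Size([2, 3])
```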

### Optional (Extra Credit): Style Transfer (5 points)

In the notebook `Style_Transfer.ipynb`, you will learn how to create images with the content of one image but the style of another.
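
At its core, style transfer optimizes the pixels of a generated image against a content loss on raw features and a style loss on Gram matrices of features; a rough sketch (with the feature extraction and optimization loop omitted) follows. The normalization constants and weights are placeholders.

```python
import torch

# Gram matrix: channel-correlation statistics used for the style loss.
def gram_matrix(feats):
    # feats: (N, C, H, W) -> (N, C, C)
    N, C, H, W = feats.shape
    f = feats.view(N, C, H * W)
    return f @ f.transpose(1, 2) / (C * H * W)

def content_loss(f_gen, f_content, w=1.0):
    return w * ((f_gen - f_content) ** 2).sum()

def style_loss(f_gen, f_style, w=1.0):
    return w * ((gram_matrix(f_gen) - gram_matrix(f_style)) ** 2).sum()

# The generated image itself is the variable being optimized: extract features of the
# generated, content, and style images, sum the losses, and take gradient steps on img.
img = torch.rand(1, 3, 96, 96, requires_grad=True)
```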
### Submitting your work
**Important**. Please make sure that the submitted notebooks have been run and the cell outputs are visible.

Once you have completed all notebooks and filled out the necessary code, follow the steps below to submit your work.

**1.** Run the provided submission notebook/script.

This notebook/script will:

* Generate a zip file of your code (`.py` and `.ipynb`) called `a3_code_submission.zip`.
* Convert all notebooks into a single PDF file called `a3_inline_submission.pdf`.
If your submission for this step was successful, you should see the following display message:

`### Done! Please submit a3_code_submission.zip and a3_inline_submission.pdf to Gradescope. ###`
**2.** Submit the PDF and the zip file to [Gradescope](https://www.gradescope.com/courses/257661).

Remember to download `a3_code_submission.zip` and `a3_inline_submission.pdf` locally before submitting to Gradescope.