Time, Place | Tuesdays, 14:00-16:00, in room MI 02.13.010 |
Begin | October 17, 2023. Kick-off: Tuesday, October 17, 2023, at 14:00, on site in room MI 02.13.010 |
Prerequisites | Introduction to Deep Learning |
Content
In this course, students will independently investigate recent research on machine learning techniques in computer graphics. Independent further reading, critical analysis, and evaluation of the topic are required.
Requirements
Participants are required to first read the assigned paper and start writing a report; this will help them prepare for their presentation.
Attendance
- You may miss at most two talks. If you have to miss one, please let us know in advance and write a one-page summary of the paper in your own words. Missing a third talk means failing the seminar.
Report
- A short report (max. 2 pages excluding references) in the ACM SIGGRAPH TOG format (acmtog) should be prepared and sent no later than two weeks after the talk, i.e., by 23:59 on the corresponding Tuesday. You can download the precompiled LaTeX template; a minimal preamble sketch is also given after this list.
- Guideline: You can begin by writing a summary of the work you present as a starting point, but it is better to focus on your own analysis rather than stopping at a summary of the paper. Simply restating previous work is of little interest to anyone, including you; it is more meaningful to add your own reasoning about the work, such as pros and cons, limitations, possible future work, and your own ideas for addressing open issues.
- For questions regarding your paper or feedback on a semi-final version of your report, you can contact your advisor.
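For orientation, below is a minimal sketch of what the report's LaTeX source might look like. It assumes the standard acmart class, of which acmtog is one template option; the precompiled template linked above may differ, so adapt accordingly. Title, author, and institution fields are placeholders.

```latex
% Minimal sketch of a seminar report in the ACM TOG layout.
% Assumes the standard `acmart` class with the `acmtog` option;
% the precompiled template from the course page may differ slightly.
\documentclass[acmtog]{acmart}

% Optional: suppress the ACM reference and copyright blocks for a seminar report.
\settopmatter{printacmref=false}
\setcopyright{none}

\title{Report: <Paper Title>}
\author{<Your Name>}
\affiliation{\institution{<Your Institution>}\country{<Country>}}

\begin{document}
\maketitle

\section{Summary}
% Brief summary of the presented paper.

\section{Discussion}
% Your own analysis: pros and cons, limitations, possible future work, own ideas.

% Uncomment once you have a references.bib file:
% \bibliographystyle{ACM-Reference-Format}
% \bibliography{references}
\end{document}
```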
Presentation (slides)
- You will present your topic in English, and the talk should last 20 minutes. After that, a discussion session of about 10 minutes will follow.
- The slides should be structured according to your presentation. You can use any layout or template you like, but make sure to choose suitable colors and font sizes for readability.
- Plagiarism should be avoided; please do not simply copy the original authors' slides. You can certainly refer to them.
- The semi-final slides (PDF) should be sent one week before the talk; otherwise, the talk will be canceled.
- We strongly encourage you to make the semi-final version as complete as possible. We will review it and give feedback, and you can keep revising your slides until your presentation.
- The final slides should be sent after the talk.
Topics
Paper Number | Paper |
1 | 2017, Chaitanya et al., Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder |
2 | 2021, Işık et al., Interactive Monte Carlo denoising using affinity of neural features |
3 | 2019, Chu et al., Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation, arXiv.org |
4 | 2021, Karras et al., Alias-free generative adversarial networks |
5 | 2020, Wang et al., Attribute2Font: Creating Fonts You Want From Attributes, ACM Trans. Graph |
6 | 2022, Lin et al., 3D GAN Inversion for Controllable Portrait Image Animation |
7 | 2022, Saharia et al., Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding |
8 | 2023, Pan et al., Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold |
9 | 2019, Meka et al., Deep Reflectance Fields - High-Quality Facial Reflectance Field Inference From Color Gradient Illumination, ACM Trans. Graph |
10 | 2020, Dupont et al., Equivariant Neural Rendering, ICML |
11 | 2020, Kopf et al., One Shot 3D Photography, ACM Trans. Graph |
12 | 2020, Mildenhall et al., Representing Scenes as Neural Radiance Fields for View Synthesis |
13 | 2021, Yin et al., Learning to Recover 3D Scene Shape from a Single Image, CVPR |
14 | 2022, Chen et al., TensoRF: Tensorial Radiance Fields |
15 | 2022, Müller et al., Instant Neural Graphics Primitives with a Multiresolution Hash Encoding |
16 | 2023, Kerbl et al., 3D Gaussian Splatting for Real-Time Radiance Field Rendering |
17 | 2021, Müller et al., Real-time neural radiance caching for path tracing |
18 | 2022, Franz et al., Global Transport for Fluid Reconstruction with Learned Self-Supervision |
19 | 2022, Xie et al., TemporalUV: Capturing Loose Clothing with Temporally Coherent UV Coordinates |
20 | 2022, Vicini et al., Differentiable Signed Distance Function Rendering |
21 | 2020, Xiao et al., Neural Supersampling for Real-Time Rendering, ACM Trans. Graph. |
22 | 2019, Choi & Kweon, Deep Iterative Frame Interpolation for Full-frame Video Stabilization, arXiv.org |
23 | 2023, Blattmann et al., Align your latents: High-resolution video synthesis with latent diffusion models |
24 | 2022, Harvey et al., Flexible diffusion modeling of long videos |
Presentation Schedule
Date | Paper | Paper ID | Student |
17.10.2023 | INTRO LECTURE | | |
24.10.2023 | no seminar | | |
31.10.2023 | no seminar | | |
07.11.2023 | 2017, Chaitanya et al., Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder | 1 | Hkiri |
07.11.2023 | 2021, Işık et al., Interactive Monte Carlo denoising using affinity of neural features | 2 | Kobalt |
14.11.2023 | no seminar | | |
21.11.2023 | 2020, Xiao et al., Neural Supersampling for Real-Time Rendering, ACM Trans. Graph. | 21 | Wargitsch |
21.11.2023 | 2023, Blattmann et al., Align your latents: High-resolution video synthesis with latent diffusion models | 23 | Kuang |
28.11.2023 | please attend the talk by Prof. Hanrahan at 14:00 in HS1 (FMI) | | |
05.12.2023 | 2021, Karras et al., Alias-free generative adversarial networks | 4 | Xu |
05.12.2023 | 2022, Lin et al., 3D GAN Inversion for Controllable Portrait Image Animation | 6 | Mandarapu |
12.12.2023 | 2020, Dupont et al., Equivariant Neural Rendering, ICML | 10 | Aytekin |
12.12.2023 | 2020, Mildenhall et al., Representing Scenes as Neural Radiance Fields for View Synthesis | 12 | Bellaaj |
19.12.2023 | no seminar | | |
26.12.2023 | no seminar | | |
02.01.2024 | no seminar | | |
09.01.2024 | 2022, Franz et al., Global Transport for Fluid Reconstruction with Learned Self-Supervision | 18 | Richter |
09.01.2024 | 2023, Kerbl et al., 3D Gaussian Splatting for Real-Time Radiance Field Rendering | 16 | Kong |
16.01.2024 | 2020, Wang et al., Attribute2Font: Creating Fonts You Want From Attributes, ACM Trans. Graph | 5 | Chang |
16.01.2024 | 2022, Vicini et al., Differentiable Signed Distance Function Rendering | 20 | Vogel |