
Commit acfb227: Update synthim.md
1 parent 6acfb4a

File tree: 1 file changed, +39 -18 lines changed

_editions/2025/tasks/synthim.md

@@ -45,7 +45,7 @@ The training and validation datasets will include a combination of well-known sy

All data will be curated under open-source or permissive licenses to ensure ethical use and compliance with data-sharing guidelines.

- #### Evaluation methodology
+ #### Evaluation methodology - Real vs. Synthetic Task (Binary Classification)

For the evaluation of synthetic image detection, the metrics used by the SIDBench framework [1] will be employed to assess performance in depth.

@@ -59,6 +59,10 @@ Equal Error Rate (EER): The rate at which false acceptance and false rejection a

To evaluate model robustness in detecting synthetic images under uncontrolled conditions, such as transformations applied by online platforms, we will test the submitted models on a dataset of social media images that were previously used in disinformation campaigns. The variations, collected directly from the internet, reflect real-world, black-box transformations where the exact processes are unknown. The evaluation will focus on calculating the True Positive Rate (TPR) to measure detection effectiveness across all variations.
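As a concrete illustration of these metrics, here is a minimal Python sketch (not part of the task description or of SIDBench) showing how the True Positive Rate and the Equal Error Rate mentioned above could be computed from per-image detector scores; the label convention (1 = synthetic) and the use of scikit-learn's `roc_curve` are assumptions made for the example.

```python
# Illustrative sketch only; the official evaluation will rely on the SIDBench framework [1].
import numpy as np
from sklearn.metrics import roc_curve


def tpr_at_threshold(y_true, scores, threshold=0.5):
    """True Positive Rate on the synthetic (positive) class at a fixed decision threshold."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(scores) >= threshold
    positives = y_true == 1  # assumed label convention: 1 = synthetic, 0 = real
    return float(y_pred[positives].mean()) if positives.any() else float("nan")


def equal_error_rate(y_true, scores):
    """EER: the operating point where false acceptance and false rejection rates are equal."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where the two error curves cross
    return float((fpr[idx] + fnr[idx]) / 2.0)


# Toy example with dummy detector scores
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.2, 0.6, 0.1, 0.7, 0.3]
print("TPR@0.5:", tpr_at_threshold(labels, scores))
print("EER:", equal_error_rate(labels, scores))
```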

+ #### Evaluation methodology - Manipulated Region Localization Task
+
+ For this subtask, we first evaluate whether the model correctly identifies an image as manipulated or not. The same metrics as in the binary classification task will be used, with F1 serving as the ranking metric. To evaluate how well the model localizes the specific regions of an image that have been manipulated, the Intersection over Union (IoU) will be used. This metric measures the overlap between the predicted manipulated region and the ground-truth region: _IoU = Area of Overlap / Area of Union_
+
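To illustrate the localization metric, here is a minimal Python sketch (not part of the task definition) computing IoU between a predicted binary manipulation mask and the ground-truth mask; the mask representation and the `manipulation_iou` helper name are assumptions made for the example.

```python
# Illustrative sketch only: IoU between predicted and ground-truth manipulation masks.
import numpy as np


def manipulation_iou(pred_mask, gt_mask):
    """IoU = area of overlap / area of union, for binary masks of equal shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection / union) if union > 0 else 1.0  # both masks empty: treat as perfect


# Example: a 4x4 image where the bottom-right 2x2 block was manipulated
gt = np.zeros((4, 4), dtype=bool); gt[2:, 2:] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:, 2:] = True  # slightly over-predicted region
print(manipulation_iou(pred, gt))  # 4 / 6 ≈ 0.667
```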
#### Quest for insight
Here are several research questions related to this challenge that participants can strive to answer in order to go beyond just looking at the evaluation metrics:
* <!-- # First research question-->
@@ -70,28 +74,45 @@ Here are several research questions related to this challenge that participants
<!-- # * Signing up: Fill in the [registration form]() and fill out and return the [usage agreement](). -->
<!-- # * Making your submission: To be announced (check the task read me) <!-- Please add instructions on how to create and submit runs to your task replacing "To be announced." -->
<!-- # * Preparing your working notes paper: Instructions on preparing you working notes paper can be found in [MediaEval 2023 Working Notes Paper Instructions]().-->
+ More details will follow.

#### References and recommended reading
- <!-- # Please use the ACM format for references https://www.acm.org/publications/authors/reference-formatting (but no DOI needed)-->
- <!-- # The paper title should be a hyperlink leading to the paper online-->
+ [1] Schinas, M., & Papadopoulos, S. (2024, June). SIDBench: A Python framework for reliably assessing synthetic image detection methods. In Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation (pp. 55-64).

- #### Task organizers
- * <!-- # First organizer-->
- * <!-- # Second organizer-->
- <!-- # and so on-->
+ [2] Karageorgiou, D., Bammey, Q., Porcellini, V., Goupil, B., Teyssou, D., & Papadopoulos, S. (2024, September). Evolution of Detection Performance throughout the Online Lifespan of Synthetic Images. In Trust What You learN (TWYN) Workshop, ECCV 2024.

- #### Task auxiliaries
- <!-- # optional, delete if not used-->
- * <!-- # First auxiliary-->
- * <!-- # Second auxiliary-->
- <!-- # and so on-->
+ [3] Mareen, H., Karageorgiou, D., Van Wallendael, G., Lambert, P., & Papadopoulos, S. (2024, December). TGIF: Text-Guided Inpainting Forgery Dataset. In 2024 IEEE International Workshop on Information Forensics and Security (WIFS) (pp. 1-6).
+
+ [4] Koutlis, C., & Papadopoulos, S. (2025). Leveraging representations from intermediate encoder-blocks for synthetic image detection. In European Conference on Computer Vision (pp. 394-411). Springer, Cham.
+
+ [5] Konstantinidou, D., Koutlis, C., & Papadopoulos, S. (2024). TextureCrop: Enhancing Synthetic Image Detection through Texture-based Cropping. arXiv preprint arXiv:2407.15500.
+
+ [6] Karageorgiou, D., Papadopoulos, S., Kompatsiaris, I., & Gavves, E. (2024). Any-Resolution AI-Generated Image Detection by Spectral Learning. arXiv preprint arXiv:2411.19417.
+
+ [7] Bammey, Q. (2024). Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing, 5, 1-9.
+ #### Task organizers
+ * Manos Schinas, MeVer group, CERTH-ITI, Greece
+ * Dimitrios Karageorgiou, MeVer group, CERTH-ITI, Greece
+ * Despina Konstantinidou, MeVer group, CERTH-ITI, Greece
+ * Olga Papadopoulou, MeVer group, CERTH-ITI, Greece
+ * Symeon Papadopoulos, MeVer group, CERTH-ITI, Greece
+ * Christos Koutlis, MeVer group, CERTH-ITI, Greece
+ * Hannes Mareen, IDLab-MEDIA, Univ. Ghent, Belgium
+ * Efstratios Gavves, VIS Lab, UvA, Netherlands
+ * Luisa Verdoliva, GRIP, Univ. Naples Federico II, Italy
+ * Davide Cozzolino, GRIP, Univ. Naples Federico II, Italy
+ * Fabrizio Guillaro, GRIP, Univ. Naples Federico II, Italy

#### Task schedule
- * XX May 2025: Development Data release <!-- * XX May 2025: Data release <!-- # Replace XX with your date. We suggest setting the date in May - of course if you want to realease sooner it's OK. -->
- * XX June 2025: Development Data release <!-- * XX June 2025: Data release <!-- # Replace XX with your date. We suggest setting the date in June - of course if you want to realease sooner it's OK. -->
- * XX September 2025: Runs due and results returned. Exact dates to be announced. <!--* XX September 2025: Runs due <!-- # Replace XX with your date. We suggest setting enough time in order to have enough time to assess and return the results by the Results returned.-->
- * 08 October 2025: Working notes paper <!-- Fixed. Please do not change.-->
- * 25-26 October 2025: MediaEval Workshop, Dublin, Ireland and Online. <!-- Fixed. Please do not change.-->
+ The schedule will be updated with the exact dates.
+
+ * May 2025: Development Data release
+ * June 2025: Development Data release
+ * September 2025: Runs due and results returned. Exact dates to be announced.
+ * 08 October 2025: Working notes paper
+ * 25-26 October 2025: MediaEval Workshop, Dublin, Ireland and Online.

#### Acknowledgements
- <!-- # optional, delete if not used-->
+ The task organization is supported by the Horizon Europe AI-CODE and vera.ai projects, which focus on the development of AI tools that support media professionals in their verification and fact-checking activities.
