The ACM Transactions on Multimedia Computing, Communications, and Applications organises a Special Issue on "Realistic Synthetic Data: Generation, Learning, Evaluation". The Special Issue is endorsed by the AI4Media project, and the Guest Editors are members of the AI4Media consortium.

The call for papers can be found below and in the attachment. The submission deadline is 31 March 2023.

The Topics of Interest include:

- Synthetic data for various modalities, e.g., signals, images, volumes, audio, etc.
- Controllable generation for learning from synthetic data.
- Transfer learning and generalization of models.
- Causality in data generation.
- Addressing bias, limitations, and trustworthiness in data generation.
- Evaluation measures/protocols and benchmarks to assess the quality of synthetic content.
- Open synthetic datasets and software tools.
- Ethical aspects of synthetic data.

Please consider submitting your work to this special issue!

Thank you,
Best regards,

----------------------------------------------------------------------------------------------------

Call for Papers: ACM TOMM SI on Realistic Synthetic Data: Generation, Learning, Evaluation

[Apologies for multiple postings]

ACM Transactions on Multimedia Computing, Communications, and Applications
Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation
Impact Factor 4.094
https://mc.manuscriptcentral.com/tomm
Submission deadline: 31 March 2023

*** CALL FOR PAPERS ***

[Guest Editors]
Bogdan Ionescu, Universitatea Politehnica din Bucuresti, Romania
Ioannis Patras, Queen Mary University of London, UK
Henning Muller, University of Applied Sciences Western Switzerland, Switzerland
Alberto Del Bimbo, Università degli Studi di Firenze, Italy

[Scope]
In the current context of Machine Learning (ML) and Deep Learning (DL), data, and especially high-quality data, are central to the proper training of networks. It is well known that DL models require a large quantity of annotated data to reach their full potential. Annotating content is traditionally done by human experts, or at least by typical users, e.g., via crowdsourcing. This is a tedious task that is time-consuming and expensive: massive resources are required, content has to be curated, and so on. Moreover, in specific domains data confidentiality makes the process even more challenging, e.g., in the medical domain, where patient data cannot easily be made publicly available.

With the advancement of neural generative models such as Generative Adversarial Networks (GANs) or, more recently, diffusion models, a promising way of solving or alleviating the problems associated with the need for domain-specific annotated data is to move toward realistic synthetic data generation. Such data are generated by learning specific characteristics of different classes of target data. The advantage is that these networks allow for unlimited variations within those classes while producing realistic outcomes that are typically hard to distinguish from real data. The generated data carry no proprietary or confidentiality restrictions and appear to be a viable solution for creating new datasets or augmenting existing ones. Existing work already shows very promising results for signal generation, image generation, and more.
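As a toy illustration of this class-conditional idea (a minimal sketch only; the generator architecture, dimensions, and names below are hypothetical and not taken from any specific system), a learned class embedding combined with a random noise vector is what lets one draw unlimited variations of a chosen class:

import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy class-conditional generator: noise + class embedding -> sample."""
    def __init__(self, n_classes=10, z_dim=64, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)   # learned class code
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Concatenate the noise with the embedding of the requested class.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

gen = CondGenerator()
z = torch.randn(16, 64)                           # 16 random latent codes
labels = torch.full((16,), 3, dtype=torch.long)   # all requesting class 3
samples = gen(z, labels)                          # 16 distinct class-3 samples
print(samples.shape)                              # torch.Size([16, 784])

Resampling z with the label held fixed yields, in principle, unlimited within-class variation; an untrained toy network like this one of course produces realistic outputs only after adversarial (or diffusion) training.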
Nevertheless, some limitations need to be overcome to advance the field. For instance, how can one control or manipulate the latent codes of GANs, or the diffusion process, so as to produce outputs with the desired classes and the desired variations, as in real data? In many cases, the results are not of high quality and must be selected manually by the user, which amounts to manual annotation. Bias in the input dataset may carry over into the generation process. Are the networks trustworthy? Does the generated content violate data privacy? In some cases, one can infer from a generated image the actual data used to train the network. Would it be possible to train the networks to produce new classes and to learn the causality of the data? How do we objectively assess the quality of the generated data? These are just a few open research questions.
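On the evaluation question, a common starting point in the literature is the Fréchet Inception Distance (FID). The minimal sketch below assumes the feature vectors have already been extracted (in practice, from a pretrained Inception network); the random arrays are illustrative stand-ins, so the resulting number means nothing by itself:

import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """FID between two feature sets of shape (n_samples, n_features)."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2048, 64))   # stand-in "real" features
fake = rng.normal(0.1, 1.1, size=(2048, 64))   # stand-in "synthetic" features
print(f"FID: {fid(real, fake):.3f}")           # lower is better

FID captures only the distributional similarity of features; it says nothing about memorisation, privacy, or downstream task utility, which is precisely why better measures and protocols are among the topics below.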
[Topics]
In this context, the special issue is seeking innovative algorithms and approaches addressing (but not limited to) the following topics:
- Synthetic data for various modalities, e.g., signals, images, volumes, audio, etc.
- Controllable generation for learning from synthetic data.
- Transfer learning and generalization of models.
- Causality in data generation.
- Addressing bias, limitations, and trustworthiness in data generation.
- Evaluation measures/protocols and benchmarks to assess the quality of synthetic content.
- Open synthetic datasets and software tools.
- Ethical aspects of synthetic data.

[Important Dates]
- Submission deadline: 31 March 2023
- First-round review decisions: 30 June 2023
- Deadline for revised submissions: 31 July 2023
- Notification of final decisions: 30 September 2023
- Tentative publication: December 2023

[Submission Information]
Prospective authors are invited to submit their manuscripts electronically through the ACM TOMM online submission system (https://mc.manuscriptcentral.com/tomm), adhering strictly to the journal guidelines (https://tomm.acm.org/authors.cfm). For the article type, please select the Special Issue denoted "SI: Realistic Synthetic Data: Generation, Learning, Evaluation".

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere. If the submission is an extended version of a previously published conference paper, please include the original work and a cover letter describing the new content and results that were added. According to ACM TOMM publication policy, previously published conference papers can be eligible for publication provided that at least 40% new material is included in the journal version.

[Contact]
For questions and further information, please contact Bogdan Ionescu / bogdan.ionescu@upb.ro.

[Acknowledgement]
The Special Issue is endorsed by the AI4Media "A Centre of Excellence delivering next generation AI Research and Training at the service of Media, Society and Democracy" H2020 ICT-48-2020 project, https://www.ai4media.eu/.

On behalf of the Guest Editors,
Bogdan Ionescu
https://www.aimultimedialab.ro/