Talks by distinguished ECE NTUA alumni at mini-workshop on Machine Learning

On January 7, 2019, three distinguished ECE NTUA alumni, Alexandros Dimakis, Christos Tzamos and Costis Daskalakis, spoke in front of a large audience at a Machine Learning mini-workshop in the new building of the School of Electrical and Computer Engineering of NTUA. The workshop took place as part of a seminar on Optimization and Machine Learning organized by Kyriakos Axiotis, Dimitris Tsipras and Manolis Zampetakis, MIT PhD students and themselves ECE NTUA alumni, under the auspices of the Computation and Reasoning Lab (CoReLab) of ECE NTUA.

Alexandros Dimakis: Deep Generative Models and Inverse Problems

Prof. Dimakis explained what deep generative models, such as Generative Adversarial Networks (GANs), are and how they can be used to solve linear inverse problems.

Abstract: Linear inverse problems involve the reconstruction of an unknown vector (e.g. a tomography image) from an underdetermined system of noisy linear measurements. Most results in the literature require that the reconstructed signal has some known structure, e.g. it is sparse in some basis (usually Fourier or Wavelet). In this work we show how to remove such prior assumptions and rely instead on deep generative models (e.g. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs)). We show how the problems of image inpainting (completing missing pixels) and super-resolution are special cases of our general framework. We generalize theoretical results on compressive sensing for deep generative models and discuss several open problems.
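The core idea of the abstract, studied in the "Compressed Sensing using Generative Models" line of work, can be made concrete with a toy sketch: instead of assuming the signal is sparse, assume it lies in the range of a generator G and search the latent space for a z whose image matches the measurements. In the sketch below the "generator" is just a fixed random linear map standing in for a trained GAN or VAE decoder, and all dimensions are illustrative choices, not values from the talk.

```python
import numpy as np

# Toy sketch of an inverse problem with a generative prior:
# recover x = G(z*) from underdetermined noisy measurements y = A x + noise
# by minimizing ||A G(z) - y||^2 over the latent variable z.
# Here G(z) = G @ z is a random linear stand-in for a trained decoder.

rng = np.random.default_rng(0)
n, m, k = 100, 30, 5                  # signal dim, measurements (m << n), latent dim

G = rng.standard_normal((n, k))                # "generator" matrix
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix

z_true = rng.standard_normal(k)
x_true = G @ z_true
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

# Gradient descent on f(z) = ||M z - y||^2 with M = A G,
# using a step size 1/L where L is the gradient's Lipschitz constant.
M = A @ G
L = 2 * np.linalg.norm(M, 2) ** 2
z = np.zeros(k)
for _ in range(500):
    z -= (1.0 / L) * 2 * M.T @ (M @ z - y)

x_hat = G @ z
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

With only m = 30 measurements of a 100-dimensional signal the system is underdetermined, yet the 5-dimensional latent parameterization makes recovery well-posed; with a real (nonlinear) decoder the same latent-space optimization is typically run with autodiff and restarts.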

Talk slides can be found here.

Christos Tzamos: Learning From Positive Examples

Prof. Tzamos introduced new approaches to learning from positive examples.

Abstract: We consider the learnability of geometric concepts and distributions given only positive examples. While learning from positive examples is generally not possible, we propose two approaches that enable us to obtain positive results. When samples are generated from a structured distribution such as a Gaussian, we show that any concept class that has low Gaussian surface area can be learned using only few positive samples. In addition, we show that when an oracle is available that can check the validity of generated examples during training, stronger results can be obtained even under arbitrary distributions and under the presence of "model errors". We show that, while proper learning often requires exponentially many queries to the invalidity oracle, improper distribution learning can be done efficiently using polynomially many queries.
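To make the setting concrete, here is a toy sketch (not the paper's algorithm) of what "learning a geometric concept from positive examples under a structured distribution" means: samples are drawn from a Gaussian, the learner sees only the points that lie inside an unknown axis-aligned box, and it outputs the tightest box containing them. The box bounds and sample size below are illustrative assumptions.

```python
import numpy as np

# Toy illustration of learning from positive examples only:
# an unknown axis-aligned box concept under a standard Gaussian.
# The learner never sees negative examples; it returns the smallest
# box consistent with the positives it observed.

rng = np.random.default_rng(1)
d = 2
lo_true = np.array([-1.0, -0.5])   # unknown concept: the box [lo_true, hi_true]
hi_true = np.array([0.8, 1.2])

# Nature draws Gaussian samples; only points inside the box are shown.
samples = rng.standard_normal((20000, d))
inside = np.all((samples >= lo_true) & (samples <= hi_true), axis=1)
positives = samples[inside]

# Hypothesis: the tightest box containing all positive examples.
lo_hat = positives.min(axis=0)
hi_hat = positives.max(axis=0)
print("learned box:", lo_hat, hi_hat)
```

The learned box always sits inside the true one (it can never over-generalize), and under the Gaussian it converges to the true box as samples accumulate; the talk's results concern far richer concept classes, characterized via Gaussian surface area, and the oracle model for arbitrary distributions.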

Talk slides can be found here.

Constantinos Daskalakis: Improving Generative Adversarial Networks using Game Theory and Statistics

Prof. Daskalakis showed how Game Theory and Statistics can be employed to improve Generative Adversarial Networks.

Abstract: Generative Adversarial Networks (aka GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are trained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both Optimization and Statistics. Their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these important challenges. We use Game Theory towards improving GAN training, and Statistics towards scaling up the dimensionality of the generated distributions.
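The oscillations mentioned in the abstract already appear in the simplest zero-sum game, min_x max_y (x·y), whose equilibrium is (0, 0). The toy sketch below (not the talk's models) contrasts plain simultaneous gradient descent/ascent, which spirals away from equilibrium, with optimistic gradient updates, a game-theoretic fix in the spirit of the work discussed; the step size and iteration counts are illustrative.

```python
# Toy bilinear game min_x max_y f(x, y) = x * y, equilibrium at (0, 0).
# Plain gradient descent/ascent (GDA) diverges on this game, while
# optimistic GDA (each player adds a "prediction" term built from the
# previous gradient) converges to the equilibrium.

lr = 0.1

def gda(steps=1000):
    """Simultaneous gradient descent (x) / ascent (y) on f = x*y."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy
    return x, y

def ogda(steps=1000):
    """Optimistic GDA: step with 2*lr on the current gradient,
    minus lr on the previous one (an extrapolation toward the
    opponent's next move)."""
    x, y = 1.0, 1.0
    pgx, pgy = 0.0, 0.0                   # previous gradients
    for _ in range(steps):
        gx, gy = y, x
        x = x - 2 * lr * gx + lr * pgx
        y = y + 2 * lr * gy - lr * pgy
        pgx, pgy = gx, gy
    return x, y

print("GDA:  ", gda())    # spirals outward, far from (0, 0)
print("OGDA: ", ogda())   # converges toward (0, 0)
```

Each GDA step multiplies the distance from the equilibrium by sqrt(1 + lr^2) > 1, which is exactly the cycling behavior that plagues GAN training; the optimistic correction damps the rotation and yields convergence on this game.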

Talk slides can be found here.

Short Bios of Speakers

Alex Dimakis is an Associate Professor at the Electrical and Computer Engineering department, University of Texas at Austin. From 2009 until 2012 he was with the Viterbi School of Engineering, University of Southern California. He received his Ph.D. in 2008 and his M.S. in 2005, both in electrical engineering and computer sciences from UC Berkeley, and his Diploma from the National Technical University of Athens in 2003. During 2009 he was a CMI postdoctoral scholar at Caltech. He received an ARO Young Investigator Award in 2014, the NSF CAREER Award in 2011, a Google Faculty Research Award in 2012 and the Eli Jury dissertation award in 2008. He is the co-recipient of several best paper awards, including the joint Information Theory and Communications Society Best Paper Award in 2012. He served two terms as an associate editor for IEEE Signal Processing Letters and is currently serving as an associate editor for IEEE Transactions on Information Theory. His research interests include information theory, coding theory and machine learning.

Christos Tzamos is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and a member of the Theory of Computing group. His research interests lie in the interface of Theory of Computation with Economics and Game Theory, Machine Learning, Statistics and Probability Theory. He completed his PhD in the Theory of Computation group of MIT, advised by Costis Daskalakis, and worked as a postdoctoral researcher at Microsoft Research New England. Before that, he studied Electrical and Computer Engineering at NTUA and was a member of CoReLab, working with Dimitris Fotakis. Christos received the George M. Sprowls Award for the best Computer Science PhD thesis in the EECS Department of MIT and is a recipient of the Simons Graduate Award in Theoretical Computer Science.

Constantinos Daskalakis is a professor of computer science and electrical engineering at MIT. He holds a diploma in electrical and computer engineering from the National Technical University of Athens, and a Ph.D. in electrical engineering and computer sciences from UC Berkeley. His research interests lie in theoretical computer science and its interface with economics, probability, learning and statistics. He has been honored with the 2007 Microsoft Graduate Research Fellowship, the 2008 ACM Doctoral Dissertation Award, the Game Theory and Computer Science (Kalai) Prize from the Game Theory Society, the 2010 Sloan Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, the 2011 Ruth and Joel Spira Award for Distinguished Teaching, the 2012 Microsoft Research Faculty Fellowship, the 2015 Research and Development Award by the Vatican Giuseppe Sciacca Foundation, the 2017 Google Faculty Research Award, the 2018 Simons Investigator Award, and the 2018 Rolf Nevanlinna Prize from the International Mathematical Union. He is also a recipient of Best Paper awards at the ACM Conference on Economics and Computation in 2006 and in 2013.