Who are we?
A school of the Institut Mines-Télécom, Télécom Paris is the leading French school for generalist engineers in the digital field. Through its excellent teaching and research, Télécom Paris sits at the heart of a unique innovation ecosystem built on the cross-disciplinary nature of its training, its research departments and its business incubator.
A founding member of the Institut Polytechnique de Paris, Télécom Paris is positioned as an open-air laboratory for all the major technological and societal challenges.
Recent advances in computing and widespread access to massive digital information are leading to an unprecedented deployment of optimization algorithms across many domains (e.g. health and medicine, (cyber-)security, intelligent transport, predictive maintenance). Most of the algorithmic approaches developed over the last decade have mainly aimed to solve scaling issues, so that Big Data can be exploited exhaustively.
The objective of the future researcher will be to develop numerical optimization in the service of the mathematics of frugal artificial intelligence.
First, the value of future algorithms increasingly depends on their frugality. In addition to the classic objective of high accuracy, efficiency in terms of energy consumption, which translates into data, memory and time efficiency, now takes a major place in many applications. Data efficiency can be achieved by learning algorithms that reduce the amount of training data required, either through appropriate transfer from one task to another or through active learning. However, for many tasks (e.g. natural language processing, computer vision, machine listening) an extremely large amount of data is still required to reach very good performance. Memory and time efficiency can be obtained through the design of sparse models or of online algorithms with fast convergence rates and simple iterations.
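As an illustration of the sparse models mentioned above, proximal-gradient methods zero out unneeded coefficients during optimization. The following is only a minimal sketch (all sizes and parameters are illustrative, not part of this call) of iterative soft-thresholding (ISTA) for an l1-regularized least-squares problem:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrinks coefficients toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam=0.05, n_iter=1000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal (sparsifying) step
    return x

# Toy problem: a 5-sparse signal observed through a random matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = ista(A, b, lam=0.05, n_iter=1000)
print("nonzero coefficients:", np.count_nonzero(np.abs(x_hat) > 1e-8))
```

The soft-thresholding step sets small coefficients exactly to zero, so the iterates themselves are sparse, which is precisely what matters for memory-frugal models.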
Moreover, there is a need for studies at the interplay between artificial intelligence models and the optimization algorithms used to train them. As data sets grew ever larger, this line of research showed that stochastic gradient methods were the most efficient way to digest all this data, through theoretical studies combining complexity bounds from statistics and optimization. A new challenge has emerged since then: data sets and models have become so large that training and using them requires an unprecedented amount of computing power, with a non-negligible environmental impact. New models and algorithms that account for this environmental impact, together with the traditional approximation, estimation and optimization errors, in a principled way, are still to be discovered.
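The efficiency of stochastic gradient methods on large data sets comes from their cheap iterations: each step uses a single sample, so its cost is independent of the data set size. A minimal sketch on least squares (the sizes, step size and seed below are illustrative):

```python
import numpy as np

# Synthetic regression data: n samples in dimension d
rng = np.random.default_rng(1)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

# Stochastic gradient descent on (1/n) * sum_i 0.5*(x_i . w - y_i)^2:
# each iteration touches ONE sample, costing O(d) instead of the O(n*d)
# of a full-gradient step.
w = np.zeros(d)
step = 0.01
for t in range(20000):
    i = rng.integers(n)                  # draw one sample uniformly at random
    grad_i = (X[i] @ w - y[i]) * X[i]    # unbiased estimate of the full gradient
    w -= step * grad_i

print("relative error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```

The single-sample gradient is an unbiased estimate of the full gradient, which is the property the statistical-plus-optimization complexity analyses mentioned above rely on.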
Second, neural networks (and the technologies that build on them, such as variational auto-encoders or GANs) have introduced new problems into the numerical optimization community. A number of these problems require minimizing cost functions on non-Euclidean spaces, such as functional spaces, manifolds, or even the space of probability measures. This new avenue requires revisiting the traditional approaches outside the Euclidean context in which they have been developed over the past twenty years. These new problems appear in at least three disciplinary fields at the heart of artificial intelligence: computational optimal transport, Monte Carlo methods and the optimization of neural networks. The problem of computational optimal transport arises, among others, in generative models, where one seeks to generate samples faithful to a given data set by approximating an optimal transport plan between a simple distribution and the data distribution. Monte Carlo methods are at the heart of stochastic optimization: within the iterative algorithms used in artificial intelligence, it is crucial to construct smart approximations of certain intractable integrals. Lastly, the optimization of neural networks is a central problem for the community and requires various ways of reducing computation time by quantization or sparsification of the computations (low-rank, sketching techniques). On the theoretical level, recent works provide insight into the global convergence of algorithms for neural networks. On the practical level, the search for sparse neural models is an essential problem for the portability and frugality of the algorithms.
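Computational optimal transport is often approached through entropic regularization, solved by Sinkhorn-Knopp matrix scaling. A minimal sketch on two discrete histograms (the point clouds, regularization strength and iteration count below are illustrative only):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iter=1000):
    """Entropic-regularized optimal transport between histograms a and b
    with cost matrix C, via Sinkhorn-Knopp alternating scaling."""
    K = np.exp(-C / eps)                 # Gibbs kernel of the cost
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    return u[:, None] * K * v[None, :]   # approximate transport plan

# Toy example: transport between two shifted point clouds on the line
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.2, 1.2, 5)
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost matrix
a = np.full(5, 0.2)                      # uniform source histogram
b = np.full(5, 0.2)                      # uniform target histogram
P = sinkhorn(a, b, C)
print("marginal error:", np.abs(P.sum(axis=0) - b).max())
```

Each iteration only needs matrix-vector products with the kernel, which is what makes this formulation attractive for the large-scale generative-modeling applications evoked above.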
Finally, since the optimization problems encountered in neural networks are non-differentiable and non-convex in nature, a deep understanding of the geometry of non-convex and non-differentiable functions is key to the study and design of algorithms.
The Hi! Paris center was founded to answer a need for cutting-edge research, ranging from fundamental research on methods for AI and data analytics to business applications across all sectors and implications for society. This position fits squarely within this approach, and the recruited professor, though based in the S2A team of Télécom Paris, will take an active part in the center's activities.
PREFERRED SCIENTIFIC EXPERTISE
Main expertise:
• Numerical optimization
• Stochastic optimization
• Frugal artificial intelligence
Other expertise of interest:
• Monte Carlo methods
• Optimal transport
• Sequential learning, active learning
Required skills, experience, and knowledge:
Preferred skills, experience, and knowledge:
Other abilities and skills:
Applicants should submit a single PDF file using the following link:
Contact for further information:
The selection process consists of 4 steps:
ADDITIONAL INFORMATION: