Speakers

Keynote speakers


Don TOWSLEY
Distinguished Professor,
University of Massachusetts,
College of Information & Computer Sciences


Day: Monday, 26th of February 2018

Title:
Network Tomography

Abstract:
Network tomography refers to the use of inference techniques to reconstruct a detailed view of the internal state of a network (e.g., delay/loss/jitter on individual links) from external measurements taken between a selected subset of nodes referred to as monitors. In contrast to the conventional approach of direct measurement, as found in most commercial networks, network tomography works on end-to-end measurements taken between monitors, and thus avoids the need to rely on monitoring agents or protocol support at internal nodes, which is particularly useful for monitoring closed network systems.
Network tomography is a very rich area that relies on several branches of mathematics and statistics to address two important sets of questions and challenges. First, given a set of monitors, what network internal state can be inferred? For example, for which links can one infer loss rates and delay statistics? A related question regards how network structure and monitor placement affect the ability to identify such quantities as link loss rates and link delay statistics. Yet another question regards how to optimally place monitors so as to infer all internal network state. We will find that answers to such questions rely on ideas from linear algebra coupled with ideas from graph theory, and that they often lead to efficient algorithms, e.g., for placing monitors.
Second, what types of measurements should be made, and how should the observations be combined so as to infer internal network state, such as the delay/loss characteristics of individual links? Moreover, one would like to do so with the goal of reducing statistical error for a given amount of observation effort. Answers to these questions rely on ideas from estimation theory, maximum likelihood estimation, and experimental design.
In summary, network tomography is a rich area of practical import that draws on ideas from a number of different areas of mathematics and statistics. Much is understood about the engineering of systems for performing network tomography, while at the same time it remains a rich and fertile research area.

Successful network tomography raises a number of questions and challenges. These include:
1. Determining what network state can be inferred from a given set of monitors.
2. Determining how to make inferences regarding network state.
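As a toy illustration of the linear-algebraic flavour of these questions (the three-link topology, routing matrix, and success rates below are all invented for the example), note that a path's end-to-end success rate is the product of the success rates of its links, so taking logarithms turns loss tomography into a linear system that is solvable whenever the routing matrix has full column rank:

```python
import numpy as np

# Hypothetical 3-link network measured by 3 monitor-to-monitor paths.
# Routing matrix A: A[p, l] = 1 if path p traverses link l.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

# True (unknown) per-link success rates, used here only to simulate data.
link_success = np.array([0.99, 0.95, 0.90])

# Each path's end-to-end success rate is the product of its link rates,
# so logarithms turn the products into the linear system A @ x = b.
path_success = np.exp(A @ np.log(link_success))
b = np.log(path_success)

# Solve the system (least squares also handles over-determined A).
x, *_ = np.linalg.lstsq(A, b, rcond=None)
estimated = np.exp(x)
print(np.round(estimated, 4))  # recovers the per-link success rates
```

When the routing matrix is rank-deficient (e.g., two links always traversed together), some links become unidentifiable, which is exactly the monitor-placement question raised above.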

Short Bio
Don Towsley holds a B.A. in Physics (1971) and a Ph.D. in Computer Science (1975) from the University of Texas. He is currently a Distinguished Professor at the University of Massachusetts. He has held visiting positions at numerous universities and research labs, including the University of Paris VI, IBM Research, AT&T Research, Microsoft Research, and INRIA. His research interests include security, quantum communications, networks, and performance evaluation. He is a co-founder of ACM Transactions on Modeling and Performance Evaluation of Computing Systems (ToMPECS) and served as one of its first co-Editors-in-Chief. He served as Editor-in-Chief of the IEEE/ACM Transactions on Networking and as editor on numerous other editorial boards. He served as Program Co-chair of INFOCOM 2009, Performance’02, and the joint 1992 ACM SIGMETRICS/Performance Conference, as well as General Chair of COMSNETS 2012. He is a corresponding member of the Brazilian Academy of Sciences and has received numerous IEEE and ACM awards, including the 2007 IEEE Koji Kobayashi Award, the 2007 ACM SIGMETRICS Achievement Award, the 2008 ACM SIGCOMM Achievement Award, and the 2011 IEEE INFOCOM Achievement Award. He has also received numerous best paper awards, including the IEEE Communications Society 1998 William Bennett Paper Award, a 2008 ACM SIGCOMM Test of Time Award, the 10+ Year 2010 DASFAA Best Paper Award, the 2012 ACM SIGMETRICS Test of Time Award, and five ACM SIGMETRICS Best Paper Awards. He is a Fellow of both the ACM and the IEEE.



Nidhi HEGDE
Member of Technical Staff at Nokia Bell Labs France,
Team leader
on Machine Learning in Networks


Day: Monday, 26th of February 2018

Title:
Applied Mathematics Research in Industry: Relevance and Challenges

Abstract:
Applied mathematics research in industry has a long history and the storied past of Bell Labs is testament to the importance of research in industry. Its success relies essentially on how relevant the work and results are to the industry, and how challenging the problems are to the researchers. These twin aspects of relevance and challenge will be evident in two technical problems I will present. The first is a classical problem of scheduling in networks. We will see how recent developments in industry keep this problem challenging and how our fundamental research approach has adapted in keeping our results relevant. The second problem concerns recent challenges in Augmented Intelligence, offering services that rely on results from fundamental research in machine learning.

Short Bio
Nidhi Hegde is a Member of Technical Staff at Nokia Bell Labs France, leading a new team on Machine Learning in Networks. Previously, she was a principal scientist at the Technicolor Research Lab in Paris from 2010 to 2014, where she studied models for control in the smart grid and the analysis of social networks for information dissemination, and a research engineer at France Telecom R&D from 2005 to 2010, where she developed models for network dimensioning and wireless scheduling. Prior to that, she held positions at INRIA (Sophia-Antipolis, France) and at CWI and EURANDOM (The Netherlands), where she worked on models for performance analysis.


Carl GRAHAM
Researcher at the Centre National de la Recherche Scientifique (CNRS),
part-time associate professor at the École polytechnique


Day: Tuesday, 27th of February 2018

Title:
Some examples of performance evaluation of distributed algorithms in communication networks

Abstract:
Communication networks are large complex systems subject to congestion and failures, and the diverse tasks hosted on them have strict Quality of Service (QoS) specifications. These networks must be regulated by distributed algorithms performed by the nodes with little or no global supervision or information transfer. An important scientific and industrial issue is the evaluation of the performance of these algorithms, with the aim of improving them and the network architecture in order to save network resources and user time. The talk will develop and illustrate this issue in three parts. The first will concern a much-studied class of algorithms based on load balancing. The second part will describe modeling and mathematical issues for the Transmission Control Protocol (TCP), which is implemented by users to regulate Internet traffic. The third part will describe work done in collaboration with OrangeLabs and a PhD student on a “Bourse Cifre” grant jointly supervised by them and myself, which consisted of devising appropriate teletraffic models for exploiting the proprietary Internet traces of Orange and of evaluating the performance of Internet cache eviction policies such as the Least Recently Used (LRU) policy.
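For readers unfamiliar with the LRU policy mentioned above, here is a minimal self-contained sketch of the eviction rule and of measuring a hit rate (the toy request trace is invented; real evaluations would of course use traffic traces such as the proprietary Orange ones discussed in the talk):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size cache that evicts the least recently used item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def request(self, key):
        """Return True on a hit; on a miss, insert key, evicting if full."""
        if key in self.store:
            self.store.move_to_end(key)     # mark as most recently used
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[key] = True
        return False

# Toy trace: popular items repeated, interleaved with one-off requests.
trace = [1, 2, 1, 3, 1, 2, 4, 1, 5, 2]
cache = LRUCache(capacity=3)
hits = sum(cache.request(k) for k in trace)
print(f"hit rate: {hits / len(trace):.2f}")  # → hit rate: 0.40
```

Performance evaluation of such policies typically asks how the hit rate behaves under realistic request distributions (e.g., Zipf popularity) as the cache size and catalogue grow.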

Short Bio
Carl Graham. École Normale Supérieure de la rue d’Ulm (ENS Ulm, Paris), PhD, Habilitation à diriger les recherches (HdR). Researcher at the Centre National de la Recherche Scientifique (CNRS), part-time associate professor at the École polytechnique.



Invited Companies

Day: Tuesday, 27th of February


PROLOGISM (14:00 – 14:30)

Invited speaker:
Nicaise CHOUNGMO FOFACK, PhD, MSc, Eng.

Title:
Big & Fast Data: some open problems

Abstract:
In this talk, we will present two major problems that some IT departments face when deploying and operating Big/Fast Data technologies. With an emphasis on Apache Kafka and Apache Hadoop YARN, we will show that engineers lack efficient tools to assess performance, to size the components, and to optimally control these distributed systems. The situation becomes worse when tackling these issues in an online setting. We will conclude this talk by presenting the Ph.D. fellowships that our company offers to candidates who are interested in these challenges.

Short Bio
Doctor Nicaise CHOUNGMO FOFACK is a former Ph.D. student at INRIA Sophia Antipolis and now works at PROLOGISM as a Big Data & Data Science Application Architect. He has supported several customers (in telecoms, such as Orange; in finance/banking, such as Societe Generale) through the various stages of extracting value from their data: from (i) identifying the business opportunity, (ii) framing the use cases and estimating their cost, (iii) developing PoCs in agile mode, and (iv) drafting specifications and proposing architectures, up to (v) putting the solution into production. He also works to bridge the gap between research and industry, informing both communities of common issues that they could solve through collaboration.


AMADEUS (14:30 – 15:00)

Invited speaker:
Milos COLIC, Ir.

Title:
Big Data, difficulties are not always where you would expect

Abstract:
Big Data became a buzzword almost overnight, hand in hand with artificial intelligence and data mining, usually encompassing aspects of both statistical analysis and prediction theory. These are all hard problems, problems that require ingenuity and intellectual agility to comprehend and model. Difficulties arise from various root causes: the amount of data processed, algorithmic design, theoretical constraints (such as the “no free lunch” theorem), the inability to perfectly model observed phenomena, etc. However, beyond theoretically imposed constraints, there is a vast amount of hidden organizational and architectural issues lurking that could cause an amazing idea to never reach the customers. This talk revolves around these hidden problems that may, and will, arise in a corporate environment due to several parallel race conditions between various stakeholders. In a corporate environment, Big Data analysis becomes more an item of arcane knowledge and alchemy than anything else. One is faced with many data sources coming from individual clients that need to be made coherent despite the fact that they are not properly standardized and do not conform to a common logic. In addition to external data sources, it has become a fact of the industry that hardware is externalized as well, implying yet another stakeholder, another dependency, another point of failure, another source of deadlines. Finally, adding to the recipe a constantly evolving market and the need to satisfy ever more frequent client requirements, one arrives at a setting in which new problems arise on a daily basis. In this situation, owing to commitments to provide excellence and high quality to one's clients, it becomes apparent that “Big Data” is hard, but not where you would expect it to be. This talk is Amadeus’ testimonial of how big data evidently brings big gains, but also a large bucket of constraints, race conditions, and problems that are not usually associated with big data.


SAP Labs France (15:00 – 15:30)

Invited speaker:
Antonino SABETTA

Title:
From “classical” machine-learning models to deep learning for mining vulnerabilities and fixes in open-source repositories

Abstract:

Open-source software comes with great opportunities, but no free lunch. Establishing an effective vulnerability management process to maintain a secure open-source software supply chain requires that the sources of vulnerability information provide reliable, timely, and detailed data, down to the level of which lines of code need to be updated to mitigate the vulnerability. Unfortunately, the current de-facto standard sources (e.g., the NVD) often require a significant amount of manual work, which does not scale well with the increasing usage of open-source software.

In this talk, we give an overview of the work done by SAP Security Research to overcome this problem, investigating how machine learning techniques could be used to extract the needed information directly from source code repositories by automatically classifying the commits that are security-relevant. We report on our experience, from our early attempts with established “classical” machine learning models to our most recent explorations in the territory of deep neural networks applied to the analysis of source code changes.

Short Bio

Antonino Sabetta is a senior researcher at the Security Research department of SAP. The main focus of his recent work is the analysis and management of vulnerabilities of open-source components embedded in large-scale enterprise applications. In particular, Antonino is interested in the application of machine-learning to the mining of open-source software repositories and the automation of the vulnerability management workflow.

Antonino holds a PhD in Computer Science and Automation Engineering, received from the University of Rome Tor Vergata, Italy, in 2007. He had received his Laurea cum laude from the same university in 2003.


Symag by BNP Paribas (16:00 – 16:30)

Invited speaker:
Julien BONNEL, Head of Innovation

Title:
Blockchain, from promise to reality

Abstract:
The obstacles to overcome before envisaging value-added industrial deployments in the financial and retail sectors.

Short Bio

Chief Innovation Officer at Symag by BNP Paribas PF. More than 27 years of professional experience in the retail, operational marketing, and financial sectors: management of software products, teams, and projects.


Olea Medical – Canon Medical Systems Corp (16:30 – 17:00)

Invited speaker:
Stefano CASAGRANDA, PhD

Title:
Making sense in an imperfect world

Abstract:
Dealing with medical data and bio-images is a very delicate topic, because it means interpreting data generated from living bodies that are proxies of complex low-level biological phenomena. Furthermore, the results of a wrong model prediction based on these data could lead to serious consequences for patient health. While medical data share many Big Data challenges, a distinctive problem in healthcare is that patient data are often imperfect, non-quantitative, and highly variable, making reproducible studies difficult. We need to find a way to help make a good diagnosis out of awfully low-quality data, rarely knowing what the ground truth is. Through some examples, we will explore how to make sense of these imperfect data, focusing particularly on the possible use of Deep Learning for advancing CAD (Computer Aided Diagnosis) software.

Short Bio
Stefano Casagranda is a Research and Innovation Engineer at Olea Medical in La Ciotat (France). He received his PhD in Applied Mathematics for Biological Systems in 2017 from INRIA Sophia Antipolis Méditerranée, after joining the BIOCORE team.

AMADEUS (17:00 – 17:30)

Invited speaker:
Simon NANTY, Ph.D.

Title:
Travel search engine optimization

Abstract:
When searching for a trip, Amadeus search engines have to browse hundreds of thousands of different possible flights and routes and to return, in the end, only a few travel solutions among the most relevant ones. These few tens or hundreds of solutions have to be chosen and ranked with care, so as to display to end-users the ones that best suit their needs and, thus, to improve the booking rate. The presented work is part of an Amadeus project whose overall objective is to significantly improve customer conversion via the implementation of data-driven and adaptive search strategies. This improved search engine takes into account convenience criteria of the travel solutions in the process of selecting the ones to be displayed. In this work, we set up choice modelling and machine learning-based tools to define the weighting of convenience criteria that optimizes the efficiency of the product with respect to its objectives.
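As a highly simplified sketch of the kind of weighted-criteria ranking described above (the itineraries, criteria, and weights below are invented for illustration; in the actual work the weights are learned from booking data via choice modelling rather than fixed by hand):

```python
# Hypothetical candidate itineraries with three convenience criteria.
solutions = [
    {"id": "A", "price": 320.0, "duration_h": 5.5, "stops": 1},
    {"id": "B", "price": 280.0, "duration_h": 9.0, "stops": 2},
    {"id": "C", "price": 450.0, "duration_h": 3.0, "stops": 0},
]

# Illustrative weights (lower total score = better solution).
weights = {"price": 1.0, "duration_h": 20.0, "stops": 30.0}

def score(solution):
    """Weighted sum of the convenience criteria of one itinerary."""
    return sum(weights[k] * solution[k] for k in weights)

ranked = sorted(solutions, key=score)
print([s["id"] for s in ranked])  # → ['A', 'C', 'B']
```

The interesting part of the actual project is precisely how to choose such weights so that the displayed ranking maximizes conversion, which is where choice modelling and machine learning come in.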

Short Bio
Data scientist at Amadeus, Sophia-Antipolis, since 2015. PhD obtained in 2015 jointly from Grenoble University and CEA.


ExactCure (17:30 – 18:00)

Invited speaker:
Fabien Astic, Co-founder

Title:
AI for a personalized eHealth

Abstract:
Problem: Inappropriate medical treatments sometimes cause side effects that lead to 20,000 deaths and cost the public healthcare system about €10 billion per year in France alone.

Solution: Our Digital Twin simulates the efficacy and interactions of drugs in the body of a patient based on his/her personal characteristics. It helps him/her avoid under-doses, overdoses, and drug interactions.

Our complex bio-models result from years of fundamental research with Inria. Our solution will be certified as a Medical Device.


Mitsubishi Electric (17:30 – 18:00)

Invited speaker:
Antonio BAZCO

Title:
Capacity approximation of Cooperative Channel with Distributed CSIT

Abstract:
Although the capacity of cooperative scenarios in wireless networks is still unknown for many cases where the Channel State Information is not perfect, capacity approximations such as the Generalized Degrees-of-Freedom offer important insights into the behaviour and dimensionality of the network. For the case where each transmitter has a different estimate of the channel, we analyze how the Broadcast Channel is affected by those differences with respect to the case where all the transmitters share the same estimate. Our result shows that, for some CSIT topologies, there is no loss of Generalized Degrees-of-Freedom even if one transmitter has no channel information.
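As a reminder of the standard notion underlying this abstract (this is the usual textbook definition, not a result specific to the talk), the Generalized Degrees-of-Freedom capture how capacity scales at high SNR when the cross links are parameterized by an exponent α:

```latex
% GDoF of a network whose interfering links have SNR exponent \alpha:
% capacity grows like d(\alpha)\,\log(\mathrm{SNR}) at high SNR.
d(\alpha) \;=\; \lim_{\mathrm{SNR}\to\infty}
  \frac{C\!\left(\mathrm{SNR},\ \mathrm{INR}=\mathrm{SNR}^{\alpha}\right)}{\log \mathrm{SNR}}
```

A statement such as “no loss of Generalized Degrees-of-Freedom” then means that this limit under distributed CSIT matches the one obtained when all transmitters share the same channel estimate.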

