PhD Seminars VII

March 29, 2021

Talk 1

Speaker

Simone Ebert (Biovision)

Title

The Role of Dynamical Synapses in Retinal Surprise Coding

Abstract

At the first stage of visual perception, the retina transforms a visual scene into an efficient neural code that is conveyed to the rest of the brain. The retina is organized into parallel pathways that selectively carry information about specific features of the visual input rather than the raw image. In addition, the retina predominantly encodes changes in the visual scene by responding to deviations from an expectation based on the scene's history. In a rapidly changing visual environment, the retina must quickly adapt its prediction to the current input in order to efficiently detect a deviation. In this context, we are exploring the role of dynamical synapses, which adapt on a short timescale, in the retina's ability to accurately detect ‘surprise’.
To this end, we take advantage of an experimentally observable example of surprise detection in the retina, the Omitted Stimulus Response (OSR): when a regular sequence of flashes suddenly ends, the retina responds to this “surprise” by generating a pulse of activity signaling the missing stimulus, precisely timed to the period of the preceding flash sequence. It is not yet clear which computations within and between the retinal pathways can provide such a high information content in the output spiking rate. We believe this example is key to understanding how the retina responds to surprise in more complex visual scenes.
We conduct electrophysiological experiments in which we selectively inhibit retinal pathways and cell types to identify the circuit components necessary for this output behavior. Based on these findings, we construct the architecture of a computational model in which cells are connected via dynamical synapses. We then simulate the retina's response to a periodic stimulus to examine the role of short-term plasticity in shaping the OSR.
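As a rough illustration of the kind of mechanism the abstract describes, the following is a minimal sketch (not the speaker's actual model) of a depressing dynamical synapse driven by a periodic flash sequence, loosely in the spirit of the Tsodyks-Markram formalism; all parameter values and names (`tau_rec`, `U`, the flash timing) are assumptions chosen only for illustration.

```python
# Illustrative sketch of a dynamical (depressing) synapse driven by a
# periodic flash sequence. Parameter values are assumptions for
# illustration, not the speaker's actual model.
import numpy as np

dt = 0.001            # simulation step (s)
T = 2.0               # total duration (s)
flash_period = 0.1    # period of the flash sequence (s)
n_flashes = 12        # the sequence then stops ("surprise")
tau_rec = 0.3         # recovery time constant of synaptic resources (s)
U = 0.5               # fraction of resources used per presynaptic event

time = np.arange(0.0, T, dt)
stimulus = np.zeros_like(time)
for k in range(n_flashes):
    idx = int(round(k * flash_period / dt))
    stimulus[idx] = 1.0                      # brief flash-driven presynaptic event

x = 1.0                                      # available synaptic resources
released = np.zeros_like(time)               # transmitter released per step
for i, s in enumerate(stimulus):
    x += dt * (1.0 - x) / tau_rec            # resources recover between events
    if s > 0:
        released[i] = U * x                  # release depends on current resources
        x -= U * x                           # depletion after each flash

# During the regular sequence the synapse depresses; once the flashes stop,
# resources recover, changing the drive the downstream circuit receives
# around the time of the omitted stimulus.
print(f"release on first flash: {released[released > 0][0]:.3f}")
print(f"release on last flash:  {released[released > 0][-1]:.3f}")
```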


Talk 2

Speaker

Othmane Belmoukadam (DIANA)

Title

From Encrypted Video Traces to Viewport Classification

Abstract

The Internet has changed drastically in recent years: numerous novel applications and services have emerged, all centered on consuming digital content. In parallel, users are no longer satisfied with the Internet's best-effort service; instead, they expect a seamless, high-quality service from the network. This has increased the pressure on Internet service providers (ISPs) in their effort to efficiently engineer their traffic and improve their end users' experience. Content providers, for their part, have shifted towards end-to-end encryption (e.g., TLS/SSL) to further protect their customers' content, which complicates even further the ISPs' task of handling the traffic in their networks. The challenge is particularly notable for video streaming traffic, which is driving Internet traffic growth and which imposes tight constraints on the quality of service provided by the network, depending on the content of the video stream and the equipment on the end-user premises. Video streaming relies on the dynamic adaptive streaming over HTTP (DASH) protocol, which takes into account the underlying network conditions (e.g., delay, loss rate, and throughput) and the viewport capacity (e.g., screen resolution) to improve the end-user experience within the limit of available resources. Nevertheless, insight into encrypted video traffic is of great help to ISPs, as it allows them to take appropriate network management actions. In this work, we propose an experimental framework able to infer fine-grained video flow information, such as chunk sizes, from encrypted YouTube video traces. We also present a novel technique to separate video and audio chunks from encrypted traces based on Gaussian Mixture Models (GMM). We evaluate our technique against real chunk sizes (audio/video) collected through the browser using the Chrome Web Request API [1]. We then leverage these results and our dataset to train a model able to predict the viewport class (either SD or HD) per video session with 92% average accuracy and an 85% F1 score.
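For readers unfamiliar with GMM-based separation, here is a minimal sketch, assuming synthetic chunk-size data, of how a two-component Gaussian mixture can split observations into an audio-like and a video-like cluster; the data, parameters, and interpretation rule are assumptions for illustration and are not the authors' pipeline.

```python
# Illustrative sketch: separating audio and video chunk sizes with a
# two-component Gaussian mixture. Synthetic data and all parameters are
# assumptions for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic chunk sizes (bytes): audio chunks tend to be small and regular,
# video chunks larger and more variable; log scale eases the separation.
audio = rng.normal(loc=160_000, scale=15_000, size=300)
video = rng.normal(loc=1_200_000, scale=400_000, size=300)
chunks = np.concatenate([audio, video]).clip(min=1).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0)
labels = gmm.fit_predict(np.log(chunks))      # cluster in log-size space

# Interpret the component with the smaller mean as audio.
audio_comp = int(np.argmin(gmm.means_.ravel()))
est_audio = chunks[labels == audio_comp]
est_video = chunks[labels != audio_comp]
print(f"estimated audio chunks: {len(est_audio)}, mean size {est_audio.mean():.0f} B")
print(f"estimated video chunks: {len(est_video)}, mean size {est_video.mean():.0f} B")
```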
