With the advance of machine learning techniques, we have seen rapid adoption of this
technology in a broad range of application scenarios.
As machine learning becomes more broadly deployed, these approaches become part of the attack
surface of modern IT systems and infrastructures. The class
will cover several attack vectors and defenses for today's and future intelligent systems built on
machine learning technology.
Machine Learning Overview
Machine Learning is a quickly advancing research area that has led to
several breakthroughs in recent years. We will give a
short introduction to some of the most relevant concepts -- including Deep Learning techniques.
While machine learning systems have made a leap in performance over the past
decade, many open issues remain before such models can be deployed
in critical systems with guarantees on robustness. In particular, Deep Learning techniques
have shown strong performance on a wide range of tasks,
but are also highly susceptible to adversarial manipulation of the input data. Successful attacks
that change the output and behavior of an intelligent
system can have severe consequences, ranging from accidents of autonomous driving systems to bypassing
malware or intrusion detection. We cover techniques
in the domain of adversarial machine learning that aim at manipulating the predictions of machine
learning models, and show defenses that protect
against such attacks.
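To make the attack idea concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial-example attacks, applied to a toy logistic-regression "model". The weights, input point, and epsilon are illustrative assumptions, not part of the course material:

```python
# FGSM sketch: perturb the input in the direction of the sign of the
# loss gradient to flip the model's prediction. Toy model for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Add an eps-bounded perturbation along sign(dL/dx)."""
    # For logistic regression p = sigmoid(w.x + b) with cross-entropy loss,
    # the gradient of the loss w.r.t. the input is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# toy model and a correctly classified point (all values are assumptions)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])   # score = 1.5 -> confidently class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.8)
print(sigmoid(w @ x + b) > 0.5)      # True: original input is class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: small perturbation flips it
```

The same principle carries over to deep networks, where the input gradient is obtained by backpropagation; defenses covered in the class (e.g., adversarial training) aim to make such gradient-guided perturbations ineffective.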
Machine Learning services are offered by a range of providers that make it easy
for clients, e.g., to enable intelligent services
for their business. Based on a dataset, a machine learning model is trained that can then be accessed,
e.g., via an online API. The data and the machine learning
model itself are important assets and often constitute intellectual property. Our recent research has
revealed that such assets can leak to customers that use the
service. Hence, an adversary can exploit the leaked information to gain access to the data and/or the
machine learning model by only using the service. We will
cover novel inference attacks on machine learning models and show defenses that allow secure and
protected deployment of machine learning models.
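As a simple illustration of such leakage, the sketch below shows a confidence-thresholding membership-inference attack: the attacker queries the service and guesses "training member" whenever the model's confidence exceeds a threshold, exploiting the tendency of overfitted models to be more confident on their training data. The stand-in model, the confidence gap, and the threshold are all illustrative assumptions:

```python
# Membership-inference sketch: overfitted models are typically more
# confident on training members, so a confidence threshold separates
# members from non-members. The "model" here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

def model_confidence(memorized):
    """Simulated API confidence: higher (and assumed) for training members."""
    base = 0.95 if memorized else 0.70
    return float(np.clip(base + 0.02 * rng.standard_normal(), 0.0, 1.0))

def infer_membership(conf, threshold=0.85):
    """Attacker's rule: guess 'member' when confidence is above threshold."""
    return conf >= threshold

# attacker queries the service on known training points and unseen points
members = [model_confidence(memorized=True) for _ in range(100)]
nonmembers = [model_confidence(memorized=False) for _ in range(100)]

tpr = np.mean([infer_membership(c) for c in members])     # true positives
fpr = np.mean([infer_membership(c) for c in nonmembers])  # false positives
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A large gap between TPR and FPR means the model's outputs alone reveal membership in the training set; the defenses discussed in the class aim to shrink this gap.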
The success of today's machine learning algorithms is largely fueled by large datasets. Many
domains of practical interest are human-centric and
target operation under real-world conditions. Therefore, gathering real-world data is often key
to the success of such methods. This is frequently achieved
by leveraging user data or crowdsourcing efforts. We will present privacy-preserving machine learning
techniques that prevent leakage of private information or