

Breakout Session in Affinity Workshop: Women in Machine Learning (WiML) Un-Workshop

Breakout Session 1.1: Catching Out-of-Context Misinformation with Self-supervised Learning


Abstract:

Despite the recent attention to DeepFakes and other forms of image manipulation, one of the most prevalent ways to mislead audiences on social media is the use of unaltered images in a new but false context, commonly known as out-of-context image use. The danger of out-of-context images is that little technical expertise is required: one can simply take an image from a different event and create a highly convincing but potentially misleading message. At the same time, detecting misinformation based on out-of-context images is extremely challenging because the visual content itself is not manipulated; only the image-text combination creates misleading or false information. To detect such out-of-context images, newsrooms have launched several online fact-checking initiatives, but these rely heavily on manual human effort to verify each post and to decide whether a claim should be labeled as "out-of-context". In this talk, I will discuss how we can build models that help identify conflicting image-caption pairs that could be potential cases of out-of-context misuse.
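
As a rough illustration of what an image-caption consistency check can look like (a generic sketch using a pretrained CLIP model, not the method presented in this talk; the file name and captions below are hypothetical), one can score how well competing captions match the same image and flag large disagreements for human review:

```python
# Illustrative sketch only: compare how well two captions agree with the
# same image using a pretrained CLIP model from Hugging Face transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def caption_image_scores(image_path: str, captions: list[str]) -> torch.Tensor:
    """Return image-text matching scores for each caption against the image."""
    image = Image.open(image_path)
    inputs = processor(text=captions, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (1, num_captions); higher means a better match.
    return outputs.logits_per_image[0]

# Hypothetical example: an image paired with its original caption and with a
# caption re-used in a new, possibly false context.
scores = caption_image_scores(
    "flood_2018.jpg",                       # hypothetical file name
    ["Flooding in Kerala, August 2018.",    # original context
     "Storm damage in Florida this week."]  # re-used context
)
# A large gap between the two scores suggests the second caption may not
# describe the depicted event, i.e. a candidate out-of-context pairing.
print(scores)
```

Such a similarity gap is only a heuristic; it illustrates why conflicting image-caption pairs are a useful signal, not how the models discussed in the talk are trained.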