RIMLLAB Telegram 140
💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.

✅ This Week's Presentation:

🔹 Title: Backdooring Bias into Text-to-Image Models

🔸 Presenter: Mehrdad Aksari Mahabadi

🌀 Abstract:
This paper investigates the misuse of text-conditional diffusion models, in particular text-to-image models that generate visually appealing images from user descriptions. While such images usually depict harmless concepts, they can be manipulated for harmful purposes such as propaganda. The authors show that adversaries can inject biases through backdoor attacks that affect even well-intentioned users. Even when users verify image-text alignment, the attack stays hidden: the text's semantic content is preserved while other image features are altered to embed biases, amplifying their presence by 4-8 times. The study finds that current generative models make such attacks cheap and feasible, with costs ranging from 12 to 18 units. A variety of triggers, objectives, and biases are evaluated, and mitigations and future research directions are discussed.

📄 Paper: Backdooring Bias into Text-to-Image Models

Session Details:
- 📅 Date: Sunday
- 🕒 Time: 5:00 - 6:00 PM
- 🌍 Location: Online at vc.sharif.edu/ch/rohban


We look forward to your participation! ✌️



tgoop.com/RIMLLab/140


BY RIML Lab



