Logistics

Date: Wednesday, November 20; Thursday, November 21; and Friday, November 22, 2024 (a 3-part workshop spanning 3 days across 3 IDEAL campuses)

Location:

Registration: https://forms.gle/VNHnw84aBnxaD3kH9

Description

To forge healthy and productive Human-AI ecosystems, researchers need to anticipate the nature of this interaction at every stage, both to stave off concerns of societal disruption and to usher in a harmonious future. A primary way in which AI is anticipated to become part of human life is by augmenting human capabilities instead of replacing them. What are the greatest potentials for this augmentation in various fields, and what ought to be its limits? In the short term, AI is expected to continue to rely on the vast recorded and demonstrated knowledge and experience of people. How can the contributors of this knowledge feel adequately protected in their rights and compensated for their role in ushering in AI? As these intelligent systems are woven into the lives and livelihoods of people, insight into how they operate and what they know becomes crucial to establishing trust and regulating them. How can human privacy be maintained in such pervasive ecosystems, and is it possible to interpret the operations, thoughts, and actions of AI? IDEAL will address these critical questions in a 3-part workshop, part of its Fall 2024 Special Program on Interpretability, Privacy, and Fairness, spanning 3 days across 3 IDEAL campuses.

Day Descriptions

Wednesday, November 20, 2024: AI Agents and Augmentation: Navigating the Ethics of Human-AI Collaboration 

The rapid integration of AI systems into various domains of human life has sparked both enthusiasm and concern. While some envision unprecedented productivity gains, economic growth, and improved healthcare access, others worry about unfair discrimination, workforce displacement, and threats to human autonomy. Treating AI as a tool for augmenting human capabilities rather than replacing them could allow us to reap the benefits of AI while preserving human agency and values. AI Agents (i.e., AI systems that can pursue complex goals with limited direct supervision) hold great promise for the augmentation of human capabilities. However, the emergence of more sophisticated and capable AI agents, which may play increasingly large roles in human lives, raises new questions about the nature of this augmentation and the evolving relationship between humans and AI.


Thursday, November 21, 2024: Fairness toward Content Producers: Credit and Protection in Generative AI

The rapid growth of AI systems has relied on the availability of massive quantities of training data. This raises questions of what it means to be fair toward the people who produce the content used as training data — in particular, how to provide them with appropriate credit for the value created by their contributions, or with protections from various possible forms of harm. Developing the understanding needed to better answer these questions is crucial to unlocking the societal value of AI systems and ensuring their benefits are broadly shared.

Friday, November 22, 2024: Privacy and Interpretability in Generative AI: Peering into the Black Box

The rapid advancement of Generative AI and large language models (LLMs), such as GPT-4, has raised critical concerns about privacy and interpretability. These models are trained on vast datasets, which may inadvertently include sensitive or personal information, creating the risk of unintentionally disclosing private data through their outputs. Consequently, privacy-preserving mechanisms have become essential to mitigate these risks. At the same time, the inherent complexity and opacity of LLMs make it difficult to understand their decision-making processes, undermining trust and accountability. Enhancing interpretability is key to ensuring that users and developers can comprehend how these models produce specific outputs, thereby improving transparency and fostering trust. Addressing these challenges is essential for building AI systems that are not only secure but also ethical and comprehensible.


Organizers:

  • AI Agents and Augmentation: Navigating the Ethics of Human-AI Collaboration
    • Organizers: Steven Keith Platt (LUC) and Diana Acosta Navas (LUC)
  • Fairness toward Content Producers: Credit and Protection in Generative AI
    • Organizer: Mesrob I. Ohannessian (UIC)
  • Privacy and Interpretability in Generative AI: Peering into the Black Box
    • Organizers: Binghui Wang (IIT), Ren Wang (IIT), and Gyorgy Turan (UIC)

Join Our Newsletter