The R&D project ALAIT is developing an innovative Technology Impact Assessment for artificial intelligence and promotes public discourse on trust in AI through the ALAIT laboratory and a train-the-trainer network. Results are published in AI trust dossiers and widely disseminated. Our goal is to strengthen trust in AI technology through transparency and information.


The challenge

Establishing trust in artificial intelligence is a key challenge, as fears and hopes collide in society. In Austria, trust in AI is declining, which calls for trust-building measures: transparency about AI technologies and their effects, governance through legal and social norms, and the promotion of AI literacy, i.e. the knowledge and skills needed to deal with AI. An open dialogue about the goals and effects of AI is essential, and all stakeholders must be sufficiently informed.


The ALAIT project aims to empower key social groups to use AI technologies responsibly and to establish ethical, high-quality standards for the use of AI. Three dimensions contribute to this goal:

  • Transparency

    in the sense of accessibility of compact, comprehensive information on the effects of AI technologies

  • Governance

    for the use of AI, embedding and translating norms directly into society, and

  • AI literacy

    in the sense of acquiring knowledge and skills for dealing with current technologies.


For detailed information about the project and its cooperation partners, visit: ALAIT