
ΑΠΕ-ΜΠΕ
2 Projects
Project
Partners: University of Vienna, AIND - Associação Portuguesa de Imprensa, Instituto de Tecnologias Avançadas para a Formação Lda, LIQUID PUBLISHING SA, UoA, APA - Austria Presse Agentur eG, ΑΠΕ-ΜΠΕ
Funder: European Commission
Project Code: 2022-1-EL01-KA220-VET-000086883
Funder Contribution: 400,000 EUR

<< Objectives >>
The battle against disinformation is one of the priorities of the European Commission, which stresses the need for journalists to have the advanced digital skills needed for news verification. ANALYSIS intends to contribute to the development of a media ecosystem that is competent, highly trained, and uses advanced, high-impact digital technology, while increasing the availability, attractiveness, and flexibility of training opportunities and contributing to the digital transformation of the media sector.

<< Implementation >>
ANALYSIS will form a joint transnational consortium led by the Faculty of Communication of the UoA, in collaboration with 6 partners from politics, media, and computer science backgrounds. Activities:
- Design and development of the Fact-Checking Academy for Journalists
- Creation of a MOOC and a community on EPALE
- Project website and events
- Piloting
- Evaluation

<< Results >>
The results will be (a) a new joint cross-disciplinary training curriculum on fact-checking, (b) a 6-week intensive program, (c) a free MOOC, (d) a "Truth Tellers" community, (e) a project website, (f) podcasts, and (g) satisfaction survey results.
Benefits: advanced training for current and future media professionals; meeting the need for new skills; employability; improving the quality of news; media literacy through dissemination activities.
Open Access Mandate for Publications and Research Data
Project 2023 - 2026
Partners: VPF, ENGINEERING - INGEGNERIA INFORMATICA SPA, ATC, KUL, ICCS, IBM France, DEMOCRACY X, INFORMATION CATALYST, SINTEF AS, ΑΠΕ-ΜΠΕ, TRUSTILIO BV, Medical University Plovdiv, MAGGIOLI, INSTITUTE OF PHILOSOPHY AND TECHNOLOGY
Funder: European Commission
Project Code: 101121042
Overall Budget: 7,075,170 EUR
Funder Contribution: 7,075,170 EUR

THEMIS 5.0 draws researchers and practitioners from diverse disciplines to ensure that AI-driven hybrid decision support is trustworthy, respects the needs and moral values of individual human users, and adheres to the key success indicators of the embedding socio-technical environment. It implements an AI-driven, human-centered Trustworthiness Optimisation Ecosystem that users can employ to achieve fairness, transparency, and accountability. In THEMIS 5.0, the trustworthiness vulnerabilities of AI systems are determined using an AI-driven risk assessment approach which, in effect, translates the directions given in the Trustworthy AI Act and relevant standards into technical implementations. THEMIS 5.0 will innovate in its consideration of the human perspective, as well as the wider socio-technical systems' perspective, in the risk-management-based trustworthiness evaluation. An innovative AI-driven conversational agent will productively engage humans in intelligent dialogues capable of driving the execution of continuous trustworthiness improvement cycles.

THEMIS 5.0 adopts the European human-centric approach to the design, development, deployment, and operation of the THEMIS 5.0 ecosystem and will therefore base the implementation of its AI-driven ecosystem on strong co-creation processes. THEMIS 5.0 will pilot and evaluate the human-centric ecosystem using 3 well-defined use cases, each addressing a specific high-priority and critical application and industrial sector. The THEMIS 5.0 solution enhances and accelerates the shift towards more trusted AI-enabled services by unlocking the power of humans to evaluate the trustworthiness of AI solutions and provide feedback on how to improve the AI systems. Users can thus better challenge AI systems, pinpoint biases or problems, embed their own values and norms, and provide feedback to AI developers and providers for improvement.
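As a purely illustrative aid, not part of the project description, the sketch below shows one way a "continuous trustworthiness improvement cycle" of the kind mentioned above could be structured as a simple loop: assess risks, let a dialogue step select the most pressing one, apply a correction, and reassess. All names (Risk, assess_risks, dialogue_with_user, improvement_cycle) and the scoring scheme are hypothetical and are not taken from THEMIS 5.0.

```python
# Hypothetical sketch of a continuous trustworthiness improvement cycle.
# None of these names or thresholds come from THEMIS 5.0; they only illustrate
# the loop "assess risks -> pick one with the user -> apply feedback -> reassess".
from dataclasses import dataclass


@dataclass
class Risk:
    dimension: str   # e.g. "fairness", "transparency", "accountability"
    severity: float  # 0.0 (negligible) .. 1.0 (critical)


def assess_risks(model_report: dict) -> list[Risk]:
    """Toy risk assessment: flag any trustworthiness score below a threshold."""
    return [Risk(dim, 1.0 - score) for dim, score in model_report.items() if score < 0.8]


def dialogue_with_user(risks: list[Risk]) -> Risk | None:
    """Stand-in for a conversational agent: pick the most severe remaining risk."""
    return max(risks, key=lambda r: r.severity, default=None)


def improvement_cycle(model_report: dict, max_iterations: int = 5) -> dict:
    """Repeatedly assess, choose a risk with the user, and apply a mock improvement."""
    for _ in range(max_iterations):
        risks = assess_risks(model_report)
        target = dialogue_with_user(risks)
        if target is None:  # no remaining vulnerabilities above the threshold
            break
        # Mock "improvement": nudge the targeted score upward, then reassess next round.
        model_report[target.dimension] = min(1.0, model_report[target.dimension] + 0.1)
    return model_report


if __name__ == "__main__":
    print(improvement_cycle({"fairness": 0.6, "transparency": 0.9, "accountability": 0.7}))
```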