
Turing AI Fellowship: Interactive Annotations in AI

Funder: UK Research and Innovation
Project code: EP/V024817/1
Funded under: EPSRC
Funder Contribution: 1,308,960 GBP
Views: 12 (provided by OpenAIRE UsageCounts)

Description

With data-hungry deep learning approaches now the de facto standard in Artificial Intelligence (AI), the need for labelled data is greater than ever. However, while there have been interesting recent discussions on defining readiness levels for data, annotations have so far escaped the same scrutiny: we generally do not know how or when annotations were collected, or what their inherent biases are. Moreover, there are now forms of annotation beyond standard static sets of labels that call for a formalisation and redefinition of the annotation concept (e.g., rewards in reinforcement learning or directed links in causality).

During this Fellowship we will design and establish protocols for transparent annotations that empower the data curator to report on the collection process, the practitioner to automatically evaluate the value of annotations, and users to provide the most informative and actionable feedback. The Fellowship will address these challenges through a holistic, human-centric research agenda, bridging gaps in fundamental research and public engagement with AI. It aims to lay the foundations for a two-way approach to annotations, shifting the paradigm from annotations as a mere resource to annotations as a means for AI systems and humans to interact. The bigger picture is that, with annotations seen as an interface between the two, we will be in a much better position to build a relationship of trust between learning systems and users, in which users translate their preferences into the learning systems' objective functions. This approach will help bring potentially sensitive applications of AI a step closer to being reliable and trustworthy.
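The abstract does not commit to a particular mechanism for this two-way interaction. As a minimal illustrative sketch only, the Python snippet below uses uncertainty-sampling active learning on a hypothetical one-dimensional toy task: the learner queries the point it is least certain about, a simulated annotator answers, and the model updates, so annotation flows both ways between system and human. The oracle function, the toy data, and the threshold update rule are assumptions made for illustration, not part of the project.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of unlabelled 1-D points; the true label is 1 iff x > 0.
pool = rng.uniform(-1.0, 1.0, size=200)

def oracle(x):
    """Stand-in for a human annotator answering a query."""
    return int(x > 0.0)

labelled_x, labelled_y = [], []
threshold = 0.0  # decision boundary of a trivial 1-D classifier

for step in range(10):
    # Two-way step 1: the system asks about the point it is least
    # certain of, i.e. the one closest to its current boundary.
    idx = int(np.argmin(np.abs(pool - threshold)))
    x = pool[idx]
    pool = np.delete(pool, idx)

    # Two-way step 2: the annotator answers the query.
    labelled_x.append(x)
    labelled_y.append(oracle(x))

    # The system updates from the feedback: place the boundary midway
    # between the largest known negative and smallest known positive.
    neg = [xi for xi, yi in zip(labelled_x, labelled_y) if yi == 0]
    pos = [xi for xi, yi in zip(labelled_x, labelled_y) if yi == 1]
    if neg and pos:
        threshold = (max(neg) + min(pos)) / 2.0

print(f"estimated boundary after 10 queries: {threshold:.4f}")

Because each query lands near the current boundary, the annotator's answers are concentrated where they are most informative, which is the sense in which feedback in a two-way annotation loop can be "informative and actionable" rather than a static batch of labels.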

Data Management Plans
