
Korea Advanced Institute of Sci & Tech
12 Projects, page 1 of 3
Project 2018–2024 (VeTSpec)
Partners: Imperial College London, University of Cambridge, Inria, Max-Planck-Gymnasium, IBM Corporation (International), Google Inc, University of Toronto (Canada), Amazon Web Services (UK), GCHQ, Facebook UK, Advanced Risc Machines (Arm), Max Planck Institutes, AU, Cambridge Integrated Knowledge Centre, KAIST (Korea Advanced Institute of Sci & Tech), INRA Sophia Antipolis
Funder: UK Research and Innovation. Project Code: EP/R034567/1. Funder Contribution: 1,579,790 GBP.

Modern society faces a fundamental problem: the reliability of complex, evolving software systems on which it critically depends cannot be guaranteed by the established, non-mathematical techniques, such as informal prose specification and ad-hoc testing. Modern companies are moving fast, leaving little time for code analysis and testing; concurrent and distributed programs cannot be adequately assessed via traditional testing methods; users of mobile applications neglect to apply software fixes; and malicious users increasingly exploit programming errors, causing major security disruptions. Trustworthy, reliable software is becoming harder to achieve, whilst new business and cyber-security challenges make it of escalating importance. Developers cope with complexity using abstraction: the breaking up of systems into components and layers connected via software interfaces. These interfaces are described using specifications: for example, documentation in English; test suites with varying degrees of rigour; static typing embedded in programming languages; and formal specifications written in various logics.
In computer science, despite widespread agreement on the importance of abstraction, specifications are often seen as an afterthought and a hindrance to software development, and are rarely justified. Formal specification as part of the industrial software design process is in its infancy. My over-arching research vision is to bring scientific, mathematical method to the specification and verification of modern software systems. A fundamental unifying theme of my current work is my unique emphasis on what it means for a formal specification to be appropriate for the task in hand, properly evaluated and useful for real-world applications. Specifications should be validated, with proper evidence that they describe what they should. This validation can come in many forms, from formal verification through systematic testing to precise argumentation that a formal specification accurately captures an English standard. Specifications should be useful, identifying compositional building blocks that are intuitive and helpful to clients both now and in future. Specifications should be just right, providing a clear logical boundary between implementations and client programs.

VeTSpec has four related objectives, exploring different strengths of program specification, real-world program library specification and mechanised language specification, in each case determining what it means for the specification to be appropriate, properly evaluated and useful for real-world applications.

Objective A: Tractable reasoning about concurrency and distribution is a long-standing, difficult problem. I will develop the fundamental theory for the verified specification of concurrent programs and distributed systems, focussing on safety properties for programs based on primitive atomic commands, safety properties for programs based on more complex atomic transactions used in software transactional memory and distributed databases, and progress properties.
Objective B: JavaScript is the most widespread dynamic language, used by 94.8% of websites. Its dynamic nature and complex semantics make it a difficult target for verified specification. I will develop logic-based analysis tools for the specification, verification and testing of JavaScript programs, intertwining theoretical results with properly engineered tool development.

Objective C: The mechanised specification of real-world programming languages is well-established, but such specifications are difficult to maintain and their use is not fully explored. I will provide a maintainable mechanised specification of JavaScript, together with systematic test generation from this specification.

Objective D: I will explore fundamental, conceptual questions associated with the ambitious VeTSpec goal to bring scientific, mathematical method to the specification of modern software systems.
Project 2024–2026 (SimpliFaiS)
Partners: University of Passau, KAIST (Korea Advanced Institute of Sci & Tech), University of Sheffield, SOLUTIONLINK, [no title available]
Funder: UK Research and Innovation. Project Code: EP/Y014219/1. Funder Contribution: 371,475 GBP.

With the latest developments in Machine Learning (ML), ML-enabled Autonomous Systems (MLAS), such as Automated Driving Systems (ADS) for self-driving cars, are coming ever closer to our everyday lives. By 2035, 40% of new cars in the UK could have self-driving capabilities, and the UK market could be worth £42 billion, providing up to 38,000 new jobs in the industry. However, these promising figures will not mean anything if we do not make sure MLAS are safe and reliable. To ensure the system does not cause any problems (e.g., colliding with surrounding cars), we can test it under different "scenarios". For example, in the automotive domain, we can use high-fidelity simulators to automatically vary driving conditions such as the shape of the road, trees, buildings, traffic signs, other vehicles, and pedestrians. Often, we find many scenarios in which the system fails. These failure scenarios give us useful information and opportunities to fix and upgrade the system. However, we first need to understand thoroughly why the system failed in these scenarios. Unfortunately, failure scenarios are already very complicated, with many entities involved (e.g., moving cars, pedestrians, and other roadside objects), so it is hard to determine exactly which scenario entities caused the failure. To make the root cause of a failure easier to identify, we need to simplify failure scenarios by finding a minimal set of failure-inducing scenario entities. However, simplifying failure scenarios entails several challenges.
First, the number of possible combinations of scenario entities in a failure scenario grows quickly as the number of entities increases, making it impossible to check all of them. Second, different groups of scenario entities can lead to the same failure because of the non-linear behaviours of ML components and their interactions. Third, to check whether a (simplified) failure scenario causes the same failure as the original, we need a time-consuming, realistic simulation. Fourth, MLAS can include components (especially ML models) made by third parties, so we cannot assume access to source code and other internal details.

This project, SimpliFaiS, is designed to address the challenges above using Search-Based Software Engineering (SBSE) and Surrogate-Assisted Optimisation (SAO). SBSE is a branch of software engineering that solves complex problems by formulating them as optimisation problems when there are too many candidate solutions to enumerate or explore exhaustively. SAO is an optimisation approach that uses computationally lightweight surrogate models instead of computationally expensive simulations to minimise the computation required. Ultimately, SimpliFaiS will enable us to efficiently investigate and thoroughly address the root causes of MLAS failures, improving the safety and reliability of MLAS.
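The combination described above, searching for a minimal failure-inducing set of entities while using a cheap surrogate to avoid expensive simulations, can be sketched as follows. This is an illustrative sketch under assumed interfaces, not the SimpliFaiS implementation: `simulate_fails` stands in for the expensive high-fidelity simulation, `surrogate_score` for a cheap learned predictor of failure likelihood, and the threshold is hypothetical.

```python
def minimise_scenario(entities, simulate_fails, surrogate_score, threshold=0.5):
    """Greedily remove scenario entities while the (expensive) simulation
    still reproduces the failure. The surrogate model cheaply pre-screens
    candidate subsets so that only promising ones are simulated.
    All names here are illustrative, not from the SimpliFaiS project."""
    current = list(entities)
    changed = True
    while changed:
        changed = False
        for e in list(current):
            candidate = [x for x in current if x != e]
            # Cheap surrogate check first: skip unpromising candidates
            # without paying for a full simulation.
            if surrogate_score(candidate) < threshold:
                continue
            # Expensive high-fidelity simulation only when promising.
            if simulate_fails(candidate):
                current = candidate
                changed = True
    return current
```

In a real SAO setting the surrogate would be trained on past simulation results, and the search would explore subsets more cleverly than one-at-a-time removal; the sketch only shows where the cheap check saves expensive simulations.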
Project 2015–2018 (SOCIAM)
Partners: Edelman, KAIST (Korea Advanced Institute of Sci & Tech), The Cabinet Office, Microsoft Research, Tsinghua University, Big White Wall (United Kingdom), Google Inc, Agency for Science Technology (A Star), Ctrl Shift Ltd, IBM (United Kingdom), IBM (United States), IBM UK Labs Ltd, Deloitte UK, NEU, British Telecommunications plc, The Home Office, ESRC, Hampshire Constabulary, Group Partners Ltd, University of Oxford, Baxi Partnership Ltd, Northwestern University
Funder: UK Research and Innovation. Project Code: EP/J017728/2. Funder Contribution: 2,667,740 GBP.

SOCIAM - Social Machines - will research pioneering methods of supporting purposeful human interaction on the World Wide Web, of the kind exemplified by phenomena such as Wikipedia and Galaxy Zoo. These collaborations are empowering, as communities identify and solve their own problems, harnessing their commitment, local knowledge and embedded skills, without having to rely on remote experts or governments. Such interaction is characterised by a new kind of emergent, collective problem solving, in which we see (i) problems solved by very large-scale human participation via the Web, (ii) access to, or the ability to generate, large amounts of relevant data using open data standards, (iii) confidence in the quality of the data and (iv) intuitive interfaces. "Machines" used to be programmed by programmers and used by users. The Web, and the massive participation in it, has dissolved this boundary: we now see configurations of people interacting with content and each other, typified by social web sites.
Rather than dividing between the human and machine parts of the collaboration (as computer science has traditionally done), we should draw a line around them and treat each such assembly as a machine in its own right, comprising digital and human components: a Social Machine. This crucial transition in thinking acknowledges the reality of today's sociotechnical systems, viewing the ecosystem not as humans and computers but as co-evolving Social Machines. The ambition of SOCIAM is to enable us to build social machines that solve the routine tasks of daily life as well as the emergencies. Its aim is to develop the theory and practice needed to create the next generation of decentralised, data-intensive social machines. Understanding the attributes of the current generation of successful social machines will help us build the next. The research undertakes four necessary tasks. First, we need to discover how social computing can emerge, given that society has to undertake much of the burden of identifying problems, designing solutions and dealing with the complexity of the problem solving. Scalable online algorithms need to be put at the service of users. This leads to the second task: providing seamless access to a Web of Data, including user-generated data. Third, we need to understand how to make social machines accountable and to build the trust essential to their operation. Fourth, we need to design the interactions between all elements of social machines: between machine and human, between humans mediated by machines, and between machines, humans and the data they use and generate. SOCIAM's work will be empirically grounded by a Social Machines Observatory to track, monitor and classify existing social machines and new ones as they evolve, and to act as an early-warning facility for disruptive new social machines.
These lines of interlinked research will initially be tested and evaluated in the context of real-world applications in health, transport, policing and the drive towards open data cities (where all public data across an urban area is linked together), in collaboration with SOCIAM's partners. Putting research ideas into the field to encounter unvarnished reality provides a check on their utility and durability. For example, the Open City application will seek to harness citywide participation in shared problems (e.g. in health, transport and policing) by exploiting common open data resources. SOCIAM will undertake a breadth of integrated research, engaging with real application contexts, including the use of our observatory for longitudinal studies, to provide cutting-edge theory and practice for social computation and social machines. It will support fundamental research; create a multidisciplinary team; collaborate with industry and government to realise the research; and promote growth, innovation and, most importantly, impact in changing the direction of ICT.
Project 2014–2025 (Web Science CDT)
Partners: University of Southampton, BT Group (United Kingdom), Open Data Institute (ODI), Serious Organised Crime Agency (SOCA), Inqb8r Limited, Ordnance Survey, [no title available], Samsung Electronics, Defence Science & Tech Lab (DSTL), Business South, BBC, Digital Catapult, Edelman, Samsung R&D Institute UK, RMRL, Switch Concepts Ltd, Connected Digital Economy Catapult, KAIST (Korea Advanced Institute of Sci & Tech), Nominet Limited, Free (VU) University of Amsterdam, Roke Manor Research Ltd, Home Office Science
Funder: UK Research and Innovation. Project Code: EP/L016117/1. Funder Contribution: 3,680,060 GBP.

Web Science is the science of the World Wide Web and its impact, both positive and negative, on society. The Web is a socio-technical mixture of the people, organizations, browsers, policies, applications, standards, data centres, shopping baskets and social network status updates that have come to shape our everyday lives and global futures. Web Science offers the insights necessary to understand the flow of data and knowledge around the globe, and the social and technical processes that can turn gigabytes and terabytes of raw data into valuable new applications or evidence-based policy. Web Science helps us appreciate the threats to our online identities, but also the opportunities of allowing our personal digital avatars to participate in new kinds of online businesses, online politics and online social engagements.
Web Science offers a basis for innovating new personal practices and new social formations, and the ability to predict the consequences for the UK's digitally connected citizens. With an integrated understanding of these research areas, Web Science doctoral graduates will be able to innovate in the shaping of Web growth and Web policy, positioned to lead UK industry and government to reap the maximum economic and social value from its emerging digital economy. The Centre will recruit 13 excellent candidates annually from a variety of science, engineering, social science and humanities backgrounds. It will provide a cohort-based, 4-year doctoral programme with an initial training year that combines foundational aspects of Web Science research with technical aspects of the Web's architecture, an intensive training in interdisciplinarity and a grounding in innovation. A student-centred process of PhD research selection will begin at the end of the first semester with students starting to negotiate a potential project topic and multidisciplinary supervisor team with members of the Supervisor Forum. The CDT will offer a thorough programme of postgraduate research and professional training in co-ordination with the University Research and Graduate School. Complementary cohort-specific training will be offered to support and enhance the opportunities offered by the CDT (e.g. more intensive team building courses or communication training to prepare for specific industry events). The cohort experience is maintained throughout the PhD with frequent team-based events including collaborations with industry partners and international research exchanges. The Web Science CDT will use a multidisciplinary training approach that has successfully cut across traditional disciplinary silos in research practice, institutional structure and University administration. 
Its novel cohort-based training environment creates a socially cohesive and self-supporting group of students who successfully integrate their diverse disciplinary expertise in collaborative teams. Its programme of cross-cohort activities encourages mentorship, making the CDT self-sustaining and allowing it to amplify the research leadership of the supervisory staff. The net effect of these cohort benefits is to allow each student to take on more challenges and to achieve stronger training outcomes than would be possible in an individual training regime.
Project 2020–2023 (RE-PRESENT)
Partners: Fujitsu Laboratories of America, Inc., KAIST (Korea Advanced Institute of Sci & Tech), University of Sheffield, University of Southern California (USC), University of Passau, University of Quebec at Chicoutimi, SpotQA, Allegheny College, Fujitsu (United States), [no title available]
Funder: UK Research and Innovation. Project Code: EP/T015764/1. Funder Contribution: 27,773 GBP.

Presentation failures are defects in the visual appearance of a web page. They range from flaws in the page's layout, such as overlapping content and text rendered off the edge of the page, to usability problems such as unreadable text and inaccessible navigation. An organisation's website is often one of its primary means of driving its business and establishing information about itself. As such, presentation failures undermine an organisation's message, its credibility, and potentially its revenue. Repairing presentation failures is difficult for web developers. Websites need to display correctly on a wide range of devices, from mobile phones to desktops, meaning that developers need to ensure web pages lay out correctly on a vast range of screen sizes, with varying amounts of space available for content and graphical elements. Websites need to format correctly regardless of the browser a user is using or the language into which the page has been translated. Furthermore, they must be accessible to disabled users. The complexity of presentational code (developed using a combination of HTML, CSS, and JavaScript) means that accounting for each of these different aspects when repairing a presentation failure manually is challenging.
Manual "repairs" can even inadvertently lead to further defects. Automated repair techniques would therefore greatly assist developers in this task. RE-PRESENT is a proposal for an overseas travel grant intended to allow the PI to continue and develop international collaborations to solve these problems. It intends to develop search-based techniques to automatically generate repairs to the HTML, CSS, and JavaScript code used to manage the layout and design of web pages. Search-based techniques treat the current version of the code as a point in a search space, and use a problem-specific fitness function to guide a search method to another point in the space that constitutes a repaired, or "fixed", version of the page. RE-PRESENT will make the following innovations:

- It will develop automated repair techniques for presentation failures that currently cannot be repaired automatically, including those related to "responsive designs" (web page layouts intended to adjust to different screen sizes), accessibility issues, and defects related to faulty JavaScript code responsible for handling user interaction.

- It will develop techniques capable of accounting for different types of presentation failure at once, rather than in isolation. This is important because the act of fixing one presentation failure (e.g., reducing the size of a button so that it no longer overlaps other content on a page) may inadvertently cause others (e.g., an accessibility issue, because the button is now too small for visually impaired users to see).

- It will investigate techniques that produce results fast enough for developers to use in practice. Search-based approaches are effective, but often slow, because repairs need to be evaluated by rendering the page with the fix applied in a browser. RE-PRESENT will investigate ways of modelling web page layout to avoid more of these lengthy fitness evaluations than are strictly necessary.
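The idea of treating code as a point in a search space, with a fitness function guiding the search towards a repaired page, can be illustrated with a deliberately tiny example: a single CSS width as the one-dimensional search variable and a hypothetical `count_failures` function as the fitness. This is a sketch of the general search-based repair idea under assumed interfaces, not the RE-PRESENT tooling.

```python
def repair_button_width(initial_width, count_failures, step=2, max_iters=50):
    """Hill-climbing sketch of search-based repair.

    The width of a single button is one dimension of the search space;
    count_failures(width) is a hypothetical fitness function returning
    how many presentation failures (e.g. overlaps) the rendered page
    exhibits at that width. Illustrative only, not the RE-PRESENT tool."""
    best = initial_width
    best_fit = count_failures(best)
    for _ in range(max_iters):
        if best_fit == 0:
            break  # fitness zero: no presentation failures remain
        # Evaluate neighbouring widths; keep the one with fewest failures.
        scored = [(count_failures(w), w)
                  for w in (best - step, best + step) if w > 0]
        if not scored:
            break
        fit, w = min(scored)
        if fit < best_fit:
            best, best_fit = w, fit
        else:
            break  # local optimum: no neighbour improves fitness
    return best, best_fit
```

In a real tool each fitness evaluation would mean rendering the page in a browser, which is exactly the cost that the project's layout-modelling work aims to avoid paying more often than necessary.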