Invited Presentation: Critical assessment and AI
Confirmed Presenter: John Moult, University of Maryland, US
Format: In person
Moderator(s): Gustavo Stolovitzky
Authors List:
- John Moult, University of Maryland, US
Presentation Overview:
Current AI methods have transformed aspects of computational structural biology,
delivering protein structures whose accuracy in many cases rivals that of experiment,
and greatly improving the accuracy of protein assemblies. But so far, results from
CASP show that AI has not overtaken the traditional (and quite poor) methods for modeling
RNA structure, is only beginning to rival traditional (and inadequate) ligand docking
methods, and struggles with alternative conformations. What is now limiting progress and
what might be done about it? One obvious explanation is lack of training data. Can
approaches developed for other AI areas, such as bootstrapped training data, help here
too? Can alternative structure representations provide a more effective training regime?
Can AI approaches be productively combined with traditional methods, as recent CASP
results suggest? What does the landscape of possibilities look like?
The success of AI in structural biology has led to an avalanche of papers proposing new
AI approaches. These papers contain a wealth of interesting and stimulating ideas, but
the amount of information is overwhelming, and it is usually unclear how reliably methods
have been benchmarked. Consequently, it is very difficult for investigators to assimilate
the results and to know how much weight to give each, slowing progress. CASP and
other rigorous benchmarking will eventually sort all this out, but in the current
hyper-output situation that is too slow to be fully effective. Initial experiments suggest AI
methods can critically assess these papers, both by evaluating the results and
potentially by constructing a dynamic ontology of methods, how they have been applied,
and with what success. But this AI approach is in its infancy. How can this new area of
critical assessment be most effectively advanced? Can AI-driven critical assessment be
extended to other areas of biological research?