CASP
Critical Assessment of protein Structure Prediction (CASP) is a community-wide, worldwide experiment for protein structure prediction that has been held every two years since 1994.[1] CASP provides research groups with an opportunity to objectively test their structure prediction methods and delivers an independent assessment of the state of the art in protein structure modeling to the research community and software users. Although the primary goal of CASP is to help advance the methods for identifying a protein's three-dimensional structure from its amino acid sequence, many view the experiment more as a “world championship” in this field of science. More than 100 research groups from all over the world participate in CASP on a regular basis, and it is not uncommon for entire groups to suspend their other research for months while they focus on getting their servers ready for the experiment and on performing the detailed predictions.
Selection of target proteins
To ensure that no predictor has prior information about a protein's structure that would give them an advantage, it is important that the experiment be conducted in a double-blind fashion: neither the predictors nor the organizers and assessors know the structures of the target proteins at the time the predictions are made. Targets for structure prediction are either structures soon to be solved by X-ray crystallography or NMR spectroscopy, or structures that have just been solved (mainly by one of the structural genomics centers) and are kept on hold by the Protein Data Bank. If the given sequence is found to be related by common descent to a protein sequence of known structure (called a template), comparative protein modeling may be used to predict the tertiary structure. Templates can be found using sequence alignment methods (e.g. BLAST or HHsearch) or protein threading methods, which are better at finding distantly related templates. Otherwise, de novo protein structure prediction must be applied (e.g. Rosetta), which is much less reliable but can sometimes yield models with the correct fold (usually for proteins shorter than about 100–150 amino acids). Truly new folds are becoming quite rare among the targets,[2][3] making that category smaller than desirable.
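The decision between template-based and free modeling described above can be sketched in a few lines of code. The following Python snippet is a minimal illustration, not a CASP or predictor pipeline: it uses Biopython's NCBI BLAST web service to search the PDB sequence database for a possible template, and the E-value cutoff of 1e-3 is an arbitrary illustrative threshold. Real pipelines typically add profile methods such as HHsearch, or threading, to detect remote homologues.

```python
# Illustrative sketch of the template-search decision: query the 'pdb'
# protein database so that every hit corresponds to a solved structure.
from Bio.Blast import NCBIWWW, NCBIXML

def find_template(sequence: str, e_value_cutoff: float = 1e-3):
    """Return (pdb_hit_title, e_value) for the best confident PDB hit, or None."""
    result_handle = NCBIWWW.qblast("blastp", "pdb", sequence)
    record = NCBIXML.read(result_handle)
    for alignment in record.alignments:
        hsp = alignment.hsps[0]
        if hsp.expect <= e_value_cutoff:  # arbitrary illustrative threshold
            return alignment.title, hsp.expect
    return None

if __name__ == "__main__":
    # An arbitrary example query sequence.
    target_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
    hit = find_template(target_seq)
    if hit is None:
        print("No confident template found: free (de novo) modeling territory")
    else:
        print("Template-based modeling possible, best PDB hit:", hit)
```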
Evaluation
The primary method of evaluation[4] is a comparison of the predicted model's α-carbon positions with those in the target structure. The comparison is shown visually as a cumulative plot of the distances between pairs of equivalent α-carbons in the alignment of the model and the structure (a perfect model would stay at zero all the way across), and is summarized in a numerical score, GDT-TS (Global Distance Test Total Score), describing the percentage of well-modeled residues in the model with respect to the target.[5] Free modeling (template-free, or de novo) is also evaluated visually by the assessors, since the numerical scores do not work as well for finding loose resemblances in the most difficult cases.[6] High-accuracy template-based predictions were evaluated in CASP7 by whether they worked for molecular-replacement phasing of the target crystal structure,[7] with successes followed up later,[8] and in CASP8 by full-model (not just α-carbon) quality and full-model match to the target.[9]
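GDT-TS itself has a simple form: it is the average, over distance cutoffs of 1, 2, 4, and 8 Å, of the percentage of residues whose model α-carbon lies within that cutoff of the corresponding target α-carbon. The sketch below is a simplified illustration of this idea, assuming the model and target α-carbon coordinates are already aligned residue by residue and superimposed in the same frame; the official LGA program additionally searches over many superpositions to maximize each count.

```python
# Simplified, illustrative GDT-TS-like score (not the official LGA implementation).
import numpy as np

def gdt_ts(model_ca: np.ndarray, target_ca: np.ndarray) -> float:
    """Return a GDT-TS-like score (0-100) for pre-superimposed C-alpha arrays
    of shape (n_residues, 3)."""
    # Per-residue C-alpha/C-alpha distances between model and target.
    distances = np.linalg.norm(model_ca - target_ca, axis=1)
    # Average the fraction of residues within 1, 2, 4, and 8 angstroms.
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    fractions = [np.mean(distances <= c) for c in cutoffs]
    return 100.0 * float(np.mean(fractions))

# Toy example: a 4-residue "model" perturbed away from its "target".
target = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [11.4, 0, 0]])
model = target + np.array([[0.5, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [9.0, 0, 0]])
print(gdt_ts(model, target))  # 56.25: residues fall inside different cutoffs
```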
Evaluation of the results is carried out in the following prediction categories:
- tertiary structure prediction (all CASPs)
- secondary structure prediction (dropped after CASP5)
- prediction of structure complexes (CASP2 only; a separate experiment—CAPRI—carries on this subject)
- residue-residue contact prediction (starting CASP4)
- disordered regions prediction (starting CASP5)
- domain boundary prediction (CASP6–CASP8)
- function prediction (starting CASP6)
- model quality assessment (starting CASP7)
- model refinement (starting CASP7)
- high-accuracy template-based prediction (starting CASP7)
The tertiary structure prediction category was further subdivided into:
- homology modeling
- fold recognition (sometimes also called protein threading, although strictly speaking threading is a specific method rather than a category)
- de novo structure prediction, now referred to as 'new fold' prediction, since many methods apply evaluation (scoring) functions that are biased by knowledge of native protein structures, for example artificial neural networks
Starting with CASP7, the categories have been redefined to reflect developments in methods. The 'template-based modeling' category includes all former comparative modeling, homologous-fold-based models, and some analogous-fold-based models. The 'template-free modeling (FM)' category includes models of proteins with previously unseen folds and hard analogous-fold-based models. Because template-free targets are quite rare, the so-called CASP ROLL was introduced in 2011. This continuous (rolling) CASP experiment aims at a more rigorous evaluation of template-free prediction methods through the assessment of a larger number of targets outside of the regular CASP prediction season. Unlike LiveBench and EVA, this experiment follows the blind-prediction spirit of CASP, i.e. all predictions are made on as-yet-unknown structures.[10]
The CASP results are published in special supplement issues of the scientific journal Proteins, all of which are accessible through the CASP website.[11] A lead article in each of these supplements describes specifics of the experiment[12][13] while a closing article evaluates progress in the field.[14][15]
In December 2018, CASP13 made headlines when it was won by AlphaFold, an artificial intelligence program created by DeepMind.[16] In November 2020, an improved version, AlphaFold 2, won CASP14.[17] According to John Moult, one of CASP's co-founders, AlphaFold scored around 90 on a 100-point scale of prediction accuracy for moderately difficult protein targets.[18]
References
- Moult, J.; et al. (1995). "A large-scale experiment to assess protein structure prediction methods". Proteins. 23 (3): ii–iv. doi:10.1002/prot.340230303. PMID 8710822. S2CID 11216440.
- Tress, M.; et al. (2009). "Target domain definition and classification in CASP8". Proteins. 77 (Suppl 9): 10–17. doi:10.1002/prot.22497. PMC 2805415. PMID 19603487.
- Zhang Y, Skolnick J (2005). "The protein structure prediction problem could be solved using the current PDB library". Proc Natl Acad Sci USA. 102 (4): 1029–1034. Bibcode:2005PNAS..102.1029Z. doi:10.1073/pnas.0407152101. PMC 545829. PMID 15653774.
- Cozzetto, D.; et al. (2009). "Evaluation of template-based models in CASP8 with standard measures". Proteins. 77 (Suppl 9): 18–28. doi:10.1002/prot.22561. PMC 4589151. PMID 19731382.
- Zemla A (2003). "LGA: A method for finding 3D similarities in protein structures". Nucleic Acids Research. 31 (13): 3370–3374. doi:10.1093/nar/gkg571. PMC 168977. PMID 12824330.
- Ben-David, M.; et al. (2009). "Assessment of CASP8 structure predictions for template free targets". Proteins. 77 (Suppl 9): 50–65. doi:10.1002/prot.22591. PMID 19774550. S2CID 16517118.
- Read, R.J.; Chavali, G. (2007). "Assessment of CASP7 predictions in the high accuracy template-based modeling category". Proteins: Structure, Function, and Bioinformatics. 69 (Suppl 8): 27–37. doi:10.1002/prot.21662. PMID 17894351. S2CID 33172629.
- Qian, B.; et al. (2007). "High-resolution structure prediction and the crystallographic phase problem". Nature. 450 (7167): 259–264. Bibcode:2007Natur.450..259Q. doi:10.1038/nature06249. PMC 2504711. PMID 17934447.
- Keedy, D.A.; Williams, CJ; Headd, JJ; Arendall, WB; Chen, VB; Kapral, GJ; Gillespie, RA; Block, JN; Zemla, A; Richardson, DC; Richardson, JS (2009). "The other 90% of the protein: Assessment beyond the α-carbon for CASP8 template-based and high-accuracy models". Proteins. 77 (Suppl 9): 29–49. doi:10.1002/prot.22551. PMC 2877634. PMID 19731372.
- Kryshtafovych, A; Monastyrskyy, B; Fidelis, K (2014). "CASP prediction center infrastructure and evaluation measures in CASP10 and CASP ROLL". Proteins: Structure, Function, and Bioinformatics. 82 Suppl 2: 7–13. doi:10.1002/prot.24399. PMC 4396618. PMID 24038551.
- "CASP Proceedings".
- Moult, J.; et al. (2007). "Critical assessment of methods of protein structure prediction — Round VII". Proteins. 69 (Suppl 8): 3–9. doi:10.1002/prot.21767. PMC 2653632. PMID 17918729.
- Moult, J.; et al. (2009). "Critical assessment of methods of protein structure prediction — Round VIII". Proteins. 77 (Suppl 9): 1–4. doi:10.1002/prot.22589. PMID 19774620. S2CID 9704851.
- Kryshtafovych, A.; et al. (2007). "Progress from CASP6 to CASP7". Proteins: Structure, Function, and Bioinformatics. 69 (Suppl 8): 194–207. doi:10.1002/prot.21769. PMID 17918728. S2CID 40200832.
- Kryshtafovych, A.; et al. (2009). "CASP8 results in context of previous experiments". Proteins. 77 (Suppl 9): 217–228. doi:10.1002/prot.22562. PMC 5479686. PMID 19722266.
- Sample, Ian (2 December 2018). "Google's DeepMind predicts 3D shapes of proteins". The Guardian. Retrieved 19 July 2019.
- "DeepMind's protein-folding AI has solved a 50-year-old grand challenge of biology". MIT Technology Review. Retrieved 30 November 2020.
- Callaway, Ewen (30 November 2020). "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures". Nature.
External links
Result ranking
Automated assessments for CASP13 (2018)
Automated assessments for CASP12 (2016)
Automated assessments for CASP11 (2014)
- Official ranking for servers only (126 targets)
- Official ranking for humans and servers (78 targets)
Automated assessments for CASP10 (2012)
- Official ranking for servers only (127 targets)
- Official ranking for humans and servers (71 targets)
- Ranking by Zhang Lab
Automated assessments for CASP9 (2010)
- Official ranking for servers only (147 targets)
- Official ranking for humans and servers (78 targets)
- Ranking by Grishin Lab (for server only)
- Ranking by Grishin Lab (for human and servers)
- Ranking by Zhang Lab
- Ranking by Cheng Lab
Automated assessments for CASP8 (2008)
- Official ranking for servers only
- Official ranking for humans and servers
- Ranking by Zhang Lab
- Ranking by Grishin Lab
- Ranking by McGuffin Lab
- Ranking by Cheng Lab
Automated assessments for CASP7 (2006)