What is benchmarking, and why are we doing it?
Did you know that OpenSees has never undergone a truly comprehensive, in-depth benchmarking? Several researchers and practitioners consider this a serious gap that needs to be addressed. Benchmarking is a fundamental part of the software development cycle: it ensures the quality, reliability, and performance of software. Since OpenSees is the backbone of modeling for seismic engineering, this struck us as an urgent problem, and we decided to tackle it ourselves. After months of planning, defining benchmarks, and settling on an approach, we were ready to start.
First, let’s discuss what verification and validation benchmarking is. Verification is a two-part process for determining the accuracy of a computational model and its solution. The first part is code verification, in which the software’s mathematical model and algorithms are tested to ensure they are implemented correctly. The second is solution verification, in which the discrete solutions of the mathematical model are analyzed for accuracy.
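To make solution verification concrete, here is a minimal sketch (not from the OpenSees benchmarks themselves) using a hypothetical test problem with a known analytical solution: a forward-Euler integration of dy/dt = y. Because the exact answer is known, we can check that the discretization error shrinks at the expected rate as the step size is refined, which is the essence of analyzing a discrete solution for accuracy.

```python
import math

def euler_solve(rate, y0, t_end, n_steps):
    """Forward-Euler solution of dy/dt = rate * y (hypothetical test problem)."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * rate * y
    return y

def solution_error(n_steps):
    """Discretization error against the analytical solution y(1) = e."""
    exact = math.exp(1.0)  # exact solution of dy/dt = y, y(0) = 1, at t = 1
    return abs(euler_solve(1.0, 1.0, 1.0, n_steps) - exact)

# Forward Euler is first-order accurate, so halving the step size
# should roughly halve the error. An observed ratio near 2 verifies
# that the discrete solution converges at the expected rate.
ratio = solution_error(100) / solution_error(200)
```

The same idea scales up to real benchmarks: run the solver on a problem with a known closed-form answer, refine the discretization, and confirm the observed convergence rate matches the method's theoretical order.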
Validation builds on the verification process: it determines how accurately a model represents the real world for the model’s intended use. Validation is achieved by comparing numerical results against experimental data, ensuring that the software accurately predicts the experimental outcome.
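One common way to quantify that comparison is a validation metric such as the normalized root-mean-square error between model predictions and measurements. The sketch below uses hypothetical placeholder numbers, not real experimental data, purely to show the shape of the calculation.

```python
def validation_error(predicted, measured):
    """Normalized RMS error between model predictions and test measurements."""
    n = len(predicted)
    mse = sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n
    scale = max(abs(m) for m in measured)  # normalize by the peak measured value
    return (mse ** 0.5) / scale

# Hypothetical placeholder values, NOT real experimental results:
measured  = [0.0, 1.2, 2.1, 2.9]   # e.g., measured response quantities
predicted = [0.0, 1.1, 2.2, 3.1]   # corresponding model predictions
err = validation_error(predicted, measured)
```

A small value of `err` indicates the model tracks the experiment closely; what counts as "small enough" depends on the model's intended use and the measurement uncertainty.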