How does the error of a measured value change when squared?
In an experiment, I measured the period T of a spring pendulum and calculated the corresponding absolute error. Now I need to calculate the squared period T², but how does the absolute error change when I square the value?
Since squaring the period is simply multiplying the period by itself, I thought I could add the relative error of my period to itself and then convert that back into an absolute error – that is, the way I would normally proceed with error propagation for a multiplication. However, I'm not sure whether I'm allowed to do it that way.
I would be happy to receive answers.
As a rough approximation, the doubling of the relative error – e.g. from 10% to 20% in (10 ± 1)² = 100 ± 20 – can be understood immediately, because (10 ± 1)² = 100 ± 20 + 1 ≅ 100 ± 20.
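A minimal numerical check of this (the values x = 10 and Δx = 1 are just the example above; any pair works):

```python
x, dx = 10.0, 1.0  # measured value and its absolute error (example from above)

# Exact interval of the squared value
lo = (x - dx) ** 2  # 81.0
hi = (x + dx) ** 2  # 121.0

# First-order (linearized) error: the ±2·x·dx term dominates
dy = 2 * x * dx  # 20.0

print(f"exact interval:  [{lo}, {hi}]")
print(f"linearized:      {x**2} ± {dy}")
print(f"relative error:  {dx / x:.0%} -> {dy / x**2:.0%}")  # 10% -> 20%
```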
Usually you use error propagation for this:
https://de.wikipedia.org/wiki/Fehlerfortpflanzung#Regeln_zur_Fehlerfortpflanzung
If the errors do not become too large (which is usually the case), then to first order in the Taylor expansion

f(x + Δx) ≈ f(x) + f'(x) · Δx

So the propagated absolute error is

Δy ≈ |dy/dx| · Δx

One can also see this directly from Δy/Δx ≅ dy/dx.
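A small sketch of this rule as a general helper (the function propagate, the test function, and the step size h are my own illustrative choices, not from the answer):

```python
def propagate(f, x, dx, h=1e-6):
    """First-order error propagation: Δy ≈ |f'(x)| · Δx,
    with f'(x) approximated by a central difference."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * dx

# Example: y = x² at x = 10 ± 1 reproduces the factor of two from above
print(propagate(lambda x: x ** 2, 10.0, 1.0))  # ≈ 20.0
```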
============
In the concrete case, it is now about

y = T²

with the derivative

dy/dT = 2T

Then you get

Δ(T²) = 2T · ΔT
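Applied to a measured period (the numbers are placeholders, not values from the question):

```python
T, dT = 1.25, 0.02  # placeholder: period in s and its absolute error

T2 = T ** 2
dT2 = 2 * T * dT  # Δ(T²) = 2·T·ΔT

print(f"T  = {T} ± {dT} s    (relative: {dT / T:.1%})")
print(f"T² = {T2} ± {dT2} s²  (relative: {dT2 / T2:.1%})")
# the relative error doubles: 1.6% -> 3.2%
```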
============
For comparison: as you assumed, the relative errors are simply added here. It looks like

Δ(T²)/T² = ΔT/T + ΔT/T = 2 · ΔT/T

In fact, this is the same result. It also agrees with the binomial formula

(T + ΔT)² = T² + 2T·ΔT + (ΔT)² ≅ T² + 2T·ΔT

since the quadratic term (ΔT)² is negligible for small errors.
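A quick check that both routes give the same number (same placeholder values as above):

```python
T, dT = 1.25, 0.02  # same placeholder values as above

via_derivative = 2 * T * dT                  # Δ(T²) = 2·T·ΔT
via_rel_errors = (dT / T + dT / T) * T ** 2  # add relative errors, convert back

print(via_derivative, via_rel_errors)  # both ≈ 0.05
```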