Range is a fact – the difference between largest and smallest values for a group of samples taken from a larger population (or, sometimes, the difference between consecutive samples).

Standard Deviation (for the samples) is also a fact – the ‘root mean square’ of deviations from the average value – but it’s really all about probability: a measure of the ‘likely’ deviation from the average. As other contributors have said, there is a shortcut for estimating it from successive range figures (as an alternative to the ‘rms’ method), but whichever way it’s calculated there’s an implicit assumption that small deviations are more probable than large ones. This is typically modelled as a NORMAL distribution, for which almost 100% (actually 99.7%) of samples would be expected to have a deviation of less than 3 × Std Dev. In other words, if you took 1000 samples you might expect about 3 with a larger deviation than that (you might actually get more, or you might not get any!). Approximately 68% would be expected to have a deviation of less than one Std Dev.
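A quick numerical sketch may make this concrete. The snippet below draws samples from a normal distribution, computes the standard deviation the ‘rms’ way, estimates it again from successive ranges (using the conventional control-chart divisor 1.128 for moving ranges of two consecutive samples), and then checks the 68% / 99.7% figures empirically. The specific parameters (mean 10, std dev 2, 100,000 samples) are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(42)

# Draw samples from a normal distribution (mean 10, std dev 2).
samples = [random.gauss(10, 2) for _ in range(100_000)]

mean = statistics.mean(samples)

# 'Root mean square' of deviations from the average value.
std_dev = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5

# Shortcut: estimate std dev from successive ranges (moving ranges of 2),
# dividing the average moving range by the constant 1.128.
moving_ranges = [abs(samples[i] - samples[i - 1]) for i in range(1, len(samples))]
range_based_estimate = statistics.mean(moving_ranges) / 1.128

# How many samples deviate by less than 1 and 3 standard deviations?
n = len(samples)
within_1sd = sum(abs(x - mean) < 1 * std_dev for x in samples) / n
within_3sd = sum(abs(x - mean) < 3 * std_dev for x in samples) / n

print(f"rms std dev:          {std_dev:.3f}")           # close to 2
print(f"range-based estimate: {range_based_estimate:.3f}")  # also close to 2
print(f"within 1 std dev:     {within_1sd:.1%}")        # roughly 68%
print(f"within 3 std dev:     {within_3sd:.2%}")        # roughly 99.7%
```

Both routes land near the true value of 2, and the two coverage fractions come out close to 68% and 99.7%, matching the expectations above – though, as with the 3-in-1000 remark, any one run will wobble around those figures.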