I think there's a lot of confusion going around here. Back to the basics ![Smile :)](./images/smilies/icon_smile.gif). The main purpose of stereo is to create a more natural impression of the perceived sound. Take an acoustic concert with no PA as an example. You have different instruments, all spread across the stage. What lets you distinguish the locations of the instruments on the stage (even with your eyes closed) is your ears, and the fact that you have two of them. Each ear perceives a very slightly different sound (from the guitar, for example) over time, and our brain works out where that sound comes from. Now, to preserve the placement of the instruments on a record and get a more natural listening experience, you need to make a stereo record: by using the two speakers, the instruments can be virtually put back in their places on the stage by spreading them across the stereo field. Each speaker reproduces all the instruments, but at different levels... and timings (very short timings, though, but enough for our ears to tell the difference).
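To make the level/timing idea concrete, here's a rough Python/NumPy sketch of those two cues (interaural time and level differences). The delay and attenuation figures are just illustrative guesses, not a real head model:

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz, assumed

def place_source(mono, angle_deg, sample_rate=SAMPLE_RATE):
    """Rough sketch: a source off to one side reaches the far ear slightly
    later (a fraction of a millisecond) and slightly quieter.
    angle_deg: -90 = hard left, 0 = centre, 90 = hard right."""
    max_delay_s = 0.0006                      # ~0.6 ms at 90 degrees (rough guess)
    frac = angle_deg / 90.0                   # -1.0 .. 1.0
    delay_samples = int(round(abs(frac) * max_delay_s * sample_rate))
    level_far = 1.0 - 0.3 * abs(frac)         # far ear a bit quieter (made-up figure)

    near = mono
    # Delay and attenuate the copy that reaches the far ear
    far = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)] * level_far

    if frac >= 0:                             # source on the right: right ear is near
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)    # stereo signal, shape (samples, 2)
```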
That being said,
When I want to capture an instrument (a guitar, a percussion instrument, ...), is there a need to capture/store it in stereo? I consider those single sources, so I think there's no need for stereo information at this point. Only later, when I create a song, will I place each single source in the stereo field of the song (see the sketch below). So, given everything I've said, I think stereo only matters at the song level, not at the sample level.
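For example, placing a mono sample in the stereo field at mix time is just a matter of giving the left and right channels different gains. A minimal sketch, assuming a common constant-power pan law (sequencers differ in the exact law they use):

```python
import numpy as np

def pan_mono(mono, pan):
    """Place a mono sample in the stereo field.
    pan = -1.0 is hard left, 0.0 is centre, 1.0 is hard right."""
    theta = (pan + 1.0) * np.pi / 4.0     # map pan to 0 .. pi/2
    left = mono * np.cos(theta)           # equal power at the centre
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)
```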
PS: I also ran some tests on mono samples and stereo samples (same data on both channels) and didn't notice any difference when applying panning in my sequencer. The volumes were the same, and panning behaved the same whether it was a mono or a stereo sample.
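For anyone curious, here's roughly what that test amounts to: under a simple pan law, a mono sample and a dual-mono stereo sample (same data on both channels) come out identical. This is just a toy sketch with made-up data, not my sequencer's actual code:

```python
import numpy as np

def pan(signal, p):
    """Constant-power pan. `signal` is mono (N,) or stereo (N, 2)."""
    theta = (p + 1.0) * np.pi / 4.0
    gains = np.array([np.cos(theta), np.sin(theta)])
    if signal.ndim == 1:                          # mono: feed both channels
        return np.stack([signal, signal], axis=1) * gains
    return signal * gains                          # stereo: scale each channel

# Toy data: a mono sine and a dual-mono stereo copy of it
mono = np.sin(2 * np.pi * 440 * np.arange(1000) / 44100)
dual_mono = np.stack([mono, mono], axis=1)

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert np.allclose(pan(mono, p), pan(dual_mono, p))
print("mono and dual-mono behave identically under this pan law")
```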